Sample records for require massive amounts

  1. Concentration-dependent effect of hypocalcaemia on mortality of patients with critical bleeding requiring massive transfusion: a cohort study.

    PubMed

    Ho, K M; Leonard, A D

    2011-01-01

    Mortality of patients with critical bleeding requiring massive transfusion is high. Although hypothermia, acidosis and coagulopathy have been well described as important determinants of mortality in patients with critical bleeding requiring massive transfusion, the risk factors and outcome associated with hypocalcaemia in these patients remain uncertain. This cohort study assessed the relationship between the lowest ionised calcium concentration during the 24-hour period of critical bleeding and the hospital mortality of 352 consecutive patients, while adjusting for diagnosis, acidosis, coagulation results, transfusion requirements and use of recombinant factor VIIa. Hypocalcaemia was common (mean concentration 0.77 mmol/l, SD 0.19) and had a linear, concentration-dependent relationship with mortality (odds ratio [OR] 1.25 per 0.1 mmol/l decrement, 95% confidence interval [CI]: 1.04 to 1.52; P = 0.02). Hypocalcaemia accounted for 12.5% of the variability and was more important than the lowest fibrinogen concentrations (10.8%), acidosis (7.9%) and lowest platelet counts (7.7%) in predicting hospital mortality. The amount of fresh frozen plasma transfused (OR 1.09 per unit, 95% CI: 1.02 to 1.17; P = 0.02) and acidosis (OR 1.45 per 0.1 decrement, 95% CI: 1.19 to 1.72; P = 0.01) were associated with the occurrence of severe hypocalcaemia (< 0.8 mmol/l). In conclusion, ionised calcium concentrations had an inverse concentration-dependent relationship with mortality of patients with critical bleeding requiring massive transfusion. Both acidosis and the amount of fresh frozen plasma transfused were the main risk factors for severe hypocalcaemia. Further research is needed to determine whether preventing ionised hypocalcaemia can reduce mortality of patients with critical bleeding requiring massive transfusion.
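
    Because the effect is reported as an odds ratio per fixed 0.1 mmol/l decrement, it compounds multiplicatively under the implied logistic model; a minimal worked illustration using the point estimate and confidence bounds quoted above (an interpretation aid, not a result from the paper):

      \mathrm{OR}(\Delta = 0.3\ \mathrm{mmol/l}) = 1.25^{3} \approx 1.95,
      \qquad 95\%\ \mathrm{CI} \approx 1.04^{3}\ \text{to}\ 1.52^{3} \approx 1.12\ \text{to}\ 3.51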

  2. The minimal amount of starting DNA for Agilent’s hybrid capture-based targeted massively parallel sequencing

    PubMed Central

    Chung, Jongsuk; Son, Dae-Soon; Jeon, Hyo-Jeong; Kim, Kyoung-Mee; Park, Gahee; Ryu, Gyu Ha; Park, Woong-Yang; Park, Donghyun

    2016-01-01

    Targeted capture massively parallel sequencing is increasingly being used in clinical settings, and as costs continue to decline, use of this technology may become routine in health care. However, a limited amount of tissue has often been a challenge in meeting quality requirements. To offer a practical guideline for the minimum amount of input DNA for targeted sequencing, we optimized and evaluated the performance of targeted sequencing depending on the input DNA amount. First, using various amounts of input DNA, we compared commercially available library construction kits and selected Agilent’s SureSelect-XT and KAPA Biosystems’ Hyper Prep kits as the kits most compatible with targeted deep sequencing using Agilent’s SureSelect custom capture. Then, we optimized the adapter ligation conditions of the Hyper Prep kit to improve library construction efficiency and adapted multiplexed hybrid selection to reduce the cost of sequencing. In this study, we systematically evaluated the performance of the optimized protocol depending on the amount of input DNA, ranging from 6.25 to 200 ng, suggesting the minimal input DNA amounts based on coverage depths required for specific applications. PMID:27220682

  3. Social Data Analytics Using Tensors and Sparse Techniques

    ERIC Educational Resources Information Center

    Zhang, Miao

    2014-01-01

    The development of internet and mobile technologies is driving an earthshaking social media revolution. They bring the internet world a huge amount of social media content, such as images, videos, comments, etc. This massive media content, together with complicated social structures, requires analytic expertise to transform the flood of information into…

  4. U.S. Air Forces Aerial Spray Mission: Should the Department of Defense Continue to Operate this Weapon of Mass Dispersion

    DTIC Science & Technology

    2015-12-01

    pesticide application over farm fields to produce a better crop. On 3 August 1921, in a joint effort between the U.S. Army Signal Corps in Dayton, Ohio ... pesticide dissemination because of the relatively small amount of product needed to spray for nuisance insects over a vast area. The ULV system is ... pesticide per minute. Applications that require massive amounts of liquid herbicide to neutralize cheatgrass and other fire-prone, invasive vegetation on ...

  5. SCOOP: A Measurement and Database of Student Online Search Behavior and Performance

    ERIC Educational Resources Information Center

    Zhou, Mingming

    2015-01-01

    The ability to access and process massive amounts of online information is required in many learning situations. In order to develop a better understanding of students’ online search processes, especially in academic contexts, an online tool (SCOOP) was developed for tracking mouse behavior on the web to build a more extensive account of student web…

  6. Thought Leaders during Crises in Massive Social Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corley, Courtney D.; Farber, Robert M.; Reynolds, William

    The vast amount of social media data that can be gathered from the internet coupled with workflows that utilize both commodity systems and massively parallel supercomputers, such as the Cray XMT, open new vistas for research to support health, defense, and national security. Computer technology now enables the analysis of graph structures containing more than 4 billion vertices joined by 34 billion edges along with metrics and massively parallel algorithms that exhibit near-linear scalability according to number of processors. The challenge lies in making this massive data and analysis comprehensible to an analyst and end-users that require actionable knowledge to carry out their duties. Simply stated, we have developed language and content agnostic techniques to reduce large graphs built from vast media corpora into forms people can understand. Specifically, our tools and metrics act as a survey tool to identify 'thought leaders' -- those members that lead or reflect the thoughts and opinions of an online community, independent of the source language.

  7. Smart integrated microsystems: the energy efficiency challenge (Conference Presentation) (Plenary Presentation)

    NASA Astrophysics Data System (ADS)

    Benini, Luca

    2017-06-01

    The "internet of everything" envisions trillions of connected objects loaded with high-bandwidth sensors requiring massive amounts of local signal processing, fusion, pattern extraction and classification. From the computational viewpoint, the challenge is formidable and can be addressed only by pushing computing fabrics toward massive parallelism and brain-like energy efficiency levels. CMOS technology can still take us a long way toward this goal, but technology scaling is losing steam. Energy efficiency improvement will increasingly hinge on architecture, circuits, design techniques such as heterogeneous 3D integration, mixed-signal preprocessing, event-based approximate computing and non-Von-Neumann architectures for scalable acceleration.

  8. The Feasibility of Linear Motors and High-Energy Thrusters for Massive Aerospace Vehicles

    NASA Astrophysics Data System (ADS)

    Stull, M. A.

    A combination of two propulsion technologies, superconducting linear motors using ambient magnetic fields and high-energy particle beam thrusters, may make it possible to develop massive aerospace vehicles the size of aircraft carriers. If certain critical thresholds can be attained, linear motors can enable massive vehicles to fly within the atmosphere and can propel them to orbit. Thrusters can do neither, because power requirements are prohibitive. However, unless superconductors having extremely high critical current densities can be developed, the interplanetary magnetic field is too weak for linear motors to provide sufficient acceleration to reach even nearby planets. On the other hand, high-energy thrusters can provide adequate acceleration using a minimal amount of reaction mass, at achievable levels of power generation. If the requirements for linear motor propulsion can be met, combining the two modes of propulsion could enable huge nuclear powered spacecraft to reach at least the inner planets of the solar system, the asteroid belt, and possibly Jupiter, in reasonably short times under continuous acceleration, opening them to exploration, resource development and colonization.

  9. Massive naproxen overdose with serial serum levels.

    PubMed

    Al-Abri, Suad A; Anderson, Ilene B; Pedram, Fatehi; Colby, Jennifer M; Olson, Kent R

    2015-03-01

    Massive naproxen overdose is not commonly reported. Severe metabolic acidosis and seizure have been described, but the use of renal replacement therapy has not been studied in the context of overdose. A 28-year-old man ingested 70 g of naproxen along with an unknown amount of alcohol in a suicide attempt. On examination in the emergency department 90 min later, he was drowsy but had normal vital signs apart from sinus tachycardia. Serum naproxen level 90 min after ingestion was 1,580 mg/L (therapeutic range 25-75 mg/L). He developed metabolic acidosis requiring renal replacement therapy using sustained low efficiency dialysis (SLED) and continuous venovenous hemofiltration (CVVH) and had recurrent seizure activity requiring intubation within 4 h of ingestion. He recovered after 48 h. Massive naproxen overdose can present with serious toxicity including seizures, altered mental status, and metabolic acidosis. Hemodialysis and renal replacement therapy may correct the acid-base disturbance and provide support in cases of renal impairment in the context of naproxen overdose, but further studies are needed to determine the extraction of naproxen.

  10. Dynamic Database. Efficiently Convert Massive Quantities of Sensor Data into Actionable Information for Tactical Commanders

    DTIC Science & Technology

    2000-06-01

    As the number of sensors, platforms, exploitation sites, and command and control nodes continues to grow in response to Joint Vision 2010 information dominance requirements, commanders and analysts will have an ever-increasing need to collect and process vast amounts of data over wide areas using a large number of disparate sensors and information gathering sources.

  11. The Synthesis of 44Ti and 56Ni in Massive Stars

    NASA Astrophysics Data System (ADS)

    Chieffi, Alessandro; Limongi, Marco

    2017-02-01

    We discuss the influence of rotation on the combined synthesis of 44Ti and 56Ni in massive stars. While 56Ni is significantly produced by both complete and incomplete explosive Si burning, 44Ti is mainly produced by complete explosive Si burning, with a minor contribution (in standard non-rotating models) from incomplete explosive Si burning and O burning (both explosive and hydrostatic). We find that, in most cases, the thickness of the region exposed to incomplete explosive Si burning increases in rotating models (initial velocity, v_ini = 300 km s^-1) and since 56Ni is significantly produced in this zone, the fraction of mass coming from the complete explosive Si burning zone necessary to get the required amount of 56Ni reduces. Therefore the amount of 44Ti ejected for a given fixed amount of 56Ni decreases in rotating models. However, some rotating models at [Fe/H] = -1 develop a very extended O convective shell in which a substantial amount of 44Ti is formed, preserved, and ejected into the interstellar medium. Hence a better modeling of the thermal instabilities (convection) in the advanced burning phases, together with a critical analysis of the cross sections of the nuclear reactions operating in O burning, is relevant for the understanding of the synthesis of 44Ti.

  12. World-wide amateur observations

    NASA Astrophysics Data System (ADS)

    Eversberg, T.; Aldoretta, E. J.; Knapen, J. H.; Moffat, A. F. J.; Morel, T.; Ramiaramanantsoa, T.; Rauw, G.; Richardson, N. D.; St-Louis, N.; Teodoro, M.

    For some years now, spectroscopic measurements of massive stars in the amateur domain have been fulfilling professional requirements. Various groups in the northern and southern hemispheres have been established, running successful professional-amateur (ProAm) collaborative campaigns, e.g., on WR, O and B type stars. Today, high-quality data (echelle and long-slit) are regularly delivered and corresponding results published. Night-to-night long-term observations over months to years open a new opportunity for massive-star research. We introduce recent and ongoing sample campaigns (e.g. ɛ Aur, WR 134, ζ Pup), show respective results and highlight the vast amount of data collected in various databases. Ultimately it is in the time-dependent domain where amateurs can shine most.

  13. Grid Computing for Earth Science

    NASA Astrophysics Data System (ADS)

    Renard, Philippe; Badoux, Vincent; Petitdidier, Monique; Cossu, Roberto

    2009-04-01

    The fundamental challenges facing humankind at the beginning of the 21st century require an effective response to the massive changes that are putting increasing pressure on the environment and society. The worldwide Earth science community, with its mosaic of disciplines and players (academia, industry, national surveys, international organizations, and so forth), provides a scientific basis for addressing issues such as the development of new energy resources; a secure water supply; safe storage of nuclear waste; the analysis, modeling, and mitigation of climate changes; and the assessment of natural and industrial risks. In addition, the Earth science community provides short- and medium-term prediction of weather and natural hazards in real time, and model simulations of a host of phenomena relating to the Earth and its space environment. These capabilities require that the Earth science community utilize, both in real and remote time, massive amounts of data, which are usually distributed among many different organizations and data centers.

  14. Citrate metabolism and its complications in non-massive blood transfusions: association with decompensated metabolic alkalosis+respiratory acidosis and serum electrolyte levels.

    PubMed

    Bıçakçı, Zafer; Olcay, Lale

    2014-06-01

    Metabolic alkalosis, which is a non-massive blood transfusion complication, is not reported in the literature although metabolic alkalosis dependent on citrate metabolism is reported to be a massive blood transfusion complication. The aim of this study was to investigate the effect of elevated carbon dioxide production due to citrate metabolism and serum electrolyte imbalance in patients who received frequent non-massive blood transfusions. Fifteen inpatients who were diagnosed with different conditions and who received frequent blood transfusions (10-30 ml/kg/day) were prospectively evaluated. Patients who had initial metabolic alkalosis (bicarbonate>26 mmol/l), who needed at least one intensive blood transfusion in one-to-three days for a period of at least 15 days, and whose total transfusion amount did not fit the massive blood transfusion definition (<80 ml/kg) were included in the study. The estimated mean total citrate administered via blood and blood products was calculated as 43.2 ± 34.19 mg/kg/day (a total of 647.70 mg/kg in 15 days). Decompensated metabolic alkalosis+respiratory acidosis developed as a result of citrate metabolism. There was a positive correlation between cumulative amount of citrate and the use of fresh frozen plasma, venous blood pH, ionized calcium, serum-blood gas sodium and mortality, whereas there was a negative correlation between cumulative amount of citrate and serum calcium levels, serum phosphorus levels and amount of urine chloride. In non-massive, but frequent blood transfusions, elevated carbon dioxide production due to citrate metabolism causes intracellular acidosis. As a result of intracellular acidosis compensation, decompensated metabolic alkalosis+respiratory acidosis and electrolyte imbalance may develop. This situation may contribute to the increase in mortality. In conclusion, it should be noted that non-massive, but frequent blood transfusions may result in certain complications. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. America’s Achilles Heel: Defense Against High-altitude Electromagnetic Pulse - Policy vs. Practice

    DTIC Science & Technology

    2014-12-12

    Directives; SCADA: Supervisory Control and Data Acquisition Systems; SHIELD Act: Secure High-voltage Infrastructure for Electricity from Lethal Damage Act ... take place, it is important to understand the effects of the components of EMP from a high-altitude nuclear detonation. The requirements for shielding ... Mass Ejection (CME). A massive, bubble-shaped burst of plasma expanding outward from the Sun’s corona, in which large amounts of superheated ...

  16. Flood inundation extent mapping based on block compressed tracing

    NASA Astrophysics Data System (ADS)

    Shen, Dingtao; Rui, Yikang; Wang, Jiechen; Zhang, Yu; Cheng, Liang

    2015-07-01

    Flood inundation extent, depth, and duration are important factors affecting flood hazard evaluation. At present, flood inundation analysis is based mainly on a seeded region-growing algorithm, which is an inefficient process because it requires excessive recursive computations and it is incapable of processing massive datasets. To address this problem, we propose a block compressed tracing algorithm for mapping the flood inundation extent, which reads the DEM data in blocks before transferring them to raster compression storage. This allows a smaller computer memory to process a larger amount of data, which solves the problem of the regular seeded region-growing algorithm. In addition, the use of a raster boundary tracing technique allows the algorithm to avoid the time-consuming computations required by seeded region-growing. Finally, we conduct a comparative evaluation in the Chin-sha River basin; the results show that the proposed method solves the problem of flood inundation extent mapping based on massive DEM datasets with higher computational efficiency than the original method, which makes it suitable for practical applications.
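
    For context, here is a minimal, iterative Python sketch of the seeded region-growing (flood-fill) baseline that the abstract contrasts against; the DEM, seed, and water level are hypothetical inputs, and this is not the authors' block compressed tracing algorithm:

      import numpy as np

      def seeded_region_grow(dem, seed, water_level):
          """Boolean inundation mask: cells connected to `seed` (4-connectivity)
          whose elevation lies below `water_level`."""
          rows, cols = dem.shape
          mask = np.zeros((rows, cols), dtype=bool)
          stack = [seed]                      # explicit stack instead of recursion
          while stack:
              r, c = stack.pop()
              if not (0 <= r < rows and 0 <= c < cols):
                  continue
              if mask[r, c] or dem[r, c] >= water_level:
                  continue
              mask[r, c] = True
              stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
          return mask

      # Hypothetical usage: a tiny synthetic DEM with a low-lying channel.
      dem = np.array([[5.0, 5.0, 5.0],
                      [1.0, 1.2, 1.1],
                      [5.0, 5.0, 5.0]])
      print(seeded_region_grow(dem, seed=(1, 0), water_level=2.0))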

  17. A new archival infrastructure for highly-structured astronomical data

    NASA Astrophysics Data System (ADS)

    Dovgan, Erik; Knapic, Cristina; Sponza, Massimo; Smareglia, Riccardo

    2018-03-01

    With the advent of the 2020 Radio Astronomy Telescopes era, the amount and format of the radioastronomical data are becoming a massive and performance-critical challenge. Such an evolution of data models and data formats requires new data archiving techniques that allow massive and fast storage of data that are at the same time also efficiently processed. A useful expertise for efficient archiving has been obtained through data archiving at the Medicina and Noto Radio Telescopes. The presented archival infrastructure, named the Radio Archive, stores and handles various formats, such as FITS, MBFITS, and VLBI's XML, which includes description and ancillary files. The modeling and architecture of the archive fulfill all the requirements of both data persistence and easy data discovery and exploitation. The presented archive already complies with the Virtual Observatory directives; therefore, future service implementations will also be VO compliant. This article presents the Radio Archive services and tools, from the data acquisition to the end-user data utilization.

  18. Query-Structure Based Web Page Indexing

    DTIC Science & Technology

    2012-11-01

    the massive amount of data present on the web. In our third participation in the web track at TREC 2012, we explore the idea of building an ... the ad-hoc and diversity task. ... The rapid growth and massive quantities of data on the Internet have increased the importance and ... complexity of information retrieval systems. The amount and the diversity of the web data introduce shortcomings in the way search engines rank their

  19. Delivery of Fuel and Construction Materials to South Pole Station

    DTIC Science & Technology

    1993-07-01

    ... South Pole Station, ideally with minimal impact on the current science and operational program. The new station will require the delivery of massive ... amounts of construction materials to this remote site. The existing means of delivering material and fuel to the South Pole include the use of specialized ...

  20. Ecoinformatics: supporting ecology as a data-intensive science.

    PubMed

    Michener, William K; Jones, Matthew B

    2012-02-01

    Ecology is evolving rapidly and increasingly changing into a more open, accountable, interdisciplinary, collaborative and data-intensive science. Discovering, integrating and analyzing massive amounts of heterogeneous data are central to ecology as researchers address complex questions at scales from the gene to the biosphere. Ecoinformatics offers tools and approaches for managing ecological data and transforming the data into information and knowledge. Here, we review the state-of-the-art and recent advances in ecoinformatics that can benefit ecologists and environmental scientists as they tackle increasingly challenging questions that require voluminous amounts of data across disciplines and scales of space and time. We also highlight the challenges and opportunities that remain. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. MTL distributed magnet measurement system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nogiec, J.M.; Craker, P.A.; Garbarini, J.P.

    1993-04-01

    The Magnet Test Laboratory (MTL) at the Superconducting Super Collider Laboratory will be required to precisely and reliably measure properties of magnets in a production environment. The extensive testing of the superconducting magnets comprises several types of measurements whose main purpose is to evaluate some basic parameters characterizing magnetic, mechanical and cryogenic properties of magnets. The measurement process will produce a significant amount of data which will be subjected to complex analysis. Such massive measurements require a careful design of both the hardware and software of computer systems, with a reliable, maximally automated system in mind. In order to fulfill this requirement, a dedicated Distributed Magnet Measurement System (DMMS) is being developed.

  2. Mass loss and stellar superwinds

    NASA Astrophysics Data System (ADS)

    Vink, Jorick S.

    2017-09-01

    Mass loss bridges the gap between massive stars and supernovae (SNe) in two major ways: (i) theoretically, it is the amount of mass lost that determines the mass of the star prior to explosion and (ii) observations of the circumstellar material around SNe may teach us the type of progenitor that made the SN. Here, I present the latest models and observations of mass loss from massive stars, both for canonical massive O stars, as well as very massive stars that show Wolf-Rayet type features. This article is part of the themed issue 'Bridging the gap: from massive stars to supernovae'.

  3. GPU-Meta-Storms: computing the structure similarities among massive amount of microbial community samples using GPU.

    PubMed

    Su, Xiaoquan; Wang, Xuetao; Jing, Gongchao; Ning, Kang

    2014-04-01

    The number of microbial community samples is increasing exponentially. Data-mining among microbial community samples could facilitate the discovery of valuable biological information that is still hidden in the massive data. However, current methods for the comparison among microbial communities are limited by their ability to process large numbers of samples, each with a complex community structure. We have developed an optimized GPU-based software, GPU-Meta-Storms, to efficiently measure the quantitative phylogenetic similarity among massive amounts of microbial community samples. Our results have shown that GPU-Meta-Storms would be able to compute the pair-wise similarity scores for 10 240 samples within 20 min, which gained a speed-up of >17 000 times compared with single-core CPU, and >2600 times compared with 16-core CPU. Therefore, the high performance of GPU-Meta-Storms could facilitate in-depth data mining among massive microbial community samples, and make the real-time analysis and monitoring of temporal or conditional changes for microbial communities possible. GPU-Meta-Storms is implemented by CUDA (Compute Unified Device Architecture) and C++. Source code is available at http://www.computationalbioenergy.org/meta-storms.html.
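
    As a point of reference for the work being parallelized, a single-threaded Python sketch of an all-pairs similarity computation over community abundance vectors; the Bray-Curtis-style similarity used here is only an illustrative stand-in, not the quantitative phylogenetic metric implemented in GPU-Meta-Storms:

      import numpy as np

      def pairwise_similarity(abundances):
          """All-pairs similarity matrix for a (samples x taxa) abundance table.
          Uses 1 - Bray-Curtis dissimilarity purely as an illustrative metric."""
          n = abundances.shape[0]
          sim = np.eye(n)
          # The nested i/j loop is the O(n^2) work a GPU distributes across threads.
          for i in range(n):
              for j in range(i + 1, n):
                  a, b = abundances[i], abundances[j]
                  bray_curtis = np.abs(a - b).sum() / (a + b).sum()
                  sim[i, j] = sim[j, i] = 1.0 - bray_curtis
          return sim

      # Hypothetical usage: 4 samples with 5 taxa each.
      samples = np.random.default_rng(0).random((4, 5))
      print(pairwise_similarity(samples))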

  4. Mask data processing in the era of multibeam writers

    NASA Astrophysics Data System (ADS)

    Abboud, Frank E.; Asturias, Michael; Chandramouli, Maesh; Tezuka, Yoshihiro

    2014-10-01

    Mask writers' architectures have evolved through the years in response to ever tightening requirements for better resolution, tighter feature placement, improved CD control, and tolerable write time. The unprecedented extension of optical lithography and the myriad of Resolution Enhancement Techniques have tasked current mask writers with ever increasing shot count and higher dose, and therefore, increasing write time. Once again, we see the need for a transition to a new type of mask writer based on massively parallel architecture. These platforms offer a step function improvement in both dose and the ability to process massive amounts of data. The higher dose and almost unlimited appetite for edge corrections open new windows of opportunity to further push the envelope. These architectures are also naturally capable of producing curvilinear shapes, making the need to approximate a curve with multiple Manhattan shapes unnecessary.

  5. Pericardial Effusion with Cardiac Tamponade as a form of presentation of Primary Hypothyroidism.

    PubMed

    Agarwal, Arun; Chowdhury, Nikhil; Mathur, Ankit; Sharma, Samiksha; Agarwal, Aakanksha

    2016-12-01

    Hypothyroidism is a rare cause of pericardial effusion (PE). Pericardial effusion secondary to hypothyroidism remains a diagnostic challenge for clinicians because of the inconsistency between symptoms and the amount of pericardial effusion. We report an atypical case that presented with ascites and was diagnosed to have cardiac tamponade secondary to primary hypothyroidism. Besides repeated pericardiocentesis, she eventually required surgical management and optimization of medical therapy to manage the massive pericardial effusion. © Journal of the Association of Physicians of India 2011.

  6. High-performance Negative Database for Massive Data Management System of The Mingantu Spectral Radioheliograph

    NASA Astrophysics Data System (ADS)

    Shi, Congming; Wang, Feng; Deng, Hui; Liu, Yingbo; Liu, Cuiyin; Wei, Shoulin

    2017-08-01

    As a dedicated synthetic aperture radio interferometer in China, the MingantU SpEctral Radioheliograph (MUSER), initially known as the Chinese Spectral RadioHeliograph (CSRH), has entered the stage of routine observation. More than 23 million data records per day need to be effectively managed to provide high-performance data query and retrieval for scientific data reduction. In light of the massive amounts of data generated by the MUSER, in this paper, a novel data management technique called the negative database (ND) is proposed and used to implement a data management system for the MUSER. Based on the key-value database, the ND technique makes complete utilization of the complement set of observational data to derive the requisite information. Experimental results showed that the proposed ND can significantly reduce storage volume in comparison with a relational database management system (RDBMS). Even when considering the time needed to derive records that were absent, its overall performance, including querying and deriving the data of the ND, is comparable with that of an RDBMS. The ND technique effectively solves the problem of massive data storage for the MUSER and is a valuable reference for the massive data management required in next-generation telescopes.
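
    A rough sketch, under stated assumptions, of the general idea of storing only the complement set: when records arrive on a regular, enumerable schedule, persisting just the missing keys lets the presence of any record be derived on demand. The key scheme, cadence, and class below are hypothetical illustrations, not the MUSER implementation:

      from datetime import datetime, timedelta

      class NegativeIndex:
          """Toy negative-database index: store only the keys that are absent
          from an otherwise regular, fully enumerable sequence of records."""

          def __init__(self, start, end, cadence):
              self.start, self.end, self.cadence = start, end, cadence
              self.missing = set()            # the complement set

          def mark_missing(self, timestamp):
              self.missing.add(timestamp)

          def exists(self, timestamp):
              in_range = self.start <= timestamp < self.end
              on_grid = (timestamp - self.start) % self.cadence == timedelta(0)
              return in_range and on_grid and timestamp not in self.missing

      # Hypothetical usage: one day of 25 ms cadence frames with one known gap.
      idx = NegativeIndex(datetime(2017, 8, 1), datetime(2017, 8, 2),
                          timedelta(milliseconds=25))
      idx.mark_missing(datetime(2017, 8, 1, 0, 0, 0, 50000))
      print(idx.exists(datetime(2017, 8, 1, 0, 0, 0, 25000)))   # True
      print(idx.exists(datetime(2017, 8, 1, 0, 0, 0, 50000)))   # False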

  7. Interstitial pulmonary fibrosis and progressive massive fibrosis related to smoking methamphetamine with talc as filler.

    PubMed

    Baylor, Peter A; Sobenes, Juan R; Vallyathan, Val

    2013-05-01

    We present a case of interstitial pulmonary fibrosis accompanied by radiographic evidence of progressive massive fibrosis in a patient who had a 15-20 year history of almost daily recreational inhalation of methamphetamine. Mineralogical analysis confirmed the presence of talc on biopsy of the area of progressive massive fibrosis. The coexistence of interstitial pulmonary fibrosis and progressive massive fibrosis suggests that prolonged recreational inhalation of methamphetamine that has been "cut" with talc can result in sufficient amount of talc being inhaled to result in interstitial pulmonary fibrosis and progressive massive fibrosis in the absence of other causes.

  8. Mass loss and stellar superwinds.

    PubMed

    Vink, Jorick S

    2017-10-28

    Mass loss bridges the gap between massive stars and supernovae (SNe) in two major ways: (i) theoretically, it is the amount of mass lost that determines the mass of the star prior to explosion and (ii) observations of the circumstellar material around SNe may teach us the type of progenitor that made the SN. Here, I present the latest models and observations of mass loss from massive stars, both for canonical massive O stars, as well as very massive stars that show Wolf-Rayet type features. This article is part of the themed issue 'Bridging the gap: from massive stars to supernovae'. © 2017 The Author(s).

  9. Massive thoracoabdominal aortic thrombosis in a patient with iatrogenic Cushing syndrome.

    PubMed

    Kim, Dong Hun; Choi, Dong-Hyun; Lee, Young-Min; Kang, Joon Tae; Chae, Seung Seok; Kim, Bo-Bae; Ki, Young-Jae; Kim, Jin Hwa; Chung, Joong-Wha; Koh, Young-Youp

    2014-01-01

    Massive thoracoabdominal aortic thrombosis is a rare finding in patients with iatrogenic Cushing syndrome in the absence of any coagulation abnormality. It frequently represents an urgent surgical situation. We report the case of an 82-year-old woman with massive aortic thrombosis secondary to iatrogenic Cushing syndrome. A follow-up computed tomography scan showed a decreased amount of thrombus in the aorta after anticoagulation therapy alone.

  10. Finding Cardinality Heavy-Hitters in Massive Traffic Data and Its Application to Anomaly Detection

    NASA Astrophysics Data System (ADS)

    Ishibashi, Keisuke; Mori, Tatsuya; Kawahara, Ryoichi; Hirokawa, Yutaka; Kobayashi, Atsushi; Yamamoto, Kimihiro; Sakamoto, Hitoaki; Asano, Shoichiro

    We propose an algorithm for finding heavy hitters in terms of cardinality (the number of distinct items in a set) in massive traffic data using a small amount of memory. Examples of such cardinality heavy-hitters are hosts that send large numbers of flows, or hosts that communicate with large numbers of other hosts. Finding these hosts is crucial to the provision of good communication quality because they significantly affect the communications of other hosts via either malicious activities such as worm scans, spam distribution, or botnet control or normal activities such as being a member of a flash crowd or performing peer-to-peer (P2P) communication. To precisely determine the cardinality of a host we need tables of previously seen items for each host (e.g., flow tables for every host), and this may be infeasible for a high-speed environment with a massive amount of traffic. In this paper, we use a cardinality estimation algorithm that does not require these tables but needs only a little information called the cardinality summary. This is made possible by relaxing the goal from exact counting to estimation of cardinality. In addition, we propose an algorithm that does not need to maintain the cardinality summary for each host, but only for partitioned addresses of a host. As a result, the required number of tables can be significantly decreased. We evaluated our algorithm using actual backbone traffic data to find the heavy-hitters in the number of flows and estimate the number of these flows. We found that while the accuracy degraded when estimating for hosts with few flows, the algorithm could accurately find the top-100 hosts in terms of the number of flows using a limited-sized memory. In addition, we found that the number of tables required to achieve a pre-defined accuracy increased logarithmically with respect to the total number of hosts, which indicates that our method is applicable for large traffic data for a very large number of hosts. We also introduce an application of our algorithm to anomaly detection. With actual traffic data, our method could successfully detect a sudden network scan.
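
    To make the memory/accuracy trade-off concrete, here is a small illustrative Python sketch of probabilistic per-host cardinality estimation (a LogLog-style register sketch); it is a stand-in for the paper's "cardinality summary", not the authors' exact algorithm, and the hash, register count, and addresses are hypothetical choices:

      import hashlib
      from collections import defaultdict

      NUM_REGISTERS = 64          # per-host memory budget (illustrative choice)
      ALPHA = 0.39701             # LogLog bias-correction constant for m >= 64

      def _hash(item):
          return int(hashlib.sha1(item.encode()).hexdigest(), 16)

      def _rank(x):
          """1-based position of the lowest set bit (geometric under a uniform hash)."""
          return (x & -x).bit_length()

      class CardinalitySummary:
          """Toy LogLog-style sketch: small fixed memory, approximate distinct counts."""
          def __init__(self):
              self.registers = [0] * NUM_REGISTERS

          def add(self, item):
              h = _hash(item)
              j = h % NUM_REGISTERS
              self.registers[j] = max(self.registers[j], _rank(h // NUM_REGISTERS))

          def estimate(self):
              mean_rank = sum(self.registers) / NUM_REGISTERS
              return ALPHA * NUM_REGISTERS * 2.0 ** mean_rank

      # Per-source-host summaries: roughly how many distinct peers does each host contact?
      summaries = defaultdict(CardinalitySummary)
      for i in range(500):
          summaries["10.0.0.1"].add(f"192.0.2.{i}")
      print(round(summaries["10.0.0.1"].estimate()))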

  11. Efficiently modeling neural networks on massively parallel computers

    NASA Technical Reports Server (NTRS)

    Farber, Robert M.

    1993-01-01

    Neural networks are a very useful tool for analyzing and modeling complex real-world systems. Applying neural network simulations to real-world problems generally involves large amounts of data and massive amounts of computation. To efficiently handle the computational requirements of large problems, we have implemented at Los Alamos a highly efficient neural network compiler for serial computers, vector computers, vector parallel computers, and fine-grain SIMD computers such as the CM-2 connection machine. This paper describes the mapping used by the compiler to implement feed-forward backpropagation neural networks for a SIMD (Single Instruction Multiple Data) architecture parallel computer. Thinking Machines Corporation has benchmarked our code at 1.3 billion interconnects per second (approximately 3 gigaflops) on a 64,000-processor CM-2 connection machine (Singer 1990). This mapping is applicable to other SIMD computers and can be implemented on MIMD computers such as the CM-5 connection machine. Our mapping has virtually no communications overhead with the exception of the communications required for a global summation across the processors (which has a sub-linear runtime growth on the order of O(log(number of processors))). We can efficiently model very large neural networks which have many neurons and interconnects, and our mapping can extend to arbitrarily large networks (within memory limitations) by merging the memory space of separate processors with fast adjacent-processor interprocessor communications. This paper will consider the simulation of only feed-forward neural networks, although this method is extendable to recurrent networks.
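
    A highly simplified NumPy sketch of the data-parallel pattern described above: each shard (standing in for a SIMD processor) computes a purely local gradient, and the only cross-processor communication is one global summation per step. A one-layer linear network is used for brevity; this illustrates the communication pattern of such a mapping, not the Los Alamos compiler or the CM-2 implementation:

      import numpy as np

      def local_gradient(W, x_shard, y_shard):
          """Mean-squared-error gradient for a one-layer linear network,
          computed on one processor's shard of the training patterns."""
          err = x_shard @ W - y_shard
          return x_shard.T @ err / len(x_shard)

      rng = np.random.default_rng(1)
      X, Y = rng.normal(size=(1024, 8)), rng.normal(size=(1024, 2))
      W = np.zeros((8, 2))
      shards = 16                               # stand-ins for SIMD processors

      for step in range(100):
          grads = [local_gradient(W, xs, ys)    # purely local work, no communication
                   for xs, ys in zip(np.split(X, shards), np.split(Y, shards))]
          W -= 0.05 * np.mean(grads, axis=0)    # the single global reduction per step

      print(float(np.mean((X @ W - Y) ** 2)))   # final training loss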

  12. On the spatial distributions of dense cores in Orion B

    NASA Astrophysics Data System (ADS)

    Parker, Richard J.

    2018-05-01

    We quantify the spatial distributions of dense cores in three spatially distinct areas of the Orion B star-forming region. For L1622, NGC 2068/NGC 2071, and NGC 2023/NGC 2024, we measure the amount of spatial substructure using the Q-parameter and find all three regions to be spatially substructured (Q < 0.8). We quantify the amount of mass segregation using Λ_MSR and find that the most massive cores are mildly mass segregated in NGC 2068/NGC 2071 (Λ_MSR ≈ 2), and very mass segregated in NGC 2023/NGC 2024 (Λ_MSR = 28^{+13}_{-10} for the four most massive cores). Whereas the most massive cores in L1622 are not in areas of relatively high surface density, or deeper gravitational potentials, the massive cores in NGC 2068/NGC 2071 and NGC 2023/NGC 2024 are significantly so. Given the low density (10 cores pc^-2) and spatial substructure of cores in Orion B, the mass segregation cannot be dynamical. Our results are also inconsistent with simulations in which the most massive stars form via competitive accretion, and instead hint that magnetic fields may be important in influencing the primordial spatial distributions of gas and stars in star-forming regions.
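
    For orientation, the two statistics quoted above are usually defined as follows (following Cartwright & Whitworth 2004 for Q and Allison et al. 2009 for Λ_MSR; the exact normalizations used in the paper may differ slightly):

      Q = \bar{m} / \bar{s}, where \bar{m} is the normalized mean edge length of the minimal spanning tree (MST) of the cores and \bar{s} is the normalized mean pairwise separation; Q < 0.8 indicates spatial substructure, while Q > 0.8 indicates a centrally concentrated distribution.

      \Lambda_{MSR} = \langle l_{random} \rangle / l_{massive}, where l_{massive} is the MST length of the N most massive cores and \langle l_{random} \rangle is the average MST length of many randomly chosen sets of N cores; \Lambda_{MSR} \approx 1 implies no mass segregation, \Lambda_{MSR} > 1 implies the massive cores are more concentrated than average, and the quoted uncertainties come from the spread over the random sets.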

  13. Improving Performance and Predictability of Storage Arrays

    ERIC Educational Resources Information Center

    Altiparmak, Nihat

    2013-01-01

    Massive amount of data is generated everyday through sensors, Internet transactions, social networks, video, and all other digital sources available. Many organizations store this data to enable breakthrough discoveries and innovation in science, engineering, medicine, and commerce. Such massive scale of data poses new research problems called big…

  14. Implications of the Large O VI Columns around Low-redshift L* Galaxies

    NASA Astrophysics Data System (ADS)

    McQuinn, Matthew; Werk, Jessica K.

    2018-01-01

    Observations reveal massive amounts of O VI around star-forming L* galaxies, with covering fractions of near unity extending to the host halo’s virial radius. This O VI absorption is typically kinematically centered upon photoionized gas, with line widths that are suprathermal and kinematically offset from the galaxy. We discuss various scenarios and whether they could result in the observed phenomenology (cooling gas flows, boundary layers, shocks, virialized gas). If collisionally ionized, as we argue is most probable, the O VI observations require that the circumgalactic medium (CGM) of L* galaxies holds nearly all of the associated baryons within a virial radius (~10^11 M_⊙) and hosts massive flows of cooling gas with ≈ 30 [nT / 30 cm^-3 K] M_⊙ yr^-1, which must be largely prevented from accreting onto the host galaxy. Cooling and feedback energetics considerations require 10 < nT < 100 cm^-3 K for the warm and hot halo gases. We argue that virialized gas, boundary layers, hot winds, and shocks are unlikely to directly account for the bulk of the O VI. Furthermore, we show that there is a robust constraint on the number density of many of the photoionized ~10^4 K absorption systems that yields upper bounds in the range n < (0.1-3) × 10^-3 (Z/0.3) cm^-3, suggesting that the dominant pressure in some photoionized clouds is nonthermal. This constraint is in accordance with the low densities inferred from more complex photoionization modeling. The large amount of cooling gas that is inferred could re-form these clouds in a fraction of the halo dynamical time, and it requires much of the feedback energy available from supernovae to be dissipated in the CGM.

  15. Specificity vs. Generalizability: Emergence of Especial Skills in Classical Archery

    PubMed Central

    Czyż, Stanisław H.; Moss, Sarah J.

    2016-01-01

    There is evidence that the recall schema becomes more refined after constant practice. It is also believed that massive amounts of constant practice eventually lead to the emergence of especial skills, i.e., skills that have an advantage in performance over other actions from within the same class of actions. This advantage in performance was noticed when one-criterion practice, e.g., basketball free throws, was compared to non-practiced variations of the skill. However, there is no evidence whether multi-criterion massive amounts of practice would give an advantage to the trained variations of the skill over non-trained ones, i.e., whether such practice would eventually lead to the development of (multi)-especial skills. The purpose of this study was to determine whether a massive amount of practice involving four criterion variations of the skill will give an advantage in performance to those criterion variations over the class of actions. In two experiments, we analyzed data from female (n = 8) and male classical archers (n = 10), who were required to shoot 30 shots from four accustomed distances, i.e., males at 30, 50, 70, and 90 m and females at 30, 50, 60, and 70 m. The shooting accuracy for the untrained distances (16 distances in men and 14 in women) was used to compile a regression line for distance over shooting accuracy. Regression-determined (expected) values were then compared to the shooting accuracy of the trained distances. Data revealed no significant differences between real and expected results at trained distances, except for the 70 m shooting distance in men. The F-test for lack of fit showed that the regression computed for trained and non-trained shooting distances was linear. It can be concluded that especial skills emerge only after very specific practice, i.e., constant practice limited to only one variation of the skill. PMID:27547196
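
    A minimal Python sketch of the analysis logic described above: fit a regression to accuracy at the untrained distances, then compare observed accuracy at the trained distances with the regression-expected values. All numbers below are illustrative placeholder values, not data from the study:

      import numpy as np

      # Hypothetical accuracy scores (not study data): distance (m) -> mean score.
      untrained = {35: 9.1, 40: 8.8, 45: 8.4, 55: 7.9, 65: 7.1, 80: 6.3}
      trained   = {30: 9.5, 50: 8.3, 70: 7.0, 90: 5.7}

      # Fit a line to the untrained distances only.
      x = np.array(list(untrained), dtype=float)
      y = np.array(list(untrained.values()))
      slope, intercept = np.polyfit(x, y, 1)

      # Compare observed vs. regression-expected accuracy at the trained distances.
      for d, observed in trained.items():
          expected = slope * d + intercept
          print(f"{d} m: observed {observed:.2f}, expected {expected:.2f}, "
                f"difference {observed - expected:+.2f}")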

  16. Specificity vs. Generalizability: Emergence of Especial Skills in Classical Archery.

    PubMed

    Czyż, Stanisław H; Moss, Sarah J

    2016-01-01

    There is evidence that the recall schema becomes more refined after constant practice. It is also believed that massive amounts of constant practice eventually lead to the emergence of especial skills, i.e., skills that have an advantage in performance over other actions from within the same class of actions. This advantage in performance was noticed when one-criterion practice, e.g., basketball free throws, was compared to non-practiced variations of the skill. However, there is no evidence whether multi-criterion massive amounts of practice would give an advantage to the trained variations of the skill over non-trained ones, i.e., whether such practice would eventually lead to the development of (multi)-especial skills. The purpose of this study was to determine whether a massive amount of practice involving four criterion variations of the skill will give an advantage in performance to those criterion variations over the class of actions. In two experiments, we analyzed data from female (n = 8) and male classical archers (n = 10), who were required to shoot 30 shots from four accustomed distances, i.e., males at 30, 50, 70, and 90 m and females at 30, 50, 60, and 70 m. The shooting accuracy for the untrained distances (16 distances in men and 14 in women) was used to compile a regression line for distance over shooting accuracy. Regression-determined (expected) values were then compared to the shooting accuracy of the trained distances. Data revealed no significant differences between real and expected results at trained distances, except for the 70 m shooting distance in men. The F-test for lack of fit showed that the regression computed for trained and non-trained shooting distances was linear. It can be concluded that especial skills emerge only after very specific practice, i.e., constant practice limited to only one variation of the skill.

  17. CD-ROM And Knowledge Integration

    NASA Astrophysics Data System (ADS)

    Rann, Leonard S.

    1988-06-01

    As the title of this paper suggests, it is about CD-ROM technology and the structuring of massive databases. Even more, it is about the impact CD-ROM has had on the publication of massive amounts of information, and the unique qualities of the medium that allow for the most sophisticated computer retrieval techniques that have ever been used. I am not drawing on experience as a pedant in the educational field, but rather as a software and database designer who has worked with CD-ROM since its inception. I will be giving examples from my company's current applications, as well as discussing some of the challenges that face information publishers in the future. In particular, I have a belief about what the most valuable outlet created using CD-ROM will be: the CD-ROM is particularly suited for the mass delivery of information systems and databases that either require or utilize a large amount of computational preprocessing to allow a real-time or interactive response to be achieved. Until the advent of CD-ROM technology this level of sophistication in publication was virtually impossible. I will further explain this later in this paper. First, I will discuss the salient features of CD-ROM that make it unique in the world of data storage for electronic publishing.

  18. Condensate of massive graviton and dark matter

    NASA Astrophysics Data System (ADS)

    Aoki, Katsuki; Maeda, Kei-ichi

    2018-02-01

    We study coherently oscillating massive gravitons in the ghost-free bigravity theory. This coherent field can be interpreted as a condensate of the massive gravitons. We first define the effective energy-momentum tensor of the coherent massive gravitons in a curved spacetime. We then study the background dynamics of the Universe and the cosmic structure formation including the effects of the coherent massive gravitons. We find that the condensate of the massive graviton behaves as a dark matter component of the Universe. From the geometrical point of view the condensate is regarded as a spacetime anisotropy. Hence, in our scenario, dark matter originates from the tiny deformation of the spacetime. We also discuss a production of the spacetime anisotropy and find that the extragalactic magnetic field of a primordial origin can yield a sufficient amount for dark matter.

  19. Peer Assessment for Massive Open Online Courses (MOOCs)

    ERIC Educational Resources Information Center

    Suen, Hoi K.

    2014-01-01

    The teach-learn-assess cycle in education is broken in a typical massive open online course (MOOC). Without formative assessment and feedback, MOOCs amount to information dump or broadcasting shows, not educational experiences. A number of remedies have been attempted to bring formative assessment back into MOOCs, each with its own limits and…

  20. Big Data Analytics for Modelling and Forecasting of Geomagnetic Field Indices

    NASA Astrophysics Data System (ADS)

    Wei, H. L.

    2016-12-01

    A massive amount of data are produced and stored in research areas of space weather and space climate. However, the value of a vast majority of the data acquired every day may not be effectively or efficiently exploited in our daily practice when we try to forecast solar wind parameters and geomagnetic field indices using these recorded measurements or digital signals, probably due to the challenges stemming from dealing with big data, which are characterized by the 4V features: volume (a massively large amount of data), variety (a great number of different types of data), velocity (a requirement of quick processing of the data), and veracity (the trustworthiness and usability of the data). In order to obtain more reliable and accurate predictive models for geomagnetic field indices, it is required that models be developed from the big data analytics perspective (or at least benefit from such a perspective). This study proposes a few data-based modelling frameworks which aim to produce more efficient predictive models for space weather parameter forecasting by means of system identification and big data analytics. More specifically, it aims to build more reliable mathematical models that characterise the relationship between solar wind parameters and geomagnetic field indices, for example the dependent relationship of Dst and Kp indices on a few solar wind parameters and magnetic field indices, namely, solar wind velocity (V), southward interplanetary magnetic field (Bs), solar wind rectified electric field (VBs), and dynamic flow pressure (P). Examples are provided to illustrate how the proposed modelling approaches are applied to Dst and Kp index prediction.
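
    As an orientation to the kind of data-based model the abstract refers to, a small Python sketch of a lagged linear (NARX-style) one-step-ahead predictor of Dst from solar wind inputs; the lag order, synthetic inputs, and least-squares fit are hypothetical simplifications, and published models in this area typically use richer nonlinear (e.g., NARMAX) structures:

      import numpy as np

      def build_lagged_design(dst, vbs, p, n_lags=2):
          """Rows: [Dst(t-n_lags..t-1), VBs(t-n_lags..t-1), P(t-n_lags..t-1)];
          target: Dst(t)."""
          X, y = [], []
          for t in range(n_lags, len(dst)):
              row = np.concatenate([dst[t - n_lags:t], vbs[t - n_lags:t], p[t - n_lags:t]])
              X.append(row)
              y.append(dst[t])
          return np.array(X), np.array(y)

      rng = np.random.default_rng(0)
      vbs = rng.normal(size=500)               # synthetic rectified electric field VBs
      p = rng.normal(size=500)                 # synthetic dynamic pressure P
      dst = np.zeros(500)
      for t in range(1, 500):                  # synthetic Dst driven by VBs and P
          dst[t] = 0.9 * dst[t - 1] - 1.5 * vbs[t - 1] + 0.5 * p[t - 1] + rng.normal(0, 0.1)

      X, y = build_lagged_design(dst, vbs, p)
      coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)   # one-step-ahead least-squares fit
      print(np.round(coeffs, 2))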

  1. Beyond methane: Towards a theory for the Paleocene-Eocene Thermal Maximum

    NASA Astrophysics Data System (ADS)

    Higgins, John A.; Schrag, Daniel P.

    2006-05-01

    Extreme global warmth and an abrupt negative carbon isotope excursion during the Paleocene-Eocene Thermal Maximum (PETM) have been attributed to a massive release of methane hydrate from sediments on the continental slope [1]. However, the magnitude of the warming (5 to 6 °C [2],[3]) and rise in the depth of the CCD (> 2 km; [4]) indicate that the size of the carbon addition was larger than can be accounted for by the methane hydrate hypothesis. Additional carbon sources associated with methane hydrate release (e.g. pore-water venting and turbidite oxidation) are also insufficient. We find that the oxidation of at least 5000 Gt C of organic carbon is the most likely explanation for the observed geochemical and climatic changes during the PETM, for which there are several potential mechanisms. Production of thermogenic CH4 and CO2 during contact metamorphism associated with the intrusion of a large igneous province into organic rich sediments [5] is capable of supplying large amounts of carbon, but is inconsistent with the lack of extensive carbon loss in metamorphosed sediments, as well as the abrupt onset and termination of carbon release during the PETM. A global conflagration of Paleocene peatlands [6] highlights a large terrestrial carbon source, but massive carbon release by fire seems unlikely as it would require that all peatlands burn at once and then for only 10 to 30 ky. In addition, this hypothesis requires an order of magnitude increase in the amount of carbon stored in peat. The isolation of a large epicontinental seaway by tectonic uplift associated with volcanism or continental collision, followed by desiccation and bacterial respiration of the aerated organic matter is another potential mechanism for the rapid release of large amounts of CO2. In addition to the oxidation of the underlying marine sediments, the desiccation of a major epicontinental seaway would remove a large source of moisture for the continental interior, resulting in the desiccation and bacterial oxidation of adjacent terrestrial wetlands.
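
    The size of the required carbon addition can be motivated with a simple isotope mass balance; a rough worked example with round-number assumptions (exogenic carbon reservoir M_exo ≈ 40,000 Gt C at δ ≈ 0‰, organic carbon at δ ≈ -25‰, and an excursion to δ ≈ -3‰), which are illustrative values rather than the authors' exact inputs:

      M_{add} = M_{exo}\,\frac{\delta_{exo} - \delta_{final}}{\delta_{final} - \delta_{add}}
              \approx 40{,}000\ \mathrm{Gt\,C} \times \frac{0 - (-3)}{-3 - (-25)} \approx 5{,}500\ \mathrm{Gt\,C}

    Under the same balance, a much lighter source such as biogenic methane (δ ≈ -60‰) would need well under half as much mass to produce the same excursion, which is why the isotope excursion alone can be matched by hydrate carbon while the warming and CCD rise argue for a substantially larger total addition.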

  2. Distributed Factorization Computation on Multiple Volunteered Mobile Resource to Break RSA Key

    NASA Astrophysics Data System (ADS)

    Jaya, I.; Hardi, S. M.; Tarigan, J. T.; Zamzami, E. M.; Sihombing, P.

    2017-01-01

    Similar to other common asymmetric encryption schemes, RSA can be cracked using a series of mathematical calculations. The private key used to decrypt the message can be computed from the public key. However, finding the private key may require a massive amount of calculation. In this paper, we propose a method to perform distributed computing to calculate RSA’s private key. The proposed method uses multiple volunteered mobile devices to contribute during the calculation process. Our objective is to demonstrate how the use of volunteered computing on mobile devices may be a feasible option to reduce the time required to break a weak RSA encryption and to observe the behavior and running time of the application on mobile devices.
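
    A toy Python sketch of the general idea: split the trial-division search space for the modulus across volunteer workers and reconstruct the private exponent once a factor is found. The tiny key, the naive trial division, and the sequential worker loop are purely illustrative assumptions (a realistic attack would need far stronger factoring methods and real parallelism), and this is not the authors' mobile implementation:

      from math import isqrt

      def search_range(n, lo, hi):
          """Work unit for one volunteer device: trial division over [lo, hi)."""
          for p in range(lo | 1, hi, 2):          # odd candidates only
              if n % p == 0:
                  return p
          return None

      def crack_rsa(n, e, workers=4):
          """Coordinator: split the candidate range among `workers`, then derive d."""
          hi = isqrt(n) + 1
          step = max(1, (hi - 3) // workers + 1)
          for w in range(workers):                 # sequential here; parallel in practice
              lo = 3 + w * step
              p = search_range(n, lo, min(lo + step, hi))
              if p:
                  q = n // p
                  phi = (p - 1) * (q - 1)
                  return pow(e, -1, phi)           # private exponent d (Python >= 3.8)
          return None

      # Hypothetical weak key: p = 1009, q = 2003, e = 65537.
      n, e = 1009 * 2003, 65537
      d = crack_rsa(n, e)
      print(d, pow(pow(42, e, n), d, n) == 42)     # round-trips a test message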

  3. Downlink Training Techniques for FDD Massive MIMO Systems: Open-Loop and Closed-Loop Training With Memory

    NASA Astrophysics Data System (ADS)

    Choi, Junil; Love, David J.; Bidigare, Patrick

    2014-10-01

    The concept of deploying a large number of antennas at the base station, often called massive multiple-input multiple-output (MIMO), has drawn considerable interest because of its potential ability to revolutionize current wireless communication systems. Most literature on massive MIMO systems assumes time division duplexing (TDD), although frequency division duplexing (FDD) dominates current cellular systems. Due to the large number of transmit antennas at the base station, currently standardized approaches would require a large percentage of the precious downlink and uplink resources in FDD massive MIMO be used for training signal transmissions and channel state information (CSI) feedback. To reduce the overhead of the downlink training phase, we propose practical open-loop and closed-loop training frameworks in this paper. We assume the base station and the user share a common set of training signals in advance. In open-loop training, the base station transmits training signals in a round-robin manner, and the user successively estimates the current channel using long-term channel statistics such as temporal and spatial correlations and previous channel estimates. In closed-loop training, the user feeds back the best training signal to be sent in the future based on channel prediction and the previously received training signals. With a small amount of feedback from the user to the base station, closed-loop training offers better performance in the data communication phase, especially when the signal-to-noise ratio is low, the number of transmit antennas is large, or prior channel estimates are not accurate at the beginning of the communication setup, all of which would be mostly beneficial for massive MIMO systems.

  4. Structure and function of isozymes: Evolutionary aspects and role of oxygen in eucaryotic organisms

    NASA Technical Reports Server (NTRS)

    Satyanarayana, T.

    1985-01-01

    Oxygen is not only one of the most abundant elements on the Earth, but it is also one of the most important elements for life. In terms of composition, the feature of the atmosphere that most distinguishes Earth from other planets is the presence of abundant amounts of oxygen. The first forms of life may have been similar to present-day anaerobic bacteria such as Clostridium. The relationship between prokaryotes and eukaryotes, if any, has been a topic of much speculation. With only a few exceptions, eukaryotes are oxygen-utilizing organisms. This research suggests that eukaryotes, or eukaryotic biochemical processes requiring oxygen, could have arisen quite early in evolution and utilized the small quantities of photocatalytically produced oxygen which are thought to have been present on the Earth prior to the evolution of massive amounts of photosynthetically-produced oxygen.

  5. Galaxy And Mass Assembly (GAMA): the connection between metals, specific SFR and H I gas in galaxies: the Z-SSFR relation

    NASA Astrophysics Data System (ADS)

    Lara-López, M. A.; Hopkins, A. M.; López-Sánchez, A. R.; Brough, S.; Colless, M.; Bland-Hawthorn, J.; Driver, S.; Foster, C.; Liske, J.; Loveday, J.; Robotham, A. S. G.; Sharp, R. G.; Steele, O.; Taylor, E. N.

    2013-06-01

    We study the interplay between gas phase metallicity (Z), specific star formation rate (SSFR) and neutral hydrogen gas (H I) for galaxies of different stellar masses. Our study uses spectroscopic data from Galaxy and Mass Assembly and Sloan Digital Sky Survey (SDSS) star-forming galaxies, as well as H I detection from the Arecibo Legacy Fast Arecibo L-band Feed Array (ALFALFA) and Galex Arecibo SDSS Survey (GASS) public catalogues. We present a model based on the Z-SSFR relation that shows that at a given stellar mass, depending on the amount of gas, galaxies will follow opposite behaviours. Low-mass galaxies with a large amount of gas will show high SSFR and low metallicities, while low-mass galaxies with small amounts of gas will show lower SSFR and high metallicities. In contrast, massive galaxies with a large amount of gas will show moderate SSFR and high metallicities, while massive galaxies with small amounts of gas will show low SSFR and low metallicities. Using ALFALFA and GASS counterparts, we find that the amount of gas is related to those drastic differences in Z and SSFR for galaxies of a similar stellar mass.

  6. Negativity in Massive Online Open Courses: Impacts on Learning and Teaching and How Instructional Teams May Be Able to Address It

    ERIC Educational Resources Information Center

    Comer, Denise; Baker, Ryan; Wang, Yuan

    2015-01-01

    There are many positive aspects of teaching and learning in Massive Online Open Courses (MOOCs), for both instructors and students. However, there is also a considerable amount of negativity in MOOCs, emerging from learners on discussion forums and through peer assessment, from disciplinary colleagues and from public discourse around MOOCs.…

  7. The European project Merlin on multi-gigabit, energy-efficient, ruggedized lightwave engines for advanced on-board digital processors

    NASA Astrophysics Data System (ADS)

    Stampoulidis, L.; Kehayas, E.; Karppinen, M.; Tanskanen, A.; Heikkinen, V.; Westbergh, P.; Gustavsson, J.; Larsson, A.; Grüner-Nielsen, L.; Sotom, M.; Venet, N.; Ko, M.; Micusik, D.; Kissinger, D.; Ulusoy, A. C.; King, R.; Safaisini, R.

    2017-11-01

    Modern broadband communication networks rely on satellites to complement the terrestrial telecommunication infrastructure. Satellites accommodate global reach and enable world-wide direct broadcasting by facilitating wide access to the backbone network from remote sites or areas where the installation of ground segment infrastructure is not economically viable. At the same time the new broadband applications increase the bandwidth demands in every part of the network - and satellites are no exception. Modern telecom satellites incorporate On-Board Processors (OBP) having analogue-to-digital (ADC) and digital-to-analogue converters (DAC) at their inputs/outputs and making use of digital processing to handle hundreds of signals; as the amount of information exchanged increases, so do the physical size, mass and power consumption of the interconnects required to transfer massive amounts of data through bulk electric wires.

  8. Mineralogy, geochemistry, and Sr-Pb isotopic geochemistry of hydrothermal massive sulfides from the 15.2°S hydrothermal field, Mid-Atlantic Ridge

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Li, Xiaohu; Chu, Fengyou; Li, Zhenggang; Wang, Jianqiang; Yu, Xing; Bi, Dongwei

    2018-04-01

The 15.2°S hydrothermal field is located at 15.2°S, 13.4°W within the Mid-Atlantic Ridge (MAR) and was initially discovered in 2011 during Cruise DY125-22 of the Chinese research vessel R/V Dayangyihao. Here, we provide detailed mineralogical, bulk geochemical, and Sr-Pb isotopic data for massive sulfides and basalts from the 15.2°S hydrothermal field to improve our understanding of the mineral compositions, geochemical characteristics, type of hydrothermal field, and the source of metals present at this vent site. The samples include 14 massive sulfides and a single basalt. The massive sulfides are dominated by pyrite with minor amounts of sphalerite and chalcopyrite, although a few samples also contain minor amounts of gordaite, a sulfate mineral. The sulfides have bulk compositions that contain low concentrations of Cu + Zn (mean 7.84 wt%), Co (mean 183 ppm), Ni (mean 3 ppm), and Ba (mean 16 ppm), similar to the Normal Mid-Ocean Ridge Basalt (N-MORB) type deposits along the MAR but different from the compositions of the Enriched-MORB (E-MORB) and ultramafic type deposits along this spreading ridge. Sulfides from the study area have Pb isotopic compositions (206Pb/204Pb = 18.4502-18.4538, 207Pb/204Pb = 15.4903-15.4936, 208Pb/204Pb = 37.8936-37.9176) that are similar to those of the basalt sample (206Pb/204Pb = 18.3381, 207Pb/204Pb = 15.5041, 208Pb/204Pb = 37.9411), indicating that the metals within the sulfides were derived from leaching of the surrounding basaltic rocks. The sulfides also have 87Sr/86Sr ratios (0.708200-0.709049) that are much higher than typical MAR hydrothermal fluids (0.7028-0.7046), suggesting that the hydrothermal fluids mixed with a significant amount of seawater during massive sulfide precipitation.

  9. Integrating a geographic information system, a scientific visualization system and an orographic precipitation model

    USGS Publications Warehouse

    Hay, L.; Knapp, L.

    1996-01-01

Investigating natural, potential, and man-induced impacts on hydrological systems commonly requires complex modelling with overlapping data requirements, and massive amounts of one- to four-dimensional data at multiple scales and formats. Given the complexity of most hydrological studies, the requisite software infrastructure must incorporate many components including simulation modelling, spatial analysis and flexible, intuitive displays. There is a general requirement for a set of capabilities to support scientific analysis which, at this time, can only come from an integration of several software components. Integration of geographic information systems (GISs) and scientific visualization systems (SVSs) is a powerful technique for developing and analysing complex models. This paper describes the integration of an orographic precipitation model, a GIS and an SVS. The combination of these individual components provides a robust infrastructure which allows the scientist to work with the full dimensionality of the data and to examine the data in a more intuitive manner.

  10. The Livermore Brain: Massive Deep Learning Networks Enabled by High Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Barry Y.

The proliferation of inexpensive sensor technologies like the ubiquitous digital image sensors has resulted in the collection and sharing of vast amounts of unsorted and unexploited raw data. Companies and governments who are able to collect and make sense of large datasets to help them make better decisions more rapidly will have a competitive advantage in the information era. Machine Learning technologies play a critical role for automating the data understanding process; however, to be maximally effective, useful intermediate representations of the data are required. These representations or “features” are transformations of the raw data into a form where patterns are more easily recognized. Recent breakthroughs in Deep Learning have made it possible to learn these features from large amounts of labeled data. The focus of this project is to develop and extend Deep Learning algorithms for learning features from vast amounts of unlabeled data and to develop the HPC neural network training platform to support the training of massive network models. This LDRD project succeeded in developing new unsupervised feature learning algorithms for images and video and created a scalable neural network training toolkit for HPC. Additionally, this LDRD helped create the world’s largest freely-available image and video dataset supporting open multimedia research and used this dataset for training our deep neural networks. This research helped LLNL capture several work-for-others (WFO) projects, attract new talent, and establish collaborations with leading academic and commercial partners. Finally, this project demonstrated the successful training of the largest unsupervised image neural network using HPC resources and helped establish LLNL leadership at the intersection of Machine Learning and HPC research.

  11. Enabling Graph Appliance for Genome Assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Rina; Graves, Jeffrey A; Lee, Sangkeun

    2015-01-01

In recent years, there has been a huge growth in the amount of genomic data available as reads generated from various genome sequencers. The number of reads generated can be huge, ranging from hundreds to billions of nucleotides, each varying in size. Assembling such large amounts of data is one of the challenging computational problems for both biomedical and data scientists. Most of the genome assemblers developed have used de Bruijn graph techniques. A de Bruijn graph represents a collection of read sequences by billions of vertices and edges, which require large amounts of memory and computational power to store and process. This is the major drawback to de Bruijn graph assembly. Massively parallel, multi-threaded, shared memory systems can be leveraged to overcome some of these issues. The objective of our research is to investigate the feasibility and scalability issues of de Bruijn graph assembly on Cray's Urika-GD system; Urika-GD is a high performance graph appliance with a large shared memory and massively multithreaded custom processor designed for executing SPARQL queries over large-scale RDF data sets. However, to the best of our knowledge, there is no research on representing a de Bruijn graph as an RDF graph or finding Eulerian paths in RDF graphs using SPARQL for potential genome discovery. In this paper, we address the issues involved in representing de Bruijn graphs as RDF graphs and propose an iterative querying approach for finding Eulerian paths in large RDF graphs. We evaluate the performance of our implementation on real-world Ebola genome datasets and illustrate how genome assembly can be accomplished with Urika-GD using iterative SPARQL queries.

  12. Problematic usage among highly-engaged players of massively multiplayer online role playing games.

    PubMed

    Peters, Christopher S; Malesky, L Alvin

    2008-08-01

    One popular facet of Internet gaming is the massively multiplayer online role playing game (MMORPG). Some individuals spend so much time playing these games that it creates problems in their lives. This study focused on players of World of Warcraft. Factor analysis revealed one factor related to problematic usage, which was correlated with amount of time played, and personality characteristics of agreeableness, conscientiousness, neuroticism, and extraversion.

  13. Exploiting NASA's Cumulus Earth Science Cloud Archive with Services and Computation

    NASA Astrophysics Data System (ADS)

    Pilone, D.; Quinn, P.; Jazayeri, A.; Schuler, I.; Plofchan, P.; Baynes, K.; Ramachandran, R.

    2017-12-01

NASA's Earth Observing System Data and Information System (EOSDIS) houses nearly 30 PB of critical Earth Science data and, with upcoming missions, is expected to balloon to 200-300 PB over the next seven years. In addition to the massive increase in data collected, researchers and application developers want more and faster access - enabling complex visualizations, long time-series analysis, and cross dataset research without needing to copy and manage massive amounts of data locally. NASA has started prototyping with commercial cloud providers to make this data available in elastic cloud compute environments, allowing application developers direct access to the massive EOSDIS holdings. In this talk we'll explain the principles behind the archive architecture and share our experience of dealing with large amounts of data with serverless architectures including AWS Lambda, the Elastic Container Service (ECS) for long running jobs, and why we dropped thousands of lines of code for AWS Step Functions. We'll discuss best practices and patterns for accessing and using data available in a shared object store (S3) and leveraging events and message passing for sophisticated and highly scalable processing and analysis workflows. Finally we'll share capabilities NASA and cloud services are making available on the archives to enable massively scalable analysis and computation in a variety of formats and tools.
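
    As a hedged illustration of the shared-object-store access pattern mentioned above, the sketch below reads one object from S3 with boto3; the bucket and key names are hypothetical and credentials are assumed to come from the environment, so this is not the actual EOSDIS archive layout.

```python
# Minimal sketch: read a single granule from a shared S3 object store.
# Bucket/key names are hypothetical; AWS credentials come from the environment.
import boto3

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="example-eosdis-archive", Key="granules/sample.nc")
data = obj["Body"].read()          # raw bytes of the granule
print(f"read {len(data)} bytes")
```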

  14. Approaches to advance scientific understanding of macrosystems ecology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levy, Ofir; Ball, Becky; Bond-Lamberty, Benjamin

Macrosystem ecological studies inherently investigate processes that interact across multiple spatial and temporal scales, requiring intensive sampling and massive amounts of data from diverse sources to incorporate complex cross-scale and hierarchical interactions. Inherent challenges associated with these characteristics include high computational demands, data standardization and assimilation, identification of important processes and scales without prior knowledge, and the need for large, cross-disciplinary research teams that conduct long-term studies. Therefore, macrosystem ecology studies must utilize a unique set of approaches that are capable of encompassing these methodological characteristics and associated challenges. Several case studies demonstrate innovative methods used in current macrosystem ecology studies.

  15. Calculating Trajectories And Orbits

    NASA Technical Reports Server (NTRS)

    Alderson, Daniel J.; Brady, Franklyn H.; Breckheimer, Peter J.; Campbell, James K.; Christensen, Carl S.; Collier, James B.; Ekelund, John E.; Ellis, Jordan; Goltz, Gene L.; Hintz, Gerarld R.; hide

    1989-01-01

    Double-Precision Trajectory Analysis Program, DPTRAJ, and Orbit Determination Program, ODP, developed and improved over years to provide highly reliable and accurate navigation capability for deep-space missions like Voyager. Each collection of programs working together to provide desired computational results. DPTRAJ, ODP, and supporting utility programs capable of handling massive amounts of data and performing various numerical calculations required for solving navigation problems associated with planetary fly-by and lander missions. Used extensively in support of NASA's Voyager project. DPTRAJ-ODP available in two machine versions. UNIVAC version, NPO-15586, written in FORTRAN V, SFTRAN, and ASSEMBLER. VAX/VMS version, NPO-17201, written in FORTRAN V, SFTRAN, PL/1 and ASSEMBLER.

  16. Too Much Information--Too Much Apprehension

    ERIC Educational Resources Information Center

    Hijazi, Sam

    2004-01-01

    The information age along with the exponential increase in information technology has brought an unexpected amount of information. The endeavor to sort and extract a meaning from the massive amount of data has become a challenging task to many educators and managers. This research is an attempt to collect the most common suggestions to reduce the…

  17. ALFIL: A Crowd Simulation Serious Game for Massive Evacuation Training and Awareness

    ERIC Educational Resources Information Center

    García-García, César; Fernández-Robles, José Luis; Larios-Rosillo, Victor; Luga, Hervé

    2012-01-01

    This article presents the current development of a serious game for the simulation of massive evacuations. The purpose of this project is to promote self-protection through awareness of the procedures and different possible scenarios during the evacuation of a massive event. Sophisticated behaviors require massive computational power and it has…

  18. A different method for calculation of the deflection angle of light passing close to a massive object by Fermat’s principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akkus, Harun, E-mail: physicisthakkus@gmail.com

    2013-12-15

We introduce a method for calculating the deflection angle of light passing close to a massive object. It is based on Fermat’s principle. The varying refractive index of the medium around the massive object is obtained from the Buckingham pi-theorem. Highlights: a different and simpler method for the calculation of the deflection angle of light; not a curved space, only 2-D Euclidean space; a varying refractive index obtained from the Buckingham pi-theorem; some results of general relativity obtained from Fermat’s principle.
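
    For context, the weak-field benchmark such a refractive-index calculation is expected to reproduce is the standard general-relativistic deflection of light grazing a mass M at impact parameter b (a textbook result quoted here for orientation, not a formula taken from this report):

```latex
% Standard weak-field light deflection (textbook GR result, for orientation only)
\delta\varphi = \frac{4GM}{c^{2} b}
```

    For light grazing the Sun this evaluates to roughly 1.75 arcseconds.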

  19. Incidence, management and outcome of women requiring massive transfusion after childbirth in the Netherlands: secondary analysis of a nationwide cohort study between 2004 and 2006.

    PubMed

    Ramler, Paul I; van den Akker, Thomas; Henriquez, Dacia D C A; Zwart, Joost J; van Roosmalen, Jos

    2017-06-19

Postpartum hemorrhage remains the leading cause of maternal morbidity and mortality worldwide. Few population-based studies have examined the epidemiology of massive transfusion for postpartum hemorrhage. The aim of this study was to determine the incidence, management, and outcomes of women with postpartum hemorrhage who required massive transfusion in the Netherlands between 2004 and 2006. Data for all women from a gestational age of 20 weeks onwards who had postpartum hemorrhage requiring eight or more red blood cell concentrates were obtained from a nationwide population-based cohort study including all 98 hospitals with a maternity unit in the Netherlands. Three hundred twenty-seven women who had postpartum hemorrhage requiring massive transfusion were identified (massive transfusion rate 91 per 100,000 deliveries (95% confidence interval: 81-101)). The median blood loss was 4500 mL (interquartile range 3250-6000 mL) and the median number of red blood cell concentrates transfused was 11 units (interquartile range 9-16 units). Among women receiving massive transfusion, the most common cause of hemorrhage was uterine atony. Eighty-three women (25%) underwent hysterectomy, 227 (69%) were admitted to an intensive care unit, and three women died (case fatality rate 0.9%). The number of women in the Netherlands who had postpartum hemorrhage treated with massive transfusion was relatively high compared to other comparable settings. Evidence-based uniform management guidelines are necessary.

  20. Especial Skills in Experienced Archers.

    PubMed

    Nabavinik, Mahdi; Abaszadeh, Ali; Mehranmanesh, Mehrab; Rosenbaum, David A

    2017-09-05

Especial skills are skills that are distinctive by virtue of massive practice within the narrow contexts in which they are expressed. In the first demonstration of especial skills, Keetch, Schmidt, Lee, and Young (2005) showed that experienced basketball players are better at shooting baskets from the foul line, where they had massive amounts of practice, than would be expected from their success at other locations closer to or farther from the basket. Similar results were obtained for baseball throwing. The authors asked whether especial skills hold in archery, a sport requiring less movement. If the emergence of especial skills depends on large-scale movement, one would expect archery to escape so-called especialism. But if the emergence of especial skills reflects a more general tendency for highly specific learning, experienced archers should show especial skills. The authors obtained evidence consistent with the latter prediction. The expert archers did much better at their most highly practiced distance than would be expected by looking at the overall function relating shooting score to distance. We offer a mathematical model to account for this result. The findings attest to the generality of the especial skills phenomenon.

  1. Nonparametric Density Estimation Based on Self-Organizing Incremental Neural Network for Large Noisy Data.

    PubMed

    Nakamura, Yoshihiro; Hasegawa, Osamu

    2017-01-01

With the ongoing development and expansion of communication networks and sensors, massive amounts of data are continuously generated in real time from real environments. The distribution underlying such data is difficult to predict beforehand; furthermore, the data include substantial amounts of noise. These factors make it difficult to estimate probability densities. To handle these issues and massive amounts of data, we propose a nonparametric density estimator that rapidly learns data online and has high robustness. Our approach is an extension of both kernel density estimation (KDE) and a self-organizing incremental neural network (SOINN); therefore, we call our approach KDESOINN. An SOINN provides a clustering method that learns about the given data as networks of prototypes of the data; more specifically, an SOINN can learn the distribution underlying the given data. Using this information, KDESOINN estimates the probability density function. The results of our experiments show that KDESOINN outperforms or achieves performance comparable to the current state-of-the-art approaches in terms of robustness, learning time, and accuracy.
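
    As a rough illustration of the kernel-density step described above, the sketch below evaluates a Gaussian KDE over a set of prototype nodes of the kind an SOINN would supply; the prototypes and bandwidth are hypothetical inputs, and the incremental SOINN learning itself is not reproduced here.

```python
# Minimal sketch: Gaussian kernel density estimate built on prototype nodes.
# The prototypes and bandwidth are illustrative stand-ins for an SOINN's output.
import numpy as np

def kde_from_prototypes(x, prototypes, bandwidth):
    """Estimate the probability density at query points x from prototype nodes."""
    x = np.atleast_2d(x)                       # shape (n_query, d)
    p = np.atleast_2d(prototypes)              # shape (n_proto, d)
    d = x.shape[1]
    # Squared Euclidean distance between every query point and every prototype.
    sq_dist = ((x[:, None, :] - p[None, :, :]) ** 2).sum(axis=2)
    norm = (2.0 * np.pi * bandwidth ** 2) ** (d / 2.0)
    kernels = np.exp(-sq_dist / (2.0 * bandwidth ** 2)) / norm
    return kernels.mean(axis=1)                # average kernel response per query point

# Density of 1-D data summarised by three prototypes, evaluated at two points.
print(kde_from_prototypes([[0.0], [1.0]], [[-1.0], [0.0], [2.0]], bandwidth=0.5))
```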

  2. [Implementation of a new electronic patient record in surgery].

    PubMed

    Eggli, S; Holm, J

    2001-12-01

The increasing amount of clinical data, intensified interest of patients in medical information, medical quality management and the recent cost explosion in health care systems have forced medical institutions to improve their strategy in handling medical data. In the orthopedic department (3,600 surgeries, 75 beds, 14,000 consultations) a software application for comprehensive patient data management has been developed. When implementing the electronic patient history, the following criteria were evaluated: 1. software evaluation, 2. implementation, 3. work flow, 4. data security/system stability. In the first phase the functional character was defined. Implementation required 3 months after parametrization. The expense amounted to 130,000 DM (30 clients). The training requirements were one afternoon for the secretaries and a 2-h session for the residents. The access speed on medically relevant data averaged under 3 s. The average saving in working hours was approximately 5 h/week for the secretaries and 4 h/week for the residents. The saving in paper amounted to 36,000 sheets/year. In 3 operational years there were 3 server breakdowns. Evaluation of the saving on working hours showed that such a system can amortize within a year. The latest improvements in hardware and software technology made the electronic medical record with integrated quality-control practicable without massive expenditure. The system supplies an extensive platform of information for patient treatment and an instrument to evaluate the efficiency of therapy strategies independent of the clinical field.

  3. Virtual reality visualization algorithms for the ALICE high energy physics experiment on the LHC at CERN

    NASA Astrophysics Data System (ADS)

    Myrcha, Julian; Trzciński, Tomasz; Rokita, Przemysław

    2017-08-01

    Analyzing massive amounts of data gathered during many high energy physics experiments, including but not limited to the LHC ALICE detector experiment, requires efficient and intuitive methods of visualisation. One of the possible approaches to that problem is stereoscopic 3D data visualisation. In this paper, we propose several methods that provide high quality data visualisation and we explain how those methods can be applied in virtual reality headsets. The outcome of this work is easily applicable to many real-life applications needed in high energy physics and can be seen as a first step towards using fully immersive virtual reality technologies within the frames of the ALICE experiment.

  4. Bridging the gap: from massive stars to supernovae

    PubMed Central

    Crowther, Paul A.; Janka, Hans-Thomas; Langer, Norbert

    2017-01-01

    Almost since the beginning, massive stars and their resultant supernovae have played a crucial role in the Universe. These objects produce tremendous amounts of energy and new, heavy elements that enrich galaxies, encourage new stars to form and sculpt the shapes of galaxies that we see today. The end of millions of years of massive star evolution and the beginning of hundreds or thousands of years of supernova evolution are separated by a matter of a few seconds, in which some of the most extreme physics found in the Universe causes the explosive and terminal disruption of the star. Key questions remain unanswered in both the studies of how massive stars evolve and the behaviour of supernovae, and it appears the solutions may not lie on just one side of the explosion or the other or in just the domain of the stellar evolution or the supernova astrophysics communities. The need to view massive star evolution and supernovae as continuous phases in a single narrative motivated the Theo Murphy international scientific meeting ‘Bridging the gap: from massive stars to supernovae’ at Chicheley Hall, UK, in June 2016, with the specific purpose of simultaneously addressing the scientific connections between theoretical and observational studies of massive stars and their supernovae, through engaging astronomers from both communities. This article is part of the themed issue ‘Bridging the gap: from massive stars to supernovae’. PMID:28923995

  5. Bridging the gap: from massive stars to supernovae.

    PubMed

    Maund, Justyn R; Crowther, Paul A; Janka, Hans-Thomas; Langer, Norbert

    2017-10-28

Almost since the beginning, massive stars and their resultant supernovae have played a crucial role in the Universe. These objects produce tremendous amounts of energy and new, heavy elements that enrich galaxies, encourage new stars to form and sculpt the shapes of galaxies that we see today. The end of millions of years of massive star evolution and the beginning of hundreds or thousands of years of supernova evolution are separated by a matter of a few seconds, in which some of the most extreme physics found in the Universe causes the explosive and terminal disruption of the star. Key questions remain unanswered in both the studies of how massive stars evolve and the behaviour of supernovae, and it appears the solutions may not lie on just one side of the explosion or the other or in just the domain of the stellar evolution or the supernova astrophysics communities. The need to view massive star evolution and supernovae as continuous phases in a single narrative motivated the Theo Murphy international scientific meeting 'Bridging the gap: from massive stars to supernovae' at Chicheley Hall, UK, in June 2016, with the specific purpose of simultaneously addressing the scientific connections between theoretical and observational studies of massive stars and their supernovae, through engaging astronomers from both communities. This article is part of the themed issue 'Bridging the gap: from massive stars to supernovae'. © 2017 The Author(s).

  6. Tourmaline in Appalachian - Caledonian massive sulphide deposits and its exploration significance.

    USGS Publications Warehouse

    Slack, J.F.

    1982-01-01

Tourmaline is a common gangue mineral in several types of stratabound mineral deposits, including some massive base-metal sulphide ores of the Appalachian - Caledonian orogen. It is most abundant (sometimes forming massive foliated tourmalinite) in sediment-hosted deposits, such as those at the Elizabeth Cu mine and the Ore Knob Cu mine (North Carolina, USA). Trace amounts of tourmaline occur associated with volcanic-hosted deposits in the Piedmont and New England and also in the Trondheim district. Tourmalines associated with the massive sulphide deposits are Mg-rich dravites with major- and trace-element compositions significantly different from schorl. It is suggested that the necessary B was produced by submarine exhalative processes as a part of the same hydrothermal system that deposited the ores. An abundance of dravite in non-evaporitic terrains is believed to indicate proximity to former subaqueous fumarolic centres.-R.A.H.

  7. Passive contribution of the rotator cuff to abduction and joint stability.

    PubMed

    Tétreault, Patrice; Levasseur, Annie; Lin, Jenny C; de Guise, Jacques; Nuño, Natalia; Hagemeister, Nicola

    2011-11-01

    The purpose of this study is to compare shoulder joint biomechanics during abduction with and without intact non-functioning rotator cuff tissue. A cadaver model was devised to simulate the clinical findings seen in patients with a massive cuff tear. Eight full upper limb shoulder specimens were studied. Initially, the rotator cuff tendons were left intact, representing a non-functional rotator cuff, as seen in suprascapular nerve paralysis or in cuff repair with a patch. Subsequently, a massive rotator cuff tear was re-created. Three-dimensional kinematics and force requirements for shoulder abduction were analyzed for each condition using ten abduction cycles in the plane of the scapula. Mediolateral displacements of the glenohumeral rotation center (GHRC) during abduction with an intact non-functioning cuff were minimal, but massive cuff tear resulted in significant lateral displacement of the GHRC (p < 0.013). Similarly, massive cuff tear caused increased superior migration of the GHRC during abduction compared with intact non-functional cuff (p < 0.01). From 5 to 30° of abduction, force requirements were significantly less with an intact non-functioning cuff than with massive cuff tear (p < 0.009). During abduction, an intact but non-functioning rotator cuff resulted in decreased GHRC displacement in two axes as well as lowered the force requirement for abduction from 5 to 30° as compared with the results following a massive rotator cuff tear. This provides insight into the potential biomechanical effect of repairing massive rotator cuff tears with a biological or synthetic "patch," which is a new treatment for massive cuff tear.

  8. The Cycle of Dust in the Milky Ways: Clues from the High-Redshift and the Local Universe

    NASA Technical Reports Server (NTRS)

    Dwek, Eli

    2008-01-01

Massive amounts of dust have been observed at high redshifts, when the universe was a mere 900 Myr old. There, the formation and evolution of dust is dominated by massive stars and interstellar processes. In contrast, in the local universe lower-mass stars, predominantly 2-5 Msun AGB stars, play the dominant role in the production of interstellar dust. These two extreme environments offer fascinating clues about the evolution of dust in the Milky Way galaxy.

  9. Report on Development of CFEA (Collective Front-End Analysis) Procedures: Specification of CFEA Model & Results of the HAWK CFEA

    DTIC Science & Technology

    1985-03-01

Performing organization: Applied Science Associates, Inc. …the workload on the user and help organize the massive amounts of data involved, job aids have been developed for CFEA users. A trial CFEA was conducted… coupled with the complex and massive nature of a CFEA of a battalion-sized organization could make the prospect of conducting a CFEA seem a bit over…

  10. JAliEn - A new interface between the AliEn jobs and the central services

    NASA Astrophysics Data System (ADS)

    Grigoras, A. G.; Grigoras, C.; Pedreira, M. M.; Saiz, P.; Schreiner, S.

    2014-06-01

Since the ALICE experiment began data taking in early 2010, the number of end-user jobs on the AliEn Grid has increased significantly. Presently 1/3 of the 40K CPU cores available to ALICE are occupied by jobs submitted by about 400 distinct users, individually or in organized analysis trains. The overall stability of the AliEn middleware has been excellent throughout the 3 years of running, but the massive amount of end-user analysis and its specific requirements and load has revealed a few components which can be improved. One of them is the interface between users and central AliEn services (catalogue, job submission system) which we are currently re-implementing in Java. The interface provides a persistent connection with enhanced data and job submission authenticity. In this paper we will describe the architecture of the new interface, the ROOT binding which enables the use of a single interface in addition to the standard UNIX-like access shell and the new security-related features.

  11. Big Data Challenges in Climate Science: Improving the Next-Generation Cyberinfrastructure

    NASA Technical Reports Server (NTRS)

    Schnase, John L.; Lee, Tsengdar J.; Mattmann, Chris A.; Lynnes, Christopher S.; Cinquini, Luca; Ramirez, Paul M.; Hart, Andre F.; Williams, Dean N.; Waliser, Duane; Rinsland, Pamela; hide

    2016-01-01

    The knowledge we gain from research in climate science depends on the generation, dissemination, and analysis of high-quality data. This work comprises technical practice as well as social practice, both of which are distinguished by their massive scale and global reach. As a result, the amount of data involved in climate research is growing at an unprecedented rate. Climate model intercomparison (CMIP) experiments, the integration of observational data and climate reanalysis data with climate model outputs, as seen in the Obs4MIPs, Ana4MIPs, and CREATE-IP activities, and the collaborative work of the Intergovernmental Panel on Climate Change (IPCC) provide examples of the types of activities that increasingly require an improved cyberinfrastructure for dealing with large amounts of critical scientific data. This paper provides an overview of some of climate science's big data problems and the technical solutions being developed to advance data publication, climate analytics as a service, and interoperability within the Earth System Grid Federation (ESGF), the primary cyberinfrastructure currently supporting global climate research activities.

  12. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    DOE PAGES

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; ...

    2015-01-01

Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research.

  13. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    PubMed Central

    Bhuvaneshwar, Krithika; Sulakhe, Dinanath; Gauba, Robinder; Rodriguez, Alex; Madduri, Ravi; Dave, Utpal; Lacinski, Lukasz; Foster, Ian; Gusev, Yuriy; Madhavan, Subha

    2014-01-01

Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel and it also helps meet the scale-out analysis needs of modern translational genomics research. PMID:26925205

  14. A New Way to Manage TCGA Data - TCGA

    Cancer.gov

    Rachel Karchin, of Johns Hopkins University's Department of Biomedical Engineering, is developing a tool that will help researchers sort through the massive amounts of genomic data gathered from TCGA's ovarian cancer tumor samples.

  15. Intelligent transportation systems data compression using wavelet decomposition technique.

    DOT National Transportation Integrated Search

    2009-12-01

Intelligent Transportation Systems (ITS) generate massive amounts of traffic data, which poses challenges for data storage, transmission and retrieval. Data compression and reconstruction techniques play an important role in ITS data processing…
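
    As a hedged sketch of the wavelet-decomposition idea named in the abstract, the example below compresses a synthetic traffic series by thresholding detail coefficients with PyWavelets; the wavelet family, decomposition level and threshold are illustrative choices, not parameters taken from the report.

```python
# Minimal sketch: compress a traffic time series by discarding small wavelet
# detail coefficients, then reconstruct it. Parameters here are illustrative.
import numpy as np
import pywt

signal = np.random.rand(512)                      # stand-in for a traffic volume series
coeffs = pywt.wavedec(signal, "db4", level=4)     # wavelet decomposition
threshold = 0.1
# Keep the approximation coefficients, threshold the detail coefficients.
coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="hard") for c in coeffs[1:]]
reconstructed = pywt.waverec(coeffs, "db4")       # reconstruction from kept coefficients
print("max reconstruction error:", np.abs(signal - reconstructed[: len(signal)]).max())
```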

  16. Atypical Findings in Massive Bupropion Overdose: A Case Report and Discussion of Psychopharmacologic Issues.

    PubMed

    Zhu, Yuanjia; Kolawole, Tiwalola; Jimenez, Xavier F

    2016-09-01

    Bupropion is an atypical antidepressant that is structurally similar to amphetamines. Its primary toxic effects include seizure, sinus tachycardia, hypertension, and agitation; however, at higher amounts of ingestion, paradoxical cardiac effects are seen. We report the case of a 21-year-old woman who ingested 13.5 g of bupropion, a dose higher than any other previously reported. The patient presented with seizure, sinus tachycardia with prolonged QTc and QRS intervals, dilated pupils, and agitation. Four days after overdose, the patient's sinus tachycardia and prolonged QTc and QRS intervals resolved with symptomatic management, but she soon developed sinus bradycardia, hypotension, and mild transaminitis. With continued conservative management and close monitoring, her sinus bradycardia resolved 8 days after the overdose. The transaminitis resolved 12 days after the overdose. Our findings are consistent with previously reported toxic effects associated with common overdose amounts of bupropion. In addition, we have observed transient cardiotoxicity manifesting as sinus bradycardia associated with massive bupropion overdose. These findings are less frequently reported and must be considered when managing patients with massive bupropion overdose. We review the psychopharmacologic implications of this and comment on previous literature.

  17. A Pervasive Parallel Processing Framework for Data Visualization and Analysis at Extreme Scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moreland, Kenneth; Geveci, Berk

    2014-11-01

The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends indicate that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It can be inferred that efficient concurrency on exascale machines requires a massive amount of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based on what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today’s distributed memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive amount of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.
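
    To illustrate the worklet idea in miniature, the sketch below applies a stateless per-chunk operation across pieces of a field in parallel; the function name and the thread pool are illustrative stand-ins, not the framework described in the report, and boundary effects between chunks are ignored.

```python
# Minimal sketch of a worklet-style, stateless operation mapped over localized
# pieces of data. This is an illustration, not the framework from the report.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def gradient_magnitude_worklet(chunk):
    """Stateless operation on one localized piece of a 2-D field."""
    gx, gy = np.gradient(chunk)
    return np.sqrt(gx ** 2 + gy ** 2)

field = np.random.rand(1024, 1024)
chunks = np.array_split(field, 8)                 # localized pieces of the field
with ThreadPoolExecutor() as pool:                # one lightweight task per piece
    results = list(pool.map(gradient_magnitude_worklet, chunks))
output = np.vstack(results)                       # reassemble (chunk borders are approximate)
print(output.shape)
```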

  18. A review of bioinformatic methods for forensic DNA analyses.

    PubMed

    Liu, Yao-Yuan; Harbison, SallyAnn

    2018-03-01

    Short tandem repeats, single nucleotide polymorphisms, and whole mitochondrial analyses are three classes of markers which will play an important role in the future of forensic DNA typing. The arrival of massively parallel sequencing platforms in forensic science reveals new information such as insights into the complexity and variability of the markers that were previously unseen, along with amounts of data too immense for analyses by manual means. Along with the sequencing chemistries employed, bioinformatic methods are required to process and interpret this new and extensive data. As more is learnt about the use of these new technologies for forensic applications, development and standardization of efficient, favourable tools for each stage of data processing is being carried out, and faster, more accurate methods that improve on the original approaches have been developed. As forensic laboratories search for the optimal pipeline of tools, sequencer manufacturers have incorporated pipelines into sequencer software to make analyses convenient. This review explores the current state of bioinformatic methods and tools used for the analyses of forensic markers sequenced on the massively parallel sequencing (MPS) platforms currently most widely used. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. A Split-Path Schema-Based RFID Data Storage Model in Supply Chain Management

    PubMed Central

    Fan, Hua; Wu, Quanyuan; Lin, Yisong; Zhang, Jianfeng

    2013-01-01

    In modern supply chain management systems, Radio Frequency IDentification (RFID) technology has become an indispensable sensor technology and massive RFID data sets are expected to become commonplace. More and more space and time are needed to store and process such huge amounts of RFID data, and there is an increasing realization that the existing approaches cannot satisfy the requirements of RFID data management. In this paper, we present a split-path schema-based RFID data storage model. With a data separation mechanism, the massive RFID data produced in supply chain management systems can be stored and processed more efficiently. Then a tree structure-based path splitting approach is proposed to intelligently and automatically split the movement paths of products. Furthermore, based on the proposed new storage model, we design the relational schema to store the path information and time information of tags, and some typical query templates and SQL statements are defined. Finally, we conduct various experiments to measure the effect and performance of our model and demonstrate that it performs significantly better than the baseline approach in both the data expression and path-oriented RFID data query performance. PMID:23645112
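
    As a hedged, much-simplified illustration of path-oriented storage and querying for tag data, the snippet below keeps one row per tag per path node with an entry and exit time; the table layout and column names are hypothetical and are not the schema defined in the paper.

```python
# Minimal sketch: store split path segments of RFID tags and run a
# path-oriented query. The schema is a hypothetical simplification.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tag_path (
        tag_id    TEXT,     -- EPC of the tagged product
        path_node TEXT,     -- one node of the split movement path
        t_in      INTEGER,  -- arrival time at the node
        t_out     INTEGER   -- departure time from the node
    )
""")
conn.executemany(
    "INSERT INTO tag_path VALUES (?, ?, ?, ?)",
    [("EPC001", "factory", 0, 10),
     ("EPC001", "warehouse", 12, 30),
     ("EPC001", "retailer", 35, 50)],
)
# Path-oriented query: where was tag EPC001 at time 20?
row = conn.execute(
    "SELECT path_node FROM tag_path WHERE tag_id = ? AND t_in <= ? AND ? <= t_out",
    ("EPC001", 20, 20),
).fetchone()
print(row[0])   # -> warehouse
```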

  20. COMAN: a web server for comprehensive metatranscriptomics analysis.

    PubMed

    Ni, Yueqiong; Li, Jun; Panagiotou, Gianni

    2016-08-11

Microbiota-oriented studies based on metagenomic or metatranscriptomic sequencing have revolutionised our understanding of microbial ecology and the roles of both clinical and environmental microbes. The analysis of massive metatranscriptomic data requires extensive computational resources, a collection of bioinformatics tools and expertise in programming. We developed COMAN (Comprehensive Metatranscriptomics Analysis), a web-based tool dedicated to automatically and comprehensively analysing metatranscriptomic data. The COMAN pipeline includes quality control of raw reads, removal of reads derived from non-coding RNA, followed by functional annotation, comparative statistical analysis, pathway enrichment analysis, co-expression network analysis and high-quality visualisation. The essential data generated by COMAN are also provided in tabular format for additional analysis and integration with other software. The web server has an easy-to-use interface and detailed instructions, and is freely available at http://sbb.hku.hk/COMAN/. Conclusions: COMAN is an integrated web server dedicated to comprehensive functional analysis of metatranscriptomic data, translating massive amounts of reads to data tables and high-standard figures. It is expected to facilitate researchers with less expertise in bioinformatics in answering microbiota-related biological questions and to increase the accessibility and interpretation of microbiota RNA-Seq data.

  1. Inconsistency of topologically massive hypergravity

    NASA Technical Reports Server (NTRS)

    Aragone, C.; Deser, S.

    1985-01-01

The coupled topologically massive spin-5/2 gravity system in D = 3 dimensions, whose kinematics represents dynamical propagating gauge-invariant massive spin-5/2 and spin-2 excitations, is shown to be inconsistent, or equivalently, not locally hypersymmetric. In contrast to D = 4, the local constraints on the system arising from failure of the fermionic Bianchi identities do not involve the 'highest spin' components of the field, but rather the auxiliary spinor required to construct a consistent massive model.

  2. Ontology-Based Information Extraction for Business Intelligence

    NASA Astrophysics Data System (ADS)

    Saggion, Horacio; Funk, Adam; Maynard, Diana; Bontcheva, Kalina

    Business Intelligence (BI) requires the acquisition and aggregation of key pieces of knowledge from multiple sources in order to provide valuable information to customers or feed statistical BI models and tools. The massive amount of information available to business analysts makes information extraction and other natural language processing tools key enablers for the acquisition and use of that semantic information. We describe the application of ontology-based extraction and merging in the context of a practical e-business application for the EU MUSING Project where the goal is to gather international company intelligence and country/region information. The results of our experiments so far are very promising and we are now in the process of building a complete end-to-end solution.

  3. Proceedings of the Scientific Data Compression Workshop

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K. (Editor)

    1989-01-01

Continuing advances in space and Earth science require increasing amounts of data to be gathered from spaceborne sensors. NASA expects to launch sensors during the next two decades which will be capable of producing an aggregate of 1500 Megabits per second if operated simultaneously. Such high data rates cause stresses in all aspects of end-to-end data systems. Technologies and techniques are needed to relieve such stresses. Potential solutions to the massive data rate problems are: data editing, greater transmission bandwidths, higher density and faster media, and data compression. Through four subpanels on Science Payload Operations, Multispectral Imaging, Microwave Remote Sensing and Science Data Management, recommendations were made for research in data compression and scientific data applications to space platforms.

  4. a Spatiotemporal Aggregation Query Method Using Multi-Thread Parallel Technique Based on Regional Division

    NASA Astrophysics Data System (ADS)

    Liao, S.; Chen, L.; Li, J.; Xiong, W.; Wu, Q.

    2015-07-01

Existing spatiotemporal databases support spatiotemporal aggregation queries over massive moving-object datasets. Due to the large amounts of data and the single-threaded processing method, the query speed cannot meet application requirements. Moreover, query efficiency is more sensitive to spatial variation than to temporal variation. In this paper, we propose a spatiotemporal aggregation query method using a multi-thread parallel technique based on regional division and implement it on the server. Concretely, we divide the spatiotemporal domain into several spatiotemporal cubes, compute the spatiotemporal aggregation on all cubes using multi-thread parallel processing, and then integrate the query results. Tests on real datasets show that this method improves query speed significantly.
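
    A minimal sketch of the divide-and-aggregate idea, assuming a simple count aggregate and illustrative cube sizes; the cube partitioning and the thread pool below are stand-ins for the regional division and multi-thread processing described in the abstract.

```python
# Minimal sketch: divide points into spatiotemporal cubes, aggregate each cube
# on its own thread, then integrate the results. Cube sizes are illustrative.
from concurrent.futures import ThreadPoolExecutor

points = [(116.3, 39.9, 5), (116.4, 40.0, 5), (121.5, 31.2, 7)]  # (lon, lat, hour)

def cube_key(p, dx=1.0, dy=1.0, dt=1):
    """Map a point to its spatiotemporal cube (the regional division)."""
    return (int(p[0] // dx), int(p[1] // dy), int(p[2] // dt))

def aggregate_cube(cube_points):
    """Aggregate one cube; here simply the number of points it contains."""
    return len(cube_points)

cubes = {}
for p in points:
    cubes.setdefault(cube_key(p), []).append(p)

with ThreadPoolExecutor() as pool:                # one aggregation task per cube
    counts = dict(zip(cubes, pool.map(aggregate_cube, cubes.values())))
print(counts)                                     # integrated result over all cubes
```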

  5. Massive transfusion and nonsurgical hemostatic agents.

    PubMed

    Perkins, Jeremy G; Cap, Andrew P; Weiss, Brendan M; Reid, Thomas J; Bolan, Charles D; Bolan, Charles E

    2008-07-01

    Hemorrhage in trauma is a significant challenge, accounting for 30% to 40% of all fatalities, second only to central nervous system injury as a cause of death. However, hemorrhagic death is the leading preventable cause of mortality in combat casualties and typically occurs within 6 to 24 hrs of injury. In cases of severe hemorrhage, massive transfusion may be required to replace more than the entire blood volume. Early prediction of massive transfusion requirements, using clinical and laboratory parameters, combined with aggressive management of hemorrhage by surgical and nonsurgical means, has significant potential to reduce early mortality. Although the classification of massive transfusion varies, the most frequently used definition is ten or more units of blood in 24 hrs. Transfusion of red blood cells is intended to restore blood volume, tissue perfusion, and oxygen-carrying capacity; platelets, plasma, and cryoprecipitate are intended to facilitate hemostasis through prevention or treatment of coagulopathy. Massive transfusion is uncommon in civilian trauma, occurring in only 1% to 3% of trauma admissions. As a result of a higher proportion of penetrating injury in combat casualties, it has occurred in approximately 8% of Operation Iraqi Freedom admissions and in as many as 16% during the Vietnam conflict. Despite its potential to reduce early mortality, massive transfusion is not without risk. It requires extensive blood-banking resources and is associated with high mortality. This review describes the clinical problems associated with massive transfusion and surveys the nonsurgical management of hemorrhage, including transfusion of blood products, use of hemostatic bandages/agents, and treatment with hemostatic medications.

  6. Massively parallel algorithms for real-time wavefront control of a dense adaptive optics system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fijany, A.; Milman, M.; Redding, D.

    1994-12-31

In this paper massively parallel algorithms and architectures for real-time wavefront control of a dense adaptive optics system (SELENE) are presented. The authors have already shown that the computation of a near-optimal control algorithm for SELENE can be reduced to the solution of a discrete Poisson equation on a regular domain. Although this represents an optimal computation, due to the large size of the system and the high sampling rate requirement, the implementation of this control algorithm poses a computationally challenging problem since it demands a sustained computational throughput of the order of 10 GFlops. They develop a novel algorithm, designated the Fast Invariant Imbedding algorithm, which offers a massive degree of parallelism with simple communication and synchronization requirements. Due to these features, this algorithm is significantly more efficient than other Fast Poisson Solvers for implementation on massively parallel architectures. The authors also discuss two massively parallel, algorithmically specialized architectures for low-cost and optimal implementation of the Fast Invariant Imbedding algorithm.
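
    For context, the discrete Poisson equation mentioned above can be solved on a regular grid with a plain relaxation scheme; the Jacobi sketch below is only a generic reference implementation, not the Fast Invariant Imbedding algorithm developed in the paper.

```python
# Minimal sketch: Jacobi relaxation for the discrete Poisson equation
# laplacian(u) = rhs on a regular grid with zero boundary values.
import numpy as np

def solve_poisson_jacobi(rhs, h=1.0, iters=500):
    u = np.zeros_like(rhs)
    for _ in range(iters):
        # Each sweep averages the four neighbours and subtracts the source term.
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:] -
                                h ** 2 * rhs[1:-1, 1:-1])
    return u

rhs = np.zeros((64, 64))
rhs[32, 32] = 1.0                  # point source in the middle of the grid
u = solve_poisson_jacobi(rhs)
print(u[32, 32])
```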

  7. Purely Dry Mergers do not Explain the Observed Evolution of Massive Early-type Galaxies since z ~ 1

    NASA Astrophysics Data System (ADS)

    Sonnenfeld, Alessandro; Nipoti, Carlo; Treu, Tommaso

    2014-05-01

Several studies have suggested that the observed size evolution of massive early-type galaxies (ETGs) can be explained as a combination of dry mergers and progenitor bias, at least since z ~ 1. In this paper we carry out a new test of the dry-merger scenario based on recent lensing measurements of the evolution of the mass density profile of ETGs. We construct a theoretical model for the joint evolution of the size and mass density profile slope γ' driven by dry mergers occurring at rates given by cosmological simulations. Such a dry-merger model predicts a strong decrease of γ' with cosmic time, inconsistent with the almost constant γ' inferred from observations in the redshift range 0 < z < 1. We then show with a simple toy model that a modest amount of cold gas in the mergers, consistent with the upper limits on recent star formation in ETGs, is sufficient to reconcile the model with measurements of γ'. By fitting for the amount of gas accreted during mergers, we find that models with dissipation are consistent with observations of the evolution in both size and density slope, if ~4% of the total final stellar mass arises from the gas accreted since z ~ 1. Purely dry merger models are ruled out at >99% CL. We thus suggest a scenario where the outer regions of massive ETGs grow by accretion of stars and dark matter, while small amounts of dissipation and nuclear star formation conspire to keep the mass density profile constant and approximately isothermal.

  8. Connections in wood and foliage

    Treesearch

    Kevin T. Smith

    2009-01-01

    Trees are networked systems that capture energy, move massive amounts of water and material, and provide the setting for human society and for the lives of many associated organisms. Tree survival depends on making and breaking the right connections within these networks.

  9. Chromoplast biogenesis and carotenoid accumulation

    USDA-ARS?s Scientific Manuscript database

    Chromoplasts are special organelles that possess superior ability to synthesize and store massive amounts of carotenoids. They are responsible for the distinctive colors found in fruits, flowers, and roots. Chromoplasts exhibit various morphologies and are derived from either pre-existing chloroplas...

  10. Clustering Patterns of Engagement in Massive Open Online Courses (MOOCs): The Use of Learning Analytics to Reveal Student Categories

    ERIC Educational Resources Information Center

    Khalil, Mohammad; Ebner, Martin

    2017-01-01

Massive Open Online Courses (MOOCs) are remote courses that excel in their students' heterogeneity and quantity. Because of this massiveness, the large datasets generated by MOOC platforms require advanced tools and techniques to reveal hidden patterns for the purpose of enhancing learning and educational behaviors. This publication…

  11. GHOSTS: The Stellar Populations in the Outskirts of Massive Disk Galaxies

    NASA Astrophysics Data System (ADS)

    De Jong, Roelof; Radburn-Smith, D. J.; Seth, A. C.; GHOSTS Team

    2007-12-01

    In recent years we have started to appreciate that the outskirts of galaxies contain valuable information about the formation process of galaxies. In hierarchical galaxy formation the stellar halos and thick disks of galaxies are thought to be the result of accretion of minor satellites, predominantly in the earlier assembly phases. The size, metallicity, and amount of substructure in current day halos are therefore directly related to issues like the small scale properties of the primordial power spectrum of density fluctuations and the suppression of star formation in small dark matter halos. I will show highlights from our ongoing HST/ACS/WFPC2 GHOSTS survey of the resolved stellar populations of 14 nearby, massive disk galaxies. I will show that the smaller galaxies (Vrot 100 km/s) have very small halos, but that most massive disk galaxies (Vrot 200 km/s) have very extended stellar envelopes. The luminosity of these envelopes seems to correlate with Hubble type and bulge-to-disk ratio, calling into question whether these are very extended bulge populations or inner halo populations. The amount of substructure varies strongly between galaxies. Finally, I will present the stellar populations of a very low surface brightness stream around M83, showing that it is old and fairly metal rich.

  12. Massive Submucosal Ganglia in Colonic Inertia.

    PubMed

    Naemi, Kaveh; Stamos, Michael J; Wu, Mark Li-Cheng

    2018-02-01

    - Colonic inertia is a debilitating form of primary chronic constipation with unknown etiology and diagnostic criteria, often requiring pancolectomy. We have occasionally observed massively enlarged submucosal ganglia containing at least 20 perikarya, in addition to previously described giant ganglia with greater than 8 perikarya, in cases of colonic inertia. These massively enlarged ganglia have yet to be formally recognized. - To determine whether such "massive submucosal ganglia," defined as ganglia harboring at least 20 perikarya, characterize colonic inertia. - We retrospectively reviewed specimens from colectomies of patients with colonic inertia and compared the prevalence of massive submucosal ganglia occurring in this setting to the prevalence of massive submucosal ganglia occurring in a set of control specimens from patients lacking chronic constipation. - Seven of 8 specimens affected by colonic inertia harbored 1 to 4 massive ganglia, for a total of 11 massive ganglia. One specimen lacked massive ganglia but had limited sampling and nearly massive ganglia. Massive ganglia occupied both superficial and deep submucosal plexus. The patient with 4 massive ganglia also had 1 mitotically active giant ganglion. Only 1 massive ganglion occupied the entire set of 10 specimens from patients lacking chronic constipation. - We performed the first, albeit distinctly small, study of massive submucosal ganglia and showed that massive ganglia may be linked to colonic inertia. Further, larger studies are necessary to determine whether massive ganglia are pathogenetic or secondary phenomena, and whether massive ganglia or mitotically active ganglia distinguish colonic inertia from other types of chronic constipation.

  13. Energy-efficient STDP-based learning circuits with memristor synapses

    NASA Astrophysics Data System (ADS)

    Wu, Xinyu; Saxena, Vishal; Campbell, Kristy A.

    2014-05-01

    It is now accepted that the traditional von Neumann architecture, with its separation of processor and memory, is ill suited to processing the parallel data streams that a mammalian brain handles efficiently. Moreover, researchers now envision computing architectures which enable cognitive processing of massive amounts of data by identifying spatio-temporal relationships in real time and solving complex pattern recognition problems. Memristor cross-point arrays, integrated with standard CMOS technology, are expected to result in massively parallel and low-power neuromorphic computing architectures. Recently, significant progress has been made in spiking neural networks (SNN), which emulate data processing in the cortical brain. These architectures comprise a dense network of neurons and the synapses formed between the axons and dendrites. Further, unsupervised or supervised competitive learning schemes are being investigated for global training of the network. In contrast to a software implementation, hardware realization of these networks requires massive circuit overhead for addressing and individually updating network weights. Instead, we employ bio-inspired learning rules such as spike-timing-dependent plasticity (STDP) to efficiently update the network weights locally. To realize SNNs on a chip, we propose densely integrating mixed-signal integrate-and-fire neurons (IFNs) and cross-point arrays of memristors in the back-end-of-line (BEOL) of CMOS chips. Novel IFN circuits have been designed to drive memristive synapses in parallel while maintaining overall power efficiency (<1 pJ/spike/synapse), even at spike rates greater than 10 MHz. We present circuit design details and simulation results of the IFN with memristor synapses, its response to incoming spike trains, and the STDP learning characterization.
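
    A minimal sketch of the kind of pairwise STDP weight update such hardware applies locally is shown below; the exponential window form and all parameter values are illustrative assumptions, not taken from this record:

        import math

        # Assumed exponential STDP window; amplitudes and time constant are illustrative.
        A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
        TAU = 20e-3                     # plasticity time constant in seconds

        def stdp_delta_w(t_pre, t_post):
            """Weight change for one pre/post spike pair (spike times in seconds)."""
            dt = t_post - t_pre
            if dt > 0:     # pre fires before post -> potentiation
                return A_PLUS * math.exp(-dt / TAU)
            if dt < 0:     # post fires before pre -> depression
                return -A_MINUS * math.exp(dt / TAU)
            return 0.0

        # Example: a presynaptic spike 5 ms before the postsynaptic spike strengthens the synapse.
        w = 0.5
        w = min(1.0, max(0.0, w + stdp_delta_w(0.000, 0.005)))   # clip weight to [0, 1]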

  14. Wildfire Health and Economic Impacts Case Study

    EPA Science Inventory

    Since 2008, eastern North Carolina has experienced 6 major wildfires, far exceeding the historic 50-year expected rate of return. Initiated by lightning strikes, these fires spread across dry, extremely vulnerable peat bogs several feet deep. The fires produced massive amounts...

  15. Observations of the Large Magellanic Cloud with Fermi

    DOE PAGES

    Abdo, A. A.; Ackermann, M.; Ajello, M.; ...

    2010-03-18

    Context. The Large Magellanic Cloud (LMC) is to date the only normal external galaxy that has been detected in high-energy gamma rays. High-energy gamma rays trace particle acceleration processes and gamma-ray observations allow the nature and sites of acceleration to be studied. Aims. We characterise the distribution and sources of cosmic rays in the LMC from analysis of gamma-ray observations. Methods. We analyse 11 months of continuous sky-survey observations obtained with the Large Area Telescope aboard the Fermi Gamma-Ray Space Telescope and compare them to tracers of the interstellar medium and models of the gamma-ray sources in the LMC. Results. The LMC is detected at 33σ significance. The integrated >100 MeV photon flux of the LMC amounts to (2.6 ± 0.2) × 10⁻⁷ ph cm⁻² s⁻¹, which corresponds to an energy flux of (1.6 ± 0.1) × 10⁻¹⁰ erg cm⁻² s⁻¹, with additional systematic uncertainties of 16%. The analysis reveals the massive star forming region 30 Doradus as a bright source of gamma-ray emission in the LMC, in addition to fainter emission regions found in the northern part of the galaxy. The gamma-ray emission from the LMC shows very little correlation with gas density and is rather correlated with tracers of massive star forming regions. The close confinement of gamma-ray emission to star forming regions suggests a relatively short GeV cosmic-ray proton diffusion length. In conclusion, the close correlation between cosmic-ray density and massive star tracers supports the idea that cosmic rays are accelerated in massive star forming regions as a result of the large amounts of kinetic energy that are input by the stellar winds and supernova explosions of massive stars into the interstellar medium.

  16. [Massive trichuriasis in an adult diagnosed by colonoscopy].

    PubMed

    Sapunar, J; Gil, L C; Gil, J G

    1999-01-01

    A case of massive trichuriasis is presented in a 37-year-old woman from a rural locality of the Metropolitan Region of Chile with a history of alcoholism, chronic hepatic damage and portal cavernomatosis. She had practiced geophagia for 12 years. Over the previous six months she had frequently presented with liquid diarrhea, colicky abdominal pain, tenesmus and a sensation of abdominal distention. Clinical and laboratory tests confirmed her hepatic condition, associated with celiac disease, anemia and hypereosinophilia. Within a week the diarrhea worsened and dysentery appeared. Colonoscopy revealed an impressive, massive trichuriasis. The patient was successfully treated with two courses of mebendazole (200 mg tablets twice daily for three days), separated by a one-week interval. After the first course she expelled a large number of Trichuris trichiura, her stools became normal, the geophagia disappeared and she regained 4 kg of body weight.

  17. A massively parallel computational approach to coupled thermoelastic/porous gas flow problems

    NASA Technical Reports Server (NTRS)

    Shia, David; McManus, Hugh L.

    1995-01-01

    A new computational scheme for coupled thermoelastic/porous gas flow problems is presented. Heat transfer, gas flow, and dynamic thermoelastic governing equations are expressed in fully explicit form, and solved on a massively parallel computer. The transpiration cooling problem is used as an example problem. The numerical solutions have been verified by comparison to available analytical solutions. Transient temperature, pressure, and stress distributions have been obtained. Small spatial oscillations in pressure and stress have been observed, which would be impractical to predict with previously available schemes. Comparisons between serial and massively parallel versions of the scheme have also been made. The results indicate that for small scale problems the serial and parallel versions use practically the same amount of CPU time. However, as the problem size increases the parallel version becomes more efficient than the serial version.

  18. Thermal stress control using waste steel fibers in massive concretes

    NASA Astrophysics Data System (ADS)

    Sarabi, Sahar; Bakhshi, Hossein; Sarkardeh, Hamed; Nikoo, Hamed Safaye

    2017-11-01

    One of the important subjects in massive concrete structures is the control of the heat generated by hydration and, consequently, the potential for cracking due to thermal stress. In the present study, waste turnery steel fibers were used in massive concretes to reduce the amount of cement without changing the compressive strength. By substituting a part of the cement with waste steel fibers, the costs and the generated hydration heat were reduced and the tensile strength was increased. The results showed that by using 0.5% turnery waste steel fibers, and consequently reducing the cement content by 32%, the hydration heat was reduced by 23.4% without changing the compressive strength. Moreover, the maximum heat gradient decreased from 18.5% in the plain concrete sample to 12% in the fiber-reinforced concrete sample.

  19. Zonal methods for the parallel execution of range-limited N-body simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, Kevin J.; Dror, Ron O.; Shaw, David E.

    2007-01-20

    Particle simulations in fields ranging from biochemistry to astrophysics require the evaluation of interactions between all pairs of particles separated by less than some fixed interaction radius. The applicability of such simulations is often limited by the time required for calculation, but the use of massive parallelism to accelerate these computations is typically limited by inter-processor communication requirements. Recently, Snir [M. Snir, A note on N-body computations with cutoffs, Theor. Comput. Syst. 37 (2004) 295-318] and Shaw [D.E. Shaw, A fast, scalable method for the parallel evaluation of distance-limited pairwise particle interactions, J. Comput. Chem. 26 (2005) 1318-1328] independently introduced two distinct methods that offer asymptotic reductions in the amount of data transferred between processors. In the present paper, we show that these schemes represent special cases of a more general class of methods, and introduce several new algorithms in this class that offer practical advantages over all previously described methods for a wide range of problem parameters. We also show that several of these algorithms approach an approximate lower bound on inter-processor data transfer.
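
    To make the underlying range-limited pairing problem concrete, the serial cell-list sketch below enumerates all particle pairs within a fixed interaction radius; it illustrates only the computation being parallelized, not any of the communication-reducing decompositions discussed above:

        from collections import defaultdict
        from itertools import product

        def pairs_within_cutoff(positions, r_cut):
            """Return all index pairs (i, j), i < j, closer than r_cut (serial cell-list sketch)."""
            dim = len(positions[0])
            cell_of = lambda p: tuple(int(c // r_cut) for c in p)
            cells = defaultdict(list)
            for i, p in enumerate(positions):
                cells[cell_of(p)].append(i)
            pairs = set()
            for cell, members in cells.items():
                for offset in product((-1, 0, 1), repeat=dim):   # scan the 3^dim neighbouring cells
                    neighbour = tuple(c + o for c, o in zip(cell, offset))
                    for i in members:
                        for j in cells.get(neighbour, ()):
                            if i < j and sum((positions[i][k] - positions[j][k]) ** 2
                                             for k in range(dim)) < r_cut ** 2:
                                pairs.add((i, j))
            return sorted(pairs)

        print(pairs_within_cutoff([(0.0, 0.0), (0.4, 0.1), (3.0, 3.0)], r_cut=1.0))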

  20. Martian stepped-delta formation by rapid water release.

    PubMed

    Kraal, Erin R; van Dijk, Maurits; Postma, George; Kleinhans, Maarten G

    2008-02-21

    Deltas and alluvial fans preserved on the surface of Mars provide an important record of surface water flow. Understanding how surface water flow could have produced the observed morphology is fundamental to understanding the history of water on Mars. To date, morphological studies have provided only minimum time estimates for the longevity of martian hydrologic events, which range from decades to millions of years. Here we use sand flume studies to show that the distinct morphology of martian stepped (terraced) deltas could only have originated from a single basin-filling event on a timescale of tens of years. Stepped deltas therefore provide a minimum and maximum constraint on the duration and magnitude of some surface flows on Mars. We estimate that the amount of water required to fill the basin and deposit the delta is comparable to the amount of water discharged by large terrestrial rivers, such as the Mississippi. The massive discharge, short timescale, and the associated short canyon lengths favour the hypothesis that stepped fans are terraced delta deposits draped over an alluvial fan and formed by water released suddenly from subsurface storage.

  1. Enabling Big Geoscience Data Analytics with a Cloud-Based, MapReduce-Enabled and Service-Oriented Workflow Framework

    PubMed Central

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing- and data-intensive, and the analytics require complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. The framework leverages cloud computing, MapReduce, and Service-Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. A service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists. PMID:25742012
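
    A toy in-process sketch of the map/reduce pattern behind such per-cell parallel processing follows; the grid-cell averaging task, function names, and data layout are illustrative assumptions, and the framework itself runs the equivalent logic on Hadoop/HBase rather than in plain Python:

        from collections import defaultdict

        # Each record: (lat, lon, value), e.g. a gridded geoscience observation.
        def map_phase(records, cell_size=1.0):
            """Emit (grid-cell, value) key/value pairs."""
            for lat, lon, value in records:
                yield (int(lat // cell_size), int(lon // cell_size)), value

        def reduce_phase(pairs):
            """Average the values falling into each grid cell."""
            sums, counts = defaultdict(float), defaultdict(int)
            for cell, value in pairs:
                sums[cell] += value
                counts[cell] += 1
            return {cell: sums[cell] / counts[cell] for cell in sums}

        obs = [(35.2, -97.4, 301.5), (35.7, -97.1, 302.0), (40.1, -88.3, 295.2)]
        print(reduce_phase(map_phase(obs)))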

  2. Enabling big geoscience data analytics with a cloud-based, MapReduce-enabled and service-oriented workflow framework.

    PubMed

    Li, Zhenlong; Yang, Chaowei; Jin, Baoxuan; Yu, Manzhu; Liu, Kai; Sun, Min; Zhan, Matthew

    2015-01-01

    Geoscience observations and model simulations are generating vast amounts of multi-dimensional data. Effectively analyzing these data is essential for geoscience studies. However, the tasks are challenging for geoscientists because processing the massive amount of data is both computing- and data-intensive, and the analytics require complex procedures and multiple tools. To tackle these challenges, a scientific workflow framework is proposed for big geoscience data analytics. The framework leverages cloud computing, MapReduce, and Service-Oriented Architecture (SOA). Specifically, HBase is adopted for storing and managing big geoscience data across distributed computers. A MapReduce-based algorithm framework is developed to support parallel processing of geoscience data. A service-oriented workflow architecture is built for supporting on-demand complex data analytics in the cloud environment. A proof-of-concept prototype tests the performance of the framework. Results show that this innovative framework significantly improves the efficiency of big geoscience data analytics by reducing the data processing time as well as simplifying data analytical procedures for geoscientists.

  3. Runaway electron dynamics in tokamak plasmas with high impurity content

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martín-Solís, J. R., E-mail: solis@fis.uc3m.es; Loarte, A.; Lehnen, M.

    2015-09-15

    The dynamics of high energy runaway electrons is analyzed for plasmas with high impurity content. It is shown that modified collision terms are required in order to account for the collisions of the relativistic runaway electrons with partially stripped impurity ions, including the effect of the collisions with free and bound electrons, as well as the scattering by the full nuclear and the electron-shielded ion charge. The effect of the impurities on the avalanche runaway growth rate is discussed. The results are applied, for illustration, to the interpretation of the runaway electron behavior during disruptions, where large amounts of impurities are expected, particularly during disruption mitigation by massive gas injection. The consequences for the electron synchrotron radiation losses and the resulting runaway electron dynamics are also analyzed.

  4. Fast and accurate automated cell boundary determination for fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider

    2013-07-01

    Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.

  5. Challenge Paper: Validation of Forensic Techniques for Criminal Prosecution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erbacher, Robert F.; Endicott-Popovsky, Barbara E.; Frincke, Deborah A.

    2007-04-10

    As in many domains, there is increasing agreement in the user and research community that digital forensics analysts would benefit from the extension, development and application of advanced techniques in performing large scale and heterogeneous data analysis. Modern digital forensics analysis of cyber-crimes and cyber-enabled crimes often requires scrutiny of massive amounts of data. For example, a case involving network compromise across multiple enterprises might require forensic analysis of numerous sets of network logs and computer hard drives, potentially involving 100s of gigabytes of heterogeneous data, or even terabytes or petabytes of data. Also, the goal for forensic analysis is to not only determine whether the illicit activity being considered is taking place, but also to identify the source of the activity and the full extent of the compromise or impact on the local network. Even after this analysis, there remains the challenge of using the results in subsequent criminal and civil processes.

  6. A Novel Data System for Verification of Internal Parameters of Motor Design

    NASA Technical Reports Server (NTRS)

    Smith, Doug; Saint Jean, Paul; Everton, Randy; Uresk, Bonnie

    2003-01-01

    Three major obstacles have limited the amount of information that can be obtained from inside an operating solid rocket motor. The first is a safety issue due to the presence of live propellant interacting with classical, electrical instrumentation. The second is a pressure-vessel feed-through risk arising from safely bringing a large number of wires through the rocket motor wall. The third is an attachment/protection issue associated with connecting gages to live propellant. Thiokol has developed a highly miniaturized, networked, electrically isolated data system that has safely delivered information from classical, electrical instrumentation (even on the burning propellant surface) to the outside world. This system requires only four wires to deliver 80 channels of data at 2300 samples/second/channel. The feed-through leak-path risk is massively reduced compared with the current situation, in which each gage requires at least three pressure-vessel wire penetrations. The external electrical isolation of the system is better than that of the propellant itself. This paper describes the new system.

  7. SX-Ella Danis stent in massive upper gastrointestinal bleeding in cirrhosis - a case series.

    PubMed

    Jain, Mayank; Balkrishanan, Mahadevan; Snk, Chenduran; Cgs, Sridhar; Ramakrishnan, Ravi; Venkataraman, Jayanthi

    2018-06-01

    We report our experience of three cases of decompensated cirrhosis with massive upper gastrointestinal bleeding, which required insertion of an SX-Ella Danis stent for hemostasis. The procedure is safe and effective.

  8. Late-time Dust Emission from the Type IIn Supernova 1995N

    NASA Astrophysics Data System (ADS)

    Van Dyk, Schuyler D.

    2013-05-01

    Type IIn supernovae (SNe IIn) have been found to be associated with significant amounts of dust. These core-collapse events are generally expected to be the final stage in the evolution of highly massive stars, either while in an extreme red supergiant phase or during a luminous blue variable phase. Both evolutionary scenarios involve substantial pre-supernova mass loss. I have analyzed the SN IIn 1995N in MCG -02-38-017 (Arp 261), for which mid-infrared archival data obtained with the Spitzer Space Telescope in 2009 (~14.7 yr after explosion) and with the Wide-field Infrared Survey Explorer in 2010 (~15.6-16.0 yr after explosion) reveal a luminous (~2 × 10⁷ L⊙) source detected from 3.4 to 24 μm. These observations probe the circumstellar material, set up by pre-SN mass loss, around the progenitor star and indicate the presence of ~0.05-0.12 M⊙ of pre-existing, cool dust at ~240 K. This is at least a factor ~10 lower than the dust mass required to be produced from SNe at high redshift, but the case of SN 1995N lends further evidence that highly massive stars could themselves be important sources of dust.

  9. Parallel Preconditioning for CFD Problems on the CM-5

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Kremenetsky, Mark D.; Richardson, John; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    To date, preconditioning methods on massively parallel systems have faced a major difficulty. The most successful preconditioning methods in terms of accelerating the convergence of the iterative solver, such as incomplete LU factorizations, are notoriously difficult to implement on parallel machines for two reasons: (1) the actual computation of the preconditioner is not very floating-point intensive, but requires a large amount of unstructured communication, and (2) the application of the preconditioning matrix in the iteration phase (i.e. triangular solves) is difficult to parallelize because of the recursive nature of the computation. Here we present a new approach to preconditioning for very large, sparse, unsymmetric, linear systems, which avoids both difficulties. We explicitly compute an approximate inverse to our original matrix. This new preconditioning matrix can be applied most efficiently for iterative methods on massively parallel machines, since the preconditioning phase involves only a matrix-vector multiplication, with possibly a dense matrix. Furthermore, the actual computation of the preconditioning matrix has natural parallelism. For a problem of size n, the preconditioning matrix can be computed by solving n independent small least squares problems. The algorithm and its implementation on the Connection Machine CM-5 are discussed in detail and supported by extensive timings obtained from real problem data.
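
    A small dense-matrix sketch of the column-by-column least-squares idea follows; it is illustrative only, since practical sparse approximate inverse methods restrict each column to a prescribed sparsity pattern so that the n independent problems stay small:

        import numpy as np

        def approximate_inverse(A):
            """Build M approximating A's inverse, one column at a time, by minimizing ||A m_j - e_j||_2.

            Each column is an independent least-squares problem, which is what makes the
            construction naturally parallel. Real implementations constrain the sparsity
            of m_j; this dense sketch does not.
            """
            n = A.shape[0]
            M = np.zeros_like(A, dtype=float)
            for j in range(n):
                e_j = np.zeros(n)
                e_j[j] = 1.0
                M[:, j] = np.linalg.lstsq(A, e_j, rcond=None)[0]
            return M

        A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
        M = approximate_inverse(A)
        print(np.round(M @ A, 6))   # close to the identity, so applying M is a useful preconditioning step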

  10. Biomechanical effect of latissimus dorsi tendon transfer for irreparable massive cuff tear.

    PubMed

    Oh, Joo Han; Tilan, Justin; Chen, Yu-Jen; Chung, Kyung Chil; McGarry, Michelle H; Lee, Thay Q

    2013-02-01

    The purpose of this study was to determine the biomechanical effects of latissimus dorsi transfer in a cadaveric model of massive posterosuperior rotator cuff tear. Eight cadaveric shoulders were tested at 0°, 30°, and 60° of abduction in the scapular plane with anatomically based muscle loading. Humeral rotational range of motion and the amount of humeral rotation due to muscle loading were measured. Glenohumeral kinematics and contact characteristics were measured throughout the range of motion. After testing in the intact condition, the supraspinatus and infraspinatus were resected. The cuff tear was then repaired by latissimus dorsi transfer. Two muscle loading conditions were applied after latissimus transfer to simulate increased tension that may occur due to limited muscle excursion. A repeated-measures analysis of variance was used for statistical analysis. The amount of internal rotation due to muscle loading and maximum internal rotation increased with massive cuff tear and was restored with latissimus transfer (P < .05). At maximum internal rotation, the humeral head apex shifted anteriorly, superiorly, and laterally at 0° of abduction after massive cuff tear (P < .05); this abnormal shift was corrected with latissimus transfer (P < .05). However, at 30° and 60° of abduction, latissimus transfer significantly altered kinematics (P < .05) and latissimus transfer with increased muscle loading increased contact pressure, especially at 60° of abduction. Latissimus dorsi transfer is beneficial in restoring humeral internal/external rotational range of motion, the internal/external rotational balance of the humerus, and glenohumeral kinematics at 0° of abduction. However, latissimus dorsi transfer with simulated limited excursion may lead to an overcompensation that can further deteriorate normal biomechanics, especially at higher abduction angles. Published by Mosby, Inc.

  11. PREPping Students for Authentic Science

    ERIC Educational Resources Information Center

    Dolan, Erin L.; Lally, David J.; Brooks, Eric; Tax, Frans E.

    2008-01-01

    In this article, the authors describe a large-scale research collaboration, the Partnership for Research and Education in Plants (PREP), which has capitalized on publicly available databases that contain massive amounts of biological information; stock centers that house and distribute inexpensive organisms with different genotypes; and the…

  12. Efficient Access to Massive Amounts of Tape-Resident Data

    NASA Astrophysics Data System (ADS)

    Yu, David; Lauret, Jérôme

    2017-10-01

    Randomly restoring files from tapes degrades read performance, primarily due to frequent tape mounts. The high latency of time-consuming tape mounts and dismounts is a major issue when accessing massive amounts of data from tape storage. BNL's mass storage system currently holds more than 80 PB of data on tapes, managed by HPSS. To restore files from HPSS, we make use of a scheduler software, called ERADAT. This scheduler system was originally based on code from Oak Ridge National Lab, developed in the early 2000s. After some major modifications and enhancements, ERADAT now provides advanced HPSS resource management, priority queuing, resource sharing, web-browser visibility of real-time staging activities, and advanced real-time statistics and graphs. ERADAT is also integrated with ACSLS and HPSS for near real-time mount statistics and resource control in HPSS. ERADAT also serves as the interface between HPSS and other applications such as the locally developed Data Carousel, providing fair resource-sharing policies and related capabilities. ERADAT has demonstrated great performance at BNL.
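
    The core scheduling idea, batching restore requests by cartridge so that each tape is mounted once and read in order, can be sketched as follows; the request format, function name, and ordering policy are illustrative assumptions and do not reflect ERADAT's actual priority and resource-sharing logic:

        from collections import defaultdict

        def order_restores(requests):
            """Group (file, tape_id, offset) requests by tape, then sort within each tape.

            Reading each tape's files in offset order with a single mount avoids the
            repeated mount/dismount latency that random restores incur.
            """
            by_tape = defaultdict(list)
            for name, tape_id, offset in requests:
                by_tape[tape_id].append((offset, name))
            schedule = []
            for tape_id, files in sorted(by_tape.items(), key=lambda kv: -len(kv[1])):
                schedule.append((tape_id, [name for _, name in sorted(files)]))
            return schedule

        reqs = [("a.root", "T0101", 532), ("b.root", "T0102", 12),
                ("c.root", "T0101", 17), ("d.root", "T0101", 900)]
        print(order_restores(reqs))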

  13. Constraining the physics of carbon crystallization through pulsations of a massive DAV BPM37093

    NASA Astrophysics Data System (ADS)

    Nitta, Atsuko; Kepler, S. O.; Chené, André-Nicolas; Koester, D.; Provencal, J. L.; Kleinman, S. J.; Sullivan, D. J.; Chote, Paul; Sefako, Ramotholo; Kanaan, Antonio; Romero, Alejandra; Corti, Mariela; Kilic, Mukremin; Montgomery, M. H.; Winget, D. E.

    We are trying to reduce the largest uncertainties in using white dwarf stars as Galactic chronometers by understanding the details of carbon crystallization that currently result in a 1-2 Gyr uncertainty in the ages of the oldest white dwarf stars. We expect the coolest white dwarf stars to have crystallized interiors, but theory also predicts that hotter white dwarf stars, if they are massive enough, will also have some core crystallization. BPM 37093 is the first discovered of only a handful of known massive white dwarf stars that are also pulsating DAV, or ZZ Ceti, variables. Our approach is to use the pulsations to constrain the core composition and amount of crystallization. Here we report our analysis of 4 hours of continuous time-series spectroscopy of BPM 37093 with Gemini South combined with simultaneous time-series photometry from Mt. John (New Zealand), SAAO, PROMPT, and Complejo Astronomico El Leoncito (CASLEO, Argentina).

  14. Heterogeneous computing architecture for fast detection of SNP-SNP interactions.

    PubMed

    Sluga, Davor; Curk, Tomaz; Zupan, Blaz; Lotric, Uros

    2014-06-25

    The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their use resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems.
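
    The computation being accelerated is an exhaustive scan over all SNP pairs. A serial sketch using a simple information-gain-style score is given below; the scoring function and data layout are illustrative assumptions and do not reproduce SNPsyn's actual statistic or the GPU/MIC kernels:

        import numpy as np
        from itertools import combinations

        def entropy(labels):
            """Shannon entropy (in bits) of a discrete label array."""
            _, counts = np.unique(labels, return_counts=True)
            p = counts / counts.sum()
            return -(p * np.log2(p)).sum()

        def interaction_score(snp_a, snp_b, phenotype):
            """Information gain of the joint genotype (a, b) about the phenotype."""
            joint = snp_a * 3 + snp_b            # genotypes coded 0/1/2 -> 9 joint states
            h_y = entropy(phenotype)
            h_y_given = sum((joint == g).mean() * entropy(phenotype[joint == g])
                            for g in np.unique(joint))
            return h_y - h_y_given

        def exhaustive_scan(genotypes, phenotype):
            """Score every SNP pair; the GPU/MIC versions parallelize exactly this loop."""
            n_snps = genotypes.shape[1]
            return {(i, j): interaction_score(genotypes[:, i], genotypes[:, j], phenotype)
                    for i, j in combinations(range(n_snps), 2)}

        rng = np.random.default_rng(0)
        geno = rng.integers(0, 3, size=(200, 5))     # 200 samples, 5 SNPs
        pheno = rng.integers(0, 2, size=200)         # binary phenotype
        print(max(exhaustive_scan(geno, pheno).items(), key=lambda kv: kv[1]))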

  15. Heterogeneous computing architecture for fast detection of SNP-SNP interactions

    PubMed Central

    2014-01-01

    Background The extent of data in a typical genome-wide association study (GWAS) poses considerable computational challenges to software tools for gene-gene interaction discovery. Exhaustive evaluation of all interactions among hundreds of thousands to millions of single nucleotide polymorphisms (SNPs) may require weeks or even months of computation. Massively parallel hardware within a modern Graphic Processing Unit (GPU) and Many Integrated Core (MIC) coprocessors can shorten the run time considerably. While the utility of GPU-based implementations in bioinformatics has been well studied, MIC architecture has been introduced only recently and may provide a number of comparative advantages that have yet to be explored and tested. Results We have developed a heterogeneous, GPU and Intel MIC-accelerated software module for SNP-SNP interaction discovery to replace the previously single-threaded computational core in the interactive web-based data exploration program SNPsyn. We report on differences between these two modern massively parallel architectures and their software environments. Their use resulted in an order of magnitude shorter execution times when compared to the single-threaded CPU implementation. The GPU implementation on a single Nvidia Tesla K20 runs twice as fast as that for the MIC architecture-based Xeon Phi P5110 coprocessor, but also requires considerably more programming effort. Conclusions General purpose GPUs are a mature platform with large amounts of computing power capable of tackling inherently parallel problems, but can prove demanding for the programmer. On the other hand, the new MIC architecture, albeit lacking in performance, reduces the programming effort and makes up for it with a more general architecture suitable for a wider range of problems. PMID:24964802

  16. Place in Perspective: Extracting Online Information about Points of Interest

    NASA Astrophysics Data System (ADS)

    Alves, Ana O.; Pereira, Francisco C.; Rodrigues, Filipe; Oliveirinha, João

    During the last few years, the amount of online descriptive information about places has reached reasonable dimensions for many cities in the world. Since such information is mostly natural-language text, Information Extraction techniques are needed to obtain the meaning of places that underlies these massive amounts of commonsense and user-made sources. In this article, we show how we automatically label places using Information Extraction techniques applied to online resources such as Wikipedia, Yellow Pages and Yahoo!.

  17. On the Formation of Massive Stars

    NASA Technical Reports Server (NTRS)

    Yorke, Harold W.; Sonnhalter, Cordula

    2002-01-01

    We calculate numerically the collapse of slowly rotating, nonmagnetic, massive molecular clumps of masses 30, 60, and 120 M⊙, which conceivably could lead to the formation of massive stars. Because radiative acceleration on dust grains plays a critical role in the clump's dynamical evolution, we have improved the module for continuum radiation transfer in an existing two-dimensional (axial symmetry assumed) radiation hydrodynamic code. In particular, rather than using "gray" dust opacities and "gray" radiation transfer, we calculate the dust's wavelength-dependent absorption and emission simultaneously with the radiation density at each wavelength and the equilibrium temperatures of three grain components: amorphous carbon particles, silicates, and "dirty ice"-coated silicates. Because our simulations cannot spatially resolve the innermost regions of the molecular clump, however, we cannot distinguish between the formation of a dense central cluster or a single massive object. Furthermore, we cannot exclude significant mass loss from the central object(s) that may interact with the inflow into the central grid cell. Thus, with our basic assumption that all material in the innermost grid cell accretes onto a single object, we are able to provide only an upper limit to the mass of stars that could possibly be formed. We introduce a semianalytical scheme for augmenting existing evolutionary tracks of pre-main-sequence protostars by including the effects of accretion. By considering an open outermost boundary, an arbitrary amount of material could, in principle, be accreted onto this central star. However, for the three cases considered (30, 60, and 120 M⊙ originally within the computation grid), radiation acceleration limited the final masses to 31.6, 33.6, and 42.9 M⊙, respectively, for wavelength-dependent radiation transfer and to 19.1, 20.1, and 22.9 M⊙ for the corresponding simulations with gray radiation transfer. Our calculations demonstrate that massive stars can in principle be formed via accretion through a disk. The accretion rate onto the central source increases rapidly after one initial free-fall time and decreases monotonically afterward. By enhancing the nonisotropic character of the radiation field, the accretion disk reduces the effects of radiative acceleration in the radial direction - a process we call the "flashlight effect." The flashlight effect is further amplified in our case by including the effects of frequency-dependent radiation transfer. We conclude with the warning that a careful treatment of radiation transfer is a mandatory requirement for realistic simulations of the formation of massive stars.

  18. Proteomic analysis of chromoplasts from six crop species reveals insights into chromoplast function and development

    USDA-ARS?s Scientific Manuscript database

    Chromoplasts are unique plastids that accumulate massive amounts of carotenoids. To gain a general and comparative characterization of chromoplast proteins, we performed proteomic analysis of chromoplasts from six carotenoid-rich crops: watermelon, tomato, carrot, orange cauliflower, red papaya, and...

  19. Multi-source and ontology-based retrieval engine for maize mutant phenotypes

    USDA-ARS?s Scientific Manuscript database

    In the midst of this genomics era, major plant genome databases are collecting massive amounts of heterogeneous information, including sequence data, gene product information, images of mutant phenotypes, etc., as well as textual descriptions of many of these entities. While basic browsing and sear...

  20. Infants Hierarchically Organize Memory Representations

    ERIC Educational Resources Information Center

    Rosenberg, Rebecca D.; Feigenson, Lisa

    2013-01-01

    Throughout development, working memory is subject to capacity limits that severely constrain short-term storage. However, adults can massively expand the total amount of remembered information by grouping items into "chunks". Although infants also have been shown to chunk objects in memory, little is known regarding the limits of this…

  1. Administrative Uses of Microcomputers.

    ERIC Educational Resources Information Center

    Crawford, Chase

    1987-01-01

    This paper examines the administrative uses of the microcomputer, stating that high performance educational managers are likely to have microcomputers in their organizations. Four situations that would justify the use of a computer are: (1) when massive amounts of data are processed through well-defined operations; (2) when data processing is…

  2. Health burden from peat wildfire in North Carolina

    EPA Science Inventory

    In June 2008, a wildfire smoldering through rich peat deposits in the Pocosin Lakes National Wildlife Refuge produced massive amounts of smoke and exposed a largely rural North Carolina area to air pollution in excess of the National Ambient Air Quality Standards. In this talk, w...

  3. Coma, Hyperthermia and Bleeding Associated with Massive LSD Overdose

    PubMed Central

    Klock, John C.; Boerner, Udo; Becker, Charles E.

    1974-01-01

    Eight patients were seen within 15 minutes of intranasal self-administration of large amounts of pure D-lysergic acid diethylamide (LSD) tartrate powder. Emesis and collapse occurred along with signs of sympathetic overactivity, hyperthermia, coma and respiratory arrest. Mild generalized bleeding occurred in several patients and evidence of platelet dysfunction was present in all. Serum and gastric concentrations of LSD tartrate ranged from 2.1 to 26 nanograms per ml and 1,000 to 7,000 μg per 100 ml, respectively. With supportive care, all patients recovered. Massive LSD overdose in man is life-threatening and produces striking and distinctive manifestations. PMID:4816396

  4. Coma, hyperthermia and bleeding associated with massive LSD overdose. A report of eight cases.

    PubMed

    Klock, J C; Boerner, U; Becker, C E

    1974-03-01

    Eight patients were seen within 15 minutes of intranasal self-administration of large amounts of pure D-lysergic acid diethylamide (LSD) tartrate powder. Emesis and collapse occurred along with signs of sympathetic overactivity, hyperthermia, coma and respiratory arrest. Mild generalized bleeding occurred in several patients and evidence of platelet dysfunction was present in all. Serum and gastric concentrations of LSD tartrate ranged from 2.1 to 26 nanograms per ml and 1,000 to 7,000 μg per 100 ml, respectively. With supportive care, all patients recovered. Massive LSD overdose in man is life-threatening and produces striking and distinctive manifestations.

  5. QCD corrections to massive color-octet vector boson pair production

    NASA Astrophysics Data System (ADS)

    Freitas, Ayres; Wiegand, Daniel

    2017-09-01

    This paper describes the calculation of the next-to-leading order (NLO) QCD corrections to massive color-octet vector boson pair production at hadron colliders. As a concrete framework, a two-site coloron model with an internal parity is chosen, which can be regarded as an effective low-energy approximation of Kaluza-Klein gluon physics in universal extra dimensions. The renormalization procedure involves several subtleties, which are discussed in detail. The impact of the NLO corrections is relatively modest, amounting to a reduction of 11-14% in the total cross-section, but they significantly reduce the scale dependence of the LO result.

  6. Proxy-equation paradigm: A strategy for massively parallel asynchronous computations

    NASA Astrophysics Data System (ADS)

    Mittal, Ankita; Girimaji, Sharath

    2017-09-01

    Massively parallel simulations of transport equation systems call for a paradigm change in algorithm development to achieve efficient scalability. Traditional approaches require time synchronization of processing elements (PEs), which severely restricts scalability. Relaxing the synchronization requirement introduces error and slows down convergence. In this paper, we propose and develop a novel "proxy equation" concept for a general transport equation that (i) tolerates asynchrony with minimal added error, (ii) preserves convergence order and thus (iii) is expected to scale efficiently on massively parallel machines. The central idea is to modify a priori the transport equation at the PE boundaries to offset asynchrony errors. Proof-of-concept computations are performed using a one-dimensional advection (convection) diffusion equation. The results demonstrate the promise and advantages of the present strategy.
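
    For reference, a minimal synchronous (fully time-synchronized) baseline for the one-dimensional advection-diffusion model problem can be written as below; the grid size, coefficients and explicit upwind/central discretization are illustrative choices, and the proxy-equation modification applied at PE boundaries is not reproduced here:

        import numpy as np

        # Model problem: u_t + c u_x = nu u_xx on a periodic unit domain.
        nx, c, nu = 200, 1.0, 0.01
        dx = 1.0 / nx
        dt = 0.4 * min(dx / c, dx**2 / (2 * nu))        # simple explicit stability bound
        x = np.linspace(0.0, 1.0, nx, endpoint=False)
        u = np.exp(-200.0 * (x - 0.5) ** 2)             # initial Gaussian pulse

        for _ in range(500):
            u_left, u_right = np.roll(u, 1), np.roll(u, -1)          # periodic neighbours
            u = (u
                 - c * dt / dx * (u - u_left)                        # first-order upwind advection
                 + nu * dt / dx**2 * (u_right - 2.0 * u + u_left))   # central diffusion

        print(float(u.max()))   # pulse has advected and diffused after 500 steps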

  7. Does massive intraabdominal free gas require surgical intervention?

    PubMed

    Furihata, Tadashi; Furihata, Makoto; Ishikawa, Kunibumi; Kosaka, Masato; Satoh, Naoki; Kubota, Keiichi

    2016-08-28

    We describe a rare case of an 81-year-old man who presented with severe epigastralgia. A chest radiograph showed massive free gas bilaterally in the diaphragmatic spaces. Computed tomography (CT) scan also showed massive free gas in the peritoneal cavity with portal venous gas. We used a wait-and-see approach and carefully considered surgery again when the time was appropriate. The patient received conservative therapy with fasting, an intravenous infusion of antibiotics, and nasogastric intubation. The patient soon recovered and was able to start eating meals 4 d after treatment; thus, surgical intervention was avoided. Thereafter, colonoscopy examination showed pneumatosis cystoides intestinalis in the ascending colon. On retrospective review, CT scan demonstrated sporadic air-filled cysts in the ascending colon. The present case taught us a lesson: the presence of massive intraabdominal free gas with portal venous gas does not necessarily require surgical intervention. Pneumatosis cystoides intestinalis should be considered as a potential causative factor of free gas with portal venous gas when making the differential diagnosis.

  8. Cosmic string loops as the seeds of super-massive black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bramberger, Sebastian F.; Brandenberger, Robert H.; Jreidini, Paul

    2015-06-01

    Recent discoveries of super-massive black holes at high redshifts indicate a possible tension with the standard ΛCDM paradigm of early universe cosmology which has difficulties in explaining the origin of the required nonlinear compact seeds which trigger the formation of these super-massive black holes. Here we show that cosmic string loops which result from a scaling solution of strings formed during a phase transition in the very early universe lead to an additional source of compact seeds. The number density of string-induced seeds dominates at high redshifts and can help trigger the formation of the observed super-massive black holes.

  9. Using Kepler for Tool Integration in Microarray Analysis Workflows.

    PubMed

    Gan, Zhuohui; Stowe, Jennifer C; Altintas, Ilkay; McCulloch, Andrew D; Zambon, Alexander C

    Increasing numbers of genomic technologies are leading to massive amounts of genomic data, all of which require complex analysis. More and more bioinformatics analysis tools are being developed by scientists to simplify these analyses. However, different pipelines have been developed using different software environments, which makes integration of these diverse bioinformatics tools difficult. Kepler provides an open-source environment to integrate these disparate packages. Using Kepler, we integrated several external tools, including Bioconductor packages, AltAnalyze (a Python-based open-source tool), and an R-based comparison tool, to build an automated workflow to meta-analyze both online and local microarray data. The automated workflow connects the integrated tools seamlessly, delivers data flow between the tools smoothly, and hence improves the efficiency and accuracy of complex data analyses. Our workflow exemplifies the usage of Kepler as a scientific workflow platform for bioinformatics pipelines.

  10. Documentation of Heritage Structures Through Geo-Crowdsourcing and Web-Mapping

    NASA Astrophysics Data System (ADS)

    Dhonju, H. K.; Xiao, W.; Shakya, B.; Mills, J. P.; Sarhosis, V.

    2017-09-01

    Heritage documentation has become increasingly urgent due to both natural impacts and human influences. The documentation of countless heritage sites around the globe is a massive project that requires significant amounts of financial and labour resources. With the concepts of volunteered geographic information (VGI) and citizen science, heritage data such as digital photographs can be collected through online crowd participation. Whilst photographs are not strictly geographic data, they can be geo-tagged by the participants. They can also be automatically geo-referenced into a global coordinate system if collected via mobile phones which are now ubiquitous. With the assistance of web-mapping, an online geo-crowdsourcing platform has been developed to collect and display heritage structure photographs. Details of platform development are presented in this paper. The prototype is demonstrated with several heritage examples. Potential applications and advancements are discussed.

  11. iSeq: Web-Based RNA-seq Data Analysis and Visualization.

    PubMed

    Zhang, Chao; Fan, Caoqi; Gan, Jingbo; Zhu, Ping; Kong, Lei; Li, Cheng

    2018-01-01

    Transcriptome sequencing (RNA-seq) is becoming a standard experimental methodology for genome-wide characterization and quantification of transcripts at single base-pair resolution. However, downstream analysis of massive amounts of sequencing data can be prohibitively technical for wet-lab researchers. A functionally integrated and user-friendly platform is required to meet this demand. Here, we present iSeq, an R-based Web server, for RNA-seq data analysis and visualization. iSeq is a streamlined Web-based R application under the Shiny framework, featuring a simple user interface and multiple data analysis modules. Users without programming and statistical skills can analyze their RNA-seq data and construct publication-level graphs through a standardized yet customizable analytical pipeline. iSeq is accessible via Web browsers on any operating system at http://iseq.cbi.pku.edu.cn .

  12. Enhanced sampling techniques in biomolecular simulations.

    PubMed

    Spiwok, Vojtech; Sucur, Zoran; Hosek, Petr

    2015-11-01

    Biomolecular simulations are routinely used in biochemistry and molecular biology research; however, they often fail to match expectations of their impact on pharmaceutical and biotech industry. This is caused by the fact that a vast amount of computer time is required to simulate short episodes from the life of biomolecules. Several approaches have been developed to overcome this obstacle, including application of massively parallel and special purpose computers or non-conventional hardware. Methodological approaches are represented by coarse-grained models and enhanced sampling techniques. These techniques can show how the studied system behaves in long time-scales on the basis of relatively short simulations. This review presents an overview of new simulation approaches, the theory behind enhanced sampling methods and success stories of their applications with a direct impact on biotechnology or drug design. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Modified electrokinetic sample injection method in chromatography and electrophoresis analysis

    DOEpatents

    Davidson, J. Courtney; Balch, Joseph W.

    2001-01-01

    A sample injection method for horizontal configured multiple chromatography or electrophoresis units, each containing a number of separation/analysis channels, that enables efficient introduction of analyte samples. This method for loading when taken in conjunction with horizontal microchannels allows much reduced sample volumes and a means of sample stacking to greatly reduce the concentration of the sample. This reduction in the amount of sample can lead to great cost savings in sample preparation, particularly in massively parallel applications such as DNA sequencing. The essence of this method is in preparation of the input of the separation channel, the physical sample introduction, and subsequent removal of excess material. By this method, sample volumes of 100 nanoliter to 2 microliters have been used successfully, compared to the typical 5 microliters of sample required by the prior separation/analysis method.

  14. Massive Volcanic SO2 Oxidation and Sulphate Aerosol Deposition in Cenozoic North America

    EPA Science Inventory

    Volcanic eruptions release a large amount of sulphur dioxide (SO2) into the atmosphere. SO2 is oxidized to sulphate and can subsequently form sulphate aerosol, which can affect the Earth's radiation balance, biologic productivity and high-altitude ozone co...

  15. Generation and Limiters of Rogue Waves

    DTIC Science & Technology

    2014-06-01

    ...wave heights do not grow unlimited. With the massive amount of global wave observations available nowadays, wave heights much in excess of 30 m have never

  16. Fermilab | Tevatron | Experiments

    Science.gov Websites

    electrons, muons and charged hadrons followed curved paths through them. The slower or less massive the particles, the greater was the magnet's effect on them, and the more they curved. Scientists therefore used the amount by which a particle's track curved to determine its momentum. This information helped them

  17. Recent Developments in Young-Earth Creationist Geology

    ERIC Educational Resources Information Center

    Heaton, Timothy H.

    2009-01-01

    Young-earth creationism has undergone a shift in emphasis toward building of historical models that incorporate Biblical and scientific evidence and the acceptance of scientific conclusions that were formerly rejected. The RATE Group admitted that massive amounts of radioactive decay occurred during earth history but proposed a period of…

  18. Frameworks Coordinate Scientific Data Management

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Jet Propulsion Laboratory computer scientists developed a unique software framework to help NASA manage its massive amounts of science data. Through a partnership with the Apache Software Foundation of Forest Hill, Maryland, the technology is now available as an open-source solution and is in use by cancer researchers and pediatric hospitals.

  19. Communities of Practice and Professional Development

    ERIC Educational Resources Information Center

    Chalmers, Lex; Keown, Paul

    2006-01-01

    The Internet has had a transformative effect on many aspects of contemporary living. While there may be a tendency to overstate the impacts of this technology, workplaces and work practices in many societies have been greatly affected by almost instant access to massive amounts of information, delivered through broadening bandwidth. This paper…

  20. A HIERARCHICAL MODELING FRAMEWORK FOR GEOLOGICAL STORAGE OF CARBON DIOXIDE

    EPA Science Inventory

    Carbon Capture and Storage, or CCS, is likely to be an important technology in a carbon-constrained world. CCS will involve subsurface injection of massive amounts of captured CO2, on a scale that has not previously been approached. The unprecedented scale of t...

  1. Virtual Bioinformatics Distance Learning Suite

    ERIC Educational Resources Information Center

    Tolvanen, Martti; Vihinen, Mauno

    2004-01-01

    Distance learning as a computer-aided concept allows students to take courses from anywhere at any time. In bioinformatics, computers are needed to collect, store, process, and analyze massive amounts of biological and biomedical data. We have applied the concept of distance learning in virtual bioinformatics to provide university course material…

  2. Social Studies Special Issue: Civic Literacy in a Digital Age

    ERIC Educational Resources Information Center

    VanFossen, Phillip J.; Berson, Michael J.

    2008-01-01

    Young people today consume large amounts of information through various media outlets and simultaneously create and distribute their own messages via information and communication technologies and massively multiplayer online gaming. In doing so, these "digital natives" are often exposed to violent, racist, or other deleterious messages.…

  3. Simplified Antenna Group Determination of RS Overhead Reduced Massive MIMO for Wireless Sensor Networks.

    PubMed

    Lee, Byung Moo

    2017-12-29

    Massive multiple-input multiple-output (MIMO) systems can be applied to support numerous internet of things (IoT) devices using their large number of transmitter (TX) antennas. However, one of the big obstacles to the realization of massive MIMO systems is the overhead of the reference signal (RS), because the number of RSs is proportional to the number of TX antennas and/or related user equipments (UEs). It has already been reported that antenna group-based RS overhead reduction can be very effective for the efficient operation of massive MIMO, but the method of deciding the number of antennas needed in each group remains an open question. In this paper, we propose a simplified scheme for determining the number of antennas needed in each group for RS overhead reduced massive MIMO to support many IoT devices. Supporting many distributed IoT devices is a framework for configuring wireless sensor networks. Our contribution can be divided into two parts. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) with zero-forcing (ZF) and matched filtering (MF) precoding for RS overhead reduced massive MIMO systems with channel estimation error. The closed-form approximations include a channel error factor that can be adjusted according to the method of channel estimation. Second, based on the closed-form approximations, we present an efficient algorithm for determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of the SE. It is verified with theoretical analysis and simulation that the proposed algorithm works well, and thus can be used as an important tool for massive MIMO systems to support many distributed IoT devices.
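
    To illustrate the inversion step, the sketch below finds the smallest per-group antenna count that meets a target per-user SE; the closed-form expression used here is a generic textbook-style ZF approximation, SE ≈ log2(1 + ρ(M - K)/K), assumed purely for illustration and not the approximation derived in this paper (which also carries a channel-estimation-error factor):

        import math

        def se_zf(M, K, rho):
            """Illustrative per-user spectral efficiency for ZF precoding (assumed form)."""
            return math.log2(1.0 + rho * (M - K) / K) if M > K else 0.0

        def antennas_needed(K, rho, se_target):
            """Smallest antenna count per group reaching se_target.

            A simple linear search; the paper instead inverts its own closed-form
            SE approximations directly.
            """
            M = K + 1
            while se_zf(M, K, rho) < se_target:
                M += 1
            return M

        print(antennas_needed(K=10, rho=1.0, se_target=4.0))   # -> 160 antennas for this toy model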

  4. Simplified Antenna Group Determination of RS Overhead Reduced Massive MIMO for Wireless Sensor Networks

    PubMed Central

    2017-01-01

    Massive multiple-input multiple-output (MIMO) systems can be applied to support numerous internet of things (IoT) devices using their large number of transmitter (TX) antennas. However, one of the big obstacles to the realization of massive MIMO systems is the overhead of the reference signal (RS), because the number of RSs is proportional to the number of TX antennas and/or related user equipments (UEs). It has already been reported that antenna group-based RS overhead reduction can be very effective for the efficient operation of massive MIMO, but the method of deciding the number of antennas needed in each group remains an open question. In this paper, we propose a simplified scheme for determining the number of antennas needed in each group for RS overhead reduced massive MIMO to support many IoT devices. Supporting many distributed IoT devices is a framework for configuring wireless sensor networks. Our contribution can be divided into two parts. First, we derive simple closed-form approximations of the achievable spectral efficiency (SE) with zero-forcing (ZF) and matched filtering (MF) precoding for RS overhead reduced massive MIMO systems with channel estimation error. The closed-form approximations include a channel error factor that can be adjusted according to the method of channel estimation. Second, based on the closed-form approximations, we present an efficient algorithm for determining the number of antennas needed in each group for the group-based RS overhead reduction scheme. The algorithm depends on the exact inverse functions of the derived closed-form approximations of the SE. It is verified with theoretical analysis and simulation that the proposed algorithm works well, and thus can be used as an important tool for massive MIMO systems to support many distributed IoT devices. PMID:29286339

  5. Ongoing Massive Star Formation in NGC 604

    NASA Astrophysics Data System (ADS)

    Martínez-Galarza, J. R.; Hunter, D.; Groves, B.; Brandl, B.

    2012-12-01

    NGC 604 is the second most massive H II region in the Local Group, thus an important laboratory for massive star formation. Using a combination of observational and analytical tools that include Spitzer spectroscopy, Herschel photometry, Chandra imaging, and Bayesian spectral energy distribution fitting, we investigate the physical conditions in NGC 604 and quantify the amount of massive star formation currently taking place. We derive an average age of 4 ± 1 Myr and a total stellar mass of 1.6 (+1.6/-1.0) × 10⁵ M⊙ for the entire region, in agreement with previous optical studies. Across the region, we find an effect of the X-ray field on both the abundance of aromatic molecules and the [Si II] emission. Within NGC 604, we identify several individual bright infrared sources with diameters of about 15 pc and luminosity-weighted masses between 10³ M⊙ and 10⁴ M⊙. Their spectral properties indicate that some of these sources are embedded clusters in process of formation, which together account for ~8% of the total stellar mass in the NGC 604 system. The variations of the radiation field strength across NGC 604 are consistent with a sequential star formation scenario, with at least two bursts in the last few million years. Our results indicate that massive star formation in NGC 604 is still ongoing, likely triggered by the earlier bursts.

  6. Petermann Glacier, North Greenland: massive calving in 2010 and the past half century

    NASA Astrophysics Data System (ADS)

    Johannessen, O. M.; Babiker, M.; Miles, M. W.

    2011-01-01

    Greenland's marine-terminating glaciers drain large amounts of solid ice through calving of icebergs, as well as melting of floating glacial ice. Petermann Glacier, North Greenland, has the Northern Hemisphere's longest floating ice shelf. A massive (~270 km^2) calving event was observed from satellite sensors in August 2010. To put this event in perspective, we perform a comprehensive retrospective data analysis of Petermann Glacier calving-front variability spanning half a century. We establish that there have been at least four massive (100+ km^2) calving events over the past 50 years: (1) 1959-1961 (~153 km^2), (2) 1991 (~168 km^2), (3) 2001 (~71 km^2) and (4) 2010 (~270 km^2), as well as ~31 km^2 calved in 2008. The terminus position in 2010 has retreated ~15 km beyond the envelope of previous observations. Whether the massive calving in 2010 represents natural episodic variability or a response to global and/or ocean warming in the fjord remains speculative, although this event supports the contention that the ice shelf has recently become vulnerable due to extensive fracturing and channelized basal melting.

  7. Unsupervised classification of variable stars

    NASA Astrophysics Data System (ADS)

    Valenzuela, Lucas; Pichara, Karim

    2018-03-01

    During the past 10 years, a considerable amount of effort has been made to develop algorithms for automatic classification of variable stars. That has been primarily achieved by applying machine learning methods to photometric data sets where objects are represented as light curves. Classifiers require training sets to learn the underlying patterns that allow the separation among classes. Unfortunately, building training sets is an expensive process that demands a great deal of human effort. Every time data come from new surveys, the only available training instances are the ones that have a cross-match with previously labelled objects, consequently generating training sets that are insufficient compared with the large number of unlabelled sources. In this work, we present an algorithm that performs unsupervised classification of variable stars, relying only on the similarity among light curves. We tackle the unsupervised classification problem by proposing an unconventional approach. Instead of trying to match classes of stars with clusters found by a clustering algorithm, we propose a query-based method where astronomers can find groups of variable stars ranked by similarity. We also develop a fast similarity function specific for light curves, based on a novel data structure that allows scaling the search over the entire data set of unlabelled objects. Experiments show that our unsupervised model achieves high accuracy in the classification of different types of variable stars and that the proposed algorithm scales up to massive amounts of light curves.
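
    The paper's specific similarity function and indexing data structure are not given in the abstract, so the sketch below only illustrates the general query-by-similarity idea: represent each light curve by a few simple features, build a nearest-neighbour index over the unlabelled set, and return the most similar curves to a query. The feature choices, index (scikit-learn NearestNeighbors), and toy data are assumptions made for illustration.

```python
# Conceptual sketch, not the paper's method: index unlabelled light curves by a
# tiny feature vector and answer "find curves most similar to this one" queries.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def features(times, mags):
    """Toy hand-picked features (the time sampling is ignored here); real pipelines use far richer descriptors."""
    mags = np.asarray(mags, dtype=float)
    amplitude = mags.max() - mags.min()
    return np.array([mags.mean(), mags.std(), amplitude, np.median(np.abs(np.diff(mags)))])

def build_index(light_curves):
    X = np.vstack([features(t, m) for t, m in light_curves])
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)   # z-score so no feature dominates
    return NearestNeighbors(metric="euclidean").fit(X), X

def query(nn, X, idx, k=5):
    """Return indices of the k unlabelled curves most similar to curve `idx`."""
    _, neighbours = nn.kneighbors(X[idx:idx + 1], n_neighbors=k + 1)
    return [j for j in neighbours[0] if j != idx][:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    curves = [(np.sort(rng.uniform(0, 100, 200)),
               np.sin(rng.uniform(0.05, 0.5) * np.arange(200)) + 0.1 * rng.normal(size=200))
              for _ in range(50)]
    nn, X = build_index(curves)
    print(query(nn, X, idx=0))
```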

  8. Eta Carinae in the Context of the Most Massive Stars

    NASA Technical Reports Server (NTRS)

    Gull, Theodore R.; Damineli, Augusto

    2009-01-01

    Eta Car, with its historical outbursts, visible ejecta and massive, variable winds, continues to challenge both observers and modelers. In just the past five years over 100 papers have been published on this fascinating object. We now know it to be a massive binary system with a 5.54-year period. In January 2009, Eta Car underwent one of its periodic low-states, associated with periastron passage of the two massive stars. This event was monitored by an intensive multi-wavelength campaign ranging from gamma-rays to radio. A large amount of data was collected to test a number of evolving models, including 3-D models of the massive interacting winds. August 2009 was an excellent time for observers and theorists to come together and review the accumulated studies, as has occurred in the four meetings devoted to Eta Car since 1998. Indeed, Eta Car behaved both predictably and unpredictably during this most recent periastron, spurring timely discussions. Coincidentally, WR140 also passed through periastron in early 2009. It, too, is an intensively studied massive interacting binary. Comparison of its properties, as well as the properties of other massive stars, with those of Eta Car is very instructive. These well-known examples of evolved massive binary systems provide many clues as to the fate of the most massive stars. What are the effects of the interacting winds, of individual stellar rotation, and of the circumstellar material on what we see as hypernovae/supernovae? We hope to learn. Topics discussed in this 1.5-day Joint Discussion were: Eta Car: the 2009.0 event, with monitoring campaigns in X-rays, optical, radio and interferometry; WR140 and HD5980: similarities and differences to Eta Car; LBVs and Eta Carinae: what is the relationship?; massive binary systems, wind interactions and 3-D modeling; shapes of the Homunculus & Little Homunculus: what do we learn about mass ejection?; massive stars: the connection to supernovae, hypernovae and gamma-ray bursters; and where do we go from here? (future directions). The Science Organizing Committee: Co-chairs: Augusto Damineli (Brazil) & Theodore R. Gull (USA). Members: D. John Hillier (USA), Gloria Koenigsberger (Mexico), Georges Meynet (Switzerland), Nidia Morrell (Chile), Atsuo T. Okazaki (Japan), Stanley P. Owocki (USA), Andy M.T. Pollock (Spain), Nathan Smith (USA), Christiaan L. Sterken (Belgium), Nicole St Louis (Canada), Karel A. van der Hucht (Netherlands), Roberto Viotti (Italy) and Gerd Weigelt (Germany)

  9. A 15.65-solar-mass black hole in an eclipsing binary in the nearby spiral galaxy M 33.

    PubMed

    Orosz, Jerome A; McClintock, Jeffrey E; Narayan, Ramesh; Bailyn, Charles D; Hartman, Joel D; Macri, Lucas; Liu, Jiefeng; Pietsch, Wolfgang; Remillard, Ronald A; Shporer, Avi; Mazeh, Tsevi

    2007-10-18

    Stellar-mass black holes are found in X-ray-emitting binary systems, where their mass can be determined from the dynamics of their companion stars. Models of stellar evolution have difficulty producing black holes in close binaries with masses more than ten times that of the Sun (>10 M⊙; ref. 4), which is consistent with the fact that the most massive stellar black holes known so far all have masses within one standard deviation of 10 M⊙. Here we report a mass of (15.65 +/- 1.45) M⊙ for the black hole in the recently discovered system M 33 X-7, which is located in the nearby galaxy Messier 33 (M 33) and is the only known black hole that is in an eclipsing binary. To produce such a massive black hole, the progenitor star must have retained much of its outer envelope until after helium fusion in the core was completed. On the other hand, in order for the black hole to be in its present 3.45-day orbit about its (70.0 +/- 6.9) M⊙ companion, there must have been a 'common envelope' phase of evolution in which a significant amount of mass was lost from the system. We find that the common envelope phase could not have occurred in M 33 X-7 unless the amount of mass lost from the progenitor during its evolution was an order of magnitude less than what is usually assumed in evolutionary models of massive stars.

  10. GPUmotif: An Ultra-Fast and Energy-Efficient Motif Analysis Program Using Graphics Processing Units

    PubMed Central

    Zandevakili, Pooya; Hu, Ming; Qin, Zhaohui

    2012-01-01

    Computational detection of TF binding patterns has become an indispensable tool in functional genomics research. With the rapid advance of new sequencing technologies, large amounts of protein-DNA interaction data have been produced. Analyzing this data can provide substantial insight into the mechanisms of transcriptional regulation. However, the massive amount of sequence data presents daunting challenges. In our previous work, we have developed a novel algorithm called Hybrid Motif Sampler (HMS) that enables more scalable and accurate motif analysis. Despite much improvement, HMS is still time-consuming due to the requirement to calculate matching probabilities position-by-position. Using the NVIDIA CUDA toolkit, we developed a graphics processing unit (GPU)-accelerated motif analysis program named GPUmotif. We proposed a "fragmentation" technique to hide data transfer time between memories. Performance comparison studies showed that commonly-used model-based motif scan and de novo motif finding procedures such as HMS can be dramatically accelerated when running GPUmotif on NVIDIA graphics cards. As a result, energy consumption can also be greatly reduced when running motif analysis using GPUmotif. The GPUmotif program is freely available at http://sourceforge.net/projects/gpumotif/ PMID:22662128
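
    The "fragmentation" technique mentioned above is, at heart, a pipelining idea: split the input into chunks so that moving the next chunk overlaps with computing on the current one. The sketch below is a CPU-side analogue of that general pattern using Python threads and a bounded queue as a double buffer; it is not GPUmotif's CUDA implementation, and the loading/processing functions are placeholders.

```python
# Not GPUmotif's code: a CPU-side sketch of overlapping "transfer" and compute
# via double buffering, the general idea behind fragmentation.
import threading, queue, time

def load_chunk(i):
    time.sleep(0.05)           # stand-in for a host-to-device memory transfer
    return list(range(i * 1000, (i + 1) * 1000))

def process_chunk(chunk):
    time.sleep(0.05)           # stand-in for the motif-scoring kernel
    return sum(chunk)

def pipelined(n_chunks):
    q = queue.Queue(maxsize=2)                 # double buffer: at most 2 chunks in flight

    def producer():
        for i in range(n_chunks):
            q.put(load_chunk(i))
        q.put(None)                            # sentinel: no more chunks

    threading.Thread(target=producer, daemon=True).start()
    total = 0
    while (chunk := q.get()) is not None:      # compute overlaps with loading the next chunk
        total += process_chunk(chunk)
    return total

if __name__ == "__main__":
    t0 = time.time()
    pipelined(20)
    print("pipelined:", round(time.time() - t0, 2), "s")   # ~half the sequential time
```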

  11. Computing Platforms for Big Biological Data Analytics: Perspectives and Challenges.

    PubMed

    Yin, Zekun; Lan, Haidong; Tan, Guangming; Lu, Mian; Vasilakos, Athanasios V; Liu, Weiguo

    2017-01-01

    The last decade has witnessed an explosion in the amount of available biological sequence data, due to the rapid progress of high-throughput sequencing projects. However, the amount of biological data is becoming so great that traditional data analysis platforms and methods can no longer meet the need to rapidly perform data analysis tasks in life sciences. As a result, both biologists and computer scientists are facing the challenge of gaining a profound insight into the deepest biological functions from big biological data. This in turn requires massive computational resources. Therefore, high performance computing (HPC) platforms are highly needed, as well as efficient and scalable algorithms that can take advantage of these platforms. In this paper, we survey the state-of-the-art HPC platforms for big biological data analytics. We first list the characteristics of big biological data and popular computing platforms. Then we provide a taxonomy of different biological data analysis applications and a survey of the way they have been mapped onto various computing platforms. After that, we present a case study to compare the efficiency of different computing platforms for handling the classical biological sequence alignment problem. Finally, we discuss the open issues in big biological data analytics.

  12. Andromeda's SMBH Projected Accretion Rate

    NASA Astrophysics Data System (ADS)

    Wilson, John

    2014-03-01

    A formula for calculating the half-life of galaxy clusters is proposed. A galactic half-life is the estimated time by which the most massive supermassive black hole (SMBH) in the galaxy cluster will have accreted one half of the mass in the cluster. The calculation is based on projecting forward the exponentially decreasing accretion rate the SMBH had in its first 13 billion years. The calculated half-life for the Andromeda SMBH is approximately 1.4327 × 10^14 years from the Big Bang. Several proposals have suggested that black holes could be significant factors in the formation of new universes. Part of the verification or falsification of this hypothesis could be done by an N-body simulation. These simulations require an enormous amount of computer power and time. Some plausible projection of the growth of the supermassive black hole is needed to prepare an N-body simulation budget proposal. For now, this method provides an estimate of the growth rate of the Andromeda SMBH and of the eventual fate of most of the galaxy cluster's mass, which is either accreted by the SMBH, ejected from the cluster, or lost in the form of energy.
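
    The abstract does not give the actual formula, so the worked example below only shows the generic calculation it describes: if the accretion rate decays exponentially, integrate it forward and solve for the time at which the accreted mass reaches a target (e.g. half the cluster mass). Every number and symbol here (Mdot0, tau, t0, target) is a placeholder, not a value from the paper.

```python
# Worked toy example (all values hypothetical): with Mdot(t) = Mdot0 * exp(-t / tau),
# the mass accreted between t0 and t is
#   dM(t) = Mdot0 * tau * (exp(-t0 / tau) - exp(-t / tau)),
# so the time at which dM reaches a target mass is
#   t_half = -tau * ln(exp(-t0 / tau) - target / (Mdot0 * tau)).
import math

def half_life_time(mdot0, tau, t0, target):
    ceiling = mdot0 * tau * math.exp(-t0 / tau)   # total mass that can ever be accreted after t0
    if target >= ceiling:
        return math.inf                           # the target is never reached
    return -tau * math.log(math.exp(-t0 / tau) - target / (mdot0 * tau))

if __name__ == "__main__":
    # Placeholder units: solar masses per year and years; none of these come from the paper.
    print(half_life_time(mdot0=1.0, tau=5.0e12, t0=1.3e10, target=1.0e12))
```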

  13. GPUmotif: an ultra-fast and energy-efficient motif analysis program using graphics processing units.

    PubMed

    Zandevakili, Pooya; Hu, Ming; Qin, Zhaohui

    2012-01-01

    Computational detection of TF binding patterns has become an indispensable tool in functional genomics research. With the rapid advance of new sequencing technologies, large amounts of protein-DNA interaction data have been produced. Analyzing this data can provide substantial insight into the mechanisms of transcriptional regulation. However, the massive amount of sequence data presents daunting challenges. In our previous work, we have developed a novel algorithm called Hybrid Motif Sampler (HMS) that enables more scalable and accurate motif analysis. Despite much improvement, HMS is still time-consuming due to the requirement to calculate matching probabilities position-by-position. Using the NVIDIA CUDA toolkit, we developed a graphics processing unit (GPU)-accelerated motif analysis program named GPUmotif. We proposed a "fragmentation" technique to hide data transfer time between memories. Performance comparison studies showed that commonly-used model-based motif scan and de novo motif finding procedures such as HMS can be dramatically accelerated when running GPUmotif on NVIDIA graphics cards. As a result, energy consumption can also be greatly reduced when running motif analysis using GPUmotif. The GPUmotif program is freely available at http://sourceforge.net/projects/gpumotif/

  14. A novel explosive process is required for the gamma-ray burst GRB 060614.

    PubMed

    Gal-Yam, A; Fox, D B; Price, P A; Ofek, E O; Davis, M R; Leonard, D C; Soderberg, A M; Schmidt, B P; Lewis, K M; Peterson, B A; Kulkarni, S R; Berger, E; Cenko, S B; Sari, R; Sharon, K; Frail, D; Moon, D-S; Brown, P J; Cucchiara, A; Harrison, F; Piran, T; Persson, S E; McCarthy, P J; Penprase, B E; Chevalier, R A; MacFadyen, A I

    2006-12-21

    Over the past decade, our physical understanding of gamma-ray bursts (GRBs) has progressed rapidly, thanks to the discovery and observation of their long-lived afterglow emission. Long-duration (> 2 s) GRBs are associated with the explosive deaths of massive stars ('collapsars', ref. 1), which produce accompanying supernovae; the short-duration (≤ 2 s) GRBs have a different origin, which has been argued to be the merger of two compact objects. Here we report optical observations of GRB 060614 (duration approximately 100 s, ref. 10) that rule out the presence of an associated supernova. This would seem to require a new explosive process: either a massive collapsar that powers a GRB without any associated supernova, or a new type of 'engine', as long-lived as the collapsar but without a massive star. We also show that the properties of the host galaxy (redshift z = 0.125) distinguish it from other long-duration GRB hosts and suggest that an entirely new type of GRB progenitor may be required.

  15. Topologically massive magnetic monopoles

    NASA Astrophysics Data System (ADS)

    Aliev, A. N.; Nutku, Y.; Saygili, K.

    2000-10-01

    We show that in the Maxwell-Chern-Simons theory of topologically massive electrodynamics the Dirac string of a monopole becomes a cone in anti-de Sitter space with the opening angle of the cone determined by the topological mass, which in turn is related to the square root of the cosmological constant. This proves to be an example of a physical system, a priori completely unrelated to gravity, which nevertheless requires curved spacetime for its very existence. We extend this result to topologically massive gravity coupled to topologically massive electrodynamics within the framework of the theory of Deser, Jackiw and Templeton. The two-component spinor formalism, which is a Newman-Penrose type approach for three dimensions, is extended to include both the electrodynamical and gravitational topologically massive field equations. Using this formalism exact solutions of the coupled Deser-Jackiw-Templeton and Maxwell-Chern-Simons field equations for a topologically massive monopole are presented. These are homogeneous spaces with conical deficit. Pure Einstein gravity coupled to the Maxwell-Chern-Simons field does not admit such a monopole solution.

  16. Before Industrialization: A Rural Social System Base Study.

    ERIC Educational Resources Information Center

    Summers, Gene F.; And Others

    A recent trend in American economic life has been the location of industrial complexes in traditionally rural areas. When this occurs, there are often accompanying rapid and sometimes traumatic changes in the rural community. These changes, in part, result from investment of new and massive amounts of capital, new employment opportunities,…

  17. Overcoming Challenges of the Technological Age by Teaching Information Literacy Skills

    ERIC Educational Resources Information Center

    Burke, Melynda

    2010-01-01

    The technological age has forever altered every aspect of life and work. Technology has changed how people locate and view information. However, the transition from print to electronic formats has created numerous challenges for individuals to overcome. These challenges include coping with the massive amounts of information bombarding people and…

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mercier, C.W.

    The Network File System (NFS) will be the user interface to a High-Performance Data System (HPDS) being developed at Los Alamos National Laboratory (LANL). HPDS will manage high-capacity, high-performance storage systems connected directly to a high-speed network from distributed workstations. NFS will be modified to maximize performance and to manage massive amounts of data. 6 refs., 3 figs.

  19. Surveillance in the Information Age: Text Quantification, Anomaly Detection, and Empirical Evaluation

    ERIC Educational Resources Information Center

    Lu, Hsin-Min

    2010-01-01

    Deep penetration of personal computers, data communication networks, and the Internet has created a massive platform for data collection, dissemination, storage, and retrieval. Large amounts of textual data are now available at a very low cost. Valuable information, such as consumer preferences, new product developments, trends, and opportunities,…

  20. Alginate-based polysaccharide beads for cationic contaminant sorption from water

    Treesearch

    Mei Li; Thomas Elder; Gisela Buschle-Diller

    2016-01-01

    Massive amounts of agricultural and industrial water worldwide are polluted by different types of contaminants that harm the environment and impact human health. Removing the contaminants from effluents by adsorbent materials made from abundant, inexpensive polysaccharides is a feasible approach to deal with this problem. In this research, alginate beads combined with...

  1. In situ BioTransformation Of Mercury-Contaminated Groundwater In Kazakhstan Utilizing Native Bacteria

    EPA Science Inventory

    The Northern outskirts of Pavlodar were contaminated with mercury as a result of activity at the former PO "Khimprom" chemical plant. The plant produced chlorine and alkali from the 1970s into the 1990s using the electrolytic amalgam method entailing the use of massive amounts o...

  2. Pilot Scale In Situ BioTransformation Of Mercury-Contaminated Groundwater In Kazakhstan Utilizing Native Bacteria

    EPA Science Inventory

    The Northern outskirts of Pavlodar were contaminated with mercury as a result of activity at the former PO "Khimprom" chemical plant. The plant produced chlorine and alkali from the 1970s into the 1990s using the electrolytic amalgam method entailing the use of massive amounts o...

  3. In Situ BioTransformation of Mercury-Contaminated Groundwater In Kazakhstan Utilizing Native Bacteria (Presentation)

    EPA Science Inventory

    The Northern outskirts of Pavlodar were contaminated with mercury as a result of activity at the former PO "Khimprom" chemical plant. The plant produced chlorine and alkali from the 1970's into the 1990's using the electrolytic amalgam method entailing the use of massive amounts...

  4. Preliminary Validation of Composite Material Constitutive Characterization

    Treesearch

    John G. Michopoulos; Athanasios lliopoulos; John C. Hermanson; Adrian C. Orifici; Rodney S. Thomson

    2012-01-01

    This paper describes the preliminary results of an effort to validate a methodology developed for composite material constitutive characterization. This methodology involves using massive amounts of data produced from multiaxially tested coupons via a 6-DoF robotic system called NRL66.3 developed at the Naval Research Laboratory. The testing is followed by...

  5. Banning the Bottle

    ERIC Educational Resources Information Center

    Palliser, Janna

    2010-01-01

    Bottled water is ubiquitous, taken for granted, and seemingly benign. Americans are consuming bottled water in massive amounts and spending a lot of money: In 2007, Americans spent $11.7 billion on 8.8 billion gallons of bottled water (Gashler 2008). That same year, two million plastic water bottles were used in the United States every five…

  6. SQ 10 R.

    ERIC Educational Resources Information Center

    Shaughnessy, Michael F.

    While many students have found SQ3R (Survey, Question, Read, Recite, Review) and PQ4R (Preview, Question, Read, Reflect, Recite, Review) systems to be helpful, developmental/remedial students may need more assistance than the average freshman. Students who need more help to deal with the massive amounts of reading that need to be done in…

  7. Listen, Listen, Listen and Listen: Building a Comprehension Corpus and Making It Comprehensible

    ERIC Educational Resources Information Center

    Mordaunt, Owen G.; Olson, Daniel W.

    2010-01-01

    Listening comprehension input is necessary for language learning and acculturation. One approach to developing listening comprehension skills is through exposure to massive amounts of naturally occurring spoken language input. But exposure to this input is not enough; learners also need to make the comprehension corpus meaningful to their learning…

  8. Exploring the Integration of Data Mining and Data Visualization

    ERIC Educational Resources Information Center

    Zhang, Yi

    2011-01-01

    Due to the rapid advances in computing and sensing technologies, enormous amounts of data are being generated everyday in various applications. The integration of data mining and data visualization has been widely used to analyze these massive and complex data sets to discover hidden patterns. For both data mining and visualization to be…

  9. V for Voice: Strategies for Bolstering Communication Skills in Statistics

    ERIC Educational Resources Information Center

    Khachatryan, Davit; Karst, Nathaniel

    2017-01-01

    With the ease and automation of data collection and plummeting storage costs, organizations are faced with massive amounts of data that present two pressing challenges: technical analysis of the data themselves and communication of the analytics process and its products. Although a plethora of academic and practitioner literature has focused on…

  10. Especial Skills: Their Emergence with Massive Amounts of Practice

    ERIC Educational Resources Information Center

    Keetch, Katherine M.; Schmidt, Richard A.; Lee, Timothy D.; Young, Douglas E.

    2005-01-01

    Differing viewpoints concerning the specificity and generality of motor skill representations in memory were compared by contrasting versions of a skill having either extensive or minimal specific practice. In Experiments 1 and 2, skilled basketball players more accurately performed set shots at the foul line than would be predicted on the basis…

  11. Impact of Improved Combat Casualty Care on Combat Wounded Undergoing Exploratory Laparotomy and Massive Transfusion

    DTIC Science & Technology

    2011-07-01

    given to evidence-based medicine in the 20th century has not only allowed improved dissemination of information to civilian providers but has also... limiting the amount of crystalloid used to resuscitate patients by 61%. This is further confirmation that evidence-based medicine changes in practice are at

  12. Garnets within geode-like serpentinite veins: Implications for element transport, hydrogen production and life-supporting environment formation

    NASA Astrophysics Data System (ADS)

    Plümper, Oliver; Beinlich, Andreas; Bach, Wolfgang; Janots, Emilie; Austrheim, Håkon

    2014-09-01

    Geochemical micro-environments within serpentinizing systems can abiotically synthesize hydrocarbons and provide the ingredients required to support life. Observations of organic matter in microgeode-like hydrogarnets found in Mid-Atlantic Ridge serpentinites suggest these garnets possibly represent unique nests for the colonization of microbial ecosystems within the oceanic lithosphere. However, little is known about the mineralogical and geochemical processes that allow such unique environments to form. Here we present work on outcrop-scale vein networks from an ultramafic massif in Norway that contain massive amounts of spherulitic garnets (andradite), which help to constrain such processes. Vein andradite spherulites are associated with polyhedral serpentine, brucite, Ni-Fe alloy (awaruite), and magnetite, indicative of low-temperature (<200 °C) alteration under low fO2 and low aSiO2,aq geochemical conditions. Together with the outcrop- and micro-scale analysis, geochemical reaction path modeling shows that there was limited mass transport and fluid flow over a large scale. Once opened, the veins remained isolated (closed system), forming non-equilibrium microenvironments that allowed, upon a threshold supersaturation, the rapid crystallization (seconds to weeks) of spherulitic andradite. The presence of polyhedral serpentine spheres indicates that veins were initially filled with a gel-like protoserpentine phase. In addition, massive Fe oxidation associated with andradite formation could have generated as much as 600 mmol H2,aq per 100 cm^3 of vein. Although no carbonaceous matter was detected, the vein networks fulfill the reported geochemical criteria required to generate abiogenic hydrocarbons and support microbial communities. Thus, systems similar to those investigated here are of prime interest when searching for life-supporting environments within the deep subsurface.

  13. A Massively Parallel Bayesian Approach to Planetary Protection Trajectory Analysis and Design

    NASA Technical Reports Server (NTRS)

    Wallace, Mark S.

    2015-01-01

    The NASA Planetary Protection Office has levied a requirement that the upper stage of future planetary launches have a less than 10^-4 chance of impacting Mars within 50 years after launch. A brute-force approach requires a decade of computer time to demonstrate compliance. By using a Bayesian approach and taking advantage of the demonstrated reliability of the upper stage, the required number of fifty-year propagations can be massively reduced. By spreading the remaining embarrassingly parallel Monte Carlo simulations across multiple computers, compliance can be demonstrated in a reasonable time frame. The method used is described here.
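
    The actual trajectory propagation and the way stage reliability enters the Bayesian analysis are specific to the cited work and not reproduced here. The sketch below only illustrates the general pattern the abstract describes: run embarrassingly parallel Monte Carlo trials, then use a simple Beta-Binomial posterior to ask whether the impact probability is credibly below the 10^-4 requirement. The `one_trial` function, prior, and sample sizes are hypothetical placeholders.

```python
# Pattern sketch only: parallel Monte Carlo trials + a Beta-Binomial posterior
# check that P(impact) is credibly below a threshold.
import random
from multiprocessing import Pool
from scipy.stats import beta

def one_trial(seed):
    """Hypothetical stand-in for a 50-year trajectory propagation; True means Mars impact."""
    rng = random.Random(seed)
    return rng.random() < 2e-5          # placeholder impact rate, not a real model

def posterior_prob_below(n_trials, n_impacts, threshold=1e-4, a=1.0, b=1.0):
    """P(impact probability < threshold | data), with a Beta(a, b) prior."""
    return beta.cdf(threshold, a + n_impacts, b + n_trials - n_impacts)

if __name__ == "__main__":
    n = 200_000
    with Pool() as pool:                               # embarrassingly parallel trials
        impacts = sum(pool.map(one_trial, range(n), chunksize=1000))
    print("impacts:", impacts, "P(p < 1e-4):", posterior_prob_below(n, impacts))
```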

  14. Massive Spontaneous Retroperitoneal Hemorrhage Induced by Enoxaparin and Subsequent Abdominal Compartment Syndrome Requiring Surgical Decompression: A Case Report and Literature Review

    DTIC Science & Technology

    2011-08-01

    Abbreviations: DVT ppx - deep vein thrombosis prophylaxis; ASA - aspirin; RF - renal failure; W - warfarin; ACS... A. Surgical management of enoxaparin- and/or warfarin-induced massive retroperitoneal bleeding: report of a case and review of the literature

  15. RAMA: A file system for massively parallel computers

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1993-01-01

    This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.
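
    The abstract does not spell out how RAMA achieves placement with little inter-node synchronization, so the sketch below shows one common way such a design can work: place each file block deterministically by hashing its (file id, block number), so every node can compute a block's home node and disk without consulting a central metadata server. This is an illustration of that general idea, not necessarily RAMA's exact scheme; the constants and function names are assumptions.

```python
# Illustration of coordination-free block placement: any node can locate a block
# by recomputing the same hash, so no metadata server or locking is needed.
import hashlib

N_NODES, DISKS_PER_NODE = 64, 4            # hypothetical cluster geometry

def place_block(file_id: int, block_no: int):
    digest = hashlib.sha256(f"{file_id}:{block_no}".encode()).digest()
    key = int.from_bytes(digest[:8], "big")
    node = key % N_NODES
    disk = (key // N_NODES) % DISKS_PER_NODE
    return node, disk

if __name__ == "__main__":
    # Consecutive blocks of the same file scatter across nodes, spreading I/O load.
    print([place_block(file_id=42, block_no=b) for b in range(6)])
```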

  16. Pneumatosis cystoides intestinalis associated with massive free air mimicking perforated diffuse peritonitis.

    PubMed

    Sakurai, Yoichi; Hikichi, Masahiro; Isogaki, Jun; Furuta, Shinpei; Sunagawa, Risaburo; Inaba, Kazuki; Komori, Yoshiyuki; Uyama, Ichiro

    2008-11-21

    While pneumatosis cystoides intestinalis (PCI) is a rare disease entity associated with a wide variety of gastrointestinal and non-gastrointestinal disorders, PCI associated with massive intra- and retroperitoneal free air is extremely uncommon, and is difficult to diagnose differentially from perforated peritonitis. We present two cases of PCI associated with massive peritoneal free air and/or retroperitoneal air that mimicked perforated peritonitis. These cases highlight the clinical importance of PCI that mimics perforated peritonitis, which requires emergency surgery. Preoperative imaging modalities and diagnostic laparoscopy are useful to make an accurate diagnosis.

  17. Infrared observations of the dark matter lens candidate Q2345+007

    NASA Technical Reports Server (NTRS)

    Mcleod, Brian; Rieke, Marcia; Weedman, Daniel

    1994-01-01

    Deep K-band observations are presented of the double-image quasar Q2345+007. This has the largest separation (7.1 arcsec) of any quasar image pair considered as gravitationally lensed, so the required lens is massive (10^13 solar masses). No lens has been detected in previous deep images at visible wavelengths, and we find no lens to a limiting K magnitude of 20.0 in the infrared image. This constrains any lens to be much less luminous than brightest cluster galaxies, while the lens must be much more massive than such galaxies to produce the observed separation. Because spectral data indicate exceptional intrinsic similarity in the quasar image components, this pair remains the most intriguing example of an observed configuration requiring the presence of massive, concentrated dark matter acting as a gravitational lens.

  18. Properties of Massive Stars in Primitive Galaxies

    NASA Technical Reports Server (NTRS)

    Heap, Sara

    2012-01-01

    According to R. Dave, the phases of galaxy formation are distinguished by their halo mass and governing feedback mechanism. Galaxies in the birth phase (our "primitive galaxies") have a low halo mass (M < 10^9 Msun), and star formation is affected by the photoionizing radiation of massive stars. In contrast, galaxies in the growth phase (e.g., Lyman Break galaxies) are more massive (M = 10^9-10^12 Msun); star formation is fueled by cold accretion but modulated by strong outflows from massive stars. I Zw 18 is a local blue compact dwarf galaxy that meets the requirements for a birth-phase galaxy: halo mass < 10^9 Msun, strong photoionizing radiation, no galactic outflow, and very low metallicity, log(O/H) = 7.2. We will describe the properties of massive stars in I Zw 18 based on analysis of ultraviolet spectra obtained with HST.

  19. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    NASA Astrophysics Data System (ADS)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5,000 deg^2 every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ~175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real-time, and design a distributed shared-nothing database to manage the massive amount of archived data which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS of MonetDB, taking advantage of MonetDB’s high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantic) and inside (RANGE-JOIN implementation), as well as in its strategy of building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture is able to achieve both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC in real-time data processing and management of massive data.
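
    GWAC's source association runs inside MonetDB (via the RANGE-JOIN implementation mentioned above); the sketch below is only a stand-alone Python analogue of that step, cross-matching newly extracted detections to catalogue sources within a small radius using a k-d tree and a flat-sky approximation. The matching radius, data layout, and function names are illustrative assumptions, not the pipeline's actual code.

```python
# Stand-alone analogue of the "source association" step: match new detections to
# the nearest catalogue source within a radius. Flat-sky approximation; adequate
# for a small field away from the poles.
import numpy as np
from scipy.spatial import cKDTree

def associate(cat_ra, cat_dec, new_ra, new_dec, radius_arcsec=3.0):
    """Return, for each new detection, the index of the matching catalogue source, or -1."""
    dec0 = np.deg2rad(np.median(cat_dec))                       # scale RA by cos(dec)
    cat_xy = np.column_stack([np.asarray(cat_ra) * np.cos(dec0), cat_dec])
    new_xy = np.column_stack([np.asarray(new_ra) * np.cos(dec0), new_dec])
    tree = cKDTree(cat_xy)
    dist, idx = tree.query(new_xy, k=1, distance_upper_bound=radius_arcsec / 3600.0)
    return np.where(np.isfinite(dist), idx, -1)                 # inf distance means no match

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cat_ra, cat_dec = rng.uniform(10, 12, 1000), rng.uniform(30, 32, 1000)
    new_ra = cat_ra[:5] + rng.normal(0, 0.3 / 3600, 5)          # jittered re-detections
    new_dec = cat_dec[:5] + rng.normal(0, 0.3 / 3600, 5)
    print(associate(cat_ra, cat_dec, new_ra, new_dec))
```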

  20. [Personalized medicine: an elusive concept, diversified practices].

    PubMed

    Bateman, Simone

    2014-11-01

    This article proposes a brief inquiry into the field of scientific and medical practices currently referred to as "personalized medicine". Our inquiry identifies four recurring themes in the literature: health care that is tailored to the individual patient, that is enabled by emerging technologies, in which genetics and genomics occupy a prominent place, and which requires the collection of a massive amount of data. Personalized medicine, thus characterized, turns out to be less interested in the uniqueness of each patient's case than in the differences among patients within the same category. The aim of personalized medicine, thus described, is to obtain, with the help of cutting edge technology, more objective biological data on patients, in an attempt to improve the tools it has at its disposal to establish diagnoses, make therapeutic decisions, and provide more effective preventive measures. © 2014 médecine/sciences – Inserm.

  1. Automatic Detection of Seizures with Applications

    NASA Technical Reports Server (NTRS)

    Olsen, Dale E.; Harris, John C.; Cutchis, Protagoras N.; Cristion, John A.; Lesser, Ronald P.; Webber, W. Robert S.

    1993-01-01

    There are an estimated two million people with epilepsy in the United States. Many of these people do not respond to anti-epileptic drug therapy. Two devices can be developed to assist in the treatment of epilepsy. The first is a microcomputer-based system designed to process massive amounts of electroencephalogram (EEG) data collected during long-term monitoring of patients for the purpose of diagnosing seizures, assessing the effectiveness of medical therapy, or selecting patients for epilepsy surgery. Such a device would select and display important EEG events. Currently many such events are missed. A second device could be implanted and would detect seizures and initiate therapy. Both of these devices require a reliable seizure detection algorithm. A new algorithm is described. It is believed to represent an improvement over existing seizure detection algorithms because better signal features were selected and better standardization methods were used.

  2. Dynamic file-access characteristics of a production parallel scientific workload

    NASA Technical Reports Server (NTRS)

    Kotz, David; Nieuwejaar, Nils

    1994-01-01

    Multiprocessors have permitted astounding increases in computational performance, but many cannot meet the intense I/O requirements of some scientific applications. An important component of any solution to this I/O bottleneck is a parallel file system that can provide high-bandwidth access to tremendous amounts of data in parallel to hundreds or thousands of processors. Most successful systems are based on a solid understanding of the expected workload, but thus far there have been no comprehensive workload characterizations of multiprocessor file systems. This paper presents the results of a three week tracing study in which all file-related activity on a massively parallel computer was recorded. Our instrumentation differs from previous efforts in that it collects information about every I/O request and about the mix of jobs running in a production environment. We also present the results of a trace-driven caching simulation and recommendations for designers of multiprocessor file systems.

  3. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates.

    PubMed

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
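
    The cited approach additionally pools influence curves across cross-validation folds; the sketch below shows only the per-fold building block, an influence-function (DeLong-style) variance estimate for the empirical AUC, which avoids bootstrap resampling entirely. Function names and the toy data are assumptions for illustration.

```python
# Per-fold building block only: influence-function / DeLong-style variance of the
# empirical AUC, so no bootstrap resampling is needed.
import numpy as np

def auc_and_variance(scores, labels):
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    m, n = len(pos), len(neg)
    # Pairwise comparison kernel: 1 if pos > neg, 0.5 on ties, 0 otherwise.
    psi = (pos[:, None] > neg[None, :]) + 0.5 * (pos[:, None] == neg[None, :])
    auc = psi.mean()
    v10 = psi.mean(axis=1)          # "placement" of each positive among the negatives
    v01 = psi.mean(axis=0)          # placement of each negative among the positives
    var = v10.var(ddof=1) / m + v01.var(ddof=1) / n
    return auc, var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, 2000)
    scores = labels + rng.normal(0, 1.5, 2000)       # noisy but informative scores
    auc, var = auc_and_variance(scores, labels)
    print(round(auc, 3), "95% CI halfwidth:", round(1.96 * var ** 0.5, 3))
```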

  4. On the convergence of nanotechnology and Big Data analysis for computer-aided diagnosis.

    PubMed

    Rodrigues, Jose F; Paulovich, Fernando V; de Oliveira, Maria Cf; de Oliveira, Osvaldo N

    2016-04-01

    An overview is provided of the challenges involved in building computer-aided diagnosis systems capable of precise medical diagnostics based on integration and interpretation of data from different sources and formats. The availability of massive amounts of data and computational methods associated with the Big Data paradigm has brought hope that such systems may soon be available in routine clinical practices, which is not the case today. We focus on visual and machine learning analysis of medical data acquired with varied nanotech-based techniques and on methods for Big Data infrastructure. Because diagnosis is essentially a classification task, we address the machine learning techniques with supervised and unsupervised classification, making a critical assessment of the progress already made in the medical field and the prospects for the near future. We also advocate that successful computer-aided diagnosis requires a merge of methods and concepts from nanotechnology and Big Data analysis.

  5. User needs on Nursing Net (The Kango Net) - analyzing the total consultation page - http://www.kango-net.jp/en/index.html.

    PubMed

    Sakyo, Yumi; Nakayama, Kazuhiro; Komatsu, Hiroko; Setoyama, Yoko

    2009-01-01

    People are required to take in and comprehend a massive amount of health information and in turn make some serious decisions based on that information. We, at St. Luke's College of Nursing, provide a rich selection of high-quality health information, and have set up Nursing Net (The Kango Net:Kango is Nursing in Japanese). This website provides information for consumers as well as people interested in the nursing profession. In an attempt to identify the needs of users, this study conducted an analysis of the contents on the total consultation page. Many readers voted that responses to nursing techniques and symptoms questions proved instrumental in their queries. Based on the results of this study, we can conclude that this is an easy-to-access, convenient site for getting health information about physical symptoms and nursing techniques.

  6. Space-Time Data fusion for Remote Sensing Applications

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Nguyen, H.; Cressie, N.

    2011-01-01

    NASA has been collecting massive amounts of remote sensing data about Earth's systems for more than a decade. Missions are selected to be complementary in quantities measured, retrieval techniques, and sampling characteristics, so these datasets are highly synergistic. To fully exploit this, a rigorous methodology for combining data with heterogeneous sampling characteristics is required. For scientific purposes, the methodology must also provide quantitative measures of uncertainty that propagate input-data uncertainty appropriately. We view this as a statistical inference problem. The true but not directly observed quantities form a vector-valued field continuous in space and time. Our goal is to infer those true values or some function of them, and to provide uncertainty quantification for those inferences. We use a spatio-temporal statistical model that relates the unobserved quantities of interest at point level to the spatially aggregated, observed data. We describe and illustrate our method using CO2 data from two NASA data sets.

  7. The Multi-Step CADIS method for shutdown dose rate calculations and uncertainty propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Ahmad M.; Peplow, Douglas E.; Grove, Robert E.

    2015-12-01

    Shutdown dose rate (SDDR) analysis requires (a) a neutron transport calculation to estimate neutron flux fields, (b) an activation calculation to compute radionuclide inventories and associated photon sources, and (c) a photon transport calculation to estimate the final SDDR. In some applications, accurate full-scale Monte Carlo (MC) SDDR simulations are needed for very large systems with massive amounts of shielding materials. However, these simulations are impractical because calculation of space- and energy-dependent neutron fluxes throughout the structural materials is needed to estimate the distribution of radioisotopes causing the SDDR. Biasing the neutron MC calculation using an importance function is not simple because it is difficult to explicitly express the response function, which depends on subsequent computational steps. Furthermore, typical SDDR calculations do not consider how uncertainties in the MC neutron calculation impact SDDR uncertainty, even though MC neutron calculation uncertainties usually dominate SDDR uncertainty.

  8. A programmable computational image sensor for high-speed vision

    NASA Astrophysics Data System (ADS)

    Yang, Jie; Shi, Cong; Long, Xitian; Wu, Nanjian

    2013-08-01

    In this paper we present a programmable computational image sensor for high-speed vision. This computational image sensor contains four main blocks: an image pixel array, a massively parallel processing element (PE) array, a row processor (RP) array, and a RISC core. The pixel-parallel PE array is responsible for transferring, storing, and processing raw image data in a SIMD fashion with its own programming language. The RPs are a one-dimensional array of simplified RISC cores that can carry out complex arithmetic and logic operations. The PE array and RP array can complete a great amount of computation within a few instruction cycles and therefore satisfy the requirements of low- and middle-level high-speed image processing. The RISC core controls the whole system operation and performs some high-level image processing algorithms. We utilize a simplified AHB bus as the system bus to connect our major components. A programming language and corresponding tool chain for this computational image sensor are also developed.

  9. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates

    PubMed Central

    Petersen, Maya; van der Laan, Mark

    2015-01-01

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737

  10. Detached Eddy Simulation Results for a Space Launch System Configuration at Liftoff Conditions and Comparison with Experiment

    NASA Technical Reports Server (NTRS)

    Krist, Steven E.; Ghaffari, Farhad

    2015-01-01

    Computational simulations for a Space Launch System configuration at liftoff conditions for incidence angles from 0 to 90 degrees were conducted in order to generate integrated force and moment data and longitudinal lineloads. While the integrated force and moment coefficients can be obtained from wind tunnel testing, computational analyses are indispensable in obtaining the extensive amount of surface information required to generate proper lineloads. However, beyond an incidence angle of about 15 degrees, the effects of massive flow separation on the leeward pressure field is not well captured with state of the art Reynolds Averaged Navier-Stokes methods, necessitating the employment of a Detached Eddy Simulation method. Results from these simulations are compared to the liftoff force and moment database and surface pressure data derived from a test in the NASA Langley 14- by 22-Foot Subsonic Wind Tunnel.

  11. Big Data Knowledge in Global Health Education.

    PubMed

    Olayinka, Olaniyi; Kekeh, Michele; Sheth-Chandra, Manasi; Akpinar-Elci, Muge

    The ability to synthesize and analyze massive amounts of data is critical to the success of organizations, including those that involve global health. As countries become highly interconnected, increasing the risk for pandemics and outbreaks, the demand for big data is likely to increase. This requires a global health workforce that is trained in the effective use of big data. To assess implementation of big data training in global health, we conducted a pilot survey of members of the Consortium of Universities of Global Health. More than half the respondents did not have a big data training program at their institution. Additionally, the majority agreed that big data training programs will improve global health deliverables, among other favorable outcomes. Given the observed gap and benefits, global health educators may consider investing in big data training for students seeking a career in global health. Copyright © 2017 Icahn School of Medicine at Mount Sinai. Published by Elsevier Inc. All rights reserved.

  12. BioMAJ: a flexible framework for databanks synchronization and processing.

    PubMed

    Filangi, Olivier; Beausse, Yoann; Assi, Anthony; Legrand, Ludovic; Larré, Jean-Marc; Martin, Véronique; Collin, Olivier; Caron, Christophe; Leroy, Hugues; Allouche, David

    2008-08-15

    Large- and medium-scale computational molecular biology projects require accurate bioinformatics software and numerous heterogeneous biological databanks, which are distributed around the world. BioMAJ provides a flexible, robust, fully automated environment for managing such massive amounts of data. The JAVA application enables automation of the data update cycle process and supervision of the locally mirrored data repository. We have developed workflows that handle some of the most commonly used bioinformatics databases. A set of scripts is also available for post-synchronization data treatment consisting of indexation or format conversion (for NCBI blast, SRS, EMBOSS, GCG, etc.). BioMAJ can be easily extended by personal homemade processing scripts. Source history can be kept via html reports containing statements of locally managed databanks. http://biomaj.genouest.org. BioMAJ is free open software. It is freely available under the CECILL version 2 license.

  13. Cost-effectiveness of using recombinant activated factor VII as an off-label rescue treatment for critical bleeding requiring massive transfusion.

    PubMed

    Ho, Kwok M; Litton, Edward

    2012-08-01

    Recombinant activated factor VII (rFVIIa) is widely used as an off-label rescue treatment for patients with nonhemophilic critical bleeding. Using data from the intensive care unit, transfusion service, and death registry, the long-term survival after using rFVIIa and the associated cost per life-year gained in a consecutive cohort of patients with critical bleeding requiring massive transfusion (≥ 10 red blood cell [RBC] units in 24 hr) were assessed. rFVIIa was only used as a lifesaving treatment when conventional measures had failed. Of the 353 patients with critical bleeding requiring massive transfusion, 81 (23%) required rFVIIa as a lifesaving rescue treatment. The patients requiring rFVIIa received a greater number of transfusions (number of units: RBCs, 18 vs. 12; fresh-frozen plasma, 16 vs. 10; platelets, 4 vs. 2; p < 0.001) and had a shorter survival time (24 months vs. 33 months; p = 0.002) than those who did not require rFVIIa. The total cost per life-year gained of massive transfusion and incremental cost of rFVIIa as a lifesaving treatment were US$1,148,000 (£711,760; 95% confidence interval [CI], US$825,000-US$1,471,000) and US$736,000 (£456,320; 95% CI, US$527,000-US$945,000), respectively. The incremental costs of rFVIIa increased with severity of illness and transfusion requirement and were greater than the usual acceptable cost-effective limit (

  14. NOTE: Circular symmetry in topologically massive gravity

    NASA Astrophysics Data System (ADS)

    Deser, S.; Franklin, J.

    2010-05-01

    We re-derive, compactly, a topologically massive gravity (TMG) decoupling theorem: source-free TMG separates into its Einstein and Cotton sectors for spaces with a hypersurface-orthogonal Killing vector, here concretely for circular symmetry. We then generalize the theorem to include matter; surprisingly, the single Killing symmetry also forces conformal invariance, requiring the sources to be null.

  15. Beyond the "c" and the "x": Learning with Algorithms in Massive Open Online Courses (MOOCs)

    ERIC Educational Resources Information Center

    Knox, Jeremy

    2018-01-01

    This article examines how algorithms are shaping student learning in massive open online courses (MOOCs). Following the dramatic rise of MOOC platform organisations in 2012, over 4,500 MOOCs have been offered to date, in increasingly diverse languages, and with a growing requirement for fees. However, discussions of "learning" in MOOCs…

  16. Individual Differences in Sequence Learning Ability and Second Language Acquisition in Early Childhood and Adulthood

    ERIC Educational Resources Information Center

    Granena, Gisela

    2013-01-01

    Language aptitude has been hypothesized as a factor that can compensate for postcritical period effects in language learning capacity. However, previous research has primarily focused on instructed contexts and rarely on acquisition-rich learning environments where there is a potential for massive amounts of input. In addition, the studies…

  17. How the Mastery Rubric for Statistical Literacy Can Generate Actionable Evidence about Statistical and Quantitative Learning Outcomes

    ERIC Educational Resources Information Center

    Tractenberg, Rochelle E.

    2017-01-01

    Statistical literacy is essential to an informed citizenry; and two emerging trends highlight a growing need for training that achieves this literacy. The first trend is towards "big" data: while automated analyses can exploit massive amounts of data, the interpretation--and possibly more importantly, the replication--of results are…

  18. Comparative Study of Emission Factors and Mutagenicity of Red Oak and Peat Smoke from Smoldering and Flaming Combustion

    EPA Science Inventory

    Wildfire events produce massive amounts of smoke and thus play an important role in local and regional air quality as well as public health. It is not well understood, however, whether the impacts of wildfire smoke are influenced by fuel types or combustion conditions. Here we develop...

  19. 1988-2000 Long-Range Plan for Technology of the Texas State Board of Education.

    ERIC Educational Resources Information Center

    Texas State Board of Education, Austin.

    This plan plots the course for meeting educational needs in Texas through such technologies as computer-based systems, devices for storage and retrieval of massive amounts of information, telecommunications for audio, video, and information sharing, and other electronic media devised by the year 2000 that can help meet the instructional and…

  20. An extension of the plant ontology project supporting wood anatomy and development research

    Treesearch

    Federic Lens; Laurel Cooper; Maria Alejandra Gandolfo; Andrew Groover; Pankaj Jaiswal; Barbara Lachenbruch; Rachel Spicer; Margaret E. Staton; Dennis W. Stevenson; Ramona L. Walls; Jill Wegrzyn

    2012-01-01

    A wealth of information on plant anatomy and morphology is available in the current and historical literature, and molecular biologists are producing massive amounts of transcriptome and genome data that can be used to gain better insights into the development, evolution, ecology, and physiological function of plant anatomical attributes. Integrating anatomical and...

  1. Watson for Genomics: Moving Personalized Medicine Forward.

    PubMed

    Rhrissorrakrai, Kahn; Koyama, Takahiko; Parida, Laxmi

    2016-08-01

    The confluence of genomic technologies and cognitive computing has brought us to the doorstep of widespread usage of personalized medicine. Cognitive systems, such as Watson for Genomics (WG), integrate massive amounts of new omic data with the current body of knowledge to assist physicians in analyzing and acting on patient's genomic profiles. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Toward an Innovative, Basic Program Model for the Improvement of Professional Instruction in Dental Education: A Review of the Literature.

    ERIC Educational Resources Information Center

    Wulf, Kathleen M.; And Others

    1980-01-01

    An analysis of the massive amount of literature pertaining to the improvement of professional instruction in dental education resulted in the formation of a comprehensive model of 10 categories, including Delphi technique; systems approach; agencies; workshops; multi-media, self-instruction; evaluation paradigms, measurement, courses, and…

  3. The Effectiveness of "Knowledge Management System" in Research Mentoring Using Knowledge Engineering

    ERIC Educational Resources Information Center

    Sriwichai, Puangpet; Meksamoot, Komsak; Chakpitak, Nopasit; Dahal, Keshav; Jengjalean, Anchalee

    2014-01-01

    Currently, many older universities in Thailand have been facing a wave of lecturer retirements. This has led to the recruitment of large numbers of new Ph.D. graduates who must immediately take on teaching and research responsibilities without mentoring by senior staff, as is also the case in new universities. Therefore, this paper aims to propose…

  4. Modeling MOOC Student Behavior with Two-Layer Hidden Markov Models

    ERIC Educational Resources Information Center

    Geigle, Chase; Zhai, ChengXiang

    2017-01-01

    Massive open online courses (MOOCs) provide educators with an abundance of data describing how students interact with the platform, but this data is highly underutilized today. This is in part due to the lack of sophisticated tools to provide interpretable and actionable summaries of huge amounts of MOOC activity present in log data. To address…

  5. Dual Audio Television; an Experiment in Saturday Morning Broadcast and a Summary Report.

    ERIC Educational Resources Information Center

    Borton, Terry; And Others

    The Philadelphia City Schools engaged in a four-year program to develop and test dual audio television, a way to help children learn more from the massive amounts of time they spend watching commercial television. The format consisted of an instructional radio broadcast which accompanied popular television shows and attempted to clarify and…

  6. NSDL K-12 Science Literacy Maps: A Visual Tool for Learning

    ERIC Educational Resources Information Center

    Payo, Robert

    2008-01-01

    Given the massive amount of science and mathematics content available online, libraries working with science teachers can become lost when attempting to select material that is both compelling for the learner and effective in addressing learning goals. Tools that help educators identify the most appropriate resources can be a great time saver.…

  7. A Multi-Robot Sense-Act Approach to Lead to a Proper Acting in Environmental Incidents

    PubMed Central

    Conesa-Muñoz, Jesús; Valente, João; del Cerro, Jaime; Barrientos, Antonio; Ribeiro, Angela

    2016-01-01

    Many environmental incidents affect large areas, often in rough terrain constrained by natural obstacles, which makes intervention difficult. New technologies, such as unmanned aerial vehicles, may help address this issue because they can reach and easily cover large areas. Thus, unmanned aerial vehicles may be used to inspect the terrain and make a first assessment of the affected areas; however, they currently lack the capability to act. Ground vehicles, on the other hand, can carry enough power to perform the intervention but face more mobility constraints. This paper proposes a multi-robot sense-act system composed of aerial and ground vehicles. This combination makes it possible to perform autonomous tasks in large outdoor areas by integrating both types of platforms in a fully automated manner. Aerial units are used to easily obtain relevant data from the environment, and ground units use this information to carry out interventions more efficiently. This paper describes the platforms and sensors required by this multi-robot sense-act system and proposes a software system to automatically handle the workflow for any generic environmental task. The proposed system has proved suitable for reducing the amount of herbicide applied in agricultural treatments. Although herbicides are highly polluting, they are typically applied across entire agricultural fields to remove weeds. Nevertheless, the amount of herbicide required for treatment is radically reduced when it is applied precisely to weed patches by the proposed multi-robot system. Thus, the aerial units were employed to scout the crop and build an accurate weed distribution map, which was subsequently used to plan the task of the ground units. The whole workflow was executed in a fully autonomous way, without human intervention except where required by Spanish law for safety reasons. PMID:27517934

  8. Integrands for QCD rational terms and N = 4 SYM from massive CSW rules

    NASA Astrophysics Data System (ADS)

    Elvang, Henriette; Freedman, Daniel Z.; Kiermaier, Michael

    2012-06-01

    We use massive CSW rules to derive explicit compact expressions for integrands of rational terms in QCD with any number of external legs. Specifically, we present all-n integrands for the one-loop all-plus and one-minus gluon amplitudes in QCD. We extract the finite part of spurious external-bubble contributions systematically; this is crucial for the application of integrand-level CSW rules in theories without supersymmetry. Our approach yields integrands that are independent of the choice of CSW reference spinor even before integration. Furthermore, we present a recursive derivation of the recently proposed massive CSW-style vertex expansion for massive tree amplitudes and loop integrands on the Coulomb-branch of N = 4 SYM. The derivation requires a careful study of boundary terms in all-line shift recursion relations, and provides a rigorous (albeit indirect) proof of the recently proposed construction of massive amplitudes from soft-limits of massless on-shell amplitudes. We show that the massive vertex expansion manifestly preserves all holomorphic and half of the anti-holomorphic supercharges, diagram-by-diagram, even off-shell.

  9. Formation of massive seed black holes via collisions and accretion

    NASA Astrophysics Data System (ADS)

    Boekholt, T. C. N.; Schleicher, D. R. G.; Fellhauer, M.; Klessen, R. S.; Reinoso, B.; Stutz, A. M.; Haemmerlé, L.

    2018-05-01

    Models aiming to explain the formation of massive black hole seeds, and in particular the direct collapse scenario, face substantial difficulties. These are rooted in rather ad hoc and fine-tuned initial conditions, such as the simultaneous requirements of extremely low metallicities and strong radiation backgrounds. Here, we explore a modification of such scenarios where a massive primordial star cluster is initially produced. Subsequent stellar collisions give rise to the formation of massive (10^4-10^5 M⊙) objects. Our calculations demonstrate that the interplay among stellar dynamics, gas accretion, and protostellar evolution is particularly relevant. Gas accretion on to the protostars enhances their radii, resulting in an enhanced collisional cross-section. We show that the fraction of collisions can increase from 0.1-1 per cent of the initial population to about 10 per cent, compared to gas-free models or models of protostellar clusters in the local Universe. We conclude that very massive objects can form in spite of initial fragmentation, making the first massive protostellar clusters viable candidate birth places for observed supermassive black holes.

  10. Surface Operations Systems Improve Airport Efficiency

    NASA Technical Reports Server (NTRS)

    2009-01-01

    With Small Business Innovation Research (SBIR) contracts from Ames Research Center, Mosaic ATM of Leesburg, Virginia created software to analyze surface operations at airports. Surface surveillance systems, which report locations every second for thousands of air and ground vehicles, generate massive amounts of data, making gathering and analyzing this information difficult. Mosaic's Surface Operations Data Analysis and Adaptation (SODAA) tool is an off-line support tool that can analyze how well the airport surface operation is working and can help redesign procedures to improve operations. SODAA helps researchers pinpoint trends and correlations in vast amounts of recorded airport operations data.

  11. [Secondary amyloidosis of the bladder and massive hematuria].

    PubMed

    García-Escudero López, A; Arruza Echevarría, A; Leunda Saizar, J; Infante Riaño, R; Padilla Nieva, J; Ortiz Barredo, E

    2010-01-01

    To report four additional cases of secondary amyloidosis of the bladder, an extremely rare condition, as shown by the cases reported in the literature. Four clinical cases are reported, all presenting with hematuria, which was massive and fulminant and resulted in death in three patients. Secondary amyloidosis of the bladder is of the AA type, which is more common in females and mainly secondary to rheumatoid arthritis, but also to ankylosing spondylitis and long-standing chronic inflammatory conditions. Hematuria is the main and virtually only symptom. Pathological and immunohistochemical studies confirmed the diagnosis. All three patients who experienced massive, fatal hematuria had an intercurrent condition requiring urethral catheterization, which was the triggering factor. Despite its rarity, as shown by the few cases reported, secondary amyloidosis of the bladder should be considered in patients already diagnosed with systemic amyloidosis and/or the conditions reported who require simple urethral catheterization.

  12. Characterizing the Disk of a Recent Massive Collisional Event

    NASA Astrophysics Data System (ADS)

    Song, Inseok

    2015-10-01

    Debris disks play a key role in the formation and evolution of planetary systems. On rare occasions, circumstellar material appears as strictly warm infrared excess in regions of expected terrestrial planet formation, and so presents an interesting opportunity for the study of terrestrial planetary regions. There are only a few known cases of extreme, warm, dusty disks that lack any colder outer component, including BD+20 307, HD 172555, EF Cha, and HD 23514. We have recently found a new system, TYC 8830-410-1, belonging to this rare group. Warm dust grains are extremely short-lived, and the extraordinary amount of warm dust near these stars can only be plausibly explained by a recent (or ongoing) massive transient event such as the Late Heavy Bombardment (LHB) or planetary collisions. LHB-like events are generally seen in systems with a dominant cold disk; however, warm-dust-only systems show no hint of a massive cold disk. Planetary collisions leave telltale signs in the form of unusual mid-IR spectral features such as silica, and we want to fully characterize the spectral shape of the newly found system with SOFIA/FORCAST. With SOFIA/FORCAST, we propose to obtain two narrow band photometric measurements between 6 and 9 microns. These FORCAST photometric measurements will constrain the amount and temperature of the warm disk in the system. There are fewer than a handful of systems with a strong hint of recent planetary collisions. With the firmly constrained warm disk around TYC 8830-410-1, we will publish the discovery in a leading astronomical journal, accompanied by a potential press release through SOFIA.

  13. The Destructive Birth of Massive Stars and Massive Star Clusters

    NASA Astrophysics Data System (ADS)

    Rosen, Anna; Krumholz, Mark; McKee, Christopher F.; Klein, Richard I.; Ramirez-Ruiz, Enrico

    2017-01-01

    Massive stars play an essential role in the Universe. They are rare, yet the energy and momentum they inject into the interstellar medium with their intense radiation fields dwarfs the contribution by their vastly more numerous low-mass cousins. Previous theoretical and observational studies have concluded that the feedback associated with massive stars' radiation fields is the dominant mechanism regulating massive star and massive star cluster (MSC) formation. Therefore detailed simulation of the formation of massive stars and MSCs, which host hundreds to thousands of massive stars, requires an accurate treatment of radiation. For this purpose, we have developed a new, highly accurate hybrid radiation algorithm that properly treats the absorption of the direct radiation field from stars and the re-emission and processing by interstellar dust. We use our new tool to perform a suite of three-dimensional radiation-hydrodynamic simulations of the formation of massive stars and MSCs. For individual massive stellar systems, we simulate the collapse of massive pre-stellar cores with laminar and turbulent initial conditions and properly resolve regions where we expect instabilities to grow. We find that mass is channeled to the massive stellar system via gravitational and Rayleigh-Taylor (RT) instabilities. For laminar initial conditions, proper treatment of the direct radiation field produces later onset of RT instability, but does not suppress it entirely provided the edges of the radiation-dominated bubbles are adequately resolved. RT instabilities arise immediately for turbulent pre-stellar cores because the initial turbulence seeds the instabilities. To model MSC formation, we simulate the collapse of a dense, turbulent, magnetized Mcl = 10^6 M⊙ molecular cloud. We find that the influence of the magnetic pressure and radiative feedback slows down star formation. Furthermore, we find that star formation is suppressed along dense filaments where the magnetic field is amplified. Our results suggest that the combined effect of turbulence, magnetic pressure, and radiative feedback from massive stars is responsible for the low star formation efficiencies observed in molecular clouds.

  14. Role of Massive Stars in the Evolution of Primitive Galaxies

    NASA Technical Reports Server (NTRS)

    Heap, Sara

    2012-01-01

    An important factor controlling galaxy evolution is feedback from massive stars. It is believed that the nature and intensity of stellar feedback changes as a function of galaxy mass and metallicity. At low mass and metallicity, feedback from massive stars is mainly in the form of photoionizing radiation. At higher mass and metallicity, it is in stellar winds. IZw 18 is a local blue, compact dwarf galaxy that meets the requirements for a primitive galaxy: low halo mass greater than 10^9 Msun, strong photoionizing radiation, no galactic outflow, and very low metallicity, log(O/H)+12 = 7.2. We will describe the properties of massive stars and their role in the evolution of IZw 18, based on analysis of ultraviolet images and spectra obtained with HST.

  15. Environmental interpretation using insoluble residues within reef coral skeletons: problems, pitfalls, and preliminary results

    NASA Astrophysics Data System (ADS)

    Budd, Ann F.; Mann, Keith O.; Guzmán, Hector M.

    1993-03-01

    Insoluble residue concentrations have been measured within colonies of four massive reef corals from seven localities along the Caribbean coast of Panama to determine if detrital sediments, incorporated within the skeletal lattice during growth, record changes in sedimentation over the past twenty years. Amounts of resuspended sediment have increased to varying degrees at the seven localities over the past decades in response to increased deforestation in nearby terrestrial habitats. Preliminary results of correlation and regression analyses reveal few consistent temporal trends in the insoluble residue concentration. Analyses of variance suggest that amounts of insoluble residues, however, differ among environments within species, but that no consistent pattern of variation exists among species. D. strigosa and P. astreoides possess high concentrations at protected localities, S. siderea at localities with high amounts of resuspended sediment, and M. annularis at the least turbid localities. Little correlation exists between insoluble residue concentration and growth band width within species at each locality. Only in two more efficient suspension feeders ( S. siderea and D. strigosa) do weak negative correlations with growth band width exist overall. These results indicate that insoluble residue concentrations cannot be used unequivocally in environmental interpretation, until more is known about tissue damage, polyp behavior, and their effects on the incorporation of insolubles in the skeleton during growth in different coral species. Insoluble residue data are highly variable; therefore, large sample sizes and strong contrasts between environments are required to reveal significant trends.

  16. A Cohort Analysis of Postbariatric Panniculectomy--Current Trends in Surgeon Reimbursement.

    PubMed

    Aherrera, Andrew S; Pandya, Sonal N

    2016-01-01

    The overall number of patients undergoing body contouring procedures after massive weight loss (MWL) has progressively increased over the past decade. The purpose of this study was to evaluate the charges and reimbursements for panniculectomy after MWL at a large academic institution in Massachusetts. A retrospective review was performed and included all identifiable panniculectomy procedures performed at our institution between January 2008 and January 2014. The annual number of patients undergoing panniculectomy, the type of insurance coverage and reimbursement method of each patient, and the amounts billed and reimbursed were evaluated. During our study period, 114 patients underwent a medically necessary panniculectomy as a result of MWL. The average surgeon fee billed was $3496 ± $704 and the average amount reimbursed was $1271 ± $589. Ten cases (8.8%) had no reimbursements, 31 cases (21.8%) reimbursed less than $1000, 66 cases (57.9%) reimbursed between $1000 and $2000, and no cases reimbursed the full amount billed. When evaluated by type of insurance coverage, collection ratios were 37.4% ± 17.4% overall, 41.7% ± 16.4% for private insurance, and 24.0% ± 13.0% for Medicare/Medicaid insurance (P < 0.001). Reimbursements for panniculectomy are remarkably low, and in many instances, absent, despite obtaining previous preauthorization of medical necessity. Although panniculectomy is associated with improvements in quality of life and high levels of patient satisfaction, poor physician reimbursement for this labor intensive procedure may preclude access to appropriate care required by the MWL patient population.

  17. The Integration of an Anatomy Massive Open Online Course (MOOC) into a Medical Anatomy Curriculum

    ERIC Educational Resources Information Center

    Swinnerton, Bronwen J.; Morris, Neil P.; Hotchkiss, Stephanie; Pickering, James D.

    2017-01-01

    Massive open online courses (MOOCs) are designed as stand-alone courses which can be accessed by any learner around the globe with only an internet-enabled electronic device required. Although much research has focused on the enrolment and demographics of MOOCs, their impact on undergraduate campus-based students is still unclear. This article…

  18. Luminous blue variables and the fates of very massive stars

    NASA Astrophysics Data System (ADS)

    Smith, Nathan

    2017-09-01

    Luminous blue variables (LBVs) had long been considered massive stars in transition to the Wolf-Rayet (WR) phase, so their identification as progenitors of some peculiar supernovae (SNe) was surprising. More recently, environment statistics of LBVs show that most of them cannot be in transition to the WR phase after all, because LBVs are more isolated than allowed in this scenario. Additionally, the high-mass H shells around luminous SNe IIn require that some very massive stars above 40 M⊙ die without shedding their H envelopes, and the precursor outbursts are a challenge for understanding the final burning sequences leading to core collapse. Recent evidence suggests a clear continuum in pre-SN mass loss from super-luminous SNe IIn, to regular SNe IIn, to SNe II-L and II-P, whereas most stripped-envelope SNe seem to arise from a separate channel of lower-mass binary stars rather than massive WR stars. This article is part of the themed issue 'Bridging the gap: from massive stars to supernovae'.

  19. Working in a Text Mine; Is Access about to Go down?

    ERIC Educational Resources Information Center

    Emery, Jill

    2008-01-01

    The age of networked research and networked data analysis is upon us. "Wired Magazine" proclaims on the cover of their July 2008 issue: "The End of Science. The quest for knowledge used to begin with grand theories. Now it begins with massive amounts of data. Welcome to the Petabyte Age." Computing technology is sufficiently complex at this point…

  20. Stop Programming Robots: How to Prepare Every Student for Success in Any Career

    ERIC Educational Resources Information Center

    Chester, Eric

    2012-01-01

    While technology has made communicating "easy," it has done so at the cost of communication that is "meaningful." And for information to be internalized to the point where it is remembered, used and valued, it must be meaningful. In other words, technology makes it easy to disseminate massive amounts of information to the masses, but teaching a…

  1. Teaching Case: Introduction to NoSQL in a Traditional Database Course

    ERIC Educational Resources Information Center

    Fowler, Brad; Godin, Joy; Geddy, Margaret

    2016-01-01

    Many organizations are dealing with the increasing demands of big data, so they are turning to NoSQL databases as their preferred system for handling the unique problems of capturing and storing massive amounts of data. Therefore, it is likely that employees in all sizes of organizations will encounter NoSQL databases. Thus, to be more job-ready,…

  2. Critical Pedagogy and the Decolonial Option: Challenges to the Inevitability of Capitalism

    ERIC Educational Resources Information Center

    Monzó, Lilia D.; McLaren, Peter

    2014-01-01

    The demise of capitalism was theoretically prophesied by Marx who posited that the world would come to such a state of destruction and human suffering that no amount of coercion or concessions would suffice to stop the massive uprisings that would lead us into a new socialist alternative. Although the downfall of world capitalism may seem…

  3. We're all in this together: decisionmaking to address climate change in a complex world

    Treesearch

    Jonathan Thompson; Ralph Alig

    2009-01-01

    Forests significantly influence the global carbon budget: they store massive amounts of carbon in their wood and soil, they sequester atmospheric carbon as they grow, and they emit carbon as a greenhouse gas when harvested or converted to another use. These factors make forest conservation and management important components of most strategies for adapting to and...

  4. The Effects of Leadership on Carrier Air Wing Sixteen’s Loss Rates during Operation Rolling Thunder, 1965-1968

    DTIC Science & Technology

    2006-06-16

    7 Corsair. As the war in Southeast Asia expanded, the massive amounts of ordnance being dropped on Laos, Cambodia, South Vietnam, and North Vietnam...over North Vietnam. Though gravely wounded, one of Foster’s main concerns while he lay in the Oriskany’s sick bay was the impact on Tom Spitzer

  5. Should You Trust Your Money to a Robot?

    PubMed

    Dhar, Vasant

    2015-06-01

    Financial markets emanate massive amounts of data from which machines can, in principle, learn to invest with minimal initial guidance from humans. I contrast human and machine strengths and weaknesses in making investment decisions. The analysis reveals areas in the investment landscape where machines are already very active and those where machines are likely to make significant inroads in the next few years.

  6. On the Block: Student Data and Privacy in the Digital Age--The Seventheenth Annual Report on Schoolhouse Commercializing Trends, 2013-2014

    ERIC Educational Resources Information Center

    Molnar, Alex; Boninger, Faith

    2015-01-01

    Computer technology has made it possible to aggregate, collate, analyze, and store massive amounts of information about students. School districts and private companies that sell their services to the education market now regularly collect such information, raising significant issues about the privacy rights of students. Most school districts lack…

  7. Game-powered machine learning

    PubMed Central

    Barrington, Luke; Turnbull, Douglas; Lanckriet, Gert

    2012-01-01

    Searching for relevant content in a massive amount of multimedia information is facilitated by accurately annotating each image, video, or song with a large number of relevant semantic keywords, or tags. We introduce game-powered machine learning, an integrated approach to annotating multimedia content that combines the effectiveness of human computation, through online games, with the scalability of machine learning. We investigate this framework for labeling music. First, a socially-oriented music annotation game called Herd It collects reliable music annotations based on the “wisdom of the crowds.” Second, these annotated examples are used to train a supervised machine learning system. Third, the machine learning system actively directs the annotation games to collect new data that will most benefit future model iterations. Once trained, the system can automatically annotate a corpus of music much larger than what could be labeled using human computation alone. Automatically annotated songs can be retrieved based on their semantic relevance to text-based queries (e.g., “funky jazz with saxophone,” “spooky electronica,” etc.). Based on the results presented in this paper, we find that actively coupling annotation games with machine learning provides a reliable and scalable approach to making searchable massive amounts of multimedia data. PMID:22460786

  8. Game-powered machine learning.

    PubMed

    Barrington, Luke; Turnbull, Douglas; Lanckriet, Gert

    2012-04-24

    Searching for relevant content in a massive amount of multimedia information is facilitated by accurately annotating each image, video, or song with a large number of relevant semantic keywords, or tags. We introduce game-powered machine learning, an integrated approach to annotating multimedia content that combines the effectiveness of human computation, through online games, with the scalability of machine learning. We investigate this framework for labeling music. First, a socially-oriented music annotation game called Herd It collects reliable music annotations based on the "wisdom of the crowds." Second, these annotated examples are used to train a supervised machine learning system. Third, the machine learning system actively directs the annotation games to collect new data that will most benefit future model iterations. Once trained, the system can automatically annotate a corpus of music much larger than what could be labeled using human computation alone. Automatically annotated songs can be retrieved based on their semantic relevance to text-based queries (e.g., "funky jazz with saxophone," "spooky electronica," etc.). Based on the results presented in this paper, we find that actively coupling annotation games with machine learning provides a reliable and scalable approach to making searchable massive amounts of multimedia data.
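
    The annotate-train-direct loop described in the two records above (game-collected labels feed a supervised tagger, which then steers the game toward the most informative clips) can be illustrated with a small sketch. This is an assumed, simplified illustration using scikit-learn and uncertainty sampling, not the authors' implementation; all names and data below are placeholders.

    ```python
    # Hypothetical sketch of one iteration of a game-powered annotation loop:
    # 1) train on game-collected labels, 2) pick the clips the model is least
    # certain about, 3) route those clips back to the game for annotation.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_tagger(features, labels):
        """Fit a classifier for a single tag (one tag only, for brevity)."""
        clf = LogisticRegression(max_iter=1000)
        clf.fit(features, labels)
        return clf

    def select_for_game(clf, unlabeled_features, batch_size=50):
        """Uncertainty sampling: probabilities nearest 0.5 are most informative."""
        proba = clf.predict_proba(unlabeled_features)[:, 1]
        uncertainty = -np.abs(proba - 0.5)
        return np.argsort(uncertainty)[-batch_size:]

    # Placeholder data: X_lab/y_lab stand in for game annotations,
    # X_pool for the unlabeled music corpus.
    rng = np.random.default_rng(0)
    X_lab, y_lab = rng.normal(size=(200, 16)), np.arange(200) % 2
    X_pool = rng.normal(size=(5000, 16))

    model = train_tagger(X_lab, y_lab)
    next_batch = select_for_game(model, X_pool)  # indices to send to the game
    ```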

  9. Mineralogy, textures, and relative age relationships of massive sulfide ore in the West Shasta district, California ( USA).

    USGS Publications Warehouse

    Howe, S.S.

    1985-01-01

    The Devonian massive sulfide orebodies of the West Shasta district in N California are composed primarily of pyrite, with lesser amounts of other sulfide and gangue minerals. Examination of polished thin sections of more than 100 samples from the Mammoth, Shasta King, Early Bird, Balaklala, Keystone, and Iron Mountain mines suggests that mineralization may be divided into 6 paragenetic stages, the last 5 each separated by an episode of deformation: 1) precipitation of fine-grained, locally colloform and framboidal pyrite and sphalerite; 2) deposition of fine-grained arsenopyrite and coarse-grained pyrite; 3) penetration and local replacement of sulfide minerals of stages 1 and 2 along growth zones and fractures by chalcopyrite, sphalerite, galena, tennantite, pyrrhotite, bornite, and idaite; 4) recrystallization and remobilization of existing minerals; 5) deposition of quartz, white mica, chlorite, and calcite; and 6) formation of bornite, digenite, chalcocite, and covellite during supergene enrichment of several orebodies at the Iron Mountain mine. Mineralogic and textural evidence does not support a second major episode of massive sulfide mineralization during the Permian. -from Author

  10. An empirical study of ensemble-based semi-supervised learning approaches for imbalanced splice site datasets.

    PubMed

    Stanescu, Ana; Caragea, Doina

    2015-01-01

    Recent biochemical advances have led to inexpensive, time-efficient production of massive volumes of raw genomic data. Traditional machine learning approaches to genome annotation typically rely on large amounts of labeled data. The process of labeling data can be expensive, as it requires domain knowledge and expert involvement. Semi-supervised learning approaches that can make use of unlabeled data, in addition to small amounts of labeled data, can help reduce the costs associated with labeling. In this context, we focus on the problem of predicting splice sites in a genome using semi-supervised learning approaches. This is a challenging problem, due to the highly imbalanced distribution of the data, i.e., small number of splice sites as compared to the number of non-splice sites. To address this challenge, we propose to use ensembles of semi-supervised classifiers, specifically self-training and co-training classifiers. Our experiments on five highly imbalanced splice site datasets, with positive to negative ratios of 1-to-99, showed that the ensemble-based semi-supervised approaches represent a good choice, even when the amount of labeled data consists of less than 1% of all training data. In particular, we found that ensembles of co-training and self-training classifiers that dynamically balance the set of labeled instances during the semi-supervised iterations show improvements over the corresponding supervised ensemble baselines. In the presence of limited amounts of labeled data, ensemble-based semi-supervised approaches can successfully leverage the unlabeled data to enhance supervised ensembles learned from highly imbalanced data distributions. Given that such distributions are common for many biological sequence classification problems, our work can be seen as a stepping stone towards more sophisticated ensemble-based approaches to biological sequence annotation in a semi-supervised framework.

  11. An empirical study of ensemble-based semi-supervised learning approaches for imbalanced splice site datasets

    PubMed Central

    2015-01-01

    Background Recent biochemical advances have led to inexpensive, time-efficient production of massive volumes of raw genomic data. Traditional machine learning approaches to genome annotation typically rely on large amounts of labeled data. The process of labeling data can be expensive, as it requires domain knowledge and expert involvement. Semi-supervised learning approaches that can make use of unlabeled data, in addition to small amounts of labeled data, can help reduce the costs associated with labeling. In this context, we focus on the problem of predicting splice sites in a genome using semi-supervised learning approaches. This is a challenging problem, due to the highly imbalanced distribution of the data, i.e., small number of splice sites as compared to the number of non-splice sites. To address this challenge, we propose to use ensembles of semi-supervised classifiers, specifically self-training and co-training classifiers. Results Our experiments on five highly imbalanced splice site datasets, with positive to negative ratios of 1-to-99, showed that the ensemble-based semi-supervised approaches represent a good choice, even when the amount of labeled data consists of less than 1% of all training data. In particular, we found that ensembles of co-training and self-training classifiers that dynamically balance the set of labeled instances during the semi-supervised iterations show improvements over the corresponding supervised ensemble baselines. Conclusions In the presence of limited amounts of labeled data, ensemble-based semi-supervised approaches can successfully leverage the unlabeled data to enhance supervised ensembles learned from highly imbalanced data distributions. Given that such distributions are common for many biological sequence classification problems, our work can be seen as a stepping stone towards more sophisticated ensemble-based approaches to biological sequence annotation in a semi-supervised framework. PMID:26356316
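
    As a rough illustration of the "dynamically balanced" self-training idea described in the two records above, the sketch below adds an equal number of confident positive and negative pseudo-labels at each iteration. The classifier, thresholds, and data are placeholder assumptions for illustration, not the authors' experimental setup.

    ```python
    # Schematic class-balanced self-training: each round, pseudo-label an equal
    # number of confident positives and negatives, then retrain.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def balanced_self_training(X_lab, y_lab, X_unlab, iterations=5, per_class=20):
        X_lab, y_lab = X_lab.copy(), y_lab.copy()
        pool = np.arange(len(X_unlab))          # indices still unlabeled
        for _ in range(iterations):
            clf = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
            proba = clf.predict_proba(X_unlab[pool])[:, 1]
            pos = pool[np.argsort(proba)[-per_class:]]   # most confident positives
            neg = pool[np.argsort(proba)[:per_class]]    # most confident negatives
            picked = np.concatenate([pos, neg])
            X_lab = np.vstack([X_lab, X_unlab[picked]])
            y_lab = np.concatenate([y_lab, np.ones(per_class), np.zeros(per_class)])
            pool = np.setdiff1d(pool, picked)
            if len(pool) < 2 * per_class:
                break
        return clf

    # Placeholder data with a splice-site-like class imbalance.
    rng = np.random.default_rng(1)
    X_l = rng.normal(size=(100, 10))
    y_l = np.zeros(100, dtype=int); y_l[:5] = 1   # ~5% positives
    X_u = rng.normal(size=(2000, 10))
    model = balanced_self_training(X_l, y_l, X_u)
    ```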

  12. Streamlining environmental product declarations: a stage model

    NASA Astrophysics Data System (ADS)

    Lefebvre, Elisabeth; Lefebvre, Louis A.; Talbot, Stephane; Le Hen, Gael

    2001-02-01

    Environmental awareness and education among the general public are increasing, stimulating the demand for reliable, objective, and comparable information about products' environmental performance. The recently published standard series ISO 14040 and ISO 14025 are normalizing the preparation of Environmental Product Declarations (EPDs) containing comprehensive information relevant to a product's environmental impact during its life cycle. So far, only a few environmentally leading manufacturing organizations (mostly from Europe) have experimented with the preparation of EPDs, demonstrating their great potential as a marketing weapon. However, the preparation of EPDs is a complex process, requiring the collection and analysis of massive amounts of information from disparate sources (suppliers, sub-contractors, etc.). In the foreseeable future, streamlining the EPD preparation process will require product manufacturers to adapt their information systems (ERP, MES, SCADA) so that they can gather and transmit the appropriate environmental information. It also requires strong functional integration all along the product supply chain in order to ensure that all the information is made available in a standardized and timely manner. The goal of the present paper is twofold: first, to propose a transitional model towards green supply chain management and EPD preparation; second, to identify key technologies and methodologies that allow streamlining of the EPD process and, subsequently, the transition toward sustainable product development.

  13. High-resolution bathymetry as a primary exploration tool for seafloor massive sulfide deposits - lessons learned from exploration on the Mid-Atlantic and Juan de Fuca Ridges, and northern Lau Basin

    NASA Astrophysics Data System (ADS)

    Jamieson, J. W.; Clague, D. A.; Petersen, S.; Yeo, I. A.; Escartin, J.; Kwasnitschka, T.

    2016-12-01

    High-resolution, autonomous underwater vehicle (AUV)-derived multibeam bathymetry is increasingly being used as an exploration tool for delineating the size and extent of hydrothermal vent fields and associated seafloor massive sulfide deposits. However, because of the limited amount of seafloor that can be surveyed during a single dive, and the challenges associated with distinguishing hydrothermal chimneys and mounds from other volcanic and tectonic features using solely bathymetric data, AUV mapping surveys have largely been employed as a secondary exploration tool once hydrothermal sites have been discovered using other exploration methods such as plume, self-potential and TV surveys, or ROV and submersible dives. Visual ground-truthing is often required to attain an acceptable level of confidence in the hydrothermal origin of features identified in AUV-derived bathymetry. Here, we present examples of high-resolution bathymetric surveys of vent fields from a variety of tectonic environments, including slow- and intermediate-rate mid-ocean ridges, oceanic core complexes and back arc basins. Results illustrate the diversity of sulfide deposit morphologies, and the challenges associated with identifying hydrothermal features in different tectonic environments. We present a developing set of criteria that can be used to distinguish hydrothermal deposits in bathymetric data, and how AUV surveys can be used either on their own or in conjunction with other exploration techniques as a primary exploration tool.

  14. Large Scale Document Inversion using a Multi-threaded Computing System

    PubMed Central

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2018-01-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU as a massively parallel coprocessor because it consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, vast amounts of information are flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure for full-text search and document retrieval, creating the index for a large number of documents requires a tremendous amount of time. The performance of document inversion can be improved with multi-threaded, multi-core GPUs. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems → Information retrieval; Computing methodologies → Massively parallel and high-performance simulations. PMID:29861701

  15. Large Scale Document Inversion using a Multi-threaded Computing System.

    PubMed

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2017-06-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general purpose computing. We can utilize the GPU as a massively parallel coprocessor because it consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays, vast amounts of information are flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure for full-text search and document retrieval, creating the index for a large number of documents requires a tremendous amount of time. The performance of document inversion can be improved with multi-threaded, multi-core GPUs. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems → Information retrieval; Computing methodologies → Massively parallel and high-performance simulations.
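
    To make the inversion step concrete, here is a minimal, CPU-only sketch of hash-based document inversion using Python threads. It is an illustrative stand-in under assumed inputs, not a port of the authors' GPU/CUDA implementation.

    ```python
    # Minimal hash-based document inversion: map each document to partial
    # {term: {doc_id}} postings in parallel, then merge into one inverted index.
    from collections import defaultdict
    from concurrent.futures import ThreadPoolExecutor

    def invert_one(doc_id, text):
        """Build the partial index for a single document."""
        partial = defaultdict(set)
        for term in text.lower().split():
            partial[term].add(doc_id)
        return partial

    def build_index(docs, workers=4):
        index = defaultdict(set)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            partials = list(pool.map(lambda item: invert_one(*item), docs.items()))
        for partial in partials:                 # merge (reduce) step
            for term, postings in partial.items():
                index[term] |= postings
        return index

    docs = {
        1: "massive parallel document inversion",
        2: "inverted index for document retrieval",
    }
    index = build_index(docs)
    print(sorted(index["document"]))  # -> [1, 2]
    ```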

  16. The early universe as a probe of new physics

    NASA Astrophysics Data System (ADS)

    Bird, Christopher Shane

    The Standard Model of Particle Physics has been verified to unprecedented precision in the last few decades. However there are still phenomena in nature which cannot be explained, and as such new theories will be required. Since terrestrial experiments are limited in both the energy and precision that can be probed, new methods are required to search for signs of physics beyond the Standard Model. In this dissertation, I demonstrate how these theories can be probed by searching for remnants of their effects in the early Universe. In particular I focus on three possible extensions of the Standard Model: the addition of massive neutral particles as dark matter, the addition of charged massive particles, and the existence of higher dimensions. For each new model, I review the existing experimental bounds and the potential for discovering new physics in the next generation of experiments. For dark matter, I introduce six simple models which I have developed, and which involve a minimum amount of new physics, as well as reviewing one existing model of dark matter. For each model I calculate the latest constraints from astrophysics experiments, nuclear recoil experiments, and collider experiments. I also provide motivations for studying sub-GeV mass dark matter, and propose the possibility of searching for light WIMPs in the decay of B-mesons and other heavy particles. For charged massive relics, I introduce and review the recently proposed model of catalyzed Big Bang nucleosynthesis. In particular I review the production of 6Li by this mechanism, and calculate the abundance of 7Li after destruction of 7Be by charged relics. The result is that for certain natural relics CBBN is capable of removing tensions between the predicted and observed 6Li and 7Li abundances which are present in the standard model of BBN. For extra dimensions, I review the constraints on the ADD model from both astrophysics and collider experiments. I then calculate the constraints on this model from Big Bang nucleosynthesis in the early Universe. I also calculate the bounds on this model from Kaluza-Klein gravitons trapped in the galaxy which decay to electron-positron pairs, using the measured 511 keV gamma-ray flux. For each example of new physics, I find that remnants of the early Universe provide constraints on the models which are complementary to the existing constraints from colliders and other terrestrial experiments.

  17. Shifting of the resonance location for planets embedded in circumstellar disks

    NASA Astrophysics Data System (ADS)

    Marzari, F.

    2018-03-01

    Context. In the early evolution of a planetary system, a pair of planets may be captured in a mean motion resonance while still embedded in their nesting circumstellar disk. Aims: The goal is to estimate the direction and amount of shift in the semimajor axis of the resonance location due to the disk gravity as a function of the gas density and mass of the planets. The stability of the resonance lock when the disk dissipates is also tested. Methods: The orbital evolution of a large number of systems is numerically integrated within a three-body problem in which the disk potential is computed as a series expansion. This is a good approximation, at least over a limited amount of time. Results: Two different resonances are studied: the 2:1 and the 3:2. In both cases the shift is inwards, even if by a different amount, when the planets are massive and carve a gap in the disk. For super-Earths, the shift is instead outwards. Different disk densities, Σ, are considered, and the resonance shift depends almost linearly on Σ. The gas dissipation leads to destabilization of a significant number of resonant systems, in particular if it is fast. Conclusions: The presence of a massive circumstellar disk may significantly affect the resonant behavior of a pair of planets by shifting the resonant location and by decreasing the size of the stability region. The disk dissipation may explain some systems found close to a resonance but not locked in it.
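
    For orientation, the nominal (disk-free) location of a j+1:j mean motion resonance follows directly from Kepler's third law; the disk-induced shift studied in this record can then be read as a correction to that nominal semimajor axis. The last relation below is only a schematic rendering of the reported near-linear dependence on Σ, with an assumed coefficient k, not the paper's fitted result.

    ```latex
    % Nominal resonance location (j+1 : j commensurability) from Kepler's third law,
    % e.g. 2:1 gives a_res ~ 1.59 a_in and 3:2 gives a_res ~ 1.31 a_in.
    \[
      \frac{P_\mathrm{out}}{P_\mathrm{in}} = \frac{j+1}{j}, \qquad
      a_\mathrm{res} = a_\mathrm{in}\left(\frac{j+1}{j}\right)^{2/3}, \qquad
      a_\mathrm{res}(\Sigma) \approx a_\mathrm{res}(0) + k\,\Sigma ,
    \]
    % with k < 0 (inward shift) for gap-opening massive planets and
    % k > 0 (outward shift) for super-Earths, per the abstract above.
    ```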

  18. Towards optimizing server performance in an educational MMORPG for teaching computer programming

    NASA Astrophysics Data System (ADS)

    Malliarakis, Christos; Satratzemi, Maya; Xinogalos, Stelios

    2013-10-01

    Web-based games have become significantly popular during the last few years. This is due to the gradual increase in internet speed, which has led to ongoing development of multiplayer games and, more importantly, the emergence of the Massive Multiplayer Online Role Playing Game (MMORPG) field. In parallel, similar technologies called educational games have started to be developed in order to be put into practice in various educational contexts, resulting in the field of Game Based Learning. However, these technologies require significant amounts of resources, such as bandwidth, RAM, and CPU capacity. These amounts may be even larger in an educational MMORPG that supports computer programming education, due to the usual inclusion of a compiler and the constant client/server data transmissions that occur during program coding, possibly leading to technical issues that could cause malfunctions during learning. Thus, determining the elements that affect the overall resource load of such games is essential, so that server administrators can configure them and ensure the games' proper operation during computer programming education. In this paper, we propose a new methodology for monitoring and optimizing load balancing, so that the resources essential for the creation and proper execution of an educational MMORPG for computer programming can be foreseen and provisioned without overloading the system.
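
    As a toy illustration of the resource monitoring such a methodology builds on, the snippet below samples the server-side quantities the abstract singles out (CPU, RAM, bandwidth) at a fixed interval. It assumes the third-party psutil package and is not the monitoring system proposed in the paper.

    ```python
    # Periodically sample CPU, memory, and network throughput on the game server.
    import psutil  # third-party package; assumed available

    def sample(interval_s=1.0):
        net_before = psutil.net_io_counters()
        cpu = psutil.cpu_percent(interval=interval_s)   # blocks for interval_s
        net_after = psutil.net_io_counters()
        return {
            "cpu_percent": cpu,
            "ram_percent": psutil.virtual_memory().percent,
            "kb_sent_per_s": (net_after.bytes_sent - net_before.bytes_sent) / 1024 / interval_s,
            "kb_recv_per_s": (net_after.bytes_recv - net_before.bytes_recv) / 1024 / interval_s,
        }

    if __name__ == "__main__":
        for _ in range(5):
            print(sample())
    ```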

  19. Analyzing a suitable elastic geomechanical model for Vaca Muerta Formation

    NASA Astrophysics Data System (ADS)

    Sosa Massaro, Agustin; Espinoza, D. Nicolas; Frydman, Marcelo; Barredo, Silvia; Cuervo, Sergio

    2017-11-01

    Accurate geomechanical evaluation of oil and gas reservoir rocks is important to provide design parameters for drilling, completion and predict production rates. In particular, shale reservoir rocks are geologically complex and heterogeneous. Wells need to be hydraulically fractured for stimulation and, in complex tectonic environments, it is to consider that rock fabric and in situ stress, strongly influence fracture propagation geometry. This article presents a combined wellbore-laboratory characterization of the geomechanical properties of a well in El Trapial/Curamched Field, over the Vaca Muerta Formation, located in the Neuquén Basin in Argentina. The study shows the results of triaxial tests with acoustic measurements in rock plugs from outcrops and field cores, and corresponding dynamic to static correlations considering various elastic models. The models, with increasing complexity, include the Isotropic Elastic Model (IEM), the Anisotropic Elastic Model (AEM) and the Detailed Anisotropic Elastic Model (DAEM). Each model shows advantages over the others. An IEM offers a quick overview, being easy to run without much detailed data for heterogeneous and anisotropic rocks. The DAEM requires significant amounts of data, time and a multidisciplinary team to arrive to a detailed model. Finally, an AEM suits well to an anisotropic and realistic rock without the need of massive amounts of data.

  20. Update on massive transfusion.

    PubMed

    Pham, H P; Shaz, B H

    2013-12-01

    Massive haemorrhage requires massive transfusion (MT) to maintain adequate circulation and haemostasis. For optimal management of massively bleeding patients, regardless of aetiology (trauma, obstetrical, surgical), effective preparation and communication between transfusion and other laboratory services and clinical teams are essential. A well-defined MT protocol is a valuable tool to delineate how blood products are ordered, prepared, and delivered; determine laboratory algorithms to use as transfusion guidelines; and outline duties and facilitate communication between involved personnel. In MT patients, it is crucial to practice damage control resuscitation and to administer blood products early in the resuscitation. Trauma patients are often admitted with early trauma-induced coagulopathy (ETIC), which is associated with mortality; the aetiology of ETIC is likely multifactorial. Current data support that trauma patients treated with higher ratios of plasma and platelet to red blood cell transfusions have improved outcomes, but further clinical investigation is needed. Additionally, tranexamic acid has been shown to decrease the mortality in trauma patients requiring MT. Greater use of cryoprecipitate or fibrinogen concentrate might be beneficial in MT patients from obstetrical causes. The risks and benefits for other therapies (prothrombin complex concentrate, recombinant activated factor VII, or whole blood) are not clearly defined in MT patients. Throughout the resuscitation, the patient should be closely monitored and both metabolic and coagulation abnormalities corrected. Further studies are needed to clarify the optimal ratios of blood products, treatment based on underlying clinical disorder, use of alternative therapies, and integration of laboratory testing results in the management of massively bleeding patients.

  1. Systems resilience for multihazard environments: definition, metrics, and valuation for decision making.

    PubMed

    Ayyub, Bilal M

    2014-02-01

    The United Nations Office for Disaster Risk Reduction reported that the 2011 natural disasters, including the earthquake and tsunami that struck Japan, resulted in $366 billion in direct damages and 29,782 fatalities worldwide. Storms and floods accounted for up to 70% of the 302 natural disasters worldwide in 2011, with earthquakes producing the greatest number of fatalities. Average annual losses in the United States amount to about $55 billion. Enhancing community and system resilience could lead to massive savings through risk reduction and expeditious recovery. The rational management of such reduction and recovery is facilitated by an appropriate definition of resilience and associated metrics. In this article, a resilience definition is provided that meets a set of requirements with clear relationships to the metrics of the relevant abstract notions of reliability and risk. Those metrics also meet logically consistent requirements drawn from measure theory, and provide a sound basis for the development of effective decision-making tools for multihazard environments. Improving the resiliency of a system to meet target levels requires the examination of system enhancement alternatives in economic terms, within a decision-making framework. Relevant decision analysis methods would typically require the examination of resilience based on its valuation by society at large. The article provides methods for valuation and benefit-cost analysis based on concepts from risk analysis and management. © 2013 Society for Risk Analysis.

  2. THE PREVALENCE AND IMPACT OF WOLF–RAYET STARS IN EMERGING MASSIVE STAR CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokal, Kimberly R.; Johnson, Kelsey E.; Indebetouw, Rémy

    We investigate Wolf–Rayet (WR) stars as a source of feedback contributing to the removal of natal material in the early evolution of massive star clusters. Despite previous work suggesting that massive star clusters clear out their natal material before the massive stars evolve into the WR phase, WR stars have been detected in several emerging massive star clusters. These detections suggest that the timescale for clusters to emerge can be at least as long as the time required to produce WR stars (a few million years), and could also indicate that WR stars may be providing the tipping point in the combined feedback processes that drive a massive star cluster to emerge. We explore the potential overlap between the emerging phase and the WR phase with an observational survey to search for WR stars in emerging massive star clusters. We select candidate emerging massive star clusters from known radio continuum sources with thermal emission and obtain optical spectra with the 4 m Mayall Telescope at Kitt Peak National Observatory and the 6.5 m MMT. We identify 21 sources with significantly detected WR signatures, which we term “emerging WR clusters.” WR features are detected in ∼50% of the radio-selected sample, and thus we find that WR stars are commonly present in currently emerging massive star clusters. The observed extinctions and ages suggest that clusters without WR detections remain embedded for longer periods of time, and may indicate that WR stars can aid, and therefore accelerate, the emergence process.

  3. Learning from Massive Distributed Data Sets (Invited)

    NASA Astrophysics Data System (ADS)

    Kang, E. L.; Braverman, A. J.

    2013-12-01

    Technologies for remote sensing and ever-expanding computer experiments in climate science are generating massive data sets. Meanwhile, it has been common in all areas of large-scale science to have these 'big data' distributed over multiple different physical locations, and moving large amounts of data can be impractical. In this talk, we will discuss efficient ways for us to summarize and learn from distributed data. We formulate a graphical model to mimic the main characteristics of a distributed-data network, including the size of the data sets and the speed of moving data. With this nominal model, we investigate the trade-off between prediction accuracy and the cost of data movement, theoretically and through simulation experiments. We will also discuss new implementations of spatial and spatio-temporal statistical methods optimized for distributed data.

  4. Massive spin-2 scattering and asymptotic superluminality

    NASA Astrophysics Data System (ADS)

    Hinterbichler, Kurt; Joyce, Austin; Rosen, Rachel A.

    2018-03-01

    We place model-independent constraints on theories of massive spin-2 particles by considering the positivity of the phase shift in eikonal scattering. The phase shift is an asymptotic S-matrix observable, related to the time delay/advance experienced by a particle during scattering. Demanding the absence of a time advance leads to constraints on the cubic vertices present in the theory. We find that, in theories with massive spin-2 particles, requiring no time advance means that either: (i) the cubic vertices must appear as a particular linear combination of the Einstein-Hilbert cubic vertex and an h_{μν}^3 potential term or (ii) new degrees of freedom or strong coupling must enter at parametrically the mass of the massive spin-2 field. These conclusions have implications for a variety of situations. Applied to theories of large-N QCD, this indicates that any spectrum with an isolated massive spin-2 at the bottom must have these particular cubic self-couplings. Applied to de Rham-Gabadadze-Tolley massive gravity, the constraint is in accord with results obtained from a shockwave calculation: of the two free dimensionless parameters in the theory there is a one-parameter line consistent with a subluminal phase shift.

  5. Biomimetic Models for An Ecological Approach to Massively-Deployed Sensor Networks

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng

    2005-01-01

    Promises of ubiquitous control of the physical environment by massively-deployed wireless sensor networks open avenues for new applications that will redefine the way we live and work. Due to small size and low cost of sensor devices, visionaries promise systems enabled by deployment of massive numbers of sensors ubiquitous throughout our environment working in concert. Recent research has concentrated on developing techniques for performing relatively simple tasks with minimal energy expense, assuming some form of centralized control. Unfortunately, centralized control is not conducive to parallel activities and does not scale to massive size networks. Execution of simple tasks in sparse networks will not lead to the sophisticated applications predicted. We propose a new way of looking at massively-deployed sensor networks, motivated by lessons learned from the way biological ecosystems are organized. We demonstrate that in such a model, fully distributed data aggregation can be performed in a scalable fashion in massively deployed sensor networks, where motes operate on local information, making local decisions that are aggregated across the network to achieve globally-meaningful effects. We show that such architectures may be used to facilitate communication and synchronization in a fault-tolerant manner, while balancing workload and required energy expenditure throughout the network.

  6. Information display: the weak link for NCW

    NASA Astrophysics Data System (ADS)

    Gilger, Mike

    2006-05-01

    The Global Information Grid (GIG) enables the dissemination of real-time data from any sensor/source as well as the distribution of that data immediately to recipients across the globe, resulting in better, faster, and more accurate decisions, reduced operational risk, and a more competitive war-fighting advantage. As a major component of Network Centric Warfare (NCW), the GIG seeks to provide the integrated information infrastructure necessary to connect the robust data streams from ConstellationNet, FORCENet, and LandWarNet to allow Joint Forces to move beyond Situational Awareness and into Situational Understanding. NCW will provide the Joint Forces a common situational understanding, a common operating picture, and any and all information necessary for rapid decision-making. However, with the exception of the 1994 introduction of the Military Standard 2525 "Common Warfighting Symbology," there has been no notable improvement in our ability to display information for accurate and rapid understanding. In fact, one of the notable problems associated with NCW is how to process the massive amount of newly integrated data being thrown at the warfighter: a significant human-machine interface challenge. The solution; a graphical language called GIFIC (Graphical Interface for Information Cognition) that can display thousands of data points simultaneously. Coupled with the new generation COP displays, GIFIC provides for the tremendous amounts of information-display required for effective NCW battlespace awareness requirements, offering instant insight into joint operations, tactical situations, and targeting necessities. GIFIC provides the next level of information-display necessary for a successful NCW, resulting in agile, high-performance, and highly competitive warfighters.

  7. Mineralogical and geochemical characterization of waste rocks from a gold mine in northeastern Thailand: application for environmental impact protection.

    PubMed

    Assawincharoenkij, Thitiphan; Hauzenberger, Christoph; Ettinger, Karl; Sutthirat, Chakkaphan

    2018-02-01

Waste rocks from gold mining in northeastern Thailand are classified as sandstone, siltstone, gossan, skarn, skarn-sulfide, massive sulfide, diorite, and limestone/marble. Among these rocks, skarn-sulfide and massive sulfide rocks have the potential to generate acid mine drainage (AMD) because they contain significant amounts of sulfide minerals, i.e., pyrrhotite, pyrite, arsenopyrite, and chalcopyrite. Moreover, both sulfide rocks present high contents of As and Cu, which are caused by the occurrence of arsenopyrite and chalcopyrite, respectively. Another main concern is the gossan, which is composed of goethite, hydrous ferric oxide (HFO), quartz, gypsum, and oxidized pyroxene. X-ray maps using electron probe micro-analysis (EPMA) indicate the distribution of some toxic elements in Fe-oxyhydroxide minerals in the gossan waste rock. Arsenic (up to 1.37 wt.%) and copper (up to 0.60 wt.%) are found in goethite, HFO, and along the oxidized rim of pyroxene. Therefore, the gossan rock appears to be a source of As, Cu, and Mn. As a result, massive sulfide, skarn-sulfide, and gossan have the potential to cause environmental impacts, particularly AMD and toxic element contamination. Consequently, the massive sulfide and skarn-sulfide waste rocks should be protected from oxygen and water to avoid an oxidizing environment, whereas the gossan waste rocks should be protected from the formation of AMD to prevent heavy metal contamination.

  8. Uncertainties in s-process nucleosynthesis in massive stars determined by Monte Carlo variations

    NASA Astrophysics Data System (ADS)

    Nishimura, N.; Hirschi, R.; Rauscher, T.; St. J. Murphy, A.; Cescutti, G.

    2017-08-01

    The s-process in massive stars produces the weak component of the s-process (nuclei up to A ˜ 90), in amounts that match solar abundances. For heavier isotopes, such as barium, production through neutron capture is significantly enhanced in very metal-poor stars with fast rotation. However, detailed theoretical predictions for the resulting final s-process abundances have important uncertainties caused both by the underlying uncertainties in the nuclear physics (principally neutron-capture reaction and β-decay rates) as well as by the stellar evolution modelling. In this work, we investigated the impact of nuclear-physics uncertainties relevant to the s-process in massive stars. Using a Monte Carlo based approach, we performed extensive nuclear reaction network calculations that include newly evaluated upper and lower limits for the individual temperature-dependent reaction rates. We found that most of the uncertainty in the final abundances is caused by uncertainties in the neutron-capture rates, while β-decay rate uncertainties affect only a few nuclei near s-process branchings. The s-process in rotating metal-poor stars shows quantitatively different uncertainties and key reactions, although the qualitative characteristics are similar. We confirmed that our results do not significantly change at different metallicities for fast rotating massive stars in the very low metallicity regime. We highlight which of the identified key reactions are realistic candidates for improved measurement by future experiments.

  9. The Galactic Distribution of Massive Star Formation from the Red MSX Source Survey

    NASA Astrophysics Data System (ADS)

    Figura, Charles C.; Urquhart, J. S.

    2013-01-01

    Massive stars inject enormous amounts of energy into their environments in the form of UV radiation and molecular outflows, creating HII regions and enriching local chemistry. These effects provide feedback mechanisms that aid in regulating star formation in the region, and may trigger the formation of subsequent generations of stars. Understanding the mechanics of massive star formation presents an important key to understanding this process and its role in shaping the dynamics of galactic structure. The Red MSX Source (RMS) survey is a multi-wavelength investigation of ~1200 massive young stellar objects (MYSO) and ultra-compact HII (UCHII) regions identified from a sample of colour-selected sources from the Midcourse Space Experiment (MSX) point source catalog and Two Micron All Sky Survey. We present a study of over 900 MYSO and UCHII regions investigated by the RMS survey. We review the methods used to determine distances, and investigate the radial galactocentric distribution of these sources in context with the observed structure of the galaxy. The distribution of MYSO and UCHII regions is found to be spatially correlated with the spiral arms and galactic bar. We examine the radial distribution of MYSOs and UCHII regions and find variations in the star formation rate between the inner and outer Galaxy and discuss the implications for star formation throughout the galactic disc.

  10. High Resolution Studies of Mass Loss from Massive Binary Stars

    NASA Astrophysics Data System (ADS)

    Corcoran, Michael F.; Gull, Theodore R.; Hamaguchi, Kenji; Richardson, Noel; Madura, Thomas; Post Russell, Christopher Michael; Teodoro, Mairan; Nichols, Joy S.; Moffat, Anthony F. J.; Shenar, Tomer; Pablo, Herbert

    2017-01-01

Mass loss from hot luminous single and binary stars has a significant, perhaps decisive, effect on their evolution. The combination of X-ray observations of hot shocked gas embedded in the stellar winds and high-resolution optical/UV spectra of the cooler material in the outflow provides unique ways to study the unstable process by which massive stars lose mass, both through continuous stellar winds and through rare, impulsive, large-scale mass ejections. The ability to obtain coordinated observations with the Hubble Space Telescope Imaging Spectrograph (HST/STIS), the Chandra High-Energy Transmission Grating Spectrometer (HETGS) and other X-ray observatories has allowed, for the first time, studies of resolved line emission over the temperature range of 10^4-10^8 K, and has provided observations to confront numerical dynamical models in three dimensions. Such observations advance our knowledge of mass-loss asymmetries, spatial and temporal variabilities, and the fundamental underlying physics of the hot shocked outflow, providing more realistic constraints on the amount of mass lost by different luminous stars in a variety of evolutionary stages. We discuss the impact that these joint observational studies have had on our understanding of dynamical mass outflows from massive stars, with particular emphasis on two important massive binaries: Delta Ori Aa, a linchpin of the mass-luminosity relation for upper HRD main-sequence stars, and the supermassive colliding-wind binary Eta Carinae.

  11. Restrictive Versus Massive Fluid Resuscitation Strategy (REFILL study), influence on blood loss and hemostatic parameters in obstetric hemorrhage: study protocol for a randomized controlled trial.

    PubMed

    de Lange, Natascha; Schol, Pim; Lancé, Marcus; Woiski, Mallory; Langenveld, Josje; Rijnders, Robbert; Smits, Luc; Wassen, Martine; Henskens, Yvonne; Scheepers, Hubertina

    2018-03-06

Postpartum hemorrhage (PPH) is associated with maternal morbidity and mortality and has an increasing incidence in high-resource countries, despite dissemination of guidelines, introduction of skills training, and correction for risk factors. Current guidelines advise the administration, as fluid resuscitation, of almost twice the amount of blood lost. This advice is not evidence-based and could potentially harm patients. All women attending the outpatient clinic who are eligible will be informed of the study; oral and written informed consent will be obtained. Where there is more than 500 ml blood loss and ongoing bleeding, patients will be randomized to either care as usual, i.e. fluid resuscitation with 1.5-2 times the amount of blood loss, or fluid resuscitation with 0.75-1.0 times the blood loss. Blood loss will be assessed by weighing all draping. A blood sample, for determining hemoglobin concentration, hematocrit, thrombocyte concentration, and conventional coagulation parameters, will be taken at the start of the study, after 60 min, and 12-18 h after delivery. In a subgroup of women, additional thromboelastometric parameters will be obtained. Our hypothesis is that massive fluid administration might lead to a progression of bleeding due to secondary coagulation disorders. In non-pregnant individuals with massive blood loss, restrictive fluid management has been shown to prevent a progression to dilution coagulopathy. These data, however, cannot be extrapolated to women in labor. Our objective is to compare both resuscitation protocols in women with early, mild PPH (blood loss 500-750 ml) and ongoing bleeding, taking as the primary outcome measure the progression to severe PPH (blood loss > 1000 ml). Netherlands Trial Register, NTR 3789. Registered on 11 January 2013.

  12. Mineral deposit densities for estimating mineral resources

    USGS Publications Warehouse

    Singer, Donald A.

    2008-01-01

Estimates of numbers of mineral deposits are fundamental to assessing undiscovered mineral resources. Just as frequencies of grades and tonnages of well-explored deposits can be used to represent the grades and tonnages of undiscovered deposits, the density of deposits (deposits/area) in well-explored control areas can serve to represent the number of deposits. Empirical evidence presented here indicates that the processes affecting the number and quantity of resources in geological settings are very general across many types of mineral deposits. For podiform chromite, porphyry copper, and volcanogenic massive sulfide deposit types, the size of tract that geologically could contain the deposits is an excellent predictor of the total number of deposits. The number of mineral deposits is also proportional to the deposit type's size. The total amount of mineralized rock is also proportional to the size of the permissive area and the deposit type's median size. Regressions using these variables provide a means to estimate the density of deposits and the total amount of mineralization. These powerful estimators are based on analysis of ten different types of mineral deposits (Climax Mo, Cuban Mn, Cyprus massive sulfide, Franciscan Mn, kuroko massive sulfide, low-sulfide quartz-Au vein, placer Au, podiform Cr, porphyry Cu, and W vein) from 108 permissive control tracts around the world, thereby generalizing across deposit types. Despite the diverse and complex geological settings of the deposit types studied here, the relationships observed indicate universal controls on the accumulation and preservation of mineral resources that operate across all scales. The strength of the relationships (R^2 = 0.91 for density and 0.95 for mineralized rock) argues for their broad use. Deposit densities can now be used to provide a guideline for expert judgment or used directly for estimating the number of most kinds of mineral deposits.
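
    The regressions described (deposit counts against permissive tract area and the type's median deposit size) can be sketched as an ordinary least-squares fit in log-log space. The numbers below are invented for illustration and do not reproduce the published coefficients or R^2 values.

```python
import numpy as np

# Hypothetical control tracts: area (km^2), median deposit tonnage (Mt), observed deposit count.
area = np.array([1e3, 5e3, 2e4, 8e4, 3e5])
median_tonnage = np.array([5.0, 2.0, 10.0, 1.0, 4.0])
n_deposits = np.array([3, 8, 20, 35, 120])

# Fit log10(N) = a + b*log10(area) + c*log10(median tonnage) by least squares.
X = np.column_stack([np.ones_like(area), np.log10(area), np.log10(median_tonnage)])
coef, *_ = np.linalg.lstsq(X, np.log10(n_deposits), rcond=None)

predicted = 10 ** (X @ coef)  # deposit counts implied by the fitted power law
print("coefficients:", coef)
print("predicted deposit counts:", predicted.round(1))
```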

  13. Composite Material Testing Data Reduction to Adjust for the Systematic 6-DOF Testing Machine Aberrations

    Treesearch

Athanasios Iliopoulos; John G. Michopoulos; John G. C. Hermanson

    2012-01-01

This paper describes a data reduction methodology for eliminating the systematic aberrations introduced by the unwanted behavior of a multiaxial testing machine into the massive amounts of experimental data collected from testing of composite material coupons. The machine in reference is a custom-made 6-DoF system called NRL66.3, developed at the NAval...

  14. Calcium Oxalate Accumulation in Malpighian Tubules of Silkworm (Bombyx mori)

    NASA Astrophysics Data System (ADS)

    Wyman, Aaron J.; Webb, Mary Alice

    2007-04-01

    Silkworm provides an ideal model system for study of calcium oxalate crystallization in kidney-like organs, called Malpighian tubules. During their growth and development, silkworm larvae accumulate massive amounts of calcium oxalate crystals in their Malpighian tubules with no apparent harm to the organism. This manuscript reports studies of crystal structure in the tubules along with analyses identifying molecular constituents of tubule exudate.

  15. Characterization of the Gut Microbiome Using 16S or Shotgun Metagenomics

    PubMed Central

    Jovel, Juan; Patterson, Jordan; Wang, Weiwei; Hotte, Naomi; O'Keefe, Sandra; Mitchel, Troy; Perry, Troy; Kao, Dina; Mason, Andrew L.; Madsen, Karen L.; Wong, Gane K.-S.

    2016-01-01

The advent of next generation sequencing (NGS) has enabled investigations of the gut microbiome with unprecedented resolution and throughput. This has stimulated the development of sophisticated bioinformatics tools to analyze the massive amounts of data generated. Researchers therefore need a clear understanding of the key concepts required for the design, execution and interpretation of NGS experiments on microbiomes. We conducted a literature review and used our own data to determine which approaches work best. The two main approaches for analyzing the microbiome, 16S ribosomal RNA (rRNA) gene amplicons and shotgun metagenomics, are illustrated with analyses of libraries designed to highlight their strengths and weaknesses. Several methods for taxonomic classification of bacterial sequences are discussed. We present simulations to assess the number of sequences that are required to perform reliable appraisals of bacterial community structure. To the extent that fluctuations in the diversity of gut bacterial populations correlate with health and disease, we emphasize various techniques for the analysis of bacterial communities within samples (α-diversity) and between samples (β-diversity). Finally, we demonstrate techniques to infer the metabolic capabilities of a bacterial community from these 16S and shotgun data. PMID:27148170
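
    A minimal sketch of the within-sample (α) and between-sample (β) diversity measures discussed, computed from a toy taxon-count table using the Shannon index and Bray-Curtis dissimilarity; the counts are invented for illustration and are not data from the study.

```python
import numpy as np

def shannon(counts):
    """Shannon α-diversity of one sample from raw taxon counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def bray_curtis(a, b):
    """Bray-Curtis β-diversity (dissimilarity) between two samples."""
    return np.abs(a - b).sum() / (a + b).sum()

# Toy taxon count table: rows are samples, columns are taxa.
table = np.array([[120, 30, 0, 5],
                  [ 80, 60, 10, 0]])
print("alpha:", [round(shannon(s), 3) for s in table])
print("beta (sample 0 vs 1):", round(bray_curtis(table[0], table[1]), 3))
```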

  16. An automated workflow for parallel processing of large multiview SPIM recordings

    PubMed Central

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-01-01

Summary: Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, carried out interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585
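
    The key property the pipeline exploits is that each time point can be processed independently, making the workflow embarrassingly parallel. The actual implementation uses snakemake to resolve step dependencies; the sketch below only illustrates the per-time-point parallelism with the Python standard library, and the processing function is a placeholder, not the real registration/fusion calls.

```python
from concurrent.futures import ProcessPoolExecutor

def process_timepoint(tp):
    """Placeholder for one independent chain of SPIM processing steps
    (e.g. registration, fusion, deconvolution) applied to a single time point."""
    return f"timepoint {tp} processed"

if __name__ == "__main__":
    timepoints = range(8)  # a hypothetical 8-time-point recording
    # Each time point is independent, so the work can be farmed out to local
    # workers or, with a scheduler such as snakemake, to an HPC cluster.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(process_timepoint, timepoints):
            print(result)
```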

  17. Dynamic Load Balancing for Grid Partitioning on a SP-2 Multiprocessor: A Framework

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Simon, Horst; Lasinski, T. A. (Technical Monitor)

    1994-01-01

Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance among processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated as the decision maker, Jove, while the others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove after a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while the other processors continue working with the current data and load distribution. Jove goes through several steps to decide if the new data should be taken, including preliminary evaluation, partitioning, processor reassignment, cost evaluation, and decision. Jove running on a single IBM SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine, the IBM SP2.
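
    A schematic of the Jove decision cycle described above: evaluate the current imbalance, compute a candidate repartition, weigh the redistribution cost against the expected gain, then accept or reject. The imbalance metric, cost model and threshold below are illustrative assumptions, not the paper's actual formulas.

```python
def imbalance(loads):
    """Ratio of the heaviest processor load to the average load."""
    return max(loads) / (sum(loads) / len(loads))

def propose_partition(loads):
    """Placeholder repartition: level all loads to the mean."""
    mean = sum(loads) / len(loads)
    return [mean] * len(loads)

def jove_decide(loads, move_cost_per_unit=0.1, threshold=1.2):
    """Accept the new partition only if the expected gain outweighs
    the cost of moving data between processors."""
    if imbalance(loads) < threshold:
        return loads, "keep current distribution"
    candidate = propose_partition(loads)
    data_moved = sum(abs(a - b) for a, b in zip(loads, candidate)) / 2
    gain = max(loads) - max(candidate)
    if gain > move_cost_per_unit * data_moved:
        return candidate, "rebalance"
    return loads, "keep current distribution"

print(jove_decide([120.0, 80.0, 60.0, 140.0]))  # imbalanced, so it rebalances
```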

  18. Dynamic Load Balancing For Grid Partitioning on a SP-2 Multiprocessor: A Framework

    NASA Technical Reports Server (NTRS)

    Sohn, Andrew; Simon, Horst; Lasinski, T. A. (Technical Monitor)

    1994-01-01

Computational requirements of full scale computational fluid dynamics change as computation progresses on a parallel machine. The change in computational intensity causes workload imbalance among processors, which in turn requires a large amount of data movement at runtime. If parallel CFD is to be successful on a parallel or massively parallel machine, balancing of the runtime load is indispensable. Here a framework is presented for dynamic load balancing for CFD applications, called Jove. One processor is designated as the decision maker, Jove, while the others are assigned to computational fluid dynamics. Processors running CFD send flags to Jove after a predetermined number of iterations to initiate load balancing. Jove starts working on load balancing while the other processors continue working with the current data and load distribution. Jove goes through several steps to decide if the new data should be taken, including preliminary evaluation, partitioning, processor reassignment, cost evaluation, and decision. Jove running on a single IBM SP2 node has been completely implemented. Preliminary experimental results show that the Jove approach to dynamic load balancing can be effective for full scale grid partitioning on the target machine, the IBM SP2.

  19. An automated workflow for parallel processing of large multiview SPIM recordings.

    PubMed

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-04-01

Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, carried out interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multiillumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. The code is distributed free and open source under the MIT license (http://opensource.org/licenses/MIT). The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  20. Bone marrow cavity segmentation using graph-cuts with wavelet-based texture feature.

    PubMed

    Shigeta, Hironori; Mashita, Tomohiro; Kikuta, Junichi; Seno, Shigeto; Takemura, Haruo; Ishii, Masaru; Matsuda, Hideo

    2017-10-01

Emerging bioimaging technologies enable us to capture various dynamic cellular activities. As large amounts of data are obtained these days and it is becoming unrealistic to manually process massive numbers of images, automatic analysis methods are required. One of the issues for automatic image segmentation is that image-taking conditions are variable; thus, many manual inputs are commonly required for each image. In this paper, we propose a bone marrow cavity (BMC) segmentation method for bone images, as the BMC is considered to be related to the mechanisms of bone remodeling, osteoporosis, and so on. To reduce the manual inputs needed to segment the BMC, we classified the texture pattern using wavelet transformation and a support vector machine. We also integrated the result of texture pattern classification into the graph-cuts-based image segmentation method because texture analysis does not consider spatial continuity. Our method is applicable to a particular frame in an image sequence in which the condition of the fluorescent material is variable. In the experiment, we evaluated our method with nine types of mother wavelets and several sets of scale parameters. The proposed method with graph-cuts and texture pattern classification performs well without manual input by the user.
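
    A minimal sketch of the texture-classification half of such a method: wavelet-decompose an image patch, summarize each sub-band by its energy, and feed the features to an SVM (using PyWavelets and scikit-learn). The patch size, wavelet choice and labels are illustrative, and the graph-cuts step that enforces spatial continuity is omitted.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_features(patch, wavelet="db2", level=2):
    """Energy of each wavelet sub-band as a simple texture descriptor."""
    coeffs = pywt.wavedec2(patch, wavelet, level=level)
    feats = [np.mean(coeffs[0] ** 2)]            # approximation band
    for detail in coeffs[1:]:                    # (horizontal, vertical, diagonal) bands
        feats.extend(np.mean(band ** 2) for band in detail)
    return np.array(feats)

# Toy training data: smooth patches (label 0) vs strongly textured patches (label 1).
rng = np.random.default_rng(0)
patches = [rng.normal(0, 0.05, (32, 32)) for _ in range(20)] + \
          [rng.normal(0, 1.0, (32, 32)) for _ in range(20)]
labels = [0] * 20 + [1] * 20

X = np.array([wavelet_energy_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict([wavelet_energy_features(rng.normal(0, 1.0, (32, 32)))]))  # expected: [1]
```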

  1. Shifting from Stewardship to Analytics of Massive Science Data

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Doyle, R.; Law, E.; Hughes, S.; Huang, T.; Mahabal, A.

    2015-12-01

Currently, the analysis of large data collections is executed through traditional computational and data analysis approaches, which require users to bring data to their desktops and perform local data analysis. Data collection, archiving and analysis from future remote sensing missions, be it from earth science satellites, planetary robotic missions, or massive radio observatories, may not scale as more capable instruments stress existing architectural approaches and systems due to more continuous data streams, data from multiple observational platforms, and measurements and models from different agencies. A new paradigm is needed in order to increase the productivity and effectiveness of scientific data analysis. This paradigm must recognize that architectural choices, data processing, management, analysis, etc. are interrelated, and must be carefully coordinated in any system that aims to allow efficient, interactive scientific exploration and discovery to exploit massive data collections. Future observational systems, including satellite and airborne experiments, and research in climate modeling will significantly increase the size of the data, requiring new methodological approaches towards data analytics in which users can more effectively interact with the data and apply automated mechanisms for data reduction and fusion across these massive data repositories. This presentation will discuss architecture, use cases, and approaches for developing a big data analytics strategy across multiple science disciplines.

  2. Metal release from stainless steel powders and massive sheets--comparison and implication for risk assessment of alloys.

    PubMed

    Hedberg, Yolanda; Mazinanian, Neda; Odnevall Wallinder, Inger

    2013-02-01

    Industries that place metal and alloy products on the market are required to demonstrate that they are safe for all intended uses, and that any risks to humans, animals or the environment are adequately controlled. This requires reliable and robust in vitro test procedures. The aim of this study is to compare the release of alloy constituents from stainless steel powders of different grades (focus on AISI 316L) and production routes into synthetic body fluids with the release of the same metals from massive sheets in relation to material and surface characteristics. The comparison is justified by the fact that the difference between massive surfaces and powders from a metal release/dissolution and surface perspective is not clearly elucidated within current legislations. Powders and abraded and aged (24 h) massive sheets were exposed to synthetic solutions of relevance for biological settings and human exposure routes, for periods of up to one week. Concentrations of released iron, chromium, nickel, and manganese in solution were measured, and the effect of solution pH, acidity, complexation capacity, and proteins elucidated in relation to surface oxide composition and its properties. Implications for risk assessments based on in vitro metal release data from alloys are elucidated.

  3. Malignant phyllodes tumour presenting as a massive fungating breast mass and silent thrombo-embolism

    PubMed Central

    Bourke, Anita G.; McCreanor, Madeleine; Yeo, Allen; Weber, Dieter; Bartlett, Anthony; Backhouse, Anastasia

    2015-01-01

Introduction: We report an unusual case of a massive malignant phyllodes tumour that had almost replaced the entire breast, presenting with severe chronic blood loss, extensive deep venous thrombosis (DVT) and a silent pulmonary embolus. Presentation: A long-standing, neglected, massive fungating ulcerative mass larger than the left hemithorax. Discussion: Phyllodes tumours are rare fibro-epithelial breast lesions that have the propensity to grow rapidly to a large size if neglected. Larger tumours are more likely to be malignant, with an overall metastatic rate around 10%. An incidental pulmonary embolus arising from extensive silent lower limb deep vein thrombosis requiring an IVC filter complicated the surgical management. Conclusion: Phyllodes tumours are rare and account for approximately 0.3–0.5% of all breast tumours [1]. They have the propensity to be fast growing. However, tumours reaching a massive size (>10 cm) are rare, with few reports in the literature. PMID:25734318

  4. Nucleosynthesis in the first massive stars

    NASA Astrophysics Data System (ADS)

    Choplin, Arthur; Meynet, Georges; Maeder, André; Hirschi, Raphael; Chiappini, Cristina

    2018-01-01

The nucleosynthesis in the first massive stars may be constrained by observing the surface composition of long-lived very iron-poor stars born around 10 billion years ago from material enriched by their ejecta. Many interesting clues about the physical processes that occurred in the first stars can be obtained from nuclear aspects. First, in these first massive stars, mixing must have occurred between the H-burning and the He-burning zone during their nuclear lifetimes; second, only the outer layers of these massive stars have enriched the material from which the very iron-poor stars, observed today in the halo of the Milky Way, have formed. These two basic requirements can be obtained with rotating stellar models at very low metallicity. In the present paper, we discuss the arguments supporting this view and illustrate the sensitivity of the results concerning the [Mg/Al] ratio to the rate of the 23Na(p,γ)24Mg reaction.

  5. Outcomes after surgical pulmonary embolectomy for acute submassive and massive pulmonary embolism: A single-center experience.

    PubMed

    Pasrija, Chetan; Kronfli, Anthony; Rouse, Michael; Raithel, Maxwell; Bittle, Gregory J; Pousatis, Sheelagh; Ghoreishi, Mehrdad; Gammie, James S; Griffith, Bartley P; Sanchez, Pablo G; Kon, Zachary N

    2018-03-01

    Ideal treatment strategies for submassive and massive pulmonary embolism remain unclear. Recent reports of surgical pulmonary embolectomy have demonstrated improved outcomes, but surgical technique and postoperative outcomes continue to be refined. The aim of this study is to describe in-hospital survival and right ventricular function after surgical pulmonary embolectomy for submassive and massive pulmonary embolism with excessive predicted mortality (≥5%). All patients undergoing surgical pulmonary embolectomy (2011-2015) were retrospectively reviewed. Patients with pulmonary embolism were stratified as submassive, massive without arrest, and massive with arrest. Submassive was defined as normotensive with right ventricular dysfunction. Massive was defined as prolonged hypotension due to the pulmonary embolism. Preoperative demographics, intraoperative variables, and postoperative outcomes were compared. A total of 55 patients were identified: 28 as submassive, 18 as massive without arrest, and 9 as massive with arrest. All patients had a right ventricle/left ventricle ratio greater than 1.0. Right ventricular dysfunction decreased from moderate preoperatively to none before discharge (P < .001). In-hospital and 1-year survival were 93% and 91%, respectively, with 100% survival in the submassive group. No patients developed renal failure requiring hemodialysis at discharge or had a postoperative stroke. In this single institution experience, surgical pulmonary embolectomy is a safe and effective therapy to treat patients with a submassive or massive pulmonary embolism. Although survival in this study is higher than previously reported for patients treated with medical therapy alone, a prospective trial comparing surgical therapy with medical therapy is necessary to further elucidate the role of surgical pulmonary embolectomy in the treatment of pulmonary embolism. Copyright © 2017 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  6. Highly accurate quantitative spectroscopy of massive stars in the Galaxy

    NASA Astrophysics Data System (ADS)

    Nieva, María-Fernanda; Przybilla, Norbert

    2017-11-01

Achieving high accuracy and precision in stellar parameter and chemical composition determinations is challenging in massive star spectroscopy. On the one hand, target selection for an unbiased sample build-up is complicated by several types of peculiarities that can occur in individual objects. On the other hand, composite spectra are often not recognized as such, even at medium-high spectral resolution and typical signal-to-noise ratios, despite the fact that multiplicity among massive stars is widespread. In particular, surveys that produce large amounts of automatically reduced data are prone to overlook details that become hazardous for analyses with techniques developed for a set of standard assumptions applicable to the spectrum of a single star. Much larger systematic errors than anticipated may therefore result because of the unrecognized true nature of the investigated objects, or much smaller sample sizes of objects for the analysis than initially planned, if recognized. Further factors to be taken care of are the multiple steps from the choice of instrument, through the details of the data reduction chain, to the choice of modelling code, input data, analysis technique and the selection of the spectral lines to be analyzed. Only when all possible pitfalls are avoided can a precise and accurate characterization of the stars in terms of fundamental parameters and chemical fingerprints be achieved, forming the basis for further investigations regarding e.g. stellar structure and evolution or the chemical evolution of the Galaxy. The scope of the present work is to provide the massive star and other astrophysical communities with criteria to evaluate the quality of spectroscopic investigations of massive stars before interpreting them in a broader context. The discussion is guided by our experience gained in the course of over a decade of studies of massive star spectroscopy, ranging from the simplest single objects to multiple systems.

  7. Luminous blue variables and the fates of very massive stars.

    PubMed

    Smith, Nathan

    2017-10-28

Luminous blue variables (LBVs) had long been considered massive stars in transition to the Wolf-Rayet (WR) phase, so their identification as progenitors of some peculiar supernovae (SNe) was surprising. More recently, environment statistics of LBVs show that most of them cannot be in transition to the WR phase after all, because LBVs are more isolated than allowed in this scenario. Additionally, the high-mass H shells around luminous SNe IIn require that some very massive stars above 40 M⊙ die without shedding their H envelopes, and the precursor outbursts are a challenge for understanding the final burning sequences leading to core collapse. Recent evidence suggests a clear continuum in pre-SN mass loss from super-luminous SNe IIn, to regular SNe IIn, to SNe II-L and II-P, whereas most stripped-envelope SNe seem to arise from a separate channel of lower-mass binary stars rather than massive WR stars. This article is part of the themed issue 'Bridging the gap: from massive stars to supernovae'. © 2017 The Author(s).

  8. Initial conditions for accurate N-body simulations of massive neutrino cosmologies

    NASA Astrophysics Data System (ADS)

    Zennaro, M.; Bel, J.; Villaescusa-Navarro, F.; Carbone, C.; Sefusatti, E.; Guzzo, L.

    2017-04-01

The set-up of the initial conditions in cosmological N-body simulations is usually implemented by rescaling the desired low-redshift linear power spectrum to the required starting redshift consistently with the Newtonian evolution of the simulation. The implementation of this practical solution requires more care in the context of massive neutrino cosmologies, mainly because of the non-trivial scale-dependence of the linear growth that characterizes these models. In this work, we consider a simple two-fluid, Newtonian approximation for cold dark matter and massive neutrino perturbations that can reproduce the cold matter linear evolution predicted by Boltzmann codes such as CAMB or CLASS with a 0.1 per cent accuracy or better for all redshifts relevant to non-linear structure formation. We use this description, in the first place, to quantify the systematic errors induced by several approximations often assumed in numerical simulations, including the typical set-up of the initial conditions for massive neutrino cosmologies adopted in previous works. We then take advantage of the flexibility of this approach to rescale the late-time linear power spectra to the simulation initial redshift, in order to be as consistent as possible with the dynamics of the N-body code and the approximations it assumes. We implement our method in a public code (REPS: rescaled power spectra for initial conditions with massive neutrinos; https://github.com/matteozennaro/reps) providing the initial displacements and velocities for cold dark matter and neutrino particles that will allow accurate, i.e. 1 per cent level, numerical simulations for this cosmological scenario.
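
    Schematically, the rescaling step amounts to multiplying the low-redshift linear power spectrum, mode by mode, by the square of a scale-dependent growth-factor ratio. A toy illustration follows; the spectrum and growth factors below are made up, whereas the real two-fluid growth factors come from the authors' REPS code, not from this snippet.

```python
import numpy as np

def rescale_to_initial_redshift(p_target, growth_at_target, growth_at_zini):
    """Per-k rescaling: P(k, z_ini) = P(k, z_target) * [D(k, z_ini) / D(k, z_target)]^2.
    With massive neutrinos D depends on k, so the ratio must stay scale-dependent."""
    return p_target * (growth_at_zini / growth_at_target) ** 2

k = np.logspace(-3, 1, 5)                      # h/Mpc, illustrative sampling
p_z0 = 1e4 * k / (1 + (k / 0.02) ** 2)         # made-up target-redshift spectrum
d_z0 = np.ones_like(k)                         # growth normalized to 1 at the target redshift
d_zini = 0.02 * (1 - 0.05 * np.log10(1 + k))   # toy scale-dependent suppression at z_ini
print(rescale_to_initial_redshift(p_z0, d_z0, d_zini))
```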

  9. [Traumatic rupture of the pericardium--the source of massive haemothorax, a case report].

    PubMed

    Hromádka, P; Skach, J; Cernohorský, S; Krivohlávek, M; Gaalová, R

    2011-05-01

Extensive traumatic haemothorax is a life-threatening condition that requires the surgeon's resolute approach. Massive bleeding may first lead to hypovolaemic shock and then to haemorrhagic shock. The most common sources are bleeding from the chest wall (intercostal artery) and bleeding when the lung parenchyma or major intrathoracic vessels are injured. This case report describes a rare case of massive right-sided haemothorax in pericardial rupture with cardiac herniation in a patient with polytrauma, in which the source of bleeding was an artery of the pericardium. The report draws attention to the treacherousness of the diagnosis in a polytraumatised patient and retrospectively evaluates the interpretation of the imaging examinations that were carried out.

  10. A Mechanical Model of Brownian Motion for One Massive Particle Including Slow Light Particles

    NASA Astrophysics Data System (ADS)

    Liang, Song

    2018-01-01

We provide a connection between Brownian motion and a classical mechanical system. Precisely, we consider a system of one massive particle interacting with an ideal gas, evolved according to non-random mechanical principles via interaction potentials, without any assumption requiring that the initial velocities of the environmental particles be "fast enough". We prove the convergence of the (position, velocity)-process of the massive particle under a certain scaling limit, such that the mass of the environmental particles converges to 0 while their density and velocities go to infinity, and we give the precise expression of the limiting process, a diffusion process.

  11. The nature of ultra-massive lens galaxies

    NASA Astrophysics Data System (ADS)

    Canameras, Raoul

    2017-08-01

During the past decade, strong gravitational lensing analyses have contributed tremendously to the characterization of the inner properties of massive early-type galaxies beyond the local Universe. Here we intend to extend studies of this kind to the most massive lens galaxies known to date, well outside the mass limits investigated by previous lensing surveys. This will allow us to probe the physics of the likely descendants of the most violent episodes of star formation and of the compact massive galaxies at high redshift. We propose WFC3 imaging (F438W and F160W) of four extremely massive early-type lens galaxies at z ~ 0.5, in order to put them into context with the evolutionary trends of ellipticals as a function of mass and redshift. These systems were discovered in the SDSS and show a single main lens galaxy with a stellar mass above 1.5×10^12 Msun and large Einstein radii. Our high-resolution spectroscopic follow-up with VLT/X-shooter provides secure lens and source redshifts, between 0.3 and 0.7 and between 1.5 and 2.5, respectively, and confirms extreme stellar velocity dispersions > 400 km/s for the lenses. The excellent angular resolution of the proposed WFC3 imaging - not achievable from the ground - is the remaining indispensable piece of information to: (1) Resolve the lens structural parameters and obtain robust measurements of their stellar mass distributions, (2) Model the amount and distribution of the lens total masses and measure their M/L ratios and stellar IMF with joint strong lensing and stellar dynamics analyses, (3) Enhance our on-going lens models through the most accurate positions and morphologies of the blue multiply-imaged sources.

  12. Data Sharing in DHT Based P2P Systems

    NASA Astrophysics Data System (ADS)

    Roncancio, Claudia; Del Pilar Villamil, María; Labbé, Cyril; Serrano-Alvarado, Patricia

The evolution of peer-to-peer (P2P) systems triggered the building of large scale distributed applications. The main application domain is data sharing across a very large number of highly autonomous participants. Building such data sharing systems is particularly challenging because of the “extreme” characteristics of P2P infrastructures: massive distribution, high churn rate, no global control, potentially untrusted participants... This article focuses on declarative querying support, query optimization and data privacy in a major class of P2P systems: those based on Distributed Hash Tables (P2P DHT). The usual approaches and the algorithms used by classic distributed systems and databases for providing data privacy and querying services are not well suited to P2P DHT systems. A considerable amount of work was required to adapt them to the new challenges such systems present. This paper describes the most important solutions found. It also identifies important future research trends in data management in P2P DHT systems.
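
    A minimal sketch of the P2P DHT idea the article builds on: keys are hashed onto an identifier ring and each key is stored on the node whose identifier follows it (consistent hashing). The node count and lookup rule are simplified assumptions and do not correspond to any particular DHT protocol.

```python
import hashlib
from bisect import bisect_right

def ring_id(value, bits=16):
    """Map a string onto a 2^bits identifier ring."""
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (1 << bits)

class ToyDHT:
    def __init__(self, node_names):
        self.nodes = sorted((ring_id(n), n) for n in node_names)
        self.store = {name: {} for name in node_names}

    def responsible(self, key):
        """Successor rule: the first node clockwise from the key's position."""
        ids = [i for i, _ in self.nodes]
        idx = bisect_right(ids, ring_id(key)) % len(self.nodes)
        return self.nodes[idx][1]

    def put(self, key, value):
        self.store[self.responsible(key)][key] = value

    def get(self, key):
        return self.store[self.responsible(key)].get(key)

dht = ToyDHT(["nodeA", "nodeB", "nodeC", "nodeD"])
dht.put("song:42", "metadata blob")
print(dht.get("song:42"), "held by", dht.responsible("song:42"))
```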

  13. Integration of Molecular Networking and In-Silico MS/MS Fragmentation for Natural Products Dereplication.

    PubMed

    Allard, Pierre-Marie; Péresse, Tiphaine; Bisson, Jonathan; Gindro, Katia; Marcourt, Laurence; Pham, Van Cuong; Roussi, Fanny; Litaudon, Marc; Wolfender, Jean-Luc

    2016-03-15

    Dereplication represents a key step for rapidly identifying known secondary metabolites in complex biological matrices. In this context, liquid-chromatography coupled to high resolution mass spectrometry (LC-HRMS) is increasingly used and, via untargeted data-dependent MS/MS experiments, massive amounts of detailed information on the chemical composition of crude extracts can be generated. An efficient exploitation of such data sets requires automated data treatment and access to dedicated fragmentation databases. Various novel bioinformatics approaches such as molecular networking (MN) and in-silico fragmentation tools have emerged recently and provide new perspective for early metabolite identification in natural products (NPs) research. Here we propose an innovative dereplication strategy based on the combination of MN with an extensive in-silico MS/MS fragmentation database of NPs. Using two case studies, we demonstrate that this combined approach offers a powerful tool to navigate through the chemistry of complex NPs extracts, dereplicate metabolites, and annotate analogues of database entries.
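
    A minimal sketch of the molecular-networking step: bin each MS/MS spectrum, compute pairwise cosine similarity, and connect spectra above a threshold into a network. The spectra, bin width and threshold are illustrative; real MN implementations such as GNPS use a modified cosine that accounts for precursor mass shifts, which is omitted here.

```python
import numpy as np

def bin_spectrum(peaks, bin_width=1.0, max_mz=500.0):
    """Turn (m/z, intensity) pairs into a fixed-length, unit-norm intensity vector."""
    vec = np.zeros(int(max_mz / bin_width))
    for mz, intensity in peaks:
        vec[int(mz / bin_width)] += intensity
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def cosine(a, b):
    return float(np.dot(a, b))

# Toy spectra: lists of (m/z, intensity) peaks.
spectra = {
    "A": [(105.0, 50), (161.1, 100), (289.2, 80)],
    "B": [(105.0, 40), (161.1, 90), (303.2, 70)],   # hypothetical analogue of A
    "C": [(77.0, 60), (210.5, 100)],                # unrelated compound
}
vectors = {name: bin_spectrum(p) for name, p in spectra.items()}
edges = [(i, j, cosine(vectors[i], vectors[j]))
         for i in spectra for j in spectra if i < j]
network = [(i, j, round(s, 2)) for i, j, s in edges if s > 0.6]
print(network)  # A and B link together; C stays a singleton
```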

  14. Simulating Gravitational Wave Emission from Massive Black Hole Binaries

    NASA Technical Reports Server (NTRS)

    Centrella, Joan

    2008-01-01

The final merger of two black holes releases a tremendous amount of energy and is one of the brightest sources in the gravitational wave sky. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of very strong gravitational fields, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute these waveforms using the methods of numerical relativity. The resulting computer codes have been plagued by instabilities, causing them to crash well before the black holes in the binary could complete even a single orbit. In the past few years, this situation has changed dramatically, with a series of amazing breakthroughs. This talk will focus on the recent advances that are revealing these waveforms, highlighting their astrophysical consequences and the dramatic new potential for discovery that will arise when merging black holes are observed using gravitational waves.

  15. Oil sands mining and reclamation cause massive loss of peatland and stored carbon

    PubMed Central

    Rooney, Rebecca C.; Bayley, Suzanne E.; Schindler, David W.

    2012-01-01

    We quantified the wholesale transformation of the boreal landscape by open-pit oil sands mining in Alberta, Canada to evaluate its effect on carbon storage and sequestration. Contrary to claims made in the media, peatland destroyed by open-pit mining will not be restored. Current plans dictate its replacement with upland forest and tailings storage lakes, amounting to the destruction of over 29,500 ha of peatland habitat. Landscape changes caused by currently approved mines will release between 11.4 and 47.3 million metric tons of stored carbon and will reduce carbon sequestration potential by 5,734–7,241 metric tons C/y. These losses have not previously been quantified, and should be included with the already high estimates of carbon emissions from oil sands mining and bitumen upgrading. A fair evaluation of the costs and benefits of oil sands mining requires a rigorous assessment of impacts on natural capital and ecosystem services. PMID:22411786

  16. Molecular diagnostics in gastric cancer.

    PubMed

    Bornschein, Jan; Leja, Marcis; Kupcinskas, Juozas; Link, Alexander; Weaver, Jamie; Rugge, Massimo; Malfertheiner, Peter

    2014-01-01

Despite recent advances in individualised targeted therapy, gastric cancer remains one of the most challenging diseases in gastrointestinal oncology. Modern imaging techniques using endoscopic filter devices and in vivo molecular imaging are designed to enable early detection of the cancer and surveillance of patients at risk. Molecular characterisation of the tumour itself, as well as of the surrounding inflammatory environment, has become more sophisticated in view of tailored therapies and individual prognostic assessment. The broad application of high-throughput techniques for the description of genome-wide patterns of structural (copy number aberrations, single nucleotide polymorphisms, methylation pattern) and functional (gene expression profiling, proteomics, miRNA) alterations in the cancer tissue leads not only to a better understanding of the tumour biology but also to a description of gastric cancer subtypes independent of classical stratification systems. Biostatistical means are required for the interpretation of the massive amount of data generated by these approaches. In this review we give an overview of the current knowledge of diagnostic methods for the detection, description and understanding of gastric cancer disease.

  17. High-content screening for the discovery of pharmacological compounds: advantages, challenges and potential benefits of recent technological developments.

    PubMed

    Soleilhac, Emmanuelle; Nadon, Robert; Lafanechere, Laurence

    2010-02-01

    Screening compounds with cell-based assays and microscopy image-based analysis is an approach currently favored for drug discovery. Because of its high information yield, the strategy is called high-content screening (HCS). This review covers the application of HCS in drug discovery and also in basic research of potential new pathways that can be targeted for treatment of pathophysiological diseases. HCS faces several challenges, however, including the extraction of pertinent information from the massive amount of data generated from images. Several proposed approaches to HCS data acquisition and analysis are reviewed. Different solutions from the fields of mathematics, bioinformatics and biotechnology are presented. Potential applications and limits of these recent technical developments are also discussed. HCS is a multidisciplinary and multistep approach for understanding the effects of compounds on biological processes at the cellular level. Reliable results depend on the quality of the overall process and require strong interdisciplinary collaborations.

  18. Metagenomics of rumen bacteriophage from thirteen lactating dairy cattle

    PubMed Central

    2013-01-01

Background: The bovine rumen hosts a diverse and complex community of Eukarya, Bacteria, Archaea and viruses (including bacteriophage). The rumen viral population (the rumen virome) has received little attention compared to the rumen microbial population (the rumen microbiome). We used massively parallel sequencing of virus-like particles to investigate the diversity of the rumen virome in thirteen lactating Australian Holstein dairy cattle, all housed in the same location, 12 of which were sampled on the same day. Results: Fourteen putative viral sequence fragments over 30 kbp in length were assembled and annotated. Many of the putative genes in the assembled contigs showed no homology to previously annotated genes, highlighting the large amount of work still required to fully annotate the functions encoded in viral genomes. The abundance of the contig sequences varied widely between animals, even though the cattle were of the same age, stage of lactation and fed the same diets. Additionally, the twelve animals which were co-habited shared a number of their dominant viral contigs. We compared the functional characteristics of our bovine viromes with those of other viromes, as well as rumen microbiomes. At the functional level, we found strong similarities between all of the viral samples, which were highly distinct from the rumen microbiome samples. Conclusions: Our findings suggest a large amount of between-animal variation in the bovine rumen virome and that co-habiting animals may have more similar viromes than non-co-habited animals. We report the deepest sequencing to date of the rumen virome. This work highlights the enormous amount of novelty and variation present in the rumen virome. PMID:24180266

  19. TransAtlasDB: an integrated database connecting expression data, metadata and variants

    PubMed Central

    Adetunji, Modupeore O; Lamont, Susan J; Schmidt, Carl J

    2018-01-01

High-throughput transcriptome sequencing (RNAseq) is the universally applied method for target-free transcript identification and gene expression quantification, generating huge amounts of data. The difficulty of accessing such data and interpreting results can be a major impediment to postulating suitable hypotheses; thus, an innovative storage solution that addresses limitations such as hard disk storage requirements, efficiency and reproducibility is paramount. By offering a uniform data storage and retrieval mechanism, various data can be compared and easily investigated. We present a sophisticated system, TransAtlasDB, which incorporates a hybrid architecture of both relational and NoSQL databases for fast and efficient data storage, processing and querying of large datasets from transcript expression analysis with corresponding metadata, as well as gene-associated variants (such as SNPs) and their predicted gene effects. TransAtlasDB provides a data model for accurate storage of the large amount of data derived from RNAseq analysis, as well as methods for interacting with the database, either via command-line data management workflows, written in Perl, with useful functionalities that simplify the storage and manipulation of the massive amounts of data generated from RNAseq analysis, or through the web interface. The database application is currently modeled to handle analysis data from agricultural species, and will be expanded to include more species groups. Overall, TransAtlasDB aims to serve as an accessible repository for the large, complex results files derived from RNAseq gene expression profiling and variant analysis. Database URL: https://modupeore.github.io/TransAtlasDB/ PMID:29688361
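
    A minimal sketch of the hybrid idea: structured sample metadata goes into a relational table while the bulky per-gene expression payload goes into a document-style (NoSQL-like) store, and a query crosses the two. The schema and field names are invented for illustration and do not reproduce TransAtlasDB's actual data model.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (sample_id TEXT PRIMARY KEY, species TEXT, tissue TEXT)")
conn.execute("INSERT INTO samples VALUES ('S1', 'Gallus gallus', 'liver')")

# Document-style store for the bulky expression payload, keyed by sample.
document_store = {}
document_store["S1"] = json.dumps({"GAPDH": 812.4, "ACTB": 1204.9, "HSP70": 95.2})

# A query joins the two worlds: filter on relational metadata, then pull the document.
row = conn.execute("SELECT sample_id FROM samples WHERE tissue = 'liver'").fetchone()
print(row[0], json.loads(document_store[row[0]]))
```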

  20. A MapReduce approach to diminish imbalance parameters for big deoxyribonucleic acid dataset.

    PubMed

    Kamal, Sarwar; Ripon, Shamim Hasnat; Dey, Nilanjan; Ashour, Amira S; Santhi, V

    2016-07-01

In the age of the information superhighway, big data play a significant role in information processing, extraction, retrieval and management. In computational biology, the continuing challenge is to manage the biological data. Data mining techniques are sometimes imperfect for new space and time requirements. Thus, it is critical to process massive amounts of data to retrieve knowledge. The existing software and automated tools to handle big data sets are not sufficient. As a result, an expandable mining technique that enfolds the large storage and processing capability of distributed or parallel processing platforms is essential. In this analysis, a contemporary distributed clustering methodology for imbalance data reduction using a k-nearest neighbor (K-NN) classification approach has been introduced. The pivotal objective of this work is to illustrate real training data sets with a reduced number of elements or instances. These reduced data sets will ensure faster data classification and standard storage management with less sensitivity. However, general data reduction methods cannot manage very big data sets. To minimize these difficulties, a MapReduce-oriented framework is designed using various clusters of automated contents, comprising multiple algorithmic approaches. To test the proposed approach, a real DNA (deoxyribonucleic acid) dataset that consists of 90 million pairs has been used. The proposed model reduces the imbalance data sets from large-scale data sets without loss of accuracy. The obtained results show that the MapReduce-based K-NN classifier provides accurate results for big DNA data. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
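
    A minimal sketch of the MapReduce-style K-NN idea: each mapper finds the k nearest neighbours of a query within its own data partition, and the reducer merges the partial candidate lists into the global k nearest and votes on a label. The distances, k and toy data are illustrative, and no Hadoop or Spark runtime is involved.

```python
import heapq
import numpy as np

def map_partition(partition, query, k):
    """Mapper: local k nearest neighbours within one data partition."""
    dists = [(float(np.linalg.norm(x - query)), int(label)) for x, label in partition]
    return heapq.nsmallest(k, dists)

def reduce_candidates(candidate_lists, k):
    """Reducer: merge partial results and majority-vote the label."""
    best = heapq.nsmallest(k, (c for lst in candidate_lists for c in lst))
    labels = [label for _, label in best]
    return max(set(labels), key=labels.count)

rng = np.random.default_rng(1)
data = [(rng.normal(label, 0.5, size=2), label) for label in (0, 1) for _ in range(50)]
partitions = [data[i::4] for i in range(4)]          # 4 "mapper" partitions
query = np.array([0.9, 1.1])
partials = [map_partition(p, query, k=5) for p in partitions]
print("predicted class:", reduce_candidates(partials, k=5))
```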

  1. Recycling rice husks for high-capacity lithium battery anodes

    PubMed Central

    Jung, Dae Soo; Ryou, Myung-Hyun; Sung, Yong Joo; Park, Seung Bin; Choi, Jang Wook

    2013-01-01

The rice husk is the outer covering of a rice kernel and protects the inner ingredients from external attack by insects and bacteria. To perform this function while ventilating air and moisture, rice plants have developed unique nanoporous silica layers in their husks through years of natural evolution. Despite the massive amount of annual production near 10^8 tons worldwide, so far rice husks have been recycled only for low-value agricultural items. In an effort to recycle rice husks for high-value applications, we convert the silica to silicon and use it for high-capacity lithium battery anodes. Taking advantage of the interconnected nanoporous structure naturally existing in rice husks, the converted silicon exhibits excellent electrochemical performance as a lithium battery anode, suggesting that rice husks can be a massive resource for use in high-capacity lithium battery negative electrodes. PMID:23836636

  2. Recycling rice husks for high-capacity lithium battery anodes.

    PubMed

    Jung, Dae Soo; Ryou, Myung-Hyun; Sung, Yong Joo; Park, Seung Bin; Choi, Jang Wook

    2013-07-23

The rice husk is the outer covering of a rice kernel and protects the inner ingredients from external attack by insects and bacteria. To perform this function while ventilating air and moisture, rice plants have developed unique nanoporous silica layers in their husks through years of natural evolution. Despite the massive amount of annual production near 10^8 tons worldwide, so far rice husks have been recycled only for low-value agricultural items. In an effort to recycle rice husks for high-value applications, we convert the silica to silicon and use it for high-capacity lithium battery anodes. Taking advantage of the interconnected nanoporous structure naturally existing in rice husks, the converted silicon exhibits excellent electrochemical performance as a lithium battery anode, suggesting that rice husks can be a massive resource for use in high-capacity lithium battery negative electrodes.

  3. Distributed Fast Self-Organized Maps for Massive Spectrophotometric Data Analysis †.

    PubMed

    Dafonte, Carlos; Garabato, Daniel; Álvarez, Marco A; Manteiga, Minia

    2018-05-03

    Analyzing huge amounts of data becomes essential in the era of Big Data, where databases are populated with hundreds of Gigabytes that must be processed to extract knowledge. Hence, classical algorithms must be adapted towards distributed computing methodologies that leverage the underlying computational power of these platforms. Here, a parallel, scalable, and optimized design for self-organized maps (SOM) is proposed in order to analyze massive data gathered by the spectrophotometric sensor of the European Space Agency (ESA) Gaia spacecraft, although it could be extrapolated to other domains. The performance comparison between the sequential implementation and the distributed ones based on Apache Hadoop and Apache Spark is an important part of the work, as well as the detailed analysis of the proposed optimizations. Finally, a domain-specific visualization tool to explore astronomical SOMs is presented.
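
    A minimal sketch of one batch SOM update, the operation that such distributed designs spread across Hadoop or Spark partitions: per-sample contributions are simple sums, so they can be accumulated per partition and then merged. The grid size, neighbourhood width and toy data are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def batch_som_step(weights, data, sigma=1.0):
    """One batch update: assign each sample to its best-matching unit (BMU),
    then move every unit to the neighbourhood-weighted mean of the data.
    The numerator/denominator sums are associative, which is what makes the
    step easy to compute per partition and combine afterwards."""
    n_units = weights.shape[0]
    positions = np.arange(n_units)                  # 1-D grid of unit positions
    num = np.zeros_like(weights)
    den = np.zeros(n_units)
    for x in data:
        bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
        h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
        num += h[:, None] * x
        den += h
    return num / den[:, None]

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 3))                    # toy spectrophotometric vectors
weights = rng.normal(size=(10, 3))                  # 10 SOM units
for _ in range(5):
    weights = batch_som_step(weights, data)
print(weights.round(2))
```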

  4. Possible signatures of the inflationary particle content: spin-2 fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biagetti, Matteo; Dimastrogiovanni, Emanuela; Fasiello, Matteo, E-mail: m.biagetti@uva.nl, E-mail: emanuela1573@gmail.com, E-mail: matteorf@stanford.edu

    2017-10-01

    We study the imprints of a massive spin-2 field on inflationary observables, and in particular on the breaking of consistency relations. In this setup, the minimal inflationary field content interacts with the massive spin-2 field through dRGT interactions, thus guaranteeing the absence of Boulware-Deser ghostly degrees of freedom. The unitarity requirement on spinning particles, known as Higuchi bound, plays a crucial role for the size of the observable signal.

  5. Physical properties of Southern infrared dark clouds

    NASA Astrophysics Data System (ADS)

    Vasyunina, T.; Linz, H.; Henning, Th.; Stecklum, B.; Klose, S.; Nyman, L.-Å.

    2009-05-01

Context: What are the mechanisms by which massive stars form? What are the initial conditions for these processes? It is commonly assumed that cold and dense Infrared Dark Clouds (IRDCs) represent the birth-sites of massive stars. Therefore, these clouds have been receiving an increasing amount of attention, and their analysis offers the opportunity to tackle the aforementioned questions. Aims: To enlarge the sample of well-characterised IRDCs in the southern hemisphere, where ALMA will play a major role in the near future, we have developed a program to study the gas and dust of southern infrared dark clouds. The present paper attempts to characterize the continuum properties of this sample of IRDCs. Methods: We cross-correlated 1.2 mm continuum data from the SIMBA bolometer array mounted on the SEST telescope with Spitzer/GLIMPSE images to establish the connection between emission sources at millimeter wavelengths and the IRDCs that we observe at 8 μm in absorption against the bright PAH background. Analysing the dust emission and extinction enables us to determine the masses and column densities, which are important quantities in characterizing the initial conditions of massive star formation. We also evaluated the limitations of the emission and extinction methods. Results: The morphology of the 1.2 mm continuum emission is in all cases in close agreement with the mid-infrared extinction. The total masses of the IRDCs were found to range from 150 to 1150 M⊙ (emission data) and from 300 to 1750 M⊙ (extinction data). We derived peak column densities of between 0.9 and 4.6 × 10^22 cm^-2 (emission data) and 2.1 and 5.4 × 10^22 cm^-2 (extinction data). We demonstrate that the extinction method is unreliable at very high extinction values (and column densities) beyond A_V values of roughly 75 mag according to the Weingartner & Draine (2001) extinction relation R_V = 5.5 model B (around 200 mag when following the common Mathis (1990, ApJ, 548, 296) extinction calibration). By taking the spatial resolution effects into account and restoring the column densities derived from the dust emission to a linear resolution of 0.01 pc, peak column densities of 3-19 × 10^23 cm^-2 are obtained, which are much higher than typical values for low-mass cores. Conclusions: Taking into account the spatial resolution effects, the derived column densities are beyond the column density threshold of 3.0 × 10^23 cm^-2 required by theoretical considerations for massive star formation. We conclude that the values of column densities derived for the selected IRDC sample imply that these objects are excellent candidates for objects in the earliest stages of massive star formation.

  6. ISED: Constructing a high-resolution elevation road dataset from massive, low-quality in-situ observations derived from geosocial fitness tracking data.

    PubMed

    McKenzie, Grant; Janowicz, Krzysztof

    2017-01-01

    Gaining access to inexpensive, high-resolution, up-to-date, three-dimensional road network data is a top priority beyond research, as such data would fuel applications in industry, governments, and the broader public alike. Road network data are openly available via user-generated content such as OpenStreetMap (OSM) but lack the resolution required for many tasks, e.g., emergency management. More importantly, however, few publicly available data offer information on elevation and slope. For most parts of the world, up-to-date digital elevation products with a resolution of less than 10 meters are a distant dream and, if available, those datasets have to be matched to the road network through an error-prone process. In this paper we present a radically different approach by deriving road network elevation data from massive amounts of in-situ observations extracted from user-contributed data from an online social fitness tracking application. While each individual observation may be of low-quality in terms of resolution and accuracy, taken together they form an accurate, high-resolution, up-to-date, three-dimensional road network that excels where other technologies such as LiDAR fail, e.g., in case of overpasses, overhangs, and so forth. In fact, the 1m spatial resolution dataset created in this research based on 350 million individual 3D location fixes has an RMSE of approximately 3.11m compared to a LiDAR-based ground-truth and can be used to enhance existing road network datasets where individual elevation fixes differ by up to 60m. In contrast, using interpolated data from the National Elevation Dataset (NED) results in 4.75m RMSE compared to the base line. We utilize Linked Data technologies to integrate the proposed high-resolution dataset with OpenStreetMap road geometries without requiring any changes to the OSM data model.
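
    As a purely illustrative sketch (not the authors' pipeline), the snippet below shows the basic idea of reducing many noisy elevation fixes to a per-segment road profile with a robust statistic and scoring it against a reference profile; the column names and the 1 m segment length are assumptions.

    # Minimal sketch: median-aggregate noisy 3D fixes along a road and compute
    # the RMSE against a reference (e.g. LiDAR-derived) profile.
    import numpy as np
    import pandas as pd

    def aggregate_elevation(fixes: pd.DataFrame, segment_m: float = 1.0) -> pd.Series:
        """fixes needs columns 'dist_m' (distance along the road) and 'elev_m'.
        The median per segment is robust to individual low-quality observations."""
        segment = (fixes["dist_m"] // segment_m).astype(int)
        return fixes.groupby(segment)["elev_m"].median()

    def rmse(estimate: pd.Series, reference: pd.Series) -> float:
        joined = pd.concat([estimate, reference], axis=1, join="inner")
        return float(np.sqrt(((joined.iloc[:, 0] - joined.iloc[:, 1]) ** 2).mean()))

    # Usage: profile = aggregate_elevation(fixes); print(rmse(profile, lidar_profile))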

  7. The presence of insect at composting

    NASA Astrophysics Data System (ADS)

    Mudruňka, J.; Lyčková, B.; Kučerová, R.; Glogarová, V.; Závada, J.; Gibesová, B.; Takač, D.

    2017-10-01

    During the composting of biodegradable waste, microbial organisms reproduce massively; many of them are serious biopathogens able to penetrate various environmental layers. Their vector species include dipterous insects (Diptera), which reach considerable numbers on composting plant premises as well as in home composting units, mainly during the summer months. Measures must therefore be taken to eliminate or reduce this unwanted phenomenon (sanitisation, disinfection). The results obtained were evaluated using a relative abundance calculation.

  8. Implications of an Independent Kosovo for Russia’s Near Abroad

    DTIC Science & Technology

    2007-10-01

    as Chair of the Standing Committee on International Law and Ethics of the World Association for Disaster and Emergency Medicine . He is admitted to...of Kosovar Albanians converted to Islam, while Serbs remained Serbian Orthodox. More than 500 years later, Serbia and Montenegro regained the...massive amounts of funding would be needed to build new plants and/or overhaul existing structures. Without final status resolution, investors are

  9. Anatomy of an online misinformation network.

    PubMed

    Shao, Chengcheng; Hui, Pik-Mai; Wang, Lei; Jiang, Xinwen; Flammini, Alessandro; Menczer, Filippo; Ciampaglia, Giovanni Luca

    2018-01-01

    Massive amounts of fake news and conspiratorial content have spread over social media before and after the 2016 US Presidential Elections despite intense fact-checking efforts. How do the spread of misinformation and fact-checking compete? What are the structural and dynamic characteristics of the core of the misinformation diffusion network, and who are its main purveyors? How to reduce the overall amount of misinformation? To explore these questions we built Hoaxy, an open platform that enables large-scale, systematic studies of how misinformation and fact-checking spread and compete on Twitter. Hoaxy captures public tweets that include links to articles from low-credibility and fact-checking sources. We perform k-core decomposition on a diffusion network obtained from two million retweets produced by several hundred thousand accounts over the six months before the election. As we move from the periphery to the core of the network, fact-checking nearly disappears, while social bots proliferate. The number of users in the main core reaches equilibrium around the time of the election, with limited churn and increasingly dense connections. We conclude by quantifying how effectively the network can be disrupted by penalizing the most central nodes. These findings provide a first look at the anatomy of a massive online misinformation diffusion network.
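
    The k-core decomposition used here is available in standard graph libraries. The sketch below, a hedged illustration rather than the authors' code, computes how many users survive into each k-core of an (assumed) undirected retweet network built from an edge list.

    # Minimal sketch of a k-core profile of a retweet network, using networkx.
    import networkx as nx

    def core_profile(retweet_edges):
        """retweet_edges: iterable of (retweeter, original_poster) pairs.
        Returns {k: number of users in the k-core}, i.e. how the network thins
        out from the periphery toward the main core."""
        g = nx.Graph(retweet_edges)
        g.remove_edges_from(nx.selfloop_edges(g))   # core_number requires no self-loops
        core_number = nx.core_number(g)             # largest k with the node in the k-core
        max_k = max(core_number.values())
        return {k: sum(1 for c in core_number.values() if c >= k)
                for k in range(1, max_k + 1)}

    # Users whose core number equals max_k form the "main core" studied above.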

  10. Anatomy of an online misinformation network

    PubMed Central

    Wang, Lei; Jiang, Xinwen; Flammini, Alessandro; Ciampaglia, Giovanni Luca

    2018-01-01

    Massive amounts of fake news and conspiratorial content have spread over social media before and after the 2016 US Presidential Elections despite intense fact-checking efforts. How do the spread of misinformation and fact-checking compete? What are the structural and dynamic characteristics of the core of the misinformation diffusion network, and who are its main purveyors? How to reduce the overall amount of misinformation? To explore these questions we built Hoaxy, an open platform that enables large-scale, systematic studies of how misinformation and fact-checking spread and compete on Twitter. Hoaxy captures public tweets that include links to articles from low-credibility and fact-checking sources. We perform k-core decomposition on a diffusion network obtained from two million retweets produced by several hundred thousand accounts over the six months before the election. As we move from the periphery to the core of the network, fact-checking nearly disappears, while social bots proliferate. The number of users in the main core reaches equilibrium around the time of the election, with limited churn and increasingly dense connections. We conclude by quantifying how effectively the network can be disrupted by penalizing the most central nodes. These findings provide a first look at the anatomy of a massive online misinformation diffusion network. PMID:29702657

  11. Impact of drought stress on specialised metabolism: Biosynthesis and the expression of monoterpene synthases in sage (Salvia officinalis).

    PubMed

    Radwan, Alzahraa; Kleinwächter, Maik; Selmar, Dirk

    2017-09-01

    In previous experiments, we demonstrated that the amount of monoterpenes in sage is increased massively by drought stress. Our current study aimed to elucidate whether this increase is due, at least in part, to elevated activity of the monoterpene synthases responsible for the biosynthesis of essential oils in sage. Accordingly, the transcription rates of the monoterpene synthases were analyzed. Salvia officinalis plants were cultivated under moderate drought stress. The concentrations of monoterpenes as well as the expression of the monoterpene synthases were analyzed. The amount of monoterpenes massively increased in response to drought stress; it doubled after just two days of drought stress. The observed changes in monoterpene content mostly match the patterns of monoterpene synthase expression. The expression of bornyl diphosphate synthase was strongly up-regulated; its maximum level was reached after two days. Sabinene synthase increased gradually and reached a maximum after two weeks. In contrast, the transcript level of cineole synthase continuously declined. This study revealed that the stress-related increase of biosynthesis is not only due to a "passive" shift caused by the stress-related over-reduced status, but is also due, at least in part, to an "active" up-regulation of the enzymes involved. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. PeakVizor: Visual Analytics of Peaks in Video Clickstreams from Massive Open Online Courses.

    PubMed

    Chen, Qing; Chen, Yuanzhe; Liu, Dongyu; Shi, Conglei; Wu, Yingcai; Qu, Huamin

    2016-10-01

    Massive open online courses (MOOCs) aim to facilitate open-access and massive-participation education. These courses have attracted millions of learners recently. At present, most MOOC platforms record the web log data of learner interactions with course videos. Such large amounts of multivariate data pose a new challenge in terms of analyzing online learning behaviors. Previous studies have mainly focused on the aggregate behaviors of learners from a summative view; however, few attempts have been made to conduct a detailed analysis of such behaviors. To determine complex learning patterns in MOOC video interactions, this paper introduces a comprehensive visualization system called PeakVizor. This system enables course instructors and education experts to analyze the "peaks" or the video segments that generate numerous clickstreams. The system features three views at different levels: the overview with glyphs to display valuable statistics regarding the peaks detected; the flow view to present spatio-temporal information regarding the peaks; and the correlation view to show the correlation between different learner groups and the peaks. Case studies and interviews conducted with domain experts have demonstrated the usefulness and effectiveness of PeakVizor, and new findings about learning behaviors in MOOC platforms have been reported.
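
    As a hedged illustration of what "peak" detection on a video clickstream can look like (not PeakVizor's actual algorithm), the snippet below bins click events per second of video time and flags segments whose activity rises well above the background; the threshold and minimum peak spacing are assumptions.

    # Minimal sketch: find click-activity peaks along a course video timeline.
    import numpy as np
    from scipy.signal import find_peaks

    def click_peaks(event_seconds, video_length_s):
        """event_seconds: video timestamps (s) of learner interactions
        (pause, seek, replay, ...). Returns peak positions and their heights."""
        counts = np.bincount(np.asarray(event_seconds, dtype=int),
                             minlength=video_length_s + 1)
        height = np.median(counts) + 3 * counts.std()   # assumed threshold
        peaks, _ = find_peaks(counts, height=height, distance=10)
        return peaks, counts[peaks]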

  13. Astrophysics and Big Data: Challenges, Methods, and Tools

    NASA Astrophysics Data System (ADS)

    Garofalo, Mauro; Botta, Alessio; Ventre, Giorgio

    2017-06-01

    Nowadays there is no field of research that is not flooded with data. Among the sciences, astrophysics has always been driven by the analysis of massive amounts of data. The development of new and more sophisticated observation facilities, both ground-based and spaceborne, has made data increasingly complex (Variety) and has driven exponential growth in both data Volume (on the order of petabytes) and Velocity of production and transmission. New and advanced processing solutions will therefore be needed to handle this huge amount of data. We investigate some of these solutions, based on machine learning models as well as tools and architectures for Big Data analysis, that can be exploited in the astrophysical context.

  14. Transfer of interferon alfa into human breast milk.

    PubMed

    Kumar, A R; Hale, T W; Mock, R E

    2000-08-01

    Although originally regarded simply as antiviral substances, interferons are increasingly recognized as efficacious in a number of pathologies, including malignancies, multiple sclerosis, and other immune syndromes. This study provides data on the transfer of interferon alfa (2B) into the human milk of a patient receiving massive intravenous doses for the treatment of malignant melanoma. Following an intravenous dose of 30 million IU, the amount of interferon transferred into human milk was only slightly elevated (1551 IU/mL) compared with control milk (1249 IU/mL). These data suggest that, even following enormous doses, interferon is probably too large in molecular weight to transfer into human milk in clinically relevant amounts.

  15. City transformations in a 1.5 °C warmer world

    NASA Astrophysics Data System (ADS)

    Solecki, William; Rosenzweig, Cynthia; Dhakal, Shobhakar; Roberts, Debra; Barau, Aliyu Salisu; Schultz, Seth; Ürge-Vorsatz, Diana

    2018-03-01

    Meeting the ambitions of the Paris Agreement will require rapid and massive decarbonization of cities, as well as adaptation. Capacities and requirements differ across cities, with challenges and opportunities for transformational action in both the Global North and South.

  16. City Transformations in a 1.5 °C Warmer World

    NASA Technical Reports Server (NTRS)

    Barau, Aliyu Salisu; Urge-Vorsatz, Diana; Schultz, Seth; Solecki, William; Dhakal, Shobhakar; Rosenzweig, Cynthia; Roberts, Debra

    2018-01-01

    Meeting the ambitions of the Paris Agreement will require rapid and massive decarbonization of cities, as well as adaptation. Capacities and requirements differ across cities, with challenges and opportunities for transformational action in both the Global North and South.

  17. Coordination and management of multicenter clinical studies in trauma: Experience from the PRospective Observational Multicenter Major Trauma Transfusion (PROMMTT) Study.

    PubMed

    Rahbar, Mohammad H; Fox, Erin E; del Junco, Deborah J; Cotton, Bryan A; Podbielski, Jeanette M; Matijevic, Nena; Cohen, Mitchell J; Schreiber, Martin A; Zhang, Jiajie; Mirhaji, Parsa; Duran, Sarah J; Reynolds, Robert J; Benjamin-Garner, Ruby; Holcomb, John B

    2012-04-01

    Early death due to hemorrhage is a major consequence of traumatic injury. Transfusion practices differ among hospitals and it is unknown which transfusion practices improve survival. This report describes the experience of the PRospective Observational Multicenter Major Trauma Transfusion (PROMMTT) Study Data Coordination Center in designing and coordinating a study to examine transfusion practices at ten Level 1 trauma centers in the US. PROMMTT was a multisite prospective observational study of severely injured transfused trauma patients. The clinical sites collected real-time information on the timing and amounts of blood product infusions as well as colloids and crystalloids, vital signs, initial diagnostic and clinical laboratory tests, life saving interventions and other clinical care data. Between July 2009 and October 2010, PROMMTT screened 12,561 trauma admissions and enrolled 1245 patients who received one or more blood transfusions within 6h of Emergency Department (ED) admission. A total of 297 massive transfusions were observed over the course of the study at a combined rate of 5.0 massive transfusion patients/week. PROMMTT is the first multisite study to collect real-time prospective data on trauma patients requiring transfusion. Support from the Department of Defense and collaborative expertise from the ten participating centers helped to demonstrate the feasibility of prospective trauma transfusion studies. The observational data collected from this study will be an invaluable resource for research in trauma surgery and it will guide the design and conduct of future randomized trials. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  18. Coordination and management of multicenter clinical studies in trauma: Experience from the PRospective Observational Multicenter Major Trauma Transfusion (PROMMTT) Study

    PubMed Central

    Rahbar, Mohammad H.; Fox, Erin E.; del Junco, Deborah J.; Cotton, Bryan A.; Podbielski, Jeanette M.; Matijevic, Nena; Cohen, Mitchell J.; Schreiber, Martin A.; Zhang, Jiajie; Mirhaji, Parsa; Duran, Sarah; Reynolds, Robert J.; Benjamin-Garner, Ruby; Holcomb, John B.

    2011-01-01

    Aim Early death due to hemorrhage is a major consequence of traumatic injury. Transfusion practices differ among hospitals and it is unknown which transfusion practices improve survival. This report describes the experience of the PRospective Observational Multicenter Major Trauma Transfusion (PROMMTT) Study Data Coordination Center in designing and coordinating a study to examine transfusion practices at ten Level 1 trauma centers in the U.S. Methods PROMMTT was a multisite prospective observational study of severely injured transfused trauma patients. The clinical sites collected real-time information on the timing and amounts of blood product infusions as well as colloids and crystalloids, vital signs, initial diagnostic and clinical laboratory tests, life saving interventions and other clinical care data. Results Between July 2009 and October 2010, PROMMTT screened 12,561 trauma admissions and enrolled 1,245 patients who received one or more blood transfusions within 6 hours of ED admission. A total of 297 massive transfusions were observed over the course of the study at a combined rate of 5.0 massive transfusion patients/week. Conclusion PROMMTT is the first multisite study to collect real-time prospective data on trauma patients requiring transfusion. Support from the Department of Defense and collaborative expertise from the ten participating centers helped to demonstrate the feasibility of prospective trauma transfusion studies. The observational data collected from this study will be an invaluable resource for research in trauma surgery and it will guide the design and conduct of future randomized trials. PMID:22001613

  19. SpS5 - II. Stellar and wind parameters

    NASA Astrophysics Data System (ADS)

    Martins, F.; Bergemann, M.; Bestenlehner, J. M.; Crowther, P. A.; Hamann, W. R.; Najarro, F.; Nieva, M. F.; Przybilla, N.; Freimanis, J.; Hou, W.; Kaper, L.

    2015-03-01

    The development of infrared observational facilities has revealed a number of massive stars in obscured environments throughout the Milky Way and beyond. The determination of their stellar and wind properties from infrared diagnostics is thus required to take full advantage of the wealth of observations available in the near and mid infrared. However, the task is challenging. This session addressed some of the problems encountered and showed the limitations and successes of infrared studies of massive stars.

  20. Scan line graphics generation on the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Dorband, John E.

    1988-01-01

    Described here is how researchers implemented a scan line graphics generation algorithm on the Massively Parallel Processor (MPP). Pixels are computed in parallel and their results are applied to the Z buffer in large groups. Performing pixel value calculations, facilitating load balancing across the processors, and applying the results to the Z buffer efficiently in parallel require special virtual routing (sort computation) techniques developed by the author especially for use on single-instruction multiple-data (SIMD) architectures.
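
    The core idea, computing a whole batch of pixels and then applying them to the Z buffer as a group, can be sketched in a few lines of array code. The snippet below is a hedged, modern illustration of that data-parallel pattern (arrays standing in for SIMD lanes), not the MPP implementation itself.

    # Minimal sketch: apply one batch of rasterized fragments to a Z buffer with
    # a vectorized depth test.
    import numpy as np

    def apply_to_zbuffer(zbuf, colorbuf, xs, ys, depths, colors):
        """xs, ys, depths, colors describe one batch of fragments; a fragment
        wins its pixel only if it is nearer than what is already stored."""
        # Sort by descending depth so that, among in-batch duplicates of the same
        # pixel, the nearest fragment is written last (last write wins).
        order = np.argsort(-depths)
        xs, ys, depths, colors = xs[order], ys[order], depths[order], colors[order]
        nearer = depths < zbuf[ys, xs]
        zbuf[ys[nearer], xs[nearer]] = depths[nearer]
        colorbuf[ys[nearer], xs[nearer]] = colors[nearer]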

  1. The Ratio of Blood Products Transfused Affects Mortality in Patients Receiving Massive Transfusions at a Combat Support Hospital

    DTIC Science & Technology

    2007-10-01

    therapy resuscitation, and exacerbated by hemorrhagic shock, metabolic acidosis, hypothermia, hyperfibrinolysis, hypocalcemia, and anemia.11,14–19...outcome studies examining the effect of blood product transfusion ratios for trauma patients requiring massive transfusion. Most deaths (80% to 85%) that...calculation of apheresis platelet units transfused, though FWB has previously been shown to be as effective as 10 units of platelet concentrate.33 The

  2. The coupling to matter in massive, bi- and multi-gravity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noller, Johannes; Melville, Scott, E-mail: noller@physics.ox.ac.uk, E-mail: scott.melville@queens.ox.ac.uk

    2015-01-01

    In this paper we construct a family of ways in which matter can couple to one or more 'metrics'/spin-2 fields in the vielbein formulation. We do so subject to requiring the weak equivalence principle and the absence of ghosts from pure spin-2 interactions generated by the matter action. Results are presented for Massive, Bi- and Multi-Gravity theories and we give explicit expressions for the effective matter metric in all of these cases.

  3. Brane SUSY breaking and the gravitino mass

    NASA Astrophysics Data System (ADS)

    Kitazawa, Noriaki

    2018-04-01

    Supergravity models with spontaneously broken supersymmetry have been widely investigated over the years, together with some notable non-linear limits. Although in these models the gravitino becomes naturally massive by absorbing the degrees of freedom of a Nambu-Goldstone fermion, there are cases in which the naive counting of degrees of freedom does not apply, in particular because of the absence of explicit gravitino mass terms in unitary gauge. The corresponding models require non-trivial de Sitter-like backgrounds, and it becomes of interest to clarify the fate of their Nambu-Goldstone modes. We elaborate on the fact that these non-trivial backgrounds can accommodate, consistently, gravitino fields carrying a number of degrees of freedom that is intermediate between those of massless and massive fields in a flat spacetime. For instance, in a simple supergravity model of this type with a de Sitter background, the overall degrees of freedom of the gravitino are as many as those of a massive spin-3/2 field in flat spacetime, while the gravitino remains massless in the sense that it undergoes null-cone propagation in the stereographic picture. On the other hand, in the ten-dimensional USp(32) Type I Sugimoto model with "brane SUSY breaking", which requires a more complicated background, the degrees of freedom of the gravitino are half as many as those of a massive one, and yet it somehow behaves again as a massless one.

  4. Acute and massive bleeding from placenta previa and infants' brain damage.

    PubMed

    Furuta, Ken; Tokunaga, Shuichi; Furukawa, Seishi; Sameshima, Hiroshi

    2014-09-01

    Among the causes of third trimester bleeding, the impact of placenta previa on cerebral palsy is not well known. To clarify the effect of maternal bleeding from placenta previa on cerebral palsy, and in particular when and how it occurs. A descriptive study. Sixty infants born to mothers with placenta previa in our regional population-based study of 160,000 deliveries from 1998 to 2012. Premature deliveries occurring at <26 weeks of gestation and placenta accreta were excluded. Prevalence of cystic periventricular leukomalacia (PVL) and cerebral palsy (CP). Five infants had PVL and 4 of these infants developed CP (1/40,000 deliveries). Acute and massive bleeding (>500 g within 8 h) occurred at around 30-31 weeks of gestation, and was severe enough to deliver the fetus. None of the 5 infants with PVL underwent antenatal corticosteroid treatment, and 1 infant had mild neonatal hypocapnia with a PaCO2 <25 mmHg. However, none of the 5 PVL infants showed umbilical arterial acidemia with pH <7.2, an abnormal fetal heart rate monitoring pattern, or neonatal hypotension. Our descriptive study showed that acute and massive bleeding from placenta previa at around 30 weeks of gestation may be a risk factor for CP, and requires careful neonatal follow-up. The underlying process connecting massive placental bleeding and PVL requires further investigation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Massive splenic infarction in Saudi patients with sickle cell anemia: a unique manifestation.

    PubMed

    Jama, Ali Hassan Al; Salem, Ahmed Hassan Al; Dabbous, Ibrahim Abdalla Al

    2002-03-01

    Splenic infarcts are common in patients with sickle cell anemia (SCA), but these are usually small and repetitive, leading ultimately to autosplenectomy. Massive splenic infarcts, on the other hand, are extremely rare. This is a report of our experience with 8 cases (4 males and 4 females) of massive splenic infarction in patients with SCA. Their ages ranged from 16 to 36 years (mean 22 years). Three presented with left upper quadrant abdominal pain and massive splenic infarction on admission, while the other 5 developed massive splenic infarction while in hospital. In 5 patients the precipitating factor was high altitude, the postoperative state, the postpartum state, Salmonella septicemia, or strenuous exercise (one each), while the remaining 3 had severe generalized vasoocclusive crises. Although both ultrasound and CT scan of the abdomen were of diagnostic value, we found CT scanning more accurate in delineating the size of the infarction. All our patients were managed conservatively with I.V. fluids, analgesia, and blood transfusion when necessary. Diagnostic aspiration under ultrasound guidance was necessary in two patients to differentiate between massive splenic infarction and splenic abscess. Two patients required splenectomy during the same admission because of suspicion of secondary infection and abscess formation, while a third patient had splenectomy 2 months after the attack because of persistent left upper quadrant abdominal pain. In all 3, histology of the spleen showed congestive splenomegaly with massive infarction. All of our patients survived. Two patients subsequently developed autosplenectomy while the remaining 3 continue to have persistent but asymptomatic splenomegaly. Massive splenic infarction is a rare and unique complication of SCA in the Eastern Province of Saudi Arabia, and for early diagnosis and treatment, physicians caring for these patients should be aware of such a complication.

  6. Balancing risk and benefit: maintenance of a thawed Group A plasma inventory for trauma patients requiring massive transfusion.

    PubMed

    Mehr, Chelsea R; Gupta, Rajan; von Recklinghausen, Friedrich M; Szczepiorkowski, Zbigniew M; Dunbar, Nancy M

    2013-06-01

    Transfusion of plasma and red blood cell (RBC) units in a balanced ratio approximating 1:1 has been shown in retrospective studies to be associated with improved outcomes for trauma patients. Our low-volume rural trauma center uses a trauma-activated transfusion algorithm. Plasma is thawed upon activation to avoid wastage. However, the time required for plasma thawing has made achievement of a 1:1 ratio early in resuscitation challenging. In this study, the time required for plasma thawing is characterized, and a potential solution is proposed. A retrospective chart study of 38 moderately and massively transfused (≥6 U in the first 24 hours) trauma patients admitted from January 2008 to March 2012 was performed. We evaluated the time required to dispense plasma and the number of RBCs dispensed before plasma in these patients. The average time between the dispensing of RBCs and plasma was 26 minutes (median, 28; range, 0-48 minutes). The average number of RBCs dispensed before plasma was 8 U (median, 7 U; range, 0-24 U). Nearly one third of massively transfused patients had 10 or more RBC units dispensed before plasma was available. There exists the potential for delayed plasma availability owing to the time required for thawing, which may compromise the ability to provide a balanced plasma-to-RBC transfusion ratio to trauma patients. Maintenance of a thawed Group AB plasma inventory may not be operationally feasible for rural centers with low trauma volumes. Use of a thawed Group A plasma inventory is a potential alternative to ensure rapid plasma availability. Therapeutic study, level V.

  7. OpenWebGlobe 2: Visualization of Complex 3D-Geodata in the (Mobile) Web Browser

    NASA Astrophysics Data System (ADS)

    Christen, M.

    2016-06-01

    Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive data processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with an extremely high polygon count and a vast number of textures, at interactive frame rates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching, and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used to process large amounts of geospatial data and to provide 2D and 3D map data to a large number of (mobile) web clients. We show how very large datasets are processed, rendered, and cached in the currently developed virtual globe "OpenWebGlobe 2", which displays 3D geodata on nearly every device.
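
    Virtual globes of this kind typically address pre-processed data through a tile pyramid so that tiles can be cut once, cached, and served to many clients. The snippet below sketches the common web-mercator zoom/x/y indexing as an assumed example; OpenWebGlobe's own tiling scheme may differ.

    # Minimal sketch: which tile of a zoom/x/y pyramid contains a given point.
    import math

    def lonlat_to_tile(lon_deg: float, lat_deg: float, zoom: int):
        """Return the (x, y) index of the web-mercator tile containing the point."""
        n = 2 ** zoom
        x = int((lon_deg + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
        return x, y

    # e.g. the tile covering Basel (7.59 E, 47.56 N) at zoom level 12:
    print(lonlat_to_tile(7.59, 47.56, 12))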

  8. [Safety and efficacy of a prothrombin complex concentrate in patients with coagulopathy and hemorrhage].

    PubMed

    Martínez-Calle, N; Marcos-Jubilar, M; Alfonso, A; Hernández, M; Hidalgo, F; Lecumberri, R; Páramo, Ja

    2014-01-01

    Prothrombin complex concentrates (PCC) are approved for urgent reversal of vitamin K antagonists (VKA). Recently, PCC have been used in the management of massive bleeding-associated coagulopathy. The present work evaluates the safety and efficacy of PCC in a case series of both VKA reversal and massive bleeding. Retrospective review of cases treated with PCC (January 2010 to February 2013). Safety endpoints were infusion reactions and incidence of thromboembolic events. Efficacy endpoints were: 1) VKA reversal efficacy, and 2) reversal of massive bleeding coagulopathy and 24-hour mortality. Thirty-one patients were included (22 male), median age 61 years (range 30-86). No infusion reactions were detected, and only 1 thrombotic episode was observed. VKA reversal was effective in 100% of patients (6/6), all of them with complete reversal of the INR value. In massive bleeding, 24-hour survival was 64% (16/25). Invasive hemostatic procedures were required in 28% of patients (7/25). PCC use was correlated with bleeding control in 44% of cases (11/25), and was also significantly associated with survival (p=0.01). PCC are safe and effective for the novel indication of adjuvant treatment in massive bleeding patients, as well as for traditional urgent reversal of VKA.

  9. Aortic occlusion balloon catheter technique is useful for uncontrollable massive intraabdominal bleeding after hepato-pancreato-biliary surgery.

    PubMed

    Miura, Fumihiko; Takada, Tadahiro; Ochiai, Takenori; Asano, Takehide; Kenmochi, Takashi; Amano, Hodaka; Yoshida, Masahiro

    2006-04-01

    Massive intraabdominal hemorrhage sometimes requires urgent hemostatic surgical intervention. In such cases, its rapid stabilization is crucial to reestablish a general hemodynamic status. We used an aortic occlusion balloon catheter in patients with massive intraabdominal hemorrhage occurring after hepato-pancreato-biliary surgery. An 8-French balloon catheter was percutaneously inserted into the aorta from the femoral artery, and the balloon was placed just above the celiac artery. Fifteen minutes inflation and 5 minutes deflation were alternated during surgery until the bleeding was surgically controlled. An aortic occlusion balloon catheter was inserted on 13 occasions in 10 patients undergoing laparotomy for hemostasis of massive hemorrhage. The aorta was successfully occluded on 12 occasions in nine patients. Both systolic pressure and heart rate were normalized during aortic occlusion, and the operative field became clearly visible after adequate suction of leaked blood. Bleeding sites were then easily found and controlled. Hemorrhage was successfully controlled in 7 of 10 patients (70%), and they were discharged in good condition. The aortic occlusion balloon catheter technique was effective for easily controlling massive intraabdominal bleeding by hemostatic procedure after hepato-pancreato-biliary surgery.

  10. New and Topologically Massive Gravity, from the Outside In

    NASA Astrophysics Data System (ADS)

    Cunliff, Colin

    This thesis examines the asymptotically anti-de Sitter solutions of higher-derivative gravity in 2+1 dimensions, using a Fefferman-Graham-like approach that expands solutions from the boundary (at infinity) into the interior. First, solutions of topologically massive gravity (TMG) are analyzed for values of the mass parameter in the range μ ≥ 1. The traditional Fefferman-Graham expansion fails to capture the dynamics of TMG, and new terms in the asymptotic expansion are needed to include the massive graviton modes. The linearized modes of Carlip, Deser, Waldron and Wise map onto the non-Einstein solutions for all μ, with nonlinear corrections appearing at higher order in the expansion. A similar result is found for new massive gravity (NMG), where the asymptotic behavior of massive gravitons is found to depend on the coupling parameter m². Additionally, new boundary conditions are discovered for a range of values -1 < 2m²l² < 1 at which non-Einstein modes decay more slowly than the rate required for Brown-Henneaux boundary conditions. The holographically renormalized stress tensor is computed for these modes, and the relevant counterterms are identified up to unphysical ambiguities.

  11. Organic and Inorganic Carbon in the Rio Tinto (Spain) Deep Subsurface System: a Possible Model for Subsurface Carbon and Lithoautotrophs on Mars.

    NASA Astrophysics Data System (ADS)

    Bonaccorsi, R.; Stoker, C. R.; MARTE Science Team

    2007-12-01

    The subsurface is the key environment for searching for life on planets lacking surface life. Subsurface ecosystems are of great relevance to astrobiology, including the search for past/present life on Mars. Conditions on the Martian surface do not support biological activity, but the subsurface might preserve organics and host subsurface life [1]. A key requirement for the analysis of subsurface samples on Mars is the ability to characterize organic vs. inorganic carbon pools. This information is needed to determine if the sample contains organic material of biological origin and/or to establish if pools of inorganic carbon can support subsurface biospheres. The Mars Analog Rio Tinto Experiment (MARTE) performed deep drilling of cores, i.e., down to 165 m depth, in a volcanically-hosted massive-sulfide deposit at Rio Tinto, Spain, which is considered an important analog of the Sinus Meridiani site on Mars. Results from MARTE suggest the existence of a relatively complex subsurface life including aerobic and anaerobic chemoautotrophs, and strict anaerobic methanogens sustained by Fe and S minerals in anoxic conditions, which is an ideal model analog for a deep subsurface Martian environment. We report here on the distribution of organic (C-org: 0.01-0.3 wt%) and inorganic (IC: 0.01-7.0 wt%) carbon in a subsurface rock system including weathered/oxidized (i.e., gossan) and unaltered pyrite stockwork. Cores were analyzed from 3 boreholes (BH-4, BH-7, and BH-8) that penetrated down to a depth of ~165 m into massive sulfide. Near-surface phyllosilicate-rich pockets contain the highest amounts of organics (0.3 wt%) [2], while the deeper rocks contain the highest amount of carbonates. Assessing the amount of C pools available throughout the RT subsurface provides key insight into the type of trophic system sustaining its microbial ecosystem (i.e., heterotrophs vs. autotrophs) and the biogeochemical relationships that characterize a new type of subsurface biosphere at RT. This potentially novel biosphere on Earth could be used as a model to test for extant and extinct life on Mars. Furthermore, having found carbonates in a hyperacidic system (pH ~2.3) brings new insights on the possible occurrence of deep carbonate deposits under low-pH conditions on Mars. [1] Boston, P.J., et al., 1992. Icarus 95, 300-308; Bonaccorsi, Stoker and Sutter, 2007, Accepted with review in Astrobiology.

  12. [The importance of genealogy applied to genetic research in Costa Rica].

    PubMed

    Meléndez Obando, Mauricio O

    2004-09-01

    The extensive development of genealogical studies based on archival documents has provided powerful support for genetic research in Costa Rica over the past quarter century. As a result, several questions of population history have been answered, such as those involving hereditary illnesses, suggesting additional avenues and questions as well. Similarly, the preservation of massive amounts of historical documentation highlights the major advantages that the Costa Rican population offers to genetic research.

  13. Journal of Special Operations Medicine. Volume 7, Edition 4, Fall 2007

    DTIC Science & Technology

    2007-01-01

    which demonstrated massive amounts of pericardial fat , but no blood (a false positive FAST). Exploratory laparatomy revealed a catastrophic supra...ARTERIAL GAS EMBOLISM An additional concern in the unconscious diver is barotrauma and arterial gas embolism (AGE). Boyle’s law states that as pressure...ment in 32 cases of air embolism (abs). Proceedings: Joint Meeting on Diving andHyperbaric Medicine, 11-18 August. Amsterdam, The Netherlands, pg. 90. 13

  14. Litterfall Production Prior to and during Hurricanes Irma and Maria in Four Puerto Rican Forests

    Treesearch

    Xianbin Liu; Xiucheng Zeng; Xiaoming Zou; Grizelle González; Chao Wang; Si Yang

    2018-01-01

    Hurricanes Irma and Maria struck Puerto Rico on the 6th and 20th of September 2017, respectively. These two powerful Cat 5 hurricanes severely defoliated forest canopy and deposited massive amounts of litterfall in the forests across the island. We established a 1-ha research plot in each of four forests (Guánica State Forest, Río Abajo State Forest, Guayama Research...

  15. 14. Photographic copy of photograph, dated 21 July 1971 (original ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. Photographic copy of photograph, dated 21 July 1971 (original print in possession of U.S. Space & Strategic Defense Command Historic Office CSSD-HO, Huntsville, AL). Photographer unknown. View of missile site control building turret wall during early construction, illustrating the massive amount of rebar utilized in the project. - Stanley R. Mickelsen Safeguard Complex, Missile Site Control Building, Northeast of Tactical Road; southeast of Tactical Road South, Nekoma, Cavalier County, ND

  16. The shadow world of superstring theories

    NASA Technical Reports Server (NTRS)

    Kolb, E. W.; Turner, M. S.; Seckel, D.

    1985-01-01

    Some possible astrophysical and cosmological implications of 'shadow matter', a form of matter which only interacts gravitationally with ordinary matter and which may or may not be identical in its properties to ordinary matter, are considered. The possible existence, amount, and location of shadow matter in the solar system are discussed, and the significance of shadow matter for primordial nucleosynthesis, macroscopic asymmetry, baryogenesis, double-bubble inflation, and asymmetric microphysics is addressed. Massive shadow states are discussed.

  17. Ultra-fast outflows (aka UFOs) in AGNs and their relevance for feedback

    NASA Astrophysics Data System (ADS)

    Cappi, Massimo; Tombesi, F.; Giustini, M.; Dadina, M.; Braito, V.; Kaastra, J.; Reeves, J.; Chartas, G.; Gaspari, M.; Vignali, C.; Gofford, J.; Lanzuisi, G.

    2012-09-01

    During the last decade, several lines of observational evidence have accumulated for the existence of massive, high-velocity winds/outflows (aka UFOs) in nearby AGNs and, possibly, distant quasars. I will review here this evidence, present some of the latest results in this field, and discuss the relevance of UFOs both for understanding the physics of accretion/ejection flows onto supermassive black holes and for quantifying the amount of AGN feedback.

  18. A paradigm shift towards low-nitrifying production systems: the role of biological nitrification inhibition (BNI).

    PubMed

    Subbarao, G V; Sahrawat, K L; Nakahara, K; Rao, I M; Ishitani, M; Hash, C T; Kishii, M; Bonnett, D G; Berry, W L; Lata, J C

    2013-07-01

    Agriculture is the single largest geo-engineering initiative that humans have initiated on planet Earth, largely through the introduction of unprecedented amounts of reactive nitrogen (N) into ecosystems. A major portion of this reactive N applied as fertilizer leaks into the environment in massive amounts, with cascading negative effects on ecosystem health and function. Natural ecosystems utilize many of the multiple pathways in the N cycle to regulate N flow. In contrast, the massive amounts of N currently applied to agricultural systems cycle primarily through the nitrification pathway, a single inefficient route that channels much of this reactive N into the environment. This is largely due to the rapid nitrifying soil environment of present-day agricultural systems. In this Viewpoint paper, the importance of regulating nitrification as a strategy to minimize N leakage and to improve N-use efficiency (NUE) in agricultural systems is highlighted. The ability to suppress soil nitrification by the release of nitrification inhibitors from plant roots is termed 'biological nitrification inhibition' (BNI), an active plant-mediated natural function that can limit the amount of N cycling via the nitrification pathway. The development of a bioassay using luminescent Nitrosomonas to quantify nitrification inhibitory activity from roots has facilitated the characterization of BNI function. Release of BNIs from roots is a tightly regulated physiological process, with extensive genetic variability found in selected crops and pasture grasses. Here, the current status of understanding of the BNI function is reviewed using Brachiaria forage grasses, wheat and sorghum to illustrate how BNI function can be utilized for achieving low-nitrifying agricultural systems. A fundamental shift towards ammonium (NH4(+))-dominated agricultural systems could be achieved by using crops and pastures with high BNI capacities. When viewed from an agricultural and environmental perspective, the BNI function in plants could potentially have a large influence on biogeochemical cycling and closure of the N loop in crop-livestock systems.

  19. A paradigm shift towards low-nitrifying production systems: the role of biological nitrification inhibition (BNI)

    PubMed Central

    Subbarao, G. V.; Sahrawat, K. L.; Nakahara, K.; Rao, I. M.; Ishitani, M.; Hash, C. T.; Kishii, M.; Bonnett, D. G.; Berry, W. L.; Lata, J. C.

    2013-01-01

    Background Agriculture is the single largest geo-engineering initiative that humans have initiated on planet Earth, largely through the introduction of unprecedented amounts of reactive nitrogen (N) into ecosystems. A major portion of this reactive N applied as fertilizer leaks into the environment in massive amounts, with cascading negative effects on ecosystem health and function. Natural ecosystems utilize many of the multiple pathways in the N cycle to regulate N flow. In contrast, the massive amounts of N currently applied to agricultural systems cycle primarily through the nitrification pathway, a single inefficient route that channels much of this reactive N into the environment. This is largely due to the rapid nitrifying soil environment of present-day agricultural systems. Scope In this Viewpoint paper, the importance of regulating nitrification as a strategy to minimize N leakage and to improve N-use efficiency (NUE) in agricultural systems is highlighted. The ability to suppress soil nitrification by the release of nitrification inhibitors from plant roots is termed ‘biological nitrification inhibition’ (BNI), an active plant-mediated natural function that can limit the amount of N cycling via the nitrification pathway. The development of a bioassay using luminescent Nitrosomonas to quantify nitrification inhibitory activity from roots has facilitated the characterization of BNI function. Release of BNIs from roots is a tightly regulated physiological process, with extensive genetic variability found in selected crops and pasture grasses. Here, the current status of understanding of the BNI function is reviewed using Brachiaria forage grasses, wheat and sorghum to illustrate how BNI function can be utilized for achieving low-nitrifying agricultural systems. A fundamental shift towards ammonium (NH4+)-dominated agricultural systems could be achieved by using crops and pastures with high BNI capacities. When viewed from an agricultural and environmental perspective, the BNI function in plants could potentially have a large influence on biogeochemical cycling and closure of the N loop in crop–livestock systems. PMID:23118123

  20. The radiation asymmetry in MGI rapid shutdown on J-TEXT tokamak

    NASA Astrophysics Data System (ADS)

    Tong, Ruihai; Chen, Zhongyong; Huang, Duwei; Cheng, Zhifeng; Zhang, Xiaolong; Zhuang, Ge; J-TEXT Team

    2017-10-01

    Disruptions, the sudden termination of tokamak fusion plasmas by instabilities, have the potential to cause severe material wall damage to large tokamaks like ITER. The mitigation of disruption damage is an essential part of any fusion reactor system. Massive gas injection (MGI) rapid shutdown is a technique in which large amounts of noble gas are injected into the plasma in order to safely radiate the plasma energy evenly over the entire plasma-facing first wall. However, the radiated energy during the thermal quench (TQ) in MGI-induced disruptions is found to be toroidally asymmetric, and the degree of asymmetry correlates with the gas penetration and with MGI-induced magnetohydrodynamic (MHD) activity. A toroidal and poloidal array of ultraviolet photodiodes (AXUV) has been developed to investigate the radiation asymmetry on the J-TEXT tokamak. Together with the upgraded Mirnov probe arrays, the relation between MGI-triggered MHD activity and the radiation asymmetry is studied.
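
    A common way to put a number on such an asymmetry, shown here only as an assumed illustration rather than the J-TEXT analysis, is a toroidal peaking factor: the maximum radiated power seen at any toroidal location divided by the toroidal mean.

    # Minimal sketch: toroidal peaking factor from radiated-power estimates taken
    # at several toroidal locations over the same time window.
    import numpy as np

    def toroidal_peaking_factor(prad_by_location):
        p = np.asarray(prad_by_location, dtype=float)
        return p.max() / p.mean()

    # e.g. [1.0, 1.1, 2.4, 1.2] -> ~1.7, indicating strongly localized radiation;
    # a perfectly uniform radiation pattern would give 1.0.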

  1. Minimization of Roll Firings for Optimal Propellant Maneuvers

    NASA Astrophysics Data System (ADS)

    Leach, Parker C.

    Attitude control of the International Space Station (ISS) is critical for operations, impacting power, communications, and thermal systems. The station uses gyroscopes and thrusters for attitude control, and reorientations are normally assisted by thrusters on docked vehicles. When the docked vehicles are unavailable, the reduction in control authority in the roll axis results in frequent jet firings and massive fuel consumption. To improve this situation, new guidance and control schemes are desired that provide control with fewer roll firings. Optimal control software was utilized to solve for potential candidates that satisfied desired conditions with the goal of minimizing total propellant. An ISS simulation tool was then used to test these solutions for feasibility. After several problem reformulations, multiple candidate solutions minimizing or completely eliminating roll firings were found. Flight implementation would not only save massive amounts of fuel and thus money, but also reduce ISS wear and tear, thereby extending its lifetime.

  2. Template based parallel checkpointing in a massively parallel computer system

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Inglett, Todd Alan [Rochester, MN

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel super computer system using a parallel variation of the rsync protocol, and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a template checkpoint file that resides in the storage and that was previously produced. Embodiments herein greatly decrease the amount of data that must be transmitted and stored for faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
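
    The rsync-like idea described in the abstract, keeping only the blocks of a node's checkpoint whose checksums differ from a previously produced template, can be sketched as follows. This is a hedged illustration under an assumed block size and hash choice, not the patented implementation.

    # Minimal sketch: template-based checkpoint delta using per-block checksums.
    import hashlib
    import zlib

    BLOCK = 64 * 1024   # assumed block size (64 KiB)

    def blocks(data: bytes):
        for off in range(0, len(data), BLOCK):
            yield off, data[off:off + BLOCK]

    def delta_checkpoint(node_state: bytes, template: bytes) -> dict:
        """Return {offset: compressed block} for blocks that differ from the template."""
        template_sums = {off: hashlib.sha1(b).digest() for off, b in blocks(template)}
        return {off: zlib.compress(b)                    # non-lossy compression
                for off, b in blocks(node_state)
                if template_sums.get(off) != hashlib.sha1(b).digest()}

    def restore(template: bytes, delta: dict) -> bytearray:
        state = bytearray(template)
        for off, comp in delta.items():
            b = zlib.decompress(comp)
            state[off:off + len(b)] = b
        return state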

  3. Continuous and massive intake of chitosan affects mineral and fat-soluble vitamin status in rats fed on a high-fat diet.

    PubMed

    Deuchi, K; Kanauchi, O; Shizukuishi, M; Kobayashi, E

    1995-07-01

    We investigated the effects of continuous and massive intake of chitosan with sodium ascorbate (AsN) on the mineral and the fat-soluble vitamin status in male Sprague-Dawley rats fed on a high-fat diet. The apparent fat digestibility in the chitosan-receiving group was significantly lower than that in the cellulose- or glucosamine-receiving group. Chitosan feeding for 2 weeks caused a decrease in mineral absorption and bone mineral content, and it was necessary to administer twice the amount of Ca in the AIN-76 formula, which was supplemented with AsN, to prevent such a decrease in the bone mineral content. Moreover, the ingestion of chitosan along with AsN led to a marked and rapid decrease in the serum vitamin E level, while such a loss in vitamin E was not observed for rats given glucosamine monomer instead of chitosan.

  4. Data, Meet Compute: NASA's Cumulus Ingest Architecture

    NASA Technical Reports Server (NTRS)

    Quinn, Patrick

    2018-01-01

    NASA's Earth Observing System Data and Information System (EOSDIS) houses nearly 30 PB of critical Earth Science data and, with upcoming missions, is expected to balloon to 200-300 PB over the next seven years. In addition to the massive increase in data collected, researchers and application developers want more and faster access - enabling complex visualizations, long time-series analysis, and cross-dataset research without needing to copy and manage massive amounts of data locally. NASA has looked to the cloud to address these needs, building its Cumulus system to manage the ingest of diverse data in a wide variety of formats into the cloud. In this talk, we look at what Cumulus is from a high level and then take a deep dive into how it manages the complexity and versioning associated with multiple AWS Lambda and ECS microservices communicating through AWS Step Functions across several disparate installations.

  5. Roadmap of optical communications

    NASA Astrophysics Data System (ADS)

    Agrell, Erik; Karlsson, Magnus; Chraplyvy, A. R.; Richardson, David J.; Krummrich, Peter M.; Winzer, Peter; Roberts, Kim; Fischer, Johannes Karl; Savory, Seb J.; Eggleton, Benjamin J.; Secondini, Marco; Kschischang, Frank R.; Lord, Andrew; Prat, Josep; Tomkos, Ioannis; Bowers, John E.; Srinivasan, Sudha; Brandt-Pearce, Maïté; Gisin, Nicolas

    2016-06-01

    Lightwave communications is a necessity for the information age. Optical links provide enormous bandwidth, and the optical fiber is the only medium that can meet the modern society's needs for transporting massive amounts of data over long distances. Applications range from global high-capacity networks, which constitute the backbone of the internet, to the massively parallel interconnects that provide data connectivity inside datacenters and supercomputers. Optical communications is a diverse and rapidly changing field, where experts in photonics, communications, electronics, and signal processing work side by side to meet the ever-increasing demands for higher capacity, lower cost, and lower energy consumption, while adapting the system design to novel services and technologies. Due to the interdisciplinary nature of this rich research field, Journal of Optics has invited 16 researchers, each a world-leading expert in their respective subfields, to contribute a section to this invited review article, summarizing their views on state-of-the-art and future developments in optical communications.

  6. Collisions in primordial star clusters. Formation pathway for intermediate mass black holes

    NASA Astrophysics Data System (ADS)

    Reinoso, B.; Schleicher, D. R. G.; Fellhauer, M.; Klessen, R. S.; Boekholt, T. C. N.

    2018-06-01

    Collisions were suggested to potentially play a role in the formation of massive stars in present day clusters, and have likely been relevant during the formation of massive stars and intermediate mass black holes within the first star clusters. In the early Universe, the first stellar clusters were particularly dense, as fragmentation typically only occurred at densities above 109 cm-3, and the radii of the protostars were enhanced as a result of larger accretion rates, suggesting a potentially more relevant role of stellar collisions. We present here a detailed parameter study to assess how the number of collisions and the mass growth of the most massive object depend on the properties of the cluster. We also characterize the time evolution with three effective parameters: the time when most collisions occur, the duration of the collisions period, and the normalization required to obtain the total number of collisions. We apply our results to typical Population III (Pop. III) clusters of about 1000 M⊙, finding that a moderate enhancement of the mass of the most massive star by a factor of a few can be expected. For more massive Pop. III clusters as expected in the first atomic cooling halos, we expect a more significant enhancement by a factor of 15-32. We therefore conclude that collisions in massive Pop. III clusters were likely relevant to form the first intermediate mass black holes.

  7. Lunar Navigation with Libration Point Orbiters and GPS

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell

    2004-01-01

    NASA is currently studying a Vision for Space Exploration based on spiral development of robotic and piloted missions to the moon and Mars, but research into how to perform such missions has continued ever since the first era of lunar exploration. One area of study that a number of researchers have pursued is libration point navigation and communication relay concepts. These concepts would appear to support many of NASA's current requirements for navigation and communications coverage for human and robotic spacecraft operating in lunar space and beyond. In trading libration point concepts against other options, designers must consider issues such as the number of spacecraft required to provide coverage, insertion and stationkeeping costs, power and data rate requirements, frequency allocations, and many others. The libration points are equilibrium locations for an infinitesimal mass in the rotating coordinate system that follows the motion of two massive bodies in circular orbits about their common barycenter. There are three co-linear points along the line connecting the massive bodies: one between the bodies, one beyond the secondary body, and one beyond the primary body. The relative distances of these points along the line connecting the bodies depend on the mass ratio. There are also two points that form equilateral triangles with the massive bodies. In the idealized problem, motion in the neighborhood of the co-linear points is unstable, while motion near the equilateral points is stable. In the real world, however, the motions are highly perturbed, so a satellite will require stationkeeping maneuvers.
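
    The positions of the three co-linear points can be found numerically from the force balance on the rotating x-axis. The sketch below uses normalized circular-restricted three-body units (separation = 1, total mass = 1) and is an illustrative calculation, not flight-dynamics code.

    # Minimal sketch: locate L1, L2, L3 on the x-axis for mass ratio mu.
    from scipy.optimize import brentq

    def collinear_points(mu):
        """mu = m_secondary / (m_primary + m_secondary); primary at x = -mu,
        secondary at x = 1 - mu. Returns (L1, L2, L3) in normalized units."""
        def f(x):
            return (x
                    - (1 - mu) * (x + mu) / abs(x + mu) ** 3
                    - mu * (x - (1 - mu)) / abs(x - (1 - mu)) ** 3)
        eps = 1e-9
        L1 = brentq(f, -mu + eps, 1 - mu - eps)   # between the bodies
        L2 = brentq(f, 1 - mu + eps, 2.0)         # beyond the secondary
        L3 = brentq(f, -2.0, -mu - eps)           # beyond the primary
        return L1, L2, L3

    # Earth-Moon (mu ~ 0.01215): L1 and L2 lie roughly 15-17% of the Earth-Moon
    # distance on either side of the Moon; L3 lies on the far side of the Earth,
    # roughly one Earth-Moon distance away.
    print(collinear_points(0.01215))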

  8. Very Massive Stars in the Primitive Galaxy, IZw 18

    NASA Technical Reports Server (NTRS)

    Heap, Sara

    2012-01-01

    IZw 18 is a local blue, compact dwarf galaxy that meets the requirements for a primitive galaxy: low halo mass greater than 10^9 Msun, strong photoionizing radiation, no galactic outflow, and very low metallicity, log(O/H)+12 = 7.2. We will describe the properties and evolutionary status of very massive stars in IZw 18, based on UV photometry of individual stars in I Zw 18 and analysis of unresolved ultraviolet spectra of IZw 18-NW obtained with HST.

  9. The Inertia Reaction Force and Its Vacuum Origin

    NASA Astrophysics Data System (ADS)

    Rueda, Alfonso; Haisch, Bernard

    By means of a covariant approach we show that there must be a contribution to the inertial mass and to the inertial reaction force on an accelerated massive object by the zero-point electromagnetic field. This development does not require any detailed model of the accelerated object other than the knowledge that it interacts electromagnetically. It is shown that inertia can indeed be construed as an opposition of the vacuum fields to any change to the uniform state of motion of an object. Interesting insights originating from this result are discussed. It is argued that the proposed existence of a Higgs field in no way contradicts or is at odds with the above statements. The Higgs field is responsible for assigning mass to elementary particles. It is argued that the underlying reason for the opposition to acceleration that massive objects present still requires an explanation. The explanation proposed here fulfills that requirement.

  10. Dilutional hyponatraemia: a cause of massive fatal intraoperative cerebral oedema in a child undergoing renal transplantation.

    PubMed

    Armour, A

    1997-05-01

    A four year old boy with polyuric renal failure resulting from recurrent urinary tract infections and vesicoureteric reflux from birth underwent renal transplantation. In the past he had had five ureteric reimplant operations and a gastrostomy, as he ate nothing by mouth. He required peritoneal dialysis 13 hours a night, six nights a week. His fluid requirements were 2100 ml per day. This included a night feed of 1.5 litres Nutrizon. Before operation he received 900 ml of Dioralyte instead of the Nutrizon feed, and peritoneal dialysis was performed as usual. The operation itself was technically difficult and there was more blood loss than anticipated, requiring intravenous fluids and blood. The operation ended about four hours later but he did not wake up. Urgent computed tomography revealed gross cerebral oedema. He died the next day. At necropsy the brain was massively oedematous and weighed 1680 g.

  11. SN 2006gy: Discovery of the Most Luminous Supernova Ever Recorded, Powered by the Death of an Extremely Massive Star like η Carinae

    NASA Astrophysics Data System (ADS)

    Smith, Nathan; Li, Weidong; Foley, Ryan J.; Wheeler, J. Craig; Pooley, David; Chornock, Ryan; Filippenko, Alexei V.; Silverman, Jeffrey M.; Quimby, Robert; Bloom, Joshua S.; Hansen, Charles

    2007-09-01

    We report the discovery and early observations of the peculiar Type IIn supernova (SN) 2006gy in NGC 1260. With a peak visual magnitude of about -22, it is the most luminous supernova ever recorded. Its very slow rise to maximum took ~70 days, and it stayed brighter than -21 mag for about 100 days. It is not yet clear what powers the enormous luminosity and the total radiated energy of ~10^51 erg, but we argue that any known mechanism (thermal emission, circumstellar interaction, or ^56Ni decay) requires a very massive progenitor star. The circumstellar interaction hypothesis would require truly exceptional conditions around the star, which, in the decades before its death, must have experienced a luminous blue variable (LBV) eruption like the 19th century eruption of η Carinae. However, this scenario fails to explain the weak and unabsorbed soft X-rays detected by Chandra. Radioactive decay of ^56Ni may be a less objectionable hypothesis, but it would imply a large Ni mass of ~22 Msolar, requiring SN 2006gy to have been a pair-instability supernova where the star's core was obliterated. While this is still uncertain, SN 2006gy is the first supernova for which we have good reason to suspect a pair-instability explosion. Based on a number of lines of evidence, we eliminate the hypothesis that SN 2006gy was a "Type IIa" event, that is, a white dwarf exploding inside a hydrogen envelope. Instead, we propose that the progenitor was a very massive evolved object like η Carinae that, contrary to expectations, failed to shed its hydrogen envelope. SN 2006gy implies that some of the most massive stars can explode prematurely during the LBV phase, never becoming Wolf-Rayet stars. SN 2006gy also suggests that they can create brilliant supernovae instead of experiencing ignominious deaths through direct collapse to a black hole. If such a fate is common among the most massive stars, then observable supernovae from Population III stars in the early universe will be more numerous than previously believed.

  12. Massive acetaminophen overdose: effect of hemodialysis on acetaminophen and acetylcysteine kinetics.

    PubMed

    Ghannoum, Marc; Kazim, Sara; Grunbaum, Ami M; Villeneuve, Eric; Gosselin, Sophie

    2016-07-01

    Early onset acidosis from mitochondrial toxicity can be observed in massive acetaminophen poisoning prior to the development of hepatotoxicity. In this context, the efficacy of acetylcysteine to reverse mitochondrial toxicity remains unclear and hemodialysis may offer prompt correction of acidosis. Unfortunately, the toxicokinetics of acetaminophen and acetylcysteine during extracorporeal treatments such as hemodialysis have seldom been described. An 18-year-old woman presented to the emergency department 60 minutes after ingestion of 100 g of acetaminophen and unknown amounts of ibuprofen and ethanol. Initial assessment revealed an agitated patient. Her mental status worsened and she required intubation for airway protection. Investigations showed metabolic acidosis with lactate peaking at 8.6 mmol/L. Liver and coagulation profiles remained normal. The acetaminophen concentration peaked at 981 μg/ml (6496 μmol/L). Pending hemodialysis, the patient received 100 g of activated charcoal and an acetylcysteine infusion at 150 mg/kg over 1 hour, followed by 12.5 mg/kg/h for 4 hours. During hemodialysis, the infusion was maintained at 12.5 mg/kg/h to compensate for expected removal before it was decreased to 6.25 mg/kg for 20 hours after hemodialysis. The patient rapidly improved during hemodialysis and was discharged 48 hours post-admission. The acetaminophen elimination half-life was 5.2 hours prior to hemodialysis, 1.9 hours during hemodialysis and 3.6 hours post hemodialysis. The acetaminophen and acetylcysteine clearances by A-V gradient during hemodialysis were 160.4 ml/min and 190.3 ml/min, respectively. Hemodialysis removed a total of 20.6 g of acetaminophen and 17.9 g of acetylcysteine. This study confirms the high dialyzability of both acetaminophen and acetylcysteine. Hemodialysis appears to be a beneficial therapeutic option in cases of massive acetaminophen ingestion with coma and lactic acidosis. Additionally, these results suggest that the infusion rate of acetylcysteine must be more than doubled during hemodialysis to compensate for its ongoing removal and provide plasma concentrations similar to the usual acetylcysteine regimen.
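    The two kinetic quantities quoted above, elimination half-life and clearance estimated from the arterio-venous (A-V) gradient, follow from standard first-order formulas. The sketch below shows those calculations with invented numbers; the blood flow and concentrations are illustrative assumptions, not data from this case report.

```python
# A minimal sketch of two standard toxicokinetic calculations: clearance from
# an A-V concentration gradient, and half-life from two timed concentrations.
# All numeric inputs below are made up for illustration.
import math

def av_clearance(flow_ml_min: float, c_arterial: float, c_venous: float) -> float:
    """Clearance (ml/min) = blood flow x extraction ratio across the dialyzer."""
    extraction = (c_arterial - c_venous) / c_arterial
    return flow_ml_min * extraction

def half_life_hours(c1: float, c2: float, dt_hours: float) -> float:
    """First-order elimination half-life from two concentrations dt_hours apart."""
    return math.log(2) * dt_hours / math.log(c1 / c2)

print(av_clearance(300.0, 500.0, 340.0))   # e.g. ~96 ml/min for these assumed values
print(half_life_hours(800.0, 400.0, 1.9))  # 1.9 h if the level halves in 1.9 h
```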

  13. Knowledge Discovery and Data Mining in Iran's Climatic Researches

    NASA Astrophysics Data System (ADS)

    Karimi, Mostafa

    2013-04-01

    Advances in measurement technology and data collection mean that databases keep getting larger, and large databases require powerful tools for data analysis. The iterative process of acquiring knowledge from processed data takes place in some form in every scientific field, but when data volumes become large, traditional methods can no longer cope. In recent years the use of databases, and in particular atmospheric databases, has expanded in climatology. In addition, the increasing amount of data generated by climate models poses a challenge for analyses aimed at extracting hidden patterns and knowledge. The approach developed in response to this problem in recent years combines the knowledge discovery process and data mining techniques with concepts from machine learning, artificial intelligence and expert systems. Data mining is an analytical process for mining massive volumes of data; its ultimate goal is information and, finally, knowledge. Climatology is a science that uses varied and massive data, and the goal of climate data mining is to obtain information from varied and massive atmospheric and non-atmospheric data. Knowledge discovery organizes these activities into a logical, predetermined and almost automatic process. The goal of this research is to survey the use of knowledge discovery and data mining techniques in Iranian climate research. To achieve this goal, a descriptive content analysis was carried out, classifying studies by method and topic. The results show that clustering, k-means and Ward's methods are the techniques most often applied in Iranian climate research, and that precipitation and atmospheric circulation patterns are the most common topics. Although several studies of geographic and climatic questions have used statistical techniques such as clustering and pattern extraction, given the distinction between statistics and data mining it cannot yet be said that Iranian climate studies make full use of data mining and knowledge discovery techniques. It is nevertheless necessary to apply the KDD approach and DM techniques in climatic studies, in particular to the interpretation of climate modelling results.
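    As an illustration of the techniques this review found most common in Iranian climate research (k-means and Ward's hierarchical clustering), the sketch below clusters a synthetic station-by-variable climate table. The data, the number of stations, and the choice of four clusters are assumptions made only for demonstration.

```python
# A minimal illustrative sketch (not from the reviewed studies) of k-means and
# Ward's clustering applied to a station-by-variable climate table.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans, AgglomerativeClustering

rng = np.random.default_rng(0)
# Rows: weather stations; columns: e.g. annual precipitation, mean temperature,
# humidity, elevation (synthetic values standing in for real records).
X = rng.normal(size=(120, 4))
X_std = StandardScaler().fit_transform(X)

kmeans_labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)
ward_labels = AgglomerativeClustering(n_clusters=4, linkage="ward").fit_predict(X_std)

print("k-means cluster sizes:", np.bincount(kmeans_labels))
print("Ward cluster sizes:   ", np.bincount(ward_labels))
```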

  14. Massive transfusion: an overview of the main characteristics and potential risks associated with substances used for correction of a coagulopathy.

    PubMed

    Seghatchian, Jerard; Samama, Meyer Michel

    2012-10-01

    Massive transfusion (MT) is an empiric mode of treatment advocated for uncontrolled bleeding and massive haemorrhage, aiming at optimal resuscitation and aggressive correction of coagulopathy. Conventional guidelines recommend early administration of crystalloids and colloids in conjunction with red cells, where the red cell also plays a critical haemostatic function. Plasma and platelets are only used in patients with microvascular bleeding with PT/APTT values >1.5 times the normal values and if PLT counts are below 50×10^9/L. Massive transfusion carries a significant mortality rate (40%), which increases with the number of volume expanders and blood components transfused. Controversies still exist over the optimal ratio of blood components with respect to overall clinical outcomes and collateral damage. Inadequate transfusion is believed to be associated with poor outcomes, but empirical over-transfusion results in unnecessary donor exposure, with an increased rate of sepsis, transfusion overload and infusion of variable amounts of some biological response modifiers (BRMs), which have the potential to cause additional harm. Alternative strategies, such as early use of tranexamic acid, are helpful. However, in trauma settings the use of warm fresh whole blood (WFWB) instead of reconstituted components with a different ratio of stored components might be the most cost-effective and safer option to improve the patient's survival rate and minimise collateral damage. This manuscript, after a brief summary of standard medical intervention in massive transfusion, focuses on the main characteristics of various substances currently available to overcome massive transfusion coagulopathy. The relative levels of some BRMs in fresh and aged blood components of the same origin are highlighted, and some myths and unresolved issues related to massive transfusion practice are discussed. In brief, the coagulopathy in MT is a complex phenomenon, often complicated by chronic activation of coagulation, platelets, complement and vascular endothelial cells, in which haemolysis, microvesiculation, exposure of phosphatidylserine-positive cells, altered red cells with reduced adhesive proteins and the presence of some BRMs could play a pivotal role in the coagulopathy and its untoward effects. The challenges of improving the safety of massive transfusion remain as numerous and as varied as ever. The answer may reside in appropriate studies on designer whole blood, combined with new innovative tools to diagnose a coagulopathy and an evidence-based mode of therapy to establish the optimal survival benefit for patients, always taking into account the concepts of harm reduction and reduction of collateral damage. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. The Rise and Fall of the Type Ib Supernova iPTF13bvn: Not a Massive Wolf-Rayet Star

    NASA Technical Reports Server (NTRS)

    Fremling, C.; Sollerman, J.; Taddia, F.; Ergon, M.; Valenti, S.; Arcavi, I.; Ben-Ami, S.; Cao, Y.; Cenko, S. B.; Filippenko, A. V.; hide

    2014-01-01

    Context. We investigate iPTF13bvn, a core-collapse (CC) supernova (SN) in the nearby spiral galaxy NGC 5806. This object was discovered by the intermediate Palomar Transient Factory (iPTF) very close to the estimated explosion date and was classified as a stripped-envelope CC SN, likely of Type Ib. Furthermore, a possible progenitor detection in pre-explosion Hubble Space Telescope (HST) images was reported, making this the only SN Ib with such an identification. Based on the luminosity and color of the progenitor candidate, as well as on early-time spectra and photometry of the SN, it was argued that the progenitor candidate is consistent with a single, massive Wolf-Rayet (WR) star. Aims. We aim to confirm the progenitor detection, to robustly classify the SN using additional spectroscopy, and to investigate whether our follow-up photometric and spectroscopic data on iPTF13bvn are consistent with a single-star WR progenitor scenario. Methods. We present a large set of observational data, consisting of multi-band light curves (UBVRI, g'r'i'z') and optical spectra. We perform standard spectral line analysis to track the evolution of the SN ejecta. We also construct a bolometric light curve and perform hydrodynamical calculations to model this light curve to constrain the synthesized radioactive nickel mass and the total ejecta mass of the SN. Late-time photometry is analyzed to constrain the amount of oxygen. Furthermore, image registration of pre- and post-explosion HST images is performed. Results. Our HST astrometry confirms the location of the progenitor candidate of iPTF13bvn, and follow-up spectra securely classify this as a SN Ib. We use our hydrodynamical model to fit the observed bolometric light curve, estimating the total ejecta mass to be 1.9 solar masses and the radioactive nickel mass to be 0.05 solar masses. The model fit requires the nickel synthesized in the explosion to be highly mixed out in the ejecta. We also find that the late-time nebular r'-band luminosity is not consistent with predictions based on the expected oxygen nucleosynthesis in very massive stars. Conclusions. We find that our bolometric light curve of iPTF13bvn is not consistent with the previously proposed single massive WR-star progenitor scenario. The total ejecta mass and, in particular, the late-time oxygen emission are both significantly lower than what would be expected from a single WR progenitor with a main-sequence mass of at least 30 solar masses.

  16. An autonomous surface discontinuity detection and quantification method by digital image correlation and phase congruency

    NASA Astrophysics Data System (ADS)

    Cinar, A. F.; Barhli, S. M.; Hollis, D.; Flansbjer, M.; Tomlinson, R. A.; Marrow, T. J.; Mostafavi, M.

    2017-09-01

    Digital image correlation has been routinely used to measure full-field displacements in many areas of solid mechanics, including fracture mechanics. Accurate segmentation of the crack path is needed to study its interaction with the microstructure and stress fields, and studies of crack behaviour, such as the effect of closure or residual stress in fatigue, require data on its opening displacement. Such information can be obtained from any digital image correlation analysis of cracked components, but its collection by manual methods is quite onerous, particularly for massive amounts of data. We introduce the novel application of Phase Congruency to detect and quantify cracks and their opening. Unlike other crack detection techniques, Phase Congruency does not rely on adjustable threshold values that require user interaction, and so allows large datasets to be treated autonomously. The accuracy of the Phase Congruency based algorithm in detecting cracks is evaluated and compared with conventional methods such as Heaviside function fitting. As Phase Congruency is a displacement-based method, it does not suffer from the noise intensification to which gradient-based methods (e.g. strain thresholding) are susceptible. Its application is demonstrated on experimental data for cracks in quasi-brittle (granitic rock) and ductile (aluminium alloy) materials.
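    For contrast with the automated Phase Congruency approach, the sketch below implements the conventional baseline the authors compare against: fitting a Heaviside step plus a linear background to a displacement profile extracted across a crack, with the fitted step height taken as the crack opening displacement. The synthetic profile and noise level are assumptions, and this is not the paper's algorithm.

```python
# A minimal sketch of Heaviside-function fitting on a synthetic 1D displacement
# profile: for each candidate crack position x0, fit a linear background plus a
# step by linear least squares and keep the x0 with the smallest residual.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 200)
true_opening, true_x0 = 0.05, 0.10
u = 0.002 * x + true_opening * (x >= true_x0) + rng.normal(0, 0.003, x.size)

best = None
for x0 in x[5:-5]:                       # candidate step locations
    A = np.column_stack([np.ones_like(x), x, (x >= x0).astype(float)])
    coef, res, *_ = np.linalg.lstsq(A, u, rcond=None)
    rss = float(res[0]) if res.size else float(np.sum((A @ coef - u) ** 2))
    if best is None or rss < best[0]:
        best = (rss, x0, coef[2])

_, est_x0, est_opening = best
print(f"crack at x ~ {est_x0:.3f}, opening ~ {est_opening:.4f} (true {true_opening})")
```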

  17. Accretion of low-metallicity gas by the Milky Way.

    PubMed

    Wakker, B P; Howk, J C; Savage, B D; van Woerden, H; Tufte, S L; Schwarz, U J; Benjamin, R; Reynolds, R J; Peletier, R F; Kalberla, P M

    1999-11-25

    Models of the chemical evolution of the Milky Way suggest that the observed abundances of elements heavier than helium ('metals') require a continuous infall of gas with metallicity (metal abundance) about 0.1 times the solar value. An infall rate integrated over the entire disk of the Milky Way of approximately 1 solar mass per year can solve the 'G-dwarf problem'--the observational fact that the metallicities of most long-lived stars near the Sun lie in a relatively narrow range. This infall dilutes the enrichment arising from the production of heavy elements in stars, and thereby prevents the metallicity of the interstellar medium from increasing steadily with time. However, in other spiral galaxies, the low-metallicity gas needed to provide this infall has been observed only in associated dwarf galaxies and in the extreme outer disk of the Milky Way. In the distant Universe, low-metallicity hydrogen clouds (known as 'damped Ly alpha absorbers') are sometimes seen near galaxies. Here we report a metallicity of 0.09 times solar for a massive cloud that is falling into the disk of the Milky Way. The mass flow associated with this cloud represents an infall per unit area of about the theoretically expected rate, and approximately 0.1-0.2 times the amount required for the whole Galaxy.

  18. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. In this work, we have developed a software platform designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
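    The core idea of distributable block volumes can be sketched in a few lines: partition the volume into fixed-size blocks, hand the blocks to a worker pool, and reassemble the results. The block size, the Gaussian-filter stand-in for a real processing step, and the pool size below are illustrative assumptions; a production system would also handle block overlaps so that neighborhood filters behave correctly at block boundaries.

```python
# A minimal sketch (assumed block size and toy per-block operation) of
# block-volume parallel processing of a 3D image.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.ndimage import gaussian_filter

BLOCK = 64  # assumed block edge length

def process_block(args):
    index, block = args
    # Stand-in for a real per-block 3D processing step.
    return index, gaussian_filter(block, sigma=1.0)

def blocks(volume):
    nz, ny, nx = volume.shape
    for z in range(0, nz, BLOCK):
        for y in range(0, ny, BLOCK):
            for x in range(0, nx, BLOCK):
                yield (z, y, x), volume[z:z+BLOCK, y:y+BLOCK, x:x+BLOCK].copy()

if __name__ == "__main__":
    volume = np.random.rand(128, 128, 128).astype(np.float32)
    result = np.empty_like(volume)
    with ProcessPoolExecutor(max_workers=4) as pool:
        for (z, y, x), out in pool.map(process_block, blocks(volume)):
            result[z:z+out.shape[0], y:y+out.shape[1], x:x+out.shape[2]] = out
    print("processed", result.shape)
```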

  19. Extraction of High Molecular Weight DNA from Fungal Rust Spores for Long Read Sequencing.

    PubMed

    Schwessinger, Benjamin; Rathjen, John P

    2017-01-01

    Wheat rust fungi are complex organisms with a complete life cycle that involves two different host plants and five different spore types. During the asexual infection cycle on wheat, rusts produce massive amounts of dikaryotic urediniospores. These spores are dikaryotic (two nuclei) with each nucleus containing one haploid genome. This dikaryotic state is likely to contribute to their evolutionary success, making them some of the major wheat pathogens globally. Despite this, most published wheat rust genomes are highly fragmented and contain very little haplotype-specific sequence information. Current long-read sequencing technologies hold great promise to provide more contiguous and haplotype-phased genome assemblies. Long reads are able to span repetitive regions and phase structural differences between the haplomes. This increased genome resolution enables the identification of complex loci and the study of genome evolution beyond simple nucleotide polymorphisms. Long-read technologies require pure high molecular weight DNA as an input for sequencing. Here, we describe a DNA extraction protocol for rust spores that yields pure double-stranded DNA molecules with molecular weight of >50 kilo-base pairs (kbp). The isolated DNA is of sufficient purity for PacBio long-read sequencing, but may require additional purification for other sequencing technologies such as Nanopore and 10× Genomics.

  20. Plasma transfusion for patients with severe hemorrhage: what is the evidence?

    PubMed

    Callum, Jeannie L; Rizoli, Sandro

    2012-05-01

    The following review will detail the current knowledge in massive hemorrhage with regard to the pathophysiology of the coagulation disturbance, the role of plasma, the role of alternatives to plasma, and the clinical value of having a massive transfusion protocol. The coagulation disturbance in trauma patients is more than just the result of consumption of clotting factors at sites of injury and dilution from the infusion of intravenous fluids and red blood cells (RBCs). Even before substantial amounts of fluid resuscitation and RBC transfusion, one-quarter of trauma patients already have abnormal coagulation variables. There is an apparent role for the activation of protein C, hypofibrinogenemia, and fibrin(gen)olysis in the coagulation disturbance after trauma and massive hemorrhage. None of these three disturbances would be completely mitigated by the use of plasma alone, suggesting that there may be an opportunity to improve care of these patients with alternative strategies, such as fibrinogen concentrates and antifibrinolytics. Despite numerous retrospective cohort studies evaluating 1:1 plasma to RBC formula-driven resuscitation, the overall clinical value of this approach is unclear. Studies have even raised concerns regarding a potential increase in morbidity associated with this approach, particularly for patients overtriaged to 1:1 where a massive transfusion is unlikely. We also do not have sufficient evidence to recommend either goal-directed therapy with thromboelastography or early use of fibrinogen replacement, with either cryoprecipitate or fibrinogen concentrates. We have high-quality data that argue against the role for recombinant Factor VIIa that should prompt removal of this strategy from existing protocols. In contrast, we have high-level evidence that all bleeding trauma patients should receive tranexamic acid as soon as possible after injury. This therapy must be included in hemorrhage protocols. If we are to improve the care of massively bleeding patients on a firm scientific ground, we will need large-scale randomized trials to delineate the role of coagulation replacement and the utility of laboratory monitoring. But even until these trials are completed, it is clear that a massive transfusion protocol is needed in all hospitals that manage bleeding patients, to ensure a prompt and coordinated response to hemorrhage. © 2012 American Association of Blood Banks.

  1. Health plans keeping drug cost increases in check with programs that promote generics.

    PubMed

    2002-07-01

    To counter the massive amount of drug company detailing and marketing that is partly responsible for driving up pharmaceutical costs, health plans and some independent practice associations are promoting the use of generics to physicians in their networks. While most physicians in capitated contracts don't directly benefit from the movement to encourage generics unless they have pharmacy risk, some health plans are paying physicians financial incentives to increase generic prescribing.

  2. Black Holes Collide

    NASA Image and Video Library

    2017-12-08

    When two black holes collide, they release massive amounts of energy in the form of gravitational waves that last a fraction of a second and can be "heard" throughout the universe - if you have the right instruments. Today we learned that the #LIGO project heard the telltale chirp of black holes colliding, confirming a prediction of Einstein's General Theory of Relativity. NASA's LISA mission will look for direct evidence of gravitational waves. go.nasa.gov/23ZbqoE This video illustrates what that collision might look like.

  3. Probiotics as control agents in aquaculture

    NASA Astrophysics Data System (ADS)

    Geovanny D, Gómez R.; Balcázar, José Luis; Ma, Shen

    2007-01-01

    Infectious diseases constitute a limiting factor in the development of aquaculture production, and their control has concentrated almost solely on the use of antibiotics. However, the massive use of antibiotics for disease control has been questioned because of the acquisition of antibiotic resistance, and the need for alternatives is of prime importance. Probiotics, live microorganisms administered in adequate amounts that confer a health benefit on the host, are emerging as significant microbial food supplements in the field of prophylaxis.

  4. Ultra-fast outflows (aka UFOs) from AGNs and QSOs

    NASA Astrophysics Data System (ADS)

    Cappi, M.; Tombesi, F.; Giustini, M.

    During the last decade, strong observational evidence has been accumulated for the existence of massive, high velocity winds/outflows (aka Ultra Fast Outflows, UFOs) in nearby AGNs and in more distant quasars. Here we briefly review some of the most recent developments in this field and discuss the relevance of UFOs for both understanding the physics of accretion disk winds in AGNs, and for quantifying the global amount of AGN feedback on the surrounding medium.

  5. A physical model of mass ejection in failed supernovae

    NASA Astrophysics Data System (ADS)

    Coughlin, Eric R.; Quataert, Eliot; Fernández, Rodrigo; Kasen, Daniel

    2018-06-01

    During the core collapse of massive stars, the formation of the proto-neutron star is accompanied by the emission of a significant amount of mass-energy (~0.3 M⊙) in the form of neutrinos. This mass-energy loss generates an outward-propagating pressure wave that steepens into a shock near the stellar surface, potentially powering a weak transient associated with an otherwise-failed supernova. We analytically investigate this mass-loss-induced wave generation and propagation. Heuristic arguments provide an accurate estimate of the amount of energy contained in the outgoing sound pulse. We then develop a general formalism for analysing the response of the star to centrally concentrated mass loss in linear perturbation theory. To build intuition, we apply this formalism to polytropic stellar models, finding qualitative and quantitative agreement with simulations and heuristic arguments. We also apply our results to realistic pre-collapse massive star progenitors (both giants and compact stars). Our analytic results for the sound pulse energy, excitation radius, and steepening in the stellar envelope are in good agreement with full time-dependent hydrodynamic simulations. We show that prior to the sound pulse's arrival at the stellar photosphere, the photosphere has already reached velocities ~20-100 per cent of the local sound speed, thus likely modestly decreasing the stellar effective temperature before the star disappears. Our results provide important constraints on the physical properties and observational appearance of failed supernovae.

  6. Stellar Wind Retention and Expulsion in Massive Star Clusters

    NASA Astrophysics Data System (ADS)

    Naiman, J. P.; Ramirez-Ruiz, E.; Lin, D. N. C.

    2018-05-01

    Mass and energy injection throughout the lifetime of a star cluster contributes to the gas reservoir available for subsequent episodes of star formation and the feedback energy budget responsible for ejecting material from the cluster. In addition, mass processed in stellar interiors and ejected as winds has the potential to augment the abundance ratios of currently forming stars, or stars which form at a later time from a retained gas reservoir. Here we present hydrodynamical simulations that explore a wide range of cluster masses, compactnesses, metallicities and stellar population age combinations in order to determine the range of parameter space conducive to stellar wind retention or wind powered gas expulsion in star clusters. We discuss the effects of the stellar wind prescription on retention and expulsion effectiveness, using MESA stellar evolutionary models as a test bed for exploring how the amounts of wind retention/expulsion depend upon the amount of mixing between the winds from stars of different masses and ages. We conclude by summarizing some implications for gas retention and expulsion in a variety of compact (σ_v ≳ 20 km s^-1) star clusters including young massive star clusters (10^5 ≲ M/M⊙ ≲ 10^7, age ≲ 500 Myr), intermediate age clusters (10^5 ≲ M/M⊙ ≲ 10^7, age ≈ 1-4 Gyr), and globular clusters (10^5 ≲ M/M⊙ ≲ 10^7, age ≳ 10 Gyr).

  7. Understanding Spatiotemporal Patterns of Biking Behavior by Analyzing Massive Bike Sharing Data in Chicago

    PubMed Central

    Zhou, Xiaolu

    2015-01-01

    The growing number of bike sharing systems (BSS) in many cities largely facilitates biking for transportation and recreation. Most recent bike sharing systems produce time- and location-specific data, which enables the study of the travel behavior and mobility of each individual. However, despite a rapid growth of interest, studies of massive bike sharing data and the underlying travel patterns are still limited. Few studies have explored and visualized spatiotemporal patterns of bike sharing behavior using flow clustering, or examined station functional profiles based on over-demand patterns. This study investigated the spatiotemporal biking pattern in Chicago by analyzing massive BSS data from July to December in 2013 and 2014. The BSS in Chicago gained popularity over this period: about 15.9% more people subscribed to the service. Specifically, we constructed a bike flow similarity graph and used the fastgreedy algorithm to detect spatial communities of biking flows. Using the proposed methods, we discovered distinct travel patterns on weekdays and weekends, as well as different travel trends for customers and subscribers, from the noisy, massive data. In addition, we examined the temporal demands for bikes and docks using a hierarchical clustering method. The results demonstrate the modeled over-demand patterns in Chicago. This study contributes better knowledge of biking flow patterns, which is difficult to obtain using traditional methods. Given the increasing popularity of BSS and data openness in different cities, the methods used in this study can be extended to examine biking patterns and BSS functionality in other cities. PMID:26445357
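    The flow-clustering step described above can be illustrated with a toy trip table: build a station-to-station graph weighted by trip counts and detect communities with a greedy modularity algorithm (networkx's Clauset-Newman-Moore routine, an analogue of igraph's fastgreedy). The station names and trip counts below are invented for illustration and are not drawn from the Chicago data.

```python
# A minimal sketch of community detection on a bike-flow similarity graph.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

trips = [  # (origin station, destination station, number of trips) - invented
    ("A", "B", 120), ("B", "A", 110), ("A", "C", 90),
    ("D", "E", 150), ("E", "D", 140), ("E", "F", 80),
    ("C", "D", 5),   # weak link between the two groups
]

G = nx.Graph()
for o, d, n in trips:
    w = G[o][d]["weight"] + n if G.has_edge(o, d) else n
    G.add_edge(o, d, weight=w)

communities = greedy_modularity_communities(G, weight="weight")
for i, c in enumerate(communities):
    print(f"community {i}: {sorted(c)}")
```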

  8. Star formation induced by cloud-cloud collisions and galactic giant molecular cloud evolution

    NASA Astrophysics Data System (ADS)

    Kobayashi, Masato I. N.; Kobayashi, Hiroshi; Inutsuka, Shu-ichiro; Fukui, Yasuo

    2018-05-01

    Recent millimeter/submillimeter observations towards nearby galaxies have started to map the whole disk and to identify giant molecular clouds (GMCs) even in the regions between galactic spiral structures. Observed variations of GMC mass functions in different galactic environments indicate that massive GMCs preferentially reside along galactic spiral structures whereas inter-arm regions host many small GMCs. Based on the phase transition dynamics from magnetized warm neutral medium to molecular clouds, Kobayashi et al. (2017, ApJ, 836, 175) propose a semi-analytical evolutionary description for GMC mass functions including a cloud-cloud collision (CCC) process. Their results show that CCC is less dominant in shaping the mass function of GMCs than the accretion of dense H I gas driven by the propagation of supersonic shock waves. However, their formulation does not take into account the possible enhancement of star formation by CCC. Millimeter/submillimeter observations within the Milky Way indicate the importance of CCC in the formation of star clusters and massive stars. In this article, we reformulate the time-evolution equation of Kobayashi et al. (2017, ApJ, 836, 175), largely modifying it so that we additionally compute the star formation subsequently taking place in CCC clouds. Our results suggest that, although CCC events between smaller clouds are more frequent than those between massive GMCs, CCC-driven star formation is mostly driven by massive GMCs ≳ 10^{5.5} M_{⊙} (where M⊙ is the solar mass). The resultant cumulative CCC-driven star formation may amount to a few tens of percent of the total star formation in the Milky Way and nearby galaxies.

  9. The value of automated gel column agglutination technology in the identification of true inherited D blood types in massively transfused patients.

    PubMed

    Summers, Thomas; Johnson, Viviana V; Stephan, John P; Johnson, Gloria J; Leonard, George

    2009-08-01

    Massive transfusion of D- trauma patients in the combat setting involves the use of D+ red blood cells (RBCs) or whole blood along with suboptimal pretransfusion test result documentation. This presents challenges to the transfusion services of the tertiary care military hospitals that ultimately receive these casualties, because initial D typing results may only reflect the transfused RBCs. After patients are stabilized, mixed-field reactions on D typing indicate the patient's true inherited D phenotype. This case series illustrates the utility of automated gel column agglutination in detecting mixed-field reactions in these patients. The transfusion service test results, including the automated gel column agglutination D typing results, of four massively transfused D- patients who received D+ RBCs are presented. To test the sensitivity of the automated gel column agglutination method in detecting mixed-field agglutination reactions, a comparative analysis of three automated technologies using predetermined mixtures of D+ and D- RBCs is also presented. The automated gel column agglutination method detected mixed-field agglutination in D typing in all four patients and in the three prepared control specimens. The automated microwell tube method identified one of the three prepared control specimens as indeterminate, which was subsequently manually confirmed as a mixed-field reaction. The automated solid-phase method was unable to detect any mixed fields. The automated gel column agglutination method provides a sensitive means for detecting mixed-field agglutination reactions in the determination of the true inherited D phenotype of combat casualties transfused with massive amounts of D+ RBCs.

  10. Liquid nitrogen ingestion leading to massive pneumoperitoneum without identifiable gastrointestinal perforation.

    PubMed

    Walsh, Mike J; Tharratt, Steven R; Offerman, Steven R

    2010-06-01

    Liquid nitrogen (LN) ingestion is unusual, but may be encountered by poison centers, emergency physicians, and general surgeons. Unique properties of LN produce a characteristic pattern of injury. A 19-year-old male college student presented to the Emergency Department complaining of abdominal pain and "bloating" after drinking LN. His presentation vital signs were remarkable only for mild tachypnea and tachycardia. On physical examination, he had mild respiratory difficulty due to abdominal distention. His abdomen was tense and distended. Abdominal X-ray studies revealed a massive pneumoperitoneum. At laparotomy, he was found to have a large amount of peritoneal gas. No perforation was identified. After surgery, the patient made an uneventful recovery and was discharged 5 days later. At 2-week clinic follow-up, he was doing well without complications. Nitrogen is a colorless, odorless gas at room temperature. Due to its low boiling point (-195 degrees C), LN rapidly evaporates when in contact with body surface temperatures. Therefore, ingested LN causes damage by two mechanisms: rapid freezing injury upon mucosal contact and rapid volume expansion as nitrogen gas is formed. Patients who ingest LN may develop gastrointestinal perforation and massive pneumoperitoneum. Because rapid gas formation may allow large volumes to escape from tiny perforations, the exact site of perforation may never be identified. In cases of LN ingestion, mucosal injury and rapid gas formation can cause massive pneumoperitoneum. Although laparotomy is recommended for all patients with signs of perforation, the site of injury may never be identified. Copyright 2010 Elsevier Inc. All rights reserved.

  11. Understanding Spatiotemporal Patterns of Biking Behavior by Analyzing Massive Bike Sharing Data in Chicago.

    PubMed

    Zhou, Xiaolu

    2015-01-01

    The growing number of bike sharing systems (BSS) in many cities largely facilitates biking for transportation and recreation. Most recent bike sharing systems produce time- and location-specific data, which enables the study of the travel behavior and mobility of each individual. However, despite a rapid growth of interest, studies of massive bike sharing data and the underlying travel patterns are still limited. Few studies have explored and visualized spatiotemporal patterns of bike sharing behavior using flow clustering, or examined station functional profiles based on over-demand patterns. This study investigated the spatiotemporal biking pattern in Chicago by analyzing massive BSS data from July to December in 2013 and 2014. The BSS in Chicago gained popularity over this period: about 15.9% more people subscribed to the service. Specifically, we constructed a bike flow similarity graph and used the fastgreedy algorithm to detect spatial communities of biking flows. Using the proposed methods, we discovered distinct travel patterns on weekdays and weekends, as well as different travel trends for customers and subscribers, from the noisy, massive data. In addition, we examined the temporal demands for bikes and docks using a hierarchical clustering method. The results demonstrate the modeled over-demand patterns in Chicago. This study contributes better knowledge of biking flow patterns, which is difficult to obtain using traditional methods. Given the increasing popularity of BSS and data openness in different cities, the methods used in this study can be extended to examine biking patterns and BSS functionality in other cities.

  12. No obvious sympathetic excitation after massive levothyroxine overdose: A case report.

    PubMed

    Xue, Jianxin; Zhang, Lei; Qin, Zhiqiang; Li, Ran; Wang, Yi; Zhu, Kai; Li, Xiao; Gao, Xian; Zhang, Jianzhong

    2018-06-01

    Thyrotoxicosis from an overdose of medicinal thyroid hormone is a condition that may be associated with a significant delay in onset of toxicity. However, limited literature is available regarding thyrotoxicosis attributed to excessive ingestion of exogenous thyroid hormone, and most cases described have been pediatric. Herein, we present the course of a patient who ingested a massive amount of levothyroxine without exhibiting obvious symptoms of sympathetic excitation, and review feasible treatment options for such overdoses. A 41-year-old woman with ureteral calculus ingested a massive amount of levothyroxine (120 tablets, equal to 6 mg in total) during her hospitalization. Her vital signs shortly after ingestion were unremarkable except for a markedly increased respiratory rate of 45 breaths per minute. Initial laboratory findings revealed markedly elevated serum levels of thyroxine (T4) >320 nmol/L, free triiodothyronine (fT3) 10.44 pmol/L, and free thyroxine (fT4) >100 pmol/L. The patient had a history of hypothyroidism, which was managed with thyroid hormone replacement (levothyroxine 100 μg per day). She also suffered from systemic lupus erythematosus and chronic pancreatitis. This is a case of excessive ingestion of exogenous thyroid hormone in an adult. The interventions included propranolol to prevent heart failure; hemodialysis to remove excess thyroid hormone from the blood; and close monitoring of vital signs, basal metabolic rate, blood biochemical indicators, and serum thyroid hormone levels. The woman had no obvious symptoms of thyrotoxicosis. After 4 weeks, thyroid function testing indicated that serum thyroid hormone levels had returned to pre-ingestion levels, and levothyroxine was resumed as before. Because of their more complex medical histories and comorbidities, adults often exhibit more severe symptoms than children after ingesting an overdose of levothyroxine. For these patients, hemodialysis should be considered as soon as possible, and treatment tailored to the specific symptoms together with continuous monitoring is indispensable.

  13. mIoT Slice for 5G Systems: Design and Performance Evaluation

    PubMed Central

    Condoluci, Massimo; An, Xueli

    2018-01-01

    Network slicing is a key feature of the upcoming 5G networks allowing the design and deployment of customized communication systems to integrate services provided by vertical industries. In this context, massive Internet of Things (mIoT) is regarded as a compelling use case, both for its relevance from business perspective, and for the technical challenges it poses to network design. With their envisaged massive deployment of devices requiring sporadic connectivity and small data transmission, yet Quality of Service (QoS) constrained, mIoT services will need an ad-hoc end-to-end (E2E) slice, i.e., both access and core network with enhanced Control and User planes (CP/UP). After revising the key requirements of mIoT and identifying major shortcomings of previous generation networks, this paper presents and evaluates an E2E mIoT network slicing solution, featuring a new connectivity model overcoming the load limitations of legacy systems. Unique in its kind, this paper addresses mIoT requirements from an end-to-end perspective highlighting and solving, unlike most prior related work, the connectivity challenges posed to the core network. Results demonstrate that the proposed solution, reducing CP signaling and optimizing UP resource utilization, is a suitable candidate for next generation network standards to efficiently handle massive device deployment. PMID:29466311

  14. mIoT Slice for 5G Systems: Design and Performance Evaluation.

    PubMed

    Trivisonno, Riccardo; Condoluci, Massimo; An, Xueli; Mahmoodi, Toktam

    2018-02-21

    Network slicing is a key feature of the upcoming 5G networks allowing the design and deployment of customized communication systems to integrate services provided by vertical industries. In this context, massive Internet of Things (mIoT) is regarded as a compelling use case, both for its relevance from business perspective, and for the technical challenges it poses to network design. With their envisaged massive deployment of devices requiring sporadic connectivity and small data transmission, yet Quality of Service (QoS) constrained, mIoT services will need an ad-hoc end-to-end (E2E) slice, i.e., both access and core network with enhanced Control and User planes (CP/UP). After revising the key requirements of mIoT and identifying major shortcomings of previous generation networks, this paper presents and evaluates an E2E mIoT network slicing solution, featuring a new connectivity model overcoming the load limitations of legacy systems. Unique in its kind, this paper addresses mIoT requirements from an end-to-end perspective highlighting and solving, unlike most prior related work, the connectivity challenges posed to the core network. Results demonstrate that the proposed solution, reducing CP signaling and optimizing UP resource utilization, is a suitable candidate for next generation network standards to efficiently handle massive device deployment.

  15. The mystery of a supposed massive star exploding in a brightest cluster galaxy

    NASA Astrophysics Data System (ADS)

    Hosseinzadeh, Griffin

    2017-08-01

    Most of the diversity of core-collapse supernovae results from late-stage mass loss by their progenitor stars. Supernovae that interact with circumstellar material (CSM) are a particularly good probe of these last stages of stellar evolution. Type Ibn supernovae are a rare and poorly understood class of hydrogen-poor explosions that show signs of interaction with helium-rich CSM. The leading hypothesis is that they are explosions of very massive Wolf-Rayet stars in which the supernova ejecta excites material previously lost by stellar winds. These massive stars have very short lifetimes, and therefore should only be found in actively star-forming galaxies. However, PS1-12sk is a Type Ibn supernova found on the outskirts of a giant elliptical galaxy. As this is extraordinarily unlikely, we propose to obtain deep UV images of the host environment of PS1-12sk in order to map nearby star formation and/or find a potential unseen star-forming host. If star formation is detected, its amount and location will provide deep insights into the progenitor picture for the poorly understood Type Ibn class. If star formation is still not detected, these observations would challenge the well-accepted hypothesis that these are core-collapse supernovae at all.

  16. Endoscopic management of massive mercury ingestion

    PubMed Central

    Zag, Levente; Berkes, Gábor; Takács, Irma F; Szepes, Attila; Szabó, István

    2017-01-01

    Abstract Rationale: Ingestion of a massive amount of metallic mercury was thought to be harmless until the last century. After that, in a number of cases, mercury ingestion has been associated with appendicitis, impaired liver function, memory deficits, aspiration leading to pneumonitis and acute renal failure. Treatment includes gastric lavage, giving laxatives and chelating agents, but rapid removal of metallic mercury with gastroscopy has not been used. Patient concerns: An 18-year-old man was admitted to our emergency department after drinking 1000 g of metallic mercury as a suicide attempt. Diagnosis: Except from mild umbilical tenderness, he had no other symptoms. Radiography showed a metallic density in the area of the stomach. Intervention: Gastroscopy was performed to remove the mercury. One large pool and several small droplets of mercury were removed from the stomach. Outcomes: Blood and urine mercury levels of the patient remained low during hospitalization. No symptoms of mercury intoxication developed during the follow-up period. Lessons: Massive mercury ingestion may cause several symptoms, which can be prevented with prompt treatment. We used endoscopy to remove the mercury, which shortened the exposure time and minimized the risk of aspiration. This is the first case where endoscopy was used for the management of mercury ingestion. PMID:28562544

  17. [NEII] Line Velocity Structure of Ultracompact HII Regions

    NASA Astrophysics Data System (ADS)

    Okamoto, Yoshiko K.; Kataza, Hirokazu; Yamashita, Takuya; Miyata, Takashi; Sako, Shigeyuki; Honda, Mitsuhiko; Onaka, Takashi; Fujiyoshi, Takuya

    Newly formed massive stars are embedded in their natal molecular clouds and are observed as ultracompact HII regions. They emit strong ionic lines such as [NeII] 12.8 micron. Since Ne is ionized by UV photons of E > 21.6 eV, which is higher than the ionization energy of hydrogen atoms, the line probes the ionized gas near the ionizing stars. This makes it possible to probe gas motion in the vicinity of recently formed massive stars. High angular and spectral resolution observations of the [NeII] line will thus provide significant information on structures (e.g. disks and outflows) generated through massive star formation. We performed [NeII] spectroscopy of ultracompact HII regions using the Cooled Mid-Infrared Camera and Spectrometer (COMICS) on the 8.2 m Subaru Telescope in July 2002. The spatial and spectral resolutions were 0.5 arcsec and 10000, respectively. Among the targets, G45.12+0.13 shows the largest spatial variation in velocity. The brightest area of G45.12+0.13 has the largest line width in the object. The total velocity deviation amounts to 50 km/s (peak to peak) in the observed area. We report the velocity structure of the [NeII] emission of G45.12+0.13 and discuss the gas motion near the ionizing star.

  18. a UV Spectral Library of Metal-Poor Massive Stars

    NASA Astrophysics Data System (ADS)

    Robert, Carmelle

    1994-01-01

    We propose to use the FOS to build a snapshot library of UV spectra of a sample of about 50 metal-poor massive stars located in the Magellanic Clouds. The majority of existing libraries contain spectra of hot stars with chemical abundances close to solar. The high spectral resolution achieved with the FOS will be a major factor in the uniqueness of this new library. UV spectral libraries represent fundamental tools for the study of the massive star populations of young star-forming regions. Massive stars, which are impossible to identify directly in the optical-IR part of a composite spectrum, display, on the other hand, key signatures in the UV region. These signatures are mainly broad, metallicity-dependent spectral features formed in the hot star winds. They require a high spectral resolution (of the order of 200-300 km/s) for an adequate study. A spectral library of metal-poor massive stars also represents a unique source of data for stellar atmosphere analysis. Within less than 10 min we will obtain a high signal-to-noise ratio of at least 30. Finally, since short exposure times are possible, this proposal makes extremely good use of the capabilities of HST. We designed an observing strategy which yields maximum scientific return at a minimum cost of spacecraft time.

  19. The Evolution and Stability of Massive Stars

    NASA Astrophysics Data System (ADS)

    Shiode, Joshua Hajime

    Massive stars are the ultimate source for nearly all the elements necessary for life. The first stars forge these elements from the sparse set of ingredients supplied by the Big Bang, and distribute enriched ashes throughout their galactic homes via their winds and explosive deaths. Subsequent generations follow suit, assembling from the enriched ashes of their predecessors. Over the last several decades, the astrophysics community has developed a sophisticated theoretical picture of the evolution of these stars, but it remains an incomplete accounting of the rich set of observations. Using state of the art models of massive stars, I have investigated the internal processes taking place throughout the life-cycles of stars spanning those from the first generation ("Population III") to the present-day ("Population I"). I will argue that early-generation stars were not highly unstable to perturbations, contrary to a host of past investigations, if a correct accounting is made for the viscous effect of convection. For later generations, those with near solar metallicity, I find that this very same convection may excite gravity-mode oscillations that produce observable brightness variations at the stellar surface when the stars are near the main sequence. If confirmed with modern high-precision monitoring experiments, like Kepler and CoRoT, the properties of observed gravity modes in massive stars could provide a direct probe of the poorly constrained physics of gravity mode excitation by convection. Finally, jumping forward in stellar evolutionary time, I propose and explore an entirely new mechanism to explain the giant eruptions observed and inferred to occur during the final phases of massive stellar evolution. This mechanism taps into the vast nuclear fusion luminosity, and accompanying convective luminosity, in the stellar core to excite waves capable of carrying a super-Eddington luminosity out to the stellar envelope. This energy transfer from the core to the envelope has the potential to unbind a significant amount of mass in close proximity to a star's eventual explosion as a core collapse supernova.

  20. Hyperfast pulsars as the remnants of massive stars ejected from young star clusters

    NASA Astrophysics Data System (ADS)

    Gvaramadze, Vasilii V.; Gualandris, Alessia; Portegies Zwart, Simon

    2008-04-01

    Recent proper motion and parallax measurements for the pulsar PSR B1508+55 indicate a transverse velocity of ~1100 km s^-1, which exceeds earlier measurements for any neutron star. The spin-down characteristics of PSR B1508+55 are typical for a non-recycled pulsar, which implies that the velocity of the pulsar cannot have originated from the second supernova disruption of a massive binary system. The high velocity of PSR B1508+55 can be accounted for by assuming that it received a kick at birth or that the neutron star was accelerated after its formation in the supernova explosion. We propose an explanation for the origin of hyperfast neutron stars based on the hypothesis that they could be the remnants of a symmetric supernova explosion of a high-velocity massive star which attained its peculiar velocity (similar to that of the pulsar) in the course of a strong dynamical three- or four-body encounter in the core of a dense young star cluster. To check this hypothesis, we investigated three dynamical processes involving close encounters between: (i) two hard massive binaries, (ii) a hard binary and an intermediate-mass black hole (IMBH), and (iii) a single star and a hard binary IMBH. We find that main-sequence O-type stars cannot be ejected from young massive star clusters with peculiar velocities high enough to explain the origin of hyperfast neutron stars, but lower mass main-sequence stars or the stripped helium cores of massive stars could be accelerated to hypervelocities. Our explanation for the origin of hyperfast pulsars requires a very dense stellar environment of the order of 10^6-10^7 stars pc^-3. Although such high densities may exist during the core collapse of young massive star clusters, we caution that they have never been observed.

  1. Applications of massively parallel computers in telemetry processing

    NASA Technical Reports Server (NTRS)

    El-Ghazawi, Tarek A.; Pritchard, Jim; Knoble, Gordon

    1994-01-01

    Telemetry processing refers to the reconstruction of full-resolution raw instrumentation data with the artifacts of space and ground recording and transmission removed. Being the first processing phase of satellite data, this process is also referred to as level-zero processing. This study is aimed at investigating the use of massively parallel computing technology to provide level-zero processing for spaceflights that adhere to the recommendations of the Consultative Committee for Space Data Systems (CCSDS). The workload characteristics of level-zero processing are used to identify processing requirements in high-performance computing systems. An example of level-zero functions on a SIMD MPP, such as the MasPar, is discussed. The requirements in this paper are based in part on the Earth Observing System (EOS) Data and Operation System (EDOS).
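    A representative low-level operation in CCSDS-conformant level-zero processing is decoding the 6-byte space packet primary header so that packets can be routed by application process identifier (APID) and checked for sequence-count gaps. The sketch below shows that decode; the sample bytes are fabricated, and this is not code from the EDOS system referenced above.

```python
# A minimal sketch of decoding the CCSDS space packet primary header.
import struct
from dataclasses import dataclass

@dataclass
class PrimaryHeader:
    version: int
    packet_type: int
    sec_hdr_flag: int
    apid: int
    seq_flags: int
    seq_count: int
    data_length: int  # CCSDS stores (length of the packet data field - 1)

def parse_primary_header(buf: bytes) -> PrimaryHeader:
    w0, w1, w2 = struct.unpack(">HHH", buf[:6])  # three big-endian 16-bit words
    return PrimaryHeader(
        version=(w0 >> 13) & 0x7,
        packet_type=(w0 >> 12) & 0x1,
        sec_hdr_flag=(w0 >> 11) & 0x1,
        apid=w0 & 0x7FF,
        seq_flags=(w1 >> 14) & 0x3,
        seq_count=w1 & 0x3FFF,
        data_length=w2,
    )

sample = bytes([0x08, 0x64, 0xC0, 0x2A, 0x00, 0x0F])  # fabricated header bytes
print(parse_primary_header(sample))
```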

  2. Humanitarian health computing using artificial intelligence and social media: A narrative literature review.

    PubMed

    Fernandez-Luque, Luis; Imran, Muhammad

    2018-06-01

    According to the World Health Organization (WHO), over 130 million people are in constant need of humanitarian assistance due to natural disasters, disease outbreaks, and conflicts, among other factors. These health crises can compromise the resilience of healthcare systems, which are essential for achieving the health objectives of the sustainable development goals (SDGs) of the United Nations (UN). During a humanitarian health crisis, rapid and informed decision making is required. This is often challenging due to information scarcity, limited resources, and strict time constraints. Moreover, the traditional approach to digital health development, which involves a substantial requirement analysis, a feasibility study, and deployment of technology, is ill-suited for many crisis contexts. The emergence of Web 2.0 technologies and social media platforms in the past decade, such as Twitter, has created a new paradigm of massive information and misinformation, in which new technologies need to be developed to aid rapid decision making during humanitarian health crises. Humanitarian health crises increasingly require the analysis of massive amounts of information produced by different sources, such as social media content, and, hence, they are a prime case for the use of artificial intelligence (AI) techniques to help identify relevant information and make it actionable. To identify challenges and opportunities for using AI in humanitarian health crises, we reviewed the literature on the use of AI techniques to process social media. We performed a narrative literature review aimed at identifying examples of the use of AI in humanitarian health crises. Our search strategy was designed to give a broad overview of the different applications of AI in a humanitarian health crisis and their challenges. A total of 1459 articles were screened, and 24 articles were included in the final analysis. Successful case studies of AI applications in humanitarian health crises have been reported, for example in outbreak detection. A commonly shared concern in the reviewed literature is the technical challenge of analyzing large amounts of data in real time. Data interoperability, which is essential to data sharing, is also a barrier with regard to the integration of online and traditional data sources. Human and organizational aspects that might be key factors for the adoption of AI and social media remain understudied. There is also a publication bias toward high-income countries, as we identified few examples in low-income countries. Further, we did not identify any examples from certain types of major crisis, such as armed conflicts, in which misinformation might be more common. The feasibility of using AI to extract valuable information during a humanitarian health crisis has been proven in many cases. There is a lack of research on how to integrate the use of AI into the workflow and large-scale deployment of humanitarian aid during a health crisis. Copyright © 2018 Elsevier B.V. All rights reserved.
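    The basic text-classification task underlying much of this work, flagging crisis-relevant social media messages, can be sketched with a standard bag-of-words pipeline. The five example messages and labels below are invented; real deployments require large annotated corpora, multilingual handling, and careful evaluation.

```python
# A minimal illustrative sketch (not from the reviewed papers) of classifying
# messages as crisis-relevant or not with TF-IDF features and logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "urgent: clinic in district 4 out of oral rehydration salts",
    "roads to the northern camp flooded, ambulances cannot pass",
    "great match last night, what a goal",
    "new cholera cases reported near the river settlement",
    "selling concert tickets, message me",
]
labels = [1, 1, 0, 1, 0]  # 1 = crisis-relevant, 0 = not relevant (invented)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["water shortage reported at the eastern shelter"]))
```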

  3. The development of a massive open online course during the 2014-15 Ebola virus disease epidemic.

    PubMed

    Evans, Dabney P; Luffy, Samantha M; Parisi, Stephanie; Del Rio, Carlos

    2017-09-01

    Timely training was urgently needed at the onset of the 2014 Ebola virus disease epidemic. Massive open online courses (MOOCs) have grown in popularity, though little is known about their utility in time-sensitive situations, including infectious disease outbreaks. We created the first English language massive open online course on Ebola virus disease. Designed by a team representing various units of Emory University and six partner institutions, the six-module course was aimed at a global general audience but was also relevant for health care professionals. Over 7,000 learners from 170 countries participated in the initial course offering. More than a third of learners were from emerging economies, including seven percent from Africa, and another 13% from countries outside the United States that received individuals requiring treatment for Ebola virus disease. Creating and producing the first English language MOOC on EVD in a short time period required effective collaboration and strong coordination between subject matter and course development experts from Emory. Through these collaborative efforts, the development team was able to provide urgently needed training and educational materials while the epidemic of EVD continued to radiate through West Africa. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Block iterative restoration of astronomical images with the massively parallel processor

    NASA Technical Reports Server (NTRS)

    Heap, Sara R.; Lindler, Don J.

    1987-01-01

    A method is described for algebraic image restoration capable of treating astronomical images. For a typical 500 x 500 image, direct algebraic restoration would require the solution of a 250,000 x 250,000 linear system. The block iterative approach is used to reduce the problem to solving 4900 121 x 121 linear systems. The algorithm was implemented on the Goddard Massively Parallel Processor, which can solve a 121 x 121 system in approximately 0.06 seconds. Examples are shown of the results for various astronomical images.
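
    The abstract above gives the problem sizes but not the mechanics, so the following is a minimal NumPy sketch of the block-iterative idea: solve a small dense linear system per tile while treating the rest of the image, taken from the current estimate, as a known boundary contribution. The point-spread function, block size, and Gauss-Seidel-style sweep are illustrative assumptions, not Heap and Lindler's MPP code.

```python
import numpy as np
from scipy.ndimage import convolve

def block_iterative_restore(blurred, psf, block=11, n_iter=10):
    """Algebraic restoration by repeatedly solving small per-tile systems."""
    est = blurred.astype(float).copy()
    ny, nx = blurred.shape

    # Dense blur matrix for one isolated (block x block) tile, built once.
    npix = block * block
    A = np.zeros((npix, npix))
    for k in range(npix):
        e = np.zeros((block, block))
        e.flat[k] = 1.0
        A[:, k] = convolve(e, psf, mode="constant").ravel()

    for _ in range(n_iter):
        for y0 in range(0, ny - block + 1, block):
            for x0 in range(0, nx - block + 1, block):
                tile = (slice(y0, y0 + block), slice(x0, x0 + block))
                # Remove the blur contribution of everything outside the tile,
                # evaluated with the current estimate, then solve locally.
                outside = est.copy()
                outside[tile] = 0.0
                rhs = blurred[tile] - convolve(outside, psf, mode="constant")[tile]
                sol, *_ = np.linalg.lstsq(A, rhs.ravel(), rcond=None)
                est[tile] = sol.reshape(block, block)
    return est
```

    With block=11 each local system is 121 x 121, matching the sizes quoted above; the tile solves can also be decoupled (Jacobi-style) so they run in parallel, which is what makes this scheme attractive on a massively parallel processor.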

  5. Treatment Strategy for Irreparable Rotator Cuff Tears

    PubMed Central

    Oh, Joo Han; Rhee, Sung Min

    2018-01-01

    Recently, patients with shoulder pain have increased rapidly. Of all shoulder disorders, rotator cuff tears (RCTs) are most prevalent in the middle-aged and older adults, which is the primary reason for shoulder surgery in the population. Some authors have reported that up to 30% of total RCTs can be classified as irreparable due to the massive tear size and severe muscle atrophy. In this review article, we provide an overview of treatment methods for irreparable massive RCTs and discuss proper surgical strategies for RCTs that require operative management. PMID:29854334

  6. Massive Black Hole Mergers: Can We "See" what LISA will "Hear"?

    NASA Technical Reports Server (NTRS)

    Centrella, Joan

    2010-01-01

    The final merger of massive black holes produces strong gravitational radiation that can be detected by the space-borne LISA. If the black hole merger takes place in the presence of gas and magnetic fields, various types of electromagnetic signals may also be produced. Modeling such electromagnetic counterparts of the final merger requires evolving the behavior of both gas and fields in the strong-field regions around the black holes. We will review current efforts to simulate these systems, and discuss possibilities for observing the electromagnetic signals they produce.

  7. The inner structure of very massive elliptical galaxies: implications for the inside-out formation mechanism of z ~ 2 galaxies

    NASA Astrophysics Data System (ADS)

    Tiret, O.; Salucci, P.; Bernardi, M.; Maraston, C.; Pforr, J.

    2011-03-01

    We analyse a sample of 23 supermassive elliptical galaxies (central velocity dispersion larger than 330 km s^-1) drawn from the Sloan Digital Sky Survey. For each object, we estimate the dynamical mass from the light profile and central velocity dispersion, and compare it with the stellar mass derived from stellar population models. We show that these galaxies are dominated by luminous matter within the radius for which the velocity dispersion is measured. We find that the sizes and stellar masses are tightly correlated, with R_e ∝ M_*^1.1, making the mean density within the de Vaucouleurs radius a steeply declining function of M_*: ρ_e ∝ M_*^-2.2. These scalings are easily derived from the virial theorem if one recalls that this sample has essentially fixed (but large) σ_0. In contrast, the mean density within 1 kpc is almost independent of M_*, at a value that is in good agreement with recent studies of z ~ 2 galaxies. The fact that the mass within 1 kpc has remained approximately unchanged suggests assembly histories that were dominated by minor mergers - but we discuss why this is not the unique way to achieve this. Moreover, the total stellar mass of the objects in our sample is typically a factor of ~5 larger than that in the high-redshift (z ~ 2) sample, an amount which seems difficult to achieve. If our galaxies are the evolved objects of the recent high-redshift studies, then we suggest that major mergers are required at z ≳ 1.5 and that minor mergers become the dominant growth mechanism for massive galaxies at z ≲ 1.5.
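
    The virial argument mentioned above can be written out in two lines; the sketch below is an illustrative back-of-envelope consistency check (assuming the stellar mass tracks the dynamical mass), not a reproduction of the paper's analysis:

```latex
M_{\rm dyn} \;\propto\; \sigma_0^{2} R_e
\quad\Longrightarrow\quad
R_e \;\propto\; M_{*} \;\;(\sigma_0 \ \text{fixed},\ M_{*} \simeq M_{\rm dyn}),
\qquad
\rho_e \;\propto\; \frac{M_{*}}{R_e^{3}} \;\propto\; M_{*}^{\,1-3(1.1)} \;=\; M_{*}^{-2.3} \;\approx\; M_{*}^{-2.2}.
```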

  8. Generating Southern Africa Precipitation Forecast Using the FEWS Engine, a New Application for the Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Landsfeld, M. F.; Hegewisch, K.; Daudert, B.; Morton, C.; Husak, G. J.; Friedrichs, M.; Funk, C. C.; Huntington, J. L.; Abatzoglou, J. T.; Verdin, J. P.

    2016-12-01

    The Famine Early Warning Systems Network (FEWS NET) focuses on food insecurity in developing nations and provides objective, evidence-based analysis to help government decision-makers and relief agencies plan for and respond to humanitarian emergencies. The network of FEWS NET analysts and scientists require flexible, interactive tools to aid in their monitoring and research efforts. Because they often work in bandwidth-limited regions, lightweight Internet tools and services that bypass the need for downloading massive datasets are preferred for their work. To support food security analysis FEWS NET developed a custom interface for the Google Earth Engine (GEE). GEE is a platform developed by Google to support scientific analysis of environmental data in their cloud computing environment. This platform allows scientists and independent researchers to mine massive collections of environmental data, leveraging Google's vast computational resources for purposes of detecting changes and monitoring the Earth's surface and climate. GEE hosts an enormous amount of satellite imagery and climate archives, one of which is the Climate Hazards Group Infrared Precipitation with Stations dataset (CHIRPS). CHIRPS precipitation dataset is a key input for FEWS NET monitoring and forecasting efforts. In this talk we introduce the FEWS Engine interface. We present an application that highlights the utility of FEWS Engine for forecasting the upcoming seasonal precipitation of southern Africa. Specifically, the current state of ENSO is assessed and used to identify similar historical seasons. The FEWS Engine compositing tool is used to examine rainfall and other environmental data for these analog seasons. The application illustrates the unique benefits of using FEWS Engine for on-the-fly food security scenario development.
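
    A hedged sketch of the kind of analysis described above, written against the public Earth Engine Python API rather than the FEWS Engine interface itself: it composites CHIRPS rainfall over a rough southern-Africa box for a set of analog seasons. The dataset ID, region, analog years, and October-March season definition are illustrative assumptions.

```python
import ee

ee.Initialize()

# Assumed asset ID for CHIRPS pentad precipitation in the Earth Engine catalog.
chirps = ee.ImageCollection("UCSB-CHG/CHIRPS/PENTAD").select("precipitation")
region = ee.Geometry.Rectangle([10.0, -35.0, 41.0, -10.0])   # rough southern Africa
analog_years = [1997, 2002, 2009, 2015]                      # e.g. similar ENSO states

def season_total(year):
    """October-March rainy-season total starting in the given year."""
    start = ee.Date.fromYMD(year, 10, 1)
    return chirps.filterDate(start, start.advance(6, "month")).sum()

# Composite (mean) of the seasonal totals over the analog years.
composite = ee.ImageCollection([season_total(y) for y in analog_years]).mean()

# Regional mean as a quick numeric summary; a map layer would use the same image.
stats = composite.reduceRegion(reducer=ee.Reducer.mean(),
                               geometry=region, scale=5000, maxPixels=1e9)
print(stats.getInfo())
```

    The computation runs entirely on Google's servers; only the small summary (or map tiles) travels to the analyst, which is the property that makes this style of tool usable from bandwidth-limited field offices.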

  9. Altitude Wind Tunnel at the NACA’s Aircraft Engine Research Laboratory

    NASA Image and Video Library

    1945-06-21

    Two men on top of the Altitude Wind Tunnel (AWT) at the National Advisory Committee for Aeronautics (NACA) Aircraft Engine Research Laboratory. The tunnel was a massive rectangular structure, which for years provided one of the highest vantage points on the laboratory. The tunnel was 263 feet long on the north and south legs and 121 feet long on the east and west sides. The larger west end of the tunnel, seen here, was 51 feet in diameter. The east side of the tunnel was 31 feet in diameter at the southeast corner and 27 feet in diameter at the northeast. The throat section, which connected the northwest corner to the test section, narrowed sharply from 51 to 20 feet in diameter. The AWT’s altitude simulation required temperature and pressure fluctuations that made the design of the shell more difficult than other tunnels. The simultaneous decrease in both pressure and temperature inside the facility produced uneven stress loads, particularly on the support rings. The steel used in the primary tunnel structure was one inch thick to ensure that the shell did not collapse as the internal air pressure was dropped to simulate high altitudes. It was a massive amount of steel considering the World War II shortages. The shell was covered with several inches of fiberglass insulation to retain the refrigerated air and a thinner outer steel layer to protect the insulation against the weather. A unique system of rollers was used between the shell and its support piers. These rollers allowed for movement as the shell expanded or contracted during the altitude simulations. Certain sections would move as much as five inches during operation.

  10. A Bootstrap Approach to Martian Manufacturing

    NASA Technical Reports Server (NTRS)

    Dorais, Gregory A.

    2004-01-01

    In-Situ Resource Utilization (ISRU) is an essential element of any affordable strategy for a sustained human presence on Mars. Ideally, Martian habitats would be extremely massive to allow plenty of room to comfortably live and work, as well as to protect the occupants from the environment. Moreover, transportation and power generation systems would also require significant mass if affordable. For our approach to ISRU, we use the industrialization of the U.S. as a metaphor. The 19th century started with small blacksmith shops and ended with massive steel mills primarily accomplished by blacksmiths increasing their production capacity and product size to create larger shops, which produced small mills, which produced the large steel mills that industrialized the country. Most of the mass of a steel mill is comprised of steel in simple shapes, which are produced and repaired with few pieces of equipment also mostly made of steel in basic shapes. Due to this simplicity, we expect that the 19th century manufacturing growth can be repeated on Mars in the 21st century using robots as the primary labor force. We suggest a "bootstrap" approach to manufacturing on Mars that uses a "seed" manufacturing system that uses regolith to create major structural components and spare parts. The regolith would be melted, foamed, and sintered as needed to fabricate parts using casting and solid freeform fabrication techniques. Complex components, such as electronics, would be brought from Earth and integrated as needed. These parts would be assembled to create additional manufacturing systems, which can be both more capable and higher capacity. These subsequent manufacturing systems could refine vast amounts of raw materials to create large components, as well as assemble equipment, habitats, pressure vessels, cranes, pipelines, railways, trains, power generation stations, and other facilities needed to economically maintain a sustained human presence on Mars.

  11. A hierarchical pyramid method for managing large-scale high-resolution drainage networks extracted from DEM

    NASA Astrophysics Data System (ADS)

    Bai, Rui; Tiejian, Li; Huang, Yuefei; Jiaye, Li; Wang, Guangqian; Yin, Dongqin

    2015-12-01

    The increasing resolution of Digital Elevation Models (DEMs) and the development of drainage network extraction algorithms make it possible to develop high-resolution drainage networks for large river basins. These vector networks contain massive numbers of river reaches with associated geographical features, including topological connections and topographical parameters. These features create challenges for efficient map display and data management. Of particular interest are the requirements of data management for multi-scale hydrological simulations using multi-resolution river networks. In this paper, a hierarchical pyramid method is proposed, which generates coarsened vector drainage networks from the originals iteratively. The method is based on the Horton-Strahler's (H-S) order schema. At each coarsening step, the river reaches with the lowest H-S order are pruned, and their related sub-basins are merged. At the same time, the topological connections and topographical parameters of each coarsened drainage network are inherited from the former level using formulas that are presented in this study. The method was applied to the original drainage networks of a watershed in the Huangfuchuan River basin extracted from a 1-m-resolution airborne LiDAR DEM and applied to the full Yangtze River basin in China, which was extracted from a 30-m-resolution ASTER GDEM. In addition, a map-display and parameter-query web service was published for the Mississippi River basin, and its data were extracted from the 30-m-resolution ASTER GDEM. The results presented in this study indicate that the developed method can effectively manage and display massive amounts of drainage network data and can facilitate multi-scale hydrological simulations.
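
    As a minimal sketch of one coarsening step (not the paper's formulas), the code below prunes the reaches with the lowest Horton-Strahler order and folds their sub-basin areas into the first surviving reach downstream. The Reach structure, the id-based network representation, and the simple area-summing rule are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Reach:
    order: int                 # Horton-Strahler (H-S) order
    downstream: Optional[int]  # id of the reach this one drains into (None = outlet)
    area_km2: float            # sub-basin area attributed to this reach

def coarsen_once(network: Dict[int, Reach]) -> Dict[int, Reach]:
    """One pyramid level: prune the lowest-order reaches and merge their
    sub-basin areas into the reaches that survive."""
    lowest = min(r.order for r in network.values())
    kept = {rid: Reach(r.order, r.downstream, r.area_km2)
            for rid, r in network.items() if r.order > lowest}

    for rid, r in network.items():
        if r.order > lowest:
            continue
        d = r.downstream
        while d is not None and network[d].order == lowest:
            d = network[d].downstream          # skip over other pruned reaches
        if d is not None:
            kept[d].area_km2 += r.area_km2     # inherit the pruned sub-basin area

    return kept

# The full pyramid is just repeated coarsening:
#   levels = [network]
#   while len(levels[-1]) > 1:
#       levels.append(coarsen_once(levels[-1]))
```

    Other inherited attributes (reach length, slope, topological pointers) would be updated with rules analogous to the area merge; the paper derives specific formulas for those, which this sketch does not reproduce.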

  12. Bioactive Natural Products Prioritization Using Massive Multi-informational Molecular Networks.

    PubMed

    Olivon, Florent; Allard, Pierre-Marie; Koval, Alexey; Righi, Davide; Genta-Jouve, Gregory; Neyts, Johan; Apel, Cécile; Pannecouque, Christophe; Nothias, Louis-Félix; Cachet, Xavier; Marcourt, Laurence; Roussi, Fanny; Katanaev, Vladimir L; Touboul, David; Wolfender, Jean-Luc; Litaudon, Marc

    2017-10-20

    Natural products represent an inexhaustible source of novel therapeutic agents. Their complex and constrained three-dimensional structures endow these molecules with exceptional biological properties, thereby giving them a major role in drug discovery programs. However, the search for new bioactive metabolites is hampered by the chemical complexity of the biological matrices in which they are found. The purification of single constituents from such matrices requires such a significant amount of work that it should be ideally performed only on molecules of high potential value (i.e., chemical novelty and biological activity). Recent bioinformatics approaches based on mass spectrometry metabolite profiling methods are beginning to address the complex task of compound identification within complex mixtures. However, in parallel to these developments, methods providing information on the bioactivity potential of natural products prior to their isolation are still lacking and are of key interest to target the isolation of valuable natural products only. In the present investigation, we propose an integrated analysis strategy for bioactive natural products prioritization. Our approach uses massive molecular networks embedding various informational layers (bioactivity and taxonomical data) to highlight potentially bioactive scaffolds within the chemical diversity of crude extracts collections. We exemplify this workflow by targeting the isolation of predicted active and nonactive metabolites from two botanical sources (Bocquillonia nervosa and Neoguillauminia cleopatra) against two biological targets (Wnt signaling pathway and chikungunya virus replication). Eventually, the detection and isolation processes of a daphnane diterpene orthoester and four 12-deoxyphorbols inhibiting the Wnt signaling pathway and exhibiting potent antiviral activities against the CHIKV virus are detailed. Combined with efficient metabolite annotation tools, this bioactive natural products prioritization pipeline proves to be efficient. Implementation of this approach in drug discovery programs based on natural extract screening should speed up and rationalize the isolation of bioactive natural products.

  13. Demography of SDSS Early-type Galaxies from the Perspective of Radial Color Gradients

    NASA Astrophysics Data System (ADS)

    Suh, Hyewon; Jeong, H.; Oh, K.; Yi, S. K.; Ferreras, I.; Schawinski, K.

    2010-01-01

    We have investigated the radial g-r color gradients of early-type galaxies in the Sloan Digital Sky Survey (SDSS) DR6 in the redshift range 0.00 < z < 0.06. The majority of massive early-type galaxies show a negative color gradient (centers being redder). On the other hand, roughly 30 percent of the galaxies in this sample show positive color gradients (centers being bluer). These positive-gradient galaxies often show strong Hβ absorption line strengths and/or emission line ratios that are consistent with containing young stellar populations. Combining the optical data with Galaxy Evolution Explorer (GALEX) UV photometry, we find that all positive-gradient galaxies show blue UV-optical colors. This implies that the residual star formation in early-type galaxies is centrally concentrated. These positive-gradient galaxies tend to live in lower density regions. They are also a bit more likely to have a late-type companion galaxy, hinting at a possible role of interactions with a gas-rich companion. A simplistic population analysis shows that these positive color gradients are visible only for half a billion years after a star burst. Moreover, the positive-gradient galaxies occupy different regions in the fundamental planes from the outnumbering negative-gradient galaxies. However, the positions of the positive-gradient galaxies on the fundamental planes cannot be attributed to any reasonable amount of recent star formation alone but require substantially lower velocity dispersions to begin with. Our results based on the optical data are consistent with the residual star formation interpretation which was based on the GALEX UV data. A low-level residual star formation seems continuing in most of the less-massive early-type galaxies in their centers.

  14. A Lightweight I/O Scheme to Facilitate Spatial and Temporal Queries of Scientific Data Analytics

    NASA Technical Reports Server (NTRS)

    Tian, Yuan; Liu, Zhuo; Klasky, Scott; Wang, Bin; Abbasi, Hasan; Zhou, Shujia; Podhorszki, Norbert; Clune, Tom; Logan, Jeremy; Yu, Weikuan

    2013-01-01

    In the era of petascale computing, more scientific applications are being deployed on leadership-scale computing platforms to enhance scientific productivity. Many I/O techniques have been designed to address the growing I/O bottleneck on large-scale systems by handling massive scientific data in a holistic manner. While such techniques have been leveraged in a wide range of applications, they have not been shown to be adequate for many mission-critical applications, particularly in the data post-processing stage. One example is that some scientific applications generate datasets composed of a vast number of small data elements that are organized along many spatial and temporal dimensions but require sophisticated data analytics on one or more dimensions. Including such dimensional knowledge in the data organization can benefit the efficiency of data post-processing, which is often missing from existing I/O techniques. In this study, we propose a novel I/O scheme named STAR (Spatial and Temporal AggRegation) to enable high-performance data queries for scientific analytics. STAR is able to dive into the massive data, identify the spatial and temporal relationships among data variables, and accordingly organize them into an optimized multi-dimensional data structure before writing to storage. This technique not only facilitates the common access patterns of data analytics, but also further reduces the application turnaround time. In particular, STAR is able to enable efficient data queries along the time dimension, a practice common in scientific analytics but not yet supported by existing I/O techniques. In our case study with a critical climate modeling application, GEOS-5, the experimental results on the Jaguar supercomputer demonstrate an improvement of up to 73 times in read performance compared to the original I/O method.
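
    The core idea, repacking many small per-step records into one chunked multi-dimensional array so that a time-series query touches few contiguous chunks, can be illustrated with HDF5 via h5py as a stand-in; this is not the STAR implementation, and the variable name, sizes, and chunk shape are assumptions.

```python
import numpy as np
import h5py

nt, ny, nx = 240, 180, 360     # time steps, latitude, longitude (illustrative sizes)

# Stand-in for "many small elements": one 2-D field produced per time step.
fields = [np.random.rand(ny, nx).astype("f4") for _ in range(nt)]

with h5py.File("reorganized.h5", "w") as f:
    # Chunk along the full time axis so a per-pixel time series is one chunk read.
    dset = f.create_dataset("temperature", shape=(nt, ny, nx), dtype="f4",
                            chunks=(nt, 8, 8))
    for t, field in enumerate(fields):
        dset[t, :, :] = field   # aggregation: pack the small writes into one array

# A temporal query (time series at a single grid point) now hits one chunk.
with h5py.File("reorganized.h5", "r") as f:
    series = f["temperature"][:, 90, 180]
    print(series.shape)        # (240,)
```

    Writing the same data as one object per time step would force a time-series read to open hundreds of files; choosing the on-disk layout to match the expected query pattern is the design decision the STAR scheme automates at scale.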

  15. Relation between Birth Weight and Intraoperative Hemorrhage during Cesarean Section in Pregnancy with Placenta Previa

    PubMed Central

    Ishibashi, Hiroki; Takano, Masashi; Sasa, Hidenori; Furuya, Kenichi

    2016-01-01

    Background Placenta previa, one of the most severe obstetric complications, carries an increased risk of intraoperative massive hemorrhage. Several risk factors for intraoperative hemorrhage have been identified to date. However, the correlation between birth weight and intraoperative hemorrhage has not been investigated. Here we estimate the correlation between birth weight and the occurrence of intraoperative massive hemorrhage in placenta previa. Materials and Methods We included all 256 singleton pregnancies delivered via cesarean section at our hospital because of placenta previa between 2003 and 2015. We calculated not only measured birth weights but also standard deviation values according to the Japanese standard growth curve to adjust for differences in gestational age. We assessed the correlation between birth weight and the occurrence of intraoperative massive hemorrhage (>1500 mL blood loss). Receiver operating characteristic curves were constructed to determine the cutoff value for intraoperative massive hemorrhage. Results Of 256 pregnant women with placenta previa, 96 (38%) developed intraoperative massive hemorrhage. Receiver operating characteristic curves revealed that the area under the curve for the combination of variables between the standard deviation of birth weight and intraoperative massive hemorrhage was 0.71. The cutoff value with a sensitivity of 81.3% and specificity of 55.6% was −0.33 standard deviation. The multivariate analysis revealed that a standard deviation of >−0.33 (odds ratio, 5.88; 95% confidence interval, 3.04–12.00), need for hemostatic procedures (odds ratio, 3.31; 95% confidence interval, 1.79–6.25), and placental adhesion (odds ratio, 12.68; 95% confidence interval, 2.85–92.13) were independent risk factors for intraoperative massive hemorrhage. Conclusion In patients with placenta previa, a birth weight >−0.33 standard deviation was a significant risk indicator of massive hemorrhage during cesarean section. Based on this result, further studies are required to investigate whether fetal weight estimated by ultrasonography can predict hemorrhage during cesarean section in patients with placenta previa. PMID:27902772

  16. Relation between Birth Weight and Intraoperative Hemorrhage during Cesarean Section in Pregnancy with Placenta Previa.

    PubMed

    Soyama, Hiroaki; Miyamoto, Morikazu; Ishibashi, Hiroki; Takano, Masashi; Sasa, Hidenori; Furuya, Kenichi

    2016-01-01

    Placenta previa, one of the most severe obstetric complications, carries an increased risk of intraoperative massive hemorrhage. Several risk factors for intraoperative hemorrhage have been identified to date. However, the correlation between birth weight and intraoperative hemorrhage has not been investigated. Here we estimate the correlation between birth weight and the occurrence of intraoperative massive hemorrhage in placenta previa. We included all 256 singleton pregnancies delivered via cesarean section at our hospital because of placenta previa between 2003 and 2015. We calculated not only measured birth weights but also standard deviation values according to the Japanese standard growth curve to adjust for differences in gestational age. We assessed the correlation between birth weight and the occurrence of intraoperative massive hemorrhage (>1500 mL blood loss). Receiver operating characteristic curves were constructed to determine the cutoff value for intraoperative massive hemorrhage. Of 256 pregnant women with placenta previa, 96 (38%) developed intraoperative massive hemorrhage. Receiver operating characteristic curves revealed that the area under the curve for the combination of variables between the standard deviation of birth weight and intraoperative massive hemorrhage was 0.71. The cutoff value with a sensitivity of 81.3% and specificity of 55.6% was -0.33 standard deviation. The multivariate analysis revealed that a standard deviation of >-0.33 (odds ratio, 5.88; 95% confidence interval, 3.04-12.00), need for hemostatic procedures (odds ratio, 3.31; 95% confidence interval, 1.79-6.25), and placental adhesion (odds ratio, 12.68; 95% confidence interval, 2.85-92.13) were independent risk factors for intraoperative massive hemorrhage. In patients with placenta previa, a birth weight >-0.33 standard deviation was a significant risk indicator of massive hemorrhage during cesarean section. Based on this result, further studies are required to investigate whether fetal weight estimated by ultrasonography can predict hemorrhage during cesarean section in patients with placenta previa.

  17. Single-Versus Double-Row Arthroscopic Rotator Cuff Repair in Massive Tears

    PubMed Central

    Wang, EnZhi; Wang, Liang; Gao, Peng; Li, ZhongJi; Zhou, Xiao; Wang, SongGang

    2015-01-01

    Background It is a challenge for orthopaedic surgeons to treat massive rotator cuff tears. The optimal management of massive rotator cuff tears remains controversial. Therefore, the goal of this study was to compare arthroscopic single- versus double-row rotator cuff repair with a larger sample size. Material/Methods Of the subjects with massive rotator cuff tears, 146 were treated using single-row repair, and 102 were treated using double-row repair. Pre- and postoperative functional outcomes and radiographic images were collected. The clinical outcomes were evaluated for a minimum of 2 years. Results No significant differences were shown between the groups in terms of functional outcomes. Regarding the integrity of the tendon, a lower rate of post-treatment retear was observed in patients who underwent double-row repair compared with single-row repair. Conclusions The results suggest that double-row repair yields relatively better shoulder range of motion (ROM) and tendon strength than single-row repair. Future studies involving more patients in better-designed randomized controlled trials will be required. PMID:26017641

  18. Serpentinitic waste materials: possible reuses and critical issues

    NASA Astrophysics Data System (ADS)

    Cavallo, Alessandro

    2017-04-01

    The extraction and processing of marbles, rocks and granites produces a significant amount of waste materials, in the form of shapeless blocks, scraps, gravel and sludge. Current regulations and a greater concern for the environment promote the reuse of these wastes: quartz-feldspathic materials are successfully used for ceramics, crushed porphyry as track ballast, and carbonatic wastes for lime, cement and fillers. However, there are currently no reuses for serpentinitic materials: a striking example is represented by the Valmalenco area (central Alps, northern Italy), a relatively small productive district. In this area, 22 different enterprises operate in the quarrying and/or processing of serpentinites with various textures, schistose to massive, and color shades; the commercial products are used all over the world and are known under many commercial names. The total volume extracted in the quarries is estimated at around 68,000 m3/yr, and the resulting commercial blocks and products account for an estimated 40-50% of the extracted material. The processing waste varies significantly with the finished product: it is estimated at 35% for slab production and at 50% for gang-saw cutting of massive serpentinite blocks. The total estimate of the processing rock waste in the Valmalenco area is about 12,700 m3/yr; together with the quarry waste, the total amount of waste produced in the area is more than 43,000 m3/yr. The sludge (approximately 12,000 m3/yr, of which more than 95% has grain size < 50 micron) mainly derives from the cutting (by diamond disk and gang-saw) and polishing of massive serpentinites; it is filter-pressed before disposal (water content ranging from 11.5 to 19.4 wt. %). All the different waste materials (85 samples) were characterized by quantitative XRPD (FULLPAT software), whole-rock geochemistry (ICP-AES, ICP-MS and Leco®) and SEM-EDS. The mineralogical composition is quite variable from quarry to quarry, with abundant antigorite (up to 90 wt. %) and olivine (up to 38 wt. %), and variable contents of diopside, chlorite, magnetite, chromite and brucite. The chemical composition reflects the protolith: MgO 35.1 - 42.7 wt. %, SiO2 38.8 - 42.3 wt. %, Fe2O3 7.1 - 8.8 wt. %, Al2O3 0.9 - 2.8 wt. %, CaO 0.2 - 3.1 wt. %, Cr2O3 0.26 - 0.35 wt. %, Ni 1800 - 2100 ppm; little difference can be observed in trace elements. SEM-EDS investigations revealed small amounts of chrysotile asbestos fibers (generally < 1000 ppm, mean values 200 - 400 ppm), deriving from cracks, fissures and veins of the waste blocks. Very few published studies on the reuse of serpentinitic wastes can be found. Finely ground antigorite-rich materials could be used as a filler for plastics (instead of talc), whereas olivine-rich wastes could serve as a reactant for fixing, as carbonates, the carbon dioxide released during the use of fossil fuels. In the ceramic industry, the most promising target is represented by forsterite and/or high-MgO ceramics and forsterite refractories (with periclase addition), but also by cordierite ceramics (adding kaolin) and high-hardness vitroceramics. The real possibility of industrial use of serpentinitic materials will require much more experimental work, because no relevant previous studies are available. Special care must be taken to avoid chrysotile asbestos contamination.

  19. Paradigm Shift in Data Content and Informatics Infrastructure Required for Generalized Constitutive Modeling of Materials Behavior

    NASA Technical Reports Server (NTRS)

    Arnold, S. M.

    2006-01-01

    Materials property information such as composition and thermophysical/mechanical properties abound in the literature. Oftentimes, however, the corresponding response curves from which these data are determined are missing or at the very least difficult to retrieve. Further, the paradigm for collecting materials property information has historically centered on (1) properties for materials comparison/selection purposes and (2) input requirements for conventional design/analysis methods. However, just as not all materials are alike or equal, neither are all constitutive models (and thus design/ analysis methods) equal; each model typically has its own specific and often unique required materials parameters, some directly measurable and others indirectly measurable. Therefore, the type and extent of materials information routinely collected is not always sufficient to meet the current, much less future, needs of the materials modeling community. Informatics has been defined as the science concerned with gathering, manipulating, storing, retrieving, and classifying recorded information. A key aspect of informatics is its focus on understanding problems and applying information technology as needed to address those problems. The primary objective of this article is to highlight the need for a paradigm shift in materials data collection, analysis, and dissemination so as to maximize the impact on both practitioners and researchers. Our hope is to identify and articulate what constitutes "sufficient" data content (i.e., quality and quantity) for developing, characterizing, and validating sophisticated nonlinear time- and history-dependent (hereditary) constitutive models. Likewise, the informatics infrastructure required for handling the potentially massive amounts of materials data will be discussed.

  20. Studies on the hot corrosion of a nickel-base superalloy, Udimet 700

    NASA Technical Reports Server (NTRS)

    Misra, A. K.

    1984-01-01

    The hot corrosion of a nickel-base superalloy, Udimet 700, was studied in the temperature range of 884 to 965 C and with different amounts of Na2SO4. Two different modes of degradation were identified: (1) formation of a Na2MoO4 - MoO3 melt and fluxing by this melt, and (2) formation of large interconnected sulfides. The dissolution of Cr2O3 and TiO2 in the Na2SO4 melt does not play a significant role in the overall corrosion process. The conditions for the formation of massive interconnected sulfides were identified and a mechanism of degradation due to sulfide formation is described. The formation of the Na2MoO4 - MoO3 melt requires an induction period, and the various physicochemical processes during the induction period were identified. The factors affecting the length of the induction period were also examined. Melt penetration through the oxide appears to be the prime mode of degradation, whether the degradation is due to the formation of sulfides or the formation of the Na2MoO4 - MoO3 melt.

  1. Massively parallel first-principles simulation of electron dynamics in materials

    DOE PAGES

    Draeger, Erik W.; Andrade, Xavier; Gunnels, John A.; ...

    2017-08-01

    Here we present a highly scalable, parallel implementation of first-principles electron dynamics coupled with molecular dynamics (MD). By using optimized kernels, network topology aware communication, and by fully distributing all terms in the time-dependent Kohn–Sham equation, we demonstrate unprecedented time to solution for disordered aluminum systems of 2000 atoms (22,000 electrons) and 5400 atoms (59,400 electrons), with wall clock time as low as 7.5 s per MD time step. Despite a significant amount of non-local communication required in every iteration, we achieved excellent strong scaling and sustained performance on the Sequoia Blue Gene/Q supercomputer at LLNL. We obtained up to 59% of the theoretical sustained peak performance on 16,384 nodes and performance of 8.75 Petaflop/s (43% of theoretical peak) on the full 98,304 node machine (1,572,864 cores). Lastly, scalable explicit electron dynamics allows for the study of phenomena beyond the reach of standard first-principles MD, in particular, materials subject to strong or rapid perturbations, such as pulsed electromagnetic radiation, particle irradiation, or strong electric currents.

  2. Handwritten-word spotting using biologically inspired features.

    PubMed

    van der Zant, Tijn; Schomaker, Lambert; Haak, Koen

    2008-11-01

    For quick access to new handwritten collections, current handwriting recognition methods are too cumbersome. They cannot deal with the lack of labeled data and would require extensive laboratory training for each individual script, style, language and collection. We propose a biologically inspired whole-word recognition method which is used to incrementally elicit word labels in a live, web-based annotation system, named Monk. Since human labor should be minimized given the massive amount of image data, it becomes important to rely on robust perceptual mechanisms in the machine. Recent computational models of the neuro-physiology of vision are applied to isolated word classification. A primate cortex-like mechanism allows the classification of text-images that have a low frequency of occurrence. Typically these images are the most difficult to retrieve, often contain named entities, and are regarded as the most important to people. Usually standard pattern-recognition technology cannot deal with these text-images if there are not enough labeled instances. The results of this retrieval system are compared to normalized word-image matching and appear to be very promising.

  3. Analytical reverse time migration: An innovation in imaging of infrastructures using ultrasonic shear waves.

    PubMed

    Asadollahi, Aziz; Khazanovich, Lev

    2018-04-11

    The emergence of ultrasonic dry point contact (DPC) transducers that emit horizontal shear waves has enabled efficient collection of high-quality data in the context of a nondestructive evaluation of concrete structures. This offers an opportunity to improve the quality of evaluation by adapting advanced imaging techniques. Reverse time migration (RTM) is a simulation-based reconstruction technique that offers advantages over conventional methods, such as the synthetic aperture focusing technique. RTM is capable of imaging boundaries and interfaces with steep slopes and the bottom boundaries of inclusions and defects. However, this imaging technique requires a massive amount of memory and its computation cost is high. In this study, both bottlenecks of the RTM are resolved when shear transducers are used for data acquisition. An analytical approach was developed to obtain the source and receiver wavefields needed for imaging using reverse time migration. It is shown that the proposed analytical approach not only eliminates the high memory demand, but also drastically reduces the computation time from days to minutes. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Dynamic modeling of Tampa Bay urban development using parallel computing

    USGS Publications Warehouse

    Xian, G.; Crane, M.; Steinwand, D.

    2005-01-01

    Urban land use and land cover has changed significantly in the environs of Tampa Bay, Florida, over the past 50 years. Extensive urbanization has created substantial change to the region's landscape and ecosystems. This paper uses a dynamic urban-growth model, SLEUTH, which applies six geospatial data themes (slope, land use, exclusion, urban extent, transportation, hillshade), to study the process of urbanization and associated land use and land cover change in the Tampa Bay area. To reduce processing time and complete the modeling process within an acceptable period, the model is recoded and ported to a Beowulf cluster. The parallel-processing computer system accomplishes the massive amount of computation the modeling simulation requires. The SLEUTH calibration process for the Tampa Bay urban growth simulation required only 10 h of CPU time. The model predicts future land use/cover change trends for Tampa Bay from 1992 to 2025. Urban extent is predicted to double in the Tampa Bay watershed between 1992 and 2025. Results show an upward trend of urbanization at the expense of declines of 58% and 80% in agricultural and forested lands, respectively.

  5. Helium-Shell Nucleosynthesis and Extinct Radioactivities

    NASA Technical Reports Server (NTRS)

    Meyer, B. S.; The, L.-S.; Clayton, D. D.; ElEid, M. F.

    2004-01-01

    Although the exact site for the origin of the r-process isotopes remains mysterious, most thinking has centered on matter ejected from the cores of massive stars in core-collapse supernovae [1-3]. In the 1970's and 1980's, however, difficulties in understanding the yields from such models led workers to consider the possibility of r-process nucleosynthesis farther out in the exploding star, in particular, in the helium burning shell [4,5]. The essential idea was that shock passage through this shell would heat and compress the material to the point that the reactions 13C(α,n)16O and, especially, 22Ne(α,n)25Mg would generate enough neutrons to capture on preexisting seed nuclei and drive an "n process" [6], which could reproduce the r-process abundances. Subsequent work showed that the required 13C and 22Ne abundances were too large compared to the amounts available in realistic models [7], and recent thinking has returned to supernova core material or matter ejected from neutron star-neutron star collisions as the more likely r-process sites.

  6. TOM: a web-based integrated approach for identification of candidate disease genes.

    PubMed

    Rossi, Simona; Masotti, Daniele; Nardini, Christine; Bonora, Elena; Romeo, Giovanni; Macii, Enrico; Benini, Luca; Volinia, Stefano

    2006-07-01

    The massive production of biological data by means of highly parallel devices like microarrays for gene expression has paved the way to new possible approaches in molecular genetics. Among them is the possibility of inferring biological answers by querying large amounts of expression data. Based on this principle, we present here TOM, a web-based resource for the efficient extraction of candidate genes for hereditary diseases. The service requires the previous knowledge of at least another gene responsible for the disease and the linkage area, or else of two disease-associated genetic intervals. The algorithm uses the information stored in public resources, including mapping, expression and functional databases. Given the queries, TOM will select and list one or more candidate genes. This approach allows the geneticist to bypass the costly and time-consuming tracing of genetic markers through entire families and might improve the chance of identifying disease genes, particularly for rare diseases. We present here the tool and the results obtained on known benchmarks and on hereditary predisposition to familial thyroid cancer. Our algorithm is available at http://www-micrel.deis.unibo.it/~tom/.

  7. A fast boosting-based screening method for large-scale association study in complex traits with genetic heterogeneity.

    PubMed

    Wang, Lu-Yong; Fasulo, D

    2006-01-01

    Genome-wide association studies for complex diseases generate massive amounts of single nucleotide polymorphism (SNP) data. Univariate statistical tests (e.g., the Fisher exact test) are used to single out non-associated SNPs. However, disease-susceptible SNPs may have little marginal effect in the population and are unlikely to be retained after the univariate tests. Also, model-based methods are impractical for large-scale datasets. Moreover, genetic heterogeneity makes it harder for traditional methods to identify the genetic causes of diseases. A more recent random forest method provides a more robust means of screening SNPs at the scale of thousands. However, for larger-scale data, such as Affymetrix Human Mapping 100K GeneChip data, a faster method is required to screen SNPs in whole-genome, large-scale association analysis with genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analysis of complex traits in the presence of genetic heterogeneity. It provides a relatively fast and fairly good tool for screening and limiting the candidate SNPs for further, more complex computational modeling tasks.
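
    The following is a hedged sketch of the general flavor of such a screen, using boosted decision stumps from scikit-learn as a stand-in: SNPs are ranked by how much the ensemble relied on them, and only a small top fraction is kept for downstream modeling. It does not reproduce the authors' algorithm, and the data shapes, genotype coding, and retention threshold are assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Illustrative data: genotypes coded 0/1/2 for n subjects x p SNPs, binary phenotype.
rng = np.random.default_rng(0)
n, p = 1000, 5000
X = rng.integers(0, 3, size=(n, p)).astype(float)
y = rng.integers(0, 2, size=n)

# Boosted decision stumps: each round selects the single-SNP split that most
# reduces the weighted error, so the reweighting can surface SNPs whose marginal
# (population-wide) effect is weak under heterogeneity.
screen = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank SNPs by their contribution to the ensemble and keep the top 1%.
importance = screen.feature_importances_
candidates = np.argsort(importance)[::-1][: p // 100]
print("SNPs retained for more complex modeling:", candidates[:10])
```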

  8. Massively parallel first-principles simulation of electron dynamics in materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Draeger, Erik W.; Andrade, Xavier; Gunnels, John A.

    Here we present a highly scalable, parallel implementation of first-principles electron dynamics coupled with molecular dynamics (MD). By using optimized kernels, network topology aware communication, and by fully distributing all terms in the time-dependent Kohn–Sham equation, we demonstrate unprecedented time to solution for disordered aluminum systems of 2000 atoms (22,000 electrons) and 5400 atoms (59,400 electrons), with wall clock time as low as 7.5 s per MD time step. Despite a significant amount of non-local communication required in every iteration, we achieved excellent strong scaling and sustained performance on the Sequoia Blue Gene/Q supercomputer at LLNL. We obtained up to 59% of the theoretical sustained peak performance on 16,384 nodes and performance of 8.75 Petaflop/s (43% of theoretical peak) on the full 98,304 node machine (1,572,864 cores). Lastly, scalable explicit electron dynamics allows for the study of phenomena beyond the reach of standard first-principles MD, in particular, materials subject to strong or rapid perturbations, such as pulsed electromagnetic radiation, particle irradiation, or strong electric currents.

  9. Color and Vector Flow Imaging in Parallel Ultrasound With Sub-Nyquist Sampling.

    PubMed

    Madiena, Craig; Faurie, Julia; Poree, Jonathan; Garcia, Damien

    2018-05-01

    RF acquisition with a high-performance multichannel ultrasound system generates massive data sets in short periods of time, especially in "ultrafast" ultrasound when digital receive beamforming is required. Sampling at a rate four times the carrier frequency is the standard procedure since this rule complies with the Nyquist-Shannon sampling theorem and simplifies quadrature sampling. Bandpass sampling (or undersampling) outputs a bandpass signal at a rate lower than the maximal frequency without harmful aliasing. Advantages over Nyquist sampling are reduced storage volumes and data workflow, and simplified digital signal processing tasks. We used RF undersampling in color flow imaging (CFI) and vector flow imaging (VFI) to decrease data volume significantly (factor of 3 to 13 in our configurations). CFI and VFI with Nyquist and sub-Nyquist samplings were compared in vitro and in vivo. The estimate errors due to undersampling were small or marginal, which illustrates that Doppler and vector Doppler images can be correctly computed with a drastically reduced amount of RF samples. Undersampling can be a method of choice in CFI and VFI to avoid information overload and reduce data transfer and storage.
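
    A small numerical illustration of the bandpass-sampling idea described above, with arbitrary numbers rather than the authors' acquisition settings: a narrowband RF burst at a 5 MHz carrier is sampled at 4x the carrier (20 MHz) and at 7 MHz, and in both cases the envelope survives because the band folds to a clean alias without overlap.

```python
import numpy as np
from scipy.signal import hilbert

f0, bw = 5e6, 1.5e6            # carrier and two-sided bandwidth (Hz), illustrative
fs_fast = 200e6                # dense grid standing in for the analog waveform

t = np.arange(0, 20e-6, 1 / fs_fast)
envelope = np.exp(-((t - 10e-6) / 2e-6) ** 2)        # Gaussian RF burst
rf = envelope * np.cos(2 * np.pi * f0 * t)

def sample(rate):
    step = int(round(fs_fast / rate))
    return rf[::step]

# Conventional rule: fs = 4 * f0. Bandpass rule: 2*f_H/n <= fs <= 2*f_L/(n-1);
# with f_L, f_H = 4.25, 5.75 MHz and n = 2, any fs in [5.75, 8.5] MHz is valid.
x_nyq = sample(4 * f0)         # 20 MHz
x_sub = sample(7e6)            # sub-Nyquist with respect to the carrier

# The band folds to about |f0 - fs| = 2 MHz, but the envelope is preserved.
env_nyq = np.abs(hilbert(x_nyq))
env_sub = np.abs(hilbert(x_sub))
print(len(x_nyq), len(x_sub))  # roughly a 3x reduction in stored RF samples
```

    The factor-of-3 saving here sits at the low end of the 3 to 13 range quoted in the abstract; larger factors follow when the carrier is high relative to the signal bandwidth.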

  10. Structural variation discovery in the cancer genome using next generation sequencing: Computational solutions and perspectives

    PubMed Central

    Liu, Biao; Conroy, Jeffrey M.; Morrison, Carl D.; Odunsi, Adekunle O.; Qin, Maochun; Wei, Lei; Trump, Donald L.; Johnson, Candace S.; Liu, Song; Wang, Jianmin

    2015-01-01

    Somatic Structural Variations (SVs) are a complex collection of chromosomal mutations that could directly contribute to carcinogenesis. Next Generation Sequencing (NGS) technology has emerged as the primary means of interrogating the SVs of the cancer genome in recent investigations. Sophisticated computational methods are required to accurately identify the SV events and delineate their breakpoints from the massive amounts of reads generated by a NGS experiment. In this review, we provide an overview of current analytic tools used for SV detection in NGS-based cancer studies. We summarize the features of common SV groups and the primary types of NGS signatures that can be used in SV detection methods. We discuss the principles and key similarities and differences of existing computational programs and comment on unresolved issues related to this research field. The aim of this article is to provide a practical guide of relevant concepts, computational methods, software tools and important factors for analyzing and interpreting NGS data for the detection of SVs in the cancer genome. PMID:25849937

  11. Binary Black Holes, Gravitational Waves, and Numerical Relativity

    NASA Technical Reports Server (NTRS)

    Centrella, Joan

    2007-01-01

    This viewgraph presentation reviews the massive black hole (MBH) binaries found at the centers of most galaxies, gravitational waves (GW) as an "astronomical messenger", and the use of numerical relativity to understand the features of these phenomena. The final merger of two black holes releases a tremendous amount of energy and is one of the brightest sources in the gravitational wave sky. Observing these sources with gravitational wave detectors requires that we know the radiation waveforms they emit. Since these mergers take place in regions of very strong gravitational fields, we need to solve Einstein's equations of general relativity on a computer in order to calculate these waveforms. For more than 30 years, scientists have tried to compute these waveforms using the methods of numerical relativity. This talk will take you on this quest for the holy grail of numerical relativity, showing how a spacetime is constructed on a computer to build a simulation laboratory for binary black hole mergers. We will focus on the recent advances that are revealing these waveforms, and the dramatic new potential for discoveries that arises when these sources are observed by LIGO and LISA.

  12. Pregalactic black holes - A new constraint

    NASA Technical Reports Server (NTRS)

    Barrow, J. D.; Silk, J.

    1979-01-01

    Pregalactic black holes accrete matter in the early universe and produce copious amounts of X radiation. By using observations of the background radiation in the X and gamma wavebands, a strong constraint is imposed upon their possible abundance. If pregalactic black holes are actually present, several outstanding problems of cosmogony can be resolved with typical pregalactic black hole masses of 100 solar masses. Significantly more massive holes cannot constitute an appreciable mass fraction of the universe and are limited by a specific mass-density bound.

  13. Radioactivities and gamma-rays from supernovae

    NASA Technical Reports Server (NTRS)

    Woosley, S. E.

    1991-01-01

    An account is given of the implications of several calculations relevant to the estimation of gamma-ray signals from various explosive astronomical phenomena. After discussing efforts to constrain the amounts of Ni-57 and Ti-44 produced in SN 1987A, attention is given to the production of Al-26 in massive stars and supernovae. A 'delayed detonation' model of type Ia supernovae is proposed, and the gamma-ray signal that may be expected when a bare white dwarf collapses directly into a neutron star is discussed.

  14. BREAD LOAF ROADLESS AREA, VERMONT.

    USGS Publications Warehouse

    Slack, John F.; Bitar, Richard F.

    1984-01-01

    On the basis of mineral-resource survey the Bread Loaf Roadless Area, Vermont, is considered to have probable resource potential for the occurrence of volcanogenic massive sulfide deposits of copper, zinc, and lead, particularly in the north and northeastern section of the roadless area. Nonmetallic commodities include minor deposits of sand and gravel, and abundant rock suitable for crushing. However, large amounts of these materials in more accessible locations are available outside the roadless area. A possibility exists that oil or natural gas resources may be present at great depth.

  15. Detecting Man-in-the-Middle Attacks against Transport Layer Security Connections with Timing Analysis

    DTIC Science & Technology

    2011-09-15

    ... Virtual Private Networks (VPNs), TLS protects massive amounts of private information, and protecting this data from Man-in-the-Middle (MitM) attacks is imperative to keeping the information secure. This thesis illustrates how an attacker can successfully perform a MitM attack against a TLS connection without alerting ... mechanism a user has against a MitM. The goal for this research is to determine if a time threshold exists that can indicate the presence of a MitM in this ...

  16. Privacy Challenges of Genomic Big Data.

    PubMed

    Shen, Hong; Ma, Jian

    2017-01-01

    With the rapid advancement of high-throughput DNA sequencing technologies, genomics has become a big data discipline in which large-scale genetic information on human individuals can be obtained efficiently and at low cost. However, such a massive amount of personal genomic data creates tremendous challenges for privacy, especially given the emergence of the direct-to-consumer (DTC) industry that provides genetic testing services. Here we review recent developments in genomic big data and their implications for privacy. We also discuss the current dilemmas and future challenges of genomic privacy.

  17. B vitamins in the nervous system.

    PubMed

    Bender, D A

    1984-01-01

    The coenzyme functions of the B vitamins in intermediary metabolism are well established; nevertheless, for none of them is it possible to determine precisely the connection between the biochemical lesions associated with deficiency and the neurological consequences. Although there is convincing evidence of a neurospecific role for thiamin and other B vitamins, in no case has this role been adequately described. Similarly, the neurochemical sequelae of intoxication by massive amounts of vitamins (so-called mega-vitamin therapy or orthomolecular medicine) remain largely unexplained.

  18. A Framework for Identifying and Analyzing Major Issues in Implementing Big Data and Data Analytics in E-Learning: Introduction to Special Issue on Big Data and Data Analytics

    ERIC Educational Resources Information Center

    Corbeil, Maria Elena; Corbeil, Joseph Rene; Khan, Badrul H.

    2017-01-01

    Due to rapid advancements in our ability to collect, process, and analyze massive amounts of data, it is now possible for educational institutions to gain new insights into how people learn (Kumar, 2013). E-learning has become an important part of education, and this form of learning is especially suited to the use of big data and data analysis,…

  19. Experience in highly parallel processing using DAP

    NASA Technical Reports Server (NTRS)

    Parkinson, D.

    1987-01-01

    Distributed Array Processors (DAP) have been in day to day use for ten years and a large amount of user experience has been gained. The profile of user applications is similar to that of the Massively Parallel Processor (MPP) working group. Experience has shown that contrary to expectations, highly parallel systems provide excellent performance on so-called dirty problems such as the physics part of meteorological codes. The reasons for this observation are discussed. The arguments against replacing bit processors with floating point processors are also discussed.

  20. Soviet News and Propaganda Analysis Based on RED STAR (The Official Newspaper of the Soviet Defense Establishment) for the Period 1-30 November 1982. Volume 2, Number 11, 1982.

    DTIC Science & Technology

    1982-01-01

    ... massive propaganda war, based on lies. Recurring themes include: Patriotic Lebanese attack Israeli forces; Israelis increase repression and terror against Lebanese. An analysis of the amount of space, by topic, devoted to articles about Israel and Lebanon includes categories such as Israeli repressions/terror (21%) and United States aid/interactions (4%), expressed as percentages of the Red Star space devoted to Israel and Lebanon.

  1. Relationship Between Deltoid and Rotator Cuff Muscles During Dynamic Shoulder Abduction: A Biomechanical Study of Rotator Cuff Tear Progression.

    PubMed

    Dyrna, Felix; Kumar, Neil S; Obopilwe, Elifho; Scheiderer, Bastian; Comer, Brendan; Nowak, Michael; Romeo, Anthony A; Mazzocca, Augustus D; Beitzel, Knut

    2018-05-01

    Previous biomechanical studies regarding deltoid function during glenohumeral abduction have primarily used static testing protocols. (1) Deltoid forces required for scapular plane abduction increase as simulated rotator cuff tears become larger, and (2) maximal abduction decreases despite increased deltoid forces. Controlled laboratory study. Twelve fresh-frozen cadaveric shoulders with a mean age of 67 years (range, 64-74 years) were used. The supraspinatus and anterior, middle, and posterior deltoid tendons were attached to individual shoulder simulator actuators. Deltoid forces and maximum abduction were recorded for the following tear patterns: intact, isolated subscapularis (SSC), isolated supraspinatus (SSP), anterosuperior (SSP + SSC), posterosuperior (infraspinatus [ISP] + SSP), and massive (SSC + SSP + ISP). Optical triads tracked 3-dimensional motion during dynamic testing. Fluoroscopy and computed tomography were used to measure critical shoulder angle, acromial index, and superior humeral head migration with massive tears. Mean values for maximum glenohumeral abduction and deltoid forces were determined. Linear mixed-effects regression examined changes in motion and forces over time. Pearson product-moment correlation coefficients ( r) among deltoid forces, critical shoulder angles, and acromial indices were calculated. Shoulders with an intact cuff required 193.8 N (95% CI, 125.5 to 262.1) total deltoid force to achieve 79.8° (95% CI, 66.4° to 93.2°) of maximum glenohumeral abduction. Compared with native shoulders, abduction decreased after simulated SSP (-27.2%; 95% CI, -43.3% to -11.1%, P = .04), anterosuperior (-51.5%; 95% CI, -70.2% to -32.8%, P < .01), and massive (-48.4%; 95% CI, -65.2% to -31.5%, P < .01) cuff tears. Increased total deltoid forces were required for simulated anterosuperior (+108.1%; 95% CI, 68.7% to 147.5%, P < .01) and massive (+57.2%; 95% CI, 19.6% to 94.7%, P = .05) cuff tears. Anterior deltoid forces were significantly greater in anterosuperior ( P < .01) and massive ( P = .03) tears. Middle deltoid forces were greater with anterosuperior tears ( P = .03). Posterior deltoid forces were greater with anterosuperior ( P = .02) and posterosuperior ( P = .04) tears. Anterior deltoid force was negatively correlated ( r = -0.89, P = .01) with critical shoulder angle (34.3°; 95% CI, 32.0° to 36.6°). Deltoid forces had no statistical correlation with acromial index (0.55; 95% CI, 0.48 to 0.61). Superior migration was 8.3 mm (95% CI, 5.5 to 11.1 mm) during testing of massive rotator cuff tears. Shoulders with rotator cuff tears require considerable compensatory deltoid function to prevent abduction motion loss. Anterosuperior tears resulted in the largest motion loss despite the greatest increase in deltoid force. Rotator cuff tears place more strain on the deltoid to prevent abduction motion loss. Fatigue or injury to the deltoid may result in a precipitous decline in abduction, regardless of tear size.

  2. Results and analysis of the hot-spot temperature experiment for a cable-in-conduit conductor with thick conduit

    NASA Astrophysics Data System (ADS)

    Sedlak, Kamil; Bruzzone, Pierluigi

    2015-12-01

    In the design of the future DEMO fusion reactor, a long time constant (∼23 s) is required for an emergency current dump in the toroidal field (TF) coils, e.g. in the case of quench detection. This requirement is driven mainly by imposing a limit on the forces on mechanical structures, namely on the vacuum vessel. As a consequence, the superconducting cable-in-conduit conductors (CICC) of the TF coil have to withstand heat dissipation lasting tens of seconds at the section where the quench started. During that time, the heat will be partially absorbed by the (massive) steel conduit and electrical insulation, thus reducing the hot-spot temperature estimated strictly from the enthalpy of the strand bundle. A dedicated experiment has been set up at CRPP to investigate the radial heat propagation and the hot-spot temperature in a CICC with a 10 mm thick steel conduit and a 2 mm thick glass-epoxy outer electrical insulation. The medium-size, ∅ = 18 mm, NbTi CICC was powered by an operating current of up to 10 kA. The temperature profile was monitored by 10 temperature sensors. The current dump conditions, namely the decay time constant and the quench detection delay, were varied. The experimental results show that the thick conduit significantly contributes to the overall enthalpy balance and consequently reduces the amount of copper required for quench protection in superconducting cables for fusion reactors.
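    To make the enthalpy-balance argument concrete, the sketch below compares the adiabatic hot-spot temperature rise with and without the conduit heat capacity for an exponential current dump. It assumes constant material properties and uses placeholder values for resistivity, copper cross-section and the lumped heat capacities, so it only illustrates the trend described above, not the experiment's actual numbers.

```python
# Simplified adiabatic hot-spot estimate: how much the conduit's heat capacity
# lowers the temperature rise for the Joule energy deposited during an
# exponential current dump I(t) = I0*exp(-t/tau). Constant material properties
# and placeholder values are assumed; the real analysis uses temperature-
# dependent enthalpies.
I0 = 10e3        # operating current, A (as in the experiment)
tau = 23.0       # current dump time constant, s (as in the DEMO requirement)
rho_cu = 5e-9    # effective copper resistivity in the normal zone, ohm*m (placeholder)
A_cu = 1.0e-4    # copper cross-section, m^2 (placeholder)

# Joule energy per metre: integral of (rho/A) * I0^2 * exp(-2t/tau) dt = (rho/A) * I0^2 * tau/2
q_per_m = (rho_cu / A_cu) * I0**2 * tau / 2.0      # J/m

c_strands = 400.0    # lumped heat capacity of the strand bundle, J/(m*K) (placeholder)
c_conduit = 1600.0   # lumped heat capacity of conduit + insulation, J/(m*K) (placeholder)

dT_strands_only = q_per_m / c_strands
dT_with_conduit = q_per_m / (c_strands + c_conduit)
print(f"Deposited energy: {q_per_m / 1e3:.1f} kJ/m")
print(f"dT, strand bundle only: {dT_strands_only:.0f} K")
print(f"dT, including conduit:  {dT_with_conduit:.0f} K")
```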

  3. A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.

    2014-12-01

    Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Methods for developing science data processing pipelines, distributing scientific datasets, and performing analysis will require innovative approaches that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The size of data returned from Earth science observing satellites and the magnitude of data from climate model output are predicted to grow into the tens of petabytes, challenging current data analysis paradigms. This same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while managing the uncertainties of scientific conclusions derived from such capabilities. This talk will provide an overview of JPL's efforts in developing a comprehensive architectural approach to data science.

  4. Comparative Analysis of Data Structures for Storing Massive Tins in a Dbms

    NASA Astrophysics Data System (ADS)

    Kumar, K.; Ledoux, H.; Stoter, J.

    2016-06-01

    Point cloud data are an important source for 3D geoinformation. Modern 3D data acquisition and processing techniques such as airborne laser scanning and multi-beam echosounding generate billions of 3D points for an area of just a few square kilometers. With the size of the point clouds exceeding the billion mark for even a small area, there is a need for their efficient storage and management. These point clouds are sometimes associated with attributes and constraints as well. Storing billions of 3D points is currently possible, as confirmed by the initial implementations in Oracle Spatial SDO_PC and the PostgreSQL Point Cloud extension. But to be able to analyse and extract useful information from point clouds, we need more than just points, i.e. we require the surface defined by these points in space. There are different ways to represent surfaces in GIS, including grids, TINs, boundary representations, etc. In this study, we investigate database solutions for the storage and management of massive TINs. The classical (face- and edge-based) and compact (star-based) data structures are discussed at length with reference to their structure, advantages and limitations in handling massive triangulations, and are compared with the current solution of PostGIS Simple Feature. The main test dataset is the TIN generated from the third national elevation model of the Netherlands (AHN3), with a point density of over 10 points/m². A PostgreSQL/PostGIS DBMS is used for storing the generated TIN. The data structures are tested with the generated TIN models to account for their geometry, topology, storage, indexing, and loading time in a database. Our study is useful in identifying the limitations of the existing data structures for storing massive TINs and what is required to optimise these structures for managing massive triangulations in a database.
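    For readers unfamiliar with the compact structure mentioned above, the following minimal sketch shows the idea behind a star-based TIN: each vertex stores only the ordered ring of its neighbouring vertices, and triangles are recovered implicitly from consecutive neighbours in a star. The tiny in-memory example below is purely illustrative and ignores the SQL/DBMS layer discussed in the abstract.

```python
# Minimal in-memory sketch of a star-based TIN: each vertex stores the ordered
# ring ("star") of its neighbouring vertices, and triangles are recovered from
# consecutive pairs in a star. Toy data for a unit square split along one diagonal.
from typing import Dict, List, Tuple

vertices: Dict[int, Tuple[float, float, float]] = {      # vertex id -> (x, y, z)
    1: (0.0, 0.0, 1.2), 2: (1.0, 0.0, 1.5),
    3: (1.0, 1.0, 1.1), 4: (0.0, 1.0, 0.9),
}

stars: Dict[int, List[int]] = {                          # vertex id -> ordered neighbour ring
    1: [2, 3, 4],
    2: [3, 1],
    3: [4, 1, 2],
    4: [1, 3],
}

def triangles_from_star(v: int):
    """Enumerate the triangles incident to vertex v from consecutive neighbours in its star."""
    ring = stars[v]
    return [tuple(sorted((v, a, b))) for a, b in zip(ring, ring[1:])]

# Each triangle appears in the star of all three of its vertices, so deduplicate.
all_triangles = {t for v in stars for t in triangles_from_star(v)}
print(sorted(all_triangles))   # [(1, 2, 3), (1, 3, 4)]
```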

  5. Massive binary stars as a probe of massive star formation

    NASA Astrophysics Data System (ADS)

    Kiminki, Daniel C.

    2010-10-01

    Massive stars are among the largest and most influential objects we know of on a sub-galactic scale. Binary systems, composed of at least one of these stars, may be responsible for several types of phenomena, including type Ib/c supernovae, short and long gamma ray bursts, high-velocity runaway O and B-type stars, and the density of the parent star clusters. Our understanding of these stars has met with limited success, especially in the area of their formation. Current formation theories rely on the accumulated statistics of massive binary systems that are limited because of their sample size or the inhomogeneous environments from which the statistics are collected. The purpose of this work is to provide a higher-level analysis of close massive binary characteristics using the radial velocity information of 113 massive stars (B3 and earlier) and binary orbital properties for the 19 known close massive binaries in the Cygnus OB2 Association. This work provides an analysis using the largest amount of massive star and binary information ever compiled for an O-star rich cluster like Cygnus OB2, and complements other O-star binary studies such as NGC 6231, NGC 2244, and NGC 6611. I first report the discovery of 73 new O or B-type stars and 13 new massive binaries by this survey. This work involved the use of 75 successful nights of spectroscopic observation at the Wyoming Infrared Observatory in addition to observations obtained using the Hydra multi-object spectrograph at WIYN, the HIRES echelle spectrograph at KECK, and the Hamilton spectrograph at LICK. I use these data to estimate the spectrophotometric distance to the cluster and to measure the mean systemic velocity and the one-sided velocity dispersion of the cluster. Finally, I compare these data to a series of Monte Carlo models, the results of which indicate that the binary fraction of the cluster is 57 +/- 5% and that the indices for the power law distributions, describing the log of the periods, mass-ratios, and eccentricities, are -0.2 +/- 0.3, 0.3 +/- 0.3, and -0.8 +/- 0.3, respectively (or not consistent with a simple power law distribution). The observed distributions indicate a preference for short period systems with nearly circular orbits and companions that are not likely drawn from a standard initial mass function, as would be expected from random pairing. An interesting and unexpected result is that the period distribution is inconsistent with a standard power-law slope, stemming mainly from an excess of periods between 3 and 5 days and an absence of periods between 7 and 14 days. One possible explanation of this phenomenon is that the binary systems with periods of 7-14 days are migrating to periods of 3-5 days. In addition, the binary distribution here is not consistent with previous suggestions in the literature that 45% of OB binaries are members of twin systems (mass ratio near 1).
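    The Monte Carlo ingredient described above can be sketched as follows: draw which stars are binaries using the inferred 57% binary fraction, then draw log-periods, mass ratios and eccentricities from power laws with the quoted indices. The sample size, parameter ranges and random seed below are illustrative assumptions, not the survey's actual modelling choices.

```python
# Hedged sketch of the Monte Carlo ingredient: assign companions with a 57%
# binary fraction and draw log-periods, mass ratios and eccentricities from
# power laws with the quoted indices. Sample size, ranges and seed are
# illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(42)

def sample_power_law(alpha: float, lo: float, hi: float, n: int) -> np.ndarray:
    """Draw n samples from p(x) proportional to x**alpha on [lo, hi] (alpha != -1)."""
    u = rng.random(n)
    k = alpha + 1.0
    return (lo**k + u * (hi**k - lo**k)) ** (1.0 / k)

n_stars = 113
is_binary = rng.random(n_stars) < 0.57                 # binary fraction of 57%
n_bin = int(is_binary.sum())

log_period = sample_power_law(-0.2, np.log10(1.5), np.log10(5000.0), n_bin)   # log10(P / days)
mass_ratio = sample_power_law(0.3, 0.05, 1.0, n_bin)
eccentricity = sample_power_law(-0.8, 1e-3, 0.9, n_bin)

print(f"{n_bin} binaries; median P = {10**np.median(log_period):.1f} d, "
      f"median q = {np.median(mass_ratio):.2f}, median e = {np.median(eccentricity):.2f}")
```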

  6. A New Optical Bench Concept for Space-Based Laser Interferometric Gravitational Wave Missions

    NASA Astrophysics Data System (ADS)

    Chilton, Andrew; Apple, Stephen; Ciani, Giacomo; Olatunde, Taiwo; Conklin, John; Mueller, Guido

    2015-04-01

    Space-based interferometric gravitational wave detectors such as LISA have been proposed to detect low-frequency gravitational wave sources such as the inspirals of compact objects into massive black holes or two massive black holes into each other. The optical components used to perform the high-precision interferometry required to make these measurements have historically been bonded to Zerodur optical benches, which are thermally ultrastable but difficult and time-consuming to manufacture. More modern implementations of LISA-like interferometry have reduced the length stability requirement on these benches from 30 fm/√Hz to a few pm/√Hz. We therefore propose to alter the design of the optical bench in such a way as to no longer require the use of Zerodur; instead, we plan to replace it with more easily used materials such as titanium or molybdenum. In this presentation, we discuss the current status of and future plans for the construction and testing of such an optical bench.

  7. The impact of a massive transfusion protocol (1:1:1) on major hepatic injuries: does it increase abdominal wall closure rates?

    PubMed

    Ball, Chad G; Dente, Christopher J; Shaz, Beth; Wyrzykowski, Amy D; Nicholas, Jeffrey M; Kirkpatrick, Andrew W; Feliciano, David V

    2013-10-01

    Massive transfusion protocols (MTPs) using high plasma and platelet ratios for exsanguinating trauma patients are increasingly popular. Major liver injuries often require massive resuscitations and immediate hemorrhage control. Current published literature describes outcomes among patients with mixed patterns of injury. We sought to identify the effects of an MTP on patients with major liver trauma. Patients with grade 3, 4 or 5 liver injuries who required a massive blood component transfusion were analyzed. We compared patients with high plasma:red blood cell:platelet ratio (1:1:1) transfusions (2007-2009) with patients injured before the creation of an institutional MTP (2005-2007). Among 60 patients with major hepatic injuries, 35 (58%) underwent resuscitation after the implementation of an MTP. Patient and injury characteristics were similar between cohorts. Implementation of the MTP significantly improved plasma:red blood cell:platelet ratios and decreased crystalloid fluid resuscitation (p = 0.026). Rapid improvement in early acidosis and coagulopathy was superior with an MTP (p = 0.009). More patients in the MTP group also underwent primary abdominal fascial closure during their hospital stay (p = 0.021). This was most evident with grade 4 injuries (89% vs. 14%). The mean time to fascial closure was 4.2 days. The overall survival rate for all major liver injuries was not affected by an MTP (p = 0.61). The implementation of a formal MTP using high plasma and platelet ratios resulted in a substantial increase in abdominal wall approximation. This occurred concurrently with a decrease in the delivered volume of crystalloid fluid.

  8. Massive plexiform neurofibromas in childhood: natural history and management issues.

    PubMed

    Serletis, Demitre; Parkin, Patricia; Bouffet, Eric; Shroff, Manohar; Drake, James M; Rutka, James T

    2007-05-01

    The authors review their experience with massive plexiform neurofibromas (PNs) in patients with pediatric neurofibromatosis Type 1 (NF1) to better characterize the natural history and management of these complex lesions. The authors performed a retrospective review of data obtained in seven patients with NF1 in whom massive PNs were diagnosed at The Hospital for Sick Children in Toronto, Ontario, Canada. These patients attended routine follow-up examinations conducted by a number of specialists, and serial neuroimaging studies were obtained to monitor disease progression. The most common presenting feature of PN was that of a painful, expanding lesion. Furthermore, two patients harbored multiple, distinct PNs affecting different body sites. With respect to management, two patients were simply observed, undergoing serial neuroimaging studies; two patients underwent biopsy sampling of their plexiform lesions; two patients underwent attempted medical treatment (farnesyl transferase inhibitor, R11577, and cyclophosphamide chemotherapy); and three patients required surgical debulking of their PNs because the massive growth of these tumors caused functional compromise. Ultimately, one patient died of respiratory complications due to progressive growth of the massive PN lesion. In this review of their experience, the authors found certain features that underscore the presentation and natural history of PNs. The management of these complex lesions, however, remains unclear. Slow-growing PNs may be observed conservatively, but the authors' experience suggests that resection should be considered in selected cases involving significant deterioration or functional compromise. Nevertheless, patients with massive PNs will benefit from close surveillance by a team of specialists to monitor for ongoing disease progression.

  9. A massive, quiescent galaxy at a redshift of 3.717.

    PubMed

    Glazebrook, Karl; Schreiber, Corentin; Labbé, Ivo; Nanayakkara, Themiya; Kacprzak, Glenn G; Oesch, Pascal A; Papovich, Casey; Spitler, Lee R; Straatman, Caroline M S; Tran, Kim-Vy H; Yuan, Tiantian

    2017-04-05

    Finding massive galaxies that stopped forming stars in the early Universe presents an observational challenge because their rest-frame ultraviolet emission is negligible and they can only be reliably identified by extremely deep near-infrared surveys. These surveys have revealed the presence of massive, quiescent early-type galaxies appearing as early as redshift z ≈ 2, an epoch three billion years after the Big Bang. Their age and formation processes have now been explained by an improved generation of galaxy-formation models, in which they form rapidly at z ≈ 3-4, consistent with the typical masses and ages derived from their observations. Deeper surveys have reported evidence for populations of massive, quiescent galaxies at even higher redshifts and earlier times, using coarsely sampled photometry. However, these early, massive, quiescent galaxies are not predicted by the latest generation of theoretical models. Here we report the spectroscopic confirmation of one such galaxy at redshift z = 3.717, with a stellar mass of 1.7 × 10^11 solar masses. We derive its age to be nearly half the age of the Universe at this redshift and the absorption line spectrum shows no current star formation. These observations demonstrate that the galaxy must have formed the majority of its stars quickly, within the first billion years of cosmic history in a short, extreme starburst. This ancestral starburst appears similar to those being found by submillimetre-wavelength surveys. The early formation of such massive systems implies that our picture of early galaxy assembly requires substantial revision.

  10. A massive, quiescent galaxy at a redshift of 3.717

    NASA Astrophysics Data System (ADS)

    Glazebrook, Karl; Schreiber, Corentin; Labbé, Ivo; Nanayakkara, Themiya; Kacprzak, Glenn G.; Oesch, Pascal A.; Papovich, Casey; Spitler, Lee R.; Straatman, Caroline M. S.; Tran, Kim-Vy H.; Yuan, Tiantian

    2017-04-01

    Finding massive galaxies that stopped forming stars in the early Universe presents an observational challenge because their rest-frame ultraviolet emission is negligible and they can only be reliably identified by extremely deep near-infrared surveys. These surveys have revealed the presence of massive, quiescent early-type galaxies appearing as early as redshift z ≈ 2, an epoch three billion years after the Big Bang. Their age and formation processes have now been explained by an improved generation of galaxy-formation models, in which they form rapidly at z ≈ 3-4, consistent with the typical masses and ages derived from their observations. Deeper surveys have reported evidence for populations of massive, quiescent galaxies at even higher redshifts and earlier times, using coarsely sampled photometry. However, these early, massive, quiescent galaxies are not predicted by the latest generation of theoretical models. Here we report the spectroscopic confirmation of one such galaxy at redshift z = 3.717, with a stellar mass of 1.7 × 10^11 solar masses. We derive its age to be nearly half the age of the Universe at this redshift and the absorption line spectrum shows no current star formation. These observations demonstrate that the galaxy must have formed the majority of its stars quickly, within the first billion years of cosmic history in a short, extreme starburst. This ancestral starburst appears similar to those being found by submillimetre-wavelength surveys. The early formation of such massive systems implies that our picture of early galaxy assembly requires substantial revision.
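    The timing argument above can be checked quickly with a standard cosmology calculator. The sketch below uses the Planck15 parameters bundled with astropy (the paper's adopted cosmology may differ slightly) to evaluate the age of the Universe at z = 3.717, which is roughly 1.7 Gyr, so a stellar age of about half that value places the bulk of the star formation within the first billion years of cosmic history.

```python
# Quick consistency check of the timing argument using astropy's Planck15
# cosmology (the paper's adopted parameters may differ slightly).
from astropy.cosmology import Planck15

z = 3.717
age_at_z = Planck15.age(z).to("Gyr").value          # age of the Universe at that redshift
lookback = Planck15.lookback_time(z).to("Gyr").value

print(f"Age of the Universe at z = {z}: {age_at_z:.2f} Gyr")   # roughly 1.7 Gyr
print(f"Lookback time to z = {z}: {lookback:.2f} Gyr")
# A stellar age of about half the age at this redshift implies the bulk of the
# stars formed within the first ~1 Gyr of cosmic history, as stated above.
```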

  11. Late Wenlock (middle Silurian) bio-events: Caused by volatile boloid impact/s

    NASA Technical Reports Server (NTRS)

    Berry, W. B. N.; Wilde, P.

    1988-01-01

    Late Wenlockian (late mid-Silurian) life is characterized by three significant changes or bioevents: sudden development of massive carbonate reefs after a long interval of limited reef growth; sudden mass mortality among colonial zooplankton, graptolites; and origination of land plants with vascular tissue (Cooksonia). Both marine bioevents are short in duration and occur essentially simultaneously at the end of the Wenlock without any recorded major climatic change from the general global warm climate. These three disparate biologic events may be linked to sudden environmental change that could have resulted from sudden infusion of a massive amount of ammonia into the tropical ocean. Impact of a boloid or swarm of extraterrestrial bodies containing substantial quantities of a volatile (ammonia) component could provide such an infusion. Major carbonate precipitation (formation), as seen in the reefs as well as, to a more limited extent, in certain brachiopods, would be favored by increased pH resulting from addition of a massive quantity of ammonia into the upper ocean. Because of the buffer capacity of the ocean and dilution effects, the pH would have returned soon to equilibrium. Major proliferation of massive reefs ceased at the same time. Addition of ammonia as fertilizer to terrestrial environments in the tropics would have created optimum environmental conditions for development of land plants with vascular, nutrient-conductive tissue. Fertilization of terrestrial environments thus seemingly preceded development of vascular tissue by a short time interval. Although no direct evidence of impact of a volatile boloid may be found, the bioevent evidence is suggestive that such an impact in the oceans could have taken place. Indeed, in the case of an ammonia boloid, evidence, such as that of the Late Wenlockian bioevents may be the only available data for impact of such a boloid.

  12. Black Hole Universe Model for Explaining GRBs, X-Ray Flares, and Quasars as Emissions of Dynamic Star-like, Massive, and Supermassive Black Holes

    NASA Astrophysics Data System (ADS)

    Zhang, Tianxi

    2014-01-01

    Slightly modifying the standard big bang theory, the author has recently developed a new cosmological model called black hole universe, which is consistent with Mach’s principle, governed by Einstein’s general theory of relativity, and able to explain all observations of the universe. Previous studies accounted for the origin, structure, evolution, expansion, cosmic microwave background radiation, and acceleration of the black hole universe, which grew from a star-like black hole with several solar masses through a supermassive black hole with billions of solar masses to the present state with hundred billion-trillions of solar masses by accreting ambient matter and merging with other black holes. This study investigates the emissions of dynamic black holes according to the black hole universe model and provides a self-consistent explanation for the observations of gamma ray bursts (GRBs), X-ray flares, and quasars as emissions of dynamic star-like, massive, and supermassive black holes. It is shown that a black hole, when it accretes its ambient matter or merges with other black holes, becomes dynamic. Since the event horizon of a dynamic black hole is broken, the inside hot (or high-frequency) blackbody radiation leaks out. The leakage of the inside hot blackbody radiation leads to a GRB if it is a star-like black hole, an X-ray flare if it is a massive black hole like the one at the center of the Milky Way, or a quasar if it is a supermassive black hole like an active galactic nucleus (AGN). The energy spectra and amount of emissions produced by the dynamic star-like, massive, and supermassive black holes can be consistent with the measurements of GRBs, X-ray flares, and quasars.

  13. Hubble Witnesses Massive Comet-Like Object Pollute Atmosphere of a White Dwarf

    NASA Image and Video Library

    2017-12-08

    For the first time, scientists using NASA’s Hubble Space Telescope have witnessed a massive object with the makeup of a comet being ripped apart and scattered in the atmosphere of a white dwarf, the burned-out remains of a compact star. The object has a chemical composition similar to Halley’s Comet, but it is 100,000 times more massive and has a much higher amount of water. It is also rich in the elements essential for life, including nitrogen, carbon, oxygen, and sulfur. These findings are evidence for a belt of comet-like bodies orbiting the white dwarf, similar to our solar system’s Kuiper Belt. These icy bodies apparently survived the star’s evolution as it became a bloated red giant and then collapsed to a small, dense white dwarf. Caption: This artist's concept shows a massive, comet-like object falling toward a white dwarf. New Hubble Space Telescope findings are evidence for a belt of comet-like bodies orbiting the white dwarf, similar to our solar system's Kuiper Belt. The findings also suggest the presence of one or more unseen surviving planets around the white dwarf, which may have perturbed the belt to hurl icy objects into the burned-out star. Credits: NASA, ESA, and Z. Levay (STScI)

  14. Remote sensing observations for monitoring and mathematical simulations of transboundary air pollutants migration from Siberian mass wildfires to Kazakhstan

    NASA Astrophysics Data System (ADS)

    Kaipov, I. V.

    2017-03-01

    Anthropogenic and natural factors have increased the power of wildfires in massive Siberian woodlands. As a consequence, the expansion of burned areas and the increase in the duration of the forest fire season have led to the release of significant amounts of gases and aerosols. Therefore, it is important to understand the impact of wildland fires on air quality, atmospheric composition, and climate, and to accurately describe the distribution of combustion products in time and space. The most effective research tool is a regional hydrodynamic model of the atmosphere coupled with a model of pollutant transport and chemical interaction. The combined use of remote sensing techniques for monitoring massive forest fires and mathematical modeling of the long-range transport of pollutants in the atmosphere, taking into account meteorological parameters and the chemical interaction of impurities, makes it possible to evaluate the spatial and temporal scale of the phenomenon and to calculate the quantitative characteristics of pollutants as a function of the height and distance of migration.

  15. Massively parallel processor computer

    NASA Technical Reports Server (NTRS)

    Fung, L. W. (Inventor)

    1983-01-01

    An apparatus for processing multidimensional data with strong spatial characteristics, such as raw image data, characterized by a large number of parallel data streams in an ordered array is described. It comprises a large number (e.g., 16,384 in a 128 x 128 array) of parallel processing elements operating simultaneously and independently on single bit slices of a corresponding array of incoming data streams under control of a single set of instructions. Each of the processing elements comprises a bidirectional data bus in communication with a register for storing single bit slices together with a random access memory unit and associated circuitry, including a binary counter/shift register device, for performing logical and arithmetical computations on the bit slices, and an I/O unit for interfacing the bidirectional data bus with the data stream source. The massively parallel processor architecture enables very high speed processing of large amounts of ordered parallel data, including spatial translation by shifting or sliding of bits vertically or horizontally to neighboring processing elements.
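    A toy software analogue of the architecture described above is sketched below: a 128 x 128 grid of processing elements all applying the same operation to a single-bit slice, with spatial translation realised by shifting bits to neighbouring elements. NumPy arrays stand in for the hardware; the code is purely illustrative and not a model of the patented design.

```python
# Toy SIMD analogue of the architecture described above: a 128 x 128 array of
# processing elements all executing the same operation on one bit slice, with
# spatial translation implemented as shifts to neighbouring elements.
import numpy as np

SIZE = 128
bit_plane = (np.random.default_rng(0).random((SIZE, SIZE)) > 0.5).astype(np.uint8)

def shift(plane: np.ndarray, dy: int, dx: int) -> np.ndarray:
    """Slide every bit to a neighbouring element; bits shifted past the edge are lost."""
    out = np.zeros_like(plane)
    src = plane[max(0, -dy): SIZE - max(0, dy), max(0, -dx): SIZE - max(0, dx)]
    out[max(0, dy): SIZE - max(0, -dy), max(0, dx): SIZE - max(0, -dx)] = src
    return out

# All elements perform the same logical operation simultaneously, here an OR
# of each element's own bit with the bit of its northern neighbour.
result = bit_plane | shift(bit_plane, dy=1, dx=0)
print(result.shape, int(result.sum()))
```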

  16. A Massively Parallel Computational Method of Reading Index Files for SOAPsnv.

    PubMed

    Zhu, Xiaoqian; Peng, Shaoliang; Liu, Shaojie; Cui, Yingbo; Gu, Xiang; Gao, Ming; Fang, Lin; Fang, Xiaodong

    2015-12-01

    SOAPsnv is the software used for identifying single nucleotide variation in cancer genes. However, its performance is yet to match the massive amount of data to be processed. Experiments reveal that the main performance bottleneck of the SOAPsnv software is the pileup algorithm. The original pileup algorithm's I/O process is time-consuming and inefficient at reading input files. Moreover, the scalability of the pileup algorithm is also poor. Therefore, we designed a new algorithm, named BamPileup, aiming to improve sequential read performance; the new pileup algorithm implements a parallel read mode based on an index. Using this method, each thread can directly read the data starting from a specific position. The results of experiments on the Tianhe-2 supercomputer show that, when reading data in a multi-threaded parallel I/O way, the processing time of the algorithm is reduced to 3.9 s and the application can achieve a speedup of up to 100×. Moreover, the scalability of the new algorithm is also satisfactory.
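    The key idea, reading from an index so that each thread starts at its own byte offset rather than scanning the file sequentially, can be sketched in a few lines. The file name, index layout and region labels below are hypothetical placeholders and do not reflect SOAPsnv's or BamPileup's actual file formats.

```python
# Hedged sketch of an index-based, multi-threaded read pattern: an index maps
# regions to byte offsets, so each thread seeks straight to its region and
# reads independently. All names and offsets are invented placeholders.
from concurrent.futures import ThreadPoolExecutor
from typing import List, Tuple

DATA_FILE = "alignments.dat"   # hypothetical flat file standing in for indexed alignment data

# (region label, start byte, number of bytes) -- a stand-in for the real index structure
INDEX: List[Tuple[str, int, int]] = [
    ("chr1:0-1M", 0, 4096),
    ("chr1:1M-2M", 4096, 4096),
    ("chr1:2M-3M", 8192, 4096),
]

def read_region(entry: Tuple[str, int, int]) -> Tuple[str, int]:
    """Each worker opens its own handle, seeks to the indexed offset and reads only its slice."""
    region, offset, length = entry
    with open(DATA_FILE, "rb") as fh:
        fh.seek(offset)
        chunk = fh.read(length)
    return region, len(chunk)

# Write a small dummy file so the sketch runs standalone.
with open(DATA_FILE, "wb") as fh:
    fh.write(b"\0" * (3 * 4096))

with ThreadPoolExecutor(max_workers=len(INDEX)) as pool:
    for region, n_bytes in pool.map(read_region, INDEX):
        print(f"{region}: read {n_bytes} bytes")
```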

  17. Hot Gas and AGN Feedback in Galaxies and Nearby Groups

    NASA Astrophysics Data System (ADS)

    Jones, Christine; Forman, William; Bogdan, Akos; Randall, Scott; Kraft, Ralph; Churazov, Eugene

    2013-07-01

    Massive galaxies harbor a supermassive black hole at their centers. At high redshifts, these galaxies experienced a very active quasar phase, when, as their black holes grew by accretion, they produced enormous amounts of energy. At the present epoch, these black holes still undergo occasional outbursts, although the mode of their energy release is primarily mechanical rather than radiative. The energy from these outbursts can reheat the cooling gas in the galaxy cores and maintain the red and dead nature of the early-type galaxies. These outbursts also can have dramatic effects on the galaxy-scale hot coronae found in the more massive galaxies. We describe research in three areas related to the hot gas around galaxies and their supermassive black holes. First we present examples of galaxies with AGN outbursts that have been studied in detail. Second, we show that X-ray emitting low-luminosity AGN are present in 80% of the galaxies studied. Third, we discuss the first examples of extensive hot gas and dark matter halos in optically faint galaxies.

  18. Massive spalling of intermetallic compounds in solder-substrate reactions due to limited supply of the active element

    NASA Astrophysics Data System (ADS)

    Yang, S. C.; Ho, C. E.; Chang, C. W.; Kao, C. R.

    2007-04-01

    Massive spalling of intermetallic compounds has been reported in the literature for several solder/substrate systems, including SnAgCu soldered on Ni substrate, SnZn on Cu, high-Pb PbSn on Cu, and high-Pb PbSn on Ni. In this work, a unified thermodynamic argument is proposed to explain this rather unusual phenomenon. According to this argument, two necessary conditions must be met. The number one condition is that at least one of the reactive constituents of the solder must be present in a limited amount, and the second condition is that the soldering reaction has to be very sensitive to its concentration. With the growth of intermetallic, more and more atoms of this constituent are extracted out of the solder and incorporated into the intermetallic. As the concentration of this constituent decreases, the original intermetallic at the interface becomes a nonequilibrium phase, and the spalling of the original intermetallic occurs.

  19. Massive spalling of intermetallic compounds in solder-substrate reactions due to limited supply of the active element

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, S. C.; Ho, C. E.; Chang, C. W.

    2007-04-15

    Massive spalling of intermetallic compounds has been reported in the literature for several solder/substrate systems, including SnAgCu soldered on Ni substrate, SnZn on Cu, high-Pb PbSn on Cu, and high-Pb PbSn on Ni. In this work, a unified thermodynamic argument is proposed to explain this rather unusual phenomenon. According to this argument, two necessary conditions must be met. The number one condition is that at least one of the reactive constituents of the solder must be present in a limited amount, and the second condition is that the soldering reaction has to be very sensitive to its concentration. With the growth of intermetallic, more and more atoms of this constituent are extracted out of the solder and incorporated into the intermetallic. As the concentration of this constituent decreases, the original intermetallic at the interface becomes a nonequilibrium phase, and the spalling of the original intermetallic occurs.

  20. Interactive 3D Visualization: An Important Element in Dealing with Increasing Data Volumes and Decreasing Resources

    NASA Astrophysics Data System (ADS)

    Gee, L.; Reed, B.; Mayer, L.

    2002-12-01

    Recent years have seen remarkable advances in sonar technology, positioning capabilities, and computer processing power that have revolutionized the way we image the seafloor. The US Naval Oceanographic Office (NAVOCEANO) has updated its survey vessels and launches to the latest generation of technology and now possesses a tremendous ocean observing and mapping capability. However, the systems produce massive amounts of data that must be validated prior to inclusion in various bathymetry, hydrography, and imagery products. The key to meeting the challenge of the massive data volumes was to change the approach that required every data point be viewed. This was achieved with the replacement of the traditional line-by-line editing approach with an automated cleaning module, and an area-based editor. The approach includes a unique data structure that enables the direct access to the full resolution data from the area based view, including a direct interface to target files and imagery snippets from mosaic and full resolution imagery. The increased data volumes to be processed also offered tremendous opportunities in terms of visualization and analysis, and interactive 3D presentation of the complex multi-attribute data provided a natural complement to the area based processing. If properly geo-referenced and treated, the complex data sets can be presented in a natural and intuitive manner that allows the integration of multiple components each at their inherent level of resolution and without compromising the quantitative nature of the data. Artificial sun-illumination, shading, and 3-D rendering are used with digital bathymetric data to form natural looking and easily interpretable, yet quantitative, landscapes that allow the user to rapidly identify the data requiring further processing or analysis. Color can be used to represent depth or other parameters (like backscatter, quality factors or sediment properties), which can be draped over the DTM, or high resolution imagery can be texture mapped on bathymetric data. The presentation will demonstrate the new approach of the integrated area based processing and 3D visualization with a number of data sets from recent surveys.

  1. High throughput optical lithography by scanning a massive array of bowtie aperture antennas at near-field

    PubMed Central

    Wen, X.; Datta, A.; Traverso, L. M.; Pan, L.; Xu, X.; Moon, E. E.

    2015-01-01

    Optical lithography, the enabling process for defining features, has been widely used in the semiconductor industry and many other nanotechnology applications. Advances in nanotechnology require the development of high-throughput optical lithography capabilities to overcome the optical diffraction limit and meet the ever-decreasing device dimensions. We report our recent experimental advancements to scale up diffraction-unlimited optical lithography on a massive scale using the near-field nanolithography capabilities of bowtie apertures. A record number of near-field optical elements, an array of 1,024 bowtie antenna apertures, are simultaneously employed to generate a large number of patterns by carefully controlling their working distances over the entire array using an optical gap metrology system. Our experimental results reiterated the ability of massively parallel near-field devices to achieve high-throughput optical nanolithography, which can be promising for many important nanotechnology applications such as computation, data storage, communication, and energy. PMID:26525906

  2. Visser's massive graviton bimetric theory revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roany, Alain de; Chauvineau, Bertrand; Freitas Pacheco, Jose A. de

    2011-10-15

    A massive gravity theory was proposed by Visser in the late 1990s. This theory, based on a background metric b_{αβ} and on a usual dynamical metric g_{αβ}, has the advantage of being free of the ghosts as well as the discontinuities present in other massive theories proposed in the past. In the present investigation, the equations of Visser's theory are revisited with particular care on the related conservation laws. It will be shown that a multiplicative factor is missing in the graviton tensor originally derived by Visser, which has no incidence on the weak field approach but becomes important in the strong field regime when, for instance, cosmological applications are considered. In this case, contrary to some previous claims found in the literature, we conclude that a nonstatic background metric is required in order to obtain a solution able to mimic the ΛCDM cosmology.

  3. Pilot Project for Spaceborne Massive Optical Storage Devices

    NASA Technical Reports Server (NTRS)

    Chen, Y. J.

    1996-01-01

    A space-bound storage device has many special requirements. In addition to large storage capacity, fast read/write time, and high reliability, it also needs to have small volume, light weight, low power consumption, radiation hardening, the ability to operate in extreme temperature ranges, etc. Holographic optical recording technology, which has been making major advancements in recent years, is an extremely promising candidate. The goal of this pilot project is to demonstrate a laboratory bench-top holographic optical recording storage system (HORSS) based on nonlinear polymer films and/or other advanced photo-refractive materials. This system will be used as a research vehicle to study relevant optical properties of novel holographic optical materials, to explore massive optical storage technologies based on the photo-refractive effect, and to evaluate the feasibility of developing a massive storage system, based on holographic optical recording technology, for a space-bound experiment in the near future.

  4. Parallel processing architecture for H.264 deblocking filter on multi-core platforms

    NASA Astrophysics Data System (ADS)

    Prasad, Durga P.; Sonachalam, Sekar; Kunchamwar, Mangesh K.; Gunupudi, Nageswara Rao

    2012-03-01

    Massively parallel computing (multi-core) chips offer outstanding new solutions that satisfy the increasing demand for high resolution and high quality video compression technologies such as H.264. Such solutions not only provide exceptional quality but also efficiency, low power, and low latency, previously unattainable in software-based designs. While custom hardware and Application Specific Integrated Circuit (ASIC) technologies may achieve low latency, low power, and real-time performance in some consumer devices, many applications require a flexible and scalable software-defined solution. The deblocking filter in the H.264 encoder/decoder poses difficult implementation challenges because of heavy data dependencies and the conditional nature of the computations. Deblocking filter implementations tend to be fixed and difficult to reconfigure for different needs. The ability to scale up for higher quality requirements such as 10-bit pixel depth or a 4:2:2 chroma format often reduces the throughput of a parallel architecture designed for a lower feature set. A scalable architecture for deblocking filtering, created with a massively parallel processor based solution, means that the same encoder or decoder will be deployed in a variety of applications, at different video resolutions, for different power requirements, and at higher bit-depths and better color subsampling patterns such as YUV 4:2:2 or 4:4:4 formats. Low power, software-defined encoders/decoders may be implemented using a massively parallel processor array, like that found in HyperX technology, with 100 or more cores and distributed memory. The large number of processor elements allows the silicon device to operate more efficiently than conventional DSP or CPU technology. This software programming model for massively parallel processors offers a flexible implementation and a power efficiency close to that of ASIC solutions. This work describes a scalable parallel architecture for an H.264 compliant deblocking filter for multi-core platforms such as HyperX technology. Parallel techniques such as parallel processing at the level of independent macroblocks, sub-blocks, and pixel rows are examined in this work. The deblocking architecture consists of a basic cell called the deblocking filter unit (DFU) and a dependent data buffer manager (DFM). The DFU can be used in several instances, catering to different performance needs. The DFM serves the data required by the different numbers of DFUs and also manages all the neighboring data required for future data processing by the DFUs. This approach achieves the scalability, flexibility, and performance excellence required in deblocking filters.
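    One of the parallelisation levels mentioned above, concurrent processing of mutually independent macroblocks, can be illustrated with a simple wavefront schedule: macroblocks on the same anti-diagonal have no left/top dependency on one another and can be filtered in parallel. The sketch below uses a stub in place of the actual edge-filtering work and is not a model of the DFU/DFM pipeline.

```python
# Wavefront sketch of macroblock-level parallelism for a deblocking pass:
# macroblocks on the same anti-diagonal have no left/top dependency on each
# other and can be filtered concurrently. The per-macroblock filter is a stub.
from concurrent.futures import ThreadPoolExecutor

MB_COLS, MB_ROWS = 8, 6          # toy frame of 8 x 6 macroblocks

def deblock_macroblock(pos):
    """Placeholder for the per-macroblock edge-filtering work."""
    x, y = pos
    return (x, y)

def deblock_frame():
    with ThreadPoolExecutor(max_workers=4) as pool:
        # wavefront d holds all macroblocks with x + y == d; they are mutually independent
        for d in range(MB_COLS + MB_ROWS - 1):
            wave = [(x, d - x) for x in range(MB_COLS) if 0 <= d - x < MB_ROWS]
            list(pool.map(deblock_macroblock, wave))   # implicit barrier before the next wavefront
    print("deblocked", MB_COLS * MB_ROWS, "macroblocks")

deblock_frame()
```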

  5. Combustor Simulation

    NASA Technical Reports Server (NTRS)

    Norris, Andrew

    2003-01-01

    The goal was to perform a 3D simulation of the GE90 combustor as part of a full turbofan engine simulation. Requirements of high fidelity as well as fast turn-around time call for a massively parallel code. The National Combustion Code (NCC) was chosen for this task, as it supports up to 999 processors and includes state-of-the-art combustion models. Also required is the ability to take inlet conditions from the compressor code and give exit conditions to the turbine code.

  6. Protein metabolism in obese patients during very low-calorie mixed diets containing different amounts of proteins and carbohydrates.

    PubMed

    Pasquali, R; Casimirri, F; Melchionda, N

    1987-12-01

    To assess the long-term nitrogen-sparing capacity of very low-calorie mixed diets, we administered two isoenergetic (2092 kJ) liquid formula regimens of different composition for 8 weeks to two matched groups of massively obese patients (group 1: proteins 60 g, carbohydrates 54 g; group 2: proteins 41 g, carbohydrates 81 g). Weight loss was similar in both groups. The daily nitrogen balance (g) during the second month was more negative in group 2 than in group 1. However, within the groups individual nitrogen-sparing capacity varied markedly; only a few patients in group 1 and one in group 2 were able to attain nitrogen equilibrium throughout the study. Daily urine excretion of 3-methylhistidine fell significantly in group 1 but did not change in group 2. Unlike total proteins, albumin, and transferrin, serum levels of retinol-binding protein, thyroxin-binding globulin, and complement-C3 fell significantly in both groups, but the percentage variations of complement-C3 were more pronounced in the first group. Prealbumin levels fell persistently in group 1 and transiently in group 2. The results indicate that even with this type of diet an adequate amount of dietary protein represents the most important factor in minimizing whole-body protein catabolism during long-term semistarvation in massively obese patients. Moreover, they confirm the possible role of dietary carbohydrates in the regulation of some visceral protein metabolism.

  7. Occurrence of silver minerals in a silver-rich pocket in the massive sulfide zinc-lead ores in the Edwards mine, New York

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serviss, C.R.; Grout, C.M.; Hagni, R.D.

    1985-01-01

    Ore microscopic examination of uncommon silver-rich ores from the Edwards mine has detected three silver minerals (native silver, freibergite, and argentite) that were previously unreported in the literature from the Balmat-Edwards district. The zinc-lead ore deposits of the Balmat-Edwards District in northern New York are composed of very coarse-grained massive sulfides, principally sphalerite, galena, and pyrite. The typical ores contain small amounts of silver in solid solution in galena. Galena concentrates produced from those ores have contained an average of 15 ounces of silver per ton of 60% lead concentrates. In contrast to the typical ore, a silver-rich pocket, which measured three feet by three feet on the vertical mine face and was the subject of this study, contained nearly 1% silver in a zinc ore. Ore microscopic study shows that this ore is especially characterized by abundant, relatively fine-grained chalcopyrite with anhedral pyrite inclusions. Fine-grained sphalerite, native silver, argentite, freibergite and arsenopyrite occur in association with the chalcopyrite and as fracture-fillings in gangue minerals. Geochemically anomalous amounts of tin, barium, chromium, and nickel also are present in the silver-rich pocket. The silver-rich pocket may mark the locus of an early feeder vent, or alternatively it may record a hydrothermal event that was superimposed upon the event responsible for the metamorphic ore textures.

  8. Data List - Specifying and Acquiring Earth Science Data Measurements All at Once

    NASA Astrophysics Data System (ADS)

    Shie, C. L.; Teng, W. L.; Liu, Z.; Hearty, T. J., III; Shen, S.; Li, A.; Hegde, M.; Bryant, K.; Seiler, E.; Kempler, S. J.

    2016-12-01

    Natural phenomena, such as tropical storms (e.g., hurricanes/typhoons), winter storms (e.g., blizzards), volcanic eruptions, floods, and drought, have the potential to cause immense property damage, great socioeconomic impact, and tragic losses of human life. In order to investigate and assess these natural hazards in a timely manner, there needs to be efficient searching and accessing of massive amounts of heterogeneous scientific data, particularly from satellite and model products. This is a daunting task for most application users, decision makers, and science researchers. The NASA Goddard Earth Sciences Data and Information Services Center (GES DISC) has, for many years, archived and served massive amounts of Earth science data, along with value-added information and services. In order to facilitate the GES DISC users in acquiring their data of interest "all at once," with minimum effort, the GES DISC has started developing a value-added and knowledge-based data service framework. This framework allows the preparation and presentation to users of collections of data and their related resources for natural disaster events or other scientific themes. These collections of data, initially termed "Data Bundles," then "Virtual Collections," and finally "Data Lists," contain suites of annotated Web addresses (URLs) that point to their respective data and resource addresses, "all at once" and "virtually." Because these collections of data are virtual, there is no need to duplicate the data. Currently available "Data Lists" for several natural disaster phenomena and the architecture of the data service framework will be presented.
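    As a purely illustrative sketch of the "Data List" concept, the snippet below represents one such virtual collection as a small, serialisable structure of annotated URLs; the event name, URLs and fields are invented examples rather than actual GES DISC holdings or the framework's real schema.

```python
# Illustrative sketch of a "Data List": a virtual collection that points at
# existing datasets by URL with short annotations instead of duplicating data.
# Event name, URLs and fields are hypothetical examples only.
import json

data_list = {
    "name": "Hurricane case study (hypothetical)",
    "description": "Satellite and model products relevant to one storm event",
    "entries": [
        {"label": "Precipitation rate (satellite)",
         "url": "https://example.org/data/precip_rate_2016.nc4",
         "note": "half-hourly gridded product"},
        {"label": "Reanalysis winds (model)",
         "url": "https://example.org/data/winds_2016.nc4",
         "note": "hourly 10 m wind components"},
    ],
}

# Because entries are only annotated pointers, sharing the whole collection
# amounts to serialising this small description.
print(json.dumps(data_list, indent=2))
```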

  9. Massive hemoptysis and complete unilateral lung collapse in pregnancy due to pulmonary tuberculosis with good maternal and fetal outcome: a case report.

    PubMed

    Masukume, Gwinyai; Sengurayi, Elton; Moyo, Phinot; Feliu, Julio; Gandanhamo, Danboy; Ndebele, Wedu; Ngwenya, Solwayo; Gwini, Rudo

    2013-08-22

    We report an extremely rare case of massive hemoptysis and complete left-sided lung collapse in pregnancy due to pulmonary tuberculosis in a health care worker, with good maternal and fetal outcome. A 33-year-old human immunodeficiency virus seronegative African health care worker in her fourth pregnancy, with two previous second trimester miscarriages and an apparently healthy daughter from her third pregnancy, presented coughing up copious amounts of blood at 18 weeks and two days of gestation. She had a cervical suture in situ for presumed cervical weakness. Computed tomography of her chest showed complete collapse of the left lung; subsequent bronchoscopy was apparently normal. Her serum β-human chorionic gonadotropin, tests for autoimmune disease and echocardiography were all normal. Her lung re-inflated spontaneously. Sputum for acid-alcohol-fast bacilli was positive; our patient was commenced on anti-tuberculosis medication and pyridoxine. At 41 weeks and three days of pregnancy, our patient went into spontaneous labor and delivered a live born female baby weighing 2.6 kg, with APGAR scores of nine and 10 at one and five minutes respectively. She and her baby are apparently doing well about 10 months after delivery. It is possible to have massive hemoptysis and complete unilateral lung collapse with spontaneous resolution in pregnancy due to pulmonary tuberculosis, with good maternal and fetal outcome.

  10. Stellar haloes in massive early-type galaxies

    NASA Astrophysics Data System (ADS)

    Buitrago, F.

    2017-03-01

    The Hubble Ultra Deep Field (HUDF) opens up a unique window to witness galaxy assembly at all cosmic distances. Thanks to its extraordinary depth, it is a privileged tool to beat the cosmological dimming, which affects any extragalactic observation and has a very strong dependence on redshift, (1+z)^4. In particular, massive (M_{stellar} > 5 × 10^{10} M_⊙) Early Type Galaxies (ETGs) are the most interesting candidates for these studies, as they must grow in an inside-out fashion, developing an extended stellar envelope/halo that accounts for their remarkable size evolution (˜5 times larger in the nearby Universe than at z = 2-3). To this end we have analysed the 6 most massive ETGs at z < 1 in the HUDF12. Because of the careful data reduction and the exhaustive treatment of the Point Spread Function (PSF), we are able to trace the galaxy surface brightness profiles to the same levels as in the local Universe, but this time at ⟨z⟩ = 0.65 (31 mag arcsec^{-2} in all 8 HST bands, ˜29 mag arcsec^{-2} rest frame, or beyond 25 effective radii). This enables us to investigate the galactic outskirts or stellar haloes at a previously unexplored era, characterising their light and mass profiles, colors and, for the first time, the amount of mass in ongoing mergers.
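    The (1+z)^4 dimming quoted above translates into magnitudes per square arcsecond as follows; this is a one-line bolometric estimate (K-corrections ignored) at the mean redshift of the sample.

```python
# Cosmological surface brightness dimming, (1+z)^4, expressed in magnitudes
# per square arcsecond at z = 0.65 (bolometric; K-corrections ignored).
import math

z = 0.65
dimming_mag = 2.5 * math.log10((1.0 + z) ** 4)   # equivalently 10 * log10(1 + z)
print(f"Surface brightness dimming at z = {z}: {dimming_mag:.2f} mag/arcsec^2")
# ~2.2 mag of dimming, consistent with a 31 mag/arcsec^2 observed limit
# corresponding to roughly 29 mag/arcsec^2 in the rest frame.
```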

  11. a Snapshot Survey of X-Ray Selected Central Cluster Galaxies

    NASA Astrophysics Data System (ADS)

    Edge, Alastair

    1999-07-01

    Central cluster galaxies are the most massive stellar systems known and have been used as standard candles for many decades. Only recently have central cluster galaxies been recognised to exhibit a wide variety of small-scale (<100 pc) features that can only be reliably detected with HST resolution. The most intriguing of these are dust lanes, which have been detected in many central cluster galaxies. Dust is not expected to survive long in the hostile cluster environment unless shielded by the ISM of a disk galaxy or very dense clouds of cold gas. WFPC2 snapshot images of a representative subset of the central cluster galaxies from an X-ray selected cluster sample would provide important constraints on the formation and evolution of dust in cluster cores that cannot be obtained from ground-based observations. In addition, these images will allow the AGN component, the frequency of multiple nuclei, and the amount of massive-star formation in central cluster galaxies to be assessed. The proposed HST observations would also provide high-resolution images of previously unresolved gravitational arcs in the most massive clusters in our sample, resulting in constraints on the shape of the gravitational potential of these systems. This project will complement our extensive multi-frequency work on this sample, which includes optical spectroscopy and photometry, VLA and X-ray images for the majority of the 210 targets.

  12. Is there vacuum when there is mass? Vacuum and non-vacuum solutions for massive gravity

    NASA Astrophysics Data System (ADS)

    Martín-Moruno, Prado; Visser, Matt

    2013-08-01

    Massive gravity is a theory which has a tremendous amount of freedom to describe different cosmologies, but at the same time, the various solutions one encounters must fulfil some rather nontrivial constraints. Most of the freedom comes not from the Lagrangian, which contains only a small number of free parameters (typically three depending on counting conventions), but from the fact that one is in principle free to choose the reference metric almost arbitrarily—which effectively introduces a non-denumerable infinity of free parameters. In the current paper, we stress that although changing the reference metric would lead to a different cosmological model, this does not mean that the dynamics of the universe can be entirely divorced from its matter content. That is, while the choice of reference metric certainly influences the evolution of the physically observable foreground metric, the effect of matter cannot be neglected. Indeed the interplay between matter and geometry can be significantly changed in some specific models; effectively since the graviton would be able to curve the spacetime by itself, without the need of matter. Thus, even the set of vacuum solutions for massive gravity can have significant structure. In some cases, the effect of the reference metric could be so strong that no conceivable material content would be able to drastically affect the cosmological evolution. Dedicated to the memory of Professor Pedro F González-Díaz

  13. Password Cracking Using Sony Playstations

    NASA Astrophysics Data System (ADS)

    Kleinhans, Hugo; Butts, Jonathan; Shenoi, Sujeet

    Law enforcement agencies frequently encounter encrypted digital evidence for which the cryptographic keys are unknown or unavailable. Password cracking - whether it employs brute force or sophisticated cryptanalytic techniques - requires massive computational resources. This paper evaluates the benefits of using the Sony PlayStation 3 (PS3) to crack passwords. The PS3 offers massive computational power at relatively low cost. Moreover, multiple PS3 systems can be introduced easily to expand parallel processing when additional power is needed. This paper also describes a distributed framework designed to enable law enforcement agents to crack encrypted archives and applications in an efficient and cost-effective manner.

  14. Massive gas gangrene secondary to occult colon carcinoma.

    PubMed

    Griffin, Andrew S; Crawford, Matthew D; Gupta, Rajan T

    2016-06-01

    Gas gangrene is a rare but often fatal soft-tissue infection. Because it is uncommon and the classic symptom of crepitus does not appear until the infection is advanced, prompt diagnosis requires a high index of suspicion. We present a case report of a middle-aged man who presented with acute onset lower-extremity pain that was initially thought to be due to deep vein thrombosis. After undergoing workup for pulmonary embolism, he was found to have massive gas gangrene of the lower extremity secondary to an occult colon adenocarcinoma and died within hours of presentation from multisystem organ failure.

  15. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by dynamically adjusting local routing strategies

    DOEpatents

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-03-16

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. Each node implements a respective routing strategy for routing data through the network, the routing strategies not necessarily being the same in every node. The routing strategies implemented in the nodes are dynamically adjusted during application execution to shift network workload as required. Preferably, adjustment of routing policies in selective nodes is performed at synchronization points. The network may be dynamically monitored, and routing strategies adjusted according to detected network conditions.
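    A minimal, hypothetical sketch of the behaviour claimed in the abstract, per-node routing strategies that are re-evaluated at synchronization points based on monitored conditions, is given below. The strategy names, load metric and threshold are invented for illustration and are not taken from the patent.

```python
# Hypothetical sketch: each node keeps a local routing strategy and may switch
# it at a synchronization point based on monitored link load. Strategy names,
# the load metric and the threshold are invented for illustration.
import random

class Node:
    def __init__(self, node_id: int):
        self.node_id = node_id
        self.strategy = "minimal_path"
        self.link_load = 0.0                 # fraction of link capacity in use

    def observe(self):
        """Stand-in for monitoring the network between synchronization points."""
        self.link_load = random.random()

    def adjust_at_sync_point(self, threshold: float = 0.8):
        """Shift workload by switching to a detour strategy when local links are congested."""
        self.strategy = "adaptive_detour" if self.link_load > threshold else "minimal_path"

nodes = [Node(i) for i in range(8)]
for node in nodes:
    node.observe()
    node.adjust_at_sync_point()
    print(node.node_id, f"load={node.link_load:.2f}", node.strategy)
```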

  16. The M-Y Axilloplasty After Massive Weight Loss: Analysis of 159 Consecutive Patients.

    PubMed

    Boccara, David; Petit, Arnaud; Reinbold, Christophe; Chaouat, Marc; Mimoun, Maurice; Serror, Kevin

    2018-05-10

    Brachioplasties often culminate in unsightly scars that are a source of disappointment to patients. We aimed to evaluate the results of M-Y axilloplasty following massive weight loss. We performed a retrospective assessment of our technique for brachioplasty with an M-Y axilloplasty in 159 female patients after massive weight loss. This retrospective study covered a period of 10 years. After substantial lipoaspiration, the incision is placed on the internal side of the arm, with an M-shaped axilloplasty. The satisfaction rate was 154/159 (97%), with 120/159 (75.5%) happy with their esthetic results. Nineteen percent (30/159) of the patients had complications and 12/159 (7.5%) underwent a surgical revision. M-Y axilloplasty for brachioplasty is an effective procedure for treating women who are unhappy with their upper arms after massive weight loss. The satisfaction rate is high, and the result leaves no excess skin on the chest.

  17. Black-hole-regulated star formation in massive galaxies.

    PubMed

    Martín-Navarro, Ignacio; Brodie, Jean P; Romanowsky, Aaron J; Ruiz-Lara, Tomás; van de Ven, Glenn

    2018-01-18

    Supermassive black holes, with masses more than a million times that of the Sun, seem to inhabit the centres of all massive galaxies. Cosmologically motivated theories of galaxy formation require feedback from these supermassive black holes to regulate star formation. In the absence of such feedback, state-of-the-art numerical simulations fail to reproduce the number density and properties of massive galaxies in the local Universe. There is, however, no observational evidence of this strongly coupled coevolution between supermassive black holes and star formation, impeding our understanding of baryonic processes within galaxies. Here we report that the star formation histories of nearby massive galaxies, as measured from their integrated optical spectra, depend on the mass of the central supermassive black hole. Our results indicate that the black-hole mass scales with the gas cooling rate in the early Universe. The subsequent quenching of star formation takes place earlier and more efficiently in galaxies that host higher-mass central black holes. The observed relation between black-hole mass and star formation efficiency applies to all generations of stars formed throughout the life of a galaxy, revealing a continuous interplay between black-hole activity and baryon cooling.

  18. Black-hole-regulated star formation in massive galaxies

    NASA Astrophysics Data System (ADS)

    Martín-Navarro, Ignacio; Brodie, Jean P.; Romanowsky, Aaron J.; Ruiz-Lara, Tomás; van de Ven, Glenn

    2018-01-01

    Supermassive black holes, with masses more than a million times that of the Sun, seem to inhabit the centres of all massive galaxies. Cosmologically motivated theories of galaxy formation require feedback from these supermassive black holes to regulate star formation. In the absence of such feedback, state-of-the-art numerical simulations fail to reproduce the number density and properties of massive galaxies in the local Universe. There is, however, no observational evidence of this strongly coupled coevolution between supermassive black holes and star formation, impeding our understanding of baryonic processes within galaxies. Here we report that the star formation histories of nearby massive galaxies, as measured from their integrated optical spectra, depend on the mass of the central supermassive black hole. Our results indicate that the black-hole mass scales with the gas cooling rate in the early Universe. The subsequent quenching of star formation takes place earlier and more efficiently in galaxies that host higher-mass central black holes. The observed relation between black-hole mass and star formation efficiency applies to all generations of stars formed throughout the life of a galaxy, revealing a continuous interplay between black-hole activity and baryon cooling.

  19. Thrombolysis with intravenous recombinant tissue plasminogen activator during early postpartum period: a review of the literature.

    PubMed

    Akazawa, Munetoshi; Nishida, Makoto

    2017-05-01

    Thromboembolic events are one of the leading causes of maternal death during the postpartum period. Postpartum thrombolytic therapy with recombinant tissue plasminogen activator (rt-PA) is controversial because the treatment may lead to massive bleeding. Data centralization may be beneficial for analyzing the safety and effectiveness of systemic thrombolysis during the early postpartum period. We performed a computerized MEDLINE and EMBASE search. We collected data for 13 cases of systemic thrombolytic therapy during the early postpartum period, limiting the early postpartum period to 48 hours after delivery. Blood transfusion was necessary in all cases except for one (12/13; 92%). In seven cases (7/13; 54%), a large amount of blood was required for transfusion. Subsequent laparotomy to control bleeding was required in five cases (5/13; 38%), including three cases of hysterectomy and two cases of hematoma removal, all of which involved cesarean delivery. In cases of transvaginal delivery, there was no report of laparotomy. The occurrence of severe bleeding was higher in relation to cesarean section than to vaginal delivery, and use of rt-PA in association with cesarean section might therefore best be avoided. However, the paucity of data in the literature makes it difficult to assess the ultimate outcomes and safety of this treatment. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.

  20. Spacecraft Autonomy and Automation: A Comparative Analysis of Strategies for Cost Effective Mission Operations

    NASA Technical Reports Server (NTRS)

    Wright, Nathaniel, Jr.

    2000-01-01

    Satellite operations have changed drastically over the last 40 years. On October 4, 1957, during the Cold War, the Soviet Union launched the world's first spacecraft into orbit. The Sputnik satellite orbited Earth for three months and catapulted the United States into a race for dominance in space. A year after Sputnik, President Dwight Eisenhower formed the National Aeronautics and Space Administration (NASA). With a team of scientists and engineers, NASA successfully launched Explorer 1, the first US satellite to orbit Earth. During these early years, massive amounts of ground support equipment and large teams of operators were required to operate spacecraft successfully. Today, budget reductions and technological advances have forced new approaches to spacecraft operations. These approaches require increasingly complex onboard spacecraft systems that enable autonomous operations, resulting in more cost-effective mission operations. NASA's Goddard Space Flight Center, considered world class in satellite development and operations, has developed and operated over 200 satellites during its 40 years of existence. NASA Goddard is adopting several new-millennium initiatives that lower operational costs through spacecraft autonomy and automation. This paper examines NASA's approach to spacecraft autonomy and ground system automation through a comparative analysis of three satellite missions - the Hubble Space Telescope (HST), the Near Earth Asteroid Rendezvous (NEAR), and the Solar and Heliospheric Observatory (SOHO) - with emphasis on cost reduction methods, risk analysis, anomalies, and strategies employed for mitigating risk.

  1. The impact of temperature on marine phytoplankton resource allocation and metabolism

    NASA Astrophysics Data System (ADS)

    Toseland, A.; Daines, S. J.; Clark, J. R.; Kirkham, A.; Strauss, J.; Uhlig, C.; Lenton, T. M.; Valentin, K.; Pearson, G. A.; Moulton, V.; Mock, T.

    2013-11-01

    Marine phytoplankton are responsible for ~50% of the CO2 that is fixed annually worldwide, and contribute massively to other biogeochemical cycles in the oceans. Their contribution depends significantly on the interplay between dynamic environmental conditions and the metabolic responses that underpin resource allocation and hence biogeochemical cycling in the oceans. However, these complex environment-biome interactions have not been studied at larger scales. Here we use a set of integrative approaches that combine metatranscriptomes, biochemical data, cellular physiology and emergent phytoplankton growth strategies in a global ecosystems model, to show that temperature significantly affects eukaryotic phytoplankton metabolism with consequences for biogeochemical cycling under global warming. In particular, the rate of protein synthesis strongly increases at high temperatures even though the numbers of ribosomes and their associated rRNAs decrease. Thus, at higher temperatures, eukaryotic phytoplankton seem to require a lower density of ribosomes to produce the required amounts of cellular protein. The reduction of phosphate-rich ribosomes in warmer oceans will tend to produce higher organismal nitrogen (N) to phosphate (P) ratios, in turn increasing demand for N, with consequences for the marine carbon cycle due to shifts towards N-limitation. Our integrative approach suggests that temperature plays a previously unrecognized, critical role in resource allocation and marine phytoplankton stoichiometry, with implications for the biogeochemical cycles that they drive.

  2. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    PubMed

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
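
    The visual discrimination model evaluated in the study is not specified in the abstract, so it is not reproduced here. As a point of reference, the sketch below computes PSNR, one of the conventional distortion metrics that the JND-based approach was compared against; the array names and the 8-bit data range are assumptions.

    ```python
    # Sketch of the reference distortion metric (PSNR) against which the abstract's
    # JND-based VDM metrics are compared; the VDM itself is not reproduced here.
    import numpy as np

    def psnr(original: np.ndarray, compressed: np.ndarray, max_value: float = 255.0) -> float:
        """Peak signal-to-noise ratio in dB for two images of equal shape."""
        mse = np.mean((original.astype(np.float64) - compressed.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10((max_value ** 2) / mse)

    # Hypothetical usage with an 8-bit tissue-region crop and its JPEG 2000 decode:
    # quality_db = psnr(crop, decoded_crop)
    ```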

  3. A Framework for Creating Value from Fleet Data at Ecosystem Level

    NASA Astrophysics Data System (ADS)

    Kinnunen, Sini-Kaisu; Hanski, Jyri; Marttonen-Arola, Salla; Kärri, Timo

    2017-09-01

    As companies have recently become more interested in utilizing the growing volumes of data they gather and in realizing the potential of data analysis, the ability to turn data into business value has been recognized as an advantage. Companies gain a competitive advantage if they are able to benefit from the fleet data that is produced both inside and outside the boundaries of the company. The benefits of fleet management rest on access to the massive amounts of asset data that can then be utilized, e.g., to gain cost savings and to develop products and services. Companies aim to create value from fleet data, but this requires that the different actors in an ecosystem work together towards a common goal - to get the most value out of fleet data for the ecosystem. For this to be possible, a framework is needed that meets the requirements of fleet life-cycle data utilization. This means that the different actors in the ecosystem need to understand their role in the fleet data refining process in order to promote value creation from fleet data. The objective of this paper is to develop a framework for knowledge management in order to create value from fleet data in ecosystems. As a result, we present a conceptual framework that helps companies develop their asset management practices related to fleets of assets.

  4. Catastrophic ice lake collapse in Aram Chaos, Mars

    NASA Astrophysics Data System (ADS)

    Roda, Manuel; Kleinhans, Maarten G.; Zegers, Tanja E.; Oosthoek, Jelmer H. P.

    2014-07-01

    Hesperian chaotic terrains have been recognized as the source of outflow channels formed by catastrophic outflows. Four main scenarios have been proposed for the formation of chaotic terrains that involve different amounts of water and single or multiple outflow events. Here, we test these scenarios with morphological and structural analyses of imagery and elevation data for Aram Chaos in conjunction with numerical modeling of the morphological evolution of the catastrophic carving of the outflow valley. The morphological and geological analyses of Aram Chaos suggest large-scale collapse and subsidence (1500 m) of the entire area, which is consistent with a massive expulsion of liquid water from the subsurface in one single event. The combined observations suggest a complex process starting with the outflow of water from two small channels, followed by continuous groundwater sapping and headward erosion and ending with a catastrophic lake rim collapse and carving of the Aram Valley, which is synchronous with the 2.5 Ga stage of the Ares Vallis formation. The water volume and formative time scale required to carve the Aram channels indicate that a single, rapid (maximum tens of days) and catastrophic (flood volume of 9.3 × 10⁴ km³) event carved the outflow channel. We conclude that a sub-ice lake collapse model can best explain the features of the Aram Chaos Valley system as well as the time scale required for its formation.

  5. The Dynamics of Massive Starless Cores with ALMA

    NASA Astrophysics Data System (ADS)

    Tan, Jonathan C.; Kong, Shuo; Butler, Michael J.; Caselli, Paola; Fontani, Francesco

    2013-12-01

    How do stars that are more massive than the Sun form, and thus how is the stellar initial mass function (IMF) established? Such intermediate- and high-mass stars may be born from relatively massive pre-stellar gas cores, which are more massive than the thermal Jeans mass. The turbulent core accretion model invokes such cores as being in approximate virial equilibrium and in approximate pressure equilibrium with their surrounding clump medium. Their internal pressure is provided by a combination of turbulence and magnetic fields. Alternatively, the competitive accretion model requires strongly sub-virial initial conditions that then lead to extensive fragmentation to the thermal Jeans scale, with intermediate- and high-mass stars later forming by competitive Bondi-Hoyle accretion. To test these models, we have identified four prime examples of massive (~100 M_⊙) clumps from mid-infrared extinction mapping of infrared dark clouds. Fontani et al. found high deuteration fractions of N2H+ in these objects, which are consistent with them being starless. Here we present ALMA observations of these four clumps that probe the N2D+ (3-2) line at 2.3″ resolution. We find six N2D+ cores and determine their dynamical state. Their observed velocity dispersions and sizes are broadly consistent with the predictions of the turbulent core model of self-gravitating, magnetized (with Alfvén Mach number m_A ~ 1) and virialized cores that are bounded by the high pressures of their surrounding clumps. However, in the most massive cores, with masses up to ~60 M_⊙, our results suggest that moderately enhanced magnetic fields (so that m_A ≈ 0.3) may be needed for the structures to be in virial and pressure equilibrium. Magnetically regulated core formation may thus be important in controlling the formation of massive cores, inhibiting their fragmentation, and thus helping to establish the stellar IMF.

  6. Evaluating the Cassandra NoSQL Database Approach for Genomic Data Persistency.

    PubMed

    Aniceto, Rodrigo; Xavier, Rene; Guimarães, Valeria; Hondo, Fernanda; Holanda, Maristela; Walter, Maria Emilia; Lifschitz, Sérgio

    2015-01-01

    Rapid advances in high-throughput sequencing techniques have created interesting computational challenges in bioinformatics. One of them is the management of the massive amounts of data generated by automatic sequencers. We need to deal with the persistency of genomic data, particularly storing and analyzing these large-scale processed data. Finding an alternative to the commonly used relational database model becomes a compelling task. Other data models may be more effective when dealing with very large amounts of nonconventional data, especially for writing and retrieving operations. In this paper, we discuss the Cassandra NoSQL database approach for storing genomic data. We perform an analysis of persistency and I/O operations with real data, using the Cassandra database system. We also compare the results obtained with a classical relational database system and another NoSQL database approach, MongoDB.
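
    The paper's actual data model is not given in the abstract. The following sketch only illustrates the kind of write/read path such an evaluation exercises, using the DataStax Python driver for Cassandra; the keyspace, table schema, host address, and example record are assumptions.

    ```python
    # Illustrative sketch only: persisting sequence records in Cassandra with the
    # DataStax Python driver. Keyspace, schema, and host are hypothetical; the
    # paper's actual data model is not reproduced here.
    from cassandra.cluster import Cluster

    cluster = Cluster(["127.0.0.1"])          # assumed local test node
    session = cluster.connect()

    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS genomics
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    session.execute("""
        CREATE TABLE IF NOT EXISTS genomics.reads (
            sample_id text, read_id text, sequence text, quality text,
            PRIMARY KEY (sample_id, read_id)
        )
    """)

    insert = session.prepare(
        "INSERT INTO genomics.reads (sample_id, read_id, sequence, quality) VALUES (?, ?, ?, ?)"
    )
    session.execute(insert, ("S001", "r000001", "ACGTACGT", "IIIIIIII"))

    row = session.execute(
        "SELECT sequence FROM genomics.reads WHERE sample_id=%s AND read_id=%s",
        ("S001", "r000001"),
    ).one()
    print(row.sequence)
    ```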

  7. Adjusting process count on demand for petascale global optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sosonkina, Masha; Watson, Layne T.; Radcliffe, Nicholas R.

    2012-11-23

    There are many challenges that need to be met before efficient and reliable computation at the petascale is possible. Many scientific and engineering codes running at the petascale are likely to be memory intensive, which makes thrashing a serious problem for many petascale applications. One way to overcome this challenge is to use a dynamic number of processes, so that the total amount of memory available for the computation can be increased on demand. This paper describes modifications made to the massively parallel global optimization code pVTdirect in order to allow for a dynamic number of processes. In particular, the modified version of the code monitors memory use and spawns new processes if the amount of available memory is determined to be insufficient. The primary design challenges are discussed, and performance results are presented and analyzed.
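
    pVTdirect itself is not reproduced here. The fragment below is only a conceptual sketch of the modification described above: check whether available memory has become insufficient and, if so, spawn additional MPI processes so that the aggregate memory of the run grows. The psutil check, the threshold, and the worker script name are assumptions.

    ```python
    # Conceptual sketch only (not pVTdirect): grow the process count when memory
    # runs low, so the aggregate memory available to the computation increases.
    # The threshold, psutil check, and "worker.py" script are illustrative assumptions.
    import sys
    import psutil
    from mpi4py import MPI

    LOW_MEMORY_BYTES = 2 * 1024**3  # arbitrary 2 GiB threshold

    def maybe_grow(n_extra: int = 4):
        """Spawn n_extra new MPI ranks if this node is running low on memory."""
        if psutil.virtual_memory().available < LOW_MEMORY_BYTES:
            # With dynamic process management the new ranks can land on additional
            # nodes (scheduler permitting), adding memory to the overall run.
            return MPI.COMM_SELF.Spawn(sys.executable, args=["worker.py"], maxprocs=n_extra)
        return None
    ```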

  8. Information, Please.

    ERIC Educational Resources Information Center

    Hardy, Lawrence

    2003-01-01

    Requirements of the No Child Left Behind Act present school districts with a massive lesson in data-driven decision-making. Technology companies offer data-management tools that organize student information from state tests. Offers districts advice in choosing a technology provider. (MLF)

  9. The nature of the central parsec of the Galaxy

    NASA Technical Reports Server (NTRS)

    Lacy, J. H.; Townes, C. H.; Hollenbach, D. J.

    1982-01-01

    Observations of infrared fine-structure line emission from compact clouds of ionized gas in the galactic center have been reported by Lacy et al (1979, 1980). These observations suggest the existence of a central black hole of nearly 3,000,000 solar masses and require mechanisms to generate, ionize, and dispose of the gas clouds. It is found that the best model to fulfill these requirements involves cloud generation through disruption of red giants by stellar collisions, ionization by a population of stars which is affected either by enhanced metal abundances or the death of the most massive stars, and gas disposal by star formation. Although the existence of a massive black hole cannot be ruled out, it would play no necessary role in this model and may cause the tidal disruption of stars at a rate such that their accretion into the black hole would produce more radiation than is observed.

  10. Hyperkalemia caused by rapid red cell transfusion and the potassium absorption filter

    PubMed Central

    Imashuku, Yasuhiko; Kitagawa, Hirotoshi; Mizuno, Takayoshi; Fukushima, Yutaka

    2017-01-01

    We report a case of transient hyperkalemia during hysterectomy after cesarean section, due to preoperatively undiagnosed placenta accreta that caused unforeseen massive hemorrhage and required rapid red cell transfusion. Hyperkalemia induced by rapid red cell transfusion is a well-known severe complication of transfusion; however, in patients with sudden massive hemorrhage, rapid red cell transfusion is necessary to save their lives. In such cases, it is extremely important to monitor serum potassium levels. For an emergency situation, a system should be developed to ensure sufficient preparation for immediate transfusion and laboratory tests. Furthermore, a sufficient stock of preparations to treat hyperkalemia, such as calcium preparations, diuretics, glucose, and insulin, is required. Moreover, a transfusion filter that absorbs potassium has been developed and is now available for clinical use in Japan. The filter is easy to use and beneficial, and should be prepared when it is available. PMID:28217070

  11. A method of fast mosaic for massive UAV images

    NASA Astrophysics Data System (ADS)

    Xiang, Ren; Sun, Min; Jiang, Cheng; Liu, Lei; Zheng, Hui; Li, Xiaodong

    2014-11-01

    With the development of UAV technology, UAVs are widely used in multiple fields such as agriculture, forest protection, mineral exploration, natural disaster management and surveillance of public security events. In contrast to traditional manned aerial remote sensing platforms, UAVs are cheaper and more flexible to use, so users can obtain massive image data with UAVs. However, processing this image data requires a lot of time; for example, Pix4UAV needs approximately 10 hours to process 1000 images on a high-performance PC. Disaster management and many other fields require a quick response, which is hard to achieve with massive image data. To address the drawbacks of high time consumption and manual interaction, this article presents a solution for fast UAV image stitching. GPS and POS data are used to pre-process the original UAV images; flight belts and the relations between belts and images are recognized automatically by the program, and useless images are discarded at the same time. This speeds up the search for match points between images. The Levenberg-Marquardt algorithm is improved so that parallel computing can be applied to shorten the time of global optimization notably. Besides the traditional mosaic result, the system can also generate a super-overlay result for Google Earth, which provides a fast and easy way to present the result data. In order to verify the feasibility of this method, a fast mosaic system for massive UAV images was developed; it is fully automated, and no manual interaction is needed after the original images and GPS data are provided. A test using 800 images of the Kelan River in Xinjiang Province shows that this system reduces time consumption by 35%-50% compared with traditional methods and markedly increases the response speed of UAV image processing.
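
    The authors' implementation is not available in the record above. The fragment below is a rough sketch of the GPS/POS pre-processing step the abstract describes, grouping a sequential list of images into flight belts wherever the recorded heading reverses; the field names and the turn threshold are assumptions, not values taken from the paper.

    ```python
    # Rough sketch of GPS/POS pre-processing: split a sequential image list into
    # flight belts wherever the recorded heading reverses. Field names and the
    # 120-degree threshold are assumptions, not the authors' values.
    from dataclasses import dataclass

    @dataclass
    class ImageMeta:
        name: str
        heading_deg: float  # heading recorded in the POS data

    def group_into_belts(images, turn_threshold_deg=120.0):
        if not images:
            return []
        belts, current = [], [images[0]]
        for prev, cur in zip(images, images[1:]):
            change = abs(cur.heading_deg - prev.heading_deg) % 360.0
            change = min(change, 360.0 - change)  # smallest angular difference
            if change > turn_threshold_deg:       # aircraft turned onto a new belt
                belts.append(current)
                current = []
            current.append(cur)
        belts.append(current)
        return belts
    ```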

  12. 5G small-cell networks leveraging optical technologies with mm-wave massive MIMO and MT-MAC protocols

    NASA Astrophysics Data System (ADS)

    Papaioannou, S.; Kalfas, G.; Vagionas, C.; Mitsolidou, C.; Maniotis, P.; Miliou, A.; Pleros, N.

    2018-01-01

    Analog optical fronthaul for 5G network architectures is currently being promoted as a bandwidth- and energy-efficient technology that can sustain the data-rate, latency and energy requirements of the emerging 5G era. This paper deals with a new optical fronthaul architecture that can effectively synergize optical transceiver, optical add/drop multiplexer and optical beamforming integrated photonics towards a DSP-assisted analog fronthaul for seamless and medium-transparent 5G small-cell networks. Its main application targets include dense and hot-spot area networks, promoting the deployment of mm-wave massive MIMO Remote Radio Heads (RRHs) that can offer wireless data rates ranging from 25 Gbps up to 400 Gbps depending on the fronthaul technology employed. Small-cell access and resource allocation are ensured via a Medium-Transparent (MT-) MAC protocol that enables transparent communication between the Central Office and the wireless end-users or the lamp-posts via roof-top-located V-band massive MIMO RRHs. The MT-MAC is analysed in detail, with simulation and analytical theoretical results being in good agreement and confirming its credentials to satisfy 5G network latency requirements by guaranteeing latency values lower than 1 ms for small- to mid-load conditions. Its extension towards supporting optical beamforming capabilities and mm-wave massive MIMO antennas is discussed, while its performance is analysed for different fiber fronthaul link lengths and different optical channel capacities. Finally, different physical layer network architectures supporting the MT-MAC scheme are presented and adapted to different 5G use case scenarios, starting from PON-overlaid fronthaul solutions and gradually moving through Spatial Division Multiplexing up to Wavelength Division Multiplexing transport as the user density increases.

  13. Long-term outcome of free fibula osteocutaneous flap and massive allograft in the reconstruction of long bone defect.

    PubMed

    Halim, Ahmad Sukari; Chai, Siew Cheng; Wan Ismail, Wan Faisham; Wan Azman, Wan Sulaiman; Mat Saad, Arman Zaharil; Wan, Zulmi

    2015-12-01

    Reconstruction of massive bone defects in bone tumors with allografts has been shown to have significant complications including infection, delayed or nonunion of allograft, and allograft fracture. Resection compounded with soft tissue defects requires skin coverage. A composite osteocutaneous free fibula offers an optimal solution where the allografts can be augmented mechanically and achieve biological incorporation. Following resection, the cutaneous component of the free osteocutaneous fibula flaps covers the massive soft tissue defect. In this retrospective study, the long-term outcome of 12 patients, who underwent single-stage limb reconstruction with massive allograft and free fibula osteocutaneous flaps instead of free fibula osteal flaps only, was evaluated. This study included 12 consecutive patients who had primary bone tumors and had follow-up for a minimum of 24 months. The mean age at the time of surgery was 19.8 years. A total of eight patients had primary malignant bone tumors (five osteosarcomas, two chondrosarcomas and one synovial sarcoma), and four patients had benign bone tumors (two giant-cell tumors, one aneurysmal bone cyst, and one neurofibromatosis). The mean follow-up for the 12 patients was 63 months (range 24-124 months). Out of the 10 patients, nine underwent lower-limb reconstruction and ambulated with partial weight bearing and full weight bearing at an average of 4.2 months and 8.2 months, respectively. In conclusion, augmentation of a massive allograft with free fibula osteocutaneous flap is an excellent alternative for reducing the long-term complication of massive allograft and concurrently addresses the soft tissue coverage. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  14. Postoperative Infection in the Setting of Massive Intraoperative Blood Loss.

    PubMed

    Leylek, Melike; Poliquin, Vanessa; Al-Wazzan, Ahmad; Dean, Erin; Altman, Alon D

    2016-12-01

    To determine the local rates of massive intraoperative blood loss and subsequent infectious morbidity for patients undergoing gynaecologic laparotomy. We performed a retrospective chart review of all patients undergoing gynaecologic laparotomy between January 1, 2013 and December 31, 2013 to identify cases of massive intraoperative blood loss (defined as ≥1 L estimated intraoperative blood loss, a postoperative reduction in hemoglobin concentration of ≥40 g/L, or a perioperative blood transfusion). For cases meeting these criteria, we abstracted further data to assess the rate of postoperative infectious morbidity (defined as a positive wound swab culture, positive urine culture, or satisfying the 1991 criteria for systemic inflammatory response syndrome). The rate of massive intraoperative blood loss was 13.5% (n = 69). The average age in this cohort was 50.4 years (range 18-84 years) and the average BMI was 27.9 kg/m². Perioperative transfusion was required in 31.9% (n = 22). Notably, 26.1% of patients (n = 18) met one of our primary endpoints for postoperative infectious morbidity. A further 10.1% (n = 7) had morbidities including hyponatremia, wound dehiscence, intra-abdominal abscess, positive blood cultures, acute respiratory distress syndrome, myocardial infarction, intensive care unit admission, or death. Our rate of massive intraoperative blood loss during gynaecologic laparotomy was found to be 13.5%, and our rate of postoperative infectious morbidity subsequent to massive intraoperative blood loss was 26.1%. Copyright © 2016 The Society of Obstetricians and Gynaecologists of Canada/La Société des obstétriciens et gynécologues du Canada. Published by Elsevier Inc. All rights reserved.

  15. Oxygen Administration Improves Survival but Worsens Cardiopulmonary Functions in Chlorine-exposed Rats.

    PubMed

    Okponyia, Obiefuna C; McGraw, Matthew D; Dysart, Marilyn M; Garlick, Rhonda B; Rioux, Jacqueline S; Murphy, Angela L; Roe, Gates B; White, Carl W; Veress, Livia A

    2018-01-01

    Chlorine is a highly reactive gas that can cause significant injury when inhaled. Unfortunately, its use as a chemical weapon has increased in recent years. Massive chlorine inhalation can cause death within 4 hours of exposure. Survivors usually require hospitalization after massive exposure. No countermeasures are available for massive chlorine exposure and supportive-care measures lack controlled trials. In this work, adult rats were exposed to chlorine gas (LD58-67) in a whole-body exposure chamber, and given oxygen (0.8 FiO2) or air (0.21 FiO2) for 6 hours after baseline measurements were obtained. Oxygen saturation, vital signs, respiratory distress and neuromuscular scores, arterial blood gases, and hemodynamic measurements were obtained hourly. Massive chlorine inhalation caused severe acute respiratory failure, hypoxemia, decreased cardiac output, neuromuscular abnormalities (ataxia and hypotonia), and seizures resulting in early death. Oxygen improved survival to 6 hours (87% versus 42%) and prevented observed seizure-related deaths. However, oxygen administration worsened the severity of acute respiratory failure in chlorine-exposed rats compared with controls, with increased respiratory acidosis (pH 6.91 ± 0.04 versus 7.06 ± 0.01 at 2 h) and increased hypercapnia (180.0 ± 19.8 versus 103.2 ± 3.9 mm Hg at 2 h). In addition, oxygen did not improve neuromuscular abnormalities, cardiac output, or respiratory distress associated with chlorine exposure. Massive chlorine inhalation causes severe acute respiratory failure and multiorgan damage. Oxygen administration can improve short-term survival but appears to worsen respiratory failure, with no improvement in cardiac output or neuromuscular dysfunction. Oxygen should be used with caution after massive chlorine inhalation, and the need for early assisted ventilation should be assessed in victims.

  16. The physical properties of Lyα emitting galaxies: not just primeval galaxies?

    NASA Astrophysics Data System (ADS)

    Pentericci, L.; Grazian, A.; Fontana, A.; Castellano, M.; Giallongo, E.; Salimbeni, S.; Santini, P.

    2009-02-01

    Aims: We have analyzed a sample of Lyman break galaxies from z ~ 3.5 to z ~ 6 selected from the GOODS-S field as B, V, and i-dropouts, and with spectroscopic observations showing that they have the Lyα line in emission. Our main aim is to investigate their physical properties and their dependence on the emission line characteristics, and to shed light on the relation between galaxies with Lyα emission and the general LBG population. Methods: The objects were selected from their optical continuum colors and then spectroscopically confirmed by the GOODS collaboration and other campaigns. From the public spectra we derived the main properties of the Lyα emission, such as total flux and rest-frame EW. We then used complete photometry, from the U band to the mid-infrared, from the GOODS-MUSIC database, and through standard spectro-photometric techniques we derived the physical properties of the galaxies, such as total stellar mass, stellar ages, star formation rates, and dust content. Finally, we investigated the relation between the emission line and the physical properties. Results: Although most galaxies are fit by young stellar populations, a small but non-negligible fraction has SEDs that cannot be represented well by young models and require a considerably older stellar component, up to ~1 Gyr old. There is no apparent relation between age and EW: some of the oldest galaxies have high line EW and would also be selected in narrow-band surveys. Therefore not all Lyα-emitting galaxies are primeval galaxies in the very early stages of formation, as is commonly assumed. We also find a range of stellar populations, with masses from 5 × 10⁸ M_⊙ to 5 × 10¹⁰ M_⊙ and SFRs from a few to 60 M_⊙ yr⁻¹. Although there is no net correlation between mass and EW, we find a significant lack of massive galaxies with high EW, which could be explained if the most massive galaxies were dustier and/or contained more neutral gas than less massive objects. Finally, we find that more than half of the galaxies contain small but non-negligible amounts of dust: the mean E(B-V) derived from the SED fit and the EW are well correlated, although with a large scatter, as already found at lower redshift.

  17. Fast I/O for Massively Parallel Applications

    NASA Technical Reports Server (NTRS)

    OKeefe, Matthew T.

    1996-01-01

    The two primary goals of this report were the design, construction and modeling of parallel disk arrays for scientific visualization and animation, and a study of the I/O requirements of highly parallel applications. In addition, further work addressed the parallel display systems required to project and animate the very high-resolution frames resulting from our supercomputing simulations of ocean circulation and compressible gas dynamics.

  18. Army Research Office’s ARO in Review 2014.The Annual Historical Record of the Army Research Laboratory’s Army Research Office (ARO) Programs and Funding Activities

    DTIC Science & Technology

    2015-07-01

    TEM image of 1T-TaS2 showing CDW discommensuration network. (Main panel) Nonlinear resistivity and current slip at large bias of device shown in lower ... the same species. As most pollen is generally dispersed by either wind or insects, the male plants must produce pollen in vast amounts (up to ... for Massive and Messy Data; Professor Yuri Bazilevs, University of California - San Diego; Fluid-Structure Interaction Simulation of Gas Turbine

  19. Ocean circulation and climate during the past 120,000 years

    NASA Astrophysics Data System (ADS)

    Rahmstorf, Stefan

    2002-09-01

    Oceans cover more than two-thirds of our blue planet. The waters move in a global circulation system, driven by subtle density differences and transporting huge amounts of heat. Ocean circulation is thus an active and highly nonlinear player in the global climate game. Increasingly clear evidence implicates ocean circulation in abrupt and dramatic climate shifts, such as sudden temperature changes in Greenland on the order of 5-10 °C and massive surges of icebergs into the North Atlantic Ocean - events that have occurred repeatedly during the last glacial cycle.

  20. Pele Erupting on Io

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This image of Jupiter's moon, Io, was taken by the Chandra X-Ray Observatory (CXO). Shown here is the most extreme example of the effect of tidal forces, as Io is being pulled by massive Jupiter on one side and by the outer moons Europa, Callisto, and Ganymede on the other. The opposing tidal forces alternately squeeze and stretch its interior, causing the solid surface to rise and fall by about 100 meters. The enormous amount of heat and pressure generated by the resulting friction creates colossal volcanoes and fractures on the surface of this moon.

  1. An analytical benchmark and a Mathematica program for MD codes: Testing LAMMPS on the 2nd generation Brenner potential

    NASA Astrophysics Data System (ADS)

    Favata, Antonino; Micheletti, Andrea; Ryu, Seunghwa; Pugno, Nicola M.

    2016-10-01

    An analytical benchmark and a simple, consistent Mathematica program are proposed for graphene and carbon nanotubes, which may serve to test any molecular dynamics code implemented with REBO potentials. By exploiting the benchmark, we checked the results produced by LAMMPS (Large-scale Atomic/Molecular Massively Parallel Simulator) when adopting the second-generation Brenner potential. We show that this code, in its current implementation, produces results that are offset from those of the benchmark by a significant amount, and we provide evidence of the reason.

  2. Incident analysis of Bucheon LPG filling station pool fire and BLEVE.

    PubMed

    Park, Kyoshik; Mannan, M Sam; Jo, Young-Do; Kim, Ji-Yoon; Keren, Nir; Wang, Yanjun

    2006-09-01

    An LPG filling station incident in Korea has been studied. The direct cause of the incident was concluded to be faulty joining of the couplings of the hoses during the butane unloading process from a tank lorry into an underground storage tank. The faulty connection of a hose to the tank lorry resulted in a massive leak of gas followed by catastrophic explosions. The leaking source was verified by calculating the amount of released LPG and by analyzing captured photos recorded by the television news service. Two BLEVEs were also studied.

  3. A Massive Prestellar Clump Hosting No High-mass Cores

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanhueza, Patricio; Lu, Xing; Tatematsu, Ken’ichi

    The infrared dark cloud (IRDC) G028.23-00.19 hosts a massive (1500 M_⊙), cold (12 K), and 3.6–70 μm IR dark clump (MM1) that has the potential to form high-mass stars. We observed this prestellar clump candidate with the Submillimeter Array (∼3.″5 resolution) and Jansky Very Large Array (∼2.″1 resolution) in order to characterize the early stages of high-mass star formation and to constrain theoretical models. Dust emission at 1.3 mm wavelength reveals five cores with masses ≤15 M_⊙. None of the cores currently have the mass reservoir to form a high-mass star in the prestellar phase. If the MM1 clump is ultimately to form high-mass stars, its embedded cores must gather a significant amount of additional mass over time. No molecular outflows are detected in the CO (2-1) and SiO (5-4) transitions, suggesting that the SMA cores are starless. By using the NH₃ (1,1) line, the velocity dispersion of the gas is determined to be transonic or mildly supersonic (ΔV_nt/ΔV_th ∼ 1.1–1.8). The cores are not highly supersonic as some theories of high-mass star formation predict. The embedded cores are four to seven times more massive than the clump thermal Jeans mass and the most massive core (SMA1) is nine times less massive than the clump turbulent Jeans mass. These values indicate that neither thermal pressure nor turbulent pressure dominates the fragmentation of MM1. The low virial parameters of the cores (0.1–0.5) suggest that they are not in virial equilibrium, unless strong magnetic fields of ∼1–2 mG are present. We discuss high-mass star formation scenarios in a context based on IRDC G028.23-00.19, a study case believed to represent the initial fragmentation of molecular clouds that will form high-mass stars.
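
    For reference, the comparisons quoted above rest on standard definitions of the thermal Jeans mass and the virial parameter; a minimal sketch in conventional notation follows (the authors' exact normalizations may differ).

    ```latex
    % Standard definitions assumed here; the paper's exact normalizations may differ.
    % Thermal Jeans mass of a clump with sound speed c_s and mean mass density rho:
    M_J \simeq \frac{\pi^{5/2}}{6}\,\frac{c_s^{3}}{\sqrt{G^{3}\rho}}
    % Virial parameter of a core of mass M, radius R, and velocity dispersion sigma:
    \alpha_{\mathrm{vir}} = \frac{5\,\sigma^{2}R}{G\,M}
    ```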

  4. High molecular gas fractions in normal massive star-forming galaxies in the young Universe.

    PubMed

    Tacconi, L J; Genzel, R; Neri, R; Cox, P; Cooper, M C; Shapiro, K; Bolatto, A; Bouché, N; Bournaud, F; Burkert, A; Combes, F; Comerford, J; Davis, M; Schreiber, N M Förster; Garcia-Burillo, S; Gracia-Carpio, J; Lutz, D; Naab, T; Omont, A; Shapley, A; Sternberg, A; Weiner, B

    2010-02-11

    Stars form from cold molecular interstellar gas. As this is relatively rare in the local Universe, galaxies like the Milky Way form only a few new stars per year. Typical massive galaxies in the distant Universe formed stars an order of magnitude more rapidly. Unless star formation was significantly more efficient, this difference suggests that young galaxies were much more molecular-gas rich. Molecular gas observations in the distant Universe have so far largely been restricted to very luminous, rare objects, including mergers and quasars, and accordingly we do not yet have a clear idea about the gas content of more normal (albeit massive) galaxies. Here we report the results of a survey of molecular gas in samples of typical massive-star-forming galaxies at mean redshifts of about 1.2 and 2.3, when the Universe was respectively 40% and 24% of its current age. Our measurements reveal that distant star forming galaxies were indeed gas rich, and that the star formation efficiency is not strongly dependent on cosmic epoch. The average fraction of cold gas relative to total galaxy baryonic mass at z = 2.3 and z = 1.2 is respectively about 44% and 34%, three to ten times higher than in today's massive spiral galaxies. The slow decrease between z approximately 2 and z approximately 1 probably requires a mechanism of semi-continuous replenishment of fresh gas to the young galaxies.

  5. Galaxies Grow Their Bulges and Black Holes in Diverse Ways

    NASA Astrophysics Data System (ADS)

    Bell, Eric F.; Monachesi, Antonela; Harmsen, Benjamin; de Jong, Roelof S.; Bailin, Jeremy; Radburn-Smith, David J.; D'Souza, Richard; Holwerda, Benne W.

    2017-03-01

    Galaxies with Milky Way-like stellar masses have a wide range of bulge and black hole masses; in turn, these correlate with other properties such as star formation history. While many processes may drive bulge formation, major and minor mergers are expected to play a crucial role. Stellar halos offer a novel and robust measurement of galactic merger history; cosmologically motivated models predict that mergers with larger satellites produce more massive, higher-metallicity stellar halos, reproducing the recently observed stellar halo metallicity-mass relation. We quantify the relationship between stellar halo mass and bulge or black hole prominence using a sample of 18 Milky Way-mass galaxies with newly available measurements of (or limits on) stellar halo properties. There is an order of magnitude range in bulge mass, and two orders of magnitude in black hole mass, at a given stellar halo mass (or, equivalently, merger history). Galaxies with low-mass bulges show a wide range of quiet merger histories, implying formation mechanisms that do not require intense merging activity. Galaxies with massive “classical” bulges and central black holes also show a wide range of merger histories. While three of these galaxies have massive stellar halos consistent with a merger origin, two do not—merging appears to have had little impact on making these two massive “classical” bulges. Such galaxies may be ideal laboratories to study massive bulge formation through pathways such as early gas-rich accretion, violent disk instabilities, or misaligned infall of gas throughout cosmic time.

  6. Blood transfusion and the anaesthetist: management of massive haemorrhage

    PubMed Central

    Thomas, D; Wee, M; Clyburn, P; Walker, I; Brohi, K; Collins, P; Doughty, H; Isaac, J; Mahoney, PF; Shewry, L

    2010-01-01

    Hospitals must have a major haemorrhage protocol in place and this should include clinical, laboratory and logistic responses. Immediate control of obvious bleeding is of paramount importance (pressure, tourniquet, haemostatic dressings). The major haemorrhage protocol must be mobilised immediately when a massive haemorrhage situation is declared. A fibrinogen < 1 g·l⁻¹ or a prothrombin time (PT) and activated partial thromboplastin time (aPTT) of > 1.5 times normal represents established haemostatic failure and is predictive of microvascular bleeding. Early infusion of fresh frozen plasma (FFP; 15 ml·kg⁻¹) should be used to prevent this occurring if a senior clinician anticipates a massive haemorrhage. Established coagulopathy will require more than 15 ml·kg⁻¹ of FFP to correct. The most effective way to achieve fibrinogen replacement rapidly is by giving fibrinogen concentrate or cryoprecipitate if fibrinogen is unavailable. 1:1:1 red cell:FFP:platelet regimens, as used by the military, are reserved for the most severely traumatised patients. A minimum target platelet count of 75 × 10⁹·l⁻¹ is appropriate in this clinical situation. Group-specific blood can be issued without performing an antibody screen because patients will have minimal circulating antibodies. O negative blood should only be used if blood is needed immediately. In hospitals where the need to treat massive haemorrhage is frequent, the use of locally developed shock packs may be helpful. Standard venous thromboprophylaxis should be commenced as soon as possible after haemostasis has been secured as patients develop a prothrombotic state following massive haemorrhage. PMID:20963925

  7. Mesoproterozoic graphite deposits, New Jersey Highlands: Geologic and stable isotopic evidence for possible algal origins

    USGS Publications Warehouse

    Volkert, R.A.

    2000-01-01

    Graphite deposits of Mesoproterozoic age are locally abundant in the eastern New Jersey Highlands, where they are hosted by sulphidic biotite-quartz-feldspar gneiss, metaquartzite, and anatectic pegmatite. Gneiss and metaquartzite represent a shallow marine shelf sequence of locally organic-rich sand and mud. Graphite from massive deposits within metaquartzite yielded δ¹³C values of −26 ± 2‰ (1σ), and graphite from massive deposits within biotite-quartz-feldspar gneiss yielded δ¹³C values of −23 ± 4‰. Disseminated graphite from biotite-quartz-feldspar gneiss country rock was −22 ± 3‰, indistinguishable from the massive deposits hosted by the same lithology. Anatectic pegmatite is graphitic only where generated from graphite-bearing host rocks; one sample gave a δ¹³C value of −15‰. The δ³⁴S values of trace pyrrhotite are uniform within individual deposits, but vary from 0 to 9‰ from one deposit to another. Apart from pegmatitic occurrences, evidence is lacking for long-range mobilization of carbon during Grenvillian orogenesis or post-Grenvillian tectonism. The field, petrographic, and isotope data suggest that massive graphite was formed by granulite-facies metamorphism of Proterozoic accumulations of sedimentary organic matter, possibly algal mats. Preservation of these accumulations in the sedimentary environment requires anoxic basin waters or rapid burial. Anoxia would also favour the accumulation of dissolved ferrous iron in basin waters, which may explain some of the metasediment-hosted massive magnetite deposits in the New Jersey Highlands. © 2000 NRC.

  8. Assessment of crop yield losses in Punjab and Haryana using two years of continuous in-situ ozone measurements

    NASA Astrophysics Data System (ADS)

    Sinha, B.; Singh Sangwan, K.; Maurya, Y.; Kumar, V.; Sarkar, C.; Chandra, B. P.; Sinha, V.

    2015-01-01

    In this study we use a high-quality dataset of in-situ ozone measurements at a suburban site called Mohali in the state of Punjab to estimate ozone-related crop yield losses for wheat, rice, cotton and maize for Punjab and the neighbouring state of Haryana for the years 2011-2013. We inter-compare crop yield loss estimates according to different exposure metrics such as AOT40 and M7 for the two major crop growing seasons of Kharif (June-October) and Rabi (November-April) and establish a new crop yield-exposure relationship for South Asian wheat and rice cultivars. These are a factor of two more sensitive to ozone-induced crop yield losses compared to their European and American counterparts. Relative yield losses based on the AOT40 metric ranged from 27-41% for wheat, 21-26% for rice, 9-11% for maize and 47-58% for cotton. Crop production losses for wheat amounted to 20.8 million t in fiscal year 2012-2013 and 10.3 million t in fiscal year 2013-2014 for Punjab and Haryana jointly. Crop production losses for rice totalled 5.4 million t in fiscal year 2012-2013 and 3.2 million t in fiscal year 2013-2014 for Punjab and Haryana jointly. The Indian National Food Security Ordinance entitles ~820 million of India's poor to purchase about 60 kg of rice/wheat per person annually at subsidized rates. The scheme requires 27.6 Mt of wheat and 33.6 Mt of rice per year. Mitigation of ozone-related crop production losses in Punjab and Haryana alone could provide >50% of the wheat and ~10% of the rice required for the scheme. The total economic losses in Punjab and Haryana amounted to USD 6.5 billion in the fiscal year 2012-2013 and USD 3.7 billion in the fiscal year 2013-2014. This economic loss estimate represents a very conservative lower limit based on the minimum support price of the crop, which is lower than the actual production costs. The upper limit for ozone-related crop yield losses across India currently amounts to 3.5-20% of India's GDP. Mitigation of high surface ozone would require relatively little investment in comparison to the economic losses incurred at present. Therefore, ozone mitigation can yield massive benefits in terms of ensuring food security and boosting the economy. Co-benefits of ozone mitigation also include a decrease in ozone-related mortality and morbidity and a reduction of the ozone-induced warming in the lower troposphere.
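
    The two exposure metrics named above have simple, widely used definitions: AOT40 accumulates hourly exceedances above 40 ppb during daylight hours over the growing season, and M7 is the mean 09:00-15:59 ozone concentration. The sketch below computes both from an hourly series; the column name and the daylight window are assumptions, not the authors' exact settings.

    ```python
    # Sketch of the two exposure metrics used above, computed from hourly ozone data.
    # Conventions assumed: AOT40 accumulates hourly exceedances above 40 ppb during
    # daylight hours; M7 is the mean 09:00-15:59 concentration. Column names are
    # illustrative, not those of the authors' dataset.
    import pandas as pd

    def aot40_ppb_h(hourly: pd.DataFrame) -> float:
        """hourly: DataFrame with a DatetimeIndex and an 'o3_ppb' column."""
        daylight = hourly.between_time("08:00", "19:59")  # assumed daylight window
        exceedance = (daylight["o3_ppb"] - 40.0).clip(lower=0.0)
        return float(exceedance.sum())

    def m7_ppb(hourly: pd.DataFrame) -> float:
        window = hourly.between_time("09:00", "15:59")
        return float(window["o3_ppb"].mean())
    ```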

  9. Priority Intelligence Requirements: The Operational Vacuum

    DTIC Science & Technology

    1990-05-16

    armored vehicles, not how the systems are used to achieve operational goals. Enemy mobilization, employment philosophy, and history are excluded in ... extensive security problems create massive bottlenecks in the dissemination of intelligence information. Today, we sport a tremendous intelligence

  10. Tiger LDRD final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steich, D J; Brugger, S T; Kallman, J S

    2000-02-01

    This final report describes our efforts on the Three-Dimensional Massively Parallel CEM Technologies LDRD project (97-ERD-009). A significant need exists for more advanced time-domain computational electromagnetics modeling. Bookkeeping details and modifying inflexible software constitute the vast majority of the effort required to address such needs, and the required effort escalates rapidly as problem complexity increases - for example, hybrid meshes requiring hybrid numerics on massively parallel platforms (MPPs). This project attempts to alleviate the above limitations by investigating flexible abstractions for these numerical algorithms on MPPs using object-oriented methods, providing a programming environment that insulates the physics from the bookkeeping. The three major design iterations during the project, known as TIGER-I to TIGER-III, are discussed. Each version of TIGER is briefly described along with lessons learned during development and implementation. An Application Programming Interface (API) of the object-oriented interface for TIGER-III is included in three appendices, which contain the Utilities, Entity-Attribute, and Mesh libraries developed during the project. The API libraries represent a snapshot of our latest attempt at insulating the physics from the bookkeeping.

  11. Massively parallel electrical conductivity imaging of the subsurface: Applications to hydrocarbon exploration

    NASA Astrophysics Data System (ADS)

    Newman, Gregory A.; Commer, Michael

    2009-07-01

    Three-dimensional (3D) geophysical imaging is now receiving considerable attention for electrical conductivity mapping of potential offshore oil and gas reservoirs. The imaging technology employs controlled source electromagnetic (CSEM) and magnetotelluric (MT) fields and treats geological media exhibiting transverse anisotropy. Moreover when combined with established seismic methods, direct imaging of reservoir fluids is possible. Because of the size of the 3D conductivity imaging problem, strategies are required exploiting computational parallelism and optimal meshing. The algorithm thus developed has been shown to scale to tens of thousands of processors. In one imaging experiment, 32,768 tasks/processors on the IBM Watson Research Blue Gene/L supercomputer were successfully utilized. Over a 24 hour period we were able to image a large scale field data set that previously required over four months of processing time on distributed clusters based on Intel or AMD processors utilizing 1024 tasks on an InfiniBand fabric. Electrical conductivity imaging using massively parallel computational resources produces results that cannot be obtained otherwise and are consistent with timeframes required for practical exploration problems.

  12. Spectral Calculation of ICRF Wave Propagation and Heating in 2-D Using Massively Parallel Computers

    NASA Astrophysics Data System (ADS)

    Jaeger, E. F.; D'Azevedo, E.; Berry, L. A.; Carter, M. D.; Batchelor, D. B.

    2000-10-01

    Spectral calculations of ICRF wave propagation in plasmas have the natural advantage that they require no assumption regarding the smallness of the ion Larmor radius ρ relative to the wavelength λ. Results are therefore applicable to all orders in k⊥ρ, where k⊥ = 2π/λ. But because all modes in the spectral representation are coupled, the solution requires inversion of a large dense matrix. In contrast, finite difference algorithms involve only matrices that are sparse and banded. Thus, spectral calculations of wave propagation and heating in tokamak plasmas have so far been limited to 1-D. In this paper, we extend the spectral method to 2-D by taking advantage of new matrix inversion techniques that utilize massively parallel computers. By spreading the dense matrix over 576 processors on the ORNL IBM RS/6000 SP supercomputer, we are able to solve up to 120,000 coupled complex equations, requiring 230 GBytes of memory and achieving over 500 Gflops/sec. Initial results for ASDEX and NSTX will be presented using up to 200 modes in both the radial and vertical dimensions.

  13. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by employing bandwidth shells at areas of overutilization

    DOEpatents

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-04-27

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. An automated routing strategy routes packets through one or more intermediate nodes of the network to reach a final destination. The default routing strategy is altered responsive to detection of overutilization of a particular path of one or more links, and at least some traffic is re-routed by distributing the traffic among multiple paths (which may include the default path). An alternative path may require a greater number of link traversals to reach the destination node.

  14. Non-compact nonlinear sigma models

    NASA Astrophysics Data System (ADS)

    de Rham, Claudia; Tolley, Andrew J.; Zhou, Shuang-Yong

    2016-09-01

    The target space of a nonlinear sigma model is usually required to be positive definite to avoid ghosts. We introduce a unique class of nonlinear sigma models where the target space metric has a Lorentzian signature, so that the associated group is non-compact. We show that the would-be ghost associated with the negative direction is fully projected out by two second-class constraints, and that stable solutions exist in this class of models. This result also has important implications for Lorentz-invariant massive gravity: there exist stable nontrivial vacua in massive gravity that are free from any linear vDVZ discontinuity, and a Λ₂ decoupling limit can be defined on these vacua.

  15. Hairy black holes in scalar extended massive gravity

    NASA Astrophysics Data System (ADS)

    Tolley, Andrew J.; Wu, De-Jun; Zhou, Shuang-Yong

    2015-12-01

    We construct static, spherically symmetric black hole solutions in scalar extended ghost-free massive gravity and show the existence of hairy black holes in this class of extension. While the existence seems to be a generic feature, we focus on the simplest models of this extension and find that asymptotically flat hairy black holes can exist without fine-tuning the theory parameters, unlike the bi-gravity extension, where asymptotic flatness requires fine-tuning in the parameter space. Like the bi-gravity extension, we are unable to obtain asymptotically dS regular black holes in the simplest models considered, but it is possible to obtain asymptotically AdS black holes.

  16. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data.

    PubMed

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H

    2012-11-06

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven not only by geospatial problems in numerous fields, but also by emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging has high potential to support image-based computer-aided diagnosis. One major requirement for this is effective querying of such enormous amounts of data with fast response, which faces two major challenges: the "big data" challenge and the high computational complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on-demand index-building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for micro-anatomic objects. To reduce query response time, we propose cost-based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce.
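
    The partition-merge approach can be illustrated with a minimal sketch: the map phase keys every object by the grid tiles its bounding box touches, and each reduce task joins only the objects that landed in its tile. The plain Python functions below stand in for MapReduce tasks; the tile size, record layout and dataset labels are assumptions, not the authors' implementation.

```python
# Illustrative partition-merge spatial join, with plain functions standing in
# for MapReduce map/reduce tasks (a sketch of the idea, not the authors' system).
from collections import defaultdict

TILE = 100.0  # tile size of the fixed spatial grid, assumed

def map_phase(dataset_id, objects):
    """Emit (tile, (dataset_id, obj)) for every tile an object's bounding box touches."""
    for obj in objects:                      # obj = (obj_id, xmin, ymin, xmax, ymax)
        _, xmin, ymin, xmax, ymax = obj
        for tx in range(int(xmin // TILE), int(xmax // TILE) + 1):
            for ty in range(int(ymin // TILE), int(ymax // TILE) + 1):
                yield (tx, ty), (dataset_id, obj)

def overlaps(p, q):
    """Bounding-box intersection test."""
    return not (p[3] < q[1] or q[3] < p[1] or p[4] < q[2] or q[4] < p[2])

def reduce_phase(tile_groups):
    """Within each tile, join objects of dataset 'A' with overlapping objects of 'B'."""
    for tile, records in tile_groups.items():
        a = [o for d, o in records if d == "A"]
        b = [o for d, o in records if d == "B"]
        for oa in a:
            for ob in b:
                if overlaps(oa, ob):
                    yield oa[0], ob[0]

# Shuffle step: group mapper output by tile key, then reduce.
groups = defaultdict(list)
for key, value in list(map_phase("A", [(1, 10, 10, 20, 20)])) + \
                  list(map_phase("B", [(7, 15, 15, 30, 30)])):
    groups[key].append(value)
print(sorted(set(reduce_phase(groups))))     # [(1, 7)]
```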

  17. Effectiveness of compressed sensing and transmission in wireless sensor networks for structural health monitoring

    NASA Astrophysics Data System (ADS)

    Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki

    2017-04-01

    Regarding Structural Health Monitoring (SHM) for seismic acceleration, Wireless Sensor Networks (WSN) are a promising tool for low-cost monitoring. Compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSN. In particular, SHM systems installing massive numbers of WSN nodes require efficient data transmission due to restricted communications capability. The dominant frequency content of seismic acceleration lies below about 100 Hz. In addition, the response motions on upper floors of a structure are excited at a natural frequency, resulting in shaking concentrated in a narrow band. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We carry out a compressed sensing and transmission scheme based on band-pass filtering of the seismic acceleration data. The algorithm executes the discrete Fourier transform to move to the frequency domain and applies band-pass filtering for the compressed transmission. Assuming that the compressed data are transmitted through computer networks, restoration of the data is performed by the inverse Fourier transform in the receiving node. This paper evaluates the compressed sensing for seismic acceleration in terms of an average error. The results show that the average error was 0.06 or less for the horizontal acceleration when the acceleration data were compressed to 1/32 of their original size. In particular, the average error on the 4th floor was as small as 0.02. These results indicate that the compressed sensing and transmission technique is effective in reducing the amount of data while maintaining a small average error.
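
    The compression scheme described above can be sketched as follows: transform an acceleration record with the (real) FFT, transmit only the low-frequency coefficients, and reconstruct at the receiver with the inverse FFT. The sampling rate, test signal, compression ratio and error normalisation below are illustrative assumptions, not the deployed system's parameters.

```python
# Sketch of the DFT-based compression/restoration idea (parameter values are
# illustrative assumptions, not those of the actual SHM system).
import numpy as np

fs = 200.0                                   # sampling rate [Hz], assumed
t = np.arange(0, 20.0, 1 / fs)
accel = (np.sin(2 * np.pi * 2.5 * t)         # narrow-band structural response, assumed
         + 0.1 * np.random.default_rng(1).normal(size=t.size))

# Sender: keep only the lowest 1/32 of the one-sided spectrum.
spectrum = np.fft.rfft(accel)
keep = len(spectrum) // 32
compressed = spectrum[:keep]                 # this is what would be transmitted

# Receiver: zero-pad the spectrum back to full length and inverse-transform.
restored_spectrum = np.zeros_like(spectrum)
restored_spectrum[:keep] = compressed
restored = np.fft.irfft(restored_spectrum, n=accel.size)

# "Average error" here is the mean absolute error normalised by peak amplitude
# (an assumed definition for illustration).
avg_error = np.mean(np.abs(restored - accel)) / np.max(np.abs(accel))
print(f"average error: {avg_error:.3f}")
```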

  18. Towards Building a High Performance Spatial Query System for Large Scale Medical Imaging Data

    PubMed Central

    Aji, Ablimit; Wang, Fusheng; Saltz, Joel H.

    2013-01-01

    Support of high performance queries on large volumes of scientific spatial data is becoming increasingly important in many applications. This growth is driven not only by geospatial problems in numerous fields, but also by emerging scientific applications that are increasingly data- and compute-intensive. For example, digital pathology imaging has become an emerging field during the past decade, where examination of high resolution images of human tissue specimens enables more effective diagnosis, prediction and treatment of diseases. Systematic analysis of large-scale pathology images generates tremendous amounts of spatially derived quantifications of micro-anatomic objects, such as nuclei, blood vessels, and tissue regions. Analytical pathology imaging has high potential to support image-based computer-aided diagnosis. One major requirement for this is effective querying of such enormous amounts of data with fast response, which faces two major challenges: the “big data” challenge and the high computational complexity. In this paper, we present our work towards building a high performance spatial query system for querying massive spatial data on MapReduce. Our framework takes an on-demand index-building approach for processing spatial queries and a partition-merge approach for building parallel spatial query pipelines, which fits nicely with the computing model of MapReduce. We demonstrate our framework on supporting multi-way spatial joins for algorithm evaluation and nearest neighbor queries for micro-anatomic objects. To reduce query response time, we propose cost-based query optimization to mitigate the effect of data skew. Our experiments show that the framework can efficiently support complex analytical spatial queries on MapReduce. PMID:24501719

  19. Quantum entanglement for systems of identical bosons: I. General features

    NASA Astrophysics Data System (ADS)

    Dalton, B. J.; Goold, J.; Garraway, B. M.; Reid, M. D.

    2017-02-01

    These two accompanying papers are concerned with two-mode entanglement for systems of identical massive bosons and its relationship to spin squeezing and other quantum correlation effects. Entanglement is a key quantum feature of composite systems in which the probabilities for joint measurements on the composite sub-systems are no longer determined from measurement probabilities on the separate sub-systems. There are many aspects of entanglement that can be studied. This two-part review focuses on the meaning of entanglement, the quantum paradoxes associated with entangled states, and the important tests that allow an experimentalist to determine whether a quantum state, in particular one for massive bosons, is entangled. An overall outcome of the review is to distinguish criteria (and hence experiments) for entanglement that fully utilize the symmetrization principle and the super-selection rules that can be applied to bosonic massive particles. In the first paper (I), the background is given for the meaning of entanglement in the context of systems of identical particles. For such systems, the requirement is that the relevant quantum density operators must satisfy the symmetrization principle and that global and local super-selection rules prohibit states in which there are coherences between differing particle numbers. The justification for these requirements is fully discussed. In the second quantization approach that is used, both the system and the sub-systems are modes (or sets of modes) rather than particles, particles being associated with different occupancies of the modes. The definition of entangled states is based on first defining the non-entangled states, after specifying which modes constitute the sub-systems. This work mainly focuses on two-mode entanglement for massive bosons, but is put in the context of tests of local hidden variable theories, where one may not be able to make the above restrictions. The review provides the detailed arguments necessary for the conclusions of a recent paper, where the question of how to rigorously demonstrate the entanglement of a two-mode Bose-Einstein condensate (BEC) has been examined. In the accompanying review paper (II), we consider spin squeezing and other tests for entanglement that have been proposed for two-mode bosonic systems. We apply the approach of review (I) to determine which tests, and which modifications of the tests, are useful for detecting entanglement in massive bosonic (BEC), as opposed to photonic, systems. Several new inequalities are derived, a theory for the required two-mode interferometry is presented, and key experiments to date are analyzed.

  20. Morus spp. as a New Biomass Crop

    USDA-ARS?s Scientific Manuscript database

    Generating enthusiasm from political or business entities to promote conservation requires economic viability in times of economic downturn. Massive reforestation is being considered as a governmental policy to address the climate crisis. It offers an enormous opportunity to redefine forestry plan...

  1. Significant Traumatic Intracranial Hemorrhage in the Setting of Massive Bee Venom-Induced Coagulopathy: A Case Report.

    PubMed

    Stack, Kelsey; Pryor, Lindsey

    2016-09-01

    Bees and wasps of the Hymenoptera order are encountered on a daily basis throughout the world. Some encounters prove harmless, while others can have significant morbidity and mortality. Hymenoptera venom is thought to contain an enzyme that can cleave phospholipids and cause significant coagulation abnormalities. This toxin and others can lead to reactions ranging from local inflammation to anaphylaxis. We report a single case of a previously healthy man who presented to the emergency department with altered mental status and anaphylaxis after a massive honeybee envenomation that caused a fall from standing resulting in significant head injury. He was found to have significant coagulopathy and subdural bleeding that progressed to near brain herniation requiring emergent decompression. Trauma can easily occur to individuals escaping swarms of hymenoptera. Closer attention must be paid to potential bleeding sources in these patients and in patients with massive bee envenomation. Copyright © 2016 Wilderness Medical Society. Published by Elsevier Inc. All rights reserved.

  2. Efficient Risk Determination of Risk of Road Blocking by Means of MMS and Data of Buildings and Their Surrounding

    NASA Astrophysics Data System (ADS)

    Nose, Kazuhito; Hatake, Shuhei

    2016-06-01

    A massive earthquake, known as the "Tonankai Massive Earthquake", is predicted to occur in the near future and is feared to cause severe damage in the Kinki District. The "Hanshin-Awaji Massive Earthquake" of 1995 destroyed most of the buildings that had been constructed before 1981 and did not comply with the latest earthquake-resistance standards. Collapsed buildings blocked roads, obstructed evacuation, rescue and firefighting operations, and inflicted further damage. To alleviate such damage, it is important to predict the points where collapsed buildings are likely to block roads and to take precautions in advance. However, big cities have extensive urban areas with densely distributed buildings, and checking each and every building for its risk of blocking a road requires considerable time and cost. In order to reduce the number of blocked roads when a disaster strikes, we carried out a study and confirmed that the risk of road blocking can be determined easily by means of the latest survey and geographic information technologies.

  3. Medium generated gap in gravity and a 3D gauge theory

    NASA Astrophysics Data System (ADS)

    Gabadadze, Gregory; Older, Daniel

    2018-05-01

    It is well known that a physical medium that sets a Lorentz frame generates a Lorentz-breaking gap for a graviton. We examine such generated "mass" terms in the presence of a fluid medium whose ground state spontaneously breaks spatial translation invariance in d = D + 1 spacetime dimensions, and for a solid in D = 2 spatial dimensions. Requiring energy positivity and subluminal propagation places certain constraints on the equation of state of the medium. In the case of D = 2 spatial dimensions, classical gravity can be recast as a Chern-Simons gauge theory, and motivated by this we recast the massive theory of gravity in AdS3 as a massive Chern-Simons gauge theory with an unusual mass term. We find that in the flat space limit the Chern-Simons theory has a novel gauge invariance that mixes the kinetic and mass terms, and enables the massive theory with a noncompact internal group to be free of ghosts and tachyons.

  4. Geochemical studies of rare earth elements in the Portuguese pyrite belt, and geologic and geochemical controls on gold distribution

    USGS Publications Warehouse

    Grimes, David J.; Earhart, Robert L.; de Carvalho, Delfim; Oliveira, Vitor; Oliveira, Jose T.; Castro, Paulo

    1998-01-01

    This report describes geochemical and geological studies which were conducted by the U.S. Geological Survey (USGS) and the Servicos Geologicos de Portugal (SPG) in the Portuguese pyrite belt (PPB) in southern Portugal. The studies included rare earth element (REE) distributions and geological and geochemical controls on the distribution of gold. Rare earth element distributions were determined in representative samples of the volcanic rocks from five west-trending sub-belts of the PPB in order to test the usefulness of REE as a tool for the correlation of volcanic events, and to determine their mobility and application as hydrothermal tracers. REE distributions in felsic volcanic rocks show increases in the relative abundances of heavy REE and a decrease in La/Yb ratios from north to south in the Portuguese pyrite belt. Anomalous amounts of gold are distributed in and near massive and disseminated sulfide deposits in the PPB. Gold is closely associated with copper in the middle and lower parts of the deposits. Weakly anomalous concentrations of gold were noted in exhalative sedimentary rocks that are stratigraphically above massive sulfide deposits in a distal manganiferous facies, whereas anomalously low concentrations were detected in the barite-rich, proximal-facies exhalites. Altered and pyritic felsic volcanic rocks locally contain highly anomalous concentrations of gold, suggesting that disseminated sulfide deposits and the non-ore parts of massive sulfide deposits should be evaluated for their gold potential.

  5. JELLYFISH: EVIDENCE OF EXTREME RAM-PRESSURE STRIPPING IN MASSIVE GALAXY CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ebeling, H.; Stephenson, L. N.; Edge, A. C.

    Ram-pressure stripping by the gaseous intracluster medium has been proposed as the dominant physical mechanism driving the rapid evolution of galaxies in dense environments. Detailed studies of this process have, however, largely been limited to relatively modest examples affecting only the outermost gas layers of galaxies in nearby and/or low-mass galaxy clusters. We here present results from our search for extreme cases of gas-galaxy interactions in much more massive, X-ray selected clusters at z > 0.3. Using Hubble Space Telescope snapshots in the F606W and F814W passbands, we have discovered dramatic evidence of ram-pressure stripping in which copious amounts of gas are first shock compressed and then removed from galaxies falling into the cluster. Vigorous starbursts triggered by this process across the galaxy-gas interface and in the debris trail cause these galaxies to temporarily become some of the brightest cluster members in the F606W passband, capable of outshining even the Brightest Cluster Galaxy. Based on the spatial distribution and orientation of systems viewed nearly edge-on in our survey, we speculate that infall at large impact parameter gives rise to particularly long-lasting stripping events. Our sample of six spectacular examples identified in clusters from the Massive Cluster Survey, all featuring M_F606W < −21 mag, doubles the number of such systems presently known at z > 0.2 and facilitates detailed quantitative studies of the most violent galaxy evolution in clusters.

  6. Bright vigorous winds as signposts of supermassive black hole birth

    NASA Astrophysics Data System (ADS)

    Fiacconi, Davide; Rossi, Elena M.

    2016-01-01

    The formation of supermassive black holes is still an outstanding question. In the quasi-star scenario, black hole seeds experience an initial super-Eddington growth that in less than a million years may leave a 10^4-10^5 M⊙ black hole at the centre of a protogalaxy at z ˜ 20-10. Super-Eddington accretion, however, may be accompanied by vigorous mass-loss that can limit the amount of mass that reaches the black hole. In this paper, we critically assess the impact of radiatively driven winds, launched from the surface of the massive envelopes from which the black hole accretes. Solving the full wind equations coupled with the hydrostatic structure of the envelope, we find mass outflows with rates between a few tens and 10^4 M⊙ yr^-1, mainly powered by advection luminosity within the outflow. We therefore confirm the claim by Dotan et al. that mass losses can severely affect the black hole seed's early growth within a quasi-star. In particular, seeds with mass >10^4 M⊙ can only form within mass reservoirs ≳10^7 M⊙, unless they are refilled at huge rates (≳100 M⊙ yr^-1). This may imply that only very massive haloes (>10^9 M⊙) at those redshifts can harbour massive seeds. Contrary to previous claims, these winds are expected to be relatively bright (10^44-10^47 erg s^-1), blue (Teff ˜ 8000 K) objects that, while eluding the Hubble Space Telescope, could be observed by the James Webb Space Telescope.

  7. [Pelvic reconstructions after bone tumor resection].

    PubMed

    Anract, Philippe; Biau, David; Babinet, Antoine; Tomeno, Bernard

    2014-02-01

    The three most frequent primary malignant bone tumours involving the iliac bone are chondrosarcoma, followed by Ewing sarcoma and osteosarcoma. Wide resection remains the most important part of the treatment, associated with chemotherapy for osteosarcoma and Ewing sarcoma. Resections of the iliac wing and of the obturator ring do not require reconstruction. However, acetabular resections, and iliac wing resections that disrupt the pelvic ring, require reconstruction to provide an acceptable functional result. Acetabular reconstruction remains a highly demanding technical challenge. After isolated acetabular resection, or resection of the acetabulum together with the obturator ring, our usual method of reconstruction is a homolateral proximal femoral autograft with a total hip prosthesis, but other options can also be used: saddle prosthesis, McMinn prosthesis with auto- or allograft, modular or custom-made prosthesis, massive allograft with or without prosthesis, and femoro-iliac arthrodesis. After resection of the iliac wing plus the acetabulum, reconstruction can be performed by femoro-obturator or femoro-sacral arthrodesis, homolateral proximal femoral autograft with prosthesis, femoral medialisation, or massive allograft. Oncological results are poorer than after resection of distal limb tumours, with local recurrence rates ranging from 17 to 45%. Functional results after resection of the iliac wing and obturator ring are good. However, acetabular reconstruction provides uncertain functional results. The poorest results occur after hemipelvic, or acetabular plus iliac wing, resection-reconstruction, especially when the gluteal muscles were also resected. The most favourable results occur after isolated acetabular, or acetabular plus obturator ring, resection-reconstruction.

  8. High chance that current atmospheric greenhouse concentrations commit to warmings greater than 1.5 °C over land

    PubMed Central

    Huntingford, Chris; Mercado, Lina M.

    2016-01-01

    The recent Paris UNFCCC climate meeting discussed the possibility of limiting global warming to 2 °C since pre-industrial times, or possibly even 1.5 °C, which would require major future emissions reductions. However, even if climate is stabilised at current atmospheric greenhouse gas (GHG) concentrations, those warming targets would almost certainly be surpassed in the context of mean temperature increases over land only. The reason for this is two-fold. First, current transient warming lags significantly below equilibrium or “committed” warming. Second, almost all climate models indicate warming rates over land are much higher than those for the oceans. We demonstrate this potential for high eventual temperatures over land, even for contemporary GHG levels, using a large set of climate models and for which climate sensitivities are known. Such additional land warming has implications for impacts on terrestrial ecosystems and human well-being. This suggests that even if massive and near-immediate emissions reductions occur such that atmospheric GHGs increase further by only small amounts, careful planning is needed by society to prepare for higher land temperatures in an eventual equilibrium climatic state. PMID:27461560

  9. Chemometric Strategies for Peak Detection and Profiling from Multidimensional Chromatography.

    PubMed

    Navarro-Reig, Meritxell; Bedia, Carmen; Tauler, Romà; Jaumot, Joaquim

    2018-04-03

    The increasing complexity of omics research has encouraged the development of new instrumental technologies able to deal with these challenging samples. In this context, the rise of multidimensional separations should be highlighted, owing to the massive amounts of information they provide together with enhanced analyte determination. Both proteomics and metabolomics benefit from the higher separation capacity achieved when different chromatographic dimensions are combined, either in LC or GC. However, this vast quantity of experimental information requires the application of chemometric data analysis strategies to retrieve the hidden knowledge, especially in the case of nontargeted studies. In this work, the most common chemometric tools and approaches for the analysis of such multidimensional chromatographic data are reviewed. First, different options for data preprocessing and enhancement of the instrumental signal are introduced. Next, the most used chemometric methods for the detection of chromatographic peaks and the resolution of chromatographic and spectral contributions (profiling) are presented. The description of these data analysis approaches is complemented with enlightening examples from omics fields that demonstrate the exceptional potential of combining multidimensional separation techniques with chemometric data analysis tools. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
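
    As a minimal illustration of the peak-detection step discussed above, the sketch below picks peaks from a simulated one-dimensional chromatogram with SciPy; it is not any specific method from the review, and real multidimensional data would first require unfolding and baseline correction.

```python
# Minimal peak-picking sketch on a simulated chromatogram (illustrative only).
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0, 30, 3000)                     # retention time [min], assumed axis
chromatogram = (1.0 * np.exp(-((t - 8) / 0.15) ** 2) +
                0.6 * np.exp(-((t - 14) / 0.20) ** 2) +
                0.3 * np.exp(-((t - 21) / 0.25) ** 2) +
                0.02 * np.random.default_rng(2).normal(size=t.size))

# Detect peaks above a height and prominence threshold (assumed thresholds).
peaks, props = find_peaks(chromatogram, height=0.1, prominence=0.05)
for idx, height in zip(peaks, props["peak_heights"]):
    print(f"peak at {t[idx]:.2f} min, height {height:.2f}")
```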

  10. Application of the SNoW machine learning paradigm to a set of transportation imaging problems

    NASA Astrophysics Data System (ADS)

    Paul, Peter; Burry, Aaron M.; Wang, Yuheng; Kozitsky, Vladimir

    2012-01-01

    Machine learning methods have been successfully applied to image object classification problems where there is clear distinction between classes and where a comprehensive set of training samples and ground truth are readily available. The transportation domain is an area where machine learning methods are particularly applicable, since the classification problems typically have well-defined class boundaries and, due to high traffic volumes in most applications, massive amounts of roadway data are available. Though these classes tend to be well defined, the particular image noises and variations can be challenging. Another challenge is the extremely high accuracy typically required in most traffic applications. Incorrect assignment of fines or tolls due to imaging mistakes is not acceptable in most applications. For the front-seat vehicle occupancy detection problem, classification amounts to determining whether one face (driver only) or two faces (driver + passenger) are detected in the front seat of a vehicle on a roadway. For automatic license plate recognition, the classification problem is a type of optical character recognition problem encompassing multi-class classification. The SNoW machine learning classifier using local SMQT features is shown to be successful in these two transportation imaging applications.
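
    SNoW (Sparse Network of Winnows) is built on Winnow units, which use multiplicative weight updates over sparse Boolean features. The toy sketch below shows a single Winnow unit under that update rule; the feature indices and training loop are illustrative assumptions, not the authors' SNoW/SMQT occupancy detector.

```python
# Toy Winnow unit over sparse Boolean features (a sketch of the update rule
# underlying SNoW, not the authors' full pipeline).
from collections import defaultdict

class Winnow:
    def __init__(self, n_features, alpha=2.0):
        self.w = defaultdict(lambda: 1.0)     # all weights start at 1
        self.alpha = alpha                    # promotion/demotion factor
        self.theta = n_features / 2.0         # fixed decision threshold

    def predict(self, active_features):
        return sum(self.w[f] for f in active_features) > self.theta

    def update(self, active_features, label):
        if self.predict(active_features) == label:
            return                            # correct: no change
        factor = self.alpha if label else 1.0 / self.alpha
        for f in active_features:             # promote on false negative,
            self.w[f] *= factor               # demote on false positive

# Usage: features are indices of active binary local descriptors (e.g. SMQT codes).
clf = Winnow(n_features=1000)
for _ in range(10):
    clf.update({3, 17, 250}, label=True)      # "two faces" example, assumed features
    clf.update({3, 980}, label=False)         # "driver only" example, assumed features
print(clf.predict({3, 17, 250}), clf.predict({3, 980}))
```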

  11. High chance that current atmospheric greenhouse concentrations commit to warmings greater than 1.5 °C over land

    NASA Astrophysics Data System (ADS)

    Huntingford, Chris; Mercado, Lina M.

    2016-07-01

    The recent Paris UNFCCC climate meeting discussed the possibility of limiting global warming to 2 °C since pre-industrial times, or possibly even 1.5 °C, which would require major future emissions reductions. However, even if climate is stabilised at current atmospheric greenhouse gas (GHG) concentrations, those warming targets would almost certainly be surpassed in the context of mean temperature increases over land only. The reason for this is two-fold. First, current transient warming lags significantly below equilibrium or “committed” warming. Second, almost all climate models indicate warming rates over land are much higher than those for the oceans. We demonstrate this potential for high eventual temperatures over land, even for contemporary GHG levels, using a large set of climate models and for which climate sensitivities are known. Such additional land warming has implications for impacts on terrestrial ecosystems and human well-being. This suggests that even if massive and near-immediate emissions reductions occur such that atmospheric GHGs increase further by only small amounts, careful planning is needed by society to prepare for higher land temperatures in an eventual equilibrium climatic state.

  12. Interaction with culture medium components, cellular uptake and intracellular distribution of cobalt nanoparticles, microparticles and ions in Balb/3T3 mouse fibroblasts.

    PubMed

    Sabbioni, Enrico; Fortaner, Salvador; Farina, Massimo; Del Torchio, Riccardo; Petrarca, Claudia; Bernardini, Giovanni; Mariani-Costantini, Renato; Perconti, Silvia; Di Giampaolo, Luca; Gornati, Rosalba; Di Gioacchino, Mario

    2014-02-01

    The mechanistic understanding of nanotoxicity requires the physico-chemical characterisation of nanoparticles (NP) and their comparative investigation relative to the corresponding ions and microparticles (MP). Following this approach, the authors studied the dissolution, interaction with medium components, bioavailability in culture medium, uptake and intracellular distribution of radiolabelled Co forms (CoNP, CoMP and Co(2+)) in Balb/3T3 mouse fibroblasts. Co(2+) first saturates the binding sites of molecules in the extracellular milieu (e.g., albumin and histidine) and on the cell surface. Only after saturation is Co(2+) actively taken up. CoNP, instead, are predicted to be internalised by endocytosis. Dissolution of Co particles allows the formation of Co compounds (CoNP-rel), whose mechanism of cellular internalisation is unknown. Co uptake (ranking CoMP > CoNP > Co(2+)) reached a maximum at 4 h. Once inside the cell, CoNP spread into the cytosol and organelles. Consequently, massive amounts of Co ions and CoNP-rel can reach subcellular compartments normally unexposed to Co(2+). This could explain the fact that the nuclear and mitochondrial Co concentrations were significantly higher than those obtained with Co(2+).

  13. Massive Sorghum Collection Genotyped with SSR Markers to Enhance Use of Global Genetic Resources

    PubMed Central

    Bouchet, Sophie; Chantereau, Jacques; Deu, Monique; Gardes, Laetitia; Noyer, Jean-Louis; Rami, Jean-François; Rivallan, Ronan; Li, Yu; Lu, Ping; Wang, Tianyu; Folkertsma, Rolf T.; Arnaud, Elizabeth; Upadhyaya, Hari D.; Glaszmann, Jean-Christophe; Hash, C. Thomas

    2013-01-01

    Large ex situ collections require approaches for sampling manageable amounts of germplasm for in-depth characterization and use. We present here a large diversity survey in sorghum with 3367 accessions and 41 reference nuclear SSR markers. Of 19 alleles on average per locus, the largest numbers of alleles were concentrated in central and eastern Africa. Cultivated sorghum appeared structured according to geographic regions and race within region. A total of 13 groups of variable size were distinguished. The peripheral groups in western Africa, southern Africa and eastern Asia were the most homogeneous and clearly differentiated. Except for Kafir, there was little correspondence between races and marker-based groups. Bicolor, Caudatum, Durra and Guinea types were each dispersed in three groups or more. Races should therefore better be referred to as morphotypes. Wild and weedy accessions were very diverse and scattered among cultivated samples, reinforcing the idea that large gene-flow exists between the different compartments. Our study provides an entry to global sorghum germplasm collections. Our reference marker kit can serve to aggregate additional studies and enhance international collaboration. We propose a core reference set in order to facilitate integrated phenotyping experiments towards refined functional understanding of sorghum diversity. PMID:23565161

  14. Spontaneous rapid reduction of a large acute subdural hematoma.

    PubMed

    Lee, Chul-Hee; Kang, Dong Ho; Hwang, Soo Hyun; Park, In Sung; Jung, Jin-Myung; Han, Jong Woo

    2009-12-01

    The majority of acute post-traumatic subdural hematomas (ASDH) require urgent surgical evacuation. Spontaneous resolution of ASDH has been reported in some cases. We report here on a case of a patient with a large amount of ASDH that was rapidly reduced. A 61-yr-old man was found unconscious following a high speed motor vehicle accident. On initial examination, his Glasgow Coma Scale score was 4/15. His pupils were fully dilated and non-reactive to bright light. Brain computed tomography (CT) showed a massive right-sided ASDH. The decision was made to treat him conservatively because of his poor clinical condition. Another brain CT approximately 14 hr after the initial scan demonstrated a remarkable reduction of the previous ASDH and the new appearance of high density in the subdural space adjacent to the falx and the tentorium. Thirty days after his admission, brain CT revealed chronic SDH and the patient underwent surgery. The patient is currently able to obey simple commands. In conclusion, spontaneous rapid resolution/reduction of ASDH may occur in some patients. The mechanisms are most likely dilution by cerebrospinal fluid and redistribution of the hematoma, especially in patients with brain atrophy.

  15. Spontaneous Rapid Reduction of a Large Acute Subdural Hematoma

    PubMed Central

    Kang, Dong Ho; Hwang, Soo Hyun; Park, In Sung; Jung, Jin-Myung; Han, Jong Woo

    2009-01-01

    The majority of acute post-traumatic subdural hematomas (ASDH) require urgent surgical evacuation. Spontaneous resolution of ASDH has been reported in some cases. We report here on a case of a patient with a large amount of ASDH that was rapidly reduced. A 61-yr-old man was found unconscious following a high speed motor vehicle accident. On initial examination, his Glasgow Coma Scale score was 4/15. His pupils were fully dilated and non-reactive to bright light. Brain computed tomography (CT) showed a massive right-sided ASDH. The decision was made to treat him conservatively because of his poor clinical condition. Another brain CT approximately 14 hr after the initial scan demonstrated a remarkable reduction of the previous ASDH and the new appearance of high density in the subdural space adjacent to the falx and the tentorium. Thirty days after his admission, brain CT revealed chronic SDH and the patient underwent surgery. The patient is currently able to obey simple commands. In conclusion, spontaneous rapid resolution/reduction of ASDH may occur in some patients. The mechanisms are most likely dilution by cerebrospinal fluid and redistribution of the hematoma, especially in patients with brain atrophy. PMID:19949689

  16. NVST Data Archiving System Based On FastBit NoSQL Database

    NASA Astrophysics Data System (ADS)

    Liu, Ying-bo; Wang, Feng; Ji, Kai-fan; Deng, Hui; Dai, Wei; Liang, Bo

    2014-06-01

    The New Vacuum Solar Telescope (NVST) is a 1-meter vacuum solar telescope that aims to observe the fine structures of active regions on the Sun. The main tasks of the NVST are high resolution imaging and spectral observations, including measurements of the solar magnetic field. The NVST has collected more than 20 million FITS files since it began routine observations in 2012 and produces up to 120 thousand observational records (files) in a day. Given the large number of files, effective archiving and retrieval of files has become a critical and urgent problem. In this study, we implement a new data archiving system for the NVST based on the FastBit Not Only Structured Query Language (NoSQL) database. Compared with a relational database (MySQL, My Structured Query Language), the FastBit database shows distinct advantages in indexing and querying performance. In a large-scale database of 40 million records, multi-field combined queries run about 15 times faster on the FastBit database and fully meet the requirements of the NVST. Our study offers a new approach to massive astronomical data archiving and should contribute to the design of data management systems for other astronomical telescopes.
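
    FastBit's query performance rests on bitmap indexes: each distinct value (or bin) of a column gets a bit vector marking the records that hold that value, and a multi-field combined query reduces to bitwise ANDs of a few such vectors. The toy sketch below illustrates that idea with Python integers as bit vectors; the field names are invented for illustration, and FastBit's WAH compression of the bitmaps is omitted.

```python
# Toy bitmap index over two fields (illustrates the idea behind FastBit-style
# multi-field queries; not the FastBit API itself).
from collections import defaultdict

records = [
    {"instrument": "TiO", "exposure": 20},      # field names are assumed examples
    {"instrument": "Halpha", "exposure": 40},
    {"instrument": "TiO", "exposure": 40},
    {"instrument": "TiO", "exposure": 20},
]

def build_index(records, field):
    """One bit vector (stored as a Python int) per distinct field value."""
    index = defaultdict(int)
    for row_id, rec in enumerate(records):
        index[rec[field]] |= 1 << row_id
    return index

inst_idx = build_index(records, "instrument")
expo_idx = build_index(records, "exposure")

# Combined query: instrument == "TiO" AND exposure == 20  ->  AND the bitmaps.
hits = inst_idx["TiO"] & expo_idx[20]
print([i for i in range(len(records)) if hits >> i & 1])   # [0, 3]
```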

  17. A web platform for the network analysis of high-throughput data in melanoma and its use to investigate mechanisms of resistance to anti-PD1 immunotherapy.

    PubMed

    Dreyer, Florian S; Cantone, Martina; Eberhardt, Martin; Jaitly, Tanushree; Walter, Lisa; Wittmann, Jürgen; Gupta, Shailendra K; Khan, Faiz M; Wolkenhauer, Olaf; Pützer, Brigitte M; Jäck, Hans-Martin; Heinzerling, Lucie; Vera, Julio

    2018-06-01

    Cellular phenotypes are established and controlled by complex and precisely orchestrated molecular networks. In cancer, mutations and dysregulation of multiple molecular factors perturb the regulation of these networks and lead to malignant transformation. High-throughput technologies are a valuable source of information for establishing the complex molecular relationships behind the emergence of malignancy, but full exploitation of this massive amount of data requires bioinformatics tools that rely on network-based analyses. In this report we present the Virtual Melanoma Cell, an online tool developed to facilitate the mining and interpretation of high-throughput data on melanoma by biomedical researchers. The platform is based on a comprehensive, manually generated and expert-validated regulatory map composed of signaling pathways important in malignant melanoma. The Virtual Melanoma Cell is a tool designed to accept, visualize and analyze user-generated datasets. It is available at: https://www.vcells.net/melanoma. To illustrate the use of the web platform and the regulatory map, we have analyzed a large publicly available dataset from anti-PD1 immunotherapy treatment of malignant melanoma patients. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Computer-assisted map projection research

    USGS Publications Warehouse

    Snyder, John Parr

    1985-01-01

    Computers have opened up areas of map projection research which were previously too complicated to utilize, for example, using a least-squares fit to a very large number of points. One application has been in the efficient transfer of data between maps on different projections. While the transfer of moderate amounts of data is satisfactorily accomplished using the analytical map projection formulas, polynomials are more efficient for massive transfers. Suitable coefficients for the polynomials may be determined more easily for general cases using least squares instead of Taylor series. A second area of research is in the determination of a map projection fitting an unlabeled map, so that accurate data transfer can take place. The computer can test one projection after another, and include iteration where required. A third area is in the use of least squares to fit a map projection with optimum parameters to the region being mapped, so that distortion is minimized. This can be accomplished for standard conformal, equalarea, or other types of projections. Even less distortion can result if complex transformations of conformal projections are utilized. This bulletin describes several recent applications of these principles, as well as historical usage and background.
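
    The polynomial-transfer idea works roughly as follows: evaluate the exact projection-to-projection transform once on a modest set of control points, fit low-order bivariate polynomials to those points by least squares, and then apply the cheap polynomials to the massive point sets. The sketch below illustrates this; the "exact" transform used here is an arbitrary stand-in, not a real pair of projections.

```python
# Sketch of least-squares polynomial transfer between two map projections
# (the analytic mapping below is a stand-in for real projection formulas).
import numpy as np

def exact_transform(x, y):
    # Stand-in for "unproject from map A to lon/lat, then project to map B".
    return 0.9 * x + 0.05 * x * y, 1.1 * y - 0.03 * x**2

# Control points where the exact (expensive) transform is evaluated once.
rng = np.random.default_rng(3)
x, y = rng.uniform(-1, 1, 500), rng.uniform(-1, 1, 500)
u, v = exact_transform(x, y)

# Design matrix of a 2nd-order bivariate polynomial: 1, x, y, x^2, xy, y^2.
A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coef_u, *_ = np.linalg.lstsq(A, u, rcond=None)
coef_v, *_ = np.linalg.lstsq(A, v, rcond=None)

def transfer(x, y):
    """Massive transfers reuse the fitted coefficients instead of the formulas."""
    B = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    return B @ coef_u, B @ coef_v

xt, yt = transfer(np.array([0.2]), np.array([-0.4]))
print(xt, yt)
```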

  19. Hidden sector monopole, vector dark matter and dark radiation with Higgs portal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baek, Seungwon; Ko, P.; Park, Wan-Il, E-mail: sbaek1560@gmail.com, E-mail: pko@kias.re.kr, E-mail: wipark@kias.re.kr

    2014-10-01

    We show that the 't Hooft-Polyakov monopole model in the hidden sector with Higgs portal interaction makes a viable dark matter model, where the monopole and massive vector dark matter (VDM) are stable due to topological conservation and the unbroken subgroup U(1)_X. We show that, even though the observed CMB data require the dark gauge coupling to be quite small, the right amount of VDM thermal relic can be obtained via s-channel resonant annihilation for VDM masses close to or smaller than half the SM Higgs mass, thanks to the Higgs portal interaction. The monopole relic density turns out to be several orders of magnitude smaller than the observed dark matter relic density. Direct detection experiments, particularly the projected XENON1T experiment, may probe the parameter space where the dark Higgs is lighter than about 50 GeV. In addition, the dark photon associated with the unbroken U(1)_X contributes to the radiation energy density at present, giving ΔN_eff^ν ∼ 0.1 as the extra relativistic neutrino species.

  20. High chance that current atmospheric greenhouse concentrations commit to warmings greater than 1.5 °C over land.

    PubMed

    Huntingford, Chris; Mercado, Lina M

    2016-07-27

    The recent Paris UNFCCC climate meeting discussed the possibility of limiting global warming to 2 °C since pre-industrial times, or possibly even 1.5 °C, which would require major future emissions reductions. However, even if climate is stabilised at current atmospheric greenhouse gas (GHG) concentrations, those warming targets would almost certainly be surpassed in the context of mean temperature increases over land only. The reason for this is two-fold. First, current transient warming lags significantly below equilibrium or "committed" warming. Second, almost all climate models indicate warming rates over land are much higher than those for the oceans. We demonstrate this potential for high eventual temperatures over land, even for contemporary GHG levels, using a large set of climate models and for which climate sensitivities are known. Such additional land warming has implications for impacts on terrestrial ecosystems and human well-being. This suggests that even if massive and near-immediate emissions reductions occur such that atmospheric GHGs increase further by only small amounts, careful planning is needed by society to prepare for higher land temperatures in an eventual equilibrium climatic state.
