Sample records for efficient results merging

  1. Efficient Merge and Insert Operations for Binary Heaps and Trees

    NASA Technical Reports Server (NTRS)

    Kuszmaul, Christopher Lee; Woo, Alex C. (Technical Monitor)

    2000-01-01

    Binary heaps and binary search trees merge efficiently. We introduce a new amortized analysis that allows us to prove that the cost of merging either binary heaps or balanced binary trees is O(1) in the amortized sense. The standard set of other operations (create, insert, delete, and extract minimum for binary heaps and balanced binary trees, as well as a search operation for balanced binary trees) retain a cost of O(log n). For binary heaps implemented as arrays, we show a new merge algorithm with a single-operation cost for merging two heaps, a and b, of O(|a| + min(log|b| log log|b|, log|a| log|b|)). This is an improvement over O(|a| + log|a| log|b|). The cost of the new merge is so low that it can be used in a new structure, which we call shadow heaps, to implement the insert operation to a tunable efficiency. Shadow heaps support the insert operation for simple priority queues in amortized time O(f(n)) and other operations in time O((log n log log n)/f(n)), where 1 ≤ f(n) ≤ log log n. More generally, the results here show that any data structure whose operations change its size by at most one, with the exception of a merge (a.k.a. meld) operation, can efficiently amortize the cost of the merge under conditions that hold for most implementations of binary heaps and search trees.
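
    The shadow-heap construction itself is beyond what the abstract conveys; as a point of reference, here is a minimal sketch of the textbook baseline the paper improves on, pushing each element of the smaller heap into the larger one, using Python's heapq (the function name merge_heaps is ours, not the paper's):

```python
import heapq

def merge_heaps(a, b):
    """Merge two array-based binary heaps by pushing each element of the
    smaller heap into the larger one. This is the textbook baseline, not
    the paper's shadow-heap algorithm; pushing the smaller heap b costs
    O(|b| log(|a| + |b|))."""
    if len(a) < len(b):
        a, b = b, a          # make `a` the larger heap
    for x in b:
        heapq.heappush(a, x)
    return a

small = [2, 4]
large = [1, 3, 5, 7]
heapq.heapify(small)
heapq.heapify(large)
merged = merge_heaps(large, small)
print([heapq.heappop(merged) for _ in range(len(merged))])  # [1, 2, 3, 4, 5, 7]
```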

  2. Subscription merging in filter-based publish/subscribe systems

    NASA Astrophysics Data System (ADS)

    Zhang, Shengdong; Shen, Rui

    2013-03-01

    Filter-based publish/subscribe systems suffer from high subscription maintenance costs because each broker in the system stores a large number of subscriptions. Advertisement and covering are not sufficient to overcome this problem; thus, subscription merging has been proposed. However, current research lacks an efficient and practical merging mechanism. In this paper, we propose a novel subscription merging mechanism that is both time and space efficient and can flexibly control the merging granularity. The mechanism has been verified through both theoretical and simulation-based evaluation.
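
    The abstract does not spell out a concrete merging rule; one common illustration (our toy example, not the paper's mechanism) is coalescing overlapping one-dimensional range filters, so the broker stores fewer filters while still covering every original subscription:

```python
def merge_subscriptions(ranges):
    """Coalesce overlapping (low, high) range filters into a minimal
    covering set. A toy illustration of subscription merging: for
    overlapping ranges the merge is exact, i.e. the merged filters
    match precisely the union of the originals."""
    merged = []
    for lo, hi in sorted(ranges):
        if merged and lo <= merged[-1][1]:
            # Overlaps (or touches) the previous filter: widen it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], hi))
        else:
            merged.append((lo, hi))
    return merged

print(merge_subscriptions([(1, 3), (2, 5), (7, 9)]))  # [(1, 5), (7, 9)]
```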

  3. A hierarchical word-merging algorithm with class separability measure.

    PubMed

    Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan

    2014-03-01

    In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.
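
    The exact class-separability measure and indexing structure are defined in the paper; as a hedged sketch of the general greedy hierarchy (our simplification: repeatedly merging the two words with the most similar per-class count profiles, which is not the authors' criterion), consider:

```python
import numpy as np

def hierarchical_word_merge(counts, target_words):
    """counts: list of per-word, per-class occurrence counts
    (n_words x n_classes). Greedily merge the two words whose class
    profiles are most similar (cosine similarity), summing their counts,
    until only target_words remain. A crude stand-in for the paper's
    separability-preserving criterion."""
    groups = [np.asarray(c, dtype=float) for c in counts]

    def sim(u, v):
        return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

    while len(groups) > target_words:
        best, pair = -1.0, (0, 1)
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                s = sim(groups[i], groups[j])
                if s > best:
                    best, pair = s, (i, j)
        i, j = pair
        groups[i] = groups[i] + groups[j]   # merge word j into word i
        del groups[j]
    return groups

# Four words, two classes: words 0/1 fire mostly on class A, words 2/3 on class B.
merged = hierarchical_word_merge([[10, 0], [9, 1], [0, 10], [1, 9]], target_words=2)
print([g.tolist() for g in merged])  # two class-coherent "super-words"
```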

  4. Scale, mergers and efficiency: the case of Dutch housing corporations.

    PubMed

    Veenstra, Jacob; Koolma, Hendrik M; Allers, Maarten A

    2017-01-01

    The efficiency of social housing providers is a contentious issue. In the Netherlands, there is a widespread belief that housing corporations have substantial potential for efficiency improvements. A related question is whether scale influences efficiency, since recent decades have shown a trend of mergers among corporations. This paper offers a framework to assess the effects of scale and mergers on the efficiency of Dutch housing corporations by applying both data envelopment analysis and stochastic frontier analysis to panel data for 2001-2012. The results indicate that most housing corporations operate under diseconomies of scale, implying that merging would be undesirable in most cases. However, merging may have beneficial effects on pure technical efficiency, as it forces organizations to reconsider existing practices. The data envelopment analysis indeed confirms this hypothesis, but these results cannot be replicated by the stochastic frontier analysis, meaning that the evidence for this effect is not robust.

  5. Evaluation of the late merge work zone traffic control strategy.

    DOT National Transportation Integrated Search

    2004-01-01

    Several alternative lane merge strategies have been proposed in recent years to process vehicles through work zone lane closures more safely and efficiently. Among these is the late merge. With the late merge, drivers are instructed to use all lanes ...

  6. On the merging rates of envelope-deprived components of binary systems which can give rise to supernova events

    NASA Astrophysics Data System (ADS)

    Tornambe, Amedeo

    1989-08-01

    Theoretical rates of mergers of envelope-deprived components of binary systems, which can give rise to supernova events, are described. The effects of various assumptions about the physical properties of the progenitor system and its evolutionary behavior through common-envelope phases are discussed. Four cases have been analyzed: CO-CO, He-CO and He-He double-degenerate mergers, and He star-CO dwarf mergers. It is found that, above a critical efficiency of the common-envelope action in shrinking the system, the rate of CO-CO mergers is not strongly sensitive to the efficiency. Below this critical value, no CO-CO systems will survive for times longer than a few Gyr. In contrast, He-CO dwarf systems will continue to merge at a reasonable rate up to 20 Gyr and beyond, even under extreme conditions.

  7. Online Optimal Control of Connected Vehicles for Efficient Traffic Flow at Merging Roads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rios-Torres, Jackeline; Malikopoulos, Andreas; Pisu, Pierluigi

    2015-01-01

    This paper addresses the problem of coordinating connected vehicles online at merging roads to achieve smooth traffic flow without stop-and-go driving. We present a framework and a closed-form solution that optimize the acceleration profile of each vehicle in terms of fuel economy while avoiding collisions with other vehicles at the merging zone. The proposed solution is validated through simulation, and it is shown that coordination of connected vehicles can significantly reduce fuel consumption and travel time at merging roads.

  8. Automated and Cooperative Vehicle Merging at Highway On-Ramps

    DOE PAGES

    Rios-Torres, Jackeline; Malikopoulos, Andreas A.

    2016-08-05

    Connected and automated vehicles (CAVs) are increasingly recognized as a necessity. CAVs can improve both transportation network efficiency and safety through control algorithms that can harmoniously use all available information to coordinate the vehicles. This paper addresses the problem of optimally coordinating CAVs at merging roadways to achieve smooth traffic flow without stop-and-go driving. Here we present an optimization framework and an analytical closed-form solution that allows online coordination of vehicles at merging zones. The effectiveness of the proposed solution is validated through simulation, and it is shown that coordination of vehicles can significantly reduce both fuel consumption and travel time.
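
    The analytical closed-form solution is derived in the paper; as a rough illustration of the coordination idea only (our toy first-in-first-out scheduler, with a hypothetical headway parameter), each vehicle can be assigned a merging-zone entry time no earlier than its free-flow arrival and at least one safe headway after its predecessor:

```python
def schedule_merge(arrival_times, headway=2.0):
    """Assign merging-zone entry times in first-in-first-out order:
    each vehicle enters no earlier than its free-flow arrival time and
    at least `headway` seconds after the previous vehicle. A toy
    stand-in for the paper's optimal-control solution."""
    entry, previous = [], float("-inf")
    for t in sorted(arrival_times):
        slot = max(t, previous + headway)
        entry.append(slot)
        previous = slot
    return entry

print(schedule_merge([0.0, 1.0, 5.0]))  # [0.0, 2.0, 5.0]
```

    Vehicles that arrive well separated keep their free-flow times; only conflicting arrivals are delayed, which is what removes stop-and-go shockwaves in the idealized setting.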

  9. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Mechanism for shock wave merging in magnetised plasma: criteria and efficiency of formation of low-frequency magnetosonic waves

    NASA Astrophysics Data System (ADS)

    Tishchenko, V. N.; Shaikhislamov, I. F.

    2010-08-01

    The mechanism of merging of shock waves produced by a pulsating energy source is considered for magnetised plasma. The criteria for the emergence of this mechanism are found and its high efficiency for producing low-frequency magnetosonic waves, which have the form of a jet and propagate at large distances without attenuation, is shown.

  10. A community detection algorithm using network topologies and rule-based hierarchical arc-merging strategies

    PubMed Central

    2017-01-01

    The authors use four criteria to examine a novel community detection algorithm: (a) effectiveness, in terms of producing high values of normalized mutual information (NMI) and modularity, using well-known social networks for testing; (b) the ability to mitigate resolution limit problems, examined using NMI values and synthetic networks; (c) correctness, meaning the ability to identify useful community structure results in terms of NMI values and Lancichinetti-Fortunato-Radicchi (LFR) benchmark networks; and (d) scalability, or the ability to produce comparable modularity values with fast execution times when working with large-scale real-world networks. In addition to describing a simple hierarchical arc-merging (HAM) algorithm that uses network topology information, we introduce rule-based arc-merging strategies for identifying community structures. Five well-studied social network datasets and eight sets of LFR benchmark networks were employed to validate the correctness of a ground-truth community, eight large-scale real-world complex networks were used to measure its efficiency, and two synthetic networks were used to determine its susceptibility to two resolution limit problems. Our experimental results indicate that the proposed HAM algorithm exhibited satisfactory performance and efficiency, and that HAM-identified and ground-truth communities were comparable in terms of social and LFR benchmark networks, while mitigating resolution limit problems. PMID:29121100

  11. Effect of changing driving conditions on driver behavior towards design of a safe and efficient traffic system.

    DOT National Transportation Integrated Search

    2013-12-01

    This simulation-based study explores the effects of different work zone configurations, varying distances between traffic signs, traffic density, and individual differences on drivers' behavior. Conventional Lane Merge (CLM) and Joint Lane Merge...

  12. Improved Merge Valve

    NASA Technical Reports Server (NTRS)

    George-Falvy, Dez

    1992-01-01

    Circumferential design combines compactness and efficiency. In the remotely controlled valve, flow in a tributary duct along the circumference of the primary duct is merged with flow in the primary duct. Flow in the tributary duct is regulated by a variable-throat nozzle driven by a worm gear. The design is leak-proof, and most components are easily fabricated on a lathe.

  13. Tile-Based Two-Dimensional Phase Unwrapping for Digital Holography Using a Modular Framework

    PubMed Central

    Antonopoulos, Georgios C.; Steltner, Benjamin; Heisterkamp, Alexander; Ripken, Tammo; Meyer, Heiko

    2015-01-01

    A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI), enable measurement of the phase of a physical quantity in addition to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so-called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them into a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We tried to overcome these shortcomings by creating novel tile unwrapping and merging algorithms as well as a framework that allows them to be combined in a modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality-guided tile merger. These original algorithms, as well as previously existing ones, were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms we compared our method to existing approaches. We could show that the appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for efficient design and testing of new tile-based phase unwrapping algorithms. The software developed in this study is freely available. PMID:26599984

  14. Tile-Based Two-Dimensional Phase Unwrapping for Digital Holography Using a Modular Framework.

    PubMed

    Antonopoulos, Georgios C; Steltner, Benjamin; Heisterkamp, Alexander; Ripken, Tammo; Meyer, Heiko

    2015-01-01

    A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI), enable measurement of the phase of a physical quantity in addition to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so-called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them into a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We tried to overcome these shortcomings by creating novel tile unwrapping and merging algorithms as well as a framework that allows them to be combined in a modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality-guided tile merger. These original algorithms, as well as previously existing ones, were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms we compared our method to existing approaches. We could show that the appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for efficient design and testing of new tile-based phase unwrapping algorithms. The software developed in this study is freely available.
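
    The paper's tile-based method targets two-dimensional maps; the underlying idea of unwrapping is easiest to see in one dimension with NumPy's np.unwrap (our minimal demonstration, unrelated to the authors' C++ framework):

```python
import numpy as np

# The true phase rises smoothly past pi, but a measurement can only
# observe it wrapped into (-pi, pi].
true_phase = np.linspace(0.0, 6.0 * np.pi, 200)
wrapped = np.angle(np.exp(1j * true_phase))   # wrapped phase (1-D)

# Path-following unwrap: add multiples of 2*pi wherever successive
# samples jump by more than pi.
unwrapped = np.unwrap(wrapped)

print(np.allclose(unwrapped, true_phase))  # True: the ramp is recovered
```

    This path-following approach is what fails near phase residues in 2-D, which is exactly the failure mode the tile-based merging strategies above are designed to contain.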

  15. A Graph-Embedding Approach to Hierarchical Visual Word Mergence.

    PubMed

    Wang, Lei; Liu, Lingqiao; Zhou, Luping

    2017-02-01

    Appropriately merging visual words is an effective dimension-reduction method for the bag-of-visual-words model in image classification. The approach of hierarchically merging visual words has been extensively employed, because it gives a fully determined merging hierarchy. Existing supervised hierarchical merging methods take different approaches and realize the merging process with various formulations. In this paper, we propose a unified hierarchical merging approach built upon the graph-embedding framework. Our approach is able to merge visual words for any scenario where a preferred structure and an undesired structure are defined, and therefore can effectively attend to all kinds of requirements for the word-merging process. In terms of computational efficiency, we show that our algorithm can seamlessly integrate a fast search strategy developed in our previous work and thus maintain the state-of-the-art merging speed. To the best of our knowledge, the proposed approach is the first to address hierarchical visual word mergence in such a flexible and unified manner. As demonstrated, it can maintain excellent image classification performance even after a significant dimension reduction, and it outperforms all existing comparable visual word-merging methods. In a broad sense, our work provides an open platform for applying, evaluating, and developing new criteria for hierarchical word-merging tasks.

  16. [An object-oriented remote sensing image segmentation approach based on edge detection].

    PubMed

    Tan, Yu-Min; Huai, Jian-Zhu; Tang, Zhong-Shi

    2010-06-01

    Satellite sensor technology has enabled better discrimination of various landscape objects. Image segmentation approaches to extracting conceptual objects and patterns have hence been explored, and a wide variety of such algorithms abound. To this end, in order to effectively utilize edge and topological information in high-resolution remote sensing imagery, an object-oriented algorithm combining edge detection and region merging is proposed. The SUSAN edge filter is first applied to the panchromatic band of QuickBird imagery with a spatial resolution of 0.61 m to obtain the edge map. Guided by the resulting edge map, a two-phase region-based segmentation method operates on the fusion image from panchromatic and multispectral QuickBird images to get the final partition result. In the first phase, a quadtree grid consisting of squares with sides parallel to the image left and top borders agglomerates the square subsets recursively where the uniformity measure is satisfied, to derive image object primitives. Before the merging of the second phase, the contextual and spatial information (e.g., neighbor relationships, boundary coding) of the resulting squares is retrieved efficiently by means of the quadtree structure. Then a region merging operation is performed with those primitives, during which the criterion for region merging integrates edge-map and region-based features. This approach has been tested on QuickBird images of a site in the Sanxia area, and the result is compared with those of ENVI Zoom and Definiens. In addition, a quantitative evaluation of the quality of the segmentation results is also presented. Experimental results demonstrate stable convergence and efficiency.

  17. Merging Quality Processes & Tools with DACUM.

    ERIC Educational Resources Information Center

    McLennan, Krystyna S.

    This paper explains how merging DACUM (Developing a Curriculum) analysis with quality initiatives can reduce waste, increase job efficiency, assist in development of standard operating procedures, and involve employees in positive job improvement methods. In the first half of the paper, the following principles of total quality management (TQM)…

  18. A rule-based shell to hierarchically organize HST observations

    NASA Technical Reports Server (NTRS)

    Bose, Ashim; Gerb, Andrew

    1995-01-01

    An observing program on the Hubble Space Telescope (HST) is described in terms of exposures that are obtained by one or more of the instruments onboard the HST. These exposures are organized into a hierarchy of structures for purposes of efficient scheduling of observations. The process by which exposures get organized into the higher-level structures is called merging. This process relies on rules to determine which observations can be 'merged' into the same higher level structure, and which cannot. The TRANSformation expert system converts proposals for astronomical observations with HST into detailed observing plans. The conversion process includes the task of merging. Within TRANS, we have implemented a declarative shell to facilitate merging. This shell offers the following features: (1) an easy way of specifying rules on when to merge and when not to merge, (2) a straightforward priority mechanism for resolving conflicts among rules, (3) an explanation facility for recording the merging history, (4) a report generating mechanism to help users understand the reasons for merging, and (5) a self-documenting mechanism that documents all the merging rules that have been defined in the shell, ordered by priority. The merging shell is implemented using an object-oriented paradigm in CLOS. It has been a part of operational TRANS (after extensive testing) since July 1993. It has fulfilled all performance expectations, and has considerably simplified the process of implementing new or changed requirements for merging. The users are pleased with its report-generating and self-documenting features.

  19. The collaborative effect of ram pressure and merging on star formation and stripping fraction

    NASA Astrophysics Data System (ADS)

    Bischko, J. C.; Steinhauser, D.; Schindler, S.

    2015-04-01

    Aims: We investigate the effect of ram pressure stripping (RPS) on several simulations of merging pairs of gas-rich spiral galaxies. We are concerned with the changes in stripping efficiency and the time evolution of the star formation rate. Our goal is to provide an estimate of the combined effect of merging and RPS compared to the influence of the individual processes. Methods: We make use of the combined N-body/hydrodynamic code GADGET-2. The code features a threshold-based statistical recipe for star formation, as well as radiative cooling and modeling of galactic winds. In our simulations, we vary mass ratios between 1:4 and 1:8 in a binary merger. We sample different geometric configurations of the merging systems (edge-on and face-on mergers, different impact parameters). Furthermore, we vary the properties of the intracluster medium (ICM) in rough steps: the speed of the merging system relative to the ICM between 500 and 1000 km s^-1, the ICM density between 10^-29 and 10^-27 g cm^-3, and the ICM direction relative to the mergers' orbital plane. Ram pressure is kept constant within a simulation time period, as is the ICM temperature of 10^7 K. Each simulation in the ICM is compared to simulations of the merger in vacuum and of the non-merging galaxies with acting ram pressure. Results: Averaged over the simulation time (1 Gyr), the merging pairs show a negligible 5% enhancement in SFR when compared to single galaxies under the same environmental conditions. The SFRs peak at the time of the galaxies' first fly-through. There, our simulations show SFRs of up to 20 M⊙ yr^-1 (compared to 3 M⊙ yr^-1 for the non-merging galaxies in vacuum). In the most extreme case, this constitutes a short-term (<50 Myr) SFR increase of 50% over the non-merging galaxies experiencing ram pressure. The wake of merging galaxies in the ICM typically has a third to half the star mass seen in the non-merging galaxies and 5% to 10% less gas mass.
    The joint effect of RPS and merging, according to our simulations, is not significantly different from pure ram pressure effects.

  20. Why healthcare providers merge.

    PubMed

    Postma, Jeroen; Roos, Anne-Fleur

    2016-04-01

    In many OECD countries, healthcare sectors have become increasingly concentrated as a result of mergers. However, detailed empirical insight into why healthcare providers merge is lacking. Also, we know little about the influence of national healthcare policies on mergers. We fill this gap in the literature by conducting a survey study on mergers among 848 Dutch healthcare executives, of whom 35% responded (resulting in a study sample of 239 executives). A total of 65% of the respondents were involved in at least one merger between 2005 and 2012. During this period, Dutch healthcare providers faced a number of policy changes, including increasing competition, more pressure from purchasers, growing financial risks, de-institutionalisation of long-term care and decentralisation of healthcare services to municipalities. Our empirical study shows that healthcare providers predominantly merge to improve the provision of healthcare services and to strengthen their market position. Efficiency and financial motives are also important drivers of merger activity in healthcare. We find that motives for merger are related to changes in health policies, in particular to the increasing pressure from competitors, insurers and municipalities.

  1. Unsupervised tattoo segmentation combining bottom-up and top-down cues

    NASA Astrophysics Data System (ADS)

    Allen, Josef D.; Zhao, Nan; Yuan, Jiangbo; Liu, Xiuwen

    2011-06-01

    Tattoo segmentation is challenging due to the complexity and large variance of tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the other skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thus transformed into a figure-ground segmentation. We have applied our segmentation algorithm on a tattoo dataset, and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.

  2. Merging allylic carbon-hydrogen and selective carbon-carbon bond activation.

    PubMed

    Masarwa, Ahmad; Didier, Dorian; Zabrodski, Tamar; Schinkel, Marvin; Ackermann, Lutz; Marek, Ilan

    2014-01-09

    Since the nineteenth century, many synthetic organic chemists have focused on developing new strategies to regio-, diastereo- and enantioselectively build carbon-carbon and carbon-heteroatom bonds in a predictable and efficient manner. Ideal syntheses should use the least number of synthetic steps, with few or no functional group transformations and by-products, and maximum atom efficiency. One potentially attractive method for the synthesis of molecular skeletons that are difficult to prepare would be through the selective activation of C-H and C-C bonds, instead of the conventional construction of new C-C bonds. Here we present an approach that exploits the multifold reactivity of easily accessible substrates with a single organometallic species to furnish complex molecular scaffolds through the merging of otherwise difficult transformations: allylic C-H and selective C-C bond activations. The resulting bifunctional nucleophilic species, all of which have an all-carbon quaternary stereogenic centre, can then be selectively derivatized by the addition of two different electrophiles to obtain more complex molecular architecture from these easily available starting materials.

  3. Merging allylic carbon-hydrogen and selective carbon-carbon bond activation

    NASA Astrophysics Data System (ADS)

    Masarwa, Ahmad; Didier, Dorian; Zabrodski, Tamar; Schinkel, Marvin; Ackermann, Lutz; Marek, Ilan

    2014-01-01

    Since the nineteenth century, many synthetic organic chemists have focused on developing new strategies to regio-, diastereo- and enantioselectively build carbon-carbon and carbon-heteroatom bonds in a predictable and efficient manner. Ideal syntheses should use the least number of synthetic steps, with few or no functional group transformations and by-products, and maximum atom efficiency. One potentially attractive method for the synthesis of molecular skeletons that are difficult to prepare would be through the selective activation of C-H and C-C bonds, instead of the conventional construction of new C-C bonds. Here we present an approach that exploits the multifold reactivity of easily accessible substrates with a single organometallic species to furnish complex molecular scaffolds through the merging of otherwise difficult transformations: allylic C-H and selective C-C bond activations. The resulting bifunctional nucleophilic species, all of which have an all-carbon quaternary stereogenic centre, can then be selectively derivatized by the addition of two different electrophiles to obtain more complex molecular architecture from these easily available starting materials.

  4. Use of off-axis injection as an alternative to geometrically merging beams in an energy-recovering linac

    DOEpatents

    Douglas, David R (York County, VA)

    2012-01-10

    A method of using off-axis particle beam injection in energy-recovering linear accelerators that increases operational efficiency while eliminating the need to merge the high-energy recirculating beam with an injected low-energy beam. In this arrangement, the high-energy recirculating beam and the low-energy beam are manipulated such that they are within a predetermined distance from one another, and then the two unmerged beams are injected into the linac and propagated through the system. The configuration permits injection without geometric beam merging, as well as decelerated beam extraction without the use of typical beamline elements.

  5. Analysis of traffic congestion induced by the work zone

    NASA Astrophysics Data System (ADS)

    Fei, L.; Zhu, H. B.; Han, X. L.

    2016-05-01

    Based on the cellular automata approach, a meticulous two-lane cellular automata model is proposed, in which differences in driving behavior and the difference between vehicles' accelerations in the moving state and the starting state are taken into account. Furthermore, the vehicles' motion is refined by using small cells one meter long. A traffic management measure is then proposed, and a two-lane highway traffic model containing a work zone is presented, in which the road is divided into a normal area, a merging area and a work zone. The vehicles in different areas move forward according to different lane-changing rules and position-updating rules. Simulations show that when the density is small, the cluster length in front of the work zone increases as the merging probability decreases. A suitable merging length and an appropriate speed limit value are then recommended. The simulation result, in the form of the speed-flow diagram, is in good agreement with empirical data, indicating that the presented model is efficient and can partially reflect real traffic. The results may be meaningful for traffic optimization and road construction management.
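
    The paper's refined two-lane model is not reproduced here; a hedged sketch of the classic single-lane Nagel-Schreckenberg update it builds on (our simplified version, with parameter values chosen only for illustration) looks like this:

```python
import random

def nasch_step(pos, vel, length, vmax=5, p_slow=0.3, rng=random):
    """One parallel update of the classic Nagel-Schreckenberg single-lane
    CA on a ring of `length` cells: accelerate, brake to the gap ahead,
    randomize, move. Positions must be distinct. A simpler relative of
    the refined two-lane work-zone model described above."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])  # cars in road order
    new_pos, new_vel = list(pos), list(vel)
    for k, i in enumerate(order):
        ahead = order[(k + 1) % n]                   # next car on the ring
        gap = (pos[ahead] - pos[i] - 1) % length     # empty cells ahead
        v = min(vel[i] + 1, vmax, gap)               # accelerate, then brake
        if v > 0 and rng.random() < p_slow:          # random slowdown
            v -= 1
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % length
    return new_pos, new_vel

rng = random.Random(42)
pos, vel = [0, 5, 10, 15], [0, 0, 0, 0]
for _ in range(100):
    pos, vel = nasch_step(pos, vel, length=25, rng=rng)
print(pos, vel)  # jammed or free-flow pattern depending on density
```

    Because each car moves at most `gap` cells per step, the parallel update can never produce a collision, which is the property that makes such models safe building blocks for merging-area extensions.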

  6. Exploration of a Dynamic Merging Scheme for Precipitation Estimation over a Small Urban Catchment

    NASA Astrophysics Data System (ADS)

    Al-Azerji, Sherien; Rico-Ramirez, Miguel; Han, Dawei

    2016-04-01

    The accuracy of quantitative precipitation estimation is of significant importance for urban areas because of the potentially damaging consequences of pluvial flooding. Accuracy can be improved by merging rain gauge measurements with weather radar data through various merging methods. Several factors may affect the accuracy of the merged data, and the gauge density used for merging is one of the most important. If there are no gauges inside the research area, a gauge network outside it can be used for the merging. Generally speaking, the denser the rain gauge network, the better the merging results. In practice, however, the rain gauge network around the research area is fixed, and the research question becomes one of the optimal merging area. The hypothesis is that if the merging area is too small, there are too few gauges for merging and the result is poor; if the merging area is too large, gauges far from the research area are included, and because of their large distances they contribute little relevant information and may even introduce noise into the merging. Therefore, an optimal merging area that produces the best merged rainfall estimate in the research area could exist. To test this hypothesis, the distance from the centre of the research area, and hence the number of merging gauges around it, was gradually increased, and merging with a correspondingly enlarged domain of radar data was performed at each step. The performance of each merging scheme was validated against gridded, interpolated rainfall from four experimental rain gauges installed inside the research area. The analysis shows that there is indeed an optimum distance from the centre of the research area, and consequently an optimum number of rain gauges, that produces the best merged rainfall data inside the research area. 
    This study is of significant practical value for estimating rainfall in an urban catchment with no gauges of its own by merging weather radar data with rain gauge data from outside the catchment, an approach that has not previously been reported in the literature.
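    The radius sweep described above can be sketched in a few lines; the coordinates, gauge values, and the simple mean-field-bias merging used here are illustrative assumptions, not the study's actual merging method.

```python
import math

def merged_estimate(centre, gauges, radar_value, radius_km):
    """Include only gauges within radius_km of the catchment centre and
    combine them with the radar value via a simple mean-field-bias
    adjustment (illustrative, not the study's merging method)."""
    cx, cy = centre
    near = [(g, r) for (x, y), g, r in gauges
            if math.hypot(x - cx, y - cy) <= radius_km]
    if not near:
        return radar_value  # no gauges close enough: radar only
    # mean field bias: average ratio of gauge to collocated radar value
    bias = sum(g / r for g, r in near) / len(near)
    return radar_value * bias

# ((x, y) in km, gauge mm/h, collocated radar mm/h) -- made-up numbers
gauges = [((2, 0), 6.0, 5.0),
          ((9, 9), 4.0, 8.0)]
print(merged_estimate((0, 0), gauges, 5.0, 5.0))   # 6.0: only the near gauge is used
```

Sweeping `radius_km` and scoring each result against independent gauges inside the catchment reproduces the study's search for the optimum merging area.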

  7. In-depth analysis of drivers' merging behavior and rear-end crash risks in work zone merging areas.

    PubMed

    Weng, Jinxian; Xue, Shan; Yang, Ying; Yan, Xuedong; Qu, Xiaobo

    2015-04-01

    This study investigates drivers' merging behavior and the rear-end crash risk in work zone merging areas during the entire merging implementation period, from the time a merging maneuver is started to the time it is completed. Using merging traffic data from a work zone site in Singapore, a mixed probit model is developed to describe the merging behavior, and two surrogate safety measures, the time to collision (TTC) and the deceleration rate to avoid the crash (DRAC), are adopted to compute the rear-end crash risk between the merging vehicle and its neighboring vehicles. Results show that the merging vehicle has a higher probability of completing a merging maneuver quickly under one of the following situations: (i) the merging vehicle moves relatively fast; (ii) the merging lead vehicle is a heavy vehicle; and (iii) there is a sizable gap in the adjacent through lane. Results also indicate that the rear-end crash risk does not monotonically increase as the merging vehicle speed increases. The merging vehicle's rear-end crash risk is also affected by vehicle type: the largest increase in rear-end crash risk occurs when the merging lead vehicle is a heavy vehicle. Although a reduced remaining distance to the work zone could urge the merging vehicle to complete a merging maneuver quickly, it might lead to an increased rear-end crash risk. Interestingly, it is found that the rear-end crash risk generally increases with the elapsed time after the merging maneuver is triggered. Copyright © 2015 Elsevier Ltd. All rights reserved.
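    The two surrogate safety measures named above have standard textbook definitions, sketched here for a single vehicle pair (the speeds and gap are illustrative numbers, not data from the study):

```python
def time_to_collision(gap_m, v_follow, v_lead):
    """Standard TTC (s): time until collision if both speeds stay
    constant; defined only while the follower closes on the leader."""
    closing_speed = v_follow - v_lead
    if closing_speed <= 0:
        return float("inf")  # not on a collision course
    return gap_m / closing_speed

def drac(gap_m, v_follow, v_lead):
    """Standard DRAC (m/s^2): constant deceleration the follower needs
    to just avoid the crash."""
    closing_speed = v_follow - v_lead
    if closing_speed <= 0:
        return 0.0
    return closing_speed ** 2 / (2.0 * gap_m)

# lag vehicle at 20 m/s closing on a merging vehicle at 15 m/s, 25 m gap
print(time_to_collision(25.0, 20.0, 15.0))  # 5.0 s
print(drac(25.0, 20.0, 15.0))               # 0.5 m/s^2
```

A low TTC or a high DRAC between the merging vehicle and its through lag vehicle signals elevated rear-end crash risk.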

  8. Geometric representation methods for multi-type self-defining remote sensing data sets

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1980-01-01

    Efficient and convenient representation of remote sensing data is highly important for effective utilization. The task of merging different data types is currently handled by treating each case as an individual problem. Work carried out to standardize the multidata merging process is described. The basic concept of the new approach is the self-defining data set (SDDS). The creation of a standard is proposed under which data of interest in a large number of earth resources remote sensing applications would be in a format that allows convenient and automatic merging. Attention is given to the multidata merging problem, a geometric description of multitype data sets, image reconstruction from track-type data, a data set generation system, and an example multitype data set.

  9. Merge measuring mesh for complex surface parts

    NASA Astrophysics Data System (ADS)

    Ye, Jianhua; Gao, Chenghui; Zeng, Shoujin; Xu, Mingsan

    2018-04-01

    Because most parts are self-occluding and scanner range is limited, it is difficult to scan an entire part in a single pass, so modeling a part requires merging multiple measured meshes. In this paper, a new merge method is presented. First, a grid-voxelization method is used to eliminate most of the non-overlapping regions, and an overlap-triangle retrieval method based on mesh topology is proposed to improve efficiency. Then, to remove overlap triangles with large deviation, deletion by overlap distance is discussed. Finally, the paper puts forward a new method of merging meshes by registration and combination of mesh boundary points. Experimental analysis shows that the suggested methods are effective.

  10. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files.

    PubMed

    Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S

    2018-06-01

    Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine-based methods become increasingly inefficient when processing large numbers of files because of excessive computation time and input/output bottlenecks. Distributed systems, and more recently cloud-based systems, offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrative examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, benchmarked against traditional single/parallel multiway-merge methods, a message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest that all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalability than traditional methods. Our findings provide generalized, scalable schemas for performing sorted merging of genetics and genomics data using these Apache distributed systems.
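    The core operation the distributed schemas parallelize, a k-way merge of per-sample variant streams already sorted by genomic location, can be illustrated on a single machine with Python's standard library; the `(chrom, pos, allele)` record layout is an assumption for the sketch:

```python
import heapq

def merge_sorted_variants(*streams):
    """k-way merge of variant streams, each pre-sorted by
    (chromosome, position). Note that string chromosome names sort
    lexicographically ("chr10" < "chr2"); real pipelines map them
    to numeric ranks first."""
    return heapq.merge(*streams, key=lambda rec: (rec[0], rec[1]))

a = [("chr1", 100, "A>G"), ("chr1", 500, "C>T")]
b = [("chr1", 250, "G>A"), ("chr2", 40, "T>C")]
merged = list(merge_sorted_variants(a, b))
# genomic order: chr1:100, chr1:250, chr1:500, then chr2:40
```

The divide-and-conquer schemas in the paper partition the genome into ranges so that many such merges run in parallel without an I/O bottleneck.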

  11. Optimized distributed systems achieve significant performance improvement on sorted merging of massive VCF files

    PubMed Central

    Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng

    2018-01-01

    Background: Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine-based methods become increasingly inefficient when processing large numbers of files because of excessive computation time and input/output bottlenecks. Distributed systems, and more recently cloud-based systems, offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. Findings: In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrative examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, benchmarked against traditional single/parallel multiway-merge methods, a message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Conclusions: Our experiments suggest that all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalability than traditional methods. Our findings provide generalized, scalable schemas for performing sorted merging of genetics and genomics data using these Apache distributed systems. PMID:29762754

  12. Changes in thunderstorm characteristics due to feeder cloud merging

    NASA Astrophysics Data System (ADS)

    Sinkevich, Andrei A.; Krauss, Terrence W.

    2014-06-01

    Cumulus cloud merging is a complex dynamical and microphysical process in which two convective cells merge into a single cell. Previous radar observations and numerical simulations have shown a substantial increase in the maximum area, maximum echo top and maximum reflectivity as a result of the merging process. Although the qualitative aspects of merging have been well documented, the quantitative effects on storm properties remain less defined. Therefore, a statistical assessment of changes in storm characteristics due to merging is of importance. Further investigation into the effects of cloud merging on precipitation flux (Pflux) in a statistical manner provided the motivation for this study in the Asir region of Saudi Arabia. It was confirmed that merging has a strong effect on storm development in this region. The data analysis shows that an increase in the median of the distribution of maximum reflectivity was observed just after merging and was equal to 3.9 dBZ. A detailed analysis of the individual merge cases compared the merged storm Pflux and mass to the sum of the individual Feeder and Storm portions just before merging for each case. The merged storm Pflux increased an average of 106% over the 20-min period after merging, and the mass increased on average 143%. The merged storm clearly became larger and more severe than the sum of the two parts prior to merging. One consequence of this study is that any attempts to evaluate the precipitation enhancement effects of cloud seeding must also include the issue of cloud mergers because merging can have a significant effect on the results.

  13. A Simulation Testbed for Airborne Merging and Spacing

    NASA Technical Reports Server (NTRS)

    Santos, Michel; Manikonda, Vikram; Feinberg, Art; Lohr, Gary

    2008-01-01

    The key innovation in this effort is the development of a simulation testbed for airborne merging and spacing (AM&S). We focus on concepts related to airports with Super Dense Operations, where new runway configurations (e.g., parallel runways), sequencing, merging, and spacing are considered. We model and simulate a complementary airborne and ground system for AM&S to increase the efficiency and capacity of these high-density terminal areas. From a ground systems perspective, a scheduling decision support tool generates arrival sequences and spacing requirements that are fed to the AM&S system operating on the flight deck. We enhanced NASA's Airspace Concept Evaluation System (ACES) software to model and simulate AM&S concepts and algorithms.

  14. Mixed Element Type Unstructured Grid Generation for Viscous Flow Applications

    NASA Technical Reports Server (NTRS)

    Marcum, David L.; Gaither, J. Adam

    2000-01-01

    A procedure is presented for efficient generation of high-quality unstructured grids suitable for CFD simulation of high-Reynolds-number viscous flow fields. Layers of anisotropic elements are generated by advancing along prescribed normals from solid boundaries. The points are generated such that either pentahedral or tetrahedral elements with an implied connectivity can be directly recovered. As points are generated, they are temporarily attached to a volume triangulation of the boundary points. This triangulation allows efficient local search algorithms to be used when checking merging layers. The existing advancing-front/local-reconnection procedure is used to generate isotropic elements outside of the anisotropic region. Results are presented for a variety of applications. The results demonstrate that high-quality anisotropic unstructured grids can be efficiently and consistently generated for complex configurations.

  15. Efficient fuzzy C-means architecture for image segmentation.

    PubMed

    Li, Hui-Ya; Hwang, Wen-Jyi; Chang, Chia-Yen

    2011-01-01

    This paper presents a novel VLSI architecture for image segmentation. The architecture is based on the fuzzy c-means algorithm with a spatial constraint for reducing the misclassification rate. In the architecture, the usual iterative operations for updating the membership matrix and cluster centroids are merged into a single updating process to avoid the large storage requirement. In addition, an efficient pipelined circuit is used for the updating process to accelerate computation. Experimental results show that the proposed circuit is an effective alternative for real-time image segmentation with low area cost and a low misclassification rate.
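    For reference, a minimal software sketch of the fused update idea: the textbook fuzzy c-means step computed in one pass over the data (1-D, no spatial constraint, no pipelined hardware), so per-point memberships never need to be stored:

```python
def fcm_step(data, centroids, m=2.0):
    """One fused fuzzy c-means update: memberships and centroid sums
    are computed together per point, so the full membership matrix
    is never materialized."""
    c = len(centroids)
    num = [0.0] * c   # running weighted sums for the new centroids
    den = [0.0] * c
    for x in data:
        d = [abs(x - ck) + 1e-12 for ck in centroids]  # avoid div-by-zero
        u = [1.0 / sum((d[i] / d[k]) ** (2.0 / (m - 1.0)) for k in range(c))
             for i in range(c)]
        for i in range(c):
            w = u[i] ** m
            num[i] += w * x
            den[i] += w
    return [num[i] / den[i] for i in range(c)]

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
cents = [0.0, 5.0]
for _ in range(10):
    cents = fcm_step(data, cents)
# centroids settle near the two cluster means (~0.1 and ~5.1)
```

The hardware version pipelines exactly this per-point computation.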

  16. A fast 3D region growing approach for CT angiography applications

    NASA Astrophysics Data System (ADS)

    Ye, Zhen; Lin, Zhongmin; Lu, Cheng-chang

    2004-05-01

    Region growing is one of the most popular methods for low-level image segmentation. Much research on region growing has focused on the definition of the homogeneity criterion or the growing and merging criteria. However, one disadvantage of conventional region growing is redundancy: it requires a large amount of memory, and its computational efficiency is very low, especially for 3D images. To overcome this problem, a non-recursive, single-pass 3D region-growing algorithm named SymRG is implemented and successfully applied to 3D CT angiography (CTA) applications for vessel segmentation and bone removal. The method consists of three steps: segmenting one-dimensional regions of each row; merging regions across adjacent rows to obtain the region segmentation of each slice; and merging regions across adjacent slices to obtain the final region segmentation of the 3D image. To improve segmentation speed for very large 3D CTA volumes, the algorithm is applied repeatedly to newly updated local cubes; the next cube can be estimated by checking isolated segmented regions on all six faces of the current local cube. This local, non-recursive 3D region-growing algorithm is memory- and computation-efficient. Clinical testing of this algorithm on brain CTA shows that the technique can effectively remove the whole skull and most of the bones of the skull base, and reveal the cerebral vascular structures clearly.
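    The row-then-slice strategy can be sketched in 2D (a simplified, illustrative analogue of SymRG, not the published algorithm): segment each row into runs, then merge runs that touch vertically, with no per-pixel recursion:

```python
def segment_rows(img):
    """Non-recursive, single-pass region labelling: 1-D runs per row,
    then union-find merging of runs that overlap in adjacent rows."""
    parent = {}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    def union(a, b):
        parent[find(a)] = find(b)

    runs = []   # (row, start_col, end_col, value, label)
    label = 0
    for r, row in enumerate(img):
        c = 0
        while c < len(row):
            s = c
            while c + 1 < len(row) and row[c + 1] == row[s]:
                c += 1
            parent[label] = label
            runs.append((r, s, c, row[s], label))
            label += 1
            c += 1
    # merge runs in adjacent rows that overlap and share a value
    for i, (r1, s1, e1, v1, l1) in enumerate(runs):
        for (r2, s2, e2, v2, l2) in runs[i + 1:]:
            if r2 == r1 + 1 and v1 == v2 and s2 <= e1 and s1 <= e2:
                union(l1, l2)
    return len({find(l) for (_, _, _, _, l) in runs})

img = [[0, 0, 1],
       [0, 1, 1],
       [0, 0, 1]]
print(segment_rows(img))  # 2 connected regions (the zeros and the ones)
```

The 3D algorithm applies the same run-merging step a second time, across slices.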

  17. Combining cell-based hydrodynamics with hybrid particle-field simulations: efficient and realistic simulation of structuring dynamics.

    PubMed

    Sevink, G J A; Schmid, F; Kawakatsu, T; Milano, G

    2017-02-22

    We have extended an existing hybrid MD-SCF simulation technique that employs a coarsening step to enhance the computational efficiency of evaluating non-bonded particle interactions. This technique is conceptually equivalent to the single chain in mean-field (SCMF) method in polymer physics, in the sense that non-bonded interactions are derived from the non-ideal chemical potential in self-consistent field (SCF) theory, after a particle-to-field projection. In contrast to SCMF, however, MD-SCF evolves particle coordinates by the usual Newton's equation of motion. Since collisions are seriously affected by the softening of non-bonded interactions that originates from their evaluation at the coarser continuum level, we have devised a way to reinsert the effect of collisions on the structural evolution. Merging MD-SCF with multi-particle collision dynamics (MPCD), we mimic particle collisions at the level of computational cells and at the same time properly account for the momentum transfer that is important for a realistic system evolution. The resulting hybrid MD-SCF/MPCD method was validated for a particular coarse-grained model of phospholipids in aqueous solution, against reference full-particle simulations and the original MD-SCF model. We additionally implemented and tested an alternative and more isotropic finite difference gradient. Our results show that efficiency is improved by merging MD-SCF with MPCD, as properly accounting for hydrodynamic interactions considerably speeds up the phase separation dynamics, with negligible additional computational costs compared to efficient MD-SCF. This new method enables realistic simulations of large-scale systems that are needed to investigate the applications of self-assembled structures of lipids in nanotechnologies.

  18. Application of transmission infrared spectroscopy and partial least squares regression to predict immunoglobulin G concentration in dairy and beef cow colostrum.

    PubMed

    Elsohaby, Ibrahim; Windeyer, M Claire; Haines, Deborah M; Homerosky, Elizabeth R; Pearson, Jennifer M; McClure, J Trenton; Keefe, Greg P

    2018-03-06

    The objective of this study was to explore the potential of transmission infrared (TIR) spectroscopy in combination with partial least squares regression (PLSR) for quantification of dairy and beef cow colostral immunoglobulin G (IgG) concentration and assessment of colostrum quality. A total of 430 colostrum samples were collected from dairy (n = 235) and beef (n = 195) cows and tested by a radial immunodiffusion (RID) assay and TIR spectroscopy. Colostral IgG concentrations obtained by the RID assay were linked to the preprocessed spectra and divided into combined and prediction data sets. Three PLSR calibration models were built: one for dairy cow colostrum only, the second for beef cow colostrum only, and the third for the merged dairy and beef cow colostrum. The predictive performance of each model was evaluated separately using the independent prediction data set. The Pearson correlation coefficients between IgG concentrations as determined by the TIR-based assay and the RID assay were 0.84 for dairy cow colostrum, 0.88 for beef cow colostrum, and 0.92 for the merged set of dairy and beef cow colostrum. The average of the differences between colostral IgG concentrations obtained by the RID- and TIR-based assays was -3.5, 2.7, and 1.4 g/L for dairy, beef, and merged colostrum samples, respectively. Further, the average relative error of the colostral IgG predicted by TIR spectroscopy relative to the RID assay was 5% for dairy cow, 1.2% for beef cow, and 0.8% for the merged data set. 
    The average intra-assay CV of the IgG concentration predicted by the TIR-based method was 3.2%, 2.5%, and 6.9% for the dairy cow, beef cow, and merged data sets, respectively. The utility of the TIR method for assessment of colostrum quality was evaluated using the entire data set; TIR spectroscopy accurately identified the quality status of 91% of dairy cow colostrum samples, 95% of beef cow colostrum samples, and 89% and 93% of the merged dairy and beef cow colostrum samples, respectively. The results showed that TIR spectroscopy demonstrates potential as a simple, rapid, and cost-efficient method for estimating IgG concentration in dairy and beef cow colostrum samples and assessing colostrum quality. The results also showed that merging the dairy and beef cow colostrum sample data sets improved the predictive ability of TIR spectroscopy.

  19. Visualizing frequent patterns in large multivariate time series

    NASA Astrophysics Data System (ADS)

    Hao, M.; Marwah, M.; Janetzko, H.; Sharma, R.; Keim, D. A.; Dayal, U.; Patnaik, D.; Ramakrishnan, N.

    2011-01-01

    The detection of previously unknown, frequently occurring patterns in time series, often called motifs, has been recognized as an important task. However, it is difficult to discover and visualize these motifs as their numbers increase, especially in large multivariate time series. To find frequent motifs, we use several temporal data mining and event encoding techniques to cluster and convert a multivariate time series to a sequence of events. Then we quantify the efficiency of the discovered motifs by linking them with a performance metric. To visualize frequent patterns in a large time series with potentially hundreds of nested motifs on a single display, we introduce three novel visual analytics methods: (1) motif layout, using colored rectangles for visualizing the occurrences and hierarchical relationships of motifs in a multivariate time series, (2) motif distortion, for enlarging or shrinking motifs as appropriate for easy analysis and (3) motif merging, to combine a number of identical adjacent motif instances without cluttering the display. Analysts can interactively optimize the degree of distortion and merging to get the best possible view. A specific motif (e.g., the most efficient or least efficient motif) can be quickly detected from a large time series for further investigation. We have applied these methods to two real-world data sets: data center cooling and oil well production. The results provide important new insights into the recurring patterns.
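    Method (3), motif merging, amounts to collapsing runs of identical adjacent motif instances into single intervals with repeat counts; a minimal sketch with an assumed `(motif_id, start, end)` representation:

```python
def merge_adjacent_motifs(instances):
    """Collapse runs of identical, adjacent motif instances into one
    interval with a repeat count, to reduce display clutter.
    `instances` are (motif_id, start, end), sorted by start, and two
    instances are adjacent when one ends where the next begins."""
    merged = []
    for mid, s, e in instances:
        if merged and merged[-1][0] == mid and merged[-1][2] == s:
            pid, ps, _, n = merged[-1]
            merged[-1] = (pid, ps, e, n + 1)   # extend the run
        else:
            merged.append((mid, s, e, 1))      # start a new run
    return merged

seq = [("A", 0, 5), ("A", 5, 10), ("A", 10, 15), ("B", 15, 20)]
print(merge_adjacent_motifs(seq))
# [('A', 0, 15, 3), ('B', 15, 20, 1)]
```

The repeat count lets the display show one rectangle labelled "x3" instead of three identical ones; the interactive tool then tunes how aggressively such runs are merged.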

  20. Segmentation of remotely sensed data using parallel region growing

    NASA Technical Reports Server (NTRS)

    Tilton, J. C.; Cox, S. C.

    1983-01-01

    The improved spatial resolution of the new earth resources satellites will increase the need for effective utilization of spatial information in machine processing of remotely sensed data. One promising technique is scene segmentation by region growing. Region growing can use spatial information in two ways: only spatially adjacent regions merge together, and merging criteria can be based on region-wide spatial features. A simple region growing approach is described in which the similarity criterion is based on region mean and variance (a simple spatial feature). An effective way to implement region growing for remote sensing is as an iterative parallel process on a large parallel processor. A straightforward parallel pixel-based implementation of the algorithm is explored, and its efficiency is compared with sequential pixel-based, sequential region-based, and parallel region-based implementations. Experimental results from an aircraft scanner data set are presented, as is a discussion of proposed improvements to the segmentation algorithm.

  1. Universal photonic quantum gates assisted by ancilla diamond nitrogen-vacancy centers coupled to resonators

    NASA Astrophysics Data System (ADS)

    Wei, Hai-Rui; Long, Gui Lu

    2015-03-01

    We propose two compact, economic, and scalable schemes for implementing optical controlled-phase-flip and controlled-controlled-phase-flip gates by using the input-output process of a single-sided cavity strongly coupled to a single nitrogen-vacancy-center defect in diamond. Additional photonic qubits, necessary for procedures based on the parity-check measurement or on controlled-path and merging gates, are not employed in our schemes. In the controlled-path gate, the paths of the target photon are conditionally controlled by the control photon, and these two paths can be merged back into one by using a merging gate. Only one half-wave plate is employed in our scheme for the controlled-phase-flip gate. Compared with the conventional synthesis procedures for constructing a controlled-controlled-phase-flip gate, whose cost is two controlled-path gates and two merging gates, or six controlled-not gates, our scheme is more compact and simpler. Our schemes could be performed with high fidelity and high efficiency using currently achievable experimental techniques.

  2. Time-varying mixed logit model for vehicle merging behavior in work zone merging areas.

    PubMed

    Weng, Jinxian; Du, Gang; Li, Dan; Yu, Yao

    2018-08-01

    This study aims to develop a time-varying mixed logit model for the vehicle merging behavior in work zone merging areas during the merging implementation period from the time of starting a merging maneuver to that of completing the maneuver. From the safety perspective, vehicle crash probability and severity between the merging vehicle and its surrounding vehicles are regarded as major factors influencing vehicle merging decisions. Model results show that the model with the use of vehicle crash risk probability and severity could provide higher prediction accuracy than previous models with the use of vehicle speeds and gap sizes. It is found that lead vehicle type, through lead vehicle type, through lag vehicle type, crash probability of the merging vehicle with respect to the through lag vehicle, crash severities of the merging vehicle with respect to the through lead and lag vehicles could exhibit time-varying effects on the merging behavior. One important finding is that the merging vehicle could become more and more aggressive in order to complete the merging maneuver as quickly as possible over the elapsed time, even if it has high vehicle crash risk with respect to the through lead and lag vehicles. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Genesis of magnetic fields in isolated white dwarfs

    NASA Astrophysics Data System (ADS)

    Briggs, Gordon P.; Ferrario, Lilia; Tout, Christopher A.; Wickramasinghe, Dayal T.

    2018-05-01

    A dynamo mechanism driven by differential rotation when stars merge has been proposed to explain the presence of strong fields in certain classes of magnetic stars. In the case of the high field magnetic white dwarfs (HFMWDs), the site of the differential rotation has been variously thought to be the common envelope, the hot outer regions of a merged degenerate core or an accretion disc formed by a tidally disrupted companion that is subsequently accreted by a degenerate core. We have shown previously that the observed incidence of magnetism and the mass distribution in HFMWDs are consistent with the hypothesis that they are the result of merging binaries during common envelope evolution. Here we calculate the magnetic field strengths generated by common envelope interactions for synthetic populations using a simple prescription for the generation of fields and find that the observed magnetic field distribution is also consistent with the stellar merging hypothesis. We use the Kolmogorov-Smirnov test to study the correlation between the calculated and the observed field strengths and find that it is consistent for low envelope ejection efficiency. We also suggest that field generation by the plunging of a giant gaseous planet on to a white dwarf may explain why magnetism among cool white dwarfs (including DZ white dwarfs) is higher than among hot white dwarfs. In this picture a super-Jupiter residing in the outer regions of the white dwarf's planetary system is perturbed into a highly eccentric orbit by a close stellar encounter and is later accreted by the white dwarf.

  4. Genesis of magnetic fields in isolated white dwarfs

    NASA Astrophysics Data System (ADS)

    Briggs, Gordon P.; Ferrario, Lilia; Tout, Christopher A.; Wickramasinghe, Dayal T.

    2018-07-01

    A dynamo mechanism driven by differential rotation when stars merge has been proposed to explain the presence of strong fields in certain classes of magnetic stars. In the case of the high-field magnetic white dwarfs (HFMWDs), the site of the differential rotation has been variously thought to be the common envelope, the hot outer regions of a merged degenerate core or an accretion disc formed by a tidally disrupted companion that is subsequently accreted by a degenerate core. We have shown previously that the observed incidence of magnetism and the mass distribution in HFMWDs are consistent with the hypothesis that they are the result of merging binaries during common envelope evolution. Here, we calculate the magnetic field strengths generated by common envelope interactions for synthetic populations using a simple prescription for the generation of fields and find that the observed magnetic field distribution is also consistent with the stellar merging hypothesis. We use the Kolmogorov-Smirnov test to study the correlation between the calculated and the observed field strengths and find that it is consistent for low envelope ejection efficiency. We also suggest that field generation by the plunging of a giant gaseous planet on to a white dwarf may explain why magnetism among cool white dwarfs (including DZ white dwarfs) is higher than among hot white dwarfs. In this picture, a super-Jupiter residing in the outer regions of the white dwarf's planetary system is perturbed into a highly eccentric orbit by a close stellar encounter and is later accreted by the white dwarf.

  5. Design and Test of Mixed-flow Impellers III : Design and Experimental Results for Impeller Model MFI-2A and Comparison with Impeller Model MFI-1A

    NASA Technical Reports Server (NTRS)

    Hamrick, Joseph T; Osborn, Walter M; Beede, William L

    1953-01-01

    A mixed-flow impeller was designed to give a prescribed blade-surface velocity distribution at mean blade height for a given hub-shroud profile. The blade shape at mean blade height, which was produced by the prescribed velocity distribution, was extended by means of radial lines to form the composite blade shape from hub to shroud. The resulting blade was relatively thick; therefore, it was necessary to retain the inverse blade taper which resulted from extension of the radial lines in order to prevent merging or near merging of the separate blades near the hub. For the first test version of the impeller, designated the MFI-2A, the blade height was arbitrarily made greater than that for the basic impeller (the MFI-2) to allow for viscous effects. At design equivalent speed of 1400 feet per second the peak pressure ratio and maximum adiabatic efficiency were 3.95 and 79 percent, respectively. The adiabatic efficiency of the MFI-2A is four points lower than that for impeller model MFI-1A, but because of the higher slip factor for the MFI-2A, the pressure ratios are approximately equal. The procedures followed in the design of the MFI-1A and MFI-2A were, in general, the same; and, although the prescribed initial condition resulted in geometrical configurations that were quite dissimilar, the resulting performance characteristics compare favorably with designs for which considerable development work has been necessary.

  6. Precipitation Data Merging over Mountainous Areas Using Satellite Estimates and Sparse Gauge Observations (PDMMA-USESGO) for Hydrological Modeling — A Case Study over the Tibetan Plateau

    NASA Astrophysics Data System (ADS)

    Yang, Z.; Hsu, K. L.; Sorooshian, S.; Xu, X.

    2017-12-01

    Precipitation in mountainous regions generally occurs with high frequency and intensity but is not well captured by sparsely distributed rain gauges, which poses a great challenge for water management. Satellite-based Precipitation Estimation (SPE) provides global, high-resolution alternative data for hydro-climatic studies but is subject to considerable biases. In this study, a model named PDMMA-USESGO (Precipitation Data Merging over Mountainous Areas Using Satellite Estimates and Sparse Gauge Observations) is developed to support precipitation mapping and hydrological modeling in mountainous catchments. The PDMMA-USESGO framework comprises two steps, adjusting SPE biases and merging satellite-gauge estimates, using the quantile mapping approach, a two-dimensional Gaussian weighting scheme (accounting for elevation effects), and an inverse root mean square error weighting method. The model is applied and evaluated over the Tibetan Plateau (TP) with the PERSIANN-CCS precipitation retrievals (daily, 0.04°×0.04°) and sparse observations from 89 gauges for the 11-year period 2003-2013. To assess the effect of data merging on streamflow modeling, a hydrological evaluation is conducted over a watershed in the southeast TP using the Soil and Water Assessment Tool (SWAT). Evaluation results indicate the effectiveness of the model in generating high-resolution, accurate precipitation estimates over mountainous terrain, with the merged estimates (Mer-SG) presenting consistently improved correlation coefficients, root mean square errors, and absolute mean biases relative to the original satellite estimates (Ori-CCS). The Mer-SG-forced streamflow simulations exhibit great improvements over simulations using Ori-CCS, with the coefficient of determination (R2) and Nash-Sutcliffe efficiency reaching 0.8 and 0.65, respectively. 
The presented model and case study serve as valuable references for the hydro-climatic applications using remote sensing-gauge information in other mountain areas of the world.
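The two weighting ingredients named in the abstract, a two-dimensional Gaussian distance weighting for gauge interpolation and an inverse-RMSE weighting for the satellite-gauge merge, can be sketched as follows. This is a minimal illustration with made-up numbers: the elevation term of the Gaussian scheme and the quantile-mapping bias adjustment are omitted, and the function names are ours, not the paper's.

```python
import numpy as np

def gaussian_weights(dists, sigma):
    """Two-dimensional Gaussian distance weighting (elevation term omitted)."""
    w = np.exp(-(dists ** 2) / (2 * sigma ** 2))
    return w / w.sum()

def inverse_rmse_merge(est_a, est_b, rmse_a, rmse_b):
    """Merge two estimates with weights proportional to 1/RMSE."""
    wa, wb = 1.0 / rmse_a, 1.0 / rmse_b
    return (wa * est_a + wb * est_b) / (wa + wb)

# Hypothetical grid cell with three gauges at the given distances
dists = np.array([5.0, 12.0, 30.0])       # km
gauge_vals = np.array([4.2, 3.8, 6.1])    # mm/day
gauge_interp = gaussian_weights(dists, sigma=15.0) @ gauge_vals

satellite_val = 5.0                       # bias-adjusted SPE value, mm/day
merged = inverse_rmse_merge(gauge_interp, satellite_val, rmse_a=1.0, rmse_b=2.0)
```

The merged value lands between the gauge interpolation and the satellite estimate, pulled toward whichever source has the smaller error.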

  7. High power heating of magnetic reconnection in merging tokamak experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ono, Y.; Tanabe, H.; Gi, K.

    2015-05-15

Significant ion/electron heating by magnetic reconnection, up to 1.2 keV, was documented in two-spherical-tokamak plasma merging experiments on MAST with a significantly large Reynolds number R∼10{sup 5}. Measured 1D/2D contours of ion and electron temperatures clearly reveal the energy-conversion mechanisms of magnetic reconnection: huge outflow heating of ions in the downstream and localized heating of electrons at the X-point. Ions are accelerated up to the order of the poloidal Alfven speed in the reconnection outflow region and are thermalized by fast shock-like density pileups formed in the downstreams, in agreement with recent solar satellite observations and PIC simulation results. The magnetic reconnection efficiently converts the reconnecting (poloidal) magnetic energy mostly into ion thermal energy through the outflow, making the reconnection heating energy proportional to the square of the reconnecting (poloidal) magnetic field: B{sub rec}{sup 2} ∼ B{sub p}{sup 2}. The guide toroidal field B{sub t} does not affect the bulk heating of ions and electrons, probably because the reconnection/outflow speeds are determined mostly by the externally driven inflow with the help of another fast reconnection mechanism: intermittent sheet ejection. The localized electron heating at the X-point increases sharply with the guide toroidal field B{sub t}, probably because the toroidal field improves electron confinement and lengthens the acceleration path along the X-line. 2D measurements of magnetic field and temperatures in the TS-3 tokamak merging experiment also reveal the detailed reconnection heating mechanisms mentioned above. The high-power heating of tokamak merging is useful not only for laboratory study of reconnection but also for economical startup and heating of tokamak plasmas. The MAST/TS-3 tokamak merging with B{sub p} > 0.4 T will enable us to heat the plasma to the alpha heating regime (T{sub i} > 5 keV) without any additional heating facility.

  8. An Econometric Approach to Evaluate Navy Advertising Efficiency.

    DTIC Science & Technology

    1996-03-01

This thesis uses an econometric approach to systematically and comprehensively analyze Navy advertising and recruiting data to determine advertising cost efficiency in the Navy recruiting process. Current recruiting and advertising cost data are merged into an appropriate database and evaluated using multiple regression techniques to assess the relationships between Navy advertising expenditures and recruit contracts attained.

  9. Unravelling merging behaviors and electrostatic properties of CVD-grown monolayer MoS{sub 2} domains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Song; Yang, Bingchu, E-mail: bingchuyang@csu.edu.cn; Hunan Key Laboratory for Super-Microstructure and Ultrafast Process, Central South University, 932 South Lushan Road, Changsha 410012

The presence of grain boundaries is inevitable for chemical vapor deposition (CVD)-grown MoS{sub 2} domains owing to various merging behaviors, which greatly limits their potential applications in novel electronic and optoelectronic devices. It is therefore of great significance to unravel the merging behaviors of the synthesized polygon-shaped MoS{sub 2} domains. Here we provide a systematic investigation of the merging behaviors and electrostatic properties of CVD-grown polycrystalline MoS{sub 2} crystals by multiple means. Morphological results exhibit various polygon-shaped features, ascribed to polycrystalline crystals merged from triangular MoS{sub 2} single crystals. The thicknesses of the triangular and polygon-shaped MoS{sub 2} crystals are identical, as manifested by Raman intensity and peak-position mappings. Three merging behaviors are proposed to illustrate the formation mechanisms of the various observed polygon-shaped MoS{sub 2} crystals. Combined photoemission electron microscopy and Kelvin probe force microscopy results reveal that the surface potential of perfectly merged crystals is identical, which has an important implication for fabricating MoS{sub 2}-based devices.

  10. Using DOUBLE STAR and CLUSTER Synoptic Observations to Test Global MHD Simulations of the Large-scale Topology of the Dayside Merging Region

    NASA Astrophysics Data System (ADS)

    Berchem, J.; Marchaudon, A.; Bosqued, J.; Escoubet, C. P.; Dunlop, M.; Owen, C. J.; Reme, H.; Balogh, A.; Carr, C.; Fazakerley, A. N.; Cao, J. B.

    2005-12-01

    Synoptic measurements from the DOUBLE STAR and CLUSTER spacecraft offer a unique opportunity to evaluate global models in simulating the complex topology and dynamics of the dayside merging region. We compare observations from the DOUBLE STAR TC-1 and CLUSTER spacecraft on May 8, 2004 with the predictions from a three-dimensional magnetohydrodynamic (MHD) simulation that uses plasma and magnetic field parameters measured upstream of the bow shock by the WIND spacecraft. Results from the global simulation are consistent with the large-scale features observed by CLUSTER and TC-1. We discuss topological changes and plasma flows at the dayside magnetospheric boundary inferred from the simulation results. The simulation shows that the DOUBLE STAR spacecraft passed through the dawn side merging region as the IMF rotated. In particular, the simulation indicates that at times TC-1 was very close to the merging region. In addition, we found that the bifurcation of the merging region in the simulation results is consistent with predictions by the antiparallel merging model. However, because of the draping of the magnetosheath field lines over the magnetopause, the positions and shape of the merging region differ significantly from those predicted by the model.

  11. Microbiological and ecological responses to global environmental changes in polar regions (MERGE): An IPY core coordinating project

    NASA Astrophysics Data System (ADS)

    Naganuma, Takeshi; Wilmotte, Annick

    2009-11-01

    An integrated program, “Microbiological and ecological responses to global environmental changes in polar regions” (MERGE), was proposed in the International Polar Year (IPY) 2007-2008 and endorsed by the IPY committee as a coordinating proposal. MERGE hosts original proposals to the IPY and facilitates their funding. MERGE selected three key questions to produce scientific achievements. Prokaryotic and eukaryotic organisms in terrestrial, lacustrine, and supraglacial habitats were targeted according to diversity and biogeography; food webs and ecosystem evolution; and linkages between biological, chemical, and physical processes in the supraglacial biome. MERGE hosted 13 original and seven additional proposals, with two full proposals. It respected the priorities and achievements of the individual proposals and aimed to unify their significant results. Ideas and projects followed a bottom-up rather than a top-down approach. We intend to inform the MERGE community of the initial results and encourage ongoing collaboration. Scientists from non-polar regions have also participated and are encouraged to remain involved in MERGE. MERGE is formed by scientists from Argentina, Australia, Austria, Belgium, Brazil, Bulgaria, Canada, Egypt, Finland, France, Germany, Italy, Japan, Korea, Malaysia, New Zealand, Philippines, Poland, Russia, Spain, UK, Uruguay, USA, and Vietnam, and associates from Chile, Denmark, Netherlands, and Norway.

  12. Strategies for merging microbial fuel cell technologies in water desalination processes: Start-up protocol and desalination efficiency assessment

    NASA Astrophysics Data System (ADS)

    Borjas, Zulema; Esteve-Núñez, Abraham; Ortiz, Juan Manuel

    2017-07-01

Microbial Desalination Cells (MDCs) constitute an innovative technology in which a microbial fuel cell and electrodialysis merge in the same device to obtain fresh water from saline water at no energy-associated cost to the user. In this work, an anodic biofilm of the electroactive bacterium Geobacter sulfurreducens was able to efficiently convert the acetate present in synthetic wastewater into electric current (j = 0.32 mA cm-2) used to desalinate water. Moreover, we implemented an efficient start-up protocol in which desalination of up to 90% occurred within one desalination cycle (water production: 0.308 L m-2 h-1, initial salinity: 9 mS cm-1, final salinity: <1 mS cm-1) using a filter-press-based MDC prototype without any energy supply (excluding peristaltic pump energy). This start-up protocol not only saves time but also simplifies operational procedures, making it a more feasible strategy for future scaling-up of MDCs, either as a single process or as a pre-treatment step combined with other well-established desalination technologies such as reverse osmosis (RO) or reverse electrodialysis.

  13. Universal Stochastic Multiscale Image Fusion: An Example Application for Shale Rock.

    PubMed

    Gerke, Kirill M; Karsanina, Marina V; Mallants, Dirk

    2015-11-02

    Spatial data captured with sensors of different resolution would provide a maximum degree of information if the data were to be merged into a single image representing all scales. We develop a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images of shale rock representing macro, micro and nanoscale spatial information on mineral, organic matter and porosity distribution. Merging multiscale images of shale rock is pivotal to quantify more reliably petrophysical properties needed for production optimization and environmental impacts minimization. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic for implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. The methodology can be further used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Practical applications are not limited to petroleum engineering or more broadly geosciences, but will also find their way in material sciences, climatology, and remote sensing.
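The rescaled-correlation-function machinery underlying such stochastic reconstructions starts from ordinary correlation functions of a segmented image. As a toy illustration (our own implementation, not the authors' code), the two-point probability function S2(r) of a binary phase along one axis can be computed as:

```python
import numpy as np

def s2(phase, max_r):
    """Two-point probability S2(r) along the x-axis of a binary image:
    the probability that two points a distance r apart both lie in the phase."""
    out = []
    for r in range(max_r + 1):
        a = phase[:, : phase.shape[1] - r]   # left point of each pair
        b = phase[:, r:]                     # right point, shifted by r
        out.append((a & b).mean())
    return np.array(out)

img = np.zeros((8, 8), dtype=bool)
img[:, :4] = True                 # half the image is "pore" phase
corr = s2(img, max_r=3)
# corr[0] equals the phase fraction, 0.5
```

Matching such functions across images of different resolution (after rescaling r to physical units) is the kind of constraint the reconstruction optimizes against.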

  14. Universal Stochastic Multiscale Image Fusion: An Example Application for Shale Rock

    PubMed Central

    Gerke, Kirill M.; Karsanina, Marina V.; Mallants, Dirk

    2015-01-01

    Spatial data captured with sensors of different resolution would provide a maximum degree of information if the data were to be merged into a single image representing all scales. We develop a general solution for merging multiscale categorical spatial data into a single dataset using stochastic reconstructions with rescaled correlation functions. The versatility of the method is demonstrated by merging three images of shale rock representing macro, micro and nanoscale spatial information on mineral, organic matter and porosity distribution. Merging multiscale images of shale rock is pivotal to quantify more reliably petrophysical properties needed for production optimization and environmental impacts minimization. Images obtained by X-ray microtomography and scanning electron microscopy were fused into a single image with predefined resolution. The methodology is sufficiently generic for implementation of other stochastic reconstruction techniques, any number of scales, any number of material phases, and any number of images for a given scale. The methodology can be further used to assess effective properties of fused porous media images or to compress voluminous spatial datasets for efficient data storage. Practical applications are not limited to petroleum engineering or more broadly geosciences, but will also find their way in material sciences, climatology, and remote sensing. PMID:26522938

  15. Observations of the Ion Signatures of Double Merging and the Formation of Newly Closed Field Lines

    NASA Technical Reports Server (NTRS)

    Chandler, Michael O.; Avanov, Levon A.; Craven, Paul D.

    2007-01-01

Observations from the Polar spacecraft, taken during a period of northward interplanetary magnetic field (IMF), show magnetosheath ions within the magnetosphere with velocity distributions resulting from multiple merging sites along the same field line. The observations from the TIDE instrument show two separate ion energy-time dispersions that are attributed to two widely separated (~20 Re) merging sites. Estimates of the initial merging times show that they occurred nearly simultaneously (within 5 minutes). Along with these populations, cold ionospheric ions were observed counterstreaming along the field lines. The presence of such ions is evidence that these field lines are connected to the ionosphere at both ends. These results are consistent with the hypothesis that double merging can produce closed field lines populated by solar wind plasma. While the merging sites cannot be unambiguously located, the observations and analyses favor one site poleward of the northern cusp and a second site at low latitudes.

  16. Evaluation and correction of uncertainty due to Gaussian approximation in radar - rain gauge merging using kriging with external drift

    NASA Astrophysics Data System (ADS)

    Cecinati, F.; Wani, O.; Rico-Ramirez, M. A.

    2016-12-01

It is widely recognised that merging radar rainfall estimates (RRE) with rain gauge data can improve the RRE and provide areal and temporal coverage that rain gauges cannot offer. Many methods to merge radar and rain gauge data are based on kriging and require an assumption of Gaussianity for the variable of interest. In particular, this work looks at kriging with external drift (KED), because it is an efficient, widely used, and well-performing merging method. Rainfall, especially at finer temporal scales, does not follow a normal distribution and presents a bi-modal, skewed distribution. In some applications a Gaussianity assumption is made without any correction; in other cases, variables are transformed in order to obtain a distribution closer to Gaussian. This work has two objectives: 1) compare different transformation methods in merging applications; 2) evaluate the uncertainty arising when untransformed rainfall data are used in KED. The comparison of transformation methods is addressed from two points of view: on the one hand, the ability to reproduce the original probability distribution after back-transformation of the merged products is evaluated with Q-Q plots; on the other, the rainfall estimates are compared against an independent set of rain gauge measurements. The tested methods are 1) no transformation, 2) Box-Cox transformation with parameter λ=0.5 (square root), 3) λ=0.25 (fourth root), 4) λ=0.1 (nearly logarithmic), 5) normal quantile transformation, and 6) singularity analysis. The uncertainty associated with the use of non-transformed data in KED is evaluated by comparison with the best-performing product. The methods are tested on a case study in Northern England, using hourly data from 211 tipping-bucket rain gauges from the Environment Agency and radar rainfall data at 1 km/5-min resolution from the UK Met Office. In addition, 25 independent rain gauges from the UK Met Office were used to assess the merged products.
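The Box-Cox family behind options 2-4 is simple to state; a minimal sketch follows (our illustration, assuming strictly positive rainfall values; the many zeros in real records need separate handling, which is part of the skewness problem the abstract describes):

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform; the limit lam -> 0 is the log transform."""
    x = np.asarray(x, dtype=float)
    return np.log(x) if lam == 0 else (x ** lam - 1.0) / lam

def inv_boxcox(y, lam):
    """Back-transform a Box-Cox-transformed value."""
    y = np.asarray(y, dtype=float)
    return np.exp(y) if lam == 0 else (lam * y + 1.0) ** (1.0 / lam)

rain = np.array([0.2, 1.5, 4.0, 12.0])   # mm/h, strictly positive
z = boxcox(rain, 0.25)                   # closer to Gaussian: krige z, then invert
```

The workflow is: transform the rainfall, run KED in the transformed space, then back-transform the kriged field; the round trip must be exact for the back-transformed product to reproduce the original distribution.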

  17. Polymer based organic solar cells using ink-jet printed active layers

    NASA Astrophysics Data System (ADS)

    Aernouts, T.; Aleksandrov, T.; Girotto, C.; Genoe, J.; Poortmans, J.

    2008-01-01

Ink-jet printing is used to deposit polymer:fullerene blends suitable as the active layer of organic solar cells. We show that the merging of separately deposited ink droplets into a continuous, pinhole-free organic thin film results from a balance between ink viscosity and surface wetting. For certain of the studied solutions a clear coffee-ring effect occurs for single droplets; this can be minimized over larger printed areas, yielding smooth layers with minimal surface roughness. The resulting organic films are used as the active layer of solar cells with a power conversion efficiency of 1.4% under simulated AM1.5 solar illumination.

  18. Spatiotemporal fusion of multiple-satellite aerosol optical depth (AOD) products using Bayesian maximum entropy method

    NASA Astrophysics Data System (ADS)

    Tang, Qingxin; Bo, Yanchen; Zhu, Yuxin

    2016-04-01

Merging multisensor aerosol optical depth (AOD) products is an effective way to produce more spatiotemporally complete and accurate AOD products. A spatiotemporal statistical data fusion framework based on a Bayesian maximum entropy (BME) method was developed for merging satellite AOD products over East Asia. The advantages of the presented merging framework are that it not only utilizes spatiotemporal autocorrelations but also explicitly incorporates the uncertainties of the AOD products being merged. The satellite AOD products used for merging are the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5.1 Level-2 AOD products (MOD04_L2) and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Deep Blue Level-2 AOD products (SWDB_L2). The results show that the average completeness of the merged AOD data is 95.2%, which is significantly superior to that of MOD04_L2 (22.9%) and SWDB_L2 (20.2%). Comparison of the merged AOD against Aerosol Robotic Network AOD records shows that its correlation coefficient (0.75), root-mean-square error (0.29), and mean bias (0.068) are close to those of the MODIS AOD (0.82, 0.19, and 0.059, respectively). In regions where both MODIS and SeaWiFS have valid observations, the accuracy of the merged AOD is higher than that of either product alone. Even in regions where both MODIS and SeaWiFS AODs are missing, the accuracy of the merged AOD remains close to that achieved where both products have valid observations.
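BME fusion itself requires a dedicated implementation, but the core idea of letting product uncertainties set the merge weights, and of filling each product's gaps with the other, can be sketched with a simple inverse-variance rule (entirely our illustration, not the paper's method; the variances below are made up):

```python
import numpy as np

def fuse_aod(aod_a, aod_b, var_a, var_b):
    """Inverse-variance fusion of two gappy AOD grids (NaN = missing).
    Cells with both products get the uncertainty-weighted mean; cells with
    one product keep it; cells with neither stay NaN."""
    wa, wb = 1.0 / var_a, 1.0 / var_b
    both = ~np.isnan(aod_a) & ~np.isnan(aod_b)
    fused = np.where(np.isnan(aod_a), aod_b, aod_a)   # fall back to whichever exists
    fused[both] = (wa * aod_a[both] + wb * aod_b[both]) / (wa + wb)
    return fused

modis   = np.array([0.30, np.nan, 0.50, np.nan])
seawifs = np.array([0.28, 0.40, np.nan, np.nan])
merged = fuse_aod(modis, seawifs, var_a=0.01, var_b=0.04)
```

This reproduces the completeness gain reported in the abstract: the merged grid is valid wherever at least one product is, while the weighting keeps the fused value closest to the lower-uncertainty product.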

  19. Design of lane merges at rural freeway construction work zones.

    DOT National Transportation Integrated Search

    2012-10-01

Practices for the design and control of work zone traffic control configurations have evolved over time to reflect safer and more efficient management practices. However, they are also recognized as areas of frequent vehicle conflicts that can ca...

  20. Reconciling mass functions with the star-forming main sequence via mergers

    NASA Astrophysics Data System (ADS)

    Steinhardt, Charles L.; Yurk, Dominic; Capak, Peter

    2017-06-01

We combine star formation along the 'main sequence', quiescence, and clustering and merging to produce an empirical model for the evolution of individual galaxies. Main-sequence star formation alone would significantly steepen the stellar mass function towards low redshift, in sharp conflict with observation. However, a combination of star formation and merging produces a consistent result for the correct choice of merger rate function. As a result, we are motivated to propose a model in which hierarchical merging is disconnected from environmentally independent star formation. This model can be tested via correlation functions and would produce new constraints on clustering and merging.

  1. CFD simulation of local and global mixing time in an agitated tank

    NASA Astrophysics Data System (ADS)

    Li, Liangchao; Xu, Bin

    2017-01-01

The issue of mixing efficiency in agitated tanks has drawn serious concern in many industrial processes. The turbulence model is critical to predicting the mixing process in agitated tanks. Using the computational fluid dynamics (CFD) software package Fluent 6.2, the mixing characteristics in a tank agitated by dual six-blade Rushton turbines (6-DT) are predicted with the detached eddy simulation (DES) method. A sliding mesh (SM) approach is adopted to resolve the rotation of the impeller. The simulated flow patterns and liquid velocities in the agitated tank are verified against experimental data from the literature. The simulation results indicate that the DES method can capture more flow details than Reynolds-averaged Navier-Stokes (RANS) models. Local and global mixing times in the agitated tank are predicted by solving a tracer-concentration scalar transport equation. The simulated results show that the feeding point has a great influence on the mixing process and mixing time; mixing efficiency is highest for a feeding point located midway between the two impellers. Two methods are used to determine the global mixing time and give closely agreeing results. The dimensionless global mixing time remains unchanged with increasing impeller speed. Parallel, merging, and diverging flow patterns form in the agitated tank, respectively, as the impeller spacing and the clearance of the lower impeller from the tank bottom are changed. The global mixing time is shortest for the merging flow, followed by the diverging flow, and longest for the parallel flow. The research presents helpful references for the design, optimization, and scale-up of agitated tanks with multiple impellers.
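Once the tracer-concentration curve at a monitoring point is available, determining a mixing time reduces to a threshold test. A minimal sketch of the common 95% criterion on a synthetic homogenization curve (our illustration; the abstract does not specify which two methods the authors used):

```python
import numpy as np

def mixing_time(t, c, tol=0.05):
    """Return the 95% mixing time: the first instant after which the tracer
    concentration stays within +/- tol of its final (fully mixed) value."""
    c_inf = c[-1]
    dev = np.abs(c / c_inf - 1.0)
    outside = np.nonzero(dev > tol)[0]
    return t[0] if outside.size == 0 else t[outside[-1] + 1]

t = np.linspace(0.0, 10.0, 101)
c = 1.0 - np.exp(-t)          # toy first-order homogenization curve
t95 = mixing_time(t, c)       # ~3.0 for this curve, since exp(-3) ~ 0.05
```

In a CFD post-processing workflow, `c` would be the solved tracer scalar sampled at a probe location, and the global mixing time is taken over the slowest probe.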

  2. Simulation of 6 to 3 to 1 merge and squeeze of Au77+ bunches in AGS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, C. J.

    2016-05-09

In order to increase the intensity per Au77+ bunch at AGS extraction, a 6 to 3 to 1 merge scheme was developed and implemented by K. Zeno during the 2016 RHIC run. For this scheme, 12 Booster loads, each consisting of a single bunch, are delivered to the AGS per AGS magnetic cycle. The bunch from the Booster is itself the result of a 4 to 2 to 1 merge carried out on a flat porch during the Booster magnetic cycle. Each Booster bunch is injected into a harmonic 24 bucket on the AGS injection porch. In order to fit into the buckets and allow for the AGS injection kicker rise time, the bunch width must be reduced by exciting quadrupole oscillations just before extraction from the Booster. The bunches are injected into two groups of six adjacent harmonic 24 buckets. In each group the 6 bunches are merged into 3 by bringing on RF harmonic 12 while reducing harmonic 24. This is a straightforward 2 to 1 merge (in which two adjacent bunches are merged into one). One ends up with two groups of three adjacent bunches sitting in harmonic 12 buckets. These bunches are accelerated to an intermediate porch for further merging. Doing the merge on a porch that sits above injection energy helps reduce losses that are believed to be due to the space-charge force acting on the bunched particles. (The 6 to 3 merge is done on the injection porch because the harmonic 24 frequency on the intermediate porch would be too high for the AGS RF cavities.) On the intermediate porch each group of 3 bunches is merged into one by bringing on RF harmonics 8 and 4 and then reducing harmonics 12 and 8. One ends up with 2 bunches, each the result of a 6 to 3 to 1 merge and each sitting in a harmonic 4 bucket. This puts 6 Booster loads into each bunch. Each merged bunch then needs to be squeezed into a harmonic 12 bucket for subsequent acceleration; this is done by again bringing on harmonic 8 and then harmonic 12.
Results of simulations of the 6 to 3 to 1 merge and the subsequent squeeze into harmonic 12 buckets are presented in this note. In particular, they provide a benchmark for what can be achieved with the available RF voltages.

  3. Towards the optimal fusion of high-resolution Digital Elevation Models for detailed urban flood assessment

    NASA Astrophysics Data System (ADS)

    Leitão, J. P.; de Sousa, L. M.

    2018-06-01

Newly available, more detailed and accurate elevation data sets, such as Digital Elevation Models (DEMs) generated from imagery acquired by terrestrial LiDAR (Light Detection and Ranging) systems or Unmanned Aerial Vehicles (UAVs), can be used to improve flood-model input data and consequently increase the accuracy of flood modelling results. This paper presents the first application of the MBlend merging method and assesses the impact of combining different DEMs on flood modelling results. It was demonstrated that different raster merging methods can have substantially different impacts on these results. In addition to the influence of the method used to merge the original DEMs, the magnitude of the impact also depends on (i) the systematic horizontal and vertical differences between the DEMs, and (ii) the orientation between the DEM boundary and the terrain slope. The greatest water depth and flow velocity differences between the flood modelling results obtained using the reference DEM and the merged DEMs ranged from -9.845 to 0.002 m and from 0.003 to 0.024 m s-1, respectively; such differences can have a significant impact on flood hazard estimates. In most of the cases investigated in this study, the differences from the reference-DEM results were smaller for the MBlend method than for the two conventional methods. This study highlighted the importance of DEM merging when conducting flood modelling and provided hints on the best DEM merging methods to use.
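MBlend itself handles irregular boundaries and long-range trends, but a conventional baseline of the kind such methods are compared against, linear feathering across the seam of two overlapping strips, can be sketched simply (entirely our toy implementation, not taken from the paper):

```python
import numpy as np

def feather_merge(dem_a, dem_b, overlap):
    """Merge two DEM strips that share `overlap` columns, blending linearly
    across the seam so systematic vertical offsets taper instead of stepping."""
    ramp = np.linspace(1.0, 0.0, overlap)                     # weight for dem_a
    blend = ramp * dem_a[:, -overlap:] + (1 - ramp) * dem_b[:, :overlap]
    return np.hstack([dem_a[:, :-overlap], blend, dem_b[:, overlap:]])

# Two flat strips with a 2 m systematic vertical offset (hypothetical values)
a = np.full((2, 4), 10.0)
b = np.full((2, 4), 12.0)
merged = feather_merge(a, b, overlap=2)
```

With a wider overlap the ramp produces a gradual transition; an abrupt step at the seam is exactly the kind of artifact that propagates into water depths and velocities in a flood model.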

  4. On the Performance of Alternate Conceptual Ecohydrological Models for Streamflow Prediction

    NASA Astrophysics Data System (ADS)

    Naseem, Bushra; Ajami, Hoori; Cordery, Ian; Sharma, Ashish

    2016-04-01

A lumped conceptual hydrological model is merged with two conceptual dynamic vegetation models to assess their performance for simultaneous simulation of streamflow and leaf area index (LAI). The two vegetation models, with differing representations of ecological processes, are merged with the lumped conceptual hydrological model (HYMOD) to predict catchment-scale streamflow and LAI. The merged RR-LAI-I model computes relative leaf biomass based on transpiration rates, while the RR-LAI-II model computes above-ground green and dead biomass based on net primary productivity and water-use efficiency in response to soil moisture dynamics. To assess the performance of these models, daily discharge and the 8-day MODIS LAI product for 27 catchments of 90-1600 km2 located in the Murray-Darling Basin in Australia are used. Our results illustrate that when single-objective optimisation focussed on maximizing the objective function for streamflow or LAI, the other, un-calibrated predicted outcome (LAI if streamflow is the focus) was consistently compromised. Thus, single-objective optimisation cannot account for all processes in the conceptual ecohydrological models. However, multi-objective optimisation showed great strength for streamflow and LAI predictions. Both response outputs were better simulated by RR-LAI-II than RR-LAI-I owing to the better representation of physical processes, such as net primary productivity (NPP), in RR-LAI-II. Our results highlight that simultaneous calibration of streamflow and LAI using a multi-objective algorithm is an attractive tool for improved streamflow predictions.

  5. Classroom to Clinic: Merging Education and Research to Efficiently Prototype Medical Devices

    PubMed Central

    Begg, Nikolai D.; Walsh, Conor; Custer, David; Gupta, Rajiv; Osborn, Lynn R.; Slocum, Alexander H.

    2013-01-01

    Innovation in patient care requires both clinical and technical skills, and this paper presents the methods and outcomes of a nine-year, clinical-academic collaboration to develop and evaluate new medical device technologies, while teaching mechanical engineering. Together, over the course of a single semester, seniors, graduate students, and clinicians conceive, design, build, and test proof-of-concept prototypes. Projects initiated in the course have generated intellectual property and peer-reviewed publications, stimulated further research, furthered student and clinician careers, and resulted in technology licenses and start-up ventures. PMID:27170859

  6. Classroom to Clinic: Merging Education and Research to Efficiently Prototype Medical Devices.

    PubMed

    Hanumara, Nevan C; Begg, Nikolai D; Walsh, Conor; Custer, David; Gupta, Rajiv; Osborn, Lynn R; Slocum, Alexander H

    2013-01-01

    Innovation in patient care requires both clinical and technical skills, and this paper presents the methods and outcomes of a nine-year, clinical-academic collaboration to develop and evaluate new medical device technologies, while teaching mechanical engineering. Together, over the course of a single semester, seniors, graduate students, and clinicians conceive, design, build, and test proof-of-concept prototypes. Projects initiated in the course have generated intellectual property and peer-reviewed publications, stimulated further research, furthered student and clinician careers, and resulted in technology licenses and start-up ventures.

  7. Development of a Tandem Electrodynamic Trap Apparatus for Merging Charged Droplets and Spectroscopic Characterization of Resultant Dried Particles.

    PubMed

    Kohno, Jun-Ya; Higashiura, Tetsu; Eguchi, Takaaki; Miura, Shumpei; Ogawa, Masato

    2016-08-11

Materials typically function in multicomponent forms, so a wide range of compositions must be tested to obtain the optimum composition for a specific application. We propose performing such optimization on a series of small levitated single particles. We describe a tandem-trap apparatus for merging liquid droplets and for analyzing the merged droplets and/or the dried particles produced from them under levitation conditions. Droplet merging was confirmed by Raman spectroscopic studies of the levitated particles. The tandem-trap apparatus enables the synthesis of a particle and spectroscopic investigation of its properties, providing a basis for future investigation of the properties of levitated single particles.

  8. Anatomy of Data Integration

    PubMed Central

    Brazhnik, Olga; Jones, John F.

    2007-01-01

    Producing reliable information is the ultimate goal of data processing. The ocean of data created with the advances of science and technologies calls for integration of data coming from heterogeneous sources that are diverse in their purposes, business rules, underlying models and enabling technologies. Reference models, Semantic Web, standards, ontology, and other technologies enable fast and efficient merging of heterogeneous data, while the reliability of produced information is largely defined by how well the data represent the reality. In this paper we initiate a framework for assessing the informational value of data that includes data dimensions; aligning data quality with business practices; identifying authoritative sources and integration keys; merging models; uniting updates of varying frequency and overlapping or gapped data sets. PMID:17071142

  9. Distortion in Two-Dimensional Shapes of Merging Nanobubbles: Evidence for Anisotropic Gas Flow Mechanism.

    PubMed

    Park, Jong Bo; Shin, Dongha; Kang, Sangmin; Cho, Sung-Pyo; Hong, Byung Hee

    2016-11-01

    Two nanobubbles that merge in a graphene liquid cell take elliptical shapes rather than the ideal circular shapes. This phenomenon was investigated in detail by using in situ transmission electron microscopy (TEM). The results show that the distortion in the two-dimensional shapes of the merging nanobubbles is attributed to the anisotropic gas transport flux between the nanobubbles. We also predicted and confirmed the same phenomenon in a three-nanobubble system, indicating that the relative size difference is important in determining the shape of merging nanobubbles.

  10. Infrared and visible image fusion scheme based on NSCT and low-level visual features

    NASA Astrophysics Data System (ADS)

    Li, Huafeng; Qiu, Hongmei; Yu, Zhengtao; Zhang, Yafei

    2016-05-01

    Multi-scale transform (MST) is an efficient tool for image fusion. Recently, many fusion methods have been developed based on different MSTs, and they have shown potential application in many fields. In this paper, we propose an effective infrared and visible image fusion scheme in the nonsubsampled contourlet transform (NSCT) domain, in which the NSCT is first employed to decompose each of the source images into a series of high-frequency subbands and one low-frequency subband. To improve the fusion performance, we designed two new activity measures for fusion of the lowpass subbands and the highpass subbands. These measures are developed based on the fact that the human visual system (HVS) perceives image quality mainly according to some of its low-level features. Then, the selection principles for the different subbands are presented based on the corresponding activity measures. Finally, the merged subbands are constructed according to the selection principles, and the final fused image is produced by applying the inverse NSCT to these merged subbands. Experimental results demonstrate the effectiveness and superiority of the proposed method over state-of-the-art fusion methods in terms of both visual effect and objective evaluation results.
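    Multiscale fusion schemes like the one above share a common skeleton: decompose each source image, fuse the low- and high-frequency parts with different rules, then recombine. A toy Python sketch of that skeleton follows; the box-filter split and the simple average/max-absolute fusion rules are illustrative stand-ins for the paper's NSCT decomposition and HVS-based activity measures.

```python
import numpy as np

def box_lowpass(img, k=3):
    # Local mean via an edge-padded k x k box filter.
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    low = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return low / (k * k)

def fuse_images(a, b):
    # Split each image into a low-frequency base and high-frequency detail.
    low_a = box_lowpass(a); high_a = a - low_a
    low_b = box_lowpass(b); high_b = b - low_b
    fused_low = (low_a + low_b) / 2.0                      # average the bases
    fused_high = np.where(np.abs(high_a) >= np.abs(high_b),
                          high_a, high_b)                  # keep the stronger detail
    return fused_low + fused_high

ir = np.zeros((6, 6)); ir[3, 3] = 9.0   # "infrared": one bright target
vis = np.full((6, 6), 4.0)              # "visible": flat background
fused = fuse_images(ir, vis)
```

The fused result retains the flat visible background while the bright infrared target survives through the max-absolute detail rule.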

  11. Modeling of the merging of two colliding field reversed configuration plasmoids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Guanqiong; Wang, Xiaoguang; Li, Lulu

    2016-06-15

    The field reversed configuration (FRC) is one of the candidate plasma targets for magneto-inertial fusion, and a high-temperature FRC can be formed by using collision-merging technology. Although the merging process and mechanism of the FRC are quite complicated, it is feasible to build a simple model to investigate the macroscopic equilibrium parameters, including the density, the temperature and the separatrix volume, which may play an important role in the collision-merging process of the FRC. It is quite interesting that the estimates of the related results based on our simple model are in agreement with the simulation results of a two-dimensional magneto-hydrodynamic code (MFP-2D), which has been developed by our group over the last couple of years, while these results can qualitatively fit the results of C-2 experiments by the Tri Alpha Energy company. On the other hand, the simple model can be used to investigate how to increase the density of the merged FRC. It is found that the amplification of the density depends on the poloidal flux-increase factor, and the temperature increases with the translation speed of the two plasmoids.

  12. Throughput and Energy Efficiency of a Cooperative Hybrid ARQ Protocol for Underwater Acoustic Sensor Networks

    PubMed Central

    Ghosh, Arindam; Lee, Jae-Won; Cho, Ho-Shin

    2013-01-01

    Due to their efficiency, reliability and better channel and resource utilization, cooperative transmission technologies have been attractive options in underwater as well as terrestrial sensor networks. Their performance can be further improved if merged with forward error correction (FEC) techniques. In this paper, we propose and analyze a retransmission protocol named Cooperative-Hybrid Automatic Repeat reQuest (C-HARQ) for underwater acoustic sensor networks, which exploits both the reliability of cooperative ARQ (CARQ) and the efficiency of incremental redundancy-hybrid ARQ (IR-HARQ) using rate-compatible punctured convolutional (RCPC) codes. Extensive Monte Carlo simulations are performed to investigate the performance of the protocol, in terms of both throughput and energy efficiency. The results clearly reveal the enhancement in performance achieved by the C-HARQ protocol, which outperforms both CARQ and conventional stop-and-wait ARQ (S&W ARQ). Further, using computer simulations, optimum values of various network parameters are estimated so as to extract the best performance out of the C-HARQ protocol. PMID:24217359

  13. Integration and the performance of healthcare networks: do integration strategies enhance efficiency, profitability, and image?

    PubMed Central

    Wan, Thomas T.H.; Ma, Allen; Lin, Blossom Y.J.

    2001-01-01

    Abstract Purpose: This study examines the integration effects on efficiency and financial viability of the top 100 integrated healthcare networks (IHNs) in the United States. Theory: A contingency-strategic theory is used to identify the relationship of IHNs' performance to their structural and operational characteristics and integration strategies. Methods: The lists of the top 100 IHNs ranked in two years, 1998 and 1999, by the SMG Marketing Group were merged to create a database for the study. Multiple indicators were used to examine the relationship between IHNs' characteristics and their performance in efficiency and financial viability. A path analytical model was developed and validated by the Mplus statistical program. Factors influencing the top 100 IHNs' images, represented by attaining ranking among the top 100 in two consecutive years, were analysed. Results and conclusion: No positive associations were found between integration and network performance in efficiency or profits. Longitudinal data are needed to investigate the effect of integration on healthcare networks' financial performance. PMID:16896405

  14. Entanglement and Coherence in Quantum State Merging.

    PubMed

    Streltsov, A; Chitambar, E; Rana, S; Bera, M N; Winter, A; Lewenstein, M

    2016-06-17

    Understanding the resource consumption in distributed scenarios is one of the main goals of quantum information theory. A prominent example for such a scenario is the task of quantum state merging, where two parties aim to merge their tripartite quantum state parts. In standard quantum state merging, entanglement is considered to be an expensive resource, while local quantum operations can be performed at no additional cost. However, recent developments show that some local operations could be more expensive than others: it is reasonable to distinguish between local incoherent operations and local operations which can create coherence. This idea leads us to the task of incoherent quantum state merging, where one of the parties has free access to local incoherent operations only. In this case the resources of the process are quantified by pairs of entanglement and coherence. Here, we develop tools for studying this process and apply them to several relevant scenarios. While quantum state merging can lead to a gain of entanglement, our results imply that no merging procedure can gain entanglement and coherence at the same time. We also provide a general lower bound on the entanglement-coherence sum and show that the bound is tight for all pure states. Our results also lead to an incoherent version of Schumacher compression: in this case the compression rate is equal to the von Neumann entropy of the diagonal elements of the corresponding quantum state.

  15. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic

    USGS Publications Warehouse

    Chavez, P.S.; Sides, S.C.; Anderson, J.A.

    1991-01-01

    The merging of multisensor image data is becoming a widely used procedure because of the complementary nature of various data sets. Ideally, the method used to merge data sets with high-spatial and high-spectral resolution should not distort the spectral characteristics of the high-spectral resolution data. This paper compares the results of three different methods used to merge the information contents of the Landsat Thematic Mapper (TM) and Satellite Pour l'Observation de la Terre (SPOT) panchromatic data. The comparison is based on spectral characteristics and is made using statistical, visual, and graphical analyses of the results. The three methods used to merge the information contents of the Landsat TM and SPOT panchromatic data were the Hue-Intensity-Saturation (HIS), Principal Component Analysis (PCA), and High-Pass Filter (HPF) procedures. The HIS method distorted the spectral characteristics of the data the most. The HPF method distorted the spectral characteristics the least; the distortions were minimal and difficult to detect. -Authors
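    The high-pass filter (HPF) merge, the least spectrally distorting of the three, injects only the high-frequency panchromatic detail into each multispectral band. A minimal NumPy sketch under illustrative assumptions (co-registered, same-size arrays and a 5x5 box filter; the paper's actual kernel size and preprocessing may differ):

```python
import numpy as np

def hpf_merge(ms_band, pan, k=5):
    """High-pass filter merge: add the high-frequency component of the
    panchromatic band to an already-upsampled multispectral band,
    leaving the band's low-frequency spectral content mostly intact."""
    pad = k // 2
    padded = np.pad(pan, pad, mode='edge')
    low = np.zeros_like(pan, dtype=float)
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + pan.shape[0], dx:dx + pan.shape[1]]
    low /= k * k
    return ms_band + (pan - low)   # inject only the high-pass detail

ms = np.full((8, 8), 100.0)                # flat spectral band
pan = np.zeros((8, 8)); pan[4, 4] = 50.0   # sharp panchromatic detail
merged = hpf_merge(ms, pan)
```

Away from the detail, the merged band keeps its original spectral value, which is exactly the low-distortion property the comparison credits to the HPF method.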

  16. Building Roof Segmentation from Aerial Images Using a Line-and Region-Based Watershed Segmentation Technique

    PubMed Central

    Merabet, Youssef El; Meurie, Cyril; Ruichek, Yassine; Sbihi, Abderrahmane; Touahni, Raja

    2015-01-01

    In this paper, we present a novel strategy for roof segmentation from aerial images (orthophotoplans) based on the cooperation of edge- and region-based segmentation methods. The proposed strategy is composed of three major steps. The first one, called the pre-processing step, consists of simplifying the acquired image with an appropriate couple of invariant and gradient, optimized for the application, in order to limit illumination changes (shadows, brightness, etc.) affecting the images. The second step is composed of two main parallel treatments: on the one hand, the simplified image is segmented by watershed regions. Even if the first segmentation of this step provides good results in general, the image is often over-segmented. To alleviate this problem, an efficient region merging strategy adapted to the orthophotoplan particularities, with a 2D modeling of roof ridges technique, is applied. On the other hand, the simplified image is segmented by watershed lines. The third step consists of integrating both watershed segmentation strategies into a single cooperative segmentation scheme in order to achieve satisfactory segmentation results. Tests have been performed on orthophotoplans containing 100 roofs with varying complexity, and the results are evaluated with the VINET criterion using ground-truth image segmentation. A comparison with five popular segmentation techniques of the literature demonstrates the effectiveness and the reliability of the proposed approach. Indeed, we obtain a good segmentation rate of 96% with the proposed method compared to 87.5% with statistical region merging (SRM), 84% with mean shift, 82% with color structure code (CSC), 80% with efficient graph-based segmentation algorithm (EGBIS) and 71% with JSEG. PMID:25648706

  17. ECCENTRICITY EVOLUTION THROUGH ACCRETION OF PROTOPLANETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsumoto, Yuji; Nagasawa, Makiko; Ida, Shigeru, E-mail: yuji.matsumoto@nao.ac.jp, E-mail: nagasawa.m.ad@m.titech.ac.jp, E-mail: ida@elsi.jp

    2015-09-10

    Most super-Earths detected by the radial velocity (RV) method have significantly smaller eccentricities than the eccentricities corresponding to velocity dispersion equal to their surface escape velocity (“escape eccentricities”). If orbital instability followed by giant impacts among protoplanets that have migrated from outer regions is considered, the eccentricities of the merged bodies are usually expected to become comparable to those of the orbital-crossing bodies, which are excited up to their escape eccentricities by close scattering. However, the eccentricity evolution in the in situ accretion model has not been studied in detail. Here, we investigate the eccentricity evolution through N-body simulations. We have found that the merged planets tend to have much smaller eccentricities than escape eccentricities due to very efficient collision damping. If the protoplanet orbits are initially well separated and their eccentricities are securely increased, an inner protoplanet collides at its apocenter with an outer protoplanet at its pericenter. The eccentricity of the merged body is the smallest for such configurations. Orbital inclinations are also damped by this mechanism and planets tend to share the same orbital plane, which is consistent with Kepler data. Such efficient collision damping is not found when we start calculations from densely packed orbits of the protoplanets. If the protoplanets are initially in mean-motion resonances, which corresponds to well separated orbits, the in situ accretion model well reproduces the features of eccentricities and inclinations of multiple super-Earth/Earth systems discovered by RV and Kepler surveys.

  18. Actin dynamics provides membrane tension to merge fusing vesicles into the plasma membrane

    PubMed Central

    Wen, Peter J.; Grenklo, Staffan; Arpino, Gianvito; Tan, Xinyu; Liao, Hsien-Shun; Heureaux, Johanna; Peng, Shi-Yong; Chiang, Hsueh-Cheng; Hamid, Edaeni; Zhao, Wei-Dong; Shin, Wonchul; Näreoja, Tuomas; Evergren, Emma; Jin, Yinghui; Karlsson, Roger; Ebert, Steven N.; Jin, Albert; Liu, Allen P.; Shupliakov, Oleg; Wu, Ling-Gang

    2016-01-01

    Vesicle fusion is executed via formation of an Ω-shaped structure (Ω-profile), followed by closure (kiss-and-run) or merging of the Ω-profile into the plasma membrane (full fusion). Although Ω-profile closure limits release but recycles vesicles economically, Ω-profile merging facilitates release but couples to classical endocytosis for recycling. Despite its crucial role in determining exocytosis/endocytosis modes, how Ω-profile merging is mediated is poorly understood in endocrine cells and neurons containing small ∼30–300 nm vesicles. Here, using confocal and super-resolution STED imaging, force measurements, pharmacology and gene knockout, we show that dynamic assembly of filamentous actin, involving ATP hydrolysis, N-WASP and formin, mediates Ω-profile merging by providing sufficient plasma membrane tension to shrink the Ω-profile in neuroendocrine chromaffin cells containing ∼300 nm vesicles. Actin-directed compounds also induce Ω-profile accumulation at lamprey synaptic active zones, suggesting that actin may mediate Ω-profile merging at synapses. These results uncover molecular and biophysical mechanisms underlying Ω-profile merging. PMID:27576662

  19. Comparing the effect on the AGS longitudinal emittance of gold ions from the BtA stripping foil with and without a Booster Bunch Merge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeno, K.

    The aim of this note is to better understand the effect of merging the Gold bunches in the Booster into one on the resulting AGS longitudinal emittance, as compared to not merging them. The reason it matters whether they are merged or not is that they pass through a stripping foil in the BtA line. Data was taken last run (Run 17) for the case where the bunches are not merged, and it will be compared with data from cases where the bunches are merged. Previous data from Tandem operation will also be considered. There are two main pieces to this puzzle. The first is the ε growth associated with the energy spread due to ‘energy straggling’ in the BtA stripping foil, and the second is the effective ε growth associated with the energy loss that occurs while passing through the foil. Both of these effects depend on whether or not the Booster bunches have been merged into one.

  20. Experimental testing of spray dryer for control of incineration emissions.

    PubMed

    Wey, M Y; Wu, H Y; Tseng, H H; Chen, J C

    2003-05-01

    The research investigated the absorption/adsorption efficiency of sulfur dioxide (SO2), heavy metals, and polycyclic aromatic hydrocarbons (PAHs) with different Ca-based sorbents in a spray dryer during the incineration process. To further improve the adsorption capacity of the Ca-based sorbents, different spraying pressures and additives were tested in this study. Experimental results showed that CaO could be used as an alternative sorbent in the spray dryer at an optimal initial particle size distribution of the spraying droplets. In the spray dryer, Ca-based sorbents provided many sites for heavy metal and PAH condensation, as well as calcium and alkalinity to react with metals to form merged species. As a result, heavy metals and PAHs could be removed from the flue gas simultaneously by condensation and adsorption. The additions of the additives NaHCO3, SiO2, and KMnO4 were also found to be effective in improving the removal efficiency of these air pollutants.

  1. Modeling the source of GW150914 with targeted numerical-relativity simulations

    NASA Astrophysics Data System (ADS)

    Lovelace, Geoffrey; Lousto, Carlos O.; Healy, James; Scheel, Mark A.; Garcia, Alyssa; O'Shaughnessy, Richard; Boyle, Michael; Campanelli, Manuela; Hemberger, Daniel A.; Kidder, Lawrence E.; Pfeiffer, Harald P.; Szilágyi, Béla; Teukolsky, Saul A.; Zlochower, Yosef

    2016-12-01

    In fall of 2015, the two LIGO detectors measured the gravitational wave signal GW150914, which originated from a pair of merging black holes (Abbott et al Virgo, LIGO Scientific 2016 Phys. Rev. Lett. 116 061102). In the final 0.2 s (about 8 gravitational-wave cycles) before the amplitude reached its maximum, the observed signal swept up in amplitude and frequency, from 35 Hz to 150 Hz. The theoretical gravitational-wave signal for merging black holes, as predicted by general relativity, can be computed only by full numerical relativity, because analytic approximations fail near the time of merger. Moreover, the nearly-equal masses, moderate spins, and small number of orbits of GW150914 are especially straightforward and efficient to simulate with modern numerical-relativity codes. In this paper, we report the modeling of GW150914 with numerical-relativity simulations, using black-hole masses and spins consistent with those inferred from LIGO’s measurement (Abbott et al LIGO Scientific Collaboration, Virgo Collaboration 2016 Phys. Rev. Lett. 116 241102). In particular, we employ two independent numerical-relativity codes that use completely different analytical and numerical methods to model the same merging black holes and to compute the emitted gravitational waveform; we find excellent agreement between the waveforms produced by the two independent codes. These results demonstrate the validity, impact, and potential of current and future studies using rapid-response, targeted numerical-relativity simulations for better understanding gravitational-wave observations.

  2. A novel method to identify pathways associated with renal cell carcinoma based on a gene co-expression network

    PubMed Central

    RUAN, XIYUN; LI, HONGYUN; LIU, BO; CHEN, JIE; ZHANG, SHIBAO; SUN, ZEQIANG; LIU, SHUANGQING; SUN, FAHAI; LIU, QINGYONG

    2015-01-01

    The aim of the present study was to develop a novel method for identifying pathways associated with renal cell carcinoma (RCC) based on a gene co-expression network. A framework was established where a co-expression network was derived from the database as well as various co-expression approaches. First, the backbone of the network based on differentially expressed (DE) genes between RCC patients and normal controls was constructed by the Search Tool for the Retrieval of Interacting Genes/Proteins (STRING) database. The differentially co-expressed links were detected by Pearson’s correlation, the empirical Bayesian (EB) approach and Weighted Gene Co-expression Network Analysis (WGCNA). The co-expressed gene pairs were merged by a rank-based algorithm. We obtained 842; 371; 2,883 and 1,595 co-expressed gene pairs from the co-expression networks of the STRING database, Pearson’s correlation, the EB method and WGCNA, respectively. Two hundred and eighty-one differentially co-expressed (DC) gene pairs were obtained from the merged network using this novel method. Pathway enrichment analysis based on the Kyoto Encyclopedia of Genes and Genomes (KEGG) database and the network enrichment analysis (NEA) method were performed to verify the feasibility of the merged method. Results of the KEGG and NEA pathway analyses showed that the network was associated with RCC. The suggested method was computationally efficient at identifying pathways associated with RCC and has been identified as a useful complement to traditional co-expression analysis. PMID:26058425
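    The rank-based merging of co-expressed gene pairs can be sketched as follows. The aggregation rule here (mean of per-method ranks over the methods that report a pair) is an illustrative assumption rather than the paper's exact algorithm, and the gene names are hypothetical.

```python
def merge_by_rank(method_scores, top_n):
    """Merge gene-pair lists produced by several co-expression methods.

    method_scores: one dict per method, mapping a gene pair (frozenset)
    to a score (higher = stronger co-expression).  Pairs are ranked
    within each method (rank 1 = best) and kept by mean rank across the
    methods that report them.
    """
    ranks = {}  # pair -> list of per-method ranks
    for scores in method_scores:
        ordered = sorted(scores, key=scores.get, reverse=True)
        for rank, pair in enumerate(ordered, start=1):
            ranks.setdefault(pair, []).append(rank)
    mean_rank = {p: sum(r) / len(r) for p, r in ranks.items()}
    return sorted(mean_rank, key=mean_rank.get)[:top_n]

# Hypothetical gene pairs scored by two methods.
pearson = {frozenset({"VHL", "HIF1A"}): 0.9, frozenset({"A", "B"}): 0.4}
wgcna = {frozenset({"VHL", "HIF1A"}): 0.8, frozenset({"C", "D"}): 0.7}
top = merge_by_rank([pearson, wgcna], top_n=2)
```

A pair ranked highly by both methods (here the hypothetical VHL/HIF1A pair) rises to the top of the merged list.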

  3. Changing Services and Space at an Academic Library

    ERIC Educational Resources Information Center

    Bradigan, Pamela S.; Rodman, Ruey L.

    2006-01-01

    This paper focuses on how an academic library managed changes in services and space to meet customer needs for streamlined services, increasing efficiency for students, faculty and staff in finding, analyzing, sharing, and producing knowledge. The Ohio State University's John A. Prior Health Sciences Library (PHSL) has merged its circulation and…

  4. Ambulatory care pavilion takes its place out front by solving multiple needs.

    PubMed

    Saukaitis, C A

    1994-09-01

    In sum, this structure exemplifies the fact that high-tech tertiary care medical centers can be user-friendly to the ambulatory health care consumer by serving their routine needs conveniently and efficiently. Says Gerald Miller, president of Crozer-Chester: "The ambulatory care pavilion has enabled Crozer to successfully and efficiently merge physicians' offices with institutional-based services and inpatient services. We are pleased with how the pavilion positions our medical center for the next century."

  5. Modeling the Compression of Merged Compact Toroids by Multiple Plasma Jets

    NASA Technical Reports Server (NTRS)

    Thio, Y. C. Francis; Knapp, Charles E.; Kirkpatrick, Ron; Rodgers, Stephen L. (Technical Monitor)

    2000-01-01

    A fusion propulsion scheme has been proposed that makes use of the merging of a spherical distribution of plasma jets to dynamically form a gaseous liner. The gaseous liner is used to implode a magnetized target to produce the fusion reaction in a standoff manner. In this paper, the merging of the plasma jets to form the gaseous liner is investigated numerically. The Los Alamos SPHINX code, based on the smoothed particle hydrodynamics method, is used to model the interaction of the jets. 2-D and 3-D simulations have been performed to study the characteristics of the resulting flow when these jets collide. The results show that the jets merge to form a plasma liner that converges radially and may be used to compress the central plasma to fusion conditions. Details of the computational model and the SPH numerical methods will be presented together with the numerical results.

  6. Characterization of Metering, Merging and Spacing Requirements for Future Trajectory-Based Operations

    NASA Technical Reports Server (NTRS)

    Johnson, Sally

    2017-01-01

    Trajectory-Based Operations (TBO) is one of the essential paradigm shifts in the NextGen transformation of the National Airspace System. Under TBO, aircraft are managed by 4-dimensional trajectories, and airborne and ground-based metering, merging, and spacing operations are key to managing those trajectories. This paper presents the results of a study of potential metering, merging, and spacing operations within a future TBO environment. A number of operational scenarios for tactical and strategic uses of metering, merging, and spacing are described, and interdependencies between concurrent tactical and strategic operations are identified.

  7. Generation, recognition, and consistent fusion of partial boundary representations from range images

    NASA Astrophysics Data System (ADS)

    Kohlhepp, Peter; Hanczak, Andrzej M.; Li, Gang

    1994-10-01

    This paper presents SOMBRERO, a new system for recognizing and locating 3D, rigid, non-moving objects from range data. The objects may be polyhedral or curved, partially occluding, touching or lying flush with each other. For data collection, we employ 2D time-of-flight laser scanners mounted to a moving gantry robot. By combining sensor and robot coordinates, we obtain 3D cartesian coordinates. Boundary representations (Brep's) provide view-independent geometry models that are both efficiently recognizable and derivable automatically from sensor data. SOMBRERO's methods for generating, matching and fusing Brep's are highly synergetic. A split-and-merge segmentation algorithm with dynamic triangulation builds a partial (2½D) Brep from scattered data. The recognition module matches this scene description with a model database and outputs recognized objects, their positions and orientations, and possibly surfaces corresponding to unknown objects. We present preliminary results in scene segmentation and recognition. Partial Brep's corresponding to different range sensors or viewpoints can be merged into a consistent, complete and irredundant 3D object or scene model. This fusion algorithm itself uses the recognition and segmentation methods.

  8. Hierarchical Image Segmentation of Remotely Sensed Data using Massively Parallel GNU-LINUX Software

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2003-01-01

    A hierarchical set of image segmentations is a set of several image segmentations of the same image at different levels of detail, in which the segmentations at coarser levels of detail can be produced from simple merges of regions at finer levels of detail. In [1], Tilton, et al. describe an approach for producing hierarchical segmentations (called HSEG) and give a progress report on exploiting these hierarchical segmentations for image information mining. The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which was described as early as 1989 by Beaulieu and Goldberg. The HSWO approach seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing (e.g. Horowitz and T. Pavlidis, [3]). In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the utility of the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) was devised, which includes special code to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. The recursive nature of RHSEG makes for a straightforward parallel implementation. This paper describes the HSEG algorithm, its recursive formulation (referred to as RHSEG), and the implementation of RHSEG using massively parallel GNU-LINUX software. Results with Landsat TM data are included comparing RHSEG with classic region growing.
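    The HSWO core that HSEG builds on can be sketched compactly: repeatedly merge the globally most similar pair of adjacent regions, and let the merge sequence define the segmentation hierarchy. The dissimilarity measure (difference of mean intensities) and the tiny region data below are illustrative choices, not the ones used by HSEG.

```python
def hswo_merge(regions, adjacency, dissim, n_final):
    """Hierarchical stepwise optimization sketch: at each step, merge
    the globally most similar pair of adjacent regions until n_final
    regions remain.  Stopping at different counts gives the different
    levels of detail in the segmentation hierarchy."""
    regions = {rid: list(vals) for rid, vals in regions.items()}
    adjacency = {frozenset(pair) for pair in adjacency}
    while len(regions) > n_final and adjacency:
        best = min(adjacency,
                   key=lambda p: dissim(*(regions[i] for i in p)))
        keep, gone = sorted(best)
        regions[keep].extend(regions.pop(gone))
        # Re-route the removed region's adjacencies to the kept region.
        adjacency = {frozenset(keep if i == gone else i for i in pair)
                     for pair in adjacency if pair != best}
        adjacency = {p for p in adjacency if len(p) == 2}
    return regions

# Mean-intensity dissimilarity between two pixel-value lists.
mean_diff = lambda u, v: abs(sum(u) / len(u) - sum(v) / len(v))
segs = hswo_merge({1: [10, 11], 2: [12], 3: [80]},
                  {(1, 2), (2, 3)}, mean_diff, n_final=2)
```

The two similar regions merge first, leaving the bright outlier region separate, which is the "most optimized merge first" behavior HSWO targets.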

  9. Airborne-Managed Spacing in Multiple Arrival Streams

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan; Abbott, Terence; Krishnamurthy, Karthik

    2004-01-01

    A significant bottleneck in the current air traffic system occurs at the runway. Expanding airports and adding new runways will help solve this problem; however, this comes at a significant cost, financially, politically and environmentally. A complementary solution is to safely increase the capacity of current runways. This can be achieved by precise spacing at the runway threshold with a resulting reduction in the spacing buffer required under today's operations. At the NASA Langley Research Center, the Advanced Air Transportation Technologies (AATT) Project is investigating airborne technologies and procedures that will assist the pilot in achieving precise spacing behind another aircraft. This new spacing clearance instructs the pilot to follow speed cues from a new on-board guidance system called Airborne Merging and Spacing for Terminal Arrivals (AMSTAR). AMSTAR receives Automatic Dependent Surveillance-Broadcast (ADS-B) reports from the leading aircraft and calculates the appropriate speed for the ownship to fly in order to achieve the desired spacing interval, time- or distance-based, at the runway threshold. Since the goal is overall system capacity, the speed guidance algorithm is designed to provide system benefit over individual efficiency. This paper discusses the concept of operations and design of AMSTAR to support airborne precision spacing. Results from the previous stage of development, focused only on in-trail spacing, are discussed along with the evolution of the concept to include merging of converging streams of traffic. This paper also examines how this operation might support future wake vortex-based separation and other advances in terminal area operations. Finally, the research plan for the merging capabilities, to be performed during the summer and fall of 2004, is presented.

  10. End-to-end simulation of bunch merging for a muon collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bao, Yu; Stratakis, Diktys; Hanson, Gail G.

    2015-05-03

    Muon accelerator beams are commonly produced indirectly through pion decay by interaction of a charged particle beam with a target. Efficient muon capture requires the muons to be first phase-rotated by rf cavities into a train of 21 bunches with much reduced energy spread. Since luminosity is proportional to the square of the number of muons per bunch, it is crucial for a Muon Collider to use relatively few bunches with many muons per bunch. In this paper we will describe a bunch merging scheme that should achieve this goal. We present for the first time a complete end-to-end simulation of a 6D bunch merger for a Muon Collider. The 21 bunches arising from the phase-rotator, after some initial cooling, are merged in longitudinal phase space into seven bunches, which then go through seven paths with different lengths and reach the final collecting "funnel" at the same time. The final single bunch has a transverse and a longitudinal emittance that matches well with the subsequent 6D rectilinear cooling scheme.

  11. Geographically weighted regression based methods for merging satellite and gauge precipitation

    NASA Astrophysics Data System (ADS)

    Chao, Lijun; Zhang, Ke; Li, Zhijia; Zhu, Yuelong; Wang, Jingfeng; Yu, Zhongbo

    2018-03-01

    Real-time precipitation data with high spatiotemporal resolutions are crucial for accurate hydrological forecasting. To improve the spatial resolution and quality of satellite precipitation, a three-step satellite and gauge precipitation merging method was formulated in this study: (1) bilinear interpolation is first applied to downscale coarser satellite precipitation to a finer resolution (PS); (2) the (mixed) geographically weighted regression methods coupled with a weighting function are then used to estimate biases of PS as functions of gauge observations (PO) and PS; and (3) biases of PS are finally corrected to produce a merged precipitation product. Based on the above framework, eight algorithms, a combination of two geographically weighted regression methods and four weighting functions, are developed to merge CMORPH (CPC MORPHing technique) precipitation with station observations on a daily scale in the Ziwuhe Basin of China. The geographical variables (elevation, slope, aspect, surface roughness, and distance to the coastline) and a meteorological variable (wind speed) were used for merging precipitation to avoid the artificial spatial autocorrelation resulting from traditional interpolation methods. The results show that the combination of the MGWR and BI-square function (MGWR-BI) has the best performance (R = 0.863 and RMSE = 7.273 mm/day) among the eight algorithms. The MGWR-BI algorithm was then applied to produce hourly merged precipitation product. Compared to the original CMORPH product (R = 0.208 and RMSE = 1.208 mm/hr), the quality of the merged data is significantly higher (R = 0.724 and RMSE = 0.706 mm/hr). The developed merging method not only improves the spatial resolution and quality of the satellite product but also is easy to implement, which is valuable for hydrological modeling and other applications.
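    The bias-correction idea behind steps (2) and (3) can be illustrated with a much simpler stand-in: compute the gauge-minus-satellite bias at each station and spread it over the grid with inverse-distance weights. The paper instead fits (mixed) geographically weighted regressions on terrain and wind covariates, so everything below (the weighting, the toy field, the gauge values) is an illustrative simplification.

```python
import numpy as np

def merge_satellite_gauge(sat, gauges, power=2.0):
    """Correct a satellite precipitation field with gauge observations.

    sat: 2-D array of satellite precipitation, already interpolated to
         the target grid (i.e. the bilinear-downscaling step is assumed
         done).
    gauges: list of (row, col, observed) tuples.
    The bias at each gauge is spread over the grid with inverse-distance
    weights -- a simple surrogate for geographically weighted regression.
    """
    rows, cols = np.indices(sat.shape)
    num = np.zeros(sat.shape)
    den = np.zeros(sat.shape)
    for r, c, obs in gauges:
        bias = obs - sat[r, c]
        dist = np.hypot(rows - r, cols - c)
        w = 1.0 / (dist ** power + 1e-6)  # avoid division by zero at the gauge
        num += w * bias
        den += w
    return np.clip(sat.astype(float) + num / den, 0.0, None)  # no negative rain

sat = np.full((5, 5), 4.0)            # uniformly biased satellite field (mm)
gauges = [(0, 0, 6.0), (4, 4, 6.0)]   # both gauges read 2 mm higher
merged = merge_satellite_gauge(sat, gauges)
```

With a spatially uniform bias, the correction recovers the gauge value everywhere; with differing gauge biases, the correction varies smoothly across the grid.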

  12. Web-Scale Search-Based Data Extraction and Integration

    DTIC Science & Technology

    2011-10-17

    differently, posing challenges for aggregating this information. For example, for the task of finding population for cities in Benin, we were faced with...merged record. Our GeoMerging algorithm attempts to address various ambiguity challenges: • For name: The name of a hospital is not a unique...departments in the same building. For agent-extractor results from structured sources, our GeoMerging algorithm overcomes these challenges using a two

  13. Analytical network process based optimum cluster head selection in wireless sensor network.

    PubMed

    Farman, Haleem; Javed, Huma; Jan, Bilal; Ahmad, Jamil; Ali, Shaukat; Khalil, Falak Naz; Khan, Murad

    2017-01-01

    Wireless Sensor Networks (WSNs) are becoming ubiquitous in everyday life due to their applications in weather forecasting, surveillance, implantable health-monitoring sensors, and a plethora of other applications. A WSN comprises hundreds or thousands of small sensor nodes. As the size of a sensor node decreases, critical issues such as limited energy, computation time and limited memory become even more pronounced. In such a case, network lifetime mainly depends on efficient use of available resources. Organizing nearby nodes into clusters makes it convenient to efficiently manage each cluster as well as the overall network. In this paper, we extend our previous work on a grid-based hybrid network deployment approach, in which a merge-and-split technique was proposed to construct the network topology. Building on that topology, we use the analytical network process (ANP) model for cluster head (CH) selection in WSNs. Five distinct parameters are considered for CH selection: distance from nodes (DistNode), residual energy level (REL), distance from centroid (DistCent), number of times the node has been selected as cluster head (TCH) and merged node (MN). The problem of CH selection based on these parameters is tackled as a multi-criteria decision system, for which the ANP method is used for optimum cluster head selection. The main contribution of this work is to check the applicability of the ANP model for cluster head selection in WSNs. In addition, sensitivity analysis is carried out to check the stability of alternatives (available candidate nodes) and their ranking for different scenarios. The simulation results show that the proposed method outperforms existing energy-efficient clustering protocols in terms of optimum CH selection and in minimizing the CH reselection process, which extends the overall network lifetime. The paper also shows that the ANP method supports CH selection with a better understanding of the dependencies among the components involved in the evaluation process.
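
    ANP itself works with pairwise-comparison supermatrices; as a rough illustration of ranking candidate cluster heads on the five criteria named above, the sketch below substitutes a simple min-max-normalized weighted sum. The criteria keys, weights, and function names are hypothetical, not from the paper.

```python
BENEFIT = {"rel", "mn"}                    # higher is better
COST = {"dist_node", "dist_cent", "tch"}   # lower is better

def select_cluster_head(candidates, weights):
    """Rank candidate nodes by a weighted sum of min-max-normalized criteria.
    `candidates` maps node -> {criterion: value}; `weights` maps criterion -> weight."""
    crits = set(weights)
    lo = {c: min(v[c] for v in candidates.values()) for c in crits}
    hi = {c: max(v[c] for v in candidates.values()) for c in crits}

    def norm(c, x):
        if hi[c] == lo[c]:
            return 1.0
        s = (x - lo[c]) / (hi[c] - lo[c])
        return s if c in BENEFIT else 1.0 - s  # invert cost criteria

    scores = {n: sum(weights[c] * norm(c, v[c]) for c in crits)
              for n, v in candidates.items()}
    return max(scores, key=scores.get), scores
```

    A weighted sum ignores the inter-criteria dependencies that motivate ANP; it only conveys the multi-criteria ranking idea.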

  14. Analytical network process based optimum cluster head selection in wireless sensor network

    PubMed Central

    Javed, Huma; Jan, Bilal; Ahmad, Jamil; Ali, Shaukat; Khalil, Falak Naz; Khan, Murad

    2017-01-01

    Wireless Sensor Networks (WSNs) are becoming ubiquitous in everyday life due to their applications in weather forecasting, surveillance, implantable health-monitoring sensors, and a plethora of other applications. A WSN comprises hundreds or thousands of small sensor nodes. As the size of a sensor node decreases, critical issues such as limited energy, computation time and limited memory become even more pronounced. In such a case, network lifetime mainly depends on efficient use of available resources. Organizing nearby nodes into clusters makes it convenient to efficiently manage each cluster as well as the overall network. In this paper, we extend our previous work on a grid-based hybrid network deployment approach, in which a merge-and-split technique was proposed to construct the network topology. Building on that topology, we use the analytical network process (ANP) model for cluster head (CH) selection in WSNs. Five distinct parameters are considered for CH selection: distance from nodes (DistNode), residual energy level (REL), distance from centroid (DistCent), number of times the node has been selected as cluster head (TCH) and merged node (MN). The problem of CH selection based on these parameters is tackled as a multi-criteria decision system, for which the ANP method is used for optimum cluster head selection. The main contribution of this work is to check the applicability of the ANP model for cluster head selection in WSNs. In addition, sensitivity analysis is carried out to check the stability of alternatives (available candidate nodes) and their ranking for different scenarios. The simulation results show that the proposed method outperforms existing energy-efficient clustering protocols in terms of optimum CH selection and in minimizing the CH reselection process, which extends the overall network lifetime. The paper also shows that the ANP method supports CH selection with a better understanding of the dependencies among the components involved in the evaluation process. PMID:28719616

  15. Homotopy Algorithm for Fixed Order Mixed H2/H(infinity) Design

    NASA Technical Reports Server (NTRS)

    Whorton, Mark; Buschek, Harald; Calise, Anthony J.

    1996-01-01

    Recent developments in the field of robust multivariable control have merged the theories of H-infinity and H-2 control. This mixed H-2/H-infinity compensator formulation allows design for nominal performance by H-2 norm minimization while guaranteeing robust stability to unstructured uncertainties by constraining the H-infinity norm. A key difficulty associated with mixed H-2/H-infinity compensation is compensator synthesis. A homotopy algorithm is presented for synthesis of fixed order mixed H-2/H-infinity compensators. Numerical results are presented for a four disk flexible structure to evaluate the efficiency of the algorithm.

  16. The New Universities of Russia: Problems and Solutions

    ERIC Educational Resources Information Center

    Kiroi, V. N.

    2011-01-01

    Russian universities do poorly in international rankings, and this will hurt Russia's ability to compete successfully in the global market. One way to improve this situation is to create new universities by merging institutions and organizing them in innovative ways to become more efficient and effective. [This article was translated…

  17. Multiplicity in public health supply systems: a learning agenda.

    PubMed

    Bornbusch, Alan; Bates, James

    2013-08-01

    Supply chain integration (merging products for health programs into a single supply chain) tends to be the dominant model in health sector reform. However, multiplicity in a supply system may be justified as a risk management strategy that can better ensure product availability, advance specific health program objectives, and increase efficiency.

  18. Result Merging Strategies for a Current News Metasearcher.

    ERIC Educational Resources Information Center

    Rasolofo, Yves; Hawking, David; Savoy, Jacques

    2003-01-01

    Metasearching of online current news services is a potentially useful Web application of distributed information retrieval techniques. Reports experiences in building a metasearcher designed to provide up-to-date searching over a significant number of rapidly changing current news sites, focusing on how to merge results from the search engines at…
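
    A common baseline for merging results in such a metasearcher, shown here only as a generic illustration (not the specific strategies evaluated in the article), is min-max score normalization per engine followed by summing normalized scores for documents returned by more than one engine:

```python
def merge_results(result_lists):
    """Merge ranked (doc, score) lists from several search engines.

    Each engine's scores are min-max normalized to [0, 1] so that scores on
    incompatible scales become comparable; a document found by several
    engines accumulates evidence from each of them."""
    merged = {}
    for results in result_lists:
        scores = [s for _, s in results]
        lo, hi = min(scores), max(scores)
        for doc, s in results:
            norm = (s - lo) / (hi - lo) if hi > lo else 1.0
            merged[doc] = merged.get(doc, 0.0) + norm
    # highest combined score first
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
```

    Real metasearchers refine this with collection weights and document overlap handling, but the normalize-then-combine skeleton is the same.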

  19. LDA merging and splitting with applications to multiagent cooperative learning and system alteration.

    PubMed

    Pang, Shaoning; Ban, Tao; Kadobayashi, Youki; Kasabov, Nikola K

    2012-04-01

    To adapt linear discriminant analysis (LDA) to real-world applications, there is a pressing need to equip it with an incremental learning ability to integrate knowledge presented by one-pass data streams, a functionality to join multiple LDA models so that knowledge sharing between independent learning agents becomes more efficient, and a forgetting functionality to avoid reconstruction of the overall discriminant eigenspace caused by irregular changes. To this end, we introduce two adaptive LDA learning methods: LDA merging and LDA splitting. These provide online learning from one-pass data streams, class separability identical to that of the batch learning method, high efficiency in knowledge sharing due to the condensed knowledge representation of the eigenspace model, and more favorable time and storage costs than traditional approaches under common application conditions. These properties are validated by experiments on a benchmark face image data set. Through a case study applying the proposed methods to multiagent cooperative learning and system alteration of a face recognition system, we further clarify their adaptability to complex dynamic learning tasks.
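
    One standard building block behind merging discriminant models is exact pooling of per-class sufficient statistics (count, mean, scatter matrix). The sketch below assumes each agent keeps such summaries; this is a simplification of the paper's eigenspace models, and the names are invented. The pooled mean and within-class scatter are exact, so discriminant directions recomputed from the merged summary match batch training on the combined data.

```python
import numpy as np

def merge_class_stats(a, b):
    """Merge two per-class summaries {label: (count, mean, scatter)}.

    scatter is sum_i (x_i - mean)(x_i - mean)^T per class; merging uses the
    parallel-axis correction term (na*nb/n) * d d^T with d = mean_a - mean_b."""
    merged = dict(a)
    for label, (nb, mb, sb) in b.items():
        if label not in merged:
            merged[label] = (nb, mb.copy(), sb.copy())
            continue
        na, ma, sa = merged[label]
        n = na + nb
        m = (na * ma + nb * mb) / n
        d = (ma - mb).reshape(-1, 1)
        s = sa + sb + (na * nb / n) * (d @ d.T)  # parallel-axis correction
        merged[label] = (n, m, s)
    return merged
```

    Because no raw samples are exchanged, the representation is condensed in the same spirit as the eigenspace-model merging described above, though the actual paper merges eigenspaces rather than full scatter matrices.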

  20. A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.

    PubMed

    Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F

    2018-03-01

    Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high dynamic range or floating-point values. Efficient sequential algorithms exist to build max-trees and compute attributes for images of any bit depth. However, we show that current parallel algorithms already perform poorly on integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. This structure is then used in a parallel leaf-to-root approach to compute the final max-tree efficiently and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance on both simulated and actual 2D images and 3D volumes. Execution times improve on those of the fastest sequential algorithm, and speed-up grows with the number of threads, up to 64.

  1. The prediction of crystal structure by merging knowledge methods with first principles quantum mechanics

    NASA Astrophysics Data System (ADS)

    Ceder, Gerbrand

    2007-03-01

    The prediction of structure is a key problem in computational materials science that forms the platform on which rational materials design can be performed. Finding structure by traditional optimization methods on quantum mechanical energy models is not possible due to the complexity and high dimensionality of the coordinate space. An unusual but efficient solution to this problem can be obtained by merging ideas from heuristic and ab initio methods: in the same way that scientists build empirical rules by observing experimental trends, we have developed machine learning approaches that extract knowledge from a large set of experimental information and a database of over 15,000 first-principles computations, and use these to rapidly direct accurate quantum mechanical techniques to the lowest-energy crystal structure of a material. Knowledge is captured in a Bayesian probability network that relates the probability of finding a particular crystal structure at a given composition to structure and energy information at other compositions. We show that this approach is highly efficient in finding the ground states of binary metallic alloys and can be easily generalized to more complex systems.

  2. Interactions of a co-rotating vortex pair at multiple offsets

    NASA Astrophysics Data System (ADS)

    Forster, Kyle J.; Barber, Tracie J.; Diasinos, Sammy; Doig, Graham

    2017-05-01

    Two NACA0012 vanes at various lateral offsets were investigated by wind tunnel testing to observe the interactions between their streamwise vortices. The vanes were separated by nine chord lengths in the streamwise direction to allow the upstream vortex to impact the downstream geometry. The vanes were evaluated at an angle of incidence of 8° and a Reynolds number of 7 × 10^4 using particle image velocimetry. A helical motion of the vortices was observed, with the rotational rate increasing as the offset was reduced to the point of vortex merging. Downstream meandering of the weaker vortex was found to increase in magnitude near the point of vortex merging. The merging process occurred more rapidly when the upstream vortex was passed on the pressure side of the vane, with the downstream vortex being produced with less circulation and consequently merging into the upstream vortex. The merging distance was found to be a statistical rather than a deterministic quantity, indicating that the meandering of the vortices affected their separations and energies. This resulted in a fluctuation of the merging location. A loss of circulation associated with the merging process was identified, with the process of achieving vortex circularity causing vorticity diffusion; however, all merged cases maintained higher circulation than the single-vortex condition. The presence of the upstream vortex was found to reduce the strength of the downstream vortex at all offsets evaluated.

  3. Alternative Beam Efficiency Calculations for a Large-aperture Multiple-frequency Microwave Radiometer (LAMMR)

    NASA Technical Reports Server (NTRS)

    Schmidt, R. F.

    1979-01-01

    The fundamental definition of beam efficiency, given in terms of a far field radiation pattern, was used to develop alternative definitions which improve accuracy, reduce the amount of calculation required, and isolate the separate factors composing beam efficiency. Well-known definitions of aperture efficiency were introduced successively to simplify the denominator of the fundamental definition. The superposition of complex vector spillover and backscattered fields was examined, and beam efficiency analysis in terms of power patterns was carried out. An extension from single to dual reflector geometries was included. It is noted that the alternative definitions are advantageous in the mathematical simulation of a radiometer system, and are not intended for the measurements discipline where fields have merged and therefore lost their identity.

  4. Comparison of online and offline based merging methods for high resolution rainfall intensities

    NASA Astrophysics Data System (ADS)

    Shehu, Bora; Haberlandt, Uwe

    2016-04-01

    Accurate rainfall intensities with high spatial and temporal resolution are crucial for urban flow prediction. Commonly, raw or bias-corrected radar fields are used for forecasting, while different merging products are employed for simulation. Merging products have proven adequate for estimating rainfall intensities; however, their application in forecasting is limited because they were developed for offline use. This study aims to adapt and refine offline merging techniques for online implementation, and to compare the performance of these methods on high-resolution rainfall data. Radar bias correction based on mean fields and quantile mapping are analyzed individually and are also implemented in conditional merging. Special attention is given to the impact of different spatial and temporal filters on the predictive skill of all methods. Raw radar data and kriging interpolation of station data are used as references to check the benefit of the merged products. The methods are applied to several extreme events in the period 2006-2012 caused by different meteorological conditions, and their performance is evaluated by split sampling. The study area lies within the 112-km radius of the Hannover radar in Lower Saxony, Germany, and the data set consists of 80 recording stations at 5-min time steps. The results of this study reveal how the performance of the methods is affected by the adjustment of the radar data, the choice of merging method, and the selected event. Merging techniques can be used to improve the performance of online rainfall estimation, which opens the way for the application of merging products in forecasting.
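
    As a minimal illustration of the two ingredients discussed above, the sketch below applies a single multiplicative mean-field bias factor to the radar field, then performs conditional merging with inverse-distance weighting as a stand-in for the kriging normally used. Function names and conventions are invented for the example.

```python
import numpy as np

def idw(points, values, grid_shape, p=2.0):
    """Inverse-distance interpolation of point values onto a grid
    (a simple stand-in for the kriging step of conditional merging)."""
    yy, xx = np.mgrid[0:grid_shape[0], 0:grid_shape[1]].astype(float)
    num = np.zeros(grid_shape)
    den = np.zeros(grid_shape)
    for (gy, gx), v in zip(points, values):
        d = np.hypot(yy - gy, xx - gx)
        w = 1.0 / np.maximum(d, 1e-6) ** p
        num += w * v
        den += w
    return num / den

def conditional_merge(radar, gauges, gauge_obs):
    """Mean-field bias correction, then conditional merging:
    merged = adjusted radar + interpolated (gauge - adjusted radar at gauges),
    so the field honors gauge values while keeping the radar's spatial pattern."""
    at_g = np.array([radar[gy, gx] for gy, gx in gauges])
    bias = np.sum(gauge_obs) / max(np.sum(at_g), 1e-12)  # mean-field factor
    radar_adj = radar * bias
    resid = np.array(gauge_obs) - radar_adj[tuple(np.array(gauges).T)]
    return radar_adj + idw(gauges, resid, radar.shape)
```

    In an online setting the bias factor and residual field would be refreshed each time step as new gauge data arrive.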

  5. Recursive Hierarchical Image Segmentation by Region Growing and Constrained Spectral Clustering

    NASA Technical Reports Server (NTRS)

    Tilton, James C.

    2002-01-01

    This paper describes an algorithm for hierarchical image segmentation (referred to as HSEG) and its recursive formulation (referred to as RHSEG). The HSEG algorithm is a hybrid of region growing and constrained spectral clustering that produces a hierarchical set of image segmentations based on detected convergence points. In the main, HSEG employs the hierarchical stepwise optimization (HSWO) approach to region growing, which seeks to produce segmentations that are more optimized than those produced by more classic approaches to region growing. In addition, HSEG optionally interjects, between HSWO region growing iterations, merges between spatially non-adjacent regions (i.e., spectrally based merging or clustering) constrained by a threshold derived from the previous HSWO region growing iteration. While the addition of constrained spectral clustering improves the segmentation results, especially for larger images, it also significantly increases HSEG's computational requirements. To counteract this, a computationally efficient recursive, divide-and-conquer implementation of HSEG (RHSEG) has been devised and is described herein. Included in this description is the special code required to avoid processing artifacts caused by RHSEG's recursive subdivision of the image data. Implementations for single-processor and multiple-processor computer systems are described. Results with Landsat TM data are included, comparing HSEG with classic region growing. Finally, an application to image information mining and knowledge discovery is discussed.
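
    The HSWO region-growing step that HSEG builds on can be caricatured as: start with one region per pixel and repeatedly merge the globally most-similar pair of adjacent regions. The toy version below (quadratic-time, mean-difference dissimilarity, invented names) conveys only that core loop, not HSEG's spectral clustering or convergence-point machinery.

```python
import numpy as np

def hswo_segment(image, n_regions):
    """Merge the globally most-similar pair of adjacent regions until
    only `n_regions` remain. Dissimilarity = absolute difference of means."""
    h, w = image.shape
    labels = np.arange(h * w).reshape(h, w)
    stats = {i: (1, float(v)) for i, v in enumerate(image.ravel())}  # (size, sum)

    def mean(r):
        n, s = stats[r]
        return s / n

    while len(stats) > n_regions:
        # collect pairs of adjacent regions from the current label image
        pairs = set()
        for y in range(h):
            for x in range(w):
                for dy, dx in ((0, 1), (1, 0)):
                    yy, xx = y + dy, x + dx
                    if yy < h and xx < w and labels[y, x] != labels[yy, xx]:
                        pairs.add(tuple(sorted((labels[y, x], labels[yy, xx]))))
        a, b = min(pairs, key=lambda p: abs(mean(p[0]) - mean(p[1])))
        labels[labels == b] = a                     # merge b into a
        na, sa = stats[a]
        nb, sb = stats.pop(b)
        stats[a] = (na + nb, sa + sb)
    return labels
```

    Production implementations maintain a region adjacency graph and a priority queue instead of rescanning the image each iteration, which is where RHSEG's divide-and-conquer recursion pays off.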

  6. Tightly integrated single- and multi-crystal data collection strategy calculation and parallelized data processing in JBluIce beamline control system

    PubMed Central

    Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; Hilgart, Mark C.; Stepanov, Sergey; Sanishvili, Ruslan; Becker, Michael; Winter, Graeme; Sauter, Nicholas K.; Smith, Janet L.; Fischetti, Robert F.

    2014-01-01

    The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce. PMID:25484844

  7. Measures of health sciences journal use: a comparison of vendor, link-resolver, and local citation statistics*

    PubMed Central

    De Groote, Sandra L.; Blecic, Deborah D.; Martin, Kristin

    2013-01-01

    Objective: Libraries require efficient and reliable methods to assess journal use. Vendors provide complete counts of articles retrieved from their platforms. However, if a journal is available on multiple platforms, several sets of statistics must be merged. Link-resolver reports merge data from all platforms into one report but only record partial use because users can access library subscriptions from other paths. Citation data are limited to publication use. Vendor, link-resolver, and local citation data were examined to determine correlation. Because link-resolver statistics are easy to obtain, the study library especially wanted to know if they correlate highly with the other measures. Methods: Vendor, link-resolver, and local citation statistics for the study institution were gathered for health sciences journals. Spearman rank-order correlation coefficients were calculated. Results: There was a high positive correlation between all three data sets, with vendor data commonly showing the highest use. However, a small percentage of titles showed anomalous results. Discussion and Conclusions: Link-resolver data correlate well with vendor and citation data, but due to anomalies, low link-resolver data would best be used to suggest titles for further evaluation using vendor data. Citation data may not be needed as it correlates highly with other measures. PMID:23646026

  8. Tightly integrated single- and multi-crystal data collection strategy calculation and parallelized data processing in JBluIce beamline control system

    DOE PAGES

    Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; ...

    2014-11-18

    The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce.

  9. Drivers’ Visual Characteristics when Merging onto or Exiting an Urban Expressway

    PubMed Central

    Cheng, Ying; Gao, Li; Zhao, Yanan; Du, Feng

    2016-01-01

    The aim of this study is to examine drivers’ visual and driving behavior while merging onto or exiting an urban expressway with low and high traffic densities. The analysis was conducted according to three periods (approaching, merging or exiting, and accelerating or decelerating). A total of 10 subjects (8 males and 2 females) with ages ranging from 25 to 52 years old (M = 30.0 years old) participated in the study. The research was conducted in a natural driving situation, and the drivers’ eye movements were monitored and recorded using an eye tracking system. The results show that the influence of traffic density on the glance duration and scan duration is more significant when merging than when exiting. The results also demonstrate that the number of glances and the mean glance duration are mainly related to the driving task (e.g., the merging period). Therefore, drivers’ visual search strategies mainly depend on the current driving task. With regard to driving behavior, the variation tendencies of the duration and the velocity of each period are similar. These results support building an automated driving assistant system that can automatically identify gaps and accelerate or decelerate the car accordingly or provide suggestions to the driver to do so. PMID:27657888

  10. Changing cluster composition in cluster randomised controlled trials: design and analysis considerations

    PubMed Central

    2014-01-01

    Background There are many methodological challenges in the conduct and analysis of cluster randomised controlled trials, but one that has received little attention is that of post-randomisation changes to cluster composition. To illustrate this, we focus on the issue of cluster merging, considering the impact on the design, analysis and interpretation of trial outcomes. Methods We explored the effects of merging clusters on study power using standard methods of power calculation. We assessed the potential impacts on study findings of both homogeneous cluster merges (involving clusters randomised to the same arm of a trial) and heterogeneous merges (involving clusters randomised to different arms of a trial) by simulation. To determine the impact on bias and precision of treatment effect estimates, we applied standard methods of analysis to different populations under analysis. Results Cluster merging produced a systematic reduction in study power. This effect depended on the number of merges and was most pronounced when variability in cluster size was at its greatest. Simulations demonstrate that the impact on analysis was minimal when cluster merges were homogeneous, with impact on study power being balanced by a change in observed intracluster correlation coefficient (ICC). We found a decrease in study power when cluster merges were heterogeneous, and the estimate of treatment effect was attenuated. Conclusions Examples of cluster merges found in previously published reports of cluster randomised trials were typically homogeneous rather than heterogeneous. Simulations demonstrated that trial findings in such cases would be unbiased. However, simulations also showed that any heterogeneous cluster merges would introduce bias that would be hard to quantify, as well as having negative impacts on the precision of estimates obtained. Further methodological development is warranted to better determine how to analyse such trials appropriately. 
Interim recommendations include avoidance of cluster merges where possible, discontinuation of clusters following heterogeneous merges, allowance for potential loss of clusters and additional variability in cluster size in the original sample size calculation, and use of appropriate ICC estimates that reflect cluster size. PMID:24884591
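
    The power loss from merging can be illustrated with the usual design effect 1 + (m - 1) * ICC: merging two clusters enlarges m for the combined cluster, which shrinks the effective sample size. The sketch below (simplified per-cluster design effects, invented function names) compares effective n before and after homogeneous merges.

```python
def effective_sample_size(cluster_sizes, icc):
    """Sum of m / (1 + (m - 1) * ICC) over clusters, so unequal
    cluster sizes are handled cluster by cluster."""
    return sum(m / (1 + (m - 1) * icc) for m in cluster_sizes)

def merge_effect(cluster_sizes, merges, icc):
    """Effective sample size before and after combining the index
    pairs listed in `merges` (assumed disjoint)."""
    before = effective_sample_size(cluster_sizes, icc)
    sizes = list(cluster_sizes)
    merged = [sizes[i] + sizes[j] for i, j in merges]
    keep = [s for k, s in enumerate(sizes)
            if not any(k in pair for pair in merges)]
    after = effective_sample_size(keep + merged, icc)
    return before, after
```

    This simplified calculation holds the ICC fixed; as the simulations above note, homogeneous merges also change the observed ICC, which partly offsets the power loss.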

  11. Safety performance functions for freeway merge zones.

    DOT National Transportation Integrated Search

    2013-12-01

    This report documents the results of a research project to support CDOT in the area of Safety : Performance Function (SPF) development. The project involved collecting data and developing SPFs for : ramp-freeway merge zones categorized as isolated, n...

  12. A video multitracking system for quantification of individual behavior in a large fish shoal: advantages and limits.

    PubMed

    Delcourt, Johann; Becco, Christophe; Vandewalle, Nicolas; Poncin, Pascal

    2009-02-01

    The capability of a new multitracking system to track a large number of unmarked fish (up to 100) is evaluated. This system extrapolates a trajectory from each individual and analyzes recorded sequences that are several minutes long. This system is very efficient in statistical individual tracking, where the individual's identity is important for a short period of time in comparison with the duration of the track. Individual identification is typically greater than 99%. Identification is largely efficient (more than 99%) when the fish images do not cross the image of a neighbor fish. When the images of two fish merge (occlusion), we consider that the spot on the screen has a double identity. Consequently, there are no identification errors during occlusions, even though the measurement of the positions of each individual is imprecise. When the images of these two merged fish separate (separation), individual identification errors are more frequent, but their effect is very low in statistical individual tracking. On the other hand, in complete individual tracking, where individual fish identity is important for the entire trajectory, each identification error invalidates the results. In such cases, the experimenter must observe whether the program assigns the correct identification, and, when an error is made, must edit the results. This work is not too costly in time because it is limited to the separation events, accounting for fewer than 0.1% of individual identifications. Consequently, in both statistical and rigorous individual tracking, this system allows the experimenter to gain time by measuring the individual position automatically. It can also analyze the structural and dynamic properties of an animal group with a very large sample, with precision and sampling that are impossible to obtain with manual measures.

  13. Hybrid region merging method for segmentation of high-resolution remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi; Wang, Jiangeng; Wang, Zuo

    2014-12-01

    Image segmentation remains a challenging problem for object-based image analysis. In this paper, a hybrid region merging (HRM) method is proposed to segment high-resolution remote sensing images. HRM integrates the advantages of global-oriented and local-oriented region merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region, which provides an elegant way to avoid the problem of starting point assignment and to enhance the optimization ability for local-oriented region merging. During the region growing procedure, the merging iterations are constrained within the local vicinity, so that the segmentation is accelerated and can reflect the local context, as compared with the global-oriented method. A set of high-resolution remote sensing images is used to test the effectiveness of the HRM method, and three region-based remote sensing image segmentation methods are adopted for comparison, including the hierarchical stepwise optimization (HSWO) method, the local-mutual best region merging (LMM) method, and the multiresolution segmentation (MRS) method embedded in eCognition Developer software. Both the supervised evaluation and visual assessment show that HRM performs better than HSWO and LMM by combining both their advantages. The segmentation results of HRM and MRS are visually comparable, but HRM can describe objects as single regions better than MRS, and the supervised and unsupervised evaluation results further prove the superiority of HRM.

  14. Simulation Results for Airborne Precision Spacing along Continuous Descent Arrivals

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.; Baxley, Brian T.

    2008-01-01

    This paper describes the results of a fast-time simulation experiment and a high-fidelity simulator validation with merging streams of aircraft flying Continuous Descent Arrivals through generic airspace to a runway at Dallas-Ft Worth. Aircraft made small speed adjustments based on an airborne-based spacing algorithm, so as to arrive at the threshold exactly at the assigned time interval behind their Traffic-To-Follow. The 40 aircraft were initialized at different altitudes and speeds on one of four different routes, and then merged at different points and altitudes while flying Continuous Descent Arrivals. This merging and spacing using flight deck equipment and procedures to augment or implement Air Traffic Management directives is called Flight Deck-based Merging and Spacing, an important subset of a larger Airborne Precision Spacing functionality. This research indicates that Flight Deck-based Merging and Spacing initiated while at cruise altitude and well prior to the Terminal Radar Approach Control entry can significantly contribute to the delivery of aircraft at a specified interval to the runway threshold with a high degree of accuracy and at a reduced pilot workload. Furthermore, previously documented work has shown that using a Continuous Descent Arrival instead of a traditional step-down descent can save fuel, reduce noise, and reduce emissions. Research into Flight Deck-based Merging and Spacing is a cooperative effort between government and industry partners.
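
    A drastically simplified caricature of such an airborne spacing law (not the algorithm actually flown in the experiment; names, gain, and the ±10% clamp are all invented) adjusts speed in proportion to the interval error behind the Traffic-To-Follow:

```python
def spacing_speed_command(predicted_interval, assigned_interval,
                          nominal_speed, gain=0.5):
    """If the predicted interval behind the traffic-to-follow exceeds the
    assigned interval, command a higher speed; if it falls short, slow down.
    Commands are clamped to +/-10% of nominal, mimicking the 'small speed
    adjustments' described above. Intervals in seconds, speeds in knots."""
    err = (predicted_interval - assigned_interval) / assigned_interval
    cmd = nominal_speed * (1.0 + gain * err)
    lo, hi = 0.9 * nominal_speed, 1.1 * nominal_speed
    return min(max(cmd, lo), hi)
```

    A real implementation would also account for the descent profile, wind forecasts, and command hysteresis to keep pilot workload low.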

  15. Resource cost results for one-way entanglement distillation and state merging of compound and arbitrarily varying quantum sources

    NASA Astrophysics Data System (ADS)

    Boche, H.; Janßen, G.

    2014-08-01

    We consider one-way quantum state merging and entanglement distillation under compound and arbitrarily varying source models. Regarding quantum compound sources, where the source is memoryless but the source state is an unknown member of a certain set of density matrices, we continue investigations begun in the work of Bjelaković et al. ["Universal quantum state merging," J. Math. Phys. 54, 032204 (2013)] and determine the classical as well as entanglement cost of state merging. We further investigate quantum state merging and entanglement distillation protocols for arbitrarily varying quantum sources (AVQS). In the AVQS model, the source state is assumed to vary in an arbitrary manner for each source output due to environmental fluctuations or adversarial manipulation. We determine the one-way entanglement distillation capacity for AVQS, where we invoke the famous robustification and elimination techniques introduced by Ahlswede. Regarding quantum state merging for AVQS, we show by example that the robustification and elimination based approach generally leads to suboptimal entanglement as well as classical communication rates.

  16. Query Transformations for Result Merging

    DTIC Science & Technology

    2014-11-01

    tors, term dependence, query expansion 1. INTRODUCTION Federated search deals with the problem of aggregating results from multiple search engines. The... individual search engines are (i) typically focused on a particular domain or a particular corpus, (ii) employ diverse retrieval models, and (iii)... determine which search engines are appropriate for addressing the information need (resource selection), and (ii) merging the results returned by
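
    A minimal sketch of the baseline result-merging step that federated search must perform: normalize each engine's scores so they are comparable, then merge by best normalized score. The min-max normalization and tie handling here are illustrative assumptions, not the paper's query-transformation method:

```python
def minmax_normalize(results):
    """Scale one engine's scores into [0, 1] to make engines comparable."""
    scores = [s for _, s in results]
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [(doc, 1.0) for doc, _ in results]
    return [(doc, (s - lo) / (hi - lo)) for doc, s in results]

def merge_results(result_lists):
    """Merge ranked lists from several engines, keeping each document's
    best normalized score, and return a single ranking."""
    best = {}
    for results in result_lists:
        for doc, s in minmax_normalize(results):
            best[doc] = max(s, best.get(doc, 0.0))
    return sorted(best.items(), key=lambda kv: kv[1], reverse=True)
```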

  17. Electro-Fermentation - Merging Electrochemistry with Fermentation in Industrial Applications.

    PubMed

    Schievano, Andrea; Pepé Sciarria, Tommy; Vanbroekhoven, Karolien; De Wever, Heleen; Puig, Sebastià; Andersen, Stephen J; Rabaey, Korneel; Pant, Deepak

    2016-11-01

    Electro-fermentation (EF) merges traditional industrial fermentation with electrochemistry. An imposed electrical field influences the fermentation environment and microbial metabolism in either a reductive or oxidative manner. The benefit of this approach is to produce target biochemicals with improved selectivity, increase carbon efficiency, limit the use of additives for redox balance or pH control, enhance microbial growth, or in some cases enhance product recovery. We discuss the principles of electrically driven fermentations and how EF can be used to steer both pure culture and microbiota-based fermentations. An overview is given on which advantages EF may bring to both existing and innovative industrial fermentation processes, and which doors might be opened in waste biomass utilization towards added-value biorefineries. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. A method for automatic grain segmentation of multi-angle cross-polarized microscopic images of sandstone

    NASA Astrophysics Data System (ADS)

    Jiang, Feng; Gu, Qing; Hao, Huizhen; Li, Na; Wang, Bingqian; Hu, Xiumian

    2018-06-01

    Automatic grain segmentation of sandstone partitions the mineral grains of a thin section into separate regions, which is the first step for computer-aided mineral identification and sandstone classification. Sandstone microscopic images contain a large number of mixed mineral grains, and the differences among adjacent grains, i.e., quartz, feldspar, and lithic grains, are usually ambiguous, which makes grain segmentation difficult. In this paper, we take advantage of multi-angle cross-polarized microscopic images and propose a method for grain segmentation with high accuracy. The method consists of two stages. In the first stage, we enhance the SLIC (Simple Linear Iterative Clustering) algorithm into MSLIC, which makes use of multi-angle images and segments them into boundary-adherent superpixels. In the second stage, we propose a region merging technique that combines coarse and fine merging algorithms: coarse merging joins adjacent superpixels with less evident boundaries, while fine merging joins ambiguous superpixels using spatially enhanced fuzzy clustering. Experiments are designed on nine sets of multi-angle cross-polarized images taken from the three major types of sandstone. The results demonstrate both the effectiveness and potential of the proposed method compared to the available segmentation methods.

  19. Optical-Near-infrared Color Gradients and Merging History of Elliptical Galaxies

    NASA Astrophysics Data System (ADS)

    Kim, Duho; Im, Myungshin

    2013-04-01

    It has been suggested that merging plays an important role in the formation and the evolution of elliptical galaxies. While gas dissipation by star formation is believed to steepen metallicity and color gradients of the merger products, mixing of stars through dissipation-less merging (dry merging) is believed to flatten them. In order to understand the past merging history of elliptical galaxies, we studied the optical-near-infrared (NIR) color gradients of 204 elliptical galaxies. These galaxies are selected from the overlap region of the Sloan Digital Sky Survey (SDSS) Stripe 82 and the UKIRT Infrared Deep Sky Survey (UKIDSS) Large Area Survey (LAS). The use of optical and NIR data (g, r, and K) provides large wavelength baselines, and breaks the age-metallicity degeneracy, allowing us to derive age and metallicity gradients. The use of the deep SDSS Stripe 82 images makes it possible for us to examine how the color/age/metallicity gradients are related to merging features. We find that the optical-NIR color and the age/metallicity gradients of elliptical galaxies with tidal features are consistent with those of relaxed ellipticals, suggesting that the two populations underwent a similar merging history on average and that mixing of stars was more or less completed before the tidal features disappeared. Elliptical galaxies with dust features have steeper color gradients than the other two types, even after masking out dust features during the analysis, which can be due to a process involving wet merging. More importantly, we find that the scatter in the color/age/metallicity gradients of the relaxed and merging feature types decreases as their luminosities (or masses) increase at M > 10^11.4 M_⊙ but stays large at lower luminosities. Mean metallicity gradients appear nearly constant over the explored mass range, but a possible flattening is observed at the massive end. 
According to our toy model that predicts how the distribution of metallicity gradients changes as a result of major dry merging, the mean metallicity gradient should flatten by 40% and its scatter should shrink by 80% per mass-doubling if ellipticals evolve only through major dry mergers. Our result, although limited by small-number statistics at the massive end, is consistent with the picture that major dry merging is an important mechanism for the evolution of ellipticals at M > 10^11.4 M_⊙, but is less important at lower masses.

  20. OPTICAL-NEAR-INFRARED COLOR GRADIENTS AND MERGING HISTORY OF ELLIPTICAL GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Duho; Im, Myungshin

    2013-04-01

    It has been suggested that merging plays an important role in the formation and the evolution of elliptical galaxies. While gas dissipation by star formation is believed to steepen metallicity and color gradients of the merger products, mixing of stars through dissipation-less merging (dry merging) is believed to flatten them. In order to understand the past merging history of elliptical galaxies, we studied the optical-near-infrared (NIR) color gradients of 204 elliptical galaxies. These galaxies are selected from the overlap region of the Sloan Digital Sky Survey (SDSS) Stripe 82 and the UKIRT Infrared Deep Sky Survey (UKIDSS) Large Area Survey (LAS). The use of optical and NIR data (g, r, and K) provides large wavelength baselines, and breaks the age-metallicity degeneracy, allowing us to derive age and metallicity gradients. The use of the deep SDSS Stripe 82 images makes it possible for us to examine how the color/age/metallicity gradients are related to merging features. We find that the optical-NIR color and the age/metallicity gradients of elliptical galaxies with tidal features are consistent with those of relaxed ellipticals, suggesting that the two populations underwent a similar merging history on average and that mixing of stars was more or less completed before the tidal features disappeared. Elliptical galaxies with dust features have steeper color gradients than the other two types, even after masking out dust features during the analysis, which can be due to a process involving wet merging. More importantly, we find that the scatter in the color/age/metallicity gradients of the relaxed and merging feature types decreases as their luminosities (or masses) increase at M > 10^11.4 M_⊙ but stays large at lower luminosities. Mean metallicity gradients appear nearly constant over the explored mass range, but a possible flattening is observed at the massive end. 
According to our toy model that predicts how the distribution of metallicity gradients changes as a result of major dry merging, the mean metallicity gradient should flatten by 40% and its scatter should shrink by 80% per mass-doubling if ellipticals evolve only through major dry mergers. Our result, although limited by small-number statistics at the massive end, is consistent with the picture that major dry merging is an important mechanism for the evolution of ellipticals at M > 10^11.4 M_⊙, but is less important at lower masses.

  1. A numerical solution of the Navier-Stokes equations for chemically nonequilibrium, merged stagnation shock layers on spheres and two-dimensional cylinders in air

    NASA Technical Reports Server (NTRS)

    Johnston, K. D.; Hendricks, W. L.

    1978-01-01

    Results of solving the Navier-Stokes equations for chemically nonequilibrium, merged stagnation shock layers on spheres and two-dimensional cylinders are presented. The effects of wall catalysis and slip are also examined. The thin shock layer assumption is not made, and the thick viscous shock is allowed to develop within the computational domain. The results show good comparison with existing data. Due to the more pronounced merging of shock layer and boundary layer for the sphere, the heating rates for spheres become higher than those for cylinders as the altitude is increased.

  2. Holes in the ocean: Filling voids in bathymetric lidar data

    NASA Astrophysics Data System (ADS)

    Coleman, John B.; Yao, Xiaobai; Jordan, Thomas R.; Madden, Marguerite

    2011-04-01

    The mapping of coral reefs may be efficiently accomplished by airborne laser bathymetry. However, there are often data holes within the bathymetry data that must be filled in order to produce a complete representation of the coral habitat. This study presents a method to fill these data holes through data merging and interpolation. The method first merges ancillary digital sounding data with the airborne laser bathymetry data in order to populate data points in all areas, particularly within the data holes. It then generates an elevation surface by spatial interpolation from the merged data points. We conducted a case study of the Dry Tortugas National Park in Florida and produced an enhanced digital elevation model of the seafloor with this method. Four interpolation techniques, including Kriging, natural neighbor, spline, and inverse distance weighted, are implemented and evaluated on their ability to accurately and realistically represent the shallow-water bathymetry of the study area. The natural neighbor technique is found to be the most effective. Finally, the enhanced digital elevation model is used in conjunction with Ikonos imagery to produce a complete, three-dimensional visualization of the study area.
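
    The merge-then-interpolate idea can be sketched with inverse distance weighting, the simplest of the four techniques compared (natural neighbor, the study's best performer, requires a Voronoi construction omitted here). Point formats and parameters are illustrative assumptions:

```python
def idw_fill(known, holes, power=2, eps=1e-12):
    """Fill bathymetry holes by inverse-distance-weighted interpolation.

    known: list of (x, y, depth) from the merged lidar + sounding points
    holes: list of (x, y) locations lacking data
    Returns a list of (x, y, estimated_depth).
    """
    filled = []
    for hx, hy in holes:
        num = den = 0.0
        for x, y, z in known:
            d2 = (x - hx) ** 2 + (y - hy) ** 2
            if d2 < eps:                   # hole coincides with a data point
                num, den = z, 1.0
                break
            w = 1.0 / d2 ** (power / 2)    # weight = 1 / distance^power
            num += w * z
            den += w
        filled.append((hx, hy, num / den))
    return filled
```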

  3. Measures of health sciences journal use: a comparison of vendor, link-resolver, and local citation statistics.

    PubMed

    De Groote, Sandra L; Blecic, Deborah D; Martin, Kristin

    2013-04-01

    Libraries require efficient and reliable methods to assess journal use. Vendors provide complete counts of articles retrieved from their platforms. However, if a journal is available on multiple platforms, several sets of statistics must be merged. Link-resolver reports merge data from all platforms into one report but only record partial use because users can access library subscriptions from other paths. Citation data are limited to publication use. Vendor, link-resolver, and local citation data were examined to determine correlation. Because link-resolver statistics are easy to obtain, the study library especially wanted to know if they correlate highly with the other measures. Vendor, link-resolver, and local citation statistics for the study institution were gathered for health sciences journals. Spearman rank-order correlation coefficients were calculated. There was a high positive correlation between all three data sets, with vendor data commonly showing the highest use. However, a small percentage of titles showed anomalous results. Link-resolver data correlate well with vendor and citation data, but due to anomalies, low link-resolver data would best be used to suggest titles for further evaluation using vendor data. Citation data may not be needed as it correlates highly with other measures.
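
    Spearman rank-order correlation, the statistic used to compare the three data sets, is the Pearson correlation of average ranks. The sketch below is a generic textbook implementation, not the study's analysis code:

```python
def average_ranks(values):
    """Rank values (1-based), assigning ties the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho = Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```

    A rho near 1 between, say, vendor counts and link-resolver counts is the kind of result the study reports; anomalous titles are the ones that break the monotone relationship.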

  4. An open-source software package for multivariate modeling and clustering: applications to air quality management.

    PubMed

    Wang, Xiuquan; Huang, Guohe; Zhao, Shan; Guo, Junhong

    2015-09-01

    This paper presents an open-source software package, rSCA, which is developed based upon a stepwise cluster analysis method and serves as a statistical tool for modeling the relationships between multiple dependent and independent variables. The rSCA package is efficient in dealing with both continuous and discrete variables, as well as nonlinear relationships between the variables. It divides the sample sets of dependent variables into different subsets (or subclusters) through a series of cutting and merging operations based upon the theory of multivariate analysis of variance (MANOVA). The modeling results are given by a cluster tree, which includes both intermediate and leaf subclusters as well as the flow paths from the root of the tree to each leaf subcluster specified by a series of cutting and merging actions. The rSCA package is a handy and easy-to-use tool and is freely available at http://cran.r-project.org/package=rSCA . By applying the developed package to air quality management in an urban environment, we demonstrate its effectiveness in dealing with the complicated relationships among multiple variables in real-world problems.

  5. Observations and Modeling of Merging Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Golovich, Nathan Ryan

    Context: Galaxy clusters grow hierarchically with continuous accretion bookended by major merging events that release immense gravitational potential energy (as much as ˜10^65 erg). This energy creates an environment for rich astrophysics. Precise measurements of the dark matter halo, intracluster medium, and galaxy population have resulted in a number of important results including dark matter constraints and explanations of the generation of cosmic rays. However, since the timescale of major mergers (˜several Gyr) relegates observations of individual systems to mere snapshots, these results are difficult to understand under a consistent dynamical framework. While computationally expensive simulations are vital in this regard, the vastness of parameter space has necessitated simulations of idealized mergers that are unlikely to capture the full richness. Merger speeds, geometries, and timescales each have a profound consequential effect, but even these simple dynamical properties of the mergers are often poorly understood. A method to identify and constrain the best systems for probing the rich astrophysics of merging clusters is needed. Such a method could then be utilized to prioritize observational follow up and best inform proper exploration of dynamical phase space. Task: In order to identify and model a large number of systems, in this dissertation, we compile an ensemble of major mergers each containing radio relics. We then complete a pan-chromatic study of these 29 systems including wide field optical photometry, targeted optical spectroscopy of member galaxies, radio, and X-ray observations. We use the optical observations to model the galaxy substructure and estimate line of sight motion. In conjunction with the radio and X-ray data, these substructure models helped elucidate the most likely merger scenario for each system and further constrain the dynamical properties of each system.
We demonstrate the power of this technique through detailed analyses of two individual merging clusters. Each is a largely bimodal merger occurring in the plane of the sky. We build on the dynamical analyses of Dawson (2013b) and Ng et al. (2015) in order to constrain the merger speeds, timescales, and geometry for these two systems, which are among a gold sample earmarked for further follow up. Findings: MACS J1149.5+2223 has a previously unidentified southern subcluster involved in a major merger with the well-studied northern subcluster. We confirm the system to be among the most massive clusters known, and we study the dynamics of the merger. MACS J1149.5+2223 appears to be a more evolved system than the Bullet Cluster observed near apocenter. ZwCl 0008.8+5215 is a less massive but similarly bimodal system with two radio relics and a cool-core "bullet" analogous to the namesake of the Bullet Cluster. These two systems occupy different regions of merger phase space with pericentric relative velocities of ˜2800 km s-1 and ˜1800 km s-1 for MACS J1149.5+2223 and ZwCl 0008.8+5215, respectively. The times since pericenter for the observed states are ˜1.2 Gyr and ˜0.8 Gyr, respectively. In the ensemble analysis, we confirm that radio relic selection is an efficient trigger for the identification of major mergers. In particular, 28 of the 29 systems exhibit galaxy substructure aligned with the radio relics and the disturbed intra-cluster medium. Radio relics are typically aligned within 20° of the axis connecting the two galaxy subclusters. Furthermore, when radio relics are aligned with substructure, the line of sight velocity difference between the two subclusters is small compared with the infall velocity. This strongly implies radio relic selection is an efficient selector of systems merging in the plane of the sky. While many of the systems are complex with several simultaneous merging subclusters, these systems generally only contain one radio relic.
Systems with double radio relics uniformly suggest major mergers with two dominant substructures well aligned between the radio relics. Conclusions: Radio relics are efficient triggers for identifying major mergers occurring within the plane of the sky. This is ideal for observing offsets between galaxies and dark matter distributions as well as cluster shocks. Double radio relic systems, in particular, have the simplest geometries, which allow for accurate dynamical models and inferred astrophysics. Comparing and contrasting the dynamical models of MACS J1149.5+2223 and ZwCl 0008.8+5215 with similar studies in the literature (Dawson, 2013b; Ng et al., 2015; van Weeren et al., 2017), a wide range of dynamical phase space (˜1500-3000 km s-1 at pericenter and ˜500-1500 Myr after pericenter) may be sampled with radio relic mergers. With sufficient samples of bimodal systems, velocity dependence of underlying astrophysics may be uncovered. (Abstract shortened by ProQuest.)

  6. Integrated optimisation technique based on computer-aided capacity and safety evaluation for managing downstream lane-drop merging area of signalised junctions

    NASA Astrophysics Data System (ADS)

    Chen, CHAI; Yiik Diew, WONG

    2017-02-01

    This study provides an integrated strategy, encompassing microscopic simulation, safety assessment, and multi-attribute decision-making, to optimize traffic performance at downstream merging area of signalized intersections. A Fuzzy Cellular Automata (FCA) model is developed to replicate microscopic movement and merging behavior. Based on simulation experiment, the proposed FCA approach is able to provide capacity and safety evaluation of different traffic scenarios. The results are then evaluated through data envelopment analysis (DEA) and analytic hierarchy process (AHP). Optimized geometric layout and control strategies are then suggested for various traffic conditions. An optimal lane-drop distance that is dependent on traffic volume and speed limit can thus be established at the downstream merging area.

  7. Improving real-time efficiency of case-based reasoning for medical diagnosis.

    PubMed

    Park, Yoon-Joo

    2014-01-01

    Conventional case-based reasoning (CBR) does not perform efficiently on high-volume datasets because of case-retrieval time. Some previous studies overcome this problem by clustering a case base into several small groups and retrieving neighbors only within the group corresponding to a target case. However, this approach generally produces less accurate predictions than conventional CBR. This paper suggests a new case-based reasoning method, called Clustering-Merging CBR (CM-CBR), which achieves a level of predictive performance similar to conventional CBR at significantly less computational cost.
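
    The cluster-then-retrieve structure that CM-CBR builds on can be sketched as follows. The k-means clustering, squared-Euclidean distance, and neighbor count are illustrative assumptions, and the paper's cluster-merging refinement is not reproduced here:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cluster_case_base(cases, k, iters=20, seed=0):
    """Stage 1: partition the case base with a minimal k-means."""
    rng = random.Random(seed)
    centers = [list(c) for c in rng.sample(cases, k)]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for c in cases:
            i = min(range(k), key=lambda j: dist2(c, centers[j]))
            groups[i].append(c)
        for i, g in enumerate(groups):
            if g:
                centers[i] = [sum(v) / len(g) for v in zip(*g)]
    return centers, groups

def retrieve(target, centers, groups, n_neighbors=3):
    """Stage 2: search only the cluster whose center is nearest to the
    target case, instead of scanning the whole case base."""
    i = min(range(len(centers)), key=lambda j: dist2(target, centers[j]))
    return sorted(groups[i], key=lambda c: dist2(target, c))[:n_neighbors]
```

    Retrieval cost drops from scanning all cases to scanning one cluster; the accuracy risk this creates near cluster boundaries is what CM-CBR's merging step is designed to mitigate.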

  8. A Low-Complexity and High-Performance 2D Look-Up Table for LDPC Hardware Implementation

    NASA Astrophysics Data System (ADS)

    Chen, Jung-Chieh; Yang, Po-Hui; Lain, Jenn-Kaie; Chung, Tzu-Wen

    In this paper, we propose a low-complexity, high-efficiency two-dimensional look-up table (2D LUT) for carrying out the sum-product algorithm in the decoding of low-density parity-check (LDPC) codes. Instead of employing adders for the core operation when updating check node messages, in the proposed scheme, the main term and correction factor of the core operation are successfully merged into a compact 2D LUT. Simulation results indicate that the proposed 2D LUT not only attains close-to-optimal bit error rate performance but also enjoys a low complexity advantage that is suitable for hardware implementation.
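
    The core check-node operation that the 2D LUT tabulates is the pairwise "box-plus" of two log-likelihood ratios: a sign-min main term plus a logarithmic correction factor. Below is a sketch of merging both terms into one quantized table; the grid size and LLR range are hypothetical, not the paper's sizing:

```python
import math

def boxplus_exact(a, b):
    """Exact pairwise check-node update (sum-product core operation):
    sign(a)sign(b)min(|a|,|b|) plus the Jacobian correction terms,
    equal to 2*atanh(tanh(a/2)*tanh(b/2))."""
    return (math.copysign(1, a) * math.copysign(1, b) * min(abs(a), abs(b))
            + math.log1p(math.exp(-abs(a + b)))
            - math.log1p(math.exp(-abs(a - b))))

def build_2d_lut(max_llr=8.0, steps=64):
    """Quantize |a| and |b| on a grid and tabulate main term plus
    correction together, as in a merged 2D LUT scheme."""
    q = max_llr / (steps - 1)
    table = [[boxplus_exact(i * q, j * q) for j in range(steps)]
             for i in range(steps)]
    return table, q

def boxplus_lut(a, b, table, q):
    """Signs are handled separately; magnitudes index the shared table."""
    steps = len(table)
    i = min(int(round(abs(a) / q)), steps - 1)
    j = min(int(round(abs(b) / q)), steps - 1)
    return math.copysign(1, a) * math.copysign(1, b) * table[i][j]
```

    In hardware the table would hold fixed-point entries and replace the adder-based main term and correction lookup with a single read, which is the complexity saving the paper targets.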

  9. Enhancing the antimicrobial activity of d-limonene nanoemulsion with the inclusion of ε-polylysine.

    PubMed

    Zahi, Mohamed Reda; El Hattab, Mohamed; Liang, Hao; Yuan, Qipeng

    2017-04-15

    The objective of this research was to investigate the synergism between ε-polylysine and d-limonene and develop a novel nanoemulsion system by merging the positive effect of these two antimicrobial agents. Results from the checkerboard method showed that ε-polylysine and d-limonene exhibit strong synergistic and useful additive effects against Escherichia coli, Staphylococcus aureus, Bacillus subtilis and Saccharomyces cerevisiae. In addition, d-limonene nanoemulsion with the inclusion of ε-polylysine was successfully prepared by high pressure homogenizer technology. Its antimicrobial efficiency was compared with pure d-limonene nanoemulsion by measuring the minimal inhibitory concentration, electronic microscope observation and the leakage of the intercellular constituents. The results demonstrated a wide improvement of the antimicrobial activity of d-limonene nanoemulsion following the inclusion of ε-polylysine. Overall, the current study may have a valuable contribution to make in developing a more efficient antimicrobial system in the food industry. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Potential gains from hospital mergers in Denmark.

    PubMed

    Kristensen, Troels; Bogetoft, Peter; Pedersen, Kjeld Moeller

    2010-12-01

    The Danish hospital sector faces a major rebuilding program to centralize activity in fewer and larger hospitals. We aim to conduct an efficiency analysis of hospitals and to estimate the potential cost savings from the planned hospital mergers. We use Data Envelopment Analysis (DEA) to estimate a cost frontier. Based on this analysis, we calculate an efficiency score for each hospital and estimate the potential gains from the proposed mergers by comparing individual efficiencies with the efficiency of the combined hospitals. Furthermore, we apply a decomposition algorithm to split merger gains into technical efficiency, size (scale) and harmony (mix) gains. The motivation for this decomposition is that some of the apparent merger gains may actually be available with less than a full-scale merger, e.g., by sharing best practices and reallocating certain resources and tasks. Our results suggest that many hospitals are technically inefficient, and the expected "best practice" hospitals are quite efficient. Also, some mergers do not seem to lower costs. This finding indicates that some merged hospitals become too large and therefore experience diseconomies of scale. Other mergers lead to considerable cost reductions; we find potential gains resulting from learning better practices and the exploitation of economies of scope. To ensure robustness, we conduct a sensitivity analysis using two alternative returns-to-scale assumptions and two alternative estimation approaches. We consistently find potential gains from improving the technical efficiency and the exploitation of economies of scope from mergers.
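
    With a single input (cost) and a single output, the CRS (CCR) efficiency score reduces to a productivity-ratio comparison, which is enough to sketch how merger gains are estimated by comparing separate versus pooled operation. The paper's multi-input, multi-output DEA and its decomposition into technical, size, and harmony gains are not reproduced here:

```python
def productivity_frontier(observed):
    """Best observed output-per-input ratio (CRS frontier, 1 in / 1 out)."""
    return max(y / x for x, y in observed)

def efficiency(x0, y0, observed):
    """CCR efficiency score: own productivity relative to best practice."""
    return (y0 / x0) / productivity_frontier(observed)

def merger_gain(units, observed):
    """Input (cost) saved if the merged unit produced the pooled output
    at best-practice productivity, versus running the units separately.

    units:    list of (input_cost, output) for the hospitals to be merged
    observed: list of (input_cost, output) defining the frontier
    """
    best = productivity_frontier(observed)
    x_sum = sum(x for x, _ in units)
    y_sum = sum(y for _, y in units)
    return x_sum - y_sum / best        # > 0 indicates potential savings
```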

  11. A Wireless Fiber Photometry System Based on a High-Precision CMOS Biosensor With Embedded Continuous-Time Modulation.

    PubMed

    Khiarak, Mehdi Noormohammadi; Martianova, Ekaterina; Bories, Cyril; Martel, Sylvain; Proulx, Christophe D; De Koninck, Yves; Gosselin, Benoit

    2018-06-01

    Fluorescence biophotometry measurements require wide dynamic range (DR) and high-sensitivity laboratory apparatus. Indeed, it is often very challenging to accurately resolve the small fluorescence variations in the presence of noise and high-background tissue autofluorescence. There is a great need for smaller detectors combining high linearity, high sensitivity, and high energy efficiency. This paper presents a new biophotometry sensor merging two individual building blocks, namely a low-noise sensing front-end and a continuous-time sigma-delta modulator (CTSDM), into a single module for enabling high-sensitivity and high-energy-efficiency photo-sensing. In particular, a differential CMOS photodetector associated with a differential capacitive transimpedance amplifier-based sensing front-end is merged with an incremental 1-bit CTSDM to achieve a large DR, low hardware complexity, and high energy efficiency. The sensor leverages a hardware-sharing strategy to simplify the implementation and reduce power consumption. The proposed CMOS biosensor is integrated within a miniature wireless head-mountable prototype for enabling biophotometry with a single implantable fiber in the brain of live mice. The proposed biophotometry sensor is implemented in a 0.18-μm CMOS technology, consuming from a 1.8-V supply voltage, while achieving a peak dynamic range of over a 50- input bandwidth, a sensitivity of 24 mV/nW, and a minimum detectable current of 2.46- at a 20- sampling rate.

  12. DETECTION OF FLUX EMERGENCE, SPLITTING, MERGING, AND CANCELLATION OF NETWORK FIELD. I. SPLITTING AND MERGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iida, Y.; Yokoyama, T.; Hagenaar, H. J.

    2012-06-20

    Frequencies of magnetic patch processes on the supergranule boundary, namely, flux emergence, splitting, merging, and cancellation, are investigated through automatic detection. We use a set of line-of-sight magnetograms taken by the Solar Optical Telescope (SOT) on board the Hinode satellite. We found 1636 positive patches and 1637 negative patches in the data set, whose time duration is 3.5 hr and field of view is 112'' × 112''. The total numbers of magnetic processes are as follows: 493 positive and 482 negative splittings, 536 positive and 535 negative mergings, 86 cancellations, and 3 emergences. The total numbers of emergence and cancellation are significantly smaller than those of splitting and merging. Further, the frequency dependence of the merging and splitting processes on the flux content is investigated. Merging has a weak dependence on the flux content with a power-law index of only 0.28. The timescale for splitting is found to be independent of the parent flux content before splitting, which corresponds to ˜33 minutes. It is also found that patches split into any flux contents with the same probability. This splitting has a power-law distribution of the flux content with an index of -2 as a time-independent solution. These results support that the frequency distribution of the flux content in the analyzed flux range is rapidly maintained by merging and splitting, namely, surface processes. We suggest a model for frequency distributions of cancellation and emergence based on this idea.

  13. Studies on Plasmoid Merging using Compact Toroid Injectors

    NASA Astrophysics Data System (ADS)

    Allfrey, Ian; Matsumoto, Tadafumi; Roche, Thomas; Gota, Hiroshi; Edo, Takahiro; Asai, Tomohiko; Sheftman, Daniel; Osin Team; Dima Team

    2017-10-01

    C-2 and C-2U experiments have used magnetized coaxial plasma guns (MCPG) to inject compact toroids (CTs) for refueling the long-lived advanced beam-driven field-reversed configuration (FRC) plasma. This refueling method will also be used for the C-2W experiment. To minimize momentum transfer from the CT to the FRC, two CTs are injected radially, diametrically opposed and coincident in time. To improve understanding of the CT characteristics, TAE has a dedicated test bed for the development of CT injectors (CTI), where plasmoid merging experiments are performed. The test bed has two CTIs on axis with both axial and transverse magnetic fields. The 1 kG magnetic fields, intended to approximate the magnetic field strength and injection angle on C-2W, allow studies of cross-field transport and merging. Both CTIs are capable of injecting multiple CTs at up to 1 kHz. The resulting merged CT lives >100 μs with a radius of 25 cm. More detailed results of CT parameters will be presented.

  14. The Effects of University Mergers in China since 1990s: From the Perspective of Knowledge Production

    ERIC Educational Resources Information Center

    Mao, Ya-qing; Du, Yuan; Liu, Jing-juan

    2009-01-01

    Purpose: The purpose of this paper is to discover and better understand the efficiency of university mergers from the perspective of knowledge production, with the research capability as the point of contact. Design/methodology/approach: In total, 20 colleges and universities directly under the central ministries that merged in 2000 were taken as…

  15. X Marks the Spot: Creating and Managing a Single Service Point to Improve Customer Service and Maximize Resources

    ERIC Educational Resources Information Center

    Venner, Mary Ann; Keshmiripour, Seti

    2016-01-01

    This article will describe how merging service points in an academic library is an opportunity to improve customer service and utilize staffing resources more efficiently. Combining service points provides libraries with the ability to create a more positive library experience for patrons by minimizing the ping-pong effect for assistance. The…

  16. Merging Education and Business Models to Create and Sustain Transformational Change

    ERIC Educational Resources Information Center

    Isenberg, Susan

    2010-01-01

    In 2004, a large Midwest hospital was losing money, patients, employees, and physicians. A business consultant was hired to engage key employees in a process to improve the quality and efficiency of patient care. The improvement was negligible after the first year, so a 3-man consultancy was added in 2005 to engage all employees in an educational…

  17. Merging weak and QCD showers with matrix elements

    DOE PAGES

    Christiansen, Jesper Roy; Prestel, Stefan

    2016-01-22

    In this study, we present a consistent way of combining associated weak boson radiation in hard dijet events with hard QCD radiation in Drell–Yan-like scatterings. This integrates multiple tree-level calculations with vastly different cross sections, QCD- and electroweak parton-shower resummation into a single framework. The new merging strategy is implemented in the Pythia event generator and predictions are confronted with LHC data. Improvements over the previous strategy are observed. Results of the new electroweak-improved merging at a future 100 TeV proton collider are also investigated.

  18. Merging weak and QCD showers with matrix elements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christiansen, Jesper Roy; Prestel, Stefan

    In this study, we present a consistent way of combining associated weak boson radiation in hard dijet events with hard QCD radiation in Drell–Yan-like scatterings. This integrates multiple tree-level calculations with vastly different cross sections, QCD- and electroweak parton-shower resummation into a single framework. The new merging strategy is implemented in the Pythia event generator and predictions are confronted with LHC data. Improvements over the previous strategy are observed. Results of the new electroweak-improved merging at a future 100 TeV proton collider are also investigated.

  19. A Merged Dataset for Solar Probe Plus FIELDS Magnetometers

    NASA Astrophysics Data System (ADS)

    Bowen, T. A.; Dudok de Wit, T.; Bale, S. D.; Revillet, C.; MacDowall, R. J.; Sheppard, D.

    2016-12-01

    The Solar Probe Plus FIELDS experiment will observe turbulent magnetic fluctuations deep in the inner heliosphere. The FIELDS magnetometer suite implements a set of three magnetometers: two vector DC fluxgate magnetometers (MAGs), sensitive from DC to 100 Hz, as well as a vector search coil magnetometer (SCM), sensitive from 10 Hz to 50 kHz. Single-axis measurements are additionally made up to 1 MHz. To study the full range of observations, we propose merging data from the individual magnetometers into a single dataset. A merged dataset will improve the quality of observations in the range of frequencies observed by both magnetometers (10-100 Hz). Here we present updates on the individual MAG and SCM calibrations as well as our results on generating a cross-calibrated and merged dataset.

  20. Automated cloud screening of AVHRR imagery using split-and-merge clustering

    NASA Technical Reports Server (NTRS)

    Gallaudet, Timothy C.; Simpson, James J.

    1991-01-01

    Previous methods to segment clouds from ocean in AVHRR imagery have shown varying degrees of success, with nighttime approaches being the most limited. An improved method of automatic image segmentation, the principal component transformation split-and-merge clustering (PCTSMC) algorithm, is presented and applied to cloud screening of both nighttime and daytime AVHRR data. The method combines spectral differencing, the principal component transformation, and split-and-merge clustering to sample objectively the natural classes in the data. This segmentation method is then augmented by supervised classification techniques to screen clouds from the imagery. Comparisons with other nighttime methods demonstrate its improved capability in this application. The sensitivity of the method to clustering parameters is presented; the results show that the method is insensitive to the split-and-merge thresholds.
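
    The split-and-merge clustering idea at the core of such segmentation methods can be sketched in a generic, simplified form. This is not the PCTSMC algorithm itself (it omits the principal component transformation and spectral differencing); the quadtree split on variance, union-find merge on region means, thresholds, and toy image below are all illustrative:

```python
import numpy as np

def quadtree_split(img, y, x, h, w, var_thresh, labels, counter):
    """Recursively split a block into quadrants until it is homogeneous (low variance)."""
    block = img[y:y + h, x:x + w]
    if h == 1 or w == 1 or block.var() <= var_thresh:
        labels[y:y + h, x:x + w] = counter[0]
        counter[0] += 1
        return
    h2, w2 = h // 2, w // 2
    for dy, dx, hh, ww in ((0, 0, h2, w2), (0, w2, h2, w - w2),
                           (h2, 0, h - h2, w2), (h2, w2, h - h2, w - w2)):
        quadtree_split(img, y + dy, x + dx, hh, ww, var_thresh, labels, counter)

def merge_regions(img, labels, mean_thresh):
    """Union-find merge of 4-adjacent blocks whose original block means agree."""
    n = labels.max() + 1
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    means = [img[labels == j].mean() for j in range(n)]  # per-block mean before merging
    H, W = img.shape
    for yy in range(H):
        for xx in range(W):
            for ny, nx in ((yy + 1, xx), (yy, xx + 1)):
                if ny < H and nx < W:
                    a, b = find(labels[yy, xx]), find(labels[ny, nx])
                    if a != b and abs(means[a] - means[b]) < mean_thresh:
                        parent[a] = b
    remap = {r: i for i, r in enumerate(sorted({find(j) for j in range(n)}))}
    return np.array([[remap[find(l)] for l in row] for row in labels])

# Toy "image": dark left half, bright right half.
img = np.zeros((8, 8))
img[:, 4:] = 10.0
labels = np.full((8, 8), -1, dtype=int)
quadtree_split(img, 0, 0, 8, 8, var_thresh=1.0, labels=labels, counter=[0])
seg = merge_regions(img, labels, mean_thresh=1.0)
```

    With the toy image above, the split phase produces four homogeneous quadrants, and the merge phase fuses them back into the two underlying intensity regions.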

  1. Estimation of Airline Benefits from Avionics Upgrade under Preferential Merge Re-sequence Scheduling

    NASA Technical Reports Server (NTRS)

    Kotegawa, Tatsuya; Cayabyab, Charlene Anne; Almog, Noam

    2013-01-01

    Modernization of the airline fleet avionics is essential to fully enable future technologies and procedures for increasing national airspace system capacity. However, in the current national airspace system, system-wide benefits gained by avionics upgrade are not fully directed to the aircraft/airlines that upgrade, resulting in a slow fleet modernization rate. Preferential merge re-sequence scheduling is a best-equipped-best-served concept designed to incentivize avionics upgrade among airlines by allowing aircraft with new avionics (high-equipped) to be re-sequenced ahead of aircraft without the upgrades (low-equipped) at enroute merge waypoints. The goal of this study is to investigate the potential benefits gained or lost by airlines under a high- or low-equipped fleet scenario if preferential merge re-sequence scheduling is implemented.

  2. The half-wave rectifier response of the magnetosphere and antiparallel merging

    NASA Technical Reports Server (NTRS)

    Crooker, N. U.

    1980-01-01

    In some ways the magnetosphere behaves as if merging occurs only when the interplanetary magnetic field (IMF) is southward, and in other ways it behaves as if merging occurs for all IMF orientations. An explanation of this duality is offered in terms of a geometrical antiparallel merging model which predicts merging for all IMF orientations but magnetic flux transfer to the tail only for southward IMF. This is in contrast to previous models of component merging, where merging and flux transfer occur together for nearly all IMF orientations. That the problematic duality can be explained by the model is compelling evidence that antiparallel merging should be seriously considered in constructing theories of the merging process.

  3. The University Illustration Merged in Thailand

    ERIC Educational Resources Information Center

    Puangyod, Paithoon; Sirisuthi, Chaiyuth; Sriphutharin, Sumalee

    2015-01-01

    This research aimed to reflect the merged university's scenario: the case study of Nakhon-Phanom University in 4 aspects: administration, personnel management, technology management and missions. It was divided into 2 parts. The research results were as follows: Part 1: Nakhon-Phanom University's education arrangement in light of the…

  4. The effect of vortex merging and non-merging on the transfer of modal turbulent kinetic energy content

    NASA Astrophysics Data System (ADS)

    Ground, Cody; Vergine, Fabrizio; Maddalena, Luca

    2016-08-01

    A defining feature of the turbulent free shear layer is that its growth is hindered by compressibility effects, thus limiting its potential to sufficiently mix the injected fuel and surrounding airstream at the supersonic Mach numbers intrinsic to the combustor of air-breathing hypersonic vehicles. The introduction of streamwise vorticity is often proposed in an attempt to counteract these undesired effects. This makes the strategy of introducing multiple streamwise vortices, and imposing upon them certain modes of mutual interaction, an intriguing concept for potentially enhancing mixing. However, many underlying fundamental characteristics of the flowfields in the presence of such interactions are not yet well understood; therefore, the fundamental physics of these flowfields should be independently investigated before the explicit mixing performance is characterized. In this work, experimental measurements are taken with the stereoscopic particle image velocimetry technique on two specifically targeted modes of vortex interaction: the merging and non-merging of two corotating vortices. The fluctuating velocity fields are analyzed utilizing the proper orthogonal decomposition (POD) in order to identify the content, organization, and distribution of the modal turbulent kinetic energy of the fluctuating velocity eigenmodes. The effects of the two modes of vortex interaction are revealed by the POD analysis, which shows distinct differences in the modal features of the two cases. When comparing the low-order eigenmodes of the two cases, the size of the structures contained within the first ten modes is seen to increase as the flow progresses downstream for the merging case, whereas the opposite is true for the non-merging case. 
Additionally, the relative modal energy contribution of the first ten eigenmodes increases as the vortices evolve downstream for the merging case, whereas in the non-merging case the relative modal energy contribution decreases. The POD results show that the vortex merging process reorients and redistributes the relative turbulent kinetic energy content toward the larger-scale structures within the low-order POD eigenmodes. This result suggests that by specifically designing the vortex generation system to impose preselected modes of vortex interaction upon the flow it is possible to exert some form of control over the downstream evolution and distribution of the global and modal turbulent kinetic energy content.

  5. SMALL-SCALE MAGNETIC ISLANDS IN THE SOLAR WIND AND THEIR ROLE IN PARTICLE ACCELERATION. I. DYNAMICS OF MAGNETIC ISLANDS NEAR THE HELIOSPHERIC CURRENT SHEET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khabarova, O.; Zank, G. P.; Li, G.

    2015-08-01

    Increases of ion fluxes in the keV–MeV range are sometimes observed near the heliospheric current sheet (HCS) during periods when other sources are absent. These resemble solar energetic particle events, but the events are weaker and apparently local. Conventional explanations based on either shock acceleration of charged particles or particle acceleration due to magnetic reconnection at interplanetary current sheets (CSs) are not persuasive. We suggest instead that recurrent magnetic reconnection occurs at the HCS and smaller CSs in the solar wind, a consequence of which is particle energization by the dynamically evolving secondary CSs and magnetic islands. The effectiveness of the trapping and acceleration process associated with magnetic islands depends in part on the topology of the HCS. We show that the HCS possesses ripples superimposed on the large-scale flat or wavy structure. We conjecture that the ripples can efficiently confine plasma and provide tokamak-like conditions that are favorable for the appearance of small-scale magnetic islands that merge and/or contract. Particles trapped in the vicinity of merging islands and experiencing multiple small-scale reconnection events are accelerated by the induced electric field and experience first-order Fermi acceleration in contracting magnetic islands according to the transport theory of Zank et al. We present multi-spacecraft observations of magnetic island merging and particle energization in the absence of other sources, providing support for theory and simulations that show particle energization by reconnection-related processes of magnetic island merging and contraction.

  6. Pedestrian crowd dynamics in merging sections: Revisiting the "faster-is-slower" phenomenon

    NASA Astrophysics Data System (ADS)

    Shahhoseini, Zahra; Sarvi, Majid; Saberi, Meead

    2018-02-01

    The study of the discharge of active or self-driven matter in narrow passages has become of growing interest in a variety of fields. The question has particularly important practical applications for the safety of pedestrian flows, notably in emergency scenarios. It has been suggested, predominantly through simulation in some theoretical studies as well as through a few experiments, that under certain circumstances an elevated vigour to escape may exacerbate the outflow and cause further delay, although the experimental evidence is rather mixed. The dimensions of this complex phenomenon, known as the "faster-is-slower" effect, are of crucial importance to understand owing to its potential practical implications for emergency management. The contextual requirements for observing this phenomenon are yet to be identified. It is not clear whether a "do not speed up" policy is universally beneficial and advisable in an evacuation scenario. Here, for the first time, we experimentally examine this phenomenon in relation to pedestrian flows at merging sections as a common geometric feature of crowd egress. Various merging angles and three different speed regimes were examined in high-density laboratory experiments. The measurements of flow interruptions and egress efficiency all indicated that the pedestrians were discharged faster when moving at elevated speed levels. We also observed clear dependencies between the discharge rate and the physical layout of the merging, with certain designs clearly outperforming others. But regardless of the design, we observed faster throughput and greater avalanche sizes when we instructed pedestrians to run. Our results suggest that observation of the faster-is-slower effect may necessitate certain critical conditions, including passages being overly narrow relative to the size of particles (pedestrians), so as to create long-lasting blockages. 
The faster-is-slower assumption may not be universal and there may be circumstances where faster is, in fact, faster for evacuees. In the light of these findings, we suggest that it is important to identify and formulate those conditions so they can be disentangled from one another in the models. Misguided overgeneralisations may have unintended adverse ramifications for the safe evacuation management, and this highlights the need for further exploration of this phenomenon.

  7. How Certain are We of the Uncertainties in Recent Ozone Profile Trend Assessments of Merged Limb Occultation Records? Challenges and Possible Ways Forward

    NASA Technical Reports Server (NTRS)

    Hubert, Daan; Lambert, Jean-Christopher; Verhoelst, Tijl; Granville, Jose; Keppens, Arno; Baray, Jean-Luc; Cortesi, Ugo; Degenstein, D. A.; Froidevaux, Lucien; Godin-Beekmann, Sophie; et al.

    2015-01-01

    Most recent assessments of long-term changes in the vertical distribution of ozone (by e.g. WMO and SI2N) rely on data sets that integrate observations by multiple instruments. Several merged satellite ozone profile records have been developed over the past few years; each considers a particular set of instruments and adopts a particular merging strategy. Their intercomparison by Tummon et al. revealed that the current merging schemes are not sufficiently refined to correct for all major differences between the limb/occultation records. This shortcoming introduces uncertainties that need to be known to obtain a sound interpretation of the different satellite-based trend studies. In practice however, producing realistic uncertainty estimates is an intricate task which depends on a sufficiently detailed understanding of the characteristics of each contributing data record and on the subsequent interplay and propagation of these through the merging scheme. Our presentation discusses these challenges in the context of limb/occultation ozone profile records, but they are equally relevant for other instruments and atmospheric measurements. We start by showing how the NDACC and GAW-affiliated ground-based networks of ozonesonde and lidar instruments allowed us to characterize fourteen limb/occultation ozone profile records, together providing a global view over the last three decades. Our prime focus will be on techniques to estimate long-term drift since our results suggest this is the main driver of the major trend differences between the merged data sets. The single-instrument drift estimates are then used for a tentative estimate of the systematic uncertainty in the profile trends from merged data records. We conclude by reflecting on possible further steps needed to improve the merging algorithms and to obtain a better characterization of the uncertainties involved.

  8. Merging National Forest and National Forest Health Inventories to Obtain an Integrated Forest Resource Inventory – Experiences from Bavaria, Slovenia and Sweden

    PubMed Central

    Kovač, Marko; Bauer, Arthur; Ståhl, Göran

    2014-01-01

    Backgrounds, Material and Methods To meet the demands of sustainable forest management and international commitments, European nations have designed a variety of forest-monitoring systems for specific needs. While the majority of countries are committed to independent, single-purpose inventorying, a minority of countries have merged their single-purpose forest inventory systems into integrated forest resource inventories. The statistical efficiencies of the Bavarian, Slovene and Swedish integrated forest resource inventory designs are investigated with the various statistical parameters of the variables of growing stock volume, shares of damaged trees, and deadwood volume. The parameters are derived by using the estimators for the given inventory designs. The required sample sizes are derived via the general formula for non-stratified independent samples and via statistical power analyses. The cost effectiveness of the designs is compared via two simple cost effectiveness ratios. Results In terms of precision, the most illustrative parameters of the variables are relative standard errors; their values range between 1% and 3% if the variables’ variations are low (s%<80%) and are higher in the case of higher variations. A comparison of the actual and required sample sizes shows that the actual sample sizes were deliberately set high to provide precise estimates for the majority of variables and strata. In turn, the successive inventories are statistically efficient, because they allow detecting the mean changes of variables with powers higher than 90%; the highest precision is attained for the changes of growing stock volume and the lowest for the changes of the shares of damaged trees. Two indicators of cost effectiveness also show that the time input spent for measuring one variable decreases with the complexity of inventories. 
Conclusion There is an increasing need for credible information on forest resources to be used for decision making and national and international policy making. Such information can be cost-efficiently provided through integrated forest resource inventories. PMID:24941120

  9. An improved method for pancreas segmentation using SLIC and interactive region merging

    NASA Astrophysics Data System (ADS)

    Zhang, Liyuan; Yang, Huamin; Shi, Weili; Miao, Yu; Li, Qingliang; He, Fei; He, Wei; Li, Yanfang; Zhang, Huimao; Mori, Kensaku; Jiang, Zhengang

    2017-03-01

    Considering the weak edges in pancreas segmentation, this paper proposes a new solution which integrates more features of CT images by combining SLIC superpixels and interactive region merging. In the proposed method, Mahalanobis distance is first utilized in SLIC method to generate better superpixel images. By extracting five texture features and one gray feature, the similarity measure between two superpixels becomes more reliable in interactive region merging. Furthermore, object edge blocks are accurately addressed by re-segmentation merging process. Applying the proposed method to four cases of abdominal CT images, we segment pancreatic tissues to verify the feasibility and effectiveness. The experimental results show that the proposed method can make segmentation accuracy increase to 92% on average. This study will boost the application process of pancreas segmentation for computer-aided diagnosis system.

  10. Photon merging and splitting in electromagnetic field inhomogeneities

    NASA Astrophysics Data System (ADS)

    Gies, Holger; Karbstein, Felix; Seegert, Nico

    2016-04-01

    We investigate photon merging and splitting processes in inhomogeneous, slowly varying electromagnetic fields. Our study is based on the three-photon polarization tensor following from the Heisenberg-Euler effective action. We put special emphasis on deviations from the well-known constant field results, also revisiting the selection rules for these processes. In the context of high-intensity laser facilities, we analytically determine compact expressions for the number of merged/split photons as obtained in the focal spots of intense laser beams. For the parameter range of typical petawatt class laser systems as pump and probe, we provide estimates for the numbers of signal photons attainable in an actual experiment. The combination of frequency upshifting, polarization dependence and scattering off the inhomogeneities renders photon merging an ideal signature for the experimental exploration of nonlinear quantum vacuum properties.

  11. Opportunities and Efficiencies in Building a New Service Desk Model.

    PubMed

    Mayo, Alexa; Brown, Everly; Harris, Ryan

    2017-01-01

    In July 2015, the Health Sciences and Human Services Library (HS/HSL) at the University of Maryland, Baltimore (UMB), merged its reference and circulation services, creating the Information Services Department and Information Services Desk. Designing the Information Services Desk with a team approach allowed for the re-examination of the HS/HSL's service model from the ground up. With the creation of a single service point, the HS/HSL was able to create efficiencies, improve the user experience by eliminating handoffs, create a collaborative team environment, and engage information services staff in a variety of new projects.

  12. A Graphical Operator Interface for a Telerobotic Inspection System

    NASA Technical Reports Server (NTRS)

    Kim, W. S.; Tso, K. S.; Hayati, S.

    1993-01-01

    Operator interface has recently emerged as an important element for efficient and safe operator interactions with a telerobotic system. Recent advances in graphical user interface (GUI) and graphics/video merging technologies enable development of more efficient, flexible operator interfaces. This paper describes an advanced graphical operator interface newly developed for a remote surface inspection system at the Jet Propulsion Laboratory. The interface has been designed so that remote surface inspection can be performed by a single operator with an integrated robot control and image inspection capability. It supports three inspection strategies: teleoperated human visual inspection, human visual inspection with automated scanning, and machine-vision-based automated inspection.

  13. A region-based segmentation method for ultrasound images in HIFU therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Dong, E-mail: dongz@whu.edu.cn; Liu, Yu; Yang, Yan

    Purpose: Precisely and efficiently locating a tumor with less manual intervention in ultrasound-guided high-intensity focused ultrasound (HIFU) therapy is one of the keys to guaranteeing the therapeutic result and improving the efficiency of the treatment. The segmentation of ultrasound images has always been difficult due to the influences of speckle, acoustic shadows, and signal attenuation as well as the variety of tumor appearance. The quality of HIFU guidance images is even poorer than that of conventional diagnostic ultrasound images because the ultrasonic probe used for HIFU guidance usually obtains images without making contact with the patient’s body. Therefore, the segmentation becomes more difficult. To solve the segmentation problem of ultrasound guidance images in the treatment planning procedure for HIFU therapy, a novel region-based segmentation method for uterine fibroids in HIFU guidance images is proposed. Methods: Tumor partitioning in HIFU guidance images without manual intervention is achieved by a region-based split-and-merge framework. A new iterative multiple region growing algorithm is proposed to first split the image into homogenous regions (superpixels). The features extracted within these homogenous regions will be more stable than those extracted within the conventional neighborhood of a pixel. The split regions are then merged by a superpixel-based adaptive spectral clustering algorithm. To ensure that the superpixels belonging to the same tumor can be clustered together in the merging process, a particular construction strategy for the similarity matrix is adopted for the spectral clustering, and the similarity matrix is constructed by taking advantage of a combination of specifically selected first-order and second-order texture features computed from the gray levels and the gray level co-occurrence matrixes, respectively. 
The tumor region is picked out automatically from the background regions by an algorithm according to a priori information about the tumor position, shape, and size. Additionally, an appropriate cluster number for spectral clustering can be determined by the same algorithm; thus the automatic segmentation of the tumor region is achieved. Results: To evaluate the performance of the proposed method, 50 uterine fibroid ultrasound images from different patients receiving HIFU therapy were segmented, and the obtained tumor contours were compared with those delineated by an experienced radiologist. For area-based evaluation results, the mean values of the true positive ratio, the false positive ratio, and the similarity were 94.42%, 4.71%, and 90.21%, respectively, and the corresponding standard deviations were 2.54%, 3.12%, and 3.50%, respectively. For distance-based evaluation results, the mean values of the normalized Hausdorff distance and the normalized mean absolute distance were 4.93% and 0.90%, respectively, and the corresponding standard deviations were 2.22% and 0.34%, respectively. The running time of the segmentation process was 12.9 s for a 318 × 333 (pixels) image. Conclusions: Experiments show that the proposed method can segment the tumor region accurately and efficiently with less manual intervention, which provides for the possibility of automatic segmentation and real-time guidance in HIFU therapy.
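
    The merge stage's spectral clustering of superpixels can be illustrated generically: build a Gaussian similarity matrix from per-region features, embed regions using the leading eigenvectors of the normalized similarity matrix, and group regions whose embeddings nearly coincide. This numpy-only sketch is not the authors' method (it omits their texture-feature similarity construction and automatic cluster-number selection); the one-dimensional features, sigma, and grouping tolerance below are illustrative:

```python
import numpy as np

def spectral_merge(features, k, sigma=1.0, tol=1.0):
    """Group regions by a spectral embedding of a Gaussian similarity matrix."""
    F = np.asarray(features, dtype=float)
    d2 = ((F[:, None] - F[None]) ** 2).sum(-1)        # pairwise squared feature distances
    W = np.exp(-d2 / (2 * sigma ** 2))                # Gaussian similarity matrix
    Dm = np.diag(1.0 / np.sqrt(W.sum(1)))
    L = Dm @ W @ Dm                                   # symmetrically normalized similarity
    _, vecs = np.linalg.eigh(L)                       # eigh returns ascending eigenvalues
    U = vecs[:, -k:]                                  # embedding: top-k eigenvectors
    U = U / np.linalg.norm(U, axis=1, keepdims=True)  # row-normalize the embedding
    n = len(F)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a in range(n):                                # union regions with close embeddings
        for b in range(a + 1, n):
            if np.linalg.norm(U[a] - U[b]) < tol:
                parent[find(a)] = find(b)
    remap = {r: i for i, r in enumerate(sorted({find(j) for j in range(n)}))}
    return [remap[find(j)] for j in range(n)]

# Five regions described by a single mean-intensity feature: three dark, two bright.
groups = spectral_merge([[0.0], [0.1], [0.2], [5.0], [5.1]], k=2)
```

    With well-separated feature blocks, the embedding rows of same-cluster regions coincide while those of different clusters are near-orthogonal, so a simple distance tolerance recovers the two groups.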

  14. An eMERGE Clinical Center at Partners Personalized Medicine

    PubMed Central

    Smoller, Jordan W.; Karlson, Elizabeth W.; Green, Robert C.; Kathiresan, Sekar; MacArthur, Daniel G.; Talkowski, Michael E.; Murphy, Shawn N.; Weiss, Scott T.

    2016-01-01

    The integration of electronic medical records (EMRs) and genomic research has become a major component of efforts to advance personalized and precision medicine. The Electronic Medical Records and Genomics (eMERGE) network, initiated in 2007, is an NIH-funded consortium devoted to genomic discovery and implementation research by leveraging biorepositories linked to EMRs. In its most recent phase, eMERGE III, the network is focused on facilitating implementation of genomic medicine by detecting and disclosing rare pathogenic variants in clinically relevant genes. Partners Personalized Medicine (PPM) is a center dedicated to translating personalized medicine into clinical practice within Partners HealthCare. One component of the PPM is the Partners Healthcare Biobank, a biorepository comprising broadly consented DNA samples linked to the Partners longitudinal EMR. In 2015, PPM joined the eMERGE Phase III network. Here we describe the elements of the eMERGE clinical center at PPM, including plans for genomic discovery using EMR phenotypes, evaluation of rare variant penetrance and pleiotropy, and a novel randomized trial of the impact of returning genetic results to patients and clinicians. PMID:26805891

  15. An eMERGE Clinical Center at Partners Personalized Medicine.

    PubMed

    Smoller, Jordan W; Karlson, Elizabeth W; Green, Robert C; Kathiresan, Sekar; MacArthur, Daniel G; Talkowski, Michael E; Murphy, Shawn N; Weiss, Scott T

    2016-01-20

    The integration of electronic medical records (EMRs) and genomic research has become a major component of efforts to advance personalized and precision medicine. The Electronic Medical Records and Genomics (eMERGE) network, initiated in 2007, is an NIH-funded consortium devoted to genomic discovery and implementation research by leveraging biorepositories linked to EMRs. In its most recent phase, eMERGE III, the network is focused on facilitating implementation of genomic medicine by detecting and disclosing rare pathogenic variants in clinically relevant genes. Partners Personalized Medicine (PPM) is a center dedicated to translating personalized medicine into clinical practice within Partners HealthCare. One component of the PPM is the Partners Healthcare Biobank, a biorepository comprising broadly consented DNA samples linked to the Partners longitudinal EMR. In 2015, PPM joined the eMERGE Phase III network. Here we describe the elements of the eMERGE clinical center at PPM, including plans for genomic discovery using EMR phenotypes, evaluation of rare variant penetrance and pleiotropy, and a novel randomized trial of the impact of returning genetic results to patients and clinicians.

  16. Merging K-means with hierarchical clustering for identifying general-shaped groups.

    PubMed

    Peterson, Anna D; Ghosh, Arka P; Maitra, Ranjan

    2018-01-01

    Clustering partitions a dataset such that observations placed together in a group are similar but different from those in other groups. Hierarchical and K-means clustering are two such approaches but have different strengths and weaknesses. For instance, hierarchical clustering identifies groups in a tree-like structure but suffers from computational complexity in large datasets, while K-means clustering is efficient but designed to identify homogeneous spherically-shaped clusters. We present a hybrid non-parametric clustering approach that amalgamates the two methods to identify general-shaped clusters and that can be applied to larger datasets. Specifically, we first partition the dataset into spherical groups using K-means. We next merge these groups using hierarchical methods with a data-driven distance measure as a stopping criterion. Our proposal has the potential to reveal groups with general shapes and structure in a dataset. We demonstrate good performance on several simulated and real datasets.
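
    The two-stage idea (over-segment with K-means, then merge the spherical groups hierarchically) can be sketched in a minimal numpy-only form. This is not the authors' method: their distance measure and stopping criterion are data-driven, whereas this sketch uses a deterministic farthest-point seeding and a fixed single-linkage cutoff, both illustrative:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Lloyd's algorithm with deterministic farthest-point seeding."""
    centers = [X[0]]
    for _ in range(k - 1):  # seed each new center at the point farthest from existing ones
        d = ((X[:, None] - np.array(centers)[None]) ** 2).sum(-1).min(1)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels, centers

def merge_spherical_groups(labels, centers, cutoff):
    """Single-linkage merge: union any two k-means groups whose centroids
    are closer than the cutoff distance (union-find over group ids)."""
    k = len(centers)
    parent = list(range(k))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a in range(k):
        for b in range(a + 1, k):
            if np.linalg.norm(centers[a] - centers[b]) < cutoff:
                ra, rb = find(a), find(b)
                if ra != rb:
                    parent[ra] = rb
    remap = {r: i for i, r in enumerate(sorted({find(j) for j in range(k)}))}
    return np.array([remap[find(j)] for j in labels])

# Two elongated clusters, each over-segmented into two tight blobs by K-means.
rng = np.random.default_rng(0)
blob_centers = [np.array([0.0, 0.0]), np.array([0.0, 2.0]),
                np.array([10.0, 0.0]), np.array([10.0, 2.0])]
X = np.vstack([c + 0.05 * rng.standard_normal((50, 2)) for c in blob_centers])
labels, centers = kmeans(X, k=4)
merged = merge_spherical_groups(labels, centers, cutoff=3.0)
```

    Here the four tight K-means groups merge pairwise (centroid gap 2 < cutoff 3 within each elongated cluster, gap 10 across), recovering the two general-shaped clusters.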

  17. Data parallel sorting for particle simulation

    NASA Technical Reports Server (NTRS)

    Dagum, Leonardo

    1992-01-01

    Sorting on a parallel architecture is a communications-intensive event which can incur a high penalty in applications where it is required. In the case of particle simulation, only integer sorting is necessary, and sequential implementations easily attain the minimum performance bound of O(N) for N particles. Parallel implementations, however, have to cope with the parallel sorting problem which, in addition to incurring a heavy communications cost, can make the minimum performance bound difficult to attain. This paper demonstrates how the sorting problem in a particle simulation can be reduced to a merging problem, and describes an efficient data parallel algorithm to solve this merging problem in a particle simulation. The new algorithm is shown to be optimal under conditions usual for particle simulation, and its fieldwise implementation on the Connection Machine is analyzed in detail. The new algorithm is about four times faster than a fieldwise implementation of radix sort on the Connection Machine.
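
    The reduction of sorting to merging can be illustrated with a small sequential sketch (function names are illustrative, and this does not reproduce the paper's data parallel fieldwise implementation): once particle cell indices are sorted, a single timestep perturbs each key only slightly, so the array decomposes into a few already-sorted runs, and re-sorting amounts to a k-way merge of those runs:

```python
import heapq

def ascending_runs(keys):
    """Decompose a key sequence into maximal already-sorted (ascending) runs."""
    runs, start = [], 0
    for i in range(1, len(keys)):
        if keys[i] < keys[i - 1]:
            runs.append(keys[start:i])
            start = i
    runs.append(keys[start:])
    return runs

def resort_by_merging(keys):
    """Re-sort nearly-sorted integer keys by k-way merging their runs."""
    return list(heapq.merge(*ascending_runs(keys)))

# Cell indices were sorted on the previous step; each particle drifts by at most one cell.
sorted_keys = [0, 1, 1, 2, 3, 3, 4, 5]
drift = [1, -1, 0, 1, 0, -1, 1, 0]
perturbed = [k + d for k, d in zip(sorted_keys, drift)]
result = resort_by_merging(perturbed)
```

    Because the number of runs stays small between timesteps, the merge does roughly O(N log r) work for r runs rather than a full O(N log N) sort of N keys.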

  18. Applicability of Zipper Merge Versus Early Merge in Kentucky Work Zones

    DOT National Transportation Integrated Search

    2017-12-24

    In an effort to improve work zone safety and streamline traffic flows, a number of state transportation agencies (STAs) have experimented with the zipper merge. The zipper merge differs from a conventional, or early, merge in that vehicles do not mer...

  19. Merge of Five Previous Catalogues Into the Ground Truth Catalogue and Registration Based on MOLA Data with THEMIS-DIR, MDIM and MOC Data-Sets

    NASA Astrophysics Data System (ADS)

    Salamuniccar, G.; Loncaric, S.

    2008-03-01

    The Catalogue from our previous work was merged with the data of Barlow, Rodionova, Boyce, and Kuzmin. The resulting ground truth catalogue with 57,633 craters was registered, using MOLA data, with the THEMIS-DIR, MDIM, and MOC data-sets.

  20. The turbulent cascade of individual eddies

    NASA Astrophysics Data System (ADS)

    Huertas-Cerdeira, Cecilia; Lozano-Durán, Adrián; Jiménez, Javier

    2014-11-01

    The merging and splitting processes of Reynolds-stress carrying structures in the inertial range of scales are studied through their time-resolved evolution in channels at Reλ = 100-200. Mergers and splits coexist during the whole life of the structures, and are responsible for a substantial part of their growth and decay. Each interaction involves two or more eddies and results in little overall volume loss or gain. Most of them involve a small eddy that merges with, or splits from, a significantly larger one. Accordingly, if merge and split indexes are respectively defined as the maximum number of times that a structure has merged from its birth or will split until its death, the mean eddy volume grows linearly with both indexes, suggesting an accretion process rather than a hierarchical fragmentation. However, a non-negligible number of interactions involve eddies of similar scale, with a second probability peak of the volume of the smaller parent or child at 0.3 times that of the resulting or preceding structure. Funded by the Multiflow project of the ERC.

  1. Observations and Simulations of Formation of Broad Plasma Depletions Through Merging Process

    NASA Technical Reports Server (NTRS)

    Huang, Chao-Song; Retterer, J. M.; Beaujardiere, O. De La; Roddy, P. A.; Hunton, D.E.; Ballenthin, J. O.; Pfaff, Robert F.

    2012-01-01

    Broad plasma depletions in the equatorial ionosphere near dawn are regions in which the plasma density is reduced by 1-3 orders of magnitude over thousands of kilometers in longitude. This phenomenon is observed repeatedly by the Communication/Navigation Outage Forecasting System (C/NOFS) satellite during deep solar minimum. The plasma flow inside the depletion region can be strongly upward. The possible causal mechanism for the formation of broad plasma depletions is that the broad depletions result from merging of multiple equatorial plasma bubbles. The purpose of this study is to demonstrate the feasibility of the merging mechanism with new observations and simulations. We present C/NOFS observations for two cases. A series of plasma bubbles is first detected by C/NOFS over a longitudinal range of 3300-3800 km around midnight. Each of the individual bubbles has a typical width of approx 100 km in longitude, and the upward ion drift velocity inside the bubbles is 200-400 m/s. The plasma bubbles rotate with the Earth to the dawn sector and become broad plasma depletions. The observations clearly show the evolution from multiple plasma bubbles to broad depletions. Large upward plasma flow occurs inside the depletion region over 3800 km in longitude and exists for approx 5 h. We also present the numerical simulations of bubble merging with the physics-based low-latitude ionospheric model. It is found that two separate plasma bubbles join together and form a single, wider bubble. The simulations show that the merging process of plasma bubbles can indeed occur in incompressible ionospheric plasma. The simulation results support the merging mechanism for the formation of broad plasma depletions.

  2. BBMerge – Accurate paired shotgun read merging via overlap

    DOE PAGES

    Bushnell, Brian; Rood, Jonathan; Singer, Esther

    2017-10-26

    Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
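    The basic overlap idea behind such mergers can be sketched as follows. This is a naive mismatch-count scan over candidate overlaps, not BBMerge's actual probabilistic scoring or its k-mer gap assembly; `revcomp`, `merge_pair`, and the parameters are illustrative:

```python
def revcomp(seq):
    """Reverse complement of a DNA sequence."""
    comp = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}
    return ''.join(comp[c] for c in reversed(seq))

def merge_pair(r1, r2, min_overlap=10, max_mismatch=0):
    """Merge a read pair by scanning for the longest acceptable overlap
    between the end of read 1 and the start of the reverse-complemented
    read 2; returns None if no overlap qualifies."""
    rc2 = revcomp(r2)
    for ov in range(min(len(r1), len(rc2)), min_overlap - 1, -1):
        mism = sum(1 for a, b in zip(r1[-ov:], rc2[:ov]) if a != b)
        if mism <= max_mismatch:
            return r1 + rc2[ov:]   # stitch the non-overlapping tail on
    return None
```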

  4. Merging Digital Surface Models Implementing Bayesian Approaches

    NASA Astrophysics Data System (ADS)

    Sadeq, H.; Drummond, J.; Li, Z.

    2016-06-01

    In this research, DSMs from different sources have been merged. The merging is based on a probabilistic model using a Bayesian approach. The implemented data have been sourced from very high resolution satellite imagery sensors (e.g. WorldView-1 and Pleiades). A Bayesian approach is deemed preferable when the data obtained from the sensors are limited and it is difficult or very costly to obtain many measurements; the problem of the lack of data can then be solved by introducing a priori estimations of the data. To infer the prior data, it is assumed that the roofs of the buildings are smooth, and for that purpose local entropy has been implemented. In addition to the a priori estimations, GNSS RTK measurements have been collected in the field, which are used as check points to assess the quality of the DSMs and to validate the merging result. The model has been applied in the West End of Glasgow, which contains different kinds of buildings, such as flat-roofed and hipped-roofed buildings. Both quantitative and qualitative methods have been employed to validate the merged DSM. The validation results have shown that the model successfully improved the quality of the DSMs and some characteristics such as the roof surfaces, which consequently led to better representations. In addition, the developed model has been compared with the well-established maximum likelihood model and showed similar quantitative statistical results and better qualitative results. Although the proposed model has been applied to DSMs derived from satellite imagery, it can be applied to DSMs from any other source.
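    The core of such a merge can be illustrated per pixel under Gaussian error assumptions: inverse-variance weighting of two height estimates, optionally combined with a prior, gives the posterior mean and variance. This is a simplified illustration of the Bayesian idea, not the paper's full model:

```python
def bayes_merge(z1, var1, z2, var2, z0=None, var0=None):
    """Fuse two DSM height measurements (and an optional prior height z0)
    by inverse-variance weighting -- the Gaussian posterior mean.
    Returns (posterior mean, posterior variance)."""
    num = z1 / var1 + z2 / var2
    den = 1.0 / var1 + 1.0 / var2
    if z0 is not None:
        num += z0 / var0
        den += 1.0 / var0
    return num / den, 1.0 / den
```

    Note that the posterior variance is always smaller than either input variance, which is why merging DSMs can improve on each source individually.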

  5. Surface temperature dataset for North America obtained by application of optimal interpolation algorithm merging tree-ring chronologies and climate model output

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Xing, Pei; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua

    2017-02-01

    A new dataset of surface temperature over North America has been constructed by merging climate model results and empirical tree-ring data through the application of an optimal interpolation algorithm. Errors of both the Community Climate System Model version 4 (CCSM4) simulation and the tree-ring reconstruction were considered to optimize the combination of the two elements. Variance matching was used to reconstruct the surface temperature series. The model simulation provided the background field, and the error covariance matrix was estimated statistically using samples from the simulation results with a running 31-year window for each grid. Thus, the merging process could continue with a time-varying gain matrix. This merging method (MM) was tested using two types of experiment, and the results indicated that the standard deviation of errors was about 0.4 °C lower than the tree-ring reconstructions and about 0.5 °C lower than the model simulation. Because of internal variabilities and uncertainties in the external forcing data, the simulated decadal warm-cool periods were readjusted by the MM such that the decadal variability was more reliable (e.g., the 1940-1960s cooling). During the two centuries (1601-1800 AD) of the preindustrial period, the MM results revealed a compromised spatial pattern of the linear trend of surface temperature, which is in accordance with the phase transition of the Pacific decadal oscillation and Atlantic multidecadal oscillation. Compared with pure CCSM4 simulations, it was demonstrated that the MM brought a significant improvement to the decadal variability of the gridded temperature via the merging of temperature-sensitive tree-ring records.
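    The merging step can be sketched with the standard optimal interpolation analysis equation, x_a = x_b + K(y - H x_b) with gain K = B H^T (H B H^T + R)^-1. This is a generic illustration of the OI update; in the paper the background error covariance is estimated from a running 31-year window of the CCSM4 simulation:

```python
import numpy as np

def optimal_interpolation(xb, B, y, H, R):
    """One OI analysis step merging a model background xb (error
    covariance B) with observations y (error covariance R) observed
    through operator H:
        x_a = x_b + K (y - H x_b),  K = B H^T (H B H^T + R)^-1
    """
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + K @ (y - H @ xb)
```

    With a near-perfect observation (tiny R) the analysis is drawn to the observed value, and correlated background errors spread the correction to unobserved grid points.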

  6. Redundant via insertion in self-aligned double patterning

    NASA Astrophysics Data System (ADS)

    Song, Youngsoo; Jung, Jinwook; Shin, Youngsoo

    2017-03-01

    Redundant via (RV) insertion is employed to enhance via manufacturability, and has been extensively studied. The self-aligned double patterning (SADP) process brings a new challenge to RV insertion, since the newly created cut for each inserted RV has to be taken care of. Specifically, when a cut for an RV, which we simply call an RV-cut, is formed, a cut conflict may occur with nearby line-end cuts, which results in a decrease in RV candidates. We introduce cut merging to reduce the number of cut conflicts; merged cuts are processed with a stitch using the litho-etch-litho-etch (LELE) multi-patterning method. In this paper, we propose a new RV insertion method with cut merging in SADP for the first time. In our experiments, a simple RV insertion method provides RVs for 55.3% of vias; our proposed method that considers cut merging increases that number to 69.6% on average across the test circuits.

  7. Echo™ User Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, Dustin Yewell

    Echo™ is a MATLAB-based software package designed for robust and scalable analysis of complex data workflows. An alternative to tedious, error-prone conventional processes, Echo is based on three transformative principles for data analysis: self-describing data, name-based indexing, and dynamic resource allocation. The software takes an object-oriented approach to data analysis, intimately connecting measurement data with associated metadata. Echo operations in an analysis workflow automatically track and merge metadata and computation parameters to provide a complete history of the process used to generate final results, while automated figure and report generation tools eliminate the potential to mislabel those results. History reporting and visualization methods provide straightforward auditability of analysis processes. Furthermore, name-based indexing on metadata greatly improves code readability for analyst collaboration and reduces opportunities for errors to occur. Echo efficiently manages large data sets using a framework that seamlessly allocates resources such that only the necessary computations to produce a given result are executed. Echo provides a versatile and extensible framework, allowing advanced users to add their own tools and data classes tailored to their own specific needs. Applying these transformative principles and powerful features, Echo greatly improves analyst efficiency and quality of results in many application areas.

  8. Spectroscopic Measurement of Ion Flow During Merging Start-up of Field-Reversed Configuration

    NASA Astrophysics Data System (ADS)

    Oka, Hirotaka; Inomoto, Michiaki; Tanabe, Hiroshi; Annoura, Masanobu; Ono, Yasushi; Nemoto, Koshichi

    2012-10-01

    The counter-helicity merging method [1] of field-reversed configuration (FRC) formation involves generation of bidirectional toroidal flow, known as a ``sling-shot.'' In the two-fluid regime, the reconnection process is strongly affected by the Hall effect [2]. In this study, we have investigated the behavior of the toroidal bidirectional flow generated by counter-helicity merging in the two-fluid regime. We use 2D ion Doppler spectroscopy to measure the toroidal ion flow during merging start-up of an FRC from Ar gas. We defined two cases: one with a radially pushed-in X line (case I) and the other with a radially pushed-out X line (case O). The flow during the plasma merging shows radial asymmetry, as expected from the magnetic measurement, but finally relaxes to a unidirectional flow in the plasma current direction in both cases. We observed a larger toroidal flow in the plasma current direction in case I after the FRC is formed, though the FRC in case O has larger magnetic flux. These results suggest that more ions are lost during merging start-up in case I. This selective ion loss might account for the stability and confinement of FRCs, which are probably maintained by high-energy ions. [1] Y. Ono, et al., Nucl. Fusion 39, pp. 2001-2008 (1999). [2] M. Inomoto, et al., Phys. Rev. Lett., 97, 135002, (2006)

  9. Merging of multi-string BWTs with applications

    PubMed Central

    Holt, James; McMillan, Leonard

    2014-01-01

    Motivation: The throughput of genomic sequencing has increased to the point that it is overrunning the rate of downstream analysis. This, along with the desire to revisit old data, has led to a situation where large quantities of raw, and nearly impenetrable, sequence data are rapidly filling the hard drives of modern biology labs. These datasets can be compressed via a multi-string variant of the Burrows–Wheeler Transform (BWT), which provides the side benefit of searches for arbitrary k-mers within the raw data as well as the ability to reconstitute arbitrary reads as needed. We propose a method for merging such datasets for both increased compression and downstream analysis. Results: We present a novel algorithm that merges multi-string BWTs in O(LCS×N) time, where LCS is the length of the longest common substring between any of the inputs and N is the total length of all inputs combined (number of symbols), using O(N×log2(F)) bits, where F is the number of multi-string BWTs merged. This merged multi-string BWT is also shown to have a higher compressibility compared with the input multi-string BWTs separately. Additionally, we explore some uses of a merged multi-string BWT for bioinformatics applications. Availability and implementation: The MSBWT package is available through PyPI with source code located at https://code.google.com/p/msbwt/. Contact: holtjma@cs.unc.edu PMID:25172922
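    For orientation, a naive reference construction of a multi-string BWT (sorting all terminated suffixes directly) shows what the merged structure contains; the paper's contribution is producing the same result by interleaving existing BWTs in O(LCS×N) time rather than re-sorting from scratch. Function names here are illustrative:

```python
def msbwt(strings):
    """Naive multi-string BWT: sort all suffixes of each '$'-terminated
    string (ties broken by string order) and take the preceding character,
    wrapping at position 0. A reference construction only."""
    suffixes = []
    for idx, s in enumerate(strings):
        t = s + '$'
        for i in range(len(t)):
            # t[i - 1] wraps to the terminator when i == 0
            suffixes.append((t[i:], idx, t[i - 1]))
    suffixes.sort(key=lambda x: (x[0], x[1]))
    return ''.join(c for _, _, c in suffixes)

def merge_msbwt(strings_a, strings_b):
    """'Merging' two collections here simply rebuilds over the union;
    the Holt-McMillan algorithm computes the same merged BWT directly
    from the two input BWTs without this re-sort."""
    return msbwt(strings_a + strings_b)
```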

  10. Merging photoredox and nickel catalysis: decarboxylative cross-coupling of carboxylic acids with vinyl halides.

    PubMed

    Noble, Adam; McCarver, Stefan J; MacMillan, David W C

    2015-01-21

    Decarboxylative cross-coupling of alkyl carboxylic acids with vinyl halides has been accomplished through the synergistic merger of photoredox and nickel catalysis. This new methodology has been successfully applied to a variety of α-oxy and α-amino acids, as well as simple hydrocarbon-substituted acids. Diverse vinyl iodides and bromides give rise to vinylation products in high efficiency under mild, operationally simple reaction conditions.

  11. Applying Semantic Web Concepts to Support Net-Centric Warfare Using the Tactical Assessment Markup Language (TAML)

    DTIC Science & Technology

    2006-06-01

    SPARQL SPARQL Protocol and RDF Query Language SQL Structured Query Language SUMO Suggested Upper Merged Ontology SW... Query optimization algorithms are implemented in the Pellet reasoner in order to ensure querying a knowledge base is efficient. These algorithms...memory as a treelike structure in order for the data to be queried. XML Query (XQuery) is the standard language used when querying XML

  12. Star Formation of Merging Disk Galaxies with AGN Feedback Effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Jongwon; Smith, Rory; Yi, Sukyoung K., E-mail: jw.park@yonsei.ac.kr

    2017-08-20

    Using a numerical hydrodynamics code, we perform various idealized galaxy merger simulations to study the star formation (SF) of two merging disk galaxies. Our simulations include gas accretion onto supermassive black holes and active galactic nucleus (AGN) feedback. By comparing AGN simulations with those without AGNs, we attempt to understand when the AGN feedback effect is significant. Using ∼70 simulations, we investigate SF with the AGN effect in mergers with a variety of mass ratios, inclinations, orbits, galaxy structures, and morphologies. Using these merger simulations with AGN feedback, we measure merger-driven SF using the burst efficiency parameter introduced by Cox et al. We confirm previous studies which demonstrated that, in galaxy mergers, AGN suppresses SF more efficiently than in isolated galaxies. However, we also find that the effect of AGNs on SF is larger in major than in minor mergers. In minor merger simulations with different primary bulge-to-total ratios, the effect of bulge fraction on the merger-driven SF decreases due to AGN feedback. We create models of Sa-, Sb-, and Sc-type galaxies and compare their SF properties while undergoing mergers. With the current AGN prescriptions, the difference in merger-driven SF is not as pronounced as in the recent observational study of Kaviraj. We discuss the implications of this discrepancy.

  13. Field-aligned currents and ion convection at high altitudes

    NASA Technical Reports Server (NTRS)

    Burch, J. L.; Reiff, P. H.

    1985-01-01

    Hot plasma observations from Dynamics Explorer 1 have been used to investigate solar-wind ion injection, Birkeland currents, and plasma convection at altitudes above 2 earth-radii in the morning sector. The results of the study, along with the antiparallel merging hypothesis, have been used to construct a By-dependent global convection model. A significant element of the model is the coexistence of three types of convection cells (merging cells, viscous cells, and lobe cells). As the IMF direction varies, the model accounts for the changing roles of viscous and merging processes and makes testable predictions about several magnetospheric phenomena, including the newly-observed theta aurora in the polar cap.

  14. Incorporating Edge Information into Best Merge Region-Growing Segmentation

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Pasolli, Edoardo

    2014-01-01

    We have previously developed a best merge region-growing approach that integrates nonadjacent region object aggregation with the neighboring region merge process usually employed in region-growing segmentation approaches. This approach has been named HSeg, because it provides a hierarchical set of image segmentation results. Up to this point, HSeg considered only global region feature information in the region-growing decision process. We present here three new versions of HSeg that include local edge information in the region-growing decision process at different levels of rigor. We then compare the effectiveness and processing times of these new versions of HSeg with each other and with the original version of HSeg.
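    The best-merge idea can be sketched in one dimension: start from single-sample regions and repeatedly merge the most similar adjacent pair until a target region count is reached. This is adjacency-only and greatly simplified relative to HSeg, which also aggregates non-adjacent regions and incorporates edge information:

```python
def best_merge_segment(values, max_regions):
    """Greedy best-merge region growing on a 1-D signal: at each step,
    merge the adjacent pair of regions whose means differ least
    (a simplified, HSeg-style global-feature criterion)."""
    regions = [[v] for v in values]
    while len(regions) > max_regions:
        # dissimilarity of each adjacent region pair = |mean difference|
        diffs = [abs(sum(a) / len(a) - sum(b) / len(b))
                 for a, b in zip(regions, regions[1:])]
        i = diffs.index(min(diffs))
        regions[i:i + 2] = [regions[i] + regions[i + 1]]
    return regions
```

    Stopping at several values of `max_regions` yields the hierarchical set of segmentations that gives HSeg its name.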

  15. Merging Digital Medicine and Economics: Two Moving Averages Unlock Biosignals for Better Health.

    PubMed

    Elgendi, Mohamed

    2018-01-06

    Algorithm development in digital medicine necessitates ongoing knowledge and skills updating to match the current demands and constant progression in the field. In today's chaotic world there is an increasing trend to seek out simple solutions for complex problems that can increase efficiency, reduce resource consumption, and improve scalability. This desire has spilled over into the world of science and research, where many disciplines have taken to investigating and applying more simplistic approaches. Interestingly, a review of current literature and research efforts suggests that the learning and teaching principles in digital medicine continue to push towards the development of sophisticated algorithms with a limited scope, and have not fully embraced or encouraged a shift towards more simple solutions that yield equal or better results. This short note aims to demonstrate that within the world of digital medicine and engineering, simpler algorithms can offer effective and efficient solutions where traditionally more complex algorithms have been used. Moreover, the note demonstrates that bridging different research disciplines is very beneficial and yields valuable insights and results.
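    The two-moving-averages idea referenced in the title can be sketched as follows: a short-window "event" average and a long-window "cycle" average, with samples where the former exceeds the latter flagged as blocks of interest. Window sizes and the offset `beta` are signal-dependent, illustrative choices, not values from the paper:

```python
import numpy as np

def two_ma_events(signal, w_event, w_cycle, beta=0.0):
    """Two-moving-averages detector: mark samples where the short
    ('event') moving average exceeds the long ('cycle') moving average
    plus a small signal-dependent offset."""
    def mov_avg(x, w):
        return np.convolve(x, np.ones(w) / w, mode='same')
    ma_event = mov_avg(signal, w_event)
    ma_cycle = mov_avg(signal, w_cycle)
    threshold = ma_cycle + beta * np.mean(signal)
    return ma_event > threshold
```

    The appeal is exactly the note's point: two convolutions and a comparison can localize events (e.g. pulses in a biosignal) that are often attacked with far heavier machinery.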

  16. Efficient low-bit-rate adaptive mesh-based motion compensation technique

    NASA Astrophysics Data System (ADS)

    Mahmoud, Hanan A.; Bayoumi, Magdy A.

    2001-08-01

    This paper proposes a two-stage global motion estimation method using a novel quadtree block-based motion estimation technique and an active mesh model. In the first stage, motion parameters are estimated by fitting block-based motion vectors computed using a new efficient quadtree technique that divides a frame into equilateral triangle blocks using the quadtree structure. Arbitrary partition shapes are achieved by allowing 4-to-1, 3-to-1, and 2-to-1 merges of sibling blocks having the same motion vector. In the second stage, the mesh is constructed using an adaptive triangulation procedure that places more triangles over areas with high motion content; these areas are estimated during the first stage. Finally, motion compensation is achieved using a novel algorithm, carried out by both the encoder and the decoder, that determines the optimal triangulation of the resultant partitions, followed by affine mapping at the encoder. Computer simulation results show that the proposed method gives better performance than conventional ones in terms of the peak signal-to-noise ratio (PSNR) and the compression ratio (CR).

  17. Multi-sensor fusion of Landsat 8 thermal infrared (TIR) and panchromatic (PAN) images.

    PubMed

    Jung, Hyung-Sup; Park, Sung-Whan

    2014-12-18

    Data fusion is defined as the combination of data from multiple sensors such that the resulting information is better than would be possible when the sensors are used individually. The multi-sensor fusion of panchromatic (PAN) and thermal infrared (TIR) images is a good example of this data fusion. While a PAN image has higher spatial resolution, a TIR image has lower spatial resolution. In this study, we have proposed an efficient method to fuse Landsat 8 PAN and TIR images using an optimal scaling factor in order to control the trade-off between the spatial details and the thermal information. We have compared the fused images created from different scaling factors and then tested the performance of the proposed method at urban and rural test areas. The test results show that the proposed method merges the spatial resolution of the PAN image and the temperature information of the TIR image efficiently. The proposed method may be applied to detect lava flows of volcanic activity, radioactive exposure of nuclear power plants, and surface temperature change with respect to land-use change.
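    One common way to realize such a trade-off is high-pass modulation: add the PAN image's spatial detail to the upsampled TIR band, weighted by a scaling factor k. This is a generic scheme in the spirit of the paper, not its exact formulation; the box-blur low-pass and the parameter names are illustrative:

```python
import numpy as np

def box_blur(img, w=5):
    """Crude low-pass filter: local mean over a w-by-w box (edge-padded)."""
    pad = w // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    for dy in range(w):
        for dx in range(w):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (w * w)

def fuse_pan_tir(tir_up, pan, k):
    """High-pass modulation fusion: inject the PAN band's spatial detail
    into the (already upsampled) TIR band; k trades spatial detail
    against fidelity to the thermal values (k = 0 keeps pure TIR)."""
    detail = pan - box_blur(pan)   # high-frequency spatial component of PAN
    return tir_up + k * detail
```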

  18. Constructing Efficient and Stable Perovskite Solar Cells via Interconnecting Perovskite Grains.

    PubMed

    Hou, Xian; Huang, Sumei; Ou-Yang, Wei; Pan, Likun; Sun, Zhuo; Chen, Xiaohong

    2017-10-11

    A high-quality perovskite film with interconnected perovskite grains was obtained by incorporating a terephthalic acid (TPA) additive into the perovskite precursor solution. The presence of TPA changed the crystallization kinetics of the perovskite film and promoted lateral growth of grains in the vicinity of crystal boundaries. As a result, sheet-shaped perovskite was formed and covered the bottom grains, making some adjacent grains partly merge together to form a grain-interconnected perovskite film. Perovskite solar cells (PSCs) with the TPA additive exhibited a power conversion efficiency (PCE) of 18.51% with less hysteresis, which is markedly higher than that of pristine cells (15.53%). PSCs without and with the TPA additive retain 18% and 51% of the initial PCE value, respectively, after aging for 35 days at 30% relative humidity in air without encapsulation. Furthermore, the MAPbI3 film with the TPA additive shows superior thermal stability to the pristine one under 100 °C baking. The results indicate that the presence of TPA in the perovskite film can greatly improve the performance of PSCs as well as their moisture resistance and thermal stability.

  19. Merging history of three bimodal clusters

    NASA Astrophysics Data System (ADS)

    Maurogordato, S.; Sauvageot, J. L.; Bourdin, H.; Cappi, A.; Benoist, C.; Ferrari, C.; Mars, G.; Houairi, K.

    2011-01-01

    We present a combined X-ray and optical analysis of three bimodal galaxy clusters selected as merging candidates at z ~ 0.1. These targets are part of MUSIC (MUlti-Wavelength Sample of Interacting Clusters), which is a general project designed to study the physics of merging clusters by means of multi-wavelength observations. Observations include spectro-imaging with XMM-Newton EPIC camera, multi-object spectroscopy (260 new redshifts), and wide-field imaging at the ESO 3.6 m and 2.2 m telescopes. We build a global picture of these clusters using X-ray luminosity and temperature maps together with galaxy density and velocity distributions. Idealized numerical simulations were used to constrain the merging scenario for each system. We show that A2933 is very likely an equal-mass advanced pre-merger ~200 Myr before the core collapse, while A2440 and A2384 are post-merger systems (~450 Myr and ~1.5 Gyr after core collapse, respectively). In the case of A2384, we detect a spectacular filament of galaxies and gas spreading over more than 1 h-1 Mpc, which we infer to have been stripped during the previous collision. The analysis of the MUSIC sample allows us to outline some general properties of merging clusters: a strong luminosity segregation of galaxies in recent post-mergers; the existence of preferential axes - corresponding to the merging directions - along which the BCGs and structures on various scales are aligned; the concomitance, in most major merger cases, of secondary merging or accretion events, with groups infalling onto the main cluster, and in some cases the evidence of previous merging episodes in one of the main components. These results are in good agreement with the hierarchical scenario of structure formation, in which clusters are expected to form by successive merging events, and matter is accreted along large-scale filaments. 
Based on data obtained with the European Southern Observatory, Chile (programs 072.A-0595, 075.A-0264, and 079.A-0425).Tables 5-7 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/525/A79

  20. 3D geometric split-merge segmentation of brain MRI datasets.

    PubMed

    Marras, Ioannis; Nikolaidis, Nikolaos; Pitas, Ioannis

    2014-05-01

    In this paper, a novel method for MRI volume segmentation based on region adaptive splitting and merging is proposed. The method, called Adaptive Geometric Split Merge (AGSM) segmentation, aims at finding complex geometrical shapes that consist of homogeneous geometrical 3D regions. In each volume splitting step, several splitting strategies are examined and the most appropriate is activated. A way to find the maximal homogeneity axis of the volume is also introduced. Along this axis, the volume splitting technique divides the entire volume in a number of large homogeneous 3D regions, while at the same time, it defines more clearly small homogeneous regions within the volume in such a way that they have greater probabilities of survival at the subsequent merging step. Region merging criteria are proposed to this end. The presented segmentation method has been applied to brain MRI medical datasets to provide segmentation results when each voxel is composed of one tissue type (hard segmentation). The volume splitting procedure does not require training data, while it demonstrates improved segmentation performance in noisy brain MRI datasets, when compared to the state of the art methods.
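    The split-and-merge cycle can be illustrated in one dimension with a variance-based homogeneity test; the paper works in 3-D along a maximal-homogeneity axis with several splitting strategies, so this is only a schematic with illustrative names:

```python
import numpy as np

def split_regions(x, tol):
    """Splitting phase: recursively halve a 1-D intensity profile until
    every piece is homogeneous (variance <= tol)."""
    if len(x) <= 1 or np.var(x) <= tol:
        return [x]
    mid = len(x) // 2
    return split_regions(x[:mid], tol) + split_regions(x[mid:], tol)

def merge_regions(parts, tol):
    """Merging phase: join adjacent pieces whose combined variance still
    satisfies the homogeneity tolerance."""
    merged = [parts[0]]
    for p in parts[1:]:
        candidate = np.concatenate([merged[-1], p])
        if np.var(candidate) <= tol:
            merged[-1] = candidate
        else:
            merged.append(p)
    return merged
```

    The merge phase repairs the over-segmentation that blind halving produces, which is the same division of labor the 3-D method relies on.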

  1. Study on traffic characteristics for a typical expressway on-ramp bottleneck considering various merging behaviors

    NASA Astrophysics Data System (ADS)

    Sun, Jie; Li, Zhipeng; Sun, Jian

    2015-12-01

    Recurring bottlenecks on freeways/expressways are considered the main cause of traffic congestion in urban traffic systems, and on-ramp bottlenecks are the most significant sites that may result in congestion. In this paper, the traffic bottleneck characteristics of a simple and typical expressway on-ramp are investigated by means of simulation modeling under an open boundary condition. In the simulations, the running behavior of each vehicle is described by a car-following model with a calibrated optimal velocity function, and lane-changing actions at the merging section are modeled by a novel set of rules. We numerically derive the traffic volume of the on-ramp bottleneck under different upstream arrival rates of mainline and ramp flows. It is found that when the arrival rate of the mainline flow is greater than a critical value, the vehicles from the ramp strongly affect the passage of mainline vehicles, and the merging ratio changes with increasing ramp inflow. In addition, we clarify the dependence of the merging ratio of the on-ramp bottleneck on the probability of lane changing and the length of the merging section, and some corresponding intelligent control strategies are proposed for actual traffic applications.
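    The car-following core of such simulations can be sketched with an optimal velocity (OV) model, dv/dt = a(V(Δx) - v) with a tanh-form optimal velocity function V. The parameters below are illustrative, not the paper's calibration:

```python
import numpy as np

def optimal_velocity(headway, v_max=2.0, h_c=2.0):
    """A tanh-form optimal velocity function, as commonly used in
    calibrated car-following models (parameters illustrative)."""
    return v_max * (np.tanh(headway - h_c) + np.tanh(h_c)) / 2

def simulate_follower(lead_pos_fn, x0, v0, a=1.0, dt=0.1, steps=500):
    """Euler integration of the OV car-following law
    dv/dt = a * (V(headway) - v) behind a prescribed leader trajectory."""
    x, v = x0, v0
    for n in range(steps):
        headway = lead_pos_fn(n * dt) - x
        v += a * (optimal_velocity(headway) - v) * dt
        x += v * dt
    return x, v
```

    Behind a leader driving at a constant equilibrium speed, the follower relaxes to that speed and headway; instabilities of this relaxation in vehicle platoons are what seed the congestion studied at the merge.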

  2. Development of High-Field ST Merging Experiment: TS-U for High Power Reconnection Heating

    NASA Astrophysics Data System (ADS)

    Ono, Y.; Koike, H.; Tanabe, H.; Himeno, S.; Ishida, S.; Kimura, K.; Kawanami, M.; Narita, M.; Takahata, Y.; Yokoyama, T.; Inomoto, M.; Cheng, C. Z.

    2016-10-01

    We are developing the high-magnetic-field ST merging/reconnection experiment TS-U with Brec = 0.3-0.5 T, based on our scaling law of reconnection heating energy proportional to the square of the reconnecting (poloidal) magnetic field Brec. This scaling law indicates that high-Brec ST merging will heat ions to the burning plasma regime without any additional heating facility. The mechanism is that the reconnection outflow accelerates mainly ions, up to the poloidal Alfven speed, as in the Sweet-Parker model. Shock-like density pileups thermalize the accelerated ions in the downstream regions, in agreement with recent solar satellite observations and PIC simulation results. We have already documented significant ion heating in spheromak and ST merging, up to 0.25 keV in TS-3 and 1.2 keV in MAST, leading us to the high-Brec merging experiment TS-U. High-resolution (>500 channel) 2D measurements of ion and electron temperatures are being developed for the purpose of resolving all acceleration and heating effects of magnetic reconnection, such as the huge outflow heating of ions in the downstream and the electron heating localized at the X-point.

  3. Multi-Instrument Manager Tool for Data Acquisition and Merging of Optical and Electrical Mobility Size Distributions

    NASA Astrophysics Data System (ADS)

    Tritscher, Torsten; Koched, Amine; Han, Hee-Siew; Filimundi, Eric; Johnson, Tim; Elzey, Sherrie; Avenido, Aaron; Kykal, Carsten; Bischof, Oliver F.

    2015-05-01

    Electrical mobility classification (EC) followed by Condensation Particle Counter (CPC) detection is the technique combined in Scanning Mobility Particle Sizers (SMPS) to retrieve nanoparticle size distributions in the range from 2.5 nm to 1 μm. The detectable size range of SMPS systems can be extended by the addition of an Optical Particle Sizer (OPS) that covers larger sizes from 300 nm to 10 μm. This optical sizing method reports an optical equivalent diameter, which is often different from the electrical mobility diameter measured by the standard SMPS technique. Multi-Instrument Manager (MIM) software developed by TSI incorporates algorithms that facilitate merging SMPS data sets with data based on optical equivalent diameter to compile single, wide-range size distributions. Here we present MIM 2.0, the next generation of the data-merging tool, which offers many advanced features for data merging and post-processing. MIM 2.0 allows direct data acquisition with OPS and NanoScan SMPS instruments to retrieve real-time particle size distributions from 10 nm to 10 μm, which we show in a case study at a fireplace. The merged data can be adjusted using one of the merging options, which automatically determines an overall aerosol effective refractive index. As a result, an indirect and average characterization of aerosol optical and shape properties is possible. The merging tool allows several pre-settings, data averaging and adjustments, as well as the export of data sets and fitted graphs. MIM 2.0 also features several post-processing options for SMPS data, and differences can be visualized in a multi-peak sample over a narrow size range.

  4. Next-to-leading order QCD predictions for top-quark pair production with up to two jets merged with a parton shower

    DOE PAGES

    Höche, Stefan; Krauss, Frank; Maierhöfer, Philipp; ...

    2015-06-26

    We present differential cross sections for the production of top-quark pairs in conjunction with up to two jets, computed at next-to-leading order in perturbative QCD and consistently merged with a parton shower in the SHERPA+OPENLOOPS framework. Top quark decays including spin correlation effects are taken into account at leading order accuracy. The calculation yields a unified description of top-pair plus multi-jet production, and detailed results are presented for various key observables at the Large Hadron Collider. As a result, a large improvement with respect to the multi-jet merging approach at leading order is found for the total transverse energy spectrum, which plays a prominent role in searches for physics beyond the Standard Model.

  5. Optimization of Ocean Color Algorithms: Application to Satellite Data Merging

    NASA Technical Reports Server (NTRS)

    Maritorena, Stephane; Siegel, David A.; Morel, Andre

    2003-01-01

    The objective of our program is to develop and validate a procedure for ocean color data merging, which is one of the major goals of the SIMBIOS project. The need for a merging capability is dictated by the fact that, since the launch of MODIS on the Terra platform and over the next decade, several global ocean color missions from various space agencies are or will be operational simultaneously. The apparent redundancy of simultaneous ocean color missions can actually be exploited to various benefits. The most obvious benefit is improved coverage: the patchy and uneven daily coverage from any single sensor can be improved by using a combination of sensors. Besides improved coverage of the global ocean, the merging of ocean color data should also result in new, improved, more diverse and better data products with lower uncertainties. Ultimately, ocean color data merging should result in the development of a unified, scientific-quality ocean color time series, from SeaWiFS to NPOESS and beyond. Various approaches can be used for ocean color data merging, and several have been tested within the frame of the SIMBIOS program. As part of the SIMBIOS Program, we have developed a merging method for ocean color data. Unlike other methods, our approach does not combine end-products like the subsurface chlorophyll concentration (chl) from different sensors to generate a unified product. Instead, our procedure takes the normalized water-leaving radiances (L(sub WN)(lambda)) from single or multiple sensors and uses them in the inversion of a semi-analytical ocean color model that allows the retrieval of several ocean color variables simultaneously. Besides ensuring simultaneity and consistency of the retrievals (all products are derived from a single algorithm), this model-based approach has various benefits over techniques that blend end-products (e.g. chlorophyll): 1) it works with single or multiple data sources regardless of their specific bands, 2) it exploits band redundancies and band differences, 3) it accounts for uncertainties in the (L(sub WN)(lambda)) data and, 4) it provides uncertainty estimates for the retrieved variables.

  6. 77 FR 16849 - Notice of Realignment/Merger of Five Regional Audit Offices: Boston, MA Will Merge With New York...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-22

    ... of Five Regional Audit Offices: Boston, MA Will Merge With New York, NY; and the Gulf Coast Region... result from the reorganization; (3) a discussion of the impact on the local economy; and (4) an estimate... Department (such as the establishment of new or combination of existing organization units within a field...

  7. Optimal PMU placement using topology transformation method in power systems.

    PubMed

    Rahman, Nadia H A; Zobaa, Ahmed F

    2016-09-01

    Optimal phasor measurement unit (PMU) placement involves minimizing the number of PMUs needed while ensuring that the entire power system remains completely observable. A power system is observable when the voltages of all buses in the system are known. This paper proposes selection rules for a topology transformation method that merges each zero-injection bus with one of its neighbors. The outcome of the merging process depends on which neighboring bus is chosen to merge with the zero-injection bus. The proposed method determines the best candidate bus to merge with each zero-injection bus according to three rules, in order to find the minimum number of PMUs required for full observability of the power system. In addition, this paper also considers the case of power flow measurements. The problem is formulated as an integer linear program (ILP). The proposed method is simulated in MATLAB for different IEEE bus systems, and its operation is demonstrated on the IEEE 14-bus system. The results prove the effectiveness of the proposed method, since the number of PMUs obtained is comparable with other available techniques.
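    The underlying observability problem can be made concrete on a toy network. The sketch below is not the paper's ILP or its topology transformation: it brute-forces the minimum PMU set on a hypothetical 5-bus graph, using the basic rule that a PMU observes its own bus and all directly connected buses:

    ```python
    from itertools import combinations

    # Hypothetical 5-bus adjacency (not an IEEE case). A PMU on a bus observes
    # that bus and every neighbour; we want the fewest PMUs covering all buses.
    adj = {1: {2, 5}, 2: {1, 3, 4, 5}, 3: {2, 4}, 4: {2, 3, 5}, 5: {1, 2, 4}}

    def observed(pmus):
        seen = set()
        for b in pmus:
            seen.add(b)
            seen.update(adj[b])
        return seen

    def min_pmus():
        """Smallest PMU set (brute force; fine for toy systems only)."""
        buses = sorted(adj)
        for k in range(1, len(buses) + 1):
            for combo in combinations(buses, k):
                if observed(combo) == set(buses):
                    return combo
        return tuple(buses)

    min_pmus()  # -> (2,): bus 2 neighbours every other bus
    ```

    Real placements use an ILP over this covering constraint instead of enumeration, and the paper's contribution is to first shrink the graph by merging zero-injection buses into well-chosen neighbors before solving.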

  8. Binary partition tree analysis based on region evolution and its application to tree simplification.

    PubMed

    Lu, Huihai; Woods, John C; Ghanbari, Mohammed

    2007-04-01

    Pyramid image representations via tree structures are recognized methods for region-based image analysis. Binary partition trees document the merging process, with small details found at the bottom levels and larger ones close to the root. Hindsight of the merging process is stored within the tree structure and provides the change history of an image property from each leaf to the root node. In this work, the change histories are modelled by evolvement functions and their second-order statistics are analyzed using a knee function. Knee values show the reluctance of each merge. We have systematically formulated these findings into a novel framework for binary partition tree analysis, in which tree simplification is demonstrated. Based on an evolvement function, for each upward path in the tree, the node associated with the first reluctant merge is considered a pruning candidate. The result is a simplified version providing a reduced solution space that still complies with the definition of a binary tree. The experiments show that image details are preserved whilst the number of nodes is dramatically reduced. The method also yields an image filtering tool that preserves object boundaries and has applications in segmentation.
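    The "first reluctant merge" idea can be illustrated with a toy change history. This is a hypothetical simplification: the paper uses evolvement functions and a knee function on second-order statistics, whereas here a merge is flagged as reluctant simply when its cost jumps past an illustrative threshold:

    ```python
    # Sketch: given the change history of a region property along one
    # leaf-to-root path, flag the first merge the region "resisted".
    # History values and threshold are invented for illustration.

    def first_reluctant_merge(history, threshold=5.0):
        """Index of the first merge whose cost jump exceeds the threshold."""
        for i in range(1, len(history)):
            if history[i] - history[i - 1] > threshold:
                return i
        return None

    # cheap merges near the leaves, one expensive merge near the root
    costs = [0.2, 0.5, 0.9, 1.4, 9.0, 9.6]
    first_reluctant_merge(costs)  # -> 4: prune the node that merge created
    ```

    Pruning at the first reluctant merge on every upward path is what yields the simplified tree while keeping object boundaries intact.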

  9. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space.

    PubMed

    Kalathil, Shaeen; Elias, Elizabeth

    2015-11-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFBs have a simple and efficient design procedure: a non-uniform decomposition can be obtained by merging the appropriate filters of a uniform filter bank, so only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least-squares approximation. The coefficients are quantized into CSD using a look-up table. Finite-precision CSD rounding deteriorates the filter bank performance, which is then improved using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms modified and used in this paper are the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm; they result in filter banks with lower implementation complexity, power consumption and area requirements compared with the conventional continuous-coefficient non-uniform CMFB.
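    The CSD representation mentioned above can be computed with the standard non-adjacent-form recurrence; this sketch works on integers (a fixed-point coefficient scaled by 2^B) and is illustrative rather than the paper's look-up-table implementation:

    ```python
    # Canonic signed digit (CSD) conversion: digits in {-1, 0, +1} with no two
    # adjacent non-zeros, so a fixed-point multiply reduces to a few
    # shift-and-add/subtract operations.

    def to_csd(n):
        """Non-adjacent-form digits of a positive integer, LSB first."""
        digits = []
        while n != 0:
            if n % 2:
                d = 2 - (n % 4)   # +1 or -1, chosen so n - d is divisible by 4
                digits.append(d)
                n -= d
            else:
                digits.append(0)
            n //= 2
        return digits

    def from_csd(digits):
        return sum(d * 2 ** i for i, d in enumerate(digits))

    to_csd(7)  # [-1, 0, 0, 1]: 7 = 8 - 1, one subtraction instead of two adds
    ```

    Fewer non-zero digits means fewer adders per coefficient, which is exactly why rounding to CSD lowers implementation complexity at the cost of the accuracy the meta-heuristics then recover.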

  10. Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space

    PubMed Central

    Kalathil, Shaeen; Elias, Elizabeth

    2014-01-01

    This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFBs have a simple and efficient design procedure: a non-uniform decomposition can be obtained by merging the appropriate filters of a uniform filter bank, so only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least-squares approximation. The coefficients are quantized into CSD using a look-up table. Finite-precision CSD rounding deteriorates the filter bank performance, which is then improved using suitably modified meta-heuristic algorithms. The meta-heuristic algorithms modified and used in this paper are the Artificial Bee Colony algorithm, the Gravitational Search algorithm, the Harmony Search algorithm and the Genetic algorithm; they result in filter banks with lower implementation complexity, power consumption and area requirements compared with the conventional continuous-coefficient non-uniform CMFB. PMID:26644921

  11. Subaru adaptive-optics high-spatial-resolution infrared K- and L'-band imaging search for deeply buried dual AGNs in merging galaxies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imanishi, Masatoshi; Saito, Yuriko, E-mail: masa.imanishi@nao.ac.jp

    2014-01-01

    We present the results of infrared K- (2.2 μm) and L'-band (3.8 μm) high-spatial-resolution (<0.''2) imaging observations of nearby gas- and dust-rich infrared luminous merging galaxies, assisted by the adaptive optics system on the Subaru 8.2 m telescope. We investigate the presence and frequency of red K – L' compact sources, which are sensitive indicators of active galactic nuclei (AGNs), including AGNs that are deeply buried in gas and dust. We observed 29 merging systems and confirmed at least one AGN in all but one system. However, luminous dual AGNs were detected in only four of the 29 systems (∼14%), despite our method's being sensitive to buried AGNs. For multiple nuclei sources, we compared the estimated AGN luminosities with supermassive black hole (SMBH) masses inferred from large-aperture K-band stellar emission photometry in individual nuclei. We found that mass accretion rates onto SMBHs are significantly different among multiple SMBHs, such that larger-mass SMBHs generally show higher mass accretion rates when normalized to SMBH mass. Our results suggest that non-synchronous mass accretion onto SMBHs in gas- and dust-rich infrared luminous merging galaxies hampers the observational detection of kiloparsec-scale multiple active SMBHs. This could explain the significantly smaller detection fraction of kiloparsec-scale dual AGNs when compared with the number expected from simple theoretical predictions. Our results also indicate that mass accretion onto SMBHs is dominated by local conditions, rather than by global galaxy properties, reinforcing the importance of observations to our understanding of how multiple SMBHs are activated and acquire mass in gas- and dust-rich merging galaxies.

  12. Filtered Rayleigh scattering mixing measurements of merging and non-merging streamwise vortex interactions in supersonic flow

    NASA Astrophysics Data System (ADS)

    Ground, Cody R.; Gopal, Vijay; Maddalena, Luca

    2018-04-01

    By introducing large-scale streamwise vortices into a supersonic flow it is possible to enhance the rate of mixing between two fluid streams. However, increased vorticity content alone does not explicitly serve as a predictor of mixing enhancement. Additional factors, particularly the mutual interactions occurring between neighboring vortical structures, affect the underlying fundamental physics that influence the rate at which the fluids mix. As part of a larger systematic study on supersonic streamwise vortex interactions, this work experimentally quantifies the average rate of mixing of helium and air in the presence of two separate modes of vortex interaction, the merging and non-merging of a pair of co-rotating vortices. In these experiments vortex-generating expansion ramps are placed on a strut injector. The freestream Mach number is set at 2.5 and helium is injected as a passive scalar. Average injectant mole fractions at selected flow planes downstream of the injector are measured utilizing the filtered Rayleigh scattering technique. The filtered Rayleigh scattering measurements reveal that, in the domain surveyed, the merging vortex interaction strongly displaces the plume from its initial horizontal orientation while the non-merging vortex interaction more rapidly mixes the helium and air. The results of the current experiments are consistent with associated knowledge derived from previous analyses of the two studied configurations which have included the detailed experimental characterization of entrainment, turbulent kinetic energy, and vorticity of both modes of vortex interaction.

  13. Bouncing-to-Merging Transition in Drop Impact on Liquid Film: Role of Liquid Viscosity.

    PubMed

    Tang, Xiaoyu; Saha, Abhishek; Law, Chung K; Sun, Chao

    2018-02-27

    When a drop impacts on a liquid surface, it can either bounce back or merge with the surface. The outcome affects many industrial processes: merging is preferred in spray coating to generate a uniform layer, while bouncing is desired in internal combustion engines to prevent accumulation of fuel drops on the wall. Thus, a good understanding of how to control the impact outcome is highly desirable for optimizing performance. For a given liquid, a regime diagram of bouncing and merging outcomes can be mapped in the space of Weber number (the ratio of impact inertia to surface tension) versus film thickness. In addition, recognizing that liquid viscosity is a fundamental fluid property that critically affects the impact outcome through viscous dissipation of the impact momentum, here we investigate liquids with a wide range of viscosities, from 0.7 to 100 cSt, to assess the effect on the regime diagram. Results show that while the regime diagram maintains its general structure, the merging regime becomes smaller for more viscous liquids, and the retraction-merging regime disappears when the viscosity is very high. The viscous effects are modeled and mathematical relations for the transition boundaries are proposed, which agree well with the experiments. The new expressions account for all the liquid properties and impact conditions, thus providing a powerful tool to predict and manipulate the outcome when a drop impacts on a liquid film.
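    The Weber number that parameterizes the regime diagram is a one-line formula. The numbers below are illustrative values for a roughly 2 mm water drop, not data from the paper:

    ```python
    # We = impact inertia / surface tension = rho * v**2 * D / sigma
    # (rho: density kg/m^3, v: impact speed m/s, D: drop diameter m,
    #  sigma: surface tension N/m)

    def weber(rho, velocity, diameter, sigma):
        return rho * velocity ** 2 * diameter / sigma

    # illustrative: water drop, D = 2 mm, impacting at 0.5 m/s
    We = weber(rho=998.0, velocity=0.5, diameter=2e-3, sigma=0.072)
    # ~6.9: low-We impacts tend to sit in the bouncing part of the diagram
    ```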

  14. Star Formation in Merging Galaxies Using FIRE

    NASA Astrophysics Data System (ADS)

    Perez, Adrianna; Hung, Chao-Ling; Naiman, Jill; Moreno, Jorge; Hopkins, Philip

    2018-01-01

    Galaxy interactions and mergers are efficient mechanisms for forming stars at rates significantly higher than found in our Milky Way galaxy. The Kennicutt-Schmidt (KS) relation is an empirical relationship between the star formation rate and gas surface densities of galaxies (Schmidt 1959; Kennicutt 1998). Although most galaxies follow the KS relation, the high levels of star formation in galaxy mergers place them outside this otherwise tight relationship. The goal of this research is to analyze the gas content and star formation of simulated merging galaxies. Our work utilizes the Feedback In Realistic Environments (FIRE) model (Hopkins et al., 2014). The FIRE project is a high-resolution cosmological simulation that resolves star-forming regions and incorporates stellar feedback in a physically realistic way. In this work, we have noticed a significant increase in the star formation rate at first and second passage, when the two black holes of the galaxies approach one another. Next, we will analyze spatially resolved star-forming regions over the course of the interacting system. Then we can study when and how the rates at which gas converts into stars deviate from the standard KS relation. These analyses will provide important insights into the physical mechanisms that regulate star formation in normal and merging galaxies, and valuable theoretical predictions that can be compared with current and future observations from ALMA or the James Webb Space Telescope.
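    The KS relation referred to above is commonly written as a power law, Sigma_SFR = A * Sigma_gas^1.4 (Kennicutt 1998). The normalization and the example surface densities below are illustrative round numbers, not values from this work:

    ```python
    # Kennicutt-Schmidt power law: Sigma_SFR = A * Sigma_gas**n, with n ~ 1.4.
    # Units (illustrative): Sigma_gas in Msun/pc^2, Sigma_SFR in Msun/yr/kpc^2.

    def ks_sfr_density(sigma_gas, a=2.5e-4, n=1.4):
        return a * sigma_gas ** n

    normal = ks_sfr_density(10.0)    # quiescent-disk gas surface density
    burst = ks_sfr_density(1000.0)   # merger-driven nuclear gas density
    # 100x denser gas forms stars ~631x faster per unit area (100**1.4)
    ```

    Merger-induced starbursts lying *above* this locus (more star formation than the power law predicts for their gas density) is precisely the deviation the study sets out to measure in the FIRE runs.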

  15. Flow around a corrugated wing over the range of dragonfly flight

    NASA Astrophysics Data System (ADS)

    Padinjattayil, Sooraj; Agrawal, Amit

    2017-11-01

    Dragonfly flight is strongly affected by the corrugations on the wings. A PIV-based study is conducted on a rigid corrugated wing over a range of Reynolds numbers of 300-12000 and three angles of attack (5°-15°) to better understand the mechanism of dragonfly flight. The study revealed that the shape of the corrugation plays a key role in generating vortices. The vortices trapped in the valleys of the corrugation dictate the shape of a virtual airfoil around the corrugated wing. A fluid roller-bearing effect is created over the virtual airfoil when the trapped vortices merge with each other. A travelling wave produced by the moving virtual boundary around the fluid roller bearings prevents the formation of a boundary layer on the virtual surface, thereby leading to high aerodynamic performance. It is found that the lift coefficient increases as the number of vortices on the suction surface increases. It is also shown that partially merged co-rotating vortices give higher lift than fully merged vortices. Further, the virtual airfoil formed around the corrugated wing is compared with a superhydrophobic airfoil, which exhibits slip on its surface; several similarities in their flow characteristics are observed. The corrugated airfoil outperforms the superhydrophobic airfoil in aerodynamic efficiency due to the virtual slip caused by the travelling wave.

  16. A parallel algorithm for multi-level logic synthesis using the transduction method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Lim, Chieng-Fai

    1991-01-01

    The Transduction Method has been shown to be a powerful tool in the optimization of multilevel networks. Many tools such as the SYLON synthesis system (X90), (CM89), (LM90) have been developed based on this method. A parallel implementation is presented of SYLON-XTRANS (XM89) on an eight processor Encore Multimax shared memory multiprocessor. It minimizes multilevel networks consisting of simple gates through parallel pruning, gate substitution, gate merging, generalized gate substitution, and gate input reduction. This implementation, called Parallel TRANSduction (PTRANS), also uses partitioning to break large circuits up and performs inter- and intra-partition dynamic load balancing. With this, good speedups and high processor efficiencies are achievable without sacrificing the resulting circuit quality.

  17. BamTools: a C++ API and toolkit for analyzing and managing BAM files

    PubMed Central

    Barnett, Derek W.; Garrison, Erik K.; Quinlan, Aaron R.; Strömberg, Michael P.; Marth, Gabor T.

    2011-01-01

    Motivation: Analysis of genomic sequencing data requires efficient, easy-to-use access to alignment results and flexible data management tools (e.g. filtering, merging, sorting, etc.). However, the enormous amount of data produced by current sequencing technologies is typically stored in compressed, binary formats that are not easily handled by the text-based parsers commonly used in bioinformatics research. Results: We introduce a software suite for programmers and end users that facilitates research analysis and data management using BAM files. BamTools provides both the first C++ API publicly available for BAM file support as well as a command-line toolkit. Availability: BamTools was written in C++, and is supported on Linux, Mac OSX and MS Windows. Source code and documentation are freely available at http://github.org/pezmaster31/bamtools. Contact: barnetde@bc.edu PMID:21493652

  18. Merging Photoredox and Nickel Catalysis: Decarboxylative Cross-Coupling of Carboxylic Acids with Vinyl Halides

    PubMed Central

    2015-01-01

    Decarboxylative cross-coupling of alkyl carboxylic acids with vinyl halides has been accomplished through the synergistic merger of photoredox and nickel catalysis. This new methodology has been successfully applied to a variety of α-oxy and α-amino acids, as well as simple hydrocarbon-substituted acids. Diverse vinyl iodides and bromides give rise to vinylation products in high efficiency under mild, operationally simple reaction conditions. PMID:25521443

  19. High Fidelity and Multiscale Algorithms for Collisional-radiative and Nonequilibrium Plasmas (Briefing Charts)

    DTIC Science & Technology

    2014-07-01

    [Fragments recovered from the briefing charts:] ... of models for variable conditions: use implicit models to eliminate the constraint of a sequence of fast time scales (price to pay: lack ...); collisions: elastic (Braginskii terms) and inelastic (warning: rates depend on both T and relative velocity); multi-fluid CR model; merge/split for particle management, efficient sampling, inelastic collisions ...; level-grouping schemes of electronic states, for dynamical coarse ...

  20. Comparison of Landsat MSS and merged MSS/RBV data for analysis of natural vegetation

    NASA Technical Reports Server (NTRS)

    Roller, N. E. G.; Cox, S.

    1980-01-01

    Improved resolution could make satellite remote sensing data more useful for surveys of natural vegetation. Although improved satellite/sensor systems appear to be several years away, one potential interim solution to the problem of achieving greater resolution without sacrificing spectral sensitivity is through the merging of Landsat RBV and MSS data. This paper describes the results of a study performed to obtain a preliminary evaluation of the usefulness of two types of products that can be made by merging Landsat RBV and MSS data. The products generated were a false color composite image and a computer recognition map. Of these two products, the false color composite image appears to be the most useful.

  1. An optimal merging technique for high-resolution precipitation products: OPTIMAL MERGING OF PRECIPITATION METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, Roshan; Houser, Paul R.; Anantharaj, Valentine G.

    2011-04-01

    Precipitation products are currently available from various sources at higher spatial and temporal resolution than any time in the past. Each of the precipitation products has its strengths and weaknesses in availability, accuracy, resolution, retrieval techniques and quality control. By merging the precipitation data obtained from multiple sources, one can improve its information content by minimizing these issues. However, precipitation data merging poses challenges of scale-mismatch, and accurate error and bias assessment. In this paper we present Optimal Merging of Precipitation (OMP), a new method to merge precipitation data from multiple sources that are of different spatial and temporal resolutions and accuracies. This method is a combination of scale conversion and merging weight optimization, involving performance-tracing based on Bayesian statistics and trend-analysis, which yields merging weights for each precipitation data source. The weights are optimized at multiple scales to facilitate multiscale merging and better precipitation downscaling. Precipitation data used in the experiment include products from the 12-km resolution North American Land Data Assimilation (NLDAS) system, the 8-km resolution CMORPH and the 4-km resolution National Stage-IV QPE. The test cases demonstrate that the OMP method is capable of identifying a better data source and allocating a higher priority for them in the merging procedure, dynamically over the region and time period. This method is also effective in filtering out poor quality data introduced into the merging process.
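    The core of any weighted multi-source merge can be sketched with inverse-error-variance weights. This is a minimal stand-in for OMP's Bayesian performance tracing: the error variances below are invented, and real products would first be brought to a common grid:

    ```python
    # Weighted merge of co-located precipitation estimates: each source is
    # weighted by the inverse of its (assumed known) error variance, so more
    # accurate products dominate and poor-quality data is down-weighted.

    def merge_products(values, error_vars):
        """Inverse-error-variance weighted average of co-located estimates."""
        weights = [1.0 / v for v in error_vars]
        total = sum(weights)
        return sum(w * x for w, x in zip(weights, values)) / total

    # three estimates (mm/h) with illustrative error variances: the first
    # product is trusted most, the second least
    merged = merge_products([4.0, 5.0, 6.0], [1.0, 4.0, 2.0])
    ```

    OMP goes further by updating such weights dynamically over region and time from tracked performance, and by optimizing them at multiple scales, but the weighted-average backbone is the same.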

  2. NIMROD simulations of the IPA FRC experiment

    NASA Astrophysics Data System (ADS)

    Milroy, Richard

    2015-11-01

    The IPA experiment created a high-temperature plasma by merging and compressing supersonic θ-pinch-formed FRCs. The NIMROD code has been used to simulate this process. The calculations include the θ-pinch formation and acceleration of two FRCs using the dynamic formation methodology, and their translation to a central compression chamber where they merge and are magnetically compressed. Transport coefficients have been tuned so that simulation results agree well with experimental observation. The inclusion of the Hall term is essential for the FRCs to merge quickly, as observed experimentally through the excluded flux profiles, and the inclusion of a significant anisotropic viscosity is required for the excluded flux profiles to agree well with the experiment. We plan to extend this validation work using the new ARPA-E-funded Venti experiment at Helion Energy in Redmond, WA. This will be a very well diagnosed experiment in which two FRCs merge (as in the IPA experiment) and are then compressed to near-fusion conditions. Preliminary calculations with parameters relevant to this experiment have been made, and some numerical issues identified.

  3. Merging of the Dirac points in electronic artificial graphene

    NASA Astrophysics Data System (ADS)

    Feilhauer, J.; Apel, W.; Schweitzer, L.

    2015-12-01

    Theory predicts that graphene under uniaxial compressive strain in an armchair direction should undergo a topological phase transition from a semimetal into an insulator. Due to the change of the hopping integrals under compression, both Dirac points shift away from the corners of the Brillouin zone towards each other. For sufficiently large strain, the Dirac points merge and an energy gap appears. However, such a topological phase transition has not yet been observed, neither in normal graphene (due to its large stiffness) nor in any other electronic system. We show numerically and analytically that such a merging of the Dirac points can be observed in electronic artificial graphene created from a two-dimensional electron gas by applying a triangular lattice of repulsive antidots. Here, the effect of strain is modeled by tuning the distance between the repulsive potentials along the armchair direction. Our results show that the merging of the Dirac points should be observable in a recent experiment with molecular graphene.
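    The merging transition can be seen in the simplest anisotropic tight-binding picture: one hopping t1 on the strained (armchair) bonds and t on the other two. Along the line in the Brillouin zone on which the two Dirac points approach each other, the band magnitude reduces to |t1 + 2t cos(k)|, which vanishes at two k-points while t1 < 2t and develops a gap |t1 - 2t| once t1 > 2t. A numerical scan (an illustrative sketch, not the paper's antidot calculation):

    ```python
    import math

    # Scan |t1 + 2 t cos(k)| over one Brillouin-zone line: the minimum is the
    # gap. t1 is the strained-bond hopping, t the other two bonds' hopping.

    def gap(t1, t, n=20001):
        return min(abs(t1 + 2 * t * math.cos(2 * math.pi * i / n))
                   for i in range(n))

    gap(1.0, 1.0)  # ~0: two distinct Dirac points, semimetal
    gap(2.5, 1.0)  # ~0.5 = |t1 - 2t|: Dirac points merged, gap opened
    ```

    In the antidot lattice, squeezing the repulsive potentials along the armchair direction plays the role of increasing t1 relative to t.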

  4. A GPU-Accelerated Approach for Feature Tracking in Time-Varying Imagery Datasets.

    PubMed

    Peng, Chao; Sahani, Sandip; Rushing, John

    2017-10-01

    We propose a novel parallel connected component labeling (CCL) algorithm along with efficient out-of-core data management to detect and track feature regions of large time-varying imagery datasets. Our approach contributes to the big data field with parallel algorithms tailored for GPU architectures. We remove the data dependency between frames and achieve pixel-level parallelism. Due to the large size, the entire dataset cannot fit into cached memory. Frames have to be streamed through the memory hierarchy (disk to CPU main memory and then to GPU memory), partitioned, and processed as batches, where each batch is small enough to fit into the GPU. To reconnect the feature regions that are separated due to data partitioning, we present a novel batch merging algorithm to extract the region connection information across multiple batches in a parallel fashion. The information is organized in a memory-efficient structure and supports fast indexing on the GPU. Our experiment uses a commodity workstation equipped with a single GPU. The results show that our approach can efficiently process a weather dataset composed of terabytes of time-varying radar images. The advantages of our approach are demonstrated by comparing to the performance of an efficient CPU cluster implementation which is being used by the weather scientists.
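    Reconnecting labels across batch boundaries is naturally expressed with union-find. The sketch below is a sequential toy (a single one-pixel seam between two batches, with invented labels), whereas the paper extracts the connection information on the GPU in parallel:

    ```python
    # Labels are (batch, id) pairs computed independently per batch; regions
    # touching across the seam between batches are unified with union-find.

    class UnionFind:
        def __init__(self):
            self.parent = {}

        def find(self, x):
            self.parent.setdefault(x, x)
            while self.parent[x] != x:            # path halving
                self.parent[x] = self.parent[self.parent[x]]
                x = self.parent[x]
            return x

        def union(self, a, b):
            self.parent[self.find(a)] = self.find(b)

    # last row of batch 0 and first row of batch 1 (label 0 = background)
    seam_top = [("b0", 1), ("b0", 1), (None, 0), ("b0", 2)]
    seam_bottom = [("b1", 7), (None, 0), (None, 0), ("b1", 9)]

    uf = UnionFind()
    for top, bottom in zip(seam_top, seam_bottom):
        if top[1] and bottom[1]:                  # both foreground: connect
            uf.union(top, bottom)

    uf.find(("b0", 1)) == uf.find(("b1", 7))  # True: one region spans the seam
    uf.find(("b0", 1)) == uf.find(("b0", 2))  # False: still separate regions
    ```

    The GPU version does the same logical job but stores the connection table in a flat, memory-efficient structure that supports fast parallel indexing.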

  5. Integrating Hospital Administrative Data to Improve Health Care Efficiency and Outcomes: “The Socrates Story”

    PubMed Central

    Lawrence, Justin; Delaney, Conor P.

    2013-01-01

    Evaluation of health care outcomes has become increasingly important as we strive to improve quality and efficiency while controlling cost. Many groups feel that analysis of large datasets will be useful in optimizing resource utilization; however, the ideal blend of clinical and administrative data points has not been developed. Hospitals and health care systems have several tools to measure cost and resource utilization, but the data are often housed in disparate systems that are not integrated and do not permit multisystem analysis. Systems Outcomes and Clinical Resources AdministraTive Efficiency Software (SOCRATES) is a novel data merging, warehousing, analysis, and reporting technology, which brings together disparate hospital administrative systems generating automated or customizable risk-adjusted reports. Used in combination with standardized enhanced care pathways, SOCRATES offers a mechanism to improve the quality and efficiency of care, with the ability to measure real-time changes in outcomes. PMID:24436649

  7. Short-range quantitative precipitation forecasting using Deep Learning approaches

    NASA Astrophysics Data System (ADS)

    Akbari Asanjan, A.; Yang, T.; Gao, X.; Hsu, K. L.; Sorooshian, S.

    2017-12-01

    Predicting short-range quantitative precipitation is very important for flood forecasting, early flood warning and other hydrometeorological purposes. This study aims to improve precipitation forecasting skill using a recently developed and advanced machine learning technique named Long Short-Term Memory (LSTM). The proposed LSTM learns the changing patterns of clouds from Cloud-Top Brightness Temperature (CTBT) images, retrieved from the infrared channel of the Geostationary Operational Environmental Satellite (GOES), using a sophisticated and effective learning method. After learning the dynamics of clouds, the LSTM model predicts the upcoming rainy CTBT events. The proposed model is then merged with a precipitation estimation algorithm termed Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks (PERSIANN) to provide precipitation forecasts. The results of the LSTM merged with PERSIANN are compared to those of an Elman-type Recurrent Neural Network (RNN) merged with PERSIANN and to the Final Analysis of the Global Forecast System model over the states of Oklahoma, Florida and Oregon. The performance of each model is investigated during 3 storm events, each located over one of the study regions. The results indicate that the merged LSTM forecasts outperform the numerical and statistical baselines in terms of Probability of Detection (POD), False Alarm Ratio (FAR), Critical Success Index (CSI), RMSE and correlation coefficient, especially in convective systems. The proposed method shows superior capabilities in short-term forecasting over the compared methods.
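
    The LSTM gating equations that such a model relies on can be illustrated with a toy scalar cell; the hand-picked weights and the input sequence below are purely illustrative, not the trained PERSIANN-coupled network:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One scalar LSTM cell step. w holds (input, recurrent, bias)
    weights for the input, forget and output gates and the candidate."""
    i = sigmoid(w['wi'] * x + w['ui'] * h_prev + w['bi'])    # input gate
    f = sigmoid(w['wf'] * x + w['uf'] * h_prev + w['bf'])    # forget gate
    o = sigmoid(w['wo'] * x + w['uo'] * h_prev + w['bo'])    # output gate
    g = math.tanh(w['wg'] * x + w['ug'] * h_prev + w['bg'])  # candidate
    c = f * c_prev + i * g     # cell state: gated memory update
    h = o * math.tanh(c)       # hidden state / output
    return h, c

# Toy weights and a toy (normalized) brightness-temperature sequence:
w = {k: 0.5 for k in ('wi','ui','bi','wf','uf','bf','wo','uo','bo','wg','ug','bg')}
h, c = 0.0, 0.0
for x in (0.1, 0.4, 0.2):
    h, c = lstm_step(x, h, c, w)
```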

  8. Merging Station Observations with Large-Scale Gridded Data to Improve Hydrological Predictions over Chile

    NASA Astrophysics Data System (ADS)

    Peng, L.; Sheffield, J.; Verbist, K. M. J.

    2016-12-01

    Hydrological predictions at regional-to-global scales are often hampered by the lack of meteorological forcing data. The use of large-scale gridded meteorological data is able to overcome this limitation, but these data are subject to regional biases and unrealistic values at the local scale. This is especially challenging in regions such as Chile, where the climate exhibits high spatial heterogeneity as a result of the country's long latitudinal span and dramatic elevation changes. However, regional station-based observational datasets are not fully exploited and have the potential to constrain biases and spatial patterns. This study aims at adjusting precipitation and temperature estimates from the Princeton University global meteorological forcing (PGF) gridded dataset to improve hydrological simulations over Chile, by assimilating 982 gauges from the Dirección General de Aguas (DGA). To merge station data with the gridded dataset, we use a state-space estimation method to produce optimal gridded estimates, considering both the error of the station measurements and that of the gridded PGF product. The PGF daily precipitation, maximum and minimum temperature at 0.25° spatial resolution are adjusted for the period 1979-2010. Precipitation and temperature gauges with long and continuous records (>70% temporal coverage) are selected, while the remaining stations are used for validation. Leave-one-out cross validation verifies the robustness of this data assimilation approach. The merged dataset is then used to force the Variable Infiltration Capacity (VIC) hydrological model over Chile at a daily time step, and the simulated streamflow is compared to observations. Our initial results show that the station-merged PGF precipitation effectively captures drizzle and the spatial pattern of storms. Overall the merged dataset shows significant improvements over the original PGF, with reduced biases and stronger inter-annual variability. The invariant spatial pattern of errors between the station data and the gridded product opens up the possibility of merging real-time satellite and intermittent gauge observations to produce more accurate real-time hydrological predictions.
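
    A state-space merge weights each source by its error. For a single grid cell and time step this reduces to the inverse-variance (optimal linear) combination sketched below; the variances are made-up illustrative numbers, not DGA or PGF error estimates:

```python
def merge_estimates(gridded, var_gridded, station, var_station):
    """Optimal linear (inverse-variance weighted) merge of a gridded
    value with a station observation for one cell and time step."""
    w = var_gridded / (var_gridded + var_station)  # gain toward the station
    merged = gridded + w * (station - gridded)
    merged_var = (1.0 - w) * var_gridded           # reduced uncertainty
    return merged, merged_var

# Gridded PGF says 10 mm (variance 4); a nearby gauge says 16 mm (variance 1).
value, var = merge_estimates(10.0, 4.0, 16.0, 1.0)
# value = 14.8 mm, var = 0.8: pulled strongly toward the more reliable gauge
```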

  9. Automated separation of merged Langerhans islets

    NASA Astrophysics Data System (ADS)

    Švihlík, Jan; Kybic, Jan; Habart, David

    2016-03-01

    This paper deals with the separation of merged Langerhans islets in segmentations, in order to evaluate a correct histogram of islet diameters. The distribution of islet diameters is useful for determining the feasibility of islet transplantation in diabetes. First, the merged islets in the training segmentations are manually separated by medical experts. Based on the single islets, the merged islets are identified and an SVM classifier is trained on both classes (merged/single islets). The testing segmentations are then over-segmented using the watershed transform, and the most probable merging-back of islets is found using the trained SVM classifier. Finally, the optimized segmentation is compared with the ground truth segmentation (correctly separated islets).

  10. MetaGenomic Assembly by Merging (MeGAMerge)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scholz, Matthew B.; Lo, Chien-Chi

    2015-08-03

    "MetaGenomic Assembly by Merging" (MeGAMerge) is a novel method for merging multiple genomic assemblies or long-read data sources into a single assembly. It applies internal trimming/filtering of the data, followed by two third-party tools that merge the data via overlap-based assembly.
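
    Overlap-based merging of reads or contigs, as the downstream assemblers perform it, can be sketched in miniature with a naive suffix/prefix scan; this toy function is illustrative, not one of the actual third-party tools:

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is a prefix of b,
    requiring at least min_len matching bases."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def merge_reads(a, b, min_len=3):
    """Merge two sequences by their best suffix/prefix overlap, trying
    either order; return both unchanged if no overlap is found."""
    k_ab, k_ba = overlap(a, b, min_len), overlap(b, a, min_len)
    if k_ab == k_ba == 0:
        return (a, b)
    if k_ab >= k_ba:
        return (a + b[k_ab:],)
    return (b + a[k_ba:],)

# Two contigs sharing a 4-base overlap merge into one:
print(merge_reads("ACGTACGT", "ACGTTTGA"))
```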

  11. MaMR: High-performance MapReduce programming model for material cloud applications

    NASA Astrophysics Data System (ADS)

    Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng

    2017-02-01

    With the increasing data size in materials science, existing programming models no longer satisfy application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related datasets, and its processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we defined a programming model for material cloud applications, called MaMR, that supports multiple different Map and Reduce functions running concurrently on a hybrid shared-memory BSP model. An optimized data-sharing strategy to supply the shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework achieve effective performance improvements over previous work.
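
    The idea of an extra merge phase on top of MapReduce can be sketched as follows; the tiny map/reduce driver and the material records are illustrative, not the MaMR framework itself:

```python
from collections import defaultdict

def map_phase(records, map_fn):
    """Apply map_fn to each record and group emitted (key, value) pairs."""
    out = defaultdict(list)
    for rec in records:
        for k, v in map_fn(rec):
            out[k].append(v)
    return out

def reduce_phase(grouped, reduce_fn):
    return {k: reduce_fn(vs) for k, vs in grouped.items()}

def merge_phase(*reduced):
    """Extra phase combining the results of several Map/Reduce pipelines
    that ran concurrently on related datasets (a sketch of the concept)."""
    merged = defaultdict(list)
    for result in reduced:
        for k, v in result.items():
            merged[k].append(v)
    return dict(merged)

# Two related material datasets processed by separate pipelines, then merged:
r1 = reduce_phase(map_phase([("Fe", 2.1), ("Cu", 1.3)], lambda r: [r]), sum)
r2 = reduce_phase(map_phase([("Fe", 0.9)], lambda r: [r]), sum)
out = merge_phase(r1, r2)
print(out)
```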

  12. Efficient architecture for spike sorting in reconfigurable hardware.

    PubMed

    Hwang, Wen-Jyi; Lee, Wei-Hao; Lin, Shiow-Jyu; Lai, Sheng-Ying

    2013-11-01

    This paper presents a novel hardware architecture for fast spike sorting. The architecture is able to perform both feature extraction and clustering in hardware. The generalized Hebbian algorithm (GHA) and the fuzzy C-means (FCM) algorithm are used for feature extraction and clustering, respectively. The employment of GHA allows efficient computation of principal components for subsequent clustering operations. The FCM is able to achieve near-optimal clustering for spike sorting, and its performance is insensitive to the selection of initial cluster centers. The hardware implementations of GHA and FCM feature low area costs and high throughput. In the GHA architecture, the computation of different weight vectors shares the same circuit to lower the area costs. Moreover, in the FCM hardware implementation, the usual iterative operations for updating the membership matrix and cluster centroids are merged into a single updating process to avoid the large storage requirement. To show the effectiveness of the circuit, the proposed architecture is physically implemented on a field-programmable gate array (FPGA) and embedded in a System-on-Chip (SoC) platform for performance measurement. Experimental results show that the proposed architecture is an efficient spike sorting design, attaining a high classification correct rate and high-speed computation.
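
    The merged FCM updating process, in which membership values feed the centroid accumulators in a single pass instead of being stored as a full membership matrix, can be sketched in software for 1-D samples; this is an illustration of the fused update, not the hardware design:

```python
def fcm_step(data, centers, m=2.0):
    """One fused FCM iteration: membership and centroid updates are
    computed in a single pass over the data, so no membership matrix
    is ever stored (1-D samples, fuzzifier m)."""
    num = [0.0] * len(centers)   # running centroid numerators
    den = [0.0] * len(centers)   # running centroid denominators
    for x in data:
        d = [abs(x - c) + 1e-12 for c in centers]      # distances (no /0)
        inv = [dd ** (-2.0 / (m - 1.0)) for dd in d]   # standard FCM weights
        s = sum(inv)
        for j in range(len(centers)):
            u = inv[j] / s              # membership of x in cluster j
            num[j] += (u ** m) * x      # accumulate immediately ...
            den[j] += u ** m            # ... instead of storing u
    return [num[j] / den[j] for j in range(len(centers))]

# Toy 1-D "spike features" forming two clusters near 0.1 and 10:
centers = [0.0, 10.0]
for _ in range(10):
    centers = fcm_step([0.1, 0.2, 0.0, 9.8, 10.1, 10.0], centers)
```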

  13. Improved exploration of fishery resources through the integration of remotely sensed merged sea level anomaly, chlorophyll concentration, and sea surface temperature

    NASA Astrophysics Data System (ADS)

    Priya, R. Kanmani Shanmuga; Balaguru, B.; Ramakrishnan, S.

    2013-10-01

    The capabilities of evolving satellite remote sensing technology, combined with conventional data collection techniques, provide a powerful tool for efficient and cost-effective management of living marine resources. Fish are valuable living marine resources, providing food, profit and pleasure to the human community. Variations in oceanic conditions play a role in natural fluctuations of fish stocks. The satellite-altimeter-derived Merged Sea Level Anomaly (MSLA) leads to a better understanding of ocean variability and mesoscale oceanography and offers a good possibility of revealing zones of high dynamic activity. This study comprised a synergistic analysis of the signatures of SeaWiFS-derived chlorophyll concentration, National Oceanic and Atmospheric Administration Advanced Very High Resolution Radiometer (NOAA-AVHRR) derived sea surface temperature (SST), and monthly Merged Sea Level Anomaly data derived from the TOPEX/Poseidon, Jason-1 and ERS-1 altimeters over the 7-year period from 1998 to 2004. The overlapping chlorophyll, SST and MSLA signatures were used for delineating Potential Fishing Zones (PFZs). The chlorophyll and SST datasets were found to be influenced by short-term persistence, from days to a week, while MSLA signatures of the respective features persisted for a longer duration. Hence, the study used altimeter-derived MSLA as an index for long-term variability detection of fish catches, along with chlorophyll and SST images, and maps showing the PFZs of the study area were generated. Real-time fishing statistics for the same duration were procured from FSI Mumbai. Catch contours were generated with respect to peak spectra of chlorophyll variation and trough spectra of MSLA and SST variation; the inverse patterns were observed in the poor-catch contours. The Catch Per Unit Effort (CPUE) for each fishing trial was calculated to normalize the fish catch. Based on the statistical analysis, the actual CPUEs were classified at each probable MSLA depth zone and plotted on the same images.

  14. Particle acceleration during merging-compression plasma start-up in the Mega Amp Spherical Tokamak

    NASA Astrophysics Data System (ADS)

    McClements, K. G.; Allen, J. O.; Chapman, S. C.; Dendy, R. O.; Irvine, S. W. A.; Marshall, O.; Robb, D.; Turnyanskiy, M.; Vann, R. G. L.

    2018-02-01

    Magnetic reconnection occurred during merging-compression plasma start-up in the Mega Amp Spherical Tokamak (MAST), resulting in the prompt acceleration of substantial numbers of ions and electrons to highly suprathermal energies. Accelerated field-aligned ions (deuterons and protons) were detected using a neutral particle analyser at energies up to about 20 keV during merging in early MAST pulses, while nonthermal electrons have been detected indirectly in more recent pulses through microwave bursts. However, no increase in soft x-ray emission was observed until later in the merging phase, by which time strong electron heating had been detected through Thomson scattering measurements. The test-particle code CUEBIT is used to model ion acceleration in the presence of an inductive toroidal electric field with a prescribed spatial profile and temporal evolution based on Hall-MHD simulations of the merging process. The simulations yield particle distributions with properties similar to those observed experimentally, including strong field alignment of the fast ions and the acceleration of protons to higher energies than deuterons. Particle-in-cell modelling of a plasma containing a dilute field-aligned suprathermal electron component suggests that at least some of the microwave bursts can be attributed to the anomalous Doppler instability driven by anisotropic fast electrons, which do not produce measurable enhancements in soft x-ray emission, either because they are insufficiently energetic or because the nonthermal bremsstrahlung emissivity during this phase of the pulse is below the detection threshold. There is no evidence of runaway electron acceleration during merging, possibly due to the presence of three-dimensional field perturbations.
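
    The basic physics behind the proton/deuteron asymmetry can be shown with a back-of-the-envelope test-particle estimate, not the CUEBIT/Hall-MHD setup: in a uniform field (the 5 V/m value below is purely hypothetical), both species gain the same momentum qEt, so the lighter proton ends up with roughly twice the deuteron's kinetic energy.

```python
# Illustrative test-particle estimate (not CUEBIT): non-relativistic
# acceleration in a uniform toroidal electric field for equal time.
E_FIELD = 5.0                       # V/m, hypothetical field strength
Q = 1.602e-19                       # C, elementary charge
M_P, M_D = 1.673e-27, 3.344e-27     # kg, proton and deuteron masses

def energy_after(t, mass, q=Q, efield=E_FIELD):
    """Kinetic energy (eV) after time t: p = q*E*t, E_k = p^2 / (2m)."""
    p = q * efield * t
    return p * p / (2.0 * mass) / Q  # joules -> eV

t = 1e-4  # s of acceleration
e_proton, e_deuteron = energy_after(t, M_P), energy_after(t, M_D)
# same momentum gain, so the proton's energy is ~2x the deuteron's
```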

  15. Visual Simultaneous Localization And Mapping (VSLAM) methods applied to indoor 3D topographical and radiological mapping in real-time

    NASA Astrophysics Data System (ADS)

    Hautot, Felix; Dubart, Philippe; Bacri, Charles-Olivier; Chagneau, Benjamin; Abou-Khalil, Roger

    2017-09-01

    New developments in the fields of robotics and computer vision enable sensors to be merged so as to achieve fast real-time localization of radiological measurements in space, together with near-real-time identification and characterization of radioactive sources. These capabilities make nuclear investigations more efficient for operators' dosimetry evaluation, intervention scenarios, and risk mitigation and simulation, for example after accidents in unknown, potentially contaminated areas or during dismantling operations.

  16. Massive Black Hole Mergers: Can we see what LISA will hear?

    NASA Technical Reports Server (NTRS)

    Centrella, Joan

    2009-01-01

    Coalescing massive black hole binaries are formed when galaxies merge. The final stages of this coalescence produce strong gravitational wave signals that can be detected by the space-borne LISA. When the black holes merge in the presence of gas and magnetic fields, various types of electromagnetic signals may also be produced. Modeling such electromagnetic counterparts requires evolving the behavior of both gas and fields in the strong-field regions around the black holes. We have taken a first step towards this problem by mapping the flow of pressureless matter in the dynamic, 3-D general relativistic spacetime around the merging black holes. We report on the results of these initial simulations and discuss their likely importance for future hydrodynamical simulations.

  17. An Elementary Algorithm for Autonomous Air Terminal Merging and Interval Management

    NASA Technical Reports Server (NTRS)

    White, Allan L.

    2017-01-01

    A central element of air traffic management is the safe merging and spacing of aircraft during the terminal-area flight phase. This paper derives and examines an algorithm for the merging and interval management problem for Standard Terminal Arrival Routes. It describes a factor analysis of performance based on the distribution of arrivals, the operating period of the terminal, and the topology of the arrival routes; it then presents results from a performance analysis and a safety analysis for a realistic topology based on typical routes for a runway at Phoenix International Airport. The heart of the safety analysis is a statistical derivation of how to conduct a safety analysis with a local simulation when the safety requirement is given for the entire airspace.
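
    A greedy first-come-first-served sketch of merging and spacing at a single merge point is shown below; the ETAs and 90-second separation are invented for illustration, and the paper's algorithm and safety analysis are more involved:

```python
def schedule_arrivals(etas, min_sep):
    """Greedy merge of arrival streams at a terminal merge point: take
    aircraft in ETA order and delay each just enough to preserve the
    required separation (a sketch, not the paper's algorithm)."""
    schedule = []
    last = None
    for eta in sorted(etas):
        slot = eta if last is None else max(eta, last + min_sep)
        schedule.append((eta, slot, slot - eta))   # (ETA, slot, delay)
        last = slot
    return schedule

# Three aircraft from two routes converge; 90 s separation required:
sched = schedule_arrivals([100, 120, 300], 90)
for eta, slot, delay in sched:
    print(f"ETA {eta:3d}s -> slot {slot:3d}s (delay {delay}s)")
```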

  18. Resonant charge transfer in He/+/-He collisions studied with the merging-beams technique

    NASA Technical Reports Server (NTRS)

    Rundel, R. D.; Nitz, D. E.; Smith, K. A.; Geis, M. W.; Stebbings, R. F.

    1979-01-01

    Absolute cross sections are reported for the resonant charge-transfer reaction He(+) + He -> He + He(+) at collision energies between 0.1 and 187 eV. The results, obtained using a new merging-beam apparatus, are in agreement both with theory and with measurements made using other experimental techniques. The experimentally determined cross sections between 0.5 and 187 eV fall about a line given by sigma^(1/2) (in Å) = 5.09 - 2.99 ln W, where W is the collision energy in eV. Considerable attention is paid to the configuration and operation of the apparatus. Tests and calculations which confirm the interpretation of the experimental data in a merging-beam experiment are discussed.
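
    Taking the quoted fit at face value (sigma^(1/2) in Å and W in eV, exactly as stated in the abstract), the cross section at a given energy follows directly; note the fitted line is only meaningful where it is positive:

```python
import math

def sigma_angstrom2(W):
    """Charge-transfer cross section (Å^2) from the quoted linear fit
    sigma^(1/2) = 5.09 - 2.99 * ln(W), W in eV (units as quoted in the
    abstract; valid only where the fitted line is positive)."""
    root = 5.09 - 2.99 * math.log(W)
    return root * root

print(sigma_angstrom2(1.0))  # 5.09^2 ≈ 25.9 Å^2 at 1 eV, where ln W = 0
```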

  19. A Generalized Method for the Comparable and Rigorous Calculation of the Polytropic Efficiencies of Turbocompressors

    NASA Astrophysics Data System (ADS)

    Dimitrakopoulos, Panagiotis

    2018-03-01

    The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, such as compressor impellers, stages and stage groups. Such calculations are also crucial for determining the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need has emerged for a new, rigorous, robust, accurate and at the same time standardized method for the computation of polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge) for a given working fluid. The average relative error for the studied cases was 0.536%. Thus, this high-accuracy method is proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.
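
    For an ideal gas the definition reduces to a closed form in the endpoint states. The sketch below uses that ideal-gas shortcut with an assumed isentropic exponent; the paper's method addresses real gases, which require an equation of state instead:

```python
import math

def polytropic_efficiency(p1, T1, p2, T2, kappa=1.4):
    """Polytropic efficiency of a compression from endpoint states, for
    an IDEAL gas with isentropic exponent kappa (illustrative shortcut;
    the real-gas case treated in the paper needs an equation of state):
        eta_p = ((kappa - 1) / kappa) * ln(p2/p1) / ln(T2/T1)
    Temperatures in kelvin, pressures in any consistent unit."""
    return ((kappa - 1.0) / kappa) * math.log(p2 / p1) / math.log(T2 / T1)

# Air compressed from 1 bar, 293 K to 4 bar, 450 K:
print(round(polytropic_efficiency(1.0, 293.0, 4.0, 450.0), 3))
```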

  20. Efficiency and optimal size of hospitals: Results of a systematic search

    PubMed Central

    Guglielmo, Annamaria

    2017-01-01

    Background National Health Systems managers have been subject in recent years to considerable pressure to increase concentration and allow mergers. This pressure has been justified by a belief that larger hospitals lead to lower average costs and better clinical outcomes through the exploitation of economies of scale. In this context, the opportunity to measure scale efficiency is crucial to address the question of optimal productive size and to manage a fair allocation of resources. Methods and findings This paper analyses the stance of existing research on scale efficiency and optimal size of the hospital sector. We performed a systematic search of 45 years (1969–2014) of research published in peer-reviewed scientific journals recorded by the Social Sciences Citation Index concerning this topic. We classified articles by the journal’s category, research topic, hospital setting, method and primary data analysis technique. Results showed that most of the studies focussed on the analysis of technical and scale efficiency or on input/output ratios using Data Envelopment Analysis. We also found increasing interest in the effect of possible changes in hospital size on quality of care. Conclusions Studies analysed in this review showed that economies of scale are present for merging hospitals. Results supported the current policy of expanding larger hospitals and restructuring/closing smaller hospitals. In terms of beds, studies reported consistent evidence of economies of scale for hospitals with 200–300 beds. Diseconomies of scale can be expected to occur below 200 beds and above 600 beds. PMID:28355255

  1. The Merging of Traditional Chinese Medicine and Western Medicine in China: Old Ideas Cross Culturally Communicated through New Perspectives.

    ERIC Educational Resources Information Center

    Schnell, James A.

    Cross-cultural communication between China and the West, instigated in 1979 by the establishment of an open-door policy in China, has led to the merging of Traditional Chinese Medicine (TCM) with the medical practices of the West. The result of these medical exchanges is a blending of medical practices that proves to be more effective in the…

  2. A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.

    PubMed

    Khelifi, Lazhar; Mignotte, Max

    2017-08-01

    Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous relevant research in this field has been impeded by the difficulty of identifying a single appropriate segmentation fusion criterion that provides the best possible, i.e., the most informative, fusion result. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem and obtain a final improved segmentation result. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision-making task based on the so-called "technique for order performance by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground-truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
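
    The dominance concept at the heart of such a multi-objective model can be sketched for two criteria to be minimized; the candidate scores below are invented for illustration and are not values from the paper:

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b: at least as good on every
    criterion and strictly better on at least one (lower is better)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(solutions):
    """Keep the non-dominated candidate fusions. Each candidate is
    scored by two criteria to minimize, e.g. (global consistency
    error, 1 - F-measure)."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

candidates = [(0.20, 0.35), (0.25, 0.30), (0.22, 0.40), (0.30, 0.28)]
front = pareto_front(candidates)
print(front)
# (0.22, 0.40) is dominated by (0.20, 0.35); the other three survive
```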

  3. Interactions between Coronal Mass Ejections Viewed in Coordinated Imaging and In Situ Observations

    NASA Technical Reports Server (NTRS)

    Liu, Ying D.; Luhmann, Janet G.; Moestl, Christian; Martinez-Oliveros, Juan C.; Bale, Stewart D.; Lin, Robert P.; Harrison, Richard A.; Temmer, Manuela; Webb, David F.; Odstrcil, Dusan

    2013-01-01

    The successive coronal mass ejections (CMEs) of 2010 July 30 to August 1 present us with the first opportunity to study CME-CME interactions with unprecedented heliospheric imaging and in situ observations from multiple vantage points. We describe two cases of CME interactions: merging of two CMEs launched close in time, and overtaking of a preceding CME by a shock wave. The first two CMEs on August 1 interact close to the Sun and form a merged front, which then overtakes the July 30 CME near 1 AU, as revealed by wide-angle imaging observations. Connections between imaging observations and in situ signatures at 1 AU suggest that the merged front is a shock wave, followed by two ejecta observed at Wind which seem to have already merged. In situ measurements show that the CME from July 30 is being overtaken by the shock at 1 AU and is significantly compressed, accelerated and heated. The interaction between the preceding ejecta and the shock also results in variations in the shock strength and structure on a global scale, as shown by widely separated in situ measurements from Wind and STEREO B. These results indicate important implications of CME-CME interactions for shock propagation, particle acceleration and space weather forecasting.

  4. Economic and organizational determinants of HMO mergers and failures.

    PubMed

    Feldman, R; Wholey, D; Christianson, J

    1996-01-01

    This study analyzed data from all operational health maintenance organizations (HMOs) in the United States from 1986 through 1993. Eighty HMOs disappeared through mergers and 149 failed over that period. We estimated a multinomial logit model to predict whether an HMO would merge and survive, merge and disappear, or fail, relative to the probability of no event. We found that enrollment and profitability play a critical role in explaining HMO mergers and failures: large and profitable HMOs were more likely to merge and survive, but less likely to merge and disappear or fail. These results explain why HMO merger and failure rates fell after 1988, as most surviving HMOs became larger and more profitable. Among several market-area variables in the model, state anti-takeover regulations had a negative impact on mergers. Mergers were more likely in markets with more competing HMOs, but the overall market penetration of HMOs had no effect on mergers. This result may have important implications for the current debate over the future of the competitive health care strategy. If public policy successfully stimulates the development of large numbers of new HMOs, another wave of mergers and failures is likely to occur. But it appears that growth in overall HMO penetration will not lead inevitably to increased market concentration.

  5. Integrating Quality Assurance Systems in a Merged Higher Education Institution

    ERIC Educational Resources Information Center

    Kistan, Chandru

    2005-01-01

    Purpose: This article seeks to highlight the challenges and issues that face merging higher education institutions and also to outline some of the challenges in integrating the quality assurance systems during the pre-, interim and post-merger phases in a merged university. Design/methodology/approach: Case studies of merged and merging…

  6. Delayed Geodynamo in Hadean

    NASA Astrophysics Data System (ADS)

    Arkani-Hamed, J.

    2014-12-01

    Paleointensity measurements of Archean rocks reveal a strong geodynamo at ~3.45 Ga, while the excess nitrogen content of lunar soil samples implies no geodynamo at ~3.9 Ga. Here I propose that the initiation of a strong geodynamo was delayed by the style of Earth's accretion, which involved the collision and merging of a few dozen Moon- to Mars-sized planetary embryos. Two accretion scenarios, consisting of 25 and 50 embryos, are investigated. The collision of an embryo heats the proto-Earth's core differentially, and the rotating low-viscosity core stably stratifies, creating a spherically symmetric and radially increasing temperature distribution. Convection starts in the outer core after each impact but is destroyed by the next impact. The iron core of an impacting embryo descends through the mantle and merges with the proto-Earth's core. Both adiabatic and non-adiabatic merging cases are studied. A major part of the gravitational energy released by core merging is used to lift the upper portion of the core and emplace the impactor core material at the neutrally buoyant level in the proto-Earth's core. The remaining energy is converted to heat. In the adiabatic case the merging embryo's core retains all of the remaining energy, while in the non-adiabatic case 50% of the remaining energy is shared with the outer part of the proto-Earth's core through which the embryo's core descends. The two merging models result in significantly different temperature distributions in the core at the end of accretion. After accretion, the convecting shell in the outer core grows monotonically and gradually generates a geodynamo. It takes about 50-100 Myr for the convecting shell to generate a strong dipole field at the surface, 50,000 to 100,000 nT, in the presence of a large stably stratified liquid inner core, once the convecting outer-core thickness exceeds about one half the radius of the Earth's core.

  7. Optimization Of Ocean Color Algorithms: Application To Satellite And In Situ Data Merging. Chapter 9

    NASA Technical Reports Server (NTRS)

    Maritorena, Stephane; Siegel, David A.; Morel, Andre

    2003-01-01

    The objective of our program is to develop and validate a procedure for ocean color data merging, which is one of the major goals of the SIMBIOS project (McClain et al., 1995). The need for a merging capability is dictated by the fact that, since the launch of MODIS on the Terra platform and over the next decade, several global ocean color missions from various space agencies are or will be operational simultaneously. The apparent redundancy in simultaneous ocean color missions can actually be exploited to various benefits. The most obvious benefit is improved coverage (Gregg et al., 1998; Gregg & Woodward, 1998): the patchy and uneven daily coverage from any single sensor can be improved by using a combination of sensors. Besides improved coverage of the global ocean, the merging of ocean color data should also result in new, improved, more diverse and better data products with lower uncertainties. Ultimately, ocean color data merging should result in the development of a unified, scientific-quality ocean color time series, from SeaWiFS to NPOESS and beyond. Various approaches can be used for ocean color data merging, and several have been tested within the frame of the SIMBIOS program (see e.g. Kwiatkowska & Fargion, 2003; Franz et al., 2003). As part of the SIMBIOS program, we have developed a merging method for ocean color data. In contrast to other methods, our approach does not combine end-products such as the subsurface chlorophyll concentration (chl) from different sensors to generate a unified product. Instead, our procedure takes the normalized water-leaving radiances (LwN(λ)) from single or multiple sensors and uses them in the inversion of a semianalytical ocean color model that allows the retrieval of several ocean color variables simultaneously. Besides ensuring simultaneity and consistency of the retrievals (all products are derived from a single algorithm), this model-based approach has various benefits over techniques that blend end-products (e.g. chlorophyll): 1) it works with single or multiple data sources regardless of their specific bands, 2) it exploits band redundancies and band differences, 3) it accounts for uncertainties in the LwN(λ) data and, 4) it provides uncertainty estimates for the retrieved variables.

  8. Detecting the changes in rural communities in Taiwan by applying multiphase segmentation on FORMOSA-2 satellite imagery

    NASA Astrophysics Data System (ADS)

    Huang, Yishuo

    2015-09-01

    Agricultural activities mainly occur in rural areas; recently, ecological conservation and biological diversity are being emphasized in rural communities to promote sustainable development for rural communities, especially for rural communities in Taiwan. Therefore, since 2005, many rural communities in Taiwan have compiled their own development strategies in order to create their own unique characteristics to attract people to visit and stay in rural communities. By implementing these strategies, young people can stay in their own rural communities and the rural communities are rejuvenated. However, some rural communities introduce artificial construction into the community such that the ecological and biological environments are significantly degraded. The strategies need to be efficiently monitored because up to 67 rural communities have proposed rejuvenation projects. In 2015, up to 440 rural communities were estimated to be involved in rural community rejuvenations. How to monitor the changes occurring in those rural communities participating in rural community rejuvenation such that ecological conservation and ecological diversity can be satisfied is an important issue in rural community management. Remote sensing provides an efficient and rapid method to achieve this issue. Segmentation plays a fundamental role in human perception. In this respect, segmentation can be used as the process of transforming the collection of pixels of an image into a group of regions or objects with meaning. This paper proposed an algorithm based on the multiphase approach to segment the normalized difference vegetation index, NDVI, of the rural communities into several sub-regions, and to have the NDVI distribution in each sub-region be homogeneous. Those regions whose values of NDVI are close will be merged into the same class. In doing so, a complex NDVI map can be simplified into two groups: the high and low values of NDVI. 
The class with low NDVI values corresponds to regions containing roads, buildings, and other manmade constructions, while the class with high NDVI values indicates regions of healthy vegetation. To verify the processed results, the regional boundaries were extracted and overlaid on the given images to check whether they fell on buildings, roads, or other artificial constructions. In addition to the proposed approach, another approach called statistical region merging was employed, which groups sets of pixels with homogeneous properties and iteratively grows them by combining smaller regions or pixels, yielding a segmented NDVI map. By comparing the areas of the merged classes in different years, the changes occurring in the rural communities of Taiwan can be detected. Satellite imagery from FORMOSA-2, with 2-m ground resolution, is employed to evaluate the performance of the proposed approach; the imagery of two rural communities (Jhumen and Taomi) is chosen to evaluate environmental changes between 2005 and 2010. The 2005-2010 change maps show that the area of dense green cover increased by 19.62 ha in the Jhumen community, whereas a similar patch of land decreased significantly, by 236.59 ha, in the Taomi community. Furthermore, the change maps created by statistical region merging produce results similar to those of the multiphase segmentation.
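
    The core step shared by both approaches above — merging adjacent pixels whose NDVI values are close enough to be considered homogeneous — can be sketched with a simple union-find pass. The tolerance value and the tiny grid below are invented for illustration; this is the general region-merging idea, not the paper's multiphase or statistical-region-merging algorithm.

```python
def merge_ndvi_regions(ndvi, tol=0.15):
    """Union-find merge of 4-connected pixels whose NDVI values differ by < tol."""
    rows, cols = len(ndvi), len(ndvi[0])
    parent = {(r, c): (r, c) for r in range(rows) for c in range(cols)}

    def find(p):
        # Path-halving find for the union-find forest.
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    # Merge right and down neighbours whose NDVI values are close.
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):
                nr, nc = r + dr, c + dc
                if nr < rows and nc < cols and abs(ndvi[r][c] - ndvi[nr][nc]) < tol:
                    union((r, c), (nr, nc))

    # Relabel roots with small consecutive integers.
    labels = [[None] * cols for _ in range(rows)]
    roots = {}
    for r in range(rows):
        for c in range(cols):
            labels[r][c] = roots.setdefault(find((r, c)), len(roots))
    return labels
```

    On a small grid with a sharp NDVI contrast, the pass yields exactly two regions: one for the high-NDVI (vegetated) block and one for the low-NDVI block, mirroring the two-class simplification described in the abstract.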

  9. dbVOR: a database system for importing pedigree, phenotype and genotype data and exporting selected subsets.

    PubMed

    Baron, Robert V; Conley, Yvette P; Gorin, Michael B; Weeks, Daniel E

    2015-03-18

    When studying the genetics of a human trait, we typically have to manage both genome-wide and targeted genotype data. There can be overlap of both people and markers from different genotyping experiments; the overlap can introduce several kinds of problems. Most times the overlapping genotypes are the same, but sometimes they are different. Occasionally, the lab will return genotypes using a different allele labeling scheme (for example 1/2 vs A/C). Sometimes, the genotype for a person/marker index is unreliable or missing. Further, over time some markers are merged and bad samples are re-run under a different sample name. We need a consistent picture of the subset of data we have chosen to work with even though there might possibly be conflicting measurements from multiple data sources. We have developed the dbVOR database, which is designed to hold data efficiently for both genome-wide and targeted experiments. The data are indexed for fast retrieval by person and marker. In addition, we store pedigree and phenotype data for our subjects. The dbVOR database allows us to select subsets of the data by several different criteria and to merge their results into a coherent and consistent whole. Data may be filtered by: family, person, trait value, markers, chromosomes, and chromosome ranges. The results can be presented in columnar, Mega2, or PLINK format. dbVOR serves our needs well. It is freely available from https://watson.hgen.pitt.edu/register . Documentation for dbVOR can be found at https://watson.hgen.pitt.edu/register/docs/dbvor.html .
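
    The merging problem described above — overlapping genotype calls from multiple sources that usually agree but occasionally conflict — can be sketched as a dict merge with explicit conflict reporting. The function, its precedence policy, and the allele strings below are invented for illustration; they are not dbVOR's API.

```python
def merge_genotypes(primary, secondary):
    """Merge two {(person, marker): genotype} dicts.

    The primary source takes precedence, and disagreements are
    reported instead of being silently overwritten.
    """
    merged = dict(secondary)
    conflicts = []
    for key, geno in primary.items():
        if key in secondary and secondary[key] != geno:
            conflicts.append((key, geno, secondary[key]))
        merged[key] = geno  # primary source wins
    return merged, conflicts
```

    A real system would additionally normalize allele labeling schemes (e.g. 1/2 vs A/C) before comparing, as the abstract notes.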

  10. BamTools: a C++ API and toolkit for analyzing and managing BAM files.

    PubMed

    Barnett, Derek W; Garrison, Erik K; Quinlan, Aaron R; Strömberg, Michael P; Marth, Gabor T

    2011-06-15

    Analysis of genomic sequencing data requires efficient, easy-to-use access to alignment results and flexible data management tools (e.g. filtering, merging, sorting, etc.). However, the enormous amount of data produced by current sequencing technologies is typically stored in compressed, binary formats that are not easily handled by the text-based parsers commonly used in bioinformatics research. We introduce a software suite for programmers and end users that facilitates research analysis and data management using BAM files. BamTools provides both the first C++ API publicly available for BAM file support as well as a command-line toolkit. BamTools was written in C++, and is supported on Linux, Mac OSX and MS Windows. Source code and documentation are freely available at http://github.org/pezmaster31/bamtools.

  11. Formation and survival of Population III stellar systems

    NASA Astrophysics Data System (ADS)

    Hirano, Shingo; Bromm, Volker

    2017-09-01

    The initial mass function of the first, Population III (Pop III), stars plays a vital role in shaping galaxy formation and evolution in the early Universe. One key remaining issue is the final fate of secondary protostars formed in the accretion disc, specifically whether they merge or survive. We perform a suite of hydrodynamic simulations of the complex interplay among fragmentation, protostellar accretion and merging inside dark matter minihaloes. Instead of the traditional sink particle method, we employ a stiff equation of state approach, so that we can more robustly ascertain the viscous transport inside the disc. The simulations show inside-out fragmentation because the gas collapses faster in the central region. Fragments migrate on the viscous time-scale, over which angular momentum is lost, enabling them to move towards the disc centre, where merging with the primary protostar can occur. This process depends on the fragmentation scale, such that there is a maximum scale of (1-5) × 10^4 au, inside which fragments can migrate to the primary protostar. Viscous transport is active until radiative feedback from the primary protostar destroys the accretion disc. The final mass spectrum and multiplicity thus crucially depend on the effect of viscosity in the disc. The entire disc is subjected to efficient viscous transport in the primordial case with viscous parameter α ≤ 1. An important aspect of this question is the survival probability of Pop III binary systems, possible gravitational wave sources to be probed with the Advanced LIGO detectors.

  12. Synthetic aperture radar/LANDSAT MSS image registration

    NASA Technical Reports Server (NTRS)

    Maurer, H. E. (Editor); Oberholtzer, J. D. (Editor); Anuta, P. E. (Editor)

    1979-01-01

    Algorithms and procedures necessary to merge aircraft synthetic aperture radar (SAR) and LANDSAT multispectral scanner (MSS) imagery were determined. The design of a SAR/LANDSAT data merging system was developed. Aircraft SAR images were registered to the corresponding LANDSAT MSS scenes and were the subject of experimental investigations. Results indicate that the registration of SAR imagery with LANDSAT MSS imagery is feasible from a technical viewpoint, and useful from an information-content viewpoint.

  13. The Challenge of Wider Library Units: Merging Libraries and Developing Taxing Districts May Be a Way to Stabilize Funding, but the Path Is Not Always Clear

    ERIC Educational Resources Information Center

    Hennen, Thomas J., Jr.

    2004-01-01

    Last year, Louisville, KY, grew overnight from the country's 64th largest city to the 16th largest, the result of the merger of the city with Jefferson County. Pittsburgh and Buffalo, NY, are among other communities discussing city-county mergers. Many smaller communities are considering merging services, such as police and fire, or consolidating…

  14. Learning a constrained conditional random field for enhanced segmentation of fallen trees in ALS point clouds

    NASA Astrophysics Data System (ADS)

    Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe

    2018-06-01

    In this study, we present a method for improving the quality of automatic single fallen tree stem segmentation in ALS data by applying a specialized constrained conditional random field (CRF). The entire processing pipeline is composed of two steps. First, short stem segments of equal length are detected and a subset of them is selected for further processing, while in the second step the chosen segments are merged to form entire trees. The first step is accomplished using the specialized CRF defined on the space of segment labelings, capable of finding segment candidates which are easier to merge subsequently. To achieve this, the CRF considers not only the features of every candidate individually, but incorporates pairwise spatial interactions between adjacent segments into the model. In particular, pairwise interactions include a collinearity/angular deviation probability which is learned from training data as well as the ratio of spatial overlap, whereas unary potentials encode a learned probabilistic model of the laser point distribution around each segment. Each of these components enters the CRF energy with its own balance factor. To process previously unseen data, we first calculate the subset of segments for merging on a grid of balance factors by minimizing the CRF energy. Then, we perform the merging and rank the balance configurations according to the quality of their resulting merged trees, obtained from a learned tree appearance model. The final result is derived from the top-ranked configuration. We tested our approach on 5 plots from the Bavarian Forest National Park using reference data acquired in a field inventory. Compared to our previous segment selection method without pairwise interactions, an increase in detection correctness and completeness of up to 7 and 9 percentage points, respectively, was observed.

  15. Mining MaNGA for Merging Galaxies: A New Imaging and Kinematic Technique from Hydrodynamical Simulations

    NASA Astrophysics Data System (ADS)

    Nevin, Becky; Comerford, Julia M.; Blecha, Laura

    2018-06-01

    Merging galaxies play a key role in galaxy evolution, and progress in our understanding of galaxy evolution is slowed by the difficulty of making accurate galaxy merger identifications. Mergers are typically identified using imaging alone, which has its limitations and biases. With the growing popularity of integral field spectroscopy (IFS), it is now possible to use kinematic signatures to improve galaxy merger identifications. I use GADGET-3 hydrodynamical simulations of merging galaxies with the radiative transfer code SUNRISE, the latter of which enables me to apply the same analysis to simulations and observations. From the simulated galaxies, I have developed the first merging galaxy classification scheme that is based on kinematics and imaging. Utilizing a Linear Discriminant Analysis tool, I have determined which kinematic and imaging predictors are most useful for identifying mergers of various merger parameters (such as orientation, mass ratio, gas fraction, and merger stage). I will discuss the strengths and limitations of the classification technique and then my initial results for applying the classification to the >10,000 observed galaxies in the MaNGA (Mapping Nearby Galaxies at Apache Point) IFS survey. Through accurate identification of merging galaxies in the MaNGA survey, I will advance our understanding of supermassive black hole growth in galaxy mergers and other open questions related to galaxy evolution.

  16. Cluster Physics with Merging Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Molnar, Sandor

    Collisions between galaxy clusters provide a unique opportunity to study matter in a parameter space which cannot be explored in our laboratories on Earth. In the standard ΛCDM model, where the total density is dominated by the cosmological constant (Λ) and the matter density by cold dark matter (CDM), structure formation is hierarchical, and clusters grow mostly by merging. Mergers of two massive clusters are the most energetic events in the universe after the Big Bang, hence they provide a unique laboratory to study cluster physics. The two main mass components in clusters behave differently during collisions: the dark matter is nearly collisionless, responding only to gravity, while the gas is subject to pressure forces and dissipation, and shocks and turbulence are developed during collisions. In the present contribution we review the different methods used to derive the physical properties of merging clusters. Different physical processes leave their signatures on different wavelengths, thus our review is based on a multifrequency analysis. In principle, the best way to analyze multifrequency observations of merging clusters is to model them using N-body/HYDRO numerical simulations. We discuss the results of such detailed analyses. New high spatial and spectral resolution ground and space based telescopes will come online in the near future. Motivated by these new opportunities, we briefly discuss methods which will be feasible in the near future in studying merging clusters.

  17. The application of DEA (Data Envelopment Analysis) window analysis in the assessment of influence on operational efficiencies after the establishment of branched hospitals.

    PubMed

    Jia, Tongying; Yuan, Huiyun

    2017-04-12

    Many large-scale public hospitals have established branched hospitals in China. This study provides evidence for strategy making on the management and development of multi-branched hospitals by evaluating and comparing the operational efficiencies of hospitals before and after their establishment of branched hospitals. DEA (Data Envelopment Analysis) window analysis was performed on a 7-year data pool from five public hospitals provided by health authorities and institutional surveys. The operational efficiencies of the sample hospitals measured in this study (including technical efficiency, pure technical efficiency and scale efficiency) showed overall increasing trends during this 7-year period; however, a temporary downturn occurred shortly after the establishment of branched hospitals. Pure technical efficiency contributed more to the improvement of technical efficiency than scale efficiency did. The establishment of branched hospitals did not lead to a long-term negative effect on hospital operational efficiencies. Our data indicate the importance of improving scale efficiency via the optimization of organizational management, as well as the advantage of an alternative form of branch establishment, namely merging and reorganization. This study offers insight into the practical application of DEA window analysis to the assessment of hospital operational efficiencies.

  18. Best Merge Region Growing with Integrated Probabilistic Classification for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2011-01-01

    A new method for spectral-spatial classification of hyperspectral images is proposed. The method is based on the integration of probabilistic classification within the hierarchical best merge region growing algorithm. For this purpose, a preliminary probabilistic support vector machine classification is performed. Then, a hierarchical step-wise optimization algorithm is applied, iteratively merging the pair of regions with the smallest Dissimilarity Criterion (DC). The main novelty of this method consists in defining the DC between regions as a function of region statistical and geometrical features along with classification probabilities. Experimental results are presented on a 200-band AVIRIS image of Northwestern Indiana's vegetation area and compared with those obtained by recently proposed spectral-spatial classification techniques. The proposed method improves classification accuracies when compared to other classification approaches.
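
    The best-merge principle — repeatedly merging the pair of regions with the smallest dissimilarity — can be sketched as follows. Here the dissimilarity is plain Euclidean distance between region mean spectra, a deliberate simplification: the paper's DC also incorporates geometrical features and classification probabilities, and practical implementations use priority queues rather than this O(n²) scan.

```python
import math

def best_merge(regions, n_final):
    """regions: list of (mean_vector, pixel_count); merge until n_final remain."""
    regions = [(list(mean), count) for mean, count in regions]
    while len(regions) > n_final:
        # Find the pair of regions with the smallest dissimilarity.
        best = None
        for i in range(len(regions)):
            for j in range(i + 1, len(regions)):
                d = math.dist(regions[i][0], regions[j][0])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        (mi, ni), (mj, nj) = regions[i], regions[j]
        # Merge j into i, updating the mean as a pixel-count-weighted average.
        merged_mean = [(a * ni + b * nj) / (ni + nj) for a, b in zip(mi, mj)]
        regions[i] = (merged_mean, ni + nj)
        del regions[j]
    return regions
```

    For example, starting from three one-band regions with means 0.0, 0.1 and 5.0, a single merge step combines the two closest regions into one with the weighted mean 0.05.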

  19. A Fast-Time Simulation Environment for Airborne Merging and Spacing Research

    NASA Technical Reports Server (NTRS)

    Bussink, Frank J. L.; Doble, Nathan A.; Barmore, Bryan E.; Singer, Sharon

    2005-01-01

    As part of NASA's Distributed Air/Ground Traffic Management (DAG-TM) effort, NASA Langley Research Center is developing concepts and algorithms for merging multiple aircraft arrival streams and precisely spacing aircraft over the runway threshold. An airborne tool has been created for this purpose, called Airborne Merging and Spacing for Terminal Arrivals (AMSTAR). To evaluate the performance of AMSTAR and complement human-in-the-loop experiments, a simulation environment has been developed that enables fast-time studies of AMSTAR operations. The environment is based on TMX, a multiple aircraft desktop simulation program created by the Netherlands National Aerospace Laboratory (NLR). This paper reviews the AMSTAR concept, discusses the integration of the AMSTAR algorithm into TMX and the enhancements added to TMX to support fast-time AMSTAR studies, and presents initial simulation results.

  20. Stereoselectivity in N-Iminium Ion Cyclization: Development of an Efficient Synthesis of (±)-Cephalotaxine.

    PubMed

    Liu, Hao; Yu, Jing; Li, Xinyu; Yan, Rui; Xiao, Ji-Chang; Hong, Ran

    2015-09-18

    A stereoselective N-iminium ion cyclization with allylsilane to construct vicinal quaternary-tertiary carbon centers was developed for the concise synthesis of (±)-cephalotaxine. The current strategy features a TiCl4-promoted cyclization and ring-closing metathesis to furnish the spiro-ring system. The stereochemical outcome in the N-acyliminium ion cyclization was rationalized by the stereoelectronic effect of the Z- or E-allylsilane. Two diastereomers arising from the cyclization were merged into the formal synthesis of (±)-cephalotaxine.

  1. Progress on the Use of Combined Analog and Photon Counting Detection for Raman Lidar

    NASA Technical Reports Server (NTRS)

    Newsom, Rob; Turner, Dave; Clayton, Marian; Ferrare, Richard

    2008-01-01

    The Atmospheric Radiation Measurement (ARM) program Raman Lidar (CARL) was upgraded in 2004 with a new data system that provides simultaneous measurements of both the photomultiplier analog output voltage and photon counts. The so-called merge value added procedure (VAP) was developed to combine the analog and count-rate signals into a single signal with improved dynamic range. Earlier versions of this VAP tended to cause unacceptably large biases in the water vapor mixing ratio during the daytime as a result of improper matching between the analog and count-rate signals in the presence of elevated solar background levels. We recently identified several problems and tested a modified version of the merge VAP by comparing profiles of water vapor mixing ratio derived from CARL with simultaneous sonde data over a six-month period. We show that the modified merge VAP significantly reduces the daytime bias, and results in mean differences that are within approximately 1% for both nighttime and daytime measurements.

  2. Penetration of the interplanetary magnetic field B(sub y) magnetosheath plasma into the magnetosphere: Implications for the predominant magnetopause merging site

    NASA Technical Reports Server (NTRS)

    Newell, Patrick T.; Sibeck, David G.; Meng, Ching-I

    1995-01-01

    Magnetosheath plasma penetrates into the magnetosphere, creating the particle cusp, and similarly the interplanetary magnetic field (IMF) B(sub y) component penetrates the magnetopause. We reexamine the phenomenology of such penetration to investigate implications for the magnetopause merging site. Three models are popular: (1) the 'antiparallel' model, in which merging occurs where the local magnetic shear is largest (usually high magnetic latitude); (2) a tilted merging line passing through the subsolar point but extending to very high latitudes; or (3) a tilted merging line passing through the subsolar point in which most merging occurs within a few Earth radii of the equatorial plane and local noon (subsolar merging). It is difficult to distinguish between the first two models, but the third implies some very different predictions. We show that properties of the particle cusp imply that plasma injection into the magnetosphere occurs most often at high magnetic latitudes. In particular, we note the following: (1) The altitude of the merging site inferred from midaltitude cusp ion pitch angle dispersion is typically 8-12 R(sub E). (2) The highest ion energy observable when moving poleward through the cusp drops long before the bulk of the cusp plasma is reached, implying that ions are swimming upstream against the sheath flow shortly after merging. (3) Low-energy ions are less able to enter the winter cusp than the summer cusp. (4) The local time behavior of the cusp as a function of B(sub y) and B(sub z) corroborates predictions of the high-latitude merging models. We also reconsider the penetration of the IMF B(sub y) component onto closed dayside field lines. Our approach, in which closed field lines move to fill in flux voids created by asymmetric magnetopause flux erosion, shows that strict subsolar merging cannot account for the observations.

  3. Coupling the Solar-Wind/IMF to the Ionosphere through the High Latitude Cusps

    NASA Technical Reports Server (NTRS)

    Maynard, Nelson C.

    2003-01-01

    Magnetic merging is a primary means for coupling energy from the solar wind into the magnetosphere-ionosphere system. The location and nature of the process remain open questions. By correlating measurements from diverse locations and using large-scale MHD models to put the measurements in context, it is possible to constrain our interpretations of the global and meso-scale dynamics of magnetic merging. Recent evidence demonstrates that merging often occurs at high latitudes in the vicinity of the cusps. The location is in part controlled by the clock angle in the interplanetary magnetic field (IMF) Y-Z plane. In fact, B(sub Y) bifurcates the cusp relative to source regions. The newly opened field lines may couple to the ionosphere at MLT locations as much as 3 hr away from local noon. On the other side of noon the cusp may be connected to merging sites in the opposite hemisphere; in fact, the small convection cell is generally driven by opposite-hemisphere merging. B(sub X) controls the timing of the interaction and the merging sites in each hemisphere, which may respond to planar features in the IMF at different times. Correlation times are variable and are controlled by the dynamics of the tilt of the interplanetary electric field phase plane. The orientation of the phase plane may change significantly on time scales of tens of minutes. Merging is temporally variable and may be occurring at multiple sites simultaneously. Accelerated electrons from the merging process excite optical signatures at the foot of the newly opened field lines. All-sky photometer observations of 557.7 nm emissions in the cusp region provide a "television picture" of the merging process and may be used to infer the temporal and spatial variability of merging, tied to variations in the IMF.

  4. Recent Plasma Observations Related to Magnetic Merging and the Low-Latitude Boundary Layer. Case Study by Polar, March 18, 2006

    NASA Technical Reports Server (NTRS)

    Chandler, M.; Avanov, L.; Craven, P.; Mozer, F.; Moore, T. E.

    2007-01-01

    We have begun an investigation of the nature of the low-latitude boundary layer in the mid-altitude cusp region using data from the Polar spacecraft. Magnetosheath-like plasma is frequently observed deep (in terms of distance from the magnetopause and in invariant latitude) in the magnetosphere. One such case, taken during a long period of northward interplanetary magnetic field (IMF) on March 18, 2006, shows injected magnetosheath ions within the magnetosphere with velocity distributions resulting from two separate merging sites along the same field lines. Cold ionospheric ions were also observed counterstreaming along the field lines, evidence that these field lines were closed. Our results support the idea that double reconnection under northward IMF on the same group of field lines can provide a source for the LLBL. However, the flow direction of the accelerated magnetosheath ions, antiparallel to the local magnetic field, and the location of the spacecraft suggest that these two injection sites are located northward of the spacecraft position. Observed convection velocities of the magnetic field lines are inconsistent with those expected for double post-cusp reconnection in both hemispheres. These observations favor a scenario in which a group of newly closed field lines was created by a combination of high-shear merging at high latitudes in the northern hemisphere and low-shear merging at lower latitudes on the dayside magnetopause.

  5. The eMERGE Network: a consortium of biorepositories linked to electronic medical records data for conducting genomic studies.

    PubMed

    McCarty, Catherine A; Chisholm, Rex L; Chute, Christopher G; Kullo, Iftikhar J; Jarvik, Gail P; Larson, Eric B; Li, Rongling; Masys, Daniel R; Ritchie, Marylyn D; Roden, Dan M; Struewing, Jeffery P; Wolf, Wendy A

    2011-01-26

    The eMERGE (electronic MEdical Records and GEnomics) Network is an NHGRI-supported consortium of five institutions to explore the utility of DNA repositories coupled to Electronic Medical Record (EMR) systems for advancing discovery in genome science. eMERGE also includes a special emphasis on the ethical, legal and social issues related to these endeavors. The five sites are supported by an Administrative Coordinating Center. Setting of network goals is initiated by working groups: (1) Genomics, (2) Informatics, and (3) Consent & Community Consultation, which also includes active participation by investigators outside the eMERGE funded sites, and (4) Return of Results Oversight Committee. The Steering Committee, composed of site PIs and representatives and NHGRI staff, meets three times per year, once per year with the External Scientific Panel. The primary site-specific phenotypes for which samples have undergone genome-wide association study (GWAS) genotyping are cataract and HDL, dementia, electrocardiographic QRS duration, peripheral arterial disease, and type 2 diabetes. A GWAS is also being undertaken for resistant hypertension in ≈ 2,000 additional samples identified across the network sites, to be added to data available for samples already genotyped. Funded by ARRA supplements, secondary phenotypes have been added at all sites to leverage the genotyping data, and hypothyroidism is being analyzed as a cross-network phenotype. Results are being posted in dbGaP. Other key eMERGE activities include evaluation of the issues associated with cross-site deployment of common algorithms to identify cases and controls in EMRs, data privacy of genomic and clinically-derived data, developing approaches for large-scale meta-analysis of GWAS data across five sites, and a community consultation and consent initiative at each site. Plans are underway to expand the network in diversity of populations and incorporation of GWAS findings into clinical care. 
By combining advanced clinical informatics, genome science, and community consultation, eMERGE represents a first step in the development of data-driven approaches to incorporate genomic information into routine healthcare delivery.

  6. Modeling cooperative driving behavior in freeway merges.

    DOT National Transportation Integrated Search

    2011-11-01

    Merging locations are major sources of freeway bottlenecks and are therefore important for freeway operations analysis. Microscopic simulation tools have been successfully used to analyze merging bottlenecks and to design optimum geometric configurat...

  7. Efficient replication of a paramyxovirus independent of full zippering of the fusion protein six-helix bundle domain

    PubMed Central

    Brindley, Melinda A.; Plattet, Philippe; Plemper, Richard Karl

    2014-01-01

    Enveloped viruses such as HIV and members of the paramyxovirus family use metastable, proteinaceous fusion machineries to merge the viral envelope with cellular membranes for infection. A hallmark of the fusogenic glycoproteins of these pathogens is refolding into a thermodynamically highly stable fusion core structure composed of six antiparallel α-helices, and this structure is considered instrumental for pore opening and/or enlargement. Using a paramyxovirus fusion (F) protein, we tested this paradigm by engineering covalently restricted F proteins that are predicted to be unable to close the six-helix bundle core structure fully. Several candidate bonds formed efficiently, resulting in F trimers and higher-order complexes containing covalently linked dimers. The engineered F complexes were incorporated into recombinant virions efficiently and were capable of refolding into a postfusion conformation without temporary or permanent disruption of the disulfide bonds. They efficiently formed fusion pores based on virus replication and quantitative cell-to-cell and virus-to-cell fusion assays. Complementation of these F mutants with a monomeric, fusion-inactive F variant enriched the F oligomers for heterotrimers containing a single disulfide bond, without affecting fusion complementation profiles compared with standard F protein. Our demonstration that complete closure of the fusion core does not drive paramyxovirus entry may aid the design of strategies for inhibiting virus entry. PMID:25157143

  8. Efficient replication of a paramyxovirus independent of full zippering of the fusion protein six-helix bundle domain.

    PubMed

    Brindley, Melinda A; Plattet, Philippe; Plemper, Richard Karl

    2014-09-09

    Enveloped viruses such as HIV and members of the paramyxovirus family use metastable, proteinaceous fusion machineries to merge the viral envelope with cellular membranes for infection. A hallmark of the fusogenic glycoproteins of these pathogens is refolding into a thermodynamically highly stable fusion core structure composed of six antiparallel α-helices, and this structure is considered instrumental for pore opening and/or enlargement. Using a paramyxovirus fusion (F) protein, we tested this paradigm by engineering covalently restricted F proteins that are predicted to be unable to close the six-helix bundle core structure fully. Several candidate bonds formed efficiently, resulting in F trimers and higher-order complexes containing covalently linked dimers. The engineered F complexes were incorporated into recombinant virions efficiently and were capable of refolding into a postfusion conformation without temporary or permanent disruption of the disulfide bonds. They efficiently formed fusion pores based on virus replication and quantitative cell-to-cell and virus-to-cell fusion assays. Complementation of these F mutants with a monomeric, fusion-inactive F variant enriched the F oligomers for heterotrimers containing a single disulfide bond, without affecting fusion complementation profiles compared with standard F protein. Our demonstration that complete closure of the fusion core does not drive paramyxovirus entry may aid the design of strategies for inhibiting virus entry.

  9. Parameter estimation for chaotic systems using a hybrid adaptive cuckoo search with simulated annealing algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sheng, Zheng, E-mail: 19994035@sina.com; Wang, Jun; Zhou, Bihua

    2014-03-15

    This paper introduces a novel hybrid optimization algorithm to establish the parameters of chaotic systems. In order to deal with the weaknesses of the traditional cuckoo search algorithm, an adaptive cuckoo search with simulated annealing algorithm is presented, which incorporates an adaptive parameter-adjusting operation and a simulated annealing operation in the cuckoo search algorithm. Normally, the parameters of the cuckoo search algorithm are kept constant, which may decrease the efficiency of the algorithm. For the purpose of balancing and enhancing the accuracy and convergence rate of the cuckoo search algorithm, the adaptive operation is presented to tune the parameters properly. Besides, the local search capability of the cuckoo search algorithm is relatively weak, which may decrease the quality of optimization. So the simulated annealing operation is merged into the cuckoo search algorithm to enhance the local search ability and improve the accuracy and reliability of the results. The functionality of the proposed hybrid algorithm is investigated through the Lorenz chaotic system under noiseless and noise conditions, respectively. The numerical results demonstrate that the method can estimate parameters efficiently and accurately under both conditions. Finally, the results are compared with the traditional cuckoo search algorithm, genetic algorithm, and particle swarm optimization algorithm. Simulation results demonstrate the effectiveness and superior performance of the proposed algorithm.
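
    The key ingredient borrowed from simulated annealing — occasionally accepting a worse candidate with probability exp(-Δ/T) under a cooling temperature T — can be sketched in isolation. The toy below is a schematic of that acceptance rule inside a generic stochastic search (the step distribution, cooling rate, and test objective are invented here); it is not the paper's full adaptive cuckoo search with Lévy flights.

```python
import math
import random

def sa_search(f, x0, t0=1.0, cooling=0.95, steps=2000, seed=1):
    """Minimize f by random perturbation with Metropolis (SA) acceptance."""
    rng = random.Random(seed)
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.5)          # random perturbation step
        fc = f(cand)
        delta = fc - fx
        # Accept improvements always; accept worse moves with prob exp(-delta/T).
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x, fx = cand, fc
        if fx < best_f:
            best_x, best_f = x, fx
        t *= cooling                           # geometric cooling schedule
    return best_x, best_f
```

    Early on, the high temperature lets the search escape local minima; as T cools, the acceptance rule approaches a greedy local search, which is precisely the local-refinement behavior the abstract says the SA operation contributes to cuckoo search.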

  10. Empirical correspondence between trophic transfer efficiency in freshwater food webs and the slope of their size spectra.

    PubMed

    Mehner, Thomas; Lischke, Betty; Scharnweber, Kristin; Attermeyer, Katrin; Brothers, Soren; Gaedke, Ursula; Hilt, Sabine; Brucet, Sandra

    2018-06-01

    The density of organisms declines with size, because larger organisms need more energy than smaller ones and energetic losses occur when larger organisms feed on smaller ones. One expression of density-size distributions is the Normalized Biomass Size Spectrum (NBSS), which plots the logarithm of biomass, independent of taxonomy, within bins of logarithmic organismal size, divided by the bin width. Theoretically, the NBSS slope of multi-trophic communities is exactly -1.0 if the trophic transfer efficiency (TTE, the ratio of production rates between adjacent trophic levels) is 10% and the predator-prey mass ratio (PPMR) is fixed at 10^4. Here we provide evidence from four multi-trophic lake food webs that empirically estimated TTEs correspond to empirically estimated slopes of the respective community NBSS. Each NBSS considered pelagic and benthic organisms spanning size ranges from bacteria to fish, all sampled over three seasons in 1 yr. The four NBSS slopes were significantly steeper than -1.0 (range -1.14 to -1.19, with 95% CIs excluding -1). The corresponding average TTEs were substantially lower than 10% in each of the four food webs (range 1.0% to 3.6%, mean 1.85%). The overall slope merging all biomass-size data pairs from the four systems (-1.17) was almost identical to the slope predicted from the arithmetic mean TTE of the four food webs (-1.18), assuming a constant PPMR of 10^4. Accordingly, our empirical data confirm the theoretically predicted quantitative relationship between TTE and the slope of the biomass-size distribution. Furthermore, we show that benthic and pelagic organisms can be merged into a community NBSS, but future studies have yet to explore potential differences in habitat-specific TTEs and PPMRs. We suggest that community NBSS may provide valuable information on the structure of food webs and their energetic pathways, and can improve the accuracy of TTE estimates. © 2018 by the Ecological Society of America.
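    The quantitative link between TTE, PPMR, and the NBSS slope can be checked numerically. The formula below is our inference from the anchor points reported in the abstract (TTE = 10% with PPMR = 10^4 gives a slope of exactly -1.0), assuming metabolic production scaling proportional to m^(-1/4); it reproduces the reported numbers but is not quoted from the paper.

```python
import math

def nbss_slope(tte, ppmr=1e4):
    # Slope such that TTE = 10% with PPMR = 10^4 yields exactly -1.0, under the
    # assumed metabolic scaling production ~ biomass * m**(-1/4). This reading
    # reproduces the abstract's numbers but is our reconstruction, not a quote.
    return math.log10(tte) / math.log10(ppmr) - 0.75

print(round(nbss_slope(0.10), 2))    # baseline case: slope of -1.0
print(round(nbss_slope(0.0185), 2))  # mean empirical TTE of 1.85% -> -1.18
```

    The second call matches the predicted slope of -1.18 quoted for the four-lake mean TTE, close to the observed overall slope of -1.17.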

  11. Co-axial heterostructures integrating palladium/titanium dioxide with carbon nanotubes for efficient electrocatalytic hydrogen evolution.

    PubMed

    Valenti, Giovanni; Boni, Alessandro; Melchionna, Michele; Cargnello, Matteo; Nasi, Lucia; Bertoni, Giovanni; Gorte, Raymond J; Marcaccio, Massimo; Rapino, Stefania; Bonchio, Marcella; Fornasiero, Paolo; Prato, Maurizio; Paolucci, Francesco

    2016-12-12

    Considering the depletion of fossil-fuel reserves and their negative environmental impact, new energy schemes must point towards alternative ecological processes. Efficient hydrogen evolution from water is one promising route towards a renewable energy economy and sustainable development. Here we show a tridimensional electrocatalytic interface, featuring a hierarchical, co-axial arrangement of a palladium/titanium dioxide layer on functionalized multi-walled carbon nanotubes. The resulting morphology leads to a merging of the conductive nanocarbon core with the active inorganic phase. A mechanistic synergy is envisioned by a cascade of catalytic events promoting water dissociation, hydride formation and hydrogen evolution. The nanohybrid exhibits a performance exceeding that of state-of-the-art electrocatalysts (turnover frequency of 15,000 H2 per hour at 50 mV overpotential). The Tafel slope of ∼130 mV per decade points to a rate-determining step comprising water dissociation and formation of hydride. Comparative activities of the isolated components or their physical mixtures demonstrate that the good performance evolves from the synergistic hierarchical structure.
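    A Tafel slope of ∼130 mV per decade has a direct operational reading: each additional 130 mV of overpotential buys roughly a tenfold increase in current density. A minimal sketch of that relation (the 130 mV value is from the abstract; the helper name is ours):

```python
def current_ratio(delta_eta_mV, tafel_slope_mV=130.0):
    # Tafel relation: overpotential grows by one slope unit per decade of
    # current density, so j2/j1 = 10 ** (delta_eta / b)
    return 10 ** (delta_eta_mV / tafel_slope_mV)

print(round(current_ratio(130.0), 1))  # one Tafel slope -> 10x the current
print(round(current_ratio(260.0), 1))  # two slope units -> 100x the current
```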

  12. Co-axial heterostructures integrating palladium/titanium dioxide with carbon nanotubes for efficient electrocatalytic hydrogen evolution

    NASA Astrophysics Data System (ADS)

    Valenti, Giovanni; Boni, Alessandro; Melchionna, Michele; Cargnello, Matteo; Nasi, Lucia; Bertoni, Giovanni; Gorte, Raymond J.; Marcaccio, Massimo; Rapino, Stefania; Bonchio, Marcella; Fornasiero, Paolo; Prato, Maurizio; Paolucci, Francesco

    2016-12-01

    Considering the depletion of fossil-fuel reserves and their negative environmental impact, new energy schemes must point towards alternative ecological processes. Efficient hydrogen evolution from water is one promising route towards a renewable energy economy and sustainable development. Here we show a tridimensional electrocatalytic interface, featuring a hierarchical, co-axial arrangement of a palladium/titanium dioxide layer on functionalized multi-walled carbon nanotubes. The resulting morphology leads to a merging of the conductive nanocarbon core with the active inorganic phase. A mechanistic synergy is envisioned by a cascade of catalytic events promoting water dissociation, hydride formation and hydrogen evolution. The nanohybrid exhibits a performance exceeding that of state-of-the-art electrocatalysts (turnover frequency of 15000 H2 per hour at 50 mV overpotential). The Tafel slope of ~130 mV per decade points to a rate-determining step comprised of water dissociation and formation of hydride. Comparative activities of the isolated components or their physical mixtures demonstrate that the good performance evolves from the synergistic hierarchical structure.

  13. Realizing luminescent downshifting in ZnO thin films by Ce doping with enhancement of photocatalytic activity

    NASA Astrophysics Data System (ADS)

    Narayanan, Nripasree; Deepak, N. K.

    2018-04-01

    ZnO thin films doped with Ce at different concentrations were deposited on glass substrates by the spray pyrolysis technique. XRD analysis revealed the phase purity and polycrystalline nature of the films with hexagonal wurtzite geometry, and the composition analysis confirmed the incorporation of Ce into the ZnO lattice in the doped films. Crystalline quality and optical transmittance diminished while electrical conductivity improved with Ce doping. Ce doping resulted in a red-shift of the optical energy gap due to the downshift of the conduction band minimum after merging with Ce-related impurity bands formed below the conduction band in the forbidden gap. In the room-temperature photoluminescence spectra, the UV emission intensity of the doped films decreased while the intensity of the visible emission band increased drastically, implying degradation in crystallinity as well as the incorporation of defect levels capable of luminescence downshifting. Ce doping improved photocatalytic efficiency by effectively trapping free carriers and then transferring them for dye degradation. Thus, Ce-doped ZnO thin films can act as luminescent downshifters as well as efficient photocatalysts.

  14. DNA barcode-based delineation of putative species: efficient start for taxonomic workflows

    PubMed Central

    Kekkonen, Mari; Hebert, Paul D N

    2014-01-01

    The analysis of DNA barcode sequences with varying techniques for cluster recognition provides an efficient approach for recognizing putative species (operational taxonomic units, OTUs). This approach accelerates and improves taxonomic workflows by exposing cryptic species and decreasing the risk of synonymy. This study tested the congruence of OTUs resulting from the application of three analytical methods (ABGD, BIN, GMYC) to sequence data for Australian hypertrophine moths. OTUs supported by all three approaches were viewed as robust, but 20% of the OTUs were only recognized by one or two of the methods. These OTUs were examined for three criteria to clarify their status. Monophyly and diagnostic nucleotides were both uninformative, but information on ranges was useful as sympatric sister OTUs were viewed as distinct, while allopatric OTUs were merged. This approach revealed 124 OTUs of Hypertrophinae, a more than twofold increase from the currently recognized 51 species. Because this analytical protocol is both fast and repeatable, it provides a valuable tool for establishing a basic understanding of species boundaries that can be validated with subsequent studies. PMID:24479435
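    The congruence step can be illustrated with simple set operations. The specimen IDs and partitions below are hypothetical stand-ins for the ABGD, BIN, and GMYC outputs; only clusters recovered identically by all three methods count as robust OTUs.

```python
# Hypothetical specimen IDs and method outputs; the real study partitioned
# DNA barcode sequences with ABGD, BIN, and GMYC.
abgd = [{"s1", "s2"}, {"s3"}, {"s4", "s5"}]
bin_ = [{"s1", "s2"}, {"s3"}, {"s4"}, {"s5"}]
gmyc = [{"s1", "s2"}, {"s3"}, {"s4"}, {"s5"}]

def robust_otus(*partitions):
    # an OTU is robust when every method recovers exactly the same cluster
    sets = [set(map(frozenset, p)) for p in partitions]
    return set.intersection(*sets)

print(sorted(sorted(o) for o in robust_otus(abgd, bin_, gmyc)))
# [['s1', 's2'], ['s3']]
```

    Clusters that are not robust (here s4/s5, which one method lumps and the others split) would then be resolved by the range criterion: sympatric sister OTUs stay distinct, allopatric ones are merged.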

  15. Co-axial heterostructures integrating palladium/titanium dioxide with carbon nanotubes for efficient electrocatalytic hydrogen evolution

    PubMed Central

    Valenti, Giovanni; Boni, Alessandro; Melchionna, Michele; Cargnello, Matteo; Nasi, Lucia; Bertoni, Giovanni; Gorte, Raymond J.; Marcaccio, Massimo; Rapino, Stefania; Bonchio, Marcella; Fornasiero, Paolo; Prato, Maurizio; Paolucci, Francesco

    2016-01-01

    Considering the depletion of fossil-fuel reserves and their negative environmental impact, new energy schemes must point towards alternative ecological processes. Efficient hydrogen evolution from water is one promising route towards a renewable energy economy and sustainable development. Here we show a tridimensional electrocatalytic interface, featuring a hierarchical, co-axial arrangement of a palladium/titanium dioxide layer on functionalized multi-walled carbon nanotubes. The resulting morphology leads to a merging of the conductive nanocarbon core with the active inorganic phase. A mechanistic synergy is envisioned by a cascade of catalytic events promoting water dissociation, hydride formation and hydrogen evolution. The nanohybrid exhibits a performance exceeding that of state-of-the-art electrocatalysts (turnover frequency of 15000 H2 per hour at 50 mV overpotential). The Tafel slope of ∼130 mV per decade points to a rate-determining step comprised of water dissociation and formation of hydride. Comparative activities of the isolated components or their physical mixtures demonstrate that the good performance evolves from the synergistic hierarchical structure. PMID:27941752

  16. Design and Implementation of Scientific Software Components to Enable Multiscale Modeling: The Effective Fragment Potential (QM/EFP) Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaenko, Alexander; Windus, Theresa L.; Sosonkina, Masha

    2012-10-19

    The design and development of scientific software components to provide an interface to the effective fragment potential (EFP) methods are reported. Multiscale modeling of physical and chemical phenomena demands the merging of software packages developed by research groups in significantly different fields. Componentization offers an efficient way to realize new high-performance scientific methods by combining the best models available in different software packages without a need for package readaptation after the initial componentization is complete. The EFP method is an efficient electronic-structure-theory-based model potential that is suitable for predictive modeling of intermolecular interactions in large molecular systems, such as liquids, proteins, atmospheric aerosols, and nanoparticles, with an accuracy that is comparable to that of correlated ab initio methods. The developed components make the EFP functionality accessible for any scientific component-aware software package. The performance of the component is demonstrated on a protein interaction model, and its accuracy is compared with results obtained with coupled cluster methods.

  17. Galaxy mergers and gravitational lens statistics

    NASA Technical Reports Server (NTRS)

    Rix, Hans-Walter; Maoz, Dan; Turner, Edwin L.; Fukugita, Masataka

    1994-01-01

    We investigate the impact of hierarchical galaxy merging on the statistics of gravitational lensing of distant sources. Since no definite theoretical predictions for the merging history of luminous galaxies exist, we adopt a parameterized prescription, which allows us to adjust the expected number of pieces comprising a typical present-day galaxy at z approximately 0.65. The existence of global parameter relations for elliptical galaxies and constraints on the evolution of the phase space density in dissipationless mergers allow us to limit the possible evolution of galaxy lens properties under merging. We draw two lessons from implementing this lens evolution into statistical lens calculations: (1) the total optical depth to multiple imaging (e.g., of quasars) is quite insensitive to merging; (2) merging leads to a smaller mean separation of observed multiple images. Because merging does not drastically reduce the expected lensing frequency, it cannot make lambda-dominated cosmologies compatible with the existing lensing observations. A comparison with the data from the Hubble Space Telescope (HST) Snapshot Survey shows that models with little or no evolution of the lens population are statistically favored over strong merging scenarios. A specific merging scenario proposed by Toomre can be rejected (95% level) by such a comparison. Some versions of the scenario proposed by Broadhurst, Ellis, & Glazebrook are statistically acceptable.

  18. Modeling merging behavior at lane drops : [tech transfer summary].

    DOT National Transportation Integrated Search

    2015-02-01

    A better understanding of the merging behavior of drivers will lead to the development of better lane-drop traffic-control plans and strategies, which will provide better guidance to drivers for safer merging.

  19. Simulation of the target creation through FRC merging for a magneto-inertial fusion concept

    NASA Astrophysics Data System (ADS)

    Li, Chenguang; Yang, Xianjun

    2017-04-01

    A two-dimensional magnetohydrodynamics model has been used to simulate the target creation process in a magneto-inertial fusion concept named the Magnetized Plasma Fusion Reactor (MPFR) [C. Li and X. Yang, Phys. Plasmas 23, 102702 (2016)], where the target plasma created through field-reversed configuration (FRC) merging is compressed by an imploding liner driven by a pulsed-power driver. In the scheme, two initial FRCs are translated into the region where merging occurs, producing the target plasma ready for compression. The simulations cover the three stages of the target creation process: formation, translation, and merging. The factors affecting the achieved target are analyzed numerically. The magnetic field gradient produced by the conical coils is found to determine how quickly the FRC is accelerated to peak velocity and how quickly collision merging occurs. Moreover, it is demonstrated that FRC merging can be realized by real coils with gaps showing nearly identical performance, and the optimized target created by FRC merging shows larger internal energy and retained flux, which is more suitable for the MPFR concept.

  20. Merged SAGE II / MIPAS / OMPS Ozone Record : Impact of Transfer Standard on Ozone Trends.

    NASA Astrophysics Data System (ADS)

    Kramarova, N. A.; Laeng, A.; von Clarmann, T.; Stiller, G. P.; Walker, K. A.; Zawodny, J. M.; Plieninger, J.

    2017-12-01

    The deseasonalized ozone anomalies from the SAGE II, MIPAS, and OMPS-LP datasets are merged into one long record. Two versions of the dataset will be presented: one uses the ACE-FTS instrument as the transfer standard, the other the MLS instrument. The data are provided in 10-degree latitude bins from 60N to 60S for the period from October 1984 to March 2017. The main differences between the merged ozone record presented in this study and the merged SAGE II / Ozone_CCI / OMPS-Saskatoon dataset by V. Sofieva are: the OMPS-LP data are from the NASA GSFC version 2 processor; the MIPAS 2002-2004 data are taken into the record; and the data are merged using a transfer standard. In overlapping periods, data are merged as weighted means where the weights are inversely proportional to the standard errors of the means (SEM) of the corresponding individual monthly means. The merged dataset comes with uncertainty estimates. Ozone trends are calculated from both versions of the dataset, and the impact of the transfer standard on the obtained trends is discussed.
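    The overlap-period merging rule (weighted means with weights inversely proportional to each record's SEM) can be sketched as follows; the numbers are made up, and the uncertainty propagation shown is our assumption rather than the production algorithm:

```python
def merge_anomalies(values, sems):
    # weighted mean with weights inversely proportional to each record's SEM,
    # as described for the overlap periods (a sketch, not the production code)
    weights = [1.0 / s for s in sems]
    wsum = sum(weights)
    merged = sum(w * v for w, v in zip(weights, values)) / wsum
    # a simple propagated uncertainty under the same weighting (our assumption)
    merged_sem = sum((w / wsum) ** 2 * s ** 2
                     for w, s in zip(weights, sems)) ** 0.5
    return merged, merged_sem

# two hypothetical monthly anomalies for the same month and latitude bin
m, e = merge_anomalies([1.0, 2.0], [0.5, 1.0])
```

    The record with the smaller SEM (0.5) dominates, pulling the merged anomaly toward 1.0.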

  1. Convolutional networks for fast, energy-efficient neuromorphic computing

    PubMed Central

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  2. Convolutional networks for fast, energy-efficient neuromorphic computing.

    PubMed

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  3. LP01 to LP11 mode convertor based on side-polished small-core single-mode fiber

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Li, Yang; Li, Wei-dong

    2018-03-01

    An all-fiber LP01-LP11 mode convertor based on side-polished small-core single-mode fibers (SMFs) is numerically demonstrated. The linearly polarized incident beam in one arm experiences a π phase shift through a fiber half-waveplate, and the side-polished parts merge into an equivalent twin-core fiber (TCF) which spatially shapes the incident LP01 modes into the LP11 mode supported by the step-index few-mode fiber (FMF). Optimum conditions for the highest conversion efficiency are investigated using the beam propagation method (BPM), with an approximate efficiency as high as 96.7%. The proposed scheme can operate within a wide wavelength range from 1.3 μm to 1.7 μm with overall conversion efficiency greater than 95%. The effective mode area and coupling loss are also characterized in detail by the finite element method (FEM).

  4. A Kinetic-MHD Theory for the Self-Consistent Energy Exchange Between Energetic Particles and Active Small-scale Flux Ropes

    NASA Astrophysics Data System (ADS)

    le Roux, J. A.

    2017-12-01

    We previously developed a focused transport kinetic theory formalism with Fokker-Planck coefficients (and its Parker transport limit) to model large-scale energetic particle transport and acceleration in solar wind regions with multiple contracting and merging small-scale flux ropes on MHD (inertial) scales (Zank et al. 2014; le Roux et al. 2015). The theory unifies the main acceleration mechanisms identified in particle simulations for particles temporarily trapped in such active flux-rope structures: acceleration by the parallel electric field in reconnection regions between merging flux ropes, curvature drift acceleration in incompressible/compressible contracting and merging flux ropes, and betatron acceleration (e.g., Dahlin et al. 2016). Initial analytical solutions of the Parker transport equation in the test-particle limit showed that the energetic particle pressure from efficient flux-rope energization can potentially be high in turbulent solar wind regions containing active flux-rope structures. This requires taking into account the back-reaction of energetic particles on flux ropes to determine more accurately the efficiency of energetic particle acceleration by small-scale flux ropes. To accomplish this goal, we recently extended the kinetic theory to a kinetic-MHD level. We will present the extended theory, showing the focused transport equation to be coupled to a solar wind MHD transport equation for small-scale flux-rope energy density extracted from a recently published nearly incompressible theory for solar wind MHD turbulence with a plasma beta of 1 (Zank et al. 2017).
In the flux-rope transport equation appear new expressions for the damping/growth rates of flux-rope energy, derived by assuming energy conservation in the interaction between energetic particles and small-scale flux ropes for all the main flux-rope acceleration mechanisms; previous expressions for average particle acceleration rates are also explored in more detail. Future applications will involve exploring the relative roles of diffusive shock acceleration and flux-rope acceleration in the vicinity of traveling shocks in the supersonic solar wind near Earth, where many flux-rope structures were recently detected (Hu et al. 2017, this session).

  5. CLASH: THE ENHANCED LENSING EFFICIENCY OF THE HIGHLY ELONGATED MERGING CLUSTER MACS J0416.1-2403

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zitrin, A.; Bartelmann, M.; Carrasco, M.

    2013-01-10

    We perform a strong lensing analysis of the merging galaxy cluster MACS J0416.1-2403 (M0416; z = 0.42) in recent CLASH/HST observations. We identify 70 new multiple images and candidates of 23 background sources in the range 0.7 ≲ z_phot ≲ 6.14, including two probable high-redshift dropouts, revealing a highly elongated lens with axis ratio ≈5:1 and a major axis of ~100" (z_s ~ 2). Compared to other well-studied clusters, M0416 shows an enhanced lensing efficiency. Although the critical area is not particularly large (≈0.6 arcmin^2; z_s ~ 2), the number of multiple images per critical area is anomalously high. We calculate that the observed elongation boosts the number of multiple images per critical area by a factor of ~2.5×, due to the increased ratio of the caustic area relative to the critical area. Additionally, we find that the observed separation between the two main mass components enlarges the critical area by a factor of ~2. These geometrical effects can account for the high number (density) of multiple images observed. We find in numerical simulations that only ~4% of clusters (with M_vir ≥ 6 × 10^14 h^-1 M_Sun) exhibit critical curves as elongated as in M0416.

  6. Robustness via Run-Time Adaptation of Contingent Plans

    NASA Technical Reports Server (NTRS)

    Bresina, John L.; Washington, Richard; Norvig, Peter (Technical Monitor)

    2000-01-01

    In this paper, we discuss our approach to making the behavior of planetary rovers more robust for the purpose of increased productivity. Due to the inherent uncertainty in rover exploration, the traditional approach to rover control is conservative, limiting the autonomous operation of the rover and sacrificing performance for safety. Our objective is to increase the science productivity possible within a single uplink by allowing the rover's behavior to be specified with flexible, contingent plans and by employing dynamic plan adaptation during execution. We have deployed a system exhibiting flexible, contingent execution; this paper concentrates on our ongoing efforts on plan adaptation. Plans can be revised in two ways: plan steps may be deleted, with execution continuing with the plan suffix; and the current plan may be merged with an "alternate plan" from an on-board library. The plan revision action is chosen to maximize the expected utility of the plan. Plan merging and action deletion constitute a more conservative general-purpose planning system; in return, our approach is more efficient and more easily verified, two important criteria for deployed rovers.
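    The selection rule, choosing among step deletions and library merges the revision that maximizes expected utility, can be sketched as below. The plans, utilities, and the "append the alternate" merge semantics are hypothetical simplifications, not the deployed system's representation.

```python
def expected_utility(plan):
    # utility of a plan = sum of step utilities weighted by success probability
    return sum(u * p for u, p in plan)

def revise(current, library):
    # candidate revisions: keep the plan as-is, delete one step (execution
    # continues with the remainder), or merge in an alternate plan from the
    # on-board library (naively modeled here as appending it)
    candidates = [current]
    for i in range(len(current)):
        candidates.append(current[:i] + current[i + 1:])
    for alt in library:
        candidates.append(current + alt)
    return max(candidates, key=expected_utility)

current = [(10, 0.9), (5, 0.4)]      # hypothetical (utility, P(success)) steps
library = [[(4, 0.95)]]              # one alternate plan in the library
best = revise(current, library)
```

    Here merging the alternate wins (expected utility 14.8 versus 11 for the unrevised plan), so the revised plan keeps both original steps and appends the alternate.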

  7. Summit-to-sea mapping and change detection using satellite imagery: tools for conservation and management of coral reefs.

    PubMed

    Shapiro, A C; Rohmann, S O

    2005-05-01

    Continuous summit-to-sea maps showing both land features and shallow-water coral reefs have been completed for Puerto Rico and the U.S. Virgin Islands using circa-2000 Landsat 7 Enhanced Thematic Mapper (ETM+) imagery. Continuous land/sea terrain was mapped by merging Digital Elevation Models (DEMs) with satellite-derived bathymetry. Benthic habitat characterizations were created by unsupervised classification of Landsat imagery clustered using field data, producing maps with an estimated overall accuracy of >75% (Tau coefficient >0.65). These were merged with Geocover-LC (land use/land cover) data to create continuous land/sea cover maps. Image pairs from different dates were analyzed using Principal Component Analysis (PCA) to detect areas of change in the marine environment over two time intervals: 2000 to 2001, and 1991 to 2003. This activity demonstrates the capability of Landsat imagery to produce continuous summit-to-sea maps and to detect certain changes in the shallow-water marine environment, providing a valuable tool for efficient coastal zone monitoring and effective management and conservation.
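    The core merging idea, land elevations from the DEM where available and negated satellite-derived depths elsewhere, can be sketched in a few lines. The grids and the nodata convention below are hypothetical illustrations, not the study's actual data model:

```python
def summit_to_sea(dem, bathy, nodata=-9999):
    # continuous elevation surface: take the land elevation where the DEM has
    # data, otherwise the (negative of the) satellite-derived water depth
    return [[d if d != nodata else -b for d, b in zip(dem_row, bathy_row)]
            for dem_row, bathy_row in zip(dem, bathy)]

# toy 2x3 grids: elevations in meters; -9999 marks water pixels in the DEM
dem = [[12, 3, -9999], [5, -9999, -9999]]
bathy = [[0, 0, 4], [0, 2, 7]]       # positive depths from satellite bathymetry
print(summit_to_sea(dem, bathy))     # [[12, 3, -4], [5, -2, -7]]
```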

  8. Technical Note: Improving the VMERGE treatment planning algorithm for rotational radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaddy, Melissa R., E-mail: mrgaddy@ncsu.edu; Papp,

    2016-07-15

    Purpose: The authors revisit the VMERGE treatment planning algorithm by Craft et al. [“Multicriteria VMAT optimization,” Med. Phys. 39, 686–696 (2012)] for arc therapy planning and propose two changes to the method that aim to improve the achieved trade-off between treatment time and plan quality at little additional planning-time cost, while retaining other desirable properties of the original algorithm. Methods: The original VMERGE algorithm first computes an “ideal,” high-quality but also highly time-consuming treatment plan that irradiates the patient from all possible angles in a fine angular grid with a highly modulated beam, and then makes this plan deliverable within practical treatment time by an iterative fluence map merging and sequencing algorithm. We propose two changes to this method. First, we regularize the ideal plan obtained in the first step by adding an explicit constraint on treatment time. Second, we propose a different merging criterion that comprises identifying and merging adjacent maps whose merging results in the least degradation of radiation dose. Results: The effect of both suggested modifications is evaluated individually and jointly on clinical prostate and paraspinal cases. Details of the two cases are reported. Conclusions: In the authors’ computational study, both proposed modifications, especially the regularization, yield noticeably improved treatment plans for the same treatment times compared with the original VMERGE method. The resulting plans match the quality of 20-beam step-and-shoot IMRT plans with a delivery time of approximately 2 min.
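    The proposed merging criterion, repeatedly merging the adjacent fluence maps whose combination degrades the plan least, can be caricatured with 1-D fluence vectors. The absolute-difference cost below is a hypothetical stand-in for the paper's dose-degradation measure, and averaging is a naive merge rule:

```python
def merge_cost(a, b):
    # toy proxy for dose degradation: how dissimilar two adjacent maps are
    return sum(abs(x - y) for x, y in zip(a, b))

def greedy_merge(maps, target):
    # repeatedly merge the adjacent pair whose merge is cheapest, until only
    # `target` maps (i.e., deliverable apertures) remain
    maps = [list(m) for m in maps]
    while len(maps) > target:
        i = min(range(len(maps) - 1),
                key=lambda k: merge_cost(maps[k], maps[k + 1]))
        merged = [(x + y) / 2 for x, y in zip(maps[i], maps[i + 1])]
        maps[i : i + 2] = [merged]
    return maps

# four hypothetical per-angle fluence maps, each with two beamlets
fluences = [[1, 2], [1, 2], [5, 0], [5, 1]]
print(greedy_merge(fluences, 2))  # [[1.0, 2.0], [5.0, 0.5]]
```

    The two identical maps merge first at zero cost, then the two similar ones, leaving two representative maps and shortening delivery at minimal quality loss.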

  9. Validation of electronic medical record-based phenotyping algorithms: results and lessons learned from the eMERGE network.

    PubMed

    Newton, Katherine M; Peissig, Peggy L; Kho, Abel Ngo; Bielinski, Suzette J; Berg, Richard L; Choudhary, Vidhu; Basford, Melissa; Chute, Christopher G; Kullo, Iftikhar J; Li, Rongling; Pacheco, Jennifer A; Rasmussen, Luke V; Spangler, Leslie; Denny, Joshua C

    2013-06-01

    Genetic studies require precise phenotype definitions, but electronic medical record (EMR) phenotype data are recorded inconsistently and in a variety of formats. To present lessons learned about validation of EMR-based phenotypes from the Electronic Medical Records and Genomics (eMERGE) studies. The eMERGE network created and validated 13 EMR-derived phenotype algorithms. Network sites are Group Health, Marshfield Clinic, Mayo Clinic, Northwestern University, and Vanderbilt University. By validating EMR-derived phenotypes we learned that: (1) multisite validation improves phenotype algorithm accuracy; (2) targets for validation should be carefully considered and defined; (3) specifying time frames for review of variables eases validation time and improves accuracy; (4) using repeated measures requires defining the relevant time period and specifying the most meaningful value to be studied; (5) patient movement in and out of the health plan (transience) can result in incomplete or fragmented data; (6) the review scope should be defined carefully; (7) particular care is required in combining EMR and research data; (8) medication data can be assessed using claims, medications dispensed, or medications prescribed; (9) algorithm development and validation work best as an iterative process; and (10) validation by content experts or structured chart review can provide accurate results. Despite the diverse structure of the five EMRs of the eMERGE sites, we developed, validated, and successfully deployed 13 electronic phenotype algorithms. Validation is a worthwhile process that not only measures phenotype performance but also strengthens phenotype algorithm definitions and enhances their inter-institutional sharing.

  10. Validity analysis on merged and averaged data using within and between analysis: focus on effect of qualitative social capital on self-rated health.

    PubMed

    Shin, Sang Soo; Shin, Young-Jeon

    2016-01-01

    With an increasing number of studies highlighting regional social capital (SC) as a determinant of health, many studies use multi-level analysis with merged and averaged scores of community residents' survey responses calculated from community SC data. Careful examination is required to validate whether the merged and averaged data can represent the community. Therefore, this study analyzes the validity of the selected indicators and their applicability in multi-level analysis. Within and between analysis (WABA) was performed after creating community variables using merged and averaged data of community residents' responses from the 2013 Community Health Survey in Korea, with subjective self-rated health as the dependent variable. Further analysis was performed following the model suggested by the WABA result. Both the E-test results (1) and the WABA results (2) indicated that single-level analysis should be performed using the qualitative SC variable with cluster mean centering. In single-level multivariate regression analysis, qualitative SC with cluster mean centering showed a positive effect on self-rated health (0.054, p<0.001), although there was no substantial difference compared with analysis using SC variables without cluster mean centering or with multi-level analysis. Because variation in qualitative SC was larger within communities than between them, we validate that relational analysis of individual self-rated health can be performed within the group using cluster mean centering. Tests other than WABA can be performed in the future to confirm the validity of using community variables and their applicability in multi-level analysis.
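    Cluster mean centering itself is simple to state in code: subtract each community's mean from its members so that only within-community variation remains. A minimal sketch with hypothetical scores:

```python
def cluster_mean_center(values, clusters):
    # within-group (cluster-mean-centered) scores: subtract each community's
    # mean from its members, isolating within-community variation
    groups = {}
    for v, c in zip(values, clusters):
        groups.setdefault(c, []).append(v)
    means = {c: sum(vs) / len(vs) for c, vs in groups.items()}
    return [v - means[c] for v, c in zip(values, clusters)]

sc = [3.0, 5.0, 2.0, 4.0]            # hypothetical qualitative SC scores
community = ["A", "A", "B", "B"]     # which community each respondent lives in
print(cluster_mean_center(sc, community))  # [-1.0, 1.0, -1.0, 1.0]
```

    The centered scores would then enter a single-level regression on self-rated health, as the WABA result recommends.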

  11. Seamless contiguity method for parallel segmentation of remote sensing image

    NASA Astrophysics Data System (ADS)

    Wang, Geng; Wang, Guanghui; Yu, Mei; Cui, Chengling

    2015-12-01

    Seamless contiguity is a key technology for the parallel segmentation of large volumes of remote sensing data: it integrates the fragments produced by parallel processing into results usable by subsequent processes. Numerous methods for seamless contiguity are reported in the literature, such as establishing buffers, merging area boundaries, and data sewing. We propose a new method that is also based on building buffers. The seamless contiguity process we adopt follows two principles: ensuring the accuracy of boundaries and ensuring the correctness of topology. Firstly, the number of blocks is computed from the available processing capacity; unlike approaches that establish buffers on both sides of each block line, a buffer is established only on the right side and underside of the line. Each block is then segmented independently, yielding segmentation objects and their label values. Secondly, one block (the master block) is chosen and stitched to its adjacent blocks (the slave blocks), and the remaining blocks are processed in sequence. This guarantees the topological relationships and boundaries of the master block. Thirdly, where master-block polygon boundaries intersect the buffer boundary and slave-block polygon boundaries intersect the block line, we apply a set of rules to merge them or choose between them. Fourthly, the topology and boundaries in the buffer area are checked. Finally, a set of experiments was conducted to demonstrate the feasibility of the method. This novel seamless contiguity algorithm provides an applicable and practical solution for the efficient segmentation of massive remote sensing images.
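    The core stitching idea (objects from different blocks that touch across a block line are the same object and must share one label) is essentially a union-find relabelling. A minimal sketch, with hypothetical names and with the geometric detection of touching polygons assumed to have been done already:

```python
def find(parent, x):
    """Find the canonical representative of x with path compression."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def merge_boundary_labels(labels, boundary_pairs):
    """Merge segmentation labels that represent the same object across a block line.

    labels: set of object labels collected from all blocks.
    boundary_pairs: (label_a, label_b) pairs whose polygons touch across the line.
    Returns a mapping from every label to its canonical merged label.
    """
    parent = {l: l for l in labels}
    for a, b in boundary_pairs:
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[rb] = ra
    return {l: find(parent, l) for l in labels}

# Objects 1 and 2 (master block) touch objects 4 and 5 (slave block)
# across the block line; object 3 lies entirely inside one block.
mapping = merge_boundary_labels({1, 2, 3, 4, 5}, [(1, 4), (2, 5)])
```

    Relabelling every pixel or polygon through `mapping` then produces a seamless result across the block line.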

  12. The Careful Puppet Master: Reducing risk and fortifying acceptance testing with Jenkins CI

    NASA Astrophysics Data System (ADS)

    Smith, Jason A.; Richman, Gabriel; DeStefano, John; Pryor, James; Rao, Tejas; Strecker-Kellogg, William; Wong, Tony

    2015-12-01

    Centralized configuration management, including the use of automation tools such as Puppet, can greatly increase provisioning speed and efficiency when configuring new systems or making changes to existing systems, reduce duplication of work, and improve automated processes. However, centralized management also brings with it a level of inherent risk: a single change in just one file can quickly be pushed out to thousands of computers and, if that change is not properly and thoroughly tested and contains an error, could result in catastrophic damage to many services, potentially bringing an entire computer facility offline. Change management procedures can—and should—be formalized in order to prevent such accidents. However, like the configuration management process itself, if such procedures are not automated, they can be difficult to enforce strictly. Therefore, to reduce the risk of merging potentially harmful changes into our production Puppet environment, we have created an automated testing system, which includes the Jenkins CI tool, to manage our Puppet testing process. This system includes the proposed changes and runs Puppet on a pool of dozens of RedHat Enterprise Virtualization (RHEV) virtual machines (VMs) that replicate most of our important production services for the purpose of testing. This paper describes our automated test system and how it hooks into our production approval process for automatic acceptance testing. All pending changes that have been pushed to production must pass this validation process before they can be approved and merged into production.

  13. Development of a definition, classification system, and model for cultural geology

    NASA Astrophysics Data System (ADS)

    Mitchell, Lloyd W., III

    The concept for this study is based upon a personal interest by the author, an American Indian, in promoting cultural perspectives in undergraduate college teaching and learning environments. Most academicians recognize that merged fields can enhance undergraduate curricula. However, conflict may occur when instructors attempt to merge social science fields such as history or philosophy with geoscience fields such as mining and geomorphology. For example, ideologies of Earth structures derived from scientific methodologies may conflict with historical and spiritual understandings of Earth structures held by American Indians. Specifically, this study addresses the problem of how to combine cultural studies with the geosciences into a new merged academic discipline called cultural geology. This study further attempts to develop the merged field of cultural geology using an approach consisting of three research foci: a definition, a classification system, and a model. Literature reviews were conducted for all three foci. Additionally, to better understand merged fields, a literature review was conducted specifically for academic fields that merged social and physical sciences. Methodologies concentrated on the three research foci: definition, classification system, and model. The definition was derived via a two-step process. The first step, developing keyword hierarchical ranking structures, was followed by creating and analyzing semantic word meaning lists. The classification system was developed by reviewing 102 classification systems and incorporating selected components into a system framework. The cultural geology model was created also utilizing a two-step process. A literature review of scientific models was conducted. Then, the definition and classification system were incorporated into a model felt to reflect the realm of cultural geology. A course syllabus was then developed that incorporated the resulting definition, classification system, and model. 
This study concludes that cultural geology can be introduced as a merged discipline by using a three-foci framework consisting of a definition, a classification system, and a model. Additionally, this study reveals that cultural beliefs, attitudes, and behaviors can be incorporated into a geology course during the curriculum development process, using an approach known as 'learner-centered'. This study further concludes that cultural beliefs, derived from class members, are an important source of curriculum materials.

  14. Galaxy evolution in merging clusters: The passive core of the "Train Wreck" cluster of galaxies, A 520

    NASA Astrophysics Data System (ADS)

    Deshev, Boris; Finoguenov, Alexis; Verdugo, Miguel; Ziegler, Bodo; Park, Changbom; Hwang, Ho Seong; Haines, Christopher; Kamphuis, Peter; Tamm, Antti; Einasto, Maret; Hwang, Narae; Park, Byeong-Gon

    2017-11-01

    Aims: The mergers of galaxy clusters are the most energetic events in the Universe after the Big Bang. With the increased availability of multi-object spectroscopy and X-ray data, an ever increasing fraction of local clusters are recognised as exhibiting signs of recent or past merging events on various scales. Our goal is to probe how these mergers affect the evolution and content of their member galaxies. We specifically aim to answer the following questions: is the quenching of star formation in merging clusters enhanced when compared with relaxed clusters? Is the quenching preceded by a (short-lived) burst of star formation? Methods: We obtained optical spectroscopy of >400 galaxies in the field of the merging cluster Abell 520. We combine these observations with archival data to obtain a comprehensive picture of the state of star formation in the members of this merging cluster. Finally, we compare these observations with a control sample of ten non-merging clusters at the same redshift from The Arizona Cluster Redshift Survey (ACReS). We split the member galaxies into passive, star forming or recently quenched depending on their spectra. Results: The core of the merger shows a decreased fraction of star forming galaxies compared to clusters in the non-merging sample. This region, dominated by passive galaxies, is extended along the axis of the merger. We find evidence of rapid quenching of the galaxies during the core passage with no signs of a star burst on the time scales of the merger (≲0.4 Gyr). Additionally, we report the tentative discovery of an infalling group along the main filament feeding the merger, currently at 2.5 Mpc from the merger centre. This group contains a high fraction of star forming galaxies as well as approximately two thirds of all the recently quenched galaxies in our survey. 
The reduced spectra are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A131

  15. Banging Galaxy Clusters: High Fidelity X-ray Temperature and Radio Maps to Probe the Physics of Merging Clusters

    NASA Astrophysics Data System (ADS)

    Burns, Jack O.; Hallman, Eric J.; Alden, Brian; Datta, Abhirup; Rapetti, David

    2017-06-01

    We present early results from an X-ray/Radio study of a sample of merging galaxy clusters. Using a novel X-ray pipeline, we have generated high-fidelity temperature maps from existing long-integration Chandra data for a set of clusters including Abell 115, A520, and MACSJ0717.5+3745. Our pipeline, written in python and operating on the NASA ARC high performance supercomputer Pleiades, generates temperature maps with minimal user interaction. This code will be released, with full documentation, on GitHub in beta to the community later this year. We have identified a population of observable shocks in the X-ray data that allow us to characterize the merging activity. In addition, we have compared the X-ray emission and properties to the radio data from observations with the JVLA and GMRT. These merging clusters contain radio relics and/or radio halos in each case. These data products illuminate the merger process, and how the energy of the merger is dissipated into thermal and non-thermal forms. This research was supported by NASA ADAP grant NNX15AE17G.

  16. Experimental studies of collisional plasma shocks and plasma interpenetration via merging supersonic plasma jets

    NASA Astrophysics Data System (ADS)

    Hsu, S. C.; Moser, A. L.; Merritt, E. C.; Adams, C. S.

    2015-11-01

    Over the past 4 years on the Plasma Liner Experiment (PLX) at LANL, we have studied obliquely and head-on-merging supersonic plasma jets of an argon/impurity or hydrogen/impurity mixture. The jets are formed/launched by pulsed-power-driven railguns. In successive experimental campaigns, we characterized the (a) evolution of plasma parameters of a single plasma jet as it propagated up to ~ 1 m away from the railgun nozzle, (b) density profiles and 2D morphology of the stagnation layer and oblique shocks that formed between obliquely merging jets, and (c) collisionless interpenetration transitioning to collisional stagnation between head-on-merging jets. Key plasma diagnostics included a fast-framing CCD camera, an 8-chord visible interferometer, a survey spectrometer, and a photodiode array. This talk summarizes the primary results mentioned above, and highlights analyses of inferred post-shock temperatures based on observations of density gradients that we attribute to shock-layer thickness. We also briefly describe more recent PLX experiments on Rayleigh-Taylor-instability evolution with magnetic and viscous effects, and potential future collisionless shock experiments enabled by low-impurity, higher-velocity plasma jets formed by contoured-gap coaxial guns. Supported by DOE Fusion Energy Sciences and LANL LDRD.

  17. Structure reconstruction of TiO2-based multi-wall nanotubes: first-principles calculations.

    PubMed

    Bandura, A V; Evarestov, R A; Lukyanov, S I

    2014-07-28

    A new method of theoretical modelling of polyhedral single-walled nanotubes based on the consolidation of walls in the rolled-up multi-walled nanotubes is proposed. Molecular mechanics and ab initio quantum mechanics methods are applied to investigate the merging of walls in nanotubes constructed from the different phases of titania. The combination of two methods allows us to simulate the structures which are difficult to find only by ab initio calculations. For nanotube folding we have used (1) the 3-plane fluorite TiO2 layer; (2) the anatase (101) 6-plane layer; (3) the rutile (110) 6-plane layer; and (4) the 6-plane layer with lepidocrocite morphology. The symmetry of the resulting single-walled nanotubes is significantly lower than the symmetry of initial coaxial cylindrical double- or triple-walled nanotubes. These merged nanotubes acquire higher stability in comparison with the initial multi-walled nanotubes. The wall thickness of the merged nanotubes exceeds 1 nm and approaches the corresponding parameter of the experimental patterns. The present investigation demonstrates that the merged nanotubes can integrate the two different crystalline phases in one and the same wall structure.

  18. Fluid Merging Viscosity Measurement (FMVM) Experiment on the International Space Station

    NASA Technical Reports Server (NTRS)

    Antar, Basil N.; Ethridge, Edwin; Lehman, Daniel; Kaukler, William

    2007-01-01

    The concept of using low-gravity experimental data together with fluid-dynamical numerical simulations to measure the viscosity of highly viscous liquids was recently validated on the International Space Station (ISS). After the proof of concept for this method was tested with parabolic flight experiments, an ISS experiment was proposed and later conducted onboard the ISS in July 2004 and subsequently in May 2005. In that experiment two liquid drops were brought together manually until they touched and then were allowed to merge under the action of capillary forces alone. The merging process was recorded visually in order to measure the contact radius speed as the merging proceeded. Several liquids were tested, and for each liquid several drop diameters were used. It has been shown that when the coefficient of surface tension for the liquid is known, the contact radius speed can determine the coefficient of viscosity for that liquid. The viscosity is determined by fitting the experimental speed to the theoretically calculated contact radius speed for the same experimental parameters. Experimental and numerical results are presented in which the viscosity of different highly viscous liquids was determined, to a high degree of accuracy, using this technique.

  19. Investigation of alternative work zone merging sign configurations.

    DOT National Transportation Integrated Search

    2013-12-01

    This study investigated the effect of an alternative merge sign configuration within a freeway work zone. In this alternative : configuration, the graphical lane closed sign from the MUTCD was compared with a MERGE/arrow sign on one side and a : RIGH...

  20. Triadic split-merge sampler

    NASA Astrophysics Data System (ADS)

    van Rossum, Anne C.; Lin, Hai Xiang; Dubbeldam, Johan; van der Herik, H. Jaap

    2018-04-01

    In machine vision, typical heuristic methods for extracting parameterized objects from raw data points are the Hough transform and RANSAC. Bayesian models carry the promise of extracting such parameterized objects optimally, given a correct definition of the model and of the type of noise at hand. One category of solvers for Bayesian models is Markov chain Monte Carlo (MCMC) methods. Naive implementations of MCMC methods suffer from slow convergence in machine vision due to the complexity of the parameter space. Towards this end, blocked Gibbs and split-merge samplers have been developed that assign multiple data points to clusters at once. In this paper we introduce a new split-merge sampler, the triadic split-merge sampler, which performs steps between two and three randomly chosen clusters. This has two advantages. First, it reduces the asymmetry between the split and merge steps. Second, it can propose a new cluster composed of data points from two different clusters. Both advantages speed up convergence, which we demonstrate on a line extraction problem. We show that the triadic split-merge sampler outperforms the conventional split-merge sampler. Although this new MCMC sampler is demonstrated in a machine vision context, its applications extend to the very general domain of statistical inference.
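    The proposal mechanism can be illustrated with a highly simplified skeleton; this is not the authors' implementation, and the Metropolis-Hastings acceptance step that makes the sampler correct is deliberately omitted:

```python
import random

def triadic_proposal(clusters):
    """Pick two or three clusters at random and propose either merging their
    points into one cluster or splitting the pooled points into two fresh
    clusters. Illustrative only: a real split-merge sampler evaluates each
    proposal with a Metropolis-Hastings acceptance ratio."""
    k = random.choice([2, 3]) if len(clusters) >= 3 else 2
    chosen = set(random.sample(range(len(clusters)), k))
    pooled = [p for i in chosen for p in clusters[i]]
    rest = [c for i, c in enumerate(clusters) if i not in chosen]
    if len(pooled) < 2 or random.random() < 0.5:    # merge move
        return rest + [pooled]
    random.shuffle(pooled)                          # split move: two new clusters
    cut = len(pooled) // 2
    return rest + [pooled[:cut], pooled[cut:]]
```

    Operating on three clusters at once is what lets a split produce a cluster containing points drawn from two different original clusters, which a conventional two-cluster split-merge move cannot do in a single step.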

  1. Empowering genomic medicine by establishing critical sequencing result data flows: the eMERGE example.

    PubMed

    Aronson, Samuel; Babb, Lawrence; Ames, Darren; Gibbs, Richard A; Venner, Eric; Connelly, John J; Marsolo, Keith; Weng, Chunhua; Williams, Marc S; Hartzler, Andrea L; Liang, Wayne H; Ralston, James D; Devine, Emily Beth; Murphy, Shawn; Chute, Christopher G; Caraballo, Pedro J; Kullo, Iftikhar J; Freimuth, Robert R; Rasmussen, Luke V; Wehbe, Firas H; Peterson, Josh F; Robinson, Jamie R; Wiley, Ken; Overby Taylor, Casey

    2018-05-31

    The eMERGE Network is establishing methods for electronic transmittal of patient genetic test results from laboratories to healthcare providers across organizational boundaries. We surveyed the capabilities and needs of different network participants, established a common transfer format, and implemented transfer mechanisms based on this format. The interfaces we created are examples of the connectivity that must be instantiated before electronic genetic and genomic clinical decision support can be effectively built at the point of care. This work serves as a case example for both standards bodies and other organizations working to build the infrastructure required to provide better electronic clinical decision support for clinicians.

  2. Merged analog and photon counting profiles used as input for other RLPROF VAPs

    DOE Data Explorer

    Newsom, Rob

    2014-10-03

    The rlprof_merge VAP "merges" the photon counting and analog signals appropriately for each channel, creating an output data file that is very similar to the original raw data file format that the Raman lidar initially had.
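    Merging ("gluing") analog and photon-counting lidar channels is commonly done by rescaling the analog signal to photon-counting units via a linear fit in a region where both channels respond linearly, then using photon counts where they are unsaturated and the rescaled analog signal elsewhere. A minimal sketch under those assumptions; the thresholds and fitting details of the actual VAP differ:

```python
import numpy as np

def glue_profiles(analog, photon, lo=2.0, hi=10.0):
    """Glue an analog profile onto photon-counting units, then merge.

    In the overlap region (photon count rate between lo and hi, illustrative
    bounds) both channels are assumed linear, so the analog signal is
    rescaled to photon units by least squares. The merged profile uses the
    photon-counting signal where it is below the saturation bound hi and
    the rescaled analog signal elsewhere.
    """
    analog = np.asarray(analog, float)
    photon = np.asarray(photon, float)
    overlap = (photon > lo) & (photon < hi)
    a, b = np.polyfit(analog[overlap], photon[overlap], 1)  # photon ~ a*analog + b
    glued = a * analog + b
    return np.where(photon < hi, photon, glued)
```

    The result is a single profile with the analog channel's dynamic range at high signal and the photon-counting channel's sensitivity at low signal.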

  3. Merged analog and photon counting profiles used as input for other RLPROF VAPs

    DOE Data Explorer

    Newsom, Rob

    1998-03-01

    The rlprof_merge VAP "merges" the photon counting and analog signals appropriately for each channel, creating an output data file that is very similar to the original raw data file format that the Raman lidar initially had.

  4. STAR CLUSTER FORMATION AND DESTRUCTION IN THE MERGING GALAXY NGC 3256

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mulia, A. J.; Chandar, R.; Whitmore, B. C.

    2016-07-20

    We use the Advanced Camera for Surveys on the Hubble Space Telescope to study the rich population of young massive star clusters in the main body of NGC 3256, a merging pair of galaxies with a high star formation rate (SFR) and SFR per unit area (Σ_SFR). These clusters have luminosity and mass functions that follow power laws, dN/dL ∝ L^α with α = -2.23 ± 0.07, and dN/dM ∝ M^β with β = -1.86 ± 0.34 for τ < 10 Myr clusters, similar to those found in more quiescent galaxies. The age distribution can be described by dN/dτ ∝ τ^γ, with γ ≈ -0.67 ± 0.08 for clusters younger than about a few hundred million years, with no obvious dependence on cluster mass. This is consistent with a picture where ∼80% of the clusters are disrupted each decade in time. We investigate the claim that galaxies with high Σ_SFR form clusters more efficiently than quiescent systems by determining the fraction of stars in bound clusters (Γ) and the CMF/SFR statistic (CMF is the cluster mass function) for NGC 3256 and comparing the results with those for other galaxies. We find that the CMF/SFR statistic for NGC 3256 agrees well with that found for galaxies with Σ_SFR and SFRs that are lower by 1-3 orders of magnitude, but that estimates for Γ are only robust when the same sets of assumptions are applied. Currently, Γ values available in the literature have used different sets of assumptions, making it more difficult to compare the results between galaxies.

  5. Star Cluster Formation and Destruction in the Merging Galaxy NGC 3256

    NASA Astrophysics Data System (ADS)

    Mulia, A. J.; Chandar, R.; Whitmore, B. C.

    2016-07-01

    We use the Advanced Camera for Surveys on the Hubble Space Telescope to study the rich population of young massive star clusters in the main body of NGC 3256, a merging pair of galaxies with a high star formation rate (SFR) and SFR per unit area (ΣSFR). These clusters have luminosity and mass functions that follow power laws, dN/dL ∝ L α with α = -2.23 ± 0.07, and dN/dM ∝ M β with β = -1.86 ± 0.34 for τ < 10 Myr clusters, similar to those found in more quiescent galaxies. The age distribution can be described by dN/dτ ∝ τ γ , with γ ≈ -0.67 ± 0.08 for clusters younger than about a few hundred million years, with no obvious dependence on cluster mass. This is consistent with a picture where ˜80% of the clusters are disrupted each decade in time. We investigate the claim that galaxies with high ΣSFR form clusters more efficiently than quiescent systems by determining the fraction of stars in bound clusters (Γ) and the CMF/SFR statistic (CMF is the cluster mass function) for NGC 3256 and comparing the results with those for other galaxies. We find that the CMF/SFR statistic for NGC 3256 agrees well with that found for galaxies with ΣSFR and SFRs that are lower by 1-3 orders of magnitude, but that estimates for Γ are only robust when the same sets of assumptions are applied. Currently, Γ values available in the literature have used different sets of assumptions, making it more difficult to compare the results between galaxies.
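    A power-law slope such as β in dN/dM ∝ M^β is often estimated above a completeness limit with the maximum-likelihood (Pareto) estimator. A short illustration of the principle, not the authors' fitting procedure:

```python
import numpy as np

def power_law_slope(masses, m_min):
    """Maximum-likelihood estimate of beta in dN/dM ∝ M**beta (beta < -1)
    for masses above a completeness limit m_min."""
    m = np.asarray([x for x in masses if x >= m_min], float)
    alpha = 1.0 + len(m) / np.sum(np.log(m / m_min))
    return -alpha

# Draw samples with beta = -2 by inverse-transform sampling and recover it
rng = np.random.default_rng(1)
u = rng.random(200000)
masses = 1.0 / (1.0 - u)             # p(M) ∝ M**-2 for M >= 1
print(power_law_slope(masses, 1.0))  # ≈ -2.0
```

    Unlike fitting a slope to a binned histogram, this estimator needs no binning choices, which is one reason consistent assumptions matter when comparing slopes between studies.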

  6. Merging constitutional and motional covalent dynamics in reversible imine formation and exchange processes.

    PubMed

    Kovaříček, Petr; Lehn, Jean-Marie

    2012-06-06

    The formation and exchange processes of imines of salicylaldehyde, pyridine-2-carboxaldehyde, and benzaldehyde have been studied, showing that the former has features of particular interest for dynamic covalent chemistry, displaying high efficiency and fast rates. The monoimines formed with aliphatic α,ω-diamines display an internal exchange process of self-transimination type, inducing a local motion of either "stepping-in-place" or "single-step" type by bond interchange, whose rate decreases rapidly with the distance of the terminal amino groups. Control of the speed of the process over a wide range may be achieved by substituents, solvent composition, and temperature. These monoimines also undergo intermolecular exchange, thus merging motional and constitutional covalent behavior within the same molecule. With polyamines, the monoimines formed execute internal motions that have been characterized by extensive one-dimensional, two-dimensional, and EXSY proton NMR studies. In particular, with linear polyamines, nondirectional displacement occurs by shifting of the aldehyde residue along the polyamine chain serving as molecular track. Imines thus behave as simple prototypes of systems displaying relative motions of molecular moieties, a subject of high current interest in the investigation of synthetic and biological molecular motors. The motional processes described are of dynamic covalent nature and take place without change in molecular constitution. They thus represent a category of dynamic covalent motions, resulting from reversible covalent bond formation and dissociation. They extend dynamic covalent chemistry into the area of molecular motions. A major further step will be to achieve control of directionality. 
The results reported here for imines open wide perspectives, together with other chemical groups, for the implementation of such features in multifunctional molecules toward the design of molecular devices presenting a complex combination of motional and constitutional dynamic behaviors.

  7. Optically based technique for producing merged spectra of water-leaving radiances from ocean color remote sensing.

    PubMed

    Mélin, Frédéric; Zibordi, Giuseppe

    2007-06-20

    An optically based technique is presented that produces merged spectra of normalized water-leaving radiances L(WN) by combining spectral data provided by independent satellite ocean color missions. The assessment of the merging technique is based on a four-year field data series collected by an autonomous above-water radiometer located on the Acqua Alta Oceanographic Tower in the Adriatic Sea. The uncertainties associated with the merged L(WN) obtained from the Sea-viewing Wide Field-of-view Sensor and the Moderate Resolution Imaging Spectroradiometer are consistent with the validation statistics of the individual sensor products. The merging including the third mission Medium Resolution Imaging Spectrometer is also addressed for a reduced ensemble of matchups.

  8. Target-based calibration method for multifields of view measurement using multiple stereo digital image correlation systems

    NASA Astrophysics Data System (ADS)

    Dong, Shuai; Yu, Shanshan; Huang, Zheng; Song, Shoutan; Shao, Xinxing; Kang, Xin; He, Xiaoyuan

    2017-12-01

    Multiple digital image correlation (DIC) systems can enlarge the measurement field without losing effective resolution in the area of interest (AOI). However, the results calculated by the sub-stereo DIC systems are, in most cases, located in their own local coordinate systems. To stitch the data obtained by each individual system, a data merging algorithm is presented in this paper for global measurement with multiple stereo DIC systems. A set of encoded targets is employed to assist the extrinsic calibration; their three-dimensional (3-D) coordinates are reconstructed via digital close-range photogrammetry. Combining the 3-D targets with precalibrated intrinsic parameters of all cameras significantly simplifies the extrinsic calibration. After calculation in the sub-stereo DIC systems, all data can be merged into a universal coordinate system based on the extrinsic calibration. Four stereo DIC systems were applied to a four-point bending experiment on a steel-reinforced concrete beam structure. The results demonstrate high accuracy for displacement data merging in the overlapping fields of view (FOVs) and show the feasibility of distributed-FOV measurement.
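    Once the extrinsic rotation R and translation t of each sub-system are known, merging reduces to mapping every local point set into the universal frame and stacking the results. A minimal sketch with illustrative names (solving for R and t from the encoded targets is assumed done):

```python
import numpy as np

def to_global(points_local, R, t):
    """Map 3-D points from a sub-system's local frame into the universal
    frame: p_global = R @ p_local + t."""
    return points_local @ R.T + t

def merge_systems(systems):
    """Stack per-system point sets after transforming each into the
    universal frame. `systems` is a list of (points, R, t) triples."""
    return np.vstack([to_global(p, R, t) for p, R, t in systems])

# System A defines the universal frame; system B is rotated 90 deg about z
# and shifted 10 units along x relative to it.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
sysA = (np.array([[1.0, 0.0, 0.0]]), np.eye(3), np.zeros(3))
sysB = (np.array([[1.0, 0.0, 0.0]]), Rz, np.array([10.0, 0.0, 0.0]))
merged = merge_systems([sysA, sysB])
```

    Accuracy of the merged displacement field in the overlapping FOVs then depends entirely on the quality of the extrinsic calibration.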

  9. Are merging black holes born from stellar collapse or previous mergers?

    NASA Astrophysics Data System (ADS)

    Gerosa, Davide; Berti, Emanuele

    2017-06-01

    Advanced LIGO detectors at Hanford and Livingston made two confirmed and one marginal detection of binary black holes during their first observing run. The first event, GW150914, was from the merger of two black holes much heavier than those whose masses have been estimated so far, indicating a formation scenario that might differ from "ordinary" stellar evolution. One possibility is that these heavy black holes resulted from a previous merger. When the progenitors of a black hole binary merger result from previous mergers, they should (on average) merge later, be more massive, and have spin magnitudes clustered around a dimensionless spin ˜0.7 . Here we ask the following question: can gravitational-wave observations determine whether merging black holes were born from the collapse of massive stars ("first generation"), rather than being the end product of earlier mergers ("second generation")? We construct simple, observationally motivated populations of black hole binaries, and we use Bayesian model selection to show that measurements of the masses, luminosity distance (or redshift), and "effective spin" of black hole binaries can indeed distinguish between these different formation scenarios.
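    The model-selection logic can be shown with a toy calculation on spin magnitude alone: merger remnants cluster near dimensionless spin 0.7, while first-generation spins are modelled as lower. All distribution parameters below are illustrative, not the populations used in the paper:

```python
import math

def gauss(x, mu, sigma):
    """Normal probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def log_bayes_factor(spins, mu1=0.2, s1=0.2, mu2=0.7, s2=0.1):
    """Toy log Bayes factor comparing a 'second generation' spin model
    N(mu2, s2), clustered near 0.7, against a 'first generation' model
    N(mu1, s1). Positive values favour the second-generation model."""
    lb = 0.0
    for chi in spins:
        lb += math.log(gauss(chi, mu2, s2)) - math.log(gauss(chi, mu1, s1))
    return lb

print(log_bayes_factor([0.68, 0.72, 0.65]) > 0)  # spins near 0.7 favour model 2
```

    The paper's analysis does the same comparison with full, observationally motivated population models over masses, redshift, and effective spin rather than a single parameter.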

  10. Image Fusion of CT and MR with Sparse Representation in NSST Domain

    PubMed Central

    Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan

    2017-01-01

    Multimodal image fusion techniques can integrate the information from different medical images to obtain an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed to the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. Then the high-frequency components are merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation- (SR-) based approach, and the dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method can provide better fusion results in terms of subjective quality and objective evaluation. PMID:29250134

  11. Image Fusion of CT and MR with Sparse Representation in NSST Domain.

    PubMed

    Qiu, Chenhui; Wang, Yuanyuan; Zhang, Huan; Xia, Shunren

    2017-01-01

    Multimodal image fusion techniques can integrate the information from different medical images to obtain an informative image that is more suitable for joint diagnosis, preoperative planning, intraoperative guidance, and interventional treatment. The fusion of CT images with different MR modalities is studied in this paper. Firstly, the CT and MR images are both transformed to the nonsubsampled shearlet transform (NSST) domain, yielding low-frequency and high-frequency components. Then the high-frequency components are merged using the absolute-maximum rule, while the low-frequency components are merged by a sparse representation- (SR-) based approach, and the dynamic group sparsity recovery (DGSR) algorithm is proposed to improve the performance of the SR-based approach. Finally, the fused image is obtained by performing the inverse NSST on the merged components. The proposed fusion method is tested on a number of clinical CT and MR images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method can provide better fusion results in terms of subjective quality and objective evaluation.
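    The absolute-maximum rule applied to the high-frequency components is straightforward to express with NumPy; the NSST decomposition itself and the SR-based low-frequency fusion are omitted here:

```python
import numpy as np

def fuse_abs_max(coeff_ct, coeff_mr):
    """Absolute-maximum fusion rule for high-frequency subband coefficients:
    at each position, keep the coefficient with the larger magnitude."""
    return np.where(np.abs(coeff_ct) >= np.abs(coeff_mr), coeff_ct, coeff_mr)

ct = np.array([[0.9, -0.1], [0.2, -1.5]])
mr = np.array([[-0.3, 0.8], [0.1, 0.4]])
print(fuse_abs_max(ct, mr))  # [[ 0.9  0.8] [ 0.2 -1.5]]
```

    The rationale is that large-magnitude high-frequency coefficients correspond to salient edges and textures, so taking the stronger response from either modality preserves detail from both.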

  12. Interfacial Stability of Spherically Converging Plasma Jets for Magnetized Target Fusion

    NASA Technical Reports Server (NTRS)

    Thio, Y. C. Francis; Cassibry, Jason; Wu, S. T.; Eskridge, Richard; Smith, James; Lee, Michael; Rodgers, Stephen L. (Technical Monitor)

    2000-01-01

    A fusion propulsion scheme has been proposed that makes use of the merging of a spherical distribution of plasma jets to dynamically form a gaseous liner to implode a magnetized target to produce the fusion reaction. In this paper, a study is made of the interfacial stability of the interaction of these jets. Specifically, the Orr-Sommerfeld equation is integrated to obtain the growth rate of a perturbation to the primary flow at the interface between the colliding jets. The results lead to an estimate on the tolerances on the relative flow velocities of the merging plasma jets to form a stable, imploding liner. The results show that the maximum temporal growth rate of the perturbed flow at the jet interface is very small in comparison with the time to full compression of the liner. These data suggest that, as far as the stability of the interface between the merging jets is concerned, the formation of the gaseous liner can withstand velocity variation of the order of 10% between the neighboring jets over the density and temperature ranges investigated.
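    For reference, the Orr-Sommerfeld equation integrated in such stability analyses has the standard textbook form below, written for a parallel base flow U(y), perturbation streamfunction amplitude φ(y), wavenumber k, wave speed c, and Reynolds number Re; the exact nondimensionalization used by the authors may differ:

```latex
(U - c)\left(\varphi'' - k^{2}\varphi\right) - U''\,\varphi
  = \frac{1}{\mathrm{i}\,k\,\mathrm{Re}}
    \left(\varphi'''' - 2k^{2}\varphi'' + k^{4}\varphi\right)
```

    The temporal growth rate of a perturbation is the imaginary part of kc, which is what sets the comparison against the liner's compression time.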

  13. Intermittent magnetic reconnection in TS-3 merging experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ono, Y.; Hayashi, Y.; Ii, T.

    2011-11-15

    Ejection of the current sheet, together with its plasma mass, causes impulsive and intermittent magnetic reconnection in the TS-3 spherical tokamak (ST) merging experiment. Under a high guide toroidal field, the sheet resistivity is almost classical because the sheet thickness is much greater than the ion gyroradius. Large inflow flux and low current-sheet resistivity result in flux and plasma pileup, followed by rapid growth of the current sheet. When the pileup exceeds a critical limit, the sheet is ejected mechanically from the squeezed X-point area. The reconnection (outflow) speed is slow during the flux/plasma pileup and fast during the ejection, suggesting that intermittent reconnection similar to that in solar flares increases the averaged reconnection speed. These transient effects enable the merging tokamaks to achieve fast reconnection as well as high-power reconnection heating, even when their current-sheet resistivity is low under a high guide field.

  14. Accurate Grid-based Clustering Algorithm with Diagonal Grid Searching and Merging

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Ye, Chengcheng; Zhu, Erzhou

    2017-09-01

    With the advent of big data, data mining technology has attracted increasing attention. As an important data analysis method, grid clustering is fast but relatively less accurate. This paper presents an improved clustering algorithm that combines grid and density parameters. The algorithm first divides the data space into valid and invalid meshes using the grid parameters. Second, starting from the first cell on the diagonal of the grid, the algorithm proceeds in the "horizontal right, vertical down" direction to merge the valid meshes. Furthermore, through boundary grid processing, invalid grids are searched and merged when the adjacent left, upper, and diagonal-direction grids are all valid. In this way, the clustering accuracy is improved. Experimental results show that the proposed algorithm is accurate and relatively fast when compared with several popularly used algorithms.
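The valid-mesh merging idea can be sketched as follows. This is a simplified version that merges density-valid grid cells by flood fill over all eight neighbors rather than reproducing the paper's exact diagonal traversal and boundary-grid rules, and the parameter names are illustrative:

```python
from collections import deque

def grid_cluster(points, cell_size, min_pts):
    """Grid clustering sketch: bin points into cells, keep 'valid'
    cells containing at least min_pts points, then merge valid cells
    that touch horizontally, vertically, or diagonally."""
    cells = {}
    for x, y in points:
        key = (int(x // cell_size), int(y // cell_size))
        cells.setdefault(key, []).append((x, y))
    valid = {k for k, pts in cells.items() if len(pts) >= min_pts}

    labels, next_label = {}, 0
    for start in sorted(valid):
        if start in labels:
            continue
        queue = deque([start])          # flood-fill one connected group
        labels[start] = next_label
        while queue:
            cx, cy = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in valid and nb not in labels:
                        labels[nb] = next_label
                        queue.append(nb)
        next_label += 1
    return labels  # maps each valid cell -> cluster id

pts = [(0.1, 0.1), (0.2, 0.3), (0.9, 0.8), (5.0, 5.0), (5.1, 5.2)]
clusters = grid_cluster(pts, cell_size=1.0, min_pts=2)
```

Because points are only binned once and merging operates on cells rather than points, the cost is dominated by the initial pass over the data, which is the source of the speed advantage the abstract mentions.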

  15. Wavelength-adaptive dehazing using histogram merging-based classification for UAV images.

    PubMed

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-03-19

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results.

  16. Experimental characterization of a transition from collisionless to collisional interaction between head-on-merging supersonic plasma jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moser, Auna L., E-mail: mosera@fusion.gat.com; Hsu, Scott C., E-mail: scotthsu@lanl.gov

    We present results from experiments on the head-on merging of two supersonic plasma jets in an initially collisionless regime for the counter-streaming ions. The plasma jets are of either an argon/impurity or a hydrogen/impurity mixture and are produced by pulsed-power-driven railguns. Based on time- and space-resolved fast-imaging, multi-chord interferometry, and survey-spectroscopy measurements of the overlap region between the merging jets, we observe that the jets initially interpenetrate, consistent with the calculated inter-jet ion collision lengths, which are long. As the jets interpenetrate, a rising mean charge state causes a rapid decrease in the inter-jet ion collision length. Finally, the interaction becomes collisional and the jets stagnate, eventually producing structures consistent with collisional shocks. These experimental observations can aid in the validation of plasma collisionality and ionization models for plasmas with complex equations of state.

  17. Triple collocation based merging of satellite soil moisture retrievals

    USDA-ARS?s Scientific Manuscript database

    We propose a method for merging soil moisture retrievals from spaceborne active and passive microwave instruments, based on weighted averaging that takes into account the error characteristics of the individual data sets. The merging scheme is parameterized using error variance estimates obtained from u...
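The weighted-averaging scheme described above can be sketched as follows; the error variances are illustrative placeholders standing in for the triple-collocation estimates that the (truncated) abstract refers to:

```python
import numpy as np

def merge_retrievals(estimates, error_vars):
    """Merge collocated retrievals by weighting each data set by the
    inverse of its estimated error variance, with weights normalized
    to sum to one (a minimum-variance linear combination)."""
    w = 1.0 / np.asarray(error_vars, dtype=float)
    w /= w.sum()
    return np.average(np.asarray(estimates, dtype=float), axis=0, weights=w)

# Two soil-moisture time series (active, passive) with assumed error variances
active = np.array([0.20, 0.25, 0.30])
passive = np.array([0.24, 0.21, 0.28])
merged = merge_retrievals([active, passive], error_vars=[0.04, 0.01])
```

With these toy variances the passive retrieval receives weight 0.8 and the active retrieval 0.2, so the merged series tracks the lower-error data set more closely.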

  18. Hall effect on a Merging Formation Process of a Field-Reversed Configuration

    NASA Astrophysics Data System (ADS)

    Kaminou, Yasuhiro; Guo, Xuehan; Inomoto, Michiaki; Ono, Yasushi; Horiuchi, Ritoku

    2015-11-01

    Counter-helicity spheromak merging is one method of forming a Field-Reversed Configuration (FRC). In counter-helicity spheromak merging, two spheromaks with opposing toroidal fields merge through magnetic reconnection events and relax into an FRC, which has little or no toroidal field. This process involves both magnetic reconnection and relaxation phenomena, and the Hall effect plays an essential role because the X-point of the reconnection and the O-point of the FRC carry little or no magnetic field. However, the global and local consequences of the Hall effect on counter-helicity spheromak merging have not been elucidated. In this poster, we present 2D/3D Hall-MHD simulations and experiments of counter-helicity spheromak merging. We find that the Hall effect enhances the reconnection rate and reduces the generation of toroidal sheared flow. The resulting suppression of the ``slingshot effect'' affects the relaxation process. We will discuss details in the poster.

  19. Merging Photoredox and Nickel Catalysis: The Direct Synthesis of Ketones via the Decarboxylative Arylation of α-Oxo Acids**

    PubMed Central

    Chu, Lingling; Lipshultz, Jeffrey M.

    2015-01-01

    The direct decarboxylative arylation of α-oxo acids has been achieved via synergistic visible-light-mediated photoredox and nickel catalysis. This method offers rapid entry to aryl and alkyl ketone architectures from simple α-oxo acid precursors via an acyl radical intermediate. Significant substrate scope is observed with respect to both the oxo acid and arene coupling partners. This mild decarboxylative arylation can also be utilized to efficiently access medicinal agents, as demonstrated by the rapid synthesis of fenofibrate. PMID:26014029

  20. In-situ databases and comparison of ESA Ocean Colour Climate Change Initiative (OC-CCI) products with precursor data, towards an integrated approach for ocean colour validation and climate studies

    NASA Astrophysics Data System (ADS)

    Brotas, Vanda; Valente, André; Couto, André B.; Grant, Mike; Chuprin, Andrei; Jackson, Thomas; Groom, Steve; Sathyendranath, Shubha

    2014-05-01

    Ocean colour (OC) is an oceanic Essential Climate Variable (ECV) used by climate modellers and researchers. The European Space Agency (ESA) Climate Change Initiative (CCI) project is ESA's response to the need for climate-quality satellite data, with the goal of providing stable, long-term, satellite-based ECV data products. The ESA Ocean Colour CCI focuses on the production of the Ocean Colour ECV, using remote-sensing reflectances to derive inherent optical properties and chlorophyll-a concentration from ESA's MERIS (2002-2012) and NASA's SeaWiFS (1997-2010) and MODIS (2002-2012) sensor archives. This work presents an integrated approach: setting up a global database of in situ measurements and inter-comparing OC-CCI products with precursor datasets. The availability of in situ databases is fundamental for the validation of satellite-derived ocean colour products. A globally distributed in situ database was assembled from several pre-existing datasets, with data spanning 1997 to 2012. It includes in situ measurements of remote-sensing reflectances, chlorophyll-a concentration, inherent optical properties and the diffuse attenuation coefficient. The database is composed of observations from the following datasets: NOMAD, SeaBASS, MERMAID, AERONET-OC, BOUSSOLE and HOTS. The result was a merged dataset tuned for the validation of satellite-derived ocean colour products. The aim was to gather, homogenize and merge a large body of high-quality bio-optical marine in situ data, since using all datasets in a single validation exercise increases the number of matchups and enhances the representativeness of different marine regimes. An inter-comparison between the OC-CCI chlorophyll-a product and precursor satellite datasets was carried out with both single-mission and merged-mission products. 
The single-mission datasets considered were SeaWiFS, MODIS-Aqua and MERIS; the merged-mission datasets were obtained from GlobColour (GC) as well as the Making Earth Science Data Records for Use in Research Environments (MEaSUREs) project. The OC-CCI product was found to be most similar to the SeaWiFS record and, in general, more similar to records derived from single missions than to those from merged-mission initiatives. The results suggest that the CCI product is a more consistent dataset than other available merged-mission products. In conclusion, climate-related science requires long-term data records to provide robust results, and the OC-CCI product proves to be a worthy data record for climate research, as it combines multi-sensor OC observations to provide a >15-year global error-characterized record.

  1. Tropical Rainfall Distributions Determined Using TRMM Combined with other Satellite and Raingauge Information

    NASA Technical Reports Server (NTRS)

    Adler, Robert F.; Huffman, George J.; Bolvin, David T.; Curtis, Scott; Nelkin, Eric J.

    1999-01-01

    A technique is described that uses Tropical Rainfall Measuring Mission (TRMM) combined radar/radiometer information to adjust geosynchronous infrared satellite data (the TRMM Adjusted GOES Precipitation Index, or TRMM AGPI). The AGPI is then merged with rain gauge information (mostly over land; the TRMM merged product) to provide fine-scale (1 deg latitude/longitude) pentad and monthly analyses. The TRMM merged estimates are 10% higher than those from the Global Precipitation Climatology Project (GPCP) when integrated over the tropical oceans (37 deg N-S) for 1998, with 20% differences noted in the most heavily raining areas. In the dry subtropics the TRMM values are smaller than the GPCP estimates. The TRMM merged-product tropical-mean estimates for 1998 are 3.3 mm/day over ocean and 3.1 mm/day over land and ocean combined. Regional differences are noted between the western and eastern Pacific Ocean maxima when TRMM and GPCP are compared. In the eastern Pacific rain maximum the TRMM and GPCP mean values are nearly equal, very different from the other tropical rainy areas, where TRMM merged-product estimates are higher. This regional difference may indicate that TRMM is better at taking into account the vertical structure of the rain systems and the difference in structure between the western and the shallower eastern Pacific convection. Comparisons of these TRMM merged-analysis estimates with surface data sets show varied results; the bias is near zero when compared with western Pacific Ocean atoll rain gauge data, but significantly positive compared with Kwajalein radar estimates (adjusted by rain gauges). Over land the TRMM estimates also show a significant positive bias. The inclusion of gauge information in the final merged product significantly reduces the bias over land, as expected. 
The monthly precipitation patterns produced by the TRMM merged data process clearly show the evolution of the ENSO tropical precipitation pattern from early 1998 (El Nino) through early 1999 (La Nina) and beyond. The El Nino minus La Nina difference map shows the eastern Pacific maximum, the maritime continent minima and other tropical and mid-latitude features. The differences in the Pacific are very similar to those detected by the GPCP analyses. However, summing the El Nino minus La Nina differences over the global tropical oceans yields divergent answers from TRMM, GPCP and other estimates. This emphasizes the need for additional validation and analysis before it is feasible to understand the relations between global precipitation anomalies and Pacific Ocean ENSO temperature changes.

  2. Mining CANDELS for Tidal Features to Constrain Major Merging During Cosmic Noon

    NASA Astrophysics Data System (ADS)

    McIntosh, Daniel H.; Mantha, Kameswara; Ciaschi, Cody; Evan, Rubyet A.; Fries, Logan B.; Landry, Luther; Thompson, Scott E.; Snyder, Gregory; Guo, Yicheng; Ceverino, Daniel; Häußler, Boris; Primack, Joel; Simons, Raymond C.; Zheng, Xianzhong; Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (CANDELS) Team

    2018-01-01

    The role of major merging in the rapid buildup and development of massive galaxies at z>1 remains an open question. New theories and observations suggest that non-merging processes like violent disk instabilities may be more vital than previously thought at assembling bulges, producing clumps, and inducing morphological disturbances that may be misinterpreted as the product of major merging. We will present initial results from a systematic search for hallmark tidal indicators of major merging in a complete sample of nearly 6000 massive z>1 galaxies from CANDELS (Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey), the premier HST/WFC3 Treasury program. We have visually inspected published GALFIT F160W residual (image minus model) maps and produced a comprehensive new catalog of Sersic residual characteristics based on a variety of natural features and poor-fit artifacts. Using this catalog, we find that the frequency of galaxies with tidal signatures is very small in CANDELS data. Accounting for the brief time scale associated with faint transient tidal features, our preliminary finding indicates that merger fractions derived from the CANDELS morphological classification efforts are substantially overestimated. We are using the database of residual classifications as a baseline to (1) produce improved multi-component residual maps using GALFIT_M, (2) automatically extract and quantify plausible tidal indicators and substructures (clumps vs. multiple nuclei), (3) develop a new deep-learning classification pipeline to robustly identify merger indicators in imaging data, and (4) inform the systematic analyses of synthetic mock (CANDELized) images from zoom-in hydrodynamic simulations to thoroughly quantify the impacts of cosmological dimming and calibrate the observability timescale of tidal feature detections. 
Our study will ultimately yield novel constraints on merger rates at z>1 and a definitive census of massive high-noon galaxies with tidal and double-nuclei merging signatures in rest-frame optical HST imaging.

  3. Genomic evaluation of regional dairy cattle breeds in single-breed and multibreed contexts.

    PubMed

    Jónás, D; Ducrocq, V; Fritz, S; Baur, A; Sanchez, M-P; Croiseau, P

    2017-02-01

    An important prerequisite for high prediction accuracy in genomic prediction is the availability of a large training population, which allows accurate marker effect estimation. This requirement is not fulfilled in the case of regional breeds with a limited number of breeding animals. We assessed the efficiency of the current French routine genomic evaluation procedure in four regional breeds (Abondance, Tarentaise, French Simmental and Vosgienne), as well as the potential benefit when the training populations, consisting of males and females of these breeds, are merged to form a multibreed training population. Genomic evaluation was 5-11% more accurate than a pedigree-based BLUP in three of the four breeds, while the numerically smallest breed showed a < 1% increase in accuracy. Multibreed genomic evaluation was beneficial for two breeds (Abondance and French Simmental), with maximum gains of 5 and 8% in the correlation coefficients between yield deviations and genomic estimated breeding values, when compared to the single-breed genomic evaluation results. Inflation of the genomic evaluation of young candidates was also reduced. Our results indicate that genomic selection can be effective in regional breeds as well. Here, we provide empirical evidence that genetic distance between breeds is only one of the factors affecting the efficiency of multibreed genomic evaluation. © 2016 Blackwell Verlag GmbH.

  4. On transform coding tools under development for VP10

    NASA Astrophysics Data System (ADS)

    Parker, Sarah; Chen, Yue; Han, Jingning; Liu, Zoe; Mukherjee, Debargha; Su, Hui; Wang, Yongzhe; Bankoski, Jim; Li, Shunyao

    2016-09-01

    Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube, and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, VP10, that achieves at least a generational improvement in coding efficiency over VP9. Starting from VP9, a set of new experimental coding tools have already been added to VP10 to achieve decent coding gains. Subsequently, Google joined a consortium of major tech companies called the Alliance for Open Media to jointly develop a new codec, AV1. As a result, the VP10 effort is largely expected to merge with AV1. In this paper, we focus primarily on new tools in VP10 that improve coding of the prediction residue using transform coding techniques. Specifically, we describe tools that increase the flexibility of available transforms, allowing the codec to handle a more diverse range of residue structures. Results are presented on a standard test set.

  5. msBiodat analysis tool, big data analysis for high-throughput experiments.

    PubMed

    Muñoz-Torres, Pau M; Rokć, Filip; Belužic, Robert; Grbeša, Ivana; Vugrek, Oliver

    2016-01-01

    Mass spectrometry (MS) encompasses a group of high-throughput techniques used to increase knowledge about biomolecules. These techniques produce a large amount of data, typically presented as a list of hundreds or thousands of proteins. Filtering those data efficiently is the first step in extracting biologically relevant information. The filtering can be enriched by merging previous data with data obtained from public databases, resulting in an accurate list of proteins that meet the predetermined conditions. In this article we present the msBiodat Analysis Tool, a web-based application intended to bring proteomics closer to big data analysis. With this tool, researchers can easily select the most relevant information from their MS experiments using an easy-to-use web interface. An interesting feature of the msBiodat Analysis Tool is the possibility of selecting proteins by their Gene Ontology annotation using Gene ID, Ensembl or UniProt codes. The msBiodat Analysis Tool is a web-based application that allows researchers with any level of programming experience to benefit from efficient database querying. Its versatility and user-friendly interface make it easy to perform fast and accurate data screening using complex queries. Once the analysis is finished, the result is delivered by e-mail. The msBiodat Analysis Tool is freely available at http://msbiodata.irb.hr.

  6. Multistage quantum absorption heat pumps.

    PubMed

    Correa, Luis A

    2014-04-01

    It is well known that heat pumps, while being all limited by the same basic thermodynamic laws, may find realization on systems as "small" and "quantum" as a three-level maser. In order to quantitatively assess how the performance of these devices scales with their size, we design generalized N-dimensional ideal heat pumps by merging N-2 elementary three-level stages. We set them to operate in the absorption chiller mode between given hot and cold baths and study their maximum achievable cooling power and the corresponding efficiency as a function of N. While the efficiency at maximum power is roughly size-independent, the power itself slightly increases with the dimension, quickly saturating to a constant. Thus, interestingly, scaling up autonomous quantum heat pumps does not render a significant enhancement beyond the optimal double-stage configuration.

  7. A cluster merging method for time series microarray with production values.

    PubMed

    Chira, Camelia; Sedano, Javier; Camara, Monica; Prieto, Carlos; Villar, Jose R; Corchado, Emilio

    2014-09-01

    A challenging task in time-course microarray data analysis is to cluster genes meaningfully by combining the information provided by multiple replicates covering the same key time points. This paper proposes a novel cluster merging method to accomplish this goal, obtaining groups of highly correlated genes. The main idea behind the proposed method is to generate a clustering starting from groups created from the individual temporal series (representing different biological replicates measured at the same time points) and to merge them by taking into account the frequency with which two genes are assembled together in each clustering. The gene groups at the level of individual time series are generated using several shape-based clustering methods. This study is focused on a real-world time series microarray task with the aim of finding co-expressed genes related to the production and growth of a certain bacterium. The shape-based clustering methods used at the level of individual time series rely on identifying similar gene expression patterns over time which, in some models, are further matched to the pattern of production/growth. The proposed cluster merging method is able to produce meaningful gene groups which can be naturally ranked by the level of agreement on the clustering among individual time series. The list of clusters and genes is further sorted based on the information correlation coefficient and new problem-specific relevance measures. Computational experiments and results of the cluster merging method are analyzed from a biological perspective and further compared with the clustering generated from the mean value of the time series and the same shape-based algorithm.
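The frequency-based merging idea, grouping genes by how often replicate clusterings assign them to the same cluster, can be sketched as follows (a simplified consensus scheme for illustration, not the authors' exact method; gene names and the threshold are hypothetical):

```python
from itertools import combinations
from collections import defaultdict

def merge_clusterings(clusterings, threshold=0.5):
    """Consensus sketch: count how often each gene pair is assigned to
    the same cluster across replicate clusterings, link pairs whose
    co-clustering frequency meets the threshold, and return the
    connected components as merged gene groups."""
    genes = sorted(clusterings[0])
    counts = defaultdict(int)
    for labels in clusterings:
        for g1, g2 in combinations(genes, 2):
            if labels[g1] == labels[g2]:
                counts[(g1, g2)] += 1

    # adjacency from frequently co-clustered pairs
    adj = defaultdict(set)
    for (g1, g2), c in counts.items():
        if c / len(clusterings) >= threshold:
            adj[g1].add(g2)
            adj[g2].add(g1)

    groups, seen = [], set()
    for g in genes:
        if g in seen:
            continue
        stack, comp = [g], set()
        while stack:              # collect one connected component
            cur = stack.pop()
            if cur in comp:
                continue
            comp.add(cur)
            stack.extend(adj[cur] - comp)
        seen |= comp
        groups.append(sorted(comp))
    return groups

# Three replicate clusterings of four genes (cluster labels per gene)
reps = [
    {"gA": 0, "gB": 0, "gC": 1, "gD": 1},
    {"gA": 0, "gB": 0, "gC": 0, "gD": 1},
    {"gA": 1, "gB": 1, "gC": 0, "gD": 0},
]
groups = merge_clusterings(reps, threshold=2 / 3)
```

The co-clustering frequency also provides a natural ranking of the merged groups, mirroring the abstract's point that clusters can be ordered by the level of agreement among individual time series.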

  8. Oscillations Excited by Plasmoids Formed During Magnetic Reconnection in a Vertical Gravitationally Stratified Current Sheet

    NASA Astrophysics Data System (ADS)

    Jelínek, P.; Karlický, M.; Van Doorsselaere, T.; Bárta, M.

    2017-10-01

    Using the FLASH code, which solves the full set of the 2D non-ideal (resistive) time-dependent magnetohydrodynamic (MHD) equations, we study processes during magnetic reconnection in a vertical gravitationally stratified current sheet. We show that during these processes, which correspond to processes in solar flares, plasmoids are formed due to the tearing mode instability of the current sheet. These plasmoids move upward or downward along the vertical current sheet and some of them merge into larger plasmoids. We study the density and temperature structure of these plasmoids and their time evolution in detail. We found that during the merging of two plasmoids, the resulting larger plasmoid starts to oscillate with a period largely determined by L/c_A, where L is the size of the plasmoid and c_A is the Alfvén speed in the lateral parts of the plasmoid. In our model, L/c_A evaluates to ~25 s. Furthermore, the plasmoid moving downward merges with the underlying flare arcade, which causes oscillations of the arcade. In our model, the period of this arcade oscillation is ~35 s, which also corresponds to L/c_A, but here L means the length of the loop and c_A is the average Alfvén speed in the loop. We also show that the merging process of the plasmoid with the flare arcade is a complex process, as evidenced by the complex density and temperature structures of the oscillating arcade. Moreover, all these processes are associated with magnetoacoustic waves produced by the motion and merging of plasmoids.

  9. Comparing Approaches to Deal With Non-Gaussianity of Rainfall Data in Kriging-Based Radar-Gauge Rainfall Merging

    NASA Astrophysics Data System (ADS)

    Cecinati, F.; Wani, O.; Rico-Ramirez, M. A.

    2017-11-01

    Merging radar and rain gauge rainfall data is a technique used to improve the quality of spatial rainfall estimates, and in particular Kriging with External Drift (KED) is a very effective radar-rain gauge merging technique. However, kriging interpolation assumes Gaussianity of the process. Rainfall has a strongly skewed, positive probability distribution, characterized by a discontinuity due to intermittency. In KED, rainfall residuals are used, implicitly calculated as the difference between rain gauge data and a linear function of the radar estimates; these residuals are non-Gaussian as well. The aim of this work is to evaluate the impact of applying KED to non-Gaussian rainfall residuals and to assess the best techniques to improve Gaussianity. We compare Box-Cox transformations with λ parameters equal to 0.5, 0.25, and 0.1, Box-Cox with time-variant optimization of λ, normal score transformation, and a singularity analysis technique. The results suggest that Box-Cox with λ = 0.1 and the singularity analysis are not suitable for KED. Normal score transformation and Box-Cox with optimized λ, or λ = 0.25, produce satisfactory results in terms of Gaussianity of the residuals, probability distribution of the merged rainfall products, and rainfall estimate quality, when validated through cross-validation. However, it is observed that Box-Cox transformations are strongly dependent on the temporal and spatial variability of rainfall and on the units used for the rainfall intensity. Overall, applying transformations results in a quantitative improvement of the rainfall estimates only if the correct transformation for the specific data set is used.
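One of the compared transformations, Box-Cox with a fixed λ, can be sketched as follows: the forward transform is applied to positive rainfall values before kriging and the inverse transform is applied afterwards to return to the rainfall scale (a minimal sketch with λ = 0.25, one of the values tested; the sample values are illustrative):

```python
import numpy as np

def boxcox(x, lam):
    """Box-Cox transform for positive data: (x**lam - 1)/lam
    (log(x) in the limit lam -> 0), reducing the strong positive
    skew typical of rainfall intensities."""
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return np.log(x)
    return (x ** lam - 1.0) / lam

def inv_boxcox(y, lam):
    """Back-transform kriged values to the original rainfall scale."""
    y = np.asarray(y, dtype=float)
    if lam == 0:
        return np.exp(y)
    return (lam * y + 1.0) ** (1.0 / lam)

rain = np.array([0.1, 0.5, 2.0, 15.0, 40.0])  # mm/h, strongly skewed
z = boxcox(rain, lam=0.25)                    # kriging would act on z
back = inv_boxcox(z, lam=0.25)                # recovers the mm/h scale
```

Because (x^λ - 1)/λ depends on the magnitude of x, the same λ gives different degrees of skew reduction for rainfall recorded in, say, mm/h versus mm per 5 minutes, which is one reason the abstract stresses the dependence on units and rainfall variability.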

  10. Design of the data quality control system for the ALICE O2

    NASA Astrophysics Data System (ADS)

    von Haller, Barthélémy; Lesiak, Patryk; Otwinowski, Jacek; ALICE Collaboration

    2017-10-01

    ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). A major upgrade of the experiment is planned for 2019-20. In order to cope with a 100 times higher data rate and with the continuous readout of the Time Projection Chamber (TPC), it is necessary to upgrade the Online and Offline computing to a new common system called O2. The online Data Quality Monitoring (DQM) and the offline Quality Assurance (QA) are critical aspects of the data acquisition and reconstruction software chains. The former intends to provide shifters with precise and complete information in order to quickly identify and overcome problems, while the latter aims at providing good-quality data for physics analyses. DQM and QA typically involve the gathering of data, its distributed analysis by user-defined algorithms, the merging of the resulting objects and their visualization. This paper discusses the architecture and the design of the data Quality Control system that regroups the DQM and QA. In addition, it presents the main design requirements and early results of a working prototype. A special focus is put on the merging of monitoring objects generated by the QC tasks. The merging is a crucial and challenging step of the O2 system, not only for QC but also for the calibration. Various scenarios and implementations have been made and large-scale tests carried out. This document presents the final results of this extensive work on merging. We conclude with the plan of work for the coming years that will bring the QC to production by 2019.

  11. Responsible Student Affairs Practice: Merging Student Development and Quality Management.

    ERIC Educational Resources Information Center

    Whitner, Phillip A.; And Others

    The merging of Total Quality Management (TQM) and Involvement Theory into a managerial philosophy can assist student affairs professionals with an approach for conducting work that improves student affairs practice. When these philosophies are merged or integrated, accountability can easily be obtained because the base philosophies of qualitative research, TQM, and…

  12. Tool for Merging Proposals Into DSN Schedules

    NASA Technical Reports Server (NTRS)

    Khanampornpan, Teerapat; Kwok, John; Call, Jared

    2008-01-01

    A Practical Extraction and Reporting Language (Perl) script called merge7da has been developed to help a project scheduler in NASA's Deep Space Network (DSN) determine whether a proposal for use of the DSN could create a conflict with the current DSN schedule. Prior to the development of merge7da, there was no way to quickly identify potential schedule conflicts: it was necessary to submit a proposal and wait a day or two for a response from a DSN scheduling facility. By using merge7da to detect and eliminate potential schedule conflicts before submitting a proposal, a project scheduler saves time and gains assurance that the proposal will probably be accepted. merge7da accepts two input files, one of which contains the current DSN schedule and is in a DSN-standard format called '7da'. The other input file contains the proposal and is in another DSN-standard format called 'C1/C2'. merge7da processes the two input files to produce a merged 7da-format output file that represents the DSN schedule as it would be if the proposal were adopted. This 7da output file can be loaded into various DSN scheduling software tools now in use.

  13. Merged ozone profiles from four MIPAS processors

    NASA Astrophysics Data System (ADS)

    Laeng, Alexandra; von Clarmann, Thomas; Stiller, Gabriele; Dinelli, Bianca Maria; Dudhia, Anu; Raspollini, Piera; Glatthor, Norbert; Grabowski, Udo; Sofieva, Viktoria; Froidevaux, Lucien; Walker, Kaley A.; Zehner, Claus

    2017-04-01

    The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) was an infrared (IR) limb emission spectrometer on the Envisat platform. Currently, there are four MIPAS ozone data products, including the operational Level-2 ozone product processed at ESA, with the scientific prototype processor being operated at IFAC Florence, and three independent research products developed by the Istituto di Fisica Applicata Nello Carrara (ISAC-CNR)/University of Bologna, Oxford University, and the Karlsruhe Institute of Technology-Institute of Meteorology and Climate Research/Instituto de Astrofísica de Andalucía (KIT-IMK/IAA). Here we present a dataset of ozone vertical profiles obtained by merging ozone retrievals from four independent Level-2 MIPAS processors. We also discuss the advantages and the shortcomings of this merged product. As the four processors retrieve ozone in different parts of the spectra (microwindows), the source measurements can be considered as nearly independent with respect to measurement noise. Hence, the information content of the merged product is greater and the precision is better than those of any parent (source) dataset. The merging is performed on a profile-by-profile basis. Parent ozone profiles are weighted based on the corresponding error covariance matrices; the error correlations between different profile levels are taken into account. The intercorrelations between the processors' errors are evaluated statistically and are used in the merging. The height range of the merged product is 20-55 km, and error covariance matrices are provided as diagnostics. Validation of the merged dataset is performed by comparison with ozone profiles from ACE-FTS (Atmospheric Chemistry Experiment-Fourier Transform Spectrometer) and MLS (Microwave Limb Sounder). 
Even though the merging is not supposed to remove the biases of the parent datasets, around the ozone volume mixing ratio peak the merged product is found to have a smaller (up to 0.1 ppmv) bias with respect to ACE-FTS than any of the parent datasets. The bias with respect to MLS is of the order of 0.15 ppmv at 20-30 km height and up to 0.45 ppmv at higher altitudes. The agreement of the merged MIPAS dataset with ACE-FTS is better than that with MLS. This is, however, the case for all parent processors as well.
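The precision-weighted merging described above can be sketched as follows: each parent profile is weighted by the inverse of its error covariance matrix, so the merged precision is the sum of the parents' precisions. This is a minimal sketch assuming independent parent errors; the actual processing also accounts for inter-processor error correlations, and the profile values below are hypothetical.

```python
import numpy as np

def merge_profiles(profiles, covariances):
    """Merge profiles by weighting each with the inverse of its error
    covariance matrix (maximum-likelihood combination)."""
    total_precision = np.zeros_like(covariances[0])
    weighted_sum = np.zeros_like(profiles[0])
    for x, cov in zip(profiles, covariances):
        p = np.linalg.inv(cov)                # precision = inverse covariance
        total_precision += p
        weighted_sum += p @ x
    merged_cov = np.linalg.inv(total_precision)   # provided as a diagnostic
    return merged_cov @ weighted_sum, merged_cov

# Two hypothetical 3-level ozone profiles (ppmv) with diagonal covariances
x1, c1 = np.array([5.0, 7.0, 6.0]), np.diag([0.04, 0.09, 0.04])
x2, c2 = np.array([5.2, 6.8, 6.2]), np.diag([0.09, 0.04, 0.09])
merged, cov = merge_profiles([x1, x2], [c1, c2])
```

Because the merged precision is the sum of the parents' precisions, the merged variance at every level is smaller than that of any parent, which is the sense in which the merged product is more precise than any source dataset.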

  14. A statistical study of merging galaxies: Theory and observations

    NASA Technical Reports Server (NTRS)

    Chatterjee, Tapan K.

    1990-01-01

A study of the expected frequency of merging galaxies is conducted, using the impulsive approximation. Results indicate that if we consider mergers involving galaxy pairs without halos in a single crossing time or orbital period, the expected frequency of mergers is two orders of magnitude below the observed value for the present epoch. If we consider mergers involving several orbital periods or crossing times, the expected frequency goes up by an order of magnitude. Preliminary calculations indicate that if we consider galaxy mergers between pairs with massive halos, the merger is greatly hastened.

  15. The correlation of the depth of anesthesia and postoperative cognitive impairment: A meta-analysis based on randomized controlled trials.

    PubMed

    Lu, Xing; Jin, Xin; Yang, Suwei; Xia, Yanfei

    2018-03-01

To comprehensively evaluate the associations between the depth of anesthesia and postoperative delirium (POD) or postoperative cognitive dysfunction (POCD). Quality assessment of the included studies was conducted using the Cochrane evaluation system. We searched the Cochrane Library, Embase and PubMed databases without language restriction; the search covered studies published up to August 2017. According to the PRISMA guideline, the results associated with POCD and POD separately were compared between low and high bispectral index (BIS) groups under a fixed effects model or random effects model. In addition, the risk ratio (RR) and 95% confidence intervals (95% CIs) were utilized as the effect sizes for merging the results. Furthermore, sensitivity analysis was performed to evaluate the stability of the results. Publication bias was assessed for the included studies using Egger's test. In total, 4 high-quality studies were selected for this meta-analysis. The merged results for POCD showed no significant difference between low and high BIS groups (RR (95% CI)=0.84 (0.21, 3.45), P>0.05). Sensitivity analysis showed that the merged results for POCD were not stable (RR (95% CI)=0.41 (0.17, 0.99)-1.88 (1.09, 3.22), P=0.046). Additionally, no significant publication bias for POCD was found (P=0.385). There was no significant correlation between the depth of anesthesia and POCD. Copyright © 2017 Elsevier Inc. All rights reserved.
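The fixed-effects merging step described above can be sketched as follows: each study's log risk ratio is weighted by its inverse variance, with the standard error recovered from the reported 95% CI. The numbers in the usage example are hypothetical, not the studies pooled in this meta-analysis.

```python
import math

def pooled_rr_fixed(rrs, cis):
    """Pool risk ratios under a fixed-effects (inverse-variance) model.
    The standard error of each log(RR) is recovered from its 95% CI:
    se = (ln(upper) - ln(lower)) / (2 * 1.96)."""
    num = den = 0.0
    for rr, (lo, hi) in zip(rrs, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2                     # inverse-variance weight
        num += w * math.log(rr)
        den += w
    log_pooled, se_pooled = num / den, math.sqrt(1.0 / den)
    return (math.exp(log_pooled),
            (math.exp(log_pooled - 1.96 * se_pooled),
             math.exp(log_pooled + 1.96 * se_pooled)))

# Hypothetical example: two studies reporting the same effect
rr, ci = pooled_rr_fixed([2.0, 2.0], [(1.0, 4.0), (1.0, 4.0)])
```

Pooling two identical studies leaves the point estimate unchanged but narrows the confidence interval, since the inverse-variance weights add.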

  16. Design and Evaluation of the Terminal Area Precision Scheduling and Spacing System

    NASA Technical Reports Server (NTRS)

    Swenson, Harry N.; Thipphavong, Jane; Sadovsky, Alex; Chen, Liang; Sullivan, Chris; Martin, Lynne

    2011-01-01

This paper describes the design, development and results from a high fidelity human-in-the-loop simulation of an integrated set of trajectory-based automation tools providing precision scheduling, sequencing and controller merging and spacing functions. These integrated functions are combined into a system called the Terminal Area Precision Scheduling and Spacing (TAPSS) system. It is a strategic and tactical planning tool that provides Traffic Management Coordinators, En Route and Terminal Radar Approach Control air traffic controllers the ability to efficiently optimize the arrival capacity of a demand-impacted airport while simultaneously enabling fuel-efficient descent procedures. The TAPSS system consists of four-dimensional trajectory prediction, arrival runway balancing, aircraft separation constraint-based scheduling, traffic flow visualization and trajectory-based advisories to assist controllers in efficient metering, sequencing and spacing. The TAPSS system was evaluated and compared to today's ATC operation through an extensive series of human-in-the-loop simulations for arrival flows into the Los Angeles International Airport. The test conditions included the variation of aircraft demand from a baseline of today's capacity-constrained periods through 5%, 10% and 20% increases. Performance data were collected for engineering and human factor analysis and compared with similar operations both with and without the TAPSS system. The engineering data indicate that operations with the TAPSS system show up to a 10% increase in airport throughput during capacity-constrained periods while maintaining fuel-efficient aircraft descent profiles from cruise to landing.

  17. On the Least-Squares Fitting of Correlated Data: a Priori vs a Posteriori Weighting

    NASA Astrophysics Data System (ADS)

    Tellinghuisen, Joel

    1996-10-01

    One of the methods in common use for analyzing large data sets is a two-step procedure, in which subsets of the full data are first least-squares fitted to a preliminary set of parameters, and the latter are subsequently merged to yield the final parameters. The second step of this procedure is properly a correlated least-squares fit and requires the variance-covariance matrices from the first step to construct the weight matrix for the merge. There is, however, an ambiguity concerning the manner in which the first-step variance-covariance matrices are assessed, which leads to different statistical properties for the quantities determined in the merge. The issue is one of a priori vs a posteriori assessment of weights, which is an application of what was originally called internal vs external consistency by Birge [Phys. Rev. 40, 207-227 (1932)] and Deming ("Statistical Adjustment of Data." Dover, New York, 1964). In the present work the simplest case of a merge fit, that of an average as obtained from a global fit vs a two-step fit of partitioned data, is used to illustrate that only in the case of a priori weighting do the results have the usually expected and desired statistical properties: normal distributions for residuals, t distributions for parameters assessed a posteriori, and χ2 distributions for variances.
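The paper's simplest merge-fit case, an average of partitioned data, can be illustrated numerically. With a priori weights (the measurement variance assumed known in advance, here sigma^2 = 1), the two-step merge reproduces the global fit exactly. The data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(10.0, 1.0, size=12)     # synthetic measurements, sigma = 1
subsets = [data[:5], data[5:]]

# Step 1: fit each subset to its mean; a priori parameter variance = 1/n
means = [s.mean() for s in subsets]
variances = [1.0 / len(s) for s in subsets]

# Step 2: for an average, the correlated merge reduces to an
# inverse-variance weighted mean of the step-1 parameters
weights = [1.0 / v for v in variances]
merged = sum(w * m for w, m in zip(weights, means)) / sum(weights)
```

With a posteriori weighting, `variances` would instead be estimated from each subset's own residuals (e.g. `s.var(ddof=1) / len(s)`), and the merged value would generally differ from the global fit.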

  18. Semantics-Based Composition of Integrated Cardiomyocyte Models Motivated by Real-World Use Cases.

    PubMed

    Neal, Maxwell L; Carlson, Brian E; Thompson, Christopher T; James, Ryan C; Kim, Karam G; Tran, Kenneth; Crampin, Edmund J; Cook, Daniel L; Gennari, John H

    2015-01-01

    Semantics-based model composition is an approach for generating complex biosimulation models from existing components that relies on capturing the biological meaning of model elements in a machine-readable fashion. This approach allows the user to work at the biological rather than computational level of abstraction and helps minimize the amount of manual effort required for model composition. To support this compositional approach, we have developed the SemGen software, and here report on SemGen's semantics-based merging capabilities using real-world modeling use cases. We successfully reproduced a large, manually-encoded, multi-model merge: the "Pandit-Hinch-Niederer" (PHN) cardiomyocyte excitation-contraction model, previously developed using CellML. We describe our approach for annotating the three component models used in the PHN composition and for merging them at the biological level of abstraction within SemGen. We demonstrate that we were able to reproduce the original PHN model results in a semi-automated, semantics-based fashion and also rapidly generate a second, novel cardiomyocyte model composed using an alternative, independently-developed tension generation component. We discuss the time-saving features of our compositional approach in the context of these merging exercises, the limitations we encountered, and potential solutions for enhancing the approach.

  20. A new technique for solving puzzles.

    PubMed

    Makridis, Michael; Papamarkos, Nikos

    2010-06-01

    This paper proposes a new technique for solving jigsaw puzzles. The novelty of the proposed technique is that it provides an automatic jigsaw puzzle solution without any initial restriction about the shape of pieces, the number of neighbor pieces, etc. The proposed technique uses both curve- and color-matching similarity features. A recurrent procedure is applied, which compares and merges puzzle pieces in pairs, until the original puzzle image is reformed. Geometrical and color features are extracted on the characteristic points (CPs) of the puzzle pieces. CPs, which can be considered as high curvature points, are detected by a rotationally invariant corner detection algorithm. The features which are associated with color are provided by applying a color reduction technique using the Kohonen self-organized feature map. Finally, a postprocessing stage checks and corrects the relative position between puzzle pieces to improve the quality of the resulting image. Experimental results prove the efficiency of the proposed technique, which can be further extended to deal with even more complex jigsaw puzzle problems.

  1. An ontology of scientific experiments

    PubMed Central

    Soldatova, Larisa N; King, Ross D

    2006-01-01

    The formal description of experiments for efficient analysis, annotation and sharing of results is a fundamental part of the practice of science. Ontologies are required to achieve this objective. A few subject-specific ontologies of experiments currently exist. However, despite the unity of scientific experimentation, no general ontology of experiments exists. We propose the ontology EXPO to meet this need. EXPO links the SUMO (the Suggested Upper Merged Ontology) with subject-specific ontologies of experiments by formalizing the generic concepts of experimental design, methodology and results representation. EXPO is expressed in the W3C standard ontology language OWL-DL. We demonstrate the utility of EXPO and its ability to describe different experimental domains, by applying it to two experiments: one in high-energy physics and the other in phylogenetics. The use of EXPO made the goals and structure of these experiments more explicit, revealed ambiguities, and highlighted an unexpected similarity. We conclude that EXPO is of general value in describing experiments and a step towards the formalization of science. PMID:17015305

  2. Low-cost telepresence for collaborative virtual environments.

    PubMed

    Rhee, Seon-Min; Ziegler, Remo; Park, Jiyoung; Naef, Martin; Gross, Markus; Kim, Myoung-Hee

    2007-01-01

    We present a novel low-cost method for visual communication and telepresence in a CAVE-like environment, relying on 2D stereo-based video avatars. The system combines a selection of proven efficient algorithms and approximations in a unique way, resulting in a convincing stereoscopic real-time representation of a remote user acquired in a spatially immersive display. The system was designed to extend existing projection systems with acquisition capabilities requiring minimal hardware modifications and cost. The system uses infrared-based image segmentation to enable concurrent acquisition and projection in an immersive environment without a static background. The system consists of two color cameras and two additional b/w cameras used for segmentation in the near-IR spectrum. There is no need for special optics as the mask and color image are merged using image-warping based on a depth estimation. The resulting stereo image stream is compressed, streamed across a network, and displayed as a frame-sequential stereo texture on a billboard in the remote virtual environment.

  3. Realistic and efficient 2D crack simulation

    NASA Astrophysics Data System (ADS)

    Yadegar, Jacob; Liu, Xiaoqing; Singh, Abhishek

    2010-04-01

    Although numerical algorithms for 2D crack simulation have been studied in Modeling and Simulation (M&S) and computer graphics for decades, realism and computational efficiency are still major challenges. In this paper, we introduce a high-fidelity, scalable, adaptive and efficient runtime 2D crack/fracture simulation system by applying the mathematically elegant Peano-Cesaro triangular meshing/remeshing technique to model the generation of shards/fragments. The recursive fractal sweep associated with the Peano-Cesaro triangulation provides efficient local multi-resolution refinement to any level-of-detail. The generated binary decomposition tree also provides an efficient neighbor retrieval mechanism used for mesh element splitting and merging with minimal memory requirements essential for realistic 2D fragment formation. Upon load impact/contact/penetration, a number of factors including impact angle, impact energy, and material properties are all taken into account to produce the criteria of crack initialization, propagation, and termination leading to realistic fractal-like rubble/fragment formation. The aforementioned parameters are used as variables of probabilistic models of crack/shard formation, making the proposed solution highly adaptive by allowing machine learning mechanisms to learn the optimal values for the variables/parameters based on prior benchmark data generated by off-line physics-based simulation solutions that produce accurate fractures/shards, though at a highly non-real-time pace. Crack/fracture simulation has been conducted on various load impacts with different initial locations at various impulse scales. The simulation results demonstrate that the proposed system has the capability to realistically and efficiently simulate 2D crack phenomena (such as window shattering and shard generation) with diverse potentials in military and civil M&S applications such as training and mission planning.

  4. Final report on the Magnetized Target Fusion Collaboration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    John Slough

    Nuclear fusion has the potential to satisfy the prodigious power that the world will demand in the future, but it has yet to be harnessed as a practical energy source. The entry of fusion as a viable, competitive source of power has been stymied by the challenge of finding an economical way to provide for the confinement and heating of the plasma fuel. It is the contention here that a simpler path to fusion can be achieved by creating fusion conditions in a different regime at small scale (~ a few cm). One such program now under study, referred to as Magnetized Target Fusion (MTF), is directed at obtaining fusion in this high energy density regime by rapidly compressing a compact toroidal plasmoid commonly referred to as a Field Reversed Configuration (FRC). To make fusion practical at this smaller scale, an efficient method for compressing the FRC to fusion gain conditions is required. In one variant of MTF a conducting metal shell is imploded electrically. This radially compresses and heats the FRC plasmoid to fusion conditions. The closed magnetic field in the target plasmoid suppresses the thermal transport to the confining shell, thus lowering the imploding power needed to compress the target. The undertaking to be described in this proposal is to provide a suitable target FRC, as well as a simple and robust method for inserting and stopping the FRC within the imploding liner. The timescale for testing and development can be rapidly accelerated by taking advantage of a new facility funded by the Department of Energy. At this facility, two inductive plasma accelerators (IPA) were constructed and tested. Recent experiments with these IPAs have demonstrated the ability to rapidly form, accelerate and merge two hypervelocity FRCs into a compression chamber. The resultant FRC that was formed was hot (T_ion ~ 400 eV), stationary, and stable with a configuration lifetime several times that necessary for the MTF liner experiments.
The accelerator length was less than 1 meter, and the time from the initiation of formation to the establishment of the final equilibrium was less than 10 microseconds. With some modification, each accelerator was made capable of producing FRCs suitable for the production of the target plasma for the MTF liner experiment. Based on the initial FRC merging/compression results, the design and methodology for an experimental realization of the target plasma for the MTF liner experiment can now be defined. A high density FRC plasmoid is to be formed and accelerated out of each IPA into a merging/compression chamber similar to the imploding liner at AFRL. The properties of the resultant FRC plasma (size, temperature, density, flux, lifetime) are obtained in the relevant regime of interest. The process still needs to be optimized, and a final design for implementation at AFRL must now be carried out. When implemented at AFRL it is anticipated that the colliding/merging FRCs will then be compressed by the liner. In this manner it is hoped that ultimately a plasma with ion temperatures reaching the 10 keV range and fusion gain near unity can be obtained.

  5. Numerical studies of the merging of vortices with decaying cores

    NASA Technical Reports Server (NTRS)

    Liu, G. C.; Ting, L.

    1986-01-01

    The merging of vortices to a single one is a canonical incompressible viscous flow problem. The merging process begins when the core sizes of the vortices are comparable to their distances and ends when the contour lines of constant vorticity are circularized around one center. Approximate solutions to this problem are constructed by adapting the asymptotic solutions for distinct vortices. For the early stage of merging, the next-order terms in the asymptotic solutions are added to the leading term. For the later stage of merging, the vorticity distribution is reinitialized by vortices with overlapping core structures guided by the 'rule of merging' and the velocity of the 'vortex centers' is then defined by a minimum principle. To show the accuracy of the approximate solution, it is compared with the finite-difference solution.

  6. The Fate of Merging Neutron Stars

    NASA Astrophysics Data System (ADS)

    Kohler, Susanna

    2017-08-01

    A rapidly spinning, highly magnetized neutron star is one possible outcome when two smaller neutron stars merge. [Casey Reed/Penn State University] When two neutron stars collide, the new object that they make can reveal information about the interior physics of neutron stars. New theoretical work explores what we should be seeing, and what it can teach us. Neutron Star or Black Hole? So far, the only systems from which we've detected gravitational waves are merging black holes. But other compact-object binaries exist and are expected to merge on observable timescales, in particular binary neutron stars. When two neutron stars merge, the resulting object falls into one of three categories: a stable neutron star, a black hole, or a supramassive neutron star, a large neutron star that's supported by its rotation but will eventually collapse to a black hole after it loses angular momentum. Histograms of the initial (left) and final (right) distributions of objects in the authors' simulations, for five different equations of state. Most cases resulted primarily in the formation of neutron stars (NSs) or supramassive neutron stars (sNSs), not black holes (BHs). [Piro et al. 2017] Whether a binary-neutron-star merger results in another neutron star, a black hole, or a supramassive neutron star depends on the final mass of the remnant and what the correct equation of state is that describes the interiors of neutron stars, a longstanding astrophysical puzzle. In a recent study, a team of scientists led by Anthony Piro (Carnegie Observatories) estimated which of these outcomes we should expect for mergers of binary neutron stars.
The team's results, along with future observations of binary neutron stars, may help us to eventually pin down the equation of state for neutron stars. Merger Outcomes: Piro and collaborators used relativistic calculations of spinning and non-spinning neutron stars to estimate the mass range that neutron stars would have for several different realistic equations of state. They then combined this information with Monte Carlo simulations based on the mass distribution of neutron-star binaries in our galaxy. From these simulations, Piro and collaborators could predict the distribution of fates expected for merging neutron-star binaries, given different equations of state. The authors found that the fate of the merger could vary greatly depending on the equation of state you assume. Intriguingly, all equations of state resulted in a surprisingly high fraction of systems that merged to form a neutron star or a supramassive neutron star; in fact, four out of the five equations of state predicted that 80-100% of systems would result in a neutron star or a supramassive neutron star. Lessons from Observations: The frequency bands covered by various current and planned gravitational wave observatories. Advanced LIGO has the right frequency coverage to be able to explore a neutron-star remnant if the signal is loud enough. [Christopher Moore, Robert Cole and Christopher Berry] These results have important implications for our future observations. The high predicted fraction of neutron stars resulting from these mergers tells us that it's especially important for gravitational-wave observatories to probe 1-4 kHz emission. This frequency range will enable us to study the post-merger neutron-star or supramassive-neutron-star remnants. Even if we can't observe the remnant's behavior after it forms, we can still compare the distribution of remnants that we observe in the future to the predictions made by Piro and collaborators.
This will potentially allow us to constrain the neutron-star equation of state, revealing the physics of neutron-star interiors even without direct observations. Citation: Anthony L. Piro et al 2017 ApJL 844 L19. doi:10.3847/2041-8213/aa7f2f

  7. Computing and visualizing time-varying merge trees for high-dimensional data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oesterling, Patrick; Heine, Christian; Weber, Gunther H.

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
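The thresholding idea behind the merge tree can be sketched with a union-find sweep: visit vertices in descending order of function value and union each new vertex with its already-visited neighbors; whenever a vertex joins more than one existing component, it becomes an interior (saddle) node of the tree. This minimal sketch records only those join vertices; it illustrates the structure itself, not the paper's time-varying tracking algorithm.

```python
def merge_tree_joins(values, edges):
    """Return the vertices at which superlevel-set components merge,
    sweeping the scalar field from its highest value downward."""
    n = len(values)
    adj = {i: [] for i in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    parent = list(range(n))
    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    seen, joins = set(), []
    for v in sorted(range(n), key=lambda i: -values[i]):
        seen.add(v)
        roots = {find(u) for u in adj[v] if u in seen}
        if len(roots) > 1:
            joins.append(v)           # v merges several components: saddle
        for r in roots:
            parent[r] = v             # absorb them into v's component
    return joins

# 1D field with maxima at indices 0, 2, 4; the two valleys are the joins
joins = merge_tree_joins([3, 1, 4, 1, 5], [(0, 1), (1, 2), (2, 3), (3, 4)])
```

Tracking features over time, as in the paper, would then amount to matching subtrees of these structures between consecutive time steps.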

  8. The formation and build-up of the red-sequence over the past 9 Gyr in VIPERS

    NASA Astrophysics Data System (ADS)

    Fritz, Alexander; Abbas, U.; Adami, C.; Arnouts, S.; Bel, J.; Bolzonella, M.; Bottini, D.; Branchini, E.; Burden, A.; Cappi, A.; Coupon, J.; Cucciati, O.; Davidzon, I.; De Lucia, G.; de la Torre, S.; Di Porto, C.; Franzetti, P.; Fumana, M.; Garilli, B.; Granett, B. R.; Guzzo, L.; Ilbert, O.; Iovino, A.; Krywult, J.; Le Brun, V.; Le Fèvre, O.; Maccagni, D.; Małek, K.; Marchetti, A.; Marinoni, C.; Marulli, F.; McCracken, H. J.; Mellier, Y.; Moscardini, L.; Nichol, R. C.; Paioro, L.; Peacock, J. A.; Percival, W. J.; Polletta, M.; Pollo, A.; Scodeggio, M.; Tasca, L. A. M.; Tojeiro, R.; Vergani, D.; Zamorani, G.; Zanichelli, A.; VIPERS Team

    2015-02-01

    We present the Luminosity Function (LF) and Colour-Magnitude Relation (CMR) using ~45000 galaxies drawn from the VIMOS Public Extragalactic Redshift Survey (VIPERS). Using different selection criteria, we define several samples of early-type galaxies and explore their impact on the evolution of the red-sequence (RS) and the effects of dust. Our results suggest a rapid build-up of the RS within a short time scale. We find a rise in the number density of early-type galaxies and a strong evolution in LF and CMR. Massive galaxies exist already 9 Gyr ago and experience an efficient quenching of their star formation at z = 1, followed by a passive evolution with only limited merging activity. In contrast, low-mass galaxies indicate a different mass assembly history and cause a slow build-up of the CMR over cosmic time.

  9. Cloud access to interoperable IVOA-compliant VOSpace storage

    NASA Astrophysics Data System (ADS)

    Bertocco, S.; Dowler, P.; Gaudet, S.; Major, B.; Pasian, F.; Taffoni, G.

    2018-07-01

    Handling, processing and archiving the huge amount of data produced by the new generation of experiments and instruments in Astronomy and Astrophysics are among the more exciting challenges to address in designing the future data management infrastructures and computing services. We investigated the feasibility of a data management and computation infrastructure, available world-wide, with the aim of merging the FAIR data management provided by IVOA standards with the efficiency and reliability of a cloud approach. Our work involved the Canadian Advanced Network for Astronomy Research (CANFAR) infrastructure and the European EGI federated cloud (EFC). We designed and deployed a pilot data management and computation infrastructure that provides IVOA-compliant VOSpace storage resources and wide access to interoperable federated clouds. In this paper, we detail the main user requirements covered, the technical choices and the implemented solutions and we describe the resulting Hybrid cloud Worldwide infrastructure, its benefits and limitations.

  10. Assessing hydrometeorological impacts with terrestrial and aerial Lidar data in Monterrey, México

    NASA Astrophysics Data System (ADS)

    Yepez Rincon, F.; Lozano Garcia, D.; Vela Coiffier, P.; Rivera Rivera, L.

    2013-10-01

    Light Detection Ranging (Lidar) is an efficient tool to gather points reflected from a terrain and store them in an xyz coordinate system, allowing the generation of 3D data sets to manage geoinformation. Translation of these coordinates from an arbitrary system into a geographical base makes the data feasible and useful to calculate volumes and define topographic characteristics at different scales. Lidar technological advancement in topographic mapping enables the generation of highly accurate and densely sampled elevation models, which are in high demand by many industries like construction, mining and forestry. This study merges terrestrial and aerial Lidar data to evaluate the effectiveness of these tools in assessing volumetric changes of riverbeds and bridge scour after a hurricane event. The resulting information could provide an optimal approach to improve hydrological and hydraulic models and to aid authorities in proper decision making in construction, urban planning, and homeland security.

  11. TOPEM: A PET-TOF endorectal probe, compatible with MRI for diagnosis and follow up of prostate cancer

    NASA Astrophysics Data System (ADS)

    Garibaldi, F.; Capuani, S.; Colilli, S.; Cosentino, L.; Cusanno, F.; De Leo, R.; Finocchiaro, P.; Foresta, M.; Giove, F.; Giuliani, F.; Gricia, M.; Loddo, F.; Lucentini, M.; Maraviglia, B.; Meddi, F.; Monno, E.; Musico, P.; Pappalardo, A.; Perrino, R.; Ranieri, A.; Rivetti, A.; Santavenere, F.; Tamma, C.

    2013-02-01

    Prostate cancer is the most common disease in men and the second leading cause of cancer death. Generic large instruments for diagnosis have sensitivity, spatial resolution, and contrast inferior to those of dedicated prostate imagers. Multimodality imaging can play a significant role by merging anatomical and functional details coming from simultaneous PET and MRI. The TOPEM project has the goal of designing, building, and testing an endorectal PET-TOF MRI probe. The performance is dominated by the detector close to the source. Results from simulation show spatial resolution of ∼1.5 mm for source distances up to 80 mm. The efficiency is significantly improved with respect to the external PET. Mini-detectors have been built and tested. To the best of our knowledge, we obtained for the first time a timing resolution of <400 ps together with a Depth Of Interaction (DOI) resolution of 1 mm or less.

  12. Automatic specular reflections removal for endoscopic images

    NASA Astrophysics Data System (ADS)

    Tan, Ke; Wang, Bin; Gao, Yuan

    2017-07-01

    Endoscopy imaging is utilized to provide a realistic view of the surfaces of organs inside the human body. Owing to the damp internal environment, these surfaces usually have a glossy appearance showing specular reflections. For many computer vision algorithms, the highlights created by specular reflections may become a significant source of error. In this paper, we present a novel method for restoration of the specular reflection regions from a single image. The specular restoration process starts with generating a substitute specular-free image with the RPCA method. The specular-removed image is then obtained by taking the binary weighting template of the highlight regions as the weighting for merging the original specular image and the substitute image. A modified template is furthermore discussed for the concealment of artificial effects at the edges of specular regions. Experimental results on the removal of specular reflections from endoscopic images demonstrate the efficiency of the proposed method compared to existing methods.
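The merging step described above reduces to a mask-weighted blend of the original image and the specular-free substitute; the RPCA step that generates the substitute is omitted in this sketch, and the arrays below are synthetic stand-ins.

```python
import numpy as np

def blend_specular(original, substitute, mask):
    """Replace highlighted pixels with the specular-free substitute,
    using the binary highlight mask as the merging weight."""
    m = mask.astype(float)[..., None]     # broadcast mask over color channels
    return m * substitute + (1.0 - m) * original

# Synthetic 2x2 RGB images: the mask selects which source each pixel uses
original = np.ones((2, 2, 3))             # stands in for the specular image
substitute = np.zeros((2, 2, 3))          # stands in for the RPCA output
mask = np.array([[1, 0], [0, 1]])
result = blend_specular(original, substitute, mask)
```

A feathered (non-binary) mask in the same expression is one way to conceal artificial effects at the region edges, in the spirit of the modified template the abstract mentions.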

  13. Scalable Faceted Ranking in Tagging Systems

    NASA Astrophysics Data System (ADS)

    Orlicki, José I.; Alvarez-Hamelin, J. Ignacio; Fierens, Pablo I.

    Nowadays, web collaborative tagging systems, which allow users to upload, comment on and recommend contents, are growing. Such systems can be represented as graphs where nodes correspond to users and tagged links to recommendations. In this paper we analyze the problem of computing a ranking of users with respect to a facet described as a set of tags. A straightforward solution is to compute a PageRank-like algorithm on a facet-related graph, but it is not feasible for online computation. We propose an alternative: (i) a ranking for each tag is computed offline on the basis of tag-related subgraphs; (ii) a faceted order is generated online by merging rankings corresponding to all the tags in the facet. Based on the graph analysis of YouTube and Flickr, we show that step (i) is scalable. We also present efficient algorithms for step (ii), which are evaluated by comparing their results with two gold standards.
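Step (ii), the online merge, can be sketched as combining precomputed per-tag score tables. Summing each user's score over the facet's tags is one simple choice of merge operator; the abstract does not specify the paper's actual operator, so the function and data below are illustrative.

```python
def faceted_ranking(tag_scores, facet):
    """Merge offline per-tag user scores into one faceted ranking by
    summing each user's score over the tags in the facet."""
    merged = {}
    for tag in facet:
        for user, score in tag_scores.get(tag, {}).items():
            merged[user] = merged.get(user, 0.0) + score
    return sorted(merged, key=lambda u: -merged[u])

# Hypothetical offline rankings for two tags
tag_scores = {"jazz": {"ann": 0.6, "bob": 0.3},
              "live": {"bob": 0.5, "cam": 0.2}}
ranking = faceted_ranking(tag_scores, ["jazz", "live"])
```

Because the per-tag tables are computed offline, the online cost of this merge is linear in the number of (tag, user) entries touched by the facet.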

  14. Identification techniques for highly boosted W bosons that decay into hadrons

    DOE PAGES

    Khachatryan, Vardan

    2014-12-02

    In searches for new physics in the energy regime of the LHC, it is becoming increasingly important to distinguish single-jet objects that originate from the merging of the decay products of W bosons produced with high transverse momenta from jets initiated by single partons. Algorithms are defined to identify such W jets for different signals of interest, using techniques that are also applicable to other decays of bosons to hadrons that result in a single jet, such as those from highly boosted Z and Higgs bosons. The efficiency for tagging W jets is measured in data collected with the CMS detector at a center-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 19.7 fb⁻¹. The performance of W tagging in data is compared with predictions from several Monte Carlo simulators.

  15. ArrayNinja: An Open Source Platform for Unified Planning and Analysis of Microarray Experiments.

    PubMed

    Dickson, B M; Cornett, E M; Ramjan, Z; Rothbart, S B

    2016-01-01

    Microarray-based proteomic platforms have emerged as valuable tools for studying various aspects of protein function, particularly in the field of chromatin biochemistry. Microarray technology itself is largely unrestricted in regard to printable material and platform design, and efficient multidimensional optimization of assay parameters requires fluidity in the design and analysis of custom print layouts. This motivates the need for streamlined software infrastructure that facilitates the combined planning and analysis of custom microarray experiments. To this end, we have developed ArrayNinja as a portable, open source, and interactive application that unifies the planning and visualization of microarray experiments and provides maximum flexibility to end users. Array experiments can be planned, stored to a private database, and merged with the imaged results for a level of data interaction and centralization that is not currently attainable with available microarray informatics tools. © 2016 Elsevier Inc. All rights reserved.

  16. Use of LANDSAT-1 data for the detection and mapping of saline seeps in Montana

    NASA Technical Reports Server (NTRS)

    May, G. A. (Principal Investigator); Petersen, G. W.

    1976-01-01

    The author has identified the following significant results. April, May, and August are the best times to detect saline seeps. Specific times within these months would be dependent upon weather, phenology, and growth conditions. Saline seeps can be efficiently and accurately mapped, within resolution capabilities, from merged May and August LANDSAT 1 data. Seeps were mapped by detecting salt crusts in the spring and indicator plants in the fall. These indicator plants were kochia, inkweed, and foxtail barley. The total hectares of the mapped saline seeps were calculated and tabulated. Saline seeps less than two hectares in size or that have linear configurations less than 200 meters in width were not mapped using the LANDSAT 1 data. Saline seep signatures developed in the Coffee Creek test site were extended to map saline seeps located outside this area.

  17. Electronic health records: postadoption physician satisfaction and continued use.

    PubMed

    Wright, Edward; Marvel, Jon

    2012-01-01

    One goal of public-policy makers in general and health care managers in particular is the adoption and efficient utilization of electronic health record (EHR) systems throughout the health care industry. Consequently, this investigation focused on the effects of known antecedents of technology adoption on physician satisfaction with EHR technology and the continued use of such systems. The American Academy of Family Physicians provided support in the survey of 453 physicians regarding their satisfaction with their EHR use experience. A conceptual model merging technology adoption and computer user satisfaction models was tested using structural equation modeling. Results indicate that effort expectancy (ease of use) has the most substantive effect on physician satisfaction and the continued use of EHR systems. As such, health care managers should be especially sensitive to the user and computer interface of prospective EHR systems to avoid costly and disruptive system selection mistakes.

  18. THE YOUNG STELLAR OBJECT POPULATION IN THE VELA-D MOLECULAR CLOUD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strafella, F.; Maruccia, Y.; Maiolo, B.

    2015-01-10

    We investigate the young stellar population in the Vela Molecular Ridge, Cloud-D, a star-forming region observed by both the Spitzer/NASA and Herschel/ESA space telescopes. The point-source, band-merged, Spitzer-IRAC catalog complemented with MIPS photometry previously obtained is used to search for candidate young stellar objects (YSOs), also including sources detected in less than four IRAC bands. Bona fide YSOs are selected by using appropriate color-color and color-magnitude criteria aimed at excluding both Galactic and extragalactic contaminants. The derived star formation rate and efficiency are compared with the same quantities characterizing other star-forming clouds. Additional photometric data, spanning from the near-IR to the submillimeter, are used to evaluate both bolometric luminosity and temperature for 33 YSOs located in a region of the cloud observed by both Spitzer and Herschel. The luminosity-temperature diagram suggests that some of these sources are representative of Class 0 objects with bolometric temperatures below 70 K and luminosities of the order of the solar luminosity. Far-IR observations from the Herschel/Hi-GAL key project for a survey of the Galactic plane are also used to obtain a band-merged photometric catalog of Herschel sources intended to independently search for protostars. We find 122 Herschel cores located on the molecular cloud, 30 of which are protostellar and 92 of which are starless. The global protostellar luminosity function is obtained by merging the Spitzer and Herschel protostars. Considering that 10 protostars are found in both the Spitzer and Herschel lists, it follows that in the investigated region we find 53 protostars and that the Spitzer-selected protostars account for approximately two-thirds of the total.

  19. Automated tilt series alignment and tomographic reconstruction in IMOD.

    PubMed

    Mastronarde, David N; Held, Susannah R

    2017-02-01

    Automated tomographic reconstruction is now possible in the IMOD software package, including the merging of tomograms taken around two orthogonal axes. Several developments enable the production of high-quality tomograms. When using fiducial markers for alignment, the markers to be tracked through the series are chosen automatically; if there is an excess of markers available, a well-distributed subset is selected that is most likely to track well. Marker positions are refined by applying an edge-enhancing Sobel filter, which results in a 20% improvement in alignment error for plastic-embedded samples and 10% for frozen-hydrated samples. Robust fitting, in which outlying points are given less or no weight in computing the fitting error, is used to obtain an alignment solution, so that aberrant points from the automated tracking can have little effect on the alignment. When merging two dual-axis tomograms, the alignment between them is refined from correlations between local patches; a measure of structure was developed so that patches with insufficient structure to give accurate correlations can now be excluded automatically. We have also developed a script for running all steps in the reconstruction process with a flexible mechanism for setting parameters, and we have added a user interface for batch processing of tilt series to the Etomo program in IMOD. Batch processing is fully compatible with interactive processing and can increase efficiency even when the automation is not fully successful, because users can focus their effort on the steps that require manual intervention. Copyright © 2016 Elsevier Inc. All rights reserved.
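    The robust-fitting idea, in which outlying points receive less or no weight, can be sketched with an iteratively reweighted least-squares line fit (a generic illustration; IMOD's actual weighting scheme and alignment model are not reproduced here):

```python
import numpy as np

def robust_line_fit(x, y, iters=5, k=1.5):
    """Fit y = a + b*x, iteratively zeroing the weight of points whose
    absolute residual exceeds k times the median absolute residual,
    so aberrant points have little effect on the final solution."""
    w = np.ones_like(y, dtype=float)
    for _ in range(iters):
        # weighted least squares: zero-weight rows contribute nothing
        A = np.column_stack([np.ones_like(x), x]) * w[:, None]
        (a, b), *_ = np.linalg.lstsq(A, y * w, rcond=None)
        r = np.abs(y - (a + b * x))
        scale = np.median(r) + 1e-12
        w = (r <= k * scale).astype(float)   # outliers get zero weight
    return a, b

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 1.0, 2.0, 3.0, 40.0])   # last point is aberrant
a, b = robust_line_fit(x, y)               # recovers roughly y = x
```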

  20. A Track Initiation Method for the Underwater Target Tracking Environment

    NASA Astrophysics Data System (ADS)

    Li, Dong-dong; Lin, Yang; Zhang, Yao

    2018-04-01

    A novel, efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater environment. Current track initiation methods have three primary shortcomings: (a) they cannot effectively eliminate the disturbances caused by clutter; (b) they may suffer a high false alarm probability and a low track detection probability; (c) they cannot correctly estimate the initial state of a newly confirmed track. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, track splitting creates a large number of tracks, including the true track originating from the target, in order to increase the track detection probability. To decrease the false alarm probability, track pruning and track merging, built on an evaluation mechanism, are proposed to reduce the number of false tracks. TSEPM can deal with the track initiation problems arising from heavy clutter and large measurement errors, determine the target's existence, and estimate its initial state with the least squares method. Moreover, the method is fully automatic and does not require any manual input for initializing or tuning parameters. Simulation results indicate that the new method significantly improves track initiation performance in the harsh underwater target tracking environment.
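    The least-squares initial-state estimate can be sketched for a constant-velocity target in one dimension (an illustrative reconstruction, not the authors' implementation):

```python
import numpy as np

def initial_state_lsq(times, positions):
    """Estimate initial position and velocity from noisy 1-D position
    measurements by least-squares fitting x(t) = x0 + v*t."""
    A = np.column_stack([np.ones(len(times)), times])
    (x0, v), *_ = np.linalg.lstsq(A, np.asarray(positions, float), rcond=None)
    return x0, v

t = np.array([0.0, 1.0, 2.0, 3.0])
x = np.array([0.1, 1.0, 2.1, 2.9])   # roughly x = 0.1 + 1*t with noise
x0, v = initial_state_lsq(t, x)
```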

  1. [Fragment-based drug discovery: concept and aim].

    PubMed

    Tanaka, Daisuke

    2010-03-01

    Fragment-Based Drug Discovery (FBDD) has been recognized as a newly emerging lead discovery methodology that involves biophysical fragment screening and chemistry-driven fragment-to-lead stages. Although fragments, defined as structurally simple and small compounds (typically <300 Da), have not been employed in conventional high-throughput screening (HTS), the recent significant progress in the biophysical screening methods enables fragment screening at a practical level. The intention of FBDD primarily turns our attention to weakly but specifically binding fragments (hit fragments) as the starting point of medicinal chemistry. Hit fragments are then promoted to more potent lead compounds through linking or merging with another hit fragment and/or attaching functional groups. Another positive aspect of FBDD is ligand efficiency. Ligand efficiency is a useful guide in screening hit selection and hit-to-lead phases to achieve lead-likeness. Owing to these features, a number of successful applications of FBDD to "undruggable targets" (where HTS and other lead identification methods failed to identify useful lead compounds) have been reported. As a result, FBDD is now expected to complement more conventional methodologies. This review, as an introduction of the following articles, will summarize the fundamental concepts of FBDD and will discuss its advantages over other conventional drug discovery approaches.

  2. Low-cost fused taper polymer optical fiber (LFT-POF) splitters for environmental and home-networking solution

    NASA Astrophysics Data System (ADS)

    Supian, L. S.; Ab-Rahman, Mohammad Syuhaimi; Harun, Mohd Hazwan; Gunab, Hadi; Sulaiman, Malik; Naim, Nani Fadzlina

    2017-08-01

    In visible optical communication over the multimode PMMA fibers, the overall cost of optical network can be reduced by deploying economical splitters for distributing the optical data signals from a point to multipoint in transmission network. The low-cost splitters shall have two main characteristics; good uniformity and high power efficiency. The most cost-effective and environmental friendly optical splitter having those characteristics have been developed. The device material is 100% purely based on the multimode step-index PMMA Polymer Optical Fiber (POF). The region which all fibers merged as single fiber is called as fused-taper POF. This ensures that all fibers are melted and fused properly. The results for uniformity and power efficiency of all splitters have been revealed by injecting red LED transmitter with 650 nm wavelength into input port while each end of output fibers measured by optical power meter. Final analysis shows our fused-taper splitter has low excess loss 0.53 dB and each of the output port has low insertion loss, which the average value is below 7 dB. In addition, the splitter has good uniformity that is 32:37:31% in which it is suitably used for demultiplexer fabrication.

  3. 29 CFR 4211.31 - Allocation of unfunded vested benefits following the merger of plans.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) through (d) of this section, when two or more multiemployer plans merge, the merged plan shall adopt one... allocation methods prescribed in §§ 4211.32 through 4211.35, and the method adopted shall apply to all employer withdrawals occurring after the initial plan year. Alternatively, a merged plan may adopt its own...

  4. 76 FR 54931 - Post Office (PO) Box Fee Groups for Merged Locations

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-06

    ... POSTAL SERVICE 39 CFR Part 111 Post Office (PO) Box Fee Groups for Merged Locations AGENCY: Postal... different ZIP Code TM location because of a merger of two or more ZIP Code locations into a single location... merged with a location whose box section is more than one fee group level different, the location would...

  5. Merged Federal Files [Academic Year] 1978-79 [machine-readable data file].

    ERIC Educational Resources Information Center

    National Center for Education Statistics (ED), Washington, DC.

    The Merged Federal File for 1978-79 contains school district level data from the following six source files: (1) the Census of Governments' Survey of Local Government Finances--School Systems (F-33) (with 16,343 records merged); (2) the National Center for Education Statistics Survey of School Systems (School District Universe) (with 16,743…

  6. Tropical Rainfall Analysis Using TRMM in Combination With Other Satellite Gauge Data: Comparison with Global Precipitation Climatology Project (GPCP) Results

    NASA Technical Reports Server (NTRS)

    Adler, Robert F.; Huffman, George J.; Bolvin, David; Nelkin, Eric; Curtis, Scott

    1999-01-01

    This paper describes recent results of using Tropical Rainfall Measuring Mission (TRMM) information as the key calibration tool in a merged analysis on a 1 deg x 1 deg latitude/longitude monthly scale based on multiple satellite sources and raingauge analysis. The procedure used to produce the GPCP data set is a stepwise approach which first combines the satellite low-orbit microwave and geosynchronous IR observations into a "multi-satellite" product and then merges that result with the raingauge analysis. Preliminary results produced with the still-stabilizing TRMM algorithms indicate that TRMM shows tighter spatial gradients in tropical rain maxima with higher peaks in the center of the maxima. The TRMM analyses will be used to evaluate the evolution of the 1998 ENSO variations, again in comparison with the GPCP analyses.
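    The stepwise combination can be caricatured as inverse-error-variance weighting of gridded estimates (a schematic sketch with invented numbers; the actual GPCP procedure involves calibration and error statistics well beyond this):

```python
import numpy as np

def weighted_merge(fields, errors):
    """Merge precipitation estimates by inverse-error-variance weighting.
    fields, errors: lists of equally shaped arrays (estimate, error std)."""
    w = [1.0 / np.square(e) for e in errors]
    wsum = np.sum(w, axis=0)
    return np.sum([wi * f for wi, f in zip(w, fields)], axis=0) / wsum

# step 1: combine microwave and IR into a multi-satellite product
microwave = np.array([[2.0, 4.0]])
ir = np.array([[3.0, 5.0]])
multi_sat = weighted_merge([microwave, ir],
                           [np.full((1, 2), 1.0), np.full((1, 2), 2.0)])
# step 2: merge the multi-satellite product with the gauge analysis
gauge = np.array([[2.5, 4.5]])
final = weighted_merge([multi_sat, gauge],
                       [np.full((1, 2), 1.0), np.full((1, 2), 0.5)])
```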

  7. How Do Galaxies Grow?

    NASA Astrophysics Data System (ADS)

    2008-08-01

    Astronomers have caught multiple massive galaxies in the act of merging about 4 billion years ago. This discovery, made possible by combining the power of the best ground- and space-based telescopes, uniquely supports the favoured theory of how galaxies form. ESO PR Photo 24/08: Merging Galaxies in Groups. How do galaxies form? The most widely accepted answer to this fundamental question is the model of 'hierarchical formation', a step-wise process in which small galaxies merge to build larger ones. One can think of the galaxies forming in a similar way to how streams merge to form rivers, and how these rivers, in turn, merge to form an even larger river. This theoretical model predicts that massive galaxies grow through many merging events in their lifetime. But when did their cosmological growth spurts finish? When did the most massive galaxies get most of their mass? To answer these questions, astronomers study massive galaxies in clusters, the cosmological equivalent of cities filled with galaxies. "Whether the brightest galaxies in clusters grew substantially in the last few billion years is intensely debated. Our observations show that in this time, these galaxies have increased their mass by 50%," says Kim-Vy Tran from the University of Zürich, Switzerland, who led the research. The astronomers made use of a large ensemble of telescopes and instruments, including ESO's Very Large Telescope (VLT) and the Hubble Space Telescope, to study in great detail galaxies located 4 billion light-years away. These galaxies lie in an extraordinary system made of four galaxy groups that will assemble into a cluster. In particular, the team took images with VIMOS and spectra with FORS2, both instruments on the VLT. From these and other observations, the astronomers could identify a total of 198 galaxies belonging to these four groups.
    The brightest galaxies in each group contain between 100 and 1000 billion stars, a property that makes them comparable to the most massive galaxies belonging to clusters. "Most surprising is that in three of the four groups, the brightest galaxy also has a bright companion galaxy. These galaxy pairs are merging systems," says Tran. The brightest galaxy in each group can be ordered in a time sequence that shows how luminous galaxies continued to grow by merging until recently, that is, in the last 5 billion years. It appears that due to the most recent episode of this 'galactic cannibalism', the brightest galaxies became at least 50% more massive. This discovery provides unique and powerful validation of hierarchical formation as manifested in both galaxy and cluster assembly. "The stars in these galaxies are already old and so we must conclude that the recent merging did not produce a new generation of stars," concludes Tran. "Most of the stars in these galaxies were born at least 7 billion years ago." The team is composed of Kim-Vy H. Tran (Institute for Theoretical Physics, University of Zürich, Switzerland), John Moustakas (New York University, USA), Anthony H. Gonzalez and Stefan J. Kautsch (University of Florida, Gainesville, USA), and Lei Bai and Dennis Zaritsky (Steward Observatory, University of Arizona, USA). The results presented here are published in the Astrophysical Journal Letters: "The Late Stellar Assembly Of Massive Cluster Galaxies Via Major Merging", by Tran et al.

  8. Accuracy of inference on the physics of binary evolution from gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya

    2018-04-01

    The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion, and mass-loss rates during the luminous blue variable and Wolf-Rayet stellar-evolutionary phases. We find that ˜1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.
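    The Fisher-matrix reasoning can be illustrated on a toy one-parameter model (a generic sketch, unrelated to the COMPAS likelihood: for a Gaussian observable with known scatter σ, the information from n independent observations is n/σ², so the parameter bound shrinks as 1/√n):

```python
import math

def fisher_uncertainty(sigma, n_obs):
    """1-sigma bound on a Gaussian mean from the Fisher information:
    I = n_obs / sigma**2, so the Cramer-Rao bound is sigma / sqrt(n_obs)."""
    info = n_obs / sigma**2
    return 1.0 / math.sqrt(info)

# a per-observation scatter of 1 (in units of the true parameter)
# shrinks to a few-per-cent constraint after ~1000 observations
bound = fisher_uncertainty(1.0, 1000)
```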

  9. Histology of the Urogenital System in the American Bullfrog (Rana catesbeiana), with Emphasis on Male Reproductive Morphology.

    PubMed

    Rheubert, Justin L; Cook, Hanna E; Siegel, Dustin S; Trauth, Stanley E

    2017-10-01

    Previous studies have revealed variations in the urogenital system morphology of amphibians. Recently, the urogenital system of salamanders was reviewed and terminology was synonymized across taxa. Discrepancies exist in the terminology describing the urogenital system of anurans, which prompted our group to develop a complete, detailed description of the urogenital system in an anuran species and provide nomenclature that is synonymous with those of other amphibian taxa. In Rana catesbeiana, sperm mature within spermatocysts of the seminiferous tubule epithelia and are transported to a series of intratesticular ducts that exit the testes and merge to form vasa efferentia. Vasa efferentia converge into single longitudinal ducts (Bidder's ducts) on the lateral aspects of the kidneys. Branches from the longitudinal ducts merge with genital kidney renal tubules through renal corpuscles. The nephrons travel caudally and empty into the Wolffian ducts. Similar to salamanders, the caudal portion of the kidneys (termed the pelvic kidneys in salamanders) only possesses nephrons involved in urine formation, not sperm transport. Data from the present study provide a detailed description and synonymous nomenclature that can be used to make future comparative analyses between taxa more efficient.

  10. Accuracy of inference on the physics of binary evolution from gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Barrett, Jim W.; Gaebel, Sebastian M.; Neijssel, Coenraad J.; Vigna-Gómez, Alejandro; Stevenson, Simon; Berry, Christopher P. L.; Farr, Will M.; Mandel, Ilya

    2018-07-01

    The properties of the population of merging binary black holes encode some of the uncertain physics underlying the evolution of massive stars in binaries. The binary black hole merger rate and chirp-mass distribution are being measured by ground-based gravitational-wave detectors. We consider isolated binary evolution, and explore how accurately the physical model can be constrained with such observations by applying the Fisher information matrix to the merging black hole population simulated with the rapid binary-population synthesis code COMPAS. We investigate variations in four COMPAS parameters: common-envelope efficiency, kick-velocity dispersion and mass-loss rates during the luminous blue variable, and Wolf-Rayet stellar-evolutionary phases. We find that ˜1000 observations would constrain these model parameters to a fractional accuracy of a few per cent. Given the empirically determined binary black hole merger rate, we can expect gravitational-wave observations alone to place strong constraints on the physics of stellar and binary evolution within a few years. Our approach can be extended to use other observational data sets; combining observations at different evolutionary stages will lead to a better understanding of stellar and binary physics.

  11. The case for electron re-acceleration at galaxy cluster shocks

    NASA Astrophysics Data System (ADS)

    van Weeren, Reinout J.; Andrade-Santos, Felipe; Dawson, William A.; Golovich, Nathan; Lal, Dharam V.; Kang, Hyesung; Ryu, Dongsu; Brüggen, Marcus; Ogrean, Georgiana A.; Forman, William R.; Jones, Christine; Placco, Vinicius M.; Santucci, Rafael M.; Wittman, David; Jee, M. James; Kraft, Ralph P.; Sobral, David; Stroe, Andra; Fogarty, Kevin

    2017-01-01

    On the largest scales, the Universe consists of voids and filaments making up the cosmic web. Galaxy clusters are located at the knots in this web, at the intersection of filaments. Clusters grow through accretion from these large-scale filaments and by mergers with other clusters and groups. In a growing number of galaxy clusters, elongated Mpc-sized radio sources have been found [1,2]. Also known as radio relics, these regions of diffuse radio emission are thought to trace relativistic electrons in the intracluster plasma accelerated by low-Mach-number shocks generated by cluster-cluster merger events [3]. A long-standing problem is how low-Mach-number shocks can accelerate electrons so efficiently to explain the observed radio relics. Here, we report the discovery of a direct connection between a radio relic and a radio galaxy in the merging galaxy cluster Abell 3411-3412 by combining radio, X-ray and optical observations. This discovery indicates that fossil relativistic electrons from active galactic nuclei are re-accelerated at cluster shocks. It also implies that radio galaxies play an important role in governing the non-thermal component of the intracluster medium in merging clusters.

  12. The case for electron re-acceleration at galaxy cluster shocks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Weeren, Reinout J.; Andrade-Santos, Felipe; Dawson, William A.

    On the largest scales, the Universe consists of voids and filaments making up the cosmic web. Galaxy clusters are located at the knots in this web, at the intersection of filaments. Clusters grow through accretion from these large-scale filaments and by mergers with other clusters and groups. In a growing number of galaxy clusters, elongated Mpc-sized radio sources have been found. Also known as radio relics, these regions of diffuse radio emission are thought to trace relativistic electrons in the intracluster plasma accelerated by low-Mach-number shocks generated by cluster–cluster merger events. A long-standing problem is how low-Mach-number shocks can accelerate electrons so efficiently to explain the observed radio relics. Here, we report the discovery of a direct connection between a radio relic and a radio galaxy in the merging galaxy cluster Abell 3411–3412 by combining radio, X-ray and optical observations. This discovery indicates that fossil relativistic electrons from active galactic nuclei are re-accelerated at cluster shocks. Lastly, it also implies that radio galaxies play an important role in governing the non-thermal component of the intracluster medium in merging clusters.

  13. The case for electron re-acceleration at galaxy cluster shocks

    DOE PAGES

    van Weeren, Reinout J.; Andrade-Santos, Felipe; Dawson, William A.; ...

    2017-01-04

    On the largest scales, the Universe consists of voids and filaments making up the cosmic web. Galaxy clusters are located at the knots in this web, at the intersection of filaments. Clusters grow through accretion from these large-scale filaments and by mergers with other clusters and groups. In a growing number of galaxy clusters, elongated Mpc-sized radio sources have been found. Also known as radio relics, these regions of diffuse radio emission are thought to trace relativistic electrons in the intracluster plasma accelerated by low-Mach-number shocks generated by cluster–cluster merger events. A long-standing problem is how low-Mach-number shocks can accelerate electrons so efficiently to explain the observed radio relics. Here, we report the discovery of a direct connection between a radio relic and a radio galaxy in the merging galaxy cluster Abell 3411–3412 by combining radio, X-ray and optical observations. This discovery indicates that fossil relativistic electrons from active galactic nuclei are re-accelerated at cluster shocks. Lastly, it also implies that radio galaxies play an important role in governing the non-thermal component of the intracluster medium in merging clusters.

  14. Evaluating OpenSHMEM Explicit Remote Memory Access Operations and Merged Requests

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boehm, Swen; Pophale, Swaroop S; Gorentla Venkata, Manjunath

    The OpenSHMEM Library Specification has evolved considerably since version 1.0. Recently, non-blocking implicit Remote Memory Access (RMA) operations were introduced in OpenSHMEM 1.3. These provide a way to achieve better overlap between communication and computation. However, the implicit non-blocking operations do not provide a separate handle to track and complete individual RMA operations; they are only guaranteed to be completed after a shmem_quiet(), shmem_barrier() or shmem_barrier_all() call, which are global completion and synchronization operations. Though this semantic is expected to achieve a higher message rate for applications, the drawback is that it does not allow fine-grained control over the completion of RMA operations. In this paper, first, we introduce non-blocking RMA operations with requests, where each operation has an explicit request to track and complete the operation. Second, we introduce interfaces to merge multiple requests into a single request handle. The merged request tracks multiple user-selected RMA operations, which provides the flexibility of tracking related communication operations with one request handle. Lastly, we explore the implications in terms of performance, productivity, usability and the possibility of defining different communication patterns via merging of requests. Our experimental results show that a well-designed and well-implemented OpenSHMEM stack can hide the overhead of allocating and managing the requests. The latency of RMA operations with requests is similar to that of blocking and implicit non-blocking RMA operations. We test our implementation with the Scalable Synthetic Compact Applications (SSCA #1) benchmark and observe that using RMA operations with requests, together with merging of these requests, outperforms the implementation using blocking RMA operations and implicit non-blocking operations by 49% and 74%, respectively.
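    The merged-request idea can be mimicked with futures (a conceptual sketch only: the paper's interfaces extend the OpenSHMEM C API, and the MergedRequest class and its wait() method here are hypothetical names invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor, wait

class MergedRequest:
    """Track several explicit operation handles with one handle,
    mimicking merging user-selected operations into a single request
    (conceptual only; not the OpenSHMEM API)."""
    def __init__(self, requests):
        self.requests = list(requests)

    def wait(self):
        # completing the merged handle completes every member operation,
        # giving group-level rather than global completion
        wait(self.requests)
        return [r.result() for r in self.requests]

pool = ThreadPoolExecutor(max_workers=4)
# three independent "RMA" operations, each with its own explicit handle
reqs = [pool.submit(lambda v=v: v * v) for v in (1, 2, 3)]
merged = MergedRequest(reqs)   # merge into a single request handle
results = merged.wait()        # fine-grained completion of the group
pool.shutdown()
```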

  15. Large Scale Frequent Pattern Mining using MPI One-Sided Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vishnu, Abhinav; Agarwal, Khushbu

    In this paper, we propose a work-stealing runtime, the Library for Work Stealing (LibWS), using the MPI one-sided model for designing a scalable FP-Growth (the de facto frequent pattern mining algorithm) on large scale systems. LibWS provides locality-efficient and highly scalable work-stealing techniques for load balancing on a variety of data distributions. We also propose a novel communication algorithm for the FP-Growth data exchange phase, which reduces the communication complexity from the state-of-the-art O(p) to O(f + p/f) for p processes and f frequent attribute-ids. FP-Growth is implemented using LibWS and evaluated on several work distributions and support counts. An experimental evaluation of FP-Growth on LibWS using 4096 processes on an InfiniBand cluster demonstrates excellent efficiency for several work distributions (87% efficiency for Power-law and 91% for Poisson). The proposed distributed FP-Tree merging algorithm provides a 38x communication speedup on 4096 cores.

  16. Efficient Multiphoton Generation in Waveguide Quantum Electrodynamics.

    PubMed

    González-Tudela, A; Paulisch, V; Kimble, H J; Cirac, J I

    2017-05-26

    Engineering quantum states of light is at the basis of many quantum technologies, such as quantum cryptography, teleportation, or metrology, among others. Although single photons can be generated in many scenarios, the efficient and reliable generation of complex single-mode multiphoton states is still a long-standing goal in the field, as current methods suffer from either low fidelities or small probabilities. Here we discuss several protocols which harness the strong and long-range atomic interactions induced by waveguide QED to efficiently load excitations in a collection of atoms, which can then be triggered to produce the desired multiphoton state. In order to boost the success probability and fidelity of each excitation process, some atoms are used both to generate the excitations in the rest and to herald the successful generation. Furthermore, to overcome the exponential scaling of the probability of success with the number of excitations, we design a protocol to merge excitations that are present in different internal atomic levels with a polynomial scaling.

  17. Merging black hole binaries: the effects of progenitor's metallicity, mass-loss rate and Eddington factor

    NASA Astrophysics Data System (ADS)

    Giacobbo, Nicola; Mapelli, Michela; Spera, Mario

    2018-03-01

    The first four gravitational wave events detected by LIGO were all interpreted as merging black hole binaries (BHBs), opening a new perspective on the study of such systems. Here we use our new population-synthesis code MOBSE, an upgraded version of BSE, to investigate the demography of merging BHBs. MOBSE includes metallicity-dependent prescriptions for mass-loss of massive hot stars. It also accounts for the impact of the electron-scattering Eddington factor on mass-loss. We perform >10^8 simulations of isolated massive binaries, with 12 different metallicities, to study the impact of mass-loss, core-collapse supernovae and common envelope on merging BHBs. Accounting for the dependence of stellar winds on the Eddington factor leads to the formation of black holes (BHs) with mass up to 65 M⊙ at metallicity Z ˜ 0.0002. However, most BHs in merging BHBs have masses ≲ 40 M⊙. We find merging BHBs with mass ratios in the 0.1-1.0 range, even if mass ratios >0.6 are more likely. We predict that systems like GW150914, GW170814 and GW170104 can form only from progenitors with metallicity Z ≤ 0.006, Z ≤ 0.008 and Z ≤ 0.012, respectively. Most merging BHBs have gone through a common envelope phase, but up to ˜17 per cent of merging BHBs at low metallicity did not undergo any common envelope phase. We find a much higher number of mergers from metal-poor progenitors than from metal-rich ones: the number of BHB mergers per unit mass is ˜10^-4 M⊙^-1 at low metallicity (Z = 0.0002-0.002) and drops to ˜10^-7 M⊙^-1 at high metallicity (Z ˜ 0.02).

  18. The Great American Biotic Interchange in birds

    PubMed Central

    Weir, Jason T.; Bermingham, Eldredge; Schluter, Dolph

    2009-01-01

    The sudden exchange of mammals over the land bridge between the previously isolated continents of North and South America is among the most celebrated events in the faunal history of the New World. This exchange resulted in the rapid merging of continental mammalian faunas that had evolved in almost complete isolation from each other for tens of millions of years. Yet, the wider importance of land bridge-mediated interchange to faunal mixing in other groups is poorly known because of the incompleteness of the fossil record. In particular, the ability of birds to fly may have rendered a land bridge unnecessary for faunal merging. Using molecular dating of the unique bird faunas of the two continents, we show that rates of interchange increased dramatically after land bridge completion in tropical forest-specializing groups, which rarely colonize oceanic islands and have poor dispersal abilities across water barriers, but not in groups comprised of habitat generalists. These results support the role of the land bridge in the merging of the tropical forest faunas of North and South America. In contrast to mammals, the direction of traffic across the land bridge in birds was primarily south to north. The event transformed the tropical avifauna of the New World. PMID:19996168

  19. Synthesizer: Expediting synthesis studies from context-free data with information retrieval techniques.

    PubMed

    Gandy, Lisa M; Gumm, Jordan; Fertig, Benjamin; Thessen, Anne; Kennish, Michael J; Chavan, Sameer; Marchionni, Luigi; Xia, Xiaoxin; Shankrit, Shambhavi; Fertig, Elana J

    2017-01-01

    Scientists have unprecedented access to a wide variety of high-quality datasets. These datasets, which are often independently curated, commonly use unstructured spreadsheets to store their data. Standardized annotations are essential to perform synthesis studies across investigators, but are often not used in practice. Therefore, accurately combining records in spreadsheets from differing studies requires tedious and error-prone human curation. These efforts result in a significant time and cost barrier to synthesis research. We propose an information retrieval inspired algorithm, Synthesize, that merges unstructured data automatically based on both column labels and values. Application of the Synthesize algorithm to cancer and ecological datasets had high accuracy (on the order of 85-100%). We further implement Synthesize in an open source web application, Synthesizer (https://github.com/lisagandy/synthesizer). The software accepts input as spreadsheets in comma separated value (CSV) format, visualizes the merged data, and outputs the results as a new spreadsheet. Synthesizer includes an easy to use graphical user interface, which enables the user to finish combining data and obtain perfect accuracy. Future work will allow detection of units to automatically merge continuous data and application of the algorithm to other data formats, including databases.
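
    As a rough sketch of merging "based on both column labels and values", one can score candidate column pairs by label string similarity plus value overlap. The scoring weights, threshold, and toy tables below are arbitrary assumptions for illustration, not the published Synthesize algorithm:

```python
from difflib import SequenceMatcher

def column_similarity(name_a, values_a, name_b, values_b):
    """Score a candidate column match by label similarity plus value overlap."""
    label_score = SequenceMatcher(None, name_a.lower(), name_b.lower()).ratio()
    sa, sb = set(values_a), set(values_b)
    overlap = len(sa & sb) / max(len(sa | sb), 1)   # Jaccard overlap of values
    return 0.5 * label_score + 0.5 * overlap        # assumed equal weighting

def match_columns(table_a, table_b, threshold=0.5):
    """Greedily pair up columns of two spreadsheets (dict: column -> values)."""
    pairs = []
    for a, va in table_a.items():
        best = max(table_b, key=lambda b: column_similarity(a, va, b, table_b[b]))
        if column_similarity(a, va, best, table_b[best]) >= threshold:
            pairs.append((a, best))
    return pairs

# Two independently curated studies with differing annotations:
study1 = {"species": ["crab", "clam"], "site_id": [1, 2]}
study2 = {"Species name": ["crab", "oyster"], "station": [3, 4]}
print(match_columns(study1, study2))   # -> [('species', 'Species name')]
```

    Note that shared values rescue matches that label similarity alone would miss, which is the motivation for combining both signals; a production tool would also prevent two source columns from claiming the same target column.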

  20. Merging of independent condensates: disentangling the Kibble-Zurek mechanism

    NASA Astrophysics Data System (ADS)

    Ville, Jean-Loup; Aidelsburger, Monika; Saint-Jalm, Raphael; Nascimbene, Sylvain; Beugnon, Jerome; Dalibard, Jean

    2017-04-01

    An important step in the study of out-of-equilibrium physics is the Kibble-Zurek theory, which describes a system after a quench through a second-order phase transition. This was studied in our group with a temperature quench across the normal-to-superfluid phase transition in an annular trap geometry, inducing the formation of supercurrents. Their magnitude and direction were detected by measuring spiral patterns resulting from the interference of the ring-shaped condensate with a central reference disk. According to the KZ mechanism, domains of phase are created during the quench, with a characteristic size depending on its duration. In our case this results in a stochastic formation of supercurrents depending on the relative phases of the domains. As a next step of this study, we now design the patches ourselves, thanks to our tunable trapping potential. We control both the number of condensates to be merged (from one to twelve) and their merging time. We report an increase of the vorticity in the ring for an increased number of patches, compatible with a random phase model. We further investigate the time required for the phase to homogenize between two condensates.

  2. A qualitative analysis of the determinants in the choice of a French journal reviewing procedures

    NASA Astrophysics Data System (ADS)

    Morge, Ludovic

    2015-12-01

    Between 1993 and 2010, two French journals (Aster and Didaskalia), coming from different backgrounds but belonging to the same institution, published papers on research in science and technology education. The merging of these journals made it necessary to compare the different reviewing procedures each one used. This merger occurred at a time when research is becoming increasingly international, which partly determines some of the reviewing-procedure choices. For a francophone international journal to survive, it needs to take this internationalization into account in a reasoned manner. The author of this article, editor-in-chief of RDST (Recherches en Didactique des Sciences et des Technologies), the journal resulting from the merger, analyses the social, cultural and pragmatic determinants that shaped the choices made in reviewing procedures. This paper describes how this diversity of factors led to dropping the idea of a standard reviewing procedure that would be valid for all journals.

  3. Wavelength-Adaptive Dehazing Using Histogram Merging-Based Classification for UAV Images

    PubMed Central

    Yoon, Inhye; Jeong, Seokhwa; Jeong, Jaeheon; Seo, Doochun; Paik, Joonki

    2015-01-01

    Since incoming light to an unmanned aerial vehicle (UAV) platform can be scattered by haze and dust in the atmosphere, the acquired image loses the original color and brightness of the subject. Enhancement of hazy images is an important task in improving the visibility of various UAV images. This paper presents a spatially-adaptive dehazing algorithm that merges color histograms with consideration of the wavelength-dependent atmospheric turbidity. Based on the wavelength-adaptive hazy image acquisition model, the proposed dehazing algorithm consists of three steps: (i) image segmentation based on geometric classes; (ii) generation of the context-adaptive transmission map; and (iii) intensity transformation for enhancing a hazy UAV image. The major contribution of the research is a novel hazy UAV image degradation model by considering the wavelength of light sources. In addition, the proposed transmission map provides a theoretical basis to differentiate visually important regions from others based on the turbidity and merged classification results. PMID:25808767

  4. Merging daily sea surface temperature data from multiple satellites using a Bayesian maximum entropy method

    NASA Astrophysics Data System (ADS)

    Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei

    2015-12-01

    Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.
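
    The BME machinery itself is involved, but the underlying idea of blending sensors with different error characteristics and coverage gaps can be illustrated with a plain inverse-variance weighted average. This is a deliberate simplification, not the paper's method, and the sensor values and error levels below are made up:

```python
import math

def blend_sst(pixels, error_stddevs):
    """Inverse-variance weighted blend of co-located SST readings from
    several sensors; None marks a pixel a sensor did not observe."""
    merged = []
    for readings in zip(*pixels):              # same pixel across sensors
        num = den = 0.0
        for value, sigma in zip(readings, error_stddevs):
            if value is not None:
                w = 1.0 / sigma**2             # trust low-error sensors more
                num += w * value
                den += w
        merged.append(num / den if den else math.nan)
    return merged

sensor_a = [20.0, None, 21.0, 22.0]   # e.g. infrared swath with a cloud gap
sensor_b = [20.4, 19.8, None, 22.4]   # e.g. microwave, larger errors
merged = blend_sst([sensor_a, sensor_b], error_stddevs=[0.4, 0.8])
# merged[0] ≈ 20.08: the lower-error sensor is weighted 4x more heavily
```

    Gaps in one sensor are filled by the other, which is the coverage benefit the abstract attributes to multi-satellite fusion; BME additionally exploits spatiotemporal covariance and climatological soft data, which this sketch omits entirely.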

  5. Empirical Analysis of Effects of Bank Mergers and Acquisitions on Small Business Lending in Nigeria

    NASA Astrophysics Data System (ADS)

    Ita, Asuquo Akabom

    2012-11-01

    Mergers and acquisitions are the major instruments of the recent banking reforms in Nigeria. This study considers the effects and implications of these reforms on the lending practices of merged banks to small businesses. These effects were divided into static and dynamic effects (restructuring, direct and external). Data were collected by a cross-sectional research design and subsequently analyzed by the ordinary least squares (OLS) method. The analyses show that bank size, financial characteristics and deposits of non-merged banks are positively related to small business lending, while for the merged banks the reverse is the case. From the above results, it is evident that mergers and acquisitions have not only a static effect on small business lending but also a dynamic effect. Therefore, given the central position of small businesses in the current government policy on industrialization in Nigeria, policy makers should consider both the static and dynamic effects of mergers and acquisitions on small business lending in their policy thrust.

  6. Experimental characterization of a transition from collisionless to collisional interaction between head-on-merging supersonic plasma jets a)

    DOE PAGES

    Moser, Auna L.; Hsu, Scott C.

    2015-05-01

    We present results from experiments on the head-on merging of two supersonic plasma jets in an initially collisionless regime for the counter-streaming ions [A. L. Moser & S. C. Hsu, Phys. Plasmas, submitted (2014)]. The plasma jets are of either an argon/impurity or hydrogen/impurity mixture and are produced by pulsed-power-driven railguns. Based on time- and space-resolved fast-imaging, multi-chord interferometry, and survey-spectroscopy measurements of the overlapping region between the merging jets, we observe that the jets initially interpenetrate, consistent with calculated inter-jet ion collision lengths, which are long. As the jets interpenetrate, a rising mean-charge state causes a rapid decrease in the inter-jet ion collision length. Finally, the interaction becomes collisional and the jets stagnate, eventually producing structures consistent with collisional shocks. These experimental observations can aid in the validation of plasma collisionality and ionization models for plasmas with complex equations of state.

  8. A 3D analysis of the metal distribution in the compact group of galaxies HCG 31

    NASA Astrophysics Data System (ADS)

    Torres-Flores, Sergio; Mendes de Oliveira, Claudia; Alfaro-Cuello, Mayte; Rodrigo Carrasco, Eleazar; de Mello, Duilia; Amram, Philippe

    2015-02-01

    We present new Gemini/GMOS integral field unit observations of the central region of the merging compact group of galaxies HCG 31. Using this data set, we derive the oxygen abundances for the merging galaxies HCG 31A and HCG 31C. We found a smooth metallicity gradient between the nuclei of these galaxies, suggesting a mixing of metals between these objects. These results are confirmed by high-resolution Fabry-Perot data, from which we infer that gas is flowing between HCG 31A and HCG 31C.

  9. Medical group mergers: strategies for success.

    PubMed

    Latham, Will

    2014-01-01

    As consolidation sweeps over the healthcare industry, many medical groups are considering mergers with other groups as an alternative to employment. While mergers are challenging and fraught with risk, an organized approach to the merger process can dramatically increase the odds for success. Merging groups need to consider the benefits they seek from a merger, identify the obstacles that must be overcome to merge, and develop alternatives to overcome those obstacles. This article addresses the benefits to be gained and issues to be addressed, and provides a tested roadmap that has resulted in many successful medical group mergers.

  10. A merged pipe organ binary-analog correlator

    NASA Astrophysics Data System (ADS)

    Miller, R. S.; Berry, M. B.

    1982-02-01

    The design of a 96-stage, programmable binary-analog correlator is described. An array of charge coupled device (CCD) delay lines of differing lengths perform the delay and sum functions. Merging of several CCD channels is employed to reduce the active area. This device architecture allows simplified output detection while maintaining good device performance at higher speeds (5-10 MHz). Experimental results indicate a 50 dB broadband dynamic range and excellent agreement with the theoretical processing gain (19.8 dB) when operated at a 6 MHz sampling frequency as a p-n sequence matched filter.

  11. Experimental evidence for collisional shock formation via two obliquely merging supersonic plasma jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merritt, Elizabeth C.; Adams, Colin S.

    We report spatially resolved measurements of the oblique merging of two supersonic laboratory plasma jets. The jets are formed and launched by pulsed-power-driven railguns using injected argon, and have electron density ∼10^14 cm^−3, electron temperature ≈1.4 eV, ionization fraction near unity, and velocity ≈40 km/s just prior to merging. The jet merging produces a few-cm-thick stagnation layer, as observed in both fast-framing camera images and multi-chord interferometer data, consistent with collisional shock formation [E. C. Merritt et al., Phys. Rev. Lett. 111, 085003 (2013)].

  12. DETECTION OF SHOCK MERGING IN THE CHROMOSPHERE OF A SOLAR PORE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chae, Jongchul; Song, Donguk; Seo, Minju

    2015-06-01

    It was theoretically demonstrated that a shock propagating in the solar atmosphere can overtake another and merge with it. We provide clear observational evidence that shock merging does occur quite often in the chromosphere of sunspots. Using Hα imaging spectral data taken by the Fast Imaging Solar Spectrograph of the 1.6 m New Solar Telescope at the Big Bear Solar Observatory, we construct time–distance maps of line-of-sight velocities along two appropriately chosen cuts in a pore. The maps show a number of alternating redshift and blueshift ridges, and we identify each interface between a preceding redshift ridge and the following blueshift ridge as a shock ridge. Our important finding is that two successive shock ridges often merge with each other. This finding can be explained theoretically by the merging of magneto-acoustic shock waves propagating at lower speeds of about 10 km s^−1 with those propagating at higher speeds of about 16–22 km s^−1. Shock merging is an important nonlinear dynamical process of the solar chromosphere that can bridge the gap between higher-frequency chromospheric oscillations and lower-frequency dynamic phenomena such as fibrils.

  13. Effect of adaptive cruise control systems on mixed traffic flow near an on-ramp

    NASA Astrophysics Data System (ADS)

    Davis, L. C.

    2007-06-01

    Mixed traffic flow consisting of vehicles equipped with adaptive cruise control (ACC) and manually driven vehicles is analyzed using car-following simulations. Simulations of merging from an on-ramp onto a freeway reported in the literature have not thus far demonstrated a substantial positive impact of ACC. In this paper cooperative merging for ACC vehicles is proposed to improve throughput and increase distance traveled in a fixed time. In such a system an ACC vehicle senses not only the preceding vehicle in the same lane but also the vehicle immediately in front in the other lane. Prior to reaching the merge region, the ACC vehicle adjusts its velocity to ensure that a safe gap for merging is obtained. If on-ramp demand is moderate, cooperative merging produces a significant improvement in throughput (20%) and increases distance traveled in 600 s by up to 3.6 km for 50% ACC mixed flow relative to the flow of all-manual vehicles. For large demand, it is shown that autonomous merging with cooperation in the flow of all ACC vehicles leads to throughput limited only by the downstream capacity, which is determined by speed limit and headway time.

  14. Advanced digital modulation: Communication techniques and monolithic GaAs technology

    NASA Technical Reports Server (NTRS)

    Wilson, S. G.; Oliver, J. D., Jr.; Kot, R. C.; Richards, C. R.

    1983-01-01

    Communications theory and practice are merged with state-of-the-art technology in IC fabrication, especially monolithic GaAs technology, to examine the general feasibility of a number of advanced technology digital transmission systems. Satellite-channel models with (1) superior throughput, perhaps 2 Gbps; (2) attractive weight and cost; and (3) high RF power and spectrum efficiency are discussed. Transmission techniques possessing reasonably simple architectures capable of monolithic fabrication at high speeds were surveyed. This included a review of amplitude/phase shift keying (APSK) techniques and the continuous-phase-modulation (CPM) methods, of which MSK represents the simplest case.

  15. Constraining the Merging History of Massive Galaxies Since Redshift 3 Using Close Pairs. I. Major Pairs from Candels and the SDSS

    NASA Astrophysics Data System (ADS)

    Mantha, Kameswara Bharadwaj; McIntosh, Daniel H.; Brennan, Ryan; Cook, Joshua; Kodra, Dritan; Newman, Jeffrey; Somerville, Rachel S.; Barro, Guillermo; Behroozi, Peter; Conselice, Christopher; Dekel, Avishai; Faber, Sandra M.; Closson Ferguson, Henry; Finkelstein, Steven L.; Fontana, Adriano; Galametz, Audrey; Perez-Gonzalez, Pablo; Grogin, Norman A.; Guo, Yicheng; Hathi, Nimish P.; Hopkins, Philip F.; Kartaltepe, Jeyhan S.; Kocevski, Dale; Koekemoer, Anton M.; Koo, David C.; Lee, Seong-Kook; Lotz, Jennifer M.; Lucas, Ray A.; Nayyeri, Hooshang; Peth, Michael; Pforr, Janine; Primack, Joel R.; Santini, Paola; Simmons, Brooke D.; Stefanon, Mauro; Straughn, Amber; Snyder, Gregory F.; Wuyts, Stijn

    2017-01-01

    Major galaxy-galaxy merging can play an important role in the history of massive galaxies (stellar masses > 2E10 Msun) over cosmic time. An important way to measure the impact of major merging is to study close pairs of galaxies with stellar mass or flux ratios between 1 and 4. We improve on the best recent efforts by probing the merging of lower-mass galaxies, anchoring evolutionary trends from five Hubble Space Telescope Legacy fields in the Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (CANDELS) to the nearby universe using the Sloan Digital Sky Survey (SDSS), to measure the fraction of massive galaxies in such pairs during six epochs spanning 0 < z < 3. We find that the pair fraction does not continue to increase at z > 1.5. This implies that major merging may not be as important at high redshifts as previously thought, merger timescales may not be fully understood, or we may be missing evidence of mergers at z~2-3 owing to CANDELS selection effects. Next, we will analyze pair fractions and merging timescales within realistic mocks of CANDELS from a state-of-the-art semi-analytic model (SAM) to better understand and calibrate our empirical results.

  16. Fault Network Reconstruction using Agglomerative Clustering: Applications to South Californian Seismicity

    NASA Astrophysics Data System (ADS)

    Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen

    2014-05-01

    We present applications of a new clustering method for fault network reconstruction based on the spatial distribution of seismicity. Unlike common approaches that start from the simplest large scale and gradually increase the complexity trying to explain the small scales, our method uses a bottom-up approach, by an initial sampling of the small scales and then reducing the complexity. The new approach also exploits the location uncertainty associated with each event in order to obtain a more accurate representation of the spatial probability distribution of the seismicity. For a given dataset, we first construct an agglomerative hierarchical cluster (AHC) tree based on Ward's minimum variance linkage. Such a tree starts out with one cluster and progressively branches out into an increasing number of clusters. To atomize the structure into its constitutive protoclusters, we initialize a Gaussian Mixture Model (GMM) at a given level of the hierarchical clustering tree. We then let the GMM converge using an Expectation Maximization (EM) algorithm. The kernels that become ill defined (fewer than 4 points) at the end of the EM are discarded. By incrementing the number of initialization clusters (by atomizing at increasingly populated levels of the AHC tree) and repeating the procedure above, we are able to determine the maximum number of Gaussian kernels the structure can hold. The kernels in this configuration constitute our protoclusters. In this setting, merging of any pair will lessen the likelihood (calculated over the pdf of the kernels) but in turn will reduce the model's complexity. The information loss/gain of any possible merging can thus be quantified based on the Minimum Description Length (MDL) principle. Similar to an inter-distance matrix, where the matrix element d_{i,j} gives the distance between points i and j, we can construct an MDL gain/loss matrix where m_{i,j} gives the information gain/loss resulting from the merging of kernels i and j. Based on this matrix, merging events resulting in MDL gain are performed in descending order until no gainful merging is possible anymore. We envision that the results of this study could lead to a better understanding of the complex interactions within the Californian fault system and hopefully use the acquired insights for earthquake forecasting.
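
    The final stage (performing mergings with positive gain in descending order until none remains) can be sketched as a greedy loop over a gain matrix. The additive combination of pairwise gains and the toy numbers below are illustrative assumptions, not the paper's MDL computation:

```python
def greedy_merge(gain):
    """Given a symmetric matrix gain[i][j] = description-length saving from
    merging proto-clusters i and j, repeatedly merge the highest-gain pair
    until no merge saves anything (assumed rule: pairwise gains add)."""
    clusters = [{i} for i in range(len(gain))]
    merges = []
    while True:
        best, bi, bj = 0.0, -1, -1
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                g = sum(gain[a][b] for a in clusters[i] for b in clusters[j])
                if g > best:
                    best, bi, bj = g, i, j
        if bi < 0:                         # no gainful merging remains
            return clusters, merges
        merges.append((sorted(clusters[bi]), sorted(clusters[bj]), best))
        clusters[bi] |= clusters[bj]       # merge the winning pair in place
        del clusters[bj]

# Three proto-clusters: 0 and 1 lie on the same fault strand (merging them
# saves description length), 2 is isolated (merging with it costs bits).
gain = [[0, 2.0, -1.0],
        [2.0, 0, -0.5],
        [-1.0, -0.5, 0]]
clusters, merges = greedy_merge(gain)      # clusters -> [{0, 1}, {2}]
```

    The loop stops as soon as every remaining merge would lose information, which is how the MDL criterion trades likelihood against model complexity in the procedure described above.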

  17. Great expectations: patient perspectives and anticipated utility of non-diagnostic genomic-sequencing results.

    PubMed

    Hylind, Robyn; Smith, Maureen; Rasmussen-Torvik, Laura; Aufox, Sharon

    2018-01-01

    The management of secondary findings is a challenge to health-care providers relaying clinical genomic-sequencing results to patients. Understanding patients' expectations from non-diagnostic genomic sequencing could help guide this management. This study interviewed 14 individuals enrolled in the eMERGE (Electronic Medical Records and Genomics) study. Participants in eMERGE consent to undergo non-diagnostic genomic sequencing, receive results, and have results returned to their physicians. The interviews assessed expectations and intended use of results. The majority of interviewees were male (64%) and 43% identified as non-Caucasian. A unique theme identified was that many participants expressed uncertainty about the type of diseases they expected to receive results on, what results they wanted to learn about, and how they intended to use results. Participant uncertainty highlights the complex nature of deciding to undergo genomic testing and a deficiency in genomic knowledge. These results could help improve how genomic sequencing and secondary findings are discussed with patients.

  18. Prospective randomized comparison of rotational angiography with three-dimensional reconstruction and computed tomography merged with electro-anatomical mapping: a two center atrial fibrillation ablation study.

    PubMed

    Anand, Rishi; Gorev, Maxim V; Poghosyan, Hermine; Pothier, Lindsay; Matkins, John; Kotler, Gregory; Moroz, Sarah; Armstrong, James; Nemtsov, Sergei V; Orlov, Michael V

    2016-08-01

    To compare the efficacy and accuracy of rotational angiography with three-dimensional reconstruction (3DATG) merged with electro-anatomical mapping (EAM) vs. CT-EAM. A prospective, randomized, parallel, two-center study conducted in 36 patients (25 men, age 65 ± 10 years) undergoing AF ablation (33% paroxysmal, 67% persistent) guided by 3DATG (group 1) vs. CT (group 2) image fusion with EAM. 3DATG was performed on the Philips Allura Xper FD 10 system. Procedural characteristics including time, radiation exposure, outcome, and navigation accuracy were compared between the two groups. There was no significant difference between the groups in total procedure duration or time spent on the various procedural steps. Minor differences in procedural characteristics were present between the two centers. Segmentation and fusion time for 3DATG-EAM or CT-EAM was short and similar at both centers. Accuracy of navigation guided by either method was high and did not depend on left atrial size. Maintenance of sinus rhythm between the two groups was no different up to 24 months of follow-up. This study did not find 3DATG-EAM image merging superior to CT-EAM fusion for guiding AF ablation. Both merging techniques result in similar navigation accuracy.

  19. Particle-in-cell simulations of collisionless shock formation via head-on merging of two laboratory supersonic plasma jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thoma, C.; Welch, D. R.; Hsu, S. C.

    2013-08-15

    We describe numerical simulations, using the particle-in-cell (PIC) and hybrid-PIC code lsp [T. P. Hughes et al., Phys. Rev. ST Accel. Beams 2, 110401 (1999)], of the head-on merging of two laboratory supersonic plasma jets. The goals of these experiments are to form and study astrophysically relevant collisionless shocks in the laboratory. Using the plasma jet initial conditions (density ∼10^14–10^16 cm^−3, temperature ∼ few eV, and propagation speed ∼20–150 km/s), large-scale simulations of jet propagation demonstrate that interactions between the two jets are essentially collisionless at the merge region. In highly resolved one- and two-dimensional simulations, we show that collisionless shocks are generated by the merging jets when immersed in applied magnetic fields (B ∼ 0.1–1 T). At expected plasma jet speeds of up to 150 km/s, our simulations do not give rise to unmagnetized collisionless shocks, which require much higher velocities. The orientation of the magnetic field and the axial and transverse density gradients of the jets have a strong effect on the nature of the interaction. We compare some of our simulation results with those of previously published PIC simulation studies of collisionless shock formation.

  20. Merger types forming the Virgo cluster in recent gigayears

    NASA Astrophysics Data System (ADS)

    Olchanski, M.; Sorce, J. G.

    2018-06-01

    Context. As our closest cluster-neighbor, the Virgo cluster of galaxies is intensely studied by observers to unravel the mysteries of galaxy evolution within clusters. At this stage, cosmological numerical simulations of the cluster are useful to efficiently test theories and calibrate models. However, it is not trivial to select the perfect simulacrum of the Virgo cluster to fairly compare in detail its observed and simulated galaxy populations, which are affected by the type and history of the cluster. Aims: Determining precisely the properties of Virgo for a later selection of simulated clusters becomes essential. It is still not clear how to access some of these properties, such as the past history of the Virgo cluster, from current observations. Therefore, directly producing effective simulacra of the Virgo cluster is inevitable. Methods: Efficient simulacra of the Virgo cluster can be obtained via simulations that resemble the local Universe down to the cluster scale. In such simulations, Virgo-like halos form in the proper local environment and permit assessing the most probable formation history of the cluster. Studies based on these simulations have already revealed that the Virgo cluster has had a quiet merging history over the last seven gigayears and that the cluster accretes matter along a preferential direction. Results: This paper reveals that, in addition, such Virgo halos have had on average only one merger larger than about a tenth of their mass at redshift zero within the last four gigayears. This second branch (as opposed to the main branch) formed in a given sub-region and merged recently (within the last gigayear). These properties are not shared with a set of random halos within the same mass range. Conclusions: This study extends the validity of the scheme used to produce the Virgo simulacra down to the largest sub-halos of the Virgo cluster.
It opens up great prospects for detailed comparisons with observations, including substructures and markers of past history, to be conducted with a large sample of high resolution "Virgos" and including baryons, in the near future.

  1. MERGING GALAXY CLUSTERS: OFFSET BETWEEN THE SUNYAEV-ZEL'DOVICH EFFECT AND X-RAY PEAKS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Molnar, Sandor M.; Hearn, Nathan C.; Stadel, Joachim G., E-mail: sandor@phys.ntu.edu.tw

    2012-03-20

    Galaxy clusters, the most massive collapsed structures, have been routinely used to determine cosmological parameters. When using clusters for cosmology, the crucial assumption is that they are relaxed. However, subarcminute resolution Sunyaev-Zel'dovich (SZ) effect images compared with high-resolution X-ray images of some clusters show significant offsets between the two peaks. We have carried out self-consistent N-body/hydrodynamical simulations of merging galaxy clusters using FLASH to study these offsets quantitatively. We have found that significant displacements result between the SZ and X-ray peaks for large relative velocities for all masses used in our simulations, as long as the impact parameters were about 100-250 kpc. Our results suggest that the SZ peak coincides with the peak of the pressure times the line-of-sight characteristic length, and not the pressure maximum (as it would for clusters in equilibrium). The peak in the X-ray emission, as expected, coincides with the density maximum of the main cluster. As a consequence, the morphology of the SZ signal, and therefore the offset between the SZ and X-ray peaks, changes with viewing angle. As an application, we compare the morphologies of our simulated images to observed SZ and X-ray images and mass surface densities derived from weak-lensing observations of the merging galaxy cluster CL0152-1357; we find that a large relative velocity of 4800 km s^-1 is necessary to explain the observations. We conclude that an analysis of the morphologies of multi-frequency observations of merging clusters can be used to put meaningful constraints on the initial parameters of the progenitors.

  2. Quiet echo planar imaging for functional and diffusion MRI

    PubMed Central

    Price, Anthony N.; Cordero‐Grande, Lucilio; Malik, Shaihan; Ferrazzi, Giulio; Gaspar, Andreia; Hughes, Emer J.; Christiaens, Daan; McCabe, Laura; Schneider, Torben; Rutherford, Mary A.; Hajnal, Joseph V.

    2017-01-01

    Purpose To develop a purpose‐built quiet echo planar imaging capability for fetal functional and diffusion scans, for which acoustic considerations often compromise efficiency and resolution as well as angular/temporal coverage. Methods The gradient waveforms in multiband‐accelerated single‐shot echo planar imaging sequences have been redesigned to minimize spectral content. This includes a sinusoidal read‐out with a single fundamental frequency, a constant phase encoding gradient, overlapping smoothed CAIPIRINHA blips, and a novel strategy to merge the crushers in diffusion MRI. These changes are then tuned in conjunction with the gradient system frequency response function. Results Two adult experiments illustrate maintained image quality, SNR, and quantitative diffusion values while reducing acoustic noise by up to 12 dB(A). Fetal experiments in 10 subjects covering a range of parameters depict the adaptability and increased efficiency of quiet echo planar imaging. Conclusion Purpose‐built for highly efficient multiband fetal echo planar imaging studies, the presented framework reduces acoustic noise for all echo planar imaging‐based sequences. Full optimization by tuning to the gradient frequency response functions allows for a maximally time‐efficient scan within safe limits. This allows ambitious in‐utero studies such as functional brain imaging with high spatial/temporal resolution and diffusion scans with high angular/spatial resolution to be run in a highly efficient manner at acceptable sound levels. Magn Reson Med 79:1447–1459, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. PMID:28653363

  3. Modeling methods of MEMS micro-speaker with electrostatic working principle

    NASA Astrophysics Data System (ADS)

    Tumpold, D.; Kaltenbacher, M.; Glacer, C.; Nawaz, M.; Dehé, A.

    2013-05-01

    The market for mobile devices like tablets, laptops or mobile phones is increasing rapidly. Device housings get thinner and energy efficiency is more and more important. Micro-Electro-Mechanical-System (MEMS) loudspeakers, fabricated in complementary metal oxide semiconductor (CMOS) compatible technology, merge an energy-efficient driving technology with cost-economical fabrication processes. In most cases, the fabrication of such devices within the design process is a lengthy and costly task. Therefore, the need for computer modeling tools capable of precisely simulating the multi-field interactions is increasing. The accurate modeling of such MEMS devices results in a system of coupled partial differential equations (PDEs) describing the interaction between the electric, mechanical and acoustic fields. For an efficient and accurate solution we apply the Finite Element (FE) method. Thereby, we fully take the nonlinear effects into account: the electrostatic force, a charged moving body (the loaded membrane) in an electric field, geometric nonlinearities, and mechanical contact during snap-in between the loaded membrane and the stator. To efficiently handle the coupling between the mechanical and acoustic fields, we apply Mortar FE techniques, which allow different grid sizes along the coupling interface. Furthermore, we present a recently developed PML (Perfectly Matched Layer) technique, which allows limiting the acoustic computational domain even in the near field without introducing spurious reflections. For computations towards the acoustic far field we use a Kirchhoff-Helmholtz integral (e.g., to compute the directivity pattern). We will present simulations of a MEMS speaker system based on a single-sided driving mechanism as well as an outlook on MEMS speakers using double stator systems (pull-pull systems), and discuss the efficiency (SPL) and quality (THD) of the generated acoustic sound.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boche, H., E-mail: boche@tum.de; Janßen, G., E-mail: gisbert.janssen@tum.de

    We consider one-way quantum state merging and entanglement distillation under compound and arbitrarily varying source models. Regarding quantum compound sources, where the source is memoryless but the source state is an unknown member of a certain set of density matrices, we continue investigations begun in the work of Bjelaković et al. [“Universal quantum state merging,” J. Math. Phys. 54, 032204 (2013)] and determine the classical as well as entanglement cost of state merging. We further investigate quantum state merging and entanglement distillation protocols for arbitrarily varying quantum sources (AVQS). In the AVQS model, the source state is assumed to vary in an arbitrary manner for each source output due to environmental fluctuations or adversarial manipulation. We determine the one-way entanglement distillation capacity for AVQS, where we invoke the famous robustification and elimination techniques introduced by Ahlswede. Regarding quantum state merging for AVQS, we show by example that the robustification- and elimination-based approach generally leads to suboptimal entanglement as well as classical communication rates.

  5. Overview and analysis of the 2016 Gold Run in the Booster and AGS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeno, K.

    2016-09-16

    Run 16 differed from preceding Au runs in that during most of it a 12:6:2 merge was employed in the AGS instead of an 8:4:2 merge. This was done to provide higher bunch intensities for RHIC. Since the approach to providing higher bunch intensities is, and has been, to merge more Booster bunches of the same intensity into one final bunch, detailing the longitudinal aspects of this setup seems quite relevant. So, aside from providing an overview of the Au portion of Run 16, this note also contains a series of emittance measurements in the Booster and AGS. Comparisons of these to similar measurements in previous runs are also made in hopes of gaining a better understanding of what factors contribute to the emittance of a bunch at AGS extraction. The note also tries to provide some context in which to understand the various merge schemes and describes a potential 8-to-1 type merge.

  6. Lead-free epitaxial ferroelectric material integration on semiconducting (100) Nb-doped SrTiO3 for low-power non-volatile memory and efficient ultraviolet ray detection

    PubMed Central

    Kundu, Souvik; Clavel, Michael; Biswas, Pranab; Chen, Bo; Song, Hyun-Cheol; Kumar, Prashant; Halder, Nripendra N.; Hudait, Mantu K.; Banerji, Pallab; Sanghadasa, Mohan; Priya, Shashank

    2015-01-01

    We report lead-free ferroelectric based resistive switching non-volatile memory (NVM) devices with epitaxial (1-x)BaTiO3-xBiFeO3 (x = 0.725) (BT-BFO) film integrated on semiconducting (100) Nb (0.7%) doped SrTiO3 (Nb:STO) substrates. The piezoelectric force microscopy (PFM) measurement at room temperature demonstrated ferroelectricity in the BT-BFO thin film. PFM results also reveal repeatable polarization inversion by poling, manifesting its potential for read-write operation in NVM devices. The electroforming-free and ferroelectric polarization coupled electrical behaviour demonstrated excellent resistive switching with long retention time, cyclic endurance, and low set/reset voltages. X-ray photoelectron spectroscopy was utilized to determine the band alignment at the BT-BFO and Nb:STO heterojunction, and it exhibited staggered band alignment. This heterojunction is found to behave as an efficient ultraviolet photo-detector with low rise and fall times. The architecture also demonstrates half-wave rectification under low and high input signal frequencies, where the output distortion is minimal. The results provide an avenue for an electrical switch that can regulate the pixels in low- or high-frequency images. Combined, this work paves the pathway towards designing future-generation low-power ferroelectric based microelectronic devices by merging both the electrical and photovoltaic properties of BT-BFO materials. PMID:26202946

  7. Lead-free epitaxial ferroelectric material integration on semiconducting (100) Nb-doped SrTiO3 for low-power non-volatile memory and efficient ultraviolet ray detection.

    PubMed

    Kundu, Souvik; Clavel, Michael; Biswas, Pranab; Chen, Bo; Song, Hyun-Cheol; Kumar, Prashant; Halder, Nripendra N; Hudait, Mantu K; Banerji, Pallab; Sanghadasa, Mohan; Priya, Shashank

    2015-07-23

    We report lead-free ferroelectric based resistive switching non-volatile memory (NVM) devices with epitaxial (1-x)BaTiO3-xBiFeO3 (x = 0.725) (BT-BFO) film integrated on semiconducting (100) Nb (0.7%) doped SrTiO3 (Nb:STO) substrates. The piezoelectric force microscopy (PFM) measurement at room temperature demonstrated ferroelectricity in the BT-BFO thin film. PFM results also reveal repeatable polarization inversion by poling, manifesting its potential for read-write operation in NVM devices. The electroforming-free and ferroelectric polarization coupled electrical behaviour demonstrated excellent resistive switching with long retention time, cyclic endurance, and low set/reset voltages. X-ray photoelectron spectroscopy was utilized to determine the band alignment at the BT-BFO and Nb:STO heterojunction, and it exhibited staggered band alignment. This heterojunction is found to behave as an efficient ultraviolet photo-detector with low rise and fall times. The architecture also demonstrates half-wave rectification under low and high input signal frequencies, where the output distortion is minimal. The results provide an avenue for an electrical switch that can regulate the pixels in low- or high-frequency images. Combined, this work paves the pathway towards designing future-generation low-power ferroelectric based microelectronic devices by merging both the electrical and photovoltaic properties of BT-BFO materials.

  8. Algorithms for Large-Scale Astronomical Problems

    DTIC Science & Technology

    2013-08-01

    implemented as a succession of Hadoop MapReduce jobs and sequential programs written in Java. The sampling and splitting stages are implemented as ... one MapReduce job; the partitioning and clustering phases make up another job. The merging stage is implemented as a stand-alone Java program. ... Merging. The merging stage is implemented as a sequential Java program that reads the files with the shell information, which were generated by ...
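    The excerpt above describes a partition-then-merge pipeline in which a sequential merging stage reads per-partition "shell" files. A common way to implement such a cross-partition merge is a disjoint-set (union-find) pass; the sketch below is a hypothetical illustration in that spirit, not the report's actual code (object names, the record layout, and the "same object appears in two partitions' shells" merge criterion are all invented here).

```python
# Hypothetical sketch of a cross-partition merging stage: per-partition
# clustering assigns local cluster ids, each partition writes "shell"
# records for objects near its boundary, and one sequential pass unions
# the local clusters that share a shell object.

class DisjointSet:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def merge_shells(shell_records):
    """shell_records: iterable of (object_id, (partition, local_cluster_id)).
    An object appearing in two partitions' shells glues those partitions'
    local clusters into one global cluster."""
    ds = DisjointSet()
    seen = {}
    for obj, cluster in shell_records:
        if obj in seen:
            ds.union(seen[obj], cluster)
        else:
            seen[obj] = cluster
            ds.find(cluster)  # register singleton clusters too
    return {c: ds.find(c) for c in ds.parent}

records = [
    ("star_17", ("p0", 1)),
    ("star_17", ("p1", 4)),   # same object seen from two partitions
    ("star_99", ("p1", 4)),
    ("star_99", ("p2", 2)),
    ("star_42", ("p3", 7)),
]
labels = merge_shells(records)  # three local clusters collapse into one
```

    The final relabeling is a single dictionary lookup per cluster, which is why the merge can remain a cheap sequential stage even when the clustering itself is distributed.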

  9. Automatic registration of fused lidar/digital imagery (texel images) for three-dimensional image creation

    NASA Astrophysics Data System (ADS)

    Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan

    2015-03-01

    Several photogrammetry-based methods have been proposed that derive three-dimensional (3-D) information from digital images taken from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions of low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of the image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of the position and attitude of the sensor is known, obtained from low-cost global positioning system and inertial measurement unit sensors.

  10. The effect of gas dynamics on semi-analytic modelling of cluster galaxies

    NASA Astrophysics Data System (ADS)

    Saro, A.; De Lucia, G.; Dolag, K.; Borgani, S.

    2008-12-01

    We study the degree to which non-radiative gas dynamics affect the merger histories of haloes along with subsequent predictions from a semi-analytic model (SAM) of galaxy formation. To this aim, we use a sample of dark matter only and non-radiative smooth particle hydrodynamics (SPH) simulations of four massive clusters. The presence of gas-dynamical processes (e.g. ram pressure from the hot intra-cluster atmosphere) makes haloes more fragile in the runs which include gas. This results in a 25 per cent decrease in the total number of subhaloes at z = 0. The impact on the galaxy population predicted by SAMs is complicated by the presence of `orphan' galaxies, i.e. galaxies whose parent substructures are reduced below the resolution limit of the simulation. In the model employed in our study, these galaxies survive (unaffected by the tidal stripping process) for a residual merging time that is computed using a variation of the Chandrasekhar formula. Due to ram-pressure stripping, haloes in gas simulations tend to be less massive than their counterparts in the dark matter simulations. The resulting merging times for satellite galaxies are then longer in these simulations. On the other hand, the presence of gas influences the orbits of haloes making them on average more circular and therefore reducing the estimated merging times with respect to the dark matter only simulation. This effect is particularly significant for the most massive satellites and is (at least in part) responsible for the fact that brightest cluster galaxies in runs with gas have stellar masses which are about 25 per cent larger than those obtained from dark matter only simulations. Our results show that gas dynamics has only a marginal impact on the statistical properties of the galaxy population, but that its impact on the orbits and merging times of haloes strongly influences the assembly of the most massive galaxies.
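    The "residual merging time" above is computed from a variation of the Chandrasekhar formula that the abstract does not spell out. For orientation only, the classical dynamical-friction timescale such estimates build on (Binney & Tremaine's form for a satellite of mass M_sat on a circular orbit of radius r_c with circular speed V_c; the model's exact variation may differ) is

```latex
t_{\rm df} \simeq \frac{1.17}{\ln\Lambda}\,\frac{r_c^{2}\,V_c}{G\,M_{\rm sat}}
```

    where ln Λ is the Coulomb logarithm. The 1/M_sat scaling is why the most massive satellites sink fastest, and why, as the abstract notes, ram-pressure-stripped (lighter) subhaloes get longer estimated merging times.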

  11. Extending the scope of pooled analyses of individual patient biomarker data from heterogeneous laboratory platforms and cohorts using merging algorithms.

    PubMed

    Burke, Órlaith; Benton, Samantha; Szafranski, Pawel; von Dadelszen, Peter; Buhimschi, S Catalin; Cetin, Irene; Chappell, Lucy; Figueras, Francesc; Galindo, Alberto; Herraiz, Ignacio; Holzman, Claudia; Hubel, Carl; Knudsen, Ulla; Kronborg, Camilla; Laivuori, Hannele; Lapaire, Olav; McElrath, Thomas; Moertl, Manfred; Myers, Jenny; Ness, Roberta B; Oliveira, Leandro; Olson, Gayle; Poston, Lucilla; Ris-Stalpers, Carrie; Roberts, James M; Schalekamp-Timmermans, Sarah; Schlembach, Dietmar; Steegers, Eric; Stepan, Holger; Tsatsaris, Vassilis; van der Post, Joris A; Verlohren, Stefan; Villa, Pia M; Williams, David; Zeisler, Harald; Redman, Christopher W G; Staff, Anne Cathrine

    2016-01-01

    A common challenge in medicine, exemplified in the analysis of biomarker data, is that large studies are needed for sufficient statistical power. Often, this may only be achievable by aggregating multiple cohorts. However, different studies may use disparate platforms for laboratory analysis, which can hinder merging. Using circulating placental growth factor (PlGF), a potential biomarker for hypertensive disorders of pregnancy (HDP) such as preeclampsia, as an example, we investigated how such issues can be overcome by inter-platform standardization and merging algorithms. We studied 16,462 pregnancies from 22 study cohorts. PlGF measurements (gestational age ⩾20 weeks), analyzed on one of four platforms (R&D Systems, Alere Triage, Roche Elecsys, or Abbott Architect), were available for 13,429 women. Two merging algorithms, using Z-Score and Multiple of Median transformations, were applied. Best reference curves (BRC), based on merged, transformed PlGF measurements in uncomplicated pregnancy across six gestational age groups, were estimated. Identification of HDP by these PlGF-BRCs was compared to that of platform-specific curves. We demonstrate the feasibility of merging PlGF concentrations from different analytical platforms. Overall, BRC identification of HDP performed at least as well as platform-specific curves. Our method can be extended to any set of biomarkers obtained from different laboratory platforms in any field. Merged biomarker data from multiple studies will improve statistical power and enlarge our understanding of the pathophysiology and management of medical syndromes. Copyright © 2015 International Society for the Study of Hypertension in Pregnancy. Published by Elsevier B.V. All rights reserved.
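    The two merging algorithms named above can be sketched compactly. The snippet below is an illustrative toy (platform names and values are invented, not the study's data): each platform's measurements are transformed to a common scale, by Z-score or by Multiple of Median (MoM), before pooling.

```python
# Toy sketch of inter-platform standardization before pooling: values from
# each platform are rescaled (Z-score or Multiple of Median) so that
# platform-specific calibration differences drop out of the merged set.
from statistics import mean, median, stdev

def z_score(values):
    """Standardize one platform's measurements to mean 0, sd 1."""
    mu, sd = mean(values), stdev(values)
    return [(v - mu) / sd for v in values]

def multiple_of_median(values):
    """Express each measurement as a multiple of the platform median."""
    med = median(values)
    return [v / med for v in values]

def merge_platforms(platform_data, transform):
    """Pool measurements from several platforms after transforming each."""
    merged = []
    for values in platform_data.values():
        merged.extend(transform(values))
    return merged

# Hypothetical cohorts: the same analyte measured on two differently
# calibrated assays (note platform_B reads about half of platform_A).
cohorts = {
    "platform_A": [120.0, 150.0, 180.0, 210.0],
    "platform_B": [60.0, 75.0, 90.0, 105.0],
}
pooled_mom = merge_platforms(cohorts, multiple_of_median)
pooled_z = merge_platforms(cohorts, z_score)
```

    Because the two toy platforms differ only by a scale factor, their MoM-transformed values coincide exactly, which is the point of the transformation: pooled reference curves can then be built on a platform-free scale.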

  12. Merged SAGE II, Ozone_cci and OMPS ozone profiles dataset and evaluation of ozone trends in the stratosphere

    NASA Astrophysics Data System (ADS)

    Tamminen, J.; Sofieva, V.; Kyrölä, E.; Laine, M.; Degenstein, D. A.; Bourassa, A. E.; Roth, C.; Zawada, D.; Weber, M.; Rozanov, A.; Rahpoe, N.; Stiller, G. P.; Laeng, A.; von Clarmann, T.; Walker, K. A.; Sheese, P.; Hubert, D.; Van Roozendael, M.; Zehner, C.; Damadeo, R. P.; Zawodny, J. M.; Kramarova, N. A.; Bhartia, P. K.

    2017-12-01

    We present a merged dataset of ozone profiles from several satellite instruments: SAGE II on ERBS, GOMOS, SCIAMACHY and MIPAS on Envisat, OSIRIS on Odin, ACE-FTS on SCISAT, and OMPS on Suomi-NPP. The merged dataset is created in the framework of the European Space Agency Climate Change Initiative (Ozone_cci) with the aim of analyzing stratospheric ozone trends. For the merged dataset, we used the latest versions of the original ozone datasets. The datasets from the individual instruments have been extensively validated and inter-compared; only those datasets that are in good agreement and do not exhibit significant drifts, with respect to collocated ground-based observations and with respect to each other, are used for merging. The long-term SAGE-CCI-OMPS dataset is created by computing and merging deseasonalized anomalies from the individual instruments. The merged SAGE-CCI-OMPS dataset consists of deseasonalized anomalies of ozone in 10° latitude bands from 90°S to 90°N and from 10 to 50 km in steps of 1 km, covering the period from October 1984 to July 2016. This newly created dataset is used for evaluating ozone trends in the stratosphere through multiple linear regression. Negative ozone trends in the upper stratosphere are observed before 1997 and positive trends are found after 1997. The upper stratospheric trends are statistically significant at mid-latitudes and indicate ozone recovery, as expected from the decrease of stratospheric halogens that started in the middle of the 1990s.
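    The merging strategy described above, deseasonalizing each instrument's record against its own monthly climatology and then averaging the anomalies, can be sketched as follows. The data here are synthetic toy values (the real dataset is additionally gridded in latitude and altitude), but the sketch shows why instruments with different absolute calibrations can still be merged cleanly at the anomaly level.

```python
# Toy sketch of merging by deseasonalized anomalies: each instrument's
# values are referenced to that instrument's own monthly climatology, so
# constant calibration offsets cancel before the records are averaged.
from statistics import mean

def deseasonalize(series):
    """series: dict mapping (year, month) -> value.
    Returns (year, month) -> anomaly (value minus monthly climatology)."""
    monthly = {}
    for (year, month), value in series.items():
        monthly.setdefault(month, []).append(value)
    clim = {m: mean(vals) for m, vals in monthly.items()}
    return {(y, m): v - clim[m] for (y, m), v in series.items()}

def merge_anomalies(instruments):
    """Average the anomalies available from all instruments at each time."""
    pooled = {}
    for series in instruments:
        for key, anom in deseasonalize(series).items():
            pooled.setdefault(key, []).append(anom)
    return {key: mean(anoms) for key, anoms in pooled.items()}

# Two hypothetical instruments observing the same months with very
# different absolute calibrations (units are arbitrary).
inst_a = {(1990, 1): 10.0, (1990, 7): 30.0, (1991, 1): 12.0, (1991, 7): 34.0}
inst_b = {(1990, 1): 100.0, (1990, 7): 300.0, (1991, 1): 104.0, (1991, 7): 296.0}
merged = merge_anomalies([inst_a, inst_b])
```

    Trend regressions are then run on the merged anomaly series rather than on absolute values, avoiding step changes where one instrument's record ends and another's begins.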

  13. Safety evaluation of joint and conventional lane merge configurations for freeway work zones.

    PubMed

    Ishak, Sherif; Qi, Yan; Rayaprolu, Pradeep

    2012-01-01

    Inefficient operation of traffic in work zone areas not only leads to an increase in travel time delays, queue length, and fuel consumption but also increases the number of forced merges and roadway accidents. This study evaluated the safety performance of work zones with a conventional lane merge (CLM) configuration in Louisiana. Analysis of variance (ANOVA) was used to compare the crash rates for accidents involving fatalities, injuries, and property damage only (PDO) in each of the following 4 areas: (1) advance warning area, (2) transition area, (3) work area, and (4) termination area. The analysis showed that the advance warning area had higher fatality, injury, and PDO crash rates when compared to the transition area, work area, and termination area. This finding confirmed the need to make improvements in the advance warning area where merging maneuvers take place. Therefore, a new lane merge configuration, called joint lane merge (JLM), was proposed and its safety performance was examined and compared to the conventional lane merge configuration using a microscopic simulation model (VISSIM), which was calibrated with real-world data from an existing work zone on I-55 and used to simulate a total of 25 different scenarios with different levels of demand and traffic composition. Safety performance was evaluated using 2 surrogate measures: uncomfortable decelerations and speed variance. Statistical analysis was conducted to determine whether the differences in safety performance between both configurations were significant. The safety analysis indicated that JLM outperformed CLM in most cases with low to moderate flow rates and that the percentage of trucks did not have a significant impact on the safety performance of either configuration. 
Though the safety analysis did not clearly indicate which lane merge configuration is safer for the overall work zone area, it was able to identify the possibly associated safety changes within the work zone area under different traffic conditions. Copyright © 2012 Taylor & Francis Group, LLC

  14. A new configurational bias scheme for sampling supramolecular structures

    NASA Astrophysics Data System (ADS)

    De Gernier, Robin; Curk, Tine; Dubacheva, Galina V.; Richter, Ralf P.; Mognetti, Bortolo M.

    2014-12-01

    We present a new simulation scheme that allows efficient sampling of reconfigurable supramolecular structures made of polymeric constructs functionalized by reactive binding sites. The algorithm is based on the configurational bias scheme of Siepmann and Frenkel and is powered by the possibility of changing the topology of the supramolecular network via a non-local Monte Carlo move. This is accomplished by multi-scale modelling that merges coarse-grained simulations, describing the typical polymer conformations, with experimental results accounting for the free energy terms involved in the reactions of the active sites. We test the new algorithm on a system of DNA-coated colloids, for which we compute the hybridisation free energy cost associated with the binding of tethered single-stranded DNAs terminated by short sequences of complementary nucleotides. In order to demonstrate the versatility of our method, we also consider polymers functionalized by receptors that bind a surface decorated by ligands. In particular, we compute the density of states of adsorbed polymers as a function of the number of ligand-receptor complexes formed. Such a quantity can be used to study the conformational properties of adsorbed polymers, which is useful when engineering adsorption with tailored properties. We successfully compare the results with the predictions of a mean-field theory. We believe that the proposed method will be a useful tool to investigate supramolecular structures resulting from direct interactions between functionalized polymers, for which efficient numerical methodologies are still lacking.
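    The configurational-bias machinery referenced above (Siepmann and Frenkel) can be illustrated with a minimal chain-regrowth sketch. The potential, trial-move generator, and parameters below are toy placeholders, not the paper's model; the sketch only shows the core idea of k Boltzmann-weighted trial positions per bead and an accumulated Rosenbluth weight.

```python
# Minimal configurational-bias regrowth sketch: each bead is appended by
# generating K_TRIALS candidate positions, selecting one with probability
# proportional to its Boltzmann factor, and accumulating the Rosenbluth
# weight W that enters the Metropolis acceptance rule (W_new / W_old).
import math
import random

BETA = 1.0      # 1/kT in reduced units (toy value)
K_TRIALS = 6    # trial positions per bead

def trial_energy(position):
    """Toy external potential; a real model would sum pair interactions."""
    return 0.5 * sum(x * x for x in position)

def grow_monomer(prev, rng):
    """Append one bead: k trials, Boltzmann-weighted selection."""
    trials = [tuple(p + rng.uniform(-1, 1) for p in prev)
              for _ in range(K_TRIALS)]
    weights = [math.exp(-BETA * trial_energy(t)) for t in trials]
    chosen = rng.choices(trials, weights=weights, k=1)[0]
    return chosen, sum(weights) / K_TRIALS

def regrow_chain(start, n_beads, rng):
    """Regrow a chain of n_beads; returns the chain and its weight W."""
    chain, weight = [start], 1.0
    for _ in range(n_beads - 1):
        bead, w = grow_monomer(chain[-1], rng)
        chain.append(bead)
        weight *= w
    return chain, weight

rng = random.Random(42)
chain, w = regrow_chain((0.0, 0.0, 0.0), 5, rng)
```

    A non-local topology move, as in the paper, would additionally rewire which binding sites are paired before regrowth, with the same Rosenbluth bookkeeping correcting the acceptance probability.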

  15. Magnetic merging in colliding flux tubes

    NASA Technical Reports Server (NTRS)

    Zweibel, Ellen G.; Rhoads, James E.

    1995-01-01

    We develop an analytical theory of reconnection between colliding, twisted magnetic flux tubes. Our analysis is restricted to direct collisions between parallel tubes and is based on the collision dynamics worked out by Bogdan (1984). We show that there is a range of collision velocities for which neutral point reconnection of the Parker-Sweet type can occur, and a smaller range for which reconnection leads to coalescence. Mean velocities within the solar convection zone are probably significantly greater than the upper limit for coalescence. This suggests that the majority of flux tube collisions do not result in merging, unless the frictional coupling of the tubes to the background flow is extremely strong.

  16. Modeling methods for merging computational and experimental aerodynamic pressure data

    NASA Astrophysics Data System (ADS)

    Haderlie, Jacob C.

    This research describes a process to model surface pressure data sets as a function of wing geometry from computational and wind tunnel sources and then merge them into a single predicted value. The described merging process will enable engineers to integrate these data sets with the goal of utilizing the advantages of each data source while overcoming the limitations of both; this provides a single, combined data set to support analysis and design. The main challenge with this process is accurately representing each data source everywhere on the wing. Additionally, this effort demonstrates methods to model wind tunnel pressure data as a function of angle of attack as an initial step towards a merging process that uses both location on the wing and flow conditions (e.g., angle of attack, flow velocity, or Reynolds number) as independent variables. This surrogate model of pressure as a function of angle of attack can be useful for engineers who need to predict the location of zero-order discontinuities, e.g., flow separation or normal shocks. Because, to the author's best knowledge, there is no published, well-established merging method for aerodynamic pressure data (here, the coefficient of pressure Cp), this work identifies promising modeling and merging methods, and then makes a critical comparison of these methods. Surrogate models represent the pressure data for both data sets. Cubic B-spline surrogate models represent the computational simulation results. Machine learning and multi-fidelity surrogate models represent the experimental data. This research compares three surrogates for the experimental data (sequential--a.k.a. online--Gaussian processes, batch Gaussian processes, and multi-fidelity additive corrector) on the merits of accuracy and computational cost.
The Gaussian process (GP) methods employ cubic B-spline CFD surrogates as a model basis function to build a surrogate model of the WT data, and this usage of the CFD surrogate in building the WT data could serve as a "merging" because the resulting WT pressure prediction uses information from both sources. In the GP approach, this model basis function concept seems to place more "weight" on the Cp values from the wind tunnel (WT) because the GP surrogate uses the CFD to approximate the WT data values. Conversely, the computationally inexpensive additive corrector method uses the CFD B-spline surrogate to define the shape of the spanwise distribution of the Cp while minimizing prediction error at all spanwise locations for a given arc length position; this, too, combines information from both sources to make a prediction of the 2-D WT-based Cp distribution, but the additive corrector approach gives more weight to the CFD prediction than to the WT data. Three surrogate models of the experimental data as a function of angle of attack are also compared for accuracy and computational cost. These surrogates are a single Gaussian process model (a single "expert"), product of experts, and generalized product of experts. The merging approach provides a single pressure distribution that combines experimental and computational data. The batch Gaussian process method provides a relatively accurate surrogate that is computationally acceptable, and can receive wind tunnel data from port locations that are not necessarily parallel to a variable direction. On the other hand, the sequential Gaussian process and additive corrector methods must receive a sufficient number of data points aligned with one direction, e.g., from pressure port bands (tap rows) aligned with the freestream. The generalized product of experts best represents wind tunnel pressure as a function of angle of attack, but at higher computational cost than the single expert approach. 
The format of the application data from computational and experimental sources in this work precluded the merging process from including flow condition variables (e.g., angle of attack) in the independent variables, so the merging process is only conducted in the wing geometry variables of arc length and span. The merging process of Cp data allows a more "hands-off" approach to aircraft design and analysis (i.e., fewer engineers are needed to debate the Cp distribution shape) and generates Cp predictions at any location on the wing. However, these benefits come at the cost of engineer time (learning how to build surrogates), computational time in constructing the surrogates, and surrogate accuracy (surrogates introduce error into data predictions). This dissertation effort used the Trap Wing / First AIAA CFD High-Lift Prediction Workshop as a relevant transonic wing with a multi-element high-lift system, and this work identified that the batch GP model for the WT data and the B-spline surrogate for the CFD might best be combined using expert belief weights to describe Cp as a function of location on the wing element surface. (Abstract shortened by ProQuest.).
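The additive-corrector idea described above can be sketched in a few lines. Everything below (the cosine stand-in for a B-spline CFD surrogate, the constant offset, the port locations) is hypothetical and only illustrates how a correction fitted at wind-tunnel ports shifts the CFD-defined shape toward the WT data:

```python
import numpy as np

# Hypothetical 1-D illustration of the additive-corrector merge: the CFD
# surrogate defines the shape of the Cp distribution, and a correction
# term (here a constant offset, a simplification of the method above)
# minimizes error against sparse wind-tunnel port data.

def additive_corrector(cfd_surrogate, wt_x, wt_cp):
    """Return a merged predictor: CFD shape plus an offset chosen to
    minimize squared error at the wind-tunnel ports."""
    offset = np.mean(wt_cp - cfd_surrogate(wt_x))
    return lambda x: cfd_surrogate(x) + offset

# Toy example: "wind tunnel" samples offset from a biased CFD surrogate.
cfd = lambda x: -np.cos(2 * np.pi * x)     # stand-in for a B-spline surrogate
wt_x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
wt_cp = cfd(wt_x) + 0.05                   # WT data offset by +0.05

merged = additive_corrector(cfd, wt_x, wt_cp)
print(merged(0.5) - cfd(0.5))              # recovered offset, ~0.05
```

Because the correction is fitted to the WT residuals while the shape comes from the CFD surrogate, the prediction combines information from both sources, with more weight on the CFD shape, as described above.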

  17. Efficient image data distribution and management with application to web caching architectures

    NASA Astrophysics Data System (ADS)

    Han, Keesook J.; Suter, Bruce W.

    2003-03-01

    We present compact image data structures and associated packet delivery techniques for effective Web caching architectures. Presently, images on a web page are stored inefficiently, using a single image per file. Our approach is to use clustering to merge similar images into a single file in order to exploit the redundancy between images. Our studies indicate that a 30-50% image data size reduction can be achieved by eliminating the redundancies of color indexes. Attached to this file is new metadata to permit easy extraction of the individual images. This approach will permit a more efficient use of the cache, since a shorter list of cache references will be required. Packet and transmission delays can be reduced by 50% by eliminating redundant TCP/IP headers and connection time. Thus, this innovative paradigm for the elimination of redundancy may provide valuable benefits for optimizing packet delivery in IP networks by reducing latency and minimizing the bandwidth requirements.
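The "single file plus metadata for easy extraction" layout could look like the following toy packer; the binary header format is invented for illustration and is not the authors' format:

```python
import struct

# Hypothetical packing of several image blobs into one cache file: a small
# header records each blob's byte offset and length so an individual image
# can be extracted without parsing the whole bundle.

def pack_images(blobs):
    header = struct.pack("<I", len(blobs))       # number of images
    offset = 4 + 8 * len(blobs)                  # body starts after the header
    body = b""
    for blob in blobs:
        header += struct.pack("<II", offset, len(blob))
        offset += len(blob)
        body += blob
    return header + body

def extract_image(packed, index):
    off, length = struct.unpack_from("<II", packed, 4 + 8 * index)
    return packed[off:off + length]

blobs = [b"red-image-data", b"blue-image-data"]
packed = pack_images(blobs)
print(extract_image(packed, 1))  # b'blue-image-data'
```

A cache then stores one reference to `packed` instead of one per image, which is the shorter reference list the abstract mentions.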

  18. How choices in exchange design for states could affect insurance premiums and levels of coverage.

    PubMed

    Blavin, Fredric; Blumberg, Linda J; Buettgens, Matthew; Holahan, John; McMorrow, Stacey

    2012-02-01

    The Affordable Care Act gives states the option to create health insurance exchanges from which individuals and small employers can purchase health insurance. States have considerable flexibility in how they design and implement these exchanges. We analyze several key design options being considered, using the Urban Institute's Health Insurance Policy Simulation Model: creating separate versus merged small-group and nongroup markets, eliminating age rating in these markets, removing the small-employer credit, and setting the maximum number of employees for firms in the small-group market at 50 versus 100 workers. Among our findings are that merging the small-group and nongroup markets would result in 1.7 million more people nationwide participating in the exchanges and, because of greater affordability of nongroup coverage, approximately 1.0 million more people being insured than if the risk pools were not merged. The various options generate relatively small differences in overall coverage and cost, although some, such as reducing age rating bands, would result in higher costs for some people while lowering costs for others. These cost effects would be most apparent among people who purchase coverage without federal subsidies. On the whole, we conclude that states can make these design choices based on local support and preferences without dramatic repercussions for overall coverage and cost outcomes.

  19. A nonlinear merging method of analog and photon signals for CO2 detection in lower altitudes using differential absorption lidar

    NASA Astrophysics Data System (ADS)

    Qi, Zhong; Zhang, Teng; Han, Ge; Li, Dongcang; Ma, Xin; Gong, Wei

    2017-04-01

    The current acquisition system of a lidar detects return signals in two modes (i.e., analog and photon counting); as a result, atmospheric parameters in the lower range (below 1500 m) must be retrieved from the analog signal, and those in the upper range (higher than 1100 m) from the photon-counting signal. Hence, a lidar cannot obtain a continuous column of the concentrations of atmospheric components. For carbon cycle studies, the range-resolved concentration of atmospheric CO2 in the lower troposphere (below 1500 m) is one of the most significant parameters that should be determined. This study proposes a novel gluing method that merges the CO2 signal detected by ground-based DIAL in the lower troposphere. Through simulation experiments, the best uniform approximation polynomial theorem is utilized to determine the transformation coefficient that correlates signals from the two modes. The experimental results (both simulation experiments and actual measurements of signals) show that the proposed method is suitable and feasible for merging data in the region below 1500 m. Hence, the photon-counting signals, whose SNRs are higher than those of the analog signals, can be used to retrieve atmospheric parameters at an increased near range, facilitating atmospheric soundings using ground-based lidar in various fields.
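A minimal sketch of signal gluing, assuming a linear analog-to-photon transformation fitted in the overlap region (the paper determines the transformation coefficient via the best uniform approximation polynomial theorem; ordinary least squares is used here only for illustration, and all numbers are synthetic):

```python
import numpy as np

# In the range interval where both modes are valid (here 1100-1500 m),
# fit a linear transform mapping the analog signal onto the
# photon-counting scale, then splice the two records together.

def glue(rng, analog, photon, overlap=(1100.0, 1500.0), switch=1300.0):
    m = (rng >= overlap[0]) & (rng <= overlap[1])
    a, b = np.polyfit(analog[m], photon[m], 1)   # transformation coefficients
    scaled = a * analog + b                      # analog on the photon scale
    return np.where(rng < switch, scaled, photon)

rng = np.linspace(500.0, 3000.0, 26)             # range bins, 100 m apart
photon = 1e4 / rng**2                            # idealized range-squared falloff
analog = 2.0 * photon + 0.3                      # analog differs by gain and offset

glued = glue(rng, analog, photon)
# The merged profile now sits on the photon-counting scale at all ranges.
```

With a perfectly linear relation between the two modes, as in this toy case, the glued profile matches the photon-counting signal everywhere; real signals would leave residual noise in the near range.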

  20. Fusion of GFP and phase contrast images with complex shearlet transform and Haar wavelet-based energy rule.

    PubMed

    Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren

    2018-03-14

    Image fusion techniques can integrate the information from different imaging modalities to get a composite image which is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of protein and genome expression. The fusion method of GFP and phase contrast images based on complex shearlet transform (CST) is proposed in this paper. Firstly the GFP image is converted to the IHS model and its intensity component is obtained. Secondly the CST is performed on the intensity component and the phase contrast image to acquire the low-frequency subbands and the high-frequency subbands. Then the high-frequency subbands are merged by the absolute-maximum rule while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally the fused image is obtained by performing the inverse CST on the merged subbands and conducting IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method can provide better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
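The two merging rules can be illustrated on plain arrays standing in for shearlet subbands. The 3x3 local-energy window below is an assumption for illustration; the paper's HWE rule computes energies from Haar wavelet coefficients:

```python
import numpy as np

def fuse_highfreq(a, b):
    """Absolute-maximum rule: keep the coefficient with larger magnitude."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

def local_energy(x):
    """Sum of squared values over a 3x3 neighbourhood (edge-padded)."""
    p = np.pad(x ** 2, 1, mode="edge")
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

def fuse_lowfreq(a, b):
    """Energy rule: pick, per pixel, the source with higher local energy."""
    return np.where(local_energy(a) >= local_energy(b), a, b)

print(fuse_highfreq(np.array([[1.0, -5.0]]), np.array([[2.0, 3.0]])))
```

The high-frequency rule preserves edges and texture from whichever modality expresses them more strongly, while the energy rule keeps the low-frequency content of the locally more informative source.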

  1. Merged or monolithic? Using machine-learning to reconstruct the dynamical history of simulated star clusters

    NASA Astrophysics Data System (ADS)

    Pasquato, Mario; Chung, Chul

    2016-05-01

    Context. Machine-learning (ML) solves problems by learning patterns from data with limited or no human guidance. In astronomy, ML is mainly applied to large observational datasets, e.g. for morphological galaxy classification. Aims: We apply ML to gravitational N-body simulations of star clusters that are either formed by merging two progenitors or evolved in isolation, planning to later identify globular clusters (GCs) that may have a history of merging from observational data. Methods: We create mock-observations from simulated GCs, from which we measure a set of parameters (also called features in the machine-learning field). After carrying out dimensionality reduction on the feature space, the resulting datapoints are fed into various classification algorithms. Using repeated random subsampling validation, we check whether the groups identified by the algorithms correspond to the underlying physical distinction between mergers and monolithically evolved simulations. Results: The three algorithms we considered (C5.0 trees, k-nearest neighbour, and support-vector machines) all achieve a test misclassification rate of about 10% without parameter tuning, with support-vector machines slightly outperforming the others. The first principal component of feature space correlates with cluster concentration. If we exclude it from the regression, the performance of the algorithms is only slightly reduced.
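The validation scheme (repeated random subsampling around a simple classifier) can be sketched as follows, with synthetic two-class "features" in place of the mock-observation measurements and a 1-nearest-neighbour classifier standing in for the algorithms named above:

```python
import numpy as np

# Synthetic feature vectors for two classes (toy stand-ins for the
# merger / monolithic labels; real features come from mock observations).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)   # 0 = monolithic, 1 = merger (toy labels)

def knn1(train_X, train_y, test_X):
    """1-nearest-neighbour prediction by brute-force squared distance."""
    d = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(-1)
    return train_y[d.argmin(axis=1)]

rates = []
for _ in range(20):                  # repeated random subsampling validation
    idx = rng.permutation(len(y))
    tr, te = idx[:70], idx[70:]      # random 70/30 train-test split
    pred = knn1(X[tr], y[tr], X[te])
    rates.append((pred != y[te]).mean())

print(f"mean misclassification rate: {np.mean(rates):.2f}")
```

Averaging the misclassification rate over many random splits, rather than over one fixed split, is what makes the quoted ~10% test error a stable estimate.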

  2. Operational Art and its Relevance to Army Logisticians

    DTIC Science & Technology

    1999-12-06

    essence. Chapter two describes how the operational planner synthesizes the problem environment and merges the "art" and "science" of war through...catalyst that merges application of science and theory into "art." The planner must also possess personal attributes that allow application to be...synergize (instead of dictate) the merging of science and art. be an expert at applying MDMP...not enslaved by it. absorb the construct of operational art

  3. Challenges and success factors in university mergers and academic integrations.

    PubMed

    Ahmadvand, Alireza; Heidari, Kazem; Hosseini, Hamed; Majdzadeh, Reza

    2012-12-01

    There are different reasons for mergers among higher education institutes. In October 2010 the Iran University of Medical Sciences (IUMS) merged with two other medical universities in Tehran. In this study, we aim to review the literature on academic integrations and university mergers to call attention to challenges and reasons for the success or failure of university mergers. We searched for studies that pertained to university or college mergers, amalgamation, dissolution, or acquisition in the following databases: PubMed, Emerald, Web of Science, Scopus, and Ovid, without any limitations on country, language, or publication date. Two reviewers selected the search results in a joint meeting. We used content analysis methodology and held three sessions for consensus building on incompatibilities. We reviewed a total of 32 documents. The "merger" phenomenon attracted considerable attention worldwide from the 1970s until the 1990s. The most important reasons for merging were to boost efficiency and effectiveness, deal with organizational fragmentation, broaden student access and implement equity strategies, increase government control on higher education systems, decentralization, and to establish larger organizations. Cultural incompatibility, different academic standards, and geographical distance may prevent a merger. In some countries, geographical distance has caused an increase in existing cultural, social, and academic tensions. The decision and process of a merger is a broad, multi-dimensional change for an academic organization. Managers who are unaware of the fact that mergers are an evolutionary process with different stages may cause challenges and problems during organizational changes. Socio-cultural integration acts as an important stage in the post-merger process. It is possible for newly-formed schools, departments, and research centers to be evaluated as case studies in future research.

  4. Reducing Spatial Data Complexity for Classification Models

    NASA Astrophysics Data System (ADS)

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-01

    Intelligent data analytics gradually becomes a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques ranging from data sampling up to density retention models attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by the electrostatic field interaction where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data that acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches yet with the maximum retention of the original class densities and hence the classification performance. 
PLDC leaves the reduced dataset with soft accumulative class weights, allowing for efficient online updates, and, as shown in a series of experiments, when coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at comparable compression levels.
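A toy version of the condensation idea, in which nearby same-class points collapse into weighted centroids that accumulate class weight, might look like the following; this is a crude stand-in for PLDC's field-driven dynamics, with an assumed fixed merge radius:

```python
import numpy as np

# Repeatedly merge any two points closer than `radius` into their
# weighted centroid, accumulating their weights, until no pair remains.

def condense(points, weights, radius):
    pts, w = points.copy(), weights.copy()
    merged = True
    while merged:
        merged = False
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                if np.linalg.norm(pts[i] - pts[j]) < radius:
                    tot = w[i] + w[j]
                    pts[i] = (w[i] * pts[i] + w[j] * pts[j]) / tot
                    w[i] = tot
                    pts = np.delete(pts, j, axis=0)
                    w = np.delete(w, j)
                    merged = True
                    break
            if merged:
                break
    return pts, w

pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
p, w = condense(pts, np.ones(3), radius=0.5)
print(len(p), w)   # two points remain; the merged one carries weight 2
```

The accumulated weights play the role of the soft class weights the abstract describes: a downstream density classifier can use them in place of the original point counts.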

  5. Properties of Merger Shocks in Merging Galaxy Clusters

    NASA Astrophysics Data System (ADS)

    Ha, Ji-Hoon; Ryu, Dongsu; Kang, Hyesung

    2018-04-01

    X-ray shocks and radio relics detected in the cluster outskirts are commonly interpreted as shocks induced by mergers of subclumps. We study the properties of merger shocks in merging galaxy clusters, using a set of cosmological simulations for the large-scale structure formation of the universe. As a representative case, we focus on the simulated clusters that undergo almost head-on collisions with mass ratio ∼2. Due to the turbulent nature of the intracluster medium, shock surfaces are not smooth, but composed of shocks with different Mach numbers. As the merger shocks expand outward from the core to the outskirts, the average Mach number, ⟨M_s⟩, increases in time. We suggest that the shocks propagating along the merger axis could be manifested as X-ray shocks and/or radio relics. The kinetic energy flux through the shocks, F_φ, peaks at ∼1 Gyr after their initial launching, or at ∼1-2 Mpc from the core. Because of the Mach-number-dependent model adopted here for the cosmic-ray (CR) acceleration efficiency, the CR-energy-weighted Mach number is higher, with ⟨M_s⟩_CR ∼ 3-4, compared to the kinetic-energy-weighted Mach number, ⟨M_s⟩_φ ∼ 2-3. The most energetic shocks are to be found ahead of the lighter dark matter (DM) clump, while the heavier DM clump is located on the opposite side of the clusters. Although our study is limited to the merger case considered, the results, such as the means and variations of shock properties and their time evolution, could be compared with the observed characteristics of merger shocks, constraining interpretations of relevant observations.
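The distinction between the two weighted Mach numbers can be illustrated with toy numbers; the efficiency curve below is assumed for illustration only and is not the model used in the paper:

```python
import numpy as np

# A shock population with its kinetic energy flux, averaged with
# kinetic-energy weights versus CR-energy weights. Because the assumed
# CR acceleration efficiency rises with Mach number, the CR weighting
# favours the higher-Mach shocks.

mach = np.array([1.5, 2.0, 2.5, 3.0, 4.0])
flux = np.array([5.0, 4.0, 3.0, 2.0, 1.0])   # kinetic energy flux per shock

def cr_efficiency(m):
    # hypothetical efficiency curve, increasing with Mach number
    return np.clip((m - 1.0) ** 2 / 10.0, 0.0, 0.2)

m_kin = np.average(mach, weights=flux)
m_cr = np.average(mach, weights=flux * cr_efficiency(mach))
print(m_kin, m_cr)   # the CR-weighted mean exceeds the kinetic-energy mean
```

This is the mechanism behind ⟨M_s⟩_CR exceeding ⟨M_s⟩_φ in the abstract: the same shock population looks "stronger" when each shock is weighted by how efficiently it accelerates cosmic rays.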

  6. Hypersonic merged layer blunt body flows with wakes

    NASA Technical Reports Server (NTRS)

    Jain, Amolak C.; Dahm, Werner K.

    1991-01-01

    An attempt is made here to understand the basic physics of the flowfield with wake on a blunt body of revolution under hypersonic rarefied conditions. A merged layer model of flow is envisioned. Full steady-state Navier-Stokes equations in a spherical polar coordinate system are computed from the surface, with slip and temperature jump conditions, to the free stream by the Accelerated Successive Replacement method of numerical integration. The analysis is developed for bodies of arbitrary shape, but actual computations have been carried out for a sphere and a sphere-cone body. Particular attention is paid to setting the limits of the onset of separation, wake closure, shear-layer impingement, and the formation and dissipation of shocks in the flowfield. The validity of the results is established by comparing the present results for the sphere with the corresponding results of the SOFIA code in the common region of their validity, and with experimental data.

  7. State-space adjustment of radar rainfall and skill score evaluation of stochastic volume forecasts in urban drainage systems.

    PubMed

    Löwe, Roland; Mikkelsen, Peter Steen; Rasmussen, Michael R; Madsen, Henrik

    2013-01-01

    Merging of radar rainfall data with rain gauge measurements is a common approach to overcome problems in deriving rain intensities from radar measurements. We extend an existing approach for adjustment of C-band radar data using state-space models and use the resulting rainfall intensities as input for forecasting outflow from two catchments in the Copenhagen area. Stochastic grey-box models are applied to create the runoff forecasts, providing us with not only a point forecast but also a quantification of the forecast uncertainty. Evaluating the results, we show that the adjusted radar data improve runoff forecasts compared with both the original radar data and rain gauge measurements used as forecast input. Combining the data merging approach with short-term rainfall forecasting algorithms may result in further improved runoff forecasts that can be used in real time control.
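A scalar state-space adjustment in this spirit can be sketched with a one-state Kalman filter that tracks a multiplicative radar-gauge bias; this is purely illustrative (the paper's models and data are richer), and all numbers are synthetic:

```python
import numpy as np

# Track a slowly varying bias b_t in the relation gauge ≈ b_t * radar.
# The bias follows a random walk (process noise q); gauge readings are
# the observations (measurement noise r).

def kalman_bias(radar, gauge, q=1e-3, r=0.05):
    b, p = 1.0, 1.0                      # initial state estimate and variance
    out = []
    for z_radar, z_gauge in zip(radar, gauge):
        p += q                           # predict: random-walk bias
        h = z_radar                      # observation model: gauge = b * radar
        k = p * h / (h * h * p + r)      # Kalman gain
        b += k * (z_gauge - h * b)       # update with the gauge innovation
        p *= (1 - k * h)
        out.append(b)
    return np.array(out)

radar = np.full(50, 2.0)                 # radar reports 2 mm/h
gauge = np.full(50, 1.4)                 # gauges report 1.4 mm/h (true bias 0.7)
bias = kalman_bias(radar, gauge)
print(bias[-1])                          # converges towards 0.7
```

Multiplying the raw radar field by the filtered bias yields adjusted intensities of the kind used as forecast input above.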

  8. Cold pool organization and the merging of convective updrafts in a Large Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Glenn, I. B.; Krueger, S. K.

    2016-12-01

    Cold pool organization is a process that accelerates the transition from shallow to deep cumulus convection, and leads to higher deep convective cloud top heights. The mechanism by which cold pool organization enhances convection remains poorly understood, but the basic idea is that since precipitation evaporation and a low equivalent potential temperature in the mid-troposphere lead to strong cold pools, the net cold pool effect can be accounted for in a cumulus parameterization as a relationship involving those factors. Understanding the actual physical mechanism at work will help quantify the strength of the relationship between cold pools and enhanced deep convection. One proposed mechanism of enhancement is that cold pool organization leads to reduced distances between updrafts, creating a local environment more conducive to convection as updrafts entrain parcels of air recently detrained by their neighbors. We take this hypothesis one step further and propose that convective updrafts actually merge, not just exchange recently processed air. Because entrainment and detrainment around an updraft draws nearby air in or pushes it out, respectively, they act like dynamic flow sources and sinks, drawing each other in or pushing each other away. The acceleration is proportional to the inverse square of the distance between two updrafts, so a small reduction in distance can make a big difference in the rate of merging. We have shown in previous research how merging can be seen as collisions between different updraft air parcels using Lagrangian Parcel Trajectories (LPTs) released in a Large Eddy Simulation (LES) during a period with organized deep convection. Now we use a Eulerian frame of reference to examine the updraft merging process during the transition from shallow to organized deep convection. We use a case based on the Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) for our LES. 
We directly measure the rate of entrainment and the properties of the entrained air for all convective updrafts in the simulation. We use a tracking algorithm to define merging between convective updrafts. We will show the rate of merging as the transition between shallow and deep convection occurs and the different distributions of entrainment rate and ultimate detrainment height of merged and non-merged updrafts.
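A simple overlap-based merger criterion for tracked updrafts, as one might use inside such a tracking algorithm, could look like the following; the criterion itself is assumed for illustration and is not necessarily the one used in the study:

```python
import numpy as np

# Two labeled updraft regions at time t that both overlap the same
# labeled region at time t+1 are counted as having merged.

def merged_pairs(labels_t, labels_t1):
    """Return sets of time-t labels that map onto a single time-t+1 object
    (label 0 is background)."""
    events = []
    for obj in np.unique(labels_t1):
        if obj == 0:
            continue
        parents = set(np.unique(labels_t[labels_t1 == obj])) - {0}
        if len(parents) > 1:
            events.append(parents)
    return events

labels_t = np.array([[1, 1, 0, 2, 2],
                     [1, 1, 0, 2, 2]])
labels_t1 = np.array([[3, 3, 3, 3, 3],
                      [3, 3, 3, 3, 3]])
print(merged_pairs(labels_t, labels_t1))   # [{1, 2}]
```

Counting such events per timestep gives a merging rate that can be tracked across the shallow-to-deep transition, as described above.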

  9. Formation and sustainment of field reversed configuration (FRC) plasmas by spheromak merging and neutral beam injection

    DOE PAGES

    Yamada, Masaaki

    2016-01-01

    This study briefly reviews a compact toroid reactor concept that addresses critical issues for forming, stabilizing and sustaining a field reversed configuration (FRC) with the use of plasma merging, plasma shaping, conducting shells, and neutral beam injection (NBI). In this concept, an FRC plasma is generated by the merging of counter-helicity spheromaks produced by inductive discharges and sustained by the use of neutral beam injection (NBI). Plasma shaping, conducting shells, and the NBI would provide stabilization to global MHD modes. Although a specific FRC reactor design is outside the scope of the present paper, an example of a promising FRC reactor program is summarized based on the previously developed SPIRIT (Self-organized Plasmas by Induction, Reconnection and Injection Techniques) concept in order to connect this concept to the recently achieved High Performance FRC plasmas obtained by Tri Alpha Energy [Binderbauer et al., Phys. Plasmas 22, 056110 (2015)]. This paper includes a brief summary of the previous concept paper by M. Yamada et al., Plasma Fusion Res. 2, 004 (2007) and the recent experimental results from MRX.

  10. Merging Features and Optical-Near Infrared Color Gradients of Early-type Galaxies

    NASA Astrophysics Data System (ADS)

    Kim, Duho; Im, M.

    2012-01-01

    It has been suggested that merging plays an important role in the formation and evolution of early-type galaxies (ETGs). Optical-NIR color gradients of ETGs in high density environments are found to be less steep than those of ETGs in low density environments, hinting at frequent merger activities of ETGs in high density environments. In order to examine if the flat color gradients are the result of dry mergers, we studied the relations between merging features, color gradients, and environments of 198 low redshift ETGs selected from Sloan Digital Sky Survey (SDSS) Stripe82. Near Infrared (NIR) images are taken from the UKIRT Infrared Deep Sky Survey (UKIDSS) Large Area Survey (LAS). Color (r-K) gradients of ETGs with tidal features are slightly flatter than those of relaxed ETGs, but the difference is not significant. We found that massive (>10^11.3 M⊙) relaxed ETGs have 2.5 times less scattered color gradients than less massive ETGs. The less scattered color gradients of massive ETGs could be evidence of dry merger processes in the evolution of massive ETGs. We found no relation between color gradients of ETGs and their environments.

  11. Formation and sustainment of field reversed configuration (FRC) plasmas by spheromak merging and neutral beam injection

    NASA Astrophysics Data System (ADS)

    Yamada, Masaaki

    2016-03-01

    This paper briefly reviews a compact toroid reactor concept that addresses critical issues for forming, stabilizing and sustaining a field reversed configuration (FRC) with the use of plasma merging, plasma shaping, conducting shells, and neutral beam injection (NBI). In this concept, an FRC plasma is generated by the merging of counter-helicity spheromaks produced by inductive discharges and sustained by the use of neutral beam injection (NBI). Plasma shaping, conducting shells, and the NBI would provide stabilization to global MHD modes. Although a specific FRC reactor design is outside the scope of the present paper, an example of a promising FRC reactor program is summarized based on the previously developed SPIRIT (Self-organized Plasmas by Induction, Reconnection and Injection Techniques) concept in order to connect this concept to the recently achieved High Performance FRC plasmas obtained by Tri Alpha Energy [Binderbauer et al., Phys. Plasmas 22, 056110 (2015)]. This paper includes a brief summary of the previous concept paper by M. Yamada et al., Plasma Fusion Res. 2, 004 (2007) and the recent experimental results from MRX.

  12. Laboratory plasma physics experiments using merging supersonic plasma jets

    DOE PAGES

    Hsu, S. C.; Moser, A. L.; Merritt, E. C.; ...

    2015-04-01

    We describe a laboratory plasma physics experiment at Los Alamos National Laboratory that uses two merging supersonic plasma jets formed and launched by pulsed-power-driven railguns. The jets can be formed using any atomic species or mixture available in a compressed-gas bottle and have the following nominal initial parameters at the railgun nozzle exit: n_e ≈ n_i ~ 10¹⁶ cm⁻³, T_e ≈ T_i ≈ 1.4 eV, V_jet ≈ 30–100 km/s, mean charge Z̄ ≈ 1, sonic Mach number M_s ≡ V_jet/C_s > 10, jet diameter = 5 cm, and jet length ≈ 20 cm. Experiments to date have focused on the study of merging-jet dynamics and the shocks that form as a result of the interaction, in both collisional and collisionless regimes with respect to the inter-jet classical ion mean free path, and with and without an applied magnetic field. However, many other studies are also possible, as discussed in this paper.

  13. Formation and sustainment of field reversed configuration (FRC) plasmas by spheromak merging and neutral beam injection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yamada, Masaaki

    2016-03-25

    This paper briefly reviews a compact toroid reactor concept that addresses critical issues for forming, stabilizing and sustaining a field reversed configuration (FRC) with the use of plasma merging, plasma shaping, conducting shells, and neutral beam injection (NBI). In this concept, an FRC plasma is generated by the merging of counter-helicity spheromaks produced by inductive discharges and sustained by the use of neutral beam injection (NBI). Plasma shaping, conducting shells, and the NBI would provide stabilization to global MHD modes. Although a specific FRC reactor design is outside the scope of the present paper, an example of a promising FRC reactor program is summarized based on the previously developed SPIRIT (Self-organized Plasmas by Induction, Reconnection and Injection Techniques) concept in order to connect this concept to the recently achieved High Performance FRC plasmas obtained by Tri Alpha Energy [Binderbauer et al., Phys. Plasmas 22, 056110 (2015)]. This paper includes a brief summary of the previous concept paper by M. Yamada et al., Plasma Fusion Res. 2, 004 (2007) and the recent experimental results from MRX.

  14. Design and Construction of Versatile Experiment Spherical Torus (VEST) at Seoul National University

    NASA Astrophysics Data System (ADS)

    An, Younghwa; Chung, Kyoung-Jae; Jung, Bongki; Lee, Hyunyeong; Sung, Choongki; Kim, Hyun-Seok; Na, Yong-Su; Hwang, Yong-Seok

    2011-10-01

    A new spherical torus, named VEST (Versatile Experiment Spherical Torus), has been built at Seoul National University to investigate versatile research topics such as double null merging start-up, divertor engineering, and non-inductive current drive. VEST is characterized by two partial solenoid coils installed at both vertical ends of a center stack, which will be used for double null merging start-up schemes. A poloidal field (PF) coil system, including the partial solenoids for breakdown and a long solenoid for the sustainment of the merged plasma, has been designed by solving circuit equations for the PF coils and vacuum vessel elements in consideration of the required volt-seconds, null configuration, and eddy currents. To supply the required currents to the PF coils and solenoids, power supplies based on a double-swing circuit have been designed and fabricated with capacitor banks and thyristor switch assemblies. Also, a power supply utilizing cost-effective commercial batteries has been developed for the toroidal field (TF) coils. Detailed descriptions of the design of VEST and some initial test results will be presented.

  15. An outburst powered by the merging of two stars inside the envelope of a giant

    NASA Astrophysics Data System (ADS)

    Hillel, Shlomi; Schreier, Ron; Soker, Noam

    2017-11-01

    We conduct 3D hydrodynamical simulations of energy deposition into the envelope of a red giant star as a result of the merger of two close main sequence stars or brown dwarfs, and show that the outcome is a highly non-spherical outflow. Such a violent interaction of a triple stellar system can explain the formation of `messy', i.e. lacking any kind of symmetry, planetary nebulae and similar nebulae around evolved stars. We do not simulate the merging process, but simply assume that after the tight binary system enters the envelope of the giant star the interaction with the envelope causes the two components, stars or brown dwarfs, to merge and liberate gravitational energy. We deposit the energy over a time period of about 9 h, which is about 1 per cent of the orbital period of the merger product around the centre of the giant star. The ejection of the fast hot gas and its collision with previously ejected mass are very likely to lead to a transient event, i.e. an intermediate luminosity optical transient.

  16. Laboratory plasma physics experiments using merging supersonic plasma jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, S. C.; Moser, A. L.; Merritt, E. C.

    We describe a laboratory plasma physics experiment at Los Alamos National Laboratory that uses two merging supersonic plasma jets formed and launched by pulsed-power-driven railguns. The jets can be formed using any atomic species or mixture available in a compressed-gas bottle and have the following nominal initial parameters at the railgun nozzle exit: n_e ≈ n_i ~ 10¹⁶ cm⁻³, T_e ≈ T_i ≈ 1.4 eV, V_jet ≈ 30–100 km/s, mean charge Z̄ ≈ 1, sonic Mach number M_s ≡ V_jet/C_s > 10, jet diameter = 5 cm, and jet length ≈ 20 cm. Experiments to date have focused on the study of merging-jet dynamics and the shocks that form as a result of the interaction, in both collisional and collisionless regimes with respect to the inter-jet classical ion mean free path, and with and without an applied magnetic field. However, many other studies are also possible, as discussed in this paper.

  17. Fast-Time Evaluations of Airborne Merging and Spacing in Terminal Arrival Operations

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Karthik; Barmore, Bryan; Bussink, Frank; Weitz, Lesley; Dahlene, Laura

    2005-01-01

    NASA researchers are developing new airborne technologies and procedures to increase runway throughput at capacity-constrained airports by improving the precision of inter-arrival spacing at the runway threshold. In this new operational concept, pilots of equipped aircraft are cleared to adjust aircraft speed to achieve a designated spacing interval at the runway threshold, relative to a designated lead aircraft. A new airborne toolset, prototypes of which are being developed at the NASA Langley Research Center, assists pilots in achieving this objective. The current prototype allows precision spacing operations to commence even when the aircraft and its lead are not yet in-trail, but are on merging arrival routes to the runway. A series of fast-time evaluations of the new toolset were conducted at the Langley Research Center during the summer of 2004. The study assessed toolset performance in a mixed fleet of aircraft on three merging arrival streams under a range of operating conditions. The results of the study indicate that the prototype possesses a high degree of robustness to moderate variations in operating conditions.
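The core speed-adjustment idea can be reduced to a toy calculation; the function name, the speed envelope, and all numbers below are hypothetical and stand in for the actual toolset's far richer guidance logic:

```python
# Adjust own ground speed so the runway-threshold crossing occurs the
# assigned spacing interval after the designated lead aircraft.

def speed_command(own_dist_m, lead_eta_s, spacing_s, v_min=60.0, v_max=85.0):
    """Ground speed (m/s) that crosses the threshold spacing_s seconds
    after the lead, clipped to an assumed speed envelope."""
    target_eta = lead_eta_s + spacing_s
    v = own_dist_m / target_eta
    return min(max(v, v_min), v_max)

# Own aircraft 18 km out; lead lands in 120 s; assigned spacing 90 s.
v = speed_command(18000.0, 120.0, 90.0)
print(v)   # 18000 / 210 ≈ 85.7 m/s, clipped to the 85.0 m/s envelope limit
```

Clipping to the envelope is why precision spacing degrades gracefully rather than commanding unflyable speeds; the merging case works the same way, with the lead's ETA computed along its own arrival route.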

  18. Program Merges SAR Data on Terrain and Vegetation Heights

    NASA Technical Reports Server (NTRS)

    Siqueira, Paul; Hensley, Scott; Rodriguez, Ernesto; Simard, Marc

    2007-01-01

    X/P Merge is a computer program that estimates ground-surface elevations and vegetation heights from multiple sets of data acquired by the GeoSAR instrument [a terrain-mapping synthetic-aperture radar (SAR) system that operates in the X and P bands]. X/P Merge software combines data from X- and P-band digital elevation models, SAR backscatter magnitudes, and interferometric correlation magnitudes into a simplified set of output topographical maps of ground-surface elevation and tree height.

  19. Integrating multisensor satellite data merging and image reconstruction in support of machine learning for better water quality management.

    PubMed

    Chang, Ni-Bin; Bai, Kaixu; Chen, Chi-Farn

    2017-10-01

    Monitoring water quality changes in lakes, reservoirs, estuaries, and coastal waters is critical in response to the needs for sustainable development. This study develops a remote sensing-based multiscale modeling system that integrates multi-sensor satellite data merging and image reconstruction algorithms in support of feature extraction with machine learning, enabling automated continuous water quality monitoring in environmentally sensitive regions. This new Earth observation platform, termed "cross-mission data merging and image reconstruction with machine learning" (CDMIM), is capable of merging multiple satellite images to provide daily water quality monitoring through a series of image processing, enhancement, reconstruction, and data mining/machine learning techniques. Two existing key algorithms, the Spectral Information Adaptation and Synthesis Scheme (SIASS) and SMart Information Reconstruction (SMIR), are highlighted to support feature extraction and content-based mapping. Whereas SIASS can support various efforts to merge images collected from cross-mission satellite sensors, SMIR can overcome data gaps by reconstructing the information of value-missing pixels due to impacts such as cloud obstruction. Practical implementation of CDMIM was assessed by predicting the water quality over seasons in terms of the concentrations of nutrients and chlorophyll-a, as well as water clarity in Lake Nicaragua, providing synergistic efforts to better monitor the aquatic environment and offer insightful lake watershed management strategies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Control methods for merging ALSM and ground-based laser point clouds acquired under forest canopies

    NASA Astrophysics Data System (ADS)

    Slatton, Kenneth C.; Coleman, Matt; Carter, William E.; Shrestha, Ramesh L.; Sartori, Michael

    2004-12-01

    Merging of point data acquired from ground-based and airborne scanning laser rangers has been demonstrated for cases in which a common set of targets can be readily located in both data sets. However, direct merging of point data is not generally possible if the two data sets do not share common targets. This is often the case for ranging measurements acquired in forest canopies, where airborne systems image the canopy crowns well but receive a relatively sparse set of points from the ground and understory. Conversely, ground-based scans of the understory do not generally sample the upper canopy. An experiment was conducted to establish a viable procedure for acquiring and georeferencing laser ranging data underneath a forest canopy. Once georeferenced, the ground-based data points can be merged with airborne points even in cases where no natural targets are common to both data sets. Two ground-based laser scans are merged and georeferenced with a final absolute error in the target locations of less than 10 cm. This is comparable to the accuracy of the georeferenced airborne data. Thus, merging of the georeferenced ground-based and airborne data should be feasible. The motivation for this investigation is to facilitate a thorough characterization of airborne laser ranging phenomenology over forested terrain as a function of vertical location in the canopy.
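
    Once both clouds are georeferenced, the merge itself reduces to transforming the ground-based scan into the common datum and concatenating the point sets. A minimal NumPy sketch; the rigid transform (R, t) is assumed to come from the georeferencing step described above:

```python
import numpy as np

def apply_rigid(points, R, t):
    """Apply a rigid-body transform (3x3 rotation R, translation t)
    to an (N, 3) array of points."""
    return points @ R.T + t

def merge_georeferenced(airborne_xyz, ground_xyz, R, t):
    """Georeference the ground-based scan with its survey-derived
    transform, then stack it with the airborne points. No targets
    common to both clouds are needed once they share a datum."""
    ground_geo = apply_rigid(ground_xyz, R, t)
    return np.vstack([airborne_xyz, ground_geo])

# Toy example: identity rotation and a decimetre-scale offset,
# purely illustrative values.
R = np.eye(3)
t = np.array([0.05, -0.03, 0.08])
air = np.array([[100.0, 200.0, 30.0]])   # canopy-crown return
gnd = np.array([[100.0, 200.0, 1.5]])    # understory return
merged = merge_georeferenced(air, gnd, R, t)
```

The concatenated cloud keeps the canopy-top samples from the airborne scan and the understory samples from the ground scan, which is exactly the vertical coverage the experiment is after.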

  1. Do regional methods really help reduce uncertainties in flood frequency analyses?

    NASA Astrophysics Data System (ADS)

    Cong Nguyen, Chi; Payrastre, Olivier; Gaume, Eric

    2013-04-01

    Flood frequency analyses are often based on continuous measured series at gauge sites. However, the length of the available data sets is usually too short to provide reliable estimates of extreme design floods. To reduce the estimation uncertainties, the analyzed data sets have to be extended either in time, making use of historical and paleoflood data, or in space, merging data sets considered statistically homogeneous to build large regional data samples. Nevertheless, the advantage of regional analyses, the large increase in the size of the studied data sets, may be counterbalanced by possible heterogeneities of the merged sets. The application and comparison of four different flood frequency analysis methods to two regions affected by flash floods in the south of France (Ardèche and Var) illustrates how this balance between the number of records and possible heterogeneities plays out in real-world applications. The four tested methods are: (1) a local statistical analysis based on the existing series of measured discharges, (2) a local analysis exploiting the existing information on historical floods, (3) a standard regional flood frequency analysis based on existing measured series at gauged sites and (4) a modified regional analysis including estimated extreme peak discharges at ungauged sites. Monte Carlo simulations are conducted to simulate a large number of discharge series with characteristics similar to the observed ones (type of statistical distributions, number of sites and records) to evaluate to what extent the results obtained on these case studies can be generalized. These two case studies indicate that even small statistical heterogeneities, which are not detected by the standard homogeneity tests implemented in regional flood frequency studies, may drastically limit the usefulness of such approaches. On the other hand, these results show that exploiting information on extreme events, either historical flood events at gauged sites or estimated extremes at ungauged sites in the considered region, is an efficient way to reduce uncertainties in flood frequency studies.
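
    A self-contained Monte Carlo sketch of the local-versus-regional comparison on synthetic, perfectly homogeneous data. The Gumbel annual maxima, method-of-moments fitting, and index-flood-style pooling are illustrative assumptions for the sketch, not the study's actual methods:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_quantile(loc, scale, T):
    """T-year return-period quantile of a Gumbel distribution."""
    return loc - scale * np.log(-np.log(1.0 - 1.0 / T))

def fit_gumbel_moments(x):
    """Simple method-of-moments Gumbel fit (not statistically optimal)."""
    scale = np.std(x, ddof=1) * np.sqrt(6.0) / np.pi
    loc = np.mean(x) - 0.5772 * scale
    return loc, scale

def mc_quantile_spread(n_sites=10, n_years=20, T=100, n_rep=200):
    """Spread of local vs pooled (index-flood style) T-year quantile
    estimates on homogeneous synthetic data."""
    true_q = gumbel_quantile(100.0, 30.0, T)
    local_err, regional_err = [], []
    for _ in range(n_rep):
        data = rng.gumbel(100.0, 30.0, size=(n_sites, n_years))
        # local estimate: one site's short record only
        loc, sc = fit_gumbel_moments(data[0])
        local_err.append(gumbel_quantile(loc, sc, T) - true_q)
        # regional estimate: pool records normalized by site means,
        # then rescale the growth-curve quantile by the site mean
        pooled = (data / data.mean(axis=1, keepdims=True)).ravel()
        loc, sc = fit_gumbel_moments(pooled)
        regional_err.append(data[0].mean() * gumbel_quantile(loc, sc, T) - true_q)
    return np.std(local_err), np.std(regional_err)
```

On homogeneous data the pooled estimator shows a visibly smaller spread, which is the regional method's promise; the abstract's point is that undetected heterogeneity between the merged sites can erase this advantage.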

  2. Development and testing of LANDSAT-assisted procedures for cost-effective forest management

    NASA Technical Reports Server (NTRS)

    Colwell, J. E.; Sanders, P. A.; Thomson, F. J.

    1982-01-01

    The capability of LANDSAT data to make certain forest management activities on the Clearwater National Forest in Idaho more efficient and/or more effective was examined. One task was designed to evaluate the utility of single-date categorized LANDSAT data as a source of land cover information for use in assessing elk habitat quality. LANDSAT data was used to categorize conifer forest on the basis of the percentage crown closure. This information was used to evaluate elk habitat quality on the basis of the ratio of cover to forage. A preliminary conclusion is that categorized LANDSAT data can be helpful for assessing current elk habitat quality if the relationships between crown closure and hiding cover can be adequately defined. Another task was designed to evaluate the utility of merged two-date LANDSAT data for updating the existing (1972) Clearwater Forest land cover information. LANDSAT data from 1972 and 1981 were merged, and change images were created. These products indicated where major changes were taking place. These areas could then be examined on aerial photography or in the field to further characterize the nature and magnitude of the change. The 1972 land cover information could be subsequently altered in these changed areas, whereas areas with no change would not have to be re-examined.
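
    The change-image step described above can be sketched as a per-pixel comparison of the two categorized dates; the nodata convention and function name here are assumptions for illustration:

```python
import numpy as np

def change_image(classes_t1, classes_t2, nodata=0):
    """Flag pixels whose land-cover class differs between the two
    dates; pixels that are nodata in either date are never flagged."""
    valid = (classes_t1 != nodata) & (classes_t2 != nodata)
    return valid & (classes_t1 != classes_t2)

# Toy 2x3 rasters of land-cover class codes for 1972 and 1981.
c72 = np.array([[1, 1, 2],
                [3, 0, 2]])
c81 = np.array([[1, 2, 2],
                [3, 0, 1]])
changed = change_image(c72, c81)
```

The resulting boolean mask plays the role of the change product in the abstract: it points analysts at the areas worth re-examining on aerial photography, while unchanged areas keep their 1972 labels.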

  3. Finale of a Quartet: Hints on Supernova Formation

    NASA Astrophysics Data System (ADS)

    Fang, Xiao; Thompson, Todd A.; Hirata, Christopher M.

    2018-01-01

    The origin of Type Ia Supernovae (SNe) is not well understood. The two most popular hypotheses are the single-degenerate scenario, where one white dwarf (WD) accretes matter from its giant companion until the Chandrasekhar limit is reached, and the double-degenerate scenario, where two WDs merge and explode. We focus on the second scenario. It has long been realized that binary WD systems normally take an extremely long time to merge via gravitational waves, and it is still unclear whether WD mergers can fully account for the observed SN Ia rate. Recent effort has been devoted to the effects of introducing a distant tertiary to the binary system. The standard “Kozai-Lidov” mechanism can drive the binary WDs to high eccentricities, which could lead to direct collisions or much more efficient energy dissipation. Alternatively, we investigate the long-term evolution of hierarchical quadruple systems, i.e. a WD binary with a binary companion, which are basically unexplored, yet should be numerous. We explore their dynamics and find that the fraction of systems reaching high eccentricities is greatly enhanced, hinting at a higher WD merger rate than predicted for triple systems with the same set of secular and non-secular effects considered. Considering the population of quadruple stellar systems, the quadruple scenario might contribute significantly to the overall rate of SNe Ia.

  4. Polymer-lipid hybrid systems: merging the benefits of polymeric and lipid-based nanocarriers to improve oral drug delivery.

    PubMed

    Rao, Shasha; Prestidge, Clive A

    2016-01-01

    A number of biobarriers limit efficient oral drug absorption; both polymer-based and lipid-based nanocarriers have demonstrated properties and delivery mechanisms to overcome these biobarriers in preclinical settings. Moreover, in order to address the multifaceted oral drug delivery challenges, polymer-lipid hybrid systems are now being designed to merge the beneficial features of both polymeric and lipid-based nanocarriers. Recent advances in the development of polymer-lipid hybrids with a specific focus on their viability in oral delivery are reviewed. Three classes of polymer-lipid hybrids have been identified, i.e. lipid-core polymer-shell systems, polymer-core lipid-shell systems, and matrix-type polymer-lipid hybrids. We focus on their application to overcome the various biological barriers to oral drug absorption, as exemplified by selected preclinical studies. Numerous studies have demonstrated the superiority of polymer-lipid hybrid systems to their non-hybrid counterparts in providing improved drug encapsulation, modulated drug release, and improved cellular uptake. These features have encouraged their applications in the delivery of chemotherapeutics, proteins, peptides, and vaccines. With further research expected to optimize the manufacturing and scaling up processes and in-depth pre-clinical pharmacological and toxicological assessments, these multifaceted drug delivery systems will have significant clinical impact on the oral delivery of pharmaceuticals and biopharmaceuticals.

  5. Thermal and magnetic hysteresis associated with martensitic and magnetic phase transformations in Ni52Mn25In16Co7 Heusler alloy

    NASA Astrophysics Data System (ADS)

    Madiligama, A. S. B.; Ari-Gur, P.; Ren, Y.; Koledov, V. V.; Dilmieva, E. T.; Kamantsev, A. P.; Mashirov, A. V.; Shavrov, V. G.; Gonzalez-Legarreta, L.; Grande, B. H.

    2017-11-01

    Ni-Mn-In-Co Heusler alloys demonstrate promising magnetocaloric performance for use as refrigerants in magnetic cooling systems, with the goal of replacing the lower-efficiency, eco-adverse fluid-compression technology. The largest change in entropy occurs when the applied magnetic field causes a merged structural and magnetic transformation and the associated entropy changes of the two transformations work constructively. In this study, the magnetic and crystalline phase transformations were each treated separately, and the effects of an applied magnetic field on the thermal hystereses associated with both the structural and magnetic transformations of Ni52Mn25In16Co7 were studied. From the analysis of synchrotron diffraction data and thermomagnetic measurements, it was revealed that the alloy undergoes both structural (from cubic austenite to a mixture of 7M & 5M modulated martensite) and magnetic (ferromagnetic to a low-magnetization phase) phase transformations. Thermal hysteresis is associated with both transformations, and the variation of the thermal hystereses of the magnetic and structural transformations with applied magnetic field is significantly different. Because of the differences between the hysteresis loops of the two transformations, they merge only upon heating under a certain magnetic field.

  6. Computational and experimental investigation of two-dimensional scramjet inlets and hypersonic flow over a sharp flat plate

    NASA Astrophysics Data System (ADS)

    Messitt, Donald G.

    1999-11-01

    The WIND code was employed to compute the hypersonic flow in the shock wave/boundary layer merged region near the leading edge of a sharp flat plate. Solutions were obtained at Mach numbers from 9.86 to 15.0 and free stream unit Reynolds numbers of 3,467 to 346,700 in⁻¹ (1.365 × 10⁵ to 1.365 × 10⁷ m⁻¹) for perfect gas conditions. The numerical results indicated a merged shock wave and viscous layer near the leading edge. The merged region grew in size with increasing free stream Mach number, proportional to M∞²/Re∞. Profiles of the static pressure in the merged region indicated a strong normal pressure gradient (∂p/∂y), which had been neglected in previous analyses based on the boundary layer equations. The shock wave near the leading edge was thick, as has been experimentally observed. Computed shock wave locations and surface pressures agreed within experimental error for values of the rarefaction parameter χ/M∞² < 0.3. A preliminary analysis using kinetic theory indicated that rarefied flow effects become important above this value. In particular, the WIND solution agreed well in the transition region between the merged flow, which was predicted well by the theory of Li and Nagamatsu, and the downstream region where the strong interaction theory applied. Additional computations with the NPARC code, WIND's predecessor, demonstrated the ability of the code to compute hypersonic inlet flows at free stream Mach numbers up to 20. Good qualitative agreement with measured pressure data indicated that the code captured the important physical features of the shock wave-boundary layer interactions. The computed surface and pitot pressures fell within the combined experimental and numerical error bounds for most points. The calculations demonstrated the need for extremely fine grids when computing hypersonic interaction flows.
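
    The reported growth of the merged region with M∞²/Re∞ can be illustrated numerically; the proportionality constant is unknown here, so only relative trends are meaningful, and the function name is an assumption:

```python
def merged_region_scale(mach, re_per_m):
    """Relative leading-edge merged-layer length scale ~ M^2 / Re',
    where Re' is the unit Reynolds number (per metre). The
    proportionality constant is deliberately omitted."""
    return mach ** 2 / re_per_m

# Endpoints of the study's parameter range:
thin = merged_region_scale(9.86, 1.365e7)      # high density, low Mach
extended = merged_region_scale(15.0, 1.365e5)  # low density, high Mach
```

Across the quoted range the scale varies by more than two orders of magnitude, which is why the merged region matters at the rarefied end of the test matrix but shrinks toward negligibility at the high-Reynolds end.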

  7. Analysis of an optimization-based atomistic-to-continuum coupling method for point defects

    DOE PAGES

    Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; ...

    2015-11-16

    Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.

  8. Assume-Guarantee Abstraction Refinement Meets Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Bogomolov, Sergiy; Frehse, Goran; Greitschus, Marius; Grosu, Radu; Pasareanu, Corina S.; Podelski, Andreas; Strump, Thomas

    2014-01-01

    Compositional verification techniques in the assume-guarantee style have been successfully applied to transition systems to efficiently reduce the search space by leveraging the compositional nature of the systems under consideration. We adapt these techniques to the domain of hybrid systems with affine dynamics. To build assumptions we introduce an abstraction based on location merging. We integrate the assume-guarantee style analysis with automatic abstraction refinement. We have implemented our approach in the symbolic hybrid model checker SpaceEx. The evaluation shows its practical potential. To the best of our knowledge, this is the first work combining assume-guarantee reasoning with automatic abstraction refinement in the context of hybrid automata.

  9. Investigation of critical parameters controlling the efficiency of associative ionization

    NASA Astrophysics Data System (ADS)

    Le Padellec, A.; Launoy, T.; Dochain, A.; Urbain, X.

    2017-05-01

    This paper compiles our merged-beam experimental findings for the associative ionization (AI) process from charged reactants, with the aim of guiding future investigations with e.g. the double electrostatic ion storage ring DESIREE in Stockholm. A reinvestigation of the isotopic effect in H⁻(D⁻) + He⁺ collisions is presented, along with a review of H₃⁺ and NO⁺ production by AI involving ion pairs or excited neutrals, and put in perspective with the mutual neutralization and radiative association reactions. Critical parameters are identified and evaluated for their systematic role in controlling the magnitude of the cross section: isotopic substitution, exothermicity, electronic state density, and spin statistics.

  10. 3D imaging acquisition, modeling, and prototyping for facial defects reconstruction

    NASA Astrophysics Data System (ADS)

    Sansoni, Giovanna; Trebeschi, Marco; Cavagnini, Gianluca; Gastaldi, Giorgio

    2009-01-01

    A novel approach that combines optical three-dimensional imaging, reverse engineering (RE) and rapid prototyping (RP) for mold production in the reconstruction of facial prostheses is presented. A commercial laser-stripe digitizer is used to perform the multiview acquisition of the patient's face; the point clouds are aligned and merged in order to obtain a polygonal model, which is then edited to sculpture the virtual prosthesis. Two physical models of both the deformed face and the 'repaired' face are obtained: they differ only in the defect zone. Depending on the material used for the actual prosthesis, the two prototypes can be used either to directly cast the final prosthesis or to fabricate the positive wax pattern. Two case studies are presented, referring to prosthetic reconstructions of an eye and of a nose. The results demonstrate the advantages over conventional techniques as well as the improvements with respect to known automated manufacturing techniques in mold construction. The proposed method results in decreased patient discomfort, reduced dependence on the anaplastologist's skill, and increased repeatability and efficiency of the whole process.

  11. Merging-compression formation of high temperature tokamak plasma

    NASA Astrophysics Data System (ADS)

    Gryaznevich, M. P.; Sykes, A.

    2017-07-01

    Merging-compression is a solenoid-free plasma formation method used in spherical tokamaks (STs). Two plasma rings are formed and merged via magnetic reconnection into one plasma ring that is then radially compressed to form the ST configuration. Plasma currents of several hundred kA and plasma temperatures in the keV range have been produced using this method; however, until recently there was no full understanding of the merging-compression formation physics. In this paper we explain in detail, for the first time, all stages of the merging-compression plasma formation. This method will be used to create ST plasmas in the compact (R ~ 0.4-0.6 m) high-field, high-current (3 T/2 MA) ST40 tokamak. Moderate extrapolation from the available experimental data suggests the possibility of achieving a plasma current of ~2 MA and temperatures in the 10 keV range at densities ~1-5 × 10²⁰ m⁻³, bringing ST40 plasmas into burning-plasma-relevant (alpha-particle heating) conditions directly from the plasma formation. Issues connected with this approach for ST40 and future ST reactors are discussed.

  12. A graph-based watershed merging using fuzzy C-means and simulated annealing for image segmentation

    NASA Astrophysics Data System (ADS)

    Vadiveloo, Mogana; Abdullah, Rosni; Rajeswari, Mandava

    2015-12-01

    In this paper, we address the issue of the over-segmented regions produced by watershed segmentation by merging the regions using a global feature. The global feature information is obtained by clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by performing watershed on the gradient of the image are then mapped to this global information in the feature space. Further, the global feature information is optimized using Simulated Annealing (SA). The optimal global feature information is used to derive the similarity criterion for merging the over-segmented watershed regions, which are represented by a region adjacency graph (RAG). The proposed method has been tested on a digital brain phantom simulated dataset to segment white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) soft tissue regions. The experiments showed that the proposed method performs statistically better, with an average of 95.242% of regions merged, than immersion watershed, and yields an average accuracy improvement of 8.850% in comparison with RAG-based immersion watershed merging using global and local features.
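
    The merge step can be sketched as: label each watershed region by its nearest global cluster centre, then union adjacent regions that share a label. A minimal sketch with a nearest-centre rule standing in for the FCM/SA-optimized criterion (all names and the toy data are illustrative):

```python
import numpy as np

def merge_regions(region_means, adjacency, centers):
    """Merge adjacent watershed regions whose mean feature vectors map
    to the same cluster centre. adjacency: set of (i, j) edges of the
    region adjacency graph (RAG); centers: (K, F) cluster centres."""
    region_means = np.asarray(region_means, dtype=float)
    centers = np.asarray(centers, dtype=float)
    # nearest-centre label per region (stand-in for the FCM/SA mapping)
    labels = np.argmin(
        np.linalg.norm(region_means[:, None, :] - centers[None, :, :], axis=2),
        axis=1)
    parent = list(range(len(region_means)))
    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in adjacency:
        if labels[i] == labels[j]:    # same global cluster -> merge
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(region_means))]

# 4 over-segmented regions (1-D mean intensities), 2 feature clusters.
means = [[0.1], [0.15], [0.9], [0.85]]
edges = {(0, 1), (1, 2), (2, 3)}
merged = merge_regions(means, edges, centers=[[0.1], [0.9]])
```

Regions 0 and 1 collapse into one segment and 2 and 3 into another, while the 1-2 boundary survives because those regions belong to different global clusters.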

  13. Integration of biotic ligand models (BLM) and bioaccumulation kinetics into a mechanistic framework for metal uptake in aquatic organisms.

    PubMed

    Veltman, Karin; Huijbregts, Mark A J; Hendriks, A Jan

    2010-07-01

    Both biotic ligand models (BLM) and bioaccumulation models aim to quantify metal exposure based on mechanistic knowledge, but key factors included in the description of metal uptake differ between the two approaches. Here, we present a quantitative comparison of both approaches and show that BLM and bioaccumulation kinetics can be merged into a common mechanistic framework for metal uptake in aquatic organisms. Our results show that metal-specific absorption efficiencies calculated from BLM parameters for freshwater fish are highly comparable, i.e. within a factor of 2.4 for silver, cadmium, copper, and zinc, to bioaccumulation absorption efficiencies for predominantly marine fish. Conditional affinity constants are significantly related to the metal-specific covalent index. Additionally, the affinity constants of calcium, cadmium, copper, sodium, and zinc are comparable across aquatic species, including molluscs, daphnids, and fish. This suggests that affinity constants can be estimated from the covalent index and extrapolated across species. A new model is proposed that integrates the combined effect of metal chemodynamics (such as speciation, competition, and ligand affinity) and species characteristics (such as size) on metal uptake by aquatic organisms. An important direction for further research is the quantitative comparison of the proposed model with acute toxicity values for organisms belonging to different size classes.

  14. An on-board pedestrian detection and warning system with features of side pedestrian

    NASA Astrophysics Data System (ADS)

    Cheng, Ruzhong; Zhao, Yong; Wong, ChupChung; Chan, KwokPo; Xu, Jiayao; Wang, Xin'an

    2012-01-01

    Automotive Active Safety (AAS) is the main branch of intelligent automobile study, and pedestrian detection is the key problem of AAS because it relates to the casualties of most vehicle accidents. For on-board pedestrian detection algorithms, the main problem is to balance efficiency and accuracy so that the on-board system is usable in real scenes, so an on-board pedestrian detection and warning system whose algorithm considers the features of side pedestrians is proposed. The system includes two modules: pedestrian detection and warning. A Haar feature and a cascade of stage classifiers trained by AdaBoost are first applied, and then a HOG feature and an SVM classifier are used to reject false positives. To make these time-consuming algorithms available for real-time use, a divide-window method together with an operator context scanning (OCS) method is applied to increase efficiency. To merge in the velocity information of the vehicle, the distance of the detected pedestrian is also obtained, so the system can judge whether there is a potential danger for the pedestrian in front. On a new dataset captured in an urban environment with side pedestrians on zebra crossings, the embedded system and its algorithm achieve real-time on-board performance for side pedestrian detection.
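
    The coarse-then-fine window filtering described above can be sketched generically. Here `coarse` and `refine` are placeholder callables standing in for the Haar/AdaBoost cascade and the HOG+SVM verifier; none of these names come from the paper's actual API:

```python
def sliding_windows(width, height, win=64, stride=32):
    """Yield top-left corners of fixed-size detection windows over a
    frame (a simple stand-in for the paper's divide-window scanning)."""
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            yield x, y, win

def detect(frame, coarse, refine):
    """Two-stage detection: the cheap `coarse` classifier prunes most
    windows, then the costlier `refine` classifier rejects the
    remaining false positives. Both are assumed callables."""
    hits = []
    h, w = len(frame), len(frame[0])
    for x, y, s in sliding_windows(w, h):
        if coarse(frame, x, y, s) and refine(frame, x, y, s):
            hits.append((x, y, s))
    return hits

# Dummy 128x128 grayscale frame and trivial stand-in classifiers.
frame = [[0] * 128 for _ in range(128)]
hits = detect(frame,
              lambda f, x, y, s: x == 32,   # pretend cascade response
              lambda f, x, y, s: y == 0)    # pretend HOG+SVM verdict
```

The efficiency argument is visible in the structure: the expensive second stage runs only on windows the first stage accepts, which is what makes the combination viable on embedded hardware.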

  15. Interpolation by fast Wigner transform for rapid calculations of magnetic resonance spectra from powders.

    PubMed

    Stevensson, Baltzar; Edén, Mattias

    2011-03-28

    We introduce a novel interpolation strategy, based on nonequispaced fast transforms involving spherical harmonics or Wigner functions, for efficient calculations of powder spectra in (nuclear) magnetic resonance spectroscopy. The fast Wigner transform (FWT) interpolation minimizes the time-consuming calculation stages by sampling over a small number of Gaussian spherical quadrature (GSQ) orientations that are then exploited to determine the spectral frequencies and amplitudes from a 10-70 times larger GSQ set. This results in almost the same orientational averaging accuracy as if the expanded grid were utilized explicitly in an order-of-magnitude slower computation. FWT interpolation is applicable to spectral simulations involving any time-independent or time-dependent and noncommuting spin Hamiltonian. We further show that merging FWT interpolation with the well-established ASG procedure of Alderman, Solum and Grant [J. Chem. Phys. 84, 3717 (1986)] speeds up simulations by 2-7 times relative to using ASG alone (besides greatly extending its scope of application), and by 1-2 orders of magnitude compared to direct orientational averaging in the absence of interpolation. Demonstrations of efficient spectral simulations are given for several magic-angle spinning scenarios in NMR, encompassing half-integer quadrupolar spins and homonuclear dipolar-coupled ¹³C systems.

  16. Optical coherence tomography retinal image reconstruction via nonlocal weighted sparse representation

    NASA Astrophysics Data System (ADS)

    Abbasi, Ashkan; Monadjemi, Amirhassan; Fang, Leyuan; Rabbani, Hossein

    2018-03-01

    We present a nonlocal weighted sparse representation (NWSR) method for reconstruction of retinal optical coherence tomography (OCT) images. To reconstruct high signal-to-noise-ratio, high-resolution OCT images, utilization of efficient denoising and interpolation algorithms is necessary, especially when the original data were subsampled during acquisition. However, OCT images suffer from a high level of noise, which makes the estimation of sparse representations a difficult task. Thus, the proposed NWSR method merges sparse representations of multiple similar noisy and denoised patches to better estimate a sparse representation for each patch. First, the sparse representation of each patch is independently computed over an overcomplete dictionary, and then a nonlocal weighted sparse coefficient is computed by averaging representations of similar patches. Since sparsity can reveal relevant information from noisy patches, combining the representations of noisy and denoised patches is beneficial for obtaining a more robust estimate of the unknown sparse representation. The denoised patches are obtained by applying an off-the-shelf image denoising method, and our method provides an efficient way to exploit information from both sets of representations. The experimental results on denoising and interpolation of spectral domain OCT images demonstrated the effectiveness of the proposed NWSR method over existing state-of-the-art methods.
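
    A stripped-down illustration of the nonlocal weighting idea: average the sparse codes of patches similar to a reference patch, with weights decaying in patch distance. The Gaussian kernel and the bandwidth h are assumptions for the sketch, not the paper's exact estimator, which additionally merges noisy and denoised patch representations:

```python
import numpy as np

def nonlocal_weighted_code(codes, patches, ref_idx, h=0.5):
    """Weighted average of sparse coefficient vectors, with Gaussian
    weights on squared patch distance to the reference patch."""
    codes = np.asarray(codes, dtype=float)      # (N, K) sparse coefficients
    patches = np.asarray(patches, dtype=float)  # (N, P) patch pixel vectors
    d2 = np.sum((patches - patches[ref_idx]) ** 2, axis=1)
    w = np.exp(-d2 / (h * h))                   # similar patches dominate
    w /= w.sum()
    return w @ codes

# Toy data: patches 0 and 1 are identical; patch 2 is very different,
# so its code should barely contribute to the estimate for patch 0.
ref_code = nonlocal_weighted_code(
    codes=[[1, 0], [1, 0], [0, 1]],
    patches=[[0, 0], [0, 0], [10, 10]],
    ref_idx=0)
```

The dissimilar patch's weight underflows to essentially zero, so the merged code stays close to the codes of the genuinely similar patches, which is the robustness the averaging step is after.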

  17. Operational Concept for Flight Crews to Participate in Merging and Spacing of Aircraft

    NASA Technical Reports Server (NTRS)

    Baxley, Brian T.; Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.

    2006-01-01

    The predicted tripling of air traffic within the next 15 years is expected to cause significant aircraft delays and create a major financial burden for the airline industry unless the capacity of the National Airspace System can be increased. One approach to improve throughput and reduce delay is to develop new ground tools, airborne tools, and procedures to reduce the variance of aircraft delivery to the airport, thereby providing an increase in runway throughput capacity and a reduction in arrival aircraft delay. The first phase of the Merging and Spacing Concept employs a ground-based tool used by Air Traffic Control that creates an arrival time at the runway threshold based on the aircraft's current position and speed, then makes minor adjustments to that schedule to accommodate runway throughput constraints such as weather and wake vortex separation criteria. The Merging and Spacing Concept also employs arrival routing that begins at an en route metering fix at altitude and continues to the runway threshold with defined lateral, vertical, and velocity criteria. This allows the desired spacing interval between aircraft at the runway to be translated back in time and space to the metering fix. The tool then calculates a specific speed for each aircraft to fly while en route to the metering fix, based on the adjusted landing time for that aircraft. This speed is data-linked to the crew, who fly it, causing the aircraft to arrive at the metering fix with the assigned spacing interval behind the previous aircraft in the landing sequence. The second phase of the Merging and Spacing Concept increases the timing precision of the aircraft delivery to the runway threshold by having flight crews use an airborne system to make minor speed changes during the en route, descent, and arrival phases of flight. These speed changes are based on broadcast aircraft state data to determine the difference between the actual and assigned time interval between the aircraft pair.
The airborne software then calculates a speed adjustment to null that difference over the remaining flight trajectory. Follow-on phases still under development will expand the concept to all types of aircraft, arriving from any direction, merging at different fixes and altitudes, and to any airport. This paper describes the implementation phases of the Merging and Spacing Concept, and provides high-level results of research conducted to date.
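
    The null-the-difference speed law described above can be sketched as follows. The ±10% speed cap, the function name, and the numbers are illustrative assumptions, not values from the concept:

```python
def speed_command(dist_to_go_m, groundspeed_mps, spacing_error_s, cap=0.1):
    """Speed that nulls the spacing-time error over the remaining
    trajectory. Positive error means the aircraft is trailing by more
    than its assigned interval, so it must speed up; the result is
    clipped to an assumed +/-10% band around current groundspeed."""
    t_nominal = dist_to_go_m / groundspeed_mps        # time at current speed
    t_target = max(t_nominal - spacing_error_s, 1e-6) # absorb the error
    v = dist_to_go_m / t_target
    lo, hi = groundspeed_mps * (1 - cap), groundspeed_mps * (1 + cap)
    return min(max(v, lo), hi)

# 100 km to go at 100 m/s, trailing 50 s too far behind the lead:
v_cmd = speed_command(100_000.0, 100.0, 50.0)   # modest speed-up, ~105 m/s
```

Spreading the correction over the whole remaining trajectory keeps the commanded change small, which is why the concept can rely on minor speed adjustments rather than vectoring.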

  18. Double and triple-harmonic RF buckets and their use for bunch squeezing in AGS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, C. J.

    2016-08-24

    For the past several years we have merged bunches in AGS in order to achieve the desired intensity per bunch prior to injection into RHIC. The merging is done on a flat porch at or above AGS injection energy. Because the merges involve the reduction of the RF harmonic number by a factor of 2 (for a 2-to-1 merge) and then a factor of 3 (for a 3-to-1 merge), one requires RF frequencies 6hf_s, 3hf_s, 2hf_s and hf_s, where f_s is the revolution frequency on the porch and h = 4 is the fundamental harmonic number. The standard AGS RF cavities cannot operate at the lowest frequencies 2hf_s and hf_s; these are provided by two modified cavities. Upon completion of the merges, the bunches are sitting in harmonic h buckets. In order to be accelerated they need to be squeezed into harmonic 3h buckets. This is accomplished by producing a double-harmonic bucket in which harmonics h and 2h act in concert, and then a triple-harmonic bucket in which harmonics h, 2h, and 3h act in concert. Simulations have shown that the squeeze presents an acceptance bottleneck which limits the longitudinal emittance that can be put into the harmonic 3h bucket. In this note the areas of the double and triple-harmonic buckets are calculated explicitly, and it is shown that these go through a minimum as the RF voltages are raised to the desired values. Several RF voltage ranges are examined, and the acceptance bottleneck is determined for each of these. Finally, the acceptance bottleneck for Au77+ bunches in AGS is calculated for several RF voltage ranges. The main result is that the RF voltages for the low-frequency harmonic h and 2h cavities both must be at least 22 kV in order to achieve an acceptance of 0.6 eV·s per nucleon. If the harmonic h and 2h voltages are 15 and 22 kV, respectively, then the acceptance is reduced to 0.548 eV·s per nucleon.

  19. Systems Biology Analysis Merging Phenotype, Metabolomic and Genomic Data Identifies Non-SMC Condensin I Complex, Subunit G (NCAPG) and Cellular Maintenance Processes as Major Contributors to Genetic Variability in Bovine Feed Efficiency

    PubMed Central

    Widmann, Philipp; Reverter, Antonio; Weikard, Rosemarie; Suhre, Karsten; Hammon, Harald M.; Albrecht, Elke; Kuehn, Christa

    2015-01-01

    Feed efficiency is a paramount factor for livestock economy. Previous studies had indicated a substantial heritability of several feed efficiency traits. In our study, we investigated the genetic background of residual feed intake, a commonly used parameter of feed efficiency, in a cattle resource population generated from crossing dairy and beef cattle. Starting from a whole-genome association analysis, we subsequently performed a combined phenotype-metabolome-genome analysis, taking a systems biology approach by inferring gene networks based on partial correlation and information theory approaches. Our data about biological processes enriched with genes from the feed efficiency network suggest that genetic variation in feed efficiency is driven by genetic modulation of basic processes relevant to general cellular functions. When looking at the predicted upstream regulators from the feed efficiency network, the Tumor Protein P53 (TP53) and Transforming Growth Factor beta 1 (TGFB1) genes stood out regarding significance of overlap and number of target molecules in the data set. These results further support the hypothesis that TP53 is a major upstream regulator for genetic variation of feed efficiency. Furthermore, our data revealed a significant effect of both the Non-SMC Condensin I Complex, Subunit G (NCAPG) I442M (rs109570900) and the Growth/differentiation factor 8 (GDF8) Q204X (rs110344317) loci on residual feed intake and feed conversion. For both loci, the growth-promoting allele at the onset of puberty was associated with a negative, but favorable, effect on residual feed intake. The elevated energy demand for increased growth triggered by the NCAPG 442M allele is evidently not fully compensated for by an increased efficiency in converting feed into body tissue. As a consequence, the individuals carrying the NCAPG 442M allele had an additional demand for energy uptake that is reflected by the association of the allele with increased daily energy intake as observed in our study. PMID:25875852

  20. Merging magnetic droplets by a magnetic field pulse

    NASA Astrophysics Data System (ADS)

    Wang, Chengjie; Xiao, Dun; Liu, Yaowen

    2018-05-01

    Reliable manipulation of magnetic droplets is of immense importance for their applications in spin torque oscillators. Using micromagnetic simulations, we find that the antiphase precession state, which originates in the dynamic dipolar interaction effect, is a favorable stable state for two magnetic droplets nucleated at two identical nano-contacts. A magnetic field pulse can be used to destroy their stability and merge them into a big droplet. The merging process strongly depends on the pulse width as well as the pulse strength.

  1. Merging and Splitting of Plasma Spheroids in a Dusty Plasma

    NASA Astrophysics Data System (ADS)

    Mikikian, Maxime; Tawidian, Hagop; Lecas, Thomas

    2012-12-01

    Dust particle growth in a plasma is a strongly disturbing phenomenon for the plasma equilibrium. It can induce many different types of low-frequency instabilities that can be experimentally observed, especially using high-speed imaging. A spectacular case has been observed in a krypton plasma where a huge density of dust particles is grown by material sputtering. The instability consists of well-defined regions of enhanced optical emission that emerge from the electrode vicinity and propagate towards the discharge center. These plasma spheroids have complex motions resulting from their mutual interaction that can also lead to the merging of two plasma spheroids into a single one. The reverse situation is also observed with the splitting of a plasma spheroid into two parts. These results are presented for the first time and reveal new behaviors in dusty plasmas.

  2. Results of the Fluid Merging Viscosity Measurement International Space Station Experiment

    NASA Technical Reports Server (NTRS)

    Ethridge, Edwin C.; Kaukler, William; Antar, Basil

    2009-01-01

    The purpose of FMVM is to measure the rate of coalescence of two highly viscous liquid drops and correlate the results with the liquid's viscosity and surface tension. The experiment takes advantage of the low-gravity free-floating conditions in space to permit the unconstrained coalescence of two nearly spherical drops. The merging of the drops is accomplished by deploying them from a syringe and suspending them on Nomex threads, followed by the astronaut's manipulation of one of the drops toward a stationary droplet until contact is achieved. Coalescence and merging occur due to shape relaxation and reduction of surface energy, resisted by the viscous drag within the liquid. Experiments were conducted onboard the International Space Station in July of 2004 and subsequently in May of 2005. The coalescence was recorded on video and downlinked in near real time. When the coefficient of surface tension for the liquid is known, the increase in contact radius can be used to determine the coefficient of viscosity for that liquid. The viscosity is determined by fitting the experimental speed to the theoretically calculated contact-radius speed for the same experimental parameters. Recent fluid-dynamical numerical simulations of the coalescence process will be presented. The results are important for a better understanding of the coalescence process. The experiment is also relevant to liquid-phase sintering and free-form in-situ fabrication, and serves as a potential new method for measuring the viscosity of viscous glass formers at low shear rates.

  3. Intelligent viewpoint selection for efficient CT to video registration in laparoscopic liver surgery.

    PubMed

    Robu, Maria R; Edwards, Philip; Ramalhinho, João; Thompson, Stephen; Davidson, Brian; Hawkes, David; Stoyanov, Danail; Clarkson, Matthew J

    2017-07-01

    Minimally invasive surgery offers advantages over open surgery due to a shorter recovery time, less pain and trauma for the patient. However, inherent challenges such as lack of tactile feedback and difficulty in controlling bleeding lower the percentage of suitable cases. Augmented reality can show a better visualisation of sub-surface structures and tumour locations by fusing pre-operative CT data with real-time laparoscopic video. Such augmented reality visualisation requires a fast and robust video to CT registration that minimises interruption to the surgical procedure. We propose to use view planning for efficient rigid registration. Given the trocar position, a set of camera positions are sampled and scored based on the corresponding liver surface properties. We implement a simulation framework to validate the proof of concept using a segmented CT model from a human patient. Furthermore, we apply the proposed method on clinical data acquired during a human liver resection. The first experiment motivates the viewpoint scoring strategy and investigates reliable liver regions for accurate registrations in an intuitive visualisation. The second experiment shows wider basins of convergence for higher scoring viewpoints. The third experiment shows that a comparable registration performance can be achieved by at least two merged high scoring views and four low scoring views. Hence, the focus could change from the acquisition of a large liver surface to a small number of distinctive patches, thereby giving a more explicit protocol for surface reconstruction. We discuss the application of the proposed method on clinical data and show initial results. The proposed simulation framework shows promising results to motivate more research into a comprehensive view planning method for efficient registration in laparoscopic liver surgery.

  4. Merging and energy exchange between optical filaments

    NASA Astrophysics Data System (ADS)

    Georgieva, D. A.; Kovachev, L. M.

    2015-10-01

    We investigate the nonlinear interaction between collinear femtosecond laser pulses with power slightly above the critical power for self-focusing P_cr through the processes of cross-phase modulation (CPM) and degenerate four-photon parametric mixing (FPPM). When there is no initial phase difference between the pulses, we observe attraction between the pulses due to CPM. The final result is the merging of the pulses into a single filament with higher power. By the method of moments it is found that the attraction depends on the distance between the pulses and has a potential character. In the second case we study energy exchange between filaments. This process is described through the FPPM scheme and requires an initial phase difference between the waves.

  5. Merging NLO multi-jet calculations with improved unitarization

    NASA Astrophysics Data System (ADS)

    Bellm, Johannes; Gieseke, Stefan; Plätzer, Simon

    2018-03-01

    We present an algorithm to combine multiple matrix elements at LO and NLO with a parton shower. We build on the unitarized merging paradigm. The inclusion of higher orders and multiplicities reduces the scale uncertainties for observables sensitive to hard emissions, while preserving the features of inclusive quantities. The combination allows further soft and collinear emissions to be predicted by the all-order parton-shower approximation. We inspect the impact of terms that are formally, but not parametrically, negligible. We present results for a number of collider observables where multiple jets are observed, either on their own or in the presence of additional uncoloured particles. The algorithm is implemented in the event generator Herwig.

  6. Cosmological perturbation effects on gravitational-wave luminosity distance estimates

    NASA Astrophysics Data System (ADS)

    Bertacca, Daniele; Raccanelli, Alvise; Bartolo, Nicola; Matarrese, Sabino

    2018-06-01

    Waveforms of gravitational waves provide information about a variety of parameters of the merging binary system. However, standard calculations have been performed assuming an FLRW universe with no perturbations. In reality this assumption should be dropped: we show that the inclusion of cosmological perturbations translates into corrections to the estimates of astrophysical parameters derived for merging binary systems. We compute corrections to the estimate of the luminosity distance due to velocity, volume, lensing and gravitational potential effects. Our results show that the amplitude of the corrections will be negligible for current instruments, mildly important for experiments like the planned DECIGO, and very important for future ones such as the Big Bang Observer.

  7. The HelCat Helicon-Cathode Device at UNM

    NASA Astrophysics Data System (ADS)

    Cyrin, Bricette; Watts, Christopher; Gilmore, Mark; Hayes, Tiffany; Kelly, Ralph; Leach, Christopher; Lynn, Alan; Sanchez, Andrew; Xie, Shuangwei; Yan, Lincan; Zhang, Yue

    2009-11-01

    The HelCat helicon-cathode device is a dual-source linear plasma device for investigating a wide variety of basic plasma phenomena. HelCat is 4 m long and 50 cm in diameter, with an axial magnetic field < 2.2 kG. An RF helicon source is at one end of the device, and a thermionic BaO-Ni cathode is at the other end. Current research topics include the relationship of turbulence to sheared plasma flows, deterministic chaos, Alfvén wave propagation and damping, and merging plasma interaction. We present an overview of the ongoing research, focusing on recent results of merging helicon and cathode plasmas, and will show movies of the process.

  8. Merging and energy exchange between optical filaments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Georgieva, D. A., E-mail: dgeorgieva@tu-sofia.bg; Kovachev, L. M.

    2015-10-28

    We investigate the nonlinear interaction between collinear femtosecond laser pulses with power slightly above the critical power for self-focusing P_cr through the processes of cross-phase modulation (CPM) and degenerate four-photon parametric mixing (FPPM). When there is no initial phase difference between the pulses, we observe attraction between the pulses due to CPM. The final result is the merging of the pulses into a single filament with higher power. By the method of moments it is found that the attraction depends on the distance between the pulses and has a potential character. In the second case we study energy exchange between filaments. This process is described through the FPPM scheme and requires an initial phase difference between the waves.

  9. The dynamical evolution of transiting planetary systems including a realistic collision prescription

    NASA Astrophysics Data System (ADS)

    Mustill, Alexander J.; Davies, Melvyn B.; Johansen, Anders

    2018-05-01

    Planet-planet collisions are a common outcome of instability in systems of transiting planets close to the star, as well as occurring during in-situ formation of such planets from embryos. Previous N-body studies of instability amongst transiting planets have assumed that collisions result in perfect merging. Here, we explore the effects of implementing a more realistic collision prescription on the outcomes of instability and in-situ formation at orbital radii of a few tenths of an au. There is a strong effect on the outcome of the growth of planetary embryos, so long as the debris thrown off in collisions is rapidly removed from the system (which happens by collisional processing to dust, and then removal by radiation forces) and embryos are small (<0.1 M⊕). If this is the case, then systems form fewer detectable (≥1 M⊕) planets than systems evolved under the assumption of perfect merging in collisions. This provides some contribution to the "Kepler Dichotomy": the observed over-abundance of single-planet systems. The effects of changing the collision prescription on unstable mature systems of super-Earths are less pronounced. Perfect mergers account for only a minority of collision outcomes in such systems, but most collisions resulting in mass loss are grazing impacts in which only a few per cent of mass is lost. As a result, there is little impact on the final masses and multiplicities of the systems after instability when compared to systems evolved under the assumption that collisions always result in perfect merging.

  10. Metamaterials-based sensor to detect and locate nonlinear elastic sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gliozzi, Antonio S.; Scalerandi, Marco; Miniaci, Marco

    2015-10-19

    In recent years, acoustic metamaterials have attracted increasing scientific interest for very diverse technological applications ranging from sound abatement to ultrasonic imaging, mainly due to their ability to act as band-stop filters. At the same time, the concept of chaotic cavities has recently been proposed as an efficient tool to enhance the quality of nonlinear signal analysis, particularly in the ultrasonic/acoustic case. The goal of the present paper is to merge the two concepts in order to propose a metamaterial-based device that can be used as a natural and selective linear filter for the detection of signals resulting from the propagation of elastic waves in nonlinear materials, e.g., in the presence of damage, and as a detector for the damage itself in time-reversal experiments. Numerical simulations demonstrate the feasibility of the approach and the potential of the device in providing improved signal-to-noise ratios and enhanced focusing on the defect locations.

  11. The growth of language: Universal Grammar, experience, and principles of computation.

    PubMed

    Yang, Charles; Crain, Stephen; Berwick, Robert C; Chomsky, Noam; Bolhuis, Johan J

    2017-10-01

    Human infants develop language remarkably rapidly and without overt instruction. We argue that the distinctive ontogenesis of child language arises from the interplay of three factors: domain-specific principles of language (Universal Grammar), external experience, and properties of non-linguistic domains of cognition including general learning mechanisms and principles of efficient computation. We review developmental evidence that children make use of hierarchically composed structures ('Merge') from the earliest stages and at all levels of linguistic organization. At the same time, longitudinal trajectories of development show sensitivity to the quantity of specific patterns in the input, which suggests the use of probabilistic processes as well as inductive learning mechanisms that are suitable for the psychological constraints on language acquisition. By considering the place of language in human biology and evolution, we propose an approach that integrates principles from Universal Grammar and constraints from other domains of cognition. We outline some initial results of this approach as well as challenges for future research.

  12. Transforming GIS data into functional road models for large-scale traffic simulation.

    PubMed

    Wilkie, David; Sewall, Jason; Lin, Ming C

    2012-06-01

    There exists a vast amount of geographic information system (GIS) data that model road networks around the world as polylines with attributes. In this form, the data are insufficient for applications such as simulation and 3D visualization, tools which will grow in power and demand as sensor data become more pervasive and as governments try to optimize their existing physical infrastructure. In this paper, we propose an efficient method for enhancing a road map from a GIS database to create a geometrically and topologically consistent 3D model to be used in real-time traffic simulation, interactive visualization of virtual worlds, and autonomous vehicle navigation. The resulting representation provides important road features for traffic simulations, including ramps, highways, overpasses, legal merge zones, and intersections with arbitrary states, and it is independent of the simulation methodologies. We test the 3D models of road networks generated by our algorithm on real-time traffic simulation using both macroscopic and microscopic techniques.

  13. Stabilization of Particle Discrimination Efficiencies for Neutron Spectrum Unfolding With Organic Scintillators

    NASA Astrophysics Data System (ADS)

    Lawrence, Chris C.; Polack, J. K.; Febbraro, Michael; Kolata, J. J.; Flaska, Marek; Pozzi, S. A.; Becchetti, F. D.

    2017-02-01

    The literature discussing pulse-shape discrimination (PSD) in organic scintillators dates back several decades. However, little has been written about PSD techniques that are optimized for neutron spectrum unfolding. Variation in n-γ misclassification rates and in the γ/n ratio of incident fields can distort the neutron pulse-height response of scintillators, and these distortions can in turn cause large errors in unfolded spectra. New applications in arms-control verification call for detection of lower-energy neutrons, for which PSD is particularly problematic. In this article, we propose techniques for removing distortions of the pulse-height response that result from the merging of PSD distributions in the low-pulse-height region. These techniques take advantage of the repeatable shapes of PSD distributions, which are governed by the counting statistics of scintillation-photon populations. We validate the proposed techniques using accelerator-based time-of-flight measurements and then demonstrate them by unfolding the Watt spectrum from a measurement with a 252Cf neutron source.

  14. A Theory for Self-consistent Acceleration of Energetic Charged Particles by Dynamic Small-scale Flux Ropes

    NASA Astrophysics Data System (ADS)

    le Roux, J. A.; Zank, G. P.; Khabarova, O.; Webb, G. M.

    2016-12-01

    Simulations of charged particle acceleration in turbulent plasma regions with numerous small-scale contracting and merging (reconnecting) magnetic islands/flux ropes emphasize the key role of temporary particle trapping in these structures for efficient acceleration that can result in power-law spectra. In response, a comprehensive kinetic transport theory framework was developed by Zank et al. and le Roux et al. to capture the essential physics of energetic particle acceleration in solar wind regions containing numerous dynamic small-scale flux ropes. Examples of test-particle solutions exhibiting hard power-law spectra for energetic particles were presented in recent publications by both Zank et al. and le Roux et al. However, the considerable pressure in the accelerated particles suggests the need for expanding the kinetic transport theory to enable a self-consistent description of energy exchange between energetic particles and small-scale flux ropes. We plan to present the equations of an expanded kinetic transport theory framework that will enable such a self-consistent description.

  15. Critical exponents of the explosive percolation transition

    NASA Astrophysics Data System (ADS)

    da Costa, R. A.; Dorogovtsev, S. N.; Goltsev, A. V.; Mendes, J. F. F.

    2014-04-01

    In a new type of percolation phase transition, which was observed in a set of nonequilibrium models, each new connection between vertices is chosen from a number of possibilities by an Achlioptas-like algorithm. This causes preferential merging of small components and delays the emergence of the percolation cluster. First simulations led to the conclusion that the percolation cluster in this irreversible process is born discontinuously, by a discontinuous phase transition, which resulted in the term "explosive percolation transition." We have shown that this transition is actually continuous (second order), though with an anomalously small critical exponent of the percolation cluster. Here we propose an efficient numerical method enabling us to find the critical exponents and other characteristics of this second-order transition for a representative set of explosive percolation models with different numbers of choices. The method is based on gluing together the numerical solutions of evolution equations for the cluster size distribution and power-law asymptotics. For each of the models, with high precision, we obtain critical exponents and the critical point.
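
    The Achlioptas-like edge selection described above can be sketched in a few lines. This is a minimal illustration of the product rule (the function name, seed, and parameters are invented for illustration; the paper's own models and numerics are more elaborate), using a union-find structure to track component sizes:

```python
import random

def explosive_percolation(n, m, seed=0):
    """Product-rule Achlioptas process: n vertices, m edge additions.

    At each step two candidate edges are drawn uniformly at random and the
    one whose endpoint components have the smaller size product is kept,
    which preferentially merges small components and delays the emergence
    of the giant cluster. Returns the size of the largest component.
    """
    rng = random.Random(seed)
    parent = list(range(n))
    size = [1] * n

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def score(edge):
        # Product of the component sizes an edge would join.
        a, b = find(edge[0]), find(edge[1])
        return size[a] * size[b] if a != b else size[a] ** 2

    largest = 1
    for _ in range(m):
        candidates = [(rng.randrange(n), rng.randrange(n)) for _ in range(2)]
        u, v = min(candidates, key=score)  # the product rule
        a, b = find(u), find(v)
        if a != b:
            if size[a] < size[b]:
                a, b = b, a
            parent[b] = a  # union by size
            size[a] += size[b]
            largest = max(largest, size[a])
    return largest

print(explosive_percolation(n=1000, m=1500))  # giant component well past the transition
```

    Running the sketch below and above roughly 0.89 edges per vertex reproduces the delayed, sharp growth of the largest component that motivated the "explosive" label.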

  16. Theranostic GO-based nanohybrid for tumor induced imaging and potential combinational tumor therapy.

    PubMed

    Qin, Si-Yong; Feng, Jun; Rong, Lei; Jia, Hui-Zhen; Chen, Si; Liu, Xiang-Ji; Luo, Guo-Feng; Zhuo, Ren-Xi; Zhang, Xian-Zheng

    2014-02-12

    A graphene oxide (GO)-based theranostic nanohybrid is designed for tumor-induced imaging and potential combinational tumor therapy. The anti-tumor drug doxorubicin (DOX) is chemically conjugated to the poly(ethylenimine)-co-poly(ethylene glycol) (PEI-PEG)-grafted GO via an MMP2-cleavable PLGLAG peptide linkage. The therapeutic efficacy of DOX is chemically locked and its intrinsic fluorescence is quenched by GO under normal physiological conditions. Once stimulated by the MMP2 enzyme over-expressed in tumor tissues, the resulting peptide cleavage permits the unloading of DOX for tumor therapy and the concurrent fluorescence recovery of DOX for in situ tumor cell imaging. Attractively, this PEI-bearing nanohybrid can mediate efficient DNA transfection and shows great potential for combinational drug/gene therapy. This tumor-induced imaging and potential combinational therapy will open a window for tumor treatment by offering a unique theranostic approach that merges diagnostic capability with pathology-responsive therapeutic function.

  17. Technology Tips

    ERIC Educational Resources Information Center

    Hollebrands, Karen Flanagan

    2004-01-01

    Directions for using the Mail Merge feature in Microsoft Office are presented. Using a spreadsheet together with mail merge allows one to create different assessments, complete with worked solutions, and to generate them on demand or save them for later use.
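
    The spreadsheet-plus-mail-merge workflow can be mimicked programmatically. This is a hedged sketch (the template fields and CSV columns are invented for illustration, and Microsoft Office itself is not involved): each data row fills a template, just as each spreadsheet row fills the merge fields of a main document.

```python
import csv
import io

# Template with merge fields, analogous to a mail-merge main document.
TEMPLATE = "Dear {name},\nYour quiz covers {topic}. Worked solution: {solution}.\n"

def mail_merge(template, csv_text):
    """Produce one filled document per data row, like Office's Mail Merge."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [template.format(**row) for row in rows]

# Data rows, analogous to the spreadsheet of students and assessments.
data = "name,topic,solution\nAda,fractions,3/4\nBen,decimals,0.75\n"
letters = mail_merge(TEMPLATE, data)
print(letters[0])
```

    Generating "on demand" then amounts to calling the function with a fresh data set; saving for later use is just writing the filled strings to files.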

  18. Implementation of a protocol for assembling DNA in a Teflon tube

    NASA Astrophysics Data System (ADS)

    Walsh, Edmond J.; Feuerborn, Alexander; Cook, Peter R.

    2017-02-01

    Droplet-based microfluidics continues to grow as a platform for chemical and biological reactions using small quantities of fluids; however, complex protocols are rarely possible in existing devices. This paper implements a new approach to the merging of drops, combined with magnetic bead manipulation, for the creation of a ligated double-stranded DNA molecule using "Gibson assembly" chemistry. DNA assembly is initially accomplished through the merging, and mixing, of five drops followed by a thermal cycle. Then, integrating this drop-merging method with magnetic beads enables the implementation of a more complete protocol consisting of nine wash steps, merging of four drops, and transport of selective reagents between twelve drops using magnetic particles, followed by a thermal cycle and finally the deposition of a purified drop into an Eppendorf tube for downstream analysis. Gel electrophoresis is used to confirm successful DNA assembly.

  19. A profile of U.S. hospital mergers.

    PubMed

    Harrison, Jeffrey P; McDowell, Geoffrey M

    2005-01-01

    According to Modern Healthcare's Annual Report on Mergers and Acquisitions, the number of hospital mergers has declined significantly since the Balanced Budget Act of 1997. This study evaluated the market characteristics, organizational factors and operational performance of these hospitals prior to merger. We found that merged hospitals were more likely to be located in markets with higher per capita income and higher HMO penetration. Merged hospitals were larger in size and had greater clinical complexity as measured by increased services. Finally, we found that merged hospitals had higher occupancy rates, lower return on assets (ROA), and older facilities. From a managerial perspective, merged hospitals display many of the characteristics of an organization in financial distress. From a policy standpoint, the decline in hospital mergers subsequent to the Balanced Budget Act of 1997 may affect the long-term survivability of many U.S. hospitals.

  20. The use of Merging and Aggregation Operators for MRDB Data Feeding

    NASA Astrophysics Data System (ADS)

    Kozioł, Krystian; Lupa, Michał

    2013-12-01

    This paper presents the application of two generalization operators, merging and displacement, in the process of automatic data feeding of a multiresolution database of topographic objects from large-scale databases (1:500 to 1:5000). An ordered collection of objects forms a map layer that, in the process of generalization, is subjected to merging and displacement in order to maintain recognizability at the reduced scale of the map. The solution to the above problem is provided by the algorithms described in this work; these algorithms use the standard recognition of drawings (Chrobak 2010), independent of the user. A digital cartographic generalization process is a set of consecutive operators in which merging and aggregation play a key role; their proper operation has a significant impact on the qualitative assessment of data generalization.

  1. Suppressed star formation by a merging cluster system

    DOE PAGES

    Mansheim, A. S.; Lemaux, B. C.; Tomczak, A. R.; ...

    2017-03-24

    We examine the effects of an impending cluster merger on galaxies in the large scale structure (LSS) RX J0910 at z = 1.105. Using multi-wavelength data, including 102 spectral members drawn from the Observations of Redshift Evolution in Large Scale Environments (ORELSE) survey and precise photometric redshifts, we calculate star formation rates and map the specific star formation rate density of the LSS galaxies. These analyses, along with an investigation of the color-magnitude properties of LSS galaxies, indicate lower levels of star formation activity in the region between the merging clusters relative to the outskirts of the system. We suggest that gravitational tidal forces due to the potential of the merging halos may be the physical mechanism responsible for the observed suppression of star formation in galaxies caught between the merging clusters.

  2. GEM: a dynamic tracking model for mesoscale eddies in the ocean

    NASA Astrophysics Data System (ADS)

    Li, Qiu-Yang; Sun, Liang; Lin, Sheng-Fu

    2016-12-01

    The Genealogical Evolution Model (GEM) presented here is an efficient logical model used to track dynamic evolution of mesoscale eddies in the ocean. It can distinguish between different dynamic processes (e.g., merging and splitting) within a dynamic evolution pattern, which is difficult to accomplish using other tracking methods. To this end, the GEM first uses a two-dimensional (2-D) similarity vector (i.e., a pair of ratios of overlap area between two eddies to the area of each eddy) rather than a scalar to measure the similarity between eddies, which effectively solves the "missing eddy" problem (temporarily lost eddy in tracking). Second, for tracking when an eddy splits, the GEM uses both "parent" (the original eddy) and "child" (eddy split from parent) and the dynamic processes are described as the birth and death of different generations. Additionally, a new look-ahead approach with selection rules effectively simplifies computation and recording. All of the computational steps are linear and do not include iteration. Given the pixel number of the target region L, the maximum number of eddies M, the number N of look-ahead time steps, and the total number of time steps T, the total computer time is O(LM(N + 1)T). The tracking of each eddy is very smooth because we require that the snapshots of each eddy on adjacent days overlap one another. Although eddy splitting or merging is ubiquitous in the ocean, they have different geographic distributions in the North Pacific Ocean. Both the merging and splitting rates of the eddies are high, especially at the western boundary, in currents and in "eddy deserts". The GEM is useful not only for satellite-based observational data, but also for numerical simulation outputs. It is potentially useful for studying dynamic processes in other related fields, e.g., the dynamics of cyclones in meteorology.
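
    The 2-D similarity vector at the heart of the GEM can be sketched as follows. Here each eddy snapshot is represented as a set of grid-cell coordinates, an assumption made for illustration (the function name and data structures are not from the paper): the vector is the pair of overlap-area ratios, so a small eddy largely contained in a big one still scores high in its own component rather than being averaged away by a scalar measure.

```python
def similarity_vector(eddy_a, eddy_b):
    """Pair of overlap-area ratios used instead of a single scalar score.

    Each eddy is a set of (i, j) grid cells; the result is
    (overlap / |a|, overlap / |b|).
    """
    overlap = len(eddy_a & eddy_b)
    return overlap / len(eddy_a), overlap / len(eddy_b)

# Two snapshots of eddies sharing one grid cell:
a = {(0, 0), (0, 1), (1, 0), (1, 1)}
b = {(1, 1), (1, 2)}
print(similarity_vector(a, b))
```

    Tracking then links eddies on adjacent days when both components of the vector are large enough, which is what lets the model require overlapping snapshots and still detect merging and splitting.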

  3. A Validation Study of Merging and Spacing Techniques in a NAS-Wide Simulation

    NASA Technical Reports Server (NTRS)

    Glaab, Patricia C.

    2011-01-01

    In November 2010, Intelligent Automation, Inc. (IAI) delivered an M&S software tool that allows system-level studies of the complex terminal airspace with the ACES simulation. The software was evaluated against current-day arrivals in the Atlanta TRACON using Atlanta's Hartsfield-Jackson International Airport (KATL) arrival schedules. Results of this validation effort are presented, describing data sets, traffic flow assumptions and techniques, and arrival rate comparisons between reported landings at Atlanta versus simulated arrivals using the same traffic sets in ACES equipped with M&S. Initial results showed the simulated system capacity to be significantly below the arrival capacity seen at KATL. Data were gathered for Atlanta using commercial airport and flight tracking websites (like FlightAware.com) and analyzed to ensure that compatible techniques were used for result reporting and comparison. TFM operators for Atlanta were consulted for tuning the final simulation parameters and for guidance in flow management techniques during high-volume operations. Using these modified parameters and incorporating TFM guidance for efficiencies in flowing aircraft, the arrival capacity of KATL was matched by the simulation. Following this validation effort, a sensitivity study was conducted to measure the impact of variations in system parameters on the Atlanta airport arrival capacity.

  4. Onboard Safety Technology Survey Synthesis - Final Report

    DOT National Transportation Integrated Search

    2008-01-01

    The Federal Motor Carrier Safety Administration (FMCSA) funded this project to collect, merge, and conduct an assessment of onboard safety system surveys and resulting data sets that may benefit commercial vehicle operations safety and future researc...

  5. The Role of Higher-Order Modes on the Electromagnetic Whistler-Cyclotron Wave Fluctuations of Thermal and Non-Thermal Plasmas

    NASA Technical Reports Server (NTRS)

    Vinas, Adolfo F.; Moya, Pablo S.; Navarro, Roberto; Araneda, Jamie A.

    2014-01-01

    Two fundamental challenging problems of laboratory and astrophysical plasmas are understanding the relaxation of collisionless plasmas with nearly isotropic velocity distribution functions and the resultant state of near-equipartition of energy density with electromagnetic plasma turbulence. Here, we present the results of a study which shows the role that higher-order modes play in limiting the electromagnetic whistler-like fluctuations in thermal and non-thermal plasmas. Our main results show that for a thermal plasma the magnetic fluctuations are confined to regions that are bounded by the least-damped higher-order modes. We further show that the zone where the whistler-cyclotron normal modes merge with the electromagnetic fluctuations shifts to longer wavelengths as beta(sub e) increases. This merging zone has been interpreted as the beginning of the region where the whistler-cyclotron waves lose their identity and become heavily damped while merging with the fluctuations. Our results further indicate that in the case of non-thermal plasmas, the higher-order modes do not confine the fluctuations, due to effective higher-temperature effects and the excess of suprathermal plasma particles. The analysis presented here considers the second-order theory of fluctuations and the dispersion relation of weakly transverse fluctuations, with wave vectors parallel to the uniform background magnetic field, in a finite-temperature isotropic bi-Maxwellian and Tsallis-kappa-like magnetized electron-proton plasma. Our results indicate that the spontaneously emitted electromagnetic fluctuations are in fact enhanced over these quasi modes, suggesting that such modes play an important role in the emission and absorption of electromagnetic fluctuations in thermal or quasi-thermal plasmas.

  6. Merging pedigree databases to describe and compare mating practices and gene flow between pedigree dogs in France, Sweden and the UK.

    PubMed

    Wang, S; Leroy, G; Malm, S; Lewis, T; Strandberg, E; Fikse, W F

    2017-04-01

    Merging pedigree databases across countries may improve the ability of kennel organizations to monitor genetic variability and health-related issues of pedigree dogs. We used data provided by the Société Centrale Canine (France), Svenska Kennelklubben (Sweden) and the Kennel Club (UK) to study the feasibility of merging pedigree databases across countries and describe breeding practices and international gene flow within the following four breeds: Bullmastiff (BMA), English setter (ESE), Bernese mountain dog (BMD) and Labrador retriever (LBR). After merging the databases, genealogical parameters and founder contributions were calculated according to the birth period, breed and registration country of the dogs. Throughout the investigated period, mating between close relatives, measured as the proportion of inbred individuals (considering only two generations of pedigree), decreased or remained stable, with the exception of LBR in France. Gene flow between countries became more frequent, and the origins of populations within countries became more diverse over time. In conclusion, the potential to reduce inbreeding within purebred dog populations through exchanging breeding animals across countries was confirmed by an improved effective population size when merging populations from different countries. © 2016 Blackwell Verlag GmbH.
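    The gain from merging populations can be illustrated with one of the genealogical parameters the abstract mentions, founder contributions. The sketch below is illustrative only (not the paper's code, and the contribution figures are invented): it computes founder equivalents, f_e = 1 / Σ p_i², where p_i is founder i's proportional genetic contribution; merging two populations dominated by different founders balances the contributions and raises f_e.

```python
def founder_equivalents(contributions):
    """Founder equivalents f_e = 1 / sum(p_i^2) from raw contributions."""
    total = sum(contributions)
    props = [c / total for c in contributions]
    return 1.0 / sum(p * p for p in props)

# Two hypothetical national populations, each dominated by a different
# founder; merging them evens out the founder contributions.
france = [0.7, 0.1, 0.1, 0.1]
sweden = [0.1, 0.7, 0.1, 0.1]
merged = [f + s for f, s in zip(france, sweden)]
print(round(founder_equivalents(france), 2))   # 1.92
print(round(founder_equivalents(merged), 2))   # 2.94
```

    The same balancing effect underlies the improved effective population size reported when the French, Swedish and UK databases are merged.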

  7. Intermittent surface water connectivity: Fill and spill vs. fill and merge dynamics

    USGS Publications Warehouse

    Leibowitz, Scott G.; Mushet, David M.; Newton, Wesley E.

    2016-01-01

    Intermittent surface connectivity can influence aquatic systems, since chemical and biotic movements are often associated with water flow. Although often referred to as fill and spill, wetlands also fill and merge. We examined the effects of these connection types on water levels, ion concentrations, and biotic communities of eight prairie pothole wetlands between 1979 and 2015. Fill and spill caused pulsed surface water connections that were limited to periods following spring snow melt. In contrast, two wetlands connected through fill and merge experienced a nearly continuous, 20-year surface water connection and had completely coincident water levels. Fill and spill led to minimal convergence in dissolved ions and macroinvertebrate composition, while these constituents converged under fill and merge. The primary factor determining differences in response was duration of the surface water connection between wetland pairs. Our findings suggest that investigations into the effects of intermittent surface water connections should not consider these connections generically, but need to address the specific types of connections. In particular, fill and spill promotes external water exports while fill and merge favors internal storage. The behaviors of such intermittent connections will likely be accentuated under a future with more frequent and severe climate extremes.

  8. Simulations of Merging Helion Bunches on the AGS Injection Porch

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gardner, C. J.

    During the setup of helions for the FY2014 RHIC run it was discovered that the standard scheme for merging bunches on the AGS injection porch required an injection kicker pulse shorter than what was available. To overcome this difficulty, K. Zeno proposed and developed an interesting and unusual alternative which uses RF harmonic numbers 12, 4, 2 (rather than the standard 8, 4, 2) to merge 8 helion bunches into 2. In this note we carry out simulations that illustrate how the alternative scheme works and how it compares with the standard scheme. This is done in Sections 13 and 14. A scheme in which 6 bunches are merged into 1 is simulated in Section 15. This may be useful if more helions per merged bunch are needed in future runs. General formulae for the simulations are given in Sections 9 through 12. For completeness, Sections 1 through 8 give a derivation of the turn-by-turn equations of longitudinal motion at constant magnetic field. The derivation is based on the work of MacLachlan. The reader may wish to skip over these Sections and start with Section 9.

  9. Two-fluid and magnetohydrodynamic modelling of magnetic reconnection in the MAST spherical tokamak and the solar corona

    NASA Astrophysics Data System (ADS)

    Browning, P. K.; Cardnell, S.; Evans, M.; Arese Lucini, F.; Lukin, V. S.; McClements, K. G.; Stanier, A.

    2016-01-01

    Twisted magnetic flux ropes are ubiquitous in laboratory and astrophysical plasmas, and the merging of such flux ropes through magnetic reconnection is an important mechanism for restructuring magnetic fields and releasing free magnetic energy. The merging-compression scenario is one possible start-up scheme for spherical tokamaks, which has been used on the Mega Amp Spherical Tokamak (MAST). Two current-carrying plasma rings or flux ropes approach each other due to mutual attraction, forming a current sheet, and subsequently merge through magnetic reconnection into a single plasma torus, with substantial plasma heating. Two-dimensional resistive and Hall-magnetohydrodynamic simulations of this process, including a strong guide field, are reported. A model of the merging based on helicity-conserving relaxation to a minimum-energy state is also presented, extending previous work to tight-aspect-ratio toroidal geometry. This model leads to a prediction of the final state of the merging, in good agreement with simulations and experiment, as well as the average temperature rise. A relaxation model of reconnection between two or more flux ropes in the solar corona is also described, allowing for different senses of twist, and the implications for heating of the solar corona are discussed.

  10. Merging tree ring chronologies and climate system model simulated temperature by optimal interpolation algorithm in North America

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Xing, Pei; Luo, Yong; Zhao, Zongci; Nie, Suping; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua

    2015-04-01

    A new dataset of annual mean surface temperature over North America for the past 500 years has been constructed by applying an optimal interpolation (OI) algorithm. In total, 149 series were screened from the International Tree Ring Data Bank (ITRDB), including 69 maximum latewood density (MXD) and 80 tree ring width (TRW) chronologies. The simulated annual mean surface temperature derives from the past1000 experiment of the Community Climate System Model version 4 (CCSM4). Unlike existing research that applies data assimilation approaches to General Circulation Model (GCM) simulations, the errors of both the climate model simulation and the tree ring reconstructions were considered, with a view to combining the two parts in an optimal way. Variance matching (VM) was employed to calibrate the tree ring chronologies against CRUTEM4v, and the corresponding errors were estimated through a leave-one-out process. The background error covariance matrix was estimated statistically from samples of simulation results in a running 30-year window; in practice, it was calculated locally within the scanning range (2000 km in this research). Thus, the merging proceeded with a time-varying local gain matrix. The merging method (MM) was tested in two kinds of experiments, and the results indicated that the standard deviation of errors can be reduced to about 0.3 degree centigrade below that of the tree ring reconstructions and 0.5 degree centigrade below that of the model simulation. Obvious decadal variability can be identified in the MM results, including the evident cooling (0.10 degree per decade) in the 1940s-60s, where the model simulation exhibits a weak increasing trend (0.05 degree per decade) instead. 
MM results revealed a compromise spatial pattern of the linear trend of surface temperature during a typical period (1601-1800 AD) of the Little Ice Age, which basically accorded with the phase transitions of the Pacific Decadal Oscillation (PDO) and the Atlantic Multidecadal Oscillation (AMO). Through empirical orthogonal function and power spectrum analyses, it was demonstrated that, compared with the pure CCSM4 simulations, MM significantly improved the decadal variability of the gridded temperature in North America by merging in the temperature-sensitive tree ring records.
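    The weighting idea behind the OI merge can be seen in a scalar caricature. This is a deliberate simplification of the paper's gridded, covariance-based scheme (which uses a local, time-varying gain matrix); the function name and all numbers below are illustrative assumptions. The model simulation acts as the background, the calibrated tree-ring value as the observation, and the gain weights them by their error variances.

```python
def oi_merge(background, obs, var_bg, var_obs):
    """Scalar optimal-interpolation update: weight by error variances."""
    gain = var_bg / (var_bg + var_obs)          # how much to trust the obs
    analysis = background + gain * (obs - background)
    var_analysis = (1.0 - gain) * var_bg        # reduced posterior variance
    return analysis, var_analysis

# Model anomaly 0.5 degC (error variance 0.25) merged with a tree-ring
# estimate of -0.1 degC (error variance 0.09): the analysis leans toward
# the better-constrained tree-ring value, with smaller error variance.
t, v = oi_merge(0.5, -0.1, 0.25, 0.09)
print(round(t, 3), round(v, 3))
```

    The merged variance is always below both input variances, which is the scalar analogue of the 0.3 and 0.5 degree error reductions quoted above.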

  11. THE DYNAMICS OF MERGING CLUSTERS: A MONTE CARLO SOLUTION APPLIED TO THE BULLET AND MUSKET BALL CLUSTERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dawson, William A., E-mail: wadawson@ucdavis.edu

    2013-08-01

    Merging galaxy clusters have become one of the most important probes of dark matter, providing evidence for dark matter over modified gravity and even constraints on the dark matter self-interaction cross-section. To properly constrain the dark matter cross-section it is necessary to understand the dynamics of the merger, as the inferred cross-section is a function of both the velocity of the collision and the observed time since collision. While the best understanding of merging system dynamics comes from N-body simulations, these are computationally intensive and often explore only a limited volume of the merger phase space allowed by observed parameter uncertainty. Simple analytic models exist but the assumptions of these methods invalidate their results near the collision time, plus error propagation of the highly correlated merger parameters is unfeasible. To address these weaknesses I develop a Monte Carlo method to discern the properties of dissociative mergers and propagate the uncertainty of the measured cluster parameters in an accurate and Bayesian manner. I introduce this method, verify it against an existing hydrodynamic N-body simulation, and apply it to two known dissociative mergers: 1ES 0657-558 (Bullet Cluster) and DLSCL J0916.2+2951 (Musket Ball Cluster). I find that this method surpasses existing analytic models, providing accurate (10% level) dynamic parameter and uncertainty estimates throughout the merger history. This, coupled with minimal required a priori information (subcluster mass, redshift, and projected separation) and relatively fast computation (~6 CPU hours), makes this method ideal for large samples of dissociative merging clusters.
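    The Monte Carlo error-propagation idea can be sketched in a few lines. This is not Dawson's code: the crude two-body free-fall relation stands in for the full dynamical model, and all parameter values and names are invented for illustration. The pattern is the same, though: draw the observed merger parameters from their error distributions, push each draw through the dynamics, and read central values and uncertainties off the resulting sample.

```python
import math
import random

G = 4.30e-6  # gravitational constant, kpc (km/s)^2 / Msun (approximate)

def collision_speed(m1, m2, d_kpc):
    """Free-fall speed from rest at large separation down to distance d."""
    return math.sqrt(2.0 * G * (m1 + m2) / d_kpc)

random.seed(1)
samples = []
for _ in range(10000):
    m1 = random.gauss(1.5e14, 0.2e14)   # subcluster masses (Msun), invented
    m2 = random.gauss(2.3e14, 0.3e14)
    d = random.gauss(700.0, 100.0)      # projected separation (kpc), invented
    samples.append(collision_speed(m1, m2, d))

samples.sort()
median = samples[len(samples) // 2]
lo, hi = samples[len(samples) // 50], samples[-len(samples) // 50]
print(f"v ~ {median:.0f} km/s (2-98%: {lo:.0f}-{hi:.0f} km/s)")
```

    The sorted sample gives percentile-based uncertainty intervals directly, which is how correlated input errors can be propagated without linearization.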

  12. Improvement of forecast skill for severe weather by merging radar-based extrapolation and storm-scale NWP corrected forecast

    NASA Astrophysics Data System (ADS)

    Wang, Gaili; Wong, Wai-Kin; Hong, Yang; Liu, Liping; Dong, Jili; Xue, Ming

    2015-03-01

    The primary objective of this study is to improve the performance of deterministic high-resolution forecasts of rainfall caused by severe storms by merging a radar-based extrapolation scheme with a storm-scale Numerical Weather Prediction (NWP) model. The effectiveness of the Multi-scale Tracking and Forecasting Radar Echoes (MTaRE) model was compared with that of a storm-scale NWP model named the Advanced Regional Prediction System (ARPS) for forecasting a violent tornado event that developed over parts of western and much of central Oklahoma on May 24, 2011. Bias corrections were then performed to improve the accuracy of the ARPS forecasts. Finally, the corrected ARPS forecast and the radar-based extrapolation were optimally merged using a hyperbolic tangent weight scheme. The comparison of forecast skill between MTaRE and ARPS at a high spatial resolution of 0.01° × 0.01° and a high temporal resolution of 5 min showed that MTaRE outperformed ARPS in terms of index of agreement and mean absolute error (MAE). MTaRE had a better Critical Success Index (CSI) for lead times under 20 min and was comparable to ARPS for 20- to 50-min lead times, while ARPS had a better CSI for lead times over 50 min. Bias correction significantly improved the ARPS forecasts in terms of MAE and index of agreement, although the CSI of the corrected forecasts was similar to that of the uncorrected ones. Moreover, optimally merging the two forecasts with the hyperbolic tangent weight scheme further improved the forecast accuracy and yielded more stable forecasts.
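    A hyperbolic tangent weight scheme of this kind can be sketched as follows. The crossover point, transition width, and function names below are assumptions for illustration, not the parameters of Wang et al.; the point is only the shape of the blend, which hands the forecast smoothly from the radar extrapolation at short lead times to the bias-corrected NWP model at long lead times.

```python
import math

def tanh_weight(lead_min, center=45.0, width=15.0):
    """NWP weight in [0, 1], rising smoothly around `center` min of lead time."""
    return 0.5 * (1.0 + math.tanh((lead_min - center) / width))

def merged_forecast(radar_mm, nwp_mm, lead_min):
    """Blend a radar-extrapolation and an NWP rainfall value for one pixel."""
    w = tanh_weight(lead_min)
    return (1.0 - w) * radar_mm + w * nwp_mm

# At a 10-min lead the blend is nearly all extrapolation; at 80 min it is
# nearly all NWP; at the crossover (45 min) the two contribute equally.
for lead in (10, 45, 80):
    print(lead, round(merged_forecast(radar_mm=5.0, nwp_mm=3.0, lead_min=lead), 2))
```

    The smoothness of tanh avoids the discontinuity a hard switch between the two forecast sources would introduce at the crossover lead time.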

  13. Temporal and spatial patterns of wetland extent influence variability of surface water connectivity in the Prairie Pothole Region, United States

    USGS Publications Warehouse

    Vanderhoof, Melanie; Alexander, Laurie C.; Todd, Jason

    2016-01-01

    Context. Quantifying variability in landscape-scale surface water connectivity can help improve our understanding of the multiple effects of wetlands on downstream waterways. Objectives. We examined how wetland merging and the coalescence of wetlands with streams varied both spatially (among ecoregions) and interannually (from drought to deluge) across parts of the Prairie Pothole Region. Methods. Wetland extent was derived over a time series (1990-2011) using Landsat imagery. Changes in landscape-scale connectivity, generated by the physical coalescence of wetlands with other surface water features, were quantified by fusing static wetland and stream datasets with Landsat-derived wetland extent maps, and related to multiple wetness indices. The usage of Landsat allows for decadal-scale analysis, but limits the types of surface water connections that can be detected. Results. Wetland extent correlated positively with the merging of wetlands and wetlands with streams. Wetness conditions, as defined by drought indices and runoff, were positively correlated with wetland extent, but less consistently correlated with measures of surface water connectivity. The degree of wetland-wetland merging was found to depend less on total wetland area or density, and more on climate conditions, as well as the threshold for how wetland/upland was defined. In contrast, the merging of wetlands with streams was positively correlated with stream density, and inversely related to wetland density. Conclusions. Characterizing the degree of surface water connectivity within the Prairie Pothole Region in North America requires consideration of 1) climate-driven variation in wetness conditions and 2) within-region variation in wetland and stream spatial arrangements.

  14. Exponentially decaying interaction potential of cavity solitons

    NASA Astrophysics Data System (ADS)

    Anbardan, Shayesteh Rahmani; Rimoldi, Cristina; Kheradmand, Reza; Tissoni, Giovanna; Prati, Franco

    2018-03-01

    We analyze the interaction of two cavity solitons in an optically injected vertical cavity surface emitting laser above threshold. We show that they experience an attractive force even when their distance is much larger than their diameter, and eventually they merge. Since the merging time depends exponentially on the initial distance, we suggest that the attraction could be associated with an exponentially decaying interaction potential, similarly to what is found for hydrophobic materials. We also show that the merging time is simply related to the characteristic times of the laser, photon lifetime, and carrier lifetime.

  15. Cluster analysis of medical service resources at district hospitals in Taiwan, 2007-2011.

    PubMed

    Tseng, Shu-Fang; Lee, Tian-Shyug; Deng, Chung-Yeh

    2015-12-01

    A vast amount of the annual/national budget has been spent on the National Health Insurance program in Taiwan. However, the market for district hospitals has become increasingly competitive, and district hospitals are under pressure to optimize the use of health service resources. Therefore, we employed a clustering method to explore variations in input and output service volumes, and investigate resource allocation and health care service efficiency in district hospitals. Descriptive and cluster analyses were conducted to examine the district hospitals included in the Ministry of Health and Welfare database during 2007-2011. The results, according to the types of hospital ownership, suggested that the number of public hospitals has decreased and that of private hospitals increased; the largest increase in the number of district hospitals occurred when Taichung City was merged into Taichung County. The descriptive statistics from 2007 to 2011 indicated that 43% and 36.4% of the hospitals had 501-800 occupied beds and 101-200 physicians, respectively, and > 401 medical staff members. However, the number of outpatients and discharged patients exceeded 6001 and 90,001, respectively. In addition, the highest percentage of hospitals (43.9%) had 30,001-60,000 emergency department patients. In 2010, the number of patients varied widely, and the analysis of variance cluster results were nonsignificant (p > 0.05). District hospitals belonging to low-throughput and low-performance groups were encouraged to improve resource utilization for enhancing health care service efficiency. Copyright © 2015. Published by Elsevier Taiwan.

  16. The impact of the CartoSound® image directly acquired from the left atrium for integration in atrial fibrillation ablation.

    PubMed

    Kaseno, Kenichi; Hisazaki, Kaori; Nakamura, Kohki; Ikeda, Etsuko; Hasegawa, Kanae; Aoyama, Daisetsu; Shiomi, Yuichiro; Ikeda, Hiroyuki; Morishita, Tetsuji; Ishida, Kentaro; Amaya, Naoki; Uzui, Hiroyasu; Tada, Hiroshi

    2018-04-14

    Intracardiac echocardiographic (ICE) imaging might be useful for integrating three-dimensional computed tomographic (CT) images for left atrial (LA) catheter navigation during atrial fibrillation (AF) ablation. However, the optimal CT image integration method using ICE has not been established. This study included 52 AF patients who underwent successful circumferential pulmonary vein isolation (CPVI). In all patients, CT image integration was performed after the CPVI with the following two methods: (1) using ICE images of the LA derived from the right atrium and right ventricular outflow tract (RA-merge) and (2) using ICE images of the LA directly derived from the LA added to the image for the RA-merge (LA-merge). The accuracy of these two methods was assessed by the distances between the integrated CT image and the ICE image (ICE-to-CT distance), and between the CT image and the actual ablated sites for the CPVI (CT-to-ABL distance). The mean ICE-to-CT distance was comparable between the two methods (RA-merge = 1.6 ± 0.5 mm, LA-merge = 1.7 ± 0.4 mm; p = 0.33). However, the mean CT-to-ABL distance was shorter for the LA-merge (2.1 ± 0.6 mm) than for the RA-merge (2.5 ± 0.8 mm; p < 0.01). The LA, especially the left-sided PVs and the LA roof, was more sharply delineated by direct LA imaging; whereas the greatest CT-to-ABL distance was observed at the roof portion of the left superior PV (3.7 ± 2.8 mm) after the RA-merge, it improved to 2.6 ± 1.9 mm after the LA-merge (p < 0.01). Additional ICE images directly acquired from the LA might lead to a greater accuracy of the CT image integration for the CPVI.

  17. Evaluation of Airborne Precision Spacing in a Human-in-the-Loop Experiment

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.

    2005-01-01

    A significant bottleneck in the current air traffic system occurs at the runway. Expanding airports and adding new runways will help solve this problem; however, this comes with significant costs: financially, politically and environmentally. A complementary solution is to safely increase the capacity of current runways. This can be achieved by precisely spacing aircraft at the runway threshold, with a resulting reduction in the spacing buffer required under today's operations. At NASA's Langley Research Center, the Airspace Systems program has been investigating airborne technologies and procedures that will assist the flight crew in achieving precise spacing behind another aircraft. A new spacing clearance allows the pilot to follow speed cues from a new on-board guidance system called Airborne Merging and Spacing for Terminal Arrivals (AMSTAR). AMSTAR receives Automatic Dependent Surveillance-Broadcast (ADS-B) reports from an assigned, leading aircraft and calculates the appropriate speed for the ownship to fly to achieve the desired spacing interval, time- or distance-based, at the runway threshold. Since the goal is overall system capacity, the speed guidance algorithm is designed to provide system-wide benefits and stability to a string of arriving aircraft. An experiment was recently performed at the NASA Langley Air Traffic Operations Laboratory (ATOL) to test the flexibility of Airborne Precision Spacing operations under a variety of operational conditions. These included several types of merge and approach geometries along with the complementary merging and in-trail operations. Twelve airline pilots and four controllers participated in this simulation. Performance and questionnaire data were collected from a total of eighty-four individual arrivals. The pilots were able to achieve precise spacing with a mean error of 0.5 seconds and a standard deviation of 4.7 seconds. 
No statistically significant differences in spacing performance were found between in-trail and merging operations or among the three modeled airspaces. Questionnaire data showed general acceptance by both pilots and controllers. These results reinforce previous findings from full-mission simulation and flight evaluation of the in-trail operations. This paper reviews the results of this simulation in detail.
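    The flavor of interval-based speed guidance can be conveyed with a purely hypothetical sketch. AMSTAR's actual algorithm is more sophisticated and is not given in the abstract; the proportional gain, speed limits, and function name below are all invented for illustration. The idea shown is only that the commanded speed is nudged in proportion to the spacing error against the lead aircraft's ADS-B-derived time at the threshold, within operational speed limits.

```python
def speed_cue(nominal_kts, spacing_error_s, gain_kts_per_s=0.5,
              min_kts=160.0, max_kts=250.0):
    """Positive spacing error = too far behind -> speed up (clamped)."""
    cmd = nominal_kts + gain_kts_per_s * spacing_error_s
    return max(min_kts, min(max_kts, cmd))

print(speed_cue(210.0, 12.0))    # 12 s behind schedule -> 216.0 kts
print(speed_cue(210.0, -110.0))  # well ahead of schedule -> clamped to 160.0 kts
```

    Clamping to an operational speed envelope matters for string stability: a guidance law that demanded arbitrarily large corrections from one aircraft would amplify disturbances down the arrival stream.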

  18. The influence of massive black hole binaries on the morphology of merger remnants

    NASA Astrophysics Data System (ADS)

    Bortolas, E.; Gualandris, A.; Dotti, M.; Read, J. I.

    2018-06-01

    Massive black hole (MBH) binaries, formed as a result of galaxy mergers, are expected to harden by dynamical friction and three-body stellar scatterings, until emission of gravitational waves (GWs) leads to their final coalescence. According to recent simulations, MBH binaries can efficiently harden via stellar encounters only when the host geometry is triaxial, even if only modestly, as angular momentum diffusion allows an efficient repopulation of the binary loss cone. In this paper, we carry out a suite of N-body simulations of equal-mass galaxy collisions, varying the initial orbits and density profiles for the merging galaxies and running simulations both with and without central MBHs. We find that the presence of an MBH binary in the remnant makes the system nearly oblate, aligned with the galaxy merger plane, within a radius enclosing 100 MBH masses. We never find binary hosts to be prolate on any scale. The decaying MBHs slightly enhance the tangential anisotropy in the centre of the remnant due to angular momentum injection and the slingshot ejection of stars on nearly radial orbits. This latter effect results in about 1 per cent of the remnant stars being expelled from the galactic nucleus. Finally, we do not find any strong connection between the remnant morphology and the binary hardening rate, which depends only on the inner density slope of the remnant galaxy. Our results suggest that MBH binaries are able to coalesce within a few Gyr, even if the binary is found to partially erase the merger-induced triaxiality from the remnant.

  19. Quantifying the Components of Impervious Surfaces

    USGS Publications Warehouse

    Tilley, Janet S.; Slonecker, E. Terrence

    2006-01-01

    This study's objectives were to (1) determine the relative contribution of the individual components of impervious surfaces by collecting digital information from high-resolution (1-meter or better) imagery; and (2) determine which of the more advanced techniques, such as spectral unmixing or the application of coefficients to land use or land cover data, was the most suitable method for State and local governments as well as Federal agencies to efficiently measure imperviousness in any given watershed or area of interest. Combined across all watersheds and time periods from objective one, the components of impervious surfaces were: buildings 29.2-percent, roads 28.3-percent, and parking lots 24.6-percent, with the remaining three (driveways, sidewalks, and other features not contained within the first five) totaling 14-percent. Objective two found that spectral unmixing techniques will ultimately be the most efficient method of determining imperviousness, but they are not yet accurate enough: it is critical to achieve accuracy within 10-percent of the truth, which the method did not consistently accomplish in this study. Of the three coefficient-application techniques tested, applying coefficients to land use data was not practical, while merging the two methods that apply coefficients to land cover data could bring end results to within 5-percent of the truth or better. Until the spectral unmixing technique has been further refined, land cover coefficients should be used; they offer quick results, though not current ones, as they were developed from the 1992 National Land Characteristics Data.

  20. Clustering of galaxies in a hierarchical universe - I. Methods and results at z=0

    NASA Astrophysics Data System (ADS)

    Kauffmann, Guinevere; Colberg, Jorg M.; Diaferio, Antonaldo; White, Simon D. M.

    1999-02-01

    We introduce a new technique for following the formation and evolution of galaxies in cosmological N-body simulations. Dissipationless simulations are used to track the formation and merging of dark matter haloes as a function of redshift. Simple prescriptions, taken directly from semi-analytic models of galaxy formation, are adopted for gas cooling, star formation, supernova feedback and the merging of galaxies within the haloes. This scheme enables us to explore the clustering properties of galaxies, and to investigate how selection by luminosity, colour or type influences the results. In this paper we study the properties of the galaxy distribution at z=0. These include B- and K-band luminosity functions, two-point correlation functions, pairwise peculiar velocities, cluster mass-to-light ratios, B-V colours, and star formation rates. We focus on two variants of a cold dark matter (CDM) cosmology: a high-density (Omega =1) model with shape-parameter Gamma =0.21 (tau CDM), and a low-density model with Omega =0.3 and Lambda =0.7 (Lambda CDM). Both models are normalized to reproduce the I-band Tully-Fisher relation of Giovanelli et al. near a circular velocity of 220 km s^-1. Our results depend strongly both on this normalization and on the adopted prescriptions for star formation and feedback. Very different assumptions are required to obtain an acceptable model in the two cases. For tau CDM, efficient feedback is required to suppress the growth of galaxies, particularly in low-mass field haloes. Without it, there are too many galaxies and the correlation function exhibits a strong turnover on scales below 1 Mpc. For Lambda CDM, feedback must be weaker, otherwise too few L_* galaxies are produced and the correlation function is too steep. Although neither model is perfect, both come close to reproducing most of the data. 
Given the uncertainties in modelling some of the critical physical processes, we conclude that it is not yet possible to draw firm conclusions about the values of cosmological parameters from studies of this kind. Further observational work on global star formation and feedback effects is required to narrow the range of possibilities.

  1. Merging cranial histology and 3D-computational biomechanics: a review of the feeding ecology of a Late Triassic temnospondyl amphibian

    PubMed Central

    Gruntmejer, Kamil; Marcé-Nogué, Jordi; Bodzioch, Adam; Fortuny, Josep

    2018-01-01

    Finite Element Analysis (FEA) is a useful method for understanding form and function. However, modelling of fossil taxa invariably involves assumptions as a result of preservation-induced loss of information in the fossil record. To test the validity of predictions from FEA, given such assumptions, these results can be compared to independent lines of evidence for cranial mechanics. In the present study, a new concept of using bone microstructure to predict stress distribution in the skull during feeding is put forward, and a correlation between bone microstructure and results of computational biomechanics (FEA) is carried out. The bony framework is a product of biological optimisation; bone structure is created to meet local mechanical conditions. To test how well results from FEA correlate with cranial mechanics predicted from bone structure, the well-known temnospondyl Metoposaurus krasiejowensis was used as a model. A crucial issue for Temnospondyli is their feeding mode: did they suction feed or employ direct biting, or both? Metoposaurids have previously been characterised either as active hunters or passive bottom dwellers. To test the correlation between results from FEA and bone microstructure, two skulls of Metoposaurus were used: one was modelled under FE analysis, while for the second the microstructure of 17 dermal bones was analysed. Thus, for the first time, results predicting cranial mechanical behaviour from both methods are merged to understand the feeding strategy of Metoposaurus. Metoposaurus appears to have been an aquatic animal that exhibited a generalist feeding behaviour. This taxon may have used two foraging techniques in hunting: mainly bilateral biting and, to a lesser extent, lateral strikes. However, bone microstructure suggests that lateral biting was more frequent than suggested by FEA. One of the potential factors that determined its mode of life may have been water levels. 
During optimum water conditions, metoposaurids may have been more active ambush predators that were capable of lateral strikes of the head. The dry season required a less active mode of life, in which bilateral biting is particularly efficient. This, combined with their characteristically anteriorly positioned orbits, was optimal for an ambush strategy. The ability to use alternative modes of food acquisition, independent of environmental conditions, might hold the key to explaining the very common occurrence of metoposaurids during the Late Triassic. PMID:29503770

  2. Fast processing of microscopic images using object-based extended depth of field.

    PubMed

    Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Pannarut, Montri; Shaw, Philip J; Tongsima, Sissades

    2016-12-22

    Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie at different depths of field, necessitating the capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depth of field (OEDoF) to address this issue. The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good-contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast, followed by identification of foreground pixels. A composite image is then constructed using only these foreground pixels, which dramatically reduces the computational time. We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. Composite images with all objects in focus were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required only a quarter of the processing time.
This work presents a modification of the extended depth of field approach for efficiently enhancing microscopic images. This selective object processing scheme used in OEDoF can significantly reduce the overall processing time while maintaining the clarity of important image features. The empirical results from parasite-infected red cell images revealed that our proposed method efficiently and effectively produced in-focus composite images. With the speed improvement of OEDoF, this proposed algorithm is suitable for processing large numbers of microscope images, e.g., as required for medical diagnosis.
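    The core idea above, restricting the per-pixel best-focus search to foreground pixels, can be sketched in a few lines. This is a minimal illustration, assuming a gradient-based focus measure and a median-threshold foreground mask, both of which are illustrative stand-ins rather than the paper's actual four modules:

    ```python
    import numpy as np

    def focus_measure(img):
        """Per-pixel squared gradient magnitude, a simple sharpness proxy."""
        gy, gx = np.gradient(img.astype(float))
        return gx * gx + gy * gy

    def oedof_merge(stack):
        """Sketch of object-based EDoF merging: search for the best-focused
        slice only at foreground pixels; background pixels are copied from
        the globally sharpest slice."""
        stack = np.asarray(stack, dtype=float)            # (n_slices, H, W)
        sharp = np.stack([focus_measure(s) for s in stack])
        best = sharp.max(axis=0)
        fg = best > np.median(best)                       # crude foreground mask
        ref = int(np.argmax(sharp.sum(axis=(1, 2))))      # sharpest slice overall
        out = stack[ref].copy()
        idx = sharp.argmax(axis=0)                        # best slice per pixel
        rows, cols = np.nonzero(fg)
        out[rows, cols] = stack[idx[rows, cols], rows, cols]
        return out
    ```

    The speedup the paper reports comes from exactly this asymmetry: the expensive per-pixel slice selection runs only on the (typically small) foreground region.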

  3. ASD-RSD; To Merge or Not to Merge

    ERIC Educational Resources Information Center

    Sinclair, Dorothy

    1971-01-01

    This paper clarifies the history of the proposal, and comments on some of the issues involved in the merger of the Reference Services Division (RSD) and the Adult Services Division (ASD) of the American Library Association. (MF)

  4. Cosmic ray modulation and merged interaction regions

    NASA Technical Reports Server (NTRS)

    Burlaga, L. F.; Goldstein, M. L.; Mcdonald, F. B.

    1985-01-01

    Beyond several AU, interactions among shocks and streams give rise to merged interaction regions in which the magnetic field is turbulent. The integral intensity of >75 MeV/nuc cosmic rays at Voyager is generally observed to decrease when a merged interaction region moves past the spacecraft and to increase during the passage of a rarefaction region. When the separation between interaction regions is relatively large, the cosmic ray intensity tends to increase on a scale of a few months. This was the case at Voyager 1 from July 1, 1983 to May 1, 1984, when the spacecraft moved from 16.7 to 19.6 AU. Changes in cosmic ray intensity were related to the magnetic field strength in a simple way. It is estimated that the diffusion coefficient in merged interaction regions at this distance is ~0.6 x 10^22 cm^2/s.

  5. Phase transition and flow-rate behavior of merging granular flows.

    PubMed

    Hu, Mao-Bin; Liu, Qi-Yi; Jiang, Rui; Hou, Meiying; Wu, Qing-Song

    2015-02-01

    Merging of granular flows is ubiquitous in industrial, mining, and geological processes. However, its behavior remains poorly understood. This paper studies the phase transition and flow-rate behavior of two granular flows merging into one channel. When the main channel is wider than the side channel, the system shows a remarkable two-sudden-drops phenomenon in the outflow rate as the main inflow is gradually increased. When the main inflow is gradually decreased, the system shows an obvious hysteresis. We study the flow-rate-drop phenomenon by measuring the area fraction and the mean velocity at the merging point. The phase diagram of the system is also presented to explain the occurrence of the phenomenon. We find that the dilute-to-dense transition occurs when the area fraction of particles at the joint point exceeds a critical value ϕ_c = 0.65 ± 0.03.

  6. Partial coalescence of drops at liquid interfaces

    NASA Astrophysics Data System (ADS)

    Blanchette, François; Bigioni, Terry P.

    2006-04-01

    When two separate masses of the same fluid are brought gently into contact, they are expected to fully merge into a single larger mass to minimize surface energy. However, when a stationary drop coalesces with an underlying reservoir of identical fluid, merging does not always proceed to completion. Occasionally, a drop in the process of merging apparently defies surface tension by `pinching off' before total coalescence occurs, leaving behind a smaller daughter droplet. Moreover, this process can repeat itself for subsequent generations of daughter droplets, resulting in a cascade of self-similar events. Such partial coalescence behaviour has implications for the dynamics of a variety of systems, including the droplets in clouds, ocean mist and airborne salt particles, emulsions, and the generation of vortices near an interface. Although it was first observed almost half a century ago, little is known about its precise mechanism. Here, we combine high-speed video imaging with numerical simulations to determine the conditions under which partial coalescence occurs, and to reveal a dynamic pinch-off mechanism. This mechanism is critically dependent on the ability of capillary waves to vertically stretch the drop by focusing energy on its summit.

  7. Afterlife of a Drop Impacting a Liquid Pool

    NASA Astrophysics Data System (ADS)

    Saha, Abhishek; Wei, Yanju; Tang, Xiaoyu; Law, Chung K.

    2017-11-01

    Drop impact on a liquid pool is ubiquitous in industrial processes such as inkjet printing and spray coating. While merging of the drop with the impacted liquid surface is essential to facilitate the printing and coating processes, it is the afterlife of this merged drop and the associated mixing which control the quality of the printed or coated surface. In this talk we will report an experimental study of the structural evolution of the merged droplet inside the liquid pool. First, we will analyze the depth of the crater created on the pool surface by the impacting drop for a range of impact inertia, and we will derive a scaling relation and the associated characteristic time scale. Next, we will focus on the toroidal vortex formed by the moving drop inside the liquid pool and assess the characteristic time and length scales of the penetration process. The geometry of the vortex structure, which qualitatively indicates the degree of mixedness, will also be discussed. Finally, we will present results from experiments with various viscosities to demonstrate the role of viscous dissipation in the geometry and structure formed by the drop. This work is supported by the Army Research Office and the Xerox Corporation.

  8. Connected vehicle enabled freeway merge management - field test.

    DOT National Transportation Integrated Search

    2016-01-01

    Freeway congestion is a major problem of the transportation system, resulting in major economic loss in terms of traffic delays and fuel costs. With connected vehicle (CV) technologies, more proactive traffic management strategies are possible. T...

  9. Is black-hole ringdown a memory of its progenitor?

    PubMed

    Kamaretsos, Ioannis; Hannam, Mark; Sathyaprakash, B S

    2012-10-05

    We perform an extensive numerical study of coalescing black-hole binaries to understand the gravitational-wave spectrum of quasinormal modes excited in the merged black hole. Remarkably, we find that the masses and spins of the progenitor are clearly encoded in the mode spectrum of the ringdown signal. Some of the mode amplitudes carry the signature of the binary's mass ratio, while others depend critically on the spins. Simulations of precessing binaries suggest that our results carry over to generic systems. Using Bayesian inference, we demonstrate that it is possible to accurately measure the mass ratio and a proper combination of spins even when the binary is itself invisible to a detector. Using a mapping of the binary masses and spins to the final black-hole spin allows us to further extract the spin components of the progenitor. Our results could have tremendous implications for gravitational astronomy by facilitating novel tests of general relativity using merging black holes.

  10. The cosmic merger rate of stellar black hole binaries from the Illustris simulation

    NASA Astrophysics Data System (ADS)

    Mapelli, Michela; Giacobbo, Nicola; Ripamonti, Emanuele; Spera, Mario

    2017-12-01

    The cosmic merger rate density of black hole binaries (BHBs) can give us an essential clue to constraining the formation channels of BHBs, in light of current and forthcoming gravitational wave detections. Following a Monte Carlo approach, we couple new population-synthesis models of BHBs with the Illustris cosmological simulation, to study the cosmic history of BHB mergers. We explore six population-synthesis models, varying the prescriptions for supernovae, common envelope and natal kicks. In most considered models, the cosmic BHB merger rate follows the same trend as the cosmic star formation rate. The normalization of the cosmic BHB merger rate strongly depends on the treatment of common envelope and on the distribution of natal kicks. We find that most BHBs merging within LIGO's instrumental horizon come from relatively metal-poor progenitors (<0.2 Z⊙). The total masses of merging BHBs span a large range of values, from ∼6 to ∼82 M⊙. In our fiducial model, merging BHBs consistent with GW150914, GW151226 and GW170104 represent ∼6, 3 and 12 per cent of all BHBs merging within the LIGO horizon, respectively. The heavy systems, like GW150914, come from metal-poor progenitors (<0.15 Z⊙). Most GW150914-like systems merging in the local Universe appear to have formed at high redshift, with a long delay time. In contrast, GW151226-like systems form and merge all the way through the cosmic history, from progenitors with a broad range of metallicities. Future detections will be crucial to put constraints on common envelope, on natal kicks, and on the BHB mass function.

  11. Star Formation History In Merging Galaxies

    NASA Astrophysics Data System (ADS)

    Chien, Li-Hsin

    2009-01-01

    Interacting and merging galaxies are believed to play an important role in many aspects of galactic evolution. Their violent interactions can trigger starbursts, which lead to the formation of young globular clusters. The ages of these young globular clusters can therefore be interpreted to yield the timing of interaction-triggered events, and thus provide a key to reconstructing the star formation history in merging galaxies. The link between galaxy interaction and star formation is well established, but the triggers of star formation in interacting galaxies are still not understood. To date there are two competing rules that describe the star formation mechanism: density-dependent and shock-induced. Numerical models implementing the two rules predict significantly different star formation histories in merging galaxies. My dissertation combines these two distinct areas of astrophysics, stellar evolution and galactic dynamics, to investigate the star formation history in galaxies at various merging stages. Beginning with NGC 4676 as an example, I will briefly describe its model and illustrate the idea of using the ages of clusters to constrain the modeling. The ages of the clusters are derived from spectra that were taken with multi-object spectroscopy on Keck. Using NGC 7252 as a second example, I will present a state-of-the-art dynamical model which predicts NGC 7252's star formation history and other properties. I will then show a detailed comparison and analysis between the clusters and the modeling. In the end, I will address this important link as the key to answering the fundamental question of my thesis: what is the trigger of star formation in merging galaxies?

  12. Comparison of SHOX and associated elements duplications distribution between patients (Léri-Weill dyschondrosteosis/idiopathic short stature) and population sample.

    PubMed

    Hirschfeldova, Katerina; Solc, Roman

    2017-09-05

    The effect of heterozygous duplications of SHOX and associated elements on Léri-Weill dyschondrosteosis (LWD) and idiopathic short stature (ISS) development is less distinct than that of the reciprocal deletions. The aim of our study was to compare the frequency and distribution of duplications within SHOX and associated elements between a population sample and LWD (ISS) patients. A preliminary analysis of a Czech population sample of 250 individuals, compared to our previously reported sample of 352 ISS/LWD Czech patients, indicated that the difference lies not in the frequency of duplications but in their distribution. In particular, there was an increased frequency of duplications residing at the CNE-9 enhancer in our LWD/ISS sample. To see whether these data are consistent across published studies, we surveyed the literature for published cases with SHOX or associated-element duplications and formed a merged LWD sample, a merged ISS sample, and a merged population sample. The relative frequency of each region's duplication in each of those merged samples was calculated. There was a significant difference in the relative frequency of CNE-9 enhancer duplications (11 vs. 3) and complete SHOX (exon 1-6b) duplications (4 vs. 24) (p-value 0.0139 and p-value 0.000014, respectively) between the merged LWD sample and the merged population sample. We thus propose that partial SHOX duplications and small duplications encompassing the CNE-9 enhancer could be highly penetrant alleles associated with ISS and LWD development. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. A scalable delivery framework and a pricing model for streaming media with advertisements

    NASA Astrophysics Data System (ADS)

    Al-Hadrusi, Musab; Sarhan, Nabil J.

    2008-01-01

    This paper presents a delivery framework for streaming media with advertisements and an associated pricing model. The delivery model combines the benefits of periodic broadcasting and stream merging. The advertisements' revenues are used to subsidize the price of the media content, with the pricing determined by the total ad viewing time. Moreover, this paper presents an efficient ad allocation scheme and three modified scheduling policies that are well suited to the proposed delivery framework. Furthermore, we study the effectiveness of the delivery framework and the various scheduling policies through extensive simulation, in terms of numerous metrics including customer defection probability, average number of ads viewed per client, price, arrival rate, profit, and revenue.
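    The stream-merging half of such a framework can be illustrated with the classic patching cost arithmetic: a client arriving δ seconds after a complete stream began joins that ongoing stream and receives only a δ-second patch covering the missed prefix. A minimal sketch with made-up arrival times (`server_cost_with_patching` is a hypothetical helper, not the paper's scheduler):

    ```python
    def server_cost_with_patching(arrivals, video_len):
        """Total server transmission time under simple patching: one complete
        stream per batch window, plus a patch per later arrival that covers
        only the prefix that client missed."""
        cost = 0.0
        full_start = None
        for t in sorted(arrivals):
            if full_start is None or t - full_start >= video_len:
                full_start = t          # start a new complete stream
                cost += video_len
            else:
                cost += t - full_start  # patch only the missed prefix
        return cost

    arrivals = [0, 10, 25, 40]
    patched = server_cost_with_patching(arrivals, video_len=120)  # 120 + 10 + 25 + 40 = 195
    unicast = len(arrivals) * 120                                 # 480 without merging
    ```

    The gap between the two totals is the bandwidth saving that merging-based frameworks exploit; the actual paper layers periodic broadcasting and ad scheduling on top of this basic idea.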

  14. Fast immersed interface Poisson solver for 3D unbounded problems around arbitrary geometries

    NASA Astrophysics Data System (ADS)

    Gillis, T.; Winckelmans, G.; Chatelain, P.

    2018-02-01

    We present a fast and efficient Fourier-based solver for the Poisson problem around an arbitrary geometry in an unbounded 3D domain. This solver merges two rewarding approaches, the lattice Green's function method and the immersed interface method, using the Sherman-Morrison-Woodbury decomposition formula. The method is intended to be second order up to the boundary. This is verified on two potential flow benchmarks. We also further analyse the iterative process and the convergence behavior of the proposed algorithm. The method is applicable to a wide range of problems involving a Poisson equation around inner bodies, which goes well beyond the present validation on potential flows.
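    For reference, the Sherman-Morrison-Woodbury identity that couples the two methods expresses the inverse of a base operator $A$ perturbed by a low-rank correction $UCV$:

    ```latex
    (A + UCV)^{-1} = A^{-1} - A^{-1} U \left( C^{-1} + V A^{-1} U \right)^{-1} V A^{-1}
    ```

    Solves with the perturbed operator thus reduce to fast solves with $A$ alone (here, the unbounded-domain Poisson operator handled via FFT-accelerated lattice Green's functions) plus a comparatively small system in the interface unknowns.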

  15. Teaching brain-behavior relations economically with stimulus equivalence technology.

    PubMed

    Fienup, Daniel M; Covey, Daniel P; Critchfield, Thomas S

    2010-03-01

    Instructional interventions based on stimulus equivalence provide learners with the opportunity to acquire skills that are not directly taught, thereby improving the efficiency of instructional efforts. The present report describes a study in which equivalence-based instruction was used to teach college students facts regarding brain anatomy and function. The instruction involved creating two classes of stimuli that students understood as being related. Because the two classes shared a common member, they spontaneously merged, thereby increasing the yield of emergent relations. Overall, students mastered more than twice as many facts as were explicitly taught, thus demonstrating the potential of equivalence-based instruction to reduce the amount of student investment that is required to master advanced academic topics.
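    The class-merger mechanism described above has a simple combinatorial payoff: two taught classes sharing one member collapse into a single larger class, and the number of derivable stimulus-stimulus relations grows quadratically with class size. A minimal sketch, using hypothetical brain-fact stimuli rather than the study's actual materials:

    ```python
    def merge_classes(classes):
        """Union any stimulus classes that share a member, repeating until
        no two classes overlap (transitive class merger)."""
        classes = [set(c) for c in classes]
        changed = True
        while changed:
            changed = False
            for i in range(len(classes)):
                for j in range(i + 1, len(classes)):
                    if classes[i] & classes[j]:
                        classes[i] |= classes.pop(j)
                        changed = True
                        break
                if changed:
                    break
        return classes

    # Two taught classes sharing "motor cortex" merge into one 5-member class.
    anatomy = {"frontal lobe", "precentral gyrus", "motor cortex"}
    function = {"motor cortex", "voluntary movement", "hemiplegia"}
    merged = merge_classes([anatomy, function])
    n = len(merged[0])
    emergent = n * (n - 1)   # 20 directional relations from far fewer taught pairs
    ```

    This quadratic yield is why the study's students mastered more than twice as many facts as were explicitly taught.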

  16. Low-Level Space Optimization of an AES Implementation for a Bit-Serial Fully Pipelined Architecture

    NASA Astrophysics Data System (ADS)

    Weber, Raphael; Rettberg, Achim

    A previously developed AES (Advanced Encryption Standard) implementation is optimized and described in this paper. The special architecture for which this implementation is targeted comprises synchronous and systematic bit-serial processing without a central controlling instance. In order to shrink the design in terms of logic utilization we deeply analyzed the architecture and the AES implementation to identify the most costly logic elements. We propose to merge certain parts of the logic to achieve better area efficiency. The approach was integrated into an existing synthesis tool which we used to produce synthesizable VHDL code. For testing purposes, we simulated the generated VHDL code and ran tests on an FPGA board.

  17. Progress In Magnetized Target Fusion Driven by Plasma Liners

    NASA Technical Reports Server (NTRS)

    Thio, Francis Y. C.; Kirkpatrick, Ronald C.; Knapp, Charles E.; Cassibry, Jason; Eskridge, Richard; Lee, Michael; Smith, James; Martin, Adam; Wu, S. T.; Schmidt, George

    2001-01-01

    Magnetized target fusion (MTF) attempts to combine the favorable attributes of magnetic confinement fusion (MCF) for energy confinement with the attributes of inertial confinement fusion (ICF) for efficient compression heating and wall-free containment of the fusing plasma. It uses a material liner to compress and contain a magnetized plasma. For practical applications, standoff drivers to deliver the imploding momentum flux to the target plasma remotely are required. Spherically converging plasma jets have been proposed as standoff drivers for this purpose. The concept involves the dynamic formation of a spherical plasma liner by the merging of plasma jets, and the use of the liner so formed to compress a spheromak or a field reversed configuration (FRC).

  18. Modeling merging behavior at lane drops.

    DOT National Transportation Integrated Search

    2015-02-01

    In work-zone configurations where lane drops are present, merging of traffic at the taper presents an operational concern. In addition, as flow through the work zone is reduced, the relative traffic safety of the work zone is also reduced. Improvin...

  19. 78 FR 76146 - Formations of, Acquisitions by, and Mergers of Savings and Loan Holding Companies

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-16

    ..., Missouri 63166-2034: 1. Sugar Creek MHC, Trenton, Illinois; to convert to stock form and merge with Sugar Creek Financial Corp., Trenton, Illinois. Sugar Creek Financial Corp. will merge into Sugar Creek...

  20. Effect of phase front modulation on the merging of multiple regularized femtosecond filaments

    NASA Astrophysics Data System (ADS)

    Pushkarev, D.; Shipilo, D.; Lar'kin, A.; Mitina, E.; Panov, N.; Uryupina, D.; Ushakov, A.; Volkov, R.; Karpeev, S.; Khonina, S.; Kosareva, O.; Savel'ev, A.

    2018-04-01

    Comparative experimental data on the filamentation of powerful femtosecond laser beams with amplitude or phase-front modulation are presented. We show that phase discontinuities and zero-intensity lines prevented filament merging and superfilament formation.
