The turbulent cascade of individual eddies
NASA Astrophysics Data System (ADS)
Huertas-Cerdeira, Cecilia; Lozano-Durán, Adrián; Jiménez, Javier
2014-11-01
The merging and splitting processes of Reynolds-stress-carrying structures in the inertial range of scales are studied through their time-resolved evolution in channels at Reλ = 100–200. Mergers and splits coexist during the whole life of the structures, and are responsible for a substantial part of their growth and decay. Each interaction involves two or more eddies and results in little overall volume loss or gain. Most of them involve a small eddy that merges with, or splits from, a significantly larger one. Accordingly, if merge and split indexes are respectively defined as the maximum number of times that a structure has merged from its birth or will split until its death, the mean eddy volume grows linearly with both indexes, suggesting an accretion process rather than a hierarchical fragmentation. However, a non-negligible number of interactions involve eddies of similar scale, with a second probability peak of the volume of the smaller parent or child at 0.3 times that of the resulting or preceding structure. Funded by the Multiflow project of the ERC.
BBMerge – Accurate paired shotgun read merging via overlap
Bushnell, Brian; Rood, Jonathan; Singer, Esther
2017-10-26
Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
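As a rough illustration of the overlap step that tools of this kind perform (this is not BBMerge's actual algorithm, which additionally uses base qualities and k-mer gap assembly), the sketch below scans candidate overlaps between read 1 and the reverse complement of read 2 and merges at the longest acceptable one. The function names and thresholds are invented for the example:

```python
# Minimal sketch of overlap-based read merging (illustrative, not BBMerge's
# algorithm): slide read 2's reverse complement over read 1, score each
# candidate overlap by its mismatch rate, and merge at the longest
# acceptable overlap.

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def merge_pair(r1, r2, min_overlap=12, max_mismatch_rate=0.1):
    """Return the merged read, or None if no acceptable overlap is found."""
    r2rc = revcomp(r2)
    for ov in range(min(len(r1), len(r2rc)), min_overlap - 1, -1):
        tail, head = r1[-ov:], r2rc[:ov]
        mismatches = sum(a != b for a, b in zip(tail, head))
        if mismatches / ov <= max_mismatch_rate:
            return r1 + r2rc[ov:]   # longest acceptable overlap wins
    return None

# toy demo: fragment ACGTACGTACGTACGTTTTT read from both ends
print(merge_pair("ACGTACGTACGTACGT", revcomp("ACGTACGTACGTTTTT")))
# -> ACGTACGTACGTACGTTTTT
```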
Annular vortex merging processes in non-neutral electron plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaga, Chikato, E-mail: d146073@hiroshima-u.ac.jp; Ito, Kiyokazu; Higaki, Hiroyuki
2015-06-29
Non-neutral electron plasmas in a uniform magnetic field are investigated experimentally as a two dimensional (2D) fluid. Previously, it was reported that 2D phase space volume increases during a vortex merging process with viscosity. However, the measurement was restricted to a plasma with a high density. Here, an alternative method is introduced to evaluate a similar process for a plasma with a low density.
NASA Astrophysics Data System (ADS)
Chen, Chai; Wong, Yiik Diew
2017-02-01
This study provides an integrated strategy, encompassing microscopic simulation, safety assessment, and multi-attribute decision-making, to optimize traffic performance at the downstream merging area of signalized intersections. A Fuzzy Cellular Automata (FCA) model is developed to replicate microscopic movement and merging behavior. Based on simulation experiments, the proposed FCA approach is able to provide capacity and safety evaluations of different traffic scenarios. The results are then evaluated through data envelopment analysis (DEA) and the analytic hierarchy process (AHP). Optimized geometric layouts and control strategies are then suggested for various traffic conditions. An optimal lane-drop distance that depends on traffic volume and speed limit can thus be established at the downstream merging area.
3D geometric split-merge segmentation of brain MRI datasets.
Marras, Ioannis; Nikolaidis, Nikolaos; Pitas, Ioannis
2014-05-01
In this paper, a novel method for MRI volume segmentation based on region-adaptive splitting and merging is proposed. The method, called Adaptive Geometric Split Merge (AGSM) segmentation, aims at finding complex geometrical shapes that consist of homogeneous geometrical 3D regions. In each volume-splitting step, several splitting strategies are examined and the most appropriate is activated. A way to find the maximal homogeneity axis of the volume is also introduced. Along this axis, the volume-splitting technique divides the entire volume into a number of large homogeneous 3D regions, while at the same time it delineates more clearly the small homogeneous regions within the volume, in such a way that they have a greater probability of survival at the subsequent merging step. Region-merging criteria are proposed to this end. The presented segmentation method has been applied to brain MRI medical datasets to provide segmentation results when each voxel is composed of one tissue type (hard segmentation). The volume-splitting procedure does not require training data, and the method demonstrates improved segmentation performance on noisy brain MRI datasets when compared to state-of-the-art methods.
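For readers unfamiliar with split-and-merge segmentation, a toy 3D version of the general idea (far simpler than AGSM, whose splitting strategies and merging criteria are more elaborate) might look like the sketch below; the thresholds, block-variance criterion, and helper names are illustrative choices, not the paper's:

```python
# Toy 3D split-and-merge segmenter: blocks are split along their longest
# axis while their intensity variance is high, then face-adjacent blocks
# with similar means are merged via union-find.
import numpy as np

def split(vol, box, out, sigma2=25.0, min_size=2):
    z0, z1, y0, y1, x0, x1 = box
    block = vol[z0:z1, y0:y1, x0:x1]
    dims = (z1 - z0, y1 - y0, x1 - x0)
    if block.var() <= sigma2 or max(dims) <= min_size:
        out.append(box)                  # homogeneous (or tiny) region
        return
    ax = int(np.argmax(dims))            # split along the longest axis
    lo, hi = box[2*ax], box[2*ax + 1]
    mid = (lo + hi) // 2
    for a, b in ((lo, mid), (mid, hi)):
        nb = list(box); nb[2*ax], nb[2*ax + 1] = a, b
        split(vol, tuple(nb), out, sigma2, min_size)

def segment(vol, sigma2=25.0, tol=10.0):
    boxes = []
    split(vol, (0, vol.shape[0], 0, vol.shape[1], 0, vol.shape[2]), boxes, sigma2)
    labels = np.empty(vol.shape, dtype=int)
    for i, (z0, z1, y0, y1, x0, x1) in enumerate(boxes):
        labels[z0:z1, y0:y1, x0:x1] = i
    means = [vol[z0:z1, y0:y1, x0:x1].mean() for z0, z1, y0, y1, x0, x1 in boxes]
    parent = list(range(len(boxes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]; i = parent[i]
        return i
    for ax in range(3):                  # union face-adjacent, similar regions
        a = labels.take(range(labels.shape[ax] - 1), axis=ax).ravel()
        b = labels.take(range(1, labels.shape[ax]), axis=ax).ravel()
        for i, j in set(zip(a.tolist(), b.tolist())):
            ri, rj = find(i), find(j)
            if ri != rj and abs(means[ri] - means[rj]) < tol:
                parent[rj] = ri          # note: means are not re-estimated
    return np.vectorize(find)(labels)    # (a simplification of real merging)

vol = np.random.rand(16, 16, 16) * 5
vol[:8] += 50                            # a bright homogeneous slab
seg = segment(vol)                       # two labels: slab vs. background
```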
Modeling of the merging of two colliding field reversed configuration plasmoids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Guanqiong; Wang, Xiaoguang; Li, Lulu
2016-06-15
The field-reversed configuration (FRC) is one of the candidate plasma targets for magneto-inertial fusion, and a high-temperature FRC can be formed by using collision-merging technology. Although the merging process and mechanism of the FRC are quite complicated, it is feasible to build a simple model to investigate the macroscopic equilibrium parameters, including the density, the temperature, and the separatrix volume, which may play an important role in the collision-merging process of the FRC. The estimates based on our simple model agree with the simulation results of a two-dimensional magneto-hydrodynamic code (MFP-2D), which has been developed by our group over the last couple of years, while these results qualitatively fit the results of the C-2 experiments by the Tri Alpha Energy company. On the other hand, the simple model can be used to investigate how to increase the density of the merged FRC. It is found that the amplification of the density depends on the poloidal flux-increase factor, and the temperature increases with the translation speed of the two plasmoids.
Characterization of Human Torso Vascular Morphometry in Normotensive and Hypotensive Trauma Patients
2015-07-01
[Figure/slide residue; recoverable content: aorta and vena cava processing workflow with landmarks, user-aided segmentation, aorta centerline extraction and measures, basic morphomics scan identification, and aorta radius distributions for normotensive vs. hypotensive populations.]
Method of modifying a volume mesh using sheet extraction
Borden, Michael J [Albuquerque, NM]; Shepherd, Jason F [Albuquerque, NM]
2007-02-20
A method and machine-readable medium provide a technique to modify a hexahedral finite element volume mesh using dual generation and sheet extraction. After generating a dual of a volume stack (mesh), a predetermined algorithm may be followed to modify the volume mesh of hexahedral elements. The predetermined algorithm may include the steps of determining a sheet of hexahedral mesh elements, generating nodes for merging, and merging the nodes to delete the sheet of hexahedral mesh elements and modify the volume mesh.
Optimizing Endoscope Reprocessing Resources Via Process Flow Queuing Analysis.
Seelen, Mark T; Friend, Tynan H; Levine, Wilton C
2018-05-04
The Massachusetts General Hospital (MGH) is merging its older endoscope processing facilities into a single new facility that will enable high-level disinfection of endoscopes for both the ORs and Endoscopy Suite, leveraging economies of scale for improved patient care and optimal use of resources. Finalized resource planning was necessary for the merging of facilities to optimize staffing and make final equipment selections to support the nearly 33,000 annual endoscopy cases. To accomplish this, we employed operations management methodologies, analyzing the physical process flow of scopes throughout the existing Endoscopy Suite and ORs and mapping the future state capacity of the new reprocessing facility. Further, our analysis required the incorporation of historical case and reprocessing volumes in a multi-server queuing model to identify any potential wait times as a result of the new reprocessing cycle. We also performed sensitivity analysis to understand the impact of future case volume growth. We found that our future-state reprocessing facility, given planned capital expenditures for automated endoscope reprocessors (AERs) and pre-processing sinks, could easily accommodate current scope volume well within the necessary pre-cleaning-to-sink reprocessing time limit recommended by manufacturers. Further, in its current planned state, our model suggested that the future endoscope reprocessing suite at MGH could support an increase in volume of at least 90% over the next several years. Our work suggests that with simple mathematical analysis of historic case data, significant changes to a complex perioperative environment can be made with ease while keeping patient safety as the top priority.
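The wait-time analysis described above can be approximated with a textbook M/M/c (Erlang C) queue. The sketch below is a generic illustration of that kind of model; the arrival rate, cycle time, and AER count are invented numbers, not MGH's actual figures:

```python
# Hedged sketch of a multi-server queuing check using the standard
# M/M/c (Erlang C) model: arrival rate lam (scopes/hour), service rate
# mu per reprocessor, c reprocessors. All numbers are illustrative.
from math import factorial

def erlang_c(lam, mu, c):
    """Return P(wait > 0) and mean queue time for an M/M/c queue."""
    rho = lam / (c * mu)
    assert rho < 1, "queue is unstable: add servers"
    a = lam / mu                           # offered load in Erlangs
    p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                + a**c / (factorial(c) * (1 - rho)))
    p_wait = (a**c / (factorial(c) * (1 - rho))) * p0
    w_q = p_wait / (c * mu - lam)          # mean time in queue (hours)
    return p_wait, w_q

# e.g. 90 scopes over a 12 h day, 30 min per AER cycle, 4 AERs
p_wait, w_q = erlang_c(lam=90 / 12, mu=2.0, c=4)
print(f"P(wait) = {p_wait:.2f}, mean queue time = {60 * w_q:.1f} min")
```

Sensitivity analysis of the kind the abstract mentions then amounts to re-running such a model with the arrival rate scaled up (e.g., by 1.9 for 90% growth) and checking that waits stay acceptable.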
NASA Astrophysics Data System (ADS)
Hua, H.; Wilson, B. D.; Manipon, G.; Pan, L.; Fetzer, E.
2011-12-01
Multi-decadal climate data records are critical to studying climate variability and change. These often also require merging data from multiple instruments, such as those on NASA's A-Train, that contain measurements covering a wide range of atmospheric conditions and phenomena. Multi-decadal climate data records of water vapor measurements from sensors on the A-Train, operational weather, and other satellites are being assembled from existing data sources, or produced from well-established methods published in the peer-reviewed literature. However, the immense volume and inhomogeneity of the data often require an "exploratory computing" approach to product generation, in which data are processed in a variety of different ways with varying algorithms, parameters, and code changes until an acceptable intermediate product is generated. This process is repeated until a desirable final merged product can be generated. The production legacy is typically lost due to the complexity of the processing steps that were tried along the way. The data product information associated with source data, processing methods, parameters used, intermediate product outputs, and associated materials is often hidden in each of the trials and scattered throughout the processing system(s). We will discuss methods to help users better capture and explore the production legacy of the data, metadata, ancillary files, code, and computing-environment changes used during the production of these merged and multi-sensor data products. By leveraging existing semantic and provenance tools, we can capture sufficient information to enable users to track, perform faceted searches on, and visualize the provenance of the products and their processing lineage. We will explore whether sufficient provenance information can be captured to enable scientific reproducibility of these climate data records.
Fluid control structures in microfluidic devices
Mathies, Richard A.; Grover, William H.; Skelley, Alison; Lagally, Eric; Liu, Chung N.
2008-11-04
Methods and apparatus for implementing microfluidic analysis devices are provided. A monolithic elastomer membrane associated with an integrated pneumatic manifold allows the placement and actuation of a variety of fluid control structures, such as structures for pumping, isolating, mixing, routing, merging, splitting, preparing, and storing volumes of fluid. The fluid control structures can be used to implement a variety of sample introduction, preparation, processing, and storage techniques.
Fluid control structures in microfluidic devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mathies, Richard A.; Grover, William H.; Skelley, Alison
2017-05-09
Methods and apparatus for implementing microfluidic analysis devices are provided. A monolithic elastomer membrane associated with an integrated pneumatic manifold allows the placement and actuation of a variety of fluid control structures, such as structures for pumping, isolating, mixing, routing, merging, splitting, preparing, and storing volumes of fluid. The fluid control structures can be used to implement a variety of sample introduction, preparation, processing, and storage techniques.
Fluid control structures in microfluidic devices
NASA Technical Reports Server (NTRS)
Skelley, Alison (Inventor); Mathies, Richard A. (Inventor); Lagally, Eric (Inventor); Grover, William H. (Inventor); Liu, Chung N. (Inventor)
2008-01-01
Methods and apparatus for implementing microfluidic analysis devices are provided. A monolithic elastomer membrane associated with an integrated pneumatic manifold allows the placement and actuation of a variety of fluid control structures, such as structures for pumping, isolating, mixing, routing, merging, splitting, preparing, and storing volumes of fluid. The fluid control structures can be used to implement a variety of sample introduction, preparation, processing, and storage techniques.
Merged ozone profiles from four MIPAS processors
NASA Astrophysics Data System (ADS)
Laeng, Alexandra; von Clarmann, Thomas; Stiller, Gabriele; Dinelli, Bianca Maria; Dudhia, Anu; Raspollini, Piera; Glatthor, Norbert; Grabowski, Udo; Sofieva, Viktoria; Froidevaux, Lucien; Walker, Kaley A.; Zehner, Claus
2017-04-01
The Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) was an infrared (IR) limb emission spectrometer on the Envisat platform. Currently, there are four MIPAS ozone data products, including the operational Level-2 ozone product processed at ESA, with the scientific prototype processor being operated at IFAC Florence, and three independent research products developed by the Istituto di Fisica Applicata Nello Carrara (ISAC-CNR)/University of Bologna, Oxford University, and the Karlsruhe Institute of Technology-Institute of Meteorology and Climate Research/Instituto de Astrofísica de Andalucía (KIT-IMK/IAA). Here we present a dataset of ozone vertical profiles obtained by merging ozone retrievals from the four independent Level-2 MIPAS processors. We also discuss the advantages and the shortcomings of this merged product. As the four processors retrieve ozone in different parts of the spectrum (microwindows), the source measurements can be considered nearly independent with respect to measurement noise. Hence, the information content of the merged product is greater, and its precision better, than those of any parent (source) dataset. The merging is performed on a profile-by-profile basis. Parent ozone profiles are weighted based on the corresponding error covariance matrices; the error correlations between different profile levels are taken into account. The intercorrelations between the processors' errors are evaluated statistically and are used in the merging. The height range of the merged product is 20-55 km, and error covariance matrices are provided as diagnostics. Validation of the merged dataset is performed by comparison with ozone profiles from ACE-FTS (Atmospheric Chemistry Experiment-Fourier Transform Spectrometer) and MLS (Microwave Limb Sounder). Even though the merging is not supposed to remove the biases of the parent datasets, around the ozone volume mixing ratio peak the merged product is found to have a smaller (up to 0.1 ppmv) bias with respect to ACE-FTS than any of the parent datasets. The bias with respect to MLS is of the order of 0.15 ppmv at 20-30 km height and up to 0.45 ppmv at higher altitudes. The agreement of the merged MIPAS dataset with ACE-FTS is better than that with MLS. This is, however, the case for all parent processors as well.
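A minimal sketch of profile-by-profile, covariance-weighted merging follows. For simplicity it assumes the parents' errors are mutually independent, whereas the paper additionally estimates and uses the inter-processor error correlations; the profiles and covariances are toy values:

```python
# Covariance-weighted (generalized least squares) merge of parent
# profiles x_i with error covariances S_i, assuming independent errors:
#   S = (sum_i inv(S_i))^-1,   x = S @ sum_i inv(S_i) @ x_i
import numpy as np

def merge_profiles(profiles, covs):
    precisions = [np.linalg.inv(S) for S in covs]
    S_merged = np.linalg.inv(sum(precisions))
    x_merged = S_merged @ sum(P @ x for P, x in zip(precisions, profiles))
    return x_merged, S_merged

# two toy 3-level ozone profiles (ppmv); S1 has correlated level errors
x1, x2 = np.array([4.0, 6.0, 5.0]), np.array([4.2, 5.8, 5.1])
S1 = 0.04 * np.array([[1.0, 0.3, 0.0], [0.3, 1.0, 0.3], [0.0, 0.3, 1.0]])
S2 = 0.09 * np.eye(3)
x, S = merge_profiles([x1, x2], [S1, S2])
print(x, np.sqrt(np.diag(S)))   # merged profile and its 1-sigma errors
```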
Merging Surface Reconstructions of Terrestrial and Airborne LIDAR Range Data
2009-05-19
[Fragmentary abstract; only extraction residue survives, citing A. Mangan and R. Whitaker, "Partitioning 3D surface meshes using watershed segmentation," IEEE Trans. on Visualization and Computer Graphics, 5(4), and "…Jain, and A. Zakhor, Data Processing Algorithms for Generating Textured 3D Building Facade Meshes from Laser Scans and Camera Images," together with a body fragment: "…acquired set of overlapping range images into a single mesh [2,9,10]. However, due to the volume of data involved in large scale urban modeling, data…"]
Scholz, Matthew; Lo, Chien -Chi; Chain, Patrick S. G.
2014-10-01
Assembly of metagenomic samples is a very complex process, with algorithms designed to address sequencing-platform-specific issues (read length, data volume, and/or community complexity) while also faced with genomes that differ greatly in nucleotide compositional biases and in abundance. To address these issues, we have developed a post-assembly process: MetaGenomic Assembly by Merging (MeGAMerge). We compare this process to the performance of several assemblers, using both real and in-silico-generated samples of different community composition and complexity. MeGAMerge consistently outperforms individual assembly methods, producing larger contigs with an increased number of predicted genes, without replication of data. MeGAMerge contigs are supported by read mapping and contig alignment data, when using synthetically derived and real metagenomic data, as well as by gene prediction analyses and similarity searches. Ultimately, MeGAMerge is a flexible method that generates improved metagenome assemblies, with the ability to accommodate upcoming sequencing platforms, as well as present and future assembly algorithms.
A rule-based shell to hierarchically organize HST observations
NASA Technical Reports Server (NTRS)
Bose, Ashim; Gerb, Andrew
1995-01-01
An observing program on the Hubble Space Telescope (HST) is described in terms of exposures that are obtained by one or more of the instruments onboard the HST. These exposures are organized into a hierarchy of structures for purposes of efficient scheduling of observations. The process by which exposures get organized into the higher-level structures is called merging. This process relies on rules to determine which observations can be 'merged' into the same higher level structure, and which cannot. The TRANSformation expert system converts proposals for astronomical observations with HST into detailed observing plans. The conversion process includes the task of merging. Within TRANS, we have implemented a declarative shell to facilitate merging. This shell offers the following features: (1) an easy way of specifying rules on when to merge and when not to merge, (2) a straightforward priority mechanism for resolving conflicts among rules, (3) an explanation facility for recording the merging history, (4) a report generating mechanism to help users understand the reasons for merging, and (5) a self-documenting mechanism that documents all the merging rules that have been defined in the shell, ordered by priority. The merging shell is implemented using an object-oriented paradigm in CLOS. It has been a part of operational TRANS (after extensive testing) since July 1993. It has fulfilled all performance expectations, and has considerably simplified the process of implementing new or changed requirements for merging. The users are pleased with its report-generating and self-documenting features.
Application of Tube Dynamics to Non-Statistical Reaction Processes
NASA Astrophysics Data System (ADS)
Gabern, F.; Koon, W. S.; Marsden, J. E.; Ross, S. D.; Yanao, T.
2006-06-01
A technique based on dynamical systems theory is introduced for the computation of lifetime distributions and rates of chemical reactions and scattering phenomena, even in systems that exhibit non-statistical behavior. In particular, we merge invariant manifold tube dynamics with Monte Carlo volume determination for accurate rate calculations. This methodology is applied to a three-degree-of-freedom model problem and some ideas on how it might be extended to higher-degree-of-freedom systems are presented.
NASA Astrophysics Data System (ADS)
Bruder, Friedrich-Karl; Fäcke, Thomas; Hagen, Rainer; Hansen, Sven; Manecke, Christel; Orselli, Enrico; Rewitz, Christian; Rölle, Thomas; Walze, Günther
2017-06-01
The main function of any augmented reality system is to seamlessly merge the real-world perception of a viewer with computer-generated images and information. Besides real-time head-tracking and room-scanning capabilities, the combiner optics, which optically merge the natural with the artificial visual information, represent a key component of those systems. Various types of combiner optics are known to the industry, all with their specific advantages and disadvantages. Besides the well-established solutions based on refractive optics or surface gratings, volume Holographic Optical Elements (vHOEs) are a very attractive alternative in this field. The unique characteristics of these diffractive grating structures - being lightweight, thin, flat and invisible in off-Bragg conditions - make them perfectly suitable for use in integrated and compact combiners. For any consumer application it is paramount to build unobtrusive and lightweight augmented reality displays, for which those volume holographic combiners are ideally suited. Due to the processing challenges of (historic) holographic recording materials, mass production of vHOE holographic combiners was not possible; as a result, vHOE-based combiners have so far found use only in military applications. The new Bayfol® HX instant-developing holographic photopolymer film provides an ideal technology platform to optimize the performance of vHOEs in a wide range of applications. Bayfol® HX provides full color capability and adjustable diffraction efficiency, as well as an unprecedented optical clarity when compared to classical holographic recording materials like silver halide emulsions (AgHX) or dichromated gelatin (DCG). Bayfol® HX film is available at industrial scale and quality. Its properties can be tailored for various diffractive performances and integration methods. Bayfol® HX film is easy to process without any need for chemical or thermal development steps, offering simplified contact-copy mass production schemes.
Jordan, A; Chen, D; Yi, Q-L; Kanias, T; Gladwin, M T; Acker, J P
2016-07-01
Quality control (QC) data collected by blood services are used to monitor production and to ensure compliance with regulatory standards. We demonstrate how analysis of quality control data can be used to highlight the sources of variability within red cell concentrates (RCCs). We merged Canadian Blood Services QC data with manufacturing and donor records for 28,227 RCCs between June 2011 and October 2014. Units were categorized based on processing method, bag manufacturer, donor age and donor sex, then assessed based on product characteristics: haemolysis and haemoglobin levels, unit volume, leucocyte count and haematocrit. Buffy-coat method (top/bottom) processed units exhibited lower haemolysis than units processed using the whole-blood filtration method (top/top). Units from female donors exhibited lower haemolysis than those from male donors. Processing method influenced unit volume and the ratio of additive solution to residual plasma. Stored red blood cell characteristics are influenced by prestorage processing and donor factors. Understanding the relationship between processing, donors and RCC quality will help blood services to ensure the safety of transfused products.
Hall effect on a Merging Formation Process of a Field-Reversed Configuration
NASA Astrophysics Data System (ADS)
Kaminou, Yasuhiro; Guo, Xuehan; Inomoto, Michiaki; Ono, Yasushi; Horiuchi, Ritoku
2015-11-01
Counter-helicity spheromak merging is one of the formation methods for a Field-Reversed Configuration (FRC). In counter-helicity spheromak merging, two spheromaks with opposing toroidal fields merge together through magnetic reconnection events and relax into an FRC, which has little or no toroidal field. This process involves magnetic reconnection and relaxation phenomena, and the Hall effect has an essential influence on these processes because the X-point of the magnetic reconnection and the O-point of the FRC have little or no magnetic field. However, the Hall effect on counter-helicity spheromak merging, as both a global and a local effect, has not been elucidated. In this poster, we present 2D/3D Hall-MHD simulations and experiments of counter-helicity spheromak merging. We find that the Hall effect enhances the reconnection rate and reduces the generation of toroidal sheared flow. The suppression of the "slingshot effect" affects the relaxation process. We will discuss details in the poster.
Changes in thunderstorm characteristics due to feeder cloud merging
NASA Astrophysics Data System (ADS)
Sinkevich, Andrei A.; Krauss, Terrence W.
2014-06-01
Cumulus cloud merging is a complex dynamical and microphysical process in which two convective cells merge into a single cell. Previous radar observations and numerical simulations have shown a substantial increase in the maximum area, maximum echo top and maximum reflectivity as a result of the merging process. Although the qualitative aspects of merging have been well documented, the quantitative effects on storm properties remain less defined. Therefore, a statistical assessment of changes in storm characteristics due to merging is of importance. Further investigation into the effects of cloud merging on precipitation flux (Pflux) in a statistical manner provided the motivation for this study in the Asir region of Saudi Arabia. It was confirmed that merging has a strong effect on storm development in this region. The data analysis shows that an increase in the median of the distribution of maximum reflectivity was observed just after merging and was equal to 3.9 dBZ. A detailed analysis of the individual merge cases compared the merged storm Pflux and mass to the sum of the individual Feeder and Storm portions just before merging for each case. The merged storm Pflux increased an average of 106% over the 20-min period after merging, and the mass increased on average 143%. The merged storm clearly became larger and more severe than the sum of the two parts prior to merging. One consequence of this study is that any attempts to evaluate the precipitation enhancement effects of cloud seeding must also include the issue of cloud mergers because merging can have a significant effect on the results.
Evaluation of the late merge work zone traffic control strategy.
DOT National Transportation Integrated Search
2004-01-01
Several alternative lane merge strategies have been proposed in recent years to process vehicles through work zone lane closures more safely and efficiently. Among these is the late merge. With the late merge, drivers are instructed to use all lanes ...
A Graph-Embedding Approach to Hierarchical Visual Word Mergence.
Wang, Lei; Liu, Lingqiao; Zhou, Luping
2017-02-01
Appropriately merging visual words is an effective dimension-reduction method for the bag-of-visual-words model in image classification. The approach of hierarchically merging visual words has been extensively employed, because it gives a fully determined merging hierarchy. Existing supervised hierarchical merging methods take different approaches and realize the merging process with various formulations. In this paper, we propose a unified hierarchical merging approach built upon the graph-embedding framework. Our approach is able to merge visual words for any scenario where a preferred structure and an undesired structure are defined, and therefore can effectively attend to all kinds of requirements for the word-merging process. In terms of computational efficiency, we show that our algorithm can seamlessly integrate a fast search strategy developed in our previous work and thus maintain the state-of-the-art merging speed. To the best of our knowledge, the proposed approach is the first to address hierarchical visual word mergence in such a flexible and unified manner. As demonstrated, it can maintain excellent image classification performance even after a significant dimension reduction, and it outperforms all existing comparable visual word-merging methods. In a broad sense, our work provides an open platform for applying, evaluating, and developing new criteria for hierarchical word-merging tasks.
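As a generic illustration of supervised hierarchical word merging (using a simple class-profile similarity criterion, not the paper's graph-embedding formulation or its fast search strategy), one greedy pass might look like this; the criterion and names are invented for the example:

```python
# Greedy hierarchical merging of visual words: repeatedly merge the two
# words whose class-conditional count profiles are most similar, so the
# vocabulary shrinks with minimal distortion of class information.
# Assumes every word has at least one count.
import numpy as np

def merge_words(counts, target_k):
    """counts: (num_words x num_classes) co-occurrence table."""
    words = [np.asarray(c, dtype=float) for c in counts]
    merges = []                       # (i, j) pairs, indices in the
    while len(words) > target_k:      # shrinking list at merge time
        def dist(i, j):
            p = words[i] / words[i].sum()
            q = words[j] / words[j].sum()
            return np.abs(p - q).sum()      # L1 between class profiles
        i, j = min(((i, j) for i in range(len(words))
                    for j in range(i + 1, len(words))),
                   key=lambda ij: dist(*ij))
        merges.append((i, j))
        words[i] = words[i] + words[j]      # merged word absorbs counts
        del words[j]
    return merges, words

counts = [[9, 1], [8, 2], [1, 9], [0, 10], [5, 5]]
merges, merged = merge_words(counts, target_k=3)
```

The exhaustive pair search makes each merge O(k²) in the vocabulary size; the fast search strategy mentioned in the abstract exists precisely to avoid that cost.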
Chen, Yi-Chun; Liu, Kan; Shen, Clifton Kwang-Fu; van Dam, R. Michael
2017-01-01
Microscopic droplets or slugs of mixed reagents provide a convenient platform for performing large numbers of isolated biochemical or chemical reactions for many screening and optimization applications. Myriad microfluidic approaches have emerged for creating droplets or slugs with controllable size and composition, generally using an immiscible carrier fluid to assist with the formation or merging processes. We report a novel device for generation of liquid slugs in air when the use of a carrier liquid is not compatible with the application. The slug generator contains two adjacent chambers, each of which has a volume that can be digitally adjusted by closing selected microvalves. Reagents are filled into the two chambers, merged together into a contiguous liquid slug, ejected at the desired time from the device using gas pressure, and mixed by flowing in a downstream channel. Programmable size and composition of slugs is achieved by dynamically adjusting the volume of each chamber prior to filling. Slug formation in this fashion is independent of fluid properties and can easily be scaled to mix larger numbers of reagents. This device has already been used to screen monomer ratios in supramolecular nanoparticle assembly and radiolabeling conditions of engineered antibodies, and here we provide a detailed description of the underlying device. PMID:29167603
The half-wave rectifier response of the magnetosphere and antiparallel merging
NASA Technical Reports Server (NTRS)
Crooker, N. U.
1980-01-01
In some ways the magnetosphere behaves as if merging occurs only when the interplanetary magnetic field (IMF) is southward, and in other ways it behaves as if merging occurs for all IMF orientations. An explanation of this duality is offered in terms of a geometrical antiparallel merging model which predicts merging for all IMF orientations but magnetic flux transfer to the tail only for southward IMF. This is in contrast to previous models of component merging, where merging and flux transfer occur together for nearly all IMF orientations. That the problematic duality can be explained by the model is compelling evidence that antiparallel merging should be seriously considered in constructing theories of the merging process.
Image-fusion of MR spectroscopic images for treatment planning of gliomas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang Jenghwa; Thakur, Sunitha; Perera, Gerard
2006-01-15
¹H magnetic resonance spectroscopic imaging (MRSI) can improve the accuracy of target delineation for gliomas, but it lacks the anatomic resolution needed for image fusion. This paper presents a simple protocol for fusing simulation computed tomography (CT) and MRSI images for glioma intensity-modulated radiotherapy (IMRT), including a retrospective study of 12 patients. Each patient first underwent whole-brain axial fluid-attenuated-inversion-recovery (FLAIR) MRI (3 mm slice thickness, no spacing), followed by three-dimensional (3D) MRSI measurements (TE/TR: 144/1000 ms) of a user-specified volume encompassing the extent of the tumor. The nominal voxel size of MRSI ranged from 8×8×10 mm³ to 12×12×10 mm³. A system was developed to grade the tumor using the choline-to-creatine (Cho/Cr) ratios from each MRSI voxel. The merged MRSI images were then generated by replacing the Cho/Cr value of each MRSI voxel with intensities according to the Cho/Cr grades, and resampling the poorer-resolution Cho/Cr map into the higher-resolution FLAIR image space. The FUNCTOOL processing software was also used to create the screen-dumped MRSI images in which these data were overlaid with each FLAIR MRI image. The screen-dumped MRSI images were manually translated and fused with the FLAIR MRI images. Since the merged MRSI images were intrinsically fused with the FLAIR MRI images, they were also registered with the screen-dumped MRSI images. The position of the MRSI volume on the merged MRSI images was compared with that on the screen-dumped MRSI images and was shifted until agreement was within a predetermined tolerance. Three clinical target volumes (CTVs) were then contoured on the FLAIR MRI images corresponding to the Cho/Cr grades. Finally, the FLAIR MRI images were fused with the simulation CT images using a mutual-information algorithm, yielding an IMRT plan that simultaneously delivers three different dose levels to the three CTVs. The image-fusion protocol was tested on 12 (six high-grade and six low-grade) glioma patients. The average agreement of the MRSI volume position on the screen-dumped MRSI images and the merged MRSI images was 0.29 mm with a standard deviation of 0.07 mm. Of all the voxels with Cho/Cr grade one or above, the distribution of Cho/Cr grade was found to correlate with the glioma grade from pathologic findings and is consistent with literature results indicating Cho/Cr elevation as a marker for malignancy. In conclusion, an image-fusion protocol was developed that successfully incorporates MRSI information into the IMRT treatment plan for glioma.
Hail Size Distribution Mapping
NASA Technical Reports Server (NTRS)
2008-01-01
A 3-D weather radar visualization software program was developed and implemented as part of an experimental Launch Pad 39 Hail Monitor System. 3DRadPlot, a radar plotting program, is one of several software modules that form building blocks of the hail data processing and analysis system (the complete software processing system under development). The spatial and temporal mapping algorithms were originally developed through research at the University of Central Florida, funded by NASA s Tropical Rainfall Measurement Mission (TRMM), where the goal was to merge National Weather Service (NWS) Next-Generation Weather Radar (NEXRAD) volume reflectivity data with drop size distribution data acquired from a cluster of raindrop disdrometers. In this current work, we adapted these algorithms to process data from a cluster of hail disdrometers positioned around Launch Pads 39A or 39B, along with the corresponding NWS radar data. Radar data from all NWS NEXRAD sites is archived at the National Climatic Data Center (NCDC). That data can be readily accessed at
Interactions of a co-rotating vortex pair at multiple offsets
NASA Astrophysics Data System (ADS)
Forster, Kyle J.; Barber, Tracie J.; Diasinos, Sammy; Doig, Graham
2017-05-01
Two NACA0012 vanes at various lateral offsets were investigated by wind tunnel testing to observe the interactions between their streamwise vortices. The vanes were separated by nine chord lengths in the streamwise direction to allow the upstream vortex to impact the downstream geometry. The vanes were evaluated at an angle of incidence of 8° and a Reynolds number of 7 × 10⁴ using particle image velocimetry. A helical motion of the vortices was observed, with the rotational rate increasing as the offset was reduced toward the point of vortex merging. Downstream meandering of the weaker vortex was found to increase in magnitude near the point of vortex merging. The merging process occurred more rapidly when the upstream vortex was passed on the pressure side of the vane, with the downstream vortex being produced with less circulation and consequently merging into the upstream vortex. The merging distance was found to be a statistical rather than a deterministic quantity, indicating that the meandering of the vortices affected their separations and energies; this resulted in a fluctuation of the merging location. A loss of circulation associated with the merging process was identified, with the process of achieving vortex circularity causing vorticity diffusion; however, all merged cases maintained higher circulation than the single-vortex condition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iida, Y.; Yokoyama, T.; Hagenaar, H. J.
2012-06-20
Frequencies of magnetic patch processes on the supergranule boundary, namely, flux emergence, splitting, merging, and cancellation, are investigated through automatic detection. We use a set of line-of-sight magnetograms taken by the Solar Optical Telescope (SOT) on board the Hinode satellite. We found 1636 positive patches and 1637 negative patches in the data set, whose time duration is 3.5 hr and whose field of view is 112″ × 112″. The total numbers of magnetic processes are as follows: 493 positive and 482 negative splittings, 536 positive and 535 negative mergings, 86 cancellations, and 3 emergences. The total numbers of emergence and cancellation are significantly smaller than those of splitting and merging. Further, the frequency dependence of the merging and splitting processes on the flux content is investigated. Merging has a weak dependence on the flux content, with a power-law index of only 0.28. The timescale for splitting is found to be independent of the parent flux content before splitting, and corresponds to ≈33 minutes. It is also found that patches split into any flux content with the same probability. This splitting has a power-law distribution of the flux content with an index of -2 as a time-independent solution. These results support the idea that the frequency distribution of the flux content in the analyzed flux range is rapidly maintained by merging and splitting, namely, surface processes. We suggest a model for the frequency distributions of cancellation and emergence based on this idea.
Coupling the Solar-Wind/IMF to the Ionosphere through the High Latitude Cusps
NASA Technical Reports Server (NTRS)
Maynard, Nelson C.
2003-01-01
Magnetic merging is a primary means for coupling energy from the solar wind into the magnetosphere-ionosphere system. The location and nature of the process remain open questions. By correlating measurements from diverse locations and using large-scale MHD models to put the measurements in context, it is possible to constrain our interpretations of the global and meso-scale dynamics of magnetic merging. Recent evidence demonstrates that merging often occurs at high latitudes in the vicinity of the cusps. The location is in part controlled by the clock angle in the interplanetary magnetic field (IMF) Y-Z plane. In fact, B_Y bifurcates the cusp relative to source regions. The newly opened field lines may couple to the ionosphere at MLT locations as much as 3 hr away from local noon. On the other side of noon the cusp may be connected to merging sites in the opposite hemisphere. In fact, the small convection cell is generally driven by opposite-hemisphere merging. B_X controls the timing of the interaction and the merging sites in each hemisphere, which may respond to planar features in the IMF at different times. Correlation times are variable and are controlled by the dynamics of the tilt of the interplanetary electric field phase plane. The orientation of the phase plane may change significantly on time scales of tens of minutes. Merging is temporally variable and may be occurring at multiple sites simultaneously. Accelerated electrons from the merging process excite optical signatures at the foot of the newly opened field lines. All-sky photometer observations of 557.7 nm emissions in the cusp region provide a "television picture" of the merging process and may be used to infer the temporal and spatial variability of merging, tied to variations in the IMF.
Detection of EEG electrodes in brain volumes.
Graffigna, Juan P; Gómez, M Eugenia; Bustos, José J
2010-01-01
This paper presents a method to detect 128 EEG electrodes in an image study and to merge them with the Nuclear Magnetic Resonance volume for better diagnosis. First, we propose three hypotheses to define a specific acquisition protocol in order to recognize the electrodes and to avoid distortions in the image. Second, we describe a method for segmenting the electrodes. Finally, registration is performed between the volume of the electrodes and the NMR volume.
NASA Astrophysics Data System (ADS)
Kleusberg, E.; Sarmast, S.; Schlatter, P.; Ivanell, S.; Henningson, D. S.
2016-09-01
The wake structure behind a wind turbine, generated by the spectral element code Nek5000, is compared with that from the finite volume code EllipSys3D. The wind turbine blades are modeled using the actuator line method. We conduct the comparison on two different setups. One is based on an idealized rotor approximation with constant circulation imposed along the blades corresponding to Glauert's optimal operating condition, and the other is the Tjæreborg wind turbine. The focus lies on analyzing the differences in the wake structures entailed by the different codes and corresponding setups. The comparisons show good agreement for the defining parameters of the wake such as the wake expansion, helix pitch and circulation of the helical vortices. Differences can be related to the lower numerical dissipation in Nek5000 and to the domain differences at the rotor center. At comparable resolution Nek5000 yields more accurate results. It is observed that in the spectral element method the helical vortices, both at the tip and root of the actuator lines, retain their initial swirl velocity distribution for a longer distance in the near wake. This results in a lower vortex core growth and larger maximum vorticity along the wake. Additionally, it is observed that the breakdown process of the spiral tip vortices is significantly different between the two methods, with vortex merging occurring immediately after the onset of instability in the finite volume code, while Nek5000 simulations exhibit a 2-3 radii period of vortex pairing before merging.
The use of Merging and Aggregation Operators for MRDB Data Feeding
NASA Astrophysics Data System (ADS)
Kozioł, Krystian; Lupa, Michał
2013-12-01
This paper presents the application of two generalization operators - merging and displacement - in the process of automatically feeding a multiresolution database of topographic objects from large-scale databases (1:500 to 1:5000). An ordered collection of objects makes up a layer of development that, in the process of generalization, is subjected to merging and displacement in order to maintain recognizability at the reduced scale of the map. The solution to the above problem is the algorithms described in this work; these algorithms use the standard recognition of drawings (Chrobak 2010), independent of the user. A digital cartographic generalization process is a set of consecutive operators in which merging and aggregation play a key role, and their proper operation has a significant impact on the qualitative assessment of data generalization.
Simulation of the target creation through FRC merging for a magneto-inertial fusion concept
NASA Astrophysics Data System (ADS)
Li, Chenguang; Yang, Xianjun
2017-04-01
A two-dimensional magnetohydrodynamics model has been used to simulate the target-creation process in a magneto-inertial fusion concept named the Magnetized Plasma Fusion Reactor (MPFR) [C. Li and X. Yang, Phys. Plasmas 23, 102702 (2016)], where the target plasma created through field-reversed configuration (FRC) merging is compressed by an imploding liner driven by a pulsed-power driver. In the scheme, two initial FRCs are translated into the region where merging occurs, producing a target plasma ready for compression. The simulations cover the three stages of the target-creation process: formation, translation, and merging. The factors affecting the achieved target are analyzed numerically. The magnetic field gradient produced by the conical coils is found to determine how fast the FRC is accelerated to peak velocity and how quickly the collision merging occurs. Moreover, it is demonstrated that FRC merging can be realized by real coils with gaps showing nearly identical performance, and the optimized target obtained by FRC merging shows larger internal energy and retained flux, which is more suitable for the MPFR concept.
NASA Astrophysics Data System (ADS)
Yano, Taihei; JASMINE-WG
2018-04-01
Small-JASMINE (hereafter SJ), an infrared astrometric satellite, will measure the positions and proper motions of stars located around the Galactic center by operating at near-infrared wavelengths. SJ will clarify the formation process of the supermassive black hole (hereafter SMBH) at the Galactic center. In particular, SJ will determine whether the SMBH was formed by a sequential merging of multiple black holes. The clarification of this formation process of the SMBH will contribute to a better understanding of the merging process of satellite galaxies into the Galaxy, which is suggested by the standard galaxy formation scenario. A numerical simulation (Tanikawa and Umemura, 2014) suggests that if the SMBH was formed by the merging process, then the dynamical friction caused by the black holes has influenced the phase space distribution of stars. The phase space distribution measured by SJ will make it possible to determine the occurrences of the merging process.
NASA Astrophysics Data System (ADS)
Sun, Jie; Li, Zhipeng; Sun, Jian
2015-12-01
Recurring bottlenecks on freeways and expressways are considered the main cause of traffic congestion in urban traffic systems, and on-ramp bottlenecks are the most significant sites that may result in congestion. In this paper, the traffic bottleneck characteristics of a simple and typical expressway on-ramp are investigated by means of simulation modeling under open boundary conditions. In the simulations, the running behavior of each vehicle is described by a car-following model with a calibrated optimal velocity function, and lane-changing actions at the merging section are modeled by a novel set of rules. We numerically derive the traffic volume of the on-ramp bottleneck under different upstream arrival rates of the mainline and ramp flows. It is found that vehicles from the ramp strongly affect the passage of mainline vehicles, and that the merging ratio changes as ramp traffic increases, when the arrival rate of the mainline flow is greater than a critical value. In addition, we clarify the dependence of the merging ratio of the on-ramp bottleneck on the probability of lane changing and the length of the merging section, and some corresponding intelligent control strategies are proposed for actual traffic applications.
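Car-following dynamics of the kind referred to above are typically of the optimal-velocity (Bando-type) form; the sketch below uses a standard tanh optimal-velocity function with illustrative parameters, since the paper calibrates its own:

```python
# Sketch of the classic optimal-velocity car-following update; the tanh
# form and all parameters are illustrative placeholders, not the
# paper's calibrated function.
import numpy as np

def ov(gap, v_max=33.0, gap_c=25.0, width=10.0):
    """Optimal velocity (m/s) as a function of the headway gap (m)."""
    return 0.5 * v_max * (1.0 + np.tanh((gap - gap_c) / width))

def step(x, v, a=1.0, dt=0.1, road_len=1000.0):
    """One explicit Euler step for N vehicles on a ring road."""
    gaps = (np.roll(x, -1) - x) % road_len   # headway to the leader
    dv = a * (ov(gaps) - v)                  # relax toward optimal speed
    return x + v * dt, np.clip(v + dv * dt, 0.0, None)

x = np.sort(np.random.uniform(0, 1000.0, 20))
v = np.full(20, 10.0)
for _ in range(5000):
    x, v = step(x, v)
```

An on-ramp study like the one above would add the merging rules on top of this, inserting ramp vehicles into mainline gaps according to the lane-changing probability.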
Incorporating Edge Information into Best Merge Region-Growing Segmentation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Pasolli, Edoardo
2014-01-01
We have previously developed a best merge region-growing approach that integrates nonadjacent region object aggregation with the neighboring region merge process usually employed in region-growing segmentation approaches. This approach has been named HSeg, because it provides a hierarchical set of image segmentation results. Up to this point, HSeg considered only global region feature information in the region-growing decision process. We present here three new versions of HSeg that include local edge information in the region-growing decision process at different levels of rigor. We then compare the effectiveness and processing times of these new versions of HSeg with each other and with the original version of HSeg.
The Volume Field Model about Strong Interaction and Weak Interaction
NASA Astrophysics Data System (ADS)
Liu, Rongwu
2016-03-01
For a long time, researchers have believed that the strong interaction and the weak interaction are realized by exchanging intermediate particles. This article proposes a new mechanism as follows: a volume field is a form of material existence in plane space; it undergoes volume-changing motion in the form of non-continuous motion, and volume fields exert strong or weak interaction on each other by overlapping their volume fields. Based on these concepts, this article further proposes a "bag model" of the volume field for the atomic nucleus, which includes three sub-models: the complex structure of the fundamental body (such as the quark), the atom-like structure of the hadron, and the molecule-like structure of the atomic nucleus. This article also proposes a plane space model and formulates a physics model of the volume field in plane space, as well as a model of space-time conversion. The model of space-time conversion suggests that point space-time and plane space-time convert into each other by means of merging and rupture, respectively; the essence of space-time conversion is the mutual transformation of matter and energy. The collision of high-energy hadrons, the formation of a black hole, and the Big Bang of the universe are three kinds of space-time conversion.
Managing Large Datasets for Atmospheric Research
NASA Technical Reports Server (NTRS)
Chen, Gao
2015-01-01
Since the mid-1980s, airborne and ground measurements have been widely used to provide comprehensive characterization of atmospheric composition and processes. Field campaigns have generated a wealth of insitu data and have grown considerably over the years in terms of both the number of measured parameters and the data volume. This can largely be attributed to the rapid advances in instrument development and computing power. The users of field data may face a number of challenges spanning data access, understanding, and proper use in scientific analysis. This tutorial is designed to provide an introduction to using data sets, with a focus on airborne measurements, for atmospheric research. The first part of the tutorial provides an overview of airborne measurements and data discovery. This will be followed by a discussion on the understanding of airborne data files. An actual data file will be used to illustrate how data are reported, including the use of data flags to indicate missing data and limits of detection. Retrieving information from the file header will be discussed, which is essential to properly interpreting the data. Field measurements are typically reported as a function of sampling time, but different instruments often have different sampling intervals. To create a combined data set, the data merge process (interpolation of all data to a common time base) will be discussed in terms of the algorithm, data merge products available from airborne studies, and their application in research. Statistical treatment of missing data and data flagged for limit of detection will also be covered in this section. These basic data processing techniques are applicable to both airborne and ground-based observational data sets. Finally, the recently developed Toolsets for Airborne Data (TAD) will be introduced. TAD (tad.larc.nasa.gov) is an airborne data portal offering tools to create user defined merged data products with the capability to provide descriptive statistics and the option to treat measurement uncertainty.
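A minimal sketch of the time-base merge described above follows: each instrument's series is interpolated onto a common time base, and points too far from any real sample are masked as missing rather than invented. The variable names, tolerance, and example fields are illustrative, not TAD's actual interface:

```python
# Interpolate every instrument's time series onto a common time base,
# leaving NaN where a source has no sample within max_gap seconds.
import numpy as np

def merge_to_common_base(t_common, series, max_gap=10.0):
    """series: dict name -> (t, y); returns dict name -> y on t_common."""
    merged = {}
    for name, (t, y) in series.items():
        yi = np.interp(t_common, t, y)
        # distance from each merge-base point to the nearest real sample
        nearest = np.min(np.abs(t_common[:, None] - t[None, :]), axis=1)
        yi[nearest > max_gap] = np.nan   # flag as missing, not zero
        merged[name] = yi
    return merged

t_common = np.arange(0.0, 100.0, 1.0)    # 1 Hz common time base
series = {"CO_ppbv": (np.arange(0, 100, 0.1), np.random.rand(1000)),
          "O3_ppbv": (np.arange(0, 100, 5.0), np.random.rand(20))}
merged = merge_to_common_base(t_common, series)
```

Masking rather than extrapolating mirrors the statistical treatment of missing and below-detection-limit data the tutorial covers: downstream statistics must see those points as absent.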
Domain decomposition by the advancing-partition method for parallel unstructured grid generation
NASA Technical Reports Server (NTRS)
Banihashemi, legal representative, Soheila (Inventor); Pirzadeh, Shahyar Z. (Inventor)
2012-01-01
In a method for domain decomposition for generating unstructured grids, a surface mesh is generated for a spatial domain. A location of a partition plane dividing the domain into two sections is determined. Triangular faces on the surface mesh that intersect the partition plane are identified. A partition grid of tetrahedral cells, dividing the domain into two sub-domains, is generated using a marching process in which a front comprises only faces of new cells which intersect the partition plane. The partition grid is generated until no active faces remain on the front. Triangular faces on each side of the partition plane are collected into two separate subsets. Each subset of triangular faces is renumbered locally and a local/global mapping is created for each sub-domain. A volume grid is generated for each sub-domain. The partition grid and volume grids are then merged using the local-global mapping.
Comparison of properties between two Viking seismic tapes
NASA Astrophysics Data System (ADS)
Yamamoto, Y.; Yamada, R.
2016-12-01
The restoration work on the seismometer data from Viking Lander 2 is still continuing. Originally, the data were processed and archived separately at MIT and UTIG, and each dataset is accessible via the Internet today. Their file formats differ, but both are currently readable thanks to the continuing investigation. However, although most of their data are highly consistent, there is some inconsistency between them. Understanding the differences requires knowledge of spacecraft archiving and off-line processing, because the differences were caused by the off-line processing. The data processing of spacecraft often requires merge and sort processing of the raw data. Merge processing is normally performed to eliminate duplicated data, and sort processing is performed to fix the data order. UTIG does not seem to have performed this merge and sort processing; therefore, the UTIG-processed data retain duplicates. The MIT-processed data went through merge and sort processing, but the raw data sometimes include wrong time tags, which cannot be fixed strictly by sorting. Also, the MIT-processed data have sufficient documentation to understand the metadata, while the UTIG data have only a brief instruction. Therefore, the MIT and UTIG data are treated as complementary, and a better data set can be established using both of them. In this presentation, we show the method used to build a better data set of the Viking Lander 2 seismic data.
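A schematic of the merge-and-sort pass described above, in Python for illustration (the record layout is hypothetical, and, as the authors note, wrong time tags cannot be repaired by sorting alone):

```python
# Merge records from multiple archives: sort by time tag and drop exact
# duplicates, keeping the first copy. Keys are hypothetical; real Viking
# records need format-specific time tags and payload fields.
def merge_records(*sources):
    seen, merged = set(), []
    for rec in sorted((r for src in sources for r in src),
                      key=lambda r: r["time_tag"]):
        key = (rec["time_tag"], rec["payload"])
        if key not in seen:              # duplicate transmissions collapse
            seen.add(key)
            merged.append(rec)
    return merged

a = [{"time_tag": 2, "payload": b"\x01"}, {"time_tag": 1, "payload": b"\x00"}]
b = [{"time_tag": 1, "payload": b"\x00"}]    # duplicate of a record in a
print(merge_records(a, b))                   # two records, time-ordered
```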
FPGA-based RF spectrum merging and adaptive hopset selection
NASA Astrophysics Data System (ADS)
McLean, R. K.; Flatley, B. N.; Silvius, M. D.; Hopkinson, K. M.
The radio frequency (RF) spectrum is a limited resource. Spectrum allotment disputes stem from this scarcity as many radio devices are confined to a fixed frequency or frequency sequence. One alternative is to incorporate cognition within a reconfigurable radio platform, therefore enabling the radio to adapt to dynamic RF spectrum environments. In this way, the radio is able to actively sense the RF spectrum, decide, and act accordingly, thereby sharing the spectrum and operating in more flexible manner. In this paper, we present a novel solution for merging many distributed RF spectrum maps into one map and for subsequently creating an adaptive hopset. We also provide an example of our system in operation, the result of which is a pseudorandom adaptive hopset. The paper then presents a novel hardware design for the frequency merger and adaptive hopset selector, both of which are written in VHDL and implemented as a custom IP core on an FPGA-based embedded system using the Xilinx Embedded Development Kit (EDK) software tool. The design of the custom IP core is optimized for area, and it can process a high-volume digital input via a low-latency circuit architecture. The complete embedded system includes the Xilinx PowerPC microprocessor, UART serial connection, and compact flash memory card IP cores, and our custom map merging/hopset selection IP core, all of which are targeted to the Virtex IV FPGA. This system is then incorporated into a cognitive radio prototype on a Rice University Wireless Open Access Research Platform (WARP) reconfigurable radio.
Identifying with fictive characters: structural brain correlates of the personality trait 'fantasy'.
Cheetham, Marcus; Hänggi, Jürgen; Jancke, Lutz
2014-11-01
The perception of oneself as absorbed in the thoughts, feelings and happenings of a fictive character (e.g. in a novel or film) as if the character's experiences were one's own is referred to as identification. We investigated whether individual variation in the personality trait of identification is associated with individual variation in the structure of specific brain regions, using surface and volume-based morphometry. The hypothesized regions of interest were selected on the basis of their functional role in subserving the cognitive processing domains considered important for identification (i.e. mental imagery, empathy, theory of mind and merging) and for the immersive experience called 'presence'. Controlling for age, sex, whole-brain volume and other traits, identification covaried significantly with the left hippocampal volume, cortical thickness in the right anterior insula and the left dorsal medial prefrontal cortex, and with gray matter volume in the dorsolateral prefrontal cortex. These findings show that trait identification is associated with structural variation in specific brain regions. The findings are discussed in relation to the potential functional contribution of these regions to identification. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Human Teaching and Human Learning in the Language Class: A Confluent Approach.
ERIC Educational Resources Information Center
Galyean, Beverly
Much attention has been given to the imbalance between thinking and feelings in the educative process. Human teaching calls for merging the cognitive and affective processes into one confluent learning experience. Language learning is viewed primarily as a means for affective reflective communication. Personal growth merges with language…
Entanglement and Coherence in Quantum State Merging.
Streltsov, A; Chitambar, E; Rana, S; Bera, M N; Winter, A; Lewenstein, M
2016-06-17
Understanding the resource consumption in distributed scenarios is one of the main goals of quantum information theory. A prominent example for such a scenario is the task of quantum state merging, where two parties aim to merge their tripartite quantum state parts. In standard quantum state merging, entanglement is considered to be an expensive resource, while local quantum operations can be performed at no additional cost. However, recent developments show that some local operations could be more expensive than others: it is reasonable to distinguish between local incoherent operations and local operations which can create coherence. This idea leads us to the task of incoherent quantum state merging, where one of the parties has free access to local incoherent operations only. In this case the resources of the process are quantified by pairs of entanglement and coherence. Here, we develop tools for studying this process and apply them to several relevant scenarios. While quantum state merging can lead to a gain of entanglement, our results imply that no merging procedure can gain entanglement and coherence at the same time. We also provide a general lower bound on the entanglement-coherence sum and show that the bound is tight for all pure states. Our results also lead to an incoherent version of Schumacher compression: in this case the compression rate is equal to the von Neumann entropy of the diagonal elements of the corresponding quantum state.
Efficient Encoding and Rendering of Time-Varying Volume Data
NASA Technical Reports Server (NTRS)
Ma, Kwan-Liu; Smith, Diann; Shih, Ming-Yun; Shen, Han-Wei
1998-01-01
Visualization of time-varying volumetric data sets, which may be obtained from numerical simulations or sensing instruments, provides scientists insights into the detailed dynamics of the phenomenon under study. This paper describes a coherent solution based on quantization, coupled with octree and difference encoding for visualizing time-varying volumetric data. Quantization is used to attain voxel-level compression and may have a significant influence on the performance of the subsequent encoding and visualization steps. Octree encoding is used for spatial domain compression, and difference encoding for temporal domain compression. In essence, neighboring voxels may be fused into macro voxels if they have similar values, and subtrees at consecutive time steps may be merged if they are identical. The software rendering process is tailored according to the tree structures and the volume visualization process. With the tree representation, selective rendering may be performed very efficiently. Additionally, the I/O costs are reduced. With these combined savings, a higher level of user interactivity is achieved. We have studied a variety of time-varying volume datasets, performed encoding based on data statistics, and optimized the rendering calculations wherever possible. Preliminary tests on workstations have shown in many cases tremendous reduction by as high as 90% in both storage space and inter-frame delay.
A GaAs vector processor based on parallel RISC microprocessors
NASA Astrophysics Data System (ADS)
Misko, Tim A.; Rasset, Terry L.
A vector processor architecture based on the development of a 32-bit microprocessor using gallium arsenide (GaAs) technology has been developed. The McDonnell Douglas vector processor (MVP) will be fabricated completely from GaAs digital integrated circuits. The MVP architecture includes a vector memory of 1 megabyte, a parallel bus architecture with eight processing elements connected in parallel, and a control processor. The processing elements consist of a reduced instruction set CPU (RISC) with four floating-point coprocessor units and necessary memory interface functions. This architecture has been simulated for several benchmark programs including complex fast Fourier transform (FFT), complex inner product, trigonometric functions, and sort-merge routine. The results of this study indicate that the MVP can process a 1024-point complex FFT at a speed of 112 microsec (389 megaflops) while consuming approximately 618 W of power in a volume of approximately 0.1 ft-cubed.
NASA Astrophysics Data System (ADS)
Jelínek, P.; Karlický, M.; Van Doorsselaere, T.; Bárta, M.
2017-10-01
Using the FLASH code, which solves the full set of the 2D non-ideal (resistive) time-dependent magnetohydrodynamic (MHD) equations, we study processes during the magnetic reconnection in a vertical gravitationally stratified current sheet. We show that during these processes, which correspond to processes in solar flares, plasmoids are formed due to the tearing mode instability of the current sheet. These plasmoids move upward or downward along the vertical current sheet and some of them merge into larger plasmoids. We study the density and temperature structure of these plasmoids and their time evolution in detail. We found that during the merging of two plasmoids, the resulting larger plasmoid starts to oscillate with a period largely determined by L/{c}{{A}}, where L is the size of the plasmoid and c A is the Alfvén speed in the lateral parts of the plasmoid. In our model, L/{c}{{A}} evaluates to ˜ 25 {{s}}. Furthermore, the plasmoid moving downward merges with the underlying flare arcade, which causes oscillations of the arcade. In our model, the period of this arcade oscillation is ˜ 35 {{s}}, which also corresponds to L/{c}{{A}}, but here L means the length of the loop and c A is the average Alfvén speed in the loop. We also show that the merging process of the plasmoid with the flare arcade is a complex process as presented by complex density and temperature structures of the oscillating arcade. Moreover, all these processes are associated with magnetoacoustic waves produced by the motion and merging of plasmoids.
An improved method for pancreas segmentation using SLIC and interactive region merging
NASA Astrophysics Data System (ADS)
Zhang, Liyuan; Yang, Huamin; Shi, Weili; Miao, Yu; Li, Qingliang; He, Fei; He, Wei; Li, Yanfang; Zhang, Huimao; Mori, Kensaku; Jiang, Zhengang
2017-03-01
Considering the weak edges in pancreas segmentation, this paper proposes a new solution which integrates more features of CT images by combining SLIC superpixels and interactive region merging. In the proposed method, Mahalanobis distance is first utilized in SLIC method to generate better superpixel images. By extracting five texture features and one gray feature, the similarity measure between two superpixels becomes more reliable in interactive region merging. Furthermore, object edge blocks are accurately addressed by re-segmentation merging process. Applying the proposed method to four cases of abdominal CT images, we segment pancreatic tissues to verify the feasibility and effectiveness. The experimental results show that the proposed method can make segmentation accuracy increase to 92% on average. This study will boost the application process of pancreas segmentation for computer-aided diagnosis system.
Photon merging and splitting in electromagnetic field inhomogeneities
NASA Astrophysics Data System (ADS)
Gies, Holger; Karbstein, Felix; Seegert, Nico
2016-04-01
We investigate photon merging and splitting processes in inhomogeneous, slowly varying electromagnetic fields. Our study is based on the three-photon polarization tensor following from the Heisenberg-Euler effective action. We put special emphasis on deviations from the well-known constant field results, also revisiting the selection rules for these processes. In the context of high-intensity laser facilities, we analytically determine compact expressions for the number of merged/split photons as obtained in the focal spots of intense laser beams. For the parameter range of typical petawatt class laser systems as pump and probe, we provide estimates for the numbers of signal photons attainable in an actual experiment. The combination of frequency upshifting, polarization dependence and scattering off the inhomogeneities renders photon merging an ideal signature for the experimental exploration of nonlinear quantum vacuum properties.
Metal-Catalyzed Aqueous Oxidation Processes in Merged Microdroplets
NASA Astrophysics Data System (ADS)
Davis, R. D.; Wilson, K. R.
2017-12-01
Iron-catalyzed production of reactive oxygen species (ROS) from hydrogen peroxide (Fenton's reaction) is a fundamental process throughout nature, from groundwater to cloud droplets. In recent years, Fenton's chemistry has gained further interest in atmospheric science as a potentially important process in the oxidation of aqueous secondary organic aerosol (e.g., Chu et al., Sci. Rep., 2017), with some observations indicating that Fenton's reaction proceeds at a higher rate at aerosol interfaces compared to in the bulk (Enami et al., PNAS, 2014). However, a fundamental-level mechanistic understanding of this process remains elusive and the relative importance of interfacial versus bulk chemistry for aqueous organic processing via Fenton's has yet to be fully established. Here, we present a microreactor experimental approach to studying aqueous-phase Fenton's chemistry in microdroplets by rapidly mixing droplets of different composition. Utilizing two on-demand droplet generators, a stream of microdroplets containing aqueous iron chloride were merged with a separate stream of microdroplets containing aqueous hydrogen peroxide and a range of aromatic organic compounds, initiating ROS production and subsequent aqueous-phase oxidation reactions. Upon merging, mixing of the microdroplets occurred in submillisecond timescales, thus allowing the reaction progress to be monitored with high spatial and temporal resolution. For relatively large microreactor (droplet) sizes (50 µm diameter post-merging), the Fenton-initiated aqueous oxidation of aromatic organic compounds in merged microdroplets was consistent with bulk predictions with hydroxyl radicals as the ROS. The microdroplet-size dependence of this observation, along with the role of other ROS species produced from Fenton and Fenton-like processes, will be discussed in the context of relative importance to aqueous organic processing of atmospheric particles.
Luka, George; Ahmadi, Ali; Najjaran, Homayoun; Alocilja, Evangelyn; DeRosa, Maria; Wolthers, Kirsten; Malki, Ahmed; Aziz, Hassan; Althani, Asmaa; Hoorfar, Mina
2015-01-01
A biosensor can be defined as a compact analytical device or unit incorporating a biological or biologically derived sensitive recognition element immobilized on a physicochemical transducer to measure one or more analytes. Microfluidic systems, on the other hand, provide throughput processing, enhance transport for controlling the flow conditions, increase the mixing rate of different reagents, reduce sample and reagents volume (down to nanoliter), increase sensitivity of detection, and utilize the same platform for both sample preparation and detection. In view of these advantages, the integration of microfluidic and biosensor technologies provides the ability to merge chemical and biological components into a single platform and offers new opportunities for future biosensing applications including portability, disposability, real-time detection, unprecedented accuracies, and simultaneous analysis of different analytes in a single device. This review aims at representing advances and achievements in the field of microfluidic-based biosensing. The review also presents examples extracted from the literature to demonstrate the advantages of merging microfluidic and biosensing technologies and illustrate the versatility that such integration promises in the future biosensing for emerging areas of biological engineering, biomedical studies, point-of-care diagnostics, environmental monitoring, and precision agriculture. PMID:26633409
NASA Astrophysics Data System (ADS)
Budge, Scott E.; Badamikar, Neeraj S.; Xie, Xuan
2015-03-01
Several photogrammetry-based methods have been proposed that the derive three-dimensional (3-D) information from digital images from different perspectives, and lidar-based methods have been proposed that merge lidar point clouds and texture the merged point clouds with digital imagery. Image registration alone has difficulty with smooth regions with low contrast, whereas point cloud merging alone has difficulty with outliers and a lack of proper convergence in the merging process. This paper presents a method to create 3-D images that uses the unique properties of texel images (pixel-fused lidar and digital imagery) to improve the quality and robustness of fused 3-D images. The proposed method uses both image processing and point-cloud merging to combine texel images in an iterative technique. Since the digital image pixels and the lidar 3-D points are fused at the sensor level, more accurate 3-D images are generated because registration of image data automatically improves the merging of the point clouds, and vice versa. Examples illustrate the value of this method over other methods. The proposed method also includes modifications for the situation where an estimate of position and attitude of the sensor is known, when obtained from low-cost global positioning systems and inertial measurement units sensors.
Antarctic icebergs distributions 1992-2014
NASA Astrophysics Data System (ADS)
Tournadre, J.; Bouhier, N.; Girard-Ardhuin, F.; Rémy, F.
2016-01-01
Basal melting of floating ice shelves and iceberg calving constitute the two almost equal paths of freshwater flux between the Antarctic ice cap and the Southern Ocean. The largest icebergs (>100 km2) transport most of the ice volume but their basal melting is small compared to their breaking into smaller icebergs that constitute thus the major vector of freshwater. The archives of nine altimeters have been processed to create a database of small icebergs (<8 km2) within open water containing the positions, sizes, and volumes spanning the 1992-2014 period. The intercalibrated monthly ice volumes from the different altimeters have been merged in a homogeneous 23 year climatology. The iceberg size distribution, covering the 0.1-10,000 km2 range, estimated by combining small and large icebergs size measurements follows well a power law of slope -1.52 ± 0.32 close to the -3/2 laws observed and modeled for brittle fragmentation. The global volume of ice and its distribution between the ocean basins present a very strong interannual variability only partially explained by the number of large icebergs. Indeed, vast zones of the Southern Ocean free of large icebergs are largely populated by small iceberg drifting over thousands of kilometers. The correlation between the global small and large icebergs volumes shows that small icebergs are mainly generated by large ones breaking. Drifting and trapping by sea ice can transport small icebergs for long period and distances. Small icebergs act as an ice diffuse process along large icebergs trajectories while sea ice trapping acts as a buffer delaying melting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, Roshan; Houser, Paul R.; Anantharaj, Valentine G.
2011-04-01
Precipitation products are currently available from various sources at higher spatial and temporal resolution than any time in the past. Each of the precipitation products has its strengths and weaknesses in availability, accuracy, resolution, retrieval techniques and quality control. By merging the precipitation data obtained from multiple sources, one can improve its information content by minimizing these issues. However, precipitation data merging poses challenges of scale-mismatch, and accurate error and bias assessment. In this paper we present Optimal Merging of Precipitation (OMP), a new method to merge precipitation data from multiple sources that are of different spatial and temporal resolutionsmore » and accuracies. This method is a combination of scale conversion and merging weight optimization, involving performance-tracing based on Bayesian statistics and trend-analysis, which yields merging weights for each precipitation data source. The weights are optimized at multiple scales to facilitate multiscale merging and better precipitation downscaling. Precipitation data used in the experiment include products from the 12-km resolution North American Land Data Assimilation (NLDAS) system, the 8-km resolution CMORPH and the 4-km resolution National Stage-IV QPE. The test cases demonstrate that the OMP method is capable of identifying a better data source and allocating a higher priority for them in the merging procedure, dynamically over the region and time period. This method is also effective in filtering out poor quality data introduced into the merging process.« less
Redundant via insertion in self-aligned double patterning
NASA Astrophysics Data System (ADS)
Song, Youngsoo; Jung, Jinwook; Shin, Youngsoo
2017-03-01
Redundant via (RV) insertion is employed to enhance via manufacturability, and has been extensively studied. Self-aligned double patterning (SADP) process, brings a new challenge to RV insertion since newly created cut for each RV insertion has to be taken care of. Specifically, when a cut for RV, which we simply call RV-cut, is formed, cut conflict may occur with nearby line-end cuts, which results in a decrease in RV candidates. We introduce cut merging to reduce the number of cut conflicts; merged cuts are processed with stitch using litho-etch-litho-etch (LELE) multi-patterning method. In this paper, we propose a new RV insertion method with cut merging in SADP for the first time. In our experiments, a simple RV insertion yields 55.3% vias to receives RVs; our proposed method that considers cut merging increases that number to 69.6% on average of test circuits.
Cold pool organization and the merging of convective updrafts in a Large Eddy Simulation
NASA Astrophysics Data System (ADS)
Glenn, I. B.; Krueger, S. K.
2016-12-01
Cold pool organization is a process that accelerates the transition from shallow to deep cumulus convection, and leads to higher deep convective cloud top heights. The mechanism by which cold pool organization enhances convection remains not well understood, but the basic idea is that since precipitation evaporation and a low equivalent potential temperature in the mid-troposphere lead to strong cold pools, the net cold pool effect can be accounted for in a cumulus parameterization as a relationship involving those factors. Understanding the actual physical mechanism at work will help quantify the strength of the relationship between cold pools and enhanced deep convection. One proposed mechanism of enhancement is that cold pool organization leads to reduced distances between updrafts, creating a local environment more conducive to convection as updrafts entrain parcels of air recently detrained by their neighbors. We take this hypothesis one step further and propose that convective updrafts actually merge, not just exchange recently processed air. Because entrainment and detrainment around an updraft draws nearby air in or pushes it out, respectively, they act like dynamic flow sources and sinks, drawing each other in or pushing each other away. The acceleration is proportional to the inverse square of the distance between two updrafts, so a small reduction in distance can make a big difference in the rate of merging. We have shown in previous research how merging can be seen as collisions between different updraft air parcels using Lagrangian Parcel Trajectories (LPTs) released in a Large Eddy Simulation (LES) during a period with organized deep convection. Now we use a Eulerian frame of reference to examine the updraft merging process during the transition from shallow to organized deep convection. We use a case based on the Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) for our LES. We directly measure the rate of entrainment and the properties of the entrained air for all convective updrafts in the simulation. We use a tracking algorithm to define merging between convective updrafts. We will show the rate of merging as the transition between shallow and deep convection occurs and the different distributions of entrainment rate and ultimate detrainment height of merged and non-merged updrafts.
Grins and Groans of Publishing in Professional School Counseling
ERIC Educational Resources Information Center
Brott, Pamelia E.
2005-01-01
Professional School Counseling is the professional journal published by the American School Counselor Association (ASCA). The first volume of the journal was published in 1997 after the predecessor journals, The School Counselor and Elementary School Guidance Journal, merged. The School Counselor was in publication from the early 1950s and…
The CEIC Review, 2001. A Catalyst for Merging Research, Policy, and Practice.
ERIC Educational Resources Information Center
Hartman, Bonnie, Ed.; Rohland, Mark, Ed.
2001-01-01
These three volumes contain articles related to (1) improving educational productivity (lessons from economics); (2) school-to-work (STW); and (3) reduced class size, offering recommendations from national invitational conferences. Topics include making schools work; tax revolts and school performance; market pressure and the impact on…
Searches for all types of binary mergers in the first Advanced LIGO observing run
NASA Astrophysics Data System (ADS)
Read, Jocelyn
2017-01-01
The first observational run of the Advanced LIGO detectors covered September 12, 2015 to January 19, 2016. In that time, two definitive observations of merging binary black hole systems were made. In particular, the second observation, GW151226, relied on matched-filter searches targeting merging binaries. These searches were also capable of detecting binary mergers from binary neutron stars and from black-hole/neutron-star binaries. In this talk, I will give an overview of LIGO compact binary coalescence searches, in particular focusing on systems that contain neutron stars. I will discuss the sensitive volumes of the first observing run, the astrophysical implications of detections and non-detections, and prospects for future observations
Cosmological perturbation effects on gravitational-wave luminosity distance estimates
NASA Astrophysics Data System (ADS)
Bertacca, Daniele; Raccanelli, Alvise; Bartolo, Nicola; Matarrese, Sabino
2018-06-01
Waveforms of gravitational waves provide information about a variety of parameters for the binary system merging. However, standard calculations have been performed assuming a FLRW universe with no perturbations. In reality this assumption should be dropped: we show that the inclusion of cosmological perturbations translates into corrections to the estimate of astrophysical parameters derived for the merging binary systems. We compute corrections to the estimate of the luminosity distance due to velocity, volume, lensing and gravitational potential effects. Our results show that the amplitude of the corrections will be negligible for current instruments, mildly important for experiments like the planned DECIGO, and very important for future ones such as the Big Bang Observer.
Numerical studies of the margin of vortices with decaying cores
NASA Technical Reports Server (NTRS)
Liu, G. C.; Ting, L.
1986-01-01
The merging of vortices to a single one is a canonical incompressible viscous flow problem. The merging process begins when the core sizes or the vortices are comparable to their distances and ends when the contour lines of constant vorticity lines are circularized around one center. Approximate solutions to this problem are constructed by adapting the asymptotic solutions for distinct vortices. For the early stage of merging, the next-order terms in the asymptotic solutions are added to the leading term. For the later stage of merging, the vorticity distribution is reinitialized by vortices with overlapping core structures guided by the 'rule of merging' and the velocity of the 'vortex centers' are then defined by a minimum principle. To show the accuracy of the approximate solution, it is compared with the finite-difference solution.
Modeling methods for merging computational and experimental aerodynamic pressure data
NASA Astrophysics Data System (ADS)
Haderlie, Jacob C.
This research describes a process to model surface pressure data sets as a function of wing geometry from computational and wind tunnel sources and then merge them into a single predicted value. The described merging process will enable engineers to integrate these data sets with the goal of utilizing the advantages of each data source while overcoming the limitations of both; this provides a single, combined data set to support analysis and design. The main challenge with this process is accurately representing each data source everywhere on the wing. Additionally, this effort demonstrates methods to model wind tunnel pressure data as a function of angle of attack as an initial step towards a merging process that uses both location on the wing and flow conditions (e.g., angle of attack, flow velocity or Reynold's number) as independent variables. This surrogate model of pressure as a function of angle of attack can be useful for engineers that need to predict the location of zero-order discontinuities, e.g., flow separation or normal shocks. Because, to the author's best knowledge, there is no published, well-established merging method for aerodynamic pressure data (here, the coefficient of pressure Cp), this work identifies promising modeling and merging methods, and then makes a critical comparison of these methods. Surrogate models represent the pressure data for both data sets. Cubic B-spline surrogate models represent the computational simulation results. Machine learning and multi-fidelity surrogate models represent the experimental data. This research compares three surrogates for the experimental data (sequential--a.k.a. online--Gaussian processes, batch Gaussian processes, and multi-fidelity additive corrector) on the merits of accuracy and computational cost. The Gaussian process (GP) methods employ cubic B-spline CFD surrogates as a model basis function to build a surrogate model of the WT data, and this usage of the CFD surrogate in building the WT data could serve as a "merging" because the resulting WT pressure prediction uses information from both sources. In the GP approach, this model basis function concept seems to place more "weight" on the Cp values from the wind tunnel (WT) because the GP surrogate uses the CFD to approximate the WT data values. Conversely, the computationally inexpensive additive corrector method uses the CFD B-spline surrogate to define the shape of the spanwise distribution of the Cp while minimizing prediction error at all spanwise locations for a given arc length position; this, too, combines information from both sources to make a prediction of the 2-D WT-based Cp distribution, but the additive corrector approach gives more weight to the CFD prediction than to the WT data. Three surrogate models of the experimental data as a function of angle of attack are also compared for accuracy and computational cost. These surrogates are a single Gaussian process model (a single "expert"), product of experts, and generalized product of experts. The merging approach provides a single pressure distribution that combines experimental and computational data. The batch Gaussian process method provides a relatively accurate surrogate that is computationally acceptable, and can receive wind tunnel data from port locations that are not necessarily parallel to a variable direction. 
On the other hand, the sequential Gaussian process and additive corrector methods must receive a sufficient number of data points aligned with one direction, e.g., from pressure port bands (tap rows) aligned with the freestream. The generalized product of experts best represents wind tunnel pressure as a function of angle of attack, but at higher computational cost than the single expert approach. The format of the application data from computational and experimental sources in this work precluded the merging process from including flow condition variables (e.g., angle of attack) in the independent variables, so the merging process is only conducted in the wing geometry variables of arc length and span. The merging process of Cp data allows a more "hands-off" approach to aircraft design and analysis, (i.e., not as many engineers needed to debate the Cp distribution shape) and generates Cp predictions at any location on the wing. However, the cost with these benefits are engineer time (learning how to build surrogates), computational time in constructing the surrogates, and surrogate accuracy (surrogates introduce error into data predictions). This dissertation effort used the Trap Wing / First AIAA CFD High-Lift Prediction Workshop as a relevant transonic wing with a multi-element high-lift system, and this work identified that the batch GP model for the WT data and the B-spline surrogate for the CFD might best be combined using expert belief weights to describe Cp as a function of location on the wing element surface. (Abstract shortened by ProQuest.).
NASA Astrophysics Data System (ADS)
Naganuma, Takeshi; Wilmotte, Annick
2009-11-01
An integrated program, “Microbiological and ecological responses to global environmental changes in polar regions” (MERGE), was proposed in the International Polar Year (IPY) 2007-2008 and endorsed by the IPY committee as a coordinating proposal. MERGE hosts original proposals to the IPY and facilitates their funding. MERGE selected three key questions to produce scientific achievements. Prokaryotic and eukaryotic organisms in terrestrial, lacustrine, and supraglacial habitats were targeted according to diversity and biogeography; food webs and ecosystem evolution; and linkages between biological, chemical, and physical processes in the supraglacial biome. MERGE hosted 13 original and seven additional proposals, with two full proposals. It respected the priorities and achievements of the individual proposals and aimed to unify their significant results. Ideas and projects followed a bottom-up rather than a top-down approach. We intend to inform the MERGE community of the initial results and encourage ongoing collaboration. Scientists from non-polar regions have also participated and are encouraged to remain involved in MERGE. MERGE is formed by scientists from Argentina, Australia, Austria, Belgium, Brazil, Bulgaria, Canada, Egypt, Finland, France, Germany, Italy, Japan, Korea, Malaysia, New Zealand, Philippines, Poland, Russia, Spain, UK, Uruguay, USA, and Vietnam, and associates from Chile, Denmark, Netherlands, and Norway.
Merging magnetic droplets by a magnetic field pulse
NASA Astrophysics Data System (ADS)
Wang, Chengjie; Xiao, Dun; Liu, Yaowen
2018-05-01
Reliable manipulation of magnetic droplets is of immense importance for their applications in spin torque oscillators. Using micromagnetic simulations, we find that the antiphase precession state, which originates in the dynamic dipolar interaction effect, is a favorable stable state for two magnetic droplets nucleated at two identical nano-contacts. A magnetic field pulse can be used to destroy their stability and merge them into a big droplet. The merging process strongly depends on the pulse width as well as the pulse strength.
Automatic Perceptual Color Map Generation for Realistic Volume Visualization
Silverstein, Jonathan C.; Parsad, Nigel M.; Tsirline, Victor
2008-01-01
Advances in computed tomography imaging technology and inexpensive high performance computer graphics hardware are making high-resolution, full color (24-bit) volume visualizations commonplace. However, many of the color maps used in volume rendering provide questionable value in knowledge representation and are non-perceptual thus biasing data analysis or even obscuring information. These drawbacks, coupled with our need for realistic anatomical volume rendering for teaching and surgical planning, has motivated us to explore the auto-generation of color maps that combine natural colorization with the perceptual discriminating capacity of grayscale. As evidenced by the examples shown that have been created by the algorithm described, the merging of perceptually accurate and realistically colorized virtual anatomy appears to insightfully interpret and impartially enhance volume rendered patient data. PMID:18430609
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, D.G.; West, J.T.
FRAC-IN-THE-BOX is a computer code developed to calculate the fractions of rectangular parallelepiped mesh cell volumes that are intersected by combinatorial geometry type zones. The geometry description used in the code is a subset of the combinatorial geometry used in SABRINA. The input file may be read into SABRINA and three dimensional plots made of the input geometry. The volume fractions for those portions of the geometry that are too complicated to describe with the geometry routines provided in FRAC-IN-THE-BOX may be calculated in SABRINA and merged with the volume fractions computed for the remainder of the geometry. 21 figs.,more » 1 tab.« less
Optimal PMU placement using topology transformation method in power systems.
Rahman, Nadia H A; Zobaa, Ahmed F
2016-09-01
Optimal phasor measurement units (PMUs) placement involves the process of minimizing the number of PMUs needed while ensuring the entire power system completely observable. A power system is identified observable when the voltages of all buses in the power system are known. This paper proposes selection rules for topology transformation method that involves a merging process of zero-injection bus with one of its neighbors. The result from the merging process is influenced by the selection of bus selected to merge with the zero-injection bus. The proposed method will determine the best candidate bus to merge with zero-injection bus according to the three rules created in order to determine the minimum number of PMUs required for full observability of the power system. In addition, this paper also considered the case of power flow measurements. The problem is formulated as integer linear programming (ILP). The simulation for the proposed method is tested by using MATLAB for different IEEE bus systems. The explanation of the proposed method is demonstrated by using IEEE 14-bus system. The results obtained in this paper proved the effectiveness of the proposed method since the number of PMUs obtained is comparable with other available techniques.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-11
... Collection With Additional Merge of Additional Collection: Regulations Governing Inspection and Certification of Fresh and Processed Fruits, Vegetables and other Products AGENCY: Agricultural Marketing Service... Additional Merge of Additional Collection: Regulations Governing Inspection and Certification of Fresh and...
Army Communicator. Volume 28, Number 3, Fall 2003
2003-01-01
receive training from the Cisco Academy. SFC Kathleen O’Meara, an adminis- trative specialist, said her MOS will soon be merged with others. SFC O’Meara...successfully completed.” SGT William Leyden , US Army Japan G6 section, talked about working on teams and with Mr. Tsuhako. “It’s nice. (The system
ERIC Educational Resources Information Center
Arkansas Department of Higher Education, 2010
2010-01-01
At its April 25, 2008 meeting, the Arkansas Higher Education Coordinating Board approved the funding models for the two-year colleges, universities, and the technical centers (former technical institutes merged with universities). These models had been developed in conjunction with presidents and chancellors after meetings and revisions. The…
COUNTERMEASURE: Army Ground Risk-Management Information. Volume 26, Number 8
2005-08-01
also anger, and avoidance should talk to their of expressing painfull Soldiers personally about feelings might lead critical incidents. They Soldiers...accident occurred during The driver was merging the mid-afternoon. o with traffic when he te steered the vehicle into Cog the road’s soft shoulder Class C
NASA Astrophysics Data System (ADS)
Tritscher, Torsten; Koched, Amine; Han, Hee-Siew; Filimundi, Eric; Johnson, Tim; Elzey, Sherrie; Avenido, Aaron; Kykal, Carsten; Bischof, Oliver F.
2015-05-01
Electrical mobility classification (EC) followed by Condensation Particle Counter (CPC) detection is the technique combined in Scanning Mobility Particle Sizers(SMPS) to retrieve nanoparticle size distributions in the range from 2.5 nm to 1 μm. The detectable size range of SMPS systems can be extended by the addition of an Optical Particle Sizer(OPS) that covers larger sizes from 300 nm to 10 μm. This optical sizing method reports an optical equivalent diameter, which is often different from the electrical mobility diameter measured by the standard SMPS technique. Multi-Instrument Manager (MIMTM) software developed by TSI incorporates algorithms that facilitate merging SMPS data sets with data based on optical equivalent diameter to compile single, wide-range size distributions. Here we present MIM 2.0, the next-generation of the data merging tool that offers many advanced features for data merging and post-processing. MIM 2.0 allows direct data acquisition with OPS and NanoScan SMPS instruments to retrieve real-time particle size distributions from 10 nm to 10 μm, which we show in a case study at a fireplace. The merged data can be adjusted using one of the merging options, which automatically determines an overall aerosol effective refractive index. As a result an indirect and average characterization of aerosol optical and shape properties is possible. The merging tool allows several pre-settings, data averaging and adjustments, as well as the export of data sets and fitted graphs. MIM 2.0 also features several post-processing options for SMPS data and differences can be visualized in a multi-peak sample over a narrow size range.
Mainstreaming: Merging Regular and Special Education.
ERIC Educational Resources Information Center
Hasazi, Susan E.; And Others
The booklet on mainstreaming looks at the merging of special and regular education as a process rather than as an end. Chapters address the following topics (sample subtopics in parentheses): what is mainstreaming; pros and cons of mainstreaming; forces influencing change in special education (educators, parents and advocacy groups, the courts,…
Merging Quality Processes & Tools with DACUM.
ERIC Educational Resources Information Center
McLennan, Krystyna S.
This paper explains how merging DACUM (Developing a Curriculum) analysis with quality initiatives can reduce waste, increase job efficiency, assist in development of standard operating procedures, and involve employees in positive job improvement methods. In the first half of the paper, the following principles of total quality management (TQM)…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Yunchun; Yang, Jiayan; Liu, Yu
In this paper, we report the interaction and subsequent merging of two sinistral filaments (F1 and F2) occurring at the boundary of AR 9720 on 2001 December 6. The two filaments were close and nearly perpendicular to each other. The interaction occurred after F1 was erupted and the eruption was impeded by a more extended filament channel (FC) standing in the way, in which F2 was embedded. The erupted material ran into FC along its axis, causing F1 and F2 to merge into a single structure that subsequently underwent a large-amplitude to-and-fro motion. A significant plasma heating process was observedmore » in the merging process, making the mixed material largely disappear from the Hα passband, but appear in Extreme Ultraviolet Telescope 195 Å images for a while. These observations can serve as strong evidence of merging reconnection between the two colliding magnetic structures. A new sinistral filament was formed along FC after the cooling of the merged and heated material. No coronal mass ejection was observed to be associated with the event; though, the eruption was accompanied by a two-ribbon flare with a separation motion, indicating that the eruption had failed. This event shows that, in addition to overlying magnetic fields, such an interaction is an effective restraint to make a filament eruption fail in this way.« less
KAMO: towards automated data processing for microcrystals.
Yamashita, Keitaro; Hirata, Kunio; Yamamoto, Masaki
2018-05-01
In protein microcrystallography, radiation damage often hampers complete and high-resolution data collection from a single crystal, even under cryogenic conditions. One promising solution is to collect small wedges of data (5-10°) separately from multiple crystals. The data from these crystals can then be merged into a complete reflection-intensity set. However, data processing of multiple small-wedge data sets is challenging. Here, a new open-source data-processing pipeline, KAMO, which utilizes existing programs, including the XDS and CCP4 packages, has been developed to automate whole data-processing tasks in the case of multiple small-wedge data sets. Firstly, KAMO processes individual data sets and collates those indexed with equivalent unit-cell parameters. The space group is then chosen and any indexing ambiguity is resolved. Finally, clustering is performed, followed by merging with outlier rejections, and a report is subsequently created. Using synthetic and several real-world data sets collected from hundreds of crystals, it was demonstrated that merged structure-factor amplitudes can be obtained in a largely automated manner using KAMO, which greatly facilitated the structure analyses of challenging targets that only produced microcrystals. open access.
Development of a definition, classification system, and model for cultural geology
NASA Astrophysics Data System (ADS)
Mitchell, Lloyd W., III
The concept for this study is based upon a personal interest by the author, an American Indian, in promoting cultural perspectives in undergraduate college teaching and learning environments. Most academicians recognize that merged fields can enhance undergraduate curricula. However, conflict may occur when instructors attempt to merge social science fields such as history or philosophy with geoscience fields such as mining and geomorphology. For example, ideologies of Earth structures derived from scientific methodologies may conflict with historical and spiritual understandings of Earth structures held by American Indians. Specifically, this study addresses the problem of how to combine cultural studies with the geosciences into a new merged academic discipline called cultural geology. This study further attempts to develop the merged field of cultural geology using an approach consisting of three research foci: a definition, a classification system, and a model. Literature reviews were conducted for all three foci. Additionally, to better understand merged fields, a literature review was conducted specifically for academic fields that merged social and physical sciences. Methodologies concentrated on the three research foci: definition, classification system, and model. The definition was derived via a two-step process. The first step, developing keyword hierarchical ranking structures, was followed by creating and analyzing semantic word meaning lists. The classification system was developed by reviewing 102 classification systems and incorporating selected components into a system framework. The cultural geology model was created also utilizing a two-step process. A literature review of scientific models was conducted. Then, the definition and classification system were incorporated into a model felt to reflect the realm of cultural geology. A course syllabus was then developed that incorporated the resulting definition, classification system, and model. This study concludes that cultural geology can be introduced as a merged discipline by using a three-foci framework consisting of a definition, classification system, and model. Additionally, this study reveals that cultural beliefs, attitudes, and behaviors, can be incorporated into a geology course during the curriculum development process, using an approach known as 'learner-centered'. This study further concludes that cultural beliefs, derived from class members, are an important source of curriculum materials.
Tool for Merging Proposals Into DSN Schedules
NASA Technical Reports Server (NTRS)
Khanampornpan, Teerapat; Kwok, John; Call, Jared
2008-01-01
A Practical Extraction and Reporting Language (Perl) script called merge7da has been developed to facilitate determination, by a project scheduler in NASA's Deep Space Network, of whether a proposal for use of the DSN could create a conflict with the current DSN schedule. Prior to the development of merge7da, there was no way to quickly identify potential schedule conflicts: it was necessary to submit a proposal and wait a day or two for a response from a DSN scheduling facility. By using merge7da to detect and eliminate potential schedule conflicts before submitting a proposal, a project scheduler saves time and gains assurance that the proposal will probably be accepted. merge7da accepts two input files, one of which contains the current DSN schedule and is in a DSN-standard format called '7da'. The other input file contains the proposal and is in another DSN-standard format called 'C1/C2'. merge7da processes the two input files to produce a merged 7da-format output file that represents the DSN schedule as it would be if the proposal were to be adopted. This 7da output file can be loaded into various DSN scheduling software tools now in use.
Content Analysis of the "Professional School Counseling" Journal: The First Ten Years
ERIC Educational Resources Information Center
Falco, Lia D.; Bauman, Sheri; Sumnicht, Zachary; Engelstad, Alicia
2011-01-01
The authors conducted a content analysis of the articles published in the first 10 volumes of the "Professional School Counseling" (PSC) journal, beginning in October 1997 when "The School Counselor" merged with "Elementary School Counseling and Guidance". The analysis coded a total of 571 articles into 20 content categories. Findings address the…
DETECTION OF SHOCK MERGING IN THE CHROMOSPHERE OF A SOLAR PORE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chae, Jongchul; Song, Donguk; Seo, Minju
2015-06-01
It was theoretically demonstrated that a shock propagating in the solar atmosphere can overtake another and merge with it. We provide clear observational evidence that shock merging does occur quite often in the chromosphere of sunspots. Using Hα imaging spectral data taken by the Fast Imaging Solar Spectrograph of the 1.6 m New Solar Telescope at the Big Bear Soar Observatory, we construct time–distance maps of line-of-sight velocities along two appropriately chosen cuts in a pore. The maps show a number of alternating redshift and blueshift ridges, and we identify each interface between a preceding redshift ridge and the followingmore » blueshift ridge as a shock ridge. The important finding of ours is that two successive shock ridges often merge with each other. This finding can be theoretically explained by the merging of magneto-acoustic shock waves propagating with lower speeds of about 10 km s{sup −1} and those propagating at higher speeds of about 16–22 km s{sup −1}. The shock merging is an important nonlinear dynamical process of the solar chromosphere that can bridge the gap between higher-frequency chromospheric oscillations and lower-frequency dynamic phenomena such as fibrils.« less
Food processors requirements met by radiation processing
NASA Astrophysics Data System (ADS)
Durante, Raymond W.
2002-03-01
Processing food using irradiation provides significant advantages to food producers by destroying harmful pathogens and extending shelf life without any detectable physical or chemical changes. It is expected that through increased public education, food irradiation will emerge as a viable commercial industry. Food production in most countries involves state of the art manufacturing, packaging, labeling, and shipping techniques that provides maximum efficiency and profit. In the United States, food sales are extremely competitive and profit margins small. Most food producers have heavily invested in equipment and are hesitant to modify their equipment. Meat and poultry producers in particular utilize sophisticated production machinery that processes enormous volumes of product on a continuous basis. It is incumbent on the food irradiation equipment suppliers to develop equipment that can easily merge with existing processes without requiring major changes to either the final food product or the process utilized to produce that product. Before a food producer can include irradiation as part of their food production process, they must be certain the available equipment meets their needs. This paper will examine several major requirements of food processors that will most likely have to be provided by the supplier of the irradiation equipment.
Sun, Xiaobo; Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng; Qin, Zhaohui S
2018-06-01
Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)-based high-performance computing (HPC) implementation, and the popular VCFTools. Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems.
Gao, Jingjing; Jin, Peng; Eng, Celeste; Burchard, Esteban G; Beaty, Terri H; Ruczinski, Ingo; Mathias, Rasika A; Barnes, Kathleen; Wang, Fusheng
2018-01-01
Abstract Background Sorted merging of genomic data is a common data operation necessary in many sequencing-based studies. It involves sorting and merging genomic data from different subjects by their genomic locations. In particular, merging a large number of variant call format (VCF) files is frequently required in large-scale whole-genome sequencing or whole-exome sequencing projects. Traditional single-machine based methods become increasingly inefficient when processing large numbers of files due to the excessive computation time and Input/Output bottleneck. Distributed systems and more recent cloud-based systems offer an attractive solution. However, carefully designed and optimized workflow patterns and execution plans (schemas) are required to take full advantage of the increased computing power while overcoming bottlenecks to achieve high performance. Findings In this study, we custom-design optimized schemas for three Apache big data platforms, Hadoop (MapReduce), HBase, and Spark, to perform sorted merging of a large number of VCF files. These schemas all adopt the divide-and-conquer strategy to split the merging job into sequential phases/stages consisting of subtasks that are conquered in an ordered, parallel, and bottleneck-free way. In two illustrating examples, we test the performance of our schemas on merging multiple VCF files into either a single TPED or a single VCF file, which are benchmarked with the traditional single/parallel multiway-merge methods, message passing interface (MPI)–based high-performance computing (HPC) implementation, and the popular VCFTools. Conclusions Our experiments suggest all three schemas either deliver a significant improvement in efficiency or render much better strong and weak scalabilities over traditional methods. Our findings provide generalized scalable schemas for performing sorted merging on genetics and genomics data using these Apache distributed systems. PMID:29762754
Probabilistic multi-catalogue positional cross-match
NASA Astrophysics Data System (ADS)
Pineau, F.-X.; Derriere, S.; Motch, C.; Carrera, F. J.; Genova, F.; Michel, L.; Mingo, B.; Mints, A.; Nebot Gómez-Morán, A.; Rosen, S. R.; Ruiz Camuñas, A.
2017-01-01
Context. Catalogue cross-correlation is essential to building large sets of multi-wavelength data, whether it be to study the properties of populations of astrophysical objects or to build reference catalogues (or timeseries) from survey observations. Nevertheless, resorting to automated processes with limited sets of information available on large numbers of sources detected at different epochs with various filters and instruments inevitably leads to spurious associations. We need both statistical criteria to select detections to be merged as unique sources, and statistical indicators helping in achieving compromises between completeness and reliability of selected associations. Aims: We lay the foundations of a statistical framework for multi-catalogue cross-correlation and cross-identification based on explicit simplified catalogue models. A proper identification process should rely on both astrometric and photometric data. Under some conditions, the astrometric part and the photometric part can be processed separately and merged a posteriori to provide a single global probability of identification. The present paper addresses almost exclusively the astrometrical part and specifies the proper probabilities to be merged with photometric likelihoods. Methods: To select matching candidates in n catalogues, we used the Chi (or, indifferently, the Chi-square) test with 2(n-1) degrees of freedom. We thus call this cross-match a χ-match. In order to use Bayes' formula, we considered exhaustive sets of hypotheses based on combinatorial analysis. The volume of the χ-test domain of acceptance - a 2(n-1)-dimensional acceptance ellipsoid - is used to estimate the expected numbers of spurious associations. We derived priors for those numbers using a frequentist approach relying on simple geometrical considerations. Likelihoods are based on standard Rayleigh, χ and Poisson distributions that we normalized over the χ-test acceptance domain. We validated our theoretical results by generating and cross-matching synthetic catalogues. Results: The results we obtain do not depend on the order used to cross-correlate the catalogues. We applied the formalism described in the present paper to build the multi-wavelength catalogues used for the science cases of the Astronomical Resource Cross-matching for High Energy Studies (ARCHES) project. Our cross-matching engine is publicly available through a multi-purpose web interface. In a longer term, we plan to integrate this tool into the CDS XMatch Service.
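The astrometric test itself is compact. Below is a rough Python sketch, under simplifying assumptions (n detections with circular Gaussian positional errors, a flat-sky approximation, and a hypothetical completeness threshold); it illustrates a χ² test with 2(n-1) degrees of freedom, not the ARCHES cross-matching engine.

```python
import numpy as np
from scipy.stats import chi2

def chi_match(x, y, sigma, completeness=0.997):
    """Accept n detections (positions x, y with circular 1-sigma errors
    'sigma', all in the same angular units) as one source if the weighted
    scatter about the weighted mean passes a chi^2 test with 2(n-1) dof."""
    x, y, sigma = map(np.asarray, (x, y, sigma))
    w = 1.0 / sigma**2
    xb, yb = np.average(x, weights=w), np.average(y, weights=w)
    stat = np.sum(w * ((x - xb)**2 + (y - yb)**2))  # ~ chi^2_{2(n-1)} under H0
    return stat, stat <= chi2.ppf(completeness, 2 * (len(x) - 1))

stat, accepted = chi_match([0.00, 0.10, -0.05], [0.00, -0.08, 0.12],
                           [0.10, 0.10, 0.15])
print(stat, accepted)
```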
Benett, William J.; Krulevitch, Peter A.
2001-01-01
A miniature connector for introducing microliter quantities of solutions into microfabricated fluidic devices. The fluidic connector, for example, joins standard high pressure liquid chromatography (HPLC) tubing to 1 mm diameter holes in silicon or glass, enabling ml-sized volumes of sample solutions to be merged with µl-sized devices. The connector has many features, including ease of connect and disconnect; a small footprint which enables numerous connectors to be located in a small area; low dead volume; helium leak-tight; and tubing does not twist during connection. Thus the connector enables easy and effective change of microfluidic devices and introduction of different solutions in the devices.
Observations and Simulations of Formation of Broad Plasma Depletions Through Merging Process
NASA Technical Reports Server (NTRS)
Huang, Chao-Song; Retterer, J. M.; Beaujardiere, O. De La; Roddy, P. A.; Hunton, D.E.; Ballenthin, J. O.; Pfaff, Robert F.
2012-01-01
Broad plasma depletions in the equatorial ionosphere near dawn are regions in which the plasma density is reduced by 1-3 orders of magnitude over thousands of kilometers in longitude. This phenomenon is observed repeatedly by the Communication/Navigation Outage Forecasting System (C/NOFS) satellite during deep solar minimum. The plasma flow inside the depletion region can be strongly upward. The proposed causal mechanism is that broad depletions result from the merging of multiple equatorial plasma bubbles. The purpose of this study is to demonstrate the feasibility of the merging mechanism with new observations and simulations. We present C/NOFS observations for two cases. A series of plasma bubbles is first detected by C/NOFS over a longitudinal range of 3300-3800 km around midnight. Each of the individual bubbles has a typical width of approx 100 km in longitude, and the upward ion drift velocity inside the bubbles is 200-400 m/s. The plasma bubbles rotate with the Earth to the dawn sector and become broad plasma depletions. The observations clearly show the evolution from multiple plasma bubbles to broad depletions. Large upward plasma flow occurs inside the depletion region over 3800 km in longitude and exists for approx 5 h. We also present the numerical simulations of bubble merging with the physics-based low-latitude ionospheric model. It is found that two separate plasma bubbles join together and form a single, wider bubble. The simulations show that the merging process of plasma bubbles can indeed occur in incompressible ionospheric plasma. The simulation results support the merging mechanism for the formation of broad plasma depletions.
NASA Astrophysics Data System (ADS)
Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe
2018-06-01
In this study, we present a method for improving the quality of automatic single fallen tree stem segmentation in ALS data by applying a specialized constrained conditional random field (CRF). The entire processing pipeline is composed of two steps. First, short stem segments of equal length are detected and a subset of them is selected for further processing, while in the second step the chosen segments are merged to form entire trees. The first step is accomplished using the specialized CRF defined on the space of segment labelings, capable of finding segment candidates which are easier to merge subsequently. To achieve this, the CRF considers not only the features of every candidate individually, but incorporates pairwise spatial interactions between adjacent segments into the model. In particular, pairwise interactions include a collinearity/angular deviation probability which is learned from training data as well as the ratio of spatial overlap, whereas unary potentials encode a learned probabilistic model of the laser point distribution around each segment. Each of these components enters the CRF energy with its own balance factor. To process previously unseen data, we first calculate the subset of segments for merging on a grid of balance factors by minimizing the CRF energy. Then, we perform the merging and rank the balance configurations according to the quality of their resulting merged trees, obtained from a learned tree appearance model. The final result is derived from the top-ranked configuration. We tested our approach on 5 plots from the Bavarian Forest National Park using reference data acquired in a field inventory. Compared to our previous segment selection method without pairwise interactions, an increase in detection correctness and completeness of up to 7 and 9 percentage points, respectively, was observed.
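To make the energy-minimization idea concrete, here is a deliberately tiny Python toy: binary keep/drop labels for segments, unary costs per segment, and pairwise costs charged when two geometrically incompatible neighbours are both kept. All names and cost values are hypothetical, the optimization is brute force, and it stands in for, rather than reproduces, the authors' learned CRF potentials.

```python
import itertools

def crf_energy(labels, unary, pairs, pair_cost, w_unary=1.0, w_pair=1.0):
    """Energy of a labeling: unary[i][l] is the cost of giving segment i
    label l (0 = drop, 1 = keep); pair_cost[k] is paid when both endpoints
    of pairs[k] are kept despite poor geometric compatibility."""
    e = w_unary * sum(unary[i][l] for i, l in enumerate(labels))
    e += w_pair * sum(c for (i, j), c in zip(pairs, pair_cost)
                      if labels[i] == 1 and labels[j] == 1)
    return e

def min_energy_labeling(unary, pairs, pair_cost, **weights):
    """Exhaustive minimization -- fine for a handful of segments."""
    n = len(unary)
    return min(itertools.product((0, 1), repeat=n),
               key=lambda lab: crf_energy(lab, unary, pairs, pair_cost,
                                          **weights))

unary = [[1.0, 0.2], [1.0, 0.3], [0.1, 0.9]]      # per-segment costs
pairs, pair_cost = [(0, 1), (1, 2)], [0.1, 2.0]   # adjacency penalties
print(min_energy_labeling(unary, pairs, pair_cost))  # -> (1, 1, 0)
```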
NASA Astrophysics Data System (ADS)
Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke
2017-08-01
In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consisted of a specially designed sampling cell, comprising a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. It allowed a large, alterable, and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during the LVCC sampling process were efficiently merged into one step, using bromine-thiourea and OPA-NH4+ strategies for ethylene and SO2 respectively, which made the technique readily compatible with subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to the rapid analysis of trace ethylene and SO2 from fruits; both could be accurately quantified in real fruit samples. The concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were shown to be < 4.3% and 2.1% respectively, and good recoveries from fruit samples were achieved, in the ranges 95.0-101% (ethylene) and 97.0-104% (sulfur dioxide). The portable LVCC sampling technique is expected to pave the way for rapid on-site SERS analysis of accurate concentrations of trace gas targets in real samples.
Zhang, Zhuomin; Zhan, Yisen; Huang, Yichun; Li, Gongke
2017-08-05
In this work, a portable large-volume constant-concentration (LVCC) sampling technique coupled with surface-enhanced Raman spectroscopy (SERS) was developed for rapid on-site gas analysis based on suitable derivatization methods. The LVCC sampling technique mainly consisted of a specially designed sampling cell, comprising a rigid sample container and a flexible sampling bag, and an absorption-derivatization module with a portable pump and a gas flowmeter. It allowed a large, alterable, and well-controlled sampling volume, which kept the concentration of the gas target in the headspace phase constant during the entire sampling process and made the sampling result more representative. Moreover, absorption and derivatization of the gas target during the LVCC sampling process were efficiently merged into one step, using bromine-thiourea and OPA-NH4+ strategies for ethylene and SO2 respectively, which made the technique readily compatible with subsequent SERS analysis. Finally, a new LVCC sampling-SERS method was developed and successfully applied to the rapid analysis of trace ethylene and SO2 from fruits; both could be accurately quantified in real fruit samples. The concentration fluctuations of ethylene and SO2 during the entire LVCC sampling process were shown to be < 4.3% and 2.1% respectively, and good recoveries from fruit samples were achieved, in the ranges 95.0-101% (ethylene) and 97.0-104% (sulfur dioxide). The portable LVCC sampling technique is expected to pave the way for rapid on-site SERS analysis of accurate concentrations of trace gas targets in real samples.
Cortical Merging in S1 as a Substrate for Tactile Input Grouping
Zennou-Azogui, Yoh’I; Xerri, Christian
2018-01-01
Abstract Perception is a reconstruction process guided by rules based on knowledge about the world. Little is known about the neural implementation of the rules of object formation in the tactile sensory system. When two close tactile stimuli are delivered simultaneously on the skin, subjects feel a unique sensation, spatially centered between the two stimuli. Voltage-sensitive dye imaging (VSDi) and electrophysiological recordings [local field potentials (LFPs) and single units] were used to extract the cortical representation of two-point tactile stimuli in the primary somatosensory cortex of anesthetized Long-Evans rats. Although layer 4 LFP responses to brief costimulation of the distal region of two digits resembled the sum of individual responses, approximately one-third of single units demonstrated merging-compatible changes. In contrast to previous intrinsic optical imaging studies, VSD activations reflecting layer 2/3 activity were centered between the representations of the digits stimulated alone. This merging was found for every tested distance between the stimulated digits. We discuss this laminar difference as evidence that merging occurs through a buildup stream and depends on the superposition of inputs, which increases with successive stages of sensory processing. These findings show that layers 2/3 are involved in the grouping of sensory inputs. This process that could be inscribed in the cortical computing routine and network organization is likely to promote object formation and implement perception rules. PMID:29354679
A Class of Administrative Models for Maintaining Anonymity During Merge of Data Files. A Draft.
ERIC Educational Resources Information Center
Boruch, Robert F.
This report examines a series of general models that represent the process of merging records from separate files when it becomes essential to inhibit identifiability of records in at least one of the files. Models are illustrated symbolically by flow diagrams, and examples of each variation are taken from the social sciences. These variations…
Binary partition tree analysis based on region evolution and its application to tree simplification.
Lu, Huihai; Woods, John C; Ghanbari, Mohammed
2007-04-01
Pyramid image representations via tree structures are recognized methods for region-based image analysis. Binary partition trees can be applied which document the merging process with small details found at the bottom levels and larger ones close to the root. Hindsight of the merging process is stored within the tree structure and provides the change histories of an image property from the leaf to the root node. In this work, the change histories are modelled by evolvement functions and their second order statistics are analyzed by using a knee function. Knee values show the reluctance of each merge. We have systematically formulated these findings to provide a novel framework for binary partition tree analysis, where tree simplification is demonstrated. Based on an evolvement function, for each upward path in a tree, the tree node associated with the first reluctant merge is considered as a pruning candidate. The result is a simplified version providing a reduced solution space and still complying with the definition of a binary tree. The experiments show that image details are preserved whilst the number of nodes is dramatically reduced. An image filtering tool also results which preserves object boundaries and has applications for segmentation.
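The bookkeeping behind such change histories can be sketched in a few lines. The hypothetical Python below greedily merges adjacent 1-D regions by mean similarity and records each merge's cost; a sharp jump in that history (a knee) flags a reluctant merge, and hence a pruning candidate. It illustrates the idea only and is not the authors' algorithm.

```python
import numpy as np

def merge_history(values):
    """Greedily merge the most similar pair of adjacent 1-D regions until
    one region remains, recording each merge's cost (its change history)."""
    regions = [[v] for v in values]
    history = []
    while len(regions) > 1:
        means = [np.mean(r) for r in regions]
        costs = [abs(means[i + 1] - means[i]) for i in range(len(means) - 1)]
        i = int(np.argmin(costs))
        history.append(costs[i])
        regions[i:i + 2] = [regions[i] + regions[i + 1]]  # merge neighbours
    return history

hist = merge_history([1.0, 1.1, 1.05, 5.0, 5.2, 9.9])
# The first sharp cost jump marks the first 'reluctant' merge.
print(hist, int(np.argmax(np.diff(hist))) + 1)
```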
Field-aligned currents and ion convection at high altitudes
NASA Technical Reports Server (NTRS)
Burch, J. L.; Reiff, P. H.
1985-01-01
Hot plasma observations from Dynamics Explorer 1 have been used to investigate solar-wind ion injection, Birkeland currents, and plasma convection at altitudes above 2 earth-radii in the morning sector. The results of the study, along with the antiparallel merging hypothesis, have been used to construct a By-dependent global convection model. A significant element of the model is the coexistence of three types of convection cells (merging cells, viscous cells, and lobe cells). As the IMF direction varies, the model accounts for the changing roles of viscous and merging processes and makes testable predictions about several magnetospheric phenomena, including the newly-observed theta aurora in the polar cap.
NASA Astrophysics Data System (ADS)
Prat, O. P.; Nelson, B. R.; Stevens, S. E.; Nickl, E.; Seo, D. J.; Kim, B.; Zhang, J.; Qi, Y.
2015-12-01
The processing of radar-only precipitation via the reanalysis of the National Mosaic and Multi-Sensor Quantitative Precipitation Estimation system (NMQ/Q2), based on the WSR-88D Next-Generation Radar (NEXRAD) network over the Continental United States (CONUS), is completed for the period 2002 to 2011. While this constitutes a unique opportunity to study precipitation processes at higher resolution than conventionally possible (1-km, 5-min), the long-term radar-only product needs to be merged with in-situ information in order to be suitable for hydrological, meteorological, and climatological applications. The radar-gauge merging is performed using rain gauge information at daily (Global Historical Climatology Network-Daily: GHCN-D), hourly (Hydrometeorological Automated Data System: HADS), and 5-min (Automated Surface Observing Systems: ASOS; Climate Reference Network: CRN) resolution. The challenges related to incorporating networks of differing resolution and quality to generate long-term, large-scale gridded estimates of precipitation are enormous. In that perspective, we are implementing techniques for merging the rain gauge datasets and the radar-only estimates, such as Inverse Distance Weighting (IDW), Simple Kriging (SK), Ordinary Kriging (OK), and Conditional Bias-Penalized Kriging (CBPK). An evaluation of the different radar-gauge merging techniques is presented, and we provide an estimate of uncertainty for the gridded estimates. In addition, comparisons with a suite of lower-resolution QPEs derived from ground-based radar measurements (Stage IV) are provided in order to give a detailed picture of the improvements and remaining challenges.
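Of the listed techniques, inverse distance weighting is the simplest to sketch. The hypothetical Python below interpolates gauge-minus-radar differences onto the radar grid and adds them back; it is one generic IDW-style radar-gauge merge, not the reanalysis code, and all array names are assumptions.

```python
import numpy as np

def idw_adjust(radar, xy_grid, gauge_xy, gauge_val, radar_at_gauge, p=2.0):
    """Additively correct a radar field: spread gauge-minus-radar errors
    over the grid with inverse-distance-weighted interpolation."""
    diff = gauge_val - radar_at_gauge                      # errors at gauges
    d = np.linalg.norm(xy_grid[:, None, :] - gauge_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** p                     # avoid divide-by-0
    return radar + (w * diff).sum(axis=1) / w.sum(axis=1)

grid = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])     # grid cell centres
gauges = np.array([[0.0, 0.0], [1.0, 1.0]])
print(idw_adjust(np.array([2.0, 3.0, 4.0]), grid, gauges,
                 gauge_val=np.array([2.5, 3.5]),
                 radar_at_gauge=np.array([2.0, 3.0])))
```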
Problems in merging Earth sensing satellite data sets
NASA Technical Reports Server (NTRS)
Smith, Paul H.; Goldberg, Michael J.
1987-01-01
Satellite remote sensing systems provide a tremendous source of data flow to the Earth science community. These systems provide scientists with data of types and on a scale previously unattainable. Looking forward to the capabilities of Space Station and the Earth Observing System (EOS), the full realization of the potential of satellite remote sensing will be handicapped by inadequate information systems. There is a growing emphasis in Earth science research to ask questions which are multidisciplinary in nature and global in scale. Many of these research projects emphasize the interactions of the land surface, the atmosphere, and the oceans through various physical mechanisms. Conducting this research requires large and complex data sets and teams of multidisciplinary scientists, often working at remote locations. A review of the problems of merging these large volumes of data into spatially referenced and manageable data sets is presented.
[Atrio-ventricular pressure difference associated with mitral valve motion].
Wang, L M; Mori, H; Minezaki, K; Shinozaki, Y; Okino, H
1990-05-01
Pressure difference (PD) across the mitral valve was analyzed by a computer-aided catheter system in dogs. A positive PD (PPD) was consistently traced in the initial phase of rapid filling. When heart rate (HR) was below 100 beats/min, a negative PD (NPD) followed this PPD. In the period between the NPD and the 2nd PPD due to atrial contraction, PD was kept at zero, while LA and LV pressures were gradually elevated by pulmonary venous return. As HR exceeded 100, the 2 positive peaks of PD merged into an M-shaped or mono-peaked PD. With the higher inflow resistance produced by artificial mitral stenosis, the PPD peak decayed without an NPD. In mitral regurgitation with an acute volume overload, all of the PD amplitudes were exaggerated. Thus the quick reversal of PD suggested its effect on the blood filling process across the mitral valve.
Geometric representation methods for multi-type self-defining remote sensing data sets
NASA Technical Reports Server (NTRS)
Anuta, P. E.
1980-01-01
Efficient and convenient representation of remote sensing data is highly important for an effective utilization. The task of merging different data types is currently dealt with by treating each case as an individual problem. A description is provided of work which is carried out to standardize the multidata merging process. The basic concept of the new approach is that of the self-defining data set (SDDS). The creation of a standard is proposed. This standard would be such that data which may be of interest in a large number of earth resources remote sensing applications would be in a format which allows convenient and automatic merging. Attention is given to details regarding the multidata merging problem, a geometric description of multitype data sets, image reconstruction from track-type data, a data set generation system, and an example multitype data set.
Phase transition and flow-rate behavior of merging granular flows.
Hu, Mao-Bin; Liu, Qi-Yi; Jiang, Rui; Hou, Meiying; Wu, Qing-Song
2015-02-01
Merging of granular flows is ubiquitous in industrial, mining, and geological processes. However, its behavior remains poorly understood. This paper studies the phase transition and flow-rate behavior of two granular flows merging into one channel. When the main channel is wider than the side channel, the system shows a remarkable two-sudden-drops phenomenon in the outflow rate when gradually increasing the main inflow. When gradually decreasing the main inflow, the system shows an obvious hysteresis phenomenon. We study the flow-rate-drop phenomenon by measuring the area fraction and the mean velocity at the merging point. The phase diagram of the system is also presented to understand the occurrence of the phenomenon. We find that the dilute-to-dense transition occurs when the area fraction of particles at the joint point exceeds a critical value ϕc = 0.65 ± 0.03.
The collaborative effect of ram pressure and merging on star formation and stripping fraction
NASA Astrophysics Data System (ADS)
Bischko, J. C.; Steinhauser, D.; Schindler, S.
2015-04-01
Aims: We investigate the effect of ram pressure stripping (RPS) on several simulations of merging pairs of gas-rich spiral galaxies. We are concerned with the changes in stripping efficiency and the time evolution of the star formation rate. Our goal is to provide an estimate of the combined effect of merging and RPS compared to the influence of the individual processes. Methods: We make use of the combined N-body/hydrodynamic code GADGET-2. The code features a threshold-based statistical recipe for star formation, as well as radiative cooling and modeling of galactic winds. In our simulations, we vary mass ratios between 1:4 and 1:8 in a binary merger. We sample different geometric configurations of the merging systems (edge-on and face-on mergers, different impact parameters). Furthermore, we vary the properties of the intracluster medium (ICM) in rough steps: the speed of the merging system relative to the ICM between 500 and 1000 km s^-1, the ICM density between 10^-29 and 10^-27 g cm^-3, and the ICM direction relative to the mergers' orbital plane. Ram pressure is kept constant within a simulation time period, as is the ICM temperature of 10^7 K. Each simulation in the ICM is compared to simulations of the merger in vacuum and the non-merging galaxies with acting ram pressure. Results: Averaged over the simulation time (1 Gyr), the merging pairs show a negligible 5% enhancement in SFR when compared to single galaxies under the same environmental conditions. The SFRs peak at the time of the galaxies' first fly-through. There, our simulations show SFRs of up to 20 M⊙ yr^-1 (compared to 3 M⊙ yr^-1 for the non-merging galaxies in vacuum). In the most extreme case, this constitutes a short-term (<50 Myr) SFR increase of 50% over the non-merging galaxies experiencing ram pressure. The wake of merging galaxies in the ICM typically has a third to half the star mass seen in the non-merging galaxies and 5% to 10% less gas mass. The joint effect of RPS and merging, according to our simulations, is not significantly different from pure ram pressure effects.
The Inflammatory Sequelae of Aortic Balloon Occlusion in Hemorrhagic Shock
2014-04-13
[Figure caption residue: data plotted as mean values of (A) MAP, (B) SVR, and (C) CO after hemorrhage (… circulating volume) and 30, 60, or 90 min of REBOA.] …could be included within an extracorporeal circuit merged with a REBOA system. Equally, a perfusion-capable REBOA catheter [36] could be used to
Quantification of human body fat tissue percentage by MRI.
Müller, Hans-Peter; Raudies, Florian; Unrath, Alexander; Neumann, Heiko; Ludolph, Albert C; Kassubek, Jan
2011-01-01
The MRI-based evaluation of the quantity and regional distribution of adipose tissue is one objective measure in the investigation of obesity. The aim of this article was to report a comprehensive and automatic analytical method for the determination of the volumes of subcutaneous fat tissue (SFT) and visceral fat tissue (VFT) in either the whole human body or selected slices or regions of interest. Using an MRI protocol in an examination position that was convenient for volunteers and patients with severe diseases, 22 healthy subjects were examined. The software platform was able to merge MRI scans of several body regions acquired in separate acquisitions. Through a cascade of image processing steps, SFT and VFT volumes were calculated. Whole-body SFT and VFT distributions, as well as fat distributions of defined body slices, were analysed in detail. Complete three-dimensional datasets were analysed in a reproducible manner with as few operator-dependent interventions as possible. In order to determine the SFT volume, the ARTIS (Adapted Rendering for Tissue Intensity Segmentation) algorithm was introduced. The advantage of the ARTIS algorithm was the delineation of SFT volumes in regions in which standard region-growing techniques fail. Using the ARTIS algorithm, an automatic SFT volume detection was feasible. MRI data analysis was able to determine SFT and VFT volume percentages using new analytical strategies. With the techniques described, it was possible to detect changes in SFT and VFT percentages of the whole body and selected regions. The techniques presented in this study are likely to be of use in obesity-related investigations, as well as in the examination of longitudinal changes in weight during various medical conditions.
ERIC Educational Resources Information Center
Burns, Robert W., Jr.
Three contiguous schools in the upper midwest--a teacher's training college and a private four-year college in one state, and a land-grant university in another--were studied to see if their libraries could merge one of their major divisions--technical services--into a single administrative unit. Potential benefits from such a merger were felt to…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.
Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Lastly, development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.
Michels-Clark, Tara M.; Savici, Andrei T.; Lynch, Vickie E.; ...
2016-03-01
Evidence is mounting that potentially exploitable properties of technologically and chemically interesting crystalline materials are often attributable to local structure effects, which can be observed as modulated diffuse scattering (mDS) next to Bragg diffraction (BD). BD forms a regular sparse grid of intense discrete points in reciprocal space. Traditionally, the intensity of each Bragg peak is extracted by integration of each individual reflection first, followed by application of the required corrections. In contrast, mDS is weak and covers expansive volumes of reciprocal space close to, or between, Bragg reflections. For a representative measurement of the diffuse scattering, multiple sample orientations are generally required, where many points in reciprocal space are measured multiple times and the resulting data are combined. The common post-integration data reduction method is not optimal with regard to counting statistics. A general and inclusive data processing method is needed. In this contribution, a comprehensive data analysis approach is introduced to correct and merge the full volume of scattering data in a single step, while correctly accounting for the statistical weight of the individual measurements. Lastly, development of this new approach required the exploration of a data treatment and correction protocol that includes the entire collected reciprocal space volume, using neutron time-of-flight or wavelength-resolved data collected at TOPAZ at the Spallation Neutron Source at Oak Ridge National Laboratory.
Asthma in furniture and wood processing workers: a systematic review.
Wiggans, R E; Evans, G; Fishwick, D; Barber, C M
2016-04-01
Wood dust is a common cause of occupational asthma. There is potential for high exposure to wood dust during furniture and wood manufacturing processes. To evaluate the evidence for non-neoplastic respiratory ill health associated with work in the furniture and wood manufacturing sector. A systematic review was performed according to PRISMA guidelines. Articles were graded using SIGN (Scottish Intercollegiate Guideline Network) and MERGE (Methods for Evaluating Research Guidelines and Evidence) criteria, with data grouped by study outcome. Initial searches identified 1328 references, from which 55 articles were included in the review. Fourteen studies were graded A using MERGE or >2++ using SIGN. All but one paper describing airway symptoms reported an increased risk in higher wood dust exposed workers in comparison to lower or non-exposed groups. Five studies reporting asthma examined dose response; three found a positive effect. The relative risk for asthma in exposed workers in the single meta-analysis was 1.5 (95% CI 1.25-1.87). Two studies reported more obstructive lung function (forced expiratory volume in 1 s [FEV1]/forced vital capacity < 0.7) in exposed populations. Excess longitudinal FEV1 decline was reported in female smokers with high wood dust exposures in one study population. Where measured, work-related respiratory symptoms did not clearly relate to specific wood immunoglobulin E positivity. Work in this sector was associated with a significantly increased risk of respiratory symptoms and asthma. The evidence for wood dust exposure causing impaired lung function is less clearly established. Further study is required to better understand the prevalence, and causes, of respiratory problems within this sector.
NASA Astrophysics Data System (ADS)
Mapelli, Michela
2018-02-01
The first four LIGO detections have confirmed the existence of massive black holes (BHs), with mass 30-40 M⊙. Such BHs might originate from massive metal-poor stars (Z < 0.3 Z⊙) or from gravitational instabilities in the early Universe. The formation channels of merging BHs are still poorly constrained. Measuring the mass, spin, and redshift distributions of merging BHs will give us fundamental clues to distinguish between different models. In parallel, a better understanding of several astrophysical processes (e.g. common envelope, core-collapse SNe, and dynamical evolution of BHs) is decisive to shed light on the formation channels of merging BHs.
Challenges in global ballast water management.
Endresen, Øyvind; Lee Behrens, Hanna; Brynestad, Sigrid; Bjørn Andersen, Aage; Skjong, Rolf
2004-04-01
Ballast water management is a complex issue raising the challenge of merging international regulations and ship-specific configurations with ecological conservation. This complexity is illustrated in this paper by considering ballast water volume, discharge frequency, and ship safety and operational issues, aligned with regional characteristics, to address ecological risk for selected routes. A re-estimation of ballast water volumes gives a global annual level of 3500 Mton. The global ballast water volume discharged into the open sea from ballast water exchange operations is estimated at approximately 2800 Mton. Risk-based decision support systems, coupled to databases of port and invasive species characteristics and distributions, can allow for differentiated treatment levels while maintaining low risk levels. On certain routes, the risk is estimated to be unacceptable, and some kind of ballast water treatment or management should be applied.
Leveraging External Sensor Data for Enhanced Space Situational Awareness
2015-09-17
[Acronym-list residue: …Space Administration; Infrared Processing and Analysis Center Teacher Archive Research Program; NN: Nearest Neighbor; NOMAD: Naval Observatory Merged…] …used to improve SSA? 1.2.2 Assumptions and Limitations. This research assumes that the stars in the Naval Observatory Merged Astrometric Dataset (NOMAD…) developed and maintained by the U.S. Naval Observatory (USNO), but as the NOMAD catalog is much easier to obtain than the UCAC, NOMAD will be used as the
Spectroscopic Measurement of Ion Flow During Merging Start-up of Field-Reversed Configuration
NASA Astrophysics Data System (ADS)
Oka, Hirotaka; Inomoto, Michiaki; Tanabe, Hiroshi; Annoura, Masanobu; Ono, Yasushi; Nemoto, Koshichi
2012-10-01
The counter-helicity merging method [1] of field-reversed configuration (FRC) formation involves generation of a bidirectional toroidal flow, known as a "sling-shot." In the two-fluid regime, the reconnection process is strongly affected by the Hall effect [2]. In this study, we have investigated the behavior of the bidirectional toroidal flow generated by counter-helicity merging in the two-fluid regime. We use 2D Ion Doppler Spectroscopy to measure the toroidal ion flow during merging start-up of an FRC from Ar gas. We defined two cases: one with a radially pushed-in X line (case I) and the other with a radially pushed-out X line (case O). The flow during the plasma merging shows radial asymmetry, as expected from the magnetic measurement, but finally relaxes to a unidirectional flow in the plasma current direction in both cases. We observed a larger toroidal flow in the plasma current direction in case I after the FRC is formed, though the FRC in case O has larger magnetic flux. These results suggest that more ions are lost during merging start-up in case I. This selective ion loss might account for the stability and confinement of FRCs, probably maintained by high-energy ions. [1] Y. Ono, et al., Nucl. Fusion 39, pp. 2001-2008 (1999). [2] M. Inomoto, et al., Phys. Rev. Lett., 97, 135002, (2006)
Chang, Ni-Bin; Bai, Kaixu; Chen, Chi-Farn
2017-10-01
Monitoring water quality changes in lakes, reservoirs, estuaries, and coastal waters is critical in response to the needs for sustainable development. This study develops a remote sensing-based multiscale modeling system integrating multi-sensor satellite data merging and image reconstruction algorithms, in support of feature extraction with machine learning, to automate continuous water quality monitoring in environmentally sensitive regions. This new Earth observation platform, termed "cross-mission data merging and image reconstruction with machine learning" (CDMIM), is capable of merging multiple satellite imageries to provide daily water quality monitoring through a series of image processing, enhancement, reconstruction, and data mining/machine learning techniques. Two existing key algorithms, including Spectral Information Adaptation and Synthesis Scheme (SIASS) and SMart Information Reconstruction (SMIR), are highlighted to support feature extraction and content-based mapping. Whereas SIASS can support various data merging efforts to merge images collected from cross-mission satellite sensors, SMIR can overcome data gaps by reconstructing the information of value-missing pixels due to impacts such as cloud obstruction. Practical implementation of CDMIM was assessed by predicting the water quality over seasons in terms of the concentrations of nutrients and chlorophyll-a, as well as water clarity, in Lake Nicaragua, providing synergistic efforts to better monitor the aquatic environment and offer insightful lake watershed management strategies.
Turbulence, selective decay, and merging in the SSX plasma wind tunnel
NASA Astrophysics Data System (ADS)
Gray, Tim; Brown, Michael; Flanagan, Ken; Werth, Alexandra; Lukin, V.
2012-10-01
A helical, relaxed plasma state has been observed in a long cylindrical volume. The cylinder has dimensions L = 1 m and R = 0.08 m. The cylinder is long enough so that the predicted minimum energy state is a close approximation to the infinite cylinder solution. The plasma is injected at v ≥ 50 km/s by a coaxial magnetized plasma gun located at one end of the cylindrical volume. Typical plasma parameters are Ti = 25 eV, ne ≥ 10^15 cm^-3, and B = 0.25 T. The relaxed state is rapidly attained in 1-2 axial Alfvén times after initiation of the plasma. Magnetic data is favorably compared with an analytical model. Magnetic data exhibits broadband fluctuations of the measured axial modes during the formation period. The broadband activity rapidly decays as the energy condenses into the lowest energy mode, in agreement with the minimum energy eigenstate of ∇×B = λB. While the global structure roughly corresponds to the minimum energy eigenstate for the wind tunnel geometry, the plasma is high beta (β = 0.5) and does not have a flat λ profile. Merging of two plasmoids in this configuration results in noticeably more dynamic activity compared to a single plasmoid. These episodes of activity exhibit s
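For reference, the minimum-energy eigenstate of ∇×B = λB in an infinite cylinder is the axisymmetric Lundquist solution, Bz = B0 J0(λr), Bθ = B0 J1(λr), Br = 0. The sketch below evaluates it for the abstract's R = 0.08 m and B = 0.25 T; pinning λ to the first zero of J1 at the wall (so Bθ vanishes there) is one common normalization, assumed here rather than taken from the paper.

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros

R, B0 = 0.08, 0.25                 # wind-tunnel radius (m), field scale (T)
lam = jn_zeros(1, 1)[0] / R        # first zero of J1 at the wall: ~3.83 / R
r = np.linspace(0.0, R, 50)
Bz, Btheta = B0 * j0(lam * r), B0 * j1(lam * r)   # Lundquist profiles, Br = 0
print(f"lambda = {lam:.1f} 1/m, B_theta(R) = {Btheta[-1]:.1e} T")
```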
MEASUREMENT OF FREE AIR ATOMIC BLAST PRESSURES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haskell, N.A.; Fava, J.A.; Brubaker, R.M.
1958-02-14
Peak free-air overpressure versus time measurements in the 10-to-2 psi range were obtained as a function of distance directly over a nuclear burst at a low scaled height. This information was to be used to establish the points in space at which the reflected and direct shock waves merge into a single shock wave and to determine the overpressure as a function of distance for the merged wave, in support of drone-aircraft lethal-volume studies. It was also desired to obtain free-air peak overpressure versus distance measurements for an atomic burst at a high altitude. Data are tabulated that were obtained by deploying, from a B-29 aircraft, 10 parachute-borne instrumented canisters on each shot. The second objective was achieved by deploying 15 parachute-borne canisters from the strike aircraft on one shot. (C.H.)
Löwe, Roland; Mikkelsen, Peter Steen; Rasmussen, Michael R; Madsen, Henrik
2013-01-01
Merging of radar rainfall data with rain gauge measurements is a common approach to overcome problems in deriving rain intensities from radar measurements. We extend an existing approach for adjustment of C-band radar data using state-space models and use the resulting rainfall intensities as input for forecasting outflow from two catchments in the Copenhagen area. Stochastic grey-box models are applied to create the runoff forecasts, providing us with not only a point forecast but also a quantification of the forecast uncertainty. Evaluating the results, we can show that using the adjusted radar data improves runoff forecasts compared with using the original radar data and that rain gauge measurements as forecast input are also outperformed. Combining the data merging approach with short-term rainfall forecasting algorithms may result in further improved runoff forecasts that can be used in real time control.
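As a stand-in for the state-space adjustment (the authors' model is not reproduced here), the Python sketch below tracks a slowly varying multiplicative radar bias in log space with a scalar random-walk Kalman filter; the noise variances q and r are assumed values.

```python
import numpy as np

def kalman_bias_adjust(radar, gauge, q=0.01, r=0.1):
    """Adjust radar rain intensities by a log-space bias tracked with a
    scalar Kalman filter (random-walk state, noise variances q and r)."""
    b, p = 0.0, 1.0                       # log-bias estimate and its variance
    out = np.empty(len(radar))
    for t, (zr, zg) in enumerate(zip(radar, gauge)):
        p += q                            # predict: bias drifts as a random walk
        if zr > 0 and zg > 0:             # update only when both detect rain
            k = p / (p + r)
            b += k * (np.log(zg / zr) - b)
            p *= 1 - k
        out[t] = zr * np.exp(b)           # apply current bias estimate
    return out

print(kalman_bias_adjust(np.array([1.0, 2.0, 0.0, 4.0]),
                         np.array([1.4, 2.6, 0.0, 5.2])))
```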
NASA Astrophysics Data System (ADS)
Browning, P. K.; Cardnell, S.; Evans, M.; Arese Lucini, F.; Lukin, V. S.; McClements, K. G.; Stanier, A.
2016-01-01
Twisted magnetic flux ropes are ubiquitous in laboratory and astrophysical plasmas, and the merging of such flux ropes through magnetic reconnection is an important mechanism for restructuring magnetic fields and releasing free magnetic energy. The merging-compression scenario is one possible start-up scheme for spherical tokamaks, which has been used on the Mega Amp Spherical Tokamak (MAST). Two current-carrying plasma rings or flux ropes approach each other due to mutual attraction, form a current sheet, and subsequently merge through magnetic reconnection into a single plasma torus, with substantial plasma heating. Two-dimensional resistive and Hall-magnetohydrodynamic simulations of this process are reported, including a strong guide field. A model of the merging based on helicity-conserving relaxation to a minimum energy state is also presented, extending previous work to tight-aspect-ratio toroidal geometry. This model leads to a prediction of the final state of the merging, in good agreement with simulations and experiment, as well as the average temperature rise. A relaxation model of reconnection between two or more flux ropes in the solar corona is also described, allowing for different senses of twist, and the implications for heating of the solar corona are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, William A., E-mail: wadawson@ucdavis.edu
2013-08-01
Merging galaxy clusters have become one of the most important probes of dark matter, providing evidence for dark matter over modified gravity and even constraints on the dark matter self-interaction cross-section. To properly constrain the dark matter cross-section it is necessary to understand the dynamics of the merger, as the inferred cross-section is a function of both the velocity of the collision and the observed time since collision. While the best understanding of merging system dynamics comes from N-body simulations, these are computationally intensive and often explore only a limited volume of the merger phase space allowed by observed parameter uncertainty. Simple analytic models exist but the assumptions of these methods invalidate their results near the collision time, plus error propagation of the highly correlated merger parameters is unfeasible. To address these weaknesses I develop a Monte Carlo method to discern the properties of dissociative mergers and propagate the uncertainty of the measured cluster parameters in an accurate and Bayesian manner. I introduce this method, verify it against an existing hydrodynamic N-body simulation, and apply it to two known dissociative mergers: 1ES 0657-558 (Bullet Cluster) and DLSCL J0916.2+2951 (Musket Ball Cluster). I find that this method surpasses existing analytic models, providing accurate (10% level) dynamic parameter and uncertainty estimates throughout the merger history. This, coupled with minimal required a priori information (subcluster mass, redshift, and projected separation) and relatively fast computation (≈6 CPU hours), makes this method ideal for large samples of dissociative merging clusters.
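The flavor of such Monte Carlo uncertainty propagation can be conveyed with a far simpler stand-in: draw the uncertain inputs from assumed distributions and push each draw through a Newtonian point-mass free-fall relation, v = sqrt(2GM(1/d_coll - 1/d_max)). This toy ignores halo mass profiles and everything else that makes the paper's method accurate; every parameter value below is illustrative.

```python
import numpy as np

G = 4.301e-9  # gravitational constant in Mpc (km/s)^2 / Msun

def collision_speed(m_tot, d_max, d_coll, n=100_000, rel_err=0.2, seed=0):
    """Monte Carlo quantiles of the free-fall speed of two point masses that
    start at rest at separation d_max and collide at d_coll (Msun, Mpc);
    rel_err is an assumed log-normal fractional error on every input."""
    rng = np.random.default_rng(seed)
    m = m_tot * rng.lognormal(0.0, rel_err, n)
    d0 = d_max * rng.lognormal(0.0, rel_err, n)
    d1 = np.minimum(d_coll * rng.lognormal(0.0, rel_err, n), 0.99 * d0)
    v = np.sqrt(2.0 * G * m * (1.0 / d1 - 1.0 / d0))      # km/s
    return np.percentile(v, [16, 50, 84])

print(collision_speed(1e15, 3.0, 0.5))   # bullet-like toy numbers
```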
Development of a seamless multisource topographic/bathymetric elevation model of Tampa Bay
Gesch, D.; Wilson, R.
2001-01-01
Many applications of geospatial data in coastal environments require knowledge of the nearshore topography and bathymetry. However, because existing topographic and bathymetric data have been collected independently for different purposes, it has been difficult to use them together at the land/water interface owing to differences in format, projection, resolution, accuracy, and datums. As a first step toward solving the problems of integrating diverse coastal datasets, the U.S. Geological Survey (USGS) and the National Oceanic and Atmospheric Administration (NOAA) are collaborating on a joint demonstration project to merge their data for the Tampa Bay region of Florida. The best available topographic and bathymetric data were extracted from the USGS National Elevation Dataset and the NOAA hydrographic survey database, respectively. Before being merged, the topographic and bathymetric datasets were processed with standard geographic information system tools to place them in a common horizontal reference frame. Also, a key part of the preprocessing was transformation to a common vertical reference through the use of VDatum, a new tool created by NOAA's National Geodetic Survey for vertical datum conversions. The final merged product is a seamless topographic/bathymetric model covering the Tampa Bay region at a grid spacing of 1 arc-second. Topographic LIDAR data were processed and merged with the bathymetry to demonstrate the incorporation of recent third party data sources for several test areas. A primary application of a merged topographic/bathymetric elevation model is for user-defined shoreline delineation, in which the user decides on the tidal condition (for example, low or high water) to be superimposed on the elevation data to determine the spatial position of the water line. Such a use of merged topographic/bathymetric data could lead to the development of a shoreline zone, which could reduce redundant mapping efforts by federal, state, and local agencies by allowing them to customize their portrayals of the shoreline using a standard baseline elevation dataset.
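The merging step itself reduces to expressing both grids in one vertical datum and then choosing a source per cell. A minimal sketch, assuming co-registered grids with NaN where a dataset has no coverage and a single constant offset standing in for the VDatum conversion:

```python
import numpy as np

def merge_topo_bathy(topo, bathy, datum_offset=0.0):
    """Seamlessly combine co-registered topographic and bathymetric grids:
    shift bathymetry to the topographic vertical datum, then prefer
    topography wherever it has data."""
    return np.where(np.isnan(topo), bathy + datum_offset, topo)

topo = np.array([[np.nan, 2.0], [5.0, np.nan]])    # NaN = open water
bathy = np.array([[-3.0, -1.0], [-0.5, -8.0]])
print(merge_topo_bathy(topo, bathy, datum_offset=0.2))
```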
Hubble Views Two Galaxies Merging
2017-12-08
This image, taken with the Wide Field Planetary Camera 2 on board the NASA/ESA Hubble Space Telescope, shows the galaxy NGC 6052, located around 230 million light-years away in the constellation of Hercules. It would be reasonable to think of this as a single abnormal galaxy, and it was originally classified as such. However, it is in fact a “new” galaxy in the process of forming. Two separate galaxies have been gradually drawn together, attracted by gravity, and have collided. We now see them merging into a single structure. As the merging process continues, individual stars are thrown out of their original orbits and placed onto entirely new paths, some very distant from the region of the collision itself. Since the stars produce the light we see, the “galaxy” now appears to have a highly chaotic shape. Eventually, this new galaxy will settle down into a stable shape, which may not resemble either of the two original galaxies. Image credit: ESA/Hubble & NASA, Acknowledgement: Judy Schmidt
Three-dimensional imaging using phase retrieval with two focus planes
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh; Meir, Rinat; Zalevsky, Zeev
2016-03-01
This work presents a technique for full 3D imaging of biological samples tagged with gold nanoparticles (GNPs) using only two images, rather than the many images per volume currently needed for 3D optical sectioning microscopy. The proposed approach is based on the Gerchberg-Saxton (GS) phase retrieval algorithm. The reconstructed field is free-space propagated to all other focus planes in post-processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because the phase retrieval is applied to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. In addition, since the method requires the capture of only two images, it is suitable for 3D live cell imaging.
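A minimal two-plane Gerchberg-Saxton loop in the spirit of the description: propagate between the focus planes with an angular-spectrum operator and re-impose the measured amplitude at each plane. This is a generic sketch (square sampling grid, assumed wavelength and spacing, evanescent components dropped), not the authors' implementation.

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, dx):
    """Free-space propagate a square sampled complex field by distance dz."""
    fx = np.fft.fftfreq(field.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(np.maximum(1 / wavelength**2 - FX**2 - FY**2, 0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def gs_two_planes(amp1, amp2, dz, wavelength, dx, iters=200):
    """Recover the complex field at plane 1 from amplitudes measured at two
    planes separated by dz, alternating propagation and amplitude constraints."""
    field = amp1.astype(complex)
    for _ in range(iters):
        field = angular_spectrum(field, dz, wavelength, dx)
        field = amp2 * np.exp(1j * np.angle(field))   # enforce plane-2 data
        field = angular_spectrum(field, -dz, wavelength, dx)
        field = amp1 * np.exp(1j * np.angle(field))   # enforce plane-1 data
    return field

# Toy self-test: synthesize a field, keep only its two amplitudes, recover it.
n, dx, wl, dz = 64, 1e-6, 500e-9, 20e-6
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
truth = np.exp(-(X**2 + Y**2) / (5e-6)**2) * np.exp(1j * 2e5 * X)
rec = gs_two_planes(np.abs(truth),
                    np.abs(angular_spectrum(truth, dz, wl, dx)), dz, wl, dx)
```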
Every factor helps: Rapid Ptychographic Reconstruction
NASA Astrophysics Data System (ADS)
Nashed, Youssef
2015-03-01
Recent advances in microscopy, specifically higher spatial resolution and data acquisition rates, require faster and more robust phase retrieval reconstruction methods. Ptychography is a phase retrieval technique for reconstructing the complex transmission function of a specimen from a sequence of diffraction patterns in visible light, X-ray, and electron microscopes. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes. Waiting to postprocess datasets offline results in missed opportunities. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs). A final specimen reconstruction is then achieved by different techniques to merge sub-dataset results into a single complex phase and amplitude image. Results are shown on a simulated specimen and real datasets from X-ray experiments conducted at a synchrotron light source.
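One detail of merging sub-dataset results into a single complex image is that each independently reconstructed tile carries an arbitrary global phase. A hypothetical sketch of that alignment step, assuming tiles covering the same region:

```python
import numpy as np

def merge_tiles(tiles, weights=None):
    """Average complex reconstructions of the same region after removing
    each tile's arbitrary global phase relative to the first tile."""
    ref = tiles[0]
    aligned = [t * np.exp(1j * np.angle(np.vdot(t, ref))) for t in tiles]
    w = np.ones(len(tiles)) if weights is None else np.asarray(weights, float)
    return np.tensordot(w / w.sum(), np.stack(aligned), axes=1)

a = np.ones((4, 4), complex)
b = np.exp(0.7j) * a                         # same image, shifted global phase
print(np.allclose(merge_tiles([a, b]), a))   # -> True
```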
Seasonal Cycles of Oceanic Transports in the Eastern Subpolar North Atlantic
NASA Astrophysics Data System (ADS)
Gary, Stefan F.; Cunningham, Stuart A.; Johnson, Clare; Houpert, Loïc; Holliday, N. Penny; Behrens, Erik; Biastoch, Arne; Böning, Claus W.
2018-02-01
The variability of the Atlantic Meridional Overturning Circulation (AMOC) may play a role in sea surface temperature predictions on seasonal to decadal time scales. Therefore, AMOC seasonal cycles are a potential baseline for interpreting predictions. Here we present estimates for the seasonal cycle of transports of volume, temperature, and freshwater associated with the upper limb of the AMOC in the eastern subpolar North Atlantic on the Extended Ellett Line hydrographic section between Scotland and Iceland. Due to weather, ship-based observations are primarily in summer. Recent glider observations during other seasons present an opportunity to investigate the seasonal variability in the upper layer of the AMOC. First, we document a new method to quality control and merge ship, float, and glider hydrographic observations. This method accounts for the different spatial sampling rates of the three platforms. The merged observations are used to compute seasonal cycles of volume, temperature, and freshwater transports in the Rockall Trough. These estimates are similar to the seasonal cycles in two eddy-resolving ocean models. Volume transport appears to be the primary factor modulating other Rockall Trough transports. Finally, we show that the weakest transports occur in summer, consistent with seasonal changes in the regional-scale wind stress curl. Although the seasonal cycle is weak compared to other variability in this region, the amplitude of the seasonal cycle in the Rockall Trough, roughly 0.5-1 Sv about a mean of 3.4 Sv, may account for up to 7-14% of the heat flux between Scotland and Greenland.
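One plausible way to respect the platforms' very different sampling rates when gridding (a sketch of the general idea, not the authors' documented procedure) is to average within each platform inside a distance bin before averaging across platforms, so a fast-sampling glider cannot out-vote a single ship station:

```python
import numpy as np

def bin_by_platform(dist, value, platform, edges):
    """Bin along-section observations: per bin, average within each platform
    first, then across platforms, equalizing unequal sampling rates."""
    dist, value, platform = map(np.asarray, (dist, value, platform))
    idx = np.digitize(dist, edges) - 1
    out = np.full(len(edges) - 1, np.nan)
    for b in range(len(out)):
        means = [value[(idx == b) & (platform == p)].mean()
                 for p in np.unique(platform)
                 if np.any((idx == b) & (platform == p))]
        if means:
            out[b] = float(np.mean(means))
    return out

# Toy usage: a dense glider section plus two ship stations.
d = [0.1, 0.2, 0.3, 0.4, 0.45, 0.25, 0.75]
t = [9.0, 9.1, 9.2, 9.3, 9.4, 8.0, 8.5]
p = ["glider"] * 5 + ["ship"] * 2
print(bin_by_platform(d, t, p, edges=[0.0, 0.5, 1.0]))
```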
High-Resolution Imaging of Colliding and Merging Galaxies
NASA Astrophysics Data System (ADS)
Whitmore, Brad
1991-07-01
We propose to obtain high-resolution images, using the WF/PC, of two colliding and merging galaxies (i.e., NGC 4038/4039 = "The Antennae" and NGC 7252 = the "Atoms-for-Peace" galaxy). Our goal is to use HST to make critical observations of each object in order to gain a better understanding of the various phases of the merger process. Our primary objective is to determine whether globular clusters are formed during mergers.
Limiting fragmentation from scale-invariant merging of fast partons
NASA Astrophysics Data System (ADS)
Bialas, A.; Bzdak, A.; Peschanski, R.
2008-07-01
Exploiting the idea that the fast partons of an energetic projectile can be treated as sources of colour radiation interpreted as wee partons, it is shown that the recently observed property of extended limiting fragmentation implies a scaling law for the rapidity distribution of fast partons. This leads to a picture of a self-similar process where, for fixed total rapidity Y, the sources merge with probability varying as 1 / y.
NIMROD simulations of the IPA FRC experiment
NASA Astrophysics Data System (ADS)
Milroy, Richard
2015-11-01
The IPA experiment created a high-temperature plasma by merging and compressing supersonic θ-pinch-formed FRCs. The NIMROD code has been used to simulate this process. These calculations include the θ-pinch formation and acceleration of two FRCs using the dynamic formation methodology, and their translation to a central compression chamber where they merge and are magnetically compressed. Transport coefficients have been tuned so that simulation results agree well with experimental observation. The inclusion of the Hall term is essential for the FRCs to merge quickly, as observed experimentally through the excluded flux profiles. The inclusion of a significant anisotropic viscosity is required for the excluded flux profiles to agree well with the experiment. We plan to extend this validation work using the new ARPA-E funded Venti experiment at Helion Energy in Redmond, WA. This will be a very well diagnosed experiment in which two FRCs merge (as in the IPA experiment) and are then compressed to near-fusion conditions. Preliminary calculations with parameters relevant to this experiment have been made, and some numerical issues identified.
Maggu, Akshay R; Liu, Fang; Antoniou, Mark; Wong, Patrick C M
2016-01-01
Across time, languages undergo changes in phonetic, syntactic, and semantic dimensions. Social, cognitive, and cultural factors contribute to sound change, a phenomenon in which the phonetics of a language undergo changes over time. Individuals who misperceive and produce speech in a slightly divergent manner (called innovators) contribute to variability in the society, eventually leading to sound change. However, the cause of variability in these individuals is still unknown. In this study, we examined whether such misperceptions are represented in neural processes of the auditory system. We investigated behavioral, subcortical (via FFR), and cortical (via P300) manifestations of sound change processing in Cantonese, a Chinese language in which several lexical tones are merging. Across the merging categories, we observed a similar gradation of speech perception abilities in both behavior and the brain (subcortical and cortical processes). Further, we also found that behavioral evidence of tone merging correlated with subjects' encoding at the subcortical and cortical levels. These findings indicate that tone-merger categories, that are indicators of sound change in Cantonese, are represented neurophysiologically with high fidelity. Using our results, we speculate that innovators encode speech in a slightly deviant neurophysiological manner, and thus produce speech divergently that eventually spreads across the community and contributes to sound change.
Maggu, Akshay R.; Liu, Fang; Antoniou, Mark; Wong, Patrick C. M.
2016-01-01
Across time, languages undergo changes in phonetic, syntactic, and semantic dimensions. Social, cognitive, and cultural factors contribute to sound change, a phenomenon in which the phonetics of a language undergo changes over time. Individuals who misperceive and produce speech in a slightly divergent manner (called innovators) contribute to variability in the society, eventually leading to sound change. However, the cause of variability in these individuals is still unknown. In this study, we examined whether such misperceptions are represented in neural processes of the auditory system. We investigated behavioral, subcortical (via FFR), and cortical (via P300) manifestations of sound change processing in Cantonese, a Chinese language in which several lexical tones are merging. Across the merging categories, we observed a similar gradation of speech perception abilities in both behavior and the brain (subcortical and cortical processes). Further, we also found that behavioral evidence of tone merging correlated with subjects' encoding at the subcortical and cortical levels. These findings indicate that tone-merger categories, that are indicators of sound change in Cantonese, are represented neurophysiologically with high fidelity. Using our results, we speculate that innovators encode speech in a slightly deviant neurophysiological manner, and thus produce speech divergently that eventually spreads across the community and contributes to sound change. PMID:28066218
Microelectromechanical Systems
NASA Technical Reports Server (NTRS)
Gabriel, Kaigham J.
1995-01-01
Micro-electromechanical systems (MEMS) is an enabling technology that merges computation and communication with sensing and actuation to change the way people and machines interact with the physical world. MEMS is a manufacturing technology that will impact widespread applications including: miniature inertial measurement units for competent munitions and personal navigation; distributed unattended sensors; mass data storage devices; miniature analytical instruments; embedded pressure sensors; non-invasive biomedical sensors; fiber-optic components and networks; distributed aerodynamic control; and on-demand structural strength. The long term goal of ARPA's MEMS program is to merge information processing with sensing and actuation to realize new systems and strategies for both perceiving and controlling systems, processes, and the environment. The MEMS program has three major thrusts: advanced devices and processes, system design, and infrastructure.
Foam model of planetary formation
NASA Astrophysics Data System (ADS)
Andreev, Y.; Potashko, O.
Analysis of 2,637 terrestrial minerals shows a characteristic elemental and isotopic structure for each ore, irrespective of its site. A model of geo-nuclear element synthesis through avalanche merging of nuclei is proposed, which offers a simple explanation of these regularities. The main assumption is that nuclei, atoms, compounds, ores, and minerals were formed within the volume of the modern Earth at an early stage of its evolution from a uniform proto-substance. The principal provisions of the model are: 1) Most nuclei of the atoms of all chemical elements in the Earth's crust were formed by a mechanism of avalanche chain merging, practically in a single stage (on geological scales), in a process correlated on planetary scales and accompanied by the release of a large amount of heat. 2) Atoms of the chemical elements were generated as the planet cooled, preserving the relative spatial arrangement of the nuclei. 3) Chemical compounds arose as the planet's surface cooled, accompanied by macro- and geo-scale reorganization (mixing). 4) Mineral formations are the consequence of correlated behaviour of chemical compounds on microscopic scales during the phase transition from the gaseous or liquid to the solid state. 5) Synthesis of chemical elements in the deep layers of the Earth continues to the present day. "Foaming" instead of the "Big Bang": physical space is a continuous gas-fluid environment consisting of a superfluid foam. The continuity, conservation, and uniqueness of the proto-substance are postulated. The scenario is: primary singularity -> droplets (proto-galaxies) -> droplets (proto-stars) -> droplets (proto-planets) -> droplets (proto-satellites) -> droplets. Proto-planetary substance decays into protons and electrons as the first-generation disintegration product of the primary foam; nuclei, or nucleonic crystals, form the second generation through cascade merging of protons into conglomerates. The theory has been applied to the analysis of native copper samples from the Rafalovka ore deposit in Ukraine, where elemental abundances were measured by X-ray fluorescence microanalysis. Changes in elemental ratios are described by the nuclear synthesis reactions 16O+47Ti, 23Na+40Ca, 24Mg+39K, 31P+32S -> 63Cu and 16O+49Ti, 23Na+42Ca, 26Mg+39K, 31P+34S -> 65Cu. A dramatic change in the isotopic ratio of 56Fe and 57Fe was found between sites separated by 3 millimetres, with the content of 57Fe exceeding that of 56Fe in a Cu granule.
Afterlife of a Drop Impacting a Liquid Pool
NASA Astrophysics Data System (ADS)
Saha, Abhishek; Wei, Yanju; Tang, Xiaoyu; Law, Chung K.
2017-11-01
Drop impact on a liquid pool is ubiquitous in industrial processes such as inkjet printing and spray coating. While merging of the drop with the impacted liquid surface is essential to facilitate the printing and coating processes, it is the afterlife of this merged drop and the associated mixing which control the quality of the printed or coated surface. In this talk we will report an experimental study on the structural evolution of the merged droplet inside the liquid pool. First, we will analyze the depth of the crater created on the pool surface by the impacting drop for a range of impact inertia, and we will derive a scaling relation and the associated characteristic time scale. Next, we will focus on the toroidal vortex formed by the moving drop inside the liquid pool and assess the characteristic time and length scales of the penetration process. The geometry of the vortex structure, which qualitatively indicates the degree of mixedness, will also be discussed. Finally, we will present results from experiments with various viscosities to demonstrate the role of viscous dissipation in the geometry and structure formed by the drop. This work is supported by the Army Research Office and the Xerox Corporation.
Best Merge Region Growing Segmentation with Integrated Non-Adjacent Region Object Aggregation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Tarabalka, Yuliya; Montesano, Paul M.; Gofman, Emanuel
2012-01-01
Best merge region growing normally produces segmentations with closed, connected region objects. Recognizing that spectrally similar objects often appear in spatially separate locations, we present an approach for tightly integrating best merge region growing with non-adjacent region object aggregation, which we call Hierarchical Segmentation, or HSeg. However, the original implementation of non-adjacent region object aggregation in HSeg required excessive computing time, even for moderately sized images, because of the required intercomparison of each region with all other regions. This problem was previously addressed by a recursive approximation of HSeg, called RHSeg. In this paper we introduce a refined implementation of non-adjacent region object aggregation in HSeg that reduces the computational requirements of HSeg without resorting to the recursive approximation. In this refinement, HSeg's intercomparisons among non-adjacent regions are limited to regions of a dynamically determined minimum size. We show that this refined version of HSeg can process moderately sized images in about the same amount of time as RHSeg incorporating the original HSeg. Nonetheless, RHSeg is still required for processing very large images due to its lower computer memory requirements and amenability to parallel processing. We then note a limitation of RHSeg with the original HSeg for high-spatial-resolution images, and show how incorporating the refined HSeg into RHSeg overcomes this limitation. The quality of the image segmentations produced by the refined HSeg is then compared with that of other available best merge segmentation approaches. Finally, we comment on the unique nature of the hierarchical segmentations produced by HSeg.
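To make the refinement concrete, the following is a minimal sketch of one best-merge step in which non-adjacent (aggregation) candidates are only considered above a dynamically chosen minimum region size. The Euclidean mean-vector dissimilarity and all names are illustrative assumptions, not the actual HSeg implementation.

    import numpy as np

    def best_merge_step(means, sizes, adjacency, min_nonadj_size):
        """Find the single best pair of regions to merge.

        means: dict region_id -> mean feature vector (numpy array)
        sizes: dict region_id -> pixel count
        adjacency: set of frozenset({i, j}) pairs sharing a border
        min_nonadj_size: non-adjacent pairs are only compared when
            both regions reach this size (the refinement that avoids
            all-pairs comparisons).
        """
        best, best_cost = None, np.inf
        ids = sorted(means)
        for a in range(len(ids)):
            for b in range(a + 1, len(ids)):
                i, j = ids[a], ids[b]
                adjacent = frozenset((i, j)) in adjacency
                if not adjacent and min(sizes[i], sizes[j]) < min_nonadj_size:
                    continue  # skip small non-adjacent pairs
                cost = np.linalg.norm(means[i] - means[j])
                if cost < best_cost:
                    best, best_cost = (i, j), cost
        return best, best_cost

Repeatedly applying such a step, updating means, sizes, and adjacency after each merge, yields the segmentation hierarchy; the size cutoff is what turns the quadratic non-adjacent search into a tractable one.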
Bouncing-to-Merging Transition in Drop Impact on Liquid Film: Role of Liquid Viscosity.
Tang, Xiaoyu; Saha, Abhishek; Law, Chung K; Sun, Chao
2018-02-27
When a drop impacts on a liquid surface, it can either bounce back or merge with the surface. The outcome affects many industrial processes: merging is preferred in spray coating to generate a uniform layer, while bouncing is desired in internal combustion engines to prevent accumulation of the fuel drop on the wall. Thus, a good understanding of how to control the impact outcome is highly desirable for optimizing performance. For a given liquid, a regime diagram of bouncing and merging outcomes can be mapped in the space of Weber number (the ratio of impact inertia to surface tension) versus film thickness. In addition, recognizing that the liquid viscosity is a fundamental fluid property that critically affects the impact outcome through viscous dissipation of the impact momentum, here we investigate liquids with a wide range of viscosities, from 0.7 to 100 cSt, to assess the effect of viscosity on the regime diagram. Results show that while the regime diagram maintains its general structure, the merging regime becomes smaller for more viscous liquids and the retraction-merging regime disappears when the viscosity is very high. The viscous effects are modeled and mathematical relations for the transition boundaries are subsequently proposed, which agree well with the experiments. The new expressions account for all the liquid properties and impact conditions, thus providing a powerful tool to predict and manipulate the outcome when a drop impacts on a liquid film.
PREFACE: 26th Summer School and International Symposium on the Physics of Ionized Gases (SPIG 2012)
NASA Astrophysics Data System (ADS)
Kuraica, Milorad; Mijatovic, Zoran
2012-11-01
This volume of Journal of Physics: Conference Series contains the general invited lectures, topical invited lectures, and progress reports presented at the 26th Summer School and International Symposium on the Physics of Ionized Gases - SPIG 2012. The conference was held in Zrenjanin, Serbia, from 27-31 August. The SPIG conference has a 52-year-long tradition. The papers in this volume cover the following sections: atomic collision processes, particle and laser beam interactions with solids, low temperature plasmas, and general plasmas. As these four topics often overlap and merge in numerous fundamental studies and, more importantly, in applications, SPIG in general serves as a venue for exchanging ideas in the related fields. We hope that this volume will be an important source of information about progress in plasma physics and will be useful, first of all, for students, but also for plasma physics scientists. The Editors would like to thank the invited speakers for their participation at SPIG 2012 and for their efforts in writing contributions for this volume. We also express our gratitude to the members of the Scientific and Organizing committees for their efforts in organizing this SPIG. We would especially like to thank the Ministry of Education, Science and Technological Development of the Republic of Serbia, the Provincial Secretariat for Science and Technological Development, Province of Vojvodina, the Institut Français de Serbie, and Biser Zrenjanin for financial support, as well as the European Physical Society (EPS) for supporting the award for the best poster by a young scientist, and American Elements, USA. Milorad Kuraica Zoran Mijatovic October 2012 Editors
Optical-Near-infrared Color Gradients and Merging History of Elliptical Galaxies
NASA Astrophysics Data System (ADS)
Kim, Duho; Im, Myungshin
2013-04-01
It has been suggested that merging plays an important role in the formation and evolution of elliptical galaxies. While gas dissipation by star formation is believed to steepen metallicity and color gradients of the merger products, mixing of stars through dissipationless merging (dry merging) is believed to flatten them. In order to understand the past merging history of elliptical galaxies, we studied the optical-near-infrared (NIR) color gradients of 204 elliptical galaxies. These galaxies are selected from the overlap region of the Sloan Digital Sky Survey (SDSS) Stripe 82 and the UKIRT Infrared Deep Sky Survey (UKIDSS) Large Area Survey (LAS). The use of optical and NIR data (g, r, and K) provides large wavelength baselines and breaks the age-metallicity degeneracy, allowing us to derive age and metallicity gradients. The use of the deep SDSS Stripe 82 images makes it possible for us to examine how the color/age/metallicity gradients are related to merging features. We find that the optical-NIR color and the age/metallicity gradients of elliptical galaxies with tidal features are consistent with those of relaxed ellipticals, suggesting that the two populations underwent a similar merging history on average and that mixing of stars was more or less completed before the tidal features disappeared. Elliptical galaxies with dust features have steeper color gradients than the other two types, even after masking out dust features during the analysis, which can be due to a process involving wet merging. More importantly, we find that the scatter in the color/age/metallicity gradients of the relaxed and merging-feature types decreases as their luminosities (or masses) increase at M > 10^11.4 M_⊙ but stays large at lower luminosities. Mean metallicity gradients appear nearly constant over the explored mass range, but a possible flattening is observed at the massive end. According to our toy model, which predicts how the distribution of metallicity gradients changes as a result of major dry merging, the mean metallicity gradient should flatten by 40% and its scatter should decrease by 80% per mass-doubling if ellipticals evolve only through major dry mergers. Our result, although limited by small number statistics at the massive end, is consistent with the picture that major dry merging is an important mechanism for the evolution of ellipticals at M > 10^11.4 M_⊙, but is less important in the lower mass range.
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh M.; Meir, Rinat; Zalevsky, Zeev
2017-02-01
Optical sectioning microscopy can provide highly detailed three-dimensional (3D) images of biological samples. However, it requires the acquisition of many images per volume and is therefore time consuming, and may not be suitable for live-cell 3D imaging. We propose the use of the modified Gerchberg-Saxton phase retrieval algorithm to enable full 3D imaging of a gold-nanoparticle-tagged sample using only two images. The reconstructed field is propagated in free space to all other focus planes in post-processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because we propose to apply the phase retrieval to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. The proposed concept is then further extended to the tracking of single fluorescent particles within a three-dimensional (3D) cellular environment, based on image processing algorithms that can significantly increase the localization accuracy of the 3D point spread function with respect to regular Gaussian fitting. All proposed concepts are validated both on simulated data and experimentally.
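As an illustration of the underlying idea, a minimal two-plane Gerchberg-Saxton iteration with free-space propagation might look like the sketch below. This is the standard textbook algorithm, not the authors' modified variant; the angular-spectrum propagator, square-grid assumption, and parameter names are illustrative.

    import numpy as np

    def angular_spectrum_propagate(field, dz, wavelength, dx):
        """Free-space propagation via the angular spectrum method.
        Assumes a square grid with pixel pitch dx; evanescent
        components are clamped for simplicity."""
        n = field.shape[0]
        fx = np.fft.fftfreq(n, d=dx)
        FX, FY = np.meshgrid(fx, fx)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

    def gerchberg_saxton(amp1, amp2, dz, wavelength, dx, n_iter=50):
        """Recover the complex field at plane 1 from measured
        amplitudes at two planes separated by dz."""
        field = amp1.astype(complex)  # flat initial phase
        for _ in range(n_iter):
            field = angular_spectrum_propagate(field, dz, wavelength, dx)
            field = amp2 * np.exp(1j * np.angle(field))  # impose plane-2 amplitude
            field = angular_spectrum_propagate(field, -dz, wavelength, dx)
            field = amp1 * np.exp(1j * np.angle(field))  # impose plane-1 amplitude
        return field

Once the complex field at one plane is known, refocusing to any other z is a single call to the propagator, which is what allows the full z-stack to be synthesized from just two acquisitions.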
NASA Astrophysics Data System (ADS)
Brusberg, Lars; Lang, Günter; Schröder, Henning
2011-01-01
The proposed novel packaging approach merges micro-system packaging and glass integrated optics. It provides 3D optical single-mode intra-system links to bridge the gap between novel photonic integrated circuits and the glass fibers used for inter-system interconnects. We introduce our hybrid 3D photonic packaging approach based on thin glass substrates with planar integrated optical single-mode waveguides for fiber-to-chip and chip-to-chip links. Optical mirrors and lenses provide optical mode matching for photonic IC assemblies and optical fiber interconnects. Thin glass is commercially available in panel and wafer formats and exhibits excellent optical and high-frequency properties, as reviewed in the paper; that makes it well suited for micro-system packaging. The adopted planar waveguide process, based on ion-exchange technology, is suitable for high-volume manufacturing. This ion-exchange process and the optical propagation are described in detail for thin glass substrates. An extensive characterization of all basic circuit elements, such as straight and curved waveguides, couplers, and crossings, demonstrates the low attenuation of the optical circuit elements.
NASA Astrophysics Data System (ADS)
Zhu, Tao; Ren, Ji-Rong; Mo, Shu-Fan
2009-12-01
In this paper, by making use of Duan's topological current theory, the evolution of vortex filaments in excitable media is discussed in detail. The vortex filaments are found to generate or annihilate at the limit points and to encounter, split, or merge at the bifurcation points of a complex function Z(x, t). It is also shown that the Hopf invariant of knotted scroll wave filaments is preserved in the branch processes (splitting, merging, or encountering) during the evolution of these knotted scroll wave filaments. Furthermore, it is revealed that the "exclusion principle" in some chemical media is just a special case of the Hopf invariant constraint, and that during the branch processes the "exclusion principle" is also protected by topology.
Error Characterisation and Merging of Active and Passive Microwave Soil Moisture Data Sets
NASA Astrophysics Data System (ADS)
Wagner, Wolfgang; Gruber, Alexander; de Jeu, Richard; Parinussa, Robert; Chung, Daniel; Dorigo, Wouter; Reimer, Christoph; Kidd, Richard
2015-04-01
As part of the Climate Change Initiative (CCI) programme of the European Space Agency (ESA), a data fusion system has been developed which is capable of ingesting surface soil moisture data derived from active and passive microwave sensors (ASCAT, AMSR-E, etc.) flown on different satellite platforms and merging them to create long and consistent time series of soil moisture suitable for use in climate change studies. The resulting soil moisture data records (latest version: ESA CCI SM v02.1, released on 5/12/2014) are freely available and can be obtained from http://www.esa-soilmoisture-cci.org/. As described by Wagner et al. (2012), the principal steps of the data fusion process are: 1) error characterisation, 2) matching to account for data-set-specific biases, and 3) merging. In this presentation we present the current data fusion process and discuss how new error characterisation methods, such as the increasingly popular triple collocation method discussed for example by Zwieback et al. (2012), may be used to improve it. The main benefit of an improved error characterisation would be a more reliable identification of the best-performing microwave soil moisture retrieval(s) for each grid point and each point in time. In cases where two or more satellite data sets provide useful information, the estimated errors can be used to define the weights with which each satellite data set is merged, i.e. the lower its error, the higher its weight. This is expected to bring a significant improvement over the current data fusion scheme, which is not yet based on quantitative estimates of the retrieval errors but on a proxy measure, namely the vegetation optical depth (Dorigo et al., 2015): over areas with low vegetation passive soil moisture retrievals are used, while over areas with moderate vegetation density active retrievals are used. In transition areas, where both products correlate well, both products are used in a synergistic way: on time steps where only one of the products is available, the estimate of the respective product is used, while on days where both active and passive sensors provide an estimate, their observations are averaged.
REFERENCES
Dorigo, W.A., A. Gruber, R. de Jeu, W. Wagner, T. Stacke, A. Löw, C. Albergel, L. Brocca, D. Chung, R. Parinussa, R. Kidd (2015) Evaluation of the ESA CCI soil moisture product using ground-based observations, Remote Sensing of Environment, in press.
Wagner, W., W. Dorigo, R. de Jeu, D. Fernandez, J. Benveniste, E. Haas, M. Ertl (2012) Fusion of active and passive microwave observations to create an Essential Climate Variable data record on soil moisture, ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS Annals), Volume I-7, XXII ISPRS Congress, Melbourne, Australia, 25 August-1 September 2012, 315-321.
Zwieback, S., K. Scipal, W. Dorigo, W. Wagner (2012) Structural and statistical properties of the collocation technique for error characterization, Nonlinear Processes in Geophysics, 19, 69-80.
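For concreteness, a minimal sketch of the two ingredients discussed above, triple-collocation error estimation followed by inverse-error-variance weighting, might look as follows. It assumes zero-mean, mutually independent errors and data sets already rescaled to a common climatology; variable names are illustrative, not the CCI processor's code.

    import numpy as np

    def triple_collocation_error_var(x, y, z):
        """Estimate the error variance of x from three collocated,
        independently erring estimates x, y, z of the same variable."""
        c = np.cov(np.vstack([x, y, z]))  # 3x3 sample covariance matrix
        return c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]

    def merge_inverse_variance(active, passive, var_a, var_p):
        """Merge two soil moisture estimates, weighting each by the
        inverse of its estimated error variance: lower error,
        higher weight."""
        w_a, w_p = 1.0 / var_a, 1.0 / var_p
        return (w_a * active + w_p * passive) / (w_a + w_p)

Cycling the first function over the three data sets gives an error variance for each; the second function then realizes the "lower its error, the higher its weight" rule per grid point and time step.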
Joint optimization: Merging a new culture with a new physical environment.
Stichler, Jaynelle F; Ecoff, Laurie
2009-04-01
Nearly $200 billion of healthcare construction is expected by the year 2015, and nurse leaders must expand their knowledge of and capabilities in healthcare design. This bimonthly department prepares nurse leaders to use the evidence-based design process to ensure that new, expanded, and renovated hospitals facilitate optimal patient outcomes, enhance the work environment for healthcare providers, and improve organizational performance. In this article, the authors discuss the concept of jointly optimizing the merging of an organizational culture with a new hospital facility.
15 maps merged in one data structure - GIS-based template for Dawn at Ceres
NASA Astrophysics Data System (ADS)
Naß, A.; Dawn Mapping Team
2017-09-01
Deriving regionally and globally valid statements from the map quadrangles is already a very time-intensive task. A further challenge is how individual mappers can generate one homogeneous GIS-based project (with respect to geometric and visual character) representing a single, geologically consistent final map. In this contribution we present a template that was generated for the interpretative mapping project of Ceres to meet the requirement of unifying and merging the individual quadrangles.
NASA Astrophysics Data System (ADS)
Hautot, Felix; Dubart, Philippe; Bacri, Charles-Olivier; Chagneau, Benjamin; Abou-Khalil, Roger
2017-09-01
New developments in the fields of robotics and computer vision make it possible to merge sensor outputs, allowing fast real-time localization of radiological measurements in space/volume together with near-real-time identification and characterization of radioactive sources. These capabilities make nuclear investigations more efficient for operators' dosimetry evaluation, intervention scenarios, and risk mitigation and simulation, such as accidents in unknown, potentially contaminated areas or during dismantling operations.
Aeromagnetic map compilation: Procedures for merging and an example from Washington
Finn, C.
1999-01-01
Rocks in Antarctica and offshore have widely diverse magnetic properties. Consequently, aeromagnetic data collected there can improve knowledge of the geologic, tectonic, and geothermal characteristics of the region. Aeromagnetic data can map concealed structures such as faults, folds, and dikes, ascertain basin thickness, and locate buried volcanic rocks, as well as some intrusive and metamorphic rocks. Gridded, composite data sets allow a view of continental-scale trends that individual data sets do not provide and link widely separated areas of outcrop and disparate geologic studies. Individual magnetic surveys must be processed so that they match adjacent surveys prior to merging. A consistent representation of the Earth's magnetic field (the International Geomagnetic Reference Field, IGRF) must be removed from each data set. All data sets need to be analytically continued to the same flight elevation, with their datums shifted to match adjacent data. I advocate minimal processing to best represent the individual surveys in the merged compilation. An example of a compilation of aeromagnetic surveys from Washington illustrates the utility of aeromagnetic maps for providing synoptic views of regional tectonic features.
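A minimal sketch of the per-survey preprocessing steps described above, IGRF removal, analytic (upward) continuation to a common flight elevation via the Fourier domain, and a datum shift to match an adjacent survey, is given below. The grid spacing, the mean-based datum shift, and all function names are illustrative assumptions, not the compilation's actual workflow.

    import numpy as np

    def upward_continue(grid, dz, dx):
        """Analytically continue a gridded anomaly upward by dz metres.
        Uses the standard Fourier-domain operator exp(-|k| dz),
        valid for dz > 0 (continuation to a higher elevation)."""
        ny, nx = grid.shape
        kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
        ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
        KX, KY = np.meshgrid(kx, ky)
        k = np.sqrt(KX**2 + KY**2)
        return np.real(np.fft.ifft2(np.fft.fft2(grid) * np.exp(-k * dz)))

    def match_survey(total_field, igrf, dz, dx, adjacent_overlap_mean):
        """Prepare one survey for merging with its neighbours."""
        anomaly = total_field - igrf                # remove the reference field
        anomaly = upward_continue(anomaly, dz, dx)  # bring to common elevation
        # Shift the datum so this survey agrees with the adjacent one
        shift = adjacent_overlap_mean - anomaly.mean()
        return anomaly + shift

In practice the datum shift would be computed from the overlap region only, but the three-step order (IGRF removal, continuation, shift) mirrors the minimal processing the abstract advocates.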
Merging and energy exchange between optical filaments
NASA Astrophysics Data System (ADS)
Georgieva, D. A.; Kovachev, L. M.
2015-10-01
We investigate the nonlinear interaction between collinear femtosecond laser pulses with power slightly above the critical power for self-focusing, Pcr, through the processes of cross-phase modulation (CPM) and degenerate four-photon parametric mixing (FPPM). When there is no initial phase difference between the pulses, we observe attraction between the pulses due to CPM. The final result is merging of the pulses into a single filament with higher power. By the method of moments it is found that the attraction depends on the distance between the pulses and has a potential character. In the second case we study energy exchange between filaments. This process is described through the FPPM scheme and requires an initial phase difference between the waves.
NASA Astrophysics Data System (ADS)
Wei, Hai-Rui; Long, Gui Lu
2015-03-01
We propose two compact, economic, and scalable schemes for implementing optical controlled-phase-flip and controlled-controlled-phase-flip gates by using the input-output process of a single-sided cavity strongly coupled to a single nitrogen-vacancy-center defect in diamond. Additional photonic qubits, necessary in procedures based on parity-check measurements or controlled-path and merging gates, are not employed in our schemes. In the controlled-path gate, the paths of the target photon are conditionally controlled by the control photon, and these two paths can be merged back into one by using a merging gate. Only one half-wave plate is employed in our scheme for the controlled-phase-flip gate. Compared with the conventional synthesis procedure for constructing a controlled-controlled-phase-flip gate, the cost of which is two controlled-path gates and two merging gates, or six controlled-not gates, our scheme is more compact and simpler. Our schemes could be performed with high fidelity and high efficiency using currently achievable experimental techniques.
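For reference, both target gates are diagonal in the computational basis, flipping the sign of the all-ones state only. A toy numpy construction of the gate matrices (illustrative only, not the cavity-NV implementation proposed in the paper) is:

    import numpy as np

    def phase_flip_gate(n_controls):
        """Diagonal (C^n)Z gate: flips the sign of |11...1> only."""
        dim = 2 ** (n_controls + 1)
        gate = np.eye(dim)
        gate[-1, -1] = -1.0
        return gate

    cpf = phase_flip_gate(1)   # controlled-phase-flip, 4x4, diag(1,1,1,-1)
    ccpf = phase_flip_gate(2)  # controlled-controlled-phase-flip, 8x8

The interest of the paper is of course in how few physical resources realize these unitaries, not in the matrices themselves, which is why the comparison is counted in controlled-path, merging, and controlled-not gates.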
NASA Astrophysics Data System (ADS)
Gabern, Frederic; Koon, Wang S.; Marsden, Jerrold E.; Ross, Shane D.
2005-11-01
The computation, starting from basic principles, of chemical reaction rates in realistic systems (with three or more degrees of freedom) has been a longstanding goal of the chemistry community. Our current work, which merges tube dynamics with Monte Carlo methods, provides some key theoretical and computational tools for achieving this goal. We use basic tools of dynamical systems theory, merging the ideas of Koon et al. [W.S. Koon, M.W. Lo, J.E. Marsden, S.D. Ross, Heteroclinic connections between periodic orbits and resonance transitions in celestial mechanics, Chaos 10 (2000) 427-469] and De Leon et al. [N. De Leon, M.A. Mehta, R.Q. Topper, Cylindrical manifolds in phase space as mediators of chemical reaction dynamics and kinetics. I. Theory, J. Chem. Phys. 94 (1991) 8310-8328], particularly the use of invariant manifold tubes that mediate the reaction, into a tool for the computation of lifetime distributions and rates of chemical reactions and scattering phenomena, even in systems that exhibit non-statistical behavior. Previously, the main problem with the application of tube dynamics has been the computation of volumes in phase spaces of high dimension. The present work provides a starting point for overcoming this hurdle with some new ideas and implements them numerically. Specifically, an algorithm that uses tube dynamics to provide the initial bounding box for a Monte Carlo volume determination is used. The combination of a fine-scale method for determining the phase space structure (invariant manifold theory) with statistical methods for volume computations (Monte Carlo) is the main contribution of this paper. The methodology is applied here to a three-degree-of-freedom model problem and may be useful for higher-degree-of-freedom systems as well.
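The volume computation at the heart of the method can be sketched as plain hit-or-miss Monte Carlo sampling inside a bounding box. Here the membership test and box are generic stand-ins: in the paper the tight initial box comes from tube dynamics, which is precisely what keeps the hit rate high in many dimensions.

    import numpy as np

    def monte_carlo_volume(inside, lo, hi, n_samples=100_000, rng=None):
        """Estimate the volume of {x : inside(x)} within the box [lo, hi].

        inside: boolean membership test for the region of interest
        lo, hi: arrays giving the bounding box corners (in the paper,
                supplied by the invariant-manifold tube construction)
        """
        rng = np.random.default_rng(rng)
        lo, hi = np.asarray(lo, float), np.asarray(hi, float)
        pts = rng.uniform(lo, hi, size=(n_samples, lo.size))
        hits = np.fromiter((inside(p) for p in pts), dtype=bool, count=n_samples)
        box_volume = np.prod(hi - lo)
        return hits.mean() * box_volume

    # Example: volume of the unit ball in 3D (exact value 4*pi/3)
    vol = monte_carlo_volume(lambda p: p @ p <= 1.0, [-1, -1, -1], [1, 1, 1])

The tighter the bounding box around the region, the smaller the variance of the estimate for a fixed sample budget, which is the practical payoff of combining the two methods.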
In-depth analysis of drivers' merging behavior and rear-end crash risks in work zone merging areas.
Weng, Jinxian; Xue, Shan; Yang, Ying; Yan, Xuedong; Qu, Xiaobo
2015-04-01
This study investigates drivers' merging behavior and the rear-end crash risk in work zone merging areas during the entire merging implementation period, from the time a merging maneuver starts to the time it is completed. With merging traffic data from a work zone site in Singapore, a mixed probit model is developed to describe the merging behavior, and two surrogate safety measures, the time to collision (TTC) and the deceleration rate to avoid the crash (DRAC), are adopted to compute the rear-end crash risk between the merging vehicle and its neighboring vehicles. Results show that the merging vehicle has a higher probability of completing a merging maneuver quickly under one of the following situations: (i) the merging vehicle moves relatively fast; (ii) the merging lead vehicle is a heavy vehicle; and (iii) there is a sizable gap in the adjacent through lane. Results indicate that the rear-end crash risk does not monotonically increase as the merging vehicle speed increases. The merging vehicle's rear-end crash risk is also affected by the vehicle type: the largest increase in rear-end crash risk occurs when the merging lead vehicle is a heavy vehicle. Although a reduced remaining distance to the work zone could urge the merging vehicle to complete a merging maneuver quickly, it might lead to an increased rear-end crash risk. Interestingly, it is found that the rear-end crash risk generally increases with the elapsed time after the merging maneuver is triggered. Copyright © 2015 Elsevier Ltd. All rights reserved.
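The two surrogate safety measures are simple kinematic quantities; a sketch of their computation for a follower-leader pair is below. The sign convention (positive closing speed means the follower is approaching) and the infinite/zero returns for non-closing cases are illustrative assumptions, not the paper's exact formulation.

    def time_to_collision(gap, v_follower, v_leader):
        """TTC in seconds: time until collision if both speeds stay constant.
        gap: bumper-to-bumper distance (m); speeds in m/s.
        Returns float('inf') when the follower is not closing in."""
        closing_speed = v_follower - v_leader
        return gap / closing_speed if closing_speed > 0 else float("inf")

    def decel_rate_to_avoid_crash(gap, v_follower, v_leader):
        """DRAC in m/s^2: deceleration the follower needs to avoid a crash."""
        closing_speed = v_follower - v_leader
        return closing_speed**2 / (2 * gap) if closing_speed > 0 else 0.0

Low TTC or high DRAC at any instant of the merging period flags an elevated rear-end crash risk, which is how the measures are used to track risk over the whole maneuver.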
Particle acceleration during merging-compression plasma start-up in the Mega Amp Spherical Tokamak
NASA Astrophysics Data System (ADS)
McClements, K. G.; Allen, J. O.; Chapman, S. C.; Dendy, R. O.; Irvine, S. W. A.; Marshall, O.; Robb, D.; Turnyanskiy, M.; Vann, R. G. L.
2018-02-01
Magnetic reconnection occurred during merging-compression plasma start-up in the Mega Amp Spherical Tokamak (MAST), resulting in the prompt acceleration of substantial numbers of ions and electrons to highly suprathermal energies. Accelerated field-aligned ions (deuterons and protons) were detected using a neutral particle analyser at energies up to about 20 keV during merging in early MAST pulses, while nonthermal electrons have been detected indirectly in more recent pulses through microwave bursts. However, no increase in soft x-ray emission was observed until later in the merging phase, by which time strong electron heating had been detected through Thomson scattering measurements. The test-particle code CUEBIT is used to model ion acceleration in the presence of an inductive toroidal electric field with a prescribed spatial profile and temporal evolution based on Hall-MHD simulations of the merging process. The simulations yield particle distributions with properties similar to those observed experimentally, including strong field alignment of the fast ions and the acceleration of protons to higher energies than deuterons. Particle-in-cell modelling of a plasma containing a dilute field-aligned suprathermal electron component suggests that at least some of the microwave bursts can be attributed to the anomalous Doppler instability driven by anisotropic fast electrons, which do not produce measurable enhancements in soft x-ray emission either because they are insufficiently energetic or because the nonthermal bremsstrahlung emissivity during this phase of the pulse is below the detection threshold. There is no evidence of runaway electron acceleration during merging, possibly due to the presence of three-dimensional field perturbations.
A fast 3D region growing approach for CT angiography applications
NASA Astrophysics Data System (ADS)
Ye, Zhen; Lin, Zhongmin; Lu, Cheng-chang
2004-05-01
Region growing is one of the most popular methods for low-level image segmentation. Much research on region growing has focused on the definition of the homogeneity criterion or the growing and merging criteria. However, one disadvantage of conventional region growing is redundancy: it requires large memory usage, and its computational efficiency is very low, especially for 3D images. To overcome this problem, a non-recursive single-pass 3D region growing algorithm named SymRG is implemented and successfully applied to 3D CT angiography (CTA) applications for vessel segmentation and bone removal. The method consists of three steps: segmenting one-dimensional regions of each row; merging regions across adjacent rows to obtain the region segmentation of each slice; and merging regions across adjacent slices to obtain the final region segmentation of the 3D image. To improve the segmentation speed for very large 3D CTA volumes, the algorithm is applied repeatedly to newly updated local cubes. The next new cube can be estimated by checking isolated segmented regions on all six faces of the current local cube. This local non-recursive 3D region-growing algorithm is memory-efficient and computation-efficient. Clinical testing of this algorithm on brain CTA shows that the technique can effectively remove the whole skull and most of the bones of the skull base, and reveal the cerebral vascular structures clearly.
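The single-pass, run-based strategy can be sketched in 2D with run-length segments and a union-find structure that merges runs touching across rows; extending the same merge step across slices gives the 3D version. The simple intensity-difference homogeneity test and all names are illustrative assumptions, not the SymRG implementation.

    import numpy as np

    def find(parent, i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def segment_rows(image, tol):
        """Split each row into homogeneous runs: (row, start, stop, label)."""
        runs, parent = [], []
        for r, row in enumerate(image):
            start = 0
            for c in range(1, len(row) + 1):
                if c == len(row) or abs(int(row[c]) - int(row[c - 1])) > tol:
                    parent.append(len(parent))
                    runs.append((r, start, c, len(parent) - 1))
                    start = c
        return runs, parent

    def merge_adjacent_rows(runs, parent, image, tol):
        """Union runs in adjacent rows that overlap in columns and match."""
        by_row = {}
        for run in runs:
            by_row.setdefault(run[0], []).append(run)
        for r in sorted(by_row)[1:]:
            for (rr, s, e, lab) in by_row[r]:
                for (pr, ps, pe, plab) in by_row[r - 1]:
                    overlap = max(s, ps) < min(e, pe)
                    similar = abs(int(image[rr, s]) - int(image[pr, ps])) <= tol
                    if overlap and similar:
                        parent[find(parent, lab)] = find(parent, plab)
        return parent

Because each pixel is visited once during run extraction and only run endpoints are compared during merging, the memory and time costs stay far below those of a recursive flood fill, which is the point of the single-pass design.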
On the impact of neutron star binaries' natal-kick distribution on the Galactic r-process enrichment
NASA Astrophysics Data System (ADS)
Safarzadeh, Mohammadtaher; Côté, Benoit
2017-11-01
We study the impact of the neutron star binaries' (NSBs) natal-kick distribution on the Galactic r-process enrichment. We model the growth of a Milky Way type halo based on N-body simulation results and its star formation history based on multi-epoch abundance matching techniques. We consider that NSBs that merge well beyond the galaxy's effective radius (>2 × Reff) do not contribute to the Galactic r-process enrichment. Assuming a power-law delay-time distribution (DTD) function (∝ t^-1) with tmin = 30 Myr for the binaries' coalescence time-scales and an exponential profile for their natal-kick distribution with an average value of 180 km s^-1, we show that up to ~40 per cent of all formed NSBs do not contribute to the r-process enrichment by z = 0, either because they merge far from the galaxy at a given redshift (up to ~25 per cent) or have not yet merged by today (~15 per cent). Our result is largely insensitive to the details of the DTD function: assuming a constant coalescence time-scale of 100 Myr approximates the adopted DTD well, although with 30 per cent of the NSBs ending up not contributing to the r-process enrichment. Our results, although rather dependent on the adopted natal-kick distribution, represent a first step towards estimating the impact of natal kicks and DTD functions on the r-process enrichment of galaxies, which would need to be incorporated in hydrodynamical simulations.
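A toy Monte Carlo of the bookkeeping described above might draw coalescence times from a t^-1 DTD and kick speeds from an exponential distribution, then flag binaries that travel beyond 2 Reff before merging. The tmin and mean kick values come from the abstract; the straight-line travel model, the cutoff age, and the fixed Reff of 3 kpc are simplifying assumptions, not the paper's halo-tracking calculation.

    import numpy as np

    KM_PER_KPC = 3.086e16
    SEC_PER_MYR = 3.156e13

    def fraction_not_contributing(n=100_000, t_min=30.0, t_max=13800.0,
                                  v_mean=180.0, r_eff_kpc=3.0, seed=0):
        """Fraction of NSBs merging beyond 2*Reff (toy straight-line model)."""
        rng = np.random.default_rng(seed)
        # Sample t from p(t) ∝ 1/t on [t_min, t_max] via inverse CDF (Myr)
        u = rng.uniform(size=n)
        t = t_min * (t_max / t_min) ** u
        v = rng.exponential(v_mean, size=n)  # kick speed, km/s
        distance_kpc = v * t * SEC_PER_MYR / KM_PER_KPC
        return np.mean(distance_kpc > 2 * r_eff_kpc)

Even this crude model makes the qualitative point: with a t^-1 DTD a sizable tail of long delay times, combined with typical kicks, carries a non-negligible fraction of binaries well outside the star-forming galaxy before they merge.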
A merged model of quality improvement and evaluation: maximizing return on investment.
Woodhouse, Lynn D; Toal, Russ; Nguyen, Trang; Keene, DeAnna; Gunn, Laura; Kellum, Andrea; Nelson, Gary; Charles, Simone; Tedders, Stuart; Williams, Natalie; Livingood, William C
2013-11-01
Quality improvement (QI) and evaluation are frequently considered to be alternative approaches for monitoring and assessing program implementation and impact. The emphasis on third-party evaluation, particularly associated with summative evaluation, and the grounding of evaluation in the social and behavioral sciences contrast with the integration of the QI process within programs or organizations and its origins in management science and industrial engineering. Working with a major philanthropic organization in Georgia, we illustrate how a QI model is integrated with evaluation for five asthma prevention and control sites serving poor and underserved communities in rural and urban Georgia. A primary foundation of this merged model of QI and evaluation is a refocusing of evaluation, from an intimidating report-card summative evaluation by external evaluators to an internally engaged program focus on developmental evaluation. The benefits of the merged model to both QI and evaluation are discussed. The use of evaluation-based logic models can help anchor a QI program in evidence-based practice and provide linkage between processes and outputs and the longer term distal outcomes. Merging the QI approach with evaluation has major advantages, particularly related to enhancing the funder's return on investment. We illustrate how a Plan-Do-Study-Act model of QI can (a) be integrated with evaluation-based logic models, (b) help refocus emphasis from summative to developmental evaluation, (c) enhance program ownership and engagement in evaluation activities, and (d) increase the role of evaluators in providing technical assistance and support.
Benett, William J.; Krulevitch, Peter A.
2001-01-01
A miniature connector for introducing microliter quantities of solutions into microfabricated fluidic devices, which incorporates a molded ring or seal set into a ferrule cartridge, with or without a compression screw. The fluidic connector, for example, joins standard high pressure liquid chromatography (HPLC) tubing to 1 mm diameter holes in silicon or glass, enabling ml-sized volumes of sample solutions to be merged with µl-sized devices. The connector has many desirable features: ease of connection and disconnection; a small footprint, which enables numerous connectors to be located in a small area; low dead volume; helium leak-tightness; and tubing that does not twist during connection. The connector thus enables easy and effective exchange of microfluidic devices and introduction of different solutions into the devices.
NASA Astrophysics Data System (ADS)
Ground, Cody; Vergine, Fabrizio; Maddalena, Luca
2016-08-01
A defining feature of the turbulent free shear layer is that its growth is hindered by compressibility effects, thus limiting its potential to sufficiently mix the injected fuel and the surrounding airstream at the supersonic Mach numbers intrinsic to the combustor of air-breathing hypersonic vehicles. The introduction of streamwise vorticity is often proposed in an attempt to counteract these undesired effects. This fact makes the strategy of introducing multiple streamwise vortices and imposing upon them certain modes of mutual interaction, in order to potentially enhance mixing, an intriguing concept. However, many underlying fundamental characteristics of the flowfields in the presence of such interactions are not yet well understood; therefore, the fundamental physics of these flowfields should be investigated independently before the explicit mixing performance is characterized. In this work, experimental measurements are taken with the stereoscopic particle image velocimetry technique on two specifically targeted modes of vortex interaction: the merging and non-merging of two corotating vortices. The fluctuating velocity fields are analyzed using the proper orthogonal decomposition (POD) in order to identify the content, organization, and distribution of the modal turbulent kinetic energy of the fluctuating velocity eigenmodes. The effects of the two modes of vortex interaction are revealed by the POD analysis, which shows distinct differences in the modal features of the two cases. When comparing the low-order eigenmodes of the two cases, the size of the structures contained within the first ten modes increases as the flow progresses downstream for the merging case, whereas the opposite is true for the non-merging case. Additionally, the relative modal energy contribution of the first ten eigenmodes increases as the vortices evolve downstream in the merging case, whereas in the non-merging case it decreases. The POD results show that the vortex merging process reorients and redistributes the relative turbulent kinetic energy content toward the larger-scale structures within the low-order POD eigenmodes. This result suggests that by specifically designing the vortex generation system to impose preselected modes of vortex interaction upon the flow, it is possible to exert some form of control over the downstream evolution and distribution of the global and modal turbulent kinetic energy content.
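Snapshot POD of fluctuating velocity fields reduces to a singular value decomposition; a minimal sketch is given below, assuming the snapshots already have the mean removed and are flattened into the columns of a matrix. The function name and return convention are illustrative, not the authors' processing chain.

    import numpy as np

    def snapshot_pod(snapshots):
        """POD of a (n_points, n_snapshots) matrix of velocity fluctuations.

        Returns spatial modes (columns of 'modes'), singular values,
        temporal coefficients, and the fraction of fluctuating kinetic
        energy captured by each mode (proportional to s**2)."""
        modes, s, vt = np.linalg.svd(snapshots, full_matrices=False)
        energy_fraction = s**2 / np.sum(s**2)
        return modes, s, vt, energy_fraction

Comparing energy_fraction over the first ten modes at successive downstream stations is precisely the kind of diagnostic that distinguishes the merging and non-merging cases in the study.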
Time-evolving of very large-scale motions in a turbulent channel flow
NASA Astrophysics Data System (ADS)
Hwang, Jinyul; Lee, Jin; Sung, Hyung Jin; Zaki, Tamer A.
2014-11-01
Direct numerical simulation (DNS) data of a turbulent channel flow at Reτ = 930 were scrutinized to investigate the formation of very large-scale motions (VLSMs) by the merging of two large-scale motions (LSMs) aligned in the streamwise direction. We focused mainly on the supporting motions of the near-wall streaks during the merging of the outer LSMs. In visualizations of the instantaneous flow fields, several low-speed streaks in the near-wall region gathered in the spanwise direction when LSMs were concatenated in the outer region. The magnitude of the streamwise velocity fluctuations in the streaks intensified during the spanwise merging of the near-wall streaks. Conditionally averaged velocity fields around the merging of the outer LSMs showed that the intensified near-wall motions were induced by the outer LSMs and extended over the near-wall region. The intense near-wall motions influence the formation of the outer low-speed regions as well as the reduction of the convection velocity of the downstream LSMs. The interaction between the near-wall and outer motions is the essential origin of the different convection velocities of the upstream and downstream LSMs in the formation of VLSMs by merging. This work was supported by the Creative Research Initiatives (No. 2014-001493) program of the National Research Foundation of Korea (MSIP) and partially supported by KISTI under the Strategic Supercomputing Support Program.
Cluster Physics with Merging Galaxy Clusters
NASA Astrophysics Data System (ADS)
Molnar, Sandor
Collisions between galaxy clusters provide a unique opportunity to study matter in a parameter space which cannot be explored in our laboratories on Earth. In the standard ΛCDM model, where the total density is dominated by the cosmological constant (Λ) and the matter density by cold dark matter (CDM), structure formation is hierarchical, and clusters grow mostly by merging. Mergers of two massive clusters are the most energetic events in the universe after the Big Bang, hence they provide a unique laboratory for studying cluster physics. The two main mass components in clusters behave differently during collisions: the dark matter is nearly collisionless, responding only to gravity, while the gas is subject to pressure forces and dissipation, and shocks and turbulence develop during collisions. In the present contribution we review the different methods used to derive the physical properties of merging clusters. Different physical processes leave their signatures at different wavelengths, so our review is based on a multifrequency analysis. In principle, the best way to analyze multifrequency observations of merging clusters is to model them using N-body/hydrodynamic numerical simulations; we discuss the results of such detailed analyses. New ground- and space-based telescopes with high spatial and spectral resolution will come online in the near future. Motivated by these new opportunities, we briefly discuss methods for studying merging clusters that will become feasible in the near future.
Merging Satellite Precipitation Products for Improved Streamflow Simulations
NASA Astrophysics Data System (ADS)
Maggioni, V.; Massari, C.; Barbetta, S.; Camici, S.; Brocca, L.
2017-12-01
Accurate quantitative precipitation estimation is of great importance for water resources management, agricultural planning, and the forecasting and monitoring of natural hazards such as flash floods and landslides. In situ observations are limited around the Earth, especially in remote areas (e.g., complex terrain, dense vegetation), but currently available satellite precipitation products are able to provide global precipitation estimates with an accuracy that depends upon many factors (e.g., type of storms, temporal sampling, season, etc.). The recent SM2RAIN approach proposes to estimate rainfall by using satellite soil moisture observations. As opposed to traditional satellite precipitation methods, which sense cloud properties to retrieve instantaneous estimates, this bottom-up approach makes use of two consecutive soil moisture measurements to obtain an estimate of the precipitation fallen within the interval between two satellite overpasses. As a result, the nature of the measurement is different from and complementary to that of classical precipitation products, and could provide a valid alternative perspective to substitute or improve current rainfall estimates. Therefore, we propose to merge SM2RAIN and the widely used TMPA 3B42RT product across Italy for a 6-year period (2010-2015) at daily/0.25deg temporal/spatial scale. Two conceptually different merging techniques are compared to each other and evaluated in terms of different statistical metrics, including hit bias, threat score, false alarm rate, and missed rainfall volume. The first is based on the maximization of the temporal correlation with a reference dataset, while the second is based on a Bayesian approach, which provides a probabilistic satellite precipitation estimate derived from the joint probability distribution of observations and satellite estimates. The merged precipitation products perform better than the parent satellite-based products in terms of categorical statistics, as well as bias reduction and correlation coefficient, with the Bayesian approach being superior to the other methods. A case study in the Tiber river basin is also presented to discuss the performance of forcing a hydrological model with the merged satellite precipitation product to simulate streamflow time series.
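Under Gaussian assumptions a Bayesian merge of this kind has a closed form; a toy sketch combining one product used as a prior with the other treated as an observation is shown below (a precision-weighted posterior, i.e. a scalar Kalman update). The Gaussian model and the prior/observation roles are illustrative assumptions, not the authors' exact joint-distribution formulation.

    def bayesian_merge(prior_mean, prior_var, obs, obs_var):
        """Posterior mean/variance of rainfall given a Gaussian prior
        (e.g. a SM2RAIN-based estimate) and a Gaussian observation
        (e.g. 3B42RT); lower-error input gets the larger weight."""
        w = prior_var / (prior_var + obs_var)  # gain on the observation
        post_mean = prior_mean + w * (obs - prior_mean)
        post_var = (1.0 - w) * prior_var
        return post_mean, post_var

A useful property of this form is that it returns a variance along with the merged estimate, which is what makes the merged product probabilistic rather than a simple weighted average.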
NASA Astrophysics Data System (ADS)
Malik, M. A.; Cantwell, K. L.; Reser, B.; Gray, L. M.
2016-02-01
Marine researchers and managers routinely rely on interdisciplinary data sets collected using hull-mounted sonars, towed sensors, or submersible vehicles. These data sets can be broadly categorized into acoustic remote sensing, imagery-based observations, water property measurements, and physical samples. The resulting raw data sets are overwhelmingly large and complex, and often require specialized software and training to process. To address these challenges, NOAA's Office of Ocean Exploration and Research (OER) is developing tools to improve the discoverability of raw data sets and the integration of quality-controlled processed data in order to facilitate re-use of archived oceanographic data. The majority of recently collected OER raw oceanographic data can be retrieved from national data archives (e.g., NCEI and the NOAA Central Library). Merging of disparate data sets by scientists with diverse expertise, however, remains problematic. Initial efforts at OER have focused on merging geospatial acoustic remote sensing data with imagery and water property measurements that typically lack direct geo-referencing. OER has developed 'smart' ship and submersible tracks that provide a synopsis of the geospatial coverage of the various data sets. Tools under development enable scientists to quickly assess the relevance of archived OER data to their respective research or management interests, and enable quick access to the desired raw and processed data sets. Pre-processing of the data and visualization combining various data sets also offer benefits to streamline data quality assurance and quality control efforts.
Illuminating gravitational waves: A concordant picture of photons from a neutron star merger
NASA Astrophysics Data System (ADS)
Kasliwal, M. M.; Nakar, E.; Singer, L. P.; Kaplan, D. L.; Cook, D. O.; Van Sistine, A.; Lau, R. M.; Fremling, C.; Gottlieb, O.; Jencson, J. E.; Adams, S. M.; Feindt, U.; Hotokezaka, K.; Ghosh, S.; Perley, D. A.; Yu, P.-C.; Piran, T.; Allison, J. R.; Anupama, G. C.; Balasubramanian, A.; Bannister, K. W.; Bally, J.; Barnes, J.; Barway, S.; Bellm, E.; Bhalerao, V.; Bhattacharya, D.; Blagorodnova, N.; Bloom, J. S.; Brady, P. R.; Cannella, C.; Chatterjee, D.; Cenko, S. B.; Cobb, B. E.; Copperwheat, C.; Corsi, A.; De, K.; Dobie, D.; Emery, S. W. K.; Evans, P. A.; Fox, O. D.; Frail, D. A.; Frohmaier, C.; Goobar, A.; Hallinan, G.; Harrison, F.; Helou, G.; Hinderer, T.; Ho, A. Y. Q.; Horesh, A.; Ip, W.-H.; Itoh, R.; Kasen, D.; Kim, H.; Kuin, N. P. M.; Kupfer, T.; Lynch, C.; Madsen, K.; Mazzali, P. A.; Miller, A. A.; Mooley, K.; Murphy, T.; Ngeow, C.-C.; Nichols, D.; Nissanke, S.; Nugent, P.; Ofek, E. O.; Qi, H.; Quimby, R. M.; Rosswog, S.; Rusu, F.; Sadler, E. M.; Schmidt, P.; Sollerman, J.; Steele, I.; Williamson, A. R.; Xu, Y.; Yan, L.; Yatsu, Y.; Zhang, C.; Zhao, W.
2017-12-01
Merging neutron stars offer an excellent laboratory for simultaneously studying strong-field gravity and matter in extreme environments. We establish the physical association of an electromagnetic counterpart (EM170817) with gravitational waves (GW170817) detected from merging neutron stars. By synthesizing a panchromatic data set, we demonstrate that merging neutron stars are a long-sought production site forging heavy elements by r-process nucleosynthesis. The weak gamma rays seen in EM170817 are dissimilar to classical short gamma-ray bursts with ultrarelativistic jets. Instead, we suggest that breakout of a wide-angle, mildly relativistic cocoon engulfing the jet explains the low-luminosity gamma rays, the high-luminosity ultraviolet-optical-infrared, and the delayed radio and x-ray emission. We posit that all neutron star mergers may lead to a wide-angle cocoon breakout, sometimes accompanied by a successful jet and sometimes by a choked jet.
Accurate Grid-based Clustering Algorithm with Diagonal Grid Searching and Merging
NASA Astrophysics Data System (ADS)
Liu, Feng; Ye, Chengcheng; Zhu, Erzhou
2017-09-01
Due to the advent of big data, data mining technology has attracted more and more attention. As an important data analysis method, grid clustering is fast but relatively inaccurate. This paper presents an improved clustering algorithm combining grid and density parameters. The algorithm first divides the data space into valid meshes and invalid meshes according to the grid parameters. Secondly, starting from the first point of the diagonal of the grid, the algorithm sweeps in the "horizontal right, vertical down" direction to merge the valid meshes. Furthermore, through boundary-grid processing, invalid grids are searched and merged when the adjacent left, above, and diagonal grids are all valid. By doing this, the accuracy of the clustering is improved. The experimental results show that the proposed algorithm is accurate and relatively fast compared with some popular algorithms.
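A toy version of the grid pass might mark cells valid when their point count crosses a density threshold, then sweep from the top-left corner merging each valid cell with its left, upper, and upper-left (diagonal) neighbours, with union-find keeping the bookkeeping simple. The threshold, sweep order, and names are illustrative assumptions, not the paper's algorithm in full (in particular, its boundary handling of invalid grids is omitted).

    import numpy as np

    def find(parent, i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def grid_cluster(points, cell, density_threshold):
        """Cluster 2D points by merging dense ('valid') grid cells."""
        ij = np.floor(points / cell).astype(int)
        counts = {}
        for key in map(tuple, ij):
            counts[key] = counts.get(key, 0) + 1
        valid = {k for k, c in counts.items() if c >= density_threshold}
        parent = {k: k for k in valid}
        for (i, j) in sorted(valid):  # sweep top-left to bottom-right
            for di, dj in ((-1, 0), (0, -1), (-1, -1)):  # up, left, diagonal
                nb = (i + di, j + dj)
                if nb in valid:
                    parent[find(parent, (i, j))] = find(parent, nb)
        return {k: find(parent, k) for k in valid}  # cell -> cluster root

Because cells, not points, are compared, the cost scales with the number of occupied grid cells, which is the usual source of grid clustering's speed advantage.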
Merging of the USGS Atlas of Mercury 1:5,000,000 Geologic Series
NASA Technical Reports Server (NTRS)
Frigeri, A.; Federico, C.; Pauselli, C.; Coradini, A.
2008-01-01
After 30 years, the planet Mercury is yielding new information. NASA's MESSENGER [1] made its first successful flyby in December 2007, while the European Space Agency and the Japanese space agency ISAS/JAXA are preparing the upcoming BepiColombo mission [2]. In order to contribute to current and future analyses of the geology of Mercury, we have started to work on the production of a single digital geologic map of Mercury, derived by merging the geologic maps of the Atlas of Mercury produced by the United States Geological Survey and based on Mariner 10 data. The aim of this work is to merge the nine maps so that the final product reflects the original work as closely as possible. Herein we describe the data we used, the working environment, and the steps taken to produce the final map.
Partial coalescence of drops at liquid interfaces
NASA Astrophysics Data System (ADS)
Blanchette, François; Bigioni, Terry P.
2006-04-01
When two separate masses of the same fluid are brought gently into contact, they are expected to fully merge into a single larger mass to minimize surface energy. However, when a stationary drop coalesces with an underlying reservoir of identical fluid, merging does not always proceed to completion. Occasionally, a drop in the process of merging apparently defies surface tension by `pinching off' before total coalescence occurs, leaving behind a smaller daughter droplet. Moreover, this process can repeat itself for subsequent generations of daughter droplets, resulting in a cascade of self-similar events. Such partial coalescence behaviour has implications for the dynamics of a variety of systems, including the droplets in clouds, ocean mist and airborne salt particles, emulsions, and the generation of vortices near an interface. Although it was first observed almost half a century ago, little is known about its precise mechanism. Here, we combine high-speed video imaging with numerical simulations to determine the conditions under which partial coalescence occurs, and to reveal a dynamic pinch-off mechanism. This mechanism is critically dependent on the ability of capillary waves to vertically stretch the drop by focusing energy on its summit.
NASA Astrophysics Data System (ADS)
Burns, Jack O.; Hallman, Eric J.; Alden, Brian; Datta, Abhirup; Rapetti, David
2017-06-01
We present early results from an X-ray/radio study of a sample of merging galaxy clusters. Using a novel X-ray pipeline, we have generated high-fidelity temperature maps from existing long-integration Chandra data for a set of clusters including Abell 115, A520, and MACSJ0717.5+3745. Our pipeline, written in Python and operating on the NASA ARC high-performance supercomputer Pleiades, generates temperature maps with minimal user interaction. This code will be released, with full documentation, on GitHub in beta to the community later this year. We have identified a population of observable shocks in the X-ray data that allow us to characterize the merging activity. In addition, we have compared the X-ray emission and properties to radio data from observations with the JVLA and GMRT. These merging clusters contain radio relics and/or radio halos in each case. These data products illuminate the merger process and how the energy of the merger is dissipated into thermal and non-thermal forms. This research was supported by NASA ADAP grant NNX15AE17G.
Improvement in Recursive Hierarchical Segmentation of Data
NASA Technical Reports Server (NTRS)
Tilton, James C.
2006-01-01
A further modification has been made in the algorithm and implementing software reported in Modified Recursive Hierarchical Segmentation of Data (GSC-14681-1), NASA Tech Briefs, Vol. 30, No. 6 (June 2006), page 51. That software performs recursive hierarchical segmentation of data having spatial characteristics (e.g., spectral-image data). The output of a prior version of the software contained artifacts, including spurious segmentation-image regions bounded by processing-window edges. The modification for suppressing the artifacts, mentioned in the cited article, was the addition of a subroutine that analyzes data in the vicinities of seams to find pairs of regions that tend to lie adjacent to each other on opposite sides of the seams. Within each such pair, pixels in one region that are more similar to pixels in the other region are reassigned to the other region. The present modification provides for a parameter ranging from 0 to 1 for controlling the relative priority of merges between spatially adjacent and spatially non-adjacent regions. At 1, spatially-adjacent and spatially-non-adjacent region merges have equal priority. At 0, only spatially-adjacent region merges (no spectral clustering) are allowed. Between 0 and 1, spatially-adjacent region merges have priority over spatially-non-adjacent ones.
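One plausible way such a 0-to-1 parameter can enter a merge criterion is sketched below (our illustration; the actual GSC-14681-1 formulation is not quoted here): dissimilarities of spatially non-adjacent candidate merges are divided by the weight w, so w = 0 forbids them entirely and w = 1 lets them compete with adjacent merges on equal terms.

```python
# Illustrative sketch (assumed formulation, not the NASA code): a weight w in
# [0, 1] scales the priority of spatially non-adjacent ("spectral") merges.
def best_merge(pairs, w):
    """pairs: (dissimilarity, adjacent?) tuples for candidate region merges.
    Non-adjacent dissimilarities are inflated by 1/w, so w=0 forbids them
    and w=1 lets them compete on equal terms with adjacent merges."""
    best, best_cost = None, float("inf")
    for dissim, adjacent in pairs:
        cost = dissim if adjacent else (dissim / w if w > 0 else float("inf"))
        if cost < best_cost:
            best, best_cost = (dissim, adjacent), cost
    return best

candidates = [(0.30, True), (0.25, False), (0.40, True)]
print(best_merge(candidates, w=1.0))   # the non-adjacent pair can win
print(best_merge(candidates, w=0.0))   # only adjacent merges are allowed
```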
Fluid Merging Viscosity Measurement (FMVM) Experiment on the International Space Station
NASA Technical Reports Server (NTRS)
Antar, Basil N.; Ethridge, Edwin; Lehman, Daniel; Kaukler, William
2007-01-01
The concept of using low-gravity experimental data together with fluid-dynamical numerical simulations to measure the viscosity of highly viscous liquids was recently validated on the International Space Station (ISS). After testing the proof of concept with parabolic flight experiments, an ISS experiment was proposed and later conducted onboard the ISS in July 2004 and again in May 2005. In that experiment, pairs of liquid drops were brought together manually until they touched and were then allowed to merge under the action of capillary forces alone. The merging process was recorded visually in order to measure the contact radius speed as the merging proceeded. Several liquids were tested, and for each liquid several drop diameters were used. It has been shown that when the coefficient of surface tension of the liquid is known, the contact radius speed can be used to determine the coefficient of viscosity of that liquid. The viscosity is determined by fitting the experimental speed to the theoretically calculated contact radius speed for the same experimental parameters. Experimental and numerical results will be presented in which the viscosity of several highly viscous liquids was determined, to a high degree of accuracy, using this technique.
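As a hedged illustration of the fitting step: in the highly viscous regime the coalescence neck radius is often taken to grow roughly linearly as r(t) ≈ C(σ/μ)t, with C an order-unity prefactor. Under that assumed scaling (not the FMVM team's full numerical model), the viscosity follows from a linear fit of the measured contact radius against time:

```python
# Illustrative sketch, not the FMVM analysis: fit r(t) ~ C*(sigma/mu)*t and
# invert the slope for the viscosity mu; sigma and C are assumed known.
import numpy as np

sigma = 0.021     # surface tension [N/m] -- assumed fluid property
C = 1.0           # dimensionless prefactor -- assumed, order unity

# Synthetic "measured" contact radius vs time for a fluid with mu = 5 Pa.s
t = np.linspace(0.0, 2.0, 50)
mu_true = 5.0
r = C * (sigma / mu_true) * t + np.random.normal(0, 1e-4, t.size)

slope = np.polyfit(t, r, 1)[0]        # linear fit: r = slope * t + intercept
mu_est = C * sigma / slope
print(f"estimated viscosity: {mu_est:.2f} Pa.s (true {mu_true})")
```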
Identifying a new particle with jet substructures
Han, Chengcheng; Kim, Doojin; Kim, Minho; ...
2017-01-09
Here, we investigate the potential of determining the properties of a new heavy resonance of mass O(1) TeV which decays to collimated jets via heavy Standard Model intermediary states, exploiting jet substructure techniques. Employing the Z gauge boson as a concrete example of the intermediary state, we utilize a "merged jet," defined by a large jet size, to capture the two quarks from its decay. The use of the merged jet benefits the identification of a Z-induced jet as a single, reconstructed object without any combinatorial ambiguity. We also find that jet substructure procedures may enhance features in some kinematic observables formed with subjet four-momenta extracted from a merged jet. This observation motivates us to feed subjet momenta into the matrix elements associated with plausible hypotheses on the nature of the heavy resonance, which are further processed to construct a matrix element method (MEM)-based observable. For both moderately and highly boosted Z bosons, we demonstrate that the MEM in combination with jet substructure techniques can be a very powerful tool for identifying its physical properties. Finally, we discuss effects from choosing different jet sizes for merged jets and jet-grooming parameters upon the MEM analyses.
Parallel ptychographic reconstruction
Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; ...
2014-12-19
Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can be used to image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than by an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source.
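The merging of per-GPU sub-results can be illustrated with a toy weighted-overlap stitch (our sketch; the paper's "novel techniques" additionally reconcile phase offsets between sub-images, which this skips):

```python
# Minimal sketch of merging per-GPU complex sub-reconstructions into one
# image by averaging over their overlap regions.
import numpy as np

def merge_tiles(tiles, offsets, full_shape):
    """tiles: list of complex 2D arrays; offsets: (row, col) of each tile."""
    acc = np.zeros(full_shape, dtype=complex)
    weight = np.zeros(full_shape)
    for tile, (r, c) in zip(tiles, offsets):
        h, w = tile.shape
        acc[r:r + h, c:c + w] += tile
        weight[r:r + h, c:c + w] += 1.0
    return acc / np.maximum(weight, 1)   # average where tiles overlap

tiles = [np.ones((8, 8), complex), 2 * np.ones((8, 8), complex)]
merged = merge_tiles(tiles, [(0, 0), (0, 4)], (8, 12))
print(merged[0, :])   # overlap columns 4..7 average to 1.5
```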
Slow secondary relaxation in a free-energy landscape model for relaxation in glass-forming liquids
NASA Astrophysics Data System (ADS)
Diezemann, Gregor; Mohanty, Udayan; Oppenheim, Irwin
1999-02-01
Within the framework of a free-energy landscape model for the relaxation in supercooled liquids the primary (α) relaxation is modeled by transitions among different free-energy minima. The secondary (β) relaxation then corresponds to intraminima relaxation. We consider a simple model for the reorientational motions of the molecules associated with both processes and calculate the dielectric susceptibility as well as the spin-lattice relaxation times. The parameters of the model can be chosen in a way that both quantities show a behavior similar to that observed in experimental studies on supercooled liquids. In particular we find that it is not possible to obtain a crossing of the time scales associated with α and β relaxation. In our model these processes always merge at high temperatures and the α process remains above the merging temperature. The relation to other models is discussed.
Identifying regions of interest in medical images using self-organizing maps.
Teng, Wei-Guang; Chang, Ping-Lin
2012-10-01
Advances in data acquisition, processing and visualization techniques have had a tremendous impact on medical imaging in recent years. However, the interpretation of medical images is still almost always performed by radiologists. Developments in artificial intelligence and image processing have shown the increasingly great potential of computer-aided diagnosis (CAD). Nevertheless, it has remained challenging to develop a general approach to process various commonly used types of medical images (e.g., X-ray, MRI, and ultrasound images). To facilitate diagnosis, we recommend the use of image segmentation to discover regions of interest (ROI) using self-organizing maps (SOM). We devise a two-stage SOM approach that can be used to precisely identify the dominant colors of a medical image and then segment it into several small regions. In addition, by appropriately conducting the recursive merging steps to merge smaller regions into larger ones, radiologists can usually identify one or more ROIs within a medical image.
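As a sketch of the first stage, a small one-dimensional self-organizing map can learn the dominant intensity levels of an image, which then seed segmentation and region merging; the parameters and the use of grayscale intensities here are our simplifications, not the authors' two-stage pipeline:

```python
# Toy 1D SOM for intensity quantization (assumed parameters).
import numpy as np

def train_som(samples, n_nodes=8, epochs=10, lr=0.5):
    nodes = np.linspace(samples.min(), samples.max(), n_nodes)
    for e in range(epochs):
        radius = max(n_nodes // 2 * (1 - e / epochs), 1)   # shrinking radius
        for x in np.random.permutation(samples):
            bmu = np.argmin(np.abs(nodes - x))             # best-matching unit
            for j in range(n_nodes):
                d = abs(j - bmu)
                if d <= radius:                            # neighborhood update
                    nodes[j] += lr * np.exp(-d**2 / (2 * radius**2)) * (x - nodes[j])
        lr *= 0.8
    return nodes

pixels = np.concatenate([np.random.normal(m, 5, 500) for m in (40, 120, 200)])
print(np.sort(train_som(pixels)))   # nodes settle roughly near 40, 120, 200
```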
Secure ADS-B authentication system and method
NASA Technical Reports Server (NTRS)
Viggiano, Marc J (Inventor); Valovage, Edward M (Inventor); Samuelson, Kenneth B (Inventor); Hall, Dana L (Inventor)
2010-01-01
A secure system for authenticating the identity of ADS-B systems, including: an authenticator, including a unique id generator and a transmitter transmitting the unique id to one or more ADS-B transmitters; one or more ADS-B transmitters, including a receiver receiving the unique id, one or more secure processing stages merging the unique id with the ADS-B transmitter's identification, data and secret key and generating a secure code identification and a transmitter transmitting a response containing the secure code and ADS-B transmitter's data to the authenticator; the authenticator including means for independently determining each ADS-B transmitter's secret key, a receiver receiving each ADS-B transmitter's response, one or more secure processing stages merging the unique id, ADS-B transmitter's identification and data and generating a secure code, and comparison processing comparing the authenticator-generated secure code and the ADS-B transmitter-generated secure code and providing an authentication signal based on the comparison result.
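A minimal sketch of the challenge-response flow, assuming the "secure processing stage" behaves like an HMAC (the patent does not specify this primitive; the identifiers below are hypothetical):

```python
import hmac, hashlib, os

def secure_code(unique_id: bytes, transmitter_id: bytes,
                data: bytes, secret_key: bytes) -> bytes:
    # Merge the challenge (unique id) with the transmitter's identity and data.
    return hmac.new(secret_key, unique_id + transmitter_id + data,
                    hashlib.sha256).digest()

# Authenticator issues a challenge; the transmitter responds.
challenge = os.urandom(16)
key = b"per-transmitter-secret"   # independently derivable by the authenticator
response = secure_code(challenge, b"ICAO-A1B2C3", b"pos=...", key)

# Authenticator recomputes the code and compares in constant time.
expected = secure_code(challenge, b"ICAO-A1B2C3", b"pos=...", key)
print("authenticated:", hmac.compare_digest(response, expected))
```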
Time-varying mixed logit model for vehicle merging behavior in work zone merging areas.
Weng, Jinxian; Du, Gang; Li, Dan; Yu, Yao
2018-08-01
This study aims to develop a time-varying mixed logit model for vehicle merging behavior in work zone merging areas during the merging implementation period, from the time a merging maneuver starts to the time it is completed. From the safety perspective, the crash probability and severity between the merging vehicle and its surrounding vehicles are regarded as major factors influencing merging decisions. Model results show that the model using vehicle crash probability and severity provides higher prediction accuracy than previous models using vehicle speeds and gap sizes. It is found that the lead vehicle type, through-lead vehicle type, through-lag vehicle type, crash probability of the merging vehicle with respect to the through-lag vehicle, and crash severities of the merging vehicle with respect to the through-lead and through-lag vehicles exhibit time-varying effects on merging behavior. One important finding is that the merging vehicle can become more and more aggressive over the elapsed time in order to complete the merging maneuver as quickly as possible, even if it has a high crash risk with respect to the through-lead and through-lag vehicles.
Stopped-flow enzyme assays on a chip using a microfabricated mixer.
Burke, Brian J; Regnier, Fred E
2003-04-15
This paper describes a microfabricated enzyme assay system including a micromixer that can be used to perform stopped-flow reactions. Samples and reagents were transported into the system by electroosmotic flow (EOF). Streams of reagents were merged and passed through the 100-pL micromixer in < 1 s. The objective of the work was to perform kinetically based enzyme assays in the stopped-flow mode using a system of roughly 6 nL volume. β-galactosidase (β-Gal) was chosen as a model enzyme for these studies and was used to convert the substrate fluorescein mono-β-D-galactopyranoside (FMG) into fluorescein. Results obtained with microfabricated systems using the micromixer compared well to those obtained with an external T mixing device. In contrast, assays performed in a microfabricated device by merging two streams and allowing mixing to occur by lateral diffusion did not compare well. Using the microfabricated mixer, Km and kcat values of 75 ± 13 µM and 44 ± 3 s⁻¹ were determined. These values compare well to those obtained with the conventional stopped-flow apparatus, for which Km was determined to be 60 ± 6 µM and kcat was 47 ± 4 s⁻¹. Enzyme inhibition assays with phenylethyl-β-D-thiogalactoside (PETG) were also comparable. It was concluded that kinetically based, stopped-flow enzyme assays can be performed in 60 s or less with a miniaturized system of roughly 6 nL liquid volume when mixing is assisted with the described device.
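The kinetic-constant extraction can be illustrated with a standard Michaelis-Menten fit; the data below are synthetic, not the paper's measurements, and the enzyme concentration is an assumed constant:

```python
# Hedged sketch: recover kcat and Km from initial-rate data by nonlinear
# least squares on the Michaelis-Menten equation v = kcat*[E0]*[S]/(Km+[S]).
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, kcat, Km, E0=1e-9):   # E0 fixed (assumed enzyme conc.)
    return kcat * E0 * S / (Km + S)

S = np.array([5, 10, 25, 50, 100, 200, 400]) * 1e-6        # substrate [M]
v_true = michaelis_menten(S, kcat=44.0, Km=75e-6)
v_obs = v_true * (1 + np.random.normal(0, 0.03, S.size))    # 3% noise

(kcat_fit, Km_fit), _ = curve_fit(michaelis_menten, S, v_obs, p0=[10, 1e-5])
print(f"kcat = {kcat_fit:.1f} 1/s, Km = {Km_fit * 1e6:.0f} uM")
```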
Leung, Kaston; Zahn, Hans; Leaver, Timothy; Konwar, Kishori M.; Hanson, Niels W.; Pagé, Antoine P.; Lo, Chien-Chi; Chain, Patrick S.; Hallam, Steven J.; Hansen, Carl L.
2012-01-01
We present a programmable droplet-based microfluidic device that combines the reconfigurable flow-routing capabilities of integrated microvalve technology with the sample compartmentalization and dispersion-free transport that is inherent to droplets. The device allows for the execution of user-defined multistep reaction protocols in 95 individually addressable nanoliter-volume storage chambers by consecutively merging programmable sequences of picoliter-volume droplets containing reagents or cells. This functionality is enabled by “flow-controlled wetting,” a droplet docking and merging mechanism that exploits the physics of droplet flow through a channel to control the precise location of droplet wetting. The device also allows for automated cross-contamination-free recovery of reaction products from individual chambers into standard microfuge tubes for downstream analysis. The combined features of programmability, addressability, and selective recovery provide a general hardware platform that can be reprogrammed for multiple applications. We demonstrate this versatility by implementing multiple single-cell experiment types with this device: bacterial cell sorting and cultivation, taxonomic gene identification, and high-throughput single-cell whole genome amplification and sequencing using common laboratory strains. Finally, we apply the device to genome analysis of single cells and microbial consortia from diverse environmental samples including a marine enrichment culture, deep-sea sediments, and the human oral cavity. The resulting datasets capture genotypic properties of individual cells and illuminate known and potentially unique partnerships between microbial community members. PMID:22547789
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khabarova, O.; Zank, G. P.; Li, G.
2015-08-01
Increases of ion fluxes in the keV–MeV range are sometimes observed near the heliospheric current sheet (HCS) during periods when other sources are absent. These resemble solar energetic particle events, but the events are weaker and apparently local. Conventional explanations based on either shock acceleration of charged particles or particle acceleration due to magnetic reconnection at interplanetary current sheets (CSs) are not persuasive. We suggest instead that recurrent magnetic reconnection occurs at the HCS and smaller CSs in the solar wind, a consequence of which is particle energization by the dynamically evolving secondary CSs and magnetic islands. The effectiveness of the trapping and acceleration process associated with magnetic islands depends in part on the topology of the HCS. We show that the HCS possesses ripples superimposed on the large-scale flat or wavy structure. We conjecture that the ripples can efficiently confine plasma and provide tokamak-like conditions that are favorable for the appearance of small-scale magnetic islands that merge and/or contract. Particles trapped in the vicinity of merging islands and experiencing multiple small-scale reconnection events are accelerated by the induced electric field and experience first-order Fermi acceleration in contracting magnetic islands according to the transport theory of Zank et al. We present multi-spacecraft observations of magnetic island merging and particle energization in the absence of other sources, providing support for theory and simulations that show particle energization by reconnection-related processes of magnetic island merging and contraction.
Newton, Katherine M; Peissig, Peggy L; Kho, Abel Ngo; Bielinski, Suzette J; Berg, Richard L; Choudhary, Vidhu; Basford, Melissa; Chute, Christopher G; Kullo, Iftikhar J; Li, Rongling; Pacheco, Jennifer A; Rasmussen, Luke V; Spangler, Leslie; Denny, Joshua C
2013-06-01
Genetic studies require precise phenotype definitions, but electronic medical record (EMR) phenotype data are recorded inconsistently and in a variety of formats. To present lessons learned about validation of EMR-based phenotypes from the Electronic Medical Records and Genomics (eMERGE) studies. The eMERGE network created and validated 13 EMR-derived phenotype algorithms. Network sites are Group Health, Marshfield Clinic, Mayo Clinic, Northwestern University, and Vanderbilt University. By validating EMR-derived phenotypes we learned that: (1) multisite validation improves phenotype algorithm accuracy; (2) targets for validation should be carefully considered and defined; (3) specifying time frames for review of variables eases validation time and improves accuracy; (4) using repeated measures requires defining the relevant time period and specifying the most meaningful value to be studied; (5) patient movement in and out of the health plan (transience) can result in incomplete or fragmented data; (6) the review scope should be defined carefully; (7) particular care is required in combining EMR and research data; (8) medication data can be assessed using claims, medications dispensed, or medications prescribed; (9) algorithm development and validation work best as an iterative process; and (10) validation by content experts or structured chart review can provide accurate results. Despite the diverse structure of the five EMRs of the eMERGE sites, we developed, validated, and successfully deployed 13 electronic phenotype algorithms. Validation is a worthwhile process that not only measures phenotype performance but also strengthens phenotype algorithm definitions and enhances their inter-institutional sharing.
Multi-azimuth 3D Seismic Exploration and Processing in the Jeju Basin, the Northern East China Sea
NASA Astrophysics Data System (ADS)
Yoon, Youngho; Kang, Moohee; Kim, Jin-Ho; Kim, Kyong-O.
2015-04-01
Multi-azimuth (MAZ) 3D seismic exploration is one of the most advanced seismic survey methods for improving illumination and multiple attenuation, yielding better images of subsurface structures. 3D multi-channel seismic data were collected in two phases during 2012, 2013, and 2014 in the Jeju Basin, the northern part of the East China Sea Basin, where several oil and gas fields have been discovered. Phase 1 data, acquired at 135° and 315° azimuths in 2012 and 2013, comprised full 3D marine seismic coverage of 160 km². In 2014, phase 2 data were acquired at azimuths of 45° and 225°, perpendicular to those of phase 1. The two datasets were processed through the same workflow prior to velocity analysis and merged into one MAZ dataset. We performed velocity analysis on the MAZ dataset as well as on the two phase datasets individually, and then stacked the three datasets separately. We were able to pick more accurate velocities in the MAZ dataset than in the phase 1 and 2 data. Consequently, the MAZ seismic volume provides better resolution and improved images, since different shooting directions illuminate different parts of the structures and stratigraphic features.
The Bologna Process between Structural Convergence and Institutional Diversity
ERIC Educational Resources Information Center
Dunkel, Torsten
2009-01-01
The merging of the Bologna and the Copenhagen processes into a single European education area appears appropriate, especially as general, vocational, adult and academic education are to be integrated in a future European Qualification Framework (EQF). This is the backdrop to the following description of the Bologna process, which was originally…
Exploration of a Dynamic Merging Scheme for Precipitation Estimation over a Small Urban Catchment
NASA Astrophysics Data System (ADS)
Al-Azerji, Sherien; Rico-Ramirez, Miguel, ,, Dr.; Han, Dawei, ,, Prof.
2016-04-01
The accuracy of quantitative precipitation estimation is of significant importance for urban areas due to the potentially damaging consequences of pluvial flooding. Improved accuracy can be accomplished by merging rain gauge measurements with weather radar data through different merging methods. Several factors may affect the accuracy of the merged data, and the gauge density used for merging is one of the most important. However, if there are no gauges inside the research area, a gauge network outside the research area can be used for the merging. Generally speaking, the denser the rain gauge network, the better the merging results that can be achieved. In practice, however, the rain gauge network around the research area is fixed, and the research question concerns the optimal merging area. The hypothesis is that if the merging area is too small, there are fewer gauges for merging and the result will be poor. If the merging area is too large, gauges far away from the research area are included in the merging; due to their large distances, those gauges provide little relevant information to the study and may even introduce noise. Therefore, an optimal merging area that produces the best merged rainfall estimate in the research area could exist. To test this hypothesis, the distance from the centre of the research area, and hence the number of merging gauges around it, was gradually increased, and merging with a new domain of radar data was then performed. The performance of the new merging scheme was validated against gridded rainfall interpolated from four experimental rain gauges installed inside the research area. The results show that there is indeed an optimum distance from the centre of the research area, and consequently an optimum number of rain gauges, that produces the best merged rainfall data inside the research area. This study is of practical value for estimating rainfall in an urban catchment (when there are no gauges available inside the catchment) by merging weather radar with rain gauge data from outside the catchment; this has not been reported in the literature before now.
MaMR: High-performance MapReduce programming model for material cloud applications
NASA Astrophysics Data System (ADS)
Jing, Weipeng; Tong, Danyu; Wang, Yangang; Wang, Jingyuan; Liu, Yaqiu; Zhao, Peng
2017-02-01
With increasing data sizes in materials science, existing programming models no longer satisfy the application requirements. MapReduce is a programming model that enables the easy development of scalable parallel applications to process big data on cloud computing systems. However, this model does not directly support the processing of multiple related datasets, and its processing performance does not reflect the advantages of cloud computing. To enhance the capability of workflow applications in material data processing, we define a programming model for material cloud applications that supports multiple different Map and Reduce functions running concurrently, based on a hybrid shared-memory BSP model, called MaMR. An optimized data-sharing strategy to supply shared data to the different Map and Reduce stages was also designed. We added a new merge phase to MapReduce that can efficiently merge data from the map and reduce modules. Experiments showed that the model and framework achieve effective performance improvements over previous work.
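A toy sketch of the idea of a merge phase appended to MapReduce (our illustration, not the MaMR framework): two concurrent map/reduce pipelines run over the same records, and a merge step joins their outputs per key:

```python
from collections import defaultdict

def map_phase(records, map_fn):
    buckets = defaultdict(list)
    for rec in records:
        for k, v in map_fn(rec):
            buckets[k].append(v)
    return buckets

def reduce_phase(buckets, reduce_fn):
    return {k: reduce_fn(vs) for k, vs in buckets.items()}

# Two different map/reduce pipelines over the same material dataset...
records = [("Fe", 7.87), ("Al", 2.70), ("Fe", 7.90), ("Al", 2.71)]
counts = reduce_phase(map_phase(records, lambda r: [(r[0], 1)]), sum)
dens_sum = reduce_phase(map_phase(records, lambda r: [(r[0], r[1])]), sum)

# ...and a merge phase joining both reduce outputs into one result per key.
merged = {k: dens_sum[k] / counts[k] for k in counts}   # mean density
print(merged)   # {'Fe': 7.885, 'Al': 2.705}
```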
Thali, Michael J; Braun, Marcel; Wirth, Joachim; Vock, Peter; Dirnhofer, Richard
2003-11-01
A main goal of forensic medicine is to document and to translate medical findings into a language and/or visualization that is readable and understandable for judicial persons and for medical laymen. Therefore, in addition to classical methods, scientific cutting-edge technologies can and should be used. Through the use of the forensic, 3D/CAD-supported photogrammetric method, the documentation of so-called "morphologic fingerprints" has been realized. Forensic, 3D/CAD-supported photogrammetry creates morphologic data models of the injury and of the suspected injury-causing instrument, allowing the evaluation of a match between the injury and the instrument. In addition to the photogrammetric body surface registration, the radiological documentation provided by a volume scan (i.e., spiral, multi-detector CT, or MRI) registers the sub-surface injury, which is not visible to photogrammetry. The new, combined method of merging photogrammetry and radiology data sets creates the potential to perform many kinds of reconstructions and post-processing of (patterned) injuries in forensic medical casework. Using this merging method of colored photogrammetric surface and gray-scale radiological internal documentation, a great step is made towards a new kind of reality-based, high-tech wound documentation and visualization in forensic medicine. The combination of 3D/CAD photogrammetry and radiology has the advantage of being observer-independent, non-subjective, non-invasive, digitally storable over years or decades, and even transferable over the web for a second opinion.
The role of neutron star mergers in the chemical evolution of the Galactic halo
NASA Astrophysics Data System (ADS)
Cescutti, G.; Romano, D.; Matteucci, F.; Chiappini, C.; Hirschi, R.
2015-05-01
Context. The dominant astrophysical production site of the r-process elements has not yet been unambiguously identified. The suggested main r-process sites are core-collapse supernovae and merging neutron stars. Aims: We explore the problem of the production site of Eu. We also use the information present in the observed spread in the Eu abundances in the early Galaxy, and not only its average trend. Moreover, we extend our investigations to other heavy elements (Ba, Sr, Rb, Zr) to provide additional constraints on our results. Methods: We adopt a stochastic chemical evolution model that takes inhomogeneous mixing into account. The adopted yields of Eu from merging neutron stars and from core-collapse supernovae are those that are able to explain the average [Eu/Fe]-[Fe/H] trend observed for solar neighbourhood stars, the solar abundance of Eu, and the present-day abundance gradient of Eu along the Galactic disc in the framework of a well-tested homogeneous model for the chemical evolution of the Milky Way. Rb, Sr, Zr, and Ba are produced by both the s- and r-processes. The r-process yields were obtained by scaling the Eu yields described above according to the abundance ratios observed in r-process rich stars. The s-process contribution by spinstars is the same as in our previous papers. Results: Neutron star binaries that merge in less than 10 Myr or neutron star mergers combined with a source of r-process generated by massive stars can explain the spread of [Eu/Fe] in the Galactic halo. The combination of r-process production by neutron star mergers and s-process production by spinstars is able to reproduce the available observational data for Sr, Zr, and Ba. We also show the first predictions for Rb in the Galactic halo. Conclusions: We confirm previous results that either neutron star mergers on a very short timescale or both neutron star mergers and at least a fraction of Type II supernovae have contributed to the synthesis of Eu in the Galaxy. The r-process production of Sr, Zr, and Ba by neutron star mergers - complemented by an s-process production by spinstars - provide results that are compatible with our previous findings based on other r-process sites. We critically discuss the weak and strong points of both neutron star merging and supernova scenarios for producing Eu and eventually suggest that the best solution is probably a mixed one in which both sources produce Eu. In fact, this scenario reproduces the scatter observed in all the studied elements better.
Applicability of Zipper Merge Versus Early Merge in Kentucky Work Zones
DOT National Transportation Integrated Search
2017-12-24
In an effort to improve work zone safety and streamline traffic flows, a number of state transportation agencies (STAs) have experimented with the zipper merge. The zipper merge differs from a conventional, or early, merge in that vehicles do not mer...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Key, Baris
2014-05-29
Baris Key, an employee at Argonne National Laboratory, discusses the importance of national laboratory researchers and how they merge basic science with analysis and processing in a way that industry can benefit from.
NASA Astrophysics Data System (ADS)
Cautun, Marius; van de Weygaert, Rien; Jones, Bernard J. T.; Frenk, Carlos S.
2014-07-01
The cosmic web is the largest-scale manifestation of the anisotropic gravitational collapse of matter. It represents the transitional stage between linear and non-linear structures and contains easily accessible information about the early phases of structure formation processes. Here we investigate the characteristics and the time evolution of its morphological components. Our analysis involves the application of the NEXUS Multiscale Morphology Filter technique, predominantly its NEXUS+ version, to high-resolution and large-volume cosmological simulations. We quantify the cosmic web components in terms of their mass and volume content, their density distribution, and their halo populations. We employ new analysis techniques to determine the spatial extent of filaments and sheets, such as their total length and local width. This analysis identifies clusters and filaments as the most prominent components of the web. In contrast, while voids and sheets take up most of the volume, they correspond to underdense environments and are devoid of group-sized and more massive haloes. At early times the cosmos is dominated by tenuous filaments and sheets, which, during subsequent evolution, merge together, such that the present-day web is dominated by fewer, but much more massive, structures. The analysis of the mass transport between environments clearly shows how matter flows from voids into walls, and then via filaments into cluster regions, which form the nodes of the cosmic web. We also study the properties of individual filamentary branches and find long, almost straight, filaments extending to distances larger than 100 h⁻¹ Mpc. These constitute the bridges between massive clusters, which seem to form along approximately straight lines.
Medical group mergers: strategies for success.
Latham, Will
2014-01-01
As consolidation sweeps over the healthcare industry, many medical groups are considering mergers with other groups as an alternative to employment. While mergers are challenging and fraught with risk, an organized approach to the merger process can dramatically increase the odds for success. Merging groups need to consider the benefits they seek from a merger, identify the obstacles that must be overcome to merge, and develop alternatives to overcome those obstacles. This article addresses the benefits to be gained and issues to be addressed, and provides a tested roadmap that has resulted in many successful medical group mergers.
A merged pipe organ binary-analog correlator
NASA Astrophysics Data System (ADS)
Miller, R. S.; Berry, M. B.
1982-02-01
The design of a 96-stage, programmable binary-analog correlator is described. An array of charge-coupled device (CCD) delay lines of differing lengths performs the delay and sum functions. Merging of several CCD channels is employed to reduce the active area. This device architecture allows simplified output detection while maintaining good device performance at higher speeds (5-10 MHz). Experimental results indicate a 50 dB broadband dynamic range and excellent agreement with the theoretical processing gain (19.8 dB) when operated at a 6 MHz sampling frequency as a PN-sequence matched filter.
The Einstein Observatory catalog of IPC x ray sources. Volume 1E: Documentation
NASA Technical Reports Server (NTRS)
Harris, D. E.; Forman, W.; Gioia, I. M.; Hale, J. A.; Harnden, F. R., Jr.; Jones, C.; Karakashian, T.; Maccacaro, T.; Mcsweeney, J. D.; Primini, F. A.
1993-01-01
The Einstein Observatory (HEAO-2, launched November 13, 1978) achieved radically improved sensitivity over previous x-ray missions through the use of focusing optics, which simultaneously afforded greatly reduced background and produced true images. During its 2.5-yr mission, the Einstein X-Ray Telescope was pointed toward some 5,000 celestial targets, most of which were detected, and discovered several thousand additional 'serendipitous' sources in the observed fields. This catalog contains contour diagrams and source data, obtained with the imaging proportional counter in the 0.16 to 3.5 keV energy band, and describes methods for recovering upper limits for any sky position within the observed images. The main catalog consists of six volumes (numbered 2 through 7) of right ascension ordered pages, each containing data for one observation. Along with the primary documentation describing how the catalog was constructed, volume 1 contains a complete source list, results for merged fields, a reference system to published papers, and data useful for calculating upper limits and fluxes.
1981-09-01
[Figure residue from a scanned report; recoverable captions: Figure 3.3-4, Example Block Diagram; Figure 3.3-6, Point-to-Point Interconnect Diagram. The legible text fragments note that the wires are merged, or joined, at the interconnect, and that high-amplitude vibration can result in a "bottoming-out" of the isolators; for a properly selected rubber mount, the wearing should be conservative.]
Improving real-time efficiency of case-based reasoning for medical diagnosis.
Park, Yoon-Joo
2014-01-01
Conventional case-based reasoning (CBR) does not perform efficiently on high-volume datasets because of case-retrieval time. Some previous studies overcome this problem by clustering the case base into several small groups and retrieving neighbors within the group corresponding to a target case. However, this approach generally produces less accurate predictions than conventional CBR. This paper suggests a new case-based reasoning method, called Clustering-Merging CBR (CM-CBR), which produces a similar level of predictive performance to conventional CBR while incurring significantly less computational cost.
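The general cluster-scoped retrieval idea reads roughly as follows (our sketch; the specific cluster-merging step that gives CM-CBR its accuracy is simplified away here):

```python
# Cluster the case base once, then retrieve neighbors only from the cluster
# nearest to each target case instead of searching the full base.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
cases = rng.normal(size=(5000, 8))           # case base: feature vectors
outcomes = rng.integers(0, 2, 5000)          # known outcome per case

km = KMeans(n_clusters=20, n_init=10).fit(cases)
members = {c: np.where(km.labels_ == c)[0] for c in range(20)}

def retrieve(target, k=5):
    c = km.predict(target[None, :])[0]       # cluster nearest to the target
    idx = members[c]
    nn = NearestNeighbors(n_neighbors=min(k, len(idx))).fit(cases[idx])
    _, local = nn.kneighbors(target[None, :])
    return idx[local[0]]                     # indices of retrieved cases

neighbors = retrieve(rng.normal(size=8))
print("predicted outcome:", outcomes[neighbors].mean().round())
```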
Moore, G.K.; Baten, L.G.; Allord, G.J.; Robinove, C.J.
1983-01-01
The Fox-Wolf River basin in east-central Wisconsin was selected to test concepts for a water-resources information system using digital mapping technology. This basin of 16,800 sq km is typical of many areas in the country. Fifty digital data sets were included in the Fox-Wolf information system. Many data sets were digitized from 1:500,000 scale maps and overlays. Some thematic data were acquired from WATSTORE and other digital data files. All data were geometrically transformed into a Lambert Conformal Conic map projection and converted to a raster format with a 1-km resolution. The result of this preliminary processing was a group of spatially registered, digital data sets in map form. Parameter evaluation, areal stratification, data merging, and data integration were used to achieve the processing objectives and to obtain analysis results for the Fox-Wolf basin. Parameter evaluation includes the visual interpretation of single data sets and digital processing to obtain new derived data sets. In the areal stratification stage, masks were used to extract from one data set all features that are within a selected area on another data set. Most processing results were obtained by data merging. Merging is the combination of two or more data sets into a composite product, in which the contribution of each original data set is apparent and can be extracted from the composite. One processing result was also obtained by data integration. Integration is the combination of two or more data sets into a single new product, from which the original data cannot be separated or calculated. (USGS)
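The paper's distinction between merging and integration can be shown in a few lines (our illustration, not the USGS system): merging keeps each source data set recoverable from the composite, while integration produces a single product the sources cannot be split from.

```python
import numpy as np

landcover = np.array([[1, 2], [2, 3]])          # registered 1-km raster themes
elevation = np.array([[200, 240], [180, 210]])

# Merging: stack as bands of one composite; each band is still extractable.
composite = np.stack([landcover, elevation])
assert (composite[0] == landcover).all()

# Integration: combine into a new derived value; the sources are no longer
# separable or recalculable from the result.
suitability = (elevation < 220) & (landcover != 3)
print(suitability)
```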
A Hybrid Shared-Memory Parallel Max-Tree Algorithm for Extreme Dynamic-Range Images.
Moschini, Ugo; Meijster, Arnold; Wilkinson, Michael H F
2018-03-01
Max-trees, or component trees, are graph structures that represent the connected components of an image in a hierarchical way. Nowadays, many application fields rely on images with high dynamic range or floating-point values. Efficient sequential algorithms exist to build trees and compute attributes for images of any bit depth. However, we show that current parallel algorithms already perform poorly with integers at bit depths higher than 16 bits per pixel. We propose a parallel method combining the two worlds of flooding and merging max-tree algorithms. First, a pilot max-tree of a quantized version of the image is built in parallel using a flooding method. Later, this structure is used in a parallel leaf-to-root approach to compute the final max-tree efficiently and to drive the merging of the sub-trees computed by the threads. We present an analysis of the performance on both simulated and actual 2D images and 3D volumes. Execution times improve on those of the fastest sequential algorithm, and speed-up continues to scale up to 64 threads.
Wang, Jingtao; Liu, Jinxia; Han, Junjie; Guan, Jing
2013-02-08
A boundary integral method is developed to investigate the effects of inner droplets and of the asymmetry of internal structures on the rheology of two-dimensional multiple emulsion particles with arbitrary numbers of layers and droplets within each layer. Under a modest extensional flow, an increase in the number of layers and inner droplets, as well as collisions among inner droplets, subject the particle to stronger shears. In addition, the coalescence or release of inner droplets changes the internal structure of the multiple emulsion particles. Since the rheology of such particles is sensitive to internal structures and their changes, modeling them as core-shell particles to obtain the viscosity equation of a single particle should be modified by introducing a time-dependent volume fraction Φ(t) of the core instead of a fixed Φ. An asymmetric internal structure induces an oriented contact and merging of the outer and inner interfaces. The start time of the interface merging can be controlled by adjusting the viscosity ratio and enhancing the asymmetry, which is promising for the controlled release of inner droplets through hydrodynamics for targeted drug delivery.
Liang, Qin-Qin; Li, Yong-Sheng
2013-12-01
An accurate and rapid method, and a system, for determining protein content using asynchronous-injection alternating merging zone flow-injection spectrophotometry, based on the reaction between Coomassie brilliant blue G250 (CBBG) and protein, was established. The main merit of our approach is that it avoids interference from other nitrogenous compounds in samples, such as melamine and urea. Optimized conditions are as follows: concentrations of CBBG, polyvinyl alcohol (PVA), NaCl, and HCl are 150 mg/l, 30 mg/l, 0.1 mol/l, and 1.0% (v/v), respectively; volumes of the sample and reagent are 150 μl and 30 μl, respectively; the length of the reaction coil is 200 cm; and the total flow rate is 2.65 ml/min. The linear range of the method is 0.5-15 mg/l (BSA), its detection limit is 0.05 mg/l, the relative standard deviation is less than 1.87% (n=11), and the analytical speed is 60 samples per hour.
Wild salmon response to natural disturbance processes
Russ Thurow; John M. Buffington
2016-01-01
Dynamic landscapes are shaped by a variety of natural processes and disturbances operating across multiple temporal and spatial scales. Persistence of species in these dynamic environments is also a matter of scale: how do species dispersal and reproductive rates merge with the scales of disturbance?
Subscription merging in filter-based publish/subscribe systems
NASA Astrophysics Data System (ADS)
Zhang, Shengdong; Shen, Rui
2013-03-01
Filter-based publish/subscribe systems suffer from high subscription maintenance costs because each broker in the system stores a large number of subscriptions. Advertisement and covering are not sufficient to overcome this problem; thus, subscription merging has been proposed. However, current research lacks an efficient and practical merging mechanism. In this paper, we propose a novel subscription merging mechanism that is both time and space efficient and can flexibly control the merging granularity. The mechanism has been verified through both theoretical and simulation-based evaluation.
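For intuition, a toy merger for interval subscriptions on a single attribute might look like this (our sketch; `max_slack` is an assumed knob standing in for the paper's granularity control):

```python
def merge_subscriptions(subs, max_slack=0.2):
    """Merge overlapping/nearby [lo, hi] subscriptions on one attribute when
    the merged interval adds at most `max_slack` relative spurious width."""
    subs = sorted(subs)
    merged = [list(subs[0])]
    for lo, hi in subs[1:]:
        cur = merged[-1]
        union = [min(cur[0], lo), max(cur[1], hi)]
        covered = (cur[1] - cur[0]) + (hi - lo)
        slack = (union[1] - union[0]) - covered   # width no subscriber asked for
        if slack <= max_slack * (union[1] - union[0]):
            merged[-1] = union                    # coarser filter, fewer entries
        else:
            merged.append([lo, hi])               # keep separate (finer grain)
    return merged

print(merge_subscriptions([(10, 20), (18, 30), (50, 60)]))
# -> [[10, 30], [50, 60]]: the broker stores 2 filters instead of 3
```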
Ohta, Shinri; Fukui, Naoki; Sakai, Kuniyoshi L.
2013-01-01
The nature of computational principles of syntax remains to be elucidated. One promising approach to this problem would be to construct formal and abstract linguistic models that parametrically predict the activation modulations in the regions specialized for linguistic processes. In this article, we review recent advances in theoretical linguistics and functional neuroimaging in the following respects. First, we introduce the two fundamental linguistic operations: Merge (which combines two words or phrases to form a larger structure) and Search (which searches and establishes a syntactic relation of two words or phrases). We also illustrate certain universal properties of human language, and present hypotheses regarding how sentence structures are processed in the brain. Hypothesis I is that the Degree of Merger (DoM), i.e., the maximum depth of merged subtrees within a given domain, is a key computational concept to properly measure the complexity of tree structures. Hypothesis II is that the basic frame of the syntactic structure of a given linguistic expression is determined essentially by functional elements, which trigger Merge and Search. We then present our recent functional magnetic resonance imaging experiment, demonstrating that the DoM is indeed a key syntactic factor that accounts for syntax-selective activations in the left inferior frontal gyrus and supramarginal gyrus. Hypothesis III is that the DoM domain changes dynamically in accordance with iterative Merge applications, the Search distances, and/or task requirements. We confirm that the DoM accounts for activations in various sentence types. Hypothesis III successfully explains activation differences between object- and subject-relative clauses, as well as activations during explicit syntactic judgment tasks. A future research on the computational principles of syntax will further deepen our understanding of uniquely human mental faculties. PMID:24385957
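The DoM is easy to illustrate: treating Merge as pairing two objects, the DoM of an expression is the maximum nesting depth of the resulting tree. A toy computation follows (the binary bracketing of the example phrase is our assumption):

```python
def merge(left, right):
    return (left, right)                 # Merge combines two objects into one

def dom(node):
    """Degree of Merger: maximum depth of merged subtrees."""
    if isinstance(node, str):            # a bare word has depth 0
        return 0
    return 1 + max(dom(node[0]), dom(node[1]))

# "[the [big dog]] [chased [the cat]]" under an assumed binary bracketing:
np1 = merge("the", merge("big", "dog"))
vp = merge("chased", merge("the", "cat"))
sentence = merge(np1, vp)
print(dom(sentence))   # 3: the deepest chain of Merge applications
```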
Retinal Connectomics: Towards Complete, Accurate Networks
Marc, Robert E.; Jones, Bryan W.; Watt, Carl B.; Anderson, James R.; Sigulinsky, Crystal; Lauritzen, Scott
2013-01-01
Connectomics is a strategy for mapping complex neural networks based on high-speed automated electron optical imaging, computational assembly of neural data volumes, web-based navigational tools to explore 10¹²–10¹⁵ byte (terabyte to petabyte) image volumes, and annotation and markup tools to convert images into rich networks with cellular metadata. These collections of network data and associated metadata, analyzed using tools from graph theory and classification theory, can be merged with classical systems theory, giving a more completely parameterized view of how biologic information processing systems are implemented in retina and brain. Networks have two separable features: topology and connection attributes. The first findings from connectomics strongly validate the idea that the topologies of complete retinal networks are far more complex than the simple schematics that emerged from classical anatomy. In particular, connectomics has permitted an aggressive refactoring of the retinal inner plexiform layer, demonstrating that network function cannot be simply inferred from stratification; exposing the complex geometric rules for inserting different cells into a shared network; revealing unexpected bidirectional signaling pathways between mammalian rod and cone systems; documenting selective feedforward systems, novel candidate signaling architectures, new coupling motifs, and the highly complex architecture of the mammalian AII amacrine cell. This is but the beginning, as the underlying principles of connectomics are readily transferrable to non-neural cell complexes and provide new contexts for assessing intercellular communication. PMID:24016532
A mechanistic understanding of the wear coefficient: From single to multiple asperities contact
NASA Astrophysics Data System (ADS)
Frérot, Lucas; Aghababaei, Ramin; Molinari, Jean-François
2018-05-01
Sliding contact between solids leads to material detaching from their surfaces in the form of debris particles, a process known as wear. According to the well-known Archard wear model, the wear volume (i.e. the volume of detached particles) is proportional to the load and the sliding distance, while being inversely proportional to the hardness. The influence of other parameters is empirically merged into a factor, referred to as the wear coefficient, which does not stem from any theoretical development, thus limiting the predictive capacity of the model. Based on a recent understanding of a critical length-scale controlling wear particle formation, we present two novel derivations of the wear coefficient: one based on Archard's interpretation of the wear coefficient as the probability of wear particle detachment, and one that follows naturally from the up-scaling of asperity-level physics into a generic multi-asperity wear model. As a result, the variation of wear rate and wear coefficient is discussed in terms of the properties of the interface, surface roughness parameters, and applied load for various rough contact situations. Both new wear interpretations are evaluated analytically and numerically, and recover key features of wear observed in experiments. This work shines new light on the understanding of wear, potentially opening a pathway for calculating the wear coefficient from first principles.
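For reference, the Archard relation summarized above is conventionally written as follows (standard textbook form, with K the empirical wear coefficient):

```latex
% Archard wear law: wear volume V grows with normal load W and sliding
% distance s, and falls with hardness H; K is the empirical wear coefficient.
\[
  V \;=\; K \,\frac{W\, s}{H}
\]
```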
Dual Active Galactic Nuclei in Nearby Galaxies
NASA Astrophysics Data System (ADS)
Das, Mousumi; Rubinur, Khatun; Karb, Preeti; Varghese, Ashlin; Novakkuni, Navyasree; James, Atul
2018-04-01
Galaxy mergers play a crucial role in the formation of massive galaxies and the buildup of their bulges. An important aspect of the merging process is the in-spiral of the supermassive black-holes (SMBHs) to the centre of the merger remnant and the eventual formation of a SMBH binary. If both the SMBHs are accreting they will form a dual or binary active galactic nucleus (DAGN). The final merger remnant is usually very bright and shows enhanced star formation. In this paper we summarise the current sample of DAGN from previous studies and describe methods that can be used to identify strong DAGN candidates from optical and spectroscopic surveys. These methods depend on the Doppler separation of the double peaked AGN emission lines, the nuclear velocity dispersion of the galaxies and their optical/UV colours. We describe two high resolution, radio observations of DAGN candidates that have been selected based on their double peaked optical emission lines (DPAGN). We also examine whether DAGN host galaxies have higher star formation rates (SFRs) compared to merging galaxies that do not appear to have DAGN. We find that the SFR is not higher for DAGN host galaxies. This suggests that the SFRs in DAGN host galaxies is due to the merging process itself and not related to the presence of two AGN in the system.
A TRMM-Based System for Real-Time Quasi-Global Merged Precipitation Estimates
NASA Technical Reports Server (NTRS)
Starr, David OC. (Technical Monitor); Huffman, G. J.; Adler, R. F.; Stocker, E. F.; Bolvin, D. T.; Nelkin, E. J.
2002-01-01
A new processing system has been developed to combine IR and microwave data into 0.25° x 0.25° gridded precipitation estimates in near-real time over the latitude band ±50°. Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) precipitation estimates are used to calibrate Special Sensor Microwave/Imager (SSM/I) estimates, as well as Advanced Microwave Sounding Unit (AMSU) and Advanced Microwave Scanning Radiometer (AMSR) estimates when available. The merged microwave estimates are then used to create a calibrated IR estimate in a probability-matched-threshold approach for each individual hour. The microwave and IR estimates are combined for each 3-hour interval. Early results will be shown, including typical tropical and extratropical storm evolution and examples of the diurnal cycle. Major issues will be discussed, including the choice of IR algorithm, the approach for merging the IR and microwave estimates, extension to higher latitudes, retrospective processing back to 1999, and extension to the GPCP One-Degree Daily product (for which the authors are responsible). The work described here provides one approach to using data from the future NASA Global Precipitation Measurement program, which is designed to provide full global coverage by low-orbit passive microwave satellites every three hours beginning around 2008.
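The probability-matching idea behind the IR calibration can be sketched with quantile matching (synthetic numbers, not the TRMM products): cold IR cloud tops are mapped to high microwave rain rates so that the two distributions agree.

```python
import numpy as np

rng = np.random.default_rng(1)
mw_rain = rng.gamma(0.5, 4.0, 10_000)            # microwave rain rates [mm/h]
ir_tb = 300 - 40 * rng.random(10_000)            # IR brightness temps [K]

# Match quantiles: cold IR temperatures correspond to high rain rates.
q = np.linspace(0, 1, 101)
tb_q = np.quantile(ir_tb, q)                     # ascending temperatures
rain_q = np.quantile(mw_rain, 1 - q)             # descending rain rates

def ir_to_rain(tb):
    return np.interp(tb, tb_q, rain_q)

print(ir_to_rain(np.array([265.0, 295.0])))      # colder cloud -> more rain
```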
Casimir quantum levitation tuned by means of material properties and geometries
NASA Astrophysics Data System (ADS)
Dou, Maofeng; Lou, Fei; Boström, Mathias; Brevik, Iver; Persson, Clas
2014-05-01
The Casimir force between two surfaces is attractive in most cases. Although stable suspension of nano-objects has been achieved, the sophisticated geometries involved make them difficult to merge with well-established thin-film processes. We find that by introducing a thin-film surface coating on porous substrates, a repulsive-to-attractive force transition is achieved as the separation is increased in planar geometries, resulting in a stable suspension of the two surfaces near the force-transition separation. Both the magnitude of the force and the transition distance can be flexibly tailored through modifying the properties of the considered materials, that is, thin-film thickness, doping concentration, and porosity. This stable suspension can be used to design new nanodevices with ultralow friction. Moreover, it might be convenient to merge this thin-film coating approach with micro- and nanofabrication processes in the future.
Kovač, Marko; Bauer, Arthur; Ståhl, Göran
2014-01-01
Background, Material and Methods: To meet the demands of sustainable forest management and international commitments, European nations have designed a variety of forest-monitoring systems for specific needs. While the majority of countries are committed to independent, single-purpose inventorying, a minority of countries have merged their single-purpose forest inventory systems into integrated forest resource inventories. The statistical efficiencies of the Bavarian, Slovene and Swedish integrated forest resource inventory designs are investigated with the various statistical parameters of the variables of growing stock volume, shares of damaged trees, and deadwood volume. The parameters are derived by using the estimators for the given inventory designs. The required sample sizes are derived via the general formula for non-stratified independent samples and via statistical power analyses. The cost effectiveness of the designs is compared via two simple cost-effectiveness ratios. Results: In terms of precision, the most illustrative parameters of the variables are relative standard errors; their values range between 1% and 3% if the variables' variations are low (s% < 80%) and are higher in the case of higher variations. A comparison of the actual and required sample sizes shows that the actual sample sizes were deliberately set high to provide precise estimates for the majority of variables and strata. In turn, the successive inventories are statistically efficient, because they allow detecting the mean changes of variables with powers higher than 90%; the highest precision is attained for the changes of growing stock volume and the lowest for the changes of the shares of damaged trees. Two indicators of cost effectiveness also show that the time input spent for measuring one variable decreases with the complexity of inventories. Conclusion: There is an increasing need for credible information on forest resources to be used for decision making and national and international policy making. Such information can be cost-efficiently provided through integrated forest resource inventories. PMID:24941120
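The "general formula for non-stratified independent samples" mentioned above is commonly written as follows (a standard sampling-statistics form; the notation is ours, inferred rather than quoted from the paper):

```latex
% Required sample size n for a simple random (non-stratified) sample:
%   t   critical t-value for the chosen confidence level
%   s%  coefficient of variation of the target variable
%   e%  allowable relative sampling error
\[
  n \;=\; \frac{t^{2}\,(s\%)^{2}}{(e\%)^{2}}
\]
```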
Mining CANDELS for Tidal Features to Constrain Major Merging During Cosmic Noon
NASA Astrophysics Data System (ADS)
McIntosh, Daniel H.; Mantha, Kameswara; Ciaschi, Cody; Evan, Rubyet A.; Fries, Logan B.; Landry, Luther; Thompson, Scott E.; Snyder, Gregory; Guo, Yicheng; Ceverino, Daniel; Häußler, Boris; Primack, Joel; Simons, Raymond C.; Zheng, Xianzhong; Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (CANDELS) Team
2018-01-01
The role of major merging in the rapid buildup and development of massive galaxies at z>1 remains an open question. New theories and observations suggest that non-merging processes like violent disk instabilities may be more vital than previously thought at assembling bulges, producing clumps, and inducing morphological disturbances that may be misinterpreted as the product of major merging. We will present initial results on a systematic search for hallmark tidal indicators of major merging in a complete sample of nearly 6000 massive z>1 galaxies from CANDELS (Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey), the premier HST/WFC3 Treasury program. We have visually inspected published GALFIT F160W residual (image-model) maps and produced a comprehensive new catalog of Sersic residual characteristics based on a variety of natural features and poor-fit artifacts. Using this catalog, we find the frequency of galaxies with tidal signatures is very small in CANDELS data. Accounting for the brief time scale associated with faint transient tidal features, our preliminary finding indicates that merger fractions derived from the CANDELS morphological classification efforts are substantially overestimated. We are using the database of residual classifications as a baseline to (1) produce improved multi-component residual maps using GALFIT_M, (2) automatically extract and quantify plausible tidal indicators and substructures (clumps vs. multiple nuclei), (3) develop a new deep-learning classification pipeline to robustly identify merger indicators in imaging data, and (4) inform the systematic analyses of synthetic mock (CANDELized) images from zoom-in hydrodynamic simulations to thoroughly quantify the impacts of cosmological dimming, and calibrate the observability timescale of tidal feature detections. Our study will ultimately yield novel constraints on merger rates at z>1 and a definitive census of massive high-noon galaxies with tidal and double-nuclei merging signatures in rest-frame optical HST imaging.
Hondroulis, Evangelia; Movila, Alexandru; Sabhachandani, Pooja; Sarkar, Saheli; Cohen, Noa; Kawai, Toshihisa; Konry, Tania
2017-03-01
Microfluidic droplets are used to isolate cell pairs and prevent crosstalk with neighboring cells, while permitting free motility and interaction within the confined space. Dynamic analysis of cellular heterogeneity in droplets has provided insights in various biological processes. Droplet manipulation methods such as fusion and fission make it possible to precisely regulate the localized environment of a cell in a droplet and deliver reagents as required. Droplet fusion strategies achieved by passive mechanisms preserve cell viability and are easier to fabricate and operate. Here, we present a simple and effective method for the co-encapsulation of polarized M1 and M2 macrophages with Escherichia coli (E. coli) by passive merging in an integrated droplet generation, merging, and docking platform. This approach facilitated live cell profiling of effector immune functions in situ and quantitative functional analysis of macrophage heterogeneity. Biotechnol. Bioeng. 2017;114: 705-709. © 2016 Wiley Periodicals, Inc.
Illuminating gravitational waves: A concordant picture of photons from a neutron star merger.
Kasliwal, M M; Nakar, E; Singer, L P; Kaplan, D L; Cook, D O; Van Sistine, A; Lau, R M; Fremling, C; Gottlieb, O; Jencson, J E; Adams, S M; Feindt, U; Hotokezaka, K; Ghosh, S; Perley, D A; Yu, P-C; Piran, T; Allison, J R; Anupama, G C; Balasubramanian, A; Bannister, K W; Bally, J; Barnes, J; Barway, S; Bellm, E; Bhalerao, V; Bhattacharya, D; Blagorodnova, N; Bloom, J S; Brady, P R; Cannella, C; Chatterjee, D; Cenko, S B; Cobb, B E; Copperwheat, C; Corsi, A; De, K; Dobie, D; Emery, S W K; Evans, P A; Fox, O D; Frail, D A; Frohmaier, C; Goobar, A; Hallinan, G; Harrison, F; Helou, G; Hinderer, T; Ho, A Y Q; Horesh, A; Ip, W-H; Itoh, R; Kasen, D; Kim, H; Kuin, N P M; Kupfer, T; Lynch, C; Madsen, K; Mazzali, P A; Miller, A A; Mooley, K; Murphy, T; Ngeow, C-C; Nichols, D; Nissanke, S; Nugent, P; Ofek, E O; Qi, H; Quimby, R M; Rosswog, S; Rusu, F; Sadler, E M; Schmidt, P; Sollerman, J; Steele, I; Williamson, A R; Xu, Y; Yan, L; Yatsu, Y; Zhang, C; Zhao, W
2017-12-22
Merging neutron stars offer an excellent laboratory for simultaneously studying strong-field gravity and matter in extreme environments. We establish the physical association of an electromagnetic counterpart (EM170817) with gravitational waves (GW170817) detected from merging neutron stars. By synthesizing a panchromatic data set, we demonstrate that merging neutron stars are a long-sought production site forging heavy elements by r-process nucleosynthesis. The weak gamma rays seen in EM170817 are dissimilar to classical short gamma-ray bursts with ultrarelativistic jets. Instead, we suggest that breakout of a wide-angle, mildly relativistic cocoon engulfing the jet explains the low-luminosity gamma rays, the high-luminosity ultraviolet-optical-infrared, and the delayed radio and x-ray emission. We posit that all neutron star mergers may lead to a wide-angle cocoon breakout, sometimes accompanied by a successful jet and sometimes by a choked jet. Copyright © 2017, American Association for the Advancement of Science.
Navier-Stokes structure of merged layer flow on the spherical nose of a space vehicle
NASA Technical Reports Server (NTRS)
Jain, A. C.; Woods, G. H.
1988-01-01
Hypersonic merged layer flow on the forepart of a spherical surface of a space vehicle has been investigated on the basis of the full steady-state Navier-Stokes equations using slip and temperature jump boundary conditions at the surface and free-stream conditions far from the surface. The shockwave-like structure was determined as part of the computations. Using an equivalent body concept, computations were carried out under conditions that the Aeroassist Flight Experiment (AFE) Vehicle would encounter at 15 and 20 seconds in its flight path. Emphasis was placed on understanding the basic nature of the flow structure under low density conditions. Particular attention was paid to the understanding of the structure of the outer shockwave-like region as the fluid expands around the sphere. Plots were drawn for flow profiles and surface characteristics to understand the role of dissipation processes in the merged layer of the spherical nose of the vehicle.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, C; Yin, Y
2015-06-15
Purpose: A method using four-dimensional (4D) PET/CT in the design of radiation treatment planning was proposed, and the changes in target volume and radiation dose distribution relative to standard three-dimensional (3D) PET/CT were examined. Methods: A target deformable registration method was used in which the patient's whole respiration process was considered, so that the effect of respiratory motion was minimized when designing the radiotherapy plan. The gross tumor volume (GTV) of a non-small-cell lung cancer was contoured on the 4D FDG-PET/CT and 3D PET/CT scans using two different techniques: manual contouring by an experienced radiation oncologist following a predetermined protocol, and automatic contouring using a constant threshold of standardized uptake value (SUV) greater than 2.5. The target volume and radiotherapy dose distribution between VOL3D and VOL4D were analyzed. Results: Over all phases, the average automatic and manual GTV volumes were 18.61 cm3 (range, 16.39-22.03 cm3) and 31.29 cm3 (range, 30.11-35.55 cm3), respectively. The automatic and manual volumes of the merged internal GTV (IGTV) were 27.82 cm3 and 49.37 cm3, respectively. For the manual contours, compared to the 3D plan, the mean doses to the left, right, and total lung in the 4D plan decreased by an average of 21.55%, 15.17% and 15.86%, respectively, and the maximum dose to the spinal cord decreased by an average of 2.35%. For the automatic contours, the mean doses to the left, right, and total lung decreased by an average of 23.48%, 16.84% and 17.44%, respectively, and the maximum dose to the spinal cord decreased by an average of 1.68%. Conclusion: In comparison to 3D PET/CT, 4D PET/CT may better define the extent of moving tumors and reduce the contoured tumor volume, thereby optimizing radiation treatment planning for lung tumors.
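The fixed-threshold technique described above reduces to a voxel mask; a minimal sketch (our variable names, with the SUV cutoff taken from the abstract):

```python
import numpy as np

def suv_threshold_gtv(suv_volume, voxel_cm3, cutoff=2.5):
    """Automatic GTV delineation: keep every voxel whose standardized
    uptake value exceeds the cutoff, then report the enclosed volume."""
    mask = suv_volume > cutoff
    return mask, mask.sum() * voxel_cm3

# The merged internal GTV (IGTV) over all respiratory phases is the union
# of the per-phase masks, e.g.:
#   igtv = np.logical_or.reduce(np.stack(per_phase_masks))
```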
PCN-index derivation at World Data Center for Geomagnetism, Copenhagen, DTU Space
NASA Astrophysics Data System (ADS)
Stolle, C.; Matzka, J.
2012-04-01
The Polar Cap North (PCN) index is based on a correlation between geomagnetic disturbances at the Qaanaaq geomagnetic observatory (IAGA code THL) and the merging electric field derived from solar wind parameters. The index is therefore meant to provide a fast, ground-based, single-station indicator of variations in the merging electric field without being dependent on satellite observations. The PC index will be subject to an IAGA endorsement process during the IAGA Scientific Assembly 2013. Currently, the WDC provides near-real-time PC indices and post-processed final PC indices based on previously developed algorithms. However, the coefficients used for calculating the PCN index distributed by the WDC Copenhagen are presently not reproducible. In the frame of the IAGA endorsement, DTU Space is testing new coefficients based mainly on published algorithms. This presentation will report on activities at the WDC Copenhagen and on the current status at DTU Space with respect to the preparation for the IAGA endorsement process of the PCN index.
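The merging electric field referred to here is commonly taken to be the Kan-Lee coupling function Em = V * BT * sin^2(theta/2), with BT the transverse IMF magnitude and theta the IMF clock angle; the PC index scales the local geomagnetic disturbance to this quantity. A sketch of the solar wind side of that relation (assuming GSM components; the function name is ours):

```python
import numpy as np

def merging_electric_field(v_sw, by, bz):
    """Kan-Lee merging electric field Em = V * Bt * sin^2(theta/2).
    v_sw: solar wind speed (km/s); by, bz: IMF components (nT, GSM).
    Returns Em in mV/m."""
    bt = np.hypot(by, bz)          # transverse IMF magnitude, nT
    theta = np.arctan2(by, bz)     # IMF clock angle
    em = v_sw * bt * np.sin(theta / 2.0) ** 2
    return em * 1e-3               # (km/s)*(nT) = 1e-3 mV/m
```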
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ben-Naim, Eli; Krapivsky, Paul
Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected, and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that, as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
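The selection rule lends itself to a direct Monte Carlo check; the sketch below implements the rule stated in the abstract (one target, two candidates, merge with the larger candidate), with parameter values of our own choosing:

```python
import random

def aggregate_with_choice(n_clusters=10000, steps=9000):
    """Monte Carlo sketch of aggregation with choice: at each step a
    target cluster merges with the LARGER of two random candidates."""
    sizes = [1] * n_clusters                 # start from monomers
    for _ in range(steps):
        i, j, k = random.sample(range(len(sizes)), 3)  # distinct picks
        winner = j if sizes[j] >= sizes[k] else k      # larger candidate
        sizes[i] += sizes[winner]            # target absorbs the winner
        sizes.pop(winner)                    # merged cluster disappears
    return sizes                             # sample the size density here
```

Histogramming the returned sizes over many runs gives the size density whose small- and large-size tails the abstract describes as overpopulated.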
Semantic mediation in the national geologic map database (US)
Percy, D.; Richard, S.; Soller, D.
2008-01-01
Controlled language is the primary challenge in merging heterogeneous databases of geologic information. Each agency or organization produces databases with different schemas and different terminology for describing the objects within. In order to make progress toward merging these databases using current technology, we have developed software and a workflow that allow for the "manual semantic mediation" of these geologic map databases. Enthusiastic support from many state agencies (stakeholders and data stewards) has shown that the community supports this approach. Future implementations will move toward a more artificial-intelligence-based approach, using expert systems or knowledge bases to process data based on the training sets we have developed manually.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Gregory L.; Arnold, Dorian; LeGendre, Matthew
STAT is a lightweight debugging tool that gathers and merges stack traces from all of the processes in a parallel application. STAT uses the MRNet tree-based overlay network to broadcast commands from the tool front-end to the STAT daemons and to gather the traces from the daemons back to the front-end. As the traces propagate through the MRNet network tree, they are merged across all tasks to identify similar function call patterns and to delineate a small set of equivalence classes. A representative task from each of these classes can then be fed into a full-featured debugger like TotalView for root cause analysis.
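Conceptually, the merge builds a call-prefix tree in which each node records the set of tasks whose traces pass through it; tasks sharing a leaf belong to the same equivalence class. A toy sketch of that idea (not STAT's actual implementation):

```python
def merge_traces(traces):
    """Merge per-task stack traces into a prefix tree whose nodes carry
    the set of tasks sharing that call path. traces maps a task id to a
    list of frames, outermost first."""
    tree = {}                                  # frame -> (task set, subtree)
    for task, frames in traces.items():
        node = tree
        for frame in frames:
            tasks, subtree = node.setdefault(frame, (set(), {}))
            tasks.add(task)                    # this task shares the prefix
            node = subtree
    return tree

# Example: tasks 0 and 1 fall into one equivalence class, task 2 is the outlier.
traces = {0: ["main", "solve", "MPI_Waitall"],
          1: ["main", "solve", "MPI_Waitall"],
          2: ["main", "io_write"]}
merged = merge_traces(traces)
```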
ERIC Educational Resources Information Center
Camhi, Paul J.; Ebsworth, Miriam Eisenstein
2008-01-01
This action research study evaluates a classroom approach incorporating a reflective, metacognitive component within a second language process-oriented writing environment. Inspired by the literature and developed by the first author, this approach seeks to provide English language learners (ELLs) with a command of metalinguistic principles…
Restructuring a Large IT Organization: Theory, Model, Process, and Initial Results.
ERIC Educational Resources Information Center
Luker, Mark; And Others
1995-01-01
Recently the University of Wisconsin-Madison merged three existing but disparate technology-related units into a single division reporting to a chief information officer. The new division faced many challenges, beginning with the need to restructure the old units into a cohesive new organization. The restructuring process, based on structural…
Dissociating interference-control processes between memory and response.
Bissett, Patrick G; Nee, Derek Evan; Jonides, John
2009-09-01
The ability to mitigate interference is of central importance to cognition. Previous research has provided conflicting accounts about whether operations that resolve interference are singular in character or form a family of functions. Here, the authors examined the relationship between interference-resolution processes acting on working memory representations versus responses. The authors combined multiple forms of interference into a single paradigm by merging a directed-forgetting task, which induces proactive interference, with a stop-signal task, which taps response inhibition processes. The results demonstrated that proactive interference and response inhibition produced distinct behavioral signatures that did not interact. By contrast, combining two different measures of response inhibition by merging a go/no-go task variant and a stop signal produced overadditive behavioral interference, demonstrating that different forms of response inhibition tap the same processes. However, not all forms of response conflict interacted, suggesting that inhibition-related functions acting on response selection are dissociable from those acting on response inhibition. These results suggest that inhibition-related functions for memory and responses are dissociable. (c) 2009 APA, all rights reserved.
Technology-enabled Airborne Spacing and Merging
NASA Technical Reports Server (NTRS)
Hull, James; Barmore, Bryan; Abbott, Terence
2005-01-01
Over the last several decades, advances in airborne and groundside technologies have allowed the Air Traffic Service Provider (ATSP) to give safer and more efficient service, reduce workload and frequency congestion, and help accommodate a critically escalating traffic volume. These new technologies have included advanced radar displays and data and communication automation, to name a few. In step with such advances, NASA Langley is developing a precision spacing concept designed to increase runway throughput by enabling flight crews to manage their inter-arrival spacing from TRACON entry to the runway threshold. This concept is being developed as part of NASA's Distributed Air/Ground Traffic Management (DAG-TM) project under the Advanced Air Transportation Technologies Program. Precision spacing is enabled by Automatic Dependent Surveillance-Broadcast (ADS-B), which provides air-to-air data exchange including position and velocity reports, real-time wind information and other necessary data. On the flight deck, a research prototype system called Airborne Merging and Spacing for Terminal Arrivals (AMSTAR) processes this information and provides speed guidance to the flight crew to achieve the desired inter-arrival spacing. AMSTAR is designed to support current ATC operations, provide operationally acceptable system-wide increases in approach spacing performance, and increase runway throughput through system stability, predictability and precision spacing. This paper describes problems and costs associated with an imprecise arrival flow. It also discusses methods by which air traffic controllers achieve and maintain an optimum inter-arrival interval, and explores means by which AMSTAR can assist in this pursuit. AMSTAR is an extension of NASA's previous work on in-trail spacing that was successfully demonstrated in a flight evaluation at Chicago O'Hare International Airport in September 2002. In addition to providing for precision inter-arrival spacing, AMSTAR provides speed guidance for aircraft on converging routes to safely and smoothly merge onto a common approach. Much consideration has been given to working with operational conditions such as imperfect ADS-B data, wind prediction errors, changing winds, differing aircraft types and wake vortex separation requirements. A series of Monte Carlo simulations is planned for the spring and summer of 2004 at NASA Langley to further study the system behavior and performance under more operationally extreme and varying conditions. This will coincide with a human-in-the-loop study to investigate the flight crew interface, workload and acceptability.
Facet-controlled facilitation of PbS nanoarchitectures by understanding nanocrystal growth
NASA Astrophysics Data System (ADS)
Loc, Welley Siu; Quan, Zewei; Lin, Cuikun; Pan, Jinfong; Wang, Yuxuan; Yang, Kaikun; Jian, Wen-Bin; Zhao, Bo; Wang, Howard; Fang, Jiye
2015-11-01
Nanostructured lead sulphide is a significant component in a number of energy-related sustainable applications such as photovoltaic cells and thermoelectric components. In many micro-packaging processes, dimensionality-controlled nano-architectures as building blocks with unique properties are required. This study investigates different facet-merging growth behaviors through a wet-chemical synthetic strategy to produce high-quality controlled nanostructures of lead sulphide in various dimensionalities. It was found that 1D nanowires or 2D nanosheets can be obtained by the merging of reactive {111}- or {110}-facets, respectively, while promoting {100} facets in the early stages after nucleation leads to the growth of 0D nanocubes. The influence of temperature, capping ligands and co-solvent in facilitating the crystal facet growth of each intermediate seed is also demonstrated. The novelty of this work is characterized by the delicate manipulation of various PbS nanoarchitectures based on the comprehension of the facet-merging evolution. The synthesis of facet-controlled PbS nanostructures could provide novel building blocks with desired properties for use in many applications. Electronic supplementary information (ESI) available: Experimental section (chemicals, synthesis, characterization methods), synthesis conditions, AFM image of NSs, SEM and TEM images of NWs prepared without OAm, and TEM images of truncated NCbs grown for 7.5 min at 180 °C. See DOI: 10.1039/c5nr04181c
NASA Technical Reports Server (NTRS)
Adler, Robert F.; Huffman, George J.; Bolvin, David T.; Curtis, Scott; Nelkin, Eric J.
1999-01-01
A technique is described that uses Tropical Rainfall Measuring Mission (TRMM) combined radar/radiometer information to adjust geosynchronous infrared satellite data (the TRMM Adjusted GOES Precipitation Index, or TRMM AGPI). The AGPI is then merged with rain gauge information (mostly over land; the TRMM merged product) to provide fine-scale (1 deg latitude/longitude) pentad and monthly analyses, respectively. The TRMM merged estimates are 10% higher than those from the Global Precipitation Climatology Project (GPCP) when integrated over the tropical oceans (37 deg N-S) for 1998, with 20% differences noted in the most heavily raining areas. In the dry subtropics the TRMM values are smaller than the GPCP estimates. The TRMM merged-product tropical-mean estimates for 1998 are 3.3 mm/day over ocean and 3.1 mm/day over land and ocean combined. Regional differences are noted between the western and eastern Pacific Ocean maxima when TRMM and GPCP are compared. In the eastern Pacific rain maximum the TRMM and GPCP mean values are nearly equal, very different from the other tropical rainy areas where TRMM merged-product estimates are higher. This regional difference may indicate that TRMM is better at taking into account the vertical structure of the rain systems and the difference in structure between the western and eastern (shallower) Pacific convection. Comparisons of these TRMM merged analysis estimates with surface data sets show varied results; the bias is near zero when compared to western Pacific Ocean atoll rain gauge data, but significantly positive compared to Kwajalein radar estimates (adjusted by rain gauges). Over land the TRMM estimates also show a significant positive bias. The inclusion of gauge information in the final merged product significantly reduces the bias over land, as expected. The monthly precipitation patterns produced by the TRMM merged data process clearly show the evolution of the ENSO tropical precipitation pattern from early 1998 (El Niño) through early 1999 (La Niña) and beyond. The El Niño minus La Niña difference map shows the eastern Pacific maximum, the maritime continent minima and other tropical and mid-latitude features. The differences in the Pacific are very similar to those detected by the GPCP analyses. However, summing the El Niño minus La Niña differences over the global tropical oceans yields divergent answers from TRMM, GPCP and other estimates. This emphasizes the need for additional validation and analysis before it is feasible to understand the relations between global precipitation anomalies and Pacific Ocean ENSO temperature changes.
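Satellite-gauge combination of this kind is often done by inverse-error-variance weighting, which is also why including gauges removes most of the land bias: wherever gauges are dense, their small error variance makes their weight dominate. A schematic version (our names; the operational TRMM/GPCP merger is more elaborate):

```python
import numpy as np

def merge_satellite_gauge(sat, gauge, sat_err_var, gauge_err_var):
    """Combine two gridded precipitation estimates by inverse-error-variance
    weighting; all arguments are arrays on the same grid."""
    w_sat = 1.0 / sat_err_var
    w_gauge = 1.0 / gauge_err_var
    return (w_sat * sat + w_gauge * gauge) / (w_sat + w_gauge)

# Over well-gauged land, gauge_err_var is small, so the gauge analysis
# dominates the merged value and pulls out the satellite bias.
```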
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ushizima, Daniela M.; Bianchi, Andrea G. C.; DeBianchi, Christina
We introduce a computational analysis workflow to access properties of solid objects using nondestructive imaging techniques that rely on X-ray imaging. The goal is to process and quantify structures from material science sample cross sections. The algorithms differentiate the porous medium (high-density material) from the void (background, low-density medium) using a Boolean classifier, so that we can extract features such as volume, surface area, granularity spectrum, and porosity, among others. Our workflow, Quant-CT, leverages several algorithms from ImageJ, such as statistical region merging and the 3D object counter. It also includes schemes for bilateral filtering that use a 3D kernel, for parallel processing of sub-stacks, and for handling over-segmentation using histogram similarities. Quant-CT supports fast user interaction, providing the ability for the user to train the algorithm via subsamples to feed its core algorithms with automated parameterization. The Quant-CT plugin is currently available for testing by personnel at the Advanced Light Source and Earth Sciences Divisions and the Energy Frontier Research Center (EFRC), LBNL, as part of their research on porous materials. The goal is to understand the processes in fluid-rock systems for the geologic sequestration of CO2, and to develop technology for the safe storage of CO2 in deep subsurface rock formations. We describe our implementation and demonstrate our plugin on porous material images. This paper targets end-users, with relevant information for developers to extend its current capabilities.
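The Boolean solid/void classification and the porosity and granularity measurements can be pictured in a few lines of array code; this is a conceptual stand-in for the Quant-CT pipeline, not its source:

```python
import numpy as np
from scipy import ndimage

def porosity_and_grains(ct_volume, threshold):
    """Classify a CT stack into solid vs. void by a density threshold,
    then report porosity and grain statistics via 3-D connected
    components."""
    solid = ct_volume > threshold             # high density -> material
    porosity = 1.0 - solid.mean()             # void fraction of the volume
    labels, n_grains = ndimage.label(solid)   # 3-D object counting
    grain_voxels = np.bincount(labels.ravel())[1:]   # drop background bin
    return porosity, n_grains, grain_voxels   # grain sizes give the spectrum
```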
NASA Astrophysics Data System (ADS)
Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.
2013-08-01
Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, and a significant time commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research in multimodal NIR imaging. This work addresses these challenges directly by introducing automated Digital Imaging and Communications in Medicine (DICOM) image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and by combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and a 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.
Case Study of a Merger in Higher Education.
ERIC Educational Resources Information Center
Keohane, Kevin
1984-01-01
The merging of four British teacher training institutions and the accompanying severe reduction in teaching staff is described, focusing on the teachers' role in the process, developing an administrative structure, and making decisions about program redundancy. (MSE)
NASA Astrophysics Data System (ADS)
Chen, Xin; Xing, Pei; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua
2017-02-01
A new dataset of surface temperature over North America has been constructed by merging climate model results and empirical tree-ring data through the application of an optimal interpolation algorithm. Errors of both the Community Climate System Model version 4 (CCSM4) simulation and the tree-ring reconstruction were considered to optimize the combination of the two elements. Variance matching was used to reconstruct the surface temperature series. The model simulation provided the background field, and the error covariance matrix was estimated statistically using samples from the simulation results with a running 31-year window for each grid. Thus, the merging process could continue with a time-varying gain matrix. This merging method (MM) was tested using two types of experiment, and the results indicated that the standard deviation of errors was about 0.4 °C lower than the tree-ring reconstructions and about 0.5 °C lower than the model simulation. Because of internal variabilities and uncertainties in the external forcing data, the simulated decadal warm-cool periods were readjusted by the MM such that the decadal variability was more reliable (e.g., the 1940-1960s cooling). During the two centuries (1601-1800 AD) of the preindustrial period, the MM results revealed a compromised spatial pattern of the linear trend of surface temperature, which is in accordance with the phase transition of the Pacific decadal oscillation and Atlantic multidecadal oscillation. Compared with pure CCSM4 simulations, it was demonstrated that the MM brought a significant improvement to the decadal variability of the gridded temperature via the merging of temperature-sensitive tree-ring records.
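The optimal interpolation algorithm mentioned above has the standard analysis form x_a = x_b + K(y - H x_b) with gain K = B H^T (H B H^T + R)^-1; the time-varying gain arises because the background error covariance B is re-estimated in a running 31-year window. A generic sketch of one analysis step (not the authors' code):

```python
import numpy as np

def oi_update(x_b, B, y, R, H):
    """One optimal-interpolation analysis step.
    x_b: background (model) field; B: background error covariance;
    y: observations (here, tree-ring proxies); R: observation error
    covariance; H: operator mapping the grid to the proxy sites."""
    S = H @ B @ H.T + R                       # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)            # gain (time-varying if B is)
    x_a = x_b + K @ (y - H @ x_b)             # analysis field
    A = (np.eye(len(x_b)) - K @ H) @ B        # analysis error covariance
    return x_a, A
```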
Mapping sea ice leads with a coupled numeric/symbolic system
NASA Technical Reports Server (NTRS)
Key, J.; Schweiger, A. J.; Maslanik, J. A.
1990-01-01
A method is presented which facilitates the detection and delineation of leads in single-channel Landsat data by coupling numeric and symbolic procedures. The procedure consists of three steps: (1) using the dynamic threshold method, an image is mapped to a lead/no-lead binary image; (2) the likelihood of fragments being real leads is examined with a set of numeric rules; and (3) pairs of objects are examined geometrically and merged where possible. Processing ends when all fragments are merged and statistical characteristics are determined, leaving a map of valid lead objects that summarizes useful physical information about the lead complexes. Direct implementation of domain knowledge and rapid prototyping are two benefits of the rule-based system. The approach is found to apply more successfully to mid- and high-level processing, and the system can retrieve statistics about sea-ice leads as well as detect the leads.
Segmentation of remotely sensed data using parallel region growing
NASA Technical Reports Server (NTRS)
Tilton, J. C.; Cox, S. C.
1983-01-01
The improved spatial resolution of the new earth resources satellites will increase the need for effective utilization of spatial information in machine processing of remotely sensed data. One promising technique is scene segmentation by region growing. Region growing can use spatial information in two ways: only spatially adjacent regions merge together, and merging criteria can be based on region-wide spatial features. A simple region growing approach is described in which the similarity criterion is based on region mean and variance (a simple spatial feature). An effective way to implement region growing for remote sensing is as an iterative parallel process on a large parallel processor. A straightforward parallel pixel-based implementation of the algorithm is explored, and its efficiency is compared with sequential pixel-based, sequential region-based, and parallel region-based implementations. Experimental results from an aircraft scanner data set are presented, as is a discussion of proposed improvements to the segmentation algorithm.
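The similarity criterion described, based on region mean and variance, can be written as a two-threshold predicate; the thresholds below are illustrative, not taken from the paper:

```python
import numpy as np

def similar(region_a, region_b, t_mean=5.0, t_var=25.0):
    """Merge criterion for two spatially adjacent regions (arrays of
    pixel values): merge when their means and variances are both close."""
    return (abs(region_a.mean() - region_b.mean()) < t_mean and
            abs(region_a.var() - region_b.var()) < t_var)

# Iterative parallel flavor: every region tests its neighbors at once,
# mutually best-matching adjacent pairs merge, and the sweep repeats
# until no adjacent pair satisfies the criterion.
```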
NASA Astrophysics Data System (ADS)
Li, Chenguang; Yang, Xianjun
2016-10-01
The Magnetized Plasma Fusion Reactor concept is proposed as a magneto-inertial fusion approach based on the target plasma created through the collision merging of two oppositely translating field reversed configuration plasmas, which is then compressed by the imploding liner driven by the pulsed-power driver. The target creation process is described by a two-dimensional magnetohydrodynamics model, resulting in the typical target parameters. The implosion process and the fusion reaction are modeled by a simple zero-dimensional model, taking into account the alpha particle heating and the bremsstrahlung radiation loss. The compression on the target can be 2D cylindrical or 2.4D with the additive axial contraction taken into account. The dynamics of the liner compression and fusion burning are simulated and the optimum fusion gain and the associated target parameters are predicted. The scientific breakeven could be achieved at the optimized conditions.
NASA Astrophysics Data System (ADS)
Nguyen, Sy Dzung; Nguyen, Quoc Hung; Choi, Seung-Bok
2015-01-01
This paper presents a new algorithm, called B-ANFIS, for building an adaptive neuro-fuzzy inference system (ANFIS) from a training data set. In order to increase the accuracy of the model, the following steps are executed. First, a data-merging rule is proposed to build and perform a data-clustering strategy. Subsequently, a combination of clustering processes in the input data space and in the joint input-output data space is presented. The crucial reason for this step is to overcome problems related to initialization and contradictory fuzzy rules, which usually arise when building an ANFIS. The clustering process in the input data space is accomplished with a proposed merging-possibilistic clustering (MPC) algorithm. The effectiveness of this process is evaluated before the clustering process resumes in the joint input-output data space. The optimal parameters obtained after completion of the clustering process are used to build the ANFIS. Simulations based on numerical data, 'Daily Data of Stock A', and measured data sets from a smart damper are performed to analyze and estimate accuracy. In addition, the convergence and robustness of the proposed algorithm are investigated using both theoretical and testing approaches.
Saillant, N N; Earl-Royal, E; Pascual, J L; Allen, S R; Kim, P K; Delgado, M K; Carr, B G; Wiebe, D; Holena, D N
2017-02-01
Age is a risk factor for death, adverse outcomes, and health care use following trauma. The American College of Surgeons' Trauma Quality Improvement Program (TQIP) has published "best practices" of geriatric trauma care; adoption of these guidelines is unknown. We sought to determine which evidence-based geriatric protocols, including TQIP guidelines, were correlated with decreased mortality in Pennsylvania's trauma centers. PA's level I and II trauma centers self-reported adoption of geriatric protocols. Survey data were merged with risk-adjusted mortality data for patients ≥65 from a statewide database, the Pennsylvania Trauma Systems Foundation (PTSF), to compare mortality outlier status and processes of care. Exposures of interest were center-specific processes of care; outcome of interest was PTSF mortality outlier status. 26 of 27 eligible trauma centers participated. There was wide variation in care processes. Four trauma centers were low outliers; three centers were high outliers for risk-adjusted mortality rates in adults ≥65. Results remained consistent when accounting for center volume. The only process associated with mortality outlier status was age-specific solid organ injury protocols (p = 0.04). There was no cumulative effect of multiple evidence-based processes on mortality rate (p = 0.50). We did not see a link between adoption of geriatric best-practices trauma guidelines and reduced mortality at PA trauma centers. The increased susceptibility of elderly to adverse consequences of injury, combined with the rapid growth rate of this demographic, emphasizes the importance of identifying interventions tailored to this population. III. Descriptive.
Bathymetry of Totten Reservoir, Montezuma County, Colorado, 2011
Kohn, Michael S.
2012-01-01
In order to better characterize the water supply capacity of Totten Reservoir, Montezuma County, Colorado, the U.S. Geological Survey, in cooperation with the Dolores Water Conservancy District, conducted a bathymetric survey of Totten Reservoir. The study was performed in June 2011 using a man-operated boat-mounted multibeam echo sounder integrated with a global positioning system and a terrestrial real-time kinematic global positioning system. The two collected datasets were merged and imported into geographic information system software. A bathymetric map of the reservoir was generated in addition to plots for the stage-area and the stage-volume relations.
Bathymetry of Groundhog Reservoir, Dolores County, Colorado, 2011
Kohn, Michael S.
2012-01-01
In order to better characterize the water supply capacity of Groundhog Reservoir, Dolores County, Colorado, the U.S. Geological Survey, in cooperation with the Dolores Water Conservancy District, conducted a bathymetric survey of Groundhog Reservoir. The study was performed in June 2011 using a man-operated boat-mounted multibeam echo sounder integrated with a global positioning system and a terrestrial real-time kinematic global positioning system. The two collected datasets were merged and imported into geographic information system software. A bathymetric map of the reservoir was generated in addition to plots for the stage-area and the stage-volume relations.
Automated separation of merged Langerhans islets
NASA Astrophysics Data System (ADS)
Švihlík, Jan; Kybic, Jan; Habart, David
2016-03-01
This paper deals with the separation of merged Langerhans islets in segmentations in order to evaluate a correct histogram of islet diameters. The distribution of islet diameters is useful for determining the feasibility of islet transplantation in diabetes. First, the merged islets in the training segmentations are manually separated by medical experts. Based on the single islets, the merged islets are identified and an SVM classifier is trained on both classes (merged/single islets). The test segmentations are over-segmented using the watershed transform, and the most probable merging-back of islet fragments is found using the trained SVM classifier. Finally, the optimized segmentation is compared with the ground truth segmentation (correctly separated islets).
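A common way to realize the over-segmentation step is a distance-transform watershed, after which a classifier decides which neighbouring fragments belong together. The sketch below shows only that first step (library calls as in SciPy/scikit-image; the marker rule is our simplification, not the paper's):

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import watershed

def oversegment_islets(mask):
    """Over-segment a binary islet mask with a distance-transform
    watershed; the resulting fragments are the candidates a trained
    classifier (an SVM in the paper) would merge back together."""
    distance = ndimage.distance_transform_edt(mask)
    # Local maxima of the distance map seed one marker per islet core.
    peaks = (distance == ndimage.maximum_filter(distance, size=9)) & mask
    markers, _ = ndimage.label(peaks)
    labels = watershed(-distance, markers, mask=mask)
    return labels                              # candidate fragments
```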
MetaGenomic Assembly by Merging (MeGAMerge)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scholz, Matthew B.; Lo, Chien-Chi
2015-08-03
"MetaGenomic Assembly by Merging" (MeGAMerge)Is a novel method of merging of multiple genomic assembly or long read data sources for assembly by use of internal trimming/filtering of data, followed by use of two 3rd party tools to merge data by overlap based assembly.
Ignition and combustion characteristics of metallized propellants
NASA Technical Reports Server (NTRS)
Turns, S. R.; Mueller, D. C.; Scott, M. J.
1990-01-01
Research designed to develop detailed knowledge of the secondary atomization and ignition characteristics of aluminum slurry propellants was started. These processes are studied because they are the controlling factors limiting the combustion efficiency of aluminum slurry propellants in rocket applications. A burner and spray rig system allowing the study of individual slurry droplets having diameters from about 10 to 100 microns was designed and fabricated. The burner generates a near uniform high temperature environment from the merging of 72 small laminar diffusion flames above a honeycomb matrix. This design permits essentially adiabatic operation over a wide range of stoichiometries without danger of flashback. A single particle sizing system and velocimeter also were designed and assembled. Light scattered from a focused laser beam is related to the particle (droplet) size, while the particle velocity is determined by its transit time through the focal volume. Light from the combustion of aluminum is also sensed to determine if ignition was achieved. These size and velocity measurements will allow the determination of disruption and ignition times as functions of drop sizes and ambient conditions.
Three-dimensional cardiac architecture determined by two-photon microtomy
NASA Astrophysics Data System (ADS)
Huang, Hayden; MacGillivray, Catherine; Kwon, Hyuk-Sang; Lammerding, Jan; Robbins, Jeffrey; Lee, Richard T.; So, Peter
2009-07-01
Cardiac architecture is inherently three-dimensional, yet most characterizations rely on two-dimensional histological slices or dissociated cells, which remove the native geometry of the heart. We previously developed a method for labeling intact heart sections without dissociation and imaging large volumes while preserving their three-dimensional structure. We further refine this method to permit quantitative analysis of imaged sections. After data acquisition, these sections are assembled using image-processing tools, and qualitative and quantitative information is extracted. By examining the reconstructed cardiac blocks, one can observe end-to-end adjacent cardiac myocytes (cardiac strands) changing cross-sectional geometries, merging and separating from other strands. Quantitatively, representative cross-sectional areas typically used for determining hypertrophy omit the three-dimensional component; we show that taking orientation into account can significantly alter the analysis. Using fast-Fourier transform analysis, we analyze the gross organization of cardiac strands in three dimensions. By characterizing cardiac structure in three dimensions, we are able to determine that the α crystallin mutation leads to hypertrophy with cross-sectional area increases, but not necessarily via changes in fiber orientation distribution.
The definition of turbulence and the direction of the turbulence energy cascade
NASA Astrophysics Data System (ADS)
Gibson, Carl
2013-11-01
Turbulence is defined as an eddy-like state of fluid motion where the inertial-vortex forces of the eddies are larger than any other forces that tend to damp the eddies out. Because vorticity is produced at the Kolmogorov scale, turbulent kinetic energy always cascades from small scales to large. Irrotational flows that supply kinetic energy to turbulence from large scale motions are by definition non-turbulent. The Taylor-Reynolds-Lumley cascade of kinetic energy from large scales to small is therefore a non-turbulent cascade. The Reynolds turbulence poem must be revised to avoid further confusion. Little whorls on vortex sheets, merge and pair with more of, whorls that grow by vortex forces, Slava Kolmogorov! Turbulent mixing and transport processes in natural fluids depend on fossil turbulence and fossil turbulence waves, which are impossible by the TRL cascade direction. Standard models of cosmology, astronomy, oceanography, and atmospheric transport of heat, mass, momentum and chemical species must be revised. See journalofcosmology.com Volumes 21 and 22 for oceanographic and astro-biological examples.
NASA Astrophysics Data System (ADS)
Selva Bhuvaneswari, K.; Geetha, P.
2017-05-01
Magnetic resonance imaging segmentation refers to the process of assigning labels to a set of pixels or multiple regions. It plays a major role in the field of biomedical applications, as it is widely used by radiologists to segment medical images into meaningful regions. In recent years, various brain tumour detection techniques have been presented in the literature. The segmentation process of our proposed work comprises three phases: a threshold generation with dynamic modified region growing phase, a texture feature generation phase, and a region merging phase. In the first phase, the input image is segmented by a dynamic modified region growing process in which two thresholds are changed dynamically; the firefly optimisation algorithm is used to optimise these two thresholds. After obtaining the region-grown segmented image, the edges are detected with an edge detection algorithm. In the second phase, texture features are extracted from the input image using an entropy-based operation. In the region merging phase, the results of the texture feature generation phase are combined with the results of the dynamic modified region growing phase, and similar regions are merged using a distance comparison between regions. After identifying the abnormal tissues, classification is performed by a hybrid kernel-based SVM (Support Vector Machine). The performance analysis of the proposed method is carried out by k-fold cross-validation. The proposed method is implemented in MATLAB with various images.
On the Performance of Alternate Conceptual Ecohydrological Models for Streamflow Prediction
NASA Astrophysics Data System (ADS)
Naseem, Bushra; Ajami, Hoori; Cordery, Ian; Sharma, Ashish
2016-04-01
A merging of a lumped conceptual hydrological model with two conceptual dynamic vegetation models is presented to assess the performance of these models for simultaneous simulation of streamflow and leaf area index (LAI). Two conceptual dynamic vegetation models with differing representations of ecological processes are merged with a lumped conceptual hydrological model (HYMOD) to predict catchment-scale streamflow and LAI. The merged RR-LAI-I model computes relative leaf biomass based on transpiration rates, while the RR-LAI-II model computes above-ground green and dead biomass based on net primary productivity and water use efficiency in response to soil moisture dynamics. To assess the performance of these models, daily discharge and the 8-day MODIS LAI product for 27 catchments of 90-1600 km2 in size located in the Murray-Darling Basin in Australia are used. Our results illustrate that when single-objective optimisation focussed on maximizing the objective function for streamflow or LAI, the other, uncalibrated, predicted outcome (LAI if streamflow is the focus) was consistently compromised. Thus, single-objective optimization cannot take into account the essence of all processes in the conceptual ecohydrological models. However, multi-objective optimisation showed great strength for streamflow and LAI predictions. Both response outputs were better simulated by RR-LAI-II than RR-LAI-I due to the better representation of physical processes such as net primary productivity (NPP) in RR-LAI-II. Our results highlight that simultaneous calibration of streamflow and LAI using a multi-objective algorithm proves to be an attractive tool for improved streamflow predictions.
NASA Astrophysics Data System (ADS)
Murata, Katsuhiro L.; Yamada, Rika; Oyabu, Shinki; Kaneda, Hidehiro; Ishihara, Daisuke; Yamagishi, Mitsuyoshi; Kokusho, Takuma; Takeuchi, Tsutomu T.
2017-11-01
Using the AKARI, Wide-field Infrared Survey Explorer (WISE), Infrared Astronomical Satellite (IRAS), Sloan Digital Sky Survey (SDSS) and Hubble Space Telescope (HST) data, we investigated the relation of polycyclic aromatic hydrocarbon (PAH) mass (MPAH), very small grain mass (MVSG), big grain mass (MBG) and stellar mass (Mstar) with galaxy mergers for 55 star-forming galaxies at redshift z < 0.2. Using the SDSS image at z < 0.1 and the HST image at z > 0.1, we divided the galaxies into merger galaxies and non-merger galaxies with the morphological parameter asymmetry A, and quantified the merging stages of galaxies based on the morphological indicators, the second-order moment of the brightest 20 per cent region M20 and the Gini coefficient. We find that MPAH/MBG of merger galaxies tends to be lower than that of non-merger galaxies, and that there are no systematic differences in MVSG/MBG and MBG/Mstar between merger galaxies and non-merger galaxies. We find that galaxies with very low MPAH/MBG seem to be merger galaxies at late stages. These results suggest that PAHs are partly destroyed at late stages of merging processes. Furthermore, we investigated MPAH/MBG variations with radiation field intensity G0 and with the emission line ratio [O I] λ 6300/Hα, a shock tracer, for merger galaxies, and find that MPAH/MBG decreases with increasing G0 and [O I]/Hα. PAH destruction is likely to be caused by two processes: strong radiation fields and large-scale shocks during merging processes of galaxies.
A mixed reality approach for stereo-tomographic quantification of lung nodules.
Chen, Mianyi; Kalra, Mannudeep K; Yun, Wenbing; Cong, Wenxiang; Yang, Qingsong; Nguyen, Terry; Wei, Biao; Wang, Ge
2016-05-25
To reduce the radiation dose and the equipment cost associated with lung CT screening, in this paper we propose a mixed-reality-based nodule measurement method with an active shutter stereo imaging system. Without involving hundreds of projection views and subsequent image reconstruction, we generate two projections of an iteratively placed ellipsoidal volume in the field of view and merge these synthetic projections with two original CT projections. We then demonstrate the feasibility of measuring the position and size of a nodule by observing whether the projections of the ellipsoidal volume and the nodule overlap, based on a human observer's visual perception through the active shutter 3D vision glasses. The average errors of the measured nodule parameters are less than 1 mm in the simulated experiment with 8 viewers. Hence, the method could measure real nodules accurately in experiments with physically measured projections.
Greenland-Wide Seasonal Temperatures During the Last Deglaciation
NASA Astrophysics Data System (ADS)
Buizert, C.; Keisling, B. A.; Box, J. E.; He, F.; Carlson, A. E.; Sinclair, G.; DeConto, R. M.
2018-02-01
The sensitivity of the Greenland ice sheet to climate forcing is of key importance in assessing its contribution to past and future sea level rise. Surface mass loss occurs during summer, and accounting for temperature seasonality is critical in simulating ice sheet evolution and in interpreting glacial landforms and chronologies. Ice core records constrain the timing and magnitude of climate change but are largely limited to annual mean estimates from the ice sheet interior. Here we merge ice core reconstructions with transient climate model simulations to generate Greenland-wide and seasonally resolved surface air temperature fields during the last deglaciation. Greenland summer temperatures peak in the early Holocene, consistent with records of ice core melt layers. We perform deglacial Greenland ice sheet model simulations to demonstrate that accounting for realistic temperature seasonality decreases simulated glacial ice volume, expedites the deglacial margin retreat, mutes the impact of abrupt climate warming, and gives rise to a clear Holocene ice volume minimum.
Entrepreneurial Spirit in Strategic Planning.
ERIC Educational Resources Information Center
Riggs, Donald E.
1987-01-01
Presents a model which merges the concepts of entrepreneurship with those of strategic planning to create a library management system. Each step of the process, including needs assessment and policy formation, strategy choice and implementation, and evaluation, is described in detail. (CLB)
NASA Technical Reports Server (NTRS)
Harris, D. E.; Forman, W.; Gioia, I. M.; Hale, J. A.; Harnden, F. R., Jr.; Jones, C.; Karakashian, T.; Maccacaro, T.; Mcsweeney, J. D.; Primini, F. A.
1993-01-01
The Einstein Observatory (HEAO-2, launched November 13, 1978) achieved radically improved sensitivity over previous x-ray missions through the use of focusing optics, which simultaneously afforded greatly reduced background and produced true images. During its 2.5-yr mission, the Einstein X-Ray Telescope was pointed toward some 5,000 celestial targets, most of which were detected, and discovered several thousand additional 'serendipitous' sources in the observed fields. This catalog contains contour diagrams and source data, obtained with the imaging proportional counter in the 0.16 to 3.5 keV energy band, and describes methods for recovering upper limits for any sky position within the observed images. The main catalog consists of six volumes (numbered 2 through 7) of right ascension ordered pages, each containing data for one observation. Along with the primary documentation describing how the catalog was constructed, volume 1 contains a complete source list, results for merged fields, a reference system to published papers, and data useful for calculating upper limits and fluxes.
Unsupervised tattoo segmentation combining bottom-up and top-down cues
NASA Astrophysics Data System (ADS)
Allen, Josef D.; Zhao, Nan; Yuan, Jiangbo; Liu, Xiuwen
2011-06-01
Tattoo segmentation is challenging due to the complexity and large variance of tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the remaining skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thereby transformed into a figure-ground segmentation. We have applied our segmentation algorithm to a tattoo dataset, and the results show that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.
Brazhnik, Olga; Jones, John F.
2007-01-01
Producing reliable information is the ultimate goal of data processing. The ocean of data created with the advances of science and technology calls for integration of data coming from heterogeneous sources that are diverse in their purposes, business rules, underlying models and enabling technologies. Reference models, Semantic Web, standards, ontology, and other technologies enable fast and efficient merging of heterogeneous data, while the reliability of the produced information is largely defined by how well the data represent reality. In this paper we initiate a framework for assessing the informational value of data that includes data dimensions; alignment of data quality with business practices; identification of authoritative sources and integration keys; model merging; and reconciliation of updates of varying frequency and of overlapping or gapped data sets. PMID:17071142
Integrating Quality Assurance Systems in a Merged Higher Education Institution
ERIC Educational Resources Information Center
Kistan, Chandru
2005-01-01
Purpose: This article seeks to highlight the challenges and issues that face merging higher education institutions and also to outline some of the challenges in integrating the quality assurance systems during the pre-, interim and post-merger phases in a merged university. Design/methodology/approach: Case studies of merged and merging…
The MAGIC-5 CAD for nodule detection in low dose and thin slice lung CTs
NASA Astrophysics Data System (ADS)
Cerello, Piergiorgio; MAGIC-5 Collaboration
2010-11-01
Lung cancer is the leading cause of cancer-related mortality in developed countries. Only 10-15% of all men and women diagnosed with lung cancer live 5 years after the diagnosis. However, the 5-year survival rate for patients diagnosed in the early asymptomatic stage of the disease can reach 70%. Early-stage lung cancers can be diagnosed by detecting non-calcified small pulmonary nodules with computed tomography (CT). Computer-aided detection (CAD) could support radiologists in the analysis of the large number of noisy images generated in screening programs, where low-dose and thin-slice settings are used. The MAGIC-5 project, funded by the Istituto Nazionale di Fisica Nucleare (INFN, Italy) and Ministero dell'Università e della Ricerca (MUR, Italy), developed a multi-method approach based on three CAD algorithms to be used in parallel, with a merging of their results: the Channeler Ant Model (CAM), based on Virtual Ant Colonies, the Dot-Enhancement/Pleura Surface Normals/VBNA (DE-PSN-VBNA), and the Region Growing Volume Plateau (RGVP). Preliminary results show good performance, which is expected to improve with refinement of the individual algorithms and the added value of merging their results.
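The merging of the three CAD outputs can be pictured as grouping nearby candidate detections and voting. The Python sketch below is a stand-in for that step; the function name, the 5 mm matching radius, and the two-vote rule are illustrative assumptions, not MAGIC-5's actual combiner.

```python
import numpy as np

def merge_cad_candidates(candidate_lists, radius_mm=5.0, min_votes=2):
    """Greedily group nodule candidates (x, y, z centroids in mm) from
    several CAD algorithms and keep groups reported by >= min_votes of
    them. Illustrative of a result-merging step only."""
    groups = []  # each: {'c': centroid, 'members': [(point, source_idx)]}
    for src, lst in enumerate(candidate_lists):
        for p in map(np.asarray, lst):
            for g in groups:
                if np.linalg.norm(g['c'] - p) <= radius_mm:
                    g['members'].append((p, src))
                    g['c'] = np.mean([m[0] for m in g['members']], axis=0)
                    break
            else:
                groups.append({'c': p, 'members': [(p, src)]})
    return [g['c'] for g in groups
            if len({s for _, s in g['members']}) >= min_votes]
```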
Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; Hilgart, Mark C.; Stepanov, Sergey; Sanishvili, Ruslan; Becker, Michael; Winter, Graeme; Sauter, Nicholas K.; Smith, Janet L.; Fischetti, Robert F.
2014-01-01
The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce. PMID:25484844
Facet-controlled facilitation of PbS nanoarchitectures by understanding nanocrystal growth.
Loc, Welley Siu; Quan, Zewei; Lin, Cuikun; Pan, Jinfong; Wang, Yuxuan; Yang, Kaikun; Jian, Wen-Bin; Zhao, Bo; Wang, Howard; Fang, Jiye
2015-12-07
Nanostructured lead sulphide is a significant component in a number of energy-related sustainable applications such as photovoltaic cells and thermoelectric components. In many micro-packaging processes, dimensionality-controlled nano-architectures with unique properties are required as building blocks. This study investigates different facet-merging growth behaviors through a wet-chemical synthetic strategy to produce high-quality, controlled nanostructures of lead sulphide in various dimensionalities. It was found that 1D nanowires or 2D nanosheets can be obtained by the merging of reactive {111}- or {110}-facets, respectively, while promoting {100} facets in the early stages after nucleation leads to the growth of 0D nanocubes. The influence of temperature, capping ligands and co-solvent in facilitating the crystal facet growth of each intermediate seed is also demonstrated. The novelty of this work lies in the delicate manipulation of various PbS nanoarchitectures based on an understanding of the facet-merging evolution. The synthesis of facet-controlled PbS nanostructures could provide novel building blocks with desired properties for use in many applications.
Merging Features and Optical-Near Infrared Color Gradients of Early-type Galaxies
NASA Astrophysics Data System (ADS)
Kim, Duho; Im, M.
2012-01-01
It has been suggested that merging plays an important role in the formation and evolution of early-type galaxies (ETGs). Optical-NIR color gradients of ETGs in high-density environments are found to be less steep than those of ETGs in low-density environments, hinting at frequent merger activity in ETGs in high-density environments. In order to examine whether the flat color gradients are the result of dry mergers, we studied the relations between merging features, color gradient, and environment for 198 low-redshift ETGs selected from Sloan Digital Sky Survey (SDSS) Stripe82. Near-infrared (NIR) images are taken from the UKIRT Infrared Deep Sky Survey (UKIDSS) Large Area Survey (LAS). Color (r-K) gradients of ETGs with tidal features are slightly flatter than those of relaxed ETGs, but the difference is not significant. We found that massive (>10^11.3 M⊙) relaxed ETGs show 2.5 times less scatter in their color gradients than less massive ETGs. The less scattered color gradients of massive ETGs could be evidence of dry merger processes in the evolution of massive ETGs. We found no relation between the color gradients of ETGs and their environments.
An outburst powered by the merging of two stars inside the envelope of a giant
NASA Astrophysics Data System (ADS)
Hillel, Shlomi; Schreier, Ron; Soker, Noam
2017-11-01
We conduct 3D hydrodynamical simulations of energy deposition into the envelope of a red giant star as a result of the merger of two close main-sequence stars or brown dwarfs, and show that the outcome is a highly non-spherical outflow. Such a violent interaction of a triple stellar system can explain the formation of 'messy', i.e. lacking any kind of symmetry, planetary nebulae and similar nebulae around evolved stars. We do not simulate the merging process, but simply assume that after the tight binary system enters the envelope of the giant star the interaction with the envelope causes the two components, stars or brown dwarfs, to merge and liberate gravitational energy. We deposit the energy over a time period of about 9 h, which is about 1 per cent of the orbital period of the merger product around the centre of the giant star. The ejection of the fast hot gas and its collision with previously ejected mass are very likely to lead to a transient event, i.e. an intermediate-luminosity optical transient.
Revenäs, Åsa; Martin, Cathrin; H Opava, Christina; Brusewitz, Maria; Keller, Christina; Åsenlöf, Pernilla
2015-09-17
User involvement in the development of health care services is important for the viability, usability, and effectiveness of services. This study reports on the second step of the co-design process. The aim was to explore the significant challenges in advancing the co-design process during the requirements specification phase of a mobile Internet service for the self-management of physical activity (PA) in rheumatoid arthritis (RA). A participatory action research design was used to involve lead users and stakeholders as co-designers. Lead users (n=5), a clinical physiotherapist (n=1), researchers (n=2) with knowledge in PA in RA and behavioral learning theories, an eHealth strategist (n=1), and an officer from the patient organization (n=1) collaborated in 4 workshops. Data-collection methods included video recordings and naturalistic observations. The inductive qualitative video-based analysis resulted in 1 overarching theme, merging perspectives, and 2 subthemes reflecting different aspects of merging: (1) finding a common starting point and (2) deciding on design solutions. Seven categories illustrated the specific challenges: reaching shared understanding of goals, clarifying and handling the complexity of participants' roles, clarifying terminology related to system development, establishing the rationale for features, negotiating features, transforming ideas into concrete features, and participants' alignment with the agreed goal and task. Co-designing the system requirements of a mobile Internet service including multiple stakeholders was a complex and extensive collaborative decision-making process. Considering, valuing, counterbalancing, and integrating different perspectives into agreements and solutions (ie, the merging of participants' perspectives) were crucial for moving the process forward and were considered the core challenges of co-design. Further research is needed to replicate the results and to increase knowledge on key factors for a successful co-design of health care services.
Merging Galaxies Create a Binary Quasar
NASA Astrophysics Data System (ADS)
2010-02-01
Astronomers have found the first clear evidence of a binary quasar within a pair of actively merging galaxies. Quasars are the extremely bright centers of galaxies surrounding super-massive black holes, and binary quasars are pairs of quasars bound together by gravity. Binary quasars, like other quasars, are thought to be the product of galaxy mergers. Until now, however, binary quasars have not been seen in galaxies that are unambiguously in the act of merging. But images of a new binary quasar from the Carnegie Institution's Magellan telescope in Chile show two distinct galaxies with "tails" produced by tidal forces from their mutual gravitational attraction. "This is really the first case in which you see two separate galaxies, both with quasars, that are clearly interacting," says Carnegie astronomer John Mulchaey, who made observations crucial to understanding the galaxy merger. Most, if not all, large galaxies, such as our galaxy the Milky Way, host super-massive black holes at their centers. Because galaxies regularly interact and merge, astronomers have assumed that binary super-massive black holes have been common in the Universe, especially during its early history. Black holes can only be detected as quasars when they are actively accreting matter, a process that releases vast amounts of energy. A leading theory is that galaxy mergers trigger accretion, creating quasars in both galaxies. Because most such mergers would have happened in the distant past, binary quasars and their associated galaxies are very far away and therefore difficult for most telescopes to resolve. The binary quasar, labeled SDSS J1254+0846, was initially detected by the Sloan Digital Sky Survey, a large-scale astronomical survey of galaxies and over 120,000 quasars. Further observations by Paul Green of the Harvard-Smithsonian Center for Astrophysics and colleagues* using NASA's Chandra X-ray Observatory and telescopes at Kitt Peak National Observatory in Arizona and Palomar Observatory in California indicated that the object was likely a binary quasar in the midst of a galaxy merger. Carnegie's Mulchaey then used the 6.5-meter Baade-Magellan telescope at the Las Campanas Observatory in Chile to obtain deeper images and more detailed spectroscopy of the merging galaxies. "Just because you see two galaxies that are close to each other in the sky doesn't mean they are merging," says Mulchaey. "But from the Magellan images we can actually see tidal tails, one from each galaxy, which suggests that the galaxies are in fact interacting and are in the process of merging." Thomas Cox, now a fellow at the Carnegie Observatories, corroborated this conclusion using computer simulations of the merging galaxies. When Cox's model galaxies merged, they showed features remarkably similar to what Mulchaey observed in the Magellan images. "The model verifies the merger origin for this binary quasar system," he says. "It also hints that this kind of galaxy interaction is a key component of the growth of black holes and production of quasars throughout our universe." * The authors of the paper published in the Astrophysical Journal are Paul J. Green of the Harvard-Smithsonian Center for Astrophysics, Adam D. Myers of the University of Illinois at Urbana-Champaign, Wayne A. Barkhouse of the University of North Dakota, John S. Mulchaey of the Observatories of the Carnegie Institution for Science, Vardha N. Bennert of the Department of Physics, University of California, Santa Barbara, Thomas J. Cox of the Observatories of the Carnegie Institution for Science, Thomas L. Aldcroft of the Harvard-Smithsonian Center for Astrophysics, and Joan M. Wrobel of the National Radio Astronomy Observatory, Socorro, NM. More information, including images and other multimedia, can be found at: http://chandra.harvard.edu and http://chandra.nasa.gov
ERIC Educational Resources Information Center
Ramirez, Catherine Clark
1994-01-01
Suggests that the telling of vivid stories can help engage elementary students' emotions and increase the chances of fostering an interest in Texas history. Suggests that incorporating elements of the process approach to writing can merge with social studies objectives in creating a curriculum for wisdom. (RS)
Minimum Conflict Mainstreaming.
ERIC Educational Resources Information Center
Awen, Ed; And Others
Computer technology is discussed as a tool for facilitating the implementation of the mainstreaming process. Minimum conflict mainstreaming/merging (MCM) is defined as an approach which utilizes computer technology to circumvent such structural obstacles to mainstreaming as transportation scheduling, screening and assignment of students, testing,…
Global modeling of land water and energy balances. Part III: Interannual variability
Shmakin, A.B.; Milly, P.C.D.; Dunne, K.A.
2002-01-01
The Land Dynamics (LaD) model is tested by comparison with observations of interannual variations in discharge from 44 large river basins for which relatively accurate time series of monthly precipitation (a primary model input) have recently been computed. When results are pooled across all basins, the model explains 67% of the interannual variance of annual runoff ratio anomalies (i.e., anomalies of annual discharge volume, normalized by long-term mean precipitation volume). The new estimates of basin precipitation appear to offer an improvement over those from a state-of-the-art analysis of global precipitation (the Climate Prediction Center Merged Analysis of Precipitation, CMAP), judging from comparisons of parallel model runs and of analyses of precipitation-discharge correlations. When the new precipitation estimates are used, the performance of the LaD model is comparable to, but not significantly better than, that of a simple, semiempirical water-balance relation that uses only annual totals of surface net radiation and precipitation. This implies that the LaD simulations of interannual runoff variability do not benefit substantially from information on geographical variability of land parameters or seasonal structure of interannual variability of precipitation. The aforementioned analyses necessitated the development of a method for downscaling of long-term monthly precipitation data to the relatively short timescales necessary for running the model. The method merges the long-term data with a reference dataset of 1-yr duration, having high temporal resolution. The success of the method, for the model and data considered here, was demonstrated in a series of model-model comparisons and in the comparisons of modeled and observed interannual variations of basin discharge.
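The downscaling method described above (merging long-term monthly data with a 1-yr high-resolution reference) can be illustrated as a per-month rescaling. A minimal sketch, assuming daily resolution and hypothetical array names, since the abstract does not specify the algorithm at this level of detail:

```python
import numpy as np

def downscale_monthly(monthly_totals, ref_daily, ref_month_index):
    """Rescale a 1-yr high-resolution reference series so that each
    month's total matches a target monthly total.

    monthly_totals  : 12 target monthly precipitation totals
    ref_daily       : reference daily values for one year (length 365)
    ref_month_index : month (0-11) of each reference day (length 365)
    """
    out = np.empty_like(ref_daily, dtype=float)
    for m in range(12):
        mask = ref_month_index == m
        ref_total = ref_daily[mask].sum()
        # preserve the reference's sub-monthly structure, match the target total
        scale = monthly_totals[m] / ref_total if ref_total > 0 else 0.0
        out[mask] = ref_daily[mask] * scale
    return out
```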
TED: A Tolerant Edit Distance for segmentation evaluation.
Funke, Jan; Klein, Jonas; Moreno-Noguer, Francesc; Cardona, Albert; Cook, Matthew
2017-02-15
In this paper, we present a novel error measure to compare a computer-generated segmentation of images or volumes against ground truth. This measure, which we call Tolerant Edit Distance (TED), is motivated by two observations that we usually encounter in biomedical image processing: (1) Some errors, like small boundary shifts, are tolerable in practice. Which errors are tolerable is application dependent and should be explicitly expressible in the measure. (2) Non-tolerable errors have to be corrected manually. The effort needed to do so should be reflected by the error measure. Our measure is the minimal weighted sum of split and merge operations to apply to one segmentation such that it resembles another segmentation within specified tolerance bounds. This is in contrast to other commonly used measures like Rand index or variation of information, which integrate small, but tolerable, differences. Additionally, the TED provides intuitive numbers and allows the localization and classification of errors in images or volumes. We demonstrate the applicability of the TED on 3D segmentations of neurons in electron microscopy images, where topological correctness is arguably more important than exact boundary locations. Furthermore, we show that the TED is not just limited to evaluation tasks. We use it as the loss function in a max-margin learning framework to find parameters of an automatic neuron segmentation algorithm. We show that training to minimize the TED, i.e., to minimize crucial errors, leads to higher segmentation accuracy compared to other learning methods. Copyright © 2016. Published by Elsevier Inc.
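For intuition about what split and merge operations count, the sketch below tallies them from the overlap table of two label arrays. This is not the TED itself, which solves a weighted minimization under explicit tolerance bounds; the `min_overlap` threshold is a crude stand-in for that tolerance, and all names are illustrative.

```python
from collections import Counter, defaultdict
import numpy as np

def split_merge_counts(seg_a, seg_b, min_overlap=1):
    """Crude split/merge counts between two label volumes: a segment
    of seg_a overlapped by k > 1 segments of seg_b contributes k - 1
    splits, and symmetrically for merges. Overlaps smaller than
    min_overlap voxels are ignored."""
    pairs = Counter(zip(seg_a.ravel().tolist(), seg_b.ravel().tolist()))
    by_a, by_b = defaultdict(int), defaultdict(int)
    for (la, lb), n in pairs.items():
        if n >= min_overlap:
            by_a[la] += 1
            by_b[lb] += 1
    splits = sum(k - 1 for k in by_a.values())
    merges = sum(k - 1 for k in by_b.values())
    return splits, merges
```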
Critical behavior of a two-step contagion model with multiple seeds
NASA Astrophysics Data System (ADS)
Choi, Wonjun; Lee, Deokjae; Kahng, B.
2017-06-01
A two-step contagion model with a single seed serves as a cornerstone for understanding the critical behaviors and underlying mechanism of discontinuous percolation transitions induced by cascade dynamics. When the contagion spreads from a single seed, a cluster of infected and recovered nodes grows without any cluster merging process. However, when the contagion starts from multiple seeds of O(N), where N is the system size, a node weakened by a seed can be infected more easily when it is in contact with another node infected by a different pathogen seed. This contagion process can be viewed as a cluster merging process in a percolation model. Here we show analytically and numerically that when the density of infectious seeds is relatively small but O(1), the epidemic transition is hybrid, exhibiting both continuous and discontinuous behavior, whereas when it is sufficiently large and reaches a critical point, the transition becomes continuous. We determine the full set of critical exponents describing the hybrid and the continuous transitions. Their critical behaviors differ from those in the single-seed case.
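A minimal simulation of this class of dynamics is sketched below, assuming a networkx graph and a synchronous update rule in which a susceptible node is weakened by one infectious contact and infected by a further contact, while infected nodes recover after one step; the paper's exact rules and rates may differ.

```python
import random
import networkx as nx

def two_step_contagion(G, seed_frac, rng=random.Random(0)):
    """Return the final outbreak size (fraction recovered) of a
    synchronous two-step contagion: S -> W on a first infectious
    contact, W -> I on a further one, I -> R after one step."""
    nodes = list(G)
    state = {v: 'S' for v in nodes}
    for v in rng.sample(nodes, max(1, int(seed_frac * len(nodes)))):
        state[v] = 'I'
    while any(s == 'I' for s in state.values()):
        # count infectious neighbours of each susceptible/weakened node
        hits = {v: sum(state[u] == 'I' for u in G[v])
                for v in nodes if state[v] in ('S', 'W')}
        for v in nodes:                      # infected nodes recover
            if state[v] == 'I':
                state[v] = 'R'
        for v, h in hits.items():            # apply the contagion rules
            if h >= 1 and state[v] == 'W':
                state[v] = 'I'
            elif state[v] == 'S':
                if h >= 2:
                    state[v] = 'I'           # weakened and infected in one step
                elif h == 1:
                    state[v] = 'W'
    return sum(s == 'R' for s in state.values()) / len(nodes)

# e.g. outbreak size on an Erdos-Renyi graph with O(N) seeds:
# two_step_contagion(nx.gnp_random_graph(10000, 5 / 10000), seed_frac=0.01)
```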
Kinetics of Aggregation with Choice
Ben-Naim, Eli; Krapivsky, Paul
2016-12-01
Here we generalize the ordinary aggregation process to allow for choice. In ordinary aggregation, two random clusters merge and form a larger aggregate. In our implementation of choice, a target cluster and two candidate clusters are randomly selected and the target cluster merges with the larger of the two candidate clusters. We study the long-time asymptotic behavior and find that, as in ordinary aggregation, the size density adheres to the standard scaling form. However, aggregation with choice exhibits a number of different features. First, the density of the smallest clusters exhibits anomalous scaling. Second, both the small-size and the large-size tails of the density are overpopulated, at the expense of the density of moderate-size clusters. Finally, we also study the complementary case where the smaller candidate cluster participates in the aggregation process and find an abundance of moderate clusters at the expense of small and large clusters. Additionally, we investigate aggregation processes with choice among multiple candidate clusters and a symmetric implementation where the choice is between two pairs of clusters.
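The merge rule is simple enough to simulate directly. A minimal Monte Carlo sketch, assuming a well-mixed system starting from unit monomers (names illustrative); flipping the comparison so the smaller candidate is chosen gives the complementary case also studied in the abstract.

```python
import random

def aggregate_with_choice(n_monomers, steps, rng=random.Random(1)):
    """Aggregation with choice: pick a target cluster and two candidate
    clusters at random; the target merges with the LARGER candidate."""
    clusters = [1] * n_monomers
    for _ in range(steps):
        if len(clusters) < 3:
            break
        i, j, k = rng.sample(range(len(clusters)), 3)
        # use <= instead of >= for the smaller-candidate variant
        chosen = j if clusters[j] >= clusters[k] else k
        clusters[i] += clusters[chosen]
        clusters.pop(chosen)
    return clusters
```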
Use of internal control T-cell populations in the flow cytometric evaluation for T-cell neoplasms.
Hunt, Alicia M; Shallenberger, Wendy; Ten Eyck, Stephen P; Craig, Fiona E
2016-09-01
Flow cytometry is an important tool for identification of neoplastic T-cells, but immunophenotypic abnormalities are often subtle and must be distinguished from nonneoplastic subsets. Use of internal control (IC) T-cells in the evaluation for T-cell neoplasms was explored, both as a quality measure and as a reference for evaluating abnormal antigen expression. All peripheral blood specimens (3-month period), or those containing abnormal T-cells (29-month period), stained with CD45 V500, CD2 V450, CD3 PE-Cy7, CD7 PE, CD4 Per-CP-Cy5.5, CD8 APC-H7, CD56 APC, CD16&57 FITC, were evaluated. IC T-cells were identified (DIVA, BD Biosciences) and median fluorescence intensity (MFI) recorded. Selected files were merged and reference templates generated (Infinicyt, Cytognos). IC T-cells were present in all specimens, including those with abnormal T-cells, but subsets were less well-represented. IC T-cell CD3 MFI differed between instruments (p = 0.0007) and subsets (p < 0.001), but not specimen categories, and served as a longitudinal process control. Merged files highlighted small unusual IC-T subsets: CD2+(dim) (0.25% total), CD2- (0.03% total). An IC reference template highlighted neoplastic T-cells, but was limited by staining variability (IC CD3 MFI in reference samples differed from that in test samples, p = 0.003). IC T-cells present in the majority of specimens can serve as positive and longitudinal process controls. Use of IC T-cells as an internal reference is limited by variable representation of subsets. Analysis of merged IC T-cells from previously analyzed patient samples can alert the interpreter to less-well-recognized non-neoplastic subsets. However, application of a merged-file IC reference template was limited by staining variability. © 2016 International Clinical Cytometry Society.
The effect of gas dynamics on semi-analytic modelling of cluster galaxies
NASA Astrophysics Data System (ADS)
Saro, A.; De Lucia, G.; Dolag, K.; Borgani, S.
2008-12-01
We study the degree to which non-radiative gas dynamics affect the merger histories of haloes along with subsequent predictions from a semi-analytic model (SAM) of galaxy formation. To this aim, we use a sample of dark-matter-only and non-radiative smoothed particle hydrodynamics (SPH) simulations of four massive clusters. The presence of gas-dynamical processes (e.g. ram pressure from the hot intra-cluster atmosphere) makes haloes more fragile in the runs which include gas. This results in a 25 per cent decrease in the total number of subhaloes at z = 0. The impact on the galaxy population predicted by SAMs is complicated by the presence of 'orphan' galaxies, i.e. galaxies whose parent substructures are reduced below the resolution limit of the simulation. In the model employed in our study, these galaxies survive (unaffected by the tidal stripping process) for a residual merging time that is computed using a variation of the Chandrasekhar formula. Due to ram-pressure stripping, haloes in gas simulations tend to be less massive than their counterparts in the dark matter simulations. The resulting merging times for satellite galaxies are then longer in these simulations. On the other hand, the presence of gas influences the orbits of haloes, making them on average more circular and therefore reducing the estimated merging times with respect to the dark matter only simulation. This effect is particularly significant for the most massive satellites and is (at least in part) responsible for the fact that brightest cluster galaxies in runs with gas have stellar masses which are about 25 per cent larger than those obtained from dark matter only simulations. Our results show that gas dynamics has only a marginal impact on the statistical properties of the galaxy population, but that its impact on the orbits and merging times of haloes strongly influences the assembly of the most massive galaxies.
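For orientation, the Chandrasekhar-style merging-time estimate underlying the orphan survival time looks like the sketch below, which assumes the familiar Binney & Tremaine form t_df ≈ 1.17 r² V_c / (G M_sat ln Λ); the SAM in the paper uses a calibrated variation of this formula, so treat the constant and arguments as illustrative.

```python
def dynamical_friction_time_gyr(r_kpc, vc_kms, m_sat_msun, ln_lambda=3.0):
    """Dynamical-friction merging time t_df ~ 1.17 r^2 V_c / (G M ln L),
    returned in Gyr (1 kpc/(km/s) ~ 0.978 Gyr). Illustrative constants."""
    G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / Msun
    t = 1.17 * r_kpc ** 2 * vc_kms / (G * m_sat_msun * ln_lambda)
    return t * 0.978

# e.g. a 1e11 Msun satellite at 200 kpc in a V_c = 1000 km/s cluster:
# dynamical_friction_time_gyr(200, 1000, 1e11)
```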
Context transfer in reinforcement learning using action-value functions.
Mousavi, Amin; Nadjar Araabi, Babak; Nili Ahmadabadi, Majid
2014-01-01
This paper discusses the notion of context transfer in reinforcement learning tasks. Context transfer, as defined in this paper, implies knowledge transfer between source and target tasks that share the same environment dynamics and reward function but have different states or action spaces. In other words, the agents learn the same task while using different sensors and actuators. This requires the existence of an underlying common Markov decision process (MDP) to which all the agents' MDPs can be mapped. This is formulated in terms of the notion of MDP homomorphism. The learning framework is Q-learning. To transfer the knowledge between these tasks, the feature space is used as a translator and is expressed as a partial mapping between the state-action spaces of different tasks. The Q-values learned during the learning process of the source tasks are mapped to the sets of Q-values for the target task. These transferred Q-values are merged together and used to initialize the learning process of the target task. An interval-based approach is used to represent and merge the knowledge of the source tasks. Empirical results show that the transferred initialization can be beneficial to the learning process of the target task.
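The transfer step itself reduces to mapping and merging table entries. A minimal tabular sketch, assuming dict-based Q-tables and partial state/action maps; the paper represents and merges knowledge with an interval-based approach rather than the plain averaging used here.

```python
from collections import defaultdict

def transfer_q_values(q_source, state_map, action_map):
    """Initialize a target task's Q-table from a source task's Q-table
    via a (partial) state/action mapping, merging mapped values by
    averaging. Names and the averaging rule are illustrative."""
    acc = defaultdict(list)
    for (s, a), q in q_source.items():
        if s in state_map and a in action_map:
            acc[(state_map[s], action_map[a])].append(q)
    return {sa: sum(v) / len(v) for sa, v in acc.items()}
```

The returned table would then seed Q-learning in the target task in place of a zero or random initialization.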
Ierardi, Anna Maria; Petrillo, Mario; Xhepa, Genti; Laganà, Domenico; Piacentino, Filippo; Floridi, Chiara; Duka, Ejona; Fugazzola, Carlo; Carrafiello, Gianpaolo
2016-02-01
Recently, different software packages with the ability to plan ablation volumes have been developed in order to minimize the number of attempts at positioning electrodes and to improve safe overall tumor coverage. To assess the feasibility of three-dimensional cone beam computed tomography (3D CBCT) fusion imaging with "virtual probe" positioning to predict ablation volume in lung tumors treated percutaneously. Pre-procedural contrast-enhanced computed tomography scans (CECT) were merged with a CBCT volume obtained to plan the ablation. An offline tumor segmentation was performed to determine the number of antennae and their positioning within the tumor. The volume of ablation obtained, evaluated on CECT performed after 1 month, was compared with the pre-procedural predicted one. Feasibility was assessed on the basis of accuracy evaluation (visual evaluation [VE] and quantitative evaluation [QE]), technical success (TS), and technical effectiveness (TE). Seven patients with lung tumors treated by percutaneous thermal ablation were selected and treated on the basis of the 3D CBCT fusion imaging. In all cases the volume of ablation predicted was in accordance with that obtained. The difference between predicted ablation volumes and those obtained on CECT at 1 month was 1.8 cm³ (SD ± 2, min. 0.4, max. 0.9) for MW and 0.9 cm³ (SD ± 1.1, min. 0.1, max. 0.7) for RF. Use of pre-procedural 3D CBCT fusion imaging could be useful to define expected ablation volumes. However, more patients are needed to ensure stronger evidence. © The Foundation Acta Radiologica 2015.
NASA Astrophysics Data System (ADS)
2008-08-01
Astronomers have caught multiple massive galaxies in the act of merging about 4 billion years ago. This discovery, made possible by combining the power of the best ground- and space-based telescopes, uniquely supports the favoured theory of how galaxies form. ESO PR Photo 24/08: Merging Galaxies in Groups. How do galaxies form? The most widely accepted answer to this fundamental question is the model of 'hierarchical formation', a step-wise process in which small galaxies merge to build larger ones. One can think of the galaxies forming in a similar way to how streams merge to form rivers, and how these rivers, in turn, merge to form an even larger river. This theoretical model predicts that massive galaxies grow through many merging events in their lifetime. But when did their cosmological growth spurts finish? When did the most massive galaxies get most of their mass? To answer these questions, astronomers study massive galaxies in clusters, the cosmological equivalent of cities filled with galaxies. "Whether the brightest galaxies in clusters grew substantially in the last few billion years is intensely debated. Our observations show that in this time, these galaxies have increased their mass by 50%," says Kim-Vy Tran from the University of Zürich, Switzerland, who led the research. The astronomers made use of a large ensemble of telescopes and instruments, including ESO's Very Large Telescope (VLT) and the Hubble Space Telescope, to study in great detail galaxies located 4 billion light-years away. These galaxies lie in an extraordinary system made of four galaxy groups that will assemble into a cluster. In particular, the team took images with VIMOS and spectra with FORS2, both instruments on the VLT. From these and other observations, the astronomers could identify a total of 198 galaxies belonging to these four groups. The brightest galaxies in each group contain between 100 and 1000 billion stars, a property that makes them comparable to the most massive galaxies belonging to clusters. "Most surprising is that in three of the four groups, the brightest galaxy also has a bright companion galaxy. These galaxy pairs are merging systems," says Tran. The brightest galaxy in each group can be ordered in a time sequence that shows how luminous galaxies continue to grow by merging until recently, that is, in the last 5 billion years. It appears that due to the most recent episode of this 'galactic cannibalism', the brightest galaxies became at least 50% more massive. This discovery provides unique and powerful validation of hierarchical formation as manifested in both galaxy and cluster assembly. "The stars in these galaxies are already old and so we must conclude that the recent merging did not produce a new generation of stars," concludes Tran. "Most of the stars in these galaxies were born at least 7 billion years ago." The team is composed of Kim-Vy H. Tran (Institute for Theoretical Physics, University of Zürich, Switzerland), John Moustakas (New York University, USA), Anthony H. Gonzalez and Stefan J. Kautsch (University of Florida, Gainesville, USA), and Lei Bai and Dennis Zaritsky (Steward Observatory, University of Arizona, USA). The results presented here are published in the Astrophysical Journal Letters: "The Late Stellar Assembly of Massive Cluster Galaxies via Major Merging", by Tran et al.
ERIC Educational Resources Information Center
Watters, Kate
2008-01-01
In April 2007 "new" Ofsted ("old" Ofsted merged with the Adult Learning Inspectorate) became responsible for inspection of a very wide range of provision, including all post-compulsory education and training for adults. Significant changes in inspection processes had already been introduced for local authority adult and…
Madanecki, Piotr; Bałut, Magdalena; Buckley, Patrick G; Ochocka, J Renata; Bartoszewski, Rafał; Crossman, David K; Messiaen, Ludwine M; Piotrowski, Arkadiusz
2018-01-01
High-throughput technologies generate considerable amounts of data which often require bioinformatic expertise to analyze. Here we present High-Throughput Tabular Data Processor (HTDP), a platform-independent Java program. HTDP works on any character-delimited column data (e.g. BED, GFF, GTF, PSL, WIG, VCF) from multiple text files and supports merging, filtering and converting of data that is produced in the course of high-throughput experiments. HTDP can also utilize itemized sets of conditions from external files for complex or repetitive filtering/merging tasks. The program is intended to aid global, real-time processing of large data sets using a graphical user interface (GUI). Therefore, no prior expertise in programming, regular expressions, or command line usage is required of the user. Additionally, no a priori assumptions are imposed on the internal file composition. We demonstrate the flexibility and potential of HTDP in real-life research tasks including microarray and massively parallel sequencing, i.e. identification of disease-predisposing variants in next generation sequencing data as well as comprehensive concurrent analysis of microarray and sequencing results. We also show the utility of HTDP in technical tasks including data merging, reduction and filtering with external criteria files. HTDP was developed to address functionality that is missing or rudimentary in other GUI software for processing character-delimited column data from high-throughput technologies. Flexibility, in terms of input file handling, provides long-term potential functionality in high-throughput analysis pipelines, as the program is not limited by the currently existing applications and data formats. HTDP is available as Open Source software (https://github.com/pmadanecki/htdp).
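HTDP is a GUI Java program, so no code from it is shown here; the Python sketch below only illustrates the class of operation it automates, filtering a character-delimited file against an itemized external criteria file. The column indices and file layout are assumptions for a BED-like input.

```python
import csv

def filter_by_criteria(src_path, out_path, criteria_path,
                       key_cols=(0, 3), delimiter='\t'):
    """Keep rows whose values at key_cols (e.g. chromosome and name
    columns of a BED-like file) appear as a pair in the criteria file."""
    with open(criteria_path) as f:
        wanted = {tuple(line.rstrip('\n').split(delimiter)[:len(key_cols)])
                  for line in f if line.strip()}
    with open(src_path) as src, open(out_path, 'w', newline='') as dst:
        writer = csv.writer(dst, delimiter=delimiter)
        for row in csv.reader(src, delimiter=delimiter):
            if tuple(row[i] for i in key_cols) in wanted:
                writer.writerow(row)
```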
Multi-observation PET image analysis for patient follow-up quantitation and therapy assessment
NASA Astrophysics Data System (ADS)
David, S.; Visvikis, D.; Roux, C.; Hatt, M.
2011-09-01
In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations of semi-quantitative parameters, restricted to the maximum SUV measured in PET scans during the treatment. Such measurements do not reflect overall tumor volume and radiotracer uptake variations. The proposed approach is based on multi-observation image analysis, merging several PET acquisitions to assess tumor metabolic volume and uptake variations. The fusion algorithm is based on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images. We compared the multi-observation fusion performance to threshold-based methods proposed for the assessment of the therapeutic response based on functional volumes. On simulated datasets, the adaptive threshold applied independently on both images led to higher errors than the ASEM fusion; on clinical datasets, it failed to provide coherent measurements for four patients out of seven due to aberrant delineations. The ASEM method demonstrated improved and more robust estimation, leading to more pertinent measurements. Future work will consist of extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on biological tumor volume definition for radiotherapy applications.
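The estimation style is easiest to see in one dimension. A minimal stochastic EM for a two-class Gaussian mixture is sketched below: unlike plain EM, the E-step draws hard labels from the posterior. This conveys only the flavor of the ASEM fusion, whose multi-observation formulation over co-registered PET images is more involved.

```python
import numpy as np

def sem_two_class(x, iters=50, rng=np.random.default_rng(0)):
    """Stochastic EM for a 1-D two-class Gaussian mixture (sketch)."""
    lab = (x > np.median(x)).astype(int)          # crude initial labeling
    for _ in range(iters):
        for k in (0, 1):                          # keep both classes non-empty
            if not np.any(lab == k):
                lab[rng.integers(x.size)] = k
        mu = np.array([x[lab == k].mean() for k in (0, 1)])
        sd = np.array([x[lab == k].std() + 1e-6 for k in (0, 1)])
        pi = np.array([(lab == k).mean() for k in (0, 1)])
        dens = np.stack([pi[k] / sd[k] *
                         np.exp(-0.5 * ((x - mu[k]) / sd[k]) ** 2)
                         for k in (0, 1)])
        post1 = dens[1] / dens.sum(axis=0)        # posterior of class 1
        lab = (rng.random(x.size) < post1).astype(int)  # stochastic E-step
    return mu, sd, pi
```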
2014-01-01
Background There are many methodological challenges in the conduct and analysis of cluster randomised controlled trials, but one that has received little attention is that of post-randomisation changes to cluster composition. To illustrate this, we focus on the issue of cluster merging, considering the impact on the design, analysis and interpretation of trial outcomes. Methods We explored the effects of merging clusters on study power using standard methods of power calculation. We assessed the potential impacts on study findings of both homogeneous cluster merges (involving clusters randomised to the same arm of a trial) and heterogeneous merges (involving clusters randomised to different arms of a trial) by simulation. To determine the impact on bias and precision of treatment effect estimates, we applied standard methods of analysis to different populations under analysis. Results Cluster merging produced a systematic reduction in study power. This effect depended on the number of merges and was most pronounced when variability in cluster size was at its greatest. Simulations demonstrate that the impact on analysis was minimal when cluster merges were homogeneous, with impact on study power being balanced by a change in observed intracluster correlation coefficient (ICC). We found a decrease in study power when cluster merges were heterogeneous, and the estimate of treatment effect was attenuated. Conclusions Examples of cluster merges found in previously published reports of cluster randomised trials were typically homogeneous rather than heterogeneous. Simulations demonstrated that trial findings in such cases would be unbiased. However, simulations also showed that any heterogeneous cluster merges would introduce bias that would be hard to quantify, as well as having negative impacts on the precision of estimates obtained. Further methodological development is warranted to better determine how to analyse such trials appropriately. Interim recommendations include avoidance of cluster merges where possible, discontinuation of clusters following heterogeneous merges, allowance for potential loss of clusters and additional variability in cluster size in the original sample size calculation, and use of appropriate ICC estimates that reflect cluster size. PMID:24884591
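The power loss from merging follows from the standard design effect, 1 + (m - 1)·ICC. A minimal sketch, assuming equal cluster sizes and the usual normal approximation (real calculations must also handle the cluster-size variability that the authors show worsens the effect):

```python
import math
from scipy.stats import norm

def power_cluster_rct(k_per_arm, m, icc, effect, sd, alpha=0.05):
    """Approximate power of a two-arm cluster RCT with k_per_arm
    clusters of size m per arm; design effect = 1 + (m - 1) * icc."""
    deff = 1 + (m - 1) * icc
    n_eff = k_per_arm * m / deff                 # effective sample size per arm
    se = sd * math.sqrt(2.0 / n_eff)             # SE of the mean difference
    z = abs(effect) / se - norm.ppf(1 - alpha / 2)
    return norm.cdf(z)

# Homogeneous merges halve the cluster count and double cluster size,
# e.g. power_cluster_rct(10, 40, 0.05, 0.3, 1.0) before merging versus
# power_cluster_rct(5, 80, 0.05, 0.3, 1.0) after: power drops markedly.
```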
The role of relationships in connecting social work research and evidence-based practice.
Jones, Johnny M; Sherr, Michael E
2014-01-01
Critics of evidence-based practice (EBP) often challenge the efficacy of applying social work research in practice. Such skepticism underscores the historic chasm that still exists between social work researchers and practitioners. If taught and implemented consistently, the EBP model can mend the connection between researchers and practitioners by merging their roles. Merging their roles, however, requires a renewed emphasis on relationships in the research process. This article explores the role of relationships in social work research. Using a researcher/practitioner continuum, we assess the types of interactions faculty have with stakeholders. We then offer strategies for cultivating relationships with stakeholders that lead to community-derived and implemented research that is critical to advancing the widespread use of EBP in social work.
Shedding of dual structures in the wake of a surface-mounted low aspect ratio cone
NASA Astrophysics Data System (ADS)
Chen, Zixiang; Martinuzzi, Robert J.
2018-04-01
The periodic shedding of vortex pairs in the turbulent wake of a surface-mounted right cone of aspect ratio 0.867 protruding through a thin turbulent boundary layer is investigated experimentally. A phase-averaged volumetric velocity field is reconstructed from planar stereoscopic particle image velocimetry. During a typical (phase-averaged) shedding cycle, counter-rotating base vortices alternately form. These are tilted and stretched to merge with streamwise tip vortices. The merged structure sheds and is convected downstream. A synthesis of earlier observations suggests that a similar shedding process exists for other low-aspect-ratio tapered geometries and is more complex than the shedding patterns observed for cantilevered cylinders, despite similarities in the mean flow field structure.
Highly Resolved Measurements of a Developing Strong Collisional Plasma Shock
NASA Astrophysics Data System (ADS)
Rinderknecht, Hans G.; Park, H.-S.; Ross, J. S.; Amendt, P. A.; Higginson, D. P.; Wilks, S. C.; Haberberger, D.; Katz, J.; Froula, D. H.; Hoffman, N. M.; Kagan, G.; Keenan, B. D.; Vold, E. L.
2018-03-01
The structure of a strong collisional shock front forming in a plasma is directly probed for the first time in laser-driven gas-jet experiments. Thomson scattering of a 526.5 nm probe beam was used to diagnose temperature and ion velocity distribution in a strong shock (M ≈ 11) propagating through a low-density (ρ ≈ 0.01 mg/cc) plasma composed of hydrogen. A forward-streaming population of ions traveling in excess of the shock velocity was observed to heat and slow down on an unmoving, unshocked population of cold protons, until ultimately the populations merge and begin to thermalize. Instabilities are observed during the merging, indicating a uniquely plasma-phase process in shock front formation.
Dual jets from binary black holes.
Palenzuela, Carlos; Lehner, Luis; Liebling, Steven L
2010-08-20
The coalescence of supermassive black holes--a natural outcome when galaxies merge--should produce gravitational waves and would likely be associated with energetic electromagnetic events. We have studied the coalescence of such binary black holes within an external magnetic field produced by the expected circumbinary disk surrounding them. Solving the Einstein equations to describe black holes interacting with surrounding plasma, we present numerical evidence for possible jets driven by these systems. Extending the process described by Blandford and Znajek for a single, spinning black hole, the picture that emerges suggests that the electromagnetic field extracts energy from the orbiting black holes, which ultimately merge and settle into the standard Blandford-Znajek scenario. Emissions along these jets could potentially be observable at large distances.
Lakeside: Merging Urban Design with Scientific Analysis
Guzowski, Leah; Catlett, Charlie; Woodbury, Ed
2018-01-16
Researchers at the U.S. Department of Energy's Argonne National Laboratory and the University of Chicago are developing tools that merge urban design with scientific analysis to improve the decision-making process associated with large-scale urban developments. One such tool, called LakeSim, has been prototyped with an initial focus on consumer-driven energy and transportation demand, through a partnership with the Chicago-based architectural and engineering design firm Skidmore, Owings & Merrill, Clean Energy Trust and developer McCaffery Interests. LakeSim began with the need to answer practical questions about urban design and planning, requiring a better understanding about the long-term impact of design decisions on energy and transportation demand for a 600-acre development project on Chicago's South Side - the Chicago Lakeside Development project.
NASA Technical Reports Server (NTRS)
Newell, Patrick T.; Sibeck, David G.; Meng, Ching-I
1995-01-01
Magnetosheath plasma penetrates into the magnetosphere, creating the particle cusp, and similarly the interplanetary magnetic field (IMF) B_y component penetrates the magnetopause. We reexamine the phenomenology of such penetration to investigate implications for the magnetopause merging site. Three models are popular: (1) the 'antiparallel' model, in which merging occurs where the local magnetic shear is largest (usually high magnetic latitude); (2) a tilted merging line passing through the subsolar point but extending to very high latitudes; or (3) a tilted merging line passing through the subsolar point in which most merging occurs within a few Earth radii of the equatorial plane and local noon (subsolar merging). It is difficult to distinguish between the first two models, but the third implies some very different predictions. We show that properties of the particle cusp imply that plasma injection into the magnetosphere occurs most often at high magnetic latitudes. In particular, we note the following: (1) The altitude of the merging site inferred from midaltitude cusp ion pitch angle dispersion is typically 8-12 R_E. (2) The highest ion energy observable when moving poleward through the cusp drops long before the bulk of the cusp plasma is reached, implying that ions are swimming upstream against the sheath flow shortly after merging. (3) Low-energy ions are less able to enter the winter cusp than the summer cusp. (4) The local time behavior of the cusp as a function of B_y and B_z corroborates predictions of the high-latitude merging models. We also reconsider the penetration of the IMF B_y component onto closed dayside field lines. Our approach, in which closed field lines move to fill in flux voids created by asymmetric magnetopause flux erosion, shows that strict subsolar merging cannot account for the observations.
Results of the Fluid Merging Viscosity Measurement International Space Station Experiment
NASA Technical Reports Server (NTRS)
Ethridge, Edwin C.; Kaukler, William; Antar, Basil
2009-01-01
The purpose of FMVM is to measure the rate of coalescence of two highly viscous liquid drops and correlate the results with the liquid viscosity and surface tension. The experiment takes advantage of the low-gravity free-floating conditions in space to permit the unconstrained coalescence of two nearly spherical drops. The merging of the drops is accomplished by deploying them from a syringe and suspending them on Nomex threads, followed by the astronaut's manipulation of one of the drops toward a stationary droplet until contact is achieved. Coalescence and merging occur due to shape relaxation and reduction of surface energy, resisted by the viscous drag within the liquid. Experiments were conducted onboard the International Space Station in July of 2004 and subsequently in May of 2005. The coalescence was recorded on video and down-linked near real-time. When the coefficient of surface tension for the liquid is known, the increase in contact radius can be used to determine the coefficient of viscosity for that liquid. The viscosity is determined by fitting the experimental speed to the theoretically calculated contact-radius speed for the same experimental parameters. Recent fluid-dynamical numerical simulations of the coalescence process will be presented. The results are important for a better understanding of the coalescence process. The experiment is also relevant to liquid-phase sintering, free-form in-situ fabrication, and as a potential new method for measuring the viscosity of viscous glass formers at low shear rates.
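For a feel of how viscosity falls out of the contact-radius history, one can invert Frenkel's early-time neck-growth law for two coalescing viscous spheres, (x/a)² ≈ 3γt/(2ηa). This asymptotic relation is an assumption for illustration only; the experiment fits the measured contact-radius speed to full numerical simulations rather than to this law.

```python
def viscosity_from_neck_growth(radius, gamma, t, x):
    """Estimate viscosity eta from the contact-neck radius x of two
    coalescing drops of radius `radius` (m) at time t (s), with surface
    tension gamma (N/m), assuming Frenkel's early-time law
    (x/a)^2 ~ 3*gamma*t / (2*eta*a). Illustrative only."""
    return 3.0 * gamma * radius * t / (2.0 * x ** 2)
```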
NASA Astrophysics Data System (ADS)
Tang, Qingxin; Bo, Yanchen; Zhu, Yuxin
2016-04-01
Merging multisensor aerosol optical depth (AOD) products is an effective way to produce more spatiotemporally complete and accurate AOD products. A spatiotemporal statistical data fusion framework based on a Bayesian maximum entropy (BME) method was developed for merging satellite AOD products in East Asia. The advantages of the presented merging framework are that it not only utilizes the spatiotemporal autocorrelations but also explicitly incorporates the uncertainties of the AOD products being merged. The satellite AOD products used for merging are the Moderate Resolution Imaging Spectroradiometer (MODIS) Collection 5.1 Level-2 AOD products (MOD04_L2) and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Deep Blue Level-2 AOD products (SWDB_L2). The results show that the average completeness of the merged AOD data is 95.2%, which is significantly superior to the completeness of MOD04_L2 (22.9%) and SWDB_L2 (20.2%). Comparing the merged AOD to Aerosol Robotic Network AOD records, the correlation coefficient (0.75), root-mean-square error (0.29), and mean bias (0.068) of the merged AOD are close to those of the MODIS AOD (0.82, 0.19, and 0.059, respectively). In the regions where both MODIS and SeaWiFS have valid observations, the accuracy of the merged AOD is higher than those of the MODIS and SeaWiFS AODs. Even in regions where both MODIS and SeaWiFS AODs are missing, the accuracy of the merged AOD is close to the accuracy in regions where both MODIS and SeaWiFS have valid observations.
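Stripped of the spatiotemporal autocorrelation model, the per-cell merge can be pictured as an uncertainty-weighted average. A minimal stand-in, assuming two co-gridded AOD arrays with per-pixel error variances and NaN where observations are missing; the BME framework additionally propagates space-time correlation, which is what fills cells where both products are missing.

```python
import numpy as np

def merge_aod(aod_a, var_a, aod_b, var_b):
    """Grid-cell-wise inverse-variance merge of two AOD fields, with
    missing observations encoded as NaN (illustrative stand-in only)."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    a_ok, b_ok = ~np.isnan(aod_a), ~np.isnan(aod_b)
    out = np.full_like(aod_a, np.nan)
    both = a_ok & b_ok
    out[both] = ((aod_a[both] * w_a[both] + aod_b[both] * w_b[both])
                 / (w_a[both] + w_b[both]))
    out[a_ok & ~b_ok] = aod_a[a_ok & ~b_ok]   # fall back to the single
    out[~a_ok & b_ok] = aod_b[~a_ok & b_ok]   # available product
    return out
```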
Modeling cooperative driving behavior in freeway merges.
DOT National Transportation Integrated Search
2011-11-01
Merging locations are major sources of freeway bottlenecks and are therefore important for freeway operations analysis. Microscopic simulation tools have been successfully used to analyze merging bottlenecks and to design optimum geometric configurat...
Quintard, Adrien; Constantieux, Thierry; Rodriguez, Jean
2013-12-02
Three is a lucky number: An enantioselective transformation of allylic alcohols into β-chiral saturated alcohols has been developed by combining two distinct metal- and organocatalyzed catalytic cycles. This waste-free triple cascade process merges an iron-catalyzed borrowing-hydrogen step with an aminocatalyzed nucleophilic addition reaction. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
1998-01-01
A Space Act Agreement between Kennedy Space Center and Surtreat Southeast, Inc., resulted in a new treatment that keeps buildings from corroding away over time. Structural corrosion is a multi-billion dollar problem in the United States. The agreement merged Kennedy Space Center's research into electrical treatments of structural corrosion with chemical processes developed by Surtreat. Combining NASA and Surtreat technologies has resulted in a unique process with broad corrosion-control applications.
Galaxy mergers and gravitational lens statistics
NASA Technical Reports Server (NTRS)
Rix, Hans-Walter; Maoz, Dan; Turner, Edwin L.; Fukugita, Masataka
1994-01-01
We investigate the impact of hierarchical galaxy merging on the statistics of gravitational lensing of distant sources. Since no definite theoretical predictions for the merging history of luminous galaxies exist, we adopt a parameterized prescription, which allows us to adjust the expected number of pieces comprising a typical present galaxy at z approximately 0.65. The existence of global parameter relations for elliptical galaxies and constraints on the evolution of the phase space density in dissipationless mergers allow us to limit the possible evolution of galaxy lens properties under merging. We draw two lessons from implementing this lens evolution into statistical lens calculations: (1) the total optical depth to multiple imaging (e.g., of quasars) is quite insensitive to merging; (2) merging leads to a smaller mean separation of observed multiple images. Because merging does not reduce drastically the expected lensing frequency, it cannot make lambda-dominated cosmologies compatible with the existing lensing observations. A comparison with the data from the Hubble Space Telescope (HST) Snapshot Survey shows that models with little or no evolution of the lens population are statistically favored over strong merging scenarios. A specific merging scenario proposed by Toomre can be rejected (at the 95% level) by such a comparison. Some versions of the scenario proposed by Broadhurst, Ellis, & Glazebrook are statistically acceptable.
Modeling merging behavior at lane drops : [tech transfer summary].
DOT National Transportation Integrated Search
2015-02-01
A better understanding of the merging behavior of drivers will lead to the development of better lane-drop traffic-control plans and strategies, which will provide better guidance to drivers for safer merging.
Unravelling merging behaviors and electrostatic properties of CVD-grown monolayer MoS{sub 2} domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hao, Song; Yang, Bingchu, E-mail: bingchuyang@csu.edu.cn; Hunan Key Laboratory for Super-Microstructure and Ultrafast Process, Central South University, 932 South Lushan Road, Changsha 410012
The presence of grain boundaries is inevitable for chemical vapor deposition (CVD)-grown MoS{sub 2} domains owing to various merging behaviors, which greatly limits their potential applications in novel electronic and optoelectronic devices. It is therefore of great significance to unravel the merging behaviors of the synthesized polygon-shape MoS{sub 2} domains. Here we provide systematic investigations of the merging behaviors and electrostatic properties of CVD-grown polycrystalline MoS{sub 2} crystals by multiple means. Morphological results exhibit various polygon-shape features, ascribed to polycrystalline crystals merged from triangle-shape MoS{sub 2} single crystals. The thickness of triangle- and polygon-shape MoS{sub 2} crystals is identical, as manifested by Raman intensity and peak position mappings. Three merging behaviors are proposed to illustrate the formation mechanisms of the observed polygon-shaped MoS{sub 2} crystals. The combined photoemission electron microscopy and Kelvin probe force microscopy results reveal that the surface potential of perfectly merged crystals is identical, which has an important implication for fabricating MoS{sub 2}-based devices.
Actin dynamics provides membrane tension to merge fusing vesicles into the plasma membrane
Wen, Peter J.; Grenklo, Staffan; Arpino, Gianvito; Tan, Xinyu; Liao, Hsien-Shun; Heureaux, Johanna; Peng, Shi-Yong; Chiang, Hsueh-Cheng; Hamid, Edaeni; Zhao, Wei-Dong; Shin, Wonchul; Näreoja, Tuomas; Evergren, Emma; Jin, Yinghui; Karlsson, Roger; Ebert, Steven N.; Jin, Albert; Liu, Allen P.; Shupliakov, Oleg; Wu, Ling-Gang
2016-01-01
Vesicle fusion is executed via formation of an Ω-shaped structure (Ω-profile), followed by closure (kiss-and-run) or merging of the Ω-profile into the plasma membrane (full fusion). Ω-profile closure limits release but recycles vesicles economically, whereas Ω-profile merging facilitates release but couples to classical endocytosis for recycling. Despite its crucial role in determining exocytosis/endocytosis modes, how Ω-profile merging is mediated is poorly understood in endocrine cells and neurons containing small ∼30–300 nm vesicles. Here, using confocal and super-resolution STED imaging, force measurements, pharmacology and gene knockout, we show that dynamic assembly of filamentous actin, involving ATP hydrolysis, N-WASP and formin, mediates Ω-profile merging by providing sufficient plasma membrane tension to shrink the Ω-profile in neuroendocrine chromaffin cells containing ∼300 nm vesicles. Actin-directed compounds also induce Ω-profile accumulation at lamprey synaptic active zones, suggesting that actin may mediate Ω-profile merging at synapses. These results uncover molecular and biophysical mechanisms underlying Ω-profile merging. PMID:27576662
Merged SAGE II / MIPAS / OMPS Ozone Record: Impact of Transfer Standard on Ozone Trends.
NASA Astrophysics Data System (ADS)
Kramarova, N. A.; Laeng, A.; von Clarmann, T.; Stiller, G. P.; Walker, K. A.; Zawodny, J. M.; Plieninger, J.
2017-12-01
The deseasonalized ozone anomalies from the SAGE II, MIPAS and OMPS-LP datasets are merged into one long record. Two versions of the dataset will be presented: one using the ACE-FTS instrument and one using the MLS instrument as the transfer standard. The data are provided in 10-degree latitude bins from 60N to 60S for the period from October 1984 to March 2017. The main differences between the merged ozone record presented in this study and the merged SAGE II / Ozone_CCI / OMPS-Saskatoon dataset by V. Sofieva are: (1) the OMPS-LP data are from the NASA GSFC version 2 processor; (2) the MIPAS 2002-2004 data are taken into the record; (3) the data are merged using a transfer standard. In overlapping periods, data are merged as weighted means, where the weights are inversely proportional to the standard errors of the means (SEM) of the corresponding individual monthly means. The merged dataset comes with uncertainty estimates. Ozone trends are calculated from both versions of the dataset, and the impact of the transfer standard on the obtained trends is discussed.
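For reference, the overlap-period rule stated above can be written out explicitly. Note that the abstract specifies weights inversely proportional to the SEM itself; classical inverse-variance weighting would instead use 1/SEM^2:

\bar{O}_{\mathrm{merged}} = \frac{\sum_i w_i\,\bar{O}_i}{\sum_i w_i}, \qquad w_i \propto \frac{1}{\mathrm{SEM}_i},

where \bar{O}_i is the monthly-mean ozone anomaly of instrument i and \mathrm{SEM}_i its standard error of the mean.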
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeno, K.
The aim of this note is to better understand the effect of merging the Gold bunches in the Booster into one on the resulting AGS longitudinal emittance, as compared to not merging them. The reason it matters whether they are merged or not is that they pass through a stripping foil in the BtA line. Data were taken last run (Run 17) for the case where the bunches are not merged, and they will be compared with data from cases where the bunches are merged. Previous data from Tandem operation will also be considered. There are two main pieces to this puzzle. The first is the ε growth associated with the energy spread due to 'energy straggling' in the BtA stripping foil, and the second is the effective ε growth associated with the energy loss that occurs while passing through the foil. Both of these effects depend on whether or not the Booster bunches have been merged into one.
Particle Acceleration via Reconnection Processes in the Supersonic Solar Wind
NASA Astrophysics Data System (ADS)
Zank, G. P.; le Roux, J. A.; Webb, G. M.; Dosch, A.; Khabarova, O.
2014-12-01
An emerging paradigm for the dissipation of magnetic turbulence in the supersonic solar wind is via localized small-scale reconnection processes, essentially between quasi-2D interacting magnetic islands. Charged particles trapped in merging magnetic islands can be accelerated by the electric field generated by magnetic island merging and the contraction of magnetic islands. We derive a gyrophase-averaged transport equation for particles experiencing pitch-angle scattering and energization in a super-Alfvénic flowing plasma experiencing multiple small-scale reconnection events. A simpler advection-diffusion transport equation for a nearly isotropic particle distribution is derived. The dominant charged particle energization processes are (1) the electric field induced by quasi-2D magnetic island merging and (2) magnetic island contraction. The magnetic island topology ensures that charged particles are trapped in regions where they experience repeated interactions with the induced electric field or contracting magnetic islands. Steady-state solutions of the isotropic transport equation with only the induced electric field and a fixed source yield a power-law spectrum for the accelerated particles with index α = -(3 + M_A)/2, where M_A is the Alfvén Mach number. Considering only magnetic island contraction yields power-law-like solutions with index -3(1 + τ_c/(8τ_diff)), where τ_c/τ_diff is the ratio of timescales between magnetic island contraction and charged particle diffusion. The general solution is a power-law-like solution with an index that depends on the Alfvén Mach number and the timescale ratio τ_diff/τ_c. Observed power-law distributions of energetic particles in the quiet supersonic solar wind at 1 AU may be a consequence of particle acceleration associated with dissipative small-scale reconnection processes in a turbulent plasma, including the widely reported c^{-5} (c being the particle speed) spectra observed by Fisk & Gloeckler and Mewaldt et al.
NASA Astrophysics Data System (ADS)
Beach, A. L., III; Early, A. B.; Chen, G.; Parker, L.
2014-12-01
NASA has conducted airborne tropospheric chemistry studies for about three decades. These field campaigns have generated a great wealth of observations, characterized by a wide range of trace gases and aerosol properties. The airborne observational data have often been used in the assessment and validation of models and satellite instruments. The ASDC Toolset for Airborne Data (TAD) is being designed to meet the user community's needs for manipulating aircraft data for scientific research on climate change and air-quality-relevant issues. Given the sheer volume of data variables across field campaigns and instruments reporting data on different time scales, these data are often difficult and time-intensive for researchers to analyze. The TAD web application is designed to provide an intuitive user interface (UI) to facilitate quick and efficient discovery from a vast number of airborne variables and data. Users are given the option to search based on high-level parameter groups, individual common names, mission and platform, as well as date ranges. Experienced users can immediately filter by keyword using the global search option. Once users have chosen their required variables, they are given the option to either request PI data files based on their search criteria or create merged data, i.e., geo-located data from one or more measurement PIs. The purpose of the merged data feature is to allow users to compare data from one flight, as not all data from each flight are taken on the same time scale. Time bases can be continuous or can be taken from one of the measurements' time scales and intervals. After an order is submitted and processed, an ASDC email is sent to the user with a link for data download. The TAD user interface design, application architecture, and proposed future enhancements will be presented.
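The core of such a merge, placing measurements reported on different time scales onto a common time base, can be sketched with pandas. The instrument names, rates, and 1 s tolerance below are illustrative assumptions, not TAD's actual implementation:

```python
import pandas as pd

# A 1 Hz instrument defines the common time base.
co = pd.DataFrame({
    "time": pd.to_datetime(["2014-07-01 12:00:00", "2014-07-01 12:00:01",
                            "2014-07-01 12:00:02"]),
    "co_ppbv": [95.0, 97.5, 96.2],
})
# A slower, irregularly sampled instrument to be merged onto that base.
o3 = pd.DataFrame({
    "time": pd.to_datetime(["2014-07-01 12:00:00.4", "2014-07-01 12:00:02.1"]),
    "o3_ppbv": [41.0, 42.3],
})

# Attach to each 1 Hz record the nearest ozone value within 1 second;
# records with no match within tolerance get NaN rather than stale data.
merged = pd.merge_asof(co, o3, on="time", direction="nearest",
                       tolerance=pd.Timedelta("1s"))
print(merged)
```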
Serial data acquisition for GEM-2D detector
NASA Astrophysics Data System (ADS)
Kolasinski, Piotr; Pozniak, Krzysztof T.; Czarski, Tomasz; Linczuk, Maciej; Byszuk, Adrian; Chernyshova, Maryna; Juszczyk, Bartlomiej; Kasprowicz, Grzegorz; Wojenski, Andrzej; Zabolotny, Wojciech; Zienkiewicz, Pawel; Mazon, Didier; Malard, Philippe; Herrmann, Albrecht; Vezinet, Didier
2014-11-01
This article discusses a fast data-acquisition and histogramming method for the X-ray GEM detector. The whole histogramming process is performed by FPGA chips (Spartan-6 series from Xilinx). The results of the histogramming process are stored in internal FPGA memory and then sent to a PC, where the data are merged and processed in MATLAB. The structure of the firmware functionality implemented in the FPGAs is described. Examples of test measurements and results are presented.
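The PC-side merge step is conceptually simple: partial histograms accumulated independently (for example, per FPGA or per readout channel) combine by bin-wise addition, provided they share identical binning. A minimal sketch in Python (the authors used MATLAB; the synthetic data here are stand-ins for FPGA outputs):

```python
import numpy as np

rng = np.random.default_rng(1)
edges = np.linspace(0, 10, 101)          # shared energy binning (a.u.)

# Three "channels" each histogram their own 5000 synthetic events.
partial = [np.histogram(rng.exponential(2.0, 5000), bins=edges)[0]
           for _ in range(3)]

# Merge by bin-wise addition; valid only because the binning is shared.
merged = np.sum(partial, axis=0)
assert merged.sum() == 3 * 5000          # no counts lost in the merge
```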
NASA Astrophysics Data System (ADS)
Tornambe, Amedeo
1989-08-01
Theoretical rates of mergings of envelope-deprived components of binary systems, which can give rise to supernova events, are described. The effects of the various assumptions on the physical properties of the progenitor system and of its evolutionary behavior through common envelope phases are discussed. Four cases have been analyzed: CO-CO, He-CO, and He-He double degenerate mergings, and He star-CO dwarf merging. It is found that, above a critical efficiency of the common envelope action in system shrinkage, the rate of CO-CO mergings is not strongly sensitive to the efficiency. Below this critical value, no CO-CO systems will survive for times longer than a few Gyr. In contrast, He-CO dwarf systems will continue to merge at a reasonable rate up to 20 Gyr and more, even under extreme conditions.
Merging the Intellectual and Technical Infrastructures in Higher Education: The Internet Example.
ERIC Educational Resources Information Center
Hannah, Richard L.
1998-01-01
The pervasiveness of information technology in higher education requires rethinking notions of student, course, curriculum, and other traditional concepts and processes of instruction. This article discusses Internet issues (access, portability, reliability, reboot priorities and back-up communications, student preparedness) and describes new…
Quantum Logic: Approach a Child's Environment from "Inside."
ERIC Educational Resources Information Center
Rhodes, William C.
1987-01-01
With the advent of quantum mechanics, physics has merged with psychology, and cognitive science has been revolutionized. Quantum logic supports the notion of influencing the environment by increasing the child's capacity for cognitive processing. This special educational approach is theoretically more effective than social and political…
Simulation of 6 to 3 to 1 merge and squeeze of Au77+ bunches in AGS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, C. J.
2016-05-09
In order to increase the intensity per Au77+ bunch at AGS extraction, a 6 to 3 to 1 merge scheme was developed and implemented by K. Zeno during the 2016 RHIC run. For this scheme, 12 Booster loads, each consisting of a single bunch, are delivered to AGS per AGS magnetic cycle. The bunch from Booster is itself the result of a 4 to 2 to 1 merge which is carried out on a flat porch during the Booster magnetic cycle. Each Booster bunch is injected into a harmonic 24 bucket on the AGS injection porch. In order to fit into the buckets and allow for the AGS injection kicker rise time, the bunch width must be reduced by exciting quadrupole oscillations just before extraction from Booster. The bunches are injected into two groups of six adjacent harmonic 24 buckets. In each group the 6 bunches are merged into 3 by bringing on RF harmonic 12 while reducing harmonic 24. This is a straightforward 2 to 1 merge (in which two adjacent bunches are merged into one). One ends up with two groups of three adjacent bunches sitting in harmonic 12 buckets. These bunches are accelerated to an intermediate porch for further merging. Doing the merge on a porch that sits above injection energy helps reduce losses that are believed to be due to the space-charge force acting on the bunched particles. (The 6 to 3 merge is done on the injection porch because the harmonic 24 frequency on the intermediate porch would be too high for the AGS RF cavities.) On the intermediate porch each group of 3 bunches is merged into one by bringing on RF harmonics 8 and 4 and then reducing harmonics 12 and 8. One ends up with 2 bunches, each the result of a 6 to 3 to 1 merge and each sitting in a harmonic 4 bucket. This puts 6 Booster loads into each bunch. Each merged bunch needs to be squeezed into a harmonic 12 bucket for subsequent acceleration. This is done by again bringing on harmonic 8 and then harmonic 12. Results of simulations of the 6 to 3 to 1 merge and the subsequent squeeze into harmonic 12 buckets are presented in this note. In particular, they provide a benchmark for what can be achieved with the available RF voltages.
NASA Astrophysics Data System (ADS)
Hu, Lei; Wu, Xuefeng; Andreoni, Igor; Ashley, Michael C. B.; Cooke, Jeff; Cui, Xiangqun; Du, Fujia; Dai, Zigao; Gu, Bozhong; Hu, Yi; Lu, Haiping; Li, Xiaoyan; Li, Zhengyang; Liang, Ensi; Liu, Liangduan; Ma, Bin; Shang, Zhaohui; Sun, Tianrui; Suntzeff, N. B.; Tao, Charling; Udden, Syed A.; Wang, Lifan; Wang, Xiaofeng; Wen, Haikun; Xiao, Di; Su, Jin; Yang, Ji; Yang, Shihai; Yuan, Xiangyan; Zhou, Hongyan; Zhang, Hui; Zhou, Jilin; Zhu, Zonghong
2017-10-01
The LIGO detection of gravitational waves (GW) from merging black holes in 2015 marked the beginning of a new era in observational astronomy. The detection of an electromagnetic signal from a GW source is the critical next step to explore in detail the physics involved. The Antarctic Survey Telescopes (AST3), located at Dome A, Antarctica, are uniquely situated for rapid-response time-domain astronomy with their continuous night-time coverage during the austral winter. We report optical observations of the GW source (GW 170817) in the nearby galaxy NGC 4993 using AST3. The data show a rapidly fading transient at around 1 day after the GW trigger, with the i-band magnitude declining from 17.23±0.13 to 17.72±0.09 in ~0.8 hour. The brightness and time evolution of the optical transient associated with GW 170817 are broadly consistent with the predictions of models involving merging binary neutron stars. We infer from our data that the merging process ejected about ~10^{-2} solar masses of radioactive material at a speed of up to 30% of the speed of light.
Taminau, Jonatan; Meganck, Stijn; Lazar, Cosmin; Steenhoff, David; Coletta, Alain; Molter, Colin; Duque, Robin; de Schaetzen, Virginie; Weiss Solís, David Y; Bersini, Hugues; Nowé, Ann
2012-12-24
With an abundance of microarray gene expression data sets available through public repositories, new possibilities lie in combining multiple existing data sets. In this new context, analysis itself is no longer the problem; retrieving and consistently integrating all these data before delivering them to the wide variety of existing analysis tools becomes the new bottleneck. We present the newly released inSilicoMerging R/Bioconductor package which, together with the earlier released inSilicoDb R/Bioconductor package, allows consistent retrieval, integration and analysis of publicly available microarray gene expression data sets. Inside the inSilicoMerging package, a set of five visual and six quantitative validation measures are available as well. By providing (i) access to uniformly curated and preprocessed data, (ii) a collection of techniques to remove the batch effects between data sets from different sources, and (iii) several validation tools enabling the inspection of the integration process, these packages enable researchers to fully explore the potential of combining gene expression data for downstream analysis. The power of using both packages is demonstrated by programmatically retrieving and integrating gene expression studies from the InSilico DB repository [https://insilicodb.org/app/].
Comparison of online and offline based merging methods for high resolution rainfall intensities
NASA Astrophysics Data System (ADS)
Shehu, Bora; Haberlandt, Uwe
2016-04-01
Accurate rainfall intensities with high spatial and temporal resolution are crucial for urban flow prediction. Commonly, raw or bias-corrected radar fields are used for forecasting, while different merging products are employed for simulation. The merging products are proven to be adequate for rainfall intensity estimation; however, their application in forecasting is limited, as they were developed for offline mode. This study aims at adapting and refining the offline merging techniques for online implementation, and at comparing the performance of these methods for high-resolution rainfall data. Radar bias correction based on mean fields and quantile mapping are analyzed individually and are also implemented in conditional merging. Special attention is given to the impact of different spatial and temporal filters on the predictive skill of all methods. Raw radar data and kriging interpolation of station data are considered as a reference to check the benefit of the merged products. The methods are applied for several extreme events in the period 2006-2012 caused by different meteorological conditions, and their performance is evaluated by split sampling. The study area lies within the 112 km radius of the Hannover radar in Lower Saxony, Germany, and the data set consists of 80 recording stations at 5 min time steps. The results of this study reveal how the performance of the methods is affected by the adjustment of the radar data, the choice of merging method and the selected event. Merging techniques can be used to improve the performance of online rainfall estimation, which opens the way to applying merging products in forecasting.
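Conditional merging, one of the techniques compared above, follows a simple recipe: interpolate the gauge observations to the grid, interpolate the radar values sampled at the gauge locations to the same grid, and add the radar's deviation field to the gauge interpolation, so the result carries the radar's spatial structure while matching the gauges on average. A schematic sketch, with inverse-distance weighting standing in for the kriging normally used (all names and numbers are illustrative):

```python
import numpy as np

def idw(xy_obs, values, xy_grid, power=2.0):
    """Inverse-distance weighting; a simple stand-in for kriging."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

def conditional_merge(xy_gauge, rain_gauge, radar_at_gauge, radar_grid, xy_grid):
    gauge_field = idw(xy_gauge, rain_gauge, xy_grid)                  # step 1
    radar_field_from_gauges = idw(xy_gauge, radar_at_gauge, xy_grid)  # step 2
    deviation = radar_grid - radar_field_from_gauges                  # step 3
    return gauge_field + deviation                                    # step 4

# Toy usage: 3 gauges, 4 grid points (coordinates in km, rain in mm/h).
xy_g = np.array([[0., 0.], [10., 0.], [5., 8.]])
xy_grid = np.array([[2., 2.], [8., 1.], [5., 5.], [9., 7.]])
merged = conditional_merge(xy_g, np.array([2.0, 3.5, 1.2]),
                           np.array([1.5, 3.0, 1.0]),
                           np.array([1.8, 2.9, 1.4, 2.2]), xy_grid)
print(merged)
```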
Kohno, Jun-Ya; Higashiura, Tetsu; Eguchi, Takaaki; Miura, Shumpei; Ogawa, Masato
2016-08-11
Materials work in multicomponent forms. A wide range of compositions must be tested to obtain the optimum composition for a specific application. We propose optimization using a series of small levitated single particles. We describe a tandem-trap apparatus for merging liquid droplets and analyzing the merged droplets and/or dried particles that are produced from the merged droplets under levitation conditions. Droplet merging was confirmed by Raman spectroscopic studies of the levitated particles. The tandem-trap apparatus enables the synthesis of a particle and spectroscopic investigation of its properties. This provides a basis for future investigation of the properties of levitated single particles.
Colliding Neutron Stars as the Source of Heavy Elements
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-09-01
Where do the heavy elements (the chemical elements beyond iron) in our universe come from? One of the primary candidate sources is the merger of two neutron stars, but recent observations have cast doubt on this model. Can neutron-star mergers really be responsible?

Elements from Collisions?
[Figure: Periodic table showing the origin of each chemical element. Those produced by the r-process are shaded orange and attributed to supernovae in this image; though supernovae are one proposed source of r-process elements, an alternative source is the merger of two neutron stars. Credit: Cmglee]
When a binary-neutron-star system inspirals and the two neutron stars smash into each other, a shower of neutrons is released. These neutrons are thought to bombard the surrounding atoms, rapidly producing heavy elements in what is known as r-process nucleosynthesis. So could these mergers be responsible for producing the majority of the universe's heavy r-process elements? Proponents of this model argue that it is supported by observations. The overall amount of heavy r-process material in the Milky Way, for instance, is consistent with the expected ejection amounts from mergers, based both on predicted merger rates for neutron stars in the galaxy and on the observed rates of short gamma-ray bursts (which are thought to accompany double-neutron-star mergers).

Challenges from Ultra-Faint Dwarfs
Recently, however, r-process elements have been observed in ultra-faint dwarf satellite galaxies. This discovery raises two major challenges to the merger model for heavy-element production:
1. When neutron stars are born during a core-collapse supernova, mass is ejected, providing the stars with asymmetric natal kicks. During the second collapse in a double-neutron-star binary, wouldn't the kick exceed the low escape velocity of an ultra-faint dwarf, ejecting the binary before it could merge and enrich the galaxy?
2. Ultra-faint dwarfs have very old stellar populations, and the observation of r-process elements in these stars requires mergers to have occurred very early in the galaxy's history. Can double-neutron-star systems merge quickly enough to account for the observed chemical enrichment?

Small Kicks and Fast Mergers
[Figure: Fraction of double-neutron-star systems that remain bound vs. the magnitude of the kick they receive. A typical escape velocity for an ultra-faint dwarf is ~15 km/s; roughly 55-65% of binaries receive smaller kicks than that and wouldn't be ejected from an ultra-faint dwarf. Credit: Beniamini et al. 2016]
Led by Paz Beniamini, a team of scientists from the Racah Institute of Physics at the Hebrew University of Jerusalem has set out to answer these questions. Using the statistics of our galaxy's double-neutron-star population, the team performed Monte Carlo simulations to estimate the distributions of mass ejection and kick velocities for these systems. Beniamini and collaborators find that, for typical initial separations, more than half of neutron-star binaries are born with small enough kicks that they remain bound and aren't ejected even from small, ultra-faint dwarf galaxies. The team also used their statistics to calculate the time until merger for the population of binaries, finding that ~90% of the double-neutron-star systems merge within 300 Myr, and around 15% merge within 100 Myr, quick enough to enrich even the old population of stars. This population of systems that remain confined to the galaxy and merge rapidly can therefore explain the observations of r-process material in ultra-faint dwarf galaxies.
Beniamini and collaborators' work suggests that the merger of neutron stars is indeed a viable model for the production of heavy elements in our universe.

Citation: Paz Beniamini et al 2016 ApJ 829 L13. doi:10.3847/2041-8205/829/1/L13
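A back-of-the-envelope version of the kick argument can be reproduced in a few lines of Monte Carlo. The isotropic Gaussian (Maxwellian speed) kick distribution and its dispersion below are assumptions chosen purely for illustration (Beniamini et al. derive their kick distribution from the observed Galactic double-neutron-star population); only the ~15 km/s escape velocity is taken from the text above:

```python
import numpy as np

rng = np.random.default_rng(0)

sigma = 9.0       # km/s, ASSUMED 1D dispersion of the systemic kick
v_escape = 15.0   # km/s, typical ultra-faint dwarf escape velocity (from text)

# Draw isotropic 3D kicks: each Cartesian component ~ N(0, sigma).
kicks = rng.normal(0.0, sigma, size=(100_000, 3))
speed = np.linalg.norm(kicks, axis=1)

# Fraction of systems kicked more slowly than the escape velocity.
# NOTE: this number tracks the assumed sigma almost one-to-one; with
# sigma = 9 km/s it happens to land near the 55-65% range quoted above.
print(f"retained fraction: {np.mean(speed < v_escape):.2%}")
```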
Media Messages and Perceptions of the Affordable Care Act during the Early Phase of Implementation.
Fowler, Erika Franklin; Baum, Laura M; Barry, Colleen L; Niederdeppe, Jeff; Gollust, Sarah E
2017-02-01
Public opinion about the Affordable Care Act (ACA) has been polarized since the law's passage. Past research suggests these conditions would make any media influence on the public limited at best. However, during the early phase of implementation, locally broadcast ACA-related media messages (in the form of paid health insurance and political advertisements and news media stories) abounded as advocates, insurance marketers, and politicians sought to shape the public's perceptions of the law. To what extent did message exposure affect ACA perceptions during the first open enrollment period? We merge data on volumes of messaging at the media-market level with nationally representative survey data to examine the relationship between estimated exposure to media messaging and the public's perceptions of how informed they were about and favorable toward the ACA in October 2013. We find that higher volumes of insurance advertising and local news coverage are associated with participants' perceptions of being informed about the law. Volumes of insurance advertising and of local news coverage are also associated with participants' favorability toward the law, but the relationship varies with partisanship, supporting the growing body of research describing partisan perceptual bias. Copyright © 2017 by Duke University Press.
Kinetics of Diffusional Droplet Growth in a Liquid/Liquid Two-Phase System
NASA Technical Reports Server (NTRS)
Glicksman, M. E.; Fradkov, V. E.
1996-01-01
We address the problem of diffusional interactions in a finite-sized cluster of spherical particles for volume fractions V(sub v) in the range 0-0.01. We determined the quasi-static monopole diffusion solution for n particles distributed at random in a continuous matrix. A global mass conservation condition is employed, obviating the need for any external boundary condition. The numerical results provide the instantaneous (snapshot) growth or shrinkage rate of each particle, precluding the need for extensive time-dependent computations. The close connection between these snapshot results and the coarse-grained kinetic constants is discussed. A square-root dependence of the deviations of the rate constants from their zero-volume-fraction values is found for the higher V(sub v) investigated. This behavior is consistent with predictions from diffusional Debye-Hückel screening theory. By contrast, a cube-root dependence, reported in earlier numerical studies, is found for the lower V(sub v) investigated. The roll-over region of the volume fraction where the two asymptotics merge depends on the number of particles, n, alone. A theoretical estimate for the roll-over point predicts that the corresponding V(sub v) varies as n(sup -2), in good agreement with the numerical results.
NASA Astrophysics Data System (ADS)
Pradhan, Kalpataru; Yunoki, Seiji
2017-12-01
Using a two-band double-exchange model with Jahn-Teller lattice distortions and superexchange interactions, supplemented by quenched disorder, at an electron density n = 0.65, we explicitly demonstrate the coexistence of the n = 1/2-type (π, π) charge-ordered and the ferromagnetic nanoclusters above the ferromagnetic transition temperature Tc in colossal magnetoresistive (CMR) manganites. The resistivity increases due to the enhancement of the volume fraction of the charge-ordered and the ferromagnetic nanoclusters upon decreasing the temperature down to Tc. The ferromagnetic nanoclusters start to grow and merge, and the volume fraction of the charge-ordered nanoclusters decreases below Tc, leading to the sharp drop in the resistivity. By applying a small external magnetic field h, we show that the resistivity above Tc increases, as compared with the case when h = 0, a fact that further confirms the coexistence of the charge-ordered and the ferromagnetic nanoclusters. In addition, we show that the volume fraction of the charge-ordered nanoclusters decreases upon increasing the bandwidth, and consequently the resistivity hump diminishes for large-bandwidth manganites, in good qualitative agreement with experiments. The obtained insights from our calculations provide a complete pathway to understand the phase competition in CMR manganites.
Extraction of drainage networks from large terrain datasets using high throughput computing
NASA Astrophysics Data System (ADS)
Gong, Jianya; Xie, Jibo
2009-02-01
Advanced digital photogrammetry and remote sensing technology produces large terrain datasets (LTD). How to process and use these LTD has become a big challenge for GIS users. Extracting drainage networks, which are basic for hydrological applications, from LTD is one of the typical applications of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond the GB size. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage networks extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units using natural watershed boundaries instead of using regular 1-dimensional (strip-wise) and 2-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. A HTC environment is employed to test the proposed methods with real datasets.
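The partition-process-merge pattern described above can be sketched independently of any particular DEM library. In this toy version (all data structures hypothetical), each computing unit stands in for a watershed-bounded DEM tile, extract() is a placeholder for the per-unit drainage extraction, and, because the units follow natural watershed boundaries, the merge step is plain concatenation with no cross-boundary stitching:

```python
from concurrent.futures import ProcessPoolExecutor

def extract(unit):
    """Placeholder per-unit extraction: keep links above a flow-accumulation
    threshold (real HTC jobs would run a full drainage algorithm per unit)."""
    return [link for link in unit if link["accum"] >= 100]

# Each inner list is one watershed-bounded computing unit.
units = [
    [{"id": 1, "accum": 250}, {"id": 2, "accum": 40}],
    [{"id": 3, "accum": 900}],
    [{"id": 4, "accum": 120}, {"id": 5, "accum": 80}],
]

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        partial = list(pool.map(extract, units))   # independent jobs
    # Merge step: concatenate per-watershed networks.
    network = [link for part in partial for link in part]
    print(network)
```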
NASA Astrophysics Data System (ADS)
de Gouw, J. A.; Warneke, C.; Stohl, A.; Wollny, A. G.; Brock, C. A.; Cooper, O. R.; Holloway, J. S.; Trainer, M.; Fehsenfeld, F. C.; Atlas, E. L.; Donnelly, S. G.; Stroud, V.; Lueb, A.
2006-05-01
The NOAA WP-3 aircraft intercepted aged forest fire plumes from Alaska and western Canada during several flights of the NEAQS-ITCT 2k4 mission in 2004. Measurements of acetonitrile (CH3CN) indicated that the air masses had been influenced by biomass burning. The locations of the plume intercepts were well described using emissions estimates and calculations with the transport model FLEXPART. The best description of the data was generally obtained when FLEXPART injected the forest fire emissions to high altitudes in the model. The observed plumes were generally drier than the surrounding air masses at the same altitude, suggesting that the fire plumes had been processed by clouds and that moisture had been removed by precipitation. Different degrees of photochemical processing of the plumes were determined from the measurements of aromatic VOCs. The removal of aromatic VOCs was slow considering the transport times estimated from the FLEXPART model. This suggests that the average OH levels were low during the transport, which may be explained by the low humidity and high concentrations of carbon monoxide and other pollutants. In contrast with previous work, no strong secondary production of acetone, methanol and acetic acid is inferred from the measurements. A clear case of removal of submicron particle volume and acetic acid due to precipitation scavenging was observed.
Computational methods for a three-dimensional model of the petroleum-discovery process
Schuenemeyer, J.H.; Bawiec, W.J.; Drew, L.J.
1980-01-01
A discovery-process model devised by Drew, Schuenemeyer, and Root can be used to predict the amount of petroleum to be discovered in a basin from some future level of exploratory effort; the predictions are based on historical drilling and discovery data. Because marginal costs of discovery and production are a function of field size, the model can be used to make estimates of future discoveries within deposit size classes. The modeling approach is a geometric one in which the area searched is a function of the size and shape of the targets being sought. A high correlation is assumed between the surface-projection area of the fields and the volume of petroleum. To predict how much oil remains to be found, the area searched must be computed, and the basin size and discovery efficiency must be estimated. The basin is assumed to be explored randomly rather than by pattern drilling. The model may be used to compute independent estimates of future oil at different depth intervals for a play involving multiple producing horizons. We have written FORTRAN computer programs that are used with Drew, Schuenemeyer, and Root's model to merge the discovery and drilling information and perform the necessary computations to estimate undiscovered petroleum. These programs may be modified easily for the estimation of remaining quantities of commodities other than petroleum. © 1980.
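The geometric search idea lends itself to a compact simulation: each wildcat well samples the basin at random, and the chance of hitting an undiscovered field is proportional to that field's surface-projection area, so large fields tend to be found early. A sketch with invented numbers (illustrative Python, not the USGS FORTRAN programs):

```python
import random

random.seed(42)

basin_area = 10_000.0                        # km^2, hypothetical
undiscovered = [500.0, 120.0, 60.0, 25.0, 10.0]  # field areas, km^2, hypothetical

discoveries = []
for well in range(200):                      # 200 random wildcats
    x = random.uniform(0, basin_area)
    # A well hits field i if its random location falls within that field's
    # (cumulative) area -- the random-search assumption of the model.
    cum = 0.0
    for area in undiscovered:
        cum += area
        if x < cum:
            discoveries.append((well, area))
            undiscovered.remove(area)
            break

print(discoveries)   # larger fields tend to appear first in the list
```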
Investigation of alternative work zone merging sign configurations.
DOT National Transportation Integrated Search
2013-12-01
This study investigated the effect of an alternative merge sign configuration within a freeway work zone. In this alternative configuration, the graphical lane closed sign from the MUTCD was compared with a MERGE/arrow sign on one side and a RIGH...
A computational geometry approach to pore network construction for granular packings
NASA Astrophysics Data System (ADS)
van der Linden, Joost H.; Sufian, Adnan; Narsilio, Guillermo A.; Russell, Adrian R.; Tordesillas, Antoinette
2018-03-01
Pore network construction provides the ability to characterize and study the pore space of inhomogeneous and geometrically complex granular media in a range of scientific and engineering applications. Various approaches to the construction have been proposed; however, subtle implementation details are frequently omitted, open access to source code is limited, and few studies compare multiple algorithms in the context of a specific application. This study presents, in detail, a new pore network construction algorithm, and provides a comprehensive comparison with two other well-established Delaunay triangulation-based pore network construction methods. Source code is provided to encourage further development. The proposed algorithm avoids the expensive non-linear optimization procedure in existing Delaunay approaches, and is robust in the presence of polydispersity. Algorithms are compared in terms of structural, geometrical and advanced connectivity parameters, focusing on fluid flow characteristics. Sensitivity of the various networks to permeability is assessed through network (Stokes) simulations and finite-element (Navier-Stokes) simulations. Results highlight strong dependencies of pore volume, pore connectivity, throat geometry and fluid conductance on the degree of tetrahedra merging and the specific characteristics of the throats targeted by the merging algorithm. The paper concludes with practical recommendations on the applicability of the three investigated algorithms.
How wide in magnetic local time is the cusp? An event study
NASA Astrophysics Data System (ADS)
Maynard, N. C.; Weber, E. J.; Weimer, D. R.; Moen, J.; Onsager, T.; Heelis, R. A.; Egeland, A.
1997-03-01
A unique pass of the DMSP F11 satellite, longitudinally cutting through the cusp and mantle, combined with simultaneous optical measurements of the dayside cusp from Svalbard has been used to determine the width in local time of the cusp. We have shown from this event study that the cusp was at least 3.7 hours wide in magnetic local time. These measurements provide a lower limit for the cusp width. The observed cusp optical emissions are relatively constant, considering the processes which lead to the 630.0 nm emissions, and require precipitating electron flux to be added each minute during the DMSP pass throughout the local time extent observed by the imaging photometer and probably over the whole extent of the cusp defined by DMSP data. We conclude that the electron fluxes which produce the cusp aurora are from a process which must have been operable sometime during each minute but could have had both temporal and spatial variations. The measured width along with models of cusp precipitation provide the rationale to conclude that the region of flux tube opening in the dayside merging process involves the whole frontside magnetopause and can extend beyond the dawn-dusk terminator. The merging process for this event was found to be continuous, although spatially and temporally variable.
Validating a magnetic reconnection model for the magnetopause
NASA Astrophysics Data System (ADS)
Schultz, Colin
2012-01-01
Originating in the Sun's million-degree corona, the solar wind flows at supersonic speeds into interplanetary space, carrying with it the solar magnetic field. As the solar wind reaches Earth's orbit, its interaction with the geomagnetic field forms the magnetosphere, a bubble-like structure within the solar wind flow that shields Earth from direct exposure to the solar wind as well as to the highly energetic charged particles produced during solar storms. Under certain orientations, the magnetic field entrained in the solar wind, known as the interplanetary magnetic field (IMF), merges with the geomagnetic field, transferring mass, momentum, and energy to the magnetosphere. The merging of these two distinct magnetic fields occurs through magnetic reconnection, a fundamental plasma-physical process that converts magnetic energy into kinetic energy and heat.
Merging WW and WW + jet with Minlo
Hamilton, Keith; Melia, Tom; Monni, Pier Francesco; ...
2016-09-12
We present a simulation program for the production of a pair of W bosons in association with a jet, that can be used in conjunction with general-purpose shower Monte Carlo generators, according to the Powheg method. We have further adapted and implemented the Minlo′ method on top of the NLO calculation underlying our W+W- + jet generator. Thus, the resulting simulation achieves NLO accuracy not only for inclusive distributions in W+W- + jet production but also in W+W- production, i.e., when the associated jet is not resolved, without the introduction of any unphysical merging scale. This work represents the first extension of the Minlo′ method, in its original form, to the case of a genuine underlying 2 → 2 process, with non-trivial virtual corrections.
Longitudinal study of skin aging: from microrelief to wrinkles.
Bazin, Roland; Lévêque, Jean Luc
2011-05-01
To study the changes in skin microrelief and periocular wrinkles during the aging process. Replicas of the crow's feet area of volunteers were recorded in 1987 and 2008 and observed comparatively. Characteristic features were quantified by image analysis. Observation shows that some microrelief features disappear and even merge with wrinkles that become more marked. Some primary lines also tend to merge to form thin new wrinkles. Quantitative data support these observations: the size of small and medium objects of skin relief decreases with age while large objects are becoming larger. Over 21 years, in the group studied, the total area of the detected objects remains quite constant. Only the distribution between small and large detected objects (microrelief features and wrinkles, respectively) is modified. © 2011 John Wiley & Sons A/S.
Merging Electronic Health Record Data and Genomics for Cardiovascular Research
Hall, Jennifer L.; Ryan, John J.; Bray, Bruce E.; Brown, Candice; Lanfear, David; Newby, L. Kristin; Relling, Mary V.; Risch, Neil J.; Roden, Dan M.; Shaw, Stanley Y.; Tcheng, James E.; Tenenbaum, Jessica; Wang, Thomas N.; Weintraub, William S.
2017-01-01
The process of scientific discovery is rapidly evolving. The funding climate has influenced a favorable shift in scientific discovery toward the use of existing resources such as the electronic health record. The electronic health record enables long-term outlooks on human health and disease, in conjunction with multidimensional phenotypes that include laboratory data, images, vital signs, and other clinical information. Initial work has confirmed the utility of the electronic health record for understanding mechanisms and patterns of variability in disease susceptibility, disease evolution, and drug responses. The addition of biobanks and genomic data to the information contained in the electronic health record has been demonstrated. The purpose of this statement is to discuss the current challenges in and the potential for merging electronic health record data and genomics for cardiovascular research. PMID:26976545
Highly Resolved Measurements of a Developing Strong Collisional Plasma Shock
Rinderknecht, Hans G.; Park, H. -S.; Ross, J. S.; ...
2018-03-02
In this paper, the structure of a strong collisional shock front forming in a plasma is directly probed for the first time in laser-driven gas-jet experiments. Thomson scattering of a 526.5 nm probe beam was used to diagnose temperature and ion velocity distribution in a strong shock (M ~ 11) propagating through a low-density (ρ ~ 0.01 mg/cc) plasma composed of hydrogen. A forward-streaming population of ions traveling in excess of the shock velocity was observed to heat and slow down on an unmoving, unshocked population of cold protons, until ultimately the populations merge and begin to thermalize. Finally, instabilities are observed during the merging, indicating a uniquely plasma-phase process in shock front formation.
User Manual for the PROTEUS Mesh Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Micheal A.; Shemon, Emily R.
2015-06-01
This report describes the various mesh tools that are provided with the PROTEUS code, giving descriptions of both the input and output. In many cases, examples are provided along with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and MT_RadialLattice.x codes. The former allows conversion between most mesh types handled by PROTEUS, while the latter allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific to a given mesh tool (such as .axial or .merge) can be used as "mesh" input for any of the mesh tools discussed in this manual.
Electro-Fermentation - Merging Electrochemistry with Fermentation in Industrial Applications.
Schievano, Andrea; Pepé Sciarria, Tommy; Vanbroekhoven, Karolien; De Wever, Heleen; Puig, Sebastià; Andersen, Stephen J; Rabaey, Korneel; Pant, Deepak
2016-11-01
Electro-fermentation (EF) merges traditional industrial fermentation with electrochemistry. An imposed electrical field influences the fermentation environment and microbial metabolism in either a reductive or oxidative manner. The benefit of this approach is to produce target biochemicals with improved selectivity, increase carbon efficiency, limit the use of additives for redox balance or pH control, enhance microbial growth, or in some cases enhance product recovery. We discuss the principles of electrically driven fermentations and how EF can be used to steer both pure-culture and microbiota-based fermentations. An overview is given of the advantages EF may bring to both existing and innovative industrial fermentation processes, and of the doors that might be opened in waste biomass utilization towards added-value biorefineries. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
van Rossum, Anne C.; Lin, Hai Xiang; Dubbeldam, Johan; van der Herik, H. Jaap
2018-04-01
In machine vision, typical heuristic methods to extract parameterized objects out of raw data points are the Hough transform and RANSAC. Bayesian models carry the promise of optimally extracting such parameterized objects given a correct definition of the model and the type of noise at hand. One category of solvers for Bayesian models is Markov chain Monte Carlo (MCMC) methods. Naive implementations of MCMC methods suffer from slow convergence in machine vision due to the complexity of the parameter space. Towards this end, blocked Gibbs and split-merge samplers have been developed that assign multiple data points to clusters at once. In this paper we introduce a new split-merge sampler, the triadic split-merge sampler, that performs steps between two and three randomly chosen clusters. This has two advantages. First, it reduces the asymmetry between the split and merge steps. Second, it is able to propose a new cluster composed of data points from two different clusters. Both advantages speed up convergence, which we demonstrate on a line extraction problem. We show that the triadic split-merge sampler outperforms the conventional split-merge sampler. Although this new MCMC sampler is demonstrated in a machine vision context, its applications extend to the very general domain of statistical inference.
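To make the move structure concrete, here is a toy split-merge chain on 1D data. This is the conventional two-cluster variant (the paper's triadic sampler additionally proposes moves across three clusters), the score is an ad-hoc stand-in for a log marginal likelihood, and the acceptance rule deliberately omits the proposal-ratio bookkeeping required for exact detailed balance. It illustrates the mechanics, not a valid posterior sampler:

```python
import math, random

random.seed(0)
data = [0.1, 0.2, 0.15, 5.0, 5.2, 4.9, 9.8, 10.1]

def score(clusters):
    """Ad-hoc objective: penalize within-cluster spread and cluster count."""
    s = 0.0
    for c in clusters:
        mu = sum(data[i] for i in c) / len(c)
        s -= sum((data[i] - mu) ** 2 for i in c)
        s -= 2.0
    return s

def merge_move(clusters):
    a, b = random.sample(range(len(clusters)), 2)
    merged = clusters[a] | clusters[b]
    return [c for i, c in enumerate(clusters) if i not in (a, b)] + [merged]

def split_move(clusters):
    i = random.randrange(len(clusters))
    c = list(clusters[i])
    if len(c) < 2:
        return clusters
    random.shuffle(c)
    k = random.randrange(1, len(c))
    rest = [cl for j, cl in enumerate(clusters) if j != i]
    return rest + [set(c[:k]), set(c[k:])]

state = [set(range(len(data)))]            # start with one big cluster
for step in range(2000):
    prop = merge_move(state) if (len(state) > 1 and random.random() < 0.5) \
           else split_move(state)
    if math.log(random.random()) < score(prop) - score(state):
        state = prop

print(sorted(sorted(c) for c in state))    # typically recovers 3 groups
```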
ERIC Educational Resources Information Center
Haiming, Liu; Gaowa, Naren; Shu, Wang
2013-01-01
With the advancements of nine years of universal compulsory education, the development of China's basic education has resulted in new demands aimed at improving the overall quality of basic education in rural areas. Closings and consolidation are important measures in this regard. In the process of merging and consolidation, the construction of a…
Family Adventure Programming for Troubled Adolescents.
ERIC Educational Resources Information Center
Gerstein, Jaclyn S.
The family adventure program merges traditional family therapy and adventure therapy to provide a more effective therapeutic process for the troubled adolescent. Family adventure programming is based on the assumption that the family has the skills and resources for positive change and growth. The stressful nature of adventure activities removes…
A Socio-Psychological Exploration of Fyodor Dostoyevsky's Crime and Punishment
ERIC Educational Resources Information Center
Uwasomba, Chijioke
2009-01-01
Using a socio-psychological approach, the essay explores Fyodor Dostoyevsky's Crime and Punishment. The exploration highlights Dostoyevsky's heavy reliance on the use of psychological realism, showing in the process the intricate interplay between psychology, sociology and literature. In the novel, the reader comes across the merging of the…
Estimating error cross-correlations in soil moisture data sets using extended collocation analysis
USDA-ARS?s Scientific Manuscript database
Consistent global soil moisture records are essential for studying the role of hydrologic processes within the larger earth system. Various studies have shown the benefit of assimilating satellite-based soil moisture data into water balance models or merging multi-source soil moisture retrievals int...
Authenticating Historical Fiction: Rationale and Process
ERIC Educational Resources Information Center
Groce, Eric; Groce, Robin
2005-01-01
In merging social studies education with language arts, classroom teachers are utilizing selections of historical fiction to teach critical literacy skills while also meeting social studies standards. While most of the selections used for teaching history and social studies themes are strong in terms of literary merit, they may be deficient in…
NASA Technical Reports Server (NTRS)
Tilton, James C. (Inventor)
2010-01-01
A method, computer readable storage, and apparatus for implementing recursive segmentation of data with spatial characteristics into regions, including splitting-remerging of pixels with contiguous region designations and a user-controlled parameter for providing a preference for merging adjacent regions to eliminate window artifacts.
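In the spirit of the abstract (and not as a reconstruction of the patented algorithm), a single region-merging step with a spatial preference might look as follows; the parameter name spatial_wt is hypothetical. Biasing merges toward adjacent regions is what suppresses artificial seams ("window artifacts") along processing-window boundaries:

```python
def merge_once(means, adjacency, spatial_wt=0.5):
    """Return the pair of regions with the smallest weighted dissimilarity."""
    best, pair = float("inf"), None
    ids = list(means)
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            a, b = ids[i], ids[j]
            d = abs(means[a] - means[b])       # spectral dissimilarity
            if (a, b) not in adjacency and (b, a) not in adjacency:
                d += spatial_wt                # penalize non-adjacent merges
            if d < best:
                best, pair = d, (a, b)
    return pair

means = {1: 0.10, 2: 0.12, 3: 0.80}           # per-region mean intensity
adjacency = {(1, 2), (2, 3)}                  # region adjacency graph
print(merge_once(means, adjacency))           # -> (1, 2): similar and adjacent
```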
Merging Disparate Data and Numerical Model Results for Dynamically Constrained Nowcasts
1999-09-30
LONG-TERM GOALS: The long-term goal of our research is to quantify submesoscale dynamical processes and understand their interactions.
ERIC Educational Resources Information Center
Smith, Corinne Roth
A multidimensional approach to assessment of children with learning difficulties is examined. The approach explores factors along five dimensions: (1) learner characteristics (motivation, social-emotional maturity, cognitive abilities and styles); (2) task-based contributors (match of tasks to maturational levels and to cognitive style); (3)…
Massive Black Hole Binary Mergers and their Gravitational Waves
NASA Astrophysics Data System (ADS)
Kelley, Luke Zoltan; Blecha, Laura; Hernquist, Lars; Sesana, Alberto
2017-01-01
Gravitational Waves (GW) from stellar-mass BH binaries have recently been observed by LIGO, but GW from their supermassive counterparts have remained elusive. Recent upper limits from Pulsar Timing Arrays (PTA) have excluded significant portions of the predicted parameter space. Most previous studies, however, have assumed that most or all Massive Black Hole (MBH) Binaries merge effectively and quickly. I will present results derived—for the first time—from cosmological, hydrodynamic simulations with self-consistently coevolved populations of MBH particles. We perform post-processing simulations of the MBH merger process, using realistic galactic environments, including models of dynamical friction, stellar scattering, gas drag from a circumbinary disk, and GW emission—with no assumptions of merger fractions or timescales. We find that despite only the most massive systems merging effectively (and still on gigayear timescales), the GW Background is only just below current detection limits with PTA. Our models suggest that PTA should make detections within the next decade, and will provide information about MBH binary populations, environments, and even eccentricities. I’ll also briefly discuss prospects for observations of dual-AGN, and the possible importance of MBH triples in the merger process.
Fluid management in roll-to-roll nanoimprint lithography
NASA Astrophysics Data System (ADS)
Jain, A.; Bonnecaze, R. T.
2013-06-01
The key process parameters of UV roll-to-roll nanoimprint lithography are identified from an analysis of the fluid, curing, and peeling dynamics. The process includes merging of droplets of imprint material, curing of the imprint material from a viscous liquid to an elastic solid resist, and pattern replication and detachment of the resist from the template. The times and distances on the web or rigid substrate over which these processes occur are determined as a function of the physical properties of the uncured liquid, the cured solid, and the roller configuration. The upper convected Maxwell equation is used to model the viscoelastic liquid and to calculate the force on the substrate and the torque on the roller. The available exposure time is found to be the rate-limiting parameter, and it is O(√(R h_o)/u_o), where R is the radius of the roller, h_o is the minimum gap between the roller and web, and u_o is the velocity of the web. The residual layer thickness of the resist should be larger than the gap between the roller and the substrate to ensure complete feature filling and optimal pattern replication. For lower residual layer thickness, the droplets may not merge to form a continuous film for pattern transfer.
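The exposure-time scaling can be made concrete with a few representative numbers; the values below are illustrative assumptions, not parameters from the study:

```python
import math

# Rough estimate of the available UV exposure time in roll-to-roll NIL,
# t_exp ~ sqrt(R * h0) / u0 (order-of-magnitude scaling from the abstract).
# All three values are assumed for illustration.
R = 0.15     # roller radius [m] (assumed)
h0 = 50e-6   # minimum roller-web gap [m] (assumed)
u0 = 0.1     # web velocity [m/s] (assumed)

t_exp = math.sqrt(R * h0) / u0
print(f"available exposure time ~ {t_exp * 1e3:.1f} ms")
```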
Coalescence of liquid droplets in a microfluidic device
NASA Astrophysics Data System (ADS)
Wu, Mingming; Cubaud, Thomas; Ho, Chih-Ming; Chiou, Peiyu; Wu, Ming C.
2003-11-01
We study experimentally the initial dynamic process when two droplets (diameter range 100 μm - 1000 μm) merge in a microfluidic device. It is known that passive mixing in microfluidic devices relies mostly on a time-consuming process: diffusion. In a digital fluidic platform (S.K. Cho, H. Moon, and C.J. Kim, J. of Microelectromechanical Systems, Vol 12, No 1, 70 (2003)), we find that the surface-tension-driven flow at the initial stage of the merging can be used to enhance mixing. In our experiments, the droplets are manipulated by two different methods, and the results are compared. In one method, the droplet is manipulated by pressure-driven flow in microchannels, and in the other, the droplet is moved using an optical electro-wetting device. The droplet is seeded with 4 μm diameter latex particles for visualizing the mixing process. The outlines of the droplets as well as the flow patterns marked by the latex particles inside the droplets are recorded using a high-speed imaging system. This work is supported by the National Science Foundation (CTS-0121340), the Institute for CMISE (a NASA URETI), the DARPA MPG program, and the DARPA Optoelectronics Center Program (CHIPS).
An inkjet vision measurement technique for high-frequency jetting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Kye-Si, E-mail: kskwon@sch.ac.kr; Jang, Min-Hyuck; Park, Ha Yeong
2014-06-15
Inkjet technology has been used as a manufacturing tool for printed electronics. To increase productivity, the jetting frequency needs to be increased. With high-frequency jetting, the printed pattern quality can become non-uniform, since jetting performance characteristics, including jetting speed and droplet volume, can vary significantly as the jetting frequency increases. Therefore, high-frequency jetting behavior must be evaluated properly for improvement. However, it is difficult to measure high-frequency jetting behavior using previous vision analysis methods, because subsequent droplets are close together or even merged. In this paper, we present vision measurement techniques to evaluate the drop formation of high-frequency jetting. The proposed method is based on tracking target droplets such that subsequent droplets can be excluded from the image analysis by focusing on the target droplet. Finally, a frequency sweeping method for jetting speed and droplet volume is presented to understand the overall effects of jetting frequency on jetting performance.
Merged analog and photon counting profiles used as input for other RLPROF VAPs
Newsom, Rob
2014-10-03
The rlprof_merge VAP "merges" the photon counting and analog signals appropriately for each channel, creating an output data file that is very similar to the original raw data file format initially produced by the Raman lidar.
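The record does not spell out the merging algorithm itself; as a rough illustration of how analog and photon-counting lidar channels are commonly "glued", here is a minimal sketch in which the analog signal is regressed against the counts over the assumed linear counting range (the thresholds and regression form are illustrative assumptions, not the rlprof_merge implementation):

```python
import numpy as np

def glue_profiles(analog, counts, lo=0.5, hi=10.0):
    """Generic analog/photon-counting 'gluing' sketch (not the actual
    rlprof_merge algorithm): fit counts = a*analog + b over the overlap
    region where the count rate is within an assumed linear range
    [lo, hi] MHz, then keep real counts where they are linear and use
    scaled analog values elsewhere."""
    overlap = (counts > lo) & (counts < hi)
    a, b = np.polyfit(analog[overlap], counts[overlap], 1)
    virtual = a * analog + b                    # analog mapped to count units
    return np.where(counts < hi, counts, virtual)

# Synthetic profiles: counts saturate at high signal, analog stays linear.
z = np.arange(1000)
true_sig = 500 * np.exp(-z / 150.0)             # decaying lidar return
counts = np.minimum(true_sig, 12.0)             # dead-time saturated counts
analog = true_sig / 50.0 + 0.01                 # analog in arbitrary units
print(glue_profiles(analog, counts)[:5])        # ~ recovers the true signal
```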
Tigres Workflow Library: Supporting Scientific Pipelines on HPC Systems
Hendrix, Valerie; Fox, James; Ghoshal, Devarshi; ...
2016-07-21
The growth in scientific data volumes has resulted in the need for new tools that enable users to operate on and analyze data on large-scale resources. In the last decade, a number of scientific workflow tools have emerged. These tools often target distributed environments, and often need expert help to compose and execute the workflows. Data-intensive workflows are often ad-hoc, they involve an iterative development process that includes users composing and testing their workflows on desktops, and scaling up to larger systems. In this paper, we present the design and implementation of Tigres, a workflow library that supports the iterative workflow development cycle of data-intensive workflows. Tigres provides an application programming interface to a set of programming templates (i.e., sequence, parallel, split, merge) that can be used to compose and execute computational and data pipelines. We discuss the results of our evaluation of scientific and synthetic workflows showing Tigres performs with minimal template overheads (mean of 13 seconds over all experiments). We also discuss various factors (e.g., I/O performance, execution mechanisms) that affect the performance of scientific workflows on HPC systems.
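The template idea can be illustrated without the library itself; the sketch below mimics the split/parallel/merge pattern in plain Python and deliberately does not use the actual Tigres API (all function and task names are invented):

```python
from concurrent.futures import ProcessPoolExecutor

# Illustrative split -> parallel -> merge pipeline; NOT the Tigres API.
def split(data, n):
    # split template: partition the input into n chunks
    return [data[i::n] for i in range(n)]

def task(chunk):
    # stand-in compute kernel run by the parallel template
    return sum(x * x for x in chunk)

def merge(partials):
    # merge template: reduce the partial results
    return sum(partials)

if __name__ == "__main__":
    data = list(range(1_000_000))
    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(task, split(data, 4)))  # parallel template
    print(merge(partials))                               # merge template
```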
NASA Astrophysics Data System (ADS)
Ruopp, A.; Ruprecht, A.; Riedelbauch, S.; Arnaud, G.; Hamad, I.
2014-03-01
The development of a hydro-kinetic turbine prototype is shown, including the compound structure, guide vanes, runner blades and a draft tube section with a steeply sloping, short spoiler. The design process of the hydrodynamic layout was split into three major steps. First, the compound and the draft tube section were designed, and the best operating point was identified using porous media as a replacement for the guide vane and runner sections (step one). The best operating point, the volume flux and the pressure drop were identified and used for the design of the guide vane and runner sections, which were designed and simulated independently (step two). In step three, all parts were merged in stationary simulation runs to detect peak power and operational bandwidth. In addition, the full-scale demonstrator was installed in the St. Lawrence River in Quebec in August 2010 and measured, recording the average inflow velocity using an ADCP (Acoustic Doppler Current Profiler) and the generator power output over the variable rotational speed. Simulation data and measurements are in good agreement. Thus, the presented approach is a suitable way of designing a hydro-kinetic turbine.
The MOS silicon gate technology and the first microprocessors
NASA Astrophysics Data System (ADS)
Faggin, F.
2015-12-01
Today we are so used to the enormous capabilities of microelectronics that it is hard to imagine what it might have been like in the early Sixties and Seventies when much of the technology we use today was being developed. This paper will first present a brief history of microelectronics and computers, taking us to the threshold of the inventions of the MOS silicon gate technology and the microprocessor. These two creations provided the basic technology that would allow, only a few years later, the merging of microelectronics and computers into the first commercial monolithic computer. By the late Seventies, the first monolithic computer, weighing less than one gram, occupying a volume of less than one cubic centimeter, dissipating less than one Watt, and selling for less than ten dollars, could perform more information processing than the UNIVAC I, the first commercial electronic computer introduced in 1951, made with 5200 vacuum tubes, dissipating 125 kW, weighing 13 metric tons, occupying a room larger than 35 m2, and selling for more than one million dollars per unit. The first-person story of the SGT and the early microprocessors will be told by the Italian-born physicist who led both projects.
Tigres Workflow Library: Supporting Scientific Pipelines on HPC Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hendrix, Valerie; Fox, James; Ghoshal, Devarshi
The growth in scientific data volumes has resulted in the need for new tools that enable users to operate on and analyze data on large-scale resources. In the last decade, a number of scientific workflow tools have emerged. These tools often target distributed environments, and often need expert help to compose and execute the workflows. Data-intensive workflows are often ad-hoc, they involve an iterative development process that includes users composing and testing their workflows on desktops, and scaling up to larger systems. In this paper, we present the design and implementation of Tigres, a workflow library that supports the iterative workflow development cycle of data-intensive workflows. Tigres provides an application programming interface to a set of programming templates (i.e., sequence, parallel, split, merge) that can be used to compose and execute computational and data pipelines. We discuss the results of our evaluation of scientific and synthetic workflows showing Tigres performs with minimal template overheads (mean of 13 seconds over all experiments). We also discuss various factors (e.g., I/O performance, execution mechanisms) that affect the performance of scientific workflows on HPC systems.
Developing Toolsets for AirBorne Data (TAD): Overview of Design Concept
NASA Astrophysics Data System (ADS)
Parker, L.; Perez, J.; Chen, G.; Benson, A.; Peeters, M. C.
2013-12-01
NASA has conducted airborne tropospheric chemistry studies for about three decades. These field campaigns have generated a great wealth of observations, including a wide range of trace gases and aerosol properties. Even though the spatial and temporal coverage is limited, the aircraft data offer high resolution and comprehensive simultaneous coverage of many variables, e.g. ozone precursors, intermediate photochemical species, and photochemical products. The recent NASA Earth Venture Program has generated an unprecedented amount of aircraft observations in terms of the sheer number of measurements and data volume. The ASDC Toolset for Airborne Data (TAD) is being designed to meet the user community needs for aircraft data for scientific research on issues relevant to climate change and air quality, particularly: 1) Provide timely access to a broad user community, 2) Provide an intuitive user interface to facilitate quick discovery of the variables and data, 3) Provide data products and tools to facilitate model assessment activities, e.g., merge files and data subsetting capabilities, 4) Provide simple utility 'calculators', e.g., unit conversion and aerosol size distribution processing, and 5) Provide Web Coverage Service capable tools to enhance the data usability. The general strategy and design of TAD will be presented.
Mélin, Frédéric; Zibordi, Giuseppe
2007-06-20
An optically based technique is presented that produces merged spectra of normalized water-leaving radiances LWN by combining spectral data provided by independent satellite ocean color missions. The assessment of the merging technique is based on a four-year field data series collected by an autonomous above-water radiometer located on the Acqua Alta Oceanographic Tower in the Adriatic Sea. The uncertainties associated with the merged LWN obtained from the Sea-viewing Wide Field-of-view Sensor and the Moderate Resolution Imaging Spectroradiometer are consistent with the validation statistics of the individual sensor products. The merging including the third mission Medium Resolution Imaging Spectrometer is also addressed for a reduced ensemble of matchups.
Triple collocation based merging of satellite soil moisture retrievals
USDA-ARS?s Scientific Manuscript database
We propose a method for merging soil moisture retrievals from spaceborne active and passive microwave instruments based on weighted averaging, taking into account the error characteristics of the individual data sets. The merging scheme is parameterized using error variance estimates obtained from u...
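A minimal sketch of this kind of error-variance-weighted merging, assuming two retrievals whose error variances have already been estimated (e.g., via triple collocation); the numbers are illustrative, not from the manuscript:

```python
import numpy as np

# Weight each soil moisture retrieval by the inverse of its error variance;
# the sig2_* values below are assumed for illustration.
def merge_retrievals(sm_active, sm_passive, sig2_active, sig2_passive):
    w_a = 1.0 / sig2_active
    w_p = 1.0 / sig2_passive
    return (w_a * sm_active + w_p * sm_passive) / (w_a + w_p)

sm = merge_retrievals(np.array([0.22, 0.25]), np.array([0.20, 0.28]),
                      sig2_active=0.0009, sig2_passive=0.0004)
print(sm)   # merged estimates lean toward the lower-error (passive) input
```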
Observations of the Ion Signatures of Double Merging and the Formation of Newly Closed Field Lines
NASA Technical Reports Server (NTRS)
Chandler, Michael O.; Avanov, Levon A.; Craven, Paul D.
2007-01-01
Observations from the Polar spacecraft, taken during a period of northward interplanetary magnetic field (IMF), show magnetosheath ions within the magnetosphere with velocity distributions resulting from multiple merging sites along the same field line. The observations from the TIDE instrument show two separate ion energy-time dispersions that are attributed to two widely separated (~20 Re) merging sites. Estimates of the initial merging times show that they occurred nearly simultaneously (within 5 minutes). Along with these populations, cold ionospheric ions were observed counterstreaming along the field lines. The presence of such ions is evidence that these field lines are connected to the ionosphere on both ends. These results are consistent with the hypothesis that double merging can produce closed field lines populated by solar wind plasma. While the merging sites cannot be unambiguously located, the observations and analyses favor one site poleward of the northern cusp and a second site at low latitudes.
Online Visualization and Analysis of Merged Global Geostationary Satellite Infrared Dataset
NASA Technical Reports Server (NTRS)
Liu, Zhong; Ostrenga, D.; Leptoukh, G.; Mehta, A.
2008-01-01
The NASA Goddard Earth Sciences Data Information Services Center (GES DISC) is home of the Tropical Rainfall Measuring Mission (TRMM) data archive. The global merged IR product, also known as the NCEP/CPC 4-km Global (60 degrees N - 60 degrees S) IR Dataset, is one of the TRMM ancillary datasets. It consists of globally merged (60 degrees N - 60 degrees S) pixel-resolution (4 km) IR brightness temperature data (equivalent blackbody temperatures), merged from all available geostationary satellites (GOES-8/10, METEOSAT-7/5 and GMS). The availability of data from METEOSAT-5, currently located at 63E, yields a unique opportunity for total global (60 degrees N - 60 degrees S) coverage. The GES DISC has collected over 8 years of the data, beginning in February 2000. This high-temporal-resolution dataset can not only provide additional background information to TRMM and other satellite missions, but also allows observing a wide range of meteorological phenomena from space, such as mesoscale convective systems, tropical cyclones, and hurricanes. The dataset can also be used to verify model simulations. Although the data can be downloaded via FTP, their large volume poses a challenge for many users. A single file occupies about 70 MB of disk space, and there are approximately 73,000 files (approximately 4.5 TB) for the past 8 years. In order to facilitate data access, we have developed a web prototype that allows users to conduct online visualization and analysis of this dataset. With a web browser and a few mouse clicks, users have full access to over 8 years and over 4.5 TB of data and can generate black-and-white IR imagery and animations without downloading any software or data. In short, you can make your own images! Basic functions include selection of an area of interest, single imagery or animation, a time-skip capability for different temporal resolutions, and image size. Users can save an animation as a file (animated GIF) and import it into other presentation software, such as Microsoft PowerPoint. The prototype will be integrated into GIOVANNI, and existing GIOVANNI capabilities, such as data download and Google Earth KMZ, will be available. Users will also be able to access other data products in the GIOVANNI family.
The Current State of Data Transmission Channels from Pushchino to Moscow and Perspectives
NASA Astrophysics Data System (ADS)
Dumsky, D. V.; Isaev, E. A.; Samodurov, V. A.; Shatskaya, M. V.
Since the operation of the unique space radio telescope of the international VLBI project "RadioAstron" has been extended to 2017, the transmission and storage of the large volumes of scientific and telemetry data obtained during the experiments remain pressing tasks. The project is carried out by the Astro Space Center of the Lebedev Physical Institute in Moscow, Russia. It requires us to keep the high-speed link in operating condition so as to merge the buffer data center in Pushchino and the scientific information center in Moscow into a single LAN. Also still relevant are the monitoring systems for the channel equipment and storage, as well as the timely replacement of hardware, software upgrades, backups, and documentation of the network infrastructure.
Jin, Si Hyung; Jeong, Heon-Ho; Lee, Byungjin; Lee, Sung Sik; Lee, Chang-Soo
2015-01-01
We present a programmable microfluidic static droplet array (SDA) device that can perform user-defined multistep combinatorial protocols. It combines the passive storage of aqueous droplets without any external control with integrated microvalves for discrete sample dispensing and dispersion-free unit operation. The addressable picoliter-volume reaction is systematically achieved by consecutively merging programmable sequences of reagent droplets. The SDA device is remarkably reusable and able to perform identical enzyme kinetic experiments at least 30 times via automated cross-contamination-free removal of droplets from individual hydrodynamic traps. Taking all these features together, this programmable and reusable universal SDA device will be a general microfluidic platform that can be reprogrammed for multiple applications.
TOMS and SBUV Data: Comparison to 3D Chemical-Transport Model Results
NASA Technical Reports Server (NTRS)
Stolarski, Richard S.; Douglass, Anne R.; Steenrod, Steve; Frith, Stacey
2003-01-01
We have updated our merged ozone data (MOD) set using the TOMS data from the new version 8 algorithm. We then analyzed these data for contributions from the solar cycle, volcanoes, the QBO, and halogens using a standard statistical time series model. We have recently completed a hindcast run of our 3D chemical-transport model for the same years. This model uses off-line winds from the finite-volume GCM, a full stratospheric photochemistry package, and time-varying forcing due to halogens, solar UV, and volcanic aerosols. We will report on a parallel analysis of these model results using the same statistical time series technique as used for the MOD data.
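The statistical time series model referred to is, in essence, a multiple linear regression of ozone anomalies on proxy series; the sketch below uses synthetic stand-in proxies rather than the observed solar, QBO, aerosol, and halogen indices:

```python
import numpy as np

# Regress monthly ozone on proxies for the solar cycle, QBO, volcanic
# aerosol, and halogen loading (EESC). All proxy arrays are synthetic
# placeholders; real analyses use observed index time series.
n = 360                                   # 30 years of months
t = np.arange(n)
solar = np.sin(2 * np.pi * t / 132)       # ~11-year cycle stand-in
qbo = np.sin(2 * np.pi * t / 28)          # ~28-month QBO stand-in
volc = np.exp(-((t - 120) / 24.0) ** 2)   # eruption pulse stand-in
eesc = np.minimum(t / 200.0, 1.0)         # halogen loading stand-in
X = np.column_stack([np.ones(n), solar, qbo, volc, eesc])

# Synthetic "ozone" with a halogen-driven decline plus solar signal + noise.
ozone = 300 - 5 * eesc + 2 * solar + np.random.default_rng(0).normal(0, 1, n)
coef, *_ = np.linalg.lstsq(X, ozone, rcond=None)
print(dict(zip(["const", "solar", "qbo", "volc", "eesc"], coef.round(2))))
```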
NASA Technical Reports Server (NTRS)
Luhmann, J. G.; Walker, R. J.; Russell, C. T.; Spreiter, J. R.; Stahara, S. S.; Williams, D. J.
1984-01-01
An approximate picture of the volumes occupied by particles that originate in the vicinity of the magnetopause is obtained by mapping magnetosheath magnetic field lines which drape over the magnetopause through the bow shock. Subsets of these field lines that connect to potential sites of magnetic merging on the magnetopause are also traced in the event that the particle leakage occurs preferentially where normal components of the field are present across that boundary. The results of this modeling exercise suggest that energetic magnetospheric particles which are not scattered by magnetosheath magnetic fluctuations are likely to exit the magnetosheath in the region of the quasi-parallel shock.
Supercontinent Formation in 3-D Spherical Mantle Convection Models With Multiple Continental Blocks
NASA Astrophysics Data System (ADS)
Zhang, N.; Zhong, S.; McNamara, A.
2007-12-01
Much of the large-scale tectonics on the Earth in the last Ga is predominated by the assembly and breakup of the supercontinents Rodinia and Pangea. However, the mechanism that is responsible for supercontinent formation remains poorly understood. Zhong et al. [2007] recently showed that mantle convection with a moderately strong lithosphere and lower mantle is characterized by a largely degree-1 planform in which one hemisphere is predominated by upwellings while the other by downwellings. They further suggested that the downwellings should attract all the continental blocks to merge in the downwelling hemisphere, thus leading to supercontinent formation there. However, Zhong et al. [2007] did not consider drifting and collision processes of continents. In this study, we explore supercontinent formation mechanisms by including drifting and collision processes of multiple continental blocks in 3-D spherical mantle convection models. We use the thermochemical CitcomS code to model 3-D spherical mantle convection with continental blocks. In our models, particles are used to represent continents and to track their motions. We found that for models with mantle viscosity (i.e., moderately strong lithosphere and lower mantle) that leads to degree-1 convection as reported in Zhong et al. [2007], initially evenly-distributed continental blocks always merge to form a supercontinent on a time-scale of about 6 transit times (i.e., corresponding to about 300 Ma). The hemisphere where a supercontinent is formed is predominated by downwellings as continents merge towards it, while the other hemisphere by upwellings. However, after the supercontinent formation, upwellings are generated beneath the supercontinent. This scenario is qualitatively consistent with what Zhong et al. [2007] proposed. We also found that while some convection models with intrinsically small-scale planforms may also lead to the formation of a supercontinent, some other models may fail to produce a supercontinent. For these models with intrinsically small-scale planforms, the merged continental blocks promote long-wavelength mantle structure near the continents. However, in non-continental regions, convective wavelengths remain relatively small. We suggest that time-scales for supercontinent formation and convective wavelengths in non-continental areas are important parameters that help constrain mechanisms for supercontinent formation.
Robust Intratumor Partitioning to Identify High-Risk Subregions in Lung Cancer: A Pilot Study.
Wu, Jia; Gensheimer, Michael F; Dong, Xinzhe; Rubin, Daniel L; Napel, Sandy; Diehn, Maximilian; Loo, Billy W; Li, Ruijiang
2016-08-01
To develop an intratumor partitioning framework for identifying high-risk subregions from (18)F-fluorodeoxyglucose positron emission tomography (FDG-PET) and computed tomography (CT) imaging and to test whether tumor burden associated with the high-risk subregions is prognostic of outcomes in lung cancer. In this institutional review board-approved retrospective study, we analyzed the pretreatment FDG-PET and CT scans of 44 lung cancer patients treated with radiation therapy. A novel, intratumor partitioning method was developed, based on a 2-stage clustering process: first at the patient level, each tumor was over-segmented into many superpixels by k-means clustering of integrated PET and CT images; next, tumor subregions were identified by merging previously defined superpixels via population-level hierarchical clustering. The volume associated with each of the subregions was evaluated using Kaplan-Meier analysis regarding its prognostic capability in predicting overall survival (OS) and out-of-field progression (OFP). Three spatially distinct subregions were identified within each tumor that were highly robust to uncertainty in PET/CT co-registration. Among these, the volume of the most metabolically active and metabolically heterogeneous solid component of the tumor was predictive of OS and OFP on the entire cohort, with a concordance index or CI of 0.66-0.67. When restricting the analysis to patients with stage III disease (n=32), the same subregion achieved an even higher CI of 0.75 (hazard ratio 3.93, log-rank P=.002) for predicting OS, and a CI of 0.76 (hazard ratio 4.84, log-rank P=.002) for predicting OFP. In comparison, conventional imaging markers, including tumor volume, maximum standardized uptake value, and metabolic tumor volume using a threshold of 50% of the maximum standardized uptake value, were not predictive of OS or OFP, with CI mostly below 0.60 (log-rank P>.05). We propose a robust intratumor partitioning method to identify clinically relevant, high-risk subregions in lung cancer. We envision that this approach will be applicable to identifying useful imaging biomarkers in many cancer types. Copyright © 2016 Elsevier Inc. All rights reserved.
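A highly simplified sketch of the two-stage clustering idea (k-means superpixels at the patient level, then population-level hierarchical merging); the data here are synthetic stand-ins, not PET/CT images, and the cluster counts are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

# Stage 1 (patient level): over-segment each tumor's voxels into superpixels
# by k-means on [PET, CT] intensity pairs.
# Stage 2 (population level): pool superpixel means across patients and merge
# them into a few subregions by hierarchical clustering.
rng = np.random.default_rng(1)
voxels_per_patient = [rng.normal(size=(5000, 2)) for _ in range(10)]  # stand-in data

superpixel_means = []
for vox in voxels_per_patient:                      # stage 1
    labels = KMeans(n_clusters=50, n_init=10).fit_predict(vox)
    superpixel_means += [vox[labels == k].mean(axis=0) for k in range(50)]

stage2 = AgglomerativeClustering(n_clusters=3)      # stage 2
subregion = stage2.fit_predict(np.array(superpixel_means))
print(np.bincount(subregion))                       # sizes of the 3 subregions
```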
Robust Intratumor Partitioning to Identify High-Risk Subregions in Lung Cancer: A Pilot Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Jia; Gensheimer, Michael F.; Dong, Xinzhe
2016-08-01
Purpose: To develop an intratumor partitioning framework for identifying high-risk subregions from 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) and computed tomography (CT) imaging and to test whether tumor burden associated with the high-risk subregions is prognostic of outcomes in lung cancer. Methods and Materials: In this institutional review board-approved retrospective study, we analyzed the pretreatment FDG-PET and CT scans of 44 lung cancer patients treated with radiation therapy. A novel, intratumor partitioning method was developed, based on a 2-stage clustering process: first at the patient level, each tumor was over-segmented into many superpixels by k-means clustering of integrated PET and CT images; next, tumor subregions were identified by merging previously defined superpixels via population-level hierarchical clustering. The volume associated with each of the subregions was evaluated using Kaplan-Meier analysis regarding its prognostic capability in predicting overall survival (OS) and out-of-field progression (OFP). Results: Three spatially distinct subregions were identified within each tumor that were highly robust to uncertainty in PET/CT co-registration. Among these, the volume of the most metabolically active and metabolically heterogeneous solid component of the tumor was predictive of OS and OFP on the entire cohort, with a concordance index or CI of 0.66-0.67. When restricting the analysis to patients with stage III disease (n=32), the same subregion achieved an even higher CI of 0.75 (hazard ratio 3.93, log-rank P=.002) for predicting OS, and a CI of 0.76 (hazard ratio 4.84, log-rank P=.002) for predicting OFP. In comparison, conventional imaging markers, including tumor volume, maximum standardized uptake value, and metabolic tumor volume using a threshold of 50% of the maximum standardized uptake value, were not predictive of OS or OFP, with CI mostly below 0.60 (log-rank P>.05). Conclusion: We propose a robust intratumor partitioning method to identify clinically relevant, high-risk subregions in lung cancer. We envision that this approach will be applicable to identifying useful imaging biomarkers in many cancer types.
MEMS-based system and image processing strategy for epiretinal prosthesis.
Xia, Peng; Hu, Jie; Qi, Jin; Gu, Chaochen; Peng, Yinghong
2015-01-01
Retinal prostheses have the potential to restore some level of visual function to patients suffering from retinal degeneration. In this paper, an epiretinal approach with active stimulation devices is presented. The MEMS-based processing system consists of an external micro-camera, an information processor, an implanted electrical stimulator and a microelectrode array. An image processing strategy combining image clustering and enhancement techniques was proposed and evaluated by psychophysical experiments. The results indicated that the image processing strategy improved visual performance compared with directly merging pixels to a low resolution. The image processing methods assist epiretinal prostheses in vision restoration.
Responsible Student Affairs Practice: Merging Student Development and Quality Management.
ERIC Educational Resources Information Center
Whitner, Phillip A.; And Others
The merging of Total Quality Management (TQM) and Involvement Theory into a managerial philosophy can assist student affairs professionals with an approach for conducting work that improves student affairs practice. When merged or integrated, accountability can easily be obtained because the base philosophies of qualitative research, TQM, and…
Park, Jong Bo; Shin, Dongha; Kang, Sangmin; Cho, Sung-Pyo; Hong, Byung Hee
2016-11-01
Two nanobubbles that merge in a graphene liquid cell take elliptical shapes rather than the ideal circular shapes. This phenomenon was investigated in detail by using in situ transmission electron microscopy (TEM). The results show that the distortion in the two-dimensional shapes of the merging nanobubbles is attributed to the anisotropic gas transport flux between the nanobubbles. We also predicted and confirmed the same phenomenon in a three-nanobubble system, indicating that the relative size difference is important in determining the shape of merging nanobubbles.
Online Optimal Control of Connected Vehicles for Efficient Traffic Flow at Merging Roads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rios-Torres, Jackeline; Malikopoulos, Andreas; Pisu, Pierluigi
2015-01-01
This paper addresses the problem of coordinating connected vehicles online at merging roads to achieve a smooth traffic flow without stop-and-go driving. We present a framework and a closed-form solution that optimize the acceleration profile of each vehicle in terms of fuel economy while avoiding collision with other vehicles at the merging zone. The proposed solution is validated through simulation, and it is shown that coordination of connected vehicles can significantly reduce fuel consumption and travel time at merging roads.
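The closed-form character of such solutions can be illustrated under a common simplification: minimizing the integral of squared acceleration with fixed endpoint position and speed yields an acceleration that is linear in time and a position trajectory that is cubic in time. The sketch below solves for those polynomial coefficients; the boundary values are assumed for illustration and are not numbers from the paper:

```python
import numpy as np

# Energy-optimal trajectory sketch: minimizing (1/2)*integral(u^2) dt with
# fixed initial/final position and speed gives u(t) = a*t + b, so
# p(t) = c3*t^3 + c2*t^2 + c1*t + c0. Solve the 4 coefficients from the
# boundary conditions p(0)=p0, v(0)=v0, p(tf)=pf, v(tf)=vf.
def plan(p0, v0, pf, vf, tf):
    A = np.array([[0, 0, 0, 1],                 # p(0)
                  [0, 0, 1, 0],                 # v(0)
                  [tf**3, tf**2, tf, 1],        # p(tf)
                  [3*tf**2, 2*tf, 1, 0]],       # v(tf)
                 dtype=float)
    return np.linalg.solve(A, [p0, v0, pf, vf])  # (c3, c2, c1, c0)

c = plan(p0=0.0, v0=13.0, pf=400.0, vf=15.0, tf=28.0)  # assumed merging numbers
print("initial acceleration u(0) = %.3f m/s^2" % (2 * c[1]))
```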
NASA Astrophysics Data System (ADS)
Berchem, J.; Marchaudon, A.; Bosqued, J.; Escoubet, C. P.; Dunlop, M.; Owen, C. J.; Reme, H.; Balogh, A.; Carr, C.; Fazakerley, A. N.; Cao, J. B.
2005-12-01
Synoptic measurements from the DOUBLE STAR and CLUSTER spacecraft offer a unique opportunity to evaluate global models in simulating the complex topology and dynamics of the dayside merging region. We compare observations from the DOUBLE STAR TC-1 and CLUSTER spacecraft on May 8, 2004 with the predictions from a three-dimensional magnetohydrodynamic (MHD) simulation that uses plasma and magnetic field parameters measured upstream of the bow shock by the WIND spacecraft. Results from the global simulation are consistent with the large-scale features observed by CLUSTER and TC-1. We discuss topological changes and plasma flows at the dayside magnetospheric boundary inferred from the simulation results. The simulation shows that the DOUBLE STAR spacecraft passed through the dawn side merging region as the IMF rotated. In particular, the simulation indicates that at times TC-1 was very close to the merging region. In addition, we found that the bifurcation of the merging region in the simulation results is consistent with predictions by the antiparallel merging model. However, because of the draping of the magnetosheath field lines over the magnetopause, the positions and shape of the merging region differ significantly from those predicted by the model.
NASA Astrophysics Data System (ADS)
Jiang, Feng; Gu, Qing; Hao, Huizhen; Li, Na; Wang, Bingqian; Hu, Xiumian
2018-06-01
Automatic grain segmentation of sandstone partitions mineral grains into separate regions in a thin section, which is the first step for computer-aided mineral identification and sandstone classification. Sandstone microscopic images contain a large number of mixed mineral grains, and the differences among adjacent grains, i.e., quartz, feldspar and lithic grains, are usually ambiguous, which makes grain segmentation difficult. In this paper, we take advantage of multi-angle cross-polarized microscopic images and propose a method for grain segmentation with high accuracy. The method consists of two stages: in the first stage, we enhance the SLIC (Simple Linear Iterative Clustering) algorithm, named MSLIC, to make use of multi-angle images and segment them into boundary-adherent superpixels. In the second stage, we propose a region merging technique which combines coarse and fine merging algorithms. The coarse merging merges adjacent superpixels with less evident boundaries, and the fine merging merges the remaining ambiguous superpixels using spatially enhanced fuzzy clustering. Experiments are designed on 9 sets of multi-angle cross-polarized images taken from the three major types of sandstones. The results demonstrate both the effectiveness and potential of the proposed method compared to available segmentation methods.
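A single-image stand-in for the superpixel-then-merge pipeline (the paper's MSLIC operates on multi-angle cross-polarized images; here plain SLIC plus a mean-color region adjacency graph from scikit-image illustrates the idea, assuming a recent skimage module layout where `graph` lives at the top level):

```python
import numpy as np
from skimage import data, graph, segmentation

# Over-segment into superpixels, then merge neighbors whose mean colors are
# similar: a simplified analogue of the coarse-merging stage.
img = data.coffee()                                      # stand-in image

labels = segmentation.slic(img, n_segments=400, compactness=10)  # superpixels
rag = graph.rag_mean_color(img, labels)                  # adjacency graph
coarse = graph.cut_threshold(labels, rag, thresh=30)     # merge similar neighbors
print("superpixels:", len(np.unique(labels)),
      "-> merged regions:", len(np.unique(coarse)))
```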
Seamless contiguity method for parallel segmentation of remote sensing image
NASA Astrophysics Data System (ADS)
Wang, Geng; Wang, Guanghui; Yu, Mei; Cui, Chengling
2015-12-01
Seamless contiguity is the key technology for parallel segmentation of remote sensing data of large volume. It can effectively integrate the fragments produced by parallel processing into reasonable results for subsequent processes. Numerous methods for seamless contiguity are reported in the literature, such as establishing buffers, merging area boundaries, and data sewing. We propose a new method that is also based on building buffers. The seamless contiguity process we adopt follows two principles: ensuring the accuracy of the boundaries and ensuring the correctness of the topology. First, the number of blocks is computed based on data processing capacity; unlike establishing buffers on both sides of the block line, a buffer is established only on the right side and underside of the line. Each block of data is segmented separately to obtain the segmentation objects and their label values. Second, one block (called the master block) is chosen and stitched to the adjacent blocks (called slave blocks), and the remaining blocks are processed in sequence. Through this processing, the topological relationships and boundaries of the master block are guaranteed. Third, if the master block polygon boundaries intersect the buffer boundary and the slave block polygon boundaries intersect the block line, we adopt certain rules to merge them and resolve the trade-offs. Fourth, the topology and boundaries in the buffer area are checked. Finally, a set of experiments was conducted that proves the feasibility of this method. This novel seamless contiguity algorithm provides an applicable and practical solution for efficient segmentation of massive remote sensing images.
Terrestrial Laser Scanning for Coastal Geomorphologic Research in Western Greece
NASA Astrophysics Data System (ADS)
Hoffmeister, D.; Tilly, N.; Curdt, C.; Aasen, H.; Ntageretzis, K.; Hadler, H.; Willershäuser, T.; Vött, A.; Bareth, G.
2012-07-01
We used terrestrial laser scanning (TLS) for (i) accurate volume estimation of dislocated boulders moved by high-energy impacts and for (ii) monitoring of annual coastal changes. In this contribution, we present three selected sites in Western Greece that were surveyed over a time span of four years (2008-2011). The Riegl LMS-Z420i laser scanner was used in combination with a precise DGPS system (Topcon HiPer Pro). Each scan position and a further target were recorded for georeferencing and merging of the point clouds. For the annual detection of changes, reference points for the base station of the DGPS system were marked. Our studies show that TLS is capable of accurately estimating the volumes of boulders that were dislocated and deposited inland from the littoral zone. The mass of each boulder was calculated from this 3D-reconstructed volume and corresponding density data. The masses turned out to be considerably smaller than common mass estimates based on tape measurements and corresponding density approximations. The accurate mass data were incorporated into wave transport equations, which estimate the wave velocities of high-energy impacts. As expected, these yield smaller wave velocities, due to the incorporated smaller masses. Furthermore, TLS is capable of monitoring annual changes in coastal areas. The changes are detected by comparing high-resolution digital elevation models from each year. At one beach site, larger areas of seaweed and sandy sediments were eroded; in contrast, larger gravel of 30-50 cm diameter accumulated. At the other area, with bigger boulders and a different coastal configuration, only slight differences were detectable. In low-lying coastal areas and along recent beaches, post-processing of point clouds turned out to be more difficult, due to noise from water and shadowing effects. However, our studies show that the application of TLS in different littoral settings is an appropriate and promising tool. The combination of both instruments worked well, and the annual positioning procedure with our own survey points is precise for this purpose.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, J; Gensheimer, M; Dong, X
Purpose: To develop an intra-tumor partitioning framework for identifying high-risk subregions from 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) and CT imaging, and to test whether tumor burden associated with the high-risk subregions is prognostic of outcomes in lung cancer. Methods: In this institutional review board-approved retrospective study, we analyzed the pre-treatment FDG-PET and CT scans of 44 lung cancer patients treated with radiotherapy. A novel, intra-tumor partitioning method was developed based on a two-stage clustering process: first at patient-level, each tumor was over-segmented into many superpixels by k-means clustering of integrated PET and CT images; next, tumor subregions were identified by merging previously defined superpixels via population-level hierarchical clustering. The volume associated with each of the subregions was evaluated using Kaplan-Meier analysis regarding its prognostic capability in predicting overall survival (OS) and out-of-field progression (OFP). Results: Three spatially distinct subregions were identified within each tumor, which were highly robust to uncertainty in PET/CT co-registration. Among these, the volume of the most metabolically active and metabolically heterogeneous solid component of the tumor was predictive of OS and OFP on the entire cohort, with a concordance index or CI = 0.66–0.67. When restricting the analysis to patients with stage III disease (n = 32), the same subregion achieved an even higher CI = 0.75 (HR = 3.93, logrank p = 0.002) for predicting OS, and a CI = 0.76 (HR = 4.84, logrank p = 0.002) for predicting OFP. In comparison, conventional imaging markers including tumor volume, SUVmax and MTV50 were not predictive of OS or OFP, with CI mostly below 0.60 (p < 0.001). Conclusion: We propose a robust intra-tumor partitioning method to identify clinically relevant, high-risk subregions in lung cancer. We envision that this approach will be applicable to identifying useful imaging biomarkers in many cancer types.
Stress and Support in Family Relationships after Hurricane Katrina
ERIC Educational Resources Information Center
Reid, Megan; Reczek, Corinne
2011-01-01
In this article, the authors merge the study of support, strain, and ambivalence in family relationships with the study of stress to explore the ways family members provide support or contribute to strain in the disaster recovery process. The authors analyze interviews with 71 displaced Hurricane Katrina survivors, and identify three family…
Australian Secondary School Students' Understanding of Climate Change
ERIC Educational Resources Information Center
Dawson, Vaille; Carson, Katherine
2013-01-01
This study investigated 438 Year 10 students (15 and 16 years old) from Western Australian schools on their understanding of the greenhouse effect and climate change, and the sources of their information. Results showed that most students have an understanding of how the greenhouse effect works; however, many students merge the processes of the…
Gang Free: Influencing Friendship Choices in Today's World.
ERIC Educational Resources Information Center
Wiener, Valerie
This book presents information as to how teenagers select their friends. Material was garnered through a review of the literature and hundreds of interviews. The book opens with a focus on how teenagers make friends, and includes such issues as the power of self, finding groups, peer impact, the merging process, seeking significance, and putting…
The Pinch Pot Technique and Raku.
ERIC Educational Resources Information Center
Demery, Marie
Since the 16th century, the small Japanese raku tea bowl has reflected the merged cultural influences of art, religion, and other countries on the art of Japanese pottery. Artistically, the bowl is a combination of ceramics (pinching) and sculpture (carving). The dictates of the Zen Buddhist tea masters determine its sculptural process and steps,…
DOT National Transportation Integrated Search
2012-01-01
Top-down cracking in flexible pavement is one of the most common and crucial modes of pavement distress in Florida, reducing both service quality and life of flexible pavement. The process begins with micro-cracks (micro-damage), which grow and merge...
The Hidden Picture: Administrators' Perspectives on Least Restrictive Environment
ERIC Educational Resources Information Center
Garner, Gina Marlene
2009-01-01
This study looks to better understand how administrators make a decision about a least restrictive environment placement recommendation. What decision processes do they engage in when merging information of individual and environment to create a working plan of access that will benefit all involved? It also seeks the factors that are primary in…
Closing the Gap: Merging Student Affairs, Advising and Registration
ERIC Educational Resources Information Center
Goomas, David T.
2012-01-01
In a pilot study at El Centro College, an urban college in the Dallas County Community College District, students in a new-to-college educational framework class were offered an intervention intended to enhance the academic advising process. The intervention consisted of in-class career services advising, degree selection, degree planning, course…
Some Thoughts on the Relationship of Developmental Science and Population Neuroscience
ERIC Educational Resources Information Center
Paus, Tomáš
2012-01-01
This essay describes briefly population neuroscience, the merging of genetics and epidemiology with neuroscience, and its goals with regard to (1) gaining new knowledge about "processes" leading to a particular "state" of brain structure and function, and (2) using this knowledge to predict the risk (and resilience) of an…
Merging Disparate Data and Numerical Model Results for Dynamically Constrained Nowcasts
2000-09-30
University of Delaware, Newark, DE 19716. Award #: N000149910052. LONG-TERM GOALS: Our long-term goal is to quantify submesoscale dynamical processes in the ocean so that we can better understand
Can mergers-in-progress be unmerged in speech accommodation?
Babel, Molly; McAuliffe, Michael; Haber, Graham
2013-01-01
This study examines spontaneous phonetic accommodation of a dialect with distinct categories by speakers who are in the process of merging those categories. We focus on the merger of the NEAR and SQUARE lexical sets in New Zealand English, presenting New Zealand participants with an unmerged speaker of Australian English. Mergers-in-progress are a uniquely interesting sound change as they showcase the asymmetry between speech perception and production. Yet, we examine mergers using spontaneous phonetic imitation, which is necessarily a behavior in which perceptual input influences speech production. Phonetic imitation is quantified by a perceptual measure and an acoustic calculation of mergedness using a Pillai-Bartlett trace. The results from both analyses indicate spontaneous phonetic imitation is moderated by extra-linguistic factors such as the valence of assigned conditions and social bias. We also find evidence for a decrease in the degree of mergedness in post-exposure productions. Taken together, our results suggest that under the appropriate conditions New Zealanders phonetically accommodate to Australian English and that in the process of speech imitation, mergers-in-progress can, but do not consistently, become less merged.
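The Pillai-Bartlett trace used to quantify mergedness can be computed directly from the between- and within-group cross-product matrices of a one-way MANOVA; the F1/F2 tokens below are synthetic, not the study's data:

```python
import numpy as np

def pillai(X, y):
    """Pillai-Bartlett trace for a one-way MANOVA: trace(H @ inv(H + E)),
    where H/E are the between-/within-group sums of squares and
    cross-products. X: (n, p) measurements (e.g., F1/F2); y: group labels.
    Values near 0 indicate merged categories; near 1, distinct ones."""
    grand = X.mean(axis=0)
    p = X.shape[1]
    H, E = np.zeros((p, p)), np.zeros((p, p))
    for g in np.unique(y):
        Xg = X[y == g]
        d = (Xg.mean(axis=0) - grand)[:, None]
        H += len(Xg) * d @ d.T
        E += (Xg - Xg.mean(axis=0)).T @ (Xg - Xg.mean(axis=0))
    return np.trace(H @ np.linalg.inv(H + E))

# Synthetic NEAR/SQUARE-style vowel tokens in F1/F2 space.
rng = np.random.default_rng(0)
near = rng.normal([400, 2200], 50, (40, 2))
square = rng.normal([550, 1900], 50, (40, 2))
X = np.vstack([near, square]); y = np.array([0] * 40 + [1] * 40)
print(round(pillai(X, y), 3))   # ~1 for well-separated categories
```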
NASA Astrophysics Data System (ADS)
Borjas, Zulema; Esteve-Núñez, Abraham; Ortiz, Juan Manuel
2017-07-01
Microbial desalination cells constitute an innovative technology in which a microbial fuel cell and electrodialysis merge in the same device to obtain fresh water from saline water with no energy cost for the user. In this work, an anodic biofilm of the electroactive bacterium Geobacter sulfurreducens was able to efficiently convert the acetate present in synthetic wastewater into electric current (j = 0.32 mA cm-2) able to desalinate water. Moreover, we implemented an efficient start-up protocol in which desalination of up to 90% occurred within one desalination cycle (water production: 0.308 L m-2 h-1; initial salinity: 9 mS cm-1; final salinity: <1 mS cm-1) using a filter-press-based MDC prototype without any energy supply (excluding peristaltic pump energy). This start-up protocol is not only time-optimized but also simplifies operational procedures, making it a more feasible strategy for future scaling-up of MDCs, either as a single process or as a pre-treatment method combined with other well-established desalination technologies such as reverse osmosis (RO) or reverse electrodialysis.
Multi-scales region segmentation for ROI separation in digital mammograms
NASA Astrophysics Data System (ADS)
Zhang, Dapeng; Zhang, Di; Li, Yue; Wang, Wei
2017-02-01
Mammography is currently the most effective imaging modality used by radiologists for the screening of breast cancer. Segmentation is one of the key steps in the process of developing anatomical models for the calculation of a safe medical radiation dose. This paper explores the potential of the statistical region merging (SRM) segmentation technique for breast segmentation in digital mammograms. First, the mammograms are pre-processed for region enhancement; then the enhanced images are segmented using SRM at multiple scales; finally, these segmentations are combined for region of interest (ROI) separation and edge detection. The proposed algorithm uses multi-scale region segmentation to separate the breast region from the background region, detect region edges, and separate ROIs. The experiments are performed using a data set of mammograms from different patients, demonstrating the validity of the proposed criterion. Results show that the statistical region merging segmentation algorithm can indeed work on the segmentation of medical images and is more accurate than other methods, and that the technique has great potential to become a method of choice for the segmentation of mammograms.
Arpin, Jacques
2008-09-01
What can a healer learn from theatre and performance studies? What can theatre and performance studies bring to healing practices? Both disciplines are distinct in Western societies, at times merged into miscellaneous forms of 'art therapy'. What lessons can we learn from traditions that do not separate these competencies and have always integrated them as being naturally complementary? In a consultation of cultural psychiatry, both patients and healers are actively aware of various degrees of merging of art and medicine. Narration, then, cannot be limited to verbal case-history making and verbal therapeutic approaches. Bringing patients and healers on a stage and using all forms of text and performance allow for another way of (re)constructing case histories. Expanding the narrative process opens doors to exploring traditions: their origin, their apprenticeship, their performance and their transmission.
Augmented microscopy: real-time overlay of bright-field and near-infrared fluorescence images.
Watson, Jeffrey R; Gainer, Christian F; Martirosyan, Nikolay; Skoch, Jesse; Lemole, G Michael; Anton, Rein; Romanowski, Marek
2015-10-01
Intraoperative applications of near-infrared (NIR) fluorescent contrast agents can be aided by instrumentation capable of merging the view of the surgical field with that of NIR fluorescence. We demonstrate augmented microscopy, an intraoperative imaging technique in which bright-field (real) and electronically processed NIR fluorescence (synthetic) images are merged within the optical path of a stereomicroscope. Under luminance of 100,000 lx, representing typical illumination of the surgical field, the augmented microscope detects indocyanine green at a concentration of 189 nM and produces a composite of the real and synthetic images within the eyepiece of the microscope at 20 fps. The augmentation described here can be implemented as an add-on module to visualize NIR contrast agents, laser beams, or various types of electronic data within the surgical microscopes commonly used in neurosurgical, cerebrovascular, otolaryngological, and ophthalmic procedures.
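A minimal sketch of this kind of real/synthetic composite, assuming an already-registered NIR frame and simple alpha blending with a green pseudocolor (an illustrative approach, not the authors' implementation):

```python
import numpy as np

# Blend a pseudocolored NIR fluorescence channel onto the bright-field frame
# wherever the NIR signal exceeds a threshold; alpha controls overlay opacity.
def augment(bright_rgb, nir, alpha=0.6, thresh=0.1):
    nir = (nir - nir.min()) / (np.ptp(nir) + 1e-9)      # normalize to [0, 1]
    overlay = np.zeros_like(bright_rgb, dtype=float)
    overlay[..., 1] = nir                               # green pseudocolor
    mask = (nir > thresh)[..., None]                    # only where signal exists
    out = bright_rgb / 255.0
    out = np.where(mask, (1 - alpha) * out + alpha * overlay, out)
    return (out * 255).astype(np.uint8)

frame = np.full((480, 640, 3), 180, np.uint8)           # stand-in bright-field
nir = np.zeros((480, 640)); nir[200:280, 300:380] = 1.0 # stand-in ICG signal
composite = augment(frame, nir)
print(composite.shape, composite.dtype)
```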
Augmented microscopy: real-time overlay of bright-field and near-infrared fluorescence images
NASA Astrophysics Data System (ADS)
Watson, Jeffrey R.; Gainer, Christian F.; Martirosyan, Nikolay; Skoch, Jesse; Lemole, G. Michael, Jr.; Anton, Rein; Romanowski, Marek
2015-10-01
Intraoperative applications of near-infrared (NIR) fluorescent contrast agents can be aided by instrumentation capable of merging the view of the surgical field with that of NIR fluorescence. We demonstrate augmented microscopy, an intraoperative imaging technique in which bright-field (real) and electronically processed NIR fluorescence (synthetic) images are merged within the optical path of a stereomicroscope. Under luminance of 100,000 lx, representing typical illumination of the surgical field, the augmented microscope detects indocyanine green at a concentration of 189 nM and produces a composite of the real and synthetic images within the eyepiece of the microscope at 20 fps. The augmentation described here can be implemented as an add-on module to visualize NIR contrast agents, laser beams, or various types of electronic data within the surgical microscopes commonly used in neurosurgical, cerebrovascular, otolaryngological, and ophthalmic procedures.
Comparing nonparametric Bayesian tree priors for clonal reconstruction of tumors.
Deshwar, Amit G; Vembu, Shankar; Morris, Quaid
2015-01-01
Statistical machine learning methods, especially nonparametric Bayesian methods, have become increasingly popular to infer clonal population structure of tumors. Here we describe the treeCRP, an extension of the Chinese restaurant process (CRP), a popular construction used in nonparametric mixture models, to infer the phylogeny and genotype of major subclonal lineages represented in the population of cancer cells. We also propose new split-merge updates tailored to the subclonal reconstruction problem that improve the mixing time of Markov chains. In comparisons with the tree-structured stick breaking prior used in PhyloSub, we demonstrate superior mixing and running time using the treeCRP with our new split-merge procedures. We also show that given the same number of samples, TSSB and treeCRP have similar ability to recover the subclonal structure of a tumor…
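For readers unfamiliar with the CRP construction that the treeCRP extends, here is a minimal sampler for the plain Chinese restaurant process (an illustration of the prior only, not the treeCRP or its split-merge moves):

```python
import numpy as np

# Chinese restaurant process: each new cell/customer joins an existing
# cluster with probability proportional to its size, or opens a new one
# with probability proportional to the concentration parameter alpha.
def crp(n, alpha, seed=0):
    rng = np.random.default_rng(seed)
    tables = []                       # cluster sizes
    assignments = []
    for _ in range(n):
        probs = np.array(tables + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append(1)          # open a new cluster (subclone)
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments, tables

_, sizes = crp(500, alpha=2.0)
print(len(sizes), "clusters; largest:", sorted(sizes, reverse=True)[:5])
```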
Merging Sounder and Imager Data for Improved Cloud Depiction on SNPP and JPSS.
NASA Astrophysics Data System (ADS)
Heidinger, A. K.; Holz, R.; Li, Y.; Platnick, S. E.; Wanzong, S.
2017-12-01
Under the NOAA GOES-R Algorithm Working Group (AWG) Program, NOAA supports the development of an Infrared (IR) Optimal Estimation (OE) Cloud Height Algorithm (ACHA). ACHA is an enterprise solution that supports many geostationary and polar-orbiting imager sensors. ACHA is operational at NOAA on SNPP VIIRS and has been adopted as the cloud height algorithm for the NASA NPP Atmospheric Suite of products. Being an OE algorithm, ACHA is flexible and capable of using additional observations and constraints. We have modified ACHA to use sounder (CrIS) observations to improve cloud detection, typing and height estimation. Specifically, these improvements include retrievals in multi-layer scenarios and improved performance in polar regions. This presentation will describe the process for merging VIIRS and CrIS and demonstrate the improvements.
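A toy illustration of the optimal-estimation machinery behind such retrievals: one Gauss-Newton update for a linear two-channel, two-parameter problem. The Jacobian, covariances, and observations are invented for illustration and bear no relation to ACHA's actual coefficients:

```python
import numpy as np

# One step of the standard OE update:
# x <- x + (S_a^-1 + K^T S_y^-1 K)^-1 [K^T S_y^-1 (y - F(x)) - S_a^-1 (x - x_a)]
K = np.array([[0.8, 30.0],     # assumed Jacobian dy/dx for two IR channels
              [0.5, 45.0]])
S_a = np.diag([100.0, 0.04])   # assumed prior covariance
S_y = np.diag([0.25, 0.25])    # assumed observation-error covariance
x_a = np.array([230.0, 0.8])   # prior state (e.g., cloud temperature, emissivity)
y = np.array([215.0, 210.0])   # assumed observed brightness temperatures

x = x_a.copy()
F = K @ x                      # toy linear forward model
S_inv = np.linalg.inv(S_a) + K.T @ np.linalg.inv(S_y) @ K
x = x + np.linalg.solve(S_inv, K.T @ np.linalg.inv(S_y) @ (y - F)
                        - np.linalg.inv(S_a) @ (x - x_a))
print("retrieved state:", x.round(2))
```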
NASA Astrophysics Data System (ADS)
Cecinati, F.; Wani, O.; Rico-Ramirez, M. A.
2017-11-01
Merging radar and rain gauge rainfall data is a technique used to improve the quality of spatial rainfall estimates; in particular, Kriging with External Drift (KED) is a very effective radar-rain gauge merging technique. However, kriging interpolation assumes Gaussianity of the process. Rainfall has a strongly skewed, positive probability distribution, characterized by a discontinuity due to intermittency. In KED, rainfall residuals are used, implicitly calculated as the difference between rain gauge data and a linear function of the radar estimates. Rainfall residuals are non-Gaussian as well. The aim of this work is to evaluate the impact of applying KED to non-Gaussian rainfall residuals, and to assess the best techniques to improve Gaussianity. We compare Box-Cox transformations with λ parameters equal to 0.5, 0.25, and 0.1, Box-Cox with time-variant optimization of λ, normal score transformation, and a singularity analysis technique. The results suggest that Box-Cox with λ = 0.1 and the singularity analysis are not suitable for KED. Normal score transformation and Box-Cox with optimized λ, or λ = 0.25, produce satisfactory results in terms of Gaussianity of the residuals, probability distribution of the merged rainfall products, and rainfall estimate quality, when validated through cross-validation. However, it is observed that Box-Cox transformations are strongly dependent on the temporal and spatial variability of rainfall and on the units used for the rainfall intensity. Overall, applying transformations results in a quantitative improvement of the rainfall estimates only if the correct transformations for the specific data set are used.
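The Box-Cox variants compared are easy to reproduce with SciPy, which also performs the maximum-likelihood optimization of λ when none is supplied; the skewed "rainfall" sample below is synthetic:

```python
import numpy as np
from scipy import stats

# Gaussianize skewed rainfall before kriging: fixed-lambda Box-Cox vs.
# MLE-optimized lambda (scipy estimates lambda when lmbda is omitted).
rain = np.random.default_rng(42).gamma(shape=0.6, scale=3.0, size=500) + 1e-3

z_fixed = stats.boxcox(rain, lmbda=0.25)          # fixed-lambda transform
z_opt, lam = stats.boxcox(rain)                   # MLE-optimized lambda
print(f"optimized lambda = {lam:.3f}")
print("skewness raw/fixed/opt: %.2f / %.2f / %.2f" % (
    stats.skew(rain), stats.skew(z_fixed), stats.skew(z_opt)))
```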
A merged surface reflectance product from the Landsat and Sentinel-2 Missions
NASA Astrophysics Data System (ADS)
Vermote, E.; Claverie, M.; Masek, J. G.; Becker-Reshef, I.; Justice, C. O.
2013-12-01
This project is aimed at producing a merged surface reflectance product from the Landsat and Sentinel-2 missions to ultimately achieve high temporal coverage (~2-day repeat cycle) at high spatial resolution (20-60 m). The goal is to achieve a seamless, consistent stream of surface reflectance data from the different sensors. The first part of this presentation discusses the basic requirements of such a product and the necessary processing steps: mainly calibration, atmospheric correction, BRDF effect correction, spectral band pass adjustment and gridding. We demonstrate the performance of these different corrections by using MODIS and VIIRS (Climate Modeling Grid at 0.05 deg) data globally as well as Formosat-2 (8 m spatial resolution) data (one crop site in the South of France where 105 scenes were acquired during 2006-2010). The consistency of the surface reflectance products from MODIS and Formosat-2 ranges from 6 to 8% (relative) depending on the spectral band (green to NIR), with a bias between 2% (NIR) and 5% (green), which is acceptable given the cumulative limitations in cross-calibration, atmospheric correction and BRDF correction. The second part is devoted to the simulation of the merged Landsat and Sentinel-2 mission by using Landsat-7, LDCM (early) and SPOT-4 Take 5 datasets. The SPOT-4 Take 5 dataset is a collection of 42 sites distributed globally and systematically acquired by SPOT-4 HRV every 5 days during the decommissioning phase of the SPOT-4 mission (February-May 2013). Finally, the benefits of such a merged surface reflectance product at high spatial and temporal resolution are discussed within the context of agricultural monitoring, in particular in the perspective of the GEOGLAM (Global Earth Observation for Global Land Agriculture Monitoring) project.
Tile-Based Two-Dimensional Phase Unwrapping for Digital Holography Using a Modular Framework
Antonopoulos, Georgios C.; Steltner, Benjamin; Heisterkamp, Alexander; Ripken, Tammo; Meyer, Heiko
2015-01-01
A variety of physical and biomedical imaging techniques, such as digital holography, interferometric synthetic aperture radar (InSAR), or magnetic resonance imaging (MRI), enable measurement of the phase of a physical quantity in addition to its amplitude. However, the phase can commonly only be measured modulo 2π, as a so-called wrapped phase map. Phase unwrapping is the process of obtaining the underlying physical phase map from the wrapped phase. Tile-based phase unwrapping algorithms operate by first tessellating the phase map, then unwrapping individual tiles, and finally merging them into a continuous phase map. They can be implemented computationally efficiently and are robust to noise. However, they are prone to failure in the presence of phase residues or erroneous unwraps of single tiles. We sought to overcome these shortcomings by creating novel tile unwrapping and merging algorithms, as well as a framework that allows them to be combined in a modular fashion. To increase the robustness of the tile unwrapping step, we implemented a model-based algorithm that makes efficient use of linear algebra to unwrap individual tiles. Furthermore, we adapted an established pixel-based unwrapping algorithm to create a quality-guided tile merger. These original algorithms, as well as previously existing ones, were implemented in a modular phase unwrapping C++ framework. By examining different combinations of unwrapping and merging algorithms, we compared our method to existing approaches. We show that the appropriate choice of unwrapping and merging algorithms can significantly improve the unwrapped result in the presence of phase residues and noise. Beyond that, our modular framework allows for efficient design and testing of new tile-based phase unwrapping algorithms. The software developed in this study is freely available. PMID:26599984
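To make the tile-unwrap-then-merge idea concrete, here is a minimal one-dimensional Python sketch (not the paper's C++ framework, and with no residue handling): each tile is unwrapped independently via numpy.unwrap, and tiles are then merged by adding the multiple of 2π that minimizes the jump at the tile boundary.

```python
import numpy as np

def unwrap_tiles_1d(wrapped, tile_len):
    # Unwrap each tile independently, then merge: shift each tile by the
    # multiple of 2*pi that minimizes the phase jump to the already-merged
    # part at the shared boundary.
    tiles = [np.unwrap(wrapped[i:i + tile_len])
             for i in range(0, len(wrapped), tile_len)]
    merged = tiles[0]
    for tile in tiles[1:]:
        k = np.round((merged[-1] - tile[0]) / (2 * np.pi))
        merged = np.concatenate([merged, tile + 2 * np.pi * k])
    return merged

# Demo: a linear phase ramp wrapped into (-pi, pi].
true_phase = np.linspace(0.0, 40.0, 400)
wrapped = np.angle(np.exp(1j * true_phase))
recovered = unwrap_tiles_1d(wrapped, tile_len=50)
assert np.allclose(recovered, true_phase)
```

Real 2D tile mergers face the harder problem of choosing the merge order (hence the quality-guided merger described above), but the per-boundary 2π-offset estimation is the same basic operation.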
NASA Astrophysics Data System (ADS)
Boche, H.; Janßen, G.
2014-08-01
We consider one-way quantum state merging and entanglement distillation under compound and arbitrarily varying source models. Regarding quantum compound sources, where the source is memoryless but the source state is an unknown member of a certain set of density matrices, we continue investigations begun in the work of Bjelaković et al. ["Universal quantum state merging," J. Math. Phys. 54, 032204 (2013)] and determine the classical as well as entanglement cost of state merging. We further investigate quantum state merging and entanglement distillation protocols for arbitrarily varying quantum sources (AVQS). In the AVQS model, the source state is assumed to vary in an arbitrary manner for each source output due to environmental fluctuations or adversarial manipulation. We determine the one-way entanglement distillation capacity for AVQS, where we invoke the famous robustification and elimination techniques introduced by Ahlswede. Regarding quantum state merging for AVQS, we show by example that the robustification- and elimination-based approach generally leads to suboptimal entanglement as well as classical communication rates.
Novel crystal timing calibration method based on total variation
NASA Astrophysics Data System (ADS)
Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng
2016-11-01
A novel crystal timing calibration method based on total variation (TV), abbreviated as 'TV merge', has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint is added to the linear equation. Moreover, to solve the computer memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component is used for obtaining the crystal-level timing calibration values. In contrast to conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compared the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, across various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
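The abstract formulates calibration as a linear problem with a TV constraint. As a rough illustration of that structure (not the authors' solver or merge component; all sizes and values are made up), the sketch below fits per-crystal timing offsets t to simulated pairwise time differences d = A·t by subgradient descent on ||A·t − d||² + λ·Σ|t[i+1] − t[i]|:

```python
import numpy as np

def tv_calibrate(A, d, lam=0.5, lr=2e-3, iters=2000):
    # Least-squares fit of per-crystal timing offsets t to measured
    # pairwise time differences d = A @ t, plus a total-variation penalty
    # lam * sum(|t[i+1] - t[i]|), minimized by subgradient descent.
    t = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ t - d)        # gradient of the data term
        s = np.sign(np.diff(t))         # TV subgradient pieces
        grad[:-1] -= lam * s
        grad[1:] += lam * s
        t -= lr * grad
    return t - t.mean()  # pairwise differences fix t only up to a constant

# Demo with hypothetical numbers: 40 crystals, piecewise-constant offsets.
rng = np.random.default_rng(1)
true = np.repeat([0.4, -0.2, 0.1, -0.3], 10)
pairs = rng.integers(0, 40, size=(2000, 2))
A = np.zeros((2000, 40))
A[np.arange(2000), pairs[:, 0]] = 1.0
A[np.arange(2000), pairs[:, 1]] -= 1.0
d = A @ true + rng.normal(0, 0.05, 2000)
est = tv_calibrate(A, d)
print("RMS error:", np.sqrt(np.mean((est - true) ** 2)))
```

The TV term rewards piecewise-constant offset profiles, which is what makes the estimate robust when individual crystals contribute few, noisy coincidences.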
Automatic extraction of numeric strings in unconstrained handwritten document images
NASA Astrophysics Data System (ADS)
Haji, M. Mehdi; Bui, Tien D.; Suen, Ching Y.
2011-01-01
Numeric strings such as identification numbers carry vital pieces of information in documents. In this paper, we present a novel algorithm for the automatic extraction of numeric strings in unconstrained handwritten document images. The algorithm has two main phases: pruning and verification. In the pruning phase, the algorithm first performs a new segment-merge procedure on each text line, and then, using a new regularity measure, it prunes all sequences of characters that are unlikely to be numeric strings. The segment-merge procedure is composed of two modules: a new explicit character segmentation algorithm based on the analysis of skeletal graphs, and a merging algorithm based on graph partitioning. All candidate sequences that pass the pruning phase are sent to a recognition-based verification phase for the final decision. The recognition is based on a coarse-to-fine approach using probabilistic RBF networks. We developed our algorithm for the processing of real-world documents, where letters and digits may be connected or broken. The effectiveness of the proposed approach is shown by extensive experiments on a real-world database of 607 documents containing handwritten, machine-printed and mixed documents with different types of layouts and levels of noise.
Extraction of edge-based and region-based features for object recognition
NASA Astrophysics Data System (ADS)
Coutts, Benjamin; Ravi, Srinivas; Hu, Gongzhu; Shrikhande, Neelima
1993-08-01
One of the central problems of computer vision is object recognition. A catalogue of model objects is described as a set of features such as edges and surfaces. The same features are extracted from the scene and matched against the models for object recognition. Edges and surfaces extracted from scenes are often noisy and imperfect. In this paper, algorithms are described for improving low-level edge and surface features. Existing edge extraction algorithms are applied to the intensity image to obtain edge features. Initial edges are traced by following the directions of the current contour. These are improved by using corresponding depth and intensity information for decision making at branch points. Surface fitting routines are applied to the range image to obtain planar surface patches. A region-growing algorithm is developed that starts with a coarse segmentation and uses quadric surface fitting to iteratively merge adjacent regions into quadric surfaces based on approximate orthogonal distance regression. The surface information obtained is returned to the edge extraction routine to detect and remove fake edges. This process repeats until no more merging or edge improvement can take place. Both synthetic (with Gaussian noise) and real images containing multiple-object scenes have been tested using the merging criteria. Results appear quite encouraging.
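The iterative merge step can be illustrated with a much-simplified sketch: instead of quadric fits with orthogonal distance regression, the toy version below merges adjacent regions whenever a joint least-squares plane fit (vertical residuals) stays below a tolerance. The region contents, adjacency set, and tolerance are all hypothetical:

```python
import numpy as np

def plane_rms(pts):
    # Least-squares plane z = ax + by + c; return RMS vertical residual.
    X = np.c_[pts[:, :2], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(X, pts[:, 2], rcond=None)
    return np.sqrt(np.mean((X @ coef - pts[:, 2]) ** 2))

def merge_adjacent(regions, adjacency, tol=0.05):
    # regions: {id: (N,3) point array}; adjacency: set of (id, id) tuples.
    # Greedily merge an adjacent pair whenever the joint fit stays planar.
    merged = True
    while merged:
        merged = False
        for a, b in sorted(adjacency):
            if a in regions and b in regions:
                joint = np.vstack([regions[a], regions[b]])
                if plane_rms(joint) < tol:
                    regions[a] = joint
                    del regions[b]
                    # Re-point b's neighbours at a, dropping the self-pair.
                    adjacency = {(min(a if u == b else u, a if v == b else v),
                                  max(a if u == b else u, a if v == b else v))
                                 for u, v in adjacency} - {(a, a)}
                    merged = True
                    break
    return regions

rng = np.random.default_rng(2)
xy = rng.uniform(0, 1, (50, 2))
flat = lambda xy, a, b, c: np.c_[xy, a * xy[:, 0] + b * xy[:, 1] + c]
regions = {0: flat(xy, 1, 2, 0), 1: flat(xy + [1, 0], 1, 2, 0),
           2: flat(xy + [2, 0], -3, 0, 5)}
print(sorted(merge_adjacent(regions, {(0, 1), (1, 2)})))  # [0, 2]
```

Regions 0 and 1 lie on the same plane and merge; region 2 lies on a different plane and survives, which is the coarse-to-fine behaviour the abstract describes.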
Qiu, Chenhui; Wang, Yuanyuan; Guo, Yanen; Xia, Shunren
2018-03-14
Image fusion techniques can integrate information from different imaging modalities to produce a composite image that is more suitable for human visual perception and further image processing tasks. Fusing green fluorescent protein (GFP) and phase contrast images is very important for subcellular localization, functional analysis of proteins, and genome expression. A fusion method for GFP and phase contrast images based on the complex shearlet transform (CST) is proposed in this paper. Firstly, the GFP image is converted to the IHS model and its intensity component is obtained. Secondly, the CST is performed on the intensity component and the phase contrast image to acquire the low-frequency and high-frequency subbands. Then the high-frequency subbands are merged by the absolute-maximum rule, while the low-frequency subbands are merged by the proposed Haar wavelet-based energy (HWE) rule. Finally, the fused image is obtained by performing the inverse CST on the merged subbands and conducting IHS-to-RGB conversion. The proposed fusion method is tested on a number of GFP and phase contrast images and compared with several popular image fusion methods. The experimental results demonstrate that the proposed fusion method can provide better fusion results in terms of subjective quality and objective evaluation. © 2018 Wiley Periodicals, Inc.
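The absolute-maximum rule mentioned above is simple enough to state in a couple of lines. The sketch below (with made-up coefficient values, and independent of any particular transform) keeps, per position, the subband coefficient with the larger magnitude:

```python
import numpy as np

def fuse_highfreq(a, b):
    # Absolute-maximum rule: keep, per coefficient, whichever source
    # subband coefficient has the larger magnitude.
    return np.where(np.abs(a) >= np.abs(b), a, b)

# Hypothetical high-frequency subbands from the two source images.
a = np.array([[0.2, -1.5], [0.7, 0.1]])
b = np.array([[-0.9, 0.4], [0.6, -0.3]])
print(fuse_highfreq(a, b))  # [[-0.9, -1.5], [0.7, -0.3]]
```

Large-magnitude high-frequency coefficients correspond to salient detail (edges, texture), so taking the per-coefficient maximum keeps the sharpest content from either modality.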
An automatic dose verification system for adaptive radiotherapy for helical tomotherapy
NASA Astrophysics Data System (ADS)
Mo, Xiaohu; Chen, Mingli; Parnell, Donald; Olivera, Gustavo; Galmarini, Daniel; Lu, Weiguo
2014-03-01
Purpose: During a typical 5-7 week course of external beam radiotherapy, the patient's anatomy and positioning can deviate from the plan, for example through weight loss or treatment setup variations. The discrepancies between planned and delivered doses resulting from these differences could be significant, especially in IMRT, where dose distributions tightly conform to target volumes while avoiding organs-at-risk. We developed an automatic system to monitor delivered dose using daily imaging. Methods: For each treatment, a merged image is generated by registering the daily pre-treatment setup image and the planning CT using treatment position information extracted from the Tomotherapy archive. The treatment dose is then computed on this merged image using our in-house convolution-superposition based dose calculator implemented on GPU. The deformation field between the merged and planning CT is computed using the Morphon algorithm. The planning structures and treatment doses are subsequently warped for analysis and dose accumulation. All results are saved in DICOM format with private tags and organized in a database. Due to the overwhelming amount of information generated, a customizable tolerance system is used to flag potential treatment errors or significant anatomical changes. A web-based system and a DICOM-RT viewer were developed for reporting and reviewing the results. Results: More than 30 patients were analysed retrospectively. Our in-house dose calculator achieved a 97% gamma pass rate, evaluated with 2% dose difference and 2 mm distance-to-agreement against the Tomotherapy-calculated dose, which is considered sufficient for adaptive radiotherapy purposes. Evaluation of the deformable registration through visual inspection showed acceptable and consistent results, except for cases with large or unrealistic deformation. Our automatic flagging system was able to catch significant patient setup errors or anatomical changes. Conclusions: We developed an automatic dose verification system that quantifies treatment doses and provides the necessary information for adaptive planning without impeding clinical workflows.
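For readers unfamiliar with the gamma test cited in the results, a minimal one-dimensional global-gamma sketch follows (the profiles and the 2%/2 mm criteria are illustrative; clinical implementations interpolate the evaluated dose and work in 3D):

```python
import numpy as np

def gamma_pass_rate(ref, eval_, x, dd=0.02, dta=2.0):
    # 1-D gamma analysis: for each reference point, search all evaluated
    # points for the minimum generalized distance combining dose difference
    # (fraction dd of the max dose) and spatial distance (dta, in mm).
    dmax = ref.max()
    gammas = []
    for xi, di in zip(x, ref):
        dose_term = (eval_ - di) / (dd * dmax)
        dist_term = (x - xi) / dta
        gammas.append(np.min(np.sqrt(dose_term**2 + dist_term**2)))
    return np.mean(np.array(gammas) <= 1.0)

# Hypothetical 1-D dose profiles on a 1 mm grid.
x = np.arange(0.0, 100.0, 1.0)
ref = np.exp(-((x - 50) / 15) ** 2)
shifted = np.exp(-((x - 51) / 15) ** 2)   # a 1 mm setup shift
print(f"pass rate: {gamma_pass_rate(ref, shifted, x):.1%}")
```

A point passes when its gamma is at most 1, i.e. when the evaluated dose matches within 2% of the maximum dose somewhere within 2 mm; the 97% figure above is the fraction of points passing.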
GEM: a dynamic tracking model for mesoscale eddies in the ocean
NASA Astrophysics Data System (ADS)
Li, Qiu-Yang; Sun, Liang; Lin, Sheng-Fu
2016-12-01
The Genealogical Evolution Model (GEM) presented here is an efficient logical model used to track the dynamic evolution of mesoscale eddies in the ocean. It can distinguish between different dynamic processes (e.g., merging and splitting) within a dynamic evolution pattern, which is difficult to accomplish using other tracking methods. To this end, the GEM first uses a two-dimensional (2-D) similarity vector (i.e., a pair of ratios of the overlap area between two eddies to the area of each eddy) rather than a scalar to measure the similarity between eddies, which effectively solves the "missing eddy" problem (an eddy temporarily lost during tracking). Second, for tracking when an eddy splits, the GEM uses both a "parent" (the original eddy) and a "child" (an eddy split from the parent), and the dynamic processes are described as the birth and death of different generations. Additionally, a new look-ahead approach with selection rules effectively simplifies computation and recording. All of the computational steps are linear and do not include iteration. Given the pixel number of the target region L, the maximum number of eddies M, the number N of look-ahead time steps, and the total number of time steps T, the total computer time is O(LM(N + 1)T). The tracking of each eddy is very smooth because we require that the snapshots of each eddy on adjacent days overlap one another. Although eddy splitting and merging are ubiquitous in the ocean, they have different geographic distributions in the North Pacific Ocean. Both the merging and splitting rates of the eddies are high, especially at the western boundary, in currents and in "eddy deserts". The GEM is useful not only for satellite-based observational data, but also for numerical simulation outputs. It is potentially useful for studying dynamic processes in other related fields, e.g., the dynamics of cyclones in meteorology.
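The 2-D similarity vector is easy to compute for two eddies represented as boolean masks on a common grid; a small sketch (masks are synthetic):

```python
import numpy as np

def similarity_vector(mask_a, mask_b):
    # Pair of ratios: overlap area divided by each eddy's own area,
    # for two boolean eddy masks on the same grid.
    overlap = np.logical_and(mask_a, mask_b).sum()
    return overlap / mask_a.sum(), overlap / mask_b.sum()

# Hypothetical snapshots: eddy B is eddy A shifted by two pixels.
a = np.zeros((20, 20), dtype=bool)
a[5:12, 5:12] = True
b = np.roll(a, 2, axis=1)
print(similarity_vector(a, b))  # (~0.71, ~0.71)
```

Using the pair rather than a single scalar is what lets the model tell a near-identical continuation (both ratios high) from a small eddy splitting off a large one (one ratio high, the other low).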
Online, On Demand Access to Coastal Digital Elevation Models
NASA Astrophysics Data System (ADS)
Long, J.; Bristol, S.; Long, D.; Thompson, S.
2014-12-01
Process-based numerical models for coastal waves, water levels, and sediment transport are initialized with digital elevation models (DEM) constructed by interpolating and merging bathymetric and topographic elevation data. These gridded surfaces must seamlessly span the land-water interface and may cover large regions where the individual raw data sources are collected at widely different spatial and temporal resolutions. In addition, the datasets are collected from different instrument platforms with varying accuracy and may or may not overlap in coverage. The lack of available tools and difficulties in constructing these DEMs lead scientists to 1) rely on previously merged, outdated, or over-smoothed DEMs; 2) discard more recent data that covers only a portion of the DEM domain; and 3) use inconsistent methodologies to generate DEMs. The objective of this work is to address the immediate need of integrating land and water-based elevation data sources and streamline the generation of a seamless data surface that spans the terrestrial-marine boundary. To achieve this, the U.S. Geological Survey (USGS) is developing a web processing service to format and initialize geoprocessing tasks designed to create coastal DEMs. The web processing service is maintained within the USGS ScienceBase data management system and has an associated user interface. Through the map-based interface, users define a geographic region that identifies the bounds of the desired DEM and a time period of interest. This initiates a query for elevation datasets within federal science agency data repositories. A geoprocessing service is then triggered to interpolate, merge, and smooth the data sources creating a DEM based on user-defined configuration parameters. Uncertainty and error estimates for the DEM are also returned by the geoprocessing service. Upon completion, the information management platform provides access to the final gridded data derivative and saves the configuration parameters for future reference. The resulting products and tools developed here could be adapted to future data sources and projects beyond the coastal environment.
Assessment of the performance of electrode arrays using an image processing technique
NASA Astrophysics Data System (ADS)
Usman, N.; Khiruddin, A.; Nawawi, Mohd
2017-08-01
Interpreting an inverted resistivity section is time-consuming and tedious, and requires other sources of information to be geologically relevant. An image processing technique was used to perform post-inversion processing, which makes geophysical data interpretation easier. The inverted data sets were imported into PCI Geomatica 9.0.1 for further processing. The data sets were clipped and merged together in order to match the coordinates of the three layers and permit pixel-to-pixel analysis. The dipole-dipole array is more sensitive to resistivity variation with depth in comparison with the Wenner-Schlumberger and pole-dipole arrays. Image processing serves as a good post-inversion tool in geophysical data processing.
Toward a complete dataset of drug-drug interaction information from publicly available sources.
Ayvaz, Serkan; Horn, John; Hassanzadeh, Oktie; Zhu, Qian; Stan, Johann; Tatonetti, Nicholas P; Vilar, Santiago; Brochhausen, Mathias; Samwald, Matthias; Rastegar-Mojarad, Majid; Dumontier, Michel; Boyce, Richard D
2015-06-01
Although potential drug-drug interactions (PDDIs) are a significant source of preventable drug-related harm, there is currently no single complete source of PDDI information. In the current study, all publicly available sources of PDDI information that could be identified using a comprehensive and broad search were combined into a single dataset. The combined dataset merged fourteen different sources, including 5 clinically-oriented information sources, 4 Natural Language Processing (NLP) corpora, and 5 bioinformatics/pharmacovigilance information sources. As a comprehensive PDDI source, the merged dataset might benefit the pharmacovigilance text mining community by making it possible to compare the representativeness of NLP corpora for PDDI text extraction tasks, and by specifying elements that can be useful for future PDDI extraction purposes. An analysis of the overlap between and across the data sources showed that there was little overlap. Even comprehensive PDDI lists such as DrugBank, KEGG, and the NDF-RT had less than 50% overlap with each other. Moreover, all of the comprehensive lists had incomplete coverage of two data sources that focus on PDDIs of interest in most clinical settings. Based on this information, we think that systems that provide access to the comprehensive lists, such as APIs into RxNorm, should be careful to inform users that the lists may be incomplete with respect to PDDIs that drug experts suggest clinicians be aware of. In spite of the low degree of overlap, several dozen cases were identified where PDDI information provided in drug product labeling might be augmented by the merged dataset. Moreover, the combined dataset was also shown to improve the performance of an existing PDDI NLP pipeline and a recently published PDDI pharmacovigilance protocol. Future work will focus on improving the methods for mapping between PDDI information sources, identifying methods to improve the use of the merged dataset in PDDI NLP algorithms, integrating high-quality PDDI information from the merged dataset into Wikidata, and making the combined dataset accessible as Semantic Web Linked Data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
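The overlap analysis reduces to set operations once each source is normalized to order-independent drug pairs. A minimal sketch (source names and drug pairs are purely illustrative, not taken from the study's fourteen sources):

```python
# Each source becomes a set of unordered drug pairs; overlap is the
# fraction of one source's pairs found in another.
def normalize(pairs):
    return {frozenset(p) for p in pairs}

sources = {
    "source_a": normalize([("warfarin", "aspirin"),
                           ("simvastatin", "amiodarone")]),
    "source_b": normalize([("aspirin", "warfarin"),
                           ("digoxin", "verapamil")]),
}

for name_a, a in sources.items():
    for name_b, b in sources.items():
        if name_a < name_b:
            frac = len(a & b) / len(a)
            print(f"{name_a} pairs covered by {name_b}: {frac:.0%}")
```

Normalizing to frozensets makes (warfarin, aspirin) and (aspirin, warfarin) count as the same interaction, which is essential before measuring coverage across sources.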
Computing and visualizing time-varying merge trees for high-dimensional data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oesterling, Patrick; Heine, Christian; Weber, Gunther H.
2017-06-03
We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
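As a pocket-sized illustration of the merge tree idea (a 1-D toy, not the paper's arbitrary-dimension, time-varying method), the sketch below sweeps a threshold downward over a scalar field and uses union-find to record where superlevel-set components join:

```python
import numpy as np

def merge_events_1d(values):
    # Sweep the threshold downward; each grid point joins the superlevel
    # set in turn. A point whose two neighbours lie in distinct components
    # is a saddle where two branches of the merge tree join.
    order = np.argsort(values)[::-1]
    parent = {}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    events = []
    for i in order:
        parent[i] = i
        roots = {find(j) for j in (i - 1, i + 1) if j in parent}
        if len(roots) == 2:
            events.append(("merge", float(values[i])))
        for r in roots:
            parent[r] = i
    return events

f = np.array([0.1, 0.9, 0.3, 0.8, 0.2, 0.7, 0.0])
print(merge_events_1d(f))  # [('merge', 0.3), ('merge', 0.2)]
```

The three local maxima (0.9, 0.8, 0.7) each spawn a branch; the saddles at 0.3 and 0.2 are where branches merge. Tracking such trees across time steps, and matching subtrees between consecutive steps, is the essence of the method described above.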
NASA Technical Reports Server (NTRS)
Johnson, Sally
2017-01-01
Trajectory-Based Operations (TBO) is one of the essential paradigm shifts in the NextGen transformation of the National Airspace System. Under TBO, aircraft are managed by 4-dimensional trajectories, and airborne and ground-based metering, merging, and spacing operations are key to managing those trajectories. This paper presents the results of a study of potential metering, merging, and spacing operations within a future TBO environment. A number of operational scenarios for tactical and strategic uses of metering, merging, and spacing are described, and interdependencies between concurrent tactical and strategic operations are identified.
Electromagnetic navigation versus fluoroscopy in aortic endovascular procedures: a phantom study.
Tystad Lund, Kjetil; Tangen, Geir Arne; Manstad-Hulaas, Frode
2017-01-01
To explore the possible benefits of electromagnetic (EM) navigation versus conventional fluoroscopy during abdominal aortic endovascular procedures. The study was performed on a phantom representing the abdominal aorta. Intraoperative cone beam computed tomography (CBCT) of the phantom was acquired and merged with a preoperative multidetector CT (MDCT). The CBCT was performed with a reference plate fixed to the phantom that, after merging the CBCT with the MDCT, facilitated registration of the MDCT volume with the EM space. An EM field generator was stationed near the phantom. Navigation software was used to display EM-tracked instruments within the 3D image volume. Fluoroscopy was performed using a C-arm system. Five operators performed a series of renal artery cannulations using modified instruments, alternately using fluoroscopy or EM navigation as the sole guidance method. Cannulation durations and associated radiation dosages were noted, along with the number of cannulations complicated by loss of guidewire insertion. A total of 120 cannulations were performed. The median cannulation durations were 41.5 and 34.5 s for the fluoroscopy- and EM-guided cannulations, respectively. No significant difference in cannulation duration was found between the two modalities (p = 0.736). Only EM navigation showed a significant reduction in cannulation duration in the latter half of its cannulation series compared with the first half (p = 0.004). The median dose area product for fluoroscopy was 0.0836 [Formula: see text]. EM-guided cannulations required a one-time CBCT dosage of 3.0278 [Formula: see text]. Three EM-guided and zero fluoroscopy-guided cannulations experienced loss of guidewire insertion. Our findings indicate that EM navigation is not inferior to fluoroscopy in terms of its ability to guide endovascular interventions. Its utilization may be of particular interest in complex interventions where adequate visualization or minimal use of contrast agents is critical. In vivo studies featuring an optimized implementation of EM navigation should be conducted.
NASA Astrophysics Data System (ADS)
Mencin, David; Hodgkinson, Kathleen; Sievers, Charlie; Phillips, David; Meertens, Charles; Mattioli, Glen
2017-04-01
UNAVCO has been providing infrastructure and support for solid-earth science and earthquake natural hazards for the past two decades. Recent advances in GNSS technology and data processing are now providing position solutions with centimeter-level precision at high rate (>1 Hz) and low latency (i.e., the time required for data to arrive for analysis, in this case less than 1 second). These data have the potential to improve our understanding in diverse areas of geophysics, including the properties of seismic, volcanic, magmatic and tsunami sources, and thus to profoundly transform rapid event characterization and warning. Scientific and operational applications also include glacier and ice sheet motions, tropospheric modeling, and space weather. These areas of geophysics represent a spectrum of research fields, including geodesy, seismology, tropospheric weather, space weather and natural hazards. Processed real-time GNSS (RT-GNSS) data will require formats and standards that allow this broad and diverse community to use these data and associated metadata in existing research infrastructure. These advances have critically highlighted the difficulties associated with merging data and metadata between scientific disciplines. Even seemingly closely related fields such as geodesy and seismology, which both have rich histories of handling large volumes of data and metadata, do not mesh well in any automated way. Community analysis strategies, or the lack thereof, such as the treatment of error, prove difficult to address and are reflected in the data and metadata. In addition, these communities have differing security, accessibility and reliability requirements. We propose some solutions to the particular problem of making RT-GNSS processed solution data and metadata accessible to multiple scientific and natural hazard communities. Importantly, we discuss the roadblocks encountered and solved, and those that remain to be addressed.
A hierarchical word-merging algorithm with class separability measure.
Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan
2014-03-01
In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.
Investigating different computed tomography techniques for internal target volume definition.
Yoganathan, S A; Maria Das, K J; Subramanian, V Siva; Raj, D Gowtham; Agarwal, Arpita; Kumar, Shaleen
2017-01-01
The aim of this work was to evaluate various computed tomography (CT) techniques, such as fast CT, slow CT, breath-hold (BH) CT, full-fan cone beam CT (FF-CBCT), half-fan CBCT (HF-CBCT), and average CT, for delineation of the internal target volume (ITV). In addition, these ITVs were compared against four-dimensional CT (4DCT) ITVs. Three-dimensional target motion was simulated using a dynamic thorax phantom with a 3 cm diameter target insert for ten respiration datasets. CT images were acquired using a commercially available multislice CT scanner, and the CBCT images were acquired using an On-Board Imager. The average CT was generated by averaging the 10 phases of 4DCT. ITVs were delineated for each CT by contouring the volume of the target ball; 4DCT ITVs were generated by merging the target volumes of all 10 phases. In the case of BH-CT, the ITV was derived by a Boolean combination of the 0% phase, 50% phase, and fast CT target volumes. ITVs determined by all CT and CBCT scans were significantly smaller (P < 0.05) than the 4DCT ITV, whereas there was no significant difference between average CT and 4DCT ITVs (P = 0.17). Fast CT had the maximum deviation (-46.1% ± 20.9%) followed by slow CT (-34.3% ± 11.0%) and FF-CBCT scans (-26.3% ± 8.7%). However, HF-CBCT scans (-12.9% ± 4.4%) and BH-CT scans (-11.1% ± 8.5%) resulted in almost similar deviations. On the contrary, average CT had the least deviation (-4.7% ± 9.8%). When compared with 4DCT, all the CT techniques underestimated the ITV. In the absence of 4DCT, HF-CBCT target volumes with an appropriate margin may be a reasonable approach for defining the ITV.
29 CFR 4211.31 - Allocation of unfunded vested benefits following the merger of plans.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) through (d) of this section, when two or more multiemployer plans merge, the merged plan shall adopt one... allocation methods prescribed in §§ 4211.32 through 4211.35, and the method adopted shall apply to all employer withdrawals occurring after the initial plan year. Alternatively, a merged plan may adopt its own...
76 FR 54931 - Post Office (PO) Box Fee Groups for Merged Locations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-06
... POSTAL SERVICE 39 CFR Part 111 Post Office (PO) Box Fee Groups for Merged Locations AGENCY: Postal... different ZIP Code TM location because of a merger of two or more ZIP Code locations into a single location... merged with a location whose box section is more than one fee group level different, the location would...
Merged Federal Files [Academic Year] 1978-79 [machine-readable data file].
ERIC Educational Resources Information Center
National Center for Education Statistics (ED), Washington, DC.
The Merged Federal File for 1978-79 contains school district level data from the following six source files: (1) the Census of Governments' Survey of Local Government Finances--School Systems (F-33) (with 16,343 records merged); (2) the National Center for Education Statistics Survey of School Systems (School District Universe) (with 16,743…
An invertebrate embryologist's guide to routine processing of confocal images.
von Dassow, George
2014-01-01
It is almost impossible to use a confocal microscope without encountering the need to transform the raw data through image processing. Adherence to a set of straightforward guidelines will help ensure that image manipulations are both credible and repeatable. Meanwhile, attention to optimal data collection parameters will greatly simplify image processing, not only for convenience but for quality and credibility as well. Here I describe how to conduct routine confocal image processing tasks, including creating 3D animations or stereo images, false coloring or merging channels, background suppression, and compressing movie files for display.
NASA Technical Reports Server (NTRS)
Hubert, Daan; Lambert, Jean-Christopher; Verhoelst, Tijl; Granville, Jose; Keppens, Arno; Baray, Jean-Luc; Cortesi, Ugo; Degenstein, D. A.; Froidevaux, Lucien; Godin-Beekmann, Sophie;
2015-01-01
Most recent assessments of long-term changes in the vertical distribution of ozone (by, e.g., WMO and SI2N) rely on data sets that integrate observations by multiple instruments. Several merged satellite ozone profile records have been developed over the past few years; each considers a particular set of instruments and adopts a particular merging strategy. Their intercomparison by Tummon et al. revealed that the current merging schemes are not sufficiently refined to correct for all major differences between the limb/occultation records. This shortcoming introduces uncertainties that need to be known to obtain a sound interpretation of the different satellite-based trend studies. In practice, however, producing realistic uncertainty estimates is an intricate task, which depends on a sufficiently detailed understanding of the characteristics of each contributing data record and on the subsequent interplay and propagation of these through the merging scheme. Our presentation discusses these challenges in the context of limb/occultation ozone profile records, but they are equally relevant for other instruments and atmospheric measurements. We start by showing how the NDACC and GAW-affiliated ground-based networks of ozonesonde and lidar instruments allowed us to characterize fourteen limb/occultation ozone profile records, together providing a global view over the last three decades. Our prime focus will be on techniques to estimate long-term drift, since our results suggest this is the main driver of the major trend differences between the merged data sets. The single-instrument drift estimates are then used for a tentative estimate of the systematic uncertainty in the profile trends from merged data records. We conclude by reflecting on possible further steps needed to improve the merging algorithms and to obtain a better characterization of the uncertainties involved.
NASA Astrophysics Data System (ADS)
Giacobbo, Nicola; Mapelli, Michela; Spera, Mario
2018-03-01
The first four gravitational wave events detected by LIGO were all interpreted as merging black hole binaries (BHBs), opening a new perspective on the study of such systems. Here we use our new population-synthesis code MOBSE, an upgraded version of BSE, to investigate the demography of merging BHBs. MOBSE includes metallicity-dependent prescriptions for mass-loss of massive hot stars. It also accounts for the impact of the electron-scattering Eddington factor on mass-loss. We perform >10⁸ simulations of isolated massive binaries, with 12 different metallicities, to study the impact of mass-loss, core-collapse supernovae and common envelope on merging BHBs. Accounting for the dependence of stellar winds on the Eddington factor leads to the formation of black holes (BHs) with mass up to 65 M⊙ at metallicity Z ~ 0.0002. However, most BHs in merging BHBs have masses ≲40 M⊙. We find merging BHBs with mass ratios in the 0.1-1.0 range, even if mass ratios >0.6 are more likely. We predict that systems like GW150914, GW170814 and GW170104 can form only from progenitors with metallicity Z ≤ 0.006, Z ≤ 0.008 and Z ≤ 0.012, respectively. Most merging BHBs have gone through a common envelope phase, but up to ~17 per cent of merging BHBs at low metallicity did not undergo any common envelope phase. We find a much higher number of mergers from metal-poor progenitors than from metal-rich ones: the number of BHB mergers per unit mass is ~10⁻⁴ M⊙⁻¹ at low metallicity (Z = 0.0002-0.002) and drops to ~10⁻⁷ M⊙⁻¹ at high metallicity (Z ~ 0.02).
Geographically weighted regression based methods for merging satellite and gauge precipitation
NASA Astrophysics Data System (ADS)
Chao, Lijun; Zhang, Ke; Li, Zhijia; Zhu, Yuelong; Wang, Jingfeng; Yu, Zhongbo
2018-03-01
Real-time precipitation data with high spatiotemporal resolution are crucial for accurate hydrological forecasting. To improve the spatial resolution and quality of satellite precipitation, a three-step satellite and gauge precipitation merging method was formulated in this study: (1) bilinear interpolation is first applied to downscale the coarser satellite precipitation to a finer resolution (PS); (2) geographically weighted regression (GWR) and mixed geographically weighted regression (MGWR) methods coupled with a weighting function are then used to estimate the biases of PS as functions of gauge observations (PO) and PS; and (3) the biases of PS are finally corrected to produce a merged precipitation product. Based on the above framework, eight algorithms, combining the two geographically weighted regression methods and four weighting functions, were developed to merge CMORPH (CPC MORPHing technique) precipitation with station observations on a daily scale in the Ziwuhe Basin of China. Geographical variables (elevation, slope, aspect, surface roughness, and distance to the coastline) and a meteorological variable (wind speed) were used for merging precipitation to avoid the artificial spatial autocorrelation resulting from traditional interpolation methods. The results show that the combination of MGWR and the bi-square function (MGWR-BI) has the best performance (R = 0.863 and RMSE = 7.273 mm/day) among the eight algorithms. The MGWR-BI algorithm was then applied to produce an hourly merged precipitation product. Compared to the original CMORPH product (R = 0.208 and RMSE = 1.208 mm/hr), the quality of the merged data is significantly higher (R = 0.724 and RMSE = 0.706 mm/hr). The developed merging method not only improves the spatial resolution and quality of the satellite product but is also easy to implement, which is valuable for hydrological modeling and other applications.
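The locally weighted regression at the heart of step (2) can be sketched compactly. The toy below uses Gaussian distance weights rather than the paper's bi-square kernel, and purely synthetic gauge data; at each grid cell it fits a distance-weighted linear model PO ~ PS and applies it to the satellite value there:

```python
import numpy as np

def gwr_correct(ps_grid, xy_grid, gauges_xy, po, ps_at_gauges, bandwidth):
    # At every grid cell, fit a locally weighted linear model PO ~ PS with
    # Gaussian distance weights, then apply it to that cell's satellite value.
    out = np.empty(len(ps_grid))
    X = np.c_[np.ones(len(po)), ps_at_gauges]
    for k, (x0, y0) in enumerate(xy_grid):
        d2 = np.sum((gauges_xy - [x0, y0]) ** 2, axis=1)
        W = np.diag(np.exp(-d2 / (2 * bandwidth**2)))
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ po)
        out[k] = beta[0] + beta[1] * ps_grid[k]
    return out

# Synthetic demo: gauges obey PO = 1.2 * PS + 0.5 plus noise.
rng = np.random.default_rng(3)
gauges_xy = rng.uniform(0, 10, (30, 2))
ps_at_gauges = rng.uniform(0, 5, 30)
po = 1.2 * ps_at_gauges + 0.5 + rng.normal(0, 0.1, 30)
xy_grid = np.array([[2.0, 3.0], [7.0, 8.0]])
ps_grid = np.array([2.5, 4.0])
print(gwr_correct(ps_grid, xy_grid, gauges_xy, po, ps_at_gauges, 3.0))
```

Because the regression coefficients vary smoothly with location, the correction can follow regional bias patterns (e.g., orographic effects) that a single global regression would average away.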
Efficient Merge and Insert Operations for Binary Heaps and Trees
NASA Technical Reports Server (NTRS)
Kuszmaul, Christopher Lee; Woo, Alex C. (Technical Monitor)
2000-01-01
Binary heaps and binary search trees merge efficiently. We introduce a new amortized analysis that allows us to prove that the cost of merging either binary heaps or balanced binary trees is O(1), in the amortized sense. The standard set of other operations (create, insert, delete, extract minimum, in the case of binary heaps and balanced binary trees, as well as a search operation for balanced binary trees) remain with a cost of O(log n). For binary heaps implemented as arrays, we show a new merge algorithm with a single-operation cost for merging two heaps, a and b, of O(|a| + min(log|b| log log|b|, log|a| log|b|)). This is an improvement over O(|a| + log|a| log|b|). The cost of the new merge is so low that it can be used in a new structure, which we call shadow heaps, to implement the insert operation with tunable efficiency. Shadow heaps support the insert operation for simple priority queues in an amortized time of O(f(n)) and other operations in time O((log n log log n)/f(n)), where 1 ≤ f(n) ≤ log log n. More generally, the results here show that any data structure with operations that change its size by at most one, with the exception of a merge (aka meld) operation, can efficiently amortize the cost of the merge under conditions that are true for most implementations of binary heaps and search trees.
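The abstract's specialized merge is not spelled out here, but the baseline it improves on is easy to show: a naive array-heap merge concatenates the two arrays and re-heapifies, costing O(|a| + |b|) since Floyd's heap construction is linear. A Python sketch:

```python
import heapq

def merge_heaps(a, b):
    # Baseline merge (not the paper's algorithm): concatenate the two
    # array-based heaps and rebuild in linear time with Floyd's method.
    merged = a + b
    heapq.heapify(merged)
    return merged

a = [1, 3, 5, 7]
b = [2, 4]
h = merge_heaps(a, b)
print([heapq.heappop(h) for _ in range(len(h))])  # [1, 2, 3, 4, 5, 7]
```

The paper's contribution is precisely that, with the right amortized accounting, merging can be charged far less than this linear rebuild whenever one heap is much smaller than the other.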
Photogrammetry Tool for Forensic Analysis
NASA Technical Reports Server (NTRS)
Lane, John
2012-01-01
A system allows crime scene and accident scene investigators the ability to acquire visual scene data using cameras for processing at a later time. This system uses a COTS digital camera, a photogrammetry calibration cube, and 3D photogrammetry processing software. In a previous instrument developed by NASA, the laser scaling device made use of parallel laser beams to provide a photogrammetry solution in 2D. This device and associated software work well under certain conditions. In order to make use of a full 3D photogrammetry system, a different approach was needed. When using multiple cubes, whose locations relative to each other are unknown, a procedure that would merge the data from each cube would be as follows: 1. One marks a reference point on cube 1, then marks points on cube 2 as unknowns. This locates cube 2 in cube 1's coordinate system. 2. One marks reference points on cube 2, then marks points on cube 1 as unknowns. This locates cube 1 in cube 2's coordinate system. 3. This procedure is continued for all combinations of cubes. 4. The coordinates of all of the found coordinate systems are then merged into a single global coordinate system. In order to achieve maximum accuracy, measurements are done in one of two ways, depending on scale: when measuring the size of objects, the coordinate system corresponding to the nearest cube is used, or when measuring the location of objects relative to a global coordinate system, the merged coordinate system is used. Presently, traffic accident analysis is time-consuming and not very accurate. Using cubes with differential GPS would give absolute positions of cubes in the accident area, so that individual cubes would provide local photogrammetry calibration to objects near a cube.
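Merging the per-cube coordinate systems amounts to composing rigid transforms. A minimal sketch with homogeneous 4x4 matrices (the rotations and translations are invented values, not measurements from the system described above):

```python
import numpy as np

def rigid(rz_deg, t):
    # Homogeneous 4x4 transform: rotation about z by rz_deg, translation t.
    a = np.radians(rz_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0],
                 [np.sin(a),  np.cos(a), 0],
                 [0, 0, 1]]
    T[:3, 3] = t
    return T

# Hypothetical: cube 2 expressed in cube 1's frame, cube 3 in cube 2's.
T12 = rigid(30, [2.0, 0.0, 0.0])
T23 = rigid(-10, [0.0, 1.5, 0.3])
T13 = T12 @ T23      # chaining merges cube 3 into cube 1's global frame

p_cube3 = np.array([0.1, 0.2, 0.0, 1.0])  # a point measured near cube 3
print(T13 @ p_cube3)  # the same point in the global coordinate system
```

Chaining pairwise transforms in this way accumulates error, which is why the procedure above measures all cube combinations and uses the nearest cube's frame for local size measurements.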
Black Hole Mergers in Galactic Nuclei Induced by the Eccentric Kozai–Lidov Effect
NASA Astrophysics Data System (ADS)
Hoang, Bao-Minh; Naoz, Smadar; Kocsis, Bence; Rasio, Frederic A.; Dosopoulou, Fani
2018-04-01
Nuclear star clusters around a central massive black hole (MBH) are expected to be abundant in stellar black hole (BH) remnants and BH–BH binaries. These binaries form a hierarchical triple system with the central MBH, and gravitational perturbations from the MBH can cause high-eccentricity excitation in the BH–BH binary orbit. During this process, the eccentricity may approach unity, and the pericenter distance may become sufficiently small so that gravitational-wave emission drives the BH–BH binary to merge. In this work, we construct a simple proof-of-concept model for this process, and specifically, we study the eccentric Kozai–Lidov mechanism in unequal-mass, soft BH–BH binaries. Our model is based on a set of Monte Carlo simulations for BH–BH binaries in galactic nuclei, taking into account quadrupole- and octupole-level secular perturbations, general relativistic precession, and gravitational-wave emission. For a typical steady-state number of BH–BH binaries, our model predicts a total merger rate of ∼1–3 Gpc⁻³ yr⁻¹, depending on the assumed density profile in the nucleus. Thus, our mechanism could potentially compete with other dynamical formation processes for merging BH–BH binaries, such as the interactions of stellar BHs in globular clusters or in nuclear star clusters without an MBH.
NASA Astrophysics Data System (ADS)
Placko, Dominique; Bore, Thierry; Rivollet, Alain; Joubert, Pierre-Yves
2015-10-01
This paper deals with the problem of imaging defects in metallic structures through eddy current (EC) inspections, and proposes an original process for a possible tomographic crack evaluation. This process is based on a semi-analytical modeling approach, called the "distributed point source method" (DPSM), which is used to describe and equate the interactions between the implemented EC probes and the structure under test. Several steps are successively described, illustrating the feasibility of this new imaging process dedicated to the quantitative evaluation of defects. The basic principle of this imaging process consists, first, of creating a 3D grid by meshing the volume potentially inspected by the sensor. As a result, a given number of elemental volumes (called voxels) are obtained. Second, DPSM modeling is used to compute an image for every occurrence in which exactly one of the voxels has a conductivity different from all the other ones. The assumption is that a real defect may be accurately represented by a superposition of elemental voxels: the resulting accuracy will naturally depend on the density of the spatial sampling. On the other hand, the excitation device of the EC imager can be oriented in several directions and driven by an excitation current at variable frequency. The simulation is therefore performed for several frequencies and directions of the eddy currents induced in the structure, which increases the signal entropy. All these results are merged in a so-called "observation matrix" containing all the probe/structure interaction configurations. This matrix is then used in an inversion scheme in order to evaluate the defect location and geometry. The modeled EC data provided by the DPSM are compared to the experimental images provided by an eddy current imager (ECI) applied to aluminum plates containing buried defects. In order to validate the proposed inversion process, we feed it with computed images of various acquisition configurations. Additive noise was added to the images so that they are more representative of actual EC data. In the case of simple notch-type defects, for which the relative conductivity may only take two extreme values (1 or 0), a threshold was introduced on the inverted images in a post-processing step, taking advantage of a priori knowledge of the statistical properties of the restored images. This threshold enhanced the image contrast and helped eliminate both the residual noise and the pixels showing non-realistic values.
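The structure of the inversion, one modeled response per voxel stacked into an observation matrix, then a linear solve and a binary threshold, can be sketched as follows (the random matrix stands in for the DPSM-computed responses; all sizes, the defect location, and the 0.5 threshold are illustrative):

```python
import numpy as np

# Each column of M holds the modeled image response of one voxel flipped
# to a defect; the measured image y is explained as a superposition, and
# a threshold restores the binary (notch-type) conductivity map.
rng = np.random.default_rng(4)
n_pix, n_vox = 200, 50
M = rng.normal(size=(n_pix, n_vox))           # stand-in for DPSM responses
x_true = np.zeros(n_vox)
x_true[[7, 8, 9]] = 1.0                       # a small buried defect
y = M @ x_true + rng.normal(0, 0.05, n_pix)   # noisy measured image

x_ls, *_ = np.linalg.lstsq(M, y, rcond=None)  # linear inversion
x_bin = (x_ls > 0.5).astype(float)            # a-priori binary threshold
print("recovered defect voxels:", np.flatnonzero(x_bin))
```

Stacking several excitation frequencies and orientations simply appends rows to M, which is how the multi-configuration acquisitions described above improve the conditioning of the inversion.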
Merged GLORIA sidescan and hydrosweep pseudo-sidescan: Processing and creation of digital mosaics
Bird, R.T.; Searle, R.C.; Paskevich, V.; Twichell, D.C.
1996-01-01
We have replaced the usual band of poor-quality data in the near-nadir region of our GLORIA long-range sidescan-sonar imagery with a shaded-relief image constructed from swath bathymetry data (collected simultaneously with GLORIA) which completely cover the nadir area. We have developed a technique to enhance these "pseudo-sidescan" images in order to mimic the neighbouring GLORIA backscatter intensities. As a result, the enhanced images greatly facilitate the geologic interpretation of the adjacent GLORIA data, and geologic features evident in the GLORIA data may be correlated with greater confidence across track. Features interpreted from the pseudo-sidescan may be extrapolated from the near-nadir region out into the GLORIA range where they may not have been recognized otherwise, and therefore the pseudo-sidescan can be used to ground-truth GLORIA interpretations. Creation of digital sidescan mosaics utilized an approach not previously used for GLORIA data. Pixels were correctly placed in cartographic space and the time required to complete a final mosaic was significantly reduced. Computer software for digital mapping and mosaic creation is incorporated into the newly-developed Woods Hole Image Processing System (WHIPS) which can process both low- and high-frequency sidescan, and can interchange data with the Mini Image Processing System (MIPS) most commonly used for GLORIA processing. These techniques are tested by creating digital mosaics of merged GLORIA sidescan and Hydrosweep pseudo-sidescan data from the vicinity of the Juan Fernandez microplate along the East Pacific Rise (EPR).
The Careful Puppet Master: Reducing risk and fortifying acceptance testing with Jenkins CI
NASA Astrophysics Data System (ADS)
Smith, Jason A.; Richman, Gabriel; DeStefano, John; Pryor, James; Rao, Tejas; Strecker-Kellogg, William; Wong, Tony
2015-12-01
Centralized configuration management, including the use of automation tools such as Puppet, can greatly increase provisioning speed and efficiency when configuring new systems or making changes to existing systems, reduce duplication of work, and improve automated processes. However, centralized management also brings with it a level of inherent risk: a single change in just one file can quickly be pushed out to thousands of computers and, if that change is not properly and thoroughly tested and contains an error, could result in catastrophic damage to many services, potentially bringing an entire computer facility offline. Change management procedures can—and should—be formalized in order to prevent such accidents. However, like the configuration management process itself, if such procedures are not automated, they can be difficult to enforce strictly. Therefore, to reduce the risk of merging potentially harmful changes into our production Puppet environment, we have created an automated testing system, which includes the Jenkins CI tool, to manage our Puppet testing process. This system includes the proposed changes and runs Puppet on a pool of dozens of RedHat Enterprise Virtualization (RHEV) virtual machines (VMs) that replicate most of our important production services for the purpose of testing. This paper describes our automated test system and how it hooks into our production approval process for automatic acceptance testing. All pending changes that have been pushed to production must pass this validation process before they can be approved and merged into production.
Hospitalization patterns associated with Appalachian coal mining.
Hendryx, Michael; Ahern, Melissa M; Nurkiewicz, Timothy R
2007-12-01
The goal of this study was to test whether the volume of coal mining was related to population hospitalization risk for diseases postulated to be sensitive or insensitive to coal mining by-products. The study was a retrospective analysis of 2001 adult hospitalization data (n = 93,952) for West Virginia, Kentucky, and Pennsylvania, merged with county-level coal production figures. Hospitalization data were obtained from the Health Care Utilization Project National Inpatient Sample. Diagnoses postulated to be sensitive to coal mining by-product exposure were contrasted with diagnoses postulated to be insensitive to exposure. Data were analyzed using hierarchical nonlinear models, controlling for patient age, gender, insurance, comorbidities, hospital teaching status, county poverty, and county social capital. Controlling for covariates, the volume of coal mining was significantly related to hospitalization risk for two conditions postulated to be sensitive to exposure: hypertension and chronic obstructive pulmonary disease (COPD). The odds for a COPD hospitalization increased 1% for each 1462 tons of coal, and the odds for a hypertension hospitalization increased 1% for each 1873 tons of coal. Other conditions were not related to mining volume. Exposure to particulates or other pollutants generated by coal mining activities may be linked to increased risk of COPD and hypertension hospitalizations. Limitations in the data likely result in an underestimate of associations.
An improved bathymetric model for the modern and palaeo Lake Eyre
NASA Astrophysics Data System (ADS)
Leon, J. X.; Cohen, T. J.
2012-11-01
Here we demonstrate the applicability of using altimetry data and Landsat imagery to provide the most accurate digital elevation model (DEM) of Australia's largest playa lake, Lake Eyre. We demonstrate through the use of geospatial techniques a robust assessment of the lake area and volume of recent lake-filling episodes, whilst also providing the most accurate estimates of area and volume for the larger lake-filling episodes that occurred throughout the last glacial cycle. We highlight that at a depth of 25 m, Lake Mega-Eyre would merge with the adjacent Lake Mega-Frome to form an immense waterbody with a combined area of almost 35,000 km² and a combined volume of ~520 km³. This would represent a vast water body in what is now the arid interior of the Australian continent. The improved DEM is more reliable from a geomorphological and hydrological perspective and allows a more accurate assessment of the water balance under the modern hydrological regime. The results presented using GLAS/ICESat data suggest that earlier historical soundings were correct and the actual lowest topographic point in Australia is 15.6 m below sea level. The results also nicely contrast the different basin characteristics of two adjacent lake systems: Lake Eyre and Lake Frome.
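The area and volume estimates quoted above follow directly from a DEM once a water level is fixed: inundated cells give the area, and the integrated water-column depth gives the volume. A small sketch with a synthetic elevation grid (the grid, cell size, and water level are made up):

```python
import numpy as np

def lake_area_volume(dem, level, cell_area):
    # Cells below the water level are inundated; the water depth at each
    # wet cell integrates to the stored volume.
    wet = dem < level
    area = wet.sum() * cell_area
    volume = np.sum(level - dem[wet]) * cell_area
    return area, volume

# Hypothetical 1 km grid of a closed basin with a -15.6 m low point.
rng = np.random.default_rng(5)
dem = rng.uniform(-15.6, 30.0, (100, 100))    # elevations in metres
area, vol = lake_area_volume(dem, level=25.0, cell_area=1e6)  # m^2 cells
print(f"area {area / 1e6:.0f} km^2, volume {vol / 1e9:.1f} km^3")
```

This is why DEM accuracy matters so much for playa lakes: with such flat floors, a small elevation error shifts the inundated area, and hence the volume, substantially.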
A monolithic mass tracking formulation for bubbles in incompressible flow
NASA Astrophysics Data System (ADS)
Aanjaneya, Mridul; Patkar, Saket; Fedkiw, Ronald
2013-08-01
We devise a novel method for treating bubbles in incompressible flow that relies on the conservative advection of bubble mass and an associated equation of state in order to determine pressure boundary conditions inside each bubble. We show that executing this algorithm in a traditional manner leads to stability issues similar to those seen for partitioned methods for solid-fluid coupling. Therefore, we reformulate the problem monolithically. This is accomplished by first proposing a new fully monolithic approach to coupling incompressible flow to fully nonlinear compressible flow including the effects of shocks and rarefactions, and then subsequently making a number of simplifying assumptions on the air flow removing not only the nonlinearities but also the spatial variations of both the density and the pressure. The resulting algorithm is quite robust, has been shown to converge to known solutions for test problems, and has been shown to be quite effective on more realistic problems including those with multiple bubbles, merging and pinching, etc. Notably, this approach departs from a standard two-phase incompressible flow model where the air flow preserves its volume despite potentially large forces and pressure differentials in the surrounding incompressible fluid that should change its volume. Our bubbles readily change volume according to an isothermal equation of state.
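The role of the equation of state can be made concrete with a tiny sketch. Assuming an isothermal ideal gas (a simplification; the paper's monolithic coupling and its treatment of shocks are not reproduced here), the conservatively advected bubble mass plus the current bubble volume determine the pressure boundary condition:

```python
# Isothermal ideal gas: p * V = m * R_s * T, so tracking the advected
# mass m and the current volume V of each bubble fixes its pressure p.
def bubble_pressure(mass, volume, Rs=287.0, T=300.0):
    return mass * Rs * T / volume

m0, V0 = 1.2e-9, 1.0e-9      # kg, m^3: roughly a 1 mm^3 air bubble
p0 = bubble_pressure(m0, V0)             # ~1 atm
p_half = bubble_pressure(m0, V0 / 2)     # compressed to half its volume
print(p0, p_half, p_half / p0)           # pressure doubles isothermally
```

Because pressure responds to volume changes through the EOS, the bubbles can genuinely compress and expand, which is exactly the behaviour a standard volume-preserving two-phase incompressible model cannot capture.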
ERIC Educational Resources Information Center
Schmied, Emily; Parada, Humberto; Horton, Lucy; Ibarra, Leticia; Ayala, Guadalupe
2015-01-01
"Entre Familia: Reflejos de Salud" was a successful family-based randomized controlled trial designed to improve dietary behaviors and intake among U.S. Latino families, specifically fruit and vegetable intake. The novel intervention design merged a community health worker ("promotora") model with an entertainment-education…
Satellite Studies of Cirrus Clouds for Project Fire
NASA Technical Reports Server (NTRS)
1997-01-01
Examine global cloud climatologies for evidence of human caused changes in cloud cover and their effect on the Earth's heat budget through radiative processes. Quantify climatological changes in global cloud cover and estimate their effect on the Earth's heat budget. Improve our knowledge of global cloud cover and its changes through the merging of several satellite data sets.
Do Intelligent Robots Need Emotion?
Pessoa, Luiz
2017-11-01
What is the place of emotion in intelligent robots? Researchers have advocated the inclusion of some emotion-related components in the information-processing architecture of autonomous agents. It is argued here that emotion needs to be merged with all aspects of the architecture: cognitive-emotional integration should be a key design principle. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Kurubacak, Gulsun
2006-01-01
The main purpose of this article is to generate a functional model of evaluation through which EMSs can empower online communications characterized by imperative decision-making tasks. The evaluation process of EMSs must merge the multicultural strategies of the theory of Media Richness and the ethical concerns of the critical approach. Media…
ERIC Educational Resources Information Center
Brunelliere, Angele; Dufour, Sophie; Nguyen, Noel; Frauenfelder, Ulrich Hans
2009-01-01
This event-related potential (ERP) study examined the impact of phonological variation resulting from a vowel merger on phoneme perception. The perception of the /e/-/ɛ/ contrast, which does not exist in Southern French-speaking regions and which is in the process of merging in Northern French-speaking regions, was compared to the…
ERIC Educational Resources Information Center
Booi, Kwanele; Khuzwayo, Mamsie Ethel
2018-01-01
A qualitative case study was conducted at six purposively sampled universities out of a population of approximately 23 universities. This sampling strategy was based on selecting some universities that became Universities of Technology during the process of merging Higher Education Institutions (HEIs), while other universities kept their identity;…
Digital Storytelling: An Integrated Approach to Language Learning for the 21st Century Student
ERIC Educational Resources Information Center
Ribeiro, Sandra
2015-01-01
Societal changes have, throughout history, pushed the long-established boundaries of education across all grade levels. Technology and media merge with education in a continuous complex social process with human consequences and effects. We, teachers, can aspire to understand and interpret this volatile context that is being redesigned at the same…
ERIC Educational Resources Information Center
Wefer, Stephen H.; Anderson, O. Roger
2008-01-01
Bioinformatics, merging biological data with computer science, is increasingly incorporated into school curricula at all levels. This case study of 10 secondary school students highlights student individual differences (especially the way they processed information and integrated procedural and analytical thought) and summarizes a variety of…
a Line-Based 3d Roof Model Reconstruction Algorithm: Tin-Merging and Reshaping (tmr)
NASA Astrophysics Data System (ADS)
Rau, J.-Y.
2012-07-01
A three-dimensional building model is one of the major components of a cyber-city and is vital for the realization of 3D GIS applications. In the last decade, airborne laser scanning (ALS) data have been widely used for 3D building model reconstruction and object extraction. This paper instead presents a novel algorithm for automatic roof model reconstruction based on 3D roof structural lines: a line-based roof model reconstruction algorithm called TIN-Merging and Reshaping (TMR). The roof structural lines, such as edges, eaves and ridges, can be measured manually from an aerial stereo-pair, derived by feature line matching or inferred from ALS data. The originality of the TMR algorithm for 3D roof modelling is that it performs geometric analysis and topology reconstruction among those unstructured lines and then reshapes the roof type using elevation information from the 3D structural lines. For topology reconstruction, a line-constrained Delaunay triangulation algorithm is adopted, in which the input structural lines act as constraints and their vertices act as input points; the constructed TINs therefore do not cross the structural lines. Later, at the Merging stage, the shared edge between two TINs is checked to determine whether an original structural line exists there. If not, the two TINs are merged into a polygon. Iteratively checking and merging any two neighbouring TINs/polygons yields roof polygons on the horizontal plane. Finally, at the Reshaping stage, any two structural lines with fixed height are used to fit a planar function for the whole roof polygon. Where ALS data exist, the Reshaping stage can be simplified by fitting to the point cloud within the roof polygon. The proposed scheme reduces the complexity of 3D roof modelling and makes the modelling process easier. Five test datasets provided by ISPRS WG III/4, located in downtown Toronto, Canada and Vaihingen, Germany, are used for the experiments. The test sites cover high-rise buildings and residential areas with diverse roof types. For performance evaluation, the adopted roof structural lines were manually measured from the provided stereo-pairs. Experimental results indicate that a nearly 100% success rate for topology reconstruction was achieved, provided that the 3D structural lines could be enclosed as polygons. On the other hand, the success rate at the Reshaping stage depends on the complexity of the rooftop structure. Thus, visual inspection and semi-automatic adjustment of the roof type are suggested and implemented to complete the roof modelling. The results demonstrate that the proposed scheme is robust and reliable with a high degree of completeness, correctness, and quality, even when a group of connected buildings with multiple layers and mixed roof types is processed.
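The Merging stage admits a compact illustration. A sketch under the assumption that each polygon is represented by its set of boundary edges; this is our reading of the algorithm, not the authors' code:

```python
# Hypothetical sketch of the Merging stage: adjacent triangles are merged
# into one roof polygon whenever their shared edge is NOT an input
# structural line.

def merge_tins(triangles, structural_edges):
    """triangles: list of vertex-index triples; structural_edges: set of
    frozenset({i, j}) pairs that must survive as polygon boundaries."""
    # Start with each triangle as its own polygon (a set of edges).
    polygons = [{frozenset(e) for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0]))}
                for t in triangles]
    merged = True
    while merged:
        merged = False
        for i in range(len(polygons)):
            for j in range(i + 1, len(polygons)):
                shared = polygons[i] & polygons[j]
                # Merge only across edges that are not structural lines.
                if shared and not (shared & structural_edges):
                    polygons[i] = (polygons[i] | polygons[j]) - shared
                    del polygons[j]
                    merged = True
                    break
            if merged:
                break
    return polygons
```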
Reconciling mass functions with the star-forming main sequence via mergers
NASA Astrophysics Data System (ADS)
Steinhardt, Charles L.; Yurk, Dominic; Capak, Peter
2017-06-01
We combine star formation along the 'main sequence', quiescence, and clustering and merging to produce an empirical model for the evolution of individual galaxies. Main-sequence star formation alone would significantly steepen the stellar mass function towards low redshift, in sharp conflict with observation. However, a combination of star formation and merging produces a consistent result for a correct choice of the merger rate function. As a result, we are motivated to propose a model in which hierarchical merging is disconnected from environmentally independent star formation. This model can be tested via correlation functions and would produce new constraints on clustering and merging.
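A toy sketch of this kind of empirical track, with placeholder parametrizations for the main-sequence SFR and merger rate (neither is the paper's fitted form):

```python
# Illustrative only: evolve one galaxy's stellar mass by main-sequence star
# formation plus stochastic mergers. All numbers below are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def sfr(mass, z):
    # Placeholder main sequence: SFR ~ M^0.7 * (1+z)^2.5  [Msun/yr]
    return 1e-9 * mass**0.7 * (1.0 + z)**2.5

def evolve(mass, z_start=3.0, z_end=0.0, dt_gyr=0.1, merger_rate=0.1):
    """merger_rate: mergers per galaxy per Gyr (assumed constant here)."""
    z = z_start
    while z > z_end:
        mass += sfr(mass, z) * dt_gyr * 1e9        # in-situ star formation
        if rng.random() < merger_rate * dt_gyr:    # crude Poisson merger draw
            mass += 0.25 * mass                    # placeholder 4:1 minor merger
        z -= 0.1                                   # crude z(t) stepping
    return mass
```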
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merritt, Elizabeth C., E-mail: emerritt@lanl.gov; Adams, Colin S.; University of New Mexico, Albuquerque, New Mexico 87131
We report spatially resolved measurements of the oblique merging of two supersonic laboratory plasma jets. The jets are formed and launched by pulsed-power-driven railguns using injected argon, and have electron density ~10^14 cm^-3, electron temperature ≈1.4 eV, ionization fraction near unity, and velocity ≈40 km/s just prior to merging. The jet merging produces a few-cm-thick stagnation layer, as observed in both fast-framing camera images and multi-chord interferometer data, consistent with collisional shock formation [E. C. Merritt et al., Phys. Rev. Lett. 111, 085003 (2013)].
Effect of adaptive cruise control systems on mixed traffic flow near an on-ramp
NASA Astrophysics Data System (ADS)
Davis, L. C.
2007-06-01
Mixed traffic flow consisting of vehicles equipped with adaptive cruise control (ACC) and manually driven vehicles is analyzed using car-following simulations. Simulations of merging from an on-ramp onto a freeway reported in the literature have not thus far demonstrated a substantial positive impact of ACC. In this paper, cooperative merging for ACC vehicles is proposed to improve throughput and increase the distance traveled in a fixed time. In such a system, an ACC vehicle senses not only the preceding vehicle in the same lane but also the vehicle immediately in front in the other lane. Prior to reaching the merge region, the ACC vehicle adjusts its velocity to ensure that a safe gap for merging is obtained. If on-ramp demand is moderate, cooperative merging produces a significant improvement in throughput (20%) and increases the distance traveled in 600 s by up to 3.6 km for 50% ACC mixed flow relative to the flow of all-manual vehicles. For large demand, it is shown that autonomous merging with cooperation in the flow of all ACC vehicles leads to throughput limited only by the downstream capacity, which is determined by the speed limit and headway time.
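The control idea lends itself to a compact illustration. A minimal sketch, assuming a proportional speed controller and a constant-time-gap safety rule; this is our illustration, not the paper's model:

```python
# Sketch of cooperative merging: before the merge region, an ACC vehicle
# tracks both its own-lane leader and the nearest vehicle ahead in the other
# lane, slowing toward whichever constraint is tighter to open a safe gap.

def acc_speed(v, gap_same, gap_other, v_max=30.0, headway=1.1, k=0.5):
    """One control update. Gaps in m, speeds in m/s, headway in s.
    headway and gain k are assumed values, not the paper's parameters."""
    gap = min(gap_same, gap_other)       # cooperative: respect both leaders
    v_safe = gap / headway               # speed that keeps a safe time gap
    return v + k * (min(v_max, v_safe) - v)

# Example: 25 m to the same-lane leader, 12 m to the on-ramp vehicle ahead.
v_next = acc_speed(v=28.0, gap_same=25.0, gap_other=12.0)
```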
NASA Astrophysics Data System (ADS)
Ramakrishnan, Sowmya; Alvino, Christopher; Grady, Leo; Kiraly, Atilla
2011-03-01
We present a fully automatic system to extract 3D centerlines of ribs from thoracic CT scans. Our system determines positional information for the rib cage: the extracted rib centerlines, the spinal canal centerline, and the pairing and labeling of ribs. We show an application of this output to produce an enhanced visualization of the rib cage by the method of Kiraly et al., in which the ribs are digitally unfolded along their centerlines. The centerline extraction consists of three stages: (a) pre-trace processing for rib localization, (b) rib centerline tracing, and (c) post-trace processing to merge the rib traces. We then separate ribs from non-ribs and determine the anatomical rib labeling. Our novel centerline tracing technique uses the Random Walker algorithm to segment the structural boundary of the rib in successive 2D cross sections orthogonal to the longitudinal direction of the ribs. The rib centerline is then progressively traced along the rib using a 3D Kalman filter. The rib centerline extraction framework was evaluated on 149 CT datasets with varying slice spacing, dose, and a variety of reconstruction kernels, and the results of the evaluation are presented. The extraction takes approximately 20 seconds on a modern radiology workstation and performs robustly even in the presence of partial volume effects or rib pathologies such as bone metastases or fractures, making the system suitable for assisting clinicians in expediting routine rib reading for oncology and trauma applications.
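A minimal sketch of the tracing filter, assuming a constant-velocity state model and toy noise covariances; the Random Walker segmentation step that supplies the measurements is not reproduced here:

```python
import numpy as np

# Constant-velocity Kalman filter sketch for tracing a centerline point along
# a rib (illustrative only; covariances Q and R are assumed, not the paper's).

F = np.block([[np.eye(3), np.eye(3)],
              [np.zeros((3, 3)), np.eye(3)]])        # state transition (dt = 1)
H = np.hstack([np.eye(3), np.zeros((3, 3))])          # observe position only
Q = 0.01 * np.eye(6)   # process noise (assumed)
R = 0.50 * np.eye(3)   # measurement noise (assumed)

def kalman_step(x, P, z):
    """x: state [pos, vel] (6,), P: covariance (6,6), z: measured centroid of
    the current rib cross section (3,). Returns the filtered state, which is
    used to predict where to place the next orthogonal cross section."""
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new
```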
Anomalous Stars and Where to Find Them
NASA Astrophysics Data System (ADS)
Muna, Demitri; Huff, Eric
2018-01-01
The sky is now extensively mapped by imaging surveys in wavelengths that span the electromagnetic spectrum, ranging from Fermi and GALEX down to WISE, Planck, and radio surveys like FIRST and VLSS. Individual public catalogs now contain of order hundreds of millions of distinct sources. Recent progress in image analysis techniques makes possible great increases in the efficiency, sensitivity, and reliability of measurements that combine imaging data from multiple probes with heterogeneous properties. This is especially true for the identification of anomalous sources: traditional methods for finding 'outliers' typically rely on making hard cuts on noisy catalog properties, greatly restricting the potential discovery space. Cross-catalog matches confine investigation to objects that occur at signal-to-noise ratios sufficient to be independently detectable in a subset of all the available multi-wavelength coverage. The process of merging the latest analyses with existing data is severely hampered, however, by the fractured way in which these data are processed and stored, limitations of data access, the data volume involved, and the computation power required. This has left archive data far from fully exploited. Stellar anomalies present the best place to start: joint distributions of stellar colors and magnitudes have finer structures than those of extended sources, and modelling of point sources is computationally cheaper than for galaxies. We present a framework to solve the problem of applying new algorithms to old data while overcoming the limitations described above, in the search for as-yet-undiscovered anomalous sources.
Multimessenger Observations of Neutron Star Mergers: Probing the Physics of High-Density Matter
NASA Astrophysics Data System (ADS)
Radice, David
2016-09-01
Neutron star mergers are Nature's ultimate hadron colliders. They are extremely violent events resulting in gravitational-wave and electromagnetic emissions that could be detected at distances of several hundred megaparsecs. Imprinted in these signals are important clues about the properties of high-density matter, waiting to be harnessed by us. In this talk, I will review our current knowledge of neutron star mergers from the theoretical side. I will discuss the prospects of measuring neutron star radii and masses using gravitational-wave observations of the late inspiral of merging neutron stars. Then, I will show how multimessenger observations of the merger and post-merger evolution of merging neutron stars could be used to place further constraints on the nuclear equation of state at very high densities. Finally, I will discuss the possible role of neutron star mergers in the creation of the r-process nuclei in the Universe.
Galaxies Collide to Create Hot, Huge Galaxy
NASA Technical Reports Server (NTRS)
2009-01-01
This image of a pair of colliding galaxies called NGC 6240 shows them in a rare, short-lived phase of their evolution just before they merge into a single, larger galaxy. The prolonged, violent collision has drastically altered the appearance of both galaxies and created huge amounts of heat, turning NGC 6240 into an 'infrared luminous' active galaxy. A rich variety of active galaxies, with different shapes, luminosities and radiation profiles, exists. These galaxies may be related; astronomers have suspected that they may represent an evolutionary sequence. By catching different galaxies in different stages of merging, a story emerges as one type of active galaxy changes into another. NGC 6240 provides an important 'missing link' in this process. This image was created from combined data from the infrared array camera of NASA's Spitzer Space Telescope at 3.6 and 8.0 microns (red) and visible light from NASA's Hubble Space Telescope (green and blue).
Augmented microscopy: real-time overlay of bright-field and near-infrared fluorescence images
Watson, Jeffrey R.; Gainer, Christian F.; Martirosyan, Nikolay; Skoch, Jesse; Lemole, G. Michael; Anton, Rein; Romanowski, Marek
2015-01-01
Intraoperative applications of near-infrared (NIR) fluorescent contrast agents can be aided by instrumentation capable of merging the view of the surgical field with that of NIR fluorescence. We demonstrate augmented microscopy, an intraoperative imaging technique in which bright-field (real) and electronically processed NIR fluorescence (synthetic) images are merged within the optical path of a stereomicroscope. Under luminance of 100,000 lx, representing typical illumination of the surgical field, the augmented microscope detects a 189 nM concentration of indocyanine green and produces a composite of the real and synthetic images within the eyepiece of the microscope at 20 fps. The augmentation described here can be implemented as an add-on module to visualize NIR contrast agents, laser beams, or various types of electronic data within the surgical microscopes commonly used in neurosurgical, cerebrovascular, otolaryngological, and ophthalmic procedures. PMID:26440760
NASA Astrophysics Data System (ADS)
Kanai, Toshiaki; Guo, Wei; Tsubota, Makoto
2018-01-01
It is a common view that rotational motion in a superfluid can exist only in the presence of topological defects, i.e., quantized vortices. However, in our numerical studies on the merging of two concentric Bose-Einstein condensates with axial symmetry in two-dimensional space, we observe the emergence of a spiral dark soliton when one condensate has a nonzero initial angular momentum. This spiral dark soliton enables the transfer of angular momentum between the condensates and allows the merged condensate to rotate even in the absence of quantized vortices. Our examination of the flow field around the soliton strikingly reveals that its sharp endpoint can induce flow like a vortex point but with a fraction of a quantized circulation. This interesting nontopological "phase defect" may generate broad interest since rotational motion is essential in many quantum transport processes.
NASA Astrophysics Data System (ADS)
Tanabe, H.; Yamada, T.; Watanabe, T.; Gi, K.; Inomoto, M.; Imazawa, R.; Gryaznevich, M.; Scannell, R.; Conway, N. J.; Michael, C.; Crowley, B.; Fitzgerald, I.; Meakins, A.; Hawkes, N.; McClements, K. G.; Harrison, J.; O'Gorman, T.; Cheng, C. Z.; Ono, Y.; The MAST Team
2017-05-01
We present results of recent studies of merging/reconnection heating during central solenoid (CS)-free plasma startup in the Mega Amp Spherical Tokamak (MAST). During this process, ions are heated globally in the downstream region of an outflow jet, and electrons locally around the X-point produced by the magnetic field of two internal P3 coils and of two plasma rings formed around these coils, the final temperature being proportional to the reconnecting field energy. There is an effective confinement of the downstream thermal energy, due to a thick layer of reconnected flux. The characteristic structure is sustained for longer than an ion-electron energy relaxation time, and the energy exchange between ions and electrons contributes to the bulk electron heating in the downstream region. The peak electron temperature around the X-point increases with toroidal field, but the downstream electron and ion temperatures do not change.
Kibinge, Nelson; Ono, Naoaki; Horie, Masafumi; Sato, Tetsuo; Sugiura, Tadao; Altaf-Ul-Amin, Md; Saito, Akira; Kanaya, Shigehiko
2016-06-01
Conventionally, workflows examining transcription regulation networks from gene expression data involve distinct analytical steps. There is a need for pipelines that unify data mining and inference deduction into a singular framework to enhance interpretation and hypothesis generation. We propose a workflow that merges network construction with gene expression data mining, focusing on regulation processes in the context of transcription-factor-driven gene regulation. The pipeline implements pathway-based modularization of expression profiles into functional units to improve biological interpretation. The integrated workflow was implemented as a web application (TransReguloNet) with functions that enable pathway visualization and comparison of transcription factor activity between the sample conditions defined in the experimental design. The pipeline merges differential expression, network construction, pathway-based abstraction, clustering and visualization. The framework was applied in the analysis of actual expression datasets related to lung, breast and prostate cancer. Copyright © 2016 Elsevier Inc. All rights reserved.
Multispectral Photogrammetric Data Acquisition and Processing Forwall Paintings Studies
NASA Astrophysics Data System (ADS)
Pamart, A.; Guillon, O.; Faraci, S.; Gattet, E.; Genevois, M.; Vallet, J. M.; De Luca, L.
2017-02-01
In the field of wall paintings studies, different imaging techniques are commonly used for documentation and for decision making in terms of conservation and restoration. There are nowadays challenging issues in merging scientific imaging techniques in a multimodal context (i.e. multi-sensor, multi-dimensional, multi-spectral and multi-temporal approaches). For decades these CH objects have been widely documented with Technical Photography (TP), which gives valuable information for understanding or retrieving the painting layouts and history. More recently there is an increasing demand for digital photogrammetry in order to provide, as one of the possible outputs, an orthophotomosaic, which offers a possibility for metrical quantification of conservators'/restorers' observations and action planning. This paper presents some ongoing experimentation of the LabCom MAP-CICRP relying on the assumption that these techniques can be merged through a common pipeline to share their respective benefits and create a more complete documentation.
Hall, Jennifer L; Ryan, John J; Bray, Bruce E; Brown, Candice; Lanfear, David; Newby, L Kristin; Relling, Mary V; Risch, Neil J; Roden, Dan M; Shaw, Stanley Y; Tcheng, James E; Tenenbaum, Jessica; Wang, Thomas N; Weintraub, William S
2016-04-01
The process of scientific discovery is rapidly evolving. The funding climate has influenced a favorable shift in scientific discovery toward the use of existing resources such as the electronic health record. The electronic health record enables long-term outlooks on human health and disease, in conjunction with multidimensional phenotypes that include laboratory data, images, vital signs, and other clinical information. Initial work has confirmed the utility of the electronic health record for understanding mechanisms and patterns of variability in disease susceptibility, disease evolution, and drug responses. The addition of biobanks and genomic data to the information contained in the electronic health record has been demonstrated. The purpose of this statement is to discuss the current challenges in and the potential for merging electronic health record data and genomics for cardiovascular research. © 2016 American Heart Association, Inc.
Simulation of Planetary Formation using Python
NASA Astrophysics Data System (ADS)
Bufkin, James; Bixler, David
2015-03-01
A program to simulate planetary formation was developed in the Python programming language. The program consists of randomly placed and massed bodies surrounding a central massive object in order to approximate a protoplanetary disk. The orbits of these bodies are time-stepped, with accelerations, velocities and new positions calculated in each step. Bodies are allowed to merge if their disks intersect. Numerous parameters (orbital distance, masses, number of particles, etc.) were varied in order to optimize the program. The program uses an iterative difference-equation approach to solve the equations of motion using a kinematic model. Conservation of energy and angular momentum is not specifically enforced, but conservation of momentum is enforced during the merging of bodies. The initial program was created in Visual Python (VPython), but the current intention is to allow for a higher particle count and faster processing by utilizing PyOpenCL and PyOpenGL. Current results and progress will be reported.
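A minimal sketch of the merge rule, assuming spherical bodies of constant density; mass and linear momentum are conserved, as the abstract specifies, while the code itself is our illustration rather than the authors':

```python
import numpy as np

# When two bodies pass within the sum of their radii, replace them with one
# body that conserves mass and linear momentum (energy is not conserved).

def try_merge(bodies):
    """bodies: list of dicts with 'm', 'r' (radius), 'pos', 'vel' arrays."""
    i = 0
    while i < len(bodies):
        j = i + 1
        while j < len(bodies):
            a, b = bodies[i], bodies[j]
            if np.linalg.norm(a["pos"] - b["pos"]) < a["r"] + b["r"]:
                m = a["m"] + b["m"]
                a["pos"] = (a["m"] * a["pos"] + b["m"] * b["pos"]) / m
                a["vel"] = (a["m"] * a["vel"] + b["m"] * b["vel"]) / m  # momentum conserved
                a["r"] = (a["r"]**3 + b["r"]**3) ** (1.0 / 3.0)         # constant density
                a["m"] = m
                del bodies[j]
            else:
                j += 1
        i += 1
    return bodies
```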
NASA Astrophysics Data System (ADS)
Schlueter, S.; Sheppard, A.; Wildenschild, D.
2013-12-01
Imaging of fluid interfaces in three-dimensional porous media via x-ray microtomography is an efficient means to test thermodynamically derived predictions on the relationship between capillary pressure, fluid saturation and specific interfacial area (Pc-Sw-Anw) in partially saturated porous media. Various experimental studies exist to date that validate the uniqueness of the Pc-Sw-Anw relationship under static conditions, and with current technological progress, direct imaging of moving interfaces under dynamic conditions is also becoming available. Image acquisition and subsequent image processing currently involve many steps, each prone to operator bias, such as merging different scans of the same sample obtained at different beam energies into a single image, or generating isosurfaces from the segmented multiphase image on which the interface properties are usually calculated. We demonstrate that with recent advancements in (i) image enhancement methods, (ii) multiphase segmentation methods and (iii) methods of structural analysis, we can considerably decrease the time and cost of image acquisition and the uncertainty associated with the measurement of interfacial properties. In particular, we highlight three notorious problems in multiphase image processing and provide efficient solutions for each: (i) Due to noise, partial volume effects, and imbalanced volume fractions, automated histogram-based threshold detection methods frequently fail. However, these impairments can be mitigated with modern denoising methods, special treatment of gray value edges and adaptive histogram equalization, such that most of the standard methods for threshold detection (Otsu, fuzzy c-means, minimum error, maximum entropy) coincide at the same set of values. (ii) Partial volume effects due to blur may produce apparent water films around solid surfaces that alter the specific fluid-fluid interfacial area (Anw) considerably. In a synthetic test image, some local segmentation methods such as Bayesian Markov random field, converging active contours and watershed segmentation reduced the error in Anw associated with apparent water films from 21% to 6-11%. (iii) The generation of isosurfaces from the segmented data usually requires a lot of postprocessing in order to smooth the surface and check for consistency errors. This can be avoided by calculating specific interfacial areas directly on the segmented voxel image by means of Minkowski functionals, which is highly efficient and less error-prone.
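As a small illustration of point (i), a sketch using scikit-image, assuming non-local-means denoising before Otsu thresholding; the study compares several denoising methods and threshold detectors, and this is just one plausible pairing:

```python
import numpy as np
from skimage.restoration import denoise_nl_means
from skimage.filters import threshold_otsu

def segment(slice_2d):
    """Denoise a grayscale tomography slice, then apply Otsu thresholding.
    With effective denoising, standard detectors (Otsu, fuzzy c-means,
    minimum error, maximum entropy) tend to agree on similar thresholds."""
    img = slice_2d.astype(np.float64)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)   # normalize to [0, 1]
    smooth = denoise_nl_means(img, h=0.05)            # non-local means denoising
    return smooth > threshold_otsu(smooth)            # binary phase mask
```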
Fast data transmission from serial data acquisition for the GEM detector system
NASA Astrophysics Data System (ADS)
Kolasinski, Piotr; Pozniak, Krzysztof T.; Czarski, Tomasz; Byszuk, Adrian; Chernyshova, Maryna; Kasprowicz, Grzegorz; Krawczyk, Rafal D.; Wojenski, Andrzej; Zabolotny, Wojciech
2015-09-01
This article proposes a new method of storing data and transferring it to a PC in the X-ray GEM detector system. The whole process is performed by FPGA chips (Spartan-6 series from Xilinx). Compared to previous methods, the new approach allows much more data to be stored in the system. A new, improved implementation of the communication algorithm significantly increases the transfer rate between the system and the PC. On the PC, the data are merged and processed by MATLAB. The structure of the firmware implemented in the FPGAs is described.
Fundamental Problems of Hybrid CMOS/Nanodevice Circuits
2010-12-14
Development of an area-distributed CMOS/nanodevice interface: We have carried out the first design of CMOS chips for CMOS/nanodevice integration, and had them fabricated in IBM's 180-nm 7RF process (via the MOSIS, Inc. silicon foundry). Each 4×4 mm² chip assembly of the design consists of 4 component chips, merged together for processing convenience. Each 2×2 mm² component chip features two interface arrays, with 10×10 vias each, with the chip's MOSFETs
Merging history of three bimodal clusters
NASA Astrophysics Data System (ADS)
Maurogordato, S.; Sauvageot, J. L.; Bourdin, H.; Cappi, A.; Benoist, C.; Ferrari, C.; Mars, G.; Houairi, K.
2011-01-01
We present a combined X-ray and optical analysis of three bimodal galaxy clusters selected as merging candidates at z ~ 0.1. These targets are part of MUSIC (MUlti-Wavelength Sample of Interacting Clusters), which is a general project designed to study the physics of merging clusters by means of multi-wavelength observations. Observations include spectro-imaging with the XMM-Newton EPIC camera, multi-object spectroscopy (260 new redshifts), and wide-field imaging at the ESO 3.6 m and 2.2 m telescopes. We build a global picture of these clusters using X-ray luminosity and temperature maps together with galaxy density and velocity distributions. Idealized numerical simulations were used to constrain the merging scenario for each system. We show that A2933 is very likely an equal-mass advanced pre-merger ~200 Myr before the core collapse, while A2440 and A2384 are post-merger systems (~450 Myr and ~1.5 Gyr after core collapse, respectively). In the case of A2384, we detect a spectacular filament of galaxies and gas spreading over more than 1 h^-1 Mpc, which we infer to have been stripped during the previous collision. The analysis of the MUSIC sample allows us to outline some general properties of merging clusters: a strong luminosity segregation of galaxies in recent post-mergers; the existence of preferential axes - corresponding to the merging directions - along which the BCGs and structures on various scales are aligned; the concomitance, in most major merger cases, of secondary merging or accretion events, with groups infalling onto the main cluster, and in some cases the evidence of previous merging episodes in one of the main components. These results are in good agreement with the hierarchical scenario of structure formation, in which clusters are expected to form by successive merging events, and matter is accreted along large-scale filaments. Based on data obtained with the European Southern Observatory, Chile (programs 072.A-0595, 075.A-0264, and 079.A-0425). Tables 5-7 are only available in electronic form at the CDS via anonymous ftp to cdsarc.u-strasbg.fr (130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/525/A79
Rapid production of optimal-quality reduced-resolution representations of very large databases
Sigeti, David E.; Duchaineau, Mark; Miller, Mark C.; Wolinsky, Murray; Aldrich, Charles; Mineev-Weinstein, Mark B.
2001-01-01
View space representation data are produced in real time from a world space database representing terrain features. The world space database is first preprocessed: a database is formed having one element for each spatial region corresponding to a finest selected level of detail. A multiresolution database is then formed by merging elements, and a strict error metric, independent of the parameters defining the view space, is computed for each element at each level of detail. The multiresolution database and associated strict error metrics are then processed in real time to produce real-time frame representations. View parameters for a view volume, comprising a view location and field of view, are selected, and the strict error metric is converted with the view parameters to a view-dependent error metric. Elements with the coarsest resolution are chosen for an initial representation. First elements from the initial representation data set are selected that are at least partially within the view volume and placed in a split queue ordered by the value of the view-dependent error metric. It is then determined whether the number of first elements in the queue meets or exceeds a predetermined number of elements, or whether the largest error metric is less than or equal to a selected upper error metric bound. If not, the element at the head of the queue is force split and the resulting elements are inserted into the queue. Force splitting continues until the determination is positive, forming a first multiresolution set of elements. The first multiresolution set of elements is then output as reduced-resolution view space data representing the terrain features.
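Read as an algorithm, the split-queue loop can be sketched as follows; this is our reading of the abstract, and view_error and split are assumed callbacks:

```python
import heapq

# Keep a queue of elements ordered by view-dependent error, and split the
# worst element until the element budget is reached or the error bound is met.

def refine(initial_elements, view_error, split, max_elements, error_bound):
    """view_error(e) -> float; split(e) -> iterable of child elements."""
    # heapq is a min-heap, so store negative error to pop the worst first;
    # the counter breaks ties so elements themselves are never compared.
    heap = [(-view_error(e), i, e) for i, e in enumerate(initial_elements)]
    heapq.heapify(heap)
    counter = len(heap)
    while heap and len(heap) < max_elements and -heap[0][0] > error_bound:
        _, _, worst = heapq.heappop(heap)
        for child in split(worst):
            counter += 1
            heapq.heappush(heap, (-view_error(child), counter, child))
    return [e for _, _, e in heap]
```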
Okumura, Yasuo; Johnson, Susan B; Bunch, T Jared; Henz, Benhur D; O'Brien, Christine J; Packer, Douglas L
2008-06-01
While catheter tip/tissue contact has been shown to be an important determinant of ablative lesions in in vitro studies, the impact of contact on the outcomes of mapping and ablation in the intact heart has not been evaluated. Twelve dogs underwent atrial ablation guided by the Sensei™ robotic catheter remote control system. After intracardiac ultrasound (ICE) validation of the contact force measured by an in-line mechanical sensor, the relationship between contact force and individual lesion formation was established during irrigated-tip ablation (flow 17 mL/sec) at 15 watts for 30 seconds. Minimal contact by ICE correlated with a force of 4.7 +/- 5.8 grams, consistent contact with 9.9 +/- 8.6 grams, and tissue tenting produced 25.0 +/- 14.0 grams. Conversely, catheter tip/tissue contact by ICE was predicted by contact force. A contact force of 10-20 and > or =20 grams generated full-thickness, larger-volume ablative lesions than those created with <10 grams (98 +/- 69 and 89 +/- 70 mm(3) vs 40 +/- 42 mm(3), P < 0.05). Moderate (10 grams) and marked (15-20 grams) contact application produced 1.5x greater electroanatomic map volumes than were seen with minimal contact (5 grams) (26 +/- 3 cm(3) vs 33 +/- 6, 39 +/- 3 cm(3), P < 0.05). The electroanatomic map/CT merge process was also more distorted when mapping was generated at moderate to marked contact force. This study shows that mapping and ablation using a robotic sheath guidance system are critically dependent on the generated force. These findings suggest that ablative lesion size is optimized by the application of 10-20 grams of contact force, although mapping requires lower-force application to avoid image distortions.
Disparities in the Presentation and Management of Cutaneous Melanoma That Required Admission.
Al-Qurayshi, Zaid; Srivastav, Sudesh; Wang, Alun; Boh, Erin; Hamner, John; Hassan, Mohamed; Kandil, Emad
2018-06-18
In this study, we aimed to examine the association of demographic and socioeconomic factors with cutaneous melanoma that required admission. A cross-sectional study was conducted utilizing the Nationwide Inpatient Sample database, 2003-2009, merged with County Health Rankings data. A total of 2,765 discharge records were included. Men were more likely to have melanoma in the head, neck, and trunk regions (p < 0.001), while melanoma of the extremities was more common in women (p < 0.001). Males had a higher risk of lymph node metastasis on presentation (OR 1.54, 95% CI [1.27-1.89]). Blacks and Hispanics were more likely to present with melanoma of the extremities. Patients with low annual income were more likely to be treated by low-volume surgeons and in hospitals located in high-risk communities (p < 0.05 each). Patients with Medicaid coverage were twice as likely to present with distant metastasis and were more likely to be managed by low-volume surgeons (p < 0.05 each). The presentation and outcomes of cutaneous melanoma have a distinct pattern of distribution based on patients' characteristics. © 2018 S. Karger AG, Basel.
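The county-level linkage described here is, in database terms, a left join. A sketch using pandas, with entirely hypothetical file and column names:

```python
import pandas as pd

# Illustrative sketch of the database linkage; file names and column names
# ("hospital_county_fips", "fips") are hypothetical, not the NIS/CHR schemas.

nis = pd.read_csv("nis_2003_2009.csv")            # hypothetical NIS extract
chr_ = pd.read_csv("county_health_rankings.csv")  # hypothetical CHR extract

merged = nis.merge(
    chr_,
    left_on="hospital_county_fips",  # assumed county identifier in NIS
    right_on="fips",                 # assumed county identifier in CHR
    how="left",
)
# Downstream analyses (e.g., regression models for metastasis on admission)
# can then draw predictors from both sources in one table.
```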
High-resolution mobile optical 3D scanner with color mapping
NASA Astrophysics Data System (ADS)
Ramm, Roland; Bräuer-Burchardt, Christian; Kühmstedt, Peter; Notni, Gunther
2017-07-01
A high-resolution mobile handheld scanning device suitable for 3D data acquisition and analysis in forensic investigations, rapid prototyping, design, quality management, and archaeology, with a measurement volume of approximately 325 mm x 200 mm x 100 mm and a lateral object resolution of 170 µm, developed at our institute, is introduced. The scanner's weight is 4.4 kg with an optional color DSLR camera. The PC for measurement control and point calculation is included inside the housing. Power is supplied by rechargeable batteries, giving an operation time between 30 and 60 minutes. The object distance is between 400 and 500 mm, and the scan time for one 3D shot may vary between 0.1 and 0.5 seconds. The complete 3D result is obtained a few seconds after starting the scan. For higher-quality 3D and color images the scanner can be mounted on a tripod. Measurement objects larger than the measurement volume must be acquired in parts; the resulting datasets are then merged using a suitable software module. The scanner has been successfully used in various applications.
Merging Bottom-Up with Top-Down: Continuous Lamellar Networks and Block Copolymer Lithography
NASA Astrophysics Data System (ADS)
Campbell, Ian Patrick
Block copolymer lithography is an emerging nanopatterning technology with capabilities that may complement and eventually replace those provided by existing optical lithography techniques. This bottom-up process relies on the parallel self-assembly of macromolecules composed of covalently linked, chemically distinct blocks to generate periodic nanostructures. Among the myriad potential morphologies, lamellar structures formed by diblock copolymers with symmetric volume fractions have attracted the most interest as a patterning tool. When confined to thin films and directed to assemble with interfaces perpendicular to the substrate, two-dimensional domains are formed between the free surface and the substrate, and selective removal of a single block creates a nanostructured polymeric template. The substrate exposed between the polymeric features can subsequently be modified through standard top-down microfabrication processes to generate novel nanostructured materials. Despite tremendous progress in our understanding of block copolymer self-assembly, continuous two-dimensional materials have not yet been fabricated via this robust technique, which may enable nanostructured material combinations that cannot be fabricated through bottom-up methods. This thesis aims to study the effects of block copolymer composition and processing on the lamellar network morphology of polystyrene-block-poly(methyl methacrylate) (PS-b-PMMA) and utilize this knowledge to fabricate continuous two-dimensional materials through top-down methods. First, block copolymer composition was varied through homopolymer blending to explore the physical phenomena surrounding lamellar network continuity. After establishing a framework for tuning the continuity, the effects of various processing parameters were explored to engineer the network connectivity via defect annihilation processes. Precisely controlling the connectivity and continuity of lamellar networks through defect engineering and optimizing the block copolymer lithography process thus enabled the top-down fabrication of continuous two-dimensional gold networks with nanoscale properties. The lamellar structure of these networks was found to confer unique mechanical properties on the nanowire networks and suggests that materials templated via this method may be excellent candidates for integration into stretchable and flexible devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boche, H., E-mail: boche@tum.de; Janßen, G., E-mail: gisbert.janssen@tum.de
We consider one-way quantum state merging and entanglement distillation under compound and arbitrarily varying source models. Regarding quantum compound sources, where the source is memoryless but the source state is an unknown member of a certain set of density matrices, we continue investigations begun in the work of Bjelaković et al. ["Universal quantum state merging," J. Math. Phys. 54, 032204 (2013)] and determine the classical as well as entanglement cost of state merging. We further investigate quantum state merging and entanglement distillation protocols for arbitrarily varying quantum sources (AVQS). In the AVQS model, the source state is assumed to vary in an arbitrary manner for each source output due to environmental fluctuations or adversarial manipulation. We determine the one-way entanglement distillation capacity for AVQS, where we invoke the famous robustification and elimination techniques introduced by Ahlswede. Regarding quantum state merging for AVQS, we show by example that the robustification- and elimination-based approach generally leads to suboptimal entanglement as well as classical communication rates.
Overview and analysis of the 2016 Gold Run in the Booster and AGS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeno, K.
2016-09-16
Run 16 differed from preceding Au runs in that during most of it a 12:6:2 merge was employed in the AGS instead of an 8:4:2 merge. This was done to provide higher bunch intensities for RHIC. Since the approach to providing higher bunch intensities is, and has been, to merge more Booster bunches of the same intensity into one final bunch, detailing the longitudinal aspects of this setup seems quite relevant. So, aside from providing an overview of the Au portion of Run 16, this note also contains a series of emittance measurements in the Booster and AGS. Comparisons of these to similar measurements in previous runs are also made in hopes of gaining a better understanding of what factors contribute to the emittance of a bunch at AGS extraction. The note also tries to provide some context in which to understand the various merge schemes, and describes a potential 8-to-1 type merge.
Chavez, P.S.; Sides, S.C.; Anderson, J.A.
1991-01-01
The merging of multisensor image data is becoming a widely used procedure because of the complementary nature of various data sets. Ideally, the method used to merge data sets with high-spatial and high-spectral resolution should not distort the spectral characteristics of the high-spectral resolution data. This paper compares the results of three different methods used to merge the information contents of the Landsat Thematic Mapper (TM) and Satellite Pour l'Observation de la Terre (SPOT) panchromatic data. The comparison is based on spectral characteristics and is made using statistical, visual, and graphical analyses of the results. The three methods used to merge the information contents of the Landsat TM and SPOT panchromatic data were the Hue-Intensity-Saturation (HIS), Principal Component Analysis (PCA), and High-Pass Filter (HPF) procedures. The HIS method distorted the spectral characteristics of the data the most. The HPF method distorted the spectral characteristics the least; the distortions were minimal and difficult to detect. -Authors
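A simplified sketch of the HPF merge, assuming the multispectral bands are already resampled to the panchromatic grid; the filter size and implementation are ours, not the paper's:

```python
import numpy as np

# High-Pass Filter (HPF) merge: extract high-frequency detail from the
# panchromatic band and add it to each multispectral band, leaving the
# low-frequency spectral content of the multispectral data intact.

def box_blur(img, k=5):
    """Simple k x k mean filter (naive implementation, edge-padded)."""
    out = np.empty_like(img, dtype=np.float64)
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def hpf_merge(pan, ms_bands, k=5):
    """pan: 2D array; ms_bands: list of 2D arrays already resampled to the
    pan grid. Returns sharpened bands: band + (pan - lowpass(pan))."""
    detail = pan.astype(np.float64) - box_blur(pan, k)   # high-pass component
    return [band.astype(np.float64) + detail for band in ms_bands]
```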
Merging Dietary Assessment with the Adolescent Lifestyle
Schap, TusaRebecca E; Zhu, Fengqing M; Delp, Edward J; Boushey, Carol J
2013-01-01
The use of image-based dietary assessment methods shows promise for improving dietary self-report among children. The Technology Assisted Dietary Assessment (TADA) food record application is a self-administered food record specifically designed to address the burden and human error associated with conventional methods of dietary assessment. Users take images of foods and beverages at all eating occasions using a mobile telephone or mobile device with an integrated camera (e.g., Apple iPhone, Google Nexus One, Apple iPod Touch). Once the images are taken, they are transferred to a back-end server for automated analysis. The first step in this process, image analysis (i.e., segmentation, feature extraction, and classification), allows for automated food identification. Portion size estimation is also automated, via segmentation and geometric shape template modeling. The results of the automated food identification and volume estimation can be indexed with the Food and Nutrient Database for Dietary Studies (FNDDS) to provide a detailed diet analysis for use in epidemiologic or intervention studies. Data collected during controlled feeding studies in a camp-like setting have allowed for formative evaluation and validation of the TADA food record application. This review summarizes the system design and the evidence-based development of image-based methods for dietary assessment among children. PMID:23489518
Lootens, Didier; Bentz, Dale P.
2016-01-01
Previous research has demonstrated a linear relationship between compressive strength (mortar cubes and concrete cylinders) and cumulative heat release normalized per unit volume of (mixing) water for a wide variety of cement-based mixtures at ages of 1 d and beyond. This paper utilizes concurrent ultrasonic reflection and calorimetry measurements to further explore this relationship from the time of specimen casting to 3 d. The ultrasonic measurements permit a continuous evaluation of thickening, setting, and strength development during this time period for comparison with the ongoing chemical reactions, as characterized by isothermal calorimetry measurements. Initially, the ultrasonic strength-heat release relation depends strongly on water-to-cement ratio, as well as admixture additions, with no universal behavior. Still, each individual strength-heat release curve is consistent with a percolation-based view of the cement setting process. However, beyond about 8 h for the systems investigated in the present study, the various strength-heat release curves merge towards a single relationship that broadly characterizes the development of strength as a function of heat released (fractional space filled), demonstrating that mortar and/or concrete strength at early ages can be effectively monitored using either ultrasonic or calorimetry measurements on small paste or mortar specimens. PMID:27046956
Photonic Low Cost Micro-Sensor for in-Line Wear Particle Detection in Flowing Lube Oils.
Mabe, Jon; Zubia, Joseba; Gorritxategi, Eneko
2017-03-14
The presence of microscopic particles in suspension in industrial fluids is often an early warning of latent or imminent failures in the equipment or processes where they are being used. This manuscript describes work undertaken to integrate different photonic principles with a micro-mechanical fluidic structure and an embedded processor to develop a fully autonomous wear debris sensor for in-line monitoring of industrial fluids. Lens-less microscopy, stroboscopic illumination, a CMOS imager and embedded machine vision technologies have been merged to develop a sensor solution that is able to detect and quantify the number and size of micrometric particles suspended in a continuous flow of a fluid. A laboratory test-bench was arranged for setting up the configuration of the optical components targeting a static oil sample, and a sensor prototype was then developed for migrating the measurement principles to real conditions in terms of operating pressure and flow rate of the oil. Imaging performance is quantified using micro-calibrated samples, as well as by measuring real used lubricant oils. Sampling a large fluid volume with a decent 2D spatial resolution, this photonic micro-sensor offers a powerful tool at very low cost and compact size for in-line wear debris monitoring.
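The detection step reduces to thresholding and connected-component analysis. A sketch using SciPy, with assumed threshold and pixel-scale values; this is our illustration, not the sensor's firmware:

```python
import numpy as np
from scipy import ndimage

# Threshold a back-lit frame and count/size the connected dark blobs.

def count_particles(frame, thresh=0.5, um_per_px=5.0):
    """frame: 2D grayscale array in [0, 1]; particles appear dark against the
    illuminated background. thresh and um_per_px are assumed calibration
    values. Returns (count, equivalent-circle diameters in micrometers)."""
    mask = frame < thresh                        # dark blobs = candidate debris
    labels, n = ndimage.label(mask)              # connected-component labeling
    areas = ndimage.sum_labels(mask, labels, index=np.arange(1, n + 1))
    diam = 2.0 * np.sqrt(areas / np.pi) * um_per_px
    return n, diam
```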
Photonic Low Cost Micro-Sensor for in-Line Wear Particle Detection in Flowing Lube Oils
Mabe, Jon; Zubia, Joseba; Gorritxategi, Eneko
2017-01-01
The presence of microscopic particles in suspension in industrial fluids is often an early warning of latent or imminent failures in the equipment or processes where they are being used. This manuscript describes work undertaken to integrate different photonic principles with a micro-mechanical fluidic structure and an embedded processor to develop a fully autonomous wear debris sensor for in-line monitoring of industrial fluids. Lens-less microscopy, stroboscopic illumination, a CMOS imager and embedded machine vision technologies have been merged to develop a sensor solution that is able to detect and quantify the number and size of micrometric particles suspended in a continuous flow of a fluid. A laboratory test-bench was arranged for setting up the configuration of the optical components targeting a static oil sample, and a sensor prototype was then developed for migrating the measurement principles to real conditions in terms of operating pressure and flow rate of the oil. Imaging performance is quantified using micro-calibrated samples, as well as by measuring real used lubricant oils. Sampling a large fluid volume with a decent 2D spatial resolution, this photonic micro-sensor offers a powerful tool at very low cost and compact size for in-line wear debris monitoring. PMID:28335436
Lootens, Didier; Bentz, Dale P
2016-04-01
Previous research has demonstrated a linear relationship between compressive strength (mortar cubes and concrete cylinders) and cumulative heat release normalized per unit volume of (mixing) water for a wide variety of cement-based mixtures at ages of 1 d and beyond. This paper utilizes concurrent ultrasonic reflection and calorimetry measurements to further explore this relationship from the time of specimen casting to 3 d. The ultrasonic measurements permit a continuous evaluation of thickening, setting, and strength development during this time period for comparison with the ongoing chemical reactions, as characterized by isothermal calorimetry measurements. Initially, the ultrasonic strength-heat release relation depends strongly on water-to-cement ratio, as well as admixture additions, with no universal behavior. Still, each individual strength-heat release curve is consistent with a percolation-based view of the cement setting process. However, beyond about 8 h for the systems investigated in the present study, the various strength-heat release curves merge towards a single relationship that broadly characterizes the development of strength as a function of heat released (fractional space filled), demonstrating that mortar and/or concrete strength at early ages can be effectively monitored using either ultrasonic or calorimetry measurements on small paste or mortar specimens.
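The reported relationship is a simple linear regression of strength on heat release normalized per unit volume of mixing water. A sketch with made-up numbers, not the paper's data:

```python
import numpy as np

# Toy illustration of the linear strength vs. normalized-heat relation; the
# values below are fabricated for the demo, not measured results.

heat_per_water = np.array([100.0, 150.0, 200.0, 260.0])  # J per mL mixing water (assumed)
strength = np.array([8.0, 14.0, 20.0, 27.5])             # MPa (assumed)

slope, intercept = np.polyfit(heat_per_water, strength, 1)
predicted = slope * 230.0 + intercept   # estimate strength from a calorimetry reading
```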
NASA Astrophysics Data System (ADS)
Chen, Xin; Xing, Pei; Luo, Yong; Zhao, Zongci; Nie, Suping; Huang, Jianbin; Wang, Shaowu; Tian, Qinhua
2015-04-01
A new dataset of annual mean surface temperature over North America covering the last 500 years has been constructed by applying an optimal interpolation (OI) algorithm. In total, 149 series were screened from the International Tree Ring Data Bank (ITRDB), including 69 maximum latewood density (MXD) and 80 tree-ring width (TRW) chronologies. The simulated annual mean surface temperature derives from the past1000 experiment of the Community Climate System Model version 4 (CCSM4). Unlike existing research that applies data assimilation approaches to general circulation model (GCM) simulations, the errors of both the climate model simulation and the tree-ring reconstruction were considered, with a view to combining the two parts in an optimal way. Variance matching (VM) was employed to calibrate the tree-ring chronologies against CRUTEM4v, and the corresponding errors were estimated through a leave-one-out process. The background error covariance matrix was estimated statistically from samples of simulation results in a running 30-year window; in practice it was calculated locally within the scanning range (2000 km in this research), so the merging proceeded with a time-varying local gain matrix. The merging method (MM) was tested in two kinds of experiments, and the results indicated that the standard deviation of the errors can be reduced to about 0.3 degrees Celsius below that of the tree-ring reconstructions and 0.5 degrees Celsius below that of the model simulation. Clear decadal variability can be identified in the MM results, including the evident cooling (0.10 degrees per decade) in the 1940s-60s, where the model simulation instead exhibits a weak warming trend (0.05 degrees per decade). The MM results revealed a compromise spatial pattern of the linear trend of surface temperature during a typical period (1601-1800 AD) of the Little Ice Age, which basically accords with the phase transitions of the Pacific decadal oscillation (PDO) and the Atlantic multidecadal oscillation (AMO). Empirical orthogonal function and power spectrum analyses demonstrated that, compared with the pure CCSM4 simulations, the MM significantly improved the decadal variability of the gridded temperature over North America by merging in the temperature-sensitive tree-ring records.
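The OI update at the heart of such merging can be written compactly. A toy sketch, with assumed covariances and observation operator:

```python
import numpy as np

# Minimal optimal-interpolation (OI) update:
#   analysis = background + K (obs - H background),  K = B H^T (H B H^T + R)^-1
# B, R and H below are toy inputs, not the study's estimated matrices.

def oi_update(xb, B, y, H, R):
    """xb: background field (n,), B: background error covariance (n,n),
    y: proxy observations (m,), H: observation operator (m,n),
    R: observation error covariance (m,m)."""
    S = H @ B @ H.T + R
    K = B @ H.T @ np.linalg.inv(S)       # gain matrix (time-varying in the paper)
    return xb + K @ (y - H @ xb)         # merged (analysis) temperature field

# Toy example: 3 grid cells, 2 tree-ring sites observing cells 0 and 2.
xb = np.array([0.2, 0.1, -0.3])
B = 0.25 * np.eye(3)
H = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
R = 0.09 * np.eye(2)
xa = oi_update(xb, B, np.array([0.5, -0.1]), H, R)
```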
Provenance in Data Interoperability for Multi-Sensor Intercomparison
NASA Technical Reports Server (NTRS)
Lynnes, Chris; Leptoukh, Greg; Berrick, Steve; Shen, Suhung; Prados, Ana; Fox, Peter; Yang, Wenli; Min, Min; Holloway, Dan; Enloe, Yonsook
2008-01-01
As our inventory of Earth science data sets grows, the ability to compare, merge and fuse multiple datasets grows in importance. This requires a deeper data interoperability than we have now. Efforts such as the Open Geospatial Consortium and OPeNDAP (Open-source Project for a Network Data Access Protocol) have broken down format barriers to interoperability; the next challenge is the semantic aspects of the data. Consider the issues when satellite data are merged, cross-calibrated, validated, inter-compared and fused. We must match up data sets that are related, yet different in significant ways: the phenomenon being measured, measurement technique, location in space-time, or quality of the measurements. If subtle distinctions between similar measurements are not clear to the user, results can be meaningless or lead to an incorrect interpretation of the data. Most of these distinctions trace to how the data came to be: sensors, processing and quality assessment. For example, monthly averages of satellite-based aerosol measurements often show significant discrepancies, which might be due to differences in spatio-temporal aggregation, sampling issues, sensor biases, algorithm differences or calibration issues. Provenance information must be captured in a semantic framework that allows data inter-use tools to incorporate it and aid in the interpretation of comparison or merged products. Semantic web technology allows us to encode our knowledge of measurement characteristics, phenomena measured, space-time representation, and data quality attributes in a well-structured, machine-readable ontology and rulesets. An analysis tool can use this knowledge to show users the provenance-related distinctions between two variables, advising on options for further data processing and analysis. An additional problem for workflows distributed across heterogeneous systems is the retrieval and transport of provenance. Provenance may be either embedded within the data payload or transmitted from server to client by an out-of-band mechanism. The out-of-band mechanism is more flexible in the richness of provenance information that can be accommodated, but it relies on a persistent framework and can be difficult for legacy clients to use. We are prototyping the embedded model, incorporating provenance within metadata objects in the data payload. Thus, it always remains with the data. The downside is a limit to the size of provenance metadata that we can include, an issue that will eventually need resolution to encompass the richness of provenance information required for data intercomparison and merging.
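As an illustration of the embedded model, provenance can ride along as a serialized attribute of the data variable. A sketch using xarray, with hypothetical attribute contents:

```python
import json
import xarray as xr

# Sketch of "embedded provenance": carry a compact provenance record inside
# the data payload's metadata so it always travels with the data. The
# attribute names and values here are illustrative, not a real product's.

ds = xr.Dataset({"aod": (("time",), [0.21, 0.34, 0.28])})
ds["aod"].attrs["provenance"] = json.dumps({
    "sensor": "ExampleSensor",                        # hypothetical instrument
    "algorithm_version": "v2.1",                      # processing lineage
    "spatiotemporal_aggregation": "monthly mean, 1x1 deg",
    "quality_screening": "QA >= 2",
})
# A size-conscious encoding like this trades richness for portability,
# matching the trade-off the abstract describes for the embedded model.
```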
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, K. A. P.; Nishida, K.; Shibata, K.
The Solar Optical Telescope (SOT) on board Hinode allows observations with high spatiotemporal resolution and stable image quality. A λ-shaped chromospheric anemone jet was observed in high resolution with SOT/Hinode. We found that several fine-scale jets were launched from one end of the footpoint to the other. These fine-scale jets (~1.5-2.5 Mm) gradually move from one end of the footpoint to the other and finally merge into a single jet. This process occurs recurrently, and as time progresses the jet activity becomes more and more violent. The time evolution of the region below the jet in Ca II H filtergram images taken with SOT shows that various parts (or knots) appear at different positions. These bright knots gradually merge into each other during the maximum phase. The systematic motion of the fine-scale jets is observed when different knots merge into each other. Such morphology would arise due to the emergence of a three-dimensional twisted flux rope in which the axial component (or the guide field) appears in the later stages of the flux rope emergence. The partial appearance of the knots could be due to the azimuthal magnetic field that appears during the early stage of the flux rope emergence. If the guide field is strong and reconnection occurs between the emerging flux rope and an ambient magnetic field, this could explain the typical feature of systematic motion in chromospheric anemone jets.
Algorithms for Large-Scale Astronomical Problems
2013-08-01
implemented as a succession of Hadoop MapReduce jobs and sequential programs written in Java. The sampling and splitting stages are implemented as... one MapReduce job, the partitioning and clustering phases make up another job. The merging stage is implemented as a stand-alone Java program. The... Merging. The merging stage is implemented as a sequential Java program that reads the files with the shell information, which were generated by
Web-Scale Search-Based Data Extraction and Integration
2011-10-17
differently, posing challenges for aggregating this information. For example, for the task of finding the population of cities in Benin, we were faced with... merged record. Our GeoMerging algorithm attempts to address various ambiguity challenges: • For name: The name of a hospital is not a unique... departments in the same building. For agent-extractor results from structured sources, our GeoMerging algorithm overcomes these challenges using a two
3D Printing Processes Applied to the Creation of Glass Art
ERIC Educational Resources Information Center
Chivers, Morgan
2015-01-01
The purpose of this article is to present a few of the innovative techniques used in the execution of Morgan Chivers' sculptural work, not the content of the work itself. The author's interest has been in merging the methodologies and precise output control of 3D printing with finished objects in nonprintable materials as required by the…
An Econometric Approach to Evaluate Navy Advertising Efficiency.
1996-03-01
This thesis uses an econometric approach to systematically and comprehensively analyze Navy advertising and recruiting data to determine Navy advertising cost efficiency in the Navy recruiting process. Current recruiting and advertising cost data are merged into an appropriate database and evaluated using multiple regression techniques to assess the relationships between Navy advertising expenditures and recruit contracts attained.
ERIC Educational Resources Information Center
Zitter, Ilya; Hoeve, Aimee
2012-01-01
This paper deals with the problematic nature of the transition between education and the workplace. A smooth transition between education and the workplace requires learners to develop an integrated knowledge base, but this is problematic as most educational programmes offer knowledge and experiences in a fragmented manner, scattered over a…
ERIC Educational Resources Information Center
Chen, Cheng-ping; Wang, Chang-Hwa
2015-01-01
Studies have proven that merging hands-on and online learning can result in an enhanced experience in learning science. In contrast to traditional online learning, multiple in-classroom activities may be involved in an augmented-reality (AR)-embedded e-learning process and thus could reduce the effects of individual differences. Using a…
Merging Education and Business Models to Create and Sustain Transformational Change
ERIC Educational Resources Information Center
Isenberg, Susan
2010-01-01
In 2004, a large Midwest hospital was losing money, patients, employees, and physicians. A business consultant was hired to engage key employees in a process to improve the quality and efficiency of patient care. The improvement was negligible after the first year, so a 3-man consultancy was added in 2005 to engage all employees in an educational…
A BARYONIC EFFECT ON THE MERGER TIMESCALE OF GALAXY CLUSTERS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Congyao; Yu, Qingjuan; Lu, Youjun, E-mail: yuqj@pku.edu.cn
2016-04-01
Accurate estimation of the merger timescales of galaxy clusters is important for understanding the cluster merger process and further understanding the formation and evolution of the large-scale structure of the universe. In this paper, we explore a baryonic effect on the merger timescale of galaxy clusters by using hydrodynamical simulations. We find that the baryons play an important role in accelerating the merger process. The merger timescale decreases upon increasing the gas fraction of galaxy clusters. For example, the merger timescale is shortened by a factor of up to 3 for merging clusters with gas fractions of 0.15, compared with the timescale obtained with zero gas fraction. The baryonic effect is significant for a wide range of merger parameters and is particularly more significant for nearly head-on mergers and high merging velocities. The baryonic effect on the merger timescale of galaxy clusters is expected to have an impact on the structure formation in the universe, such as the cluster mass function and massive substructures in galaxy clusters, and a "no-gas" bias may exist in results obtained from dark matter-only cosmological simulations.
The Role Of Mergers In Galaxy Formation And Transformations
NASA Astrophysics Data System (ADS)
Conselice, Christopher J.; Mundy, Carl; Duncan, Kenneth
2017-06-01
Baryonic assembly of galaxies is one of the largest questions in extragalactic studies, and it relates to many other issues, including environment, feedback, star formation, gas accretion and merging. In fact, all of these processes are related and must be accounted for and understood to paint a full picture of galaxy assembly. Perhaps the most straightforward of these processes to measure are the merging and star formation histories. I will present results from a new combined reanalysis of the three deepest and largest NIR surveys taken to date: UDS, Ultra-VISTA and VIDEO, as part of the REFINE project. Using consistently measured stellar masses and photometric redshifts for galaxies in these fields up to z = 3, I will show how the major and minor merger rates can be measured consistently across these fields. Our new method makes full use of the PDFs for photo-zs and stellar masses. We show that the merger fraction and rate are lower than in previous results and discuss the implications of this for other modes of galaxy assembly and for feedback mechanisms. Invited Talk presented at the conference Galaxy Evolution Across Time, 12-16 June, Paris, France
Axt, Brant; Hsieh, Yi-Fan; Nalayanda, Divya; Wang, Tza-Huei
2017-09-01
Droplet microfluidics has found use in many biological assay applications as a means of high-throughput sample processing. One of the challenges of the technology, however, is the ability to control and merge droplets on-demand as they flow through the microdevices. It is in the interest of developing lab-on-chip devices to be able to combinatorially program additive mixing steps for more complex multistep and multiplex assays. Existing technologies to merge droplets are either passive in nature or require highly predictable droplet movement for feedforward control, making them vulnerable to errors during high throughput operation. In this paper, we describe and demonstrate a microfluidic valve-based device for the purpose of combinatorial droplet injection at any stage in a multistep assay. Microfluidic valves are used to robustly control fluid flow, droplet generation, and droplet mixing in the device on-demand, while on-chip impedance measurements taken in real time are used as feedback to accurately time the droplet injections. The presented system is contrasted with attempts without feedback, and is shown to be 100% reliable over long durations. Additionally, content detection and discretionary injections are explored and successfully executed.
Controllable growth of shaped graphene domains by atmospheric pressure chemical vapour deposition
NASA Astrophysics Data System (ADS)
Fan, Lili; Li, Zhen; Li, Xiao; Wang, Kunlin; Zhong, Minlin; Wei, Jinquan; Wu, Dehai; Zhu, Hongwei
2011-12-01
Graphene domains in different shapes have been grown on copper substrates via atmospheric pressure chemical vapour deposition by controlling the growth process parameters. Under stabilized conditions, graphene domains tend to be six-fold symmetric hexagons under low flow rate methane with some domains in an irregular hexagonal shape. After further varying the growth duration, methane flow rate, and temperature, graphene domains have developed shapes from hexagon to shovel and dendrite. Two connecting modes, through overlap and merging of adjacent graphene domains, are proposed. Electronic supplementary information (ESI) available: Schematics of CVD setups for graphene growth, Raman spectra and SEM images. See DOI: 10.1039/c1nr11480h
Demosaiced pixel super-resolution for multiplexed holographic color imaging
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2016-01-01
To synthesize a holographic color image, one can sequentially take three holograms at different wavelengths, e.g., at red (R), green (G) and blue (B) parts of the spectrum, and digitally merge them. To speed up the imaging process by a factor of three, a Bayer color sensor-chip can also be used to demultiplex three wavelengths that simultaneously illuminate the sample and digitally retrieve individual sets of holograms using the known transmission spectra of the Bayer color filters. However, because the pixels of different channels (R, G, B) on a Bayer color sensor are not at the same physical location, conventional demosaicing techniques generate color artifacts in holographic imaging using simultaneous multi-wavelength illumination. Here we demonstrate that pixel super-resolution can be merged into the color de-multiplexing process to significantly suppress the artifacts in wavelength-multiplexed holographic color imaging. This new approach, termed Demosaiced Pixel Super-Resolution (D-PSR), generates color images that are similar in performance to sequential illumination at three wavelengths, and therefore improves the speed of holographic color imaging by 3-fold. The D-PSR method is broadly applicable to holographic microscopy applications where high-resolution imaging and multi-wavelength illumination are desired. PMID:27353242
Interferometric side scan sonar and data fusion
NASA Astrophysics Data System (ADS)
Sintes, Christophe R.; Solaiman, Basel
2000-04-01
This paper concerns imaging of the sea bottom and determination of the altitude of each imaged point. New side scan sonars can image the sea bottom at high definition and evaluate the relief at the same definition, capabilities that derive from an interferometric multisensor system. The drawback concerns the precision of the numerical altitude model. One way to improve measurement precision is to merge all the information issued from the multi-sensor system, which increases the Signal to Noise Ratio (SNR) and the robustness of the method. The aim of this paper is to clearly demonstrate the benefit of merging all information issued from the three-array side scan sonar: (1) the three phase signals obtained at the output of the sensors, (2) the same set of data after the application of different processing methods, and (3) the a priori relief contextual information. The key idea of the proposed fusion technique is to exploit the strengths and weaknesses of each data element in the fusion process so that the global SNR is improved, as well as the robustness to hostile noisy environments.
Burke, Órlaith; Benton, Samantha; Szafranski, Pawel; von Dadelszen, Peter; Buhimschi, S Catalin; Cetin, Irene; Chappell, Lucy; Figueras, Francesc; Galindo, Alberto; Herraiz, Ignacio; Holzman, Claudia; Hubel, Carl; Knudsen, Ulla; Kronborg, Camilla; Laivuori, Hannele; Lapaire, Olav; McElrath, Thomas; Moertl, Manfred; Myers, Jenny; Ness, Roberta B; Oliveira, Leandro; Olson, Gayle; Poston, Lucilla; Ris-Stalpers, Carrie; Roberts, James M; Schalekamp-Timmermans, Sarah; Schlembach, Dietmar; Steegers, Eric; Stepan, Holger; Tsatsaris, Vassilis; van der Post, Joris A; Verlohren, Stefan; Villa, Pia M; Williams, David; Zeisler, Harald; Redman, Christopher W G; Staff, Anne Cathrine
2016-01-01
A common challenge in medicine, exemplified in the analysis of biomarker data, is that large studies are needed for sufficient statistical power. Often, this may only be achievable by aggregating multiple cohorts. However, different studies may use disparate platforms for laboratory analysis, which can hinder merging. Using circulating placental growth factor (PlGF), a potential biomarker for hypertensive disorders of pregnancy (HDP) such as preeclampsia, as an example, we investigated how such issues can be overcome by inter-platform standardization and merging algorithms. We studied 16,462 pregnancies from 22 study cohorts. PlGF measurements (gestational age ⩾20 weeks), analyzed on one of four platforms (R&D Systems, Alere Triage, Roche Elecsys or Abbott Architect), were available for 13,429 women. Two merging algorithms, using Z-Score and Multiple of Median transformations, were applied. Best reference curves (BRC), based on merged, transformed PlGF measurements in uncomplicated pregnancy across six gestational age groups, were estimated. Identification of HDP by these PlGF-BRCs was compared to that of platform-specific curves. We demonstrate the feasibility of merging PlGF concentrations from different analytical platforms. Overall BRC identification of HDP performed at least as well as platform-specific curves. Our method can be extended to any set of biomarkers obtained from different laboratory platforms in any field. Merged biomarker data from multiple studies will improve statistical power and enlarge our understanding of the pathophysiology and management of medical syndromes. Copyright © 2015 International Society for the Study of Hypertension in Pregnancy. Published by Elsevier B.V. All rights reserved.
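To make the standardization step concrete, the following is a minimal Python sketch of the two transformations named above (Multiple of Median and Z-Score); the reference tables, platform behaviour and numeric values are illustrative assumptions, not data from the study:

    import numpy as np

    def to_mom(values, ga_weeks, median_by_week):
        # Multiple of Median: divide each PlGF value by the platform-specific
        # median for its gestational-age group.
        med = np.array([median_by_week[w] for w in ga_weeks])
        return values / med

    def to_zscore(values, ga_weeks, mean_by_week, sd_by_week):
        # Z-Score: centre and scale by platform-specific, gestational-age-
        # matched reference statistics.
        mu = np.array([mean_by_week[w] for w in ga_weeks])
        sd = np.array([sd_by_week[w] for w in ga_weeks])
        return (values - mu) / sd

    # Hypothetical example: platform B reads about 2x lower than platform A,
    # but the MoM-transformed values coincide and can be merged.
    weeks = np.array([24, 32])
    mom_a = to_mom(np.array([120.0, 300.0]), weeks, {24: 150.0, 32: 400.0})
    mom_b = to_mom(np.array([60.0, 150.0]), weeks, {24: 75.0, 32: 200.0})
    print(mom_a, mom_b)  # [0.8 0.75] [0.8 0.75]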
NASA Astrophysics Data System (ADS)
Tamminen, J.; Sofieva, V.; Kyrölä, E.; Laine, M.; Degenstein, D. A.; Bourassa, A. E.; Roth, C.; Zawada, D.; Weber, M.; Rozanov, A.; Rahpoe, N.; Stiller, G. P.; Laeng, A.; von Clarmann, T.; Walker, K. A.; Sheese, P.; Hubert, D.; Van Roozendael, M.; Zehner, C.; Damadeo, R. P.; Zawodny, J. M.; Kramarova, N. A.; Bhartia, P. K.
2017-12-01
We present a merged dataset of ozone profiles from several satellite instruments: SAGE II on ERBS, GOMOS, SCIAMACHY and MIPAS on Envisat, OSIRIS on Odin, ACE-FTS on SCISAT, and OMPS on Suomi-NPP. The merged dataset is created in the framework of the European Space Agency Climate Change Initiative (Ozone_cci) with the aim of analyzing stratospheric ozone trends. For the merged dataset, we used the latest versions of the original ozone datasets. The datasets from the individual instruments have been extensively validated and inter-compared; only those datasets that are in good agreement and do not exhibit significant drifts with respect to collocated ground-based observations and with respect to each other are used for merging. The long-term SAGE-CCI-OMPS dataset is created by computation and merging of deseasonalized anomalies from individual instruments. The merged SAGE-CCI-OMPS dataset consists of deseasonalized anomalies of ozone in 10° latitude bands from 90°S to 90°N and from 10 to 50 km in steps of 1 km covering the period from October 1984 to July 2016. This newly created dataset is used for evaluating ozone trends in the stratosphere through multiple linear regression. Negative ozone trends in the upper stratosphere are observed before 1997 and positive trends are found after 1997. The upper stratospheric trends are statistically significant at mid-latitudes in the upper stratosphere and indicate ozone recovery, as expected from the decrease of stratospheric halogens that started in the middle of the 1990s.
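As a rough illustration of how deseasonalized anomalies can be computed per instrument and then merged, here is a Python sketch; the array layout (monthly means per latitude band and altitude level, starting in January) is an assumption for illustration, not the actual Ozone_cci format:

    import numpy as np

    def deseasonalized_anomalies(ozone):
        # ozone: float array (n_months, n_lat_bands, n_levels), NaN = no data.
        # Subtracting each instrument's own mean seasonal cycle removes both
        # the seasonal signal and constant inter-instrument biases.
        months = np.arange(ozone.shape[0]) % 12   # assumes a January start
        anom = np.full_like(ozone, np.nan)
        for m in range(12):
            sel = months == m
            anom[sel] = ozone[sel] - np.nanmean(ozone[sel], axis=0)
        return anom

    def merge_instruments(records):
        # Average anomalies across instruments, ignoring missing entries.
        stack = np.stack([deseasonalized_anomalies(r) for r in records])
        return np.nanmean(stack, axis=0)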
Safety evaluation of joint and conventional lane merge configurations for freeway work zones.
Ishak, Sherif; Qi, Yan; Rayaprolu, Pradeep
2012-01-01
Inefficient operation of traffic in work zone areas not only leads to an increase in travel time delays, queue length, and fuel consumption but also increases the number of forced merges and roadway accidents. This study evaluated the safety performance of work zones with a conventional lane merge (CLM) configuration in Louisiana. Analysis of variance (ANOVA) was used to compare the crash rates for accidents involving fatalities, injuries, and property damage only (PDO) in each of the following 4 areas: (1) advance warning area, (2) transition area, (3) work area, and (4) termination area. The analysis showed that the advance warning area had higher fatality, injury, and PDO crash rates when compared to the transition area, work area, and termination area. This finding confirmed the need to make improvements in the advance warning area where merging maneuvers take place. Therefore, a new lane merge configuration, called joint lane merge (JLM), was proposed and its safety performance was examined and compared to the conventional lane merge configuration using a microscopic simulation model (VISSIM), which was calibrated with real-world data from an existing work zone on I-55 and used to simulate a total of 25 different scenarios with different levels of demand and traffic composition. Safety performance was evaluated using 2 surrogate measures: uncomfortable decelerations and speed variance. Statistical analysis was conducted to determine whether the differences in safety performance between both configurations were significant. The safety analysis indicated that JLM outperformed CLM in most cases with low to moderate flow rates and that the percentage of trucks did not have a significant impact on the safety performance of either configuration. Though the safety analysis did not clearly indicate which lane merge configuration is safer for the overall work zone area, it was able to identify the safety changes possibly associated with each configuration within the work zone area under different traffic conditions. Copyright © 2012 Taylor & Francis Group, LLC
Research of real-time video processing system based on 6678 multi-core DSP
NASA Astrophysics Data System (ADS)
Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang
2017-10-01
In the information age, video processing is developing rapidly in the direction of intelligence, and complex algorithms place heavy demands on processor performance. In this article, an FPGA + TMS320C6678 architecture integrates image defogging, image merging, and image stabilization and enhancement into a single system with good real-time behavior and superior performance. It overcomes the defects of traditional video processing systems, namely simple functionality and single-purpose products, and addresses video applications such as security monitoring, giving full play to the effectiveness of video surveillance and improving enterprise economic benefits.
Post processing for offline Chinese handwritten character string recognition
NASA Astrophysics Data System (ADS)
Wang, YanWei; Ding, XiaoQing; Liu, ChangSong
2012-01-01
Offline Chinese handwritten character string recognition is one of the most important research fields in pattern recognition. Due to the free writing style, large variability in character shapes and different geometric characteristics, Chinese handwritten character string recognition is a challenging problem. Among current methods, the over-segmentation and merging method, which integrates geometric information, character recognition information and contextual information, shows promising results. It is found experimentally that a large part of the errors are segmentation errors, which mainly occur around non-Chinese characters. In a Chinese character string, there are not only wide characters, namely Chinese characters, but also narrow characters like digits and letters of the alphabet. The segmentation errors are mainly caused by a uniform geometric model imposed on all segmented candidate characters. To solve this problem, post processing is employed to improve the recognition accuracy of narrow characters. On one hand, multi-geometric models are established for wide characters and narrow characters respectively. Under multi-geometric models, narrow characters are not prone to being merged. On the other hand, top-ranked recognition results of candidate paths are integrated to boost the final recognition of narrow characters. The post processing method is investigated on two datasets, in total 1405 handwritten address strings. The wide character recognition accuracy improved slightly, and the narrow character recognition accuracy increased by 10.41% and 10.03% on the two datasets, respectively. This indicates that the post processing method is effective in improving the recognition accuracy of narrow characters.
Operational Art and its Relevance to Army Logisticians
1999-12-06
essence. Chapter two describes how the operational planner synthesizes the problem environment and merges the "art" and "science" of war through...catalyst that merges application of science and theory into "art." The planner must also possess personal attributes that allow application to be...synergize (instead of dictate) the merging of science and art. be an expert at applying MDMP...not enslaved by it. absorb the construct of operational art
Revision history aware repositories of computational models of biological systems.
Miller, Andrew K; Yu, Tommy; Britten, Randall; Cooling, Mike T; Lawson, James; Cowan, Dougal; Garny, Alan; Halstead, Matt D B; Hunter, Peter J; Nickerson, David P; Nunns, Geo; Wimalaratne, Sarala M; Nielsen, Poul M F
2011-01-14
Building repositories of computational models of biological systems ensures that published models are available for both education and further research, and can provide a source of smaller, previously verified models to integrate into a larger model. One problem with earlier repositories has been the limitations in facilities to record the revision history of models. Often, these facilities are limited to a linear series of versions which were deposited in the repository. This is problematic for several reasons. Firstly, there are many instances in the history of biological systems modelling where an 'ancestral' model is modified by different groups to create many different models. With a linear series of versions, if the changes made to one model are merged into another model, the merge appears as a single item in the history. This hides useful revision history information, and also makes further merges much more difficult, as there is no record of which changes have or have not already been merged. In addition, a long series of individual changes made outside of the repository are also all merged into a single revision when they are put back into the repository, making it difficult to separate out individual changes. Furthermore, many earlier repositories only retain the revision history of individual files, rather than of a group of files. This is an important limitation to overcome, because some types of models, such as CellML 1.1 models, can be developed as a collection of modules, each in a separate file. The need for revision history is widely recognised for computer software, and a lot of work has gone into developing version control systems and distributed version control systems (DVCSs) for tracking the revision history. However, to date, there has been no published research on how DVCSs can be applied to repositories of computational models of biological systems. We have extended the Physiome Model Repository software to be fully revision history aware, by building it on top of Mercurial, an existing DVCS. We have demonstrated the utility of this approach, when used in conjunction with the model composition facilities in CellML, to build and understand more complex models. We have also demonstrated the ability of the repository software to present version history to casual users over the web, and to highlight specific versions which are likely to be useful to users. Providing facilities for maintaining and using revision history information is an important part of building a useful repository of computational models, as this information is useful both for understanding the source of and justification for parts of a model, and to facilitate automated processes such as merges. The availability of fully revision history aware repositories, and associated tools, will therefore be of significant benefit to the community.
Dynamics of merging: post-merger mixing and relaxation of an Illustris galaxy
NASA Astrophysics Data System (ADS)
Young, Anthony M.; Williams, Liliya L. R.; Hjorth, Jens
2018-02-01
During the merger of two galaxies, the resulting system undergoes violent relaxation and seeks stable equilibrium. However, the details of this evolution are not fully understood. Using the Illustris simulation, we probe two physically related processes, mixing and relaxation. Though the two are driven by the same dynamics—global time-varying potential for the energy, and torques caused by asymmetries for angular momentum—we measure them differently. We define mixing as the redistribution of energy and angular momentum between particles of the two merging galaxies. We assess the degree of mixing as the difference between the shapes of their energy distributions, N(E)s, and their angular momentum distributions, N(L2)s. We find that the difference is decreasing with time, indicating mixing. To measure relaxation, we compare N(E) of the newly merged system to N(E) of a theoretical prediction for relaxed collisionless systems, DARKexp, and witness the system becoming more relaxed, in the sense that N(E) approaches DARKexp N(E). Because the dynamics driving mixing and relaxation are the same, the timescale is similar for both. We measure two sequential timescales: a rapid, 1 Gyr phase after the initial merger, during which the difference in N(E) of the two merging halos decreases by ~ 80%, followed by a slow phase, when the difference decreases by ~ 50% over ~ 8.5 Gyrs. This is a direct measurement of the relaxation timescale. Our work also draws attention to the fact that when a galaxy has reached Jeans equilibrium it may not yet have reached a fully relaxed state given by DARKexp, in that it retains information about its past history. This manifests itself most strongly in stars being centrally concentrated. We argue that it is particularly difficult for stars, and other tightly bound particles, to mix because they have less time to be influenced by the fluctuating potential, even across multiple merger events.
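The degree of mixing described above can be illustrated with a short Python sketch that compares the shapes of the two progenitors' N(E) distributions; the L1 histogram distance used here is an illustrative choice, not necessarily the metric used by the authors:

    import numpy as np

    def ne_shape_difference(energies_a, energies_b, nbins=50):
        # Histogram both progenitors' particle energies on a common grid and
        # return the L1 distance between the normalized shapes; the value
        # shrinks toward zero as the two populations mix.
        lo = min(energies_a.min(), energies_b.min())
        hi = max(energies_a.max(), energies_b.max())
        bins = np.linspace(lo, hi, nbins + 1)
        na, _ = np.histogram(energies_a, bins=bins, density=True)
        nb, _ = np.histogram(energies_b, bins=bins, density=True)
        return np.sum(np.abs(na - nb)) * (bins[1] - bins[0])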
Program Merges SAR Data on Terrain and Vegetation Heights
NASA Technical Reports Server (NTRS)
Siqueira, Paul; Hensley, Scott; Rodriguez, Ernesto; Simard, Marc
2007-01-01
X/P Merge is a computer program that estimates ground-surface elevations and vegetation heights from multiple sets of data acquired by the GeoSAR instrument [a terrain-mapping synthetic-aperture radar (SAR) system that operates in the X and P bands]. X/P Merge software combines data from X- and P-band digital elevation models, SAR backscatter magnitudes, and interferometric correlation magnitudes into a simplified set of output topographical maps of ground-surface elevation and tree height.
Control methods for merging ALSM and ground-based laser point clouds acquired under forest canopies
NASA Astrophysics Data System (ADS)
Slatton, Kenneth C.; Coleman, Matt; Carter, William E.; Shrestha, Ramesh L.; Sartori, Michael
2004-12-01
Merging of point data acquired from ground-based and airborne scanning laser rangers has been demonstrated for cases in which a common set of targets can be readily located in both data sets. However, direct merging of point data was not generally possible if the two data sets did not share common targets. This is often the case for ranging measurements acquired in forest canopies, where airborne systems image the canopy crowns well, but receive a relatively sparse set of points from the ground and understory. Conversely, ground-based scans of the understory do not generally sample the upper canopy. An experiment was conducted to establish a viable procedure for acquiring and georeferencing laser ranging data underneath a forest canopy. Once georeferenced, the ground-based data points can be merged with airborne points even in cases where no natural targets are common to both data sets. Two ground-based laser scans are merged and georeferenced with a final absolute error in the target locations of less than 10 cm. This is comparable to the accuracy of the georeferenced airborne data. Thus, merging of the georeferenced ground-based and airborne data should be feasible. The motivation for this investigation is to facilitate a thorough characterization of airborne laser ranging phenomenology over forested terrain as a function of vertical location in the canopy.
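The essential point, that georeferencing removes the need for shared targets, can be sketched in a few lines of Python; the rotation and translation are assumed to come from the survey of the scanner position and orientation, and all names are illustrative:

    import numpy as np

    def georeference(points, R, t):
        # Rigid-body transform of scanner-frame points (N, 3) into the
        # geodetic frame: rotation R (3, 3) and translation t (3,).
        return points @ R.T + t

    def merge_clouds(airborne_geo, ground_scan, R, t):
        # Once both clouds are in the same georeferenced frame, merging is
        # simple concatenation; no common targets are required.
        return np.vstack([airborne_geo, georeference(ground_scan, R, t)])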
Merging of multi-string BWTs with applications
Holt, James; McMillan, Leonard
2014-01-01
Motivation: The throughput of genomic sequencing has increased to the point that it is overrunning the rate of downstream analysis. This, along with the desire to revisit old data, has led to a situation where large quantities of raw, and nearly impenetrable, sequence data are rapidly filling the hard drives of modern biology labs. These datasets can be compressed via a multi-string variant of the Burrows–Wheeler Transform (BWT), which provides the side benefit of searches for arbitrary k-mers within the raw data as well as the ability to reconstitute arbitrary reads as needed. We propose a method for merging such datasets for both increased compression and downstream analysis. Results: We present a novel algorithm that merges multi-string BWTs in O(LCS×N) time, where LCS is the length of the longest common substring between any of the inputs, and N is the total length of all inputs combined (number of symbols), using O(N×log2(F)) bits, where F is the number of multi-string BWTs merged. This merged multi-string BWT is also shown to have higher compressibility compared with the input multi-string BWTs separately. Additionally, we explore some uses of a merged multi-string BWT for bioinformatics applications. Availability and implementation: The MSBWT package is available through PyPI with source code located at https://code.google.com/p/msbwt/. Contact: holtjma@cs.unc.edu PMID:25172922
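To illustrate the flavor of multi-string BWT merging, here is a toy Python sketch of the interleave-refinement idea (in the spirit of the algorithm described, not the authors' optimized implementation); it repeatedly radix-sorts an interleave vector until it stabilizes:

    def merge_multistring_bwts(bwt1, bwt2):
        # interleave[i] records which input the i-th merged symbol comes
        # from; start with all of bwt1 followed by all of bwt2 and refine.
        bwts = (bwt1, bwt2)
        interleave = [0] * len(bwt1) + [1] * len(bwt2)
        alphabet = sorted(set(bwt1 + bwt2))
        while True:
            buckets = {c: [] for c in alphabet}
            cursors = [0, 0]
            for src in interleave:            # one stable radix pass
                c = bwts[src][cursors[src]]
                cursors[src] += 1
                buckets[c].append(src)
            refined = [s for c in alphabet for s in buckets[c]]
            if refined == interleave:         # fixed point reached
                break
            interleave = refined
        cursors, out = [0, 0], []
        for src in interleave:                # emit merged BWT symbols
            out.append(bwts[src][cursors[src]])
            cursors[src] += 1
        return "".join(out)

    # BWTs of the single-string collections {"A"} and {"B"}:
    print(merge_multistring_bwts("A$", "B$"))  # -> "AB$$"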
Hybrid region merging method for segmentation of high-resolution remote sensing images
NASA Astrophysics Data System (ADS)
Zhang, Xueliang; Xiao, Pengfeng; Feng, Xuezhi; Wang, Jiangeng; Wang, Zuo
2014-12-01
Image segmentation remains a challenging problem for object-based image analysis. In this paper, a hybrid region merging (HRM) method is proposed to segment high-resolution remote sensing images. HRM integrates the advantages of global-oriented and local-oriented region merging strategies into a unified framework. The globally most-similar pair of regions is used to determine the starting point of a growing region, which provides an elegant way to avoid the problem of starting point assignment and to enhance the optimization ability for local-oriented region merging. During the region growing procedure, the merging iterations are constrained within the local vicinity, so that the segmentation is accelerated and can reflect the local context, as compared with the global-oriented method. A set of high-resolution remote sensing images is used to test the effectiveness of the HRM method, and three region-based remote sensing image segmentation methods are adopted for comparison, including the hierarchical stepwise optimization (HSWO) method, the local-mutual best region merging (LMM) method, and the multiresolution segmentation (MRS) method embedded in eCognition Developer software. Both the supervised evaluation and visual assessment show that HRM performs better than HSWO and LMM by combining both their advantages. The segmentation results of HRM and MRS are visually comparable, but HRM can describe objects as single regions better than MRS, and the supervised and unsupervised evaluation results further prove the superiority of HRM.
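The hybrid strategy, a global search for the most similar adjacent pair followed by local growth, can be sketched as follows in Python; region features are reduced to mean vectors, and the dissimilarity measure and threshold are illustrative assumptions rather than the paper's criteria:

    import numpy as np

    def hybrid_region_merging(means, sizes, adjacency, threshold):
        # means: {region: mean feature vector}, sizes: {region: pixel count},
        # adjacency: {region: set of neighbouring regions}; merges in place.
        def dissim(a, b):
            return np.linalg.norm(means[a] - means[b])

        def absorb(a, b):                     # fold region b into region a
            w = sizes[a] + sizes[b]
            means[a] = (sizes[a] * means[a] + sizes[b] * means[b]) / w
            sizes[a] = w
            for n in adjacency.pop(b):
                adjacency[n].discard(b)
                if n != a:
                    adjacency[n].add(a)
                    adjacency[a].add(n)
            del means[b], sizes[b]

        while True:
            # Global step: the most similar adjacent pair seeds a region.
            pairs = [(dissim(a, b), a, b)
                     for a in adjacency for b in adjacency[a] if a < b]
            if not pairs or min(pairs)[0] > threshold:
                return means, sizes, adjacency
            _, seed, nbr = min(pairs)
            # Local step: grow the seed within its own neighbourhood only.
            while True:
                absorb(seed, nbr)
                local = [(dissim(seed, n), n) for n in adjacency[seed]]
                if not local or min(local)[0] > threshold:
                    break
                nbr = min(local)[1]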
Simulation Results for Airborne Precision Spacing along Continuous Descent Arrivals
NASA Technical Reports Server (NTRS)
Barmore, Bryan E.; Abbott, Terence S.; Capron, William R.; Baxley, Brian T.
2008-01-01
This paper describes the results of a fast-time simulation experiment and a high-fidelity simulator validation with merging streams of aircraft flying Continuous Descent Arrivals through generic airspace to a runway at Dallas-Ft Worth. Aircraft made small speed adjustments based on an airborne-based spacing algorithm, so as to arrive at the threshold exactly at the assigned time interval behind their Traffic-To-Follow. The 40 aircraft were initialized at different altitudes and speeds on one of four different routes, and then merged at different points and altitudes while flying Continuous Descent Arrivals. This merging and spacing using flight deck equipment and procedures to augment or implement Air Traffic Management directives is called Flight Deck-based Merging and Spacing, an important subset of a larger Airborne Precision Spacing functionality. This research indicates that Flight Deck-based Merging and Spacing initiated while at cruise altitude and well prior to the Terminal Radar Approach Control entry can significantly contribute to the delivery of aircraft at a specified interval to the runway threshold with a high degree of accuracy and at a reduced pilot workload. Furthermore, previously documented work has shown that using a Continuous Descent Arrival instead of a traditional step-down descent can save fuel, reduce noise, and reduce emissions. Research into Flight Deck-based Merging and Spacing is a cooperative effort between government and industry partners.
Merging-compression formation of high temperature tokamak plasma
NASA Astrophysics Data System (ADS)
Gryaznevich, M. P.; Sykes, A.
2017-07-01
Merging-compression is a solenoid-free plasma formation method used in spherical tokamaks (STs). Two plasma rings are formed and merged via magnetic reconnection into one plasma ring that is then radially compressed to form the ST configuration. Plasma currents of several hundred kA and plasma temperatures in the keV range have been produced using this method; however, until recently there was no full understanding of the merging-compression formation physics. In this paper we explain in detail, for the first time, all stages of the merging-compression plasma formation. This method will be used to create ST plasmas in the compact (R ~ 0.4-0.6 m) high field, high current (3 T/2 MA) ST40 tokamak. Moderate extrapolation from the available experimental data suggests the possibility of achieving plasma currents of ~2 MA and temperatures in the 10 keV range at densities of ~1-5 × 10^20 m^-3, bringing ST40 plasmas into burning-plasma-relevant (alpha particle heating) conditions directly from the plasma formation. Issues connected with this approach for ST40 and future ST reactors are discussed.
A graph-based watershed merging using fuzzy C-means and simulated annealing for image segmentation
NASA Astrophysics Data System (ADS)
Vadiveloo, Mogana; Abdullah, Rosni; Rajeswari, Mandava
2015-12-01
In this paper, we have addressed the issue of over-segmented regions produced by watershed by merging the regions using a global feature. The global feature information is obtained from clustering the image in its feature space using Fuzzy C-Means (FCM) clustering. The over-segmented regions produced by performing watershed on the gradient of the image are then mapped to this global information in the feature space. Further to this, the global feature information is optimized using Simulated Annealing (SA). The optimal global feature information is used to derive the similarity criterion to merge the over-segmented watershed regions, which are represented by the region adjacency graph (RAG). The proposed method has been tested on a digital brain phantom simulated dataset to segment white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) soft tissue regions. The experiments showed that the proposed method performs statistically better than immersion watershed, with an average of 95.242% of regions merged, and achieves an average accuracy improvement of 8.850% in comparison with RAG-based immersion watershed merging using global and local features.
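A drastically simplified Python sketch of the merging step: each watershed region is assigned to its nearest cluster centroid in feature space (a hard version of the FCM memberships), and adjacent regions sharing a cluster are merged via union-find over the RAG; the SA optimization of the global feature information is omitted here:

    import numpy as np

    def merge_by_global_clusters(region_means, adjacency, centroids):
        # Hard-assign each region to the closest cluster centroid.
        labels = {r: int(np.argmin([np.linalg.norm(m - c) for c in centroids]))
                  for r, m in region_means.items()}
        # Union-find over the region adjacency graph (RAG).
        parent = {r: r for r in region_means}
        def find(r):
            while parent[r] != r:
                parent[r] = parent[parent[r]]  # path halving
                r = parent[r]
            return r
        for a in adjacency:
            for b in adjacency[a]:
                if labels[a] == labels[b]:     # same global cluster -> merge
                    parent[find(a)] = find(b)
        return {r: find(r) for r in region_means}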
An eMERGE Clinical Center at Partners Personalized Medicine
Smoller, Jordan W.; Karlson, Elizabeth W.; Green, Robert C.; Kathiresan, Sekar; MacArthur, Daniel G.; Talkowski, Michael E.; Murphy, Shawn N.; Weiss, Scott T.
2016-01-01
The integration of electronic medical records (EMRs) and genomic research has become a major component of efforts to advance personalized and precision medicine. The Electronic Medical Records and Genomics (eMERGE) network, initiated in 2007, is an NIH-funded consortium devoted to genomic discovery and implementation research by leveraging biorepositories linked to EMRs. In its most recent phase, eMERGE III, the network is focused on facilitating implementation of genomic medicine by detecting and disclosing rare pathogenic variants in clinically relevant genes. Partners Personalized Medicine (PPM) is a center dedicated to translating personalized medicine into clinical practice within Partners HealthCare. One component of the PPM is the Partners Healthcare Biobank, a biorepository comprising broadly consented DNA samples linked to the Partners longitudinal EMR. In 2015, PPM joined the eMERGE Phase III network. Here we describe the elements of the eMERGE clinical center at PPM, including plans for genomic discovery using EMR phenotypes, evaluation of rare variant penetrance and pleiotropy, and a novel randomized trial of the impact of returning genetic results to patients and clinicians. PMID:26805891
Mixed Element Type Unstructured Grid Generation for Viscous Flow Applications
NASA Technical Reports Server (NTRS)
Marcum, David L.; Gaither, J. Adam
2000-01-01
A procedure is presented for efficient generation of high-quality unstructured grids suitable for CFD simulation of high Reynolds number viscous flow fields. Layers of anisotropic elements are generated by advancing along prescribed normals from solid boundaries. The points are generated such that either pentahedral or tetrahedral elements with an implied connectivity can be directly recovered. As points are generated they are temporarily attached to a volume triangulation of the boundary points. This triangulation allows efficient local search algorithms to be used when checking merging layers. The existing advancing-front/local-reconnection procedure is used to generate isotropic elements outside of the anisotropic region. Results are presented for a variety of applications. The results demonstrate that high-quality anisotropic unstructured grids can be efficiently and consistently generated for complex configurations.
ERIC Educational Resources Information Center
Oyster, Nancy
This study reflects the feelings and impressions of women physical education teachers in 1975 at a time when many departments were in the process of merging. At that time few of the women held major administrative positions. Most women taught, published, and foresaw future graduate work in the areas of teacher preparation and curriculum. Only…
ERIC Educational Resources Information Center
Croston, Amanda
2013-01-01
Background: This paper examines physical education (PE) teachers' perceptions of talent in PE and sport within the context of English policy, where the process of identifying talent has been formalised and supported through specific resources (YST 2009). English policy has merged educational and sporting targets, which has resulted in a shift in…
NASA Astrophysics Data System (ADS)
Sun, Liang; Li, Qiu-Yang
2017-04-01
The oceanic mesoscale eddies play a major role in the ocean climate system. To analyse the spatiotemporal dynamics of oceanic mesoscale eddies, the Genealogical Evolution Model (GEM) based on satellite data is developed, an efficient logical model used to track the dynamic evolution of mesoscale eddies in the ocean. It can distinguish different dynamic processes (e.g., merging and splitting) within a dynamic evolution pattern, which is difficult to accomplish using other tracking methods. To this end, a mononuclear eddy detection method was first developed with simple segmentation strategies, e.g. the watershed algorithm. The algorithm is very fast because it searches along the steepest descent path. Second, the GEM uses a two-dimensional similarity vector (i.e. a pair of ratios of the overlap area between two eddies to the area of each eddy) rather than a scalar to measure the similarity between eddies, which effectively solves the "missing eddy" problem (an eddy temporarily lost during tracking). Third, for tracking when an eddy splits, GEM uses both "parent" (the original eddy) and "child" (the eddy split from the parent), and the dynamic processes are described as births and deaths of different generations. Additionally, a new look-ahead approach with selection rules effectively simplifies computation and recording. All of the computational steps are linear and do not include iteration. Given the pixel number of the target region L, the maximum number of eddies M, the number N of look-ahead time steps, and the total number of time steps T, the total computer time is O(LM(N+1)T). The tracking of each eddy is very smooth because we require that the snapshots of each eddy on adjacent days overlap one another. Although eddy splitting and merging are ubiquitous in the ocean, they have different geographic distributions in the Northern Pacific Ocean. Both the merging and splitting rates of the eddies are high, especially at the western boundary, in currents and in "eddy deserts". GEM is useful not only for satellite-based observational data but also for numerical simulation outputs. It is potentially useful for studying dynamic processes in other related fields, e.g., the dynamics of cyclones in meteorology.
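The two-dimensional similarity vector is simple to state in code; here is a minimal Python sketch, where the 0.5 threshold and the decision labels are illustrative assumptions rather than values from the paper:

    def similarity_vector(area_a, area_b, overlap_area):
        # A pair of ratios: the overlap as a fraction of each eddy's area.
        return overlap_area / area_a, overlap_area / area_b

    def classify(sim, thresh=0.5):
        ra, rb = sim
        if ra >= thresh and rb >= thresh:
            return "same eddy"                 # strong mutual overlap
        if ra >= thresh or rb >= thresh:
            return "merge/split candidate"     # one-sided overlap
        return "unrelated"

    print(classify(similarity_vector(100.0, 40.0, 35.0)))
    # -> "merge/split candidate": 35% of the big eddy, 87.5% of the small one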
NASA Astrophysics Data System (ADS)
Smith, D. P.; Kvitek, R.; Quan, S.; Iampietro, P.; Paddock, E.; Richmond, S. F.; Gomez, K.; Aiello, I. W.; Consulo, P.
2009-12-01
Models of watershed sediment yield are complicated by spatial and temporal variability of geologic substrate, land cover, and precipitation parameters. Episodic events such as ENSO cycles and severe wildfire are frequent enough to matter in the long-term average yield, and they can produce short-lived, extreme geomorphic responses. The sediment yield from extreme events is difficult to accurately capture because of the obvious dangers associated with field measurements during flood conditions, but it is critical to include extreme values for developing realistic models of rainfall-sediment yield relations, and for calculating long-term average denudation rates. Dammed rivers provide a time-honored natural laboratory for quantifying average annual sediment yield and extreme-event sediment yield. While lead-line surveys of the past provided crude estimates of reservoir sediment trapping, recent advances in geospatial technology now provide unprecedented opportunities to improve volume change measurements. High-precision digital elevation models surveyed on an annual basis, or before and after specific rainfall-runoff events, can be used to quantify relations between rainfall and sediment yield as a function of landscape parameters, including spatially explicit fire intensity. The Basin-Complex Fire of June and July 2008 resulted in moderate to severe burns in the 114 km^2 portion of the Carmel River watershed above Los Padres Dam. The US Geological Survey produced a debris flow probability/volume model for the region indicating that the reservoir could lose considerable capacity if intense enough precipitation occurred in the 2009-10 winter. Loss of Los Padres reservoir capacity has implications for endangered steelhead, red-legged frogs, groundwater, and the municipal water supply. In anticipation of potentially catastrophic erosion, we produced an accurate volume calculation of the Los Padres reservoir in fall 2009, and locally monitored hillslope and fluvial processes during winter months. The pre-runoff reservoir volume was developed by collecting and merging sonar and LiDAR data from a small research skiff equipped with a high-precision positioning and attitude-correcting system. The terrestrial LiDAR data were augmented with shore-based total station positioning. Watershed monitoring included benchmarked serial stream surveys and semi-quantitative assessment of a variety of near-channel colluvial processes. Rainfall in the 2009-10 water year was not intense enough to trigger widespread debris flows or slope failures in the burned watershed, but dry ravel was apparently accelerated. The geomorphic analysis showed that sediment yield was not significantly higher during this low-rainfall year, despite the widespread presence of very steep, fire-impacted slopes. Because there was little to no increase in sediment yield this year, we have postponed our second reservoir survey. An ENSO event that might bring very intense rains to the watershed is currently predicted for winter 2009-10.