Sample records for single introduced source

  1. Employment-based health insurance is failing: now what?

    PubMed

    Enthoven, Alain C

    2003-01-01

    Employment-based health insurance is failing. Costs are out of control. Employers have no effective strategy to deal with this. They must think strategically about fundamental change. This analysis explains how employers' purchasing policies contribute to rising costs and block growth of economical care. Single-source managed care is ineffective, and effective managed care cannot be a single source. Employers should create exchanges through which they can offer employees wide, responsible, individual, multiple choices among health care delivery systems and create serious competition based on value for money. Recently introduced technology can assist this process.

  2. A source number estimation method for single optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Hu, Junpeng; Huang, Zhiping; Su, Shaojing; Zhang, Yimeng; Liu, Chunwu

    2015-10-01

    The single-channel blind source separation (SCBSS) technique is of great significance in many fields, such as optical fiber communication, sensor detection, and image processing. Realizing blind source separation (BSS) from data received by a single optical fiber sensor has a wide range of applications. The performance of many BSS algorithms and signal processing methods is degraded by inaccurate source number estimation. Many excellent algorithms have been proposed to estimate the source number in array signal processing with multiple sensors, but they cannot be applied directly to the single-sensor case. This paper presents a source number estimation method for data received by a single optical fiber sensor. Through a delay process, the single-sensor data are converted to a multi-dimensional form and the data covariance matrix is constructed, so that estimation algorithms from array signal processing can be utilized. The information theoretic criteria (ITC) based methods, represented by AIC and MDL, and Gerschgorin's disk estimation (GDE) are introduced to estimate the source number of the single optical fiber sensor's received signal. To improve the performance of these estimation methods at low signal-to-noise ratio (SNR), a smoothing process is applied to the data covariance matrix, which reduces the fluctuation and uncertainty of its eigenvalues. Simulation results show that the ITC based methods cannot estimate the source number effectively under colored noise. The GDE method, although its performance is poor at low SNR, is able to accurately estimate the number of sources under colored noise. The experiments also show that the proposed method can be applied to estimate the source number of single-sensor received data.
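
    The delay-embedding and information-theoretic steps described above are standard enough to sketch. The following Python fragment is a minimal illustration (function names are hypothetical, and the Wax-Kailath form of MDL is assumed), not the authors' implementation; the covariance smoothing and the GDE criterion are omitted.

    ```python
    import numpy as np

    def delay_embed(x, m):
        """Convert a single-sensor record x into pseudo multi-channel form
        by stacking m delayed copies of the signal."""
        n = len(x) - m + 1
        return np.column_stack([x[i:i + n] for i in range(m)])

    def mdl_source_number(x, m=10):
        """Estimate the source number from the eigenvalues of the covariance
        matrix of the delay-embedded data, using the MDL criterion."""
        X = delay_embed(np.asarray(x, dtype=float), m)
        R = np.cov(X, rowvar=False)                  # m x m covariance matrix
        lam = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending
        lam = np.clip(lam, 1e-12, None)              # guard against numerical negatives
        n = X.shape[0]
        scores = []
        for k in range(m):
            tail = lam[k:]
            geo = np.exp(np.mean(np.log(tail)))      # geometric mean of "noise" eigenvalues
            ari = np.mean(tail)                      # arithmetic mean of "noise" eigenvalues
            log_likelihood = -n * (m - k) * np.log(geo / ari)
            penalty = 0.5 * k * (2 * m - k) * np.log(n)
            scores.append(log_likelihood + penalty)
        return int(np.argmin(scores))                # estimated number of sources
    ```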

  3. Theoretical value of pre-trade testing for Salmonella in Swedish cattle herds.

    PubMed

    Sternberg Lewerin, Susanna

    2018-05-01

    The Swedish Salmonella control programme includes mandatory action if Salmonella is detected in a herd. The aim of this study was to assess the relative value of different strategies for pre-movement testing of cattle. Three fictitious herds were included: dairy, beef and specialised calf-fattening. The yearly risks of introducing Salmonella with and without individual serological or bulk milk testing were assessed as well as the effects of sourcing animals from low-prevalence areas or reducing the number of source herds. The initial risk was highest for the calf-fattening herd and lowest for the beef herd. For the beef and dairy herds, the yearly risk of Salmonella introduction was reduced by about 75% with individual testing. Sourcing animals from low-prevalence areas reduced the risk by >99%. For the calf-fattening herd, the yearly risk was reduced by almost 50% by individual testing or sourcing animals from a maximum of five herds. The method was useful for illustrating effects of risk mitigation when introducing animals into a herd. Sourcing animals from low-risk areas (or herds) is more effective than single testing of individual animals or bulk milk. A comprehensive approach to reduce the risk of introducing Salmonella from source herds is justified. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Performance comparison of single axis tracking and 40° solar panels for sunny weather

    NASA Astrophysics Data System (ADS)

    Chua, Yaw Long; Yong, Yoon Kuang; Koh, Yit Yan

    2017-09-01

    The rapid increase in human population and economic growth has led to rising energy demand globally. With fossil-fuel-based energy sources rapidly diminishing, renewable energy sources have been introduced because of their essentially unlimited availability, especially solar energy, which is sustainable and reliable. This research was conducted to study and compare the efficiency of a single axis tracking solar panel with a solar panel fixed at a 40° inclination angle under sunny weather conditions. The results indicated that the output generated by a solar panel is directly affected by the angle at which the panel faces the sun. In terms of performance, the single axis tracking solar panel emerged as more efficient, generating more energy.

  5. Sol-gel precursors and products thereof

    DOEpatents

    Warren, Scott C.; DiSalvo, Jr., Francis J.; Weisner, Ulrich B.

    2017-02-14

    The present invention provides a generalizable single-source sol-gel precursor capable of introducing a wide range of functionalities to metal oxides such as silica. The sol-gel precursor facilitates a one-molecule, one-step approach to the synthesis of metal-silica hybrids with combinations of biological, catalytic, magnetic, and optical functionalities. The single-source precursor also provides a flexible route for simultaneously incorporating functional species of many different types. The ligands employed for functionalizing the metal oxides are derived from a library of amino acids, hydroxy acids, or peptides and a silicon alkoxide, allowing many biological functionalities to be built into silica hybrids. The ligands can coordinate with a wide range of metals via a carboxylic acid, thereby allowing direct incorporation of inorganic functionalities from across the periodic table. Using the single-source precursor a wide range of functionalized nanostructures such as monolith structures, mesostructures, multiple metal gradient mesostructures and Stober-type nanoparticles can be synthesized.

  6. Independent component analysis based digital signal processing in coherent optical fiber communication systems

    NASA Astrophysics Data System (ADS)

    Li, Xiang; Luo, Ming; Qiu, Ying; Alphones, Arokiaswami; Zhong, Wen-De; Yu, Changyuan; Yang, Qi

    2018-02-01

    In this paper, channel equalization techniques for coherent optical fiber transmission systems based on independent component analysis (ICA) are reviewed. The principle of ICA for blind source separation is introduced. The ICA based channel equalization after both single-mode fiber and few-mode fiber transmission for single-carrier and orthogonal frequency division multiplexing (OFDM) modulation formats are investigated, respectively. The performance comparisons with conventional channel equalization techniques are discussed.

  7. Measurement-device-independent quantum key distribution with multiple crystal heralded source with post-selection

    NASA Astrophysics Data System (ADS)

    Chen, Dong; Shang-Hong, Zhao; MengYi, Deng

    2018-03-01

    The multiple crystal heralded source with post-selection (MHPS), originally introduced to improve the single-photon character of the heralded source, has specific applications for quantum information protocols. In this paper, by combining decoy-state measurement-device-independent quantum key distribution (MDI-QKD) with the spontaneous parametric downconversion process, we present a modified MDI-QKD scheme with MHPS in which two architectures are proposed, corresponding to a symmetric scheme and an asymmetric scheme. The symmetric scheme, which is linked by photon switches in a log-tree structure, is adopted to overcome the limitation of the current low efficiency of m-to-1 optical switches. The asymmetric scheme, which has a chained structure, is used to cope with the scalability issue arising from the increase in the number of crystals in the symmetric scheme. The numerical simulations show that our modified scheme has clear advantages in both transmission distance and key generation rate compared to the original MDI-QKD with a weak coherent source or a traditional heralded source with post-selection. Furthermore, recent advances in integrated photonics suggest that, if built into a single chip, the MHPS might be a practical alternative source for quantum key distribution tasks requiring single photons.

  8. Arrangement, Dopant Source, And Method For Making Solar Cells

    DOEpatents

    Rohatgi, Ajeet; Krygowski, Thomas W.

    1999-10-26

    Disclosed is an arrangement, dopant source and method used in the fabrication of photocells that minimize handling of cell wafers and involve a single furnace step. First, dopant sources are created by depositing selected dopants onto both surfaces of source wafers. The concentration of dopant that is placed on the surface is relatively low so that the sources are starved sources. These sources are stacked with photocell wafers in alternating orientation in a furnace. Next, the temperature is raised and thermal diffusion takes place whereby the dopant leaves the source wafers and becomes diffused in a cell wafer creating the junctions necessary for photocells to operate. The concentration of dopant diffused into a single side of the cell wafer is proportional to the concentration placed on the respective dopant source facing the side of the cell wafer. Then, in the same thermal cycle, a layer of oxide is created by introducing oxygen into the furnace environment after sufficient diffusion has taken place. Finally, the cell wafers receive an anti-reflective coating and electrical contacts for the purpose of gathering electrical charge.

  9. Wideband tunable laser phase noise reduction using single sideband modulation in an electro-optical feed-forward scheme.

    PubMed

    Aflatouni, Firooz; Hashemi, Hossein

    2012-01-15

    A wideband laser phase noise reduction scheme is introduced where the optical field of a laser is single sideband modulated with an electrical signal containing the discriminated phase noise of the laser. The proof-of-concept experiments on a commercially available 1549 nm distributed feedback laser show linewidth reduction from 7.5 MHz to 1.8 kHz without using large optical cavity resonators. This feed-forward scheme performs wideband phase noise cancellation independent of the light source and, as such, it is compatible with the original laser source tunability without requiring tunable optical components. By placing the proposed phase noise reduction system after a commercial tunable laser, a tunable coherent light source with kilohertz linewidth over a tuning range of 1530-1570 nm is demonstrated.

  10. Invasive Cyprinid Fish in Europe Originate from the Single Introduction of an Admixed Source Population Followed by a Complex Pattern of Spread

    PubMed Central

    Simon, Andrea; Britton, Robert; Gozlan, Rodolphe; van Oosterhout, Cock; Volckaert, Filip A. M.; Hänfling, Bernd

    2011-01-01

    The Asian cyprinid fish, the topmouth gudgeon (Pseudorasbora parva), was introduced into Europe in the 1960s. A highly invasive freshwater fish, it is currently found in at least 32 countries outside its native range. Here we analyse a 700 base pair fragment of the mitochondrial cytochrome b gene to examine different models of colonisation and spread within the invasive range, and to investigate the factors that may have contributed to their invasion success. Haplotype and nucleotide diversity of the introduced populations from continental Europe were higher than those of the native populations, although two recently introduced populations from the British Isles showed low levels of variability. Based on coalescent theory, all introduced and some native populations showed a relative excess of nucleotide diversity compared to haplotype diversity. This suggests that these populations are not in mutation-drift equilibrium, but rather that the relatively inflated level of nucleotide diversity is consistent with recent admixture. This study elucidates the colonisation patterns of P. parva in Europe and provides an evolutionary framework of their invasion. It supports the hypothesis that their European colonisation was initiated by their introduction to a single location or small geographic area, with a subsequent complex pattern of spread including both long-distance and stepping-stone dispersal. Furthermore, it was preceded by, or associated with, the admixture of genetically diverse source populations that may have augmented its invasive potential. PMID:21674031

  11. Active temporal multiplexing of indistinguishable heralded single photons

    PubMed Central

    Xiong, C.; Zhang, X.; Liu, Z.; Collins, M. J.; Mahendra, A.; Helt, L. G.; Steel, M. J.; Choi, D. -Y.; Chae, C. J.; Leong, P. H. W.; Eggleton, B. J.

    2016-01-01

    It is a fundamental challenge in quantum optics to deterministically generate indistinguishable single photons through non-deterministic nonlinear optical processes, due to the intrinsic coupling of single- and multi-photon-generation probabilities in these processes. Actively multiplexing photons generated in many temporal modes can decouple these probabilities, but key issues are to minimize resource requirements to allow scalability, and to ensure indistinguishability of the generated photons. Here we demonstrate the multiplexing of photons from four temporal modes solely using fibre-integrated optics and off-the-shelf electronic components. We show a 100% enhancement to the single-photon output probability without introducing additional multi-photon noise. Photon indistinguishability is confirmed by a fourfold Hong–Ou–Mandel quantum interference with a 91±16% visibility after subtracting multi-photon noise due to high pump power. Our demonstration paves the way for scalable multiplexing of many non-deterministic photon sources to a single near-deterministic source, which will be of benefit to future quantum photonic technologies. PMID:26996317

  12. Quality and efficiency of statin prescribing across countries with a special focus on South Africa: findings and future implications.

    PubMed

    Godman, Brian; Bishop, Iain; Campbell, Stephen M; Malmström, Rickard E; Truter, Ilse

    2015-04-01

    Statins are recommended as first-line treatment for hyperlipidemia, with published studies suggesting limited differences between them. However, there are reports of under-dosing. South Africa has introduced measures to enhance generic utilization. Part one documents prescribed doses of statins in 2011. Part two determines the extent of generic versus originator and single-sourced statins in 2011 and their costs. Simvastatin was under-dosed in 2011, with an average prescribed dose of 23.7 mg; this was not the case for atorvastatin (20.91 mg) or rosuvastatin (15.02 mg). There was high utilization of generics versus originators, at 93-99% for atorvastatin and simvastatin, with limited utilization of single-sourced statins (22% of total statins on a defined daily dose basis), mirroring the Netherlands, Sweden and the UK. Generics were priced 33-51% below originator prices. There is an opportunity to increase simvastatin dosing through education, prescribing targets and incentives, and to lower generic prices further, with generic simvastatin priced 96-98% below single-sourced prices in some European countries.

  13. On the inclusion of mass source terms in a single-relaxation-time lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Aursjø, Olav; Jettestuen, Espen; Vinningland, Jan Ludvig; Hiorth, Aksel

    2018-05-01

    We present a lattice Boltzmann algorithm for incorporating a mass source in a fluid flow system. The proposed mass source/sink term, included in the lattice Boltzmann equation, maintains the Galilean invariance and the accuracy of the overall method, while introducing a mass source/sink term in the fluid dynamical equations. The method can, for instance, be used to inject or withdraw fluid from any preferred lattice node in a system. This suggests that injection and withdrawal of fluid do not have to be introduced through cumbersome, and sometimes less accurate, boundary conditions. The method also suggests that, through a chosen equation of state relating mass density to pressure, the proposed mass source term will render it possible to set a preferred pressure at any lattice node in a system. We demonstrate how this model handles injection and withdrawal of a fluid, and we show how it can be used to incorporate pressure boundaries. The accuracy of the algorithm is identified through a Chapman-Enskog expansion of the model and supported by numerical simulations.
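
    As a generic illustration only (the paper's specific source term is not reproduced here), a single-relaxation-time BGK lattice Boltzmann equation with a nodal mass source density q can be written as

    ```latex
    f_i(\mathbf{x} + \mathbf{c}_i \Delta t,\, t + \Delta t)
      = f_i(\mathbf{x}, t)
      - \frac{\Delta t}{\tau}\left[ f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t) \right]
      + \Delta t\, S_i ,
    \qquad \text{with, e.g., } S_i = w_i\, q \ \text{ so that } \sum_i S_i = q ,
    ```

    which adds mass q Δt per time step at the chosen node while leaving the collision step unchanged; retaining Galilean invariance and second-order accuracy under such a term is precisely the property the paper's construction is designed to provide.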

  14. CONNJUR spectrum translator: an open source application for reformatting NMR spectral data.

    PubMed

    Nowling, Ronald J; Vyas, Jay; Weatherby, Gerard; Fenwick, Matthew W; Ellis, Heidi J C; Gryk, Michael R

    2011-05-01

    NMR spectroscopists are hindered by the lack of standardization for spectral data among the file formats for various NMR data processing tools. This lack of standardization is cumbersome as researchers must perform their own file conversion in order to switch between processing tools, and it also restricts the combination of tools employed if no conversion option is available. The CONNJUR Spectrum Translator introduces a new, extensible architecture for spectrum translation and introduces two key algorithmic improvements. The first is the translation of NMR spectral data (time and frequency domain) to a single in-memory data model, which allows a new file format to be added with two converter modules, a reader and a writer, instead of writing a separate converter for each existing format. The second is the use of layout descriptors, which allows a single fid data translation engine to be used for all formats. For the end user, sophisticated metadata readers allow conversion of the majority of files with minimum user configuration. The open source code is freely available at http://connjur.sourceforge.net for inspection and extension.
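
    The "one in-memory model, one reader and one writer per format" architecture described above can be sketched as a hub-and-spoke converter. The names below (Spectrum, FormatA, convert) are hypothetical and only illustrate the design; they are not CONNJUR's actual API.

    ```python
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Spectrum:
        """Single in-memory data model shared by all readers and writers."""
        data: np.ndarray      # time- or frequency-domain points
        layout: dict          # layout descriptor (axis order, interleaving, ...)
        metadata: dict        # acquisition parameters

    class FormatA:
        """Placeholder reader/writer pair for one file format."""
        @staticmethod
        def read(path: str) -> Spectrum:
            raw = np.fromfile(path, dtype=np.float32)
            return Spectrum(raw, {"interleaved": True}, {"source": path})

        @staticmethod
        def write(spec: Spectrum, path: str) -> None:
            spec.data.astype(np.float32).tofile(path)

    def convert(reader, writer, src: str, dst: str) -> None:
        """Any-to-any conversion via the hub: N formats need 2N modules,
        not N*(N-1) pairwise converters."""
        writer.write(reader.read(src), dst)
    ```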

  15. A tale of two mechanisms. Strain-softening versus strain-hardening in single crystals under small stressed volumes

    DOE PAGES

    Bei, Hongbin; Xia, Yuzhi; Barabash, Rozaliya; ...

    2015-08-10

    Pre-straining defect-free single crystals will introduce heterogeneous dislocation nucleation sources that reduce the measured strength from the theoretical value, while pre-straining bulk samples will lead to strain hardening. Their competition is investigated by nanoindentation pop-in tests on variously pre-strained Mo single crystals with several indenter radii (~micrometer). Pre-straining primarily shifts deformation mechanism from homogeneous dislocation nucleation to a stochastic behavior, while strain hardening plays a secondary role, as summarized in a master plot of pop-in strength versus normalized indenter radius.

  16. The Chandra Source Catalog 2.0: Interfaces

    NASA Astrophysics Data System (ADS)

    D'Abrusco, Raffaele; Zografou, Panagoula; Tibbetts, Michael; Allen, Christopher E.; Anderson, Craig S.; Budynkiewicz, Jamie A.; Burke, Douglas; Chen, Judy C.; Civano, Francesca Maria; Doe, Stephen M.; Evans, Ian N.; Evans, Janet D.; Fabbiano, Giuseppina; Gibbs, Danny G., II; Glotfelty, Kenny J.; Graessle, Dale E.; Grier, John D.; Hain, Roger; Hall, Diane M.; Harbo, Peter N.; Houck, John C.; Lauer, Jennifer L.; Laurino, Omar; Lee, Nicholas P.; Martínez-Galarza, Rafael; McCollough, Michael L.; McDowell, Jonathan C.; Miller, Joseph; McLaughlin, Warren; Morgan, Douglas L.; Mossman, Amy E.; Nguyen, Dan T.; Nichols, Joy S.; Nowak, Michael A.; Paxson, Charles; Plummer, David A.; Primini, Francis Anthony; Rots, Arnold H.; Siemiginowska, Aneta; Sundheim, Beth A.; Van Stone, David W.

    2018-01-01

    Easy-to-use, powerful public interfaces to access the wealth of information contained in any modern, complex astronomical catalog are fundamental to encourage its usage. In this poster, I present the public interfaces of the second Chandra Source Catalog (CSC2). CSC2 is the most comprehensive catalog of X-ray sources detected by Chandra, thanks to the inclusion of Chandra observations made public through the end of 2014 and to methodological advancements. CSC2 provides measured properties for a large number of sources that sample the X-ray sky at fainter levels than previous versions of the CSC, thanks to the stacking of single overlapping observations within 1’ before source detection. Sources from stacks are then crossmatched, if multiple stacks cover the same area of the sky, to create a list of unique, optimal CSC2 sources. The properties of sources detected in each single stack and each single observation are also measured. The layered structure of the CSC2 catalog is mirrored in the organization of the CSC2 database, consisting of three tables containing all properties for the unique stacked sources (“Master Source”), single stack sources (“Stack Source”) and sources in any single observation (“Observation Source”). These tables contain estimates of the position, flags, extent, significances, fluxes, spectral properties and variability (and associated errors) for all classes of sources. The CSC2 also includes source region and full-field data products for all master sources, stack sources and observation sources: images, photon event lists, light curves and spectra. CSCview, the main interface to the CSC2 source properties and data products, is a GUI tool that allows users to build queries based on the values of all properties contained in CSC2 tables, query the catalog, inspect the returned table of source properties, and browse and download the associated data products. I will also introduce the suite of command-line interfaces to CSC2 that can be used as an alternative to CSCview, and will present the concept for an additional planned cone-search web-based interface. This work has been supported by NASA under contract NAS 8-03060 to the Smithsonian Astrophysical Observatory for operation of the Chandra X-ray Center.

  17. Development of an Imaging Fourier Transform Spectrometer

    DTIC Science & Technology

    1986-05-01

    during multiple tests or concurrently applying many identical instrument systems to a single test. These difficult, expensive, and time-consuming...processes would introduce uncertainties due to nonstationary sources and instrument instability associated with multiple firings or... multiple instruments. For even moderate spatial, spectral, and temporal resolution, none of the previously mentioned approaches is reasonable. The

  18. Characterizing multi-photon quantum interference with practical light sources and threshold single-photon detectors

    NASA Astrophysics Data System (ADS)

    Navarrete, Álvaro; Wang, Wenyuan; Xu, Feihu; Curty, Marcos

    2018-04-01

    The experimental characterization of multi-photon quantum interference effects in optical networks is essential in many applications of photonic quantum technologies, which include quantum computing and quantum communication as two prominent examples. However, such characterization often requires technologies which are beyond our current experimental capabilities, and today's methods suffer from errors due to the use of imperfect sources and photodetectors. In this paper, we introduce a simple experimental technique to characterize multi-photon quantum interference by means of practical laser sources and threshold single-photon detectors. Our technique is based on well-known methods in quantum cryptography which use decoy settings to tightly estimate the statistics provided by perfect devices. As an illustration of its practicality, we use this technique to obtain a tight estimation of both the generalized Hong‑Ou‑Mandel dip in a beamsplitter with six input photons and the three-photon coincidence probability at the output of a tritter.

  19. A statistical approach to combining multisource information in one-class classifiers

    DOE PAGES

    Simonson, Katherine M.; Derek West, R.; Hansen, Ross L.; ...

    2017-06-08

    A new method is introduced in this paper for combining information from multiple sources to support one-class classification. The contributing sources may represent measurements taken by different sensors of the same physical entity, repeated measurements by a single sensor, or numerous features computed from a single measured image or signal. The approach utilizes the theory of statistical hypothesis testing, and applies Fisher's technique for combining p-values, modified to handle nonindependent sources. Classifier outputs take the form of fused p-values, which may be used to gauge the consistency of unknown entities with one or more class hypotheses. The approach enables rigorous assessment of classification uncertainties, and allows for traceability of classifier decisions back to the constituent sources, both of which are important for high-consequence decision support. Application of the technique is illustrated in two challenge problems, one for skin segmentation and the other for terrain labeling. Finally, the method is seen to be particularly effective for relatively small training samples.
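
    The p-value fusion at the heart of the method is Fisher's technique; a minimal sketch of the independent-source case is given below (the paper's modification for nonindependent sources is only indicated in a comment, not implemented).

    ```python
    import numpy as np
    from scipy import stats

    def fisher_combine(p_values):
        """Fisher's method: under independent null hypotheses, -2 * sum(ln p_i)
        follows a chi-square distribution with 2k degrees of freedom."""
        p = np.asarray(p_values, dtype=float)
        statistic = -2.0 * np.sum(np.log(p))
        return stats.chi2.sf(statistic, df=2 * len(p))   # fused p-value

    # Dependence between sources is commonly handled by rescaling the statistic and
    # the degrees of freedom from the covariance of the log p-values (e.g. Brown's
    # method); the paper applies its own modification, not shown here.

    print(fisher_combine([0.04, 0.20, 0.11]))   # consistency with one class hypothesis
    ```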

  20. A statistical approach to combining multisource information in one-class classifiers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonson, Katherine M.; Derek West, R.; Hansen, Ross L.

    A new method is introduced in this paper for combining information from multiple sources to support one-class classification. The contributing sources may represent measurements taken by different sensors of the same physical entity, repeated measurements by a single sensor, or numerous features computed from a single measured image or signal. The approach utilizes the theory of statistical hypothesis testing, and applies Fisher's technique for combining p-values, modified to handle nonindependent sources. Classifier outputs take the form of fused p-values, which may be used to gauge the consistency of unknown entities with one or more class hypotheses. The approach enables rigorous assessment of classification uncertainties, and allows for traceability of classifier decisions back to the constituent sources, both of which are important for high-consequence decision support. Application of the technique is illustrated in two challenge problems, one for skin segmentation and the other for terrain labeling. Finally, the method is seen to be particularly effective for relatively small training samples.

  1. Polyquant CT: direct electron and mass density reconstruction from a single polyenergetic source

    NASA Astrophysics Data System (ADS)

    Mason, Jonathan H.; Perelli, Alessandro; Nailon, William H.; Davies, Mike E.

    2017-11-01

    Quantifying material mass and electron density from computed tomography (CT) reconstructions can be highly valuable in certain medical practices, such as radiation therapy planning. However, uniquely parameterising the x-ray attenuation in terms of mass or electron density is an ill-posed problem when a single polyenergetic source is used with a spectrally indiscriminate detector. Existing approaches to single source polyenergetic modelling often impose consistency with a physical model, such as water-bone or photoelectric-Compton decompositions, which will either require detailed prior segmentation or restrictive energy dependencies, and may require further calibration to the quantity of interest. In this work, we introduce a data centric approach to fitting the attenuation with piecewise-linear functions directly to mass or electron density, and present a segmentation-free statistical reconstruction algorithm for exploiting it, with the same order of complexity as other iterative methods. We show how this allows both higher accuracy in attenuation modelling, and demonstrate its superior quantitative imaging, with numerical chest and metal implant data, and validate it with real cone-beam CT measurements.
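
    Because a piecewise-linear map is linear in its knot values, such a fit can be posed as ordinary least squares. The sketch below is only a schematic of that idea on synthetic data (knot placement, units, and the spectral weighting used in the paper are not reproduced).

    ```python
    import numpy as np

    def hat_basis(rho, knots):
        """Design matrix of piecewise-linear 'hat' basis functions evaluated at rho."""
        A = np.zeros((len(rho), len(knots)))
        for j in range(len(knots)):
            unit = np.zeros(len(knots)); unit[j] = 1.0
            A[:, j] = np.interp(rho, knots, unit)
        return A

    rng = np.random.default_rng(0)
    knots = np.linspace(0.0, 2.0, 5)              # electron-density knots (relative units)
    rho = rng.uniform(0.0, 2.0, 200)              # calibration densities
    mu = 0.2 * rho + 0.05 * rho**2                # synthetic attenuation values
    coef, *_ = np.linalg.lstsq(hat_basis(rho, knots), mu, rcond=None)
    mu_at_1p3 = np.interp(1.3, knots, coef)       # evaluate the fitted piecewise-linear model
    ```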

  2. Implementation of Single Source Based Hospital Information System for the Catholic Medical Center Affiliated Hospitals

    PubMed Central

    Choi, Inyoung; Choi, Ran; Lee, Jonghyun

    2010-01-01

    Objectives The objective of this research is to introduce the unique approach of the Catholic Medical Center (CMC) to integrating network hospitals, together with the organizational and technical methodologies adopted for seamless implementation. Methods The Catholic Medical Center has developed a new hospital information system to connect network hospitals and adopted a new information technology architecture which uses a single source for multiple distributed hospital systems. Results The hospital information system of the CMC was developed to integrate network hospitals by adopting new system development principles: one source, one route and one management. This information architecture has reduced the cost of system development and operation, and has enhanced the efficiency of the management process. Conclusions Integrating network hospitals through an information system was not simple; it was much more complicated than a single-organization implementation. We are still looking for more efficient communication channels and decision-making processes, and also believe that our new system architecture will be able to improve the CMC health care system and provide much better quality of health care service to patients and customers. PMID:21818432

  3. The role of upstream distal electrodes in mitigating electrochemical degradation of ionic liquid ion sources

    NASA Astrophysics Data System (ADS)

    Brikner, Natalya; Lozano, Paulo C.

    2012-11-01

    Ionic liquid ion sources produce molecular ions from micro-tip emitters wetted with room-temperature molten salts. When a single ion polarity is extracted, counterions accumulate and generate electrochemical reactions that limit the source lifetime. The dynamics of double layer formation are reviewed and distal electrode contacts are introduced to resolve detrimental electrochemical decomposition effects at the micro-tip apex. By having the emitter follow the ionic liquid potential, operation can be achieved for an extended period of time with no apparent degradation of the material, indicating that electrochemistry can be curtailed and isolated to the upstream distal electrode.

  4. A Miniaturized Linear Wire Ion Trap with Electron Ionization and Single Photon Ionization Sources

    NASA Astrophysics Data System (ADS)

    Wu, Qinghao; Tian, Yuan; Li, Ailin; Andrews, Derek; Hawkins, Aaron R.; Austin, Daniel E.

    2017-05-01

    A linear wire ion trap (LWIT) with both electron ionization (EI) and single photon ionization (SPI) sources was built. The SPI was provided by a vacuum ultraviolet (VUV) lamp with the ability to softly ionize organic compounds. The VUV lamp was driven by a pulse amplifier, which was controlled by a pulse generator, to avoid the detection of photons during ion detection. Sample gas was introduced through a leak valve, and the pressure in the system is shown to affect the signal-to-noise ratio and resolving power. Under optimized conditions, the limit of detection (LOD) for benzene was 80 ppbv using SPI, better than the LOD using EI (137 ppbv). System performance was demonstrated by distinguishing compounds in different classes from gasoline.

  5. Temporal resolution and motion artifacts in single-source and dual-source cardiac CT.

    PubMed

    Schöndube, Harald; Allmendinger, Thomas; Stierstorfer, Karl; Bruder, Herbert; Flohr, Thomas

    2013-03-01

    The temporal resolution of a given image in cardiac computed tomography (CT) has so far mostly been determined from the amount of CT data employed for the reconstruction of that image. The purpose of this paper is to examine the applicability of such measures to the newly introduced modality of dual-source CT as well as to methods aiming to provide improved temporal resolution by means of an advanced image reconstruction algorithm. To provide a solid base for the examinations described in this paper, an extensive review of temporal resolution in conventional single-source CT is given first. Two different measures for assessing temporal resolution with respect to the amount of data involved are introduced, namely, either taking the full width at half maximum of the respective data weighting function (FWHM-TR) or the total width of the weighting function (total TR) as a base of the assessment. Image reconstruction using both a direct fan-beam filtered backprojection with Parker weighting as well as using a parallel-beam rebinning step are considered. The theory of assessing temporal resolution by means of the data involved is then extended to dual-source CT. Finally, three different advanced iterative reconstruction methods that all use the same input data are compared with respect to the resulting motion artifact level. For brevity and simplicity, the examinations are limited to two-dimensional data acquisition and reconstruction. However, all results and conclusions presented in this paper are also directly applicable to both circular and helical cone-beam CT. While the concept of total TR can directly be applied to dual-source CT, the definition of the FWHM of a weighting function needs to be slightly extended to be applicable to this modality. The three different advanced iterative reconstruction methods examined in this paper result in significantly different images with respect to their motion artifact level, despite exactly the same amount of data being used in the reconstruction process. The concept of assessing temporal resolution by means of the data employed for reconstruction can nicely be extended from single-source to dual-source CT. However, for advanced (possibly nonlinear iterative) reconstruction algorithms the examined approach fails to deliver accurate results. New methods and measures to assess the temporal resolution of CT images need to be developed to be able to accurately compare the performance of such algorithms.
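
    The two data-based measures discussed above can be made concrete with a toy weighting function; the sketch below (illustrative numbers only, not the paper's weightings) computes the FWHM-TR and the total TR of a sampled weighting w(t).

    ```python
    import numpy as np

    def fwhm_tr(t, w):
        """Full width at half maximum of a sampled data weighting function."""
        w = np.asarray(w, dtype=float)
        above = np.where(w >= 0.5 * w.max())[0]
        return t[above[-1]] - t[above[0]]

    def total_tr(t, w, eps=1e-12):
        """Total temporal resolution: full support of the weighting function."""
        nz = np.where(np.asarray(w) > eps)[0]
        return t[nz[-1]] - t[nz[0]]

    t = np.linspace(0.0, 0.25, 1001)              # seconds over a half-scan interval
    w = 1.0 - np.abs(t - 0.125) / 0.125           # triangular weighting (toy example)
    print(fwhm_tr(t, w), total_tr(t, w))          # ~0.125 s versus ~0.25 s
    ```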

  6. A Self-Adaptive Dynamic Recognition Model for Fatigue Driving Based on Multi-Source Information and Two Levels of Fusion

    PubMed Central

    Sun, Wei; Zhang, Xiaorui; Peeta, Srinivas; He, Xiaozheng; Li, Yongfu; Zhu, Senlai

    2015-01-01

    To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) to the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model can combine the fatigue state at the previous time step in the decision-level fusion to improve the robustness of the fatigue driving recognition. An improved correction strategy of the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model. PMID:26393615
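
    The decision-level step rests on combining basic probability assignments; as a generic illustration (the masses are made up, and the paper's dynamic weighting and conflict correction are not reproduced), Dempster's rule for two sources over the frame {fatigue, alert} looks like this:

    ```python
    def dempster_combine(m1, m2):
        """Dempster's rule of combination for two basic probability assignments,
        with focal elements represented as frozensets."""
        fused, conflict = {}, 0.0
        for a, w1 in m1.items():
            for b, w2 in m2.items():
                inter = a & b
                if inter:
                    fused[inter] = fused.get(inter, 0.0) + w1 * w2
                else:
                    conflict += w1 * w2
        return {k: v / (1.0 - conflict) for k, v in fused.items()}

    frame = frozenset({"fatigue", "alert"})
    m_eye  = {frozenset({"fatigue"}): 0.6, frame: 0.4}                              # e.g. an eyelid feature
    m_lane = {frozenset({"fatigue"}): 0.3, frozenset({"alert"}): 0.4, frame: 0.3}   # e.g. a lane-keeping feature
    print(dempster_combine(m_eye, m_lane))
    ```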

  7. Single-shot gas-phase thermometry using pure-rotational hybrid femtosecond/picosecond coherent anti-Stokes Raman scattering.

    PubMed

    Miller, Joseph D; Roy, Sukesh; Slipchenko, Mikhail N; Gord, James R; Meyer, Terrence R

    2011-08-01

    High-repetition-rate, single-laser-shot measurements are important for the investigation of unsteady flows where temperature and species concentrations can vary significantly. Here, we demonstrate single-shot, pure-rotational, hybrid femtosecond/picosecond coherent anti-Stokes Raman scattering (fs/ps RCARS) thermometry based on a kHz-rate fs laser source. Interferences that can affect nanosecond (ns) and ps CARS, such as nonresonant background and collisional dephasing, are eliminated by selecting an appropriate time delay between the 100-fs pump/Stokes pulses and the pulse-shaped 8.4-ps probe. A time- and frequency-domain theoretical model is introduced to account for rotational-level dependent collisional dephasing and indicates that the optimal probe-pulse time delay is 13.5 ps to 30 ps. This time delay allows for uncorrected best-fit N2-RCARS temperature measurements with ~1% accuracy. Hence, the hybrid fs/ps RCARS approach can be performed with kHz-rate laser sources while avoiding corrections that can be difficult to predict in unsteady flows.

  8. Single-shot gas-phase thermometry using pure-rotational hybrid femtosecond/picosecond coherent anti-Stokes Raman scattering

    NASA Astrophysics Data System (ADS)

    Miller, Joseph D.; Roy, Sukesh; Slipchenko, Mikhail N.; Gord, James R.; Meyer, Terrence R.

    2011-08-01

    High-repetition-rate, single-laser-shot measurements are important for the investigation of unsteady flows where temperature and species concentrations can vary significantly. Here, we demonstrate single-shot, pure-rotational, hybrid femtosecond/picosecond coherent anti-Stokes Raman scattering (fs/ps RCARS) thermometry based on a kHz-rate fs laser source. Interferences that can affect nanosecond (ns) and ps CARS, such as nonresonant background and collisional dephasing, are eliminated by selecting an appropriate time delay between the 100-fs pump/Stokes pulses and the pulse-shaped 8.4-ps probe. A time- and frequency-domain theoretical model is introduced to account for rotational-level dependent collisional dephasing and indicates that the optimal probe-pulse time delay is 13.5 ps to 30 ps. This time delay allows for uncorrected best-fit N2-RCARS temperature measurements with ~1% accuracy. Hence, the hybrid fs/ps RCARS approach can be performed with kHz-rate laser sources while avoiding corrections that can be difficult to predict in unsteady flows.

  9. Simultaneous DC and three phase output using hybrid converter

    NASA Astrophysics Data System (ADS)

    Surenderanath, S.; Rathnavel, P.; Prakash, G.; Rayavel, P.

    2018-04-01

    This paper introduces new hybrid converter topologies which can simultaneously supply three-phase AC as well as DC from a single DC source. The new hybrid converter is derived from the single-switch-controlled boost converter by replacing the controlled switch with a voltage source inverter (VSI). This new hybrid converter has advantages such as a reduced number of switches compared with a conventional design that uses separate converters for the three-phase AC and DC loads, and it provides DC and three-phase AC outputs with increased reliability, resulting from the inherent shoot-through protection in the inverter stage. The proposed converter, studied in this paper, is called a Boost-Derived Hybrid Converter (BDHC) as it is obtained from the conventional boost topology. A DSPIC based feedback controller is designed to regulate the DC as well as AC outputs. The proposed converter can supply DC and AC loads at 95 V and 35 V (line to ground), respectively, from a 48 V DC source.

  10. Spectrally resolved single-shot wavefront sensing of broadband high-harmonic sources

    NASA Astrophysics Data System (ADS)

    Freisem, L.; Jansen, G. S. M.; Rudolf, D.; Eikema, K. S. E.; Witte, S.

    2018-03-01

    Wavefront sensors are an important tool to characterize coherent beams of extreme ultraviolet radiation. However, conventional Hartmann-type sensors do not allow for independent wavefront characterization of different spectral components that may be present in a beam, which limits their applicability for intrinsically broadband high-harmonic generation (HHG) sources. Here we introduce a wavefront sensor that measures the wavefronts of all the harmonics in a HHG beam in a single camera exposure. By replacing the mask apertures with transmission gratings at different orientations, we simultaneously detect harmonic wavefronts and spectra, and obtain sensitivity to spatiotemporal structure such as pulse front tilt as well. We demonstrate the capabilities of the sensor through a parallel measurement of the wavefronts of 9 harmonics in a wavelength range between 25 and 49 nm, with up to λ/32 precision.

  11. Generation of tunable radially polarized array beams by controllable coherence

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Zhang, Jipeng; Zhu, Shijun; Li, Zhenhua

    2017-05-01

    In this paper, a new method is introduced for converting a single radial polarization beam into an arbitrary radially polarized array (RPA) beam, such as a radially or rectangularly symmetric array in the focal plane, by modulating a periodic correlation structure. The realizability conditions for such a source and the beam conditions for the radiation it generates are derived. It is illustrated that both the amplitude and the polarization are controllable by means of the initial correlation structure and the coherence parameter. Furthermore, by designing the source correlation structure, a tunable NUST-shaped RPA beam is demonstrated, which can find widespread applications in micro-nano engineering. Such a method for generating arbitrary vector array beams is useful in beam shaping and optical tweezers.

  12. Tunable room-temperature single-photon emission at telecom wavelengths from sp 3 defects in carbon nanotubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Xiaowei; Hartmann, Nicolai F.; Ma, Xuedan

    Generating quantum light emitters that operate at room temperature and at telecom wavelengths remains a significant materials challenge. To achieve this goal requires light sources that emit in the near-infrared wavelength region and that, ideally, are tunable to allow desired output wavelengths to be accessed in a controllable manner. Here, we show that exciton localization at covalently introduced aryl sp 3 defect sites in single-walled carbon nanotubes provides a route to room-temperature single-photon emission with ultrahigh single-photon purity (99%) and enhanced emission stability approaching the shot-noise limit. Moreover, we demonstrate that the inherent optical tunability of single-walled carbon nanotubes, present in their structural diversity, allows us to generate room-temperature single-photon emission spanning the entire telecom band. Furthermore, single-photon emission deep into the centre of the telecom C band (1.55 μm) is achieved at the largest nanotube diameters we explore (0.936 nm).

  13. Tunable room-temperature single-photon emission at telecom wavelengths from sp 3 defects in carbon nanotubes

    DOE PAGES

    He, Xiaowei; Hartmann, Nicolai F.; Ma, Xuedan; ...

    2017-07-31

    Generating quantum light emitters that operate at room temperature and at telecom wavelengths remains a significant materials challenge. To achieve this goal requires light sources that emit in the near-infrared wavelength region and that, ideally, are tunable to allow desired output wavelengths to be accessed in a controllable manner. Here, we show that exciton localization at covalently introduced aryl sp 3 defect sites in single-walled carbon nanotubes provides a route to room-temperature single-photon emission with ultrahigh single-photon purity (99%) and enhanced emission stability approaching the shot-noise limit. Moreover, we demonstrate that the inherent optical tunability of single-walled carbon nanotubes, present in their structural diversity, allows us to generate room-temperature single-photon emission spanning the entire telecom band. Furthermore, single-photon emission deep into the centre of the telecom C band (1.55 μm) is achieved at the largest nanotube diameters we explore (0.936 nm).

  14. A tunable single-monochromator Raman system based on the supercontinuum laser and tunable filters for resonant Raman profile measurements.

    PubMed

    Liu, X-L; Liu, H-N; Tan, P-H

    2017-08-01

    Resonant Raman spectroscopy requires that the wavelength of the laser used be close to that of an electronic transition. A tunable laser source and a triple spectrometer are usually necessary for resonant Raman profile measurements. However, such a system is complex and has low signal throughput, which limits its wide application by the scientific community. Here, a tunable micro-Raman spectroscopy system based on a supercontinuum laser, a transmission grating, tunable filters, and a single-stage spectrometer is introduced to measure the resonant Raman profile. The supercontinuum laser in combination with the transmission grating provides a tunable excitation source with a sub-nanometer bandwidth. Such a system exhibits continuous excitation tunability and high signal throughput. Its good performance and flexible tunability are verified by resonant Raman profile measurements of twisted bilayer graphene, which demonstrates its potential for resonant Raman spectroscopy.

  15. Phase-encoded measurement device independent quantum key distribution without a shared reference frame

    NASA Astrophysics Data System (ADS)

    Zhuo-Dan, Zhu; Shang-Hong, Zhao; Chen, Dong; Ying, Sun

    2018-07-01

    In this paper, a phase-encoded measurement device independent quantum key distribution (MDI-QKD) protocol without a shared reference frame is presented, which can generate secure keys between two parties while the quantum channel or interferometer introduces an unknown and slowly time-varying phase. The corresponding secret key rate and single-photon bit error rate are analysed, respectively, with a single photon source (SPS) and a weak coherent source (WCS), taking finite-key analysis into account. The numerical simulations show that the modified phase-encoded MDI-QKD protocol has a clear advantage in both maximal secure transmission distance and key generation rate, while possessing improved robustness and practical security in the high-speed case. Moreover, removing the frame-calibration step intrinsically reduces the consumption of resources as well as the potential security flaws of practical MDI-QKD systems.

  16. The development and testing of a linear induction motor being fed from the source with limited electric power

    NASA Astrophysics Data System (ADS)

    Tiunov, V. V.

    2018-02-01

    The report provides results of research related to the application of tubular linear induction motors. The motors' design features, a calculation model, and a description of test specimens for the mining and electric power industries are introduced. Most attention is given to single-phase motors for high-voltage switch drives, using inexpensive standard single-phase transformers for the motors' power supply. The method for determining the motor's parameters when the motor is fed from a transformer operating in overload mode is described, and the results of its practical usage were good enough for engineering practice.

  17. High efficiency IR supercontinuum generation and applications

    NASA Astrophysics Data System (ADS)

    Yin, Stuart (Shizhuo); Ruffin, Paul; Brantley, Christina; Edwards, Eugene; Yang, Chia-En; Luo, Claire

    2010-08-01

    In this paper, we review our recent work on IR supercontinuum generation (SCG) and its applications. First, we provide a brief review of the physical mechanism of supercontinuum generation. Second, advances in SCG in single crystal sapphire fibers are reviewed; in particular, we discuss how to fabricate thinned sapphire fiber and use it for high efficiency SCG. Finally, experimental results of chemical analysis with a supercontinuum source are reviewed.

  18. Shipping Science Worldwide with Open Source Containers

    NASA Astrophysics Data System (ADS)

    Molineaux, J. P.; McLaughlin, B. D.; Pilone, D.; Plofchan, P. G.; Murphy, K. J.

    2014-12-01

    Scientific applications often present difficult web-hosting needs. Their compute- and data-intensive nature, as well as an increasing need for high availability and distribution, combine to create a challenging set of hosting requirements. In the past year, advancements in container-based virtualization and related tooling have offered new lightweight and flexible ways to accommodate diverse applications with all the isolation and portability benefits of traditional virtualization. This session will introduce and demonstrate an open-source, single-interface Platform-as-a-Service (PaaS) that empowers application developers to seamlessly leverage geographically distributed, public and private compute resources to achieve highly available, performant hosting for scientific applications.

  19. Quantum metrology of spatial deformation using arrays of classical and quantum light emitters

    NASA Astrophysics Data System (ADS)

    Sidhu, Jasminder S.; Kok, Pieter

    2017-06-01

    We introduce spatial deformations to an array of light sources and study how the estimation precision of the interspacing distance d changes with the sources of light used. The quantum Fisher information (QFI) is used as the figure of merit in this work to quantify the amount of information we have on the estimation parameter. We derive the generator of translations Ĝ in d due to an arbitrary homogeneous deformation applied to the array. We show how the variance of the generator can be used to easily consider how different deformations and light sources affect the estimation precision. The single-parameter estimation problem is applied to the array, and we report on the optimal state that maximizes the QFI for d. Contrary to what may have been expected, classical states with higher average mode occupancies perform better in estimating d than single-photon emitters (SPEs). The optimal entangled state is constructed from the eigenvectors of the generator and found to outperform all these states. We also find the existence of multiple optimal estimators for the measurement of d. Our results find applications in evaluating stresses and strains, preventing fracture in materials that are highly sensitive to deformations, and selecting frequency-distinguished quantum sources from an array of reference sources.
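
    For a pure probe state with a unitary encoding of d, the QFI reduces to the variance of the generator, which is the relation the abstract leans on; in standard notation (assumed here, not quoted from the paper):

    ```latex
    U(d) = e^{-i d \hat{G}}, \qquad
    F_Q(d) \;=\; 4\,\mathrm{Var}\!\left(\hat{G}\right)
           \;=\; 4\left( \langle\psi|\hat{G}^{2}|\psi\rangle - \langle\psi|\hat{G}|\psi\rangle^{2} \right),
    \qquad
    \Delta d \;\ge\; \frac{1}{\sqrt{\nu\, F_Q(d)}},
    ```

    where ν is the number of independent repetitions; a larger generator variance therefore means a better attainable precision on the interspacing distance d.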

  20. InGaAsP/InP-air-aperture microcavities for single-photon sources at 1.55-μm telecommunication band

    NASA Astrophysics Data System (ADS)

    Guo, Sijie; Zheng, Yanzhen; Weng, Zhuo; Yao, Haicheng; Ju, Yuhao; Zhang, Lei; Ren, Zhilei; Gao, Ruoyao; Wang, Zhiming M.; Song, Hai-Zhi

    2016-11-01

    InGaAsP/InP-air-aperture micropillar cavities are proposed to serve as 1.55-μm single photon sources, which are indispensable in silica-fiber based quantum information processing. Owing to air-apertures introduced into the InP layers, and adiabatically tapered distributed Bragg-reflector structures used in the central cavity layers, the pillar diameters can be less than 1 μm, achieving a mode volume as small as (λ/n)^3 and quality factors of more than 10^4-10^5, sufficient to increase the quantum dot emission rate by a factor of 100 and to create strong coupling between the optical mode and the 1.55-μm InAs/InP quantum dot emitter. The mode wavelengths and quality factors are found to change weakly with the cavity size and with deviations from the ideal shape, indicating robustness against imperfections of the fabrication technique. The fabrication, simply epitaxial growth followed by dry and chemical etching, is a damage-free and monolithic process, which is advantageous over previous hybrid cavities. The above properties satisfy the requirements of efficient, photon-indistinguishable and coherent 1.55-μm quantum dot single photon sources, so the proposed InGaAsP/InP-air-aperture micropillar cavities are prospective candidates for quantum information devices at the telecommunication band.

  1. Computational Modeling of Single Neuron Extracellular Electric Potentials and Network Local Field Potentials using LFPsim.

    PubMed

    Parasuram, Harilal; Nair, Bipin; D'Angelo, Egidio; Hines, Michael; Naldi, Giovanni; Diwakar, Shyam

    2016-01-01

    Local Field Potentials (LFPs) are population signals generated by complex spatiotemporal interaction of current sources and dipoles. Mathematical computations of LFPs allow the study of circuit functions and dysfunctions via simulations. This paper introduces LFPsim, a NEURON-based tool for computing population LFP activity and single neuron extracellular potentials. LFPsim was developed to be used on existing cable compartmental neuron and network models. Point source, line source, and RC based filter approximations can be used to compute extracellular activity. As a demonstration of efficient implementation, we showcase LFPs from mathematical models of electrotonically compact cerebellum granule neurons and morphologically complex neurons of the neocortical column. LFPsim reproduced neocortical LFP at 8, 32, and 56 Hz via current injection, in vitro post-synaptic N2a, N2b waves and in vivo T-C waves in cerebellum granular layer. LFPsim also includes a simulation of multi-electrode array of LFPs in network populations to aid computational inference between biophysical activity in neural networks and corresponding multi-unit activity resulting in extracellular and evoked LFP signals.
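
    The point-source approximation mentioned above is the standard volume-conductor formula V = I/(4πσr); a minimal sketch is given below (parameter names are illustrative, not LFPsim's API).

    ```python
    import numpy as np

    def point_source_potential(I, source_pos, electrode_pos, sigma=0.3):
        """Extracellular potential (volts) of a transmembrane point current I (amperes)
        in an infinite homogeneous medium of conductivity sigma (S/m); positions in metres."""
        r = np.linalg.norm(np.asarray(electrode_pos) - np.asarray(source_pos))
        return I / (4.0 * np.pi * sigma * r)

    def lfp_at_electrode(currents, positions, electrode_pos, sigma=0.3):
        """LFP as the linear superposition of compartmental point sources."""
        return sum(point_source_potential(I, p, electrode_pos, sigma)
                   for I, p in zip(currents, positions))
    ```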

  2. Hypothesis tests for the detection of constant speed radiation moving sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir

    2015-07-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single- and multichannel detection algorithms, which are inefficient at too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on the empirically estimated mean and variance of the signals delivered by the different channels have shown significant gains in terms of the tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive background, and a vehicle source carrier under the same respectively high and low count rate radioactive background, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm, while guaranteeing the stability of its optimization parameter for signal-to-noise ratio variations between 2 and 0.8. (authors)
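
    The single-channel ingredients of the comparison can be sketched as a Poisson test against a known background rate versus its empirical Gaussian counterpart (illustrative only; the temporal-correlation layer across portal channels is not reproduced).

    ```python
    import numpy as np
    from scipy import stats

    def poisson_excess_pvalue(count, background_rate, dwell):
        """P(N >= count) under a Poisson null with mean background_rate * dwell."""
        mu = background_rate * dwell
        return stats.poisson.sf(count - 1, mu)

    def empirical_excess_pvalue(count, background_samples):
        """Gaussian test built from the empirically estimated mean and variance."""
        m = np.mean(background_samples)
        s = np.std(background_samples, ddof=1)
        return stats.norm.sf((count - m) / s)
    ```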

  3. Decision tree modeling using R.

    PubMed

    Zhang, Zhongheng

    2016-08-01

    In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Because a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests come from random sampling and the restricted set of input variables to be selected. Finally, I introduce R functions to perform model-based recursive partitioning. This method incorporates recursive partitioning into conventional parametric model building.
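
    The abstract works in R (conditional inference trees and random forests); a rough Python analogue of the single-tree-versus-forest comparison, using CART-style trees rather than conditional inference trees, is sketched below.

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)

    # A single tree: easy to interpret, but sensitive to small changes in the training data.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)

    # A random forest: diversity comes from bootstrap resampling and a restricted set
    # of candidate split variables at each node (max_features).
    forest = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)

    print(cross_val_score(tree, X, y, cv=5).mean())
    print(cross_val_score(forest, X, y, cv=5).mean())
    ```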

  4. Single photon emitters in boron nitride: More than a supplementary material

    NASA Astrophysics Data System (ADS)

    Koperski, M.; Nogajewski, K.; Potemski, M.

    2018-03-01

    We present comprehensive optical studies of recently discovered single photon sources in boron nitride, which appear in the form of narrow-line emitting centres. Here, we aim to compactly characterise their basic optical properties, including the demonstration of several novel findings, in order to inspire discussion about their origin and utility. Initial inspection reveals the presence of narrow emission lines in boron nitride powder and in exfoliated flakes of hexagonal boron nitride deposited on Si/SiO2 substrates. Generally rather stable, the boron nitride emitters constitute a good-quality visible light source. However, as briefly discussed, certain specimens reveal a peculiar type of blinking effect, which is likely related to the existence of meta-stable electronic states. More advanced characterisation of representative stable emitting centres uncovers a strong dependence of the emission intensity on the energy and polarisation of the excitation. On this basis, we speculate that rather strict excitation selectivity is an important factor determining the character of the emission spectra, which allows the observation of single, well-isolated emitters. Finally, we investigate the properties of the emitting centres under varying external conditions. Quite surprisingly, it is found that the application of a magnetic field introduces no change in the emission spectra of boron nitride emitters. Further analysis of the impact of temperature on the emission spectra and of the features seen in second-order correlation functions is used to assess the potential of boron nitride emitters as single photon sources capable of room-temperature operation.

  5. An intraocular micro light-emitting diode device for endo-illumination during pars plana vitrectomy.

    PubMed

    Koelbl, Philipp S; Lingenfelder, Christian; Spraul, Christoph W; Kampmeier, Juergen; Koch, Frank Hj; Kim, Yong Keun; Hessling, Martin

    2018-03-01

    Development of a new, fiber-free, single-use endo-illuminator for pars plana vitrectomy as a replacement for fiber-based systems with external light sources. The hand-guided, intraocularly placed white micro light-emitting diode is evaluated for its illumination properties and for potential photochemical and thermal hazards. A micro light-emitting diode was used to develop a single-use intraocular illumination system. The light-source-on-tip device was implemented in a prototype with a 23G trocar-compatible outer diameter of 0.6 mm. Experimental testing was performed on porcine eyes. All possible photochemical and thermal hazards during application of the intraocular micro light-emitting diode were calculated according to DIN EN ISO 15007-2: 2014. The endo-illuminator generated homogeneous and bright illumination of the intraocular space. The color impression was physiologic and natural. Contrary to initial apprehension, the possible risk posed by inserting a light-emitting diode into the intraocular vitreous was much smaller than that of conventional fiber-based illumination systems. The photochemical and thermal hazard limits allowed a continuous retinal exposure time of at least 4.7 h. This first intraocular light source showed that a light-emitting diode can be introduced into the eye. The system can be built as a single-use illumination system. This light-source-on-tip light-emitting diode endo-illuminator combines a chandelier wide-angle illumination with an adjustable endo-illuminator.

  6. A wireless centrifuge force microscope (CFM) enables multiplexed single-molecule experiments in a commercial centrifuge.

    PubMed

    Hoang, Tony; Patel, Dhruv S; Halvorsen, Ken

    2016-08-01

    The centrifuge force microscope (CFM) was recently introduced as a platform for massively parallel single-molecule manipulation and analysis. Here we developed a low-cost and self-contained CFM module that works directly within a commercial centrifuge, greatly improving accessibility and ease of use. Our instrument incorporates research grade video microscopy, a power source, a computer, and wireless transmission capability to simultaneously monitor many individually tethered microspheres. We validated the instrument by performing single-molecule force shearing of short DNA duplexes. For a 7 bp duplex, we observed over 1000 dissociation events due to force dependent shearing from 2 pN to 12 pN with dissociation times in the range of 10-100 s. We extended the measurement to a 10 bp duplex, applying a 12 pN force clamp and directly observing single-molecule dissociation over an 85 min experiment. Our new CFM module facilitates simple and inexpensive experiments that dramatically improve access to single-molecule analysis.

  7. Contemporary morphological diversification of passerine birds introduced to the Hawaiian archipelago.

    PubMed

    Mathys, Blake A; Lockwood, Julie L

    2011-08-07

    Species that have been introduced to islands experience novel and strong selection pressures after establishment. There is evidence that exotic species diverge from their native source populations; further, a few studies have demonstrated adaptive divergence across multiple exotic populations of a single species. Exotic birds provide a good study system, as they have been introduced to many locations worldwide, and we often know details concerning the propagule origin, time of introduction, and dynamics of establishment and dispersal within the introduced range. These data make them especially conducive to the examination of contemporary evolution. Island faunas have received intense scrutiny; therefore, we have expectations concerning the patterns of diversification for exotic species. We examine six passerine bird species that were introduced to the Hawaiian archipelago less than 150 years ago. We find that five of these show morphological divergence among islands in the time since they were established. We demonstrate that some of this divergence cannot be accounted for by genetic drift, and therefore adaptive evolution must be considered to explain it. We also evaluate evolutionary divergence rates and find that these species are diverging at rates similar to those found in published studies of contemporary evolution in native species.

  8. MM&T Program to Establish Production Techniques for the Automatic Detection and Qualification of Trace Elements Present in the Production of Microwave Semiconductors.

    DTIC Science & Technology

    1981-03-01

    lots. A single store of partially processed devices may serve as a source for several different product lines. Because the manufacture of microwave...matrix, or react chemically with some of the semiconductor materials. In some cases these element impurities may migrate to an interface inducing... different viscosity, the background intensity varied independently of the signal, a significant error could be introduced. A more effective method

  9. General wave optics propagation scaling law.

    PubMed

    Shakir, Sami A; Dolash, Thomas M; Spencer, Mark; Berdine, Richard; Cargill, Daniel S; Carreras, Richard

    2016-12-01

    A general far-field wave propagation scaling law is developed. The formulation is simple but predicts diffraction peak irradiance accurately in the far field, regardless of the near-field beam type or geometry, including laser arrays. We also introduce the concept of the equivalent uniform circular beam, which generates the same far-field peak irradiance and power-in-the-bucket as an arbitrary laser source. Applications to clipped Gaussian beams with an obscuration, both as a single beam and as an array of beams, are shown.
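
    For orientation, the textbook Fraunhofer result against which such an equivalent uniform circular beam would be defined is the following (a standard far-field expression; treating it as the uniform-beam limit of the paper's scaling law is an assumption here):

      \[
      I_{\mathrm{peak}}(z) \;=\; \frac{P\,A}{\lambda^{2} z^{2}}
      \;=\; \frac{\pi P D^{2}}{4\,\lambda^{2} z^{2}},
      \]
      where \(P\) is the transmitted power, \(A=\pi D^{2}/4\) the area of a uniform circular
      beam of diameter \(D\), \(\lambda\) the wavelength, and \(z\) the propagation distance.
      The equivalent-beam concept then amounts to choosing \(D\) (and an effective power)
      so that this peak irradiance and the corresponding power-in-the-bucket match those
      of the actual, arbitrary source.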

  10. Design of control system based on SCM music fountain

    NASA Astrophysics Data System (ADS)

    Li, Biqing; Li, Zhao; Jiang, Suping

    2018-06-01

    This paper presents the design of a music fountain control system based on a single-chip microcomputer (SCM) with a simple control circuit, introduces the components used in the design, and presents the main flow chart. The system uses an external music source: the intensity of the input audio signal determines which lights are switched on or off, and the height of the fountain spray changes with the switching of the lights. The single-chip design is simple, powerful, reliable, and low cost.

  11. Tools for open geospatial science

    NASA Astrophysics Data System (ADS)

    Petras, V.; Petrasova, A.; Mitasova, H.

    2017-12-01

    Open science uses open source to deal with reproducibility challenges in data and computational sciences. However, just using open source software or making the code public does not make the research reproducible. Moreover, the scientists face the challenge of learning new unfamiliar tools and workflows. In this contribution, we will look at a graduate-level course syllabus covering several software tools which make validation and reuse by a wider professional community possible. For the novices in the open science arena, we will look at how scripting languages such as Python and Bash help us reproduce research (starting with our own work). Jupyter Notebook will be introduced as a code editor, data exploration tool, and a lab notebook. We will see how Git helps us not to get lost in revisions and how Docker is used to wrap all the parts together using a single text file so that figures for a scientific paper or a technical report can be generated with a single command. We will look at examples of software and publications in the geospatial domain which use these tools and principles. Scientific contributions to GRASS GIS, a powerful open source desktop GIS and geoprocessing backend, will serve as an example of why and how to publish new algorithms and tools as part of a bigger open source project.

  12. Two-mode squeezed light source for quantum illumination and quantum imaging

    NASA Astrophysics Data System (ADS)

    Masada, Genta

    2015-09-01

    We have started research on quantum illumination radar and quantum imaging utilizing a high-quality continuous-wave two-mode squeezed light source as a quantum entanglement resource. Two-mode squeezed light is a macroscopic quantum entangled state of the electro-magnetic field and shows strong correlation between the quadrature phase amplitudes of each optical field. One of the most effective methods to generate two-mode squeezed light is to combine two independent single-mode squeezed lights on a beam splitter with a relative phase of 90 degrees between the optical fields. As a first stage of our work we are developing a two-mode squeezed light source for exploring the possibility of quantum illumination radar and quantum imaging. In this article we introduce the current state of our experimental investigation of single-mode squeezed light. We utilize a sub-threshold optical parametric oscillator with a bow-tie configuration which includes a periodically-poled potassium titanyl phosphate crystal as the nonlinear optical medium. We observed noise levels of the squeezed and anti-squeezed quadratures of -3.08+/-0.13 dB and +9.29+/-0.13 dB, respectively. We also demonstrated remote tuning of the squeezing level of the light source, which leads to a technology for tuning the quantum entanglement in order to adapt to the actual environmental conditions.
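
    As a worked sketch of the beam-splitter construction mentioned above (the conventions, with vacuum quadrature variance 1/2, are my own choice and not taken from the article): combining a p-squeezed mode \(\hat a_1\) and an x-squeezed mode \(\hat a_2\) with equal squeezing parameter \(r\), i.e. a 90-degree relative phase, on a balanced beam splitter gives

      \[
      \hat a = \frac{\hat a_1 + \hat a_2}{\sqrt{2}}, \qquad
      \hat b = \frac{\hat a_1 - \hat a_2}{\sqrt{2}},
      \]
      whose output quadratures show the EPR-type correlations
      \[
      \operatorname{Var}(\hat x_a - \hat x_b) \;=\; 2\operatorname{Var}(\hat x_2) \;=\; e^{-2r},
      \qquad
      \operatorname{Var}(\hat p_a + \hat p_b) \;=\; 2\operatorname{Var}(\hat p_1) \;=\; e^{-2r},
      \]
      both below the corresponding vacuum value of 1 whenever \(r > 0\); this quadrature
      correlation is the two-mode squeezing (entanglement) used as the resource.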

  13. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process.

    PubMed

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-12

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low-noise dual-gain readout circuit. The developed 3 μm pixel is capable of three conversion gains. Introducing a new split-pinned photodiode structure, the linear full well reaches 40 ke-. Readout noise under the highest pixel gain condition is 1 e- with the low-noise readout circuit. By merging two signals, one with high pixel gain and high analog gain and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach.
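
    As a back-of-the-envelope check on the headline figure (my arithmetic, not a calculation reported in the paper): with the 40 ke- linear full well captured through the low-gain path and the 1 e- read noise of the high-gain path, the single-exposure dynamic range is

      \[
      \mathrm{DR} \;=\; 20\log_{10}\!\left(\frac{N_{\mathrm{full\ well}}}{\sigma_{\mathrm{read}}}\right)
      \;=\; 20\log_{10}\!\left(\frac{40\,000\ \mathrm{e^{-}}}{1\ \mathrm{e^{-}}}\right)
      \;\approx\; 92\ \mathrm{dB},
      \]
      consistent with the reported intra-scene dynamic range of over 90 dB.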

  14. An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process †

    PubMed Central

    Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori

    2018-01-01

    To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low-noise dual-gain readout circuit. The developed 3 μm pixel is capable of three conversion gains. Introducing a new split-pinned photodiode structure, the linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with the low-noise readout circuit. By merging two signals, one with high pixel gain and high analog gain and the other with low pixel gain and low analog gain, a single-exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple-exposure high dynamic range (MEHDR) approach. PMID:29329210

  15. Evaluation of US Federal Legislation for Opioid Abuse: 1973-2016.

    PubMed

    Ruble, James H

    2016-09-01

    The 114th Congress (2014-2016) has received recent attention for the high number of legislative bills directed at the public health crisis of prescription opioid abuse. The US government does not have a single source for determining public policy; however, the people expect some level of efficiency and coordination between federal and state leaders to improve the nation's health. A search of the National Library of Congress database for legislative bills introduced between 1973 and 2016 that contain the term "opioid" identified 127 bills, which were characterized for their consistency and coordination with other governmental efforts in prescription opioid abuse. Despite the recent number of introduced bills, there does not appear to be close coordination between Congress and federal administrative agencies regarding this crisis.

  16. Simulation and source identification of X-ray contrast media in the water cycle of Berlin.

    PubMed

    Knodel, J; Geissen, S-U; Broll, J; Dünnbier, U

    2011-11-01

    This article describes the development of a model to simulate the fate of iodinated X-ray contrast media (XRC) in the water cycle of the German capital, Berlin. It also handles data uncertainties concerning the different amounts and sources of XRC input, via source densities in individual districts, for XRC usage by inhabitants, hospitals, and radiologists. In addition, different degradation rates for the behavior of adsorbable organic iodine (AOI) were investigated in individual water compartments. The introduced model consists of mass balances and includes, in addition to naturally branched bodies of water, the water distribution network between waterways and wastewater treatment plants, which are coupled to natural surface waters at numerous points. Scenarios were calculated according to the data uncertainties and statistically evaluated to identify the scenario with the highest agreement with the provided measurement data. The simulation of X-ray contrast media in the water cycle of Berlin showed that medical institutions have to be considered as point sources in congested urban areas due to their high levels of X-ray contrast media emission. The calculations identified hospitals, represented by their capacity (number of hospital beds), as the most relevant point sources, while the inhabitants served as important diffuse sources. Applied to almost inert substances like contrast media, the model can be used for qualitative statements and, therefore, as a decision-support tool. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. A stepped-plate bi-frequency source for generating a difference frequency sound with a parametric array.

    PubMed

    Je, Yub; Lee, Haksue; Park, Jongkyu; Moon, Wonkyu

    2010-06-01

    An ultrasonic radiator is developed to generate a difference frequency sound from two frequencies of ultrasound in air with a parametric array. A design method is proposed for an ultrasonic radiator capable of generating highly directive, high-amplitude ultrasonic sound beams at two different frequencies in air based on a modification of the stepped-plate ultrasonic radiator. The stepped-plate ultrasonic radiator was introduced by Gallego-Juarez et al. [Ultrasonics 16, 267-271 (1978)] in their previous study and can effectively generate highly directive, large-amplitude ultrasonic sounds in air, but only at a single frequency. Because parametric array sources must be able to generate sounds at more than one frequency, a design modification is crucial to the application of a stepped-plate ultrasonic radiator as a parametric array source in air. The aforementioned method was employed to design a parametric radiator for use in air. A prototype of this design was constructed and tested to determine whether it could successfully generate a difference frequency sound with a parametric array. The results confirmed that the proposed single small-area transducer was suitable as a parametric radiator in air.

  18. A robust hypothesis test for the sensitive detection of constant speed radiation moving sources

    NASA Astrophysics Data System (ADS)

    Dumazert, Jonathan; Coulon, Romain; Kondrasovs, Vladimir; Boudergui, Karim; Moline, Yoann; Sannié, Guillaume; Gameiro, Jordan; Normand, Stéphane; Méchin, Laurence

    2015-09-01

    Radiation Portal Monitors are deployed in linear networks to detect radiological material in motion. As a complement to single and multichannel detection algorithms, inefficient under too low signal-to-noise ratios, temporal correlation algorithms have been introduced. Hypothesis test methods based on empirically estimated means and variances of the signals delivered by the different channels have shown significant gains in terms of the tradeoff between detection sensitivity and false alarm probability. This paper discloses the concept of a new hypothesis test for temporal correlation detection methods, taking advantage of the Poisson nature of the registered counting signals, and establishes a benchmark between this test and its empirical counterpart. The simulation study validates that in the four relevant configurations of a pedestrian source carrier under respectively high and low count rate radioactive backgrounds, and a vehicle source carrier under the same respectively high and low count rate radioactive backgrounds, the newly introduced hypothesis test ensures a significantly improved compromise between sensitivity and false alarm. It also guarantees that the optimal coverage factor for this compromise remains stable regardless of signal-to-noise ratio variations between 2 and 0.8, therefore allowing the final user to parametrize the test with the sole prior knowledge of background amplitude.

  19. Process dissociation between contextual retrieval and item recognition.

    PubMed

    Weis, Susanne; Specht, Karsten; Klaver, Peter; Tendolkar, Indira; Willmes, Klaus; Ruhlmann, Jürgen; Elger, Christian E; Fernández, Guillén

    2004-12-22

    We employed a source memory task in an event-related fMRI study to dissociate MTL processes associated with either contextual retrieval or item recognition. To introduce context during study, stimuli (photographs of buildings and natural landscapes) were transformed into one of four single color scales: red, blue, yellow, or green. In the subsequent old/new recognition memory test, all stimuli were presented as gray-scale photographs, and "old" responses were followed by a four-alternative source judgment referring to the color in which the stimulus was presented during study. Our results suggest a clear-cut process dissociation within the human MTL. While an activity increase accompanies successful retrieval of contextual information, an activity decrease provides a familiarity signal that is sufficient for successful item recognition.

  20. Single-particle cryo-EM-Improved ab initio 3D reconstruction with SIMPLE/PRIME.

    PubMed

    Reboul, Cyril F; Eager, Michael; Elmlund, Dominika; Elmlund, Hans

    2018-01-01

    Cryogenic electron microscopy (cryo-EM) and single-particle analysis now enable the determination of high-resolution structures of macromolecular assemblies that have resisted X-ray crystallography and other approaches. We developed the SIMPLE open-source image-processing suite for analysing cryo-EM images of single particles. A core component of SIMPLE is the probabilistic PRIME algorithm for identifying clusters of images in 2D and determining the relative orientations of single-particle projections in 3D. Here, we extend our previous work on PRIME and introduce new stochastic optimization algorithms that improve the robustness of the approach. Our refined method for identification of homogeneous subsets of images in accurate register substantially improves the resolution of the cluster centers and of the ab initio 3D reconstructions derived from them. We now obtain maps with a resolution better than 10 Å by exclusively processing cluster centers. Excellent parallel code performance on over-the-counter laptops and CPU workstations is demonstrated. © 2017 The Protein Society.

  1. Single-electron-occupation metal-oxide-semiconductor quantum dots formed from efficient poly-silicon gate layout

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carroll, Malcolm S.; Rochette, Sophie; Rudolph, Martin

    We introduce a silicon metal-oxide-semiconductor quantum dot structure that achieves dot-reservoir tunnel coupling control without a dedicated barrier gate. The elementary structure consists of two accumulation gates separated spatially by a gap, one gate accumulating a reservoir and the other a quantum dot. Control of the tunnel rate between the dot and the reservoir across the gap is demonstrated in the single-electron regime by varying the reservoir accumulation gate voltage while compensating with the dot accumulation gate voltage. The method is then applied to a quantum dot connected in series to source and drain reservoirs, enabling transport down to the single-electron regime. Finally, tuning of the valley splitting with the dot accumulation gate voltage is observed. This split accumulation gate structure creates silicon quantum dots with characteristics similar to other realizations but with fewer electrodes, in a single gate stack subtractive fabrication process that is fully compatible with silicon foundry manufacturing.

  2. On the Definition of Surface Potentials for Finite-Difference Operators

    NASA Technical Reports Server (NTRS)

    Tsynkov, S. V.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    For a class of linear constant-coefficient finite-difference operators of the second order, we introduce concepts similar to those of conventional single- and double-layer potentials for differential operators. The discrete potentials are defined completely independently of any notion related to the approximation of the continuous potentials on the grid. We rather use an approach based on differentiating, and then inverting the differentiation of, a function with a surface discontinuity of a particular kind, which is the most general way of introducing surface potentials in the theory of distributions. The resulting finite-difference "surface" potentials prove analogous to the corresponding continuous potentials in their key properties. Primarily, this pertains to the possibility of representing a given solution to the homogeneous equation on the domain as a variety of surface potentials, with the density defined on the domain's boundary. At the same time, the discrete surface potentials can be interpreted as one specific realization of the generalized potentials of Calderon's type, and consequently their approximation properties can be studied independently in the framework of the difference potentials method by Ryaben'kii. The motivation for introducing and analyzing the discrete surface potentials was provided by the problems of active shielding and control of sound, in which the source terms that drive the potentials are interpreted as the acoustic control sources that cancel out the unwanted noise on a predetermined region of interest.

  3. A double-correlation tremor-location method

    NASA Astrophysics Data System (ADS)

    Li, Ka Lok; Sgattoni, Giulia; Sadeghisorkhani, Hamzeh; Roberts, Roland; Gudmundsson, Olafur

    2017-02-01

    A double-correlation method is introduced to locate tremor sources based on stacks of complex, doubly-correlated tremor records from multiple triplets of seismographs, back-projected to hypothetical source locations in a geographic grid. Peaks in the resulting stack of moduli are the inferred source locations. The stack of moduli is a robust measure of energy radiated from a point source or point sources even when the velocity information is imprecise. Application to real data shows how double correlation focuses the source mapping compared to the common single-correlation approach. Synthetic tests demonstrate the robustness of the method and its resolution limitations, which are controlled by the station geometry, the finite frequency of the signal, the quality of the velocity information used, and the noise level. Both random noise and signal or noise correlated at time shifts that are inconsistent with the assumed velocity structure can be effectively suppressed. Assuming a surface wave velocity, we can constrain the source location even if the surface wave component does not dominate. The method can also in principle be used with body waves in 3-D, although this requires more data and seismographs placed near the source for depth resolution.
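
    The abstract does not give the exact stacking formula, so the following is only a hedged sketch of how a double-correlation back-projection of this general kind could be organised; the helper names, the use of analytic (complex) records, and the single-lag correlation are illustrative assumptions rather than the authors' implementation.

      # Hedged sketch: back-project doubly-correlated station triplets onto a grid
      # of trial source positions and return the modulus of the stack at each node.
      import numpy as np
      from itertools import combinations

      def correlate_at_lag(a, b, lag):
          """Complex correlation of records a and b evaluated at one integer lag."""
          if lag >= 0:
              return np.vdot(a[:len(a) - lag], b[lag:])
          return np.vdot(a[-lag:], b[:len(b) + lag])

      def double_correlation_stack(records, stations, grid, velocity, dt):
          """records: station -> complex (analytic) waveform, equal length/sampling
          stations: station -> (x, y) position; grid: list of trial (x, y) sources
          velocity: assumed surface-wave speed; dt: sample interval in seconds."""
          stack = np.zeros(len(grid))
          for g, src in enumerate(grid):
              # Predicted travel times from this trial source to every station
              tt = {s: np.hypot(src[0] - xy[0], src[1] - xy[1]) / velocity
                    for s, xy in stations.items()}
              acc = 0.0 + 0.0j
              for i, j, k in combinations(stations, 3):
                  lag_ij = int(round((tt[j] - tt[i]) / dt))
                  lag_ik = int(round((tt[k] - tt[i]) / dt))
                  # Double correlation: product of the two pairwise correlations
                  acc += correlate_at_lag(records[i], records[j], lag_ij) * \
                         np.conj(correlate_at_lag(records[i], records[k], lag_ik))
              stack[g] = np.abs(acc)   # modulus of the stacked double correlations
          return stack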

  4. Chiral Majorana interference as a source of quantum entanglement

    NASA Astrophysics Data System (ADS)

    Chirolli, Luca; Baltanás, José Pablo; Frustaglia, Diego

    2018-04-01

    Two-particle Hanbury Brown-Twiss interferometry with chiral Majorana modes produces maximally entangled electron-hole pairs. We promote the electron-hole quantum number to an interferometric degree of freedom and complete the set of linear tools for single- and two-particle interferometry by introducing a key phase gate that, combined with a Mach-Zehnder, allows full electron-hole rotations. By considering entanglement witnesses built on current cross-correlation measurements, we find that the possibility of independent local-channel rotations in the electron-hole subspace leads to a significant boost of the entanglement detection power.

  5. Assessment Methods of Groundwater Overdraft Area and Its Application

    NASA Astrophysics Data System (ADS)

    Dong, Yanan; Xing, Liting; Zhang, Xinhui; Cao, Qianqian; Lan, Xiaoxun

    2018-05-01

    Groundwater is an important source of water, and long-term high demand has made groundwater over-exploited. Over-exploitation causes many environmental and geological problems. This paper explores the concept of the over-exploitation area, summarizes the natural and social attributes of over-exploitation areas, and expounds the evaluation methods, including single-factor evaluation, multi-factor system analysis, and numerical methods. At the same time, the different methods are compared and analyzed. Taking northern Weifang as an example, the paper then demonstrates the practicality of the assessment methods.

  6. The Opticians Act 1989 and UK optometry.

    PubMed

    Taylor, S P

    1991-04-01

    The build-up to the original 1958 Opticians Act is used as an introduction to the more recent developments in UK optics that have culminated in the introduction of the Opticians Act 1989. The changes introduced as a result of the Health and Social Security Act 1984 and the Health and Medicines Act 1988 are briefly described before discussing the sectional arrangement of the new Act. This new legislation pulls together much of the law relating to optometry and dispensing optics in the UK and provides a single accessible source.

  7. Recent health policy initiatives in Nordic countries

    PubMed Central

    Saltman, Richard B.

    1992-01-01

    Health care systems in Sweden, Finland, and Denmark are in the midst of substantial organizational reconfiguration. Although retaining their tax-based, single-source financing arrangements, they have begun experiments that introduce a limited measure of competitive behavior in the delivery of health services. The emphasis has been on restructuring publicly operated hospitals and health centers into various forms of public firms, rather than on the privatization of ownership of institutions. If successful, the reforms will enable these Nordic countries to combine their existing macroeconomic controls with enhanced microeconomic efficiency, effectiveness, and responsiveness to patients. PMID:10122003

  8. 2D joint inversion of CSAMT and magnetic data based on cross-gradient theory

    NASA Astrophysics Data System (ADS)

    Wang, Kun-Peng; Tan, Han-Dong; Wang, Tao

    2017-06-01

    A two-dimensional forward modeling and inversion algorithm for the controlled-source audio-frequency magnetotelluric (CSAMT) method is developed to invert data in the entire region (near, transition, and far field) and deal with the effects of artificial sources. First, a regularization factor is introduced in the 2D magnetic inversion, and the magnetic susceptibility is updated in logarithmic form so that the inverted magnetic susceptibility is always positive. Second, the joint inversion of the CSAMT and magnetic methods is completed with the introduction of the cross gradient. By searching for the weight of the cross-gradient term in the objective function, the mutual influence between two different physical properties at different locations is avoided. Model tests show that the joint inversion based on cross-gradient theory offers better results than the single-method inversions. The 2D forward and inverse algorithm for CSAMT with a source can effectively deal with artificial sources and ensures the reliability of the final joint inversion algorithm.
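
    For reference, the cross-gradient function that such joint inversions penalize is usually written as follows (this is the standard Gallardo-Meju form; the exact weighting used in this paper is not given in the abstract, so the objective function below is only a schematic assumption):

      \[
      \boldsymbol{\tau}(x,z) \;=\; \nabla m_{1}(x,z) \times \nabla m_{2}(x,z),
      \qquad
      \tau_{y} \;=\; \frac{\partial m_{1}}{\partial z}\frac{\partial m_{2}}{\partial x}
                 \;-\; \frac{\partial m_{1}}{\partial x}\frac{\partial m_{2}}{\partial z},
      \]
      where \(m_1\) is the resistivity model (CSAMT) and \(m_2\) the susceptibility model
      (magnetics); in 2-D only the out-of-plane component \(\tau_y\) survives. A joint
      objective function then takes the schematic form
      \[
      \Phi \;=\; \phi_{d}^{\mathrm{CSAMT}} + \phi_{d}^{\mathrm{mag}}
      \;+\; \alpha_{1}\,\phi_{m_1} + \alpha_{2}\,\phi_{m_2}
      \;+\; \beta \sum \tau_{y}^{2},
      \]
      with data misfits \(\phi_{d}\), regularization terms \(\phi_{m}\), and the cross-gradient
      weight \(\beta\) that the abstract describes searching for.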

  9. Ghirardi-Rimini-Weber model with massive flashes

    NASA Astrophysics Data System (ADS)

    Tilloy, Antoine

    2018-01-01

    I introduce a modification of the Ghirardi-Rimini-Weber (GRW) model in which the flashes (or space-time collapse events) source a classical gravitational field. The resulting semiclassical theory of Newtonian gravity preserves the statistical interpretation of quantum states of matter in contrast with mean field approaches. It can be seen as a discrete version of recent proposals of consistent hybrid quantum classical theories. The model is in agreement with known experimental data and introduces new falsifiable predictions: (1) single particles do not self-interact, (2) the 1 /r gravitational potential of Newtonian gravity is cut off at short (≲10-7 m ) distances, and (3) gravity makes spatial superpositions decohere at a rate inversely proportional to that coming from the vanilla GRW model. Together, the last two predictions make the model experimentally falsifiable for all values of its parameters.

  10. MPRAnator: a web-based tool for the design of massively parallel reporter assay experiments

    PubMed Central

    Georgakopoulos-Soares, Ilias; Jain, Naman; Gray, Jesse M; Hemberg, Martin

    2017-01-01

    Motivation: With the rapid advances in DNA synthesis and sequencing technologies and the continuing decline in the associated costs, high-throughput experiments can be performed to investigate the regulatory role of thousands of oligonucleotide sequences simultaneously. Nevertheless, designing high-throughput reporter assay experiments such as massively parallel reporter assays (MPRAs) and similar methods remains challenging. Results: We introduce MPRAnator, a set of tools that facilitate rapid design of MPRA experiments. With MPRA Motif design, a set of variables provides fine control of how motifs are placed into sequences, thereby allowing the investigation of the rules that govern transcription factor (TF) occupancy. MPRA single-nucleotide polymorphism design can be used to systematically examine the functional effects of single or combinations of single-nucleotide polymorphisms at regulatory sequences. Finally, the Transmutation tool allows for the design of negative controls by permitting scrambling, reversing, complementing or introducing multiple random mutations in the input sequences or motifs. Availability and implementation: MPRAnator tool set is implemented in Python, Perl and Javascript and is freely available at www.genomegeek.com and www.sanger.ac.uk/science/tools/mpranator. The source code is available on www.github.com/hemberg-lab/MPRAnator/ under the MIT license. The REST API allows programmatic access to MPRAnator using simple URLs. Contact: igs@sanger.ac.uk or mh26@sanger.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27605100

  11. MPRAnator: a web-based tool for the design of massively parallel reporter assay experiments.

    PubMed

    Georgakopoulos-Soares, Ilias; Jain, Naman; Gray, Jesse M; Hemberg, Martin

    2017-01-01

    With the rapid advances in DNA synthesis and sequencing technologies and the continuing decline in the associated costs, high-throughput experiments can be performed to investigate the regulatory role of thousands of oligonucleotide sequences simultaneously. Nevertheless, designing high-throughput reporter assay experiments such as massively parallel reporter assays (MPRAs) and similar methods remains challenging. We introduce MPRAnator, a set of tools that facilitate rapid design of MPRA experiments. With MPRA Motif design, a set of variables provides fine control of how motifs are placed into sequences, thereby allowing the investigation of the rules that govern transcription factor (TF) occupancy. MPRA single-nucleotide polymorphism design can be used to systematically examine the functional effects of single or combinations of single-nucleotide polymorphisms at regulatory sequences. Finally, the Transmutation tool allows for the design of negative controls by permitting scrambling, reversing, complementing or introducing multiple random mutations in the input sequences or motifs. The MPRAnator tool set is implemented in Python, Perl and Javascript and is freely available at www.genomegeek.com and www.sanger.ac.uk/science/tools/mpranator. The source code is available on www.github.com/hemberg-lab/MPRAnator/ under the MIT license. The REST API allows programmatic access to MPRAnator using simple URLs. Contact: igs@sanger.ac.uk or mh26@sanger.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  12. Standardized shrinking LORETA-FOCUSS (SSLOFO): a new algorithm for spatio-temporal EEG source reconstruction.

    PubMed

    Liu, Hesheng; Schimpf, Paul H; Dong, Guoya; Gao, Xiaorong; Yang, Fusheng; Gao, Shangkai

    2005-10-01

    This paper presents a new algorithm called Standardized Shrinking LORETA-FOCUSS (SSLOFO) for solving the electroencephalogram (EEG) inverse problem. Multiple techniques are combined in a single procedure to robustly reconstruct the underlying source distribution with high spatial resolution. This algorithm uses a recursive process which takes the smooth estimate of sLORETA as initialization and then employs the re-weighted minimum norm introduced by FOCUSS. An important technique called standardization is involved in the recursive process to enhance the localization ability. The algorithm is further improved by automatically adjusting the source space according to the estimate of the previous step, and by the inclusion of temporal information. Simulation studies are carried out on both spherical and realistic head models. The algorithm achieves very good localization ability on noise-free data. It is capable of recovering complex source configurations with arbitrary shapes and can produce high quality images of extended source distributions. We also characterized the performance with noisy data in a realistic head model. An important feature of this algorithm is that the temporal waveforms are clearly reconstructed, even for closely spaced sources. This provides a convenient way to estimate neural dynamics directly from the cortical sources.
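
    For context, the re-weighted minimum-norm (FOCUSS-type) recursion that the abstract refers to can be written in its generic textbook form as follows; the standardization and source-space shrinking steps that give SSLOFO its name are omitted here, so this is background rather than the full algorithm.

      \[
      \hat{\mathbf{s}}^{(k)} \;=\; \mathbf{W}_{k}\mathbf{W}_{k}^{\mathsf T}\mathbf{L}^{\mathsf T}
      \left( \mathbf{L}\mathbf{W}_{k}\mathbf{W}_{k}^{\mathsf T}\mathbf{L}^{\mathsf T}
      + \lambda \mathbf{I} \right)^{-1} \mathbf{v},
      \qquad
      \mathbf{W}_{k} \;=\; \operatorname{diag}\!\bigl(\hat{\mathbf{s}}^{(k-1)}\bigr),
      \]
      where \(\mathbf{L}\) is the lead-field matrix, \(\mathbf{v}\) the measured EEG data,
      \(\lambda\) a regularization parameter, and the recursion is initialized with
      \(\hat{\mathbf{s}}^{(0)}\) taken from the smooth sLORETA estimate, as described above.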

  13. Energy-Saving Control of a Novel Hydraulic Drive System for Field Walking Robot

    NASA Astrophysics Data System (ADS)

    Fang, Delei; Shang, Jianzhong; Xue, Yong; Yang, Junhong; Wang, Zhuo

    2018-01-01

    To improve the efficiency of the hydraulic drive system in a field walking robot, this paper proposes a novel hydraulic system based on a two-stage pressure source. Based on an analysis of the low efficiency of the robot's single-stage hydraulic system, the paper first introduces the concept and design of the two-stage pressure source drive system. Then, the energy-saving control of the new hydraulic system is planned according to the characteristics of the walking robot. The feasibility of the new hydraulic system is proved by a simulation of the walking robot squatting. Finally, the efficiencies of the two types of hydraulic system are calculated, indicating that the novel hydraulic system can increase efficiency by 41.5%, which contributes to knowledge about hydraulic drive systems for field walking robots.

  14. GPU Acceleration of Mean Free Path Based Kernel Density Estimators for Monte Carlo Neutronics Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burke, TImothy P.; Kiedrowski, Brian C.; Martin, William R.

    Kernel Density Estimators (KDEs) are a non-parametric density estimation technique that has recently been applied to Monte Carlo radiation transport simulations. Kernel density estimators are an alternative to histogram tallies for obtaining global solutions in Monte Carlo tallies. With KDEs, a single event, either a collision or particle track, can contribute to the score at multiple tally points, with the uncertainty at those points being independent of the desired resolution of the solution. Thus, KDEs show potential for obtaining estimates of a global solution with reduced variance when compared to a histogram. Previously, KDEs have been applied to neutronics for one-group reactor physics problems and fixed-source shielding applications. However, little work was done to obtain reaction rates using KDEs. This paper introduces a new form of the MFP KDE that is capable of handling general geometries. Furthermore, extending the MFP KDE to 2-D problems in continuous energy introduces inaccuracies to the solution. An ad-hoc solution to these inaccuracies is introduced that produces errors smaller than 4% at material interfaces.
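
    For readers unfamiliar with KDE tallies, a generic collision-based kernel density estimate of a tally score has the form below (a standard definition; the mean-free-path variant discussed in the paper additionally ties the bandwidth to the local mean free path, which is inferred from the title rather than stated in the abstract):

      \[
      \hat f(\mathbf r) \;=\; \frac{1}{N} \sum_{i=1}^{N_{c}} \frac{w_{i}}{h^{d}}\,
      K\!\left(\frac{\mathbf r - \mathbf r_{i}}{h}\right),
      \]
      where the sum runs over the \(N_{c}\) collision sites \(\mathbf r_{i}\) with statistical
      weights \(w_{i}\), \(N\) is the number of source histories, \(K\) is a normalized kernel
      in \(d\) spatial dimensions, and \(h\) is the bandwidth. Unlike a histogram tally, every
      collision contributes to the estimate at all nearby evaluation points \(\mathbf r\), with
      an uncertainty that does not grow as the evaluation grid is refined.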

  15. High linearity current communicating passive mixer employing a simple resistor bias

    NASA Astrophysics Data System (ADS)

    Rongjiang, Liu; Guiliang, Guo; Yuepeng, Yan

    2013-03-01

    A high linearity current communicating passive mixer including the mixing cell and transimpedance amplifier (TIA) is introduced. It employs the resistor in the TIA to reduce the source voltage and the gate voltage of the mixing cell. The optimum linearity and the maximum symmetric switching operation are obtained at the same time. The mixer is implemented in a 0.25 μm CMOS process. The test shows that it achieves an input third-order intercept point of 13.32 dBm, conversion gain of 5.52 dB, and a single sideband noise figure of 20 dB.

  16. Practical adaptive quantum tomography

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Ferrie, Christopher; Flammia, Steven T.

    2017-11-01

    We introduce a fast and accurate heuristic for adaptive tomography that addresses many of the limitations of prior methods. Previous approaches were either too computationally intensive or tailored to handle special cases such as single qubits or pure states. By contrast, our approach combines the efficiency of online optimization with generally applicable and well-motivated data-processing techniques. Complete data and source code for this work are available online (http://cgranade.com) [1] and can be previewed at https://goo.gl/koiWxR.

  17. A FORTRAN realization of the block adjustment of CCD frames

    NASA Astrophysics Data System (ADS)

    Yu, Yong; Tang, Zhenghong; Li, Jinling; Zhao, Ming

    A FORTRAN realization of the block adjustment (BA) of overlapping CCD frames is developed. The flowchart is introduced, including (a) data collection, (b) preprocessing, and (c) BA and object positioning. The subroutines and their functions are also described. The program package is tested with simulated data, with and without added white noise. It is also preliminarily applied to the reduction of optical positions of four extragalactic radio sources. The results show that, because of the increase in sky coverage and in the number of reference stars, the precision of the deduced positions is improved compared with single-plate adjustment.

  18. Silicon on insulator self-aligned transistors

    DOEpatents

    McCarthy, Anthony M.

    2003-11-18

    A method for fabricating thin-film single-crystal silicon-on-insulator (SOI) self-aligned transistors. Standard processing of silicon substrates is used to fabricate the transistors. Physical spaces, between the source and gate, and the drain and gate, introduced by etching the polysilicon gate material, are used to provide connecting implants (bridges) which allow the transistor to perform normally. After completion of the silicon substrate processing, the silicon wafer is bonded to an insulator (glass) substrate, and the silicon substrate is removed leaving the transistors on the insulator (glass) substrate. Transistors fabricated by this method may be utilized, for example, in flat panel displays, etc.

  19. Determination of Optimum Operating Parameters of Single-Electron Photomultiplier

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lukichev, A.A.; Shalyapin, A.L.; Shul'gin, B.V.

    1986-06-01

    This paper presents a procedure for determining the effective quantum yield and average gain of the dynode system of a photomultiplier. An objective performance figure, the extrapolated count response, is introduced. The results for FEU-142 photomultiplier studies are given. The radiation source used in the studies was a PRK-2 mercury-vapor lamp with a regulated power supply. The recorder was a pulse analyzer based on a UNO-4096-90, which was equipped with BPA-2-95, BPV-2-90, and UVTs-2-90 units. The amplifier used a KP303V transistor and a 140UD1B operational amplifier.

  20. Accurate proteome-wide protein quantification from high-resolution 15N mass spectra

    PubMed Central

    2011-01-01

    In quantitative mass spectrometry-based proteomics, the metabolic incorporation of a single source of 15N-labeled nitrogen has many advantages over using stable isotope-labeled amino acids. However, the lack of a robust computational framework for analyzing the resulting spectra has impeded wide use of this approach. We have addressed this challenge by introducing a new computational methodology for analyzing 15N spectra in which quantification is integrated with identification. Application of this method to an Escherichia coli growth transition reveals significant improvement in quantification accuracy over previous methods. PMID:22182234

  1. Investigation of airflow effects on the dielectric barrier discharge with single/double discharge channel arrangement

    NASA Astrophysics Data System (ADS)

    Fan, Zhihui; Yan, Huijie; Liu, Yidi; Guo, Hongfei; Wang, Yuying; Ren, Chunsheng

    2018-05-01

    Atmospheric-pressure dielectric barrier discharge (DBD) with airflow participation has been widely used in recent years. In this paper, the effects of airflow on DBD characteristics are experimentally investigated using single/double pin-to-plate DBD arrangements with an AC exciting source. The discharge electrical characteristics and the movements of discharge channels in airflow are investigated with a single pin electrode arrangement. The current intensities increase in positive cycles and decrease in negative cycles with increasing airflow velocity. The transition from a filamentary discharge to a diffuse discharge is observed under certain airflow conditions, and the discharge channels move with the airflow at a velocity less than the corresponding airflow velocity. In the case of double pin electrode arrangements, the repulsion between the two discharge channels is apparent at a 10 mm spacing but is not obvious at a 20 mm spacing. When airflow is introduced into the discharge gap, unlike in the single pin electrode arrangement, the movements of the discharge channels in airflow are affected by the adjacent discharge channels. The corresponding reasons are analyzed in the paper.

  2. Protein gradients in single cells induced by their coupling to "morphogen"-like diffusion

    NASA Astrophysics Data System (ADS)

    Nandi, Saroj Kumar; Safran, Sam A.

    2018-05-01

    One of the many ways cells transmit information within their volume is through steady spatial gradients of different proteins. However, the mechanism through which proteins without any sources or sinks form such single-cell gradients is not yet fully understood. One of the models for such gradient formation, based on differential diffusion, is limited to proteins with large ratios of their diffusion constants or to specific protein-large molecule interactions. We introduce a novel mechanism for gradient formation via the coupling of the proteins within a single cell with a molecule, that we call a "pronogen," whose action is similar to that of morphogens in multi-cell assemblies; the pronogen is produced with a fixed flux at one side of the cell. This coupling results in an effectively non-linear diffusion degradation model for the pronogen dynamics within the cell, which leads to a steady-state gradient of the protein concentration. We use stability analysis to show that these gradients are linearly stable with respect to perturbations.
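
    For intuition, the simplest linear limit of a flux-driven diffusion-degradation profile of the kind described above is the classic exponential morphogen gradient (the paper's actual model is effectively nonlinear, so this is only an illustrative baseline, not its result):

      \[
      D\,\frac{\partial^{2} p}{\partial x^{2}} \;-\; k\,p \;=\; 0,
      \qquad
      -\,D\,\partial_{x} p\big|_{x=0} \;=\; J_{0}
      \quad\Longrightarrow\quad
      p(x) \;\approx\; \frac{J_{0}\,\lambda}{D}\, e^{-x/\lambda},
      \qquad \lambda \;=\; \sqrt{D/k},
      \]
      valid when the cell is much longer than the decay length \(\lambda\); here \(D\) is the
      pronogen diffusion constant, \(k\) its effective degradation rate, and \(J_{0}\) the fixed
      production flux at one side of the cell. The coupled protein then inherits a steady
      spatial gradient from this pronogen profile.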

  3. A platform for exploding wires in different media

    NASA Astrophysics Data System (ADS)

    Han, Ruoyu; Wu, Jiawei; Qiu, Aici; Zhou, Haibin; Wang, Yanan; Yan, Jiaqi; Ding, Weidong

    2017-10-01

    A platform, SWE-2, used for single wire explosion experiments has been designed, established, and commissioned. This paper describes the design and initial experiments of SWE-2. In summary, two pulsed current sources based on pulse capacitors and spark gaps are adopted to drive sub-microsecond and microsecond time scale wire explosions in gaseous and liquid media, respectively. In the initial experiments, a single copper wire was exploded in air, helium, and argon at 0.1-0.3 MPa ambient pressure, as well as in tap water at 283-323 K temperature, 184-11,000 μS/cm conductivity, or 0.1-0.9 MPa hydrostatic pressure. In addition, the diagnostic system is introduced in detail. Energy deposition, optical emission, and shock wave characteristics are briefly discussed based on the experimental results. The platform was demonstrated to operate successfully with a single wire load. These results provide the potential for further applications of this platform, such as plasma-matter interactions, shock wave effects, and reservoir simulations.

  4. Cast aluminium single crystals cross the threshold from bulk to size-dependent stochastic plasticity

    NASA Astrophysics Data System (ADS)

    Krebs, J.; Rao, S. I.; Verheyden, S.; Miko, C.; Goodall, R.; Curtin, W. A.; Mortensen, A.

    2017-07-01

    Metals are known to exhibit mechanical behaviour at the nanoscale different to bulk samples. This transition typically initiates at the micrometre scale, yet existing techniques to produce micrometre-sized samples often introduce artefacts that can influence deformation mechanisms. Here, we demonstrate the casting of micrometre-scale aluminium single-crystal wires by infiltration of a salt mould. Samples have millimetre lengths, smooth surfaces, a range of crystallographic orientations, and a diameter D as small as 6 μm. The wires deform in bursts, at a stress that increases with decreasing D. Bursts greater than 200 nm account for roughly 50% of wire deformation and have exponentially distributed intensities. Dislocation dynamics simulations show that single-arm sources that produce large displacement bursts halted by stochastic cross-slip and lock formation explain microcast wire behaviour. This microcasting technique may be extended to several other metals or alloys and offers the possibility of exploring mechanical behaviour spanning the micrometre scale.

  5. Integrating single-cell transcriptomic data across different conditions, technologies, and species.

    PubMed

    Butler, Andrew; Hoffman, Paul; Smibert, Peter; Papalexi, Efthymia; Satija, Rahul

    2018-06-01

    Computational single-cell RNA-seq (scRNA-seq) methods have been successfully applied to experiments representing a single condition, technology, or species to discover and define cellular phenotypes. However, identifying subpopulations of cells that are present across multiple data sets remains challenging. Here, we introduce an analytical strategy for integrating scRNA-seq data sets based on common sources of variation, enabling the identification of shared populations across data sets and downstream comparative analysis. We apply this approach, implemented in our R toolkit Seurat (http://satijalab.org/seurat/), to align scRNA-seq data sets of peripheral blood mononuclear cells under resting and stimulated conditions, hematopoietic progenitors sequenced using two profiling technologies, and pancreatic cell 'atlases' generated from human and mouse islets. In each case, we learn distinct or transitional cell states jointly across data sets, while boosting statistical power through integrated analysis. Our approach facilitates general comparisons of scRNA-seq data sets, potentially deepening our understanding of how distinct cell states respond to perturbation, disease, and evolution.

  6. Exciton dynamics of C60-based single-photon emitters explored by Hanbury Brown-Twiss scanning tunnelling microscopy.

    PubMed

    Merino, P; Große, C; Rosławska, A; Kuhnke, K; Kern, K

    2015-09-29

    Exciton creation and annihilation by charges are crucial processes for technologies relying on charge-exciton-photon conversion. Improvement of organic light sources or dye-sensitized solar cells requires methods to address exciton dynamics at the molecular scale. Near-field techniques have been instrumental for this purpose; however, characterizing exciton recombination with molecular resolution remained a challenge. Here, we study exciton dynamics by using scanning tunnelling microscopy to inject current with sub-molecular precision and Hanbury Brown-Twiss interferometry to measure photon correlations in the far-field electroluminescence. Controlled injection allows us to generate excitons in solid C60 and let them interact with charges during their lifetime. We demonstrate electrically driven single-photon emission from localized structural defects and determine exciton lifetimes in the picosecond range. Monitoring lifetime shortening and luminescence saturation for increasing carrier injection rates provides access to charge-exciton annihilation dynamics. Our approach introduces a unique way to study single quasi-particle dynamics on the ultimate molecular scale.

  7. Retrieving the Height of Smoke and Dust Aerosols by Synergistic Use of Multiple Satellite Sensors

    NASA Technical Reports Server (NTRS)

    Lee, Jaehwa; Hsu, N. Christina; Bettenhausen, Corey; Sayer, Andrew M.; Seftor, Colin J.; Jeong, Myeong-Jae

    2016-01-01

    The Aerosol Single scattering albedo and Height Estimation (ASHE) algorithm was first introduced in Jeong and Hsu (2008) to provide aerosol layer height and single scattering albedo (SSA) for biomass burning smoke aerosols. By using multiple satellite sensors synergistically, ASHE can provide the height information over much broader areas than lidar observations alone. The complete ASHE algorithm uses aerosol data from MODIS or VIIRS, OMI or OMPS, and CALIOP. A simplified algorithm also exists that does not require CALIOP data as long as the SSA of the aerosol layer is provided by another source. Several updates have recently been made: inclusion of dust layers in the retrieval process, better determination of the input aerosol layer height from CALIOP, improvement in aerosol optical depth (AOD) for nonspherical dust, development of quality assurance (QA) procedure, etc.

  8. Imaging of molecular hydrogen and oxygen by single and two-photon fluorescence using laser and flashlamp sources

    NASA Technical Reports Server (NTRS)

    Diskin, Glenn S.; Lempert, Walter R.; Miles, Richard B.; Kumar, Vinod; Glesk, Ivan

    1991-01-01

    Two flow visualization techniques, i.e., simultaneous two-dimensional fluorescence imaging of H2 and O2 in a diffusion flame, and quasi-linear fluorescence imaging of O2, are presented. The first uses an injection-locked argon-fluoride excimer laser and a partial overlap of a two-photon ground-state absorption in H2 with a single-photon absorption from a vibrational level in O2. The second uses a simple, high-intensity ultraviolet flashlamp which provides a flux of photons in the 180-195 nm range, sufficient to produce a quasi-one-dimensional fluorescence image of hot/room-temperature oxygen. Neither technique requires that a seed material be introduced into the flow; both can image major flow constituents and provide an instantaneous snapshot of the flow.

  9. Combining EEG and MEG for the Reconstruction of Epileptic Activity Using a Calibrated Realistic Volume Conductor Model

    PubMed Central

    Aydin, Ümit; Vorwerk, Johannes; Küpper, Philipp; Heers, Marcel; Kugel, Harald; Galka, Andreas; Hamid, Laith; Wellmer, Jörg; Kellinghaus, Christoph; Rampp, Stefan; Wolters, Carsten Hermann

    2014-01-01

    To increase the reliability for the non-invasive determination of the irritative zone in presurgical epilepsy diagnosis, we introduce here a new experimental and methodological source analysis pipeline that combines the complementary information in EEG and MEG, and apply it to data from a patient, suffering from refractory focal epilepsy. Skull conductivity parameters in a six compartment finite element head model with brain anisotropy, constructed from individual MRI data, are estimated in a calibration procedure using somatosensory evoked potential (SEP) and field (SEF) data. These data are measured in a single run before acquisition of further runs of spontaneous epileptic activity. Our results show that even for single interictal spikes, volume conduction effects dominate over noise and need to be taken into account for accurate source analysis. While cerebrospinal fluid and brain anisotropy influence both modalities, only EEG is sensitive to skull conductivity and conductivity calibration significantly reduces the difference in especially depth localization of both modalities, emphasizing its importance for combining EEG and MEG source analysis. On the other hand, localization differences which are due to the distinct sensitivity profiles of EEG and MEG persist. In case of a moderate error in skull conductivity, combined source analysis results can still profit from the different sensitivity profiles of EEG and MEG to accurately determine location, orientation and strength of the underlying sources. On the other side, significant errors in skull modeling are reflected in EEG reconstruction errors and could reduce the goodness of fit to combined datasets. For combined EEG and MEG source analysis, we therefore recommend calibrating skull conductivity using additionally acquired SEP/SEF data. PMID:24671208

  10. Prenatally fabricated autologous human living heart valves based on amniotic fluid derived progenitor cells as single cell source.

    PubMed

    Schmidt, Dörthe; Achermann, Josef; Odermatt, Bernhard; Breymann, Christian; Mol, Anita; Genoni, Michele; Zund, Gregor; Hoerstrup, Simon P

    2007-09-11

    A novel concept is introduced for providing prenatally tissue-engineered human autologous heart valves based on routinely obtained fetal amniotic fluid progenitors as the single cell source. Fetal human amniotic progenitors were isolated from routinely sampled amniotic fluid and sorted using CD133 magnetic beads. After expansion and differentiation, the cell phenotypes of CD133- and CD133+ cells were analyzed by immunohistochemistry and flow cytometry. After characterization, CD133- derived cells were seeded onto heart valve leaflet scaffolds (n=18) fabricated from rapidly biodegradable polymers, conditioned in a pulse duplicator system, and subsequently coated with CD133+ derived cells. After in vitro maturation, the opening and closing behavior of the leaflets was investigated. Neo-tissues were analyzed by histology, immunohistochemistry, and scanning electron microscopy (SEM). Extracellular matrix (ECM) elements and cell numbers were quantified biochemically. Mechanical properties were assessed by tensile testing. CD133- derived cells demonstrated characteristics of mesenchymal progenitors, expressing CD44 and CD105. Differentiated CD133+ cells showed features of functional endothelial cells by eNOS and CD141 expression. Engineered heart valve leaflets demonstrated endothelialized tissue formation with production of ECM elements (GAG 80%, HYP 5%, cell number 100% of native values). SEM showed intact endothelial surfaces. Opening and closing behavior was sufficient under half of systemic conditions. The use of amniotic fluid as a single cell source is a promising low-risk approach enabling the prenatal fabrication of heart valves ready to use at birth. These living replacements, with the potential for growth, remodeling, and regeneration, may realize the early repair of congenital malformations.

  11. Foundations for Protecting Renewable-Rich Distribution Systems.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, Abraham; Brahma, Sukumar; Ranade, Satish

    High proliferation of Inverter Interfaced Distributed Energy Resources (IIDERs) into the electric distribution grid introduces new challenges to the protection of such systems. This is because existing protection systems are designed with two assumptions: (1) the system is single-sourced, resulting in unidirectional fault current, and (2) fault currents are easily detectable due to their much higher magnitudes compared to load currents. Because most renewables interface with the grid through inverters, and inverters restrict their current output to levels close to the full load currents, both these assumptions are no longer valid: the system becomes multi-sourced, and overcurrent-based protection does not work. The primary scope of this study is to analyze the response of a grid-tied inverter to different faults in the grid, leading to new guidelines on protecting renewable-rich distribution systems.

  12. Calcium isotope fractionation between soft and mineralized tissues as a monitor of calcium use in vertebrates.

    PubMed

    Skulan, J; DePaolo, D J

    1999-11-23

    Calcium from bone and shell is isotopically lighter than calcium of soft tissue from the same organism and isotopically lighter than source (dietary) calcium. When measured as the (44)Ca/(40)Ca isotopic ratio, the total range of variation observed is 5.5 per thousand, and as much as 4 per thousand variation is found in a single organism. The observed intraorganismal calcium isotopic variations and the isotopic differences between tissues and diet indicate that isotopic fractionation occurs mainly as a result of mineralization. Soft tissue calcium becomes heavier or lighter than source calcium during periods when there is net gain or loss of mineral mass, respectively. These results suggest that variations of natural calcium isotope ratios in tissues may be useful for assessing the calcium and mineral balance of organisms without introducing isotopic tracers.

  13. Calcium isotope fractionation between soft and mineralized tissues as a monitor of calcium use in vertebrates

    PubMed Central

    Skulan, Joseph; DePaolo, Donald J.

    1999-01-01

    Calcium from bone and shell is isotopically lighter than calcium of soft tissue from the same organism and isotopically lighter than source (dietary) calcium. When measured as the 44Ca/40Ca isotopic ratio, the total range of variation observed is 5.5‰, and as much as 4‰ variation is found in a single organism. The observed intraorganismal calcium isotopic variations and the isotopic differences between tissues and diet indicate that isotopic fractionation occurs mainly as a result of mineralization. Soft tissue calcium becomes heavier or lighter than source calcium during periods when there is net gain or loss of mineral mass, respectively. These results suggest that variations of natural calcium isotope ratios in tissues may be useful for assessing the calcium and mineral balance of organisms without introducing isotopic tracers. PMID:10570137

  14. Microseismic source locations with deconvolution migration

    NASA Astrophysics Data System (ADS)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2018-03-01

    Identifying and locating microseismic events are critical problems in hydraulic fracturing monitoring for unconventional resources exploration. In contrast to active seismic data, microseismic data are usually recorded with unknown source excitation time and source location. In this study, we introduce deconvolution migration by combining deconvolution interferometry with interferometric cross-correlation migration (CCM). This method avoids the need for the source excitation time and enhances both the spatial resolution and the robustness by eliminating the squared source-wavelet term from CCM. The proposed algorithm is divided into the following three steps: (1) generate the virtual gathers by deconvolving the master trace with all other traces in the microseismic gather to remove the unknown excitation time; (2) migrate the virtual gather to obtain a single image of the source location; and (3) stack all of these images together to obtain the final estimated image of the source location. We test the proposed method on complex synthetic and field data from surface hydraulic fracturing monitoring, and compare the results with those obtained by interferometric CCM. The results demonstrate that the proposed method obtains an image of the source location with 50 per cent higher spatial resolution, and a more robust location estimate with smaller errors, especially in the presence of velocity model errors. This method is also beneficial for source mechanism inversion and global seismology applications.
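
    As an illustration of step (1), a minimal sketch of deconvolution interferometry is given below: each trace is deconvolved with a chosen master trace by regularized spectral division, which cancels the unknown excitation time. The water-level regularization and all variable names are illustrative assumptions, not the authors' implementation.

      # Minimal sketch of step (1): build a "virtual" gather by deconvolving every
      # trace with a master trace via water-level regularized spectral division.
      import numpy as np

      def virtual_gather(traces, master_idx=0, water_level=1e-2):
          """traces: (n_receivers, n_samples) array; returns the virtual gather."""
          traces = np.asarray(traces, dtype=float)
          n_samples = traces.shape[1]
          M = np.fft.rfft(traces[master_idx])
          power = (M * np.conj(M)).real                       # |M(f)|^2
          denom = np.maximum(power, water_level * power.max())  # water-level floor
          out = []
          for tr in traces:
              T = np.fft.rfft(tr)
              out.append(np.fft.irfft(T * np.conj(M) / denom, n=n_samples))
          return np.asarray(out)                              # excitation time cancels out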

  15. Get to Understand More from Single-Cells: Current Studies of Microfluidic-Based Techniques for Single-Cell Analysis.

    PubMed

    Lo, Shih-Jie; Yao, Da-Jeng

    2015-07-23

    This review describes the microfluidic techniques developed for the analysis of a single cell. The characteristics of microfluidics (e.g., the small sample volume required and high-throughput performance) make this tool suitable for answering and solving biological questions of interest about a single cell. This review aims to introduce microfluidic-related techniques for the isolation, trapping and manipulation of a single cell. The major approaches for detection in single-cell analysis are introduced, and the applications of single-cell analysis are then summarized. The review concludes with a discussion of the future directions and opportunities of microfluidic systems applied to the analysis of a single cell.

  16. Get to Understand More from Single-Cells: Current Studies of Microfluidic-Based Techniques for Single-Cell Analysis

    PubMed Central

    Lo, Shih-Jie; Yao, Da-Jeng

    2015-01-01

    This review describes the microfluidic techniques developed for the analysis of a single cell. The characteristics of microfluidics (e.g., the small sample volume required and high-throughput performance) make this tool suitable for answering and solving biological questions of interest about a single cell. This review aims to introduce microfluidic-related techniques for the isolation, trapping and manipulation of a single cell. The major approaches for detection in single-cell analysis are introduced, and the applications of single-cell analysis are then summarized. The review concludes with a discussion of the future directions and opportunities of microfluidic systems applied to the analysis of a single cell. PMID:26213918

  17. Source characterization of urban particles from meat smoking activities in Chongqing, China using single particle aerosol mass spectrometry.

    PubMed

    Chen, Yang; Wenger, John C; Yang, Fumo; Cao, Junji; Huang, Rujin; Shi, Guangming; Zhang, Shumin; Tian, Mi; Wang, Huanbo

    2017-09-01

    A Single Particle Aerosol Mass Spectrometer (SPAMS) was deployed in the urban area of Chongqing to characterize the particles present during a severe particulate pollution event that occurred in winter 2014-2015. The measurements were made at a time when residents engaged in traditional outdoor meat smoking activities to preserve meat before the Chinese Spring Festival. The measurement period was predominantly characterized by stagnant weather conditions, highly elevated levels of PM2.5, and low visibility. Eleven major single particle types were identified, with over 92.5% of the particles attributed to biomass burning emissions. Most of the particle types showed appreciable signs of aging in the stagnant air conditions. To simulate the meat smoking activities, a series of controlled smoldering experiments was conducted using freshly cut pine and cypress branches, both with and without wood logs. SPAMS data obtained from these experiments revealed a number of biomass burning particle types, including an elemental and organic carbon (ECOC) type that proved to be the most suitable marker for meat smoking activities. The traditional activity of making preserved meat in southwestern China is shown here to be a major source of particulate pollution. Improved measures to reduce emissions from the smoking of meat should be introduced to improve air quality in regions where meat smoking prevails. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Porting plasma physics simulation codes to modern computing architectures using the libmrc framework

    NASA Astrophysics Data System (ADS)

    Germaschewski, Kai; Abbott, Stephen

    2015-11-01

    Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source libmrc framework that has been used to modularize and port three plasma physics codes: the extended MHD code MRCv3 with implicit time integration and curvilinear grids; the OpenGGCM global magnetosphere model; and the particle-in-cell code PSC. libmrc consolidates basic functionality needed for simulations based on structured grids (I/O, load balancing, time integrators), and also introduces a parallel object model that makes it possible to maintain multiple implementations of computational kernels, e.g. on conventional processors and GPUs. It handles data layout conversions and enables us to port performance-critical parts of a code to a new architecture step-by-step, while the rest of the code can remain unchanged. We will show examples of the performance gains and some physics applications.

  19. A generalized quantitative interpretation of dark-field contrast for highly concentrated microsphere suspensions

    PubMed Central

    Gkoumas, Spyridon; Villanueva-Perez, Pablo; Wang, Zhentian; Romano, Lucia; Abis, Matteo; Stampanoni, Marco

    2016-01-01

    In X-ray grating interferometry, dark-field contrast arises due to partial extinction of the detected interference fringes. This is also called visibility reduction and is attributed to small-angle scattering from unresolved structures in the imaged object. In recent years, analytical quantitative frameworks of dark-field contrast have been developed for highly diluted monodisperse microsphere suspensions with volume fractions of at most 6%. These frameworks assume that scattering particles are separated by large enough distances that any interparticle scattering interference is negligible. In this paper, we start from the small-angle scattering intensity equation and, by linking Fourier space and real space, we introduce the structure factor and thus extend the analytical and experimental quantitative interpretation of dark-field contrast to a range of suspensions with volume fractions reaching 40%. The structure factor accounts for interparticle scattering interference. Without introducing any additional fitting parameters, we successfully predict the experimental values measured at the TOMCAT beamline, Swiss Light Source. Finally, we apply this theoretical framework to an experiment probing a range of system correlation lengths by acquiring dark-field images at different energies. The proposed method has the potential to be applied in single-shot mode using a polychromatic X-ray tube setup and a single-photon-counting energy-resolving detector. PMID:27734931
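
    The decomposition underlying this argument is the standard small-angle scattering relation for a monodisperse suspension, written here in common textbook notation (not necessarily the paper's own):

      % Scattered intensity of N identical spheres with form factor F(q) and
      % structure factor S(q); in the dilute limit S(q) -> 1 and the earlier
      % single-particle dark-field frameworks are recovered.
      \[
        I(q) \;\propto\; N\,\lvert F(q)\rvert^{2}\, S(q),
        \qquad
        S(q) \to 1 \ \text{as}\ \phi \to 0 .
      \]

    At volume fractions approaching 40%, S(q) departs markedly from unity, which is exactly the interparticle interference the extended framework accounts for.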

  20. Accelerating calculations of RNA secondary structure partition functions using GPUs

    PubMed Central

    2013-01-01

    Background RNA performs many diverse functions in the cell in addition to its role as a messenger of genetic information. These functions depend on its ability to fold to a unique three-dimensional structure determined by the sequence. The conformation of RNA is in part determined by its secondary structure, or the particular set of contacts between pairs of complementary bases. Prediction of the secondary structure of RNA from its sequence is therefore of great interest, but can be computationally expensive. In this work we accelerate computations of base-pair probabilities using parallel graphics processing units (GPUs). Results Calculation of the probabilities of base pairs in RNA secondary structures using nearest-neighbor standard free energy change parameters has been implemented using CUDA to run on hardware with multiprocessor GPUs. A modified set of recursions was introduced, which reduces memory usage by about 25%. GPUs are fastest in single precision, and for some hardware, restricted to single precision. This may introduce significant roundoff error. However, deviations in base-pair probabilities calculated using single precision were found to be negligible compared to those resulting from shifting the nearest-neighbor parameters by a random amount of magnitude similar to their experimental uncertainties. For large sequences running on our particular hardware, the GPU implementation reduces execution time by a factor of close to 60 compared with an optimized serial implementation, and by a factor of 116 compared with the original code. Conclusions Using GPUs can greatly accelerate computation of RNA secondary structure partition functions, allowing calculation of base-pair probabilities for large sequences in a reasonable amount of time, with a negligible compromise in accuracy due to working in single precision. The source code is integrated into the RNAstructure software package and available for download at http://rna.urmc.rochester.edu. PMID:24180434
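
    The single- versus double-precision trade-off can be probed with a toy accumulation (illustrative only; this is not the RNAstructure or CUDA code): sum many small partition-function-like weights in float32 and compare against a float64 reference.

      # Toy check of single-precision roundoff (not the RNAstructure/CUDA code):
      # accumulate many tiny positive weights in float32 and in float64.
      import numpy as np

      rng = np.random.default_rng(0)
      terms = rng.random(10_000_000) * 1e-7            # many tiny positive weights

      ref = terms.sum(dtype=np.float64)                # double-precision reference
      single = terms.astype(np.float32).sum(dtype=np.float32)

      rel_err = abs(float(single) - ref) / ref         # quantify the roundoff
      print(f"relative roundoff error in single precision: {rel_err:.1e}")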

  1. Negation’s Not Solved: Generalizability Versus Optimizability in Clinical Natural Language Processing

    PubMed Central

    Wu, Stephen; Miller, Timothy; Masanz, James; Coarr, Matt; Halgrim, Scott; Carrell, David; Clark, Cheryl

    2014-01-01

    A review of published work in clinical natural language processing (NLP) may suggest that the negation detection task has been “solved.” This work proposes that an optimizable solution does not equal a generalizable solution. We introduce a new machine learning-based Polarity Module for detecting negation in clinical text, and extensively compare its performance across domains. Using four manually annotated corpora of clinical text, we show that negation detection performance suffers when there is no in-domain development (for manual methods) or training data (for machine learning-based methods). Various factors (e.g., annotation guidelines, named entity characteristics, the amount of data, and lexical and syntactic context) play a role in making generalizability difficult, but none completely explains the phenomenon. Furthermore, generalizability remains challenging because it is unclear whether to use a single source for accurate data, combine all sources into a single model, or apply domain adaptation methods. The most reliable means to improve negation detection is to manually annotate in-domain training data (or, perhaps, manually modify rules); this is a strategy for optimizing performance, rather than generalizing it. These results suggest a direction for future work in domain-adaptive and task-adaptive methods for clinical NLP. PMID:25393544

  2. A dual-rating method for evaluating impact noise isolation of floor-ceiling assemblies.

    PubMed

    LoVerde, John J; Dong, D Wayland

    2017-01-01

    Impact Insulation Class (IIC), the single-number rating for evaluating the impact noise insulation of a floor-ceiling assembly, and the associated field testing ratings are unsatisfactory because they neither correlate strongly with subjective reaction nor provide suitably detailed information for the evaluation or design of floor-ceiling assemblies. Various proposals have been made for improving the method, but the data presented indicate that no single-number rating can adequately characterize the impact noise isolation of an assembly. For realistic impact noise sources and floor-ceiling assembly types, there are two frequency domains for impact noise, and the impact noise levels in the two domains can vary independently. Therefore, two ratings are required in order to satisfactorily evaluate the impact isolation provided by a floor-ceiling assembly. Two different ratings are introduced for measuring field impact isolation in the two frequency domains, using the existing impact source and measurement method. They are named the low-frequency impact rating (LIR) and the high-frequency impact rating (HIR). LIR and HIR are proposed to improve the current method for the design and evaluation of floor-ceiling assemblies and also to provide a better method for predicting subjective reaction.

  3. Instrumentation for low noise nanopore-based ionic current recording under laser illumination

    NASA Astrophysics Data System (ADS)

    Roelen, Zachary; Bustamante, José A.; Carlsen, Autumn; Baker-Murray, Aidan; Tabard-Cossa, Vincent

    2018-01-01

    We describe a nanopore-based optofluidic instrument capable of performing low-noise ionic current recordings of individual biomolecules under laser illumination. In such systems, simultaneous optical measurements generally introduce significant parasitic noise in the electrical signal, which can severely reduce the instrument sensitivity, critically hindering the monitoring of single-molecule events in the ionic current traces. Here, we present design rules and describe simple adjustments to the experimental setup to mitigate the different noise sources encountered when integrating optical components to an electrical nanopore system. In particular, we address the contributions to the electrical noise spectra from illuminating the nanopore during ionic current recording and mitigate those effects through control of the illumination source and the use of a PDMS layer on the SiNx membrane. We demonstrate the effectiveness of our noise minimization strategies by showing the detection of DNA translocation events during membrane illumination with a signal-to-noise ratio of ˜10 at 10 kHz bandwidth. The instrumental guidelines for noise minimization that we report are applicable to a wide range of nanopore-based optofluidic systems and offer the possibility of enhancing the quality of synchronous optical and electrical signals obtained during single-molecule nanopore-based analysis.

  4. [Economic effects of single-pack dental hygienic materials introduced into daily clinical practice].

    PubMed

    Sunakawa, Mitsuhiro; Matsumoto, Hiroyuki; Izumi, Yuichi

    2011-03-01

    To improve and maintain medical safety and quality, it is necessary to construct and manage a safe and economical medical system. Almost five years have passed since single-pack dental hygienic materials were introduced into daily clinical practice in the University Hospital, Faculty of Dentistry, Tokyo Medical and Dental University. The purchase costs of the hygienic materials themselves are higher for outsourced, sterilized single-pack materials than for the intramurally sterilized materials used in the past. However, proper use of single-pack hygienic materials sterilized with ethylene oxide gas (EOG) would reduce the waste of unused materials and save labor for staff in the Section of Central Supplies. Financially, overall expenditure on hygienic materials could be reduced by introducing outsourced single-pack dental hygienic materials into the hospital, because all costs for sterilizing hygienic materials within the hospital could be eliminated.

  5. Source of the Kerr-Newman solution as a gravitating bag model: 50 years of the problem of the source of the Kerr solution

    NASA Astrophysics Data System (ADS)

    Burinskii, Alexander

    2016-01-01

    It is known that gravitational and electromagnetic fields of an electron are described by the ultra-extreme Kerr-Newman (KN) black hole solution with extremely high spin/mass ratio. This solution is singular and has a topological defect, the Kerr singular ring, which may be regularized by introducing the solitonic source based on the Higgs mechanism of symmetry breaking. The source represents a domain wall bubble interpolating between the flat region inside the bubble and external KN solution. It was shown recently that the source represents a supersymmetric bag model, and its structure is unambiguously determined by Bogomolnyi equations. The Dirac equation is embedded inside the bag consistently with twistor structure of the Kerr geometry, and acquires the mass from the Yukawa coupling with Higgs field. The KN bag turns out to be flexible, and for parameters of an electron, it takes the form of very thin disk with a circular string placed along sharp boundary of the disk. Excitation of this string by a traveling wave creates a circulating singular pole, indicating that the bag-like source of KN solution unifies the dressed and point-like electron in a single bag-string-quark system.

  6. Forced sound transmission through a finite-sized single leaf panel subject to a point source excitation.

    PubMed

    Wang, Chong

    2018-03-01

    In the case of a point source in front of a panel, the wavefront of the incident wave is spherical. This paper discusses spherical sound waves transmitting through a finite-sized panel. The forced sound transmission performance that predominates in the frequency range below the coincidence frequency is the focus. Given the point source located along the centerline of the panel, the forced sound transmission coefficient is derived by introducing the sound radiation impedance for spherical incident waves. It is found that in addition to the panel mass, forced sound transmission loss also depends on the distance from the source to the panel as determined by the radiation impedance. Unlike the case of plane incident waves, the sound transmission performance of a finite-sized panel does not necessarily converge to that of an infinite panel, especially when the source is away from the panel. For practical applications, the normal incidence sound transmission loss expression for plane incident waves can be used if the distance between the source and panel d and the panel surface area S satisfy d/S > 0.5. When d/S ≈ 0.1, the diffuse field sound transmission loss expression may be a good approximation. An empirical expression for d/S = 0 is also given.
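
    A small helper illustrating the stated rule of thumb is sketched below. The mass-law expressions used here are common textbook approximations for forced transmission below coincidence (not formulas from this paper), and the numeric constants are approximate.

      # Sketch of the d/S rule of thumb for choosing a forced-transmission-loss
      # expression. The mass-law forms and constants below are textbook
      # approximations, not taken from this paper.
      import math

      def mass_law_tl(f_hz, surface_density_kg_m2, incidence="normal"):
          tl0 = 20.0 * math.log10(f_hz * surface_density_kg_m2) - 42.0  # dB, approximate
          return tl0 if incidence == "normal" else tl0 - 5.0            # crude field-incidence correction

      def forced_tl(f_hz, surface_density_kg_m2, d, S):
          ratio = d / S
          if ratio > 0.5:      # source far from panel: normal-incidence expression
              return mass_law_tl(f_hz, surface_density_kg_m2, "normal")
          if ratio <= 0.1:     # around d/S ~ 0.1: diffuse-field expression is a fair proxy
              return mass_law_tl(f_hz, surface_density_kg_m2, "field")
          return None          # intermediate distances: no simple expression given

      print(forced_tl(500.0, 10.0, d=2.0, S=2.0))   # e.g., a 10 kg/m^2 panel at 500 Hz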

  7. Engineering biosynthetic excitable tissues from unexcitable cells for electrophysiological and cell therapy studies.

    PubMed

    Kirkton, Robert D; Bursac, Nenad

    2011-01-01

    Patch-clamp recordings in single-cell expression systems have been traditionally used to study the function of ion channels. However, this experimental setting does not enable assessment of tissue-level function such as action potential (AP) conduction. Here we introduce a biosynthetic system that permits studies of both channel activity in single cells and electrical conduction in multicellular networks. We convert unexcitable somatic cells into an autonomous source of electrically excitable and conducting cells by stably expressing only three membrane channels. The specific roles that these expressed channels have on AP shape and conduction are revealed by different pharmacological and pacing protocols. Furthermore, we demonstrate that biosynthetic excitable cells and tissues can repair large conduction defects within primary 2- and 3-dimensional cardiac cell cultures. This approach enables novel studies of ion channel function in a reproducible tissue-level setting and may stimulate the development of new cell-based therapies for excitable tissue repair.

  8. MetaBAT, an efficient tool for accurately reconstructing single genomes from complex microbial communities

    DOE PAGES

    Kang, Dongwan D.; Froula, Jeff; Egan, Rob; ...

    2015-01-01

    Grouping large genomic fragments assembled from shotgun metagenomic sequences to deconvolute complex microbial communities, or metagenome binning, enables the study of individual organisms and their interactions. Because of the complex nature of these communities, existing metagenome binning methods often miss a large number of microbial species. In addition, most of the tools are not scalable to large datasets. Here we introduce automated software called MetaBAT that integrates empirical probabilistic distances of genome abundance and tetranucleotide frequency for accurate metagenome binning. MetaBAT outperforms alternative methods in accuracy and computational efficiency on both synthetic and real metagenome datasets. Lastly, it automatically forms hundreds of high-quality genome bins on a very large assembly consisting of millions of contigs in a matter of hours on a single node. MetaBAT is open source software and available at https://bitbucket.org/berkeleylab/metabat.

  9. Taxonomies of networks from community structure

    PubMed Central

    Reid, Stephen; Porter, Mason A.; Mucha, Peter J.; Fricker, Mark D.; Jones, Nick S.

    2014-01-01

    The study of networks has become a substantial interdisciplinary endeavor that encompasses myriad disciplines in the natural, social, and information sciences. Here we introduce a framework for constructing taxonomies of networks based on their structural similarities. These networks can arise from any of numerous sources: they can be empirical or synthetic, they can arise from multiple realizations of a single process (either empirical or synthetic), they can represent entirely different systems in different disciplines, etc. Because mesoscopic properties of networks are hypothesized to be important for network function, we base our comparisons on summaries of network community structures. Although we use a specific method for uncovering network communities, much of the introduced framework is independent of that choice. After introducing the framework, we apply it to construct a taxonomy for 746 networks and demonstrate that our approach usefully identifies similar networks. We also construct taxonomies within individual categories of networks, and we thereby expose nontrivial structure. For example, we create taxonomies for similarity networks constructed from both political voting data and financial data. We also construct network taxonomies to compare the social structures of 100 Facebook networks and the growth structures produced by different types of fungi. PMID:23030977

  10. Taxonomies of networks from community structure

    NASA Astrophysics Data System (ADS)

    Onnela, Jukka-Pekka; Fenn, Daniel J.; Reid, Stephen; Porter, Mason A.; Mucha, Peter J.; Fricker, Mark D.; Jones, Nick S.

    2012-09-01

    The study of networks has become a substantial interdisciplinary endeavor that encompasses myriad disciplines in the natural, social, and information sciences. Here we introduce a framework for constructing taxonomies of networks based on their structural similarities. These networks can arise from any of numerous sources: They can be empirical or synthetic, they can arise from multiple realizations of a single process (either empirical or synthetic), they can represent entirely different systems in different disciplines, etc. Because mesoscopic properties of networks are hypothesized to be important for network function, we base our comparisons on summaries of network community structures. Although we use a specific method for uncovering network communities, much of the introduced framework is independent of that choice. After introducing the framework, we apply it to construct a taxonomy for 746 networks and demonstrate that our approach usefully identifies similar networks. We also construct taxonomies within individual categories of networks, and we thereby expose nontrivial structure. For example, we create taxonomies for similarity networks constructed from both political voting data and financial data. We also construct network taxonomies to compare the social structures of 100 Facebook networks and the growth structures produced by different types of fungi.

  11. Rapid scoring of genes in microbial pan-genome-wide association studies with Scoary.

    PubMed

    Brynildsrud, Ola; Bohlin, Jon; Scheffer, Lonneke; Eldholm, Vegard

    2016-11-25

    Genome-wide association studies (GWAS) have become indispensable in human medicine and genomics, but very few have been carried out on bacteria. Here we introduce Scoary, an ultra-fast, easy-to-use, and widely applicable software tool that scores the components of the pan-genome for associations to observed phenotypic traits while accounting for population stratification, with minimal assumptions about evolutionary processes. We call our approach pan-GWAS to distinguish it from traditional, single nucleotide polymorphism (SNP)-based GWAS. Scoary is implemented in Python and is available under an open source GPLv3 license at https://github.com/AdmiralenOla/Scoary .

  12. A multi-staining chip using hydrophobic valves for exfoliative cytology in cancer

    NASA Astrophysics Data System (ADS)

    Lee, Tae Hee; Bu, Jiyoon; Moon, Jung Eun; Kim, Young Jun; Kang, Yoon-Tae; Cho, Young-Ho; Kim, In Sik

    2017-07-01

    Exfoliative cytology is a highly established technique for the diagnosis of tumors. Various microfluidic devices have been developed to minimize the sample numbers by conjugating multiple antibodies in a single sample. However, the previous multi-staining devices require complex control lines and valves operated by external power sources, to deliver multiple antibodies separately for a single sample. In addition, most of these devices are composed of hydrophobic materials, causing unreliable results due to the non-specific binding of antibodies. Here, we present a multi-staining chip using hydrophobic valves, which is formed by the partial treatment of 2-hydroxyethyl methacrylate (HEMA). Our chip consists of a circular chamber, divided into six equal fan-shaped regions. Switchable injection ports are located at the center of the chamber and at the middle of the arc of each fan-shaped zone. Thus, our device is beneficial for minimizing the control lines, since pre-treatment solutions flow from the center to outer ports, while six different antibodies are introduced oppositely from the outer ports. Furthermore, hydrophobic narrow channels, connecting the central region and each of the six fan-shaped zones, are closed by capillary effect, thus preventing the fluidic mixing without external power sources. Meanwhile, HEMA treatment on the exterior region results in hydrophobic-to-hydrophilic transition and prevents the non-specific binding of antibodies. For the application, we measured the expression of six different antibodies in a single sample using our device. The expression levels of each antibody highly matched the conventional immunocytochemistry results. Our device enables cancer screening with a small number of antibodies for a single sample.

  13. Systematic analysis of the contributions of stochastic voltage gated channels to neuronal noise

    PubMed Central

    O'Donnell, Cian; van Rossum, Mark C. W.

    2014-01-01

    Electrical signaling in neurons is mediated by the opening and closing of large numbers of individual ion channels. The ion channels' state transitions are stochastic and introduce fluctuations in the macroscopic current through ion channel populations. This creates an unavoidable source of intrinsic electrical noise for the neuron, leading to fluctuations in the membrane potential and spontaneous spikes. While this effect is well known, the impact of channel noise on single neuron dynamics remains poorly understood. Most results are based on numerical simulations. There is no agreement, even in theoretical studies, on which ion channel type is the dominant noise source, nor how inclusion of additional ion channel types affects voltage noise. Here we describe a framework to calculate voltage noise directly from an arbitrary set of ion channel models, and discuss how this can be used to estimate spontaneous spike rates. PMID:25360105
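
    The simplest instance of such a calculation (a standard textbook estimate, not the authors' full framework) treats N independent two-state channels with open probability p, so the open-channel count is binomial and the current fluctuations follow directly:

      # Standard estimate of steady-state current noise from N independent
      # two-state channels with open probability p: the open-channel count is
      # binomial, so its variance is N * p * (1 - p).
      def channel_current_noise(n_channels, p_open, unitary_current_pA):
          var_open = n_channels * p_open * (1.0 - p_open)        # variance of open-channel count
          mean_current = n_channels * p_open * unitary_current_pA
          std_current = unitary_current_pA * var_open ** 0.5     # current fluctuation SD, pA
          return mean_current, std_current

      # Example: 10,000 channels, p_open = 0.1, 1 pA unitary current.
      mean_i, sd_i = channel_current_noise(10_000, 0.1, 1.0)
      print(f"mean current {mean_i:.0f} pA, fluctuation SD {sd_i:.1f} pA")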

  14. CellTracker (not only) for dummies.

    PubMed

    Piccinini, Filippo; Kiss, Alexa; Horvath, Peter

    2016-03-15

    Time-lapse experiments play a key role in studying the dynamic behavior of cells. Single-cell tracking is one of the fundamental tools for such analyses. The vast majority of the recently introduced cell tracking methods are limited to fluorescently labeled cells. An equally important limitation is that most software cannot be effectively used by biologists without reasonable expertise in image processing. Here we present CellTracker, a user-friendly open-source software tool for tracking cells imaged with various imaging modalities, including fluorescent, phase contrast and differential interference contrast (DIC) techniques. CellTracker is written in MATLAB (The MathWorks, Inc., USA). It works with Windows, Macintosh and UNIX-based systems. Source code and the graphical user interface (GUI) are freely available at http://celltracker.website/. Contact: horvath.peter@brc.mta.hu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  15. In-house zinc SAD phasing at Cu Kα edge.

    PubMed

    Kim, Min-Kyu; Lee, Sangmin; An, Young Jun; Jeong, Chang-Sook; Ji, Chang-Jun; Lee, Jin-Won; Cha, Sun-Shin

    2013-07-01

    De novo zinc single-wavelength anomalous dispersion (Zn-SAD) phasing has been demonstrated with 1.9 Å resolution data of glucose isomerase and 2.6 Å resolution data of Staphylococcus aureus Fur (SaFur) collected using an in-house Cu Kα X-ray source. The successful in-house Zn-SAD phasing of glucose isomerase, based on the anomalous signals of both zinc ions introduced into crystals by soaking and native sulfur atoms, drove us to determine the structure of SaFur, a zinc-containing transcription factor, by Zn-SAD phasing using the in-house X-ray source. The abundance of zinc-containing proteins in nature, the easy zinc derivatization of the protein surface, no need for synchrotron access, and the successful experimental phasing with the modest 2.6 Å resolution SAD data indicate that in-house Zn-SAD phasing can be widely applicable to structure determination.

  16. PLUMED 2: New feathers for an old bird

    NASA Astrophysics Data System (ADS)

    Tribello, Gareth A.; Bonomi, Massimiliano; Branduardi, Davide; Camilloni, Carlo; Bussi, Giovanni

    2014-02-01

    Enhancing sampling and analyzing simulations are central issues in molecular simulation. Recently, we introduced PLUMED, an open-source plug-in that provides some of the most popular molecular dynamics (MD) codes with implementations of a variety of different enhanced sampling algorithms and collective variables (CVs). The rapid changes in this field, in particular new directions in enhanced sampling and dimensionality reduction together with new hardware, require a code that is more flexible and more efficient. We therefore present PLUMED 2 here—a complete rewrite of the code in an object-oriented programming language (C++). This new version introduces greater flexibility and greater modularity, which both extends its core capabilities and makes it far easier to add new methods and CVs. It also has a simpler interface with the MD engines and provides a single software library containing both tools and core facilities. Ultimately, the new code better serves the ever-growing community of users and contributors in coping with the new challenges arising in the field.

  17. Volumetric Two-photon Imaging of Neurons Using Stereoscopy (vTwINS)

    PubMed Central

    Song, Alexander; Charles, Adam S.; Koay, Sue Ann; Gauthier, Jeff L.; Thiberge, Stephan Y.; Pillow, Jonathan W.; Tank, David W.

    2017-01-01

    Two-photon laser scanning microscopy of calcium dynamics using fluorescent indicators is a widely used imaging method for large scale recording of neural activity in vivo. Here we introduce volumetric Two-photon Imaging of Neurons using Stereoscopy (vTwINS), a volumetric calcium imaging method that employs an elongated, V-shaped point spread function to image a 3D brain volume. Single neurons project to spatially displaced “image pairs” in the resulting 2D image, and the separation distance between images is proportional to depth in the volume. To demix the fluorescence time series of individual neurons, we introduce a novel orthogonal matching pursuit algorithm that also infers source locations within the 3D volume. We illustrate vTwINS by imaging neural population activity in mouse primary visual cortex and hippocampus. Our results demonstrate that vTwINS provides an effective method for volumetric two-photon calcium imaging that increases the number of neurons recorded while maintaining a high frame-rate. PMID:28319111
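
    For reference, a generic orthogonal matching pursuit routine is sketched below (standard OMP over an arbitrary dictionary; the authors' variant, which additionally infers 3D source locations, is not reproduced here, and all names are illustrative).

      # Generic orthogonal matching pursuit: greedily select the dictionary atom
      # most correlated with the residual, then refit all selected coefficients
      # by least squares. Not the authors' extended vTwINS demixing algorithm.
      import numpy as np

      def omp(dictionary, signal, n_nonzero):
          """dictionary: (n_samples, n_atoms) with unit-norm columns; signal: (n_samples,)."""
          residual = signal.copy()
          selected = []
          coeffs = np.zeros(dictionary.shape[1])
          for _ in range(n_nonzero):
              k = int(np.argmax(np.abs(dictionary.T @ residual)))   # best-matching atom
              if k not in selected:
                  selected.append(k)
              sub = dictionary[:, selected]
              sol, *_ = np.linalg.lstsq(sub, signal, rcond=None)    # refit on selected support
              residual = signal - sub @ sol
          coeffs[selected] = sol
          return coeffs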

  18. Advanced complex trait analysis.

    PubMed

    Gray, A; Stewart, I; Tenesa, A

    2012-12-01

    The Genome-wide Complex Trait Analysis (GCTA) software package can quantify the contribution of genetic variation to phenotypic variation for complex traits. However, as the datasets of interest continue to increase in size, GCTA becomes increasingly computationally prohibitive. We present an adapted version, Advanced Complex Trait Analysis (ACTA), demonstrating dramatically improved performance. We restructure the genetic relationship matrix (GRM) estimation phase of the code and introduce the highly optimized parallel Basic Linear Algebra Subprograms (BLAS) library combined with manual parallelization and optimization. We introduce the Linear Algebra PACKage (LAPACK) library into the restricted maximum likelihood (REML) analysis stage. For a test case with 8999 individuals and 279,435 single nucleotide polymorphisms (SNPs), we reduce the total runtime, using a compute node with two multi-core Intel Nehalem CPUs, from ∼17 h to ∼11 min. The source code is fully available under the GNU Public License, along with Linux binaries. For more information see http://www.epcc.ed.ac.uk/software-products/acta. Contact: a.gray@ed.ac.uk. Supplementary data are available at Bioinformatics online.
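
    In its standard form, the GRM estimation step referred to above amounts to standardizing genotypes per SNP and averaging outer products across SNPs; a NumPy sketch is given below (illustrative only, not the ACTA/GCTA source, which performs this through tuned BLAS calls).

      # Illustrative GRM computation: standardize 0/1/2 genotypes per SNP and
      # average the outer products across SNPs (assumes no monomorphic SNPs).
      import numpy as np

      def genetic_relationship_matrix(genotypes):
          """genotypes: (n_individuals, n_snps) array of 0/1/2 allele counts."""
          p = genotypes.mean(axis=0) / 2.0                       # per-SNP allele frequencies
          z = (genotypes - 2.0 * p) / np.sqrt(2.0 * p * (1.0 - p))
          return z @ z.T / genotypes.shape[1]                    # the BLAS-heavy step

      geno = np.random.default_rng(1).binomial(2, 0.3, size=(100, 5000)).astype(float)
      grm = genetic_relationship_matrix(geno)
      print(grm.shape, grm.diagonal().mean())                    # diagonal is ~1 on average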

  19. High-performance semiconductor quantum-dot single-photon sources

    NASA Astrophysics Data System (ADS)

    Senellart, Pascale; Solomon, Glenn; White, Andrew

    2017-11-01

    Single photons are a fundamental element of most quantum optical technologies. The ideal single-photon source is an on-demand, deterministic, single-photon source delivering light pulses in a well-defined polarization and spatiotemporal mode, and containing exactly one photon. In addition, for many applications, there is a quantum advantage if the single photons are indistinguishable in all their degrees of freedom. Single-photon sources based on parametric down-conversion are currently used, and while excellent in many ways, scaling to large quantum optical systems remains challenging. In 2000, semiconductor quantum dots were shown to emit single photons, opening a path towards integrated single-photon sources. Here, we review the progress achieved in the past few years, and discuss remaining challenges. The latest quantum dot-based single-photon sources are edging closer to the ideal single-photon source, and have opened new possibilities for quantum technologies.

  20. Quantifying intrinsic and extrinsic control of single-cell fates in cancer and stem/progenitor cell pedigrees with competing risks analysis

    PubMed Central

    Cornwell, J. A.; Hallett, R. M.; Auf der Mauer, S.; Motazedian, A.; Schroeder, T.; Draper, J. S.; Harvey, R. P.; Nordon, R. E.

    2016-01-01

    The molecular control of cell fate and behaviour is a central theme in biology. Inherent heterogeneity within cell populations requires that control of cell fate is studied at the single-cell level. Time-lapse imaging and single-cell tracking are powerful technologies for acquiring cell lifetime data, allowing quantification of how cell-intrinsic and extrinsic factors control single-cell fates over time. However, cell lifetime data contain complex features. Competing cell fates, censoring, and the possible inter-dependence of competing fates, currently present challenges to modelling cell lifetime data. Thus far such features are largely ignored, resulting in loss of data and introducing a source of bias. Here we show that competing risks and concordance statistics, previously applied to clinical data and the study of genetic influences on life events in twins, respectively, can be used to quantify intrinsic and extrinsic control of single-cell fates. Using these statistics we demonstrate that 1) breast cancer cell fate after chemotherapy is dependent on p53 genotype; 2) granulocyte macrophage progenitors and their differentiated progeny have concordant fates; and 3) cytokines promote self-renewal of cardiac mesenchymal stem cells by symmetric divisions. Therefore, competing risks and concordance statistics provide a robust and unbiased approach for evaluating hypotheses at the single-cell level. PMID:27250534

  1. Simulations indicate that scores of lionfish (Pterois volitans) colonized the Atlantic Ocean.

    PubMed

    Selwyn, Jason D; Johnson, John E; Downey-Wall, Alan M; Bynum, Adam M; Hamner, Rebecca M; Hogan, J Derek; Bird, Christopher E

    2017-01-01

    The invasion of the western Atlantic Ocean by the Indo-Pacific red lionfish ( Pterois volitans ) has had devastating consequences for marine ecosystems. Estimating the number of colonizing lionfish can be useful in identifying the introduction pathway and can inform policy decisions aimed at preventing similar invasions. It is well-established that at least ten lionfish were initially introduced. However, that estimate has not faced probabilistic scrutiny and is based solely on the number of haplotypes in the maternally-inherited mitochondrial control region. To rigorously estimate the number of lionfish that were introduced, we used a forward-time, Wright-Fisher, population genetic model in concert with a demographic, life-history model to simulate the invasion across a range of source population sizes and colonizing population fecundities. Assuming a balanced sex ratio and no Allee effects, the simulations indicate that the Atlantic population was founded by 118 (54-514, 95% HPD) lionfish from the Indo-Pacific, the Caribbean by 84 (22-328, 95% HPD) lionfish from the Atlantic, and the Gulf of Mexico by at least 114 (no upper bound on 95% HPD) lionfish from the Caribbean. Increasing the size, and therefore diversity, of the Indo-Pacific source population and fecundity of the founding population caused the number of colonists to decrease, but with rapidly diminishing returns. When the simulation was parameterized to minimize the number of colonists (high θ and relative fecundity), 96 (48-216, 95% HPD) colonists were most likely. In a more realistic scenario with Allee effects (e.g., 50% reduction in fecundity) plaguing the colonists, the most likely number of lionfish increased to 272 (106-950, 95% HPD). These results, in combination with other published data, support the hypothesis that lionfish were introduced to the Atlantic via the aquarium trade, rather than shipping. When building the model employed here, we made assumptions that minimize the number of colonists, such as the lionfish being introduced in a single event. While we conservatively modelled the introduction pathway as a single release of lionfish in one location, it is more likely that a combination of smaller and larger releases from a variety of aquarium trade stakeholders occurred near Miami, Florida, which could have led to even larger numbers of colonists than simulated here. Efforts to prevent future invasions via the aquarium trade should focus on the education of stakeholders and the prohibition of release, with adequate rewards for compliance and penalties for violations.
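
    A stripped-down version of the forward-time Wright-Fisher idea is sketched below: draw founders from a source haplotype pool, let the population drift, and count how many mitochondrial haplotypes survive. All parameter values are illustrative placeholders, not the study's calibrated demographic or genetic model.

      # Stripped-down forward-time Wright-Fisher sketch: how many founders are
      # needed before a given number of mitochondrial haplotypes typically
      # survives drift? Parameters are illustrative, not the study's values.
      import numpy as np

      rng = np.random.default_rng(42)

      def surviving_haplotypes(n_founders, source_haplotype_freqs, pop_size, generations):
          counts = rng.multinomial(n_founders, source_haplotype_freqs)  # founding draw
          freqs = counts / counts.sum()
          for _ in range(generations):
              counts = rng.multinomial(pop_size, freqs)                 # Wright-Fisher resampling
              freqs = counts / pop_size
          return int((counts > 0).sum())

      source = np.full(20, 1 / 20)        # 20 equally common haplotypes in the source population
      sims = [surviving_haplotypes(120, source, pop_size=10_000, generations=50) for _ in range(200)]
      print("median surviving haplotypes:", int(np.median(sims)))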

  2. Simulations indicate that scores of lionfish (Pterois volitans) colonized the Atlantic Ocean

    PubMed Central

    Selwyn, Jason D.; Johnson, John E.; Downey-Wall, Alan M.; Bynum, Adam M.; Hamner, Rebecca M.; Hogan, J. Derek

    2017-01-01

    The invasion of the western Atlantic Ocean by the Indo-Pacific red lionfish (Pterois volitans) has had devastating consequences for marine ecosystems. Estimating the number of colonizing lionfish can be useful in identifying the introduction pathway and can inform policy decisions aimed at preventing similar invasions. It is well-established that at least ten lionfish were initially introduced. However, that estimate has not faced probabilistic scrutiny and is based solely on the number of haplotypes in the maternally-inherited mitochondrial control region. To rigorously estimate the number of lionfish that were introduced, we used a forward-time, Wright-Fisher, population genetic model in concert with a demographic, life-history model to simulate the invasion across a range of source population sizes and colonizing population fecundities. Assuming a balanced sex ratio and no Allee effects, the simulations indicate that the Atlantic population was founded by 118 (54–514, 95% HPD) lionfish from the Indo-Pacific, the Caribbean by 84 (22–328, 95% HPD) lionfish from the Atlantic, and the Gulf of Mexico by at least 114 (no upper bound on 95% HPD) lionfish from the Caribbean. Increasing the size, and therefore diversity, of the Indo-Pacific source population and fecundity of the founding population caused the number of colonists to decrease, but with rapidly diminishing returns. When the simulation was parameterized to minimize the number of colonists (high θ and relative fecundity), 96 (48–216, 95% HPD) colonists were most likely. In a more realistic scenario with Allee effects (e.g., 50% reduction in fecundity) plaguing the colonists, the most likely number of lionfish increased to 272 (106–950, 95% HPD). These results, in combination with other published data, support the hypothesis that lionfish were introduced to the Atlantic via the aquarium trade, rather than shipping. When building the model employed here, we made assumptions that minimize the number of colonists, such as the lionfish being introduced in a single event. While we conservatively modelled the introduction pathway as a single release of lionfish in one location, it is more likely that a combination of smaller and larger releases from a variety of aquarium trade stakeholders occurred near Miami, Florida, which could have led to even larger numbers of colonists than simulated here. Efforts to prevent future invasions via the aquarium trade should focus on the education of stakeholders and the prohibition of release, with adequate rewards for compliance and penalties for violations. PMID:29302383

  3. DAMAS Processing for a Phased Array Study in the NASA Langley Jet Noise Laboratory

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Humphreys, William M.; Plassman, Gerald E.

    2010-01-01

    A jet noise measurement study was conducted using a phased microphone array system for a range of jet nozzle configurations and flow conditions. The test effort included convergent and convergent/divergent single-flow nozzles, as well as conventional and chevron dual-flow core and fan configurations. Cold jets were tested with and without wind tunnel co-flow, whereas hot jets were tested only with co-flow. The intent of the measurement effort was to allow evaluation of new phased array technologies for their ability to separate and quantify distributions of jet noise sources. In the present paper, the array post-processing method focused upon is DAMAS (Deconvolution Approach for the Mapping of Acoustic Sources) for the quantitative determination of spatial distributions of noise sources. Jet noise is highly complex, with stationary and convecting noise sources, convecting flows that are the sources themselves, and shock-related and screech noise for supersonic flow. The analysis presented in this paper addresses some processing details with DAMAS, for the array positioned at 90° (normal) to the jet. The paper demonstrates the applicability of DAMAS and how it indicates when strong coherence is present. Also, a new approach to calibrating the array focus and position is introduced and demonstrated.
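
    At its core, DAMAS deconvolves the conventional beamform map with the array point-spread function under a non-negativity constraint, typically by Gauss-Seidel-style sweeps; a minimal sketch of that iteration (not the NASA processing code) is given below.

      # Minimal DAMAS-style deconvolution: solve A x = b for source strengths
      # x >= 0, where b is the conventional beamform map over the scan grid and
      # A is the array point-spread-function matrix (diagonal normalized to 1).
      import numpy as np

      def damas(psf_matrix, beamform_map, n_iterations=100):
          A = np.asarray(psf_matrix, dtype=float)      # (n_grid, n_grid)
          b = np.asarray(beamform_map, dtype=float)    # (n_grid,)
          x = np.zeros_like(b)
          for _ in range(n_iterations):
              for n in range(b.size):                  # one Gauss-Seidel sweep
                  residual = b[n] - A[n, :n] @ x[:n] - A[n, n + 1:] @ x[n + 1:]
                  x[n] = max(0.0, residual / A[n, n])  # enforce non-negative sources
          return x                                     # deconvolved source-strength map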

  4. Time-of-flight mass spectrometry: Introduction to the basics.

    PubMed

    Boesl, Ulrich

    2017-01-01

    The intention of this tutorial is to introduce the basic concepts of time-of-flight mass spectrometry, beginning with the simplest single-stage ion source with a linear field-free drift region and continuing with two-stage ion sources combined with field-free drift regions and ion reflectors, the so-called reflectrons. Basic formulas are presented and discussed with the focus on understanding the physical relations of geometric and electric parameters, the initial distribution of ionic parameters, ion flight times, and ion flight time incertitude. This tutorial aims to help the practitioner identify sources of flight time broadening which limit good mass resolution and sources of ion losses which limit sensitivity; it aims to stimulate creativity for new experimental approaches by discussing a choice of instrumental options and to encourage those who toy with the idea of building their own time-of-flight mass spectrometer. Large parts of the mathematics are shifted into a separate chapter in order not to overburden the text with too many mathematical derivations. Rather, thumb-rule formulas are supplied for first estimations of geometry and potentials when designing a home-built instrument, planning experiments, or searching for sources of flight time broadening. © 2016 Wiley Periodicals, Inc. Mass Spec Rev 36:86-109, 2017.
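
    The single-stage relation such a tutorial builds on is that an ion of mass m and charge z*e accelerated through a potential U drifts over a length L in a time t = L*sqrt(m/(2*z*e*U)), so t scales with sqrt(m/z). A short numeric illustration follows; the drift length and voltage are illustrative values, not taken from the tutorial.

      # Basic single-stage TOF relation: v = sqrt(2*z*e*U/m), so the drift time is
      # t = L * sqrt(m / (2*z*e*U)) and hence scales with sqrt(m/z).
      # Geometry and voltage below are illustrative, not values from the tutorial.
      from math import sqrt

      E_CHARGE = 1.602176634e-19      # C
      AMU = 1.66053906660e-27         # kg

      def drift_time_us(mass_amu, charge_z, accel_voltage, drift_length_m):
          m = mass_amu * AMU
          return drift_length_m * sqrt(m / (2.0 * charge_z * E_CHARGE * accel_voltage)) * 1e6

      # Two singly charged ions differing by 1 u near m/z 1000, 20 kV, 1 m drift: ~8 ns apart.
      t1 = drift_time_us(1000.0, 1, 20_000.0, 1.0)
      t2 = drift_time_us(1001.0, 1, 20_000.0, 1.0)
      print(f"{t1:.3f} us vs {t2:.3f} us (delta = {(t2 - t1) * 1e3:.1f} ns)")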

  5. Advances in the computation of the Sjöstrand, Rossi, and Feynman distributions

    DOE PAGES

    Talamo, A.; Gohar, Y.; Gabrielli, F.; ...

    2017-02-01

    This study illustrates recent computational advances in the application of the Sjöstrand (area), Rossi, and Feynman methods to estimate the effective multiplication factor of a subcritical system driven by an external neutron source. The methodologies introduced in this study have been validated against the experimental results from the KUKA facility of Japan using Monte Carlo (MCNP6 and MCNPX) and deterministic (ERANOS, VARIANT, and PARTISN) codes. When the assembly is driven by a pulsed neutron source generated by a particle accelerator and delayed neutrons are at equilibrium, the Sjöstrand method becomes extremely fast if the integral of the reaction rate from a single pulse is split into two parts. These two integrals distinguish between the neutron counts during and after the pulse period. To conclude, when the facility is driven by a spontaneous fission neutron source, the timestamps of the detector neutron counts can be obtained with nanosecond precision using MCNP6, which allows the Rossi and Feynman distributions to be obtained.
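
    In the notation commonly used for the Sjöstrand (area) method (not necessarily the paper's exact notation), the two areas of the pulse response give the reactivity in dollars directly:

      % Area-ratio (Sjostrand) method: A_p is the prompt-neutron area of a single
      % pulse response and A_d the delayed-neutron (equilibrium background) area.
      \[
        \frac{\rho}{\beta_{\mathrm{eff}}} \;=\; -\,\frac{A_p}{A_d},
        \qquad
        k_{\mathrm{eff}} \;=\; \frac{1}{1-\rho}.
      \]

    Splitting the reaction-rate integral into its during-pulse and after-pulse parts, as the abstract describes, is what makes evaluating these two areas fast.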

  6. Numerical Simulations of Reacting Flows Using Asynchrony-Tolerant Schemes for Exascale Computing

    NASA Astrophysics Data System (ADS)

    Cleary, Emmet; Konduri, Aditya; Chen, Jacqueline

    2017-11-01

    Communication and data synchronization between processing elements (PEs) are likely to pose a major challenge in scalability of solvers at the exascale. Recently developed asynchrony-tolerant (AT) finite difference schemes address this issue by relaxing communication and synchronization between PEs at a mathematical level while preserving accuracy, resulting in improved scalability. The performance of these schemes has been validated for simple linear and nonlinear homogeneous PDEs. However, many problems of practical interest are governed by highly nonlinear PDEs with source terms, whose solution may be sensitive to perturbations caused by communication asynchrony. The current work applies the AT schemes to combustion problems with chemical source terms, yielding a stiff system of PDEs with nonlinear source terms highly sensitive to temperature. Examples shown will use single-step and multi-step CH4 mechanisms for 1D premixed and nonpremixed flames. Error analysis will be discussed both in physical and spectral space. Results show that additional errors introduced by the AT schemes are negligible and the schemes preserve their accuracy. We acknowledge funding from the DOE Computational Science Graduate Fellowship administered by the Krell Institute.

  7. Automated deconvolution of structured mixtures from heterogeneous tumor genomic data

    PubMed Central

    Roman, Theodore; Xie, Lu

    2017-01-01

    With increasing appreciation for the extent and importance of intratumor heterogeneity, much attention in cancer research has focused on profiling heterogeneity on a single patient level. Although true single-cell genomic technologies are rapidly improving, they remain too noisy and costly at present for population-level studies. Bulk sequencing remains the standard for population-scale tumor genomics, creating a need for computational tools to separate contributions of multiple tumor clones and assorted stromal and infiltrating cell populations to pooled genomic data. All such methods are limited to coarse approximations of only a few cell subpopulations, however. In prior work, we demonstrated the feasibility of improving cell type deconvolution by taking advantage of substructure in genomic mixtures via a strategy called simplicial complex unmixing. We improve on past work by introducing enhancements to automate learning of substructured genomic mixtures, with specific emphasis on genome-wide copy number variation (CNV) data, as well as the ability to process quantitative RNA expression data, and heterogeneous combinations of RNA and CNV data. We introduce methods for dimensionality estimation to better decompose mixture model substructure; fuzzy clustering to better identify substructure in sparse, noisy data; and automated model inference methods for other key model parameters. We further demonstrate their effectiveness in identifying mixture substructure in true breast cancer CNV data from the Cancer Genome Atlas (TCGA). Source code is available at https://github.com/tedroman/WSCUnmix PMID:29059177

  8. Ways to suppress click and pop for class D amplifiers

    NASA Astrophysics Data System (ADS)

    Haishi, Wang; Bo, Zhang; Jiang, Sun

    2012-08-01

    Undesirable audio click and pop may be generated in a speaker or headphone. Compared to linear (class A/B/AB) amplifiers, class D amplifiers, which comprise an input stage and a modulation stage, are more prone to producing click and pop. This article analyzes the sources that generate click and pop in class D amplifiers and the corresponding ways to suppress them. For a class D amplifier with a single-ended input, click and pop is likely to be due to two factors. One is a voltage difference (VDIF) between the voltage of an input capacitance (VCIN) and a reference voltage (VREF) of the input stage, and the other is the non-linear switching during the setting up of the bias and feedback voltages/currents (BFVC) of the modulation stage. In this article, a fast charging loop is introduced into the input stage to charge VCIN to roughly near VREF. A correction loop then further charges or discharges VCIN, substantially equalizing it with VREF. Dummy switches are introduced into the modulation stage to provide switching signals for setting up the BFVC, and the power switches are disabled until the BFVC are set up successfully. A two-channel single-ended class D amplifier with the above features was fabricated in a 0.5 μm Bi-CMOS process. Road tests and fast Fourier transform analysis indicate that there is no noticeable click and pop.

  9. Computational inverse methods of heat source in fatigue damage problems

    NASA Astrophysics Data System (ADS)

    Chen, Aizhou; Li, Yuan; Yan, Bo

    2018-04-01

    Fatigue dissipation energy is currently a research focus in the field of fatigue damage. Introducing inverse heat-source methods into the parameter identification of fatigue dissipation energy models is a new way to address the problem of calculating fatigue dissipation energy. This paper reviews research advances in computational inverse methods for heat sources and in regularization techniques for solving the inverse problem, as well as existing methods for determining the heat source during the fatigue process. It then discusses the prospects of applying inverse heat-source methods in the fatigue damage field, laying a foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.
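    A minimal sketch of the regularized inverse step discussed here, assuming a linear discretized forward model T ≈ A q relating an unknown heat-source vector q to measured temperatures, and zeroth-order Tikhonov regularization with a hand-picked weight:

```python
import numpy as np

def tikhonov_source_inverse(A, T_measured, alpha=1e-2):
    """Recover a heat-source distribution q from T ~ A @ q by minimizing
    ||A q - T||^2 + alpha * ||q||^2 (zeroth-order Tikhonov regularization).
    A is a discretized forward heat-conduction operator; alpha must be tuned,
    e.g. by an L-curve or the discrepancy principle."""
    n = A.shape[1]
    lhs = A.T @ A + alpha * np.eye(n)   # regularized normal equations
    rhs = A.T @ T_measured
    return np.linalg.solve(lhs, rhs)
```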

  10. Source identification of autochthonous-introduced Plasmodium vivax Malaria, Spain.

    PubMed

    Barrado, Laura; Ezpeleta, Carmen; Rubio, José Miguel; Martín, Carmen; Azcona, José Manuel; Arteaga, Miren; Beristain, Xabier; Navascués, Ana; Ongay, Eva; Castilla, Jesús

    2017-02-01

    In 2014, an autochthonous case of introduced malaria caused by Plasmodium vivax was identified in Spain. The strain that infected this patient was identical to that of a prior imported case from Pakistan. This is the first case where the source of infection could be identified since elimination in Spain.

  11. Identifying the institutional decision process to introduce decentralized sanitation in the city of Kunming (China).

    PubMed

    Medilanski, Edi; Chuan, Liang; Mosler, Hans-Joachim; Schertenleib, Roland; Larsen, Tove A

    2007-05-01

    We conducted a study of the institutional barriers to introducing urine source separation in the urban area of Kunming, China. On the basis of a stakeholder analysis, we constructed stakeholder diagrams showing the relative importance of decision-making power and (positive) interest in the topic. A hypothetical decision-making process for the urban case was derived based on a successful pilot project in a periurban area. All our results were evaluated by the stakeholders. We concluded that although a number of primary stakeholders have a large interest in testing urine source separation also in an urban context, most of the key stakeholders would be reluctant to embrace this idea. However, the success in the periurban area showed that even a single, well-received pilot project can trigger the process of broad dissemination of new technologies. Whereas the institutional setting for such a pilot project is favorable in Kunming, a major challenge will be to adapt the technology to the demands of an urban population. Methodologically, we developed an approach to corroborate a stakeholder analysis with the perception of the stakeholders themselves. This is important not only in order to validate the analysis but also to bridge the theoretical gap between stakeholder analysis and stakeholder involvement. We also show that in disagreement with the assumption of most policy theories, local stakeholders consider informal decision pathways to be of great importance in actual policy-making.

  12. Bridgehead Effect in the Worldwide Invasion of the Biocontrol Harlequin Ladybird

    PubMed Central

    Lombaert, Eric; Guillemaud, Thomas; Cornuet, Jean-Marie; Malausa, Thibaut; Facon, Benoît; Estoup, Arnaud

    2010-01-01

    Recent studies of the routes of worldwide introductions of alien organisms suggest that many widespread invasions could have stemmed not from the native range, but from a particularly successful invasive population, which serves as the source of colonists for remote new territories. We call here this phenomenon the invasive bridgehead effect. Evaluating the likelihood of such a scenario is heuristically challenging. We solved this problem by using approximate Bayesian computation methods to quantitatively compare complex invasion scenarios based on the analysis of population genetics (microsatellite variation) and historical (first observation dates) data. We applied this approach to the Harlequin ladybird Harmonia axyridis (HA), a coccinellid native to Asia that was repeatedly introduced as a biocontrol agent without becoming established for decades. We show that the recent burst of worldwide invasions of HA followed a bridgehead scenario, in which an invasive population in eastern North America acted as the source of the colonists that invaded the European, South American and African continents, with some admixture with a biocontrol strain in Europe. This demonstration of a mechanism of invasion via a bridgehead has important implications both for invasion theory (i.e., a single evolutionary shift in the bridgehead population versus multiple changes in case of introduced populations becoming invasive independently) and for ongoing efforts to manage invasions by alien organisms (i.e., heightened vigilance against invasive bridgeheads). PMID:20305822
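    The scenario comparison rests on approximate Bayesian computation (ABC); the toy rejection sampler below conveys the idea by drawing scenarios from a uniform prior, simulating a summary statistic under each, and scoring scenarios by their share of accepted simulations. The simulators, summary statistic, and tolerance are placeholders, not the study's microsatellite-based models.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(scenario):
    """Placeholder simulators: each scenario implies a different expected
    genetic differentiation (summary statistic) plus sampling noise."""
    mean_fst = 0.05 if scenario == "bridgehead" else 0.12
    return rng.normal(mean_fst, 0.02)

def abc_scenario_probabilities(observed_stat, n_sims=20_000, tol=0.01):
    """Rejection ABC: the posterior probability of each scenario is its share
    of accepted simulations (those within `tol` of the observed statistic)."""
    accepted = {"bridgehead": 0, "independent": 0}
    for _ in range(n_sims):
        scenario = rng.choice(list(accepted))          # equal prior on scenarios
        if abs(simulate(scenario) - observed_stat) < tol:
            accepted[scenario] += 1
    total = sum(accepted.values()) or 1
    return {s: k / total for s, k in accepted.items()}

print(abc_scenario_probabilities(observed_stat=0.06))
```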

  13. Identifying the Institutional Decision Process to Introduce Decentralized Sanitation in the City of Kunming (China)

    NASA Astrophysics Data System (ADS)

    Medilanski, Edi; Chuan, Liang; Mosler, Hans-Joachim; Schertenleib, Roland; Larsen, Tove A.

    2007-05-01

    We conducted a study of the institutional barriers to introducing urine source separation in the urban area of Kunming, China. On the basis of a stakeholder analysis, we constructed stakeholder diagrams showing the relative importance of decision-making power and (positive) interest in the topic. A hypothetical decision-making process for the urban case was derived based on a successful pilot project in a periurban area. All our results were evaluated by the stakeholders. We concluded that although a number of primary stakeholders have a large interest in testing urine source separation also in an urban context, most of the key stakeholders would be reluctant to embrace this idea. However, the success in the periurban area showed that even a single, well-received pilot project can trigger the process of broad dissemination of new technologies. Whereas the institutional setting for such a pilot project is favorable in Kunming, a major challenge will be to adapt the technology to the demands of an urban population. Methodologically, we developed an approach to corroborate a stakeholder analysis with the perception of the stakeholders themselves. This is important not only in order to validate the analysis but also to bridge the theoretical gap between stakeholder analysis and stakeholder involvement. We also show that in disagreement with the assumption of most policy theories, local stakeholders consider informal decision pathways to be of great importance in actual policy-making.

  14. Bubble colloidal AFM probes formed from ultrasonically generated bubbles.

    PubMed

    Vakarelski, Ivan U; Lee, Judy; Dagastine, Raymond R; Chan, Derek Y C; Stevens, Geoffrey W; Grieser, Franz

    2008-02-05

    Here we introduce a simple and effective experimental approach to measuring the interaction forces between two small bubbles (approximately 80-140 microm) in aqueous solution during controlled collisions on the scale of micrometers to nanometers. The colloidal probe technique using atomic force microscopy (AFM) was extended to measure interaction forces between a cantilever-attached bubble and surface-attached bubbles of various sizes. By using an ultrasonic source, we generated numerous small bubbles on a mildly hydrophobic surface of a glass slide. A single bubble picked up with a strongly hydrophobized V-shaped cantilever was used as the colloidal probe. Sample force measurements were used to evaluate the pure water bubble cleanliness and the general consistency of the measurements.

  15. Inhalation exposure to cleaning products: application of a two-zone model.

    PubMed

    Earnest, C Matt; Corsi, Richard L

    2013-01-01

    In this study, modifications were made to previously applied two-zone models to address important factors that can affect exposures during cleaning tasks. Specifically, we expand on previous applications of the two-zone model by (1) introducing the source in discrete elements (source-cells) as opposed to a complete instantaneous release, (2) placing source cells in both the inner (near person) and outer zones concurrently, (3) treating each source cell as an independent mixture of multiple constituents, and (4) tracking the time-varying liquid concentration and emission rate of each constituent in each source cell. Three experiments were performed in an environmentally controlled chamber with a thermal mannequin and a simplified pure chemical source to simulate emissions from a cleaning product. Gas phase concentration measurements were taken in the bulk air and in the breathing zone of the mannequin to evaluate the model. The mean ratio of the integrated concentration in the mannequin's breathing zone to the concentration in the outer zone was 4.3 (standard deviation, σ = 1.6). The mean ratio of measured concentration in the breathing zone to predicted concentrations in the inner zone was 0.81 (σ = 0.16). Intake fractions ranged from 1.9 × 10⁻³ to 2.7 × 10⁻³. Model results reasonably predict those of previous exposure monitoring studies and indicate the inadequacy of well-mixed single-zone model applications for some but not all cleaning events.
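    The skeleton of such a two-zone (near-field/far-field) model is a pair of coupled mass balances; the sketch below integrates them for a single constant-emission source in the inner zone. Volumes, airflows, and the emission rate are illustrative, and the study's discrete source cells and multi-constituent tracking are omitted.

```python
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the study)
V_inner, V_outer = 1.0, 30.0   # zone volumes, m^3
beta = 5.0                     # inner<->outer airflow, m^3/h
Q = 60.0                       # outer-zone ventilation, m^3/h
E = 100.0                      # emission rate of a source in the inner zone, mg/h

def two_zone(t, c):
    """Mass balances: inner zone exchanges with outer zone; outer zone is ventilated."""
    c_in, c_out = c
    dc_in = (E + beta * (c_out - c_in)) / V_inner
    dc_out = (beta * (c_in - c_out) - Q * c_out) / V_outer
    return [dc_in, dc_out]

sol = solve_ivp(two_zone, (0.0, 1.0), [0.0, 0.0])   # integrate over one hour
print("inner/outer concentration ratio at t = 1 h:", sol.y[0, -1] / sol.y[1, -1])
```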

  16. Single photon source with individualized single photon certifications

    NASA Astrophysics Data System (ADS)

    Migdall, Alan L.; Branning, David A.; Castelletto, Stefania; Ware, M.

    2002-12-01

    As currently implemented, single-photon sources cannot be made to produce single photons with high probability, while simultaneously suppressing the probability of yielding two or more photons. Because of this, single photon sources cannot really produce single photons on demand. We describe a multiplexed system that allows the probabilities of producing one and more photons to be adjusted independently, enabling a much better approximation of a source of single photons on demand. The scheme uses a heralded photon source based on parametric downconversion, but by effectively breaking the trigger detector area into multiple regions, we are able to extract more information about a heralded photon than is possible with a conventional arrangement. This scheme allows photons to be produced along with a quantitative 'certification' that they are single photons. Some of the single-photon certifications can be significantly better than what is possible with conventional downconversion sources, as well as being better than faint laser sources. With such a source of more tightly certified single photons, it should be possible to improve the maximum secure bit rate possible over a quantum cryptographic link. We present an analysis of the relative merits of this method over the conventional arrangement.
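    A back-of-the-envelope way to see the benefit of multiplexing heralded sources: if each of N weak downconversion sources heralds with some probability per pulse, the chance that at least one fires grows with N while each source can stay weak enough to keep multi-pair events rare. The sketch below assumes Poisson pair statistics for each source, which is an illustration rather than the certification scheme described in the abstract.

```python
import numpy as np

def heralded_stats(mean_pairs, n_sources):
    """Per-pulse probabilities for N multiplexed heralded sources, assuming the
    pair number of each source is Poisson with the given mean (illustrative)."""
    p_one = mean_pairs * np.exp(-mean_pairs)      # exactly one pair in a given source
    p_multi = 1 - np.exp(-mean_pairs) - p_one     # two or more pairs in a given source
    p_fire = p_one + p_multi                      # that source heralds at all
    return {"P(>=1 source fires)": 1 - (1 - p_fire) ** n_sources,
            "P(multi-pair | a given source fires)": p_multi / p_fire}

print(heralded_stats(mean_pairs=0.05, n_sources=8))
```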

  17. Single-mode 140 nm swept light source realized by using SSG-DBR lasers

    NASA Astrophysics Data System (ADS)

    Fujiwara, N.; Yoshimura, R.; Kato, K.; Ishii, H.; Kano, F.; Kawaguchi, Y.; Kondo, Y.; Ohbayashi, K.; Oohashi, H.

    2008-02-01

    We demonstrate a single-mode, fast wavelength-swept light source using superstructure grating distributed Bragg reflector (SSG-DBR) lasers, for use in optical frequency-domain reflectometry optical coherence tomography. The SSG-DBR lasers provide single-mode operation, resulting in high coherence. The wavelength-tuning response is very fast (several nanoseconds), but there is an unintentional wavelength drift resulting from a thermal drift due to the injected tuning current. This drift unfortunately takes a long time to converge: more than a few milliseconds. To suppress the wavelength drift, we introduced a Thermal Drift Compensation (TDC) mesa parallel to the laser mesa with a spacing of 20 μm. By controlling the TDC current so that the total electric power injected into the laser mesa and the TDC mesa is kept constant, the thermal drift can be suppressed. In the present work, we fabricated four kinds of SSG-DBR lasers, each covering a respective wavelength band: S-band (1496-1529 nm), C-band (1529-1564 nm), L⁻-band (1564-1601 nm), and L⁺-band (1601-1639 nm). We set the frequency channels of each laser with a spacing of 6.25 GHz and 700 channels per laser; the total number of frequency channels is 2800 (700 ch × 4 lasers). We operated the 4 lasers simultaneously with a time interval of 500 ns/channel. A wavelength tuning range of more than 140 nm was achieved within 350 μs. The output power was controlled to be 10 mW for all channels. A single-mode, accurate, wide, and fast wavelength sweep was demonstrated for the first time with SSG-DBR lasers having a TDC mesa structure.

  18. Single-source precursors for ternary chalcopyrite materials, and methods of making and using the same

    NASA Technical Reports Server (NTRS)

    Banger, Kulbinder K. (Inventor); Hepp, Aloysius F. (Inventor); Harris, Jerry D. (Inventor); Jin, Michael Hyun-Chul (Inventor); Castro, Stephanie L. (Inventor)

    2006-01-01

    A single source precursor for depositing ternary I-III-VI₂ chalcopyrite materials useful as semiconductors. The single source precursor has the I-III-VI₂ stoichiometry built into a single precursor molecular structure which degrades on heating or pyrolysis to yield the desired I-III-VI₂ ternary chalcopyrite. The single source precursors effectively degrade to yield the ternary chalcopyrite at low temperature, e.g. below 500 °C, and are useful to deposit thin film ternary chalcopyrite layers via a spray CVD technique. The ternary single source precursors according to the invention can be used to provide nanocrystallite structures useful as quantum dots. A method of making the ternary single source precursors is also provided.

  19. Algorithms and analytical solutions for rapidly approximating long-term dispersion from line and area sources

    NASA Astrophysics Data System (ADS)

    Barrett, Steven R. H.; Britter, Rex E.

    Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105-8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point source run from an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently with a loss of accuracy small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow for additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
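    For context, the snippet below evaluates the conventional approach the paper seeks to accelerate: a crosswind line source approximated as a sum (numerical integral) of Gaussian point-source plumes. The dispersion parameters and geometry are placeholders, and the paper's hypergeometric closed-form solutions are not reproduced here.

```python
import numpy as np

def point_plume(y, z, q, u, sigma_y, sigma_z):
    """Ground-level Gaussian point-source plume with ground reflection.
    sigma_y and sigma_z would normally be functions of downwind distance."""
    return (q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * 2 * np.exp(-z**2 / (2 * sigma_z**2)))

def crosswind_line_source(y, z, q_per_m, u, sigma_y, sigma_z,
                          half_length=500.0, n=2001):
    """Approximate a finite crosswind line source as a sum of point sources."""
    ys = np.linspace(-half_length, half_length, n)   # positions of source elements
    dy = ys[1] - ys[0]
    return sum(point_plume(y - y0, z, q_per_m * dy, u, sigma_y, sigma_z) for y0 in ys)

# receptor on the line's centreline at ground level, for one hour of meteorology
print(crosswind_line_source(y=0.0, z=0.0, q_per_m=1.0, u=5.0,
                            sigma_y=20.0, sigma_z=10.0))
```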

  20. 40 CFR 125.57 - Law governing issuance of a section 301(h) modified permit.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... the discharge of any pollutant from a publicly owned treatment works into marine waters, if the... pretreatment requirements for sources introducing waste into such treatment works will be enforced; (6) In the... pretreatment requirement in effect, sources introducing waste into such works are in compliance with all...

  1. Nā Inoa Hōkū: Hawaiian and Polynesian star names

    NASA Astrophysics Data System (ADS)

    Ruggles, Clive L. N.; Kaipo Mahelona, John; Kawena Johnson, Rubellite

    2015-08-01

    In this paper we report on a 15-year project to construct a comprehensive catalogue of Hawaiian star names documented in historical sources, which is being published this year. While a number of Hawaiian star names are well known, a major challenge is to separate reliable first-hand information, mostly in Hawaiian-language archival sources dating back to the mid-19th century, from later commentaries and interpretations, many of which have introduced assumptions and errors that have become embedded in the literature. Some new star names have also been introduced recently, in the traditional style, as part of the living tradition of Hawaiian and Polynesian voyaging. The starting point for our project was a catalogue of Hawaiian and Polynesian star names published by two of the authors (Johnson and Mahelona) 40 years ago, which contained many first-hand translations of primary sources researched in archives during the 1950s to 1970s. Since that time, a number of new primary sources have been identified, and these and other primary sources have been translated or re-translated as part of the project. The sources, often fragmentary, reveal much more than just the use of star observations for navigation and wayfinding, hugely important as this was. There was no single tradition but a complex and dynamic body of astronomical knowledge. Particular star names are not always consistently applied to the same stars. Accounts of physical characteristics such as the dates and times of appearance and disappearance of particular stars do not necessarily make sense in a Western, objective sense: they may, for example, represent times when the asterisms in question became important for divinatory purposes. Such challenges make it all the more important to construct a resource that is as reliable as possible for future scholars, not only within Hawaiian cultural studies but also for comparative analyses with star names in other parts of Polynesia, which have the potential to shed important new light on beliefs and practices brought to the Hawaiian Islands by the earliest Polynesian settlers.

  2. 40 CFR 411.26 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... GUIDELINES AND STANDARDS CEMENT MANUFACTURING POINT SOURCE CATEGORY Leaching Subcategory § 411.26 Pretreatment standards for new sources. Any new source subject to this subpart that introduces process...

  3. Quad-Chip Double-Balanced Frequency Tripler

    NASA Technical Reports Server (NTRS)

    Lin, Robert H.; Ward, John S.; Bruneau, Peter J.; Mehdi, Imran; Thomas, Bertrand C.; Maestrini, Alain

    2010-01-01

    Solid-state frequency multipliers are used to produce tunable broadband sources at millimeter and submillimeter wavelengths. The maximum power produced by a single chip is limited by the electrical breakdown of the semiconductor and by the thermal management properties of the chip. The solution is to split the drive power to a frequency tripler using waveguides to divide the power among four chips, then recombine the output power from the four chips back into a single waveguide. To achieve this, a waveguide branchline quadrature hybrid coupler splits a 100-GHz input signal into two paths with a 90° relative phase shift. These two paths are split again by a pair of waveguide Y-junctions. The signals from the four outputs of the Y-junctions are tripled in frequency using balanced Schottky diode frequency triplers before being recombined with another pair of Y-junctions. A final waveguide branchline quadrature hybrid coupler completes the combination. Using four chips instead of one enables using four-times higher power input, and produces a nearly four-fold power output as compared to using a single chip. The phase shifts introduced by the quadrature hybrid couplers provide isolation for the input and output waveguides, effectively eliminating standing waves between it and surrounding components. This is accomplished without introducing the high losses and expense of ferrite isolators. A practical use of this technology is to drive local oscillators as was demonstrated around 300 GHz for a heterodyne spectrometer operating in the 2-3-THz band. Heterodyne spectroscopy in this frequency band is especially valuable for astrophysics due to the presence of a very large number of molecular spectral lines. Besides high-resolution radar and spectrographic screening applications, this technology could also be useful for laboratory spectroscopy.

  4. The extension of the parametrization of the radio source coordinates in geodetic VLBI and its impact on the time series analysis

    NASA Astrophysics Data System (ADS)

    Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald

    2017-07-01

    The radio sources within the most recent celestial reference frame (CRF) catalog ICRF2 are represented by a single, time-invariant coordinate pair. The datum sources were chosen mainly according to certain statistical properties of their position time series. Yet, such statistics are not applicable unconditionally, and also ambiguous. However, ignoring systematics in the source positions of the datum sources inevitably leads to a degradation of the quality of the frame and, therefore, also of the derived quantities such as the Earth orientation parameters. One possible approach to overcome these deficiencies is to extend the parametrization of the source positions, similarly to what is done for the station positions. We decided to use the multivariate adaptive regression splines algorithm to parametrize the source coordinates. It allows a great deal of automation, by combining recursive partitioning and spline fitting in an optimal way. The algorithm finds the ideal knot positions for the splines and, thus, the best number of polynomial pieces to fit the data autonomously. With that we can correct the ICRF2 a priori coordinates for our analysis and eliminate the systematics in the position estimates. This allows us to introduce also special handling sources into the datum definition, leading to on average 30 % more sources in the datum. We find that not only the CPO can be improved by more than 10 % due to the improved geometry, but also the station positions, especially in the early years of VLBI, can benefit greatly.

  5. Nitrogen-vacancy centers in diamond: nanoscale sensors for physics and biology.

    PubMed

    Schirhagl, Romana; Chang, Kevin; Loretz, Michael; Degen, Christian L

    2014-01-01

    Crystal defects in diamond have emerged as unique objects for a variety of applications, both because they are very stable and because they have interesting optical properties. Embedded in nanocrystals, they can serve, for example, as robust single-photon sources or as fluorescent biomarkers of unlimited photostability and low cytotoxicity. The most fascinating aspect, however, is the ability of some crystal defects, most prominently the nitrogen-vacancy (NV) center, to locally detect and measure a number of physical quantities, such as magnetic and electric fields. This metrology capacity is based on the quantum mechanical interactions of the defect's spin state. In this review, we introduce the new and rapidly evolving field of nanoscale sensing based on single NV centers in diamond. We give a concise overview of the basic properties of diamond, from synthesis to electronic and magnetic properties of embedded NV centers. We describe in detail how single NV centers can be harnessed for nanoscale sensing, including the physical quantities that may be detected, expected sensitivities, and the most common measurement protocols. We conclude by highlighting a number of the diverse and exciting applications that may be enabled by these novel sensors, ranging from measurements of ion concentrations and membrane potentials to nanoscale thermometry and single-spin nuclear magnetic resonance.

  6. Linearly polarized emission from an embedded quantum dot using nanowire morphology control.

    PubMed

    Foster, Andrew P; Bradley, John P; Gardner, Kirsty; Krysa, Andrey B; Royall, Ben; Skolnick, Maurice S; Wilson, Luke R

    2015-03-11

    GaAs nanowires with elongated cross sections are formed using a catalyst-free growth technique. This is achieved by patterning elongated nanoscale openings within a silicon dioxide growth mask on a (111)B GaAs substrate. It is observed that MOVPE-grown vertical nanowires with cross section elongated in the [21̅1̅] and [1̅12] directions remain faithful to the geometry of the openings. An InGaAs quantum dot with weak radial confinement is realized within each nanowire by briefly introducing indium into the reactor during nanowire growth. Photoluminescence emission from an embedded nanowire quantum dot is strongly linearly polarized (typically >90%) with the polarization direction coincident with the axis of elongation. Linearly polarized PL emission is a result of embedding the quantum dot in an anisotropic nanowire structure that supports a single strongly confined, linearly polarized optical mode. This research provides a route to the bottom-up growth of linearly polarized single photon sources of interest for quantum information applications.

  7. Structural elucidation of direct analysis in real time ionized nerve agent simulants with infrared multiple photon dissociation spectroscopy.

    PubMed

    Rummel, Julia L; Steill, Jeffrey D; Oomens, Jos; Contreras, Cesar S; Pearson, Wright L; Szczepanski, Jan; Powell, David H; Eyler, John R

    2011-06-01

    Infrared multiple photon dissociation (IRMPD) was used to generate vibrational spectra of ions produced with a direct analysis in real time (DART) ionization source coupled to a 4.7 T Fourier transform ion cyclotron resonance (FT-ICR) mass spectrometer. The location of protonation on the nerve agent simulants diisopropyl methylphosphonate (DIMP) and dimethyl methylphosphonate (DMMP) was studied while solutions of the compounds were introduced for extended periods of time with a syringe pump. Theoretical vibrational spectra were generated with density functional theory calculations. Visual comparison of experimental mid-IR IRMPD spectra and theoretical spectra could not establish definitively if a single structure or a mixture of conformations was present for the protonated parent of each compound. However, theoretical calculations, near-ir IRMPD spectra, and frequency-to-frequency and statistical comparisons indicated that the protonation site for both DIMP and DMMP was predominantly, if not exclusively, the phosphonyl oxygen instead of one of the oxygen atoms with only single bonds.

  8. Violation of a Bell-like inequality in single-neutron interferometry.

    PubMed

    Hasegawa, Yuji; Loidl, Rudolf; Badurek, Gerald; Baron, Matthias; Rauch, Helmut

    2003-09-04

    Non-local correlations between spatially separated systems have been extensively discussed in the context of the Einstein, Podolsky and Rosen (EPR) paradox and Bell's inequalities. Many proposals and experiments designed to test hidden variable theories and the violation of Bell's inequalities have been reported; usually, these involve correlated photons, although recently an experiment was performed with ⁹Be⁺ ions. Nevertheless, it is of considerable interest to show that such correlations (arising from quantum mechanical entanglement) are not simply a peculiarity of photons. Here we measure correlations between two degrees of freedom (comprising spatial and spin components) of single neutrons; this removes the need for a source of entangled neutron pairs, which would present a considerable technical challenge. A Bell-like inequality is introduced to clarify the correlations that can arise between observables of otherwise independent degrees of freedom. We demonstrate the violation of this Bell-like inequality: our measured value is 2.051 ± 0.019, clearly above the value of 2 predicted by classical hidden variable theories.

  9. UMI-tools: modeling sequencing errors in Unique Molecular Identifiers to improve quantification accuracy

    PubMed Central

    2017-01-01

    Unique Molecular Identifiers (UMIs) are random oligonucleotide barcodes that are increasingly used in high-throughput sequencing experiments. Through a UMI, identical copies arising from distinct molecules can be distinguished from those arising through PCR amplification of the same molecule. However, bioinformatic methods to leverage the information from UMIs have yet to be formalized. In particular, sequencing errors in the UMI sequence are often ignored or else resolved in an ad hoc manner. We show that errors in the UMI sequence are common and introduce network-based methods to account for these errors when identifying PCR duplicates. Using these methods, we demonstrate improved quantification accuracy both under simulated conditions and real iCLIP and single-cell RNA-seq data sets. Reproducibility between iCLIP replicates and single-cell RNA-seq clustering are both improved using our proposed network-based method, demonstrating the value of properly accounting for errors in UMIs. These methods are implemented in the open source UMI-tools software package. PMID:28100584
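    The network-based idea can be sketched in a few lines: UMIs at the same position are linked when they differ by one base and one is much more abundant than the other, and each connected group is collapsed to a single molecule. The sketch below follows the "directional" criterion described for UMI-tools (count(a) ≥ 2·count(b) − 1), but it is an illustrative reimplementation, not the package's own code.

```python
from itertools import combinations

def hamming1(u, v):
    """True if two UMIs of equal length differ at exactly one position."""
    return len(u) == len(v) and sum(a != b for a, b in zip(u, v)) == 1

def dedup_directional(umi_counts):
    """Collapse UMIs at one genomic position using a directional network:
    link a -> b when the Hamming distance is 1 and count(a) >= 2*count(b) - 1,
    then count one molecule per connected group."""
    umis = sorted(umi_counts, key=umi_counts.get, reverse=True)
    parent = {u: u for u in umis}

    def find(u):                      # union-find with path compression
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    for a, b in combinations(umis, 2):   # a is at least as abundant as b
        if hamming1(a, b) and umi_counts[a] >= 2 * umi_counts[b] - 1:
            parent[find(b)] = find(a)
    return len({find(u) for u in umis})

print(dedup_directional({"ATCG": 100, "ATCA": 3, "TTTT": 50}))   # -> 2 molecules
```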

  10. A mercury arc lamp-based multi-color confocal real time imaging system for cellular structure and function.

    PubMed

    Saito, Kenta; Kobayashi, Kentaro; Tani, Tomomi; Nagai, Takeharu

    2008-01-01

    Multi-point scanning confocal microscopy using a Nipkow disk enables the acquisition of fluorescent images with high spatial and temporal resolutions. Like other single-point scanning confocal systems that use galvanometer mirrors, a commercially available Nipkow spinning disk confocal unit, the Yokogawa CSU10, requires lasers as the excitation light source. The choice of fluorescent dyes is strongly restricted, however, because only a limited number of laser lines can be introduced into a single confocal system. To overcome this problem, we developed an illumination system in which light from a mercury arc lamp is scrambled to make homogeneous light by passing it through a multi-mode optical fiber. This illumination system provides incoherent light with continuous wavelengths, enabling the observation of a wide range of fluorophores. Using this optical system, we demonstrate both the high-speed imaging (up to 100 Hz) of intracellular Ca²⁺ propagation, and the multi-color imaging of Ca²⁺ and PKC-gamma dynamics in living cells.

  11. Engineering biosynthetic excitable tissues from unexcitable cells for electrophysiological and cell therapy studies

    PubMed Central

    Kirkton, Robert D.; Bursac, Nenad

    2012-01-01

    Patch-clamp recordings in single-cell expression systems have been traditionally used to study the function of ion channels. However, this experimental setting does not enable assessment of tissue-level function such as action potential (AP) conduction. Here we introduce a biosynthetic system that permits studies of both channel activity in single cells and electrical conduction in multicellular networks. We convert unexcitable somatic cells into an autonomous source of electrically excitable and conducting cells by stably expressing only three membrane channels. The specific roles that these expressed channels have on AP shape and conduction are revealed by different pharmacological and pacing protocols. Furthermore, we demonstrate that biosynthetic excitable cells and tissues can repair large conduction defects within primary 2- and 3-dimensional cardiac cell cultures. This approach enables novel studies of ion channel function in a reproducible tissue-level setting and may stimulate the development of new cell-based therapies for excitable tissue repair. PMID:21556054

  12. PCR-mediated site-directed mutagenesis.

    PubMed

    Carey, Michael F; Peterson, Craig L; Smale, Stephen T

    2013-08-01

    Unlike traditional site-directed mutagenesis, this protocol requires only a single PCR step using full plasmid amplification to generate point mutants. The method can introduce small mutations into promoter sites and is even better suited for introducing single or double mutations into proteins. It is elegant in its simplicity and can be applied quite easily in any laboratory using standard protein expression vectors and commercially available reagents.

  13. Uncertainty Quantification For Physical and Numerical Diffusion Models In Inertial Confinement Fusion Simulations

    NASA Astrophysics Data System (ADS)

    Rana, Verinder S.

    This thesis concerns simulations of Inertial Confinement Fusion (ICF). Inertial confinement experiments are carried out at the large-scale National Ignition Facility. The experiments have failed to reproduce design calculations, so uncertainty quantification of the calculations is an important asset. Uncertainties can be classified as aleatoric or epistemic; this thesis is concerned with aleatoric uncertainty quantification. Among the many uncertain aspects that affect the simulations, we have narrowed our study to a few sources of uncertainty. The first is the amount of pre-heating of the fuel by hot electrons. The second is the effect of algorithmic and physical transport diffusion on the hot-spot thermodynamics. Physical transport mechanisms play an important role for the entire duration of the ICF capsule implosion, so modeling them correctly is vital. In addition, codes that simulate material mixing introduce numerically (algorithmically) generated transport across material interfaces, which adds another layer of uncertainty to the solution through artificially added diffusion. The third source of uncertainty is physical model uncertainty. The fourth is a single localized surface perturbation (a divot), which creates a perturbation to the solution that can potentially enter the hot spot and degrade the thermonuclear environment: jets of ablator material are hypothesized to enter the hot spot and cool the core, contributing to reaction yields lower than predicted. A plasma transport package, Transport for Inertial Confinement Fusion (TICF), has been implemented in the radiation hydrodynamics code FLASH from the University of Chicago. TICF has thermal, viscous, and mass diffusion models that span the entire ICF implosion regime. For thermal transport, we introduced a thermal conduction model due to Hu, calibrated against quantum molecular dynamics calculations. Numerical approximation uncertainties are introduced by the choice of hydrodynamic solver for a particular flow: solvers tend to be diffusive at material interfaces, and the Front Tracking (FT) algorithm, available as a software API, helps to ameliorate such effects. The FT algorithm has also been implemented in FLASH, and we use it to study the effect that divots can have on the hot-spot properties.

  14. DASMI: exchanging, annotating and assessing molecular interaction data.

    PubMed

    Blankenburg, Hagen; Finn, Robert D; Prlić, Andreas; Jenkinson, Andrew M; Ramírez, Fidel; Emig, Dorothea; Schelhorn, Sven-Eric; Büch, Joachim; Lengauer, Thomas; Albrecht, Mario

    2009-05-15

    Ever increasing amounts of biological interaction data are being accumulated worldwide, but they are currently not readily accessible to the biologist at a single site. New techniques are required for retrieving, sharing and presenting data spread over the Internet. We introduce the DASMI system for the dynamic exchange, annotation and assessment of molecular interaction data. DASMI is based on the widely used Distributed Annotation System (DAS) and consists of a data exchange specification, web servers for providing the interaction data and clients for data integration and visualization. The decentralized architecture of DASMI affords the online retrieval of the most recent data from distributed sources and databases. DASMI can also be extended easily by adding new data sources and clients. We describe all DASMI components and demonstrate their use for protein and domain interactions. The DASMI tools are available at http://www.dasmi.de/ and http://ipfam.sanger.ac.uk/graph. The DAS registry and the DAS 1.53E specification is found at http://www.dasregistry.org/.

  15. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    PubMed

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

    Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The widely accepted implicit assumption is that the intrinsic sources identified by sICA are spatially statistically independent, which makes sICA difficult to apply to data containing interdependent sources and confounding factors. This interdependency can arise, for instance, from fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its use as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and on real resting-state fMRI data. Both the simulated and the real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating the activation induced by each task as well as by both tasks.
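    A minimal sketch of the combination, assuming a time-by-voxel data matrix and one task's design regressor: project the regressor's subspace out of the data, then run spatial ICA (scikit-learn's FastICA) on the residual to recover components related to the other task. This is a conceptual sketch, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

def project_out(Y, regressor):
    """Remove the subspace spanned by `regressor` (length = n_timepoints)
    from the time-by-voxel data matrix Y via an orthogonal projection."""
    g = np.asarray(regressor, float).reshape(-1, 1)
    P = np.eye(len(g)) - g @ np.linalg.pinv(g)   # projector onto g's orthogonal complement
    return P @ Y

def task_components(Y, other_task_regressor, n_components=10, seed=0):
    """Spatial ICA on data with the competing task's regressor projected out."""
    Y_clean = project_out(Y, other_task_regressor)
    ica = FastICA(n_components=n_components, random_state=seed)
    spatial_maps = ica.fit_transform(Y_clean.T)  # voxels x components
    time_courses = ica.mixing_                   # timepoints x components
    return spatial_maps, time_courses
```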

  16. Scaled-model guidelines for formation-flying solar coronagraph missions.

    PubMed

    Landini, Federico; Romoli, Marco; Baccani, Cristian; Focardi, Mauro; Pancrazzi, Maurizio; Galano, Damien; Kirschner, Volker

    2016-02-15

    Stray light suppression is the main concern in designing a solar coronagraph. The main contribution to the stray light for an externally occulted space-borne solar coronagraph is the light diffracted by the occulter and scattered by the optics. It is mandatory to carefully evaluate the diffraction generated by an external occulter and the impact that it has on the stray light signal on the focal plane. The scientific need for observations to cover a large portion of the heliosphere with an inner field of view as close as possible to the photospheric limb supports the ambition of launching formation-flying giant solar coronagraphs. Their dimension prevents the possibility of replicating the flight geometry in a clean laboratory environment, and the strong need for a scaled model is thus envisaged. The problem of scaling a coronagraph has already been faced for exoplanets, for a single point source on axis at infinity. We face the problem here by adopting an original approach and by introducing the scaling of the solar disk as an extended source.

  17. Frequency stabilization in injection controlled pulsed CO2 lasers

    NASA Technical Reports Server (NTRS)

    Menzies, Robert T.; Ancellet, Gerard M.

    1987-01-01

    Longitudinal mode selection by injection has been demonstrated as a viable technique for tailoring a TEA-CO2 laser with pulse energies of a Joule or greater to fit the requirements of a coherent lidar transmitter. Once reliable generation of single-longitudinal-mode (SLM) pulses is obtained, one can study the intrapulse frequency variation and attempt to determine the sources of frequency sweeping, or chirp. These sources include the effect of the decaying plasma, the thermal gradient due to the energy dissipation associated with the laser mechanism itself, and the pressure shift of the center frequency of the laser transition. The use of the positive-branch unstable resonator as an efficient means of coupling a discharge with transverse spatial dimensions of the order of centimeters to an optical cavity mode introduces another concern: namely, what can be done to emphasize transverse mode discrimination in an unstable resonator cavity while maintaining high coupling efficiency. These issues are briefly discussed in the paper, and representative experimental examples are included.

  18. Relative multiplexing for minimising switching in linear-optical quantum computing

    NASA Astrophysics Data System (ADS)

    Gimeno-Segovia, Mercedes; Cable, Hugo; Mendoza, Gabriel J.; Shadbolt, Pete; Silverstone, Joshua W.; Carolan, Jacques; Thompson, Mark G.; O'Brien, Jeremy L.; Rudolph, Terry

    2017-06-01

    Many existing schemes for linear-optical quantum computing (LOQC) depend on multiplexing (MUX), which uses dynamic routing to enable near-deterministic gates and sources to be constructed using heralded, probabilistic primitives. MUXing accounts for the overwhelming majority of active switching demands in current LOQC architectures. In this manuscript we introduce relative multiplexing (RMUX), a general-purpose optimisation which can dramatically reduce the active switching requirements for MUX in LOQC, and thereby reduce hardware complexity and energy consumption, as well as relaxing demands on performance for various photonic components. We discuss the application of RMUX to the generation of entangled states from probabilistic single-photon sources, and argue that an order of magnitude improvement in the rate of generation of Bell states can be achieved. In addition, we apply RMUX to the proposal for percolation of a 3D cluster state by Gimeno-Segovia et al (2015 Phys. Rev. Lett. 115 020502), and we find that RMUX allows a 2.4× increase in loss tolerance for this architecture.

  19. Impacts of biological control and invasive species on a non-target native Hawaiian insect.

    PubMed

    Johnson, M Tracy; Follett, Peter A; Taylor, Andrew D; Jones, Vincent P

    2005-02-01

    The potential for classical biological control to cause unintended harm to native species was evaluated in the case of the endemic Hawaiian koa bug, Coleotichus blackburniae White (Hemiptera: Scutelleridae), and parasitoids introduced to Hawaii for control of an agricultural pest, the southern green stink bug, Nezara viridula (L.) (Hemiptera: Pentatomidae). Parasitism of C. blackburniae eggs, nymphs and adults by biocontrol agents was quantified across a wide range of habitats and compared to other sources of mortality. Egg mortality due to the biocontrol agent Trissolcus basalis Wollaston (Hymenoptera: Scelionidae) was low (maximum 26%) and confined to elevations below 500 m on a single host plant. Predation, mainly by alien spiders and ants, was the greatest source of egg mortality (maximum 87%). Parasitism of adult C. blackburniae by the biocontrol agent Trichopoda pilipes (F.) (Diptera: Tachinidae) was near zero at 21 of 24 sites surveyed. Three sites with high bug density had higher levels of T. pilipes parasitism, reaching maxima of 70% among adult female bugs, 100% among males and 50% among fifth instars. Male-biased parasitism indicated that T. pilipes is adapted to using male aggregation pheromone for finding C. blackburniae hosts. The relative impacts of biocontrol agents and other sources of mortality were compared using life tables. Invasive species, particularly generalist egg predators, had the greatest impacts on C. blackburniae populations. Effects of intentionally introduced parasitoids were relatively minor, although the tachinid T. pilipes showed potential for large impacts at individual sites. In retrospect, non-target attacks by biological control agents on C. blackburniae were predictable, but the environmental range and magnitude of impacts would have been difficult to foresee.

  20. A singly charged ion source for radioactive ¹¹C ion acceleration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katagiri, K.; Noda, A.; Nagatsu, K.

    2016-02-15

    A new singly charged ion source using electron impact ionization has been developed to realize an isotope separation on-line system for simultaneous positron emission tomography imaging and heavy-ion cancer therapy using radioactive ¹¹C ion beams. Low-energy electron beams are used in the electron impact ion source to produce singly charged ions. Ionization efficiency was calculated in order to decide the geometric parameters of the ion source and to determine the required electron emission current for obtaining high ionization efficiency. Based on these considerations, the singly charged ion source was designed and fabricated. In testing, the fabricated ion source was found to have favorable performance as a singly charged ion source.

  1. Dosimetric characterizations of GZP6 60Co high dose rate brachytherapy sources: application of superimposition method

    PubMed Central

    Bahreyni Toossi, Mohammad Taghi; Ghorbani, Mahdi; Mowlavi, Ali Asghar; Meigooni, Ali Soleimani

    2012-01-01

    Background: Dosimetric characteristics of a high dose rate (HDR) GZP6 Co-60 brachytherapy source have been evaluated following the American Association of Physicists in Medicine Task Group 43U1 (AAPM TG-43U1) recommendations for their clinical applications. Materials and methods: The MCNP-4C and MCNPX Monte Carlo codes were utilized to calculate the dose rate constant, two-dimensional (2D) dose distribution, radial dose function and 2D anisotropy function of the source. These parameters of this source are compared with the available data for the Ralstron ⁶⁰Co and microSelectron ¹⁹²Ir sources. In addition, a superimposition method was developed to extend the results obtained for GZP6 source No. 3 to the other GZP6 sources. Results: The simulated value of the dose rate constant for the GZP6 source was 1.104±0.03 cGy h⁻¹ U⁻¹. The graphical and tabulated radial dose function and 2D anisotropy function of this source are presented here. The results of these investigations show that the dosimetric parameters of the GZP6 source are comparable to those for the Ralstron source. While the dose rate constants for the two ⁶⁰Co sources are similar to that for the microSelectron ¹⁹²Ir source, there are differences between the radial dose functions and anisotropy functions: the radial dose function of the ¹⁹²Ir source is less steep than that of both ⁶⁰Co source models, and the ⁶⁰Co sources show a more isotropic dose distribution than the ¹⁹²Ir source. Conclusions: The superimposition method is applicable for producing dose distributions for other source arrangements from the dose distribution of a single source. The calculated dosimetric quantities of this new source can be introduced as input data to the GZP6 treatment planning system (TPS) and used to validate the performance of the TPS. PMID:23077455
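    The superimposition idea itself is simple: once the single-source dose distribution is tabulated on a grid, a multi-source arrangement is obtained by shifting that grid to each source position, weighting each copy, and summing. A hedged sketch with placeholder grid spacing and weights (changes of source orientation are not modelled):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def superimpose_dose(single_source_dose, offsets_mm, weights, voxel_mm=1.0):
    """Sum shifted copies of a single-source dose grid to build the dose of a
    multi-source arrangement. `offsets_mm` are (dz, dy, dx) source positions
    relative to the tabulated source; linear interpolation handles sub-voxel
    shifts. Illustrative only."""
    total = np.zeros(single_source_dose.shape, dtype=float)
    for offset, w in zip(offsets_mm, weights):
        shift_vox = np.asarray(offset, dtype=float) / voxel_mm
        total += w * nd_shift(single_source_dose, shift_vox, order=1, mode="constant")
    return total
```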

  2. Capturing domain knowledge from multiple sources: the rare bone disorders use case.

    PubMed

    Groza, Tudor; Tudorache, Tania; Robinson, Peter N; Zankl, Andreas

    2015-01-01

    Lately, ontologies have become a fundamental building block in the process of formalising and storing complex biomedical information. The community-driven ontology curation process, however, ignores the possibility of multiple communities building, in parallel, conceptualisations of the same domain, and thus providing slightly different perspectives on the same knowledge. The individual nature of this effort leads to the need of a mechanism to enable us to create an overarching and comprehensive overview of the different perspectives on the domain knowledge. We introduce an approach that enables the loose integration of knowledge emerging from diverse sources under a single coherent interoperable resource. To accurately track the original knowledge statements, we record the provenance at very granular levels. We exemplify the approach in the rare bone disorders domain by proposing the Rare Bone Disorders Ontology (RBDO). Using RBDO, researchers are able to answer queries, such as: "What phenotypes describe a particular disorder and are common to all sources?" or to understand similarities between disorders based on divergent groupings (classifications) provided by the underlying sources. RBDO is available at http://purl.org/skeletome/rbdo. In order to support lightweight query and integration, the knowledge captured by RBDO has also been made available as a SPARQL Endpoint at http://bio-lark.org/se_skeldys.html.

  3. Convergent cross-mapping and pairwise asymmetric inference.

    PubMed

    McCracken, James M; Weigel, Robert S

    2014-12-01

    Convergent cross-mapping (CCM) is a technique for computing specific kinds of correlations between sets of time series. It was introduced by Sugihara et al. [Science 338, 496 (2012)] and is reported to be "a necessary condition for causation" capable of distinguishing causality from standard correlation. We show that the relationships between CCM correlations proposed by Sugihara et al. do not, in general, agree with intuitive concepts of "driving" and as such should not be considered indicative of causality. For simple linear and nonlinear systems, whether the CCM algorithm implies causality turns out to be a function of system parameters. For example, in a circuit containing a single resistor and inductor, both voltage and current can be identified as the driver depending on the frequency of the source voltage. We show, however, that the CCM algorithm can be modified to identify relationships between pairs of time series that are consistent with intuition for the example systems for which CCM causality analysis provided nonintuitive driver identifications. This modification of the CCM algorithm is introduced as "pairwise asymmetric inference" (PAI), and examples of its use are presented.
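    For orientation, the sketch below implements the core of CCM: delay-embed one series, find the nearest neighbours of each embedded point, and use their time indices with exponential weights to cross-map the other series; the cross-map skill is the correlation between cross-mapped and observed values. Embedding dimension, delay, and neighbour count are illustrative choices.

```python
import numpy as np

def delay_embed(x, dim=3, tau=1):
    """Shadow-manifold reconstruction: rows are [x(t), x(t-tau), ..., x(t-(dim-1)*tau)]."""
    x = np.asarray(x, float)
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - j) * tau:(dim - 1 - j) * tau + n]
                            for j in range(dim)])

def ccm_skill(x, y, dim=3, tau=1):
    """Cross-map y from the shadow manifold of x; returns the Pearson
    correlation between cross-mapped and observed y (the 'skill')."""
    y = np.asarray(y, float)
    Mx = delay_embed(x, dim, tau)
    y_target = y[(dim - 1) * tau:]
    k = dim + 1                                        # simplex: dim+1 neighbours
    preds = np.empty(len(Mx))
    for i, point in enumerate(Mx):
        d = np.linalg.norm(Mx - point, axis=1)
        d[i] = np.inf                                  # exclude the point itself
        nn = np.argsort(d)[:k]
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))      # exponential distance weights
        preds[i] = np.sum(w * y_target[nn]) / np.sum(w)
    return np.corrcoef(preds, y_target)[0, 1]
```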

  4. [Watershed water environment pollution models and their applications: a review].

    PubMed

    Zhu, Yao; Liang, Zhi-Wei; Li, Wei; Yang, Yi; Yang, Mu-Yi; Mao, Wei; Xu, Han-Li; Wu, Wei-Xiang

    2013-10-01

    Watershed water environment pollution models are important tools for studying watershed environmental problems. Through quantitative description of the complicated pollution processes of the whole watershed system and its parts, such models can identify the main sources and migration pathways of pollutants, estimate pollutant loadings, and evaluate their impacts on the water environment, providing a basis for watershed planning and management. This paper reviewed the watershed water environment models widely applied at home and abroad, with a focus on models of pollutant loading (GWLF and PLOAD), models of the water quality of receiving water bodies (QUAL2E and WASP), and integrated watershed models combining pollutant loading and water quality (HSPF, SWAT, AGNPS, AnnAGNPS, and SWMM), and introduced the structures, principles, and main characteristics of these models as well as their limitations in practical applications. Other water quality models (CE-QUAL-W2, EFDC, and AQUATOX) and watershed models (GLEAMS and MIKE SHE) were also briefly introduced. Through case analyses of the application of single and integrated models, the development trends and application prospects of watershed water environment pollution models were discussed.

  5. Phonon Conduction in Silicon Nanobeam Labyrinths

    DOE PAGES

    Park, Woosung; Romano, Giuseppe; Ahn, Ethan C.; ...

    2017-07-24

    Here we study single-crystalline silicon nanobeams having 470 nm width and 80 nm thickness cross section, where we produce tortuous thermal paths (i.e. labyrinths) by introducing slits to control the impact of the unobstructed "line-of-sight" (LOS) between the heat source and heat sink. The labyrinths range from straight nanobeams with a complete LOS along the entire length to nanobeams in which the LOS ranges from partially to entirely blocked by introducing slits, s = 95, 195, 245, 295 and 395 nm. The measured thermal conductivity of the samples decreases monotonically from ~47 W m⁻¹ K⁻¹ for the straight beam to ~31 W m⁻¹ K⁻¹ for the 395 nm slit width. A model prediction through a combination of the Boltzmann transport equation and ab initio calculations shows excellent agreement with the experimental data, to within ~8%. The model prediction for the most tortuous path (s = 395 nm) is reduced by ~14% compared to a straight beam of equivalent cross section. This study suggests that LOS is an important metric for characterizing and interpreting phonon propagation in nanostructures.

  6. GRAVIDY, a GPU modular, parallel direct-summation N-body integrator: dynamics with softening

    NASA Astrophysics Data System (ADS)

    Maureira-Fredes, Cristián; Amaro-Seoane, Pau

    2018-01-01

    A wide variety of outstanding problems in astrophysics involve the motion of a large number of particles under the force of gravity. These include the global evolution of globular clusters, tidal disruptions of stars by a massive black hole, the formation of protoplanets and sources of gravitational radiation. The direct-summation of N gravitational forces is a complex problem with no analytical solution and can only be tackled with approximations and numerical methods. To this end, the Hermite scheme is a widely used integration method. With different numerical techniques and special-purpose hardware, it can be used to speed up the calculations. But these methods tend to be computationally slow and cumbersome to work with. We present a new graphics processing unit (GPU), direct-summation N-body integrator written from scratch and based on this scheme, which includes relativistic corrections for sources of gravitational radiation. GRAVIDY has high modularity, allowing users to readily introduce new physics, it exploits available computational resources and will be maintained by regular updates. GRAVIDY can be used in parallel on multiple CPUs and GPUs, with a considerable speed-up benefit. The single-GPU version is between one and two orders of magnitude faster than the single-CPU version. A test run using four GPUs in parallel shows a speed-up factor of about 3 as compared to the single-GPU version. The conception and design of this first release is aimed at users with access to traditional parallel CPU clusters or computational nodes with one or a few GPU cards.
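    The kernel at the heart of any direct-summation integrator is the O(N²) pairwise force sum with Plummer softening; a vectorized NumPy sketch is shown below with a simple leapfrog step as a usage example. The Hermite predictor-corrector, relativistic corrections, and GPU offload of GRAVIDY are beyond this snippet, and G = 1 units are assumed.

```python
import numpy as np

def accelerations(pos, mass, softening=1e-3):
    """Direct-summation gravitational accelerations with Plummer softening.
    pos: (N, 3) positions; mass: (N,) masses; G = 1 units."""
    dx = pos[None, :, :] - pos[:, None, :]         # displacement vectors r_j - r_i
    r2 = np.sum(dx * dx, axis=-1) + softening**2   # softened squared distances
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                  # no self-interaction
    return np.sum(mass[None, :, None] * dx * inv_r3[:, :, None], axis=1)

def kick_drift_kick(pos, vel, mass, dt):
    """One leapfrog step as a usage example (not GRAVIDY's Hermite scheme)."""
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    pos = pos + dt * vel
    vel = vel + 0.5 * dt * accelerations(pos, mass)
    return pos, vel
```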

  7. Single-Molecule Transistor from Graphene Nanoelectrodes and Novel Functional Materials From Self-assembly

    NASA Astrophysics Data System (ADS)

    Xu, Qizhi

    This thesis introduces a new strategy to fabricate single-molecule transistors by utilizing covalent chemistry to reconnect a molecule across an electroburnt graphene nanogap. We studied the effect of coupling chemistry and molecular length on the efficiency of reconnection between the molecule and the graphene. With this technique, we are also able to observe the Coulomb blockade phenomenon, which is a characteristic of single-electron transistors. The high yield and versatility of this approach augur well for creating a new generation of sensors, switches, and other functional devices using graphene contacts. This thesis also introduces a new type of organic single-crystal p-n heterojunction inspired by the ball-and-socket shape complementarity between fullerene and contorted dibenzotetrathienocoronene (c-DBTTC). We studied the influence of temperature, pressure, and time on the self-assembly process of contorted dibenzotetrathienocoronene on the as-grown fullerene crystals. We also utilized fluorescence microscopy to investigate the charge transfer in this type of p-n heterojunction. Finally, this thesis introduces one-dimensional and two-dimensional programming in solid-state materials from superatom macrocycles. We find that the linkers that bridge the two superatoms determine the distance and electronic coupling between the two superatoms in the macrocycle, which in turn determine the way they self-assemble in the solid-state materials. The thesis is composed of four chapters. The first chapter introduces why we are interested in molecular transistors and new functional materials, and what has been done so far. The second chapter describes the approach we developed to assemble single molecules into circuits with graphene electrodes. The third chapter details the method to fabricate the organic single-crystal C60-DBTTC p-n heterojunction, which is of great importance for understanding their charge transfer process. The last chapter introduces a new series of superatom macrocycles and their self-assembly into solid-state materials with the electron acceptor tetracyanoethylene.

  8. SOFIA: a flexible source finder for 3D spectral line data

    NASA Astrophysics Data System (ADS)

    Serra, Paolo; Westmeier, Tobias; Giese, Nadine; Jurek, Russell; Flöer, Lars; Popping, Attila; Winkel, Benjamin; van der Hulst, Thijs; Meyer, Martin; Koribalski, Bärbel S.; Staveley-Smith, Lister; Courtois, Hélène

    2015-04-01

    We introduce SOFIA, a flexible software application for the detection and parametrization of sources in 3D spectral line data sets. SOFIA combines for the first time in a single piece of software a set of new source-finding and parametrization algorithms developed on the way to future H I surveys with ASKAP (WALLABY, DINGO) and APERTIF. It is designed to enable the general use of these new algorithms by the community on a broad range of data sets. The key advantages of SOFIA are the ability to: search for line emission on multiple scales to detect 3D sources in a complete and reliable way, taking into account noise level variations and the presence of artefacts in a data cube; estimate the reliability of individual detections; look for signal in arbitrarily large data cubes using a catalogue of 3D coordinates as a prior; provide a wide range of source parameters and output products which facilitate further analysis by the user. We highlight the modularity of SOFIA, which makes it a flexible package allowing users to select and apply only the algorithms useful for their data and science questions. This modularity makes it also possible to easily expand SOFIA in order to include additional methods as they become available. The full SOFIA distribution, including a dedicated graphical user interface, is publicly available for download.
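
    In spirit, the multi-scale search smooths the cube with kernels of several sizes and flags voxels that exceed a threshold relative to the noise of each smoothed version. The sketch below is only a schematic of that idea, using scipy's Gaussian filter, arbitrary kernel scales, and a 4-sigma cut; it does not reproduce SOFIA's actual algorithms, defaults, or reliability estimation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_and_clip(cube, scales=(0, 1, 2, 4), nsigma=4.0):
    """Schematic multi-scale source finding on a 3D spectral-line cube.

    A voxel enters the detection mask if it exceeds nsigma times a robust
    noise estimate of the cube smoothed at any of the trial scales.
    """
    mask = np.zeros(cube.shape, dtype=bool)
    for s in scales:
        smoothed = gaussian_filter(cube, sigma=s) if s > 0 else cube
        # Robust noise estimate from the median absolute deviation
        sigma = 1.4826 * np.median(np.abs(smoothed - np.median(smoothed)))
        mask |= np.abs(smoothed) > nsigma * sigma
    return mask

# Toy usage: pure-noise cube plus one faint extended source
rng = np.random.default_rng(1)
cube = rng.normal(size=(64, 64, 64))
cube[28:36, 28:36, 28:36] += 1.5
detections = smooth_and_clip(cube)
```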

  9. Conservation of eelgrass (Zostera marina) genetic diversity in a mesocosm-based restoration experiment.

    PubMed

    Ort, Brian S; Cohen, C Sarah; Boyer, Katharyn E; Reynolds, Laura K; Tam, Sheh May; Wyllie-Echeverria, Sandy

    2014-01-01

    Eelgrass (Zostera marina) forms the foundation of an important shallow coastal community in protected estuaries and bays. Widespread population declines have stimulated restoration efforts, but these have often overlooked the importance of maintaining the evolutionary potential of restored populations by minimizing the reduction in genetic diversity that typically accompanies restoration. In an experiment simulating a small-scale restoration, we tested the effectiveness of a buoy-deployed seeding technique to maintain genetic diversity comparable to the seed source populations. Seeds from three extant source populations in San Francisco Bay were introduced into eighteen flow-through baywater mesocosms. Following seedling establishment, we used seven polymorphic microsatellite loci to compare genetic diversity indices from 128 shoots to those found in the source populations. Importantly, allelic richness and expected heterozygosity were not significantly reduced in the mesocosms, which also preserved the strong population differentiation present among source populations. However, the inbreeding coefficient F_IS was elevated in two of the three sets of mesocosms when they were grouped according to their source population. This is probably a Wahlund effect from confining all half-siblings within each spathe to a single mesocosm, elevating F_IS when the mesocosms were considered together. The conservation of most alleles and preservation of expected heterozygosity suggests that this seeding technique is an improvement over whole-shoot transplantation in the conservation of genetic diversity in eelgrass restoration efforts.

  10. Conservation of Eelgrass (Zostera marina) Genetic Diversity in a Mesocosm-Based Restoration Experiment

    PubMed Central

    Ort, Brian S.; Cohen, C. Sarah; Boyer, Katharyn E.; Reynolds, Laura K.; Tam, Sheh May; Wyllie-Echeverria, Sandy

    2014-01-01

    Eelgrass (Zostera marina) forms the foundation of an important shallow coastal community in protected estuaries and bays. Widespread population declines have stimulated restoration efforts, but these have often overlooked the importance of maintaining the evolutionary potential of restored populations by minimizing the reduction in genetic diversity that typically accompanies restoration. In an experiment simulating a small-scale restoration, we tested the effectiveness of a buoy-deployed seeding technique to maintain genetic diversity comparable to the seed source populations. Seeds from three extant source populations in San Francisco Bay were introduced into eighteen flow-through baywater mesocosms. Following seedling establishment, we used seven polymorphic microsatellite loci to compare genetic diversity indices from 128 shoots to those found in the source populations. Importantly, allelic richness and expected heterozygosity were not significantly reduced in the mesocosms, which also preserved the strong population differentiation present among source populations. However, the inbreeding coefficient F_IS was elevated in two of the three sets of mesocosms when they were grouped according to their source population. This is probably a Wahlund effect from confining all half-siblings within each spathe to a single mesocosm, elevating F_IS when the mesocosms were considered together. The conservation of most alleles and preservation of expected heterozygosity suggests that this seeding technique is an improvement over whole-shoot transplantation in the conservation of genetic diversity in eelgrass restoration efforts. PMID:24586683

  11. Photon-HDF5: An Open File Format for Timestamp-Based Single-Molecule Fluorescence Experiments.

    PubMed

    Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier

    2016-01-05

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode, photomultiplier tube, or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
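
    As a minimal illustration of consuming such a file, the sketch below reads the basic photon stream with h5py. The group and field names follow the published Photon-HDF5 layout (photon_data/timestamps, detectors, timestamps_specs/timestamps_unit) and may need adjusting, for example for files containing multiple photon_data groups; the file name is hypothetical.

```python
import h5py

def read_photon_data(path):
    """Read the basic photon stream from a Photon-HDF5 file with h5py.

    Field paths follow the published Photon-HDF5 layout; adjust them if
    the file stores several photon_data groups.
    """
    with h5py.File(path, "r") as f:
        timestamps = f["/photon_data/timestamps"][:]
        detectors = f["/photon_data/detectors"][:]
        unit = f["/photon_data/timestamps_specs/timestamps_unit"][()]
    return timestamps * unit, detectors   # timestamps converted to seconds

# Example usage (hypothetical file name):
# times_s, channels = read_photon_data("measurement.hdf5")
```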

  12. Photon-HDF5: An Open File Format for Timestamp-Based Single-Molecule Fluorescence Experiments

    PubMed Central

    Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier

    2016-01-01

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode, photomultiplier tube, or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license. PMID:26745406

  13. Photon-HDF5: an open file format for single-molecule fluorescence experiments using photon-counting detectors

    DOE PAGES

    Ingargiola, A.; Laurence, T. A.; Boutelle, R.; ...

    2015-12-23

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long-term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode (SPAD), photomultiplier tube (PMT) or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference Python library (phconvert) to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. To encourage adoption by the academic and commercial communities, all software is released under the MIT open source license.

  14. Novel techniques for characterization of hydrocarbon emission sources in the Barnett Shale

    NASA Astrophysics Data System (ADS)

    Nathan, Brian Joseph

    Changes in ambient atmospheric hydrocarbon concentrations can have both short-term and long-term effects on the atmosphere and on human health. Thus, accurate characterization of emissions sources is critically important. The recent boom in shale gas production has led to an increase in hydrocarbon emissions from associated processes, though the exact extent is uncertain. As an original quantification technique, multiple flights of a model airplane equipped with a specially designed, open-path methane sensor were made over a natural gas compressor station in the Barnett Shale in October 2013. A linear optimization was introduced to a standard Gaussian plume model in an effort to determine the most probable emission rate coming from the station. This is shown to be a suitable approach given an ideal source with a single, central plume. Separately, an analysis was performed to characterize the nonmethane hydrocarbons in the Barnett during the same period. Starting with ambient hourly concentration measurements of forty-six hydrocarbon species, Lagrangian air parcel trajectories were implemented in a meteorological model to extend the resolution of these measurements and achieve domain-fillings of the region for the period of interest. A self-organizing map (a type of unsupervised classification) was then utilized to reduce the dimensionality of the total multivariate set of grids into characteristic one-dimensional signatures. By also introducing a self-organizing map classification of the contemporaneous wind measurements, the spatial hydrocarbon characterizations are analyzed for periods with similar wind conditions. The accuracy of the classification is verified through assessment of observed spatial mixing ratio enhancements of key species, through site comparisons with a related long-term study, and through a random forest analysis (an ensemble learning method of supervised classification) to determine the most important species for defining key classes. The hydrocarbon classification is shown to have performed very well in identifying expected signatures near and downwind of oil and gas facilities with active permits, which showcases this method's usefulness for future regional hydrocarbon source-apportionment analyses.
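
    Because the Gaussian plume concentration is linear in the emission rate Q, the most probable rate can be obtained by a simple least-squares fit of the unit-emission plume to the observed concentrations. The sketch below shows that generic step with crude, illustrative dispersion coefficients and synthetic observations; it is not the author's optimization code.

```python
import numpy as np

def plume_unit_concentration(x, y, z, u, H, sigma_y, sigma_z):
    """Gaussian plume concentration per unit emission rate (Q = 1).

    x downwind, y crosswind, z height [m]; u wind speed [m/s]; H release height.
    sigma_y, sigma_z are callables giving dispersion widths versus x; here they
    are placeholders for a proper stability-class parameterization.
    """
    sy, sz = sigma_y(x), sigma_z(x)
    return (1.0 / (2 * np.pi * u * sy * sz)
            * np.exp(-y**2 / (2 * sy**2))
            * (np.exp(-(z - H)**2 / (2 * sz**2))
               + np.exp(-(z + H)**2 / (2 * sz**2))))   # ground reflection term

def fit_emission_rate(obs, x, y, z, u, H, sigma_y, sigma_z):
    """Least-squares emission rate: the model is linear in Q."""
    unit = plume_unit_concentration(x, y, z, u, H, sigma_y, sigma_z)
    return float(np.dot(unit, obs) / np.dot(unit, unit))

# Toy usage with crude power-law dispersion coefficients (illustrative only)
sigma_y = lambda x: 0.08 * x / np.sqrt(1 + 0.0001 * x)
sigma_z = lambda x: 0.06 * x / np.sqrt(1 + 0.0015 * x)
x = np.linspace(50, 500, 40); y = np.zeros_like(x); z = np.full_like(x, 30.0)
truth = 0.5  # kg/s
obs = truth * plume_unit_concentration(x, y, z, u=4.0, H=5.0,
                                        sigma_y=sigma_y, sigma_z=sigma_z)
q_hat = fit_emission_rate(obs, x, y, z, u=4.0, H=5.0,
                          sigma_y=sigma_y, sigma_z=sigma_z)
```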

  15. TMSEEG: A MATLAB-Based Graphical User Interface for Processing Electrophysiological Signals during Transcranial Magnetic Stimulation.

    PubMed

    Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C; Wong, Willy; Daskalakis, Zafiris J; Farzan, Faranak

    2016-01-01

    Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEPs). With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application comprised of multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols. As the first open-source GUI-based pipeline for TMS-EEG signal processing, this toolbox intends to promote the widespread utility and standardization of an emerging technology in brain research.

  16. TMSEEG: A MATLAB-Based Graphical User Interface for Processing Electrophysiological Signals during Transcranial Magnetic Stimulation

    PubMed Central

    Atluri, Sravya; Frehlich, Matthew; Mei, Ye; Garcia Dominguez, Luis; Rogasch, Nigel C.; Wong, Willy; Daskalakis, Zafiris J.; Farzan, Faranak

    2016-01-01

    Concurrent recording of electroencephalography (EEG) during transcranial magnetic stimulation (TMS) is an emerging and powerful tool for studying brain health and function. Despite a growing interest in adaptation of TMS-EEG across neuroscience disciplines, its widespread utility is limited by signal processing challenges. These challenges arise due to the nature of TMS and the sensitivity of EEG to artifacts that often mask TMS-evoked potentials (TEPs). With an increase in the complexity of data processing methods and a growing interest in multi-site data integration, analysis of TMS-EEG data requires the development of a standardized method to recover TEPs from various sources of artifacts. This article introduces TMSEEG, an open-source MATLAB application comprised of multiple algorithms organized to facilitate a step-by-step procedure for TMS-EEG signal processing. Using a modular design and interactive graphical user interface (GUI), this toolbox aims to streamline TMS-EEG signal processing for both novice and experienced users. Specifically, TMSEEG provides: (i) targeted removal of TMS-induced and general EEG artifacts; (ii) a step-by-step modular workflow with flexibility to modify existing algorithms and add customized algorithms; (iii) a comprehensive display and quantification of artifacts; (iv) quality control check points with visual feedback of TEPs throughout the data processing workflow; and (v) capability to label and store a database of artifacts. In addition to these features, the software architecture of TMSEEG ensures minimal user effort in initial setup and configuration of parameters for each processing step. This is partly accomplished through a close integration with EEGLAB, a widely used open-source toolbox for EEG signal processing. In this article, we introduce TMSEEG, validate its features and demonstrate its application in extracting TEPs across several single- and multi-pulse TMS protocols. As the first open-source GUI-based pipeline for TMS-EEG signal processing, this toolbox intends to promote the widespread utility and standardization of an emerging technology in brain research. PMID:27774054

  17. Study of a new design of P-N semiconductor detector array for nuclear medicine imaging by Monte Carlo simulation codes.

    PubMed

    Hajizadeh-Safar, M; Ghorbani, M; Khoshkharam, S; Ashrafi, Z

    2014-07-01

    The gamma camera is an important apparatus in nuclear medicine imaging. Its detection part consists of a scintillation detector with a heavy collimator. The substitution of semiconductor detectors for the scintillator in these cameras has been studied effectively. This study aims to introduce a new design of P-N semiconductor detector array for nuclear medicine imaging. A P-N semiconductor detector composed of N-SnO2:F and P-NiO:Li was introduced through simulation with the MCNPX Monte Carlo code. Its sensitivity to factors such as thickness, dimension, and the direction of emitted photons was investigated. It was then used to configure a new one-dimensional array design and to study its spatial resolution for nuclear medicine imaging. A one-dimensional array with 39 detectors was simulated to measure a predefined linear distribution of Tc-99m activity and its spatial resolution. The activity distribution was calculated from the detector responses through mathematical linear optimization using the LINPROG code in MATLAB. Three different configurations of the one-dimensional detector array were simulated: horizontal, vertical single-sided, and vertical double-sided. In all of these configurations, the energy windows around the photopeak were ± 1%. The results show that the detector response increases with detector dimension and thickness, with the highest sensitivity for photons emitted 15-30° above the surface. The horizontal array configuration is not suitable for imaging line activity sources. The activity distribution measured with the vertical double-sided array configuration bears no similarity to the emission sources and hence is not suitable for imaging purposes. The activity distribution measured with the vertical single-sided array configuration agrees well with the sources, and it could therefore be introduced as a suitable configuration for nuclear medicine imaging. It has been shown that using P-N semiconductor detectors such as P-NiO:Li and N-SnO2:F for gamma detection could be applicable to the design of a one-dimensional array configuration with a suitable spatial resolution of 2.7 mm for nuclear medicine imaging.
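
    The reconstruction step described above, recovering the activity distribution from detector responses by linear optimization, can be sketched generically as a linear program that minimizes the L1 misfit of a response-matrix model subject to non-negative activities. The response matrix and counts below are synthetic, and scipy's linprog stands in for MATLAB's LINPROG; this is not the authors' code.

```python
import numpy as np
from scipy.optimize import linprog

def reconstruct_activity(A, counts):
    """Recover a non-negative activity distribution x from detector counts d,
    with d ~ A @ x, by minimizing the L1 residual ||A x - d||_1 as a linear
    program (a generic stand-in for the LINPROG step described above)."""
    n_det, n_pix = A.shape
    # Decision variables: [x (n_pix), t (n_det)]; minimize sum(t)
    c = np.concatenate([np.zeros(n_pix), np.ones(n_det)])
    #   A x - d <= t   ->   A x - t <= d
    # -(A x - d) <= t   ->  -A x - t <= -d
    A_ub = np.block([[A, -np.eye(n_det)],
                     [-A, -np.eye(n_det)]])
    b_ub = np.concatenate([counts, -counts])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return res.x[:n_pix]

# Toy usage: 39 detectors viewing 20 source pixels
rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(39, 20))          # synthetic response matrix
x_true = np.zeros(20); x_true[8:12] = [1.0, 2.0, 2.0, 1.0]
counts = A @ x_true
x_hat = reconstruct_activity(A, counts)
```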

  18. Population genomic analyses reveal a history of range expansion and trait evolution across the native and invaded range of yellow starthistle (Centaurea solstitialis)

    PubMed Central

    BARKER, BRITTANY S.; ANDONIAN, KRIKOR; SWOPE, SARAH M.; LUSTER, DOUGLAS G.; DLUGOSCH, KATRINA M.

    2017-01-01

    Identifying sources of genetic variation and reconstructing invasion routes for non-native introduced species is central to understanding the circumstances under which they may evolve increased invasiveness. In this study, we used genome-wide single nucleotide polymorphisms to study the colonization history of Centaurea solstitialis in its native range in Eurasia and invasions into the Americas. We leveraged this information to pinpoint key evolutionary shifts in plant size, a focal trait associated with invasiveness in this species. Our analyses revealed clear population genomic structure of potential source populations in Eurasia, including deep differentiation of a lineage found in the southern Apennine and Balkan Peninsulas and divergence among populations in Asia, eastern Europe, and western Europe. We found strongest support for an evolutionary scenario in which western European populations were derived from an ancient admixture event between populations from eastern Europe and Asia, and subsequently served as the main genetic ‘bridgehead’ for introductions to the Americas. Introductions to California appear to be from a single source region, and multiple, independent introductions of divergent genotypes likely occurred into the Pacific Northwest. Plant size has evolved significantly at three points during range expansion, including a large size increase in the lineage responsible for the aggressive invasion of California’s interior. These results reveal a long history of colonization, admixture, and trait evolution in C. solstitialis, and suggest routes for improving evidence-based management decisions for one of the most ecologically and economically damaging invasive species in the western United States. PMID:28029713

  19. Optimization of knowledge sharing through multi-forum using cloud computing architecture

    NASA Astrophysics Data System (ADS)

    Madapusi Vasudevan, Sriram; Sankaran, Srivatsan; Muthuswamy, Shanmugasundaram; Ram, N. Sankar

    2011-12-01

    Knowledge sharing is done through various knowledge sharing forums, which requires multiple logins through multiple browser instances. Here, a single Multi-Forum knowledge sharing concept is introduced that requires only one login session, allowing the user to connect to multiple forums and display the data in a single browser window. A few optimization techniques are also introduced to speed up access time using a cloud computing architecture.

  20. Characterization of superconducting nanowire single-photon detector with artificial constrictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Ling; Liu, Dengkuan; Wu, Junjie

    2014-06-15

    Statistical studies on the performance of different superconducting nanowire single-photon detectors (SNSPDs) on one chip suggested that random constrictions existed in the nanowire that were barely registered by scanning electron microscopy. With the aid of advanced e-beam lithography, artificial geometric constrictions were fabricated on SNSPDs as well as on single nanowires. In this way, we studied the influence of artificial constrictions on SNSPDs in a straightforward manner. By introducing artificial constrictions with different wire widths in single nanowires, we concluded that the dark counts of SNSPDs originate from a single constriction. Further introducing artificial constrictions in SNSPDs, we studied the relationship between detection efficiency, kinetic inductance, and the bias current, confirming the hypothesis that constrictions exist in SNSPDs.

  1. Evaluating Web Sources in an EAP Course: Introducing a Multi-Trait Instrument for Feedback and Assessment

    ERIC Educational Resources Information Center

    Stapleton, Paul; Helms-Park, Rena

    2006-01-01

    This paper introduces the Website Acceptability Tiered Checklist (WATCH), a preliminary version of a multi-trait scale that could be used by instructors and students to assess the quality of websites chosen as source materials in students' research papers in a Humanities program. The scale includes bands for assessing: (i) the authority and…

  2. Single-particle states vs. collective modes: friends or enemies ?

    NASA Astrophysics Data System (ADS)

    Otsuka, T.; Tsunoda, Y.; Togashi, T.; Shimizu, N.; Abe, T.

    2018-05-01

    Quantum self-organization is introduced as one of the major underlying mechanisms of quantum many-body systems. In atomic nuclei, taken as an example, two types of nucleon motion, single-particle states and collective modes, dominate the structure of the nucleus. The collective mode arises as the balance between the effect of the mode-driving force (e.g., the quadrupole force for ellipsoidal deformation) and the resistance against it. The single-particle energies are one source of this resistance: a coherent collective motion is more strongly hindered by larger spacings between the relevant single-particle states. Thus, the single-particle states and the collective modes are "enemies" of each other. However, the nuclear forces are rich enough to enhance the relevant collective mode by reducing the resistance, changing the single-particle energies for each eigenstate through monopole interactions. This is verified with a concrete example taken from Zr isotopes. Thus, quantum self-organization occurs: single-particle energies can be self-organized by (i) two quantum liquids, e.g., protons and neutrons, and (ii) the monopole interaction (to control the resistance). In other words, atomic nuclei are not necessarily like simple rigid vases containing almost free nucleons, in contrast to the naïve Fermi liquid picture. Type II shell evolution is considered to be a simple visible case involving excitations across a (sub)magic gap. Quantum self-organization becomes more important in heavier nuclei, where the number of active orbits and the number of active nucleons are larger.

  3. Optimized Biasing of Pump Laser Diodes in a Highly Reliable Metrology Source for Long-Duration Space Missions

    NASA Technical Reports Server (NTRS)

    Poberezhskiy, Ilya Y; Chang, Daniel H.; Erlig, Herman

    2011-01-01

    Optical metrology system reliability during a prolonged space mission is often limited by the reliability of pump laser diodes. We developed a metrology laser pump module architecture that meets NASA SIM Lite instrument optical power and reliability requirements by combining the outputs of multiple single-mode pump diodes in a low-loss, high port count fiber coupler. We describe Monte-Carlo simulations used to calculate the reliability of the laser pump module and introduce a combined laser farm aging parameter that serves as a load-sharing optimization metric. Employing these tools, we select pump module architecture, operating conditions, biasing approach and perform parameter sensitivity studies to investigate the robustness of the obtained solution.
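
    A toy version of the kind of Monte-Carlo reliability estimate described above draws random diode lifetimes and asks how often enough pumps survive the mission for the combined output to meet the requirement. The k-of-n survival criterion, MTTF, and mission duration below are illustrative assumptions only, not SIM Lite parameters, and the real analysis also models load sharing and aging.

```python
import numpy as np

def pump_module_reliability(n_diodes=8, k_required=5, mttf_hours=5.0e4,
                            mission_hours=4.4e4, n_trials=200_000, seed=0):
    """Monte-Carlo reliability of a pump module that combines n_diodes lasers
    and still meets its power requirement if at least k_required survive.
    Lifetimes are drawn as exponential with the given MTTF (illustrative
    numbers only)."""
    rng = np.random.default_rng(seed)
    lifetimes = rng.exponential(mttf_hours, size=(n_trials, n_diodes))
    survivors = (lifetimes > mission_hours).sum(axis=1)
    return (survivors >= k_required).mean()

print(pump_module_reliability())
```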

  4. Development of axisymmetric lattice Boltzmann flux solver for complex multiphase flows

    NASA Astrophysics Data System (ADS)

    Wang, Yan; Shu, Chang; Yang, Li-Ming; Yuan, Hai-Zhuan

    2018-05-01

    This paper presents an axisymmetric lattice Boltzmann flux solver (LBFS) for simulating axisymmetric multiphase flows. In the solver, the two-dimensional (2D) multiphase LBFS is applied to reconstruct macroscopic fluxes excluding axisymmetric effects. Source terms accounting for axisymmetric effects are introduced directly into the governing equations. As compared to conventional axisymmetric multiphase lattice Boltzmann (LB) method, the present solver has the kinetic feature for flux evaluation and avoids complex derivations of external forcing terms. In addition, the present solver also saves considerable computational efforts in comparison with three-dimensional (3D) computations. The capability of the proposed solver in simulating complex multiphase flows is demonstrated by studying single bubble rising in a circular tube. The obtained results compare well with the published data.

  5. On the DEAP-3600 resurfacing

    NASA Astrophysics Data System (ADS)

    Giampa, P.

    2018-01-01

    The DEAP-3600 experiment is a single-phase detector that can hold up to 3600 kg of liquid argon to search for dark matter at SNOLAB in Sudbury, Canada, 6800 ft. underground. The projected sensitivity to the spin-independent WIMP-nucleon cross-section is 10^-46 cm^2 for a WIMP mass of 100 GeV/c^2. One of the primary background sources for the WIMP search is alpha decays occurring on the surface of the experiment, which only deposit a tiny fraction of their energy in the argon. The work reported here focuses on the development and operation of a custom-designed robot, the Resurfacer, which was used to remove 500 micrometers from the innermost layer of the detector's acrylic cryostat, thus removing contamination introduced during construction.

  6. 40 CFR 421.65 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Copper... existing sources. The mass of wastewater pollutants in secondary copper process wastewater introduced into...

  7. 40 CFR 421.65 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Copper... existing sources. The mass of wastewater pollutants in secondary copper process wastewater introduced into...

  8. Single-case research design in pediatric psychology: considerations regarding data analysis.

    PubMed

    Cohen, Lindsey L; Feinstein, Amanda; Masuda, Akihiko; Vowles, Kevin E

    2014-03-01

    Single-case research allows for an examination of behavior and can demonstrate the functional relation between intervention and outcome in pediatric psychology. This review highlights key assumptions, methodological and design considerations, and options for data analysis. Single-case methodology and guidelines are reviewed with an in-depth focus on visual and statistical analyses. Guidelines allow for the careful evaluation of design quality and visual analysis. A number of statistical techniques have been introduced to supplement visual analysis, but to date, there is no consensus on their recommended use in single-case research design. Single-case methodology is invaluable for advancing pediatric psychology science and practice, and guidelines have been introduced to enhance the consistency, validity, and reliability of these studies. Experts generally agree that visual inspection is the optimal method of analysis in single-case design; however, statistical approaches are becoming increasingly evaluated and used to augment data interpretation.

  9. Droplet Microfluidics for Compartmentalized Cell Lysis and Extension of DNA from Single-Cells

    NASA Astrophysics Data System (ADS)

    Zimny, Philip; Juncker, David; Reisner, Walter

    Current single cell DNA analysis methods suffer from (i) bias introduced by the need for molecular amplification and (ii) limited ability to sequence repetitive elements, resulting in (iii) an inability to obtain information regarding long range genomic features. Recent efforts to circumvent these limitations rely on techniques for sensing single molecules of DNA extracted from single-cells. Here we demonstrate a droplet microfluidic approach for encapsulation and biochemical processing of single-cells inside alginate microparticles. In our approach, single-cells are first packaged inside the alginate microparticles followed by cell lysis, DNA purification, and labeling steps performed off-chip inside this microparticle system. The alginate microparticles are then introduced inside a micro/nanofluidic system where the alginate is broken down via a chelating buffer, releasing long DNA molecules which are then extended inside nanofluidic channels for analysis via standard mapping protocols.

  10. The NIRCam Optical Telescope Simulator (NOTES)

    NASA Technical Reports Server (NTRS)

    Kubalak, David; Hakun, Claef; Greeley, Bradford; Eichorn, William; Leviton, Douglas; Guishard, Corina; Gong, Qian; Warner, Thomas; Bugby, David; Robinson, Frederick

    2007-01-01

    The Near Infra-Red Camera (NIRCam), the 0.6-5.0 micron imager and wavefront sensing instrument for the James Webb Space Telescope (JWST), will be used on orbit both as a science instrument, and to tune the alignment of the telescope. The NIRCam Optical Telescope Element Simulator (NOTES) will be used during ground testing to provide an external stimulus to verify wavefront error, imaging characteristics, and wavefront sensing performance of this crucial instrument. NOTES is being designed and built by NASA Goddard Space Flight Center with the help of Swales Aerospace and Orbital Sciences Corporation. It is a single-point imaging system that uses an elliptical mirror to form an U20 image of a point source. The point source will be fed via optical fibers from outside the vacuum chamber. A tip/tilt mirror is used to change the chief ray angle of the beam as it passes through the aperture stop and thus steer the image over NIRCam's field of view without moving the pupil or introducing field aberrations. Interchangeable aperture stop elements allow us to simulate perfect JWST wavefronts for wavefront error testing, or introduce transmissive phase plates to simulate a misaligned JWST segmented mirror for wavefront sensing verification. NOTES will be maintained at an operating temperature of 80K during testing using thermal switches, allowing it to operate within the same test chamber as the NIRCam instrument. We discuss NOTES' current design status and on-going development activities.

  11. High order statistical signatures from source-driven measurements of subcritical fissile systems

    NASA Astrophysics Data System (ADS)

    Mattingly, John Kelly

    1998-11-01

    This research focuses on the development and application of high order statistical analyses applied to measurements performed with subcritical fissile systems driven by an introduced neutron source. The signatures presented are derived from counting statistics of the introduced source and radiation detectors that observe the response of the fissile system. It is demonstrated that successively higher order counting statistics possess progressively higher sensitivity to reactivity. Consequently, these signatures are more sensitive to changes in the composition, fissile mass, and configuration of the fissile assembly. Furthermore, it is shown that these techniques are capable of distinguishing the response of the fissile system to the introduced source from its response to any internal or inherent sources. This ability combined with the enhanced sensitivity of higher order signatures indicates that these techniques will be of significant utility in a variety of applications. Potential applications include enhanced radiation signature identification of weapons components for nuclear disarmament and safeguards applications and augmented nondestructive analysis of spent nuclear fuel. In general, these techniques expand present capabilities in the analysis of subcritical measurements.
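
    As a generic illustration of gated counting statistics (not the specific source-correlated signatures developed in this work), the sketch below forms second- and third-order factorial-moment ratios from binned detector event times. For an uncorrelated (Poisson) source both ratios equal 1; multiplication from fission chains raises them, with the higher order the more sensitive.

```python
import numpy as np

def gated_count_statistics(event_times, gate_width, t_max):
    """Second- and third-order factorial-moment ratios of gated counts.

    For a Poisson source the factorial moments satisfy m_k = m_1**k, so both
    returned ratios are 1; correlated fission-chain counting pushes them
    above 1.  Illustrative only.
    """
    edges = np.arange(0.0, t_max + gate_width, gate_width)
    c, _ = np.histogram(event_times, bins=edges)
    c = c.astype(float)
    m1 = c.mean()
    m2 = (c * (c - 1)).mean()
    m3 = (c * (c - 1) * (c - 2)).mean()
    return m2 / m1**2, m3 / m1**3

# Toy usage: an uncorrelated (Poisson) event stream gives ratios close to 1
rng = np.random.default_rng(6)
event_times = np.cumsum(rng.exponential(1e-4, size=200_000))
r2, r3 = gated_count_statistics(event_times, gate_width=1e-3,
                                t_max=event_times[-1])
```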

  12. Polycrystalline ZnO and Mn-doped ZnO nanorod arrays with variable dopant content via a template based synthesis from Zn(II) and Mn(II) Schiff base type single source molecular precursors

    NASA Astrophysics Data System (ADS)

    Pashchanka, Mikhail; Hoffmann, Rudolf C.; Burghaus, Olaf; Corzilius, Björn; Cherkashinin, Gennady; Schneider, Jörg J.

    2011-01-01

    The synthesis and full characterisation of pure and Mn-doped polycrystalline zinc oxide nanorods with tailored dopant content, obtained via a single-source molecular precursor approach using two Schiff base type coordination compounds, are reported. The infiltration of precursor solutions into the cylindrical pores of a polycarbonate template and their thermal conversion into a ceramic green body, followed by dissolution of the template, gives the desired ZnO and Mn-doped ZnO nanomaterial as compact rods. The ZnO nanorods have a mean diameter between 170 and 180 nm or 60-70 nm, depending on the template pore size employed, and a length of 5-6 μm. These nanorods are composed of individual sub-5 nm ZnO nanocrystals. Exact doping of these hierarchically structured ZnO nanorods was achieved by introducing Mn(II) into the ZnO host lattice with the precursor complex Diaquo-bis[2-(methoxyimino)-propanoato]manganese, which allows the exact Mn(II) doping content of the ZnO rods to be tailored. Investigation of the Mn-doped ZnO samples by XRD, TEM, XPS, PL and EPR reveals that manganese occurs exclusively in its +II oxidation state and is distributed within the volume as well as on the surface of the ZnO host.

  13. An Independent Assessment of Anthropogenic Attribution Statements for Recent Extreme Temperature and Rainfall Events

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angélil, Oliver; Stone, Dáithí; Wehner, Michael

    The annual "State of the Climate" report, published in the Bulletin of the American Meteorological Society (BAMS), has included a supplement since 2011 composed of brief analyses of the human influence on recent major extreme weather events. There are now several dozen extreme weather events examined in these supplements, but these studies have all differed in their data sources as well as their approaches to defining the events, analyzing the events, and the consideration of the role of anthropogenic emissions. This study reexamines most of these events using a single analytical approach and a single set of climate model andmore » observational data sources. In response to recent studies recommending the importance of using multiple methods for extreme weather event attribution, results are compared from these analyses to those reported in the BAMS supplements collectively, with the aim of characterizing the degree to which the lack of a common methodological framework may or may not influence overall conclusions. Results are broadly similar to those reported earlier for extreme temperature events but disagree for a number of extreme precipitation events. Based on this, it is advised that the lack of comprehensive uncertainty analysis in recent extreme weather attribution studies is important and should be considered when interpreting results, but as yet it has not introduced a systematic bias across these studies.« less

  14. An Independent Assessment of Anthropogenic Attribution Statements for Recent Extreme Temperature and Rainfall Events

    DOE PAGES

    Angélil, Oliver; Stone, Dáithí; Wehner, Michael; ...

    2016-12-16

    The annual "State of the Climate" report, published in the Bulletin of the American Meteorological Society (BAMS), has included a supplement since 2011 composed of brief analyses of the human influence on recent major extreme weather events. There are now several dozen extreme weather events examined in these supplements, but these studies have all differed in their data sources as well as their approaches to defining the events, analyzing the events, and the consideration of the role of anthropogenic emissions. This study reexamines most of these events using a single analytical approach and a single set of climate model andmore » observational data sources. In response to recent studies recommending the importance of using multiple methods for extreme weather event attribution, results are compared from these analyses to those reported in the BAMS supplements collectively, with the aim of characterizing the degree to which the lack of a common methodological framework may or may not influence overall conclusions. Results are broadly similar to those reported earlier for extreme temperature events but disagree for a number of extreme precipitation events. Based on this, it is advised that the lack of comprehensive uncertainty analysis in recent extreme weather attribution studies is important and should be considered when interpreting results, but as yet it has not introduced a systematic bias across these studies.« less

  15. A phase coherence approach to identifying co-located earthquakes and tremor

    NASA Astrophysics Data System (ADS)

    Hawthorne, J. C.; Ampuero, J.-P.

    2018-05-01

    We present and use a phase coherence approach to identify seismic signals that have similar path effects but different source time functions: co-located earthquakes and tremor. The method used is a phase coherence-based implementation of empirical matched field processing, modified to suit tremor analysis. It works by comparing the frequency-domain phases of waveforms generated by two sources recorded at multiple stations. We first cross-correlate the records of the two sources at a single station. If the sources are co-located, this cross-correlation eliminates the phases of the Green's function. It leaves the relative phases of the source time functions, which should be the same across all stations so long as the spatial extent of the sources is small compared with the seismic wavelength. We therefore search for cross-correlation phases that are consistent across stations as an indication of co-located sources. We also introduce a method to obtain relative locations between the two sources, based on back-projection of interstation phase coherence. We apply this technique to analyse two tremor-like signals that are thought to be composed of a number of earthquakes. First, we analyse a 20 s long seismic precursor to a M 3.9 earthquake in central Alaska. The analysis locates the precursor to within 2 km of the mainshock, and it identifies several bursts of energy—potentially foreshocks or groups of foreshocks—within the precursor. Second, we examine several minutes of volcanic tremor prior to an eruption at Redoubt Volcano. We confirm that the tremor source is located close to repeating earthquakes identified earlier in the tremor sequence. The amplitude of the tremor diminishes about 30 s before the eruption, but the phase coherence results suggest that the tremor may persist at some level through this final interval.
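
    A minimal sketch of the core statistic described above (not the full empirical matched field implementation): cross-correlate the two sources' records station by station in the frequency domain, reduce each cross-spectrum to a unit phasor, and average the phasors across stations; values near 1 indicate phases that are consistent across stations, as expected for co-located sources. The station geometry and data below are synthetic.

```python
import numpy as np

def interstation_phase_coherence(rec_a, rec_b):
    """Phase-coherence statistic for two candidate co-located sources.

    rec_a, rec_b : (n_stations, n_samples) arrays of source A and source B
    recorded at the same stations.  For each station the cross-spectrum
    Xa * conj(Xb) cancels the shared path phases; the statistic is the
    magnitude of the station-averaged unit phasor at each frequency
    (1 = fully consistent across stations).
    """
    Xa = np.fft.rfft(rec_a, axis=1)
    Xb = np.fft.rfft(rec_b, axis=1)
    cross = Xa * np.conj(Xb)
    phasors = cross / np.maximum(np.abs(cross), 1e-12)   # unit phasors
    return np.abs(phasors.mean(axis=0))                  # per-frequency coherence

# Toy usage: two source time functions through identical station "path" filters
rng = np.random.default_rng(3)
n_sta, n = 8, 1024
src = rng.normal(size=n)
paths = rng.normal(size=(n_sta, 32))
rec_a = np.array([np.convolve(src, p, mode="same") for p in paths])
rec_b = np.array([np.convolve(np.roll(src, 5), p, mode="same") for p in paths])
coh = interstation_phase_coherence(rec_a, rec_b)
# coh is near 1 here because both records share the same station path terms.
```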

  16. Improvement of single domain antibody stability by disulfide bond introduction.

    PubMed

    Hagihara, Yoshihisa; Saerens, Dirk

    2012-01-01

    The successful medical application of single domain antibodies largely depends on their functionality. This feature is partly determined by the intrinsic stability of the single domain. Therefore, a lot of research has gone into the elucidation of rules to uniformly increase the stability of antibodies. Recently, a novel intra-domain disulfide bond was independently discovered by two research groups, after either rational design or careful investigation of the naturally occurring camelid antibody repertoire. By introducing this particular disulfide bond within a single domain antibody, the conformational stability can be increased in general. This chapter describes how to introduce this extra intra-domain disulfide bond and how to estimate the biophysical and biochemical impact of this cystine on the domain.

  17. Multiple sparse volumetric priors for distributed EEG source reconstruction.

    PubMed

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-10-15

    We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region growing approach. This extension allows brain structures besides the cortical surface to be reconstructed and facilitates the use of more realistic volumetric head models including more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrated the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter for each of the subjects, cortical regions were created and introduced as source priors for MSP inversion assuming two types of head models: the standard 3-layered scalp-skull-brain head models and extended 4-layered head models including CSF. We compared these models with the current implementation by assessing the free energy corresponding to each of the reconstructions, using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation, as it allows more complex head models and volumetric source priors to be introduced in future studies. Copyright © 2014 Elsevier Inc. All rights reserved.
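
    The region-growing step used to build volumetric priors can be illustrated generically as a breadth-first growth from a seed voxel restricted to a gray-matter mask. The sketch below is such a generic illustration, not SPM or MSP code; the mask, seed, and size limit are arbitrary.

```python
import numpy as np
from collections import deque

def grow_region(gm_mask, seed, max_size=500):
    """Grow one volumetric source-prior region from a seed voxel, restricted
    to a boolean gray-matter mask, using a 6-connected neighbourhood."""
    region = np.zeros_like(gm_mask, dtype=bool)
    queue = deque([seed])
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue and region.sum() < max_size:
        v = queue.popleft()
        if not gm_mask[v] or region[v]:
            continue
        region[v] = True                      # accept voxel into the region
        for d in offsets:                     # enqueue in-bounds gray-matter neighbours
            n = tuple(np.add(v, d))
            if (all(0 <= n[k] < gm_mask.shape[k] for k in range(3))
                    and gm_mask[n] and not region[n]):
                queue.append(n)
    return region

# Toy usage: random "gray matter" mask and a seed inside it
rng = np.random.default_rng(5)
gm = rng.random((40, 40, 40)) > 0.4
prior = grow_region(gm, seed=(20, 20, 20))
```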

  18. Deterministic and storable single-photon source based on a quantum memory.

    PubMed

    Chen, Shuai; Chen, Yu-Ao; Strassel, Thorsten; Yuan, Zhen-Sheng; Zhao, Bo; Schmiedmayer, Jörg; Pan, Jian-Wei

    2006-10-27

    A single-photon source is realized with a cold atomic ensemble (87Rb atoms). A single excitation, written in an atomic quantum memory by Raman scattering of a laser pulse, is retrieved deterministically as a single photon at a predetermined time. It is shown that the production rate of single photons can be enhanced considerably by a feedback circuit while the single-photon quality is conserved. Such a single-photon source is well suited for future large-scale realization of quantum communication and linear optical quantum computation.

  19. Fabrication and performance of tuneable single-mode VCSELs emitting in the 750- to 1000-nm range

    NASA Astrophysics Data System (ADS)

    Grabherr, Martin; Wiedenmann, Dieter; Jaeger, Roland; King, Roger

    2005-03-01

    The growing demand for low-cost, high-spectral-purity laser sources at specific wavelengths for applications like tuneable diode laser absorption spectroscopy (TDLAS) and optical pumping of atomic clocks can be met by sophisticated single-mode VCSELs in the 760 to 980 nm wavelength range. Equipped with a micro thermo-electric cooler (TEC) and thermistor inside a small standard TO46 package, the resulting wavelength tuning range is larger than +/- 2.5 nm. U-L-M photonics presents manufacturing aspects, device performance and reliability data on tuneable single-mode VCSELs at 760, 780, 794, 852, and 948 nm lately introduced to the market. The corresponding applications are O2 sensing, Rb pumping, Cs pumping, and moisture sensing, respectively. The first part of the paper dealing with manufacturing aspects focuses on control of resonance wavelength during epitaxial growth and process control during selective oxidation for current confinement. Acceptable resonance wavelength tolerance is as small as +/- 1 nm and the typical aperture size of oxide-confined single-mode VCSELs is 3 μm with only a few hundred nm tolerance. Both of these major production steps significantly contribute to yield on wafer values. Key performance data for the presented single-mode VCSELs are: >0.5 mW of optical output power, >30 dB side mode suppression ratio, and extrapolated 10E7 h MTTF at room temperature based on several millions of real test hours. Finally, appropriate fiber coupling solutions will be presented and discussed.

  20. 40 CFR 463.36 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ....36 Pretreatment standards for new sources. (a) PSNS for bis(2-ethylhexyl) phthalate, di-n-butyl phthalate, and dimethyl phthalate are reserved. (b) Any new source subject to this subpart that introduces...

  1. Double-slit experiment with single wave-driven particles and its relation to quantum mechanics.

    PubMed

    Andersen, Anders; Madsen, Jacob; Reichelt, Christian; Rosenlund Ahl, Sonja; Lautrup, Benny; Ellegaard, Clive; Levinsen, Mogens T; Bohr, Tomas

    2015-07-01

    In a thought-provoking paper, Couder and Fort [Phys. Rev. Lett. 97, 154101 (2006)] describe a version of the famous double-slit experiment performed with droplets bouncing on a vertically vibrated fluid surface. In the experiment, an interference pattern in the single-particle statistics is found even though it is possible to determine unambiguously which slit the walking droplet passes. Here we argue, however, that the single-particle statistics in such an experiment will be fundamentally different from the single-particle statistics of quantum mechanics. Quantum mechanical interference takes place between different classical paths with precise amplitude and phase relations. In the double-slit experiment with walking droplets, these relations are lost since one of the paths is singled out by the droplet. To support our conclusions, we have carried out our own double-slit experiment, and our results, in particular the long and variable slit passage times of the droplets, cast strong doubt on the feasibility of the interference claimed by Couder and Fort. To understand theoretically the limitations of wave-driven particle systems as analogs to quantum mechanics, we introduce a Schrödinger equation with a source term originating from a localized particle that generates a wave while being simultaneously guided by it. We show that the ensuing particle-wave dynamics can capture some characteristics of quantum mechanics such as orbital quantization. However, the particle-wave dynamics can not reproduce quantum mechanics in general, and we show that the single-particle statistics for our model in a double-slit experiment with an additional splitter plate differs qualitatively from that of quantum mechanics.

  2. Do we really understand quantum mechanics? Strange correlations, paradoxes, and theorems

    NASA Astrophysics Data System (ADS)

    Laloë, F.

    2001-06-01

    This article presents a general discussion of several aspects of our present understanding of quantum mechanics. The emphasis is put on the very special correlations that this theory makes possible: They are forbidden by very general arguments based on realism and local causality. In fact, these correlations are completely impossible in any circumstance, except for very special situations designed by physicists especially to observe these purely quantum effects. Another general point that is emphasized is the necessity for the theory to predict the emergence of a single result in a single realization of an experiment. For this purpose, orthodox quantum mechanics introduces a special postulate: the reduction of the state vector, which comes in addition to the Schrödinger evolution postulate. Nevertheless, the presence in parallel of two evolution processes of the same object (the state vector) may be a potential source for conflicts; various attitudes that are possible to avoid this problem are discussed in this text. After a brief historical introduction, recalling how the very special status of the state vector has emerged in quantum mechanics, various conceptual difficulties are introduced and discussed. The Einstein-Podolsky-Rosen (EPR) theorem is presented with the help of a botanical parable, in a way that emphasizes how deeply the EPR reasoning is rooted into what is often called "scientific method." In another section the Greenberger-Horne-Zeilinger argument, the Hardy impossibilities, as well as the Bell-Kochen-Specker theorem are introduced in simple terms. The final two sections attempt to give a summary of the present situation: One section discusses nonlocality and entanglement as we see it presently, with brief mention of recent experiments; the last section contains a (nonexhaustive) list of various attitudes that are found among physicists, and that are helpful to alleviate the conceptual difficulties of quantum mechanics.

  3. Single Color Multiplexed ddPCR Copy Number Measurements and Single Nucleotide Variant Genotyping.

    PubMed

    Wood-Bouwens, Christina M; Ji, Hanlee P

    2018-01-01

    Droplet digital PCR (ddPCR) allows for accurate quantification of genetic events such as copy number variation and single nucleotide variants. Probe-based assays represent the current "gold-standard" for detection and quantification of these genetic events. Here, we introduce a cost-effective single color ddPCR assay that allows for single genome resolution quantification of copy number and single nucleotide variation.
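
    Whatever the multiplexing scheme, quantification in ddPCR rests on the Poisson correction that converts the fraction of positive droplets into a concentration, lambda = -ln(1 - p) copies per droplet. The sketch below shows that standard calculation; the droplet volume and counts are illustrative values, not instrument constants from this assay.

```python
import math

def ddpcr_copies_per_ul(n_positive, n_total, droplet_volume_nl=0.85):
    """Standard ddPCR Poisson estimate of target concentration.

    With copies loaded into droplets at random, the expected copies per
    droplet is lambda = -ln(1 - p), where p is the fraction of positive
    droplets.  The droplet volume is an assumed value.
    """
    p = n_positive / n_total
    lam = -math.log(1.0 - p)                 # copies per droplet
    return lam / (droplet_volume_nl * 1e-3)  # copies per microlitre

# Example: copy-number ratio of a target versus a single-copy reference assay
target = ddpcr_copies_per_ul(n_positive=3200, n_total=15000)
reference = ddpcr_copies_per_ul(n_positive=1700, n_total=15000)
copy_number = 2.0 * target / reference       # assuming a diploid reference locus
```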

  4. Symbiotic Stars in X-rays

    NASA Technical Reports Server (NTRS)

    Luna, G. J. M.; Sokoloski, J. L.; Mukai, K.; Nelson, T.

    2014-01-01

    Until recently, symbiotic binary systems in which a white dwarf accretes from a red giant were thought to be mainly a soft X-ray population. Here we describe the detection with the X-ray Telescope (XRT) on the Swift satellite of 9 white dwarf symbiotics that were not previously known to be X-ray sources and one that was previously detected as a supersoft X-ray source. The 9 new X-ray detections were the result of a survey of 41 symbiotic stars, and they increase the number of symbiotic stars known to be X-ray sources by approximately 30%. Swift/XRT detected all of the new X-ray sources at energies greater than 2 keV. Their X-ray spectra are consistent with thermal emission and fall naturally into three distinct groups. The first group contains those sources with a single, highly absorbed hard component, which we identify as probably coming from an accretion-disk boundary layer. The second group is composed of those sources with a single, soft X-ray spectral component, which likely arises in a region where low-velocity shocks produce X-ray emission, i.e. a colliding-wind region. The third group consists of those sources with both hard and soft X-ray spectral components. We also find that unlike in the optical, where rapid, stochastic brightness variations from the accretion disk typically are not seen, detectable UV flickering is a common property of symbiotic stars. Supporting our physical interpretation of the two X-ray spectral components, simultaneous Swift UV photometry shows that symbiotic stars with harder X-ray emission tend to have stronger UV flickering, which is usually associated with accretion through a disk. To place these new observations in the context of previous work on X-ray emission from symbiotic stars, we modified and extended the alpha/beta/gamma classification scheme for symbiotic-star X-ray spectra that was introduced by Muerset et al. based upon observations with the ROSAT satellite, to include a new sigma classification for sources with hard X-ray emission from the innermost accretion region. Since we have identified the elusive accretion component in the emission from a sample of symbiotic stars, our results have implications for the understanding of wind-fed mass transfer in wide binaries, and the accretion rate in one class of candidate progenitors of type Ia supernovae.

  5. Procedure for Separating Noise Sources in Measurements of Turbofan Engine Core Noise

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2006-01-01

    The study of core noise from turbofan engines has become more important as noise from other sources, such as the fan and jet, has been reduced. A multiple-microphone and acoustic source modeling method to separate correlated and uncorrelated sources has been developed. The auto- and cross-spectra in the frequency range below 1000 Hz are fitted with a noise propagation model based on a source couplet consisting of a single incoherent source with a single coherent source, or a source triplet consisting of a single incoherent source with two coherent point sources. Examples are presented using data from a Pratt & Whitney PW4098 turbofan engine. The method works well.
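
    As a rough illustration of the kind of spectral estimation involved (a sketch only, not the paper's propagation model), the following Python fragment computes auto- and cross-spectra for two simulated microphone signals that share one low-frequency source, and splits one auto-spectrum into a coherent part and an incoherent remainder using the ordinary coherence. The sample rate, filter and noise levels are all assumed for illustration.

      import numpy as np
      from scipy import signal

      fs = 8000.0                                   # sample rate in Hz (assumed)
      t = np.arange(0, 10, 1 / fs)
      rng = np.random.default_rng(0)
      core = signal.lfilter([1.0], [1.0, -0.9], rng.normal(size=t.size))  # shared low-frequency source
      mic1 = core + 0.5 * rng.normal(size=t.size)   # coherent part + uncorrelated noise
      mic2 = core + 0.5 * rng.normal(size=t.size)

      f, Pxx = signal.welch(mic1, fs, nperseg=4096)
      _, Pyy = signal.welch(mic2, fs, nperseg=4096)
      _, Pxy = signal.csd(mic1, mic2, fs, nperseg=4096)

      coh = np.abs(Pxy) ** 2 / (Pxx * Pyy)          # ordinary coherence
      coherent_part = coh * Pxx                     # auto-spectrum tied to the shared source
      incoherent_part = (1.0 - coh) * Pxx           # remainder from uncorrelated noise
      band = f < 1000.0                             # frequency band of interest above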

  6. Source Term Model for Vortex Generator Vanes in a Navier-Stokes Computer Code

    NASA Technical Reports Server (NTRS)

    Waithe, Kenrick A.

    2004-01-01

    A source term model for an array of vortex generators was implemented into a non-proprietary Navier-Stokes computer code, OVERFLOW. The source term models the side force created by a vortex generator vane. The model is obtained by introducing a side force to the momentum and energy equations that can adjust its strength automatically based on the local flow. The model was tested and calibrated by comparing data from numerical simulations and experiments of a single low profile vortex generator vane on a flat plate. In addition, the model was compared to experimental data of an S-duct with 22 co-rotating, low profile vortex generators. The source term model allowed a grid reduction of about seventy percent when compared with the numerical simulations performed on a fully gridded vortex generator on a flat plate without adversely affecting the development and capture of the vortex created. The source term model was able to predict the shape and size of the stream-wise vorticity and velocity contours very well when compared with both numerical simulations and experimental data. The peak vorticity and its location were also predicted very well when compared to numerical simulations and experimental data. The circulation predicted by the source term model matches the prediction of the numerical simulation. The source term model predicted the engine fan face distortion and total pressure recovery of the S-duct with 22 co-rotating vortex generators very well. The source term model allows a researcher to quickly investigate different locations of individual or a row of vortex generators. The researcher is able to conduct a preliminary investigation with minimal grid generation and computational time.
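
    The following is a minimal, hypothetical sketch of a vane-type vortex-generator source term of the general kind described above: a side-force density whose magnitude adjusts with the local flow, to be added to the momentum (and, dotted with velocity, the energy) equation of each cell intersected by the vane. It loosely follows lifting-force vortex-generator models; the constant c_vg, the geometry inputs and the exact form used in OVERFLOW are assumptions for illustration.

      import numpy as np

      def vg_side_force(u, rho, vane_normal, S_vg, V_cell, c_vg=10.0):
          """Body-force density (N/m^3) for one cell crossed by the vane (illustrative form).

          u            -- local velocity vector (m/s)
          rho          -- local density (kg/m^3)
          vane_normal  -- unit vector normal to the vane surface
          S_vg, V_cell -- vane planform area (m^2) and cell volume (m^3)
          c_vg         -- empirical calibration constant (assumed)
          """
          speed = np.linalg.norm(u)
          if speed == 0.0:
              return np.zeros(3)
          alpha = np.dot(u, vane_normal) / speed              # flow component through the vane plane
          direction = np.cross(np.cross(u, vane_normal), u)   # vane normal projected perpendicular to u
          norm = np.linalg.norm(direction)
          if norm == 0.0:
              return np.zeros(3)
          magnitude = c_vg * (S_vg / V_cell) * rho * speed ** 2 * alpha
          return magnitude * direction / norm

      # The returned force density is added to the cell's momentum residual and u . f to the
      # energy residual, so the modeled side force strengthens or weakens with the local flow.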

  7. A Grid Sourcing and Adaptation Study Using Unstructured Grids for Supersonic Boom Prediction

    NASA Technical Reports Server (NTRS)

    Carter, Melissa B.; Deere, Karen A.

    2008-01-01

    NASA created the Supersonics Project as part of the NASA Fundamental Aeronautics Program to advance technology that will make a supersonic flight over land viable. Computational flow solvers have lacked the ability to accurately predict sonic boom from the near to far field. The focus of this investigation was to establish gridding and adaptation techniques to predict near-to-mid-field (<10 body lengths below the aircraft) boom signatures at supersonic speeds using the USM3D unstructured grid flow solver. The study began by examining sources along the body of the aircraft, far field sourcing and far field boundaries. The study then examined several techniques for grid adaptation. During the course of the study, volume sourcing was introduced as a new way to source grids using the grid generation code VGRID. Two different methods of using the volume sources were examined. The first method, based on manual insertion of the numerous volume sources, made great improvements in the prediction capability of USM3D for boom signatures. The second method (SSGRID), which uses an a priori adaptation approach to stretch and shear the original unstructured grid to align the grid and pressure waves, showed similar results with a more automated approach. Due to SSGRID's results and ease of use, the rest of the study focused on developing a best practice using SSGRID. The best practice created by this study for boom predictions using the CFD code USM3D involved: 1) creating a small cylindrical outer boundary either 1 or 2 body lengths in diameter (depending on how far below the aircraft the boom prediction is required), 2) using a single volume source under the aircraft, and 3) using SSGRID to stretch and shear the grid to the desired length.

  8. Single-Sex Schools and Classrooms. The Informed Educator Series

    ERIC Educational Resources Information Center

    Clarke, Suzanne

    2007-01-01

    In October 2006, the U.S. Department of Education introduced the so-called "single-sex regulations," which brought the issue of single-sex education to the forefront of discussion among educators, policymakers, and parents. Anecdotal evidence suggests that single-sex education can have a positive impact on student achievement. However,…

  9. High power industrial picosecond laser from IR to UV

    NASA Astrophysics Data System (ADS)

    Saby, Julien; Sangla, Damien; Pierrot, Simonette; Deslandes, Pierre; Salin, François

    2013-02-01

    Many industrial applications such as glass cutting, ceramic micro-machining or photovoltaic processes require picosecond pulses with high average and high peak power. The main limitation for the expansion of the picosecond market is the cost of high power picosecond laser sources, which is due to the complexity of the architecture used for picosecond pulse amplification, and the difficulty of keeping an excellent beam quality at high average power. Amplification with fibers is a good technology to achieve high power in the picosecond regime but, because of its tight confinement over long distances, light undergoes dramatic nonlinearities while propagating in fibers. One way to avoid strong nonlinearities is to increase the fiber's mode area. Nineteen-missing-hole fibers offering core diameters larger than 80 μm have been used over the past few years [1-3], but it has been shown that mode instabilities occur at approximately 100 W average output power in these fibers [4]. Recently a new fiber design has been introduced, in which HOMs are delocalized from the core to the clad, preventing HOM amplification [5]. In these so-called Large Pitch Fibers, the threshold for mode instabilities is increased to 294 W, offering robust single-mode operation below this power level [6]. We have demonstrated a high-power, high-efficiency industrial picosecond source using single-mode Large Pitch rod-type fibers doped with ytterbium. Large Pitch rod-type fibers can offer a unique combination of single-mode output with a very large mode area, from 40 μm up to 100 μm, and very high gain. This enables direct amplification of a low-power, low-energy mode-locked fiber laser with a simple amplification architecture, achieving very high power together with single-mode output independent of power level or repetition rate.

  10. Precision toxicology based on single cell sequencing: an evolving trend in toxicological evaluations and mechanism exploration.

    PubMed

    Zhang, Boyang; Huang, Kunlun; Zhu, Liye; Luo, Yunbo; Xu, Wentao

    2017-07-01

    In this review, we introduce a new concept, precision toxicology: the mode of action of chemical- or drug-induced toxicity can be sensitively and specifically investigated by isolating a small group of cells or even a single cell with typical phenotype of interest followed by a single cell sequencing-based analysis. Precision toxicology can contribute to the better detection of subtle intracellular changes in response to exogenous substrates, and thus help researchers find solutions to control or relieve the toxicological effects that are serious threats to human health. We give examples for single cell isolation and recommend laser capture microdissection for in vivo studies and flow cytometric sorting for in vitro studies. In addition, we introduce the procedures for single cell sequencing and describe the expected application of these techniques to toxicological evaluations and mechanism exploration, which we believe will become a trend in toxicology.

  11. New perspectives on self-similarity for shallow thrust earthquakes

    NASA Astrophysics Data System (ADS)

    Denolle, Marine A.; Shearer, Peter M.

    2016-09-01

    Scaling of dynamic rupture processes from small to large earthquakes is critical to seismic hazard assessment. Large subduction earthquakes are typically remote, and we mostly rely on teleseismic body waves to extract information on their slip rate functions. We estimate the P wave source spectra of 942 thrust earthquakes of magnitude Mw 5.5 and above by carefully removing wave propagation effects (geometrical spreading, attenuation, and free surface effects). The conventional spectral model of a single corner frequency and high-frequency falloff rate does not explain our data, and we instead introduce a double-corner-frequency model, modified from the Haskell propagating source model, with an intermediate falloff of f^-1. The first corner frequency f1 relates closely to the source duration T1; its scaling follows M0 ∝ T1^3 for Mw < 7.5 and changes to M0 ∝ T1^2 for larger earthquakes. An elliptical rupture geometry better explains the observed scaling than circular crack models. The second time scale T2 varies more weakly with moment, M0 ∝ T2^5, varies weakly with depth, and can be interpreted either as an expression of starting and stopping phases, as a pulse-like rupture, or as a dynamic weakening process. Estimated stress drops and scaled energy (ratio of radiated energy over seismic moment) are both invariant with seismic moment. However, the observed earthquakes are not self-similar because their source geometry and spectral shapes vary with earthquake size. We find and map global variations of these source parameters.
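
    A minimal sketch of a generic double-corner-frequency spectral shape of the type described above (flat below f1, close to f^-1 between f1 and f2, and f^-2 beyond f2) is given below; the exact parameterization used in the paper may differ, and the numerical values are purely illustrative.

      import numpy as np

      def moment_rate_spectrum(f, M0, f1, f2):
          """|M'(f)|: ~M0 below f1, ~f^-1 between f1 and f2, ~f^-2 above f2."""
          return M0 / np.sqrt((1.0 + (f / f1) ** 2) * (1.0 + (f / f2) ** 2))

      f = np.logspace(-3, 1, 200)                               # Hz
      spec = moment_rate_spectrum(f, M0=1e20, f1=0.02, f2=0.2)  # illustrative corner frequencies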

  12. High-precision coseismic displacement estimation with a single-frequency GPS receiver

    NASA Astrophysics Data System (ADS)

    Guo, Bofeng; Zhang, Xiaohong; Ren, Xiaodong; Li, Xingxing

    2015-07-01

    To improve the performance of Global Positioning System (GPS) in the earthquake/tsunami early warning and rapid response applications, minimizing the blind zone and increasing the stability and accuracy of both the rapid source and rupture inversion, the density of existing GPS networks must be increased in the areas at risk. For economic reasons, low-cost single-frequency receivers would be preferable to make the sparse dual-frequency GPS networks denser. When using single-frequency GPS receivers, the main problem that must be solved is the ionospheric delay, which is a critical factor when determining accurate coseismic displacements. In this study, we introduce a modified Satellite-specific Epoch-differenced Ionospheric Delay (MSEID) model to compensate for the effect of ionospheric error on single-frequency GPS receivers. In the MSEID model, the time-differenced ionospheric delays observed from a regional dual-frequency GPS network to a common satellite are fitted to a plane rather than part of a sphere, and the parameters of this plane are determined by using the coordinates of the stations. When the parameters are known, time-differenced ionospheric delays for a single-frequency GPS receiver could be derived from the observations of those dual-frequency receivers. Using these ionospheric delay corrections, coseismic displacements of a single-frequency GPS receiver can be accurately calculated based on time-differenced carrier-phase measurements in real time. The performance of the proposed approach is validated using 5 Hz GPS data collected during the 2012 Nicoya Peninsula Earthquake (Mw 7.6, 2012 September 5) in Costa Rica. This shows that the proposed approach improves the accuracy of the displacement of a single-frequency GPS station, and coseismic displacements with an accuracy of a few centimetres are achieved over a 10-min interval.
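
    The core of the plane-fit idea can be sketched as follows (a simplified illustration, not the authors' MSEID implementation): epoch-differenced ionospheric delays observed by the dual-frequency stations toward one satellite are fitted to a plane in local station coordinates, and the plane is then evaluated at the single-frequency site. Coordinates, delays and units below are made up.

      import numpy as np

      def fit_delay_plane(east, north, d_iono):
          """Least-squares fit of d_iono ~ a*east + b*north + c."""
          A = np.column_stack([east, north, np.ones_like(east)])
          coeffs, *_ = np.linalg.lstsq(A, d_iono, rcond=None)
          return coeffs                                        # (a, b, c)

      def predict_delay(coeffs, east, north):
          a, b, c = coeffs
          return a * east + b * north + c

      # dual-frequency network (coordinates in km, delays in metres; illustrative values)
      east = np.array([0.0, 35.0, 12.0, -20.0])
      north = np.array([0.0, 10.0, 40.0, 25.0])
      d_ion = np.array([0.012, 0.015, 0.018, 0.010])           # epoch-differenced delays to one satellite

      coeffs = fit_delay_plane(east, north, d_ion)
      d_sf = predict_delay(coeffs, east=8.0, north=16.0)       # correction for the single-frequency site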

  13. Three-dimensional periodic dielectric structures having photonic Dirac points

    DOEpatents

    Bravo-Abad, Jorge; Joannopoulos, John D.; Soljacic, Marin

    2015-06-02

    The dielectric, three-dimensional photonic materials disclosed herein feature Dirac-like dispersion in quasi-two-dimensional systems. Embodiments include a face-centered cubic (fcc) structure formed by alternating layers of dielectric rods and dielectric slabs patterned with holes on respective triangular lattices. This fcc structure also includes a defect layer, which may comprise either dielectric rods or a dielectric slab patterned with holes. This defect layer introduces Dirac cone dispersion into the fcc structure's photonic band structure. Examples of these fcc structures enable enhancement of the spontaneous emission coupling efficiency (the β-factor) over large areas, contrary to the conventional wisdom that the β-factor degrades as the system's size increases. These results enable large-area, low-threshold lasers; single-photon sources; quantum information processing devices; and energy harvesting systems.

  14. Spray-coated carbon nanotube thin-film transistors with striped transport channels

    NASA Astrophysics Data System (ADS)

    Jeong, Minho; Lee, Kunhak; Choi, Eunsuk; Kim, Ahsung; Lee, Seung-Beck

    2012-12-01

    We present results for the transfer characteristics of carbon nanotube thin-film transistors (CNT-TFTs) that utilize single-walled carbon nanotube thin films prepared by direct spray-coating on the substrate. By varying the number of spray-coatings (N_sp) and the concentration of nanotubes in solution (C_NT), it was possible to control the conductivity of the spray-coated nanotube thin film from 129 to 0.1 kΩ/□. Also, by introducing stripes into the channel of the CNT-TFT, and thereby reducing the number of metallic percolation paths between source and drain, it was possible to enhance the on/off current ratio 1000-fold, from 10 to 10^4, demonstrating that it may be possible to utilize spray-coating as a method to fabricate CNT-TFTs for large area switching array applications.

  15. Differential Optical Synthetic Aperture Radar

    DOEpatents

    Stappaerts, Eddy A.

    2005-04-12

    A new differential technique for forming optical images using a synthetic aperture is introduced. This differential technique utilizes a single aperture to obtain unique (N) phases that can be processed to produce a synthetic aperture image at points along a trajectory. This is accomplished by dividing the aperture into two equal "subapertures", each having a width that is less than the actual aperture, along the direction of flight. As the platform flies along a given trajectory, a source illuminates objects and the two subapertures are configured to collect return signals. The technique of the invention is designed to cancel common-mode errors, trajectory deviations from a straight line, and laser phase noise to provide the set of resultant (N) phases that can produce an image having a spatial resolution corresponding to a synthetic aperture.

  16. Interference Mitigation Effects on Synthetic Aperture Radar Coherent Data Products

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Musgrove, Cameron

    For synthetic aperture radars, radio frequency interference from sources external to the radar system, and techniques to mitigate that interference, can degrade the quality of the image products. Usually the radar system designer will try to balance the amount of mitigation against an acceptable amount of interference to optimize the image quality. This dissertation examines the effect of interference mitigation upon coherent data products of fine-resolution, high-frequency synthetic aperture radars using stretch processing. Novel interference mitigation techniques are introduced that operate on single or multiple apertures of data and increase average coherence compared to existing techniques. New metrics are applied to evaluate multiple mitigation techniques for image quality and average coherence. The underlying mechanism for interference mitigation techniques that affect coherence is revealed.

  17. The extended Beer-Lambert theory for ray tracing modeling of LED chip-scaled packaging application with multiple luminescence materials

    NASA Astrophysics Data System (ADS)

    Yuan, Cadmus C. A.

    2015-12-01

    Optical ray-tracing models have applied the Beer-Lambert method to single-luminescence-material systems to model the white-light pattern from a blue LED light source. This paper extends the algorithm to a mixed, multiple-luminescence-material system by introducing the equivalent excitation and emission spectra of the individual luminescence materials. The quantum efficiencies of the individual materials and the self-absorption of the multiple-luminescence-material system are considered as well. With this combination, researchers are able to model the luminescence characteristics of LED chip-scaled packaging (CSP), which offers simple process steps and freedom in the geometrical dimensions of the luminescence material. The method is first validated against experimental results, and a parametric investigation is then conducted.
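
    The Beer-Lambert starting point of such a model can be sketched as follows (a simplified, assumed form for a mixed absorber; the paper's extended algorithm, including equivalent excitation/emission spectra, re-emission and self-absorption, is not reproduced here).

      import numpy as np

      def transmitted_fraction(alphas, concentrations, path_length):
          """I/I0 = exp(-sum_i alpha_i * c_i * L) for a blend of absorbing phosphors."""
          alphas = np.asarray(alphas)                  # absorption coefficients (assumed units)
          concentrations = np.asarray(concentrations)
          return np.exp(-np.sum(alphas * concentrations) * path_length)

      # two phosphors in the same matrix, 0.5 mm path (made-up values)
      T = transmitted_fraction(alphas=[0.8, 0.3], concentrations=[0.10, 0.05], path_length=0.5)
      absorbed_blue = 1.0 - T                          # fraction of the blue ray available for conversion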

  18. Rapid depth estimation for compact magnetic sources using a semi-automated spectrum-based method

    NASA Astrophysics Data System (ADS)

    Clifton, Roger

    2017-04-01

    This paper describes a spectrum-based algorithmic procedure for rapid reconnaissance for compact bodies at depths of interest using magnetic line data. The established method of obtaining depth to source from power spectra requires an interpreter to subjectively select just a single slope along the power spectrum. However, many slopes along the spectrum are, at least partially, indicative of the depth if the shape of the source is known. In particular, if the target is assumed to be a point dipole, all spectral slopes are determined by the depth, noise permitting. The concept of a `depth spectrum' is introduced, where the power spectrum in a travelling window or gate of data is remapped so that a single dipole in the gate would be represented as a straight line at its depth on the y-axis of the spectrum. In demonstration, the depths of two known ironstones are correctly displayed. When a second body is in the gate, the two anomalies interfere, leaving interference patterns on the depth spectra that are themselves diagnostic. A formula has been derived for the purpose. Because there is no need for manual selection of slopes along the spectrum, the process runs rapidly along flight lines with a continuously varying display, where the interpreter can pick out a persistent depth signal among the more rapidly varying noise. Interaction is nevertheless necessary, because the interpreter often needs to pass across an anomaly of interest several times, separating out interfering bodies, and resolving the slant range to the body from adjacent flight lines. Because a look-up table is used rather than a formula, the elementary structure used for the mapping can be adapted by including an extra dipole, possibly with a different inclination.
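
    For context, the classical single-slope estimate that this method generalizes can be sketched as follows: for a compact source the radial power spectrum decays roughly as exp(-2*k*z), so the slope of ln(P) against wavenumber k gives the depth z = -slope/2. The choice of fitting band (k_max below) is exactly the subjective step that the depth-spectrum remapping is designed to remove; the synthetic anomaly and numbers are illustrative only.

      import numpy as np

      def depth_from_spectrum(profile, dx, k_max):
          """Estimate depth to a compact source from a magnetic line window."""
          profile = profile - np.mean(profile)
          P = np.abs(np.fft.rfft(profile)) ** 2
          k = 2.0 * np.pi * np.fft.rfftfreq(profile.size, d=dx)   # rad per unit length
          use = (k > 0) & (k <= k_max)
          slope, _ = np.polyfit(k[use], np.log(P[use]), 1)
          return -slope / 2.0

      dx, z0 = 10.0, 150.0                       # sample spacing and true depth, metres
      x = np.arange(-2000.0, 2000.0, dx)
      profile = z0 / (x ** 2 + z0 ** 2)          # simple compact (dipole-like) anomaly shape
      print(depth_from_spectrum(profile, dx, k_max=0.05))   # of the order of z0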

  19. On the Assessment of Acoustic Scattering and Shielding by Time Domain Boundary Integral Equation Solutions

    NASA Technical Reports Server (NTRS)

    Hu, Fang Q.; Pizzo, Michelle E.; Nark, Douglas M.

    2016-01-01

    Based on the time domain boundary integral equation formulation of the linear convective wave equation, a computational tool dubbed Time Domain Fast Acoustic Scattering Toolkit (TD-FAST) has recently been under development. The time domain approach has a distinct advantage that the solutions at all frequencies are obtained in a single computation. In this paper, the formulation of the integral equation, as well as its stabilization by the Burton-Miller type reformulation, is extended to cases of a constant mean flow in an arbitrary direction. In addition, a "Source Surface" is also introduced in the formulation that can be employed to encapsulate regions of noise sources and to facilitate coupling with CFD simulations. This is particularly useful for applications where the noise sources are not easily described by analytical source terms. Numerical examples are presented to assess the accuracy of the formulation, including a computation of noise shielding by a thin barrier motivated by recent Historical Baseline F31A31 open rotor noise shielding experiments. Furthermore, spatial resolution requirements of the time domain boundary element method are also assessed using points-per-wavelength metrics. It is found that, using only constant basis functions and high-order quadrature for surface integration, relative errors of less than 2% may be obtained when the surface spatial resolution is 5 points-per-wavelength (PPW) or 25 points-per-wavelength squared (PPW^2).
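
    The points-per-wavelength bookkeeping quoted above reduces to a one-line calculation; the sketch below assumes a nominal sound speed and element size purely for illustration.

      def points_per_wavelength(frequency_hz, element_size_m, sound_speed_ms=340.0):
          """PPW = wavelength / element size = c / (f * h)."""
          return (sound_speed_ms / frequency_hz) / element_size_m

      ppw = points_per_wavelength(2000.0, 0.034)   # = 5 for a 2 kHz wave on 34 mm elements (assumed)
      ppw_squared = ppw ** 2                       # = 25 points per squared wavelength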

  20. Adjustable supercontinuum laser source with low coherence length and low timing jitter

    NASA Astrophysics Data System (ADS)

    Andreana, Marco; Bertrand, Anthony; Hernandez, Yves; Leproux, Philippe; Couderc, Vincent; Hilaire, Stéphane; Huss, Guillaume; Giannone, Domenico; Tonello, Alessandro; Labruyère, Alexis; Rongeat, Nelly; Nérin, Philippe

    2010-04-01

    This paper introduces a supercontinuum (SC) laser source emitting from 400 nm to beyond 1750 nm, with adjustable pulse repetition rate (from 250 kHz to 1 MHz) and duration (from ~200 ps to ~2 ns). This device makes use of an internally modulated 1.06 μm semiconductor laser diode as the pump source. The output radiation is then amplified through a preamplifier (based on single-mode Yb-doped fibres) followed by a booster (based on a double-clad Yb-doped fibre). The double-clad fibre output is then spliced to an air-silica microstructured optical fibre (MOF). The small core diameter of the double-clad fibre allows the splice loss to be reduced. The strongly nonlinear propagation regime in the MOF leads to the generation of a SC extending from the violet to the near-infrared wavelengths. On the Stokes side of the 1.06 μm pump line, i.e., in the anomalous dispersion regime, the spectrum is composed of an incoherent distribution of quasi-solitonic components. Therefore, the SC source is characterised by a low coherence length, which can be tuned simply by modifying the pulse duration, which is closely related to the number of quasi-solitonic components brought into play. Finally, the internal modulation of the laser diode makes it possible to achieve excellent temporal stability, both in terms of average power and pulse-to-pulse period.

  1. Improved Single-Source Precursors for Solar-Cell Absorbers

    NASA Technical Reports Server (NTRS)

    Banger, Kulbinder K.; Harris, Jerry; Hepp, Aloysius

    2007-01-01

    Improved single-source precursor compounds have been invented for use in spray chemical vapor deposition (spray CVD) of chalcopyrite semiconductor absorber layers of thin-film cells. A "single-source precursor compound" is a single molecular compound that contains all the required elements, which when used under the spray CVD conditions, thermally decomposes to form CuIn(x)Ga(1-x)S(y)Se(2-y).

  2. DETECTORS AND EXPERIMENTAL METHODS: Equivalent properties of single event burnout induced by different sources

    NASA Astrophysics Data System (ADS)

    Yang, Shi-Yu; Cao, Zhou; Da, Dao-An; Xue, Yu-Xiong

    2009-05-01

    The experimental results of single event burnout induced by heavy ions and 252Cf fission fragments in power MOSFET devices have been investigated. It is concluded that the characteristics of single event burnout induced by 252Cf fission fragments are consistent with those induced by heavy ions. The power MOSFET in the "turn-off" state is more susceptible to single event burnout than it is in the "turn-on" state. The thresholds of the drain-source voltage for single event burnout induced by 173 MeV bromine ions and 252Cf fission fragments are close to each other, and the burnout cross section is sensitive to variation of the drain-source voltage above the threshold of single event burnout. In addition, the current waveforms of single event burnouts induced by different sources are similar. Different power MOSFET devices may have different probabilities for the occurrence of single event burnout.

  3. Calibration and error analysis of metal-oxide-semiconductor field-effect transistor dosimeters for computed tomography radiation dosimetry.

    PubMed

    Trattner, Sigal; Prinsen, Peter; Wiegert, Jens; Gerland, Elazar-Lars; Shefer, Efrat; Morton, Tom; Thompson, Carla M; Yagil, Yoad; Cheng, Bin; Jambawalikar, Sachin; Al-Senan, Rani; Amurao, Maxwell; Halliburton, Sandra S; Einstein, Andrew J

    2017-12-01

    Metal-oxide-semiconductor field-effect transistors (MOSFETs) serve as a helpful tool for organ radiation dosimetry and their use has grown in computed tomography (CT). While different approaches have been used for MOSFET calibration, those using the commonly available 100 mm pencil ionization chamber have not incorporated measurements performed throughout its length, and moreover, no previous work has rigorously evaluated the multiple sources of error involved in MOSFET calibration. In this paper, we propose a new MOSFET calibration approach to translate MOSFET voltage measurements into absorbed dose from CT, based on serial measurements performed throughout the length of a 100-mm ionization chamber, and perform an analysis of the errors of MOSFET voltage measurements and four sources of error in calibration. MOSFET calibration was performed at two sites, to determine single calibration factors for tube potentials of 80, 100, and 120 kVp, using a 100-mm-long pencil ion chamber and a cylindrical computed tomography dose index (CTDI) phantom of 32 cm diameter. The dose profile along the 100-mm ion chamber axis was sampled in 5 mm intervals by nine MOSFETs in the nine holes of the CTDI phantom. Variance of the absorbed dose was modeled as a sum of the MOSFET voltage measurement variance and the calibration factor variance, the latter comprising three main subcomponents: ionization chamber reading variance, MOSFET-to-MOSFET variation, and a contribution related to the fact that the average calibration factor of a few MOSFETs was used as an estimate for the average value of all MOSFETs. MOSFET voltage measurement error was estimated based on sets of repeated measurements. The overall error in the calibration factor was then calculated from this analysis. Calibration factors determined were close to those reported in the literature and by the manufacturer (~3 mV/mGy), ranging from 2.87 to 3.13 mV/mGy. The error σV of a MOSFET voltage measurement was shown to be proportional to the square root of the voltage V: σV = c√V, where c = 0.11 mV. A main contributor to the error in the calibration factor was the ionization chamber reading error, at about 5%. The usage of a single calibration factor for all MOSFETs introduced an additional error of about 5-7%, depending on the number of MOSFETs that were used to determine the single calibration factor. The expected overall error in a high-dose region (~30 mGy) was estimated to be about 8%, compared to 6% when an individual MOSFET calibration was performed. For a low-dose region (~3 mGy), these values were 13% and 12%. A MOSFET calibration method was developed using a 100-mm pencil ion chamber and a CTDI phantom, accompanied by an absorbed dose error analysis reflecting multiple sources of measurement error. When using a single calibration factor, per tube potential, for different MOSFETs, only a small error was introduced into absorbed dose determinations, thus supporting the use of a single calibration factor for experiments involving many MOSFETs, such as those required to accurately estimate radiation effective dose. © 2017 American Association of Physicists in Medicine.
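
    The error propagation described above can be illustrated with a short sketch: a voltage reading is converted to dose with a single calibration factor, and the reading error (sigma_V = c*sqrt(V), c about 0.11 mV) is combined with an assumed relative error of the calibration factor. The 7% figure used here is only a stand-in for the paper's full error budget.

      import math

      def dose_and_error(V_mV, cal_factor_mV_per_mGy=3.0, c_mV=0.11, cal_rel_err=0.07):
          dose = V_mV / cal_factor_mV_per_mGy                     # absorbed dose in mGy
          rel_err_V = (c_mV * math.sqrt(V_mV)) / V_mV             # relative voltage-reading error
          rel_err = math.sqrt(rel_err_V ** 2 + cal_rel_err ** 2)  # independent errors in quadrature
          return dose, dose * rel_err

      print(dose_and_error(90.0))   # high-dose example: ~30 mGy reading (~90 mV)
      print(dose_and_error(9.0))    # low-dose example: ~3 mGy reading, larger relative voltage error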

  4. CdTe Timepix detectors for single-photon spectroscopy and linear polarimetry of high-flux hard x-ray radiation.

    PubMed

    Hahn, C; Weber, G; Märtin, R; Höfer, S; Kämpfer, T; Stöhlker, Th

    2016-04-01

    Single-photon spectroscopy of pulsed, high-intensity sources of hard X-rays - such as laser-generated plasmas - is often hampered by the pileup of several photons absorbed by the unsegmented, large-volume sensors routinely used for the detection of high-energy radiation. Detectors based on the Timepix chip, with a segmentation pitch of 55 μm and the possibility to be equipped with high-Z sensor chips, constitute an attractive alternative to commonly used passive solutions such as image plates. In this report, we present energy calibration and characterization measurements of such devices. The achievable energy resolution is comparable to that of scintillators for γ spectroscopy. Moreover, we also introduce a simple two-detector Compton polarimeter setup with a polarimeter quality of (98 ± 1)%. Finally, a proof-of-principle polarimetry experiment is discussed, where we studied the linear polarization of bremsstrahlung emitted by a laser-driven plasma and found an indication of the X-ray polarization direction depending on the polarization state of the incident laser pulse.

  5. Multiple subduction imprints in the mantle below Italy detected in a single lava flow

    NASA Astrophysics Data System (ADS)

    Nikogosian, Igor; Ersoy, Özlem; Whitehouse, Martin; Mason, Paul R. D.; de Hoog, Jan C. M.; Wortel, Rinus; van Bergen, Manfred J.

    2016-09-01

    Post-collisional magmatism reflects the regional subduction history prior to collision but the link between the two is complex and often poorly understood. The collision of continents along a convergent plate boundary commonly marks the onset of a variety of transitional geodynamic processes. Typical responses include delamination of subducting lithosphere, crustal thickening in the overriding plate, slab detachment and asthenospheric upwelling, or the complete termination of convergence. A prominent example is the Western-Central Mediterranean, where the ongoing slow convergence of Africa and Europe (Eurasia) has been accommodated by a variety of spreading and subduction systems that dispersed remnants of subducted lithosphere into the mantle, creating a compositionally wide spectrum of magmatism. Using lead isotope compositions of a set of melt inclusions in magmatic olivine crystals we detect exceptional heterogeneity in the mantle domain below Central Italy, which we attribute to the presence of continental material, introduced initially by Alpine and subsequently by Apennine subduction. We show that superimposed subduction imprints of a mantle source can be tapped during a melting episode millions of years later, and are recorded in a single lava flow.

  6. SNPdbe: constructing an nsSNP functional impacts database.

    PubMed

    Schaefer, Christian; Meier, Alice; Rost, Burkhard; Bromberg, Yana

    2012-02-15

    Many existing databases annotate experimentally characterized single nucleotide polymorphisms (SNPs). Each non-synonymous SNP (nsSNP) changes one amino acid in the gene product (single amino acid substitution; SAAS). This change can either affect protein function or be neutral in that respect. Most polymorphisms lack experimental annotation of their functional impact. Here, we introduce SNPdbe (SNP database of effects), with predictions of computationally annotated functional impacts of SNPs. Database entries represent nsSNPs in dbSNP and the 1000 Genomes collection, as well as variants from UniProt and PMD. SAASs come from >2600 organisms; 'human' is the most prevalent. The impact of each SAAS on protein function is predicted using the SNAP and SIFT algorithms and augmented with experimentally derived function/structure information and disease associations from PMD, OMIM and UniProt. SNPdbe is consistently updated and easily augmented with new sources of information. The database is available as a MySQL dump and via a web front end that allows searches with any combination of organism names, sequences and mutation IDs. http://www.rostlab.org/services/snpdbe.

  7. Integration of polarization-multiplexing and phase-shifting in nanometric two dimensional self-mixing measurement.

    PubMed

    Tao, Yufeng; Xia, Wei; Wang, Ming; Guo, Dongmei; Hao, Hui

    2017-02-06

    Integration of phase manipulation and polarization multiplexing was introduced to self-mixing interferometry (SMI) for high-sensitivity measurement. Light polarization was used to increase the number of measurement paths, offering several merits for potential applications. The laser source was studied as a microwave-photonic resonator optically injected by two reflected beams, using a two-feedback-factor analytical model. Independent external paths exploited magnesium-oxide-doped lithium niobate crystals at perpendicular polarizations to transfer interferometric phases into the amplitudes of harmonics. Theoretical resolutions reached the angstrom level. By integrating the two techniques, this SMI outperformed conventional single-path SMIs through simultaneous dual-target measurement on a single laser tube with high sensitivity and low speckle noise. In an experimental demonstration, using a nonlinear filtering method, a custom-made phase-resolved algorithm computed instantaneous two-dimensional displacements in real time with nanometer resolution. Experimental comparisons with the lock-in technique and a commercial Ploytec-5000 laser Doppler velocity meter validated this two-path SMI in the micron range without optical cross-talk. Moreover, the accuracy, which depends on the slewing rates of the crystals, could be flexibly adjusted.

  8. Hybrid quantum-classical modeling of quantum dot devices

    NASA Astrophysics Data System (ADS)

    Kantner, Markus; Mittnenzweig, Markus; Koprucki, Thomas

    2017-11-01

    The design of electrically driven quantum dot devices for quantum optical applications asks for modeling approaches combining classical device physics with quantum mechanics. We connect the well-established fields of semiclassical semiconductor transport theory and the theory of open quantum systems to meet this requirement. By coupling the van Roosbroeck system with a quantum master equation in Lindblad form, we introduce a new hybrid quantum-classical modeling approach, which provides a comprehensive description of quantum dot devices on multiple scales: it enables the calculation of quantum optical figures of merit and the spatially resolved simulation of the current flow in realistic semiconductor device geometries in a unified way. We construct the interface between both theories in such a way, that the resulting hybrid system obeys the fundamental axioms of (non)equilibrium thermodynamics. We show that our approach guarantees the conservation of charge, consistency with the thermodynamic equilibrium and the second law of thermodynamics. The feasibility of the approach is demonstrated by numerical simulations of an electrically driven single-photon source based on a single quantum dot in the stationary and transient operation regime.
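
    The quantum half of such a hybrid model can be sketched, under simplifying assumptions, as a Lindblad master equation for a two-level dot with radiative decay, integrated here with a plain Euler step; the coupling to the van Roosbroeck transport equations (e.g. carrier-capture rates entering the Hamiltonian or the dissipators) is not shown, and all parameters are illustrative.

      import numpy as np

      def lindblad_rhs(rho, H, Ls):
          """d(rho)/dt = -i[H, rho] + sum_k (L rho L^+ - 1/2 {L^+L, rho}), with hbar = 1."""
          drho = -1j * (H @ rho - rho @ H)
          for L in Ls:
              LdL = L.conj().T @ L
              drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
          return drho

      omega, gamma = 1.0, 0.1                                     # transition frequency, decay rate
      H = 0.5 * omega * np.array([[-1, 0], [0, 1]], dtype=complex)            # basis: |g>, |e>
      L_decay = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)    # |g><e| decay operator

      rho = np.array([[0, 0], [0, 1]], dtype=complex)             # start in the excited state
      dt = 0.01
      for _ in range(1000):
          rho = rho + dt * lindblad_rhs(rho, H, [L_decay])
      print(rho[1, 1].real)                                       # ~exp(-gamma * t) excited population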

  9. Stable isotope ratios of marijuana. I. Carbon and nitrogen stable isotopes describe growth conditions.

    PubMed

    West, Jason B; Hurley, Janet M; Ehleringer, James R

    2009-01-01

    There remains significant uncertainty in illicit marijuana cultivation. We analyzed the delta(13)C and delta(15)N of 508 domestic samples from known U.S.A. counties, 31 seized from a single location, 5 samples grown in Mexico and Colombia, and 10 northwest border seizures. For a subset, inflorescences and leaves were analyzed separately. These data revealed a strong correspondence, with inflorescences having slightly higher delta(13)C and delta(15)N values than leaves. A framework for interpreting these results is introduced and evaluated. Samples identified as outdoor-grown by delta(13)C were generally recorded as such by the Drug Enforcement Administration (DEA). DEA-classified indoor-grown samples had the most negative delta(13)C values, consistent with indoor cultivation, although many were also in the outdoor-grown domain. Delta(15)N indicated a wide range of fertilizers across the dataset. Samples seized at the single location suggested multiple sources. Northwest border delta(13)C values suggested indoor growth, whereas for the Mexican and Colombian samples they indicated outdoor growth.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingargiola, A.; Laurence, T. A.; Boutelle, R.

    We introduce Photon-HDF5, an open and efficient file format to simplify exchange and long term accessibility of data from single-molecule fluorescence experiments based on photon-counting detectors such as single-photon avalanche diode (SPAD), photomultiplier tube (PMT) or arrays of such detectors. The format is based on HDF5, a widely used platform- and language-independent hierarchical file format for which user-friendly viewers are available. Photon-HDF5 can store raw photon data (timestamp, channel number, etc.) from any acquisition hardware, but also setup and sample description, information on provenance, authorship and other metadata, and is flexible enough to include any kind of custom data. The format specifications are hosted on a public website, which is open to contributions by the biophysics community. As an initial resource, the website provides code examples to read Photon-HDF5 files in several programming languages and a reference python library (phconvert), to create new Photon-HDF5 files and convert several existing file formats into Photon-HDF5. As a result, to encourage adoption by the academic and commercial communities, all software is released under the MIT open source license.
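
    As a minimal, assumption-laden sketch of what reading such a file looks like, the fragment below pulls photon timestamps out of a Photon-HDF5 file with h5py; the group and field names follow the published Photon-HDF5 layout (/photon_data/timestamps and friends), but the format specification and the phconvert reference library remain the authoritative description.

      import h5py

      def load_timestamps(path):
          with h5py.File(path, "r") as f:
              ph = f["photon_data"]
              timestamps = ph["timestamps"][:]                      # raw clock counts
              unit = ph["timestamps_specs/timestamps_unit"][()]     # seconds per count
              detectors = ph["detectors"][:] if "detectors" in ph else None
          return timestamps * unit, detectors                       # photon arrival times in seconds

      # times_s, det = load_timestamps("measurement.photon-hdf5")   # hypothetical file name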

  11. CdTe Timepix detectors for single-photon spectroscopy and linear polarimetry of high-flux hard x-ray radiation

    NASA Astrophysics Data System (ADS)

    Hahn, C.; Weber, G.; Märtin, R.; Höfer, S.; Kämpfer, T.; Stöhlker, Th.

    2016-04-01

    Single-photon spectroscopy of pulsed, high-intensity sources of hard X-rays — such as laser-generated plasmas — is often hampered by the pileup of several photons absorbed by the unsegmented, large-volume sensors routinely used for the detection of high-energy radiation. Detectors based on the Timepix chip, with a segmentation pitch of 55 μm and the possibility to be equipped with high-Z sensor chips, constitute an attractive alternative to commonly used passive solutions such as image plates. In this report, we present energy calibration and characterization measurements of such devices. The achievable energy resolution is comparable to that of scintillators for γ spectroscopy. Moreover, we also introduce a simple two-detector Compton polarimeter setup with a polarimeter quality of (98 ± 1)%. Finally, a proof-of-principle polarimetry experiment is discussed, where we studied the linear polarization of bremsstrahlung emitted by a laser-driven plasma and found an indication of the X-ray polarization direction depending on the polarization state of the incident laser pulse.

  12. CdTe Timepix detectors for single-photon spectroscopy and linear polarimetry of high-flux hard x-ray radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hahn, C., E-mail: christoph.hahn@uni-jena.de; Höfer, S.; Kämpfer, T.

    Single-photon spectroscopy of pulsed, high-intensity sources of hard X-rays — such as laser-generated plasmas — is often hampered by the pileup of several photons absorbed by the unsegmented, large-volume sensors routinely used for the detection of high-energy radiation. Detectors based on the Timepix chip, with a segmentation pitch of 55 μm and the possibility to be equipped with high-Z sensor chips, constitute an attractive alternative to commonly used passive solutions such as image plates. In this report, we present energy calibration and characterization measurements of such devices. The achievable energy resolution is comparable to that of scintillators for γ spectroscopy. Moreover, we also introduce a simple two-detector Compton polarimeter setup with a polarimeter quality of (98 ± 1)%. Finally, a proof-of-principle polarimetry experiment is discussed, where we studied the linear polarization of bremsstrahlung emitted by a laser-driven plasma and found an indication of the X-ray polarization direction depending on the polarization state of the incident laser pulse.

  13. Non-blinking (Zn)CuInS/ZnS Quantum Dots Prepared by In Situ Interfacial Alloying Approach

    PubMed Central

    Zhang, Aidi; Dong, Chaoqing; Li, Liang; Yin, Jinjin; Liu, Heng; Huang, Xiangyi; Ren, Jicun

    2015-01-01

    Semiconductor quantum dots (QDs) are very important optical nanomaterials with a wide range of potential applications. However, blinking behavior of single QD is an intrinsic drawback for some biological and photoelectric applications based on single-particle emission. Herein we present a rational strategy for fabrication of non-blinking (Zn)CuInS/ZnS QDs in organic phase through in situ interfacial alloying approach. This new strategy includes three steps: synthesis of CuInS QDs, eliminating the interior traps of QDs by forming graded (Zn)CuInS alloyed QDs, modifying the surface traps of QDs by introducing ZnS shells onto (Zn)CuInS QDs using alkylthiols as sulfur source and surface ligands. The suppressed blinking mechanism was mainly attributed to modifying QDs traps from interior to exterior via a step-by-step modification. Non-blinking QDs show high quantum yield, symmetric emission spectra and excellent crystallinity, and will enable applications from biology to optoelectronics that were previously hindered by blinking behavior of traditional QDs. PMID:26458511

  14. Funding for tuberculosis research-an urgent crisis of political will, human rights, and global solidarity.

    PubMed

    Frick, Mike

    2017-03-01

    Tuberculosis (TB) killed more people in 2015 than any other single infectious agent, but funding for research to develop better prevention, diagnosis, and treatment methods for TB declined to its lowest level in 7 years. TB research and development (R&D) is woefully underfunded, a situation best viewed as a crisis of political will and a failure on the part of governments to see unmet innovation needs in the TB response as a human rights issue requiring immediate action. Over 60% of available money for TB R&D comes from public sources, and 67% of public money comes from a single country: the USA. The election of Donald Trump to the US presidency in November 2016 has introduced great uncertainty into the support that science generally, and TB research in particular, will receive in the coming years. Advocacy on the part of all actors-from civil society to TB-affected communities to scientists themselves-is urgently needed to increase US government support for TB research moving forward. Copyright © 2016 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  15. Entangling quantum-logic gate operated with an ultrabright semiconductor single-photon source.

    PubMed

    Gazzano, O; Almeida, M P; Nowak, A K; Portalupi, S L; Lemaître, A; Sagnes, I; White, A G; Senellart, P

    2013-06-21

    We demonstrate the unambiguous entangling operation of a photonic quantum-logic gate driven by an ultrabright solid-state single-photon source. Indistinguishable single photons emitted by a single semiconductor quantum dot in a micropillar optical cavity are used as target and control qubits. For a source brightness of 0.56 photons per pulse, the measured truth table has an overlap with the ideal case of 68.4±0.5%, increasing to 73.0±1.6% for a source brightness of 0.17 photons per pulse. The gate is entangling: At a source brightness of 0.48, the Bell-state fidelity is above the entangling threshold of 50% and reaches 71.0±3.6% for a source brightness of 0.15.

  16. Randomness Amplification under Minimal Fundamental Assumptions on the Devices

    NASA Astrophysics Data System (ADS)

    Ramanathan, Ravishankar; Brandão, Fernando G. S. L.; Horodecki, Karol; Horodecki, Michał; Horodecki, Paweł; Wojewódka, Hanna

    2016-12-01

    Recently, the physically realistic protocol amplifying the randomness of Santha-Vazirani sources producing cryptographically secure random bits was proposed; however, for reasons of practical relevance, the crucial question remained open regarding whether this can be accomplished under the minimal conditions necessary for the task. Namely, is it possible to achieve randomness amplification using only two no-signaling components and in a situation where the violation of a Bell inequality only guarantees that some outcomes of the device for specific inputs exhibit randomness? Here, we solve this question and present a device-independent protocol for randomness amplification of Santha-Vazirani sources using a device consisting of two nonsignaling components. We show that the protocol can amplify any such source that is not fully deterministic into a fully random source while tolerating a constant noise rate and prove the composable security of the protocol against general no-signaling adversaries. Our main innovation is the proof that even the partial randomness certified by the two-party Bell test [a single input-output pair (u* , x* ) for which the conditional probability P (x*|u*) is bounded away from 1 for all no-signaling strategies that optimally violate the Bell inequality] can be used for amplification. We introduce the methodology of a partial tomographic procedure on the empirical statistics obtained in the Bell test that ensures that the outputs constitute a linear min-entropy source of randomness. As a technical novelty that may be of independent interest, we prove that the Santha-Vazirani source satisfies an exponential concentration property given by a recently discovered generalized Chernoff bound.
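
    For reference, the min-entropy figure of merit mentioned above has a one-line definition, sketched here for an illustrative Santha-Vazirani-style biased bit (the epsilon value is arbitrary and the sketch is not part of the protocol's security analysis).

      import math

      def min_entropy(probs):
          """H_min = -log2(max_x P(x)) for a single output distribution."""
          return -math.log2(max(probs))

      epsilon = 0.1                                          # SV bias: P(bit) within [0.5 - eps, 0.5 + eps]
      print(min_entropy([0.5 - epsilon, 0.5 + epsilon]))     # about 0.74 bits per output bit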

  17. Observational constraints on the physical nature of submillimetre source multiplicity: chance projections are common

    NASA Astrophysics Data System (ADS)

    Hayward, Christopher C.; Chapman, Scott C.; Steidel, Charles C.; Golob, Anneya; Casey, Caitlin M.; Smith, Daniel J. B.; Zitrin, Adi; Blain, Andrew W.; Bremer, Malcolm N.; Chen, Chian-Chou; Coppin, Kristen E. K.; Farrah, Duncan; Ibar, Eduardo; Michałowski, Michał J.; Sawicki, Marcin; Scott, Douglas; van der Werf, Paul; Fazio, Giovanni G.; Geach, James E.; Gurwell, Mark; Petitpas, Glen; Wilner, David J.

    2018-05-01

    Interferometric observations have demonstrated that a significant fraction of single-dish submillimetre (submm) sources are blends of multiple submm galaxies (SMGs), but the nature of this multiplicity, i.e. whether the galaxies are physically associated or chance projections, has not been determined. We performed spectroscopy of 11 SMGs in six multicomponent submm sources, obtaining spectroscopic redshifts for nine of them. For two additional component SMGs, we detected continuum emission but no obvious features. We supplement our observed sources with four single-dish submm sources from the literature. This sample allows us to statistically constrain the physical nature of single-dish submm source multiplicity for the first time. In three (3/7, or 43 +39/-33 per cent at 95 per cent confidence) of the single-dish sources for which the nature of the blending is unambiguous, the components for which spectroscopic redshifts are available are physically associated, whereas 4/7 (57 +33/-39 per cent) have at least one unassociated component. When components whose spectra exhibit continuum but no features and for which the photometric redshift is significantly different from the spectroscopic redshift of the other component are also considered, 6/9 (67 +26/-37 per cent) of the single-dish sources contain at least one unassociated component SMG. The nature of the multiplicity of one single-dish source is ambiguous. We conclude that physically associated systems and chance projections both contribute to the multicomponent single-dish submm source population. This result contradicts the conventional wisdom that bright submm sources are solely a result of merger-induced starbursts, as blending of unassociated galaxies is also important.
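
    The asymmetric uncertainties quoted for these small-number fractions can be reproduced with a standard binomial interval; the sketch below uses the Clopper-Pearson construction via the beta distribution, which gives intervals consistent with the values above, although the paper's exact estimator is not specified here.

      from scipy.stats import beta

      def clopper_pearson(k, n, conf=0.95):
          alpha = 1.0 - conf
          lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
          hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
          return lo, hi

      for k, n in [(3, 7), (4, 7), (6, 9)]:
          lo, hi = clopper_pearson(k, n)
          p = k / n
          print(f"{k}/{n}: {100*p:.0f} +{100*(hi - p):.0f}/-{100*(p - lo):.0f} per cent")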

  18. High genetic diversity and absence of founder effects in a worldwide aquatic invader.

    PubMed

    Lejeusne, Christophe; Saunier, Alice; Petit, Nicolas; Béguer, Mélanie; Otani, Michio; Carlton, James T; Rico, Ciro; Green, Andy J

    2014-07-24

    The introduced oriental shrimp Palaemon macrodactylus has recently become widespread in temperate estuaries worldwide. However, this recent worldwide spread outside of its native range arises after a previous introduction to the US Pacific coast, where it was restricted for more than 30 years. Using a phylogeographic approach, the present work investigates the genetic history of the invasion of this decapod worldwide. Japan acted as the main native source area for worldwide introduced populations, but other native areas (likely South Korea and China) may act as source populations as well. The recently introduced European and NW Atlantic populations result from colonization from both Japan and an unknown area of the native range, although colonization from the NE Pacific could not be ruled out. Most introduced populations had higher haplotypic diversity than most native populations. P. macrodactylus has a strong potential to become one of the most widespread introduced species and may become the dominant estuarine shrimp in Europe. The ecological and economic consequences of this invasion remain to be thoroughly evaluated.

  19. Analysis of oil-pipeline distribution of multiple products subject to delivery time-windows

    NASA Astrophysics Data System (ADS)

    Jittamai, Phongchai

    This dissertation defines the operational problems of, and develops solution methodologies for, a distribution of multiple products into oil pipeline subject to delivery time-windows constraints. A multiple-product oil pipeline is a pipeline system composing of pipes, pumps, valves and storage facilities used to transport different types of liquids. Typically, products delivered by pipelines are petroleum of different grades moving either from production facilities to refineries or from refineries to distributors. Time-windows, which are generally used in logistics and scheduling areas, are incorporated in this study. The distribution of multiple products into oil pipeline subject to delivery time-windows is modeled as multicommodity network flow structure and mathematically formulated. The main focus of this dissertation is the investigation of operating issues and problem complexity of single-source pipeline problems and also providing solution methodology to compute input schedule that yields minimum total time violation from due delivery time-windows. The problem is proved to be NP-complete. The heuristic approach, a reversed-flow algorithm, is developed based on pipeline flow reversibility to compute input schedule for the pipeline problem. This algorithm is implemented in no longer than O(T·E) time. This dissertation also extends the study to examine some operating attributes and problem complexity of multiple-source pipelines. The multiple-source pipeline problem is also NP-complete. A heuristic algorithm modified from the one used in single-source pipeline problems is introduced. This algorithm can also be implemented in no longer than O(T·E) time. Computational results are presented for both methodologies on randomly generated problem sets. The computational experience indicates that reversed-flow algorithms provide good solutions in comparison with the optimal solutions. Only 25% of the problems tested were more than 30% greater than optimal values and approximately 40% of the tested problems were solved optimally by the algorithms.

  20. Epitaxial growth of thermally stable cobalt films on Au(111)

    NASA Astrophysics Data System (ADS)

    Haag, N.; Laux, M.; Stöckl, J.; Kollamana, J.; Seidel, J.; Großmann, N.; Fetzer, R.; Kelly, L. L.; Wei, Z.; Stadtmüller, B.; Cinchetti, M.; Aeschlimann, M.

    2016-10-01

    Ferromagnetic thin films play a fundamental role in spintronic applications as a source for spin polarized carriers and in fundamental studies as ferromagnetic substrates. However, it is challenging to produce such metallic films with high structural quality and chemical purity on single crystalline substrates since the diffusion barrier across the metal-metal interface is usually smaller than the thermal activation energy necessary for smooth surface morphologies. Here, we introduce epitaxial thin Co films grown on an Au(111) single crystal surface as a thermally stable ferromagnetic thin film. Our structural investigations reveal an identical growth of thin Co/Au(111) films compared to Co bulk single crystals with large monoatomic Co terraces with an average width of 500 Å, formed after thermal annealing at 575 K. Combining our results from photoemission and Auger electron spectroscopy, we provide evidence that no significant diffusion of Au into the near surface region of the Co film takes place for this temperature and that no Au capping layer is formed on top of Co films. Furthermore, we show that the electronic valence band is dominated by a strong spectral contribution from a Co 3d band and a Co derived surface resonance in the minority band. Both states lead to an overall negative spin polarization at the Fermi energy.

  1. A statistical framework for applying RNA profiling to chemical hazard detection.

    PubMed

    Kostich, Mitchell S

    2017-12-01

    Use of 'omics technologies in environmental science is expanding. However, application is mostly restricted to characterizing molecular steps leading from toxicant interaction with molecular receptors to apical endpoints in laboratory species. Use in environmental decision-making is limited, due to difficulty in elucidating mechanisms in sufficient detail to make quantitative outcome predictions in any single species or in extending predictions to aquatic communities. Here we introduce a mechanism-agnostic statistical approach, supplementing mechanistic investigation by allowing probabilistic outcome prediction even when understanding of molecular pathways is limited, and facilitating extrapolation from results in laboratory test species to predictions about aquatic communities. We use concepts familiar to environmental managers, supplemented with techniques employed for clinical interpretation of 'omics-based biomedical tests. We describe the framework in step-wise fashion, beginning with single test replicates of a single RNA variant, then extending to multi-gene RNA profiling, collections of test replicates, and integration of complementary data. In order to simplify the presentation, we focus on using RNA profiling for distinguishing presence versus absence of chemical hazards, but the principles discussed can be extended to other types of 'omics measurements, multi-class problems, and regression. We include a supplemental file demonstrating many of the concepts using the open source R statistical package. Published by Elsevier Ltd.
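
    The kind of probabilistic presence/absence call described above can be sketched with a generic classifier on simulated expression profiles; the paper's supplemental material uses R, so the Python analogue below (logistic regression with cross-validated probabilities on simulated data) is purely illustrative.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(1)
      n_samples, n_genes = 60, 200
      X = rng.normal(size=(n_samples, n_genes))      # simulated RNA profiles
      y = rng.integers(0, 2, size=n_samples)         # 1 = chemical hazard present, 0 = absent
      X[y == 1, :10] += 1.0                          # a handful of responsive transcripts

      clf = LogisticRegression(penalty="l2", C=0.5, max_iter=1000)
      proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
      # 'proba' is a per-sample probability of exposure that can feed a decision
      # rule with whatever false-positive tolerance the application requires.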

  2. Double optical gating

    NASA Astrophysics Data System (ADS)

    Gilbertson, Steve

    The observation and control of dynamics in atomic and molecular targets requires the use of laser pulses with duration less than the characteristic timescale of the process which is to be manipulated. For electron dynamics, this time scale is on the order of attoseconds where 1 attosecond = 10^-18 seconds. In order to generate pulses on this time scale, different gating methods have been proposed. The idea is to extract or "gate" a single pulse from an attosecond pulse train and switch off all the other pulses. While previous methods have had some success, they are very difficult to implement and so far very few labs have access to these unique light sources. The purpose of this work is to introduce a new method, called double optical gating (DOG), and to demonstrate its effectiveness at generating high contrast single isolated attosecond pulses from multi-cycle lasers. First, the method is described in detail and is investigated in the spectral domain. The resulting attosecond pulses produced are then temporally characterized through attosecond streaking. A second method of gating, called generalized double optical gating (GDOG), is also introduced. This method allows attosecond pulse generation directly from a carrier-envelope phase un-stabilized laser system for the first time. Next the methods of DOG and GDOG are implemented in attosecond applications like high flux pulses and extreme broadband spectrum generation. Finally, the attosecond pulses themselves are used in experiments. First, an attosecond/femtosecond cross correlation is used for characterization of spatial and temporal properties of femtosecond pulses. Then, an attosecond pump, femtosecond probe experiment is conducted to observe and control electron dynamics in helium for the first time.

  3. Modeling, Development and Control of Multilevel Converters for Power System Application

    NASA Astrophysics Data System (ADS)

    Vahedi, Hani

    The main goal of this project is to develop a multilevel converter topology that is useful in power system applications. Although many topologies built from large numbers of switches and isolated dc sources have been introduced, a single-dc-source multilevel inverter is still a matter of controversy. In fact, each isolated dc source implies a bulky transformer and a rectifier with their own losses and costs, which discourages industry from adopting such topologies. On the other hand, single-dc-source multilevel inverter topologies require associated controllers to regulate the dc capacitor voltages in order to produce a multilevel voltage waveform at the output, and a complex controller is unattractive to investors. Consequently, developing a single-dc-source multilevel inverter topology along with a light and reliable voltage control is still a challenging route to replacing 2-level inverters in the market. The first effort in this project was devoted to the PUC7 inverter, for which a simple yet efficient controller was designed. A new model of the PUC7 inverter is derived and simplified to a first-order system. Afterwards, a nonlinear cascaded controller is designed and applied to regulate the capacitor voltage at 1/3 of the DC source amplitude and to generate 7 identical voltage levels at the output, supplying different types of loads such as RL or harmonic rectifier loads. In the next work, the PUC5 topology is proposed as a remedy to the PUC7, which requires a complicated controller to operate properly. The capacitor voltage is regulated at half of the dc source amplitude to generate 5 voltage levels at the output. Although the 7-level voltage waveform is replaced by a 5-level one in the PUC5 topology, it is shown that the PUC5 needs only a very simple and reliable voltage balancing technique because it has redundant switching states. Moreover, a sensor-less voltage balancing technique is designed and implemented on the PUC5 inverter to work in both stand-alone and grid-connected modes of operation. Eventually, a modified configuration of the PUC5 topology is presented to work as a buck PFC rectifier. Internally, the rectifier behaves like a buck converter, generating stepped-down DC voltages at the two output terminals, while the grid sees a boost converter externally. As well, a decoupled voltage/current controller is designed and applied to balance the output voltages and synchronize the input current with the grid voltage for PFC operation. A power balance analysis is performed to show the limits of the load variation range. All the theoretical and simulation studies are validated by experimental results.
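
    As a purely illustrative sketch of the redundant-state idea mentioned above: the level names, state labels and their charge/discharge effects below are assumptions for the example, not the thesis' actual switch mapping or its sensor-less scheme.

      # Illustrative only: redundant-state selection for capacitor balancing in a
      # single-dc-source 5-level inverter such as PUC5. State labels are invented.
      def pick_state(level, v_cap, v_ref, i_load):
          """Return a label for the switching state that drives v_cap toward v_ref."""
          if level not in ("+half", "-half", "zero"):
              return level                      # +full / -full levels have no redundancy
          # For redundant levels, one state charges the capacitor when the load current
          # is positive and the other discharges it; pick the one reducing the error.
          need_charge = v_cap < v_ref
          charging = f"{level}/A" if i_load > 0 else f"{level}/B"
          discharging = f"{level}/B" if i_load > 0 else f"{level}/A"
          return charging if need_charge else discharging

      # Example: capacitor below Vdc/2, positive load current, +Vdc/2 level requested.
      print(pick_state("+half", v_cap=95.0, v_ref=100.0, i_load=4.2))   # -> '+half/A'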

  4. Use of FEC coding to improve statistical multiplexing performance for video transport over ATM networks

    NASA Astrophysics Data System (ADS)

    Kurceren, Ragip; Modestino, James W.

    1998-12-01

    The use of forward error-control (FEC) coding, possibly in conjunction with ARQ techniques, has emerged as a promising approach for video transport over ATM networks for cell-loss recovery and/or bit error correction, such as might be required for wireless links. Although FEC provides cell-loss recovery capabilities it also introduces transmission overhead which can possibly cause additional cell losses. A methodology is described to maximize the number of video sources multiplexed at a given quality of service (QoS), measured in terms of decoded cell loss probability, using interlaced FEC codes. The transport channel is modelled as a block interference channel (BIC) and the multiplexer as single server, deterministic service, finite buffer supporting N users. Based upon an information-theoretic characterization of the BIC and large deviation bounds on the buffer overflow probability, the described methodology provides theoretically achievable upper limits on the number of sources multiplexed. Performance of specific coding techniques using interlaced nonbinary Reed-Solomon (RS) codes and binary rate-compatible punctured convolutional (RCPC) codes is illustrated.
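
    A toy sketch of why interlacing codeword symbols across cells helps with cell loss; the specific layout below is an assumption for illustration, not necessarily the paper's exact interlacing.

      # If each ATM cell carries at most one symbol of any codeword, a lost cell
      # costs every codeword only a single erasure, which an interlaced
      # Reed-Solomon code can correct.
      import numpy as np

      n_codewords, n_symbols = 8, 8            # e.g. 8 RS codewords of length 8
      block = np.arange(n_codewords * n_symbols).reshape(n_codewords, n_symbols)

      # Assumed interleaving rule: cell j carries column j of the block,
      # i.e. symbol j of every codeword.
      lost_cell = 3
      lost = np.zeros_like(block, dtype=bool)
      lost[:, lost_cell] = True

      print("erasures per codeword after losing one cell:", lost.sum(axis=1))
      # -> [1 1 1 1 1 1 1 1]: one correctable erasure per codeword instead of a burst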

  5. Accretion states in X-ray binaries and their connection to GeV emission

    NASA Astrophysics Data System (ADS)

    Koerding, Elmar

    Accretion onto compact objects is intrinsically a multi-wavelength phenomenon: it shows emission components visible from the radio to GeV bands. In X-ray binaries one can observe the evolution of a single source under changes of the accretion rate and thus study the interplay between the different emission components. I will introduce the phenomenology of X-ray binaries and their accretion states and present our current understanding of the interplay between the optically thin and optically thick parts of the accretion flow and the jet. The recent detection by the Fermi Large Area Telescope of a variable high-energy source coinciding with the position of the X-ray binary Cygnus X-3 will be presented. Its identification with Cygnus X-3 has been secured by the detection of its orbital period in gamma rays, as well as the correlation of the LAT flux with radio emission from the relativistic jets of Cygnus X-3. This will be interpreted in the context of the accretion states of the X-ray binary.

  6. Microphysical explanation of the RH-dependent water affinity of biogenic organic aerosol and its importance for climate

    DOE PAGES

    Rastak, N.; Pajunoja, A.; Acosta Navarro, J. C.; ...

    2017-04-28

    A large fraction of atmospheric organic aerosol (OA) originates from natural emissions that are oxidized in the atmosphere to form secondary organic aerosol (SOA). Isoprene (IP) and monoterpenes (MT) are the most important precursors of SOA originating from forests. The climate impacts from OA are currently estimated through parameterizations of water uptake that drastically simplify the complexity of OA. We combine laboratory experiments, thermodynamic modeling, field observations, and climate modeling to (1) explain the molecular mechanisms behind RH-dependent SOA water-uptake with solubility and phase separation; (2) show that laboratory data on IP- and MT-SOA hygroscopicity are representative of ambient data with corresponding OA source profiles; and (3) demonstrate the sensitivity of the modeled aerosol climate effect to assumed OA water affinity. We conclude that the commonly used single-parameter hygroscopicity framework can introduce significant error when quantifying the climate effects of organic aerosol. The results highlight the need for better constraints on the overall global OA mass loadings and its molecular composition, including currently underexplored anthropogenic and marine OA sources.
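
    For context, the "single-parameter hygroscopicity framework" discussed here is commonly written in the kappa-Koehler form of Petters and Kreidenweis; a generic statement of that expression (not taken from this paper) is:

      % Single-parameter (kappa) Koehler expression commonly used for OA water uptake.
      \[
        S(D) \;=\; \frac{D^{3} - D_{\mathrm{d}}^{3}}{D^{3} - D_{\mathrm{d}}^{3}\,(1-\kappa)}
        \exp\!\left(\frac{4\,\sigma_{\mathrm{s/a}} M_{\mathrm{w}}}{R\,T\,\rho_{\mathrm{w}}\,D}\right)
      \]
      % S: saturation ratio over the droplet, D: droplet diameter, D_d: dry diameter,
      % kappa: hygroscopicity parameter, sigma_{s/a}: surface tension, M_w and rho_w:
      % molar mass and density of water, R: gas constant, T: temperature.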

  7. Unbound motion on a Schwarzschild background: Practical approaches to frequency domain computations

    NASA Astrophysics Data System (ADS)

    Hopper, Seth

    2018-03-01

    Gravitational perturbations due to a point particle moving on a static black hole background are naturally described in Regge-Wheeler gauge. The first-order field equations reduce to a single master wave equation for each radiative mode. The master function satisfying this wave equation is a linear combination of the metric perturbation amplitudes with a source term arising from the stress-energy tensor of the point particle. The original master functions were found by Regge and Wheeler (odd parity) and Zerilli (even parity). Subsequent work by Moncrief and then Cunningham, Price and Moncrief introduced new master variables which allow time domain reconstruction of the metric perturbation amplitudes. Here, I explore the relationship between these different functions and develop a general procedure for deriving new higher-order master functions from ones already known. The benefit of higher-order functions is that their source terms always converge faster at large distance than their lower-order counterparts. This makes for a dramatic improvement in both the speed and accuracy of frequency domain codes when analyzing unbound motion.
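
    For reference, each radiative mode obeys a master wave equation of the generic form below, where the potential is the Regge-Wheeler one for odd parity and the Zerilli one for even parity:

      \[
        \left[-\frac{\partial^{2}}{\partial t^{2}} + \frac{\partial^{2}}{\partial r_{*}^{2}}
        - V_{\ell}(r)\right]\Psi_{\ell m}(t,r) \;=\; S_{\ell m}(t,r),
        \qquad r_{*} = r + 2M\ln\!\left(\frac{r}{2M} - 1\right),
      \]
      % Psi_{lm}: master function built from the metric perturbation amplitudes,
      % S_{lm}: source term from the point-particle stress-energy tensor,
      % r_*: tortoise coordinate on the Schwarzschild background of mass M.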

  8. Microphysical explanation of the RH-dependent water affinity of biogenic organic aerosol and its importance for climate

    NASA Astrophysics Data System (ADS)

    Rastak, N.; Pajunoja, A.; Acosta Navarro, J. C.; Ma, J.; Song, M.; Partridge, D. G.; Kirkevåg, A.; Leong, Y.; Hu, W. W.; Taylor, N. F.; Lambe, A.; Cerully, K.; Bougiatioti, A.; Liu, P.; Krejci, R.; Petäjä, T.; Percival, C.; Davidovits, P.; Worsnop, D. R.; Ekman, A. M. L.; Nenes, A.; Martin, S.; Jimenez, J. L.; Collins, D. R.; Topping, D. O.; Bertram, A. K.; Zuend, A.; Virtanen, A.; Riipinen, I.

    2017-05-01

    A large fraction of atmospheric organic aerosol (OA) originates from natural emissions that are oxidized in the atmosphere to form secondary organic aerosol (SOA). Isoprene (IP) and monoterpenes (MT) are the most important precursors of SOA originating from forests. The climate impacts from OA are currently estimated through parameterizations of water uptake that drastically simplify the complexity of OA. We combine laboratory experiments, thermodynamic modeling, field observations, and climate modeling to (1) explain the molecular mechanisms behind RH-dependent SOA water-uptake with solubility and phase separation; (2) show that laboratory data on IP- and MT-SOA hygroscopicity are representative of ambient data with corresponding OA source profiles; and (3) demonstrate the sensitivity of the modeled aerosol climate effect to assumed OA water affinity. We conclude that the commonly used single-parameter hygroscopicity framework can introduce significant error when quantifying the climate effects of organic aerosol. The results highlight the need for better constraints on the overall global OA mass loadings and its molecular composition, including currently underexplored anthropogenic and marine OA sources.

  9. On-chip dual-comb source for spectroscopy.

    PubMed

    Dutt, Avik; Joshi, Chaitanya; Ji, Xingchen; Cardenas, Jaime; Okawachi, Yoshitomo; Luke, Kevin; Gaeta, Alexander L; Lipson, Michal

    2018-03-01

    Dual-comb spectroscopy is a powerful technique for real-time, broadband optical sampling of molecular spectra, which requires no moving components. Recent developments with microresonator-based platforms have enabled frequency combs at the chip scale. However, the need to precisely match the resonance wavelengths of distinct high quality-factor microcavities has hindered the development of on-chip dual combs. We report the simultaneous generation of two microresonator combs on the same chip from a single laser, drastically reducing experimental complexity. We demonstrate broadband optical spectra spanning 51 THz and low-noise operation of both combs by deterministically tuning into soliton mode-locked states using integrated microheaters, resulting in narrow (<10 kHz) microwave beat notes. We further use one comb as a reference to probe the formation dynamics of the other comb, thus introducing a technique to investigate comb evolution without auxiliary lasers or microwave oscillators. We demonstrate high signal-to-noise ratio absorption spectroscopy spanning 170 nm using the dual-comb source over a 20-μs acquisition time. Our device paves the way for compact and robust spectrometers at nanosecond time scales enabled by large beat-note spacings (>1 GHz).

  10. Microphysical explanation of the RH-dependent water affinity of biogenic organic aerosol and its importance for climate.

    PubMed

    Rastak, N; Pajunoja, A; Acosta Navarro, J C; Ma, J; Song, M; Partridge, D G; Kirkevåg, A; Leong, Y; Hu, W W; Taylor, N F; Lambe, A; Cerully, K; Bougiatioti, A; Liu, P; Krejci, R; Petäjä, T; Percival, C; Davidovits, P; Worsnop, D R; Ekman, A M L; Nenes, A; Martin, S; Jimenez, J L; Collins, D R; Topping, D O; Bertram, A K; Zuend, A; Virtanen, A; Riipinen, I

    2017-05-28

    A large fraction of atmospheric organic aerosol (OA) originates from natural emissions that are oxidized in the atmosphere to form secondary organic aerosol (SOA). Isoprene (IP) and monoterpenes (MT) are the most important precursors of SOA originating from forests. The climate impacts from OA are currently estimated through parameterizations of water uptake that drastically simplify the complexity of OA. We combine laboratory experiments, thermodynamic modeling, field observations, and climate modeling to (1) explain the molecular mechanisms behind RH-dependent SOA water-uptake with solubility and phase separation; (2) show that laboratory data on IP- and MT-SOA hygroscopicity are representative of ambient data with corresponding OA source profiles; and (3) demonstrate the sensitivity of the modeled aerosol climate effect to assumed OA water affinity. We conclude that the commonly used single-parameter hygroscopicity framework can introduce significant error when quantifying the climate effects of organic aerosol. The results highlight the need for better constraints on the overall global OA mass loadings and its molecular composition, including currently underexplored anthropogenic and marine OA sources.

  11. Progress Toward Improving Jet Noise Predictions in Hot Jets

    NASA Technical Reports Server (NTRS)

    Khavaran, Abbas; Kenzakowski, Donald C.

    2007-01-01

    An acoustic analogy methodology for improving noise predictions in hot round jets is presented. Past approaches have often neglected the impact of temperature fluctuations on the predicted sound spectral density, which could be significant for heated jets, and this has yielded noticeable acoustic under-predictions in such cases. The governing acoustic equations adopted here are a set of linearized, inhomogeneous Euler equations. These equations are combined into a single third order linear wave operator when the base flow is considered as a locally parallel mean flow. The remaining second-order fluctuations are regarded as the equivalent sources of sound and are modeled. It is shown that the hot jet effect may be introduced primarily through a fluctuating velocity/enthalpy term. Modeling this additional source requires specialized inputs from a RANS-based flowfield simulation. The information is supplied using an extension to a baseline two equation turbulence model that predicts total enthalpy variance in addition to the standard parameters. Preliminary application of this model to a series of unheated and heated subsonic jets shows significant improvement in the acoustic predictions at the 90 degree observer angle.

  12. Microphysical explanation of the RH-dependent water affinity of biogenic organic aerosol and its importance for climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rastak, N.; Pajunoja, A.; Acosta Navarro, J. C.

    A large fraction of atmospheric organic aerosol (OA) originates from natural emissions that are oxidized in the atmosphere to form secondary organic aerosol (SOA). Isoprene (IP) and monoterpenes (MT) are the most important precursors of SOA originating from forests. The climate impacts from OA are currently estimated through parameterizations of water uptake that drastically simplify the complexity of OA. We combine laboratory experiments, thermodynamic modeling, field observations, and climate modeling to (1) explain the molecular mechanisms behind RH-dependent SOA water-uptake with solubility and phase separation; (2) show that laboratory data on IP- and MT-SOA hygroscopicity are representative of ambient data with corresponding OA source profiles; and (3) demonstrate the sensitivity of the modeled aerosol climate effect to assumed OA water affinity. We conclude that the commonly used single-parameter hygroscopicity framework can introduce significant error when quantifying the climate effects of organic aerosol. The results highlight the need for better constraints on the overall global OA mass loadings and its molecular composition, including currently underexplored anthropogenic and marine OA sources.

  13. The localization of focal heart activity via body surface potential measurements: tests in a heterogeneous torso phantom

    NASA Astrophysics Data System (ADS)

    Wetterling, F.; Liehr, M.; Schimpf, P.; Liu, H.; Haueisen, J.

    2009-09-01

    The non-invasive localization of focal heart activity via body surface potential measurements (BSPM) could greatly benefit the understanding and treatment of arrhythmic heart diseases. However, the in vivo validation of source localization algorithms is rather difficult with currently available measurement techniques. In this study, we used a physical torso phantom composed of different conductive compartments and seven dipoles, which were placed in the anatomical position of the human heart, in order to assess the performance of the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) algorithm. Electric potentials were measured on the torso surface for single dipoles with and without further uncorrelated or correlated dipole activity. The localization error averaged 11 ± 5 mm over 22 dipoles, which shows the ability of RAP-MUSIC to distinguish an uncorrelated dipole from surrounding source activity. For the first time, real computational modelling errors could be included within the validation procedure due to the physically modelled heterogeneities. In conclusion, the introduced heterogeneous torso phantom can be used to validate state-of-the-art algorithms under nearly realistic measurement conditions.
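
    The phantom study evaluates RAP-MUSIC; as a simplified illustration of the underlying subspace scan, the sketch below runs classic MUSIC on synthetic sensor data with made-up lead fields, not the study's forward model.

      # Minimal classic-MUSIC sketch on synthetic data (illustration only).
      import numpy as np

      rng = np.random.default_rng(1)
      n_sensors, n_candidates, n_samples = 32, 200, 500
      L = rng.normal(size=(n_sensors, n_candidates))   # lead fields of candidate dipoles
      true_idx = 42
      data = np.outer(L[:, true_idx], rng.normal(size=n_samples))   # one active dipole
      data += 0.1 * rng.normal(size=data.shape)                     # sensor noise

      C = data @ data.T / n_samples                    # data covariance
      eigvals, eigvecs = np.linalg.eigh(C)
      signal_space = eigvecs[:, -1:]                   # rank-1 signal subspace here

      def subcorr(l, U):
          """Subspace correlation between a lead field and the signal subspace."""
          l = l / np.linalg.norm(l)
          return np.linalg.norm(U.T @ l)

      scores = np.array([subcorr(L[:, k], signal_space) for k in range(n_candidates)])
      print("estimated source index:", scores.argmax(), "true index:", true_idx)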

  14. Augmented switching linear dynamical system model for gas concentration estimation with MOX sensors in an open sampling system.

    PubMed

    Di Lello, Enrico; Trincavelli, Marco; Bruyninckx, Herman; De Laet, Tinne

    2014-07-11

    In this paper, we introduce a Bayesian time series model approach for gas concentration estimation using Metal Oxide (MOX) sensors in an Open Sampling System (OSS). Our approach focuses on compensating for the slow response of MOX sensors while concurrently solving the problem of estimating the gas concentration in an OSS. The proposed Augmented Switching Linear System model allows all the sources of uncertainty arising at each step of the problem to be included in a single coherent probabilistic formulation. In particular, the problem of detecting on-line the current sensor dynamical regime and estimating the underlying gas concentration under environmental disturbances and noisy measurements is formulated and solved as a statistical inference problem. Our model improves on the state of the art, in which system modeling approaches had already been introduced but provided only an indirect relative measure proportional to the gas concentration and ignored the problem of modeling uncertainty. Our approach is validated experimentally, and the performance in terms of speed and quality of the gas concentration estimation is compared with that obtained using a photo-ionization detector.
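
    The paper's augmented switching linear dynamical system is considerably richer; the sketch below is a simplified, single-regime stand-in showing how a linear state-space model of a slow first-order sensor can recover the underlying concentration from lagged, noisy readings. All numbers are invented.

      # Simplified single-regime Kalman-filter sketch (not the paper's ASLDS model).
      import numpy as np

      dt, tau = 0.1, 5.0                     # sample time and assumed sensor time constant
      a = np.exp(-dt / tau)

      # State x = [concentration, sensor_reading]; the sensor relaxes toward the gas level.
      A = np.array([[1.0, 0.0],
                    [1 - a, a]])
      H = np.array([[0.0, 1.0]])             # only the sensor reading is observed
      Q = np.diag([0.05, 1e-4])              # process noise (concentration random walk)
      R = np.array([[0.02]])                 # measurement noise

      rng = np.random.default_rng(0)
      true_c = np.concatenate([np.zeros(100), np.ones(300), np.zeros(200)])  # gas step
      reading, estimates = 0.0, []
      x, P = np.zeros(2), np.eye(2)
      for c in true_c:
          reading = a * reading + (1 - a) * c + 0.02 * rng.normal()  # simulated slow sensor
          x, P = A @ x, A @ P @ A.T + Q                              # predict
          S = H @ P @ H.T + R                                        # update
          K = P @ H.T @ np.linalg.inv(S)
          x = x + (K @ (np.array([reading]) - H @ x)).ravel()
          P = (np.eye(2) - K @ H) @ P
          estimates.append(x[0])

      print("final concentration estimate:", round(estimates[-1], 2))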

  15. Augmented Switching Linear Dynamical System Model for Gas Concentration Estimation with MOX Sensors in an Open Sampling System

    PubMed Central

    Di Lello, Enrico; Trincavelli, Marco; Bruyninckx, Herman; De Laet, Tinne

    2014-01-01

    In this paper, we introduce a Bayesian time series model approach for gas concentration estimation using Metal Oxide (MOX) sensors in an Open Sampling System (OSS). Our approach focuses on compensating for the slow response of MOX sensors while concurrently solving the problem of estimating the gas concentration in an OSS. The proposed Augmented Switching Linear System model allows all the sources of uncertainty arising at each step of the problem to be included in a single coherent probabilistic formulation. In particular, the problem of detecting on-line the current sensor dynamical regime and estimating the underlying gas concentration under environmental disturbances and noisy measurements is formulated and solved as a statistical inference problem. Our model improves on the state of the art, in which system modeling approaches had already been introduced but provided only an indirect relative measure proportional to the gas concentration and ignored the problem of modeling uncertainty. Our approach is validated experimentally, and the performance in terms of speed and quality of the gas concentration estimation is compared with that obtained using a photo-ionization detector. PMID:25019637

  16. Accelerating the Original Profile Kernel.

    PubMed

    Hamp, Tobias; Goldberg, Tatyana; Rost, Burkhard

    2013-01-01

    One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical applications of large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a maximal speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, making the kernel possibly the top contender in terms of the speed/performance trade-off. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel.

  17. Quantum measurement-induced dynamics of many-body ultracold bosonic and fermionic systems in optical lattices

    NASA Astrophysics Data System (ADS)

    Mazzucchi, Gabriel; Kozlowski, Wojciech; Caballero-Benitez, Santiago F.; Elliott, Thomas J.; Mekhov, Igor B.

    2016-02-01

    Trapping ultracold atoms in optical lattices enabled numerous breakthroughs uniting several disciplines. Coupling these systems to quantized light leads to a plethora of new phenomena and has opened up a new field of study. Here we introduce an unusual additional source of competition in a many-body strongly correlated system: We prove that quantum backaction of global measurement is able to efficiently compete with intrinsic short-range dynamics of an atomic system. The competition becomes possible due to the ability to change the spatial profile of a global measurement at a microscopic scale comparable to the lattice period without the need of single site addressing. In coherence with a general physical concept, where new competitions typically lead to new phenomena, we demonstrate nontrivial dynamical effects such as large-scale multimode oscillations, long-range entanglement, and correlated tunneling, as well as selective suppression and enhancement of dynamical processes beyond the projective limit of the quantum Zeno effect. We demonstrate both the breakup and protection of strongly interacting fermion pairs by measurement. Such a quantum optical approach introduces into many-body physics novel processes, objects, and methods of quantum engineering, including the design of many-body entangled environments for open systems.

  18. Single-channel mixed signal blind source separation algorithm based on multiple ICA processing

    NASA Astrophysics Data System (ADS)

    Cheng, Xiefeng; Li, Ji

    2017-01-01

    Taking the separation of the fetal heart sound signal from the mixed signal acquired with an electronic stethoscope as the research background, this paper puts forward a single-channel mixed-signal blind source separation algorithm based on multiple ICA processing. Firstly, empirical mode decomposition (EMD) decomposes the single-channel mixed signal into multiple orthogonal signal components, which are then processed by ICA; the resulting independent signal components are called independent sub-components of the mixed signal. Then, by combining the multiple independent sub-components with the single-channel mixed signal, the single channel is expanded into multiple channels, which turns the under-determined blind source separation problem into a well-posed blind source separation problem. Further, an estimate of the source signal is obtained by ICA processing. Finally, if the separation result is not satisfactory, the previous separation result is combined with the single-channel mixed signal and the ICA processing is repeated until the desired estimate of the source signal is obtained. The simulation results show that the algorithm separates single-channel mixed physiological signals effectively.
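
    A conceptual sketch of the described pipeline, assuming the third-party PyEMD and scikit-learn packages; this is not the paper's code, and the toy mixture below merely stands in for a stethoscope recording.

      # Conceptual EMD-then-ICA sketch for single-channel blind source separation.
      import numpy as np
      from PyEMD import EMD                    # assumed dependency: pip install EMD-signal
      from sklearn.decomposition import FastICA

      fs = 1000
      t = np.arange(0, 5, 1 / fs)
      # Toy single-channel mixture (two periodic components plus noise).
      mixture = np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 2.3 * t) \
                + 0.05 * np.random.default_rng(0).normal(size=t.size)

      # Step 1: EMD expands the single channel into several oscillatory components.
      imfs = EMD().emd(mixture)                # shape: (n_imfs, n_samples)

      # Step 2: stack the IMFs with the original mixture so the problem is no longer
      # under-determined, then run ICA to estimate the underlying sources.
      channels = np.vstack([mixture, imfs])
      ica = FastICA(n_components=min(4, channels.shape[0]), random_state=0)
      sources = ica.fit_transform(channels.T)  # columns are estimated source signals

      print("estimated source matrix shape:", sources.shape)
      # The paper's final step feeds a previous estimate back in and repeats the ICA
      # stage until the separation is judged satisfactory.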

  19. Multi-voxel pattern analysis reveals increased memory targeting and reduced use of retrieved details during single-agenda source monitoring

    PubMed Central

    McDuff, Susan G. R.; Frankel, Hillary C.; Norman, Kenneth A.

    2009-01-01

    We used multi-voxel pattern analysis (MVPA) of fMRI data to gain insight into how subjects’ retrieval agendas influence source memory judgments (was item X studied using source Y?). In Experiment 1, we used a single-agenda test where subjects judged whether items were studied with the targeted source or not. In Experiment 2, we used a multi-agenda test where subjects judged whether items were studied using the targeted source, studied using a different source, or nonstudied. To evaluate the differences between single- and multi-agenda source monitoring, we trained a classifier to detect source-specific fMRI activity at study, and then we applied the classifier to data from the test phase. We focused on trials where the targeted source and the actual source differed, so we could use MVPA to track neural activity associated with both the targeted source and the actual source. Our results indicate that single-agenda monitoring was associated with increased focus on the targeted source (as evidenced by increased targeted-source activity, relative to baseline) and reduced use of information relating to the actual, non-target source. In the multi-agenda experiment, high levels of actual-source activity were associated with increased correct rejections, suggesting that subjects were using recollection of actual-source information to avoid source memory errors. In the single-agenda experiment, there were comparable levels of actual-source activity (suggesting that recollection was taking place), but the relationship between actual-source activity and behavior was absent (suggesting that subjects were failing to make proper use of this information). PMID:19144851

  20. Assessing the Financial Benefits of Faster Development Times: The Case of Single-source Versus Multi-vendor Outsourced Biopharmaceutical Manufacturing.

    PubMed

    DiMasi, Joseph A; Smith, Zachary; Getz, Kenneth A

    2018-05-10

    The extent to which new drug developers can benefit financially from shorter development times has implications for development efficiency and innovation incentives. We provided a real-world example of such gains by using recent estimates of drug development costs and returns. Time and fee data were obtained on 5 single-source manufacturing projects. Time and fees were modeled for these projects as if the drug substance and drug product processes had been contracted separately from 2 vendors. The multi-vendor model was taken as the base case, and financial impacts from single-source contracting were determined relative to the base case. The mean and median after-tax financial benefits of shorter development times from single-source contracting were $44.7 million and $34.9 million, respectively (2016 dollars). The after-tax increases in sponsor fees from single-source contracting were small in comparison (mean and median of $0.65 million and $0.25 million). For the data we examined, single-source contracting yielded substantial financial benefits over multi-source contracting, even after accounting for somewhat higher sponsor fees. Copyright © 2018 Elsevier HS Journals, Inc. All rights reserved.

  1. 40 CFR 467.65 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sources. 467.65 Section 467.65 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED... Subcategory § 467.65 Pretreatment standards for existing sources. Except as provided in 40 CFR 403.7 and 403.13, any existing source subject to this subpart which introduces pollutants into a publicly owned...

  2. Integrating diverse forage sources reduces feed gaps on mixed crop-livestock farms.

    PubMed

    Bell, L W; Moore, A D; Thomas, D T

    2017-12-04

    Highly variable climates induce large variability in the supply of forage for livestock, and so farmers must manage their livestock systems to reduce the risk of feed gaps (i.e. periods when livestock feed demand exceeds forage supply). However, mixed crop-livestock farmers can utilise a range of feed sources on their farms to help mitigate these risks. This paper reports on the development and application of a simple whole-farm feed-energy balance calculator which is used to evaluate the frequency and magnitude of feed gaps. The calculator matches long-term simulations of variation in forage and metabolisable energy supply from diverse sources against energy demand for different livestock enterprises. Scenarios of increasing the diversity of forage sources in livestock systems are investigated for six locations selected to span Australia's crop-livestock zone. We found that systems relying on only one feed source were prone to higher risk of feed gaps, and hence would often have to reduce stocking rates to mitigate these risks or use supplementary feed. At all sites, adding more feed sources to the farm feedbase improved the continuity of supply of both fresh and carry-over forage, reducing the frequency and magnitude of feed deficits. However, there were diminishing returns from making the feedbase more complex, with combinations of two to three feed sources typically achieving the maximum benefits in terms of reducing the risk of feed gaps. Higher stocking rates could be maintained while limiting risk when combinations of other feed sources were introduced into the feedbase. For the same level of risk, a feedbase relying on a diversity of forage sources could support stocking rates 1.4 to 3 times higher than if they were using a single pasture source. This suggests that there is significant capacity to mitigate the risk of feed gaps at the same time as increasing 'safe' stocking rates through better integration of feed sources on mixed crop-livestock farms across diverse regions and climates.
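
    A minimal sketch of the kind of whole-farm feed-energy balance described above; every supply and demand figure below is invented for illustration.

      # Toy monthly feed-energy balance: pooled supply versus livestock demand.
      supply = {                      # ME supplied by each feed source, by month (arbitrary units)
          "pasture":      [90, 80, 60, 40, 30, 25, 25, 30, 50, 80, 95, 100],
          "crop_stubble": [ 0,  0, 40, 50, 40, 20,  0,  0,  0,  0,  0,   0],
          "forage_crop":  [ 0,  0,  0,  0, 30, 50, 60, 50, 30,  0,  0,   0],
      }
      demand = [70] * 12              # livestock ME demand per month at a set stocking rate

      def feed_gaps(sources, demand):
          """Return the monthly shortfall when pooled supply is below demand."""
          total = [sum(s[m] for s in sources) for m in range(12)]
          return [max(0.0, d - s) for d, s in zip(demand, total)]

      only_pasture = feed_gaps([supply["pasture"]], demand)
      diverse = feed_gaps(list(supply.values()), demand)
      print("months with a feed gap, pasture only :", sum(g > 0 for g in only_pasture))
      print("months with a feed gap, diverse base :", sum(g > 0 for g in diverse))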

  3. WE-DE-201-08: Multi-Source Rotating Shield Brachytherapy Apparatus for Prostate Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dadkhah, H; Wu, X; Kim, Y

    Purpose: To introduce a novel multi-source rotating shield brachytherapy (RSBT) apparatus for the precise simultaneous angular and linear positioning of all partially-shielded 153Gd radiation sources in interstitial needles for treating prostate cancer. The mechanism is designed to lower the detrimental dose to healthy tissues, the urethra in particular, relative to conventional high-dose-rate brachytherapy (HDR-BT) techniques. Methods: Following needle implantation, the delivery system is docked to the patient template. Each needle is coupled to a multi-source afterloader catheter by a connector passing through a shaft. The shafts are rotated by translating a moving template between two stationary templates. Shaft walls as well as moving template holes are threaded such that the resistive friction produced between the two parts exerts enough force on the shafts to bring about the rotation. Rotation of the shaft is then transmitted to the shielded source via several keys. Thus, shaft angular position is fully correlated with the position of the moving template. The catheter angles are simultaneously incremented throughout treatment as needed, and only a single 360° rotation of all catheters is needed for a full treatment. For each rotation angle, source depth in each needle is controlled by a multi-source afterloader, which is proposed as an array of belt-driven linear actuators, each of which drives a source wire. Results: Optimized treatment plans based on Monte Carlo dose calculations demonstrated RSBT with the proposed apparatus reduced urethral D1cc below that of conventional HDR-BT by 35% for urethral dose gradient volume within 3 mm of the urethra surface. Treatment time to deliver 20 Gy with the multi-source RSBT apparatus using nineteen 62.4 GBq 153Gd sources is 117 min. Conclusions: The proposed RSBT delivery apparatus in conjunction with multiple nitinol catheter-mounted platinum-shielded 153Gd sources enables a mechanically feasible urethra-sparing treatment technique for prostate cancer in a clinically reasonable timeframe.

  4. Generation of Single Photons and Entangled Photon Pairs from a Quantum Dot

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y.; Pelton, M.; Santori, C.; Solomon, G. S.

    2002-10-01

    Current quantum cryptography systems are limited by the Poissonian photon statistics of a standard light source: a security loophole is opened up by the possibility of multiple-photon pulses. By replacing the source with a single-photon emitter, transmission rates of secure information can be improved. A single photon source is also essential to implement a linear optics quantum computer. We have investigated the use of single self-assembled InAs/GaAs quantum dots as such single-photon sources, and have seen a hundred-fold reduction in the multi-photon probability as compared to Poissonian pulses. An extension of our experiment should also allow for the generation of triggered, polarization-entangled photon pairs.

  5. 77 FR 58404 - Announcing the Award of Two Urgent Single-Source Grants To Support Unaccompanied Alien Children...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-20

    ...] Announcing the Award of Two Urgent Single-Source Grants To Support Unaccompanied Alien Children Program...) announces the award of two urgent single-source grants from the Unaccompanied Alien Children's Program to... providing services under the Unaccompanied Alien Children's program. Award Grantee organization Location...

  6. 75 FR 48691 - Single Source Cooperative Agreement Award for the World Health Organization (WHO) To Continue...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-11

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Single Source Cooperative Agreement Award for the World... Agreement Award for the World Health Organization (WHO) To Continue Development of Sustainable Influenza... September 29, 2013. In FY 2010, BARDA plans to provide a Single Source Continuation Award to the World...

  7. FRETBursts: An Open Source Toolkit for Analysis of Freely-Diffusing Single-Molecule FRET

    PubMed Central

    Lerner, Eitan; Chung, SangYoon; Weiss, Shimon; Michalet, Xavier

    2016-01-01

    Single-molecule Förster Resonance Energy Transfer (smFRET) allows probing intermolecular interactions and conformational changes in biomacromolecules, and represents an invaluable tool for studying cellular processes at the molecular scale. smFRET experiments can detect the distance between two fluorescent labels (donor and acceptor) in the 3-10 nm range. In the commonly employed confocal geometry, molecules are free to diffuse in solution. When a molecule traverses the excitation volume, it emits a burst of photons, which can be detected by single-photon avalanche diode (SPAD) detectors. The intensities of donor and acceptor fluorescence can then be related to the distance between the two fluorophores. While recent years have seen a growing number of contributions proposing improvements or new techniques in smFRET data analysis, rarely have those publications been accompanied by software implementations. In particular, despite the widespread application of smFRET, no complete software package for smFRET burst analysis is freely available to date. In this paper, we introduce FRETBursts, an open source software for analysis of freely-diffusing smFRET data. FRETBursts allows executing all the fundamental steps of smFRET burst analysis using state-of-the-art as well as novel techniques, while providing an open, robust and well-documented implementation. Therefore, FRETBursts represents an ideal platform for comparison and development of new methods in burst analysis. We employ modern software engineering principles in order to minimize bugs and facilitate long-term maintainability. Furthermore, we place a strong focus on reproducibility by relying on Jupyter notebooks for FRETBursts execution. Notebooks are executable documents capturing all the steps of the analysis (including data files, input parameters, and results) and can be easily shared to replicate complete smFRET analyses. Notebooks allow beginners to execute complex workflows and advanced users to customize the analysis for their own needs. By bundling analysis description, code and results in a single document, FRETBursts allows seamless sharing of analysis workflows and results, encourages reproducibility and facilitates collaboration among researchers in the single-molecule community. PMID:27532626
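
    To illustrate the kind of computation such burst analysis involves (this is not the FRETBursts API; the thresholds and photon data below are invented), a toy sliding-window burst search with a per-burst FRET proximity ratio might look like:

      # Toy burst search and per-burst FRET efficiency on synthetic photon data.
      import numpy as np

      rng = np.random.default_rng(0)
      # Synthetic photon arrival times (s) and detector channel (0 = donor, 1 = acceptor).
      timestamps = np.sort(rng.uniform(0, 1.0, size=5000))
      channel = rng.integers(0, 2, size=timestamps.size)

      def find_bursts(ts, m=10, max_span=2e-3):
          """Flag regions where m consecutive photons arrive within max_span seconds."""
          bursts, start = [], None
          dense = (ts[m - 1:] - ts[:-m + 1]) <= max_span
          for i, d in enumerate(dense):
              if d and start is None:
                  start = i
              elif not d and start is not None:
                  bursts.append((start, i + m - 2)); start = None
          if start is not None:
              bursts.append((start, len(ts) - 1))
          return bursts

      for i0, i1 in find_bursts(timestamps)[:5]:
          n_d = np.sum(channel[i0:i1 + 1] == 0)
          n_a = np.sum(channel[i0:i1 + 1] == 1)
          E = n_a / (n_a + n_d)             # uncorrected FRET efficiency (proximity ratio)
          print(f"burst {i0}-{i1}: {n_a + n_d} photons, E = {E:.2f}")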

  8. No difference in the competitive ability of introduced and native Trifolium provenances when grown with soil biota from their introduced and native ranges

    PubMed Central

    Shelby, Natasha; Hulme, Philip E.; van der Putten, Wim H.; McGinn, Kevin J.; Weser, Carolin; Duncan, Richard P.

    2016-01-01

    The evolution of increased competitive ability (EICA) hypothesis could explain why some introduced plant species perform better outside their native ranges. The EICA hypothesis proposes that introduced plants escape specialist pathogens or herbivores leading to selection for resources to be reallocated away from defence and towards greater competitive ability. We tested the hypothesis that escape from soil-borne enemies has led to increased competitive ability in three non-agricultural Trifolium (Fabaceae) species native to Europe that were introduced to New Zealand in the 19th century. Trifolium performance is intimately tied to rhizosphere biota. Thus, we grew plants from one introduced (New Zealand) and two native (Spain and the UK) provenances for each of three species in pots inoculated with soil microbiota collected from the rhizosphere beneath conspecifics in the introduced and native ranges. Plants were grown singly and in competition with conspecifics from a different provenance in order to compare competitive ability in the presence of different microbial communities. In contrast to the predictions of the EICA hypothesis, we found no difference in the competitive ability of introduced and native provenances when grown with soil microbiota from either the native or introduced range. Although plants from introduced provenances of two species grew more slowly than native provenances in native-range soils, as predicted by the EICA hypothesis, plants from the introduced provenance were no less competitive than native conspecifics. Overall, the growth rate of plants grown singly was a poor predictor of their competitive ability, highlighting the importance of directly quantifying plant performance in competitive scenarios, rather than relying on surrogate measures such as growth rate. PMID:26969431

  9. Binary encoding of multiplexed images in mixed noise.

    PubMed

    Lalush, David S

    2008-09-01

    Binary coding of multiplexed signals and images has been studied in the context of spectroscopy with models of either purely constant or purely proportional noise, and has been shown to result in improved noise performance under certain conditions. We consider the case of mixed noise in an imaging system consisting of multiple individually-controllable sources (X-ray or near-infrared, for example) shining on a single detector. We develop a mathematical model for the noise in such a system and show that the noise is dependent on the properties of the binary coding matrix and on the average number of sources used for each code. Each binary matrix has a characteristic linear relationship between the ratio of proportional-to-constant noise and the noise level in the decoded image. We introduce a criterion for noise level, which is minimized via a genetic algorithm search. The search procedure results in the discovery of matrices that outperform the Hadamard S-matrices at certain levels of mixed noise. Simulation of a seven-source radiography system demonstrates that the noise model predicts trends and rank order of performance in regions of nonuniform images and in a simple tomosynthesis reconstruction. We conclude that the model developed provides a simple framework for analysis, discovery, and optimization of binary coding patterns used in multiplexed imaging systems.
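
    For reference, the Hadamard S-matrices used as a baseline above can be built from Sylvester Hadamard matrices; the sketch below constructs one and reports the average number of sources enabled per code, one of the quantities the noise model depends on.

      # Construct a Hadamard S-matrix and inspect its row weights.
      import numpy as np
      from scipy.linalg import hadamard

      def s_matrix(n):
          """S-matrix of order n (n + 1 must be a power of two): drop the first row and
          column of the Sylvester Hadamard matrix and map +1 -> 0, -1 -> 1."""
          H = hadamard(n + 1)
          return ((1 - H[1:, 1:]) // 2).astype(int)

      S = s_matrix(7)                   # 7-source system, as in the radiography example
      print(S)
      print("sources on per code (row weights):", S.sum(axis=1))
      print("average sources on per code:", S.sum() / S.shape[0])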

  10. The Simple Concurrent Online Processing System (SCOPS) - An open-source interface for remotely sensed data processing

    NASA Astrophysics Data System (ADS)

    Warren, M. A.; Goult, S.; Clewley, D.

    2018-06-01

    Advances in technology allow remotely sensed data to be acquired with increasingly higher spatial and spectral resolutions. These data may then be used to influence government decision making and solve a number of research and application driven questions. However, such large volumes of data can be difficult to handle on a single personal computer or on older machines with slower components. Often the software required to process data is varied and can be highly technical and too advanced for the novice user to fully understand. This paper describes an open-source tool, the Simple Concurrent Online Processing System (SCOPS), which forms part of an airborne hyperspectral data processing chain that allows users accessing the tool over a web interface to submit jobs and process data remotely. It is demonstrated using Natural Environment Research Council Airborne Research Facility (NERC-ARF) instruments together with other free- and open-source tools to take radiometrically corrected data from sensor geometry into geocorrected form and to generate simple or complex band ratio products. The final processed data products are acquired via an HTTP download. SCOPS can cut data processing times and introduce complex processing software to novice users by distributing jobs across a network using a simple to use web interface.

  11. Development of a Novel Wireless Electric Power Transfer System for Space Applications

    NASA Technical Reports Server (NTRS)

    VazquezRamos, Gabriel; Yuan, Jiann-Shiun

    2011-01-01

    This paper will introduce a new implementation for wireless electric power transfer systems: space applications. Due to the risks that the use of electrical connectors poses for some space missions/applications, a simple wireless power system design approach will be evaluated as an alternative to electrical connectors. This approach takes into consideration the overall system performance by designing the magnetic resonance elements and by verifying the overall system electrical behavior. System characterization is accomplished by executing circuit and analytical simulations using the Matlab and LTSpice IV software packages. The design methodology was validated by two different experiments: frequency consideration (design of three magnetic elements) and a small-scale proof-of-concept prototype. Experimental results show successful wireless power transfer for all the cases studied. The proof-of-concept prototype provided approx. 4 W of wireless power to the load (light bulb) at a separation of 3 cm from the source. In addition, a resonant circuit was designed and installed on the battery terminals of a handheld radio without batteries, making it turn on at a separation of approx. 5 cm or less from the source. It was also demonstrated by prototype experimentation that multiple loads can be powered wirelessly at the same time with a single electric power source.

  12. Shape reconstruction of irregular bodies with multiple complementary data sources

    NASA Astrophysics Data System (ADS)

    Kaasalainen, M.; Viikinkoski, M.

    2012-07-01

    We discuss inversion methods for shape reconstruction with complementary data sources. The current main sources are photometry, adaptive optics or other images, occultation timings, and interferometry, and the procedure can readily be extended to include range-Doppler radar and thermal infrared data as well. We introduce the octantoid, a generally applicable shape support that can be automatically used for surface types encountered in planetary research, including strongly nonconvex or non-starlike shapes. We present models of Kleopatra and Hermione from multimodal data as examples of this approach. An important concept in this approach is the optimal weighting of the various data modes. We define the maximum compatibility estimate, a multimodal generalization of the maximum likelihood estimate, for this purpose. We also present a specific version of the procedure for asteroid flyby missions, with which one can reconstruct the complete shape of the target by using the flyby-based map of a part of the surface together with other available data. Finally, we show that the relative volume error of a shape solution is usually approximately equal to the relative shape error rather than its multiple. Our algorithms are trivially parallelizable, so running the code on a CUDA-enabled graphics processing unit is some two orders of magnitude faster than the usual single-processor mode.

  13. Nonlinear derating of high-intensity focused ultrasound beams using Gaussian modal sums.

    PubMed

    Dibaji, Seyed Ahmad Reza; Banerjee, Rupak K; Soneson, Joshua E; Myers, Matthew R

    2013-11-01

    A method is introduced for using measurements made in water of the nonlinear acoustic pressure field produced by a high-intensity focused ultrasound transducer to compute the acoustic pressure and temperature rise in a tissue medium. The acoustic pressure harmonics generated by nonlinear propagation are represented as a sum of modes having a Gaussian functional dependence in the radial direction. While the method is derived in the context of Gaussian beams, final results are applicable to general transducer profiles. The focal acoustic pressure is obtained by solving an evolution equation in the axial variable. The nonlinear term in the evolution equation for tissue is modeled using modal amplitudes measured in water and suitably reduced using a combination of "source derating" (experiments in water performed at a lower source acoustic pressure than in tissue) and "endpoint derating" (amplitudes reduced at the target location). Numerical experiments showed that, with proper combinations of source derating and endpoint derating, direct simulations of acoustic pressure and temperature in tissue could be reproduced by derating within 5% error. Advantages of the derating approach presented include applicability over a wide range of gains, ease of computation (a single numerical quadrature is required), and readily obtained temperature estimates from the water measurements.

  14. Fast method of cross-talk effect reduction in biomedical imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Nowakowski, Maciej; Kolenderska, Sylwia M.; Borycki, Dawid; Wojtkowski, Maciej

    2016-03-01

    Optical imaging of biological samples or living tissue structures requires light delivery to a region of interest and then collection of scattered or fluorescent light in order to reconstruct an image of the object. When coherent illumination light enters a bulky biological object, each scattering center (a single molecule, group of molecules or other sample feature) acts as a secondary light source. As a result, scattered spherical waves from these secondary sources interact with each other, generating cross-talk noise between optical channels (eigenmodes). The cross-talk effect has a serious impact on the performance of imaging systems. In particular, it reduces the ability of the optical system to transfer high spatial frequencies, thereby reducing its resolution. In this work we present a fast method to eliminate all unwanted wave combinations that overlap at the image plane and suppress the recovery of high spatial frequencies, by using spatio-temporal optical coherence manipulation (STOC, [1]). In this method a number of phase masks are introduced into the illuminating beam by a spatial light modulator within the time of a single image acquisition. We use a digital mirror device (DMD) for rapid cross-talk noise reduction (up to 22 kHz modulation frequency) when imaging living biological cells in vivo using a full-field microscopy setup with a double-pass arrangement. This, to our best knowledge, has never been shown before. [1] D. Borycki, M. Nowakowski, and M. Wojtkowski, Opt. Lett. 38, 4817 (2013).

  15. Generation of continuous-wave 194 nm laser for mercury ion optical frequency standard

    NASA Astrophysics Data System (ADS)

    Zou, Hongxin; Wu, Yue; Chen, Guozhu; Shen, Yong; Liu, Qu; Precision measurement; atomic clock Team

    2015-05-01

    A 194 nm continuous-wave (CW) laser is an essential part of a mercury ion optical frequency standard. Continuous-wave tunable radiation sources in the deep ultraviolet (DUV) region of the spectrum are also useful for high-resolution spectroscopy of many atomic and molecular lines. Here we introduce a scheme to generate continuous-wave 194 nm radiation by sum-frequency mixing (SFM) in a Beta Barium Borate (BBO) crystal. The two source beams are at 718 nm and 266 nm, respectively. Owing to the properties of BBO, critical phase matching (CPM) is implemented. A bow-tie cavity is used to resonantly enhance the 718 nm beam while the 266 nm beam makes a single pass, which makes the configuration easy to implement. To account for the walk-off effect in CPM, the cavity mode is designed to be elliptical so that the conversion efficiency is improved. Since the 266 nm radiation is generated from a 532 nm laser through second-harmonic generation (SHG) in a BBO crystal with a large walk-off angle, its output mode is quite non-Gaussian. To improve mode matching, we shaped the 266 nm beam into a Gaussian mode with a cylindrical lens and an iris diaphragm. As a result, 2.05 mW of 194 nm radiation can be generated. To our knowledge, this is the highest power reported for a 194 nm CW laser using SFM in BBO with only a single resonance. This work is supported by the National Natural Science Foundation of China (Grant No. 91436103 and No. 11204374).

  16. Indoor 3D Route Modeling Based On Estate Spatial Data

    NASA Astrophysics Data System (ADS)

    Zhang, H.; Wen, Y.; Jiang, J.; Huang, W.

    2014-04-01

    An indoor three-dimensional route model is essential for intelligent indoor navigation and emergency evacuation. This paper is motivated by the need to construct indoor route models automatically, as far as possible. By comparing existing building data sources, the paper first explains why estate spatial management data are chosen as the data source. Then, a practical method for constructing a three-dimensional route model of a building is introduced, based on establishing the mapping relationship between geographic entities and their topological expression. The data model is a weighted graph consisting of "nodes" and "paths" that expresses the spatial relationships and topological structure of a building's components. The whole process of modelling the internal space of a building is addressed in two key steps: (1) each single-floor route model is constructed, including path extraction for corridors using a Delaunay triangulation algorithm with constrained edges and fusion of room nodes into the path; (2) the single-floor route models are connected through stairs and elevators, and the multi-floor route model is eventually generated. In order to validate the method, a shopping mall called "Longjiang New City Plaza" in Nanjing is chosen as a case study, and the whole building space is modelled according to the method above. By integrating an existing path-finding algorithm, the usability of this modelling method is verified, which shows that the indoor three-dimensional route modelling method based on estate spatial data can support indoor route planning and evacuation route design very well.
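
    A minimal sketch of the node/path weighted-graph idea, with invented room, corridor and stair names (not the paper's actual data model), using networkx for the graph and the route query.

      # Multi-floor route graph: single-floor models joined by a stair edge.
      import networkx as nx

      G = nx.Graph()
      # Floor 1: corridor nodes extracted from the floor plan, rooms fused onto them.
      G.add_edge("F1_corridor_A", "F1_corridor_B", weight=12.0)
      G.add_edge("F1_room_101", "F1_corridor_A", weight=3.0)
      G.add_edge("F1_stair", "F1_corridor_B", weight=5.0)
      # Floor 2: same structure.
      G.add_edge("F2_corridor_A", "F2_corridor_B", weight=12.0)
      G.add_edge("F2_room_201", "F2_corridor_B", weight=4.0)
      G.add_edge("F2_stair", "F2_corridor_A", weight=5.0)
      # Vertical connection: the stair links the single-floor models into one network.
      G.add_edge("F1_stair", "F2_stair", weight=8.0)

      route = nx.shortest_path(G, "F1_room_101", "F2_room_201", weight="weight")
      print(" -> ".join(route))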

  17. Development of Theoretical and Computational Methods for Single-Source Bathymetric Data

    DTIC Science & Technology

    2016-09-15

    Grant N00014-16-1-2035. A method is outlined for fusing the information inherent in such source documents, at different scales, into a single picture for the marine ... algorithm reliability, which reflects the degree of inconsistency of the source documents, is also provided. A conceptual outline of the method, and a ...

  18. Introducing Aliphatic Substitution with a Discovery Experiment Using Competing Electrophiles

    ERIC Educational Resources Information Center

    Curran, Timothy P.; Mostovoy, Amelia J.; Curran, Margaret E.; Berger, Clara

    2016-01-01

    A facile, discovery-based experiment is described that introduces aliphatic substitution in an introductory undergraduate organic chemistry curriculum. Unlike other discovery-based experiments that examine substitution using two competing nucleophiles with a single electrophile, this experiment compares two isomeric, competing electrophiles…

  19. Boson Sampling with Single-Photon Fock States from a Bright Solid-State Source.

    PubMed

    Loredo, J C; Broome, M A; Hilaire, P; Gazzano, O; Sagnes, I; Lemaitre, A; Almeida, M P; Senellart, P; White, A G

    2017-03-31

    A boson-sampling device is a quantum machine expected to perform tasks intractable for a classical computer, yet requiring minimal nonclassical resources as compared to full-scale quantum computers. Photonic implementations to date employed sources based on inefficient processes that only simulate heralded single-photon statistics when strongly reducing emission probabilities. Boson sampling with only single-photon input has thus never been realized. Here, we report on a boson-sampling device operated with a bright solid-state source of single-photon Fock states with high photon-number purity: the emission from an efficient and deterministic quantum dot-micropillar system is demultiplexed into three partially indistinguishable single photons, with a single-photon purity 1-g^{(2)}(0) of 0.990±0.001, interfering in a linear optics network. Our demultiplexed source is between 1 and 2 orders of magnitude more efficient than current heralded multiphoton sources based on spontaneous parametric down-conversion, allowing us to complete the boson-sampling experiment faster than previous equivalent implementations.

  20. Room temperature single photon source using fiber-integrated hexagonal boron nitride

    NASA Astrophysics Data System (ADS)

    Vogl, Tobias; Lu, Yuerui; Lam, Ping Koy

    2017-07-01

    Single photons are a key resource for quantum optics and optical quantum information processing. The integration of scalable room temperature quantum emitters into photonic circuits remains a technical challenge. Here we utilize a defect center in hexagonal boron nitride (hBN), attached by van der Waals forces onto a multimode fiber, as a single photon source. We perform an optical characterization of the source in terms of spectrum, state lifetime, power saturation and photostability. A special feature of our source is that it allows for easy switching between fiber-coupled and free space single photon generation modes. In order to prove the quantum nature of the emission we measure the second-order correlation function g^{(2)}(τ). For both fiber-coupled and free space emission, g^{(2)}(τ) dips below 0.5, indicating operation in the single photon regime. The results so far demonstrate the feasibility of 2D material single photon sources for scalable photonic quantum information processing.

  1. Coordinated single-phase control scheme for voltage unbalance reduction in low voltage network.

    PubMed

    Pullaguram, Deepak; Mishra, Sukumar; Senroy, Nilanjan

    2017-08-13

    Low voltage (LV) distribution systems are typically unbalanced in nature due to unbalanced loading and unsymmetrical line configuration. This situation is further aggravated by single-phase power injections. A coordinated control scheme is proposed for single-phase sources, to reduce voltage unbalance. A consensus-based coordination is achieved using a multi-agent system, where each agent estimates the averaged global voltage and current magnitudes of individual phases in the LV network. These estimated values are used to modify the reference power of individual single-phase sources, to ensure system-wide balanced voltages and proper power sharing among sources connected to the same phase. Further, the high X/R ratio of the filter, used in the inverter of the single-phase source, enables control of reactive power, to minimize voltage unbalance locally. The proposed scheme is validated by simulating an LV distribution network with multiple single-phase sources subjected to various perturbations. This article is part of the themed issue 'Energy management: flexibility, risk and optimization'. © 2017 The Author(s).
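
    The consensus step can be pictured with a minimal sketch. The Python below is an illustration of linear consensus averaging among single-phase agents, not the published controller; the topology, gains, measurements and the proportional power-adjustment rule at the end are assumptions made for the example.

    ```python
    import numpy as np

    def consensus_average(v_local, adjacency, epsilon=0.1, iters=200):
        # Each agent repeatedly mixes its estimate with its neighbours':
        # x_i <- x_i + eps * sum_j a_ij (x_j - x_i); all entries converge to the average.
        x = np.asarray(v_local, dtype=float).copy()
        degree = adjacency.sum(axis=1)
        for _ in range(iters):
            x = x + epsilon * (adjacency @ x - degree * x)
        return x

    v_meas = [0.97, 1.02, 1.00]                      # per-unit voltage magnitudes on one phase
    A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # line-topology communication graph
    v_avg = consensus_average(v_meas, A)
    p_ref_adjust = 0.5 * (v_avg - np.array(v_meas))  # assumed proportional rebalancing rule
    ```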

  2. On-Chip Waveguide Coupling of a Layered Semiconductor Single-Photon Source.

    PubMed

    Tonndorf, Philipp; Del Pozo-Zamudio, Osvaldo; Gruhler, Nico; Kern, Johannes; Schmidt, Robert; Dmitriev, Alexander I; Bakhtinov, Anatoly P; Tartakovskii, Alexander I; Pernice, Wolfram; Michaelis de Vasconcellos, Steffen; Bratschitsch, Rudolf

    2017-09-13

    Fully integrated quantum technology based on photons is the focus of current research because of its immense potential concerning performance and scalability. Ideally, the single-photon sources, the processing units, and the photon detectors are all combined on a single chip. Impressive progress has been made for on-chip quantum circuits and on-chip single-photon detection. In contrast, nonclassical light is commonly coupled onto the photonic chip from the outside, because presently only a few integrated single-photon sources exist. Here, we present waveguide-coupled single-photon emitters in the layered semiconductor gallium selenide as promising on-chip sources. GaSe crystals with a thickness below 100 nm are placed on Si3N4 rib or slot waveguides, resulting in a modified mode structure efficient for light coupling. Using optical excitation from within the Si3N4 waveguide, we find nonclassicality of the generated photons routed on the photonic chip. Thus, our work provides an easy-to-implement and robust light source for integrated quantum technology.

  3. Propagation characteristics of audible noise generated by single corona source under positive DC voltage

    NASA Astrophysics Data System (ADS)

    Li, Xuebao; Cui, Xiang; Lu, Tiebing; Wang, Donglai

    2017-10-01

    The directivity and lateral profile of corona-generated audible noise (AN) from a single corona source are measured through experiments carried out in a semi-anechoic laboratory. The experimental results show that the waveform of corona-generated AN consists of a series of random sound pressure pulses whose amplitudes decrease with increasing measurement distance. A single corona source can be regarded as a non-directional AN source, and the A-weighted SPL (sound pressure level) decreases by 6 dB(A) for each doubling of the measurement distance. Qualitative explanations for the rationality of treating the single corona source as a point source are then given on the basis of Ingard's theory for sound generation in corona discharge. Furthermore, we take the ground reflection and the air attenuation into consideration to reconstruct the propagation features of AN from the single corona source. The calculated results agree well with the measurements, which validates the propagation model. Finally, the influence of the ground reflection on the SPL is presented in the paper.
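
    The 6 dB(A)-per-doubling behaviour is the textbook spherical-spreading relation for a point source; the short Python snippet below only restates that relation, with an optional linear air-absorption term, and uses illustrative numbers rather than the paper's measurements.

    ```python
    import math

    def spl_at_distance(spl_ref_dba, r_ref_m, r_m, alpha_db_per_m=0.0):
        # Spherical spreading (20*log10 term) plus an optional linear absorption term.
        return spl_ref_dba - 20.0 * math.log10(r_m / r_ref_m) - alpha_db_per_m * (r_m - r_ref_m)

    print(spl_at_distance(60.0, 2.0, 4.0))   # ~54.0 dB(A): 6 dB lower at twice the distance
    print(spl_at_distance(60.0, 2.0, 8.0))   # ~48.0 dB(A)
    ```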

  4. SU-G-201-03: Automation of High Dose Rate Brachytherapy Quality Assurance: Development of a Radioluminescent Detection System for Simultaneous Detection of Activity, Timing, and Positioning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jenkins, C; Xing, L; Fahimian, B

    Purpose: Accuracy of positioning, timing and activity is of critical importance for High Dose Rate (HDR) brachytherapy delivery. Respective measurements via film autoradiography, stop-watches and well chambers can be cumbersome, crude or lack dynamic source evaluation capabilities. To address such limitations, a single-device radioluminescent detection system enabling automated real-time quantification of activity, position and timing accuracy is presented and experimentally evaluated. Methods: A radioluminescent sheet was fabricated by mixing Gd2O2S:Tb with PDMS and incorporated into a 3D printed device where it was fixed below a CMOS digital camera. An Ir-192 HDR source (VS2000, VariSource iX) with an effective active length of 5 mm was introduced using a 17-gauge stainless steel needle below the sheet. Pixel intensity values for determining activity were taken from an ROI centered on the source location. A calibration curve relating intensity values to activity was generated and used to evaluate automated activity determination with data gathered over 6 weeks. Positioning measurements were performed by integrating images for an entire delivery and fitting peaks to the resulting profile. Timing measurements were performed by evaluating source location and timestamps from individual images. Results: Average predicted activity error over 6 weeks was 0.35 ± 0.5%. The distance between four dwell positions was determined by the automated system to be 1.99 ± 0.02 cm. The result from autoradiography was 2.00 ± 0.03 cm. The system achieved a time resolution of 10 msec and determined the dwell time to be 1.01 ± 0.02 sec. Conclusion: The system was able to successfully perform automated detection of activity, positioning and timing concurrently under a single setup. Relative to radiochromic and radiographic film-based autoradiography, which can only provide a static evaluation of positioning, optical detection of temporary radiation induced luminescence enables dynamic detection of position and automated quantification of timing with millisecond accuracy.
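
    To make the image-processing steps concrete, here is a minimal Python sketch of the kind of analysis described (summing frames into a profile, peak-fitting for dwell positions, and timestamp bookkeeping for dwell time). It is not the published system; function names, thresholds and tolerances are assumptions.

    ```python
    import numpy as np
    from scipy.signal import find_peaks

    def dwell_positions(frames, pixel_pitch_mm):
        """frames: (n_frames, n_pixels) background-subtracted intensities."""
        profile = np.asarray(frames).sum(axis=0)          # integrate the whole delivery
        peaks, _ = find_peaks(profile, height=0.5 * profile.max())
        return peaks * pixel_pitch_mm                      # dwell positions in mm

    def dwell_time(timestamps_s, tracked_positions_mm, target_mm, tol_mm=1.0):
        """Accumulate the time the tracked source spends within tol_mm of a dwell."""
        near = np.abs(np.asarray(tracked_positions_mm) - target_mm) < tol_mm
        return float(np.sum(np.diff(timestamps_s)[near[:-1]]))
    ```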

  5. 76 FR 78015 - Announcing the Award of a Single-Source Grant to Support Services for Haitian Medical Evacuees to...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-15

    ... Single-Source Grant to Support Services for Haitian Medical Evacuees to the Florida Department of...: Notice to award a single-source grant to support medical evacuees from the Haiti earthquake of 2010. CFDA... supportive social services to Haitian medical evacuees affected by the earthquake in 2010. The Haitian...

  6. 77 FR 65896 - Award of a Single-Source Replacement Grant to SOS Children's Villages Illinois in Chicago, IL

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-31

    ....623] Award of a Single-Source Replacement Grant to SOS Children's Villages Illinois in Chicago, IL... (FYSB) announces the award of a single-source replacement grant to SOS Children's Villages Illinois in... grant. ACYF/FYSB has designated SOS Children's Villages Illinois, a 501(c)(3) non-profit organization...

  7. Electrically driven polarized single-photon emission from an InGaN quantum dot in a GaN nanowire.

    PubMed

    Deshpande, Saniya; Heo, Junseok; Das, Ayan; Bhattacharya, Pallab

    2013-01-01

    In a classical light source, such as a laser, the photon number follows a Poissonian distribution. For quantum information processing and metrology applications, a non-classical emitter of single photons is required. A single quantum dot is an ideal source of single photons and such single-photon sources in the visible spectral range have been demonstrated with III-nitride and II-VI-based single quantum dots. It has been suggested that short-wavelength blue single-photon emitters would be useful for free-space quantum cryptography, with the availability of high-speed single-photon detectors in this spectral region. Here we demonstrate blue single-photon emission with electrical injection from an In0.25Ga0.75N quantum dot in a single nanowire. The emitted single photons are linearly polarized along the c axis of the nanowire with a degree of linear polarization of ~70%.

  8. A hierarchical preconditioner for the electric field integral equation on unstructured meshes based on primal and dual Haar bases

    NASA Astrophysics Data System (ADS)

    Adrian, S. B.; Andriulli, F. P.; Eibert, T. F.

    2017-02-01

    A new hierarchical basis preconditioner for the electric field integral equation (EFIE) operator is introduced. In contrast to existing hierarchical basis preconditioners, it works on arbitrary meshes and preconditions both the vector and the scalar potential within the EFIE operator. This is obtained by taking into account that the vector and the scalar potential discretized with loop-star basis functions are related to the hypersingular and the single layer operator (i.e., the well-known integral operators from acoustics). For the single layer operator discretized with piecewise constant functions, a hierarchical preconditioner can easily be constructed. Thus the strategy we propose in this work for preconditioning the EFIE is the transformation of the scalar and the vector potential into operators equivalent to the single layer operator and to its inverse. More specifically, when the scalar potential is discretized with star functions as source and testing functions, the resulting matrix is a single layer operator discretized with piecewise constant functions and multiplied left and right with two additional graph Laplacian matrices. By inverting these graph Laplacian matrices, the discretized single layer operator is obtained, which can be preconditioned with the hierarchical basis. Dually, when the vector potential is discretized with loop functions, the resulting matrix can be interpreted as a hypersingular operator discretized with piecewise linear functions. By leveraging a scalar Calderón identity, we can interpret this operator as spectrally equivalent to the inverse single layer operator. Then we use a linear-in-complexity, closed-form inverse of the dual hierarchical basis to precondition the hypersingular operator. The numerical results show the effectiveness of the proposed preconditioner and the practical impact of the theoretical developments in real-case scenarios.

  9. 40 CFR 421.195 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Indium Subcategory Pollutant or pollutant property Maximum for any 1 day Maximum for monthly average mg...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Indium... existing sources. The mass of wastewater pollutants in secondary indium process wastewater introduced into...

  10. 40 CFR 421.195 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Indium Subcategory Pollutant or pollutant property Maximum for any 1 day Maximum for monthly average mg...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Indium... existing sources. The mass of wastewater pollutants in secondary indium process wastewater introduced into...

  11. A Method for the Analysis of Information Use in Source-Based Writing

    ERIC Educational Resources Information Center

    Sormunen, Eero; Heinstrom, Jannica; Romu, Leena; Turunen, Risto

    2012-01-01

    Introduction: Past research on source-based writing assignments has hesitated to scrutinize how students actually use information afforded by sources. This paper introduces a method for the analysis of text transformations from sources to texts composed. The method is aimed to serve scholars in building a more detailed understanding of how…

  12. 40 CFR 421.46 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Primary Copper Smelting... wastewater pollutants in primary copper smelting process wastewater introduced into a POTW shall not exceed...

  13. 40 CFR 421.66 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Copper Subcategory... wastewater pollutants in secondary copper process wastewater introduced into a POTW shall not exceed the...

  14. 40 CFR 421.66 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Copper Subcategory... wastewater pollutants in secondary copper process wastewater introduced into a POTW shall not exceed the...

  15. 40 CFR 421.46 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Primary Copper Smelting... wastewater pollutants in primary copper smelting process wastewater introduced into a POTW shall not exceed...

  16. The new flora of the northeastern USA: quantifying introduced plant species occupancy in forest ecosystems

    Treesearch

    Bethany K. Schulz; Andrew N. Gray

    2013-01-01

    Introduced plant species have significant negative impacts in many ecosystems and are found in many forests around the world. Some factors linked to the distribution of introduced species include fragmentation and disturbance, native species richness, and climatic and physical conditions of the landscape. However, there are few data sources that enable the assessment...

  17. Multisource least-squares reverse-time migration with structure-oriented filtering

    NASA Astrophysics Data System (ADS)

    Fan, Jing-Wen; Li, Zhen-Chun; Zhang, Kai; Zhang, Min; Liu, Xue-Tong

    2016-09-01

    The technology of simultaneous-source acquisition, in which seismic data are excited by several sources at once, can significantly improve data collection efficiency. However, direct imaging of simultaneous-source (blended) data may introduce crosstalk noise and degrade the imaging quality. To address this problem, we introduce a structure-oriented filtering operator as a preconditioner into multisource least-squares reverse-time migration (LSRTM). The structure-oriented filtering operator is a nonstationary filter along structural trends that suppresses crosstalk noise while preserving structural information. The proposed method uses the conjugate-gradient method to minimize the mismatch between predicted and observed data, while effectively attenuating the interference noise caused by exciting several sources simultaneously. Numerical experiments using synthetic data suggest that the proposed method can suppress the crosstalk noise and produce highly accurate images.
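
    The role of the filter as a preconditioner inside a conjugate-gradient loop can be sketched as follows. This Python fragment is only a schematic preconditioned CGLS iteration: the Born modelling operator A, its adjoint At (migration) and the structure-oriented filter filt are placeholder callables assumed for illustration, not the paper's implementation.

    ```python
    import numpy as np

    def preconditioned_cgls(A, At, filt, d_obs, m0, n_iter=20):
        m = np.array(m0, dtype=float)
        r = d_obs - A(m)                 # data residual
        s = At(r)                        # gradient of the data misfit
        z = filt(s)                      # structure-oriented filtering as preconditioner
        p = z.copy()
        gamma = np.dot(s, z)
        for _ in range(n_iter):
            q = A(p)
            alpha = gamma / np.dot(q, q)
            m += alpha * p
            r -= alpha * q
            s = At(r)
            z = filt(s)
            gamma_new = np.dot(s, z)
            p = z + (gamma_new / gamma) * p
            gamma = gamma_new
        return m
    ```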

  18. On-chip low loss heralded source of pure single photons.

    PubMed

    Spring, Justin B; Salter, Patrick S; Metcalf, Benjamin J; Humphreys, Peter C; Moore, Merritt; Thomas-Peter, Nicholas; Barbieri, Marco; Jin, Xian-Min; Langford, Nathan K; Kolthammer, W Steven; Booth, Martin J; Walmsley, Ian A

    2013-06-03

    A key obstacle to the experimental realization of many photonic quantum-enhanced technologies is the lack of low-loss sources of single photons in pure quantum states. We demonstrate a promising solution: generation of heralded single photons in a silica photonic chip by spontaneous four-wave mixing. A heralding efficiency of 40%, corresponding to a preparation efficiency of 80% accounting for detector performance, is achieved due to efficient coupling of the low-loss source to optical fibers. A single photon purity of 0.86 is measured from the source number statistics without narrow spectral filtering, and confirmed by direct measurement of the joint spectral intensity. We calculate that similar high-heralded-purity output can be obtained from visible to telecom spectral regions using this approach. On-chip silica sources can have immediate application in a wide range of single-photon quantum optics applications which employ silica photonics.

  19. Missing data imputation: focusing on single imputation.

    PubMed

    Zhang, Zhongheng

    2016-01-01

    Complete case analysis is widely used for handling missing data, and it is the default method in many statistical packages. However, this method may introduce bias, and some useful information will be omitted from the analysis. Therefore, many imputation methods have been developed to fill the gap. The present article focuses on single imputation. Imputation with the mean, median or mode is simple but, like complete case analysis, can introduce bias in the mean and deviation. Furthermore, these approaches ignore relationships with other variables. Regression imputation can preserve relationships between missing values and other variables. Many sophisticated methods exist to handle missing values in longitudinal data. This article focuses primarily on how to implement R code to perform single imputation, while avoiding complex mathematical calculations.
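
    The article's examples are in R; the fragment below is a hedged Python analogue of two of the single-imputation strategies it discusses, mean imputation and regression imputation. The toy data frame and column names are invented for illustration.

    ```python
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"age": [25, 32, np.nan, 41, 29],
                       "bp":  [120, 130, 125, np.nan, 118]})

    # Mean imputation: simple, but shrinks the variance and ignores other variables.
    df_mean = df.fillna(df.mean(numeric_only=True))

    # Regression imputation: predict the missing 'bp' from 'age' using complete rows.
    complete = df.dropna()
    slope, intercept = np.polyfit(complete["age"], complete["bp"], 1)
    df_reg = df.copy()
    missing_bp = df_reg["bp"].isna()
    df_reg.loc[missing_bp, "bp"] = intercept + slope * df_reg.loc[missing_bp, "age"]
    ```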

  20. Missing data imputation: focusing on single imputation

    PubMed Central

    2016-01-01

    Complete case analysis is widely used for handling missing data, and it is the default method in many statistical packages. However, this method may introduce bias, and some useful information will be omitted from the analysis. Therefore, many imputation methods have been developed to fill the gap. The present article focuses on single imputation. Imputation with the mean, median or mode is simple but, like complete case analysis, can introduce bias in the mean and deviation. Furthermore, these approaches ignore relationships with other variables. Regression imputation can preserve relationships between missing values and other variables. Many sophisticated methods exist to handle missing values in longitudinal data. This article focuses primarily on how to implement R code to perform single imputation, while avoiding complex mathematical calculations. PMID:26855945

  1. Quantitative analysis of single-molecule force spectroscopy on folded chromatin fibers

    PubMed Central

    Meng, He; Andresen, Kurt; van Noort, John

    2015-01-01

    Single-molecule techniques allow for picoNewton manipulation and nanometer accuracy measurements of single chromatin fibers. However, the complexity of the data, the heterogeneity of the composition of individual fibers and the relatively large fluctuations in extension of the fibers complicate a structural interpretation of such force-extension curves. Here we introduce a statistical mechanics model that quantitatively describes the extension of individual fibers in response to force on a per nucleosome basis. Four nucleosome conformations can be distinguished when pulling a chromatin fiber apart. A novel, transient conformation is introduced that coexists with single wrapped nucleosomes between 3 and 7 pN. Comparison of force-extension curves between single nucleosomes and chromatin fibers shows that embedding nucleosomes in a fiber stabilizes the nucleosome by 10 kBT. Chromatin fibers with 20- and 50-bp linker DNA follow a different unfolding pathway. These results have implications for accessibility of DNA in fully folded and partially unwrapped chromatin fibers and are vital for understanding force unfolding experiments on nucleosome arrays. PMID:25779043

  2. 40 CFR 424.16 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... GUIDELINES AND STANDARDS FERROALLOY MANUFACTURING POINT SOURCE CATEGORY Open Electric Furnaces With Wet Air... subject to this subpart that introduces process wastewater pollutants into a publicly owned treatment...

  3. 40 CFR 424.16 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... GUIDELINES AND STANDARDS FERROALLOY MANUFACTURING POINT SOURCE CATEGORY Open Electric Furnaces With Wet Air... subject to this subpart that introduces process wastewater pollutants into a publicly owned treatment...

  4. 40 CFR 424.16 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... GUIDELINES AND STANDARDS FERROALLOY MANUFACTURING POINT SOURCE CATEGORY Open Electric Furnaces With Wet Air... subject to this subpart that introduces process wastewater pollutants into a publicly owned treatment...

  5. 40 CFR 424.16 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... GUIDELINES AND STANDARDS FERROALLOY MANUFACTURING POINT SOURCE CATEGORY Open Electric Furnaces With Wet Air... subject to this subpart that introduces process wastewater pollutants into a publicly owned treatment...

  6. 40 CFR 415.474 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... sulfate, nickel chloride, nickel nitrate, or nickel fluoborate which introduces pollutants into a publicly... for existing sources (PSES): Subpart AU—Nickel Sulfate, Nickel Chloride, Nickel Nitrate, Nickel...

  7. Novel directed search strategy to detect continuous gravitational waves from neutron stars in low- and high-eccentricity binary systems

    NASA Astrophysics Data System (ADS)

    Leaci, Paola; Astone, Pia; D'Antonio, Sabrina; Frasca, Sergio; Palomba, Cristiano; Piccinni, Ornella; Mastrogiovanni, Simone

    2017-06-01

    We describe a novel, very fast and robust, directed search incoherent method (which means that the phase information is lost) for periodic gravitational waves from neutron stars in binary systems. As a directed search, we assume the source sky position to be known with enough accuracy, but all other parameters (including orbital ones) are supposed to be unknown. We exploit the frequency modulation due to source orbital motion to unveil the signal signature by commencing from a collection of time and frequency peaks (the so-called "peakmap"). We validate our algorithm (pipeline), adding 131 artificial continuous-wave signals from pulsars in binary systems to simulated detector Gaussian noise, characterized by a noise spectral density S_h = 4×10^-24 Hz^-1/2 in the frequency interval [70, 200] Hz, which is overall commensurate with the advanced detector design sensitivities. The pipeline detected 128 signals, and the weakest signal injected (added) and detected has a gravitational-wave strain amplitude of ~10^-24, assuming one month of gapless data collected by a single advanced detector. We also provide sensitivity estimations, which show that, for single-detector data covering one month of observation time, depending on the source orbital Doppler modulation, we can detect signals with an amplitude of ~7×10^-25. By using three detectors, and one year of data, we would easily gain a factor 3 in sensitivity, translating into being able to detect weaker signals. We also discuss the parameter estimation proficiency of our method, as well as its computational budget: sifting one month of single-detector data over a 131 Hz-wide frequency range takes roughly 2.4 CPU hours. Hence, the current procedure can be readily applied in all-sky schemes, sieving in parallel as many sky positions as permitted by the available computational power. Finally, we introduce (ongoing and future) approaches to attain sensitivity improvements and better accuracy on parameter estimates in view of their use on real advanced detector data.

  8. Stochastic summation of empirical Green's functions

    USGS Publications Warehouse

    Wennerberg, Leif

    1990-01-01

    Two simple strategies are presented that use random delay times for repeatedly summing the record of a relatively small earthquake to simulate the effects of a larger earthquake. The simulations do not assume any fault plane geometry or rupture dynamics, but rely only on the ω^-2 spectral model of an earthquake source and elementary notions of source complexity. The strategies simulate ground motions for all frequencies within the bandwidth of the record of the event used as a summand. The first strategy, which introduces the basic ideas, is a single-stage procedure that consists of simply adding many small events with random time delays. The probability distribution for delays has the property that its amplitude spectrum is determined by the ratio of ω^-2 spectra, and its phase spectrum is identically zero. A simple expression is given for the computation of this zero-phase scaling distribution. The moment rate function resulting from the single-stage simulation is quite simple and hence is probably not realistic for high-frequency (>1 Hz) ground motion of events larger than M_L ~4.5 to 5. The second strategy is a two-stage summation that simulates source complexity with a few random subevent delays determined using the zero-phase scaling distribution, and then clusters energy around these delays to get an ω^-2 spectrum for the sum. Thus, the two-stage strategy allows simulations of complex events of any size for which the ω^-2 spectral model applies. Interestingly, a single-stage simulation with too few ω^-2 records to get a good fit to an ω^-2 large-event target spectrum yields a record whose spectral asymptotes are consistent with the ω^-2 model, but that includes a region in its spectrum between the corner frequencies of the larger and smaller events reasonably approximated by a power law trend. This spectral feature has also been discussed as reflecting the process of partial stress release (Brune, 1970), an asperity failure (Boatwright, 1984), or the breakdown of ω^-2 scaling due to rupture significantly longer than the width of the seismogenic zone (Joyner, 1984).
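
    The single-stage recipe lends itself to a compact sketch. The Python below is a simplified illustration under assumed Brune-spectrum parameters, not the published procedure: it builds a zero-phase delay density whose amplitude spectrum is the large-to-small spectral ratio, then stacks randomly delayed copies of the small-event record (in the full method the number of summands is tied to the moment ratio).

    ```python
    import numpy as np

    def brune(f, moment, fc):
        # omega-squared (Brune) source spectrum
        return moment / (1.0 + (f / fc) ** 2)

    def zero_phase_scaling_pdf(n, dt, m_small, fc_small, m_large, fc_large):
        """Delay-time density whose amplitude spectrum equals the spectral ratio and
        whose phase is identically zero (parameters here are illustrative)."""
        f = np.fft.rfftfreq(n, dt)
        ratio = brune(f, m_large, fc_large) / brune(f, m_small, fc_small)
        pdf = np.fft.fftshift(np.fft.irfft(ratio, n))
        pdf = np.clip(pdf, 0.0, None)          # keep it a valid probability density
        return pdf / pdf.sum()

    def stack_small_events(record, pdf, n_sum, rng=np.random.default_rng(0)):
        """Single-stage simulation: add n_sum randomly delayed copies of the record."""
        out = np.zeros(len(record) + len(pdf))
        for d in rng.choice(len(pdf), size=n_sum, p=pdf):
            out[d:d + len(record)] += record
        return out
    ```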

  9. 78 FR 27240 - Announcing the Award of a New Single-Source Award to the National Council on Family Violence in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-09

    ....095] Announcing the Award of a New Single-Source Award to the National Council on Family Violence in... single-source cooperative agreement to the National Council on Family Violence to support the National...), Administration on Children, Youth and Families (ACYF), Family and Youth Services Bureau (FYSB), Division of...

  10. High-Performance Single-Photon Sources via Spatial Multiplexing

    DTIC Science & Technology

    2014-01-01

    Single-photon sources are desired for many potential quantum information applications and are an ingredient for tasks such as quantum cryptography, quantum repeaters, quantum teleportation, quantum computing, and truly random number generation. One common method to produce single photons is based on a "heralding ...

  11. Methods for forming particles from single source precursors

    DOEpatents

    Fox, Robert V [Idaho Falls, ID; Rodriguez, Rene G [Pocatello, ID; Pak, Joshua [Pocatello, ID

    2011-08-23

    Single source precursors are subjected to carbon dioxide to form particles of material. The carbon dioxide may be in a supercritical state. Single source precursors also may be subjected to supercritical fluids other than supercritical carbon dioxide to form particles of material. The methods may be used to form nanoparticles. In some embodiments, the methods are used to form chalcopyrite materials. Devices such as, for example, semiconductor devices may be fabricated that include such particles. Methods of forming semiconductor devices include subjecting single source precursors to carbon dioxide to form particles of semiconductor material, and establishing electrical contact between the particles and an electrode.

  12. 40 CFR 415.366 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS INORGANIC CHEMICALS MANUFACTURING POINT SOURCE CATEGORY Copper Salts... CFR 403.7, any new source subject to this subpart and producing copper sulfate, copper chloride, copper iodide, or copper nitrate which introduces pollutants into a publicly owned treatment works must...

  13. 40 CFR 415.366 - Pretreatment standards for new sources (PSNS).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS INORGANIC CHEMICALS MANUFACTURING POINT SOURCE CATEGORY Copper Salts... CFR 403.7, any new source subject to this subpart and producing copper sulfate, copper chloride, copper iodide, or copper nitrate which introduces pollutants into a publicly owned treatment works must...

  14. 40 CFR 421.85 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Primary Zinc... existing sources. The mass of wastewater pollutants in primary zinc process wastewater introduced into a POTW shall not exceed the following values: (a) Subpart H—Zinc Reduction Furnace Wet Air Pollution...

  15. Synthesis and Characterization of a Novel Borazine-Type UV Photo-Induced Polymerization of Ceramic Precursors.

    PubMed

    Wei, Dan; Chen, Lixin; Xu, Tingting; He, Weiqi; Wang, Yi

    2016-06-21

    A preceramic polymer of B,B',B''-(dimethyl)ethyl-acrylate-silyloxyethyl-borazine was synthesized in three steps from a molecular single-source precursor and characterized by Fourier transform infrared (FTIR) and nuclear magnetic resonance (NMR) spectrometry. Six-membered borazine rings and acrylate groups were effectively introduced into the preceramic polymer to activate UV photo-induced polymerization. Photo-differential scanning calorimetry (photo-DSC) and real-time FTIR techniques were used to investigate the photo-polymerization process. The results revealed that the borazine derivative exhibited dramatic activity under UV polymerization, with the double-bond conversion reaching a maximum in 40 s. Furthermore, the properties of the pyrogenetic products were studied by scanning electron microscopy (SEM) and X-ray diffraction (XRD), which showed that the ceramic annealed at 1100 °C retained an amorphous phase.

  16. Electro-optic spatial decoding on the spherical-wavefront Coulomb fields of plasma electron sources.

    PubMed

    Huang, K; Esirkepov, T; Koga, J K; Kotaki, H; Mori, M; Hayashi, Y; Nakanii, N; Bulanov, S V; Kando, M

    2018-02-13

    Detection of the pulse duration and arrival timing of relativistic electron beams is an important issue in accelerator physics. Electro-optic diagnostics based on the Coulomb fields of electron beams have the advantages of being single-shot and non-destructive. We present a study introducing the electro-optic spatial decoding technique to laser wakefield acceleration. By placing an electro-optic crystal very close to a gas target, we discovered that the Coulomb field of the electron beam possessed a spherical wavefront and was inconsistent with the previously widely used model. The field structure was demonstrated by experimental measurement, analytic calculations and simulations. A general temporal mapping relationship was derived for a geometry in which the signals have spherical wavefronts. This study could be helpful for the application of electro-optic diagnostics in laser plasma acceleration experiments.

  17. A split-step method to include electron–electron collisions via Monte Carlo in multiple rate equation simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel

    2016-10-01

    A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single photon absorption by free carriers. The second step is stochastic and models electron–electron collisions using Monte-Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
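
    The overall structure can be pictured as a Strang-style split-step loop. The Python below shows only that structure under assumed interfaces (the two step functions are placeholders), not the published C++/Fortran code.

    ```python
    # Alternate a deterministic half step, a stochastic collision step, and a second
    # deterministic half step, repeated over the simulation window.
    def split_step(state, dt, n_steps, deterministic_step, monte_carlo_collisions):
        for _ in range(n_steps):
            state = deterministic_step(state, 0.5 * dt)   # ionization, e-ph, 1-photon absorption
            state = monte_carlo_collisions(state, dt)     # stochastic e-e scattering
            state = deterministic_step(state, 0.5 * dt)
        return state
    ```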

  18. Ultrahigh-speed non-invasive widefield angiography

    NASA Astrophysics Data System (ADS)

    Blatter, Cedric; Klein, Thomas; Grajciar, Branislav; Schmoll, Tilman; Wieser, Wolfgang; Andre, Raphael; Huber, Robert; Leitgeb, Rainer A.

    2012-07-01

    Retinal and choroidal vascular imaging provides an important diagnostic benefit for ocular diseases such as age-related macular degeneration. The current gold standard for vessel visualization is fluorescence angiography. We present a potential non-invasive alternative for imaging blood vessels based on functional Fourier domain optical coherence tomography (OCT). For OCT to compete with the field of view and resolution of angiography while keeping motion artifacts to a minimum, ultrahigh-speed imaging has to be introduced. We employ Fourier domain mode locking swept source technology that offers high quality imaging at an A-scan rate of up to 1.68 MHz. We present a retinal angiogram over ~48 deg acquired in a few seconds in a single recording, without the need for image stitching. OCT at 1060 nm allows for high penetration into the choroid and efficient separate characterization of the retinal and choroidal vascularization.

  19. Blind Channel Equalization with Colored Source Based on Constrained Optimization Methods

    NASA Astrophysics Data System (ADS)

    Wang, Yunhua; DeBrunner, Linda; DeBrunner, Victor; Zhou, Dayong

    2008-12-01

    Tsatsanis and Xu have applied the constrained minimum output variance (CMOV) principle to directly blind equalize a linear channel—a technique that has proven effective with white inputs. It is generally assumed in the literature that their CMOV method can also effectively equalize a linear channel with a colored source. In this paper, we prove that colored inputs will cause the equalizer to converge incorrectly due to inadequate constraints. We also introduce a new blind channel equalizer algorithm that is based on the CMOV principle, but with a different constraint that correctly handles colored sources. Our proposed algorithm works for channels with either white or colored inputs and performs equivalently to the trained minimum mean-square error (MMSE) equalizer under high SNR. Thus, our proposed algorithm may be regarded as an extension of the CMOV algorithm proposed by Tsatsanis and Xu. We also introduce several methods to improve the performance of the proposed algorithm under low SNR conditions. Simulation results show the superior performance of our proposed methods.
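
    To fix ideas, a generic constrained minimum-output-variance problem can be written as below. The notation is assumed here for orientation only; the particular constraint vector is precisely what differs between the original CMOV method and the modification proposed for colored sources.

    ```latex
    % Generic constrained minimum-output-variance formulation (illustrative notation):
    \[
      \min_{\mathbf{w}} \; \mathbf{w}^{H}\mathbf{R}_x\mathbf{w}
      \quad \text{s.t.} \quad \mathbf{c}^{H}\mathbf{w} = 1,
      \qquad
      \mathbf{R}_x = \mathrm{E}\!\left[\mathbf{x}(n)\,\mathbf{x}^{H}(n)\right],
      \quad y(n) = \mathbf{w}^{H}\mathbf{x}(n),
    \]
    % where w holds the equalizer taps, x(n) is the received-sample vector and y(n)
    % the equalizer output whose variance is minimized subject to the constraint.
    ```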

  20. 40 CFR 415.364 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... sulfate, copper chloride, copper iodide, or copper nitrate which introduces pollutants into a publicly... for existing sources (PSES): Subpart AJ—Copper Sulfate, Copper Chloride, Copper Iodide, Copper Nitrate...

  1. 40 CFR 415.364 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... sulfate, copper chloride, copper iodide, or copper nitrate which introduces pollutants into a publicly... for existing sources (PSES): Subpart AJ—Copper Sulfate, Copper Chloride, Copper Iodide, Copper Nitrate...

  2. 40 CFR 415.364 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... sulfate, copper chloride, copper iodide, or copper nitrate which introduces pollutants into a publicly... for existing sources (PSES): Subpart AJ—Copper Sulfate, Copper Chloride, Copper Iodide, Copper Nitrate...

  3. 40 CFR 415.364 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... sulfate, copper chloride, copper iodide, or copper nitrate which introduces pollutants into a publicly... for existing sources (PSES): Subpart AJ—Copper Sulfate, Copper Chloride, Copper Iodide, Copper Nitrate...

  4. Single photon sources with single semiconductor quantum dots

    NASA Astrophysics Data System (ADS)

    Shan, Guang-Cun; Yin, Zhang-Qi; Shek, Chan Hung; Huang, Wei

    2014-04-01

    In this contribution, we briefly recall the basic concepts of quantum optics and properties of semiconductor quantum dot (QD) which are necessary to the understanding of the physics of single-photon generation with single QDs. Firstly, we address the theory of quantum emitter-cavity system, the fluorescence and optical properties of semiconductor QDs, and the photon statistics as well as optical properties of the QDs. We then review the localization of single semiconductor QDs in quantum confined optical microcavity systems to achieve their overall optical properties and performances in terms of strong coupling regime, efficiency, directionality, and polarization control. Furthermore, we will discuss the recent progress on the fabrication of single photon sources, and various approaches for embedding single QDs into microcavities or photonic crystal nanocavities and show how to extend the wavelength range. We focus in particular on new generations of electrically driven QD single photon source leading to high repetition rates, strong coupling regime, and high collection efficiencies at elevated temperature operation. Besides, new developments of room temperature single photon emission in the strong coupling regime are reviewed. The generation of indistinguishable photons and remaining challenges for practical single-photon sources are also discussed.

  5. Seismic Analysis Code (SAC): Development, porting, and maintenance within a legacy code base

    NASA Astrophysics Data System (ADS)

    Savage, B.; Snoke, J. A.

    2017-12-01

    The Seismic Analysis Code (SAC) is the result of the toil of many developers over an almost 40-year history. Initially a Fortran-based code, it has undergone major transitions in underlying bit size from 16 to 32 in the 1980s, and 32 to 64 in 2009, as well as a change in language from Fortran to C in the late 1990s. Maintenance of SAC, the program and its associated libraries, has tracked changes in hardware and operating systems, including the advent of Linux in the early 1990s, the emergence and demise of Sun/Solaris, variants of OS X processors (PowerPC and x86), and Windows (Cygwin). Traces of these systems are still visible in the source code and associated comments. A major concern while improving and maintaining a routinely used, legacy code is a fear of introducing bugs or inadvertently removing favorite features of long-time users. Prior to 2004, SAC was maintained and distributed by LLNL (Lawrence Livermore National Lab). In that year, the license was transferred from LLNL to IRIS (Incorporated Research Institutions for Seismology), but the license is not open source. However, there have been thousands of downloads a year of the package, either source code or binaries for specific systems. Starting in 2004, the co-authors have maintained the SAC package for IRIS. In our updates, we fixed bugs, incorporated newly introduced seismic analysis procedures (such as EVALRESP), added new, accessible features (plotting and parsing), and improved the documentation (now in HTML and PDF formats). Moreover, we have added modern software engineering practices to the development of SAC, including use of recent source control systems, high-level tests, and scripted, virtualized environments for rapid testing and building. Finally, a "sac-help" listserv (administered by IRIS) was set up for SAC-related issues and is the primary avenue for users seeking advice and reporting bugs. Attempts are always made to respond to issues and bugs in a timely fashion. For the past thirty-plus years, SAC files have contained a fixed-length header. Time- and distance-related values are stored in single precision, which has become a problem with the increase in desired precision for data compared to thirty years ago. A future goal is to address this precision problem, but in a backward-compatible manner. We would also like to transition SAC to a more open source license.

  6. Assessing Model Characterization of Single Source ...

    EPA Pesticide Factsheets

    Aircraft measurements made downwind from specific coal fired power plants during the 2013 Southeast Nexus field campaign provide a unique opportunity to evaluate single source photochemical model predictions of both O3 and secondary PM2.5 species. The model did well at predicting downwind plume placement. The model shows similar patterns of an increasing fraction of PM2.5 sulfate ion to the sum of SO2 and PM2.5 sulfate ion by distance from the source compared with ambient based estimates. The model was less consistent in capturing downwind ambient based trends in conversion of NOX to NOY from these sources. Source sensitivity approaches capture near-source O3 titration by fresh NO emissions, in particular subgrid plume treatment. However, capturing this near-source chemical feature did not translate into better downwind peak estimates of single source O3 impacts. The model estimated O3 production from these sources but often was lower than ambient based source production. The downwind transect ambient measurements, in particular secondary PM2.5 and O3, have some level of contribution from other sources which makes direct comparison with model source contribution challenging. Model source attribution results suggest contribution to secondary pollutants from multiple sources even where primary pollutants indicate the presence of a single source. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, deci

  7. A generalization of the double-corner-frequency source spectral model and its use in the SCEC BBP validation exercise

    USGS Publications Warehouse

    Boore, David M.; Di Alessandro, Carola; Abrahamson, Norman A.

    2014-01-01

    The stochastic method of simulating ground motions requires the specification of the shape and scaling with magnitude of the source spectrum. The spectral models commonly used are either single-corner-frequency or double-corner-frequency models, but the latter have no flexibility to vary the high-frequency spectral levels for a specified seismic moment. Two generalized double-corner-frequency ω^2 source spectral models are introduced, one in which two spectra are multiplied together, and another where they are added. Both models have a low-frequency dependence controlled by the seismic moment, and a high-frequency spectral level controlled by the seismic moment and a stress parameter. A wide range of spectral shapes can be obtained from these generalized spectral models, which makes them suitable for inversions of data to obtain spectral models that can be used in ground-motion simulations in situations where adequate data are not available for purely empirical determinations of ground motions, as in stable continental regions. As an example of the use of the generalized source spectral models, data from up to 40 stations from seven events, plus response spectra at two distances and two magnitudes from recent ground-motion prediction equations, were inverted to obtain the parameters controlling the spectral shapes, as well as a finite-fault factor that is used in point-source, stochastic-method simulations of ground motion. The fits to the data are comparable to or even better than those from finite-fault simulations, even for sites close to large earthquakes.
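
    For orientation, a single-corner Brune spectrum and one common multiplicative double-corner shape are written below in generic notation. These are illustrative forms only, not the paper's exact generalized parameterizations, which add parameters controlling the high-frequency level.

    ```latex
    % Illustrative omega-squared source-spectrum shapes (notation assumed here):
    \[
      M_{\mathrm{single}}(f) \;=\; \frac{M_0}{1+(f/f_c)^2},
      \qquad
      M_{\mathrm{double}}(f) \;=\;
        \frac{M_0}{\bigl[1+(f/f_a)^2\bigr]^{1/2}\bigl[1+(f/f_b)^2\bigr]^{1/2}} ,
    \]
    % both of which keep the f^{-2} high-frequency asymptote of the omega-squared model.
    ```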

  8. psiTurk: An open-source framework for conducting replicable behavioral experiments online.

    PubMed

    Gureckis, Todd M; Martin, Jay; McDonnell, John; Rich, Alexander S; Markant, Doug; Coenen, Anna; Halpern, David; Hamrick, Jessica B; Chan, Patricia

    2016-09-01

    Online data collection has begun to revolutionize the behavioral sciences. However, conducting carefully controlled behavioral experiments online introduces a number of new technical and scientific challenges. The project described in this paper, psiTurk, is an open-source platform which helps researchers develop experiment designs that can be conducted over the Internet. The tool primarily interfaces with Amazon's Mechanical Turk, a popular crowd-sourcing labor market. This paper describes the basic architecture of the system and introduces new users to the overall goals. psiTurk aims to reduce the technical hurdles for researchers developing online experiments while improving the transparency and collaborative nature of the behavioral sciences.

  9. Singles correlation energy contributions in solids

    NASA Astrophysics Data System (ADS)

    Klimeš, Jiří; Kaltak, Merzuk; Maggio, Emanuele; Kresse, Georg

    2015-09-01

    The random phase approximation to the correlation energy often yields highly accurate results for condensed matter systems. However, ways to improve its accuracy are being sought, and here we explore the relevance of singles contributions for prototypical solid state systems. We set out with a derivation of the random phase approximation using the adiabatic connection and fluctuation dissipation theorem, but contrary to the most commonly used derivation, the density is allowed to vary along the coupling constant integral. This yields results closely paralleling standard perturbation theory. We re-derive the standard singles of Görling-Levy perturbation theory [A. Görling and M. Levy, Phys. Rev. A 50, 196 (1994)], highlight the analogy of our expression to the renormalized singles introduced by Ren and coworkers [Phys. Rev. Lett. 106, 153003 (2011)], and introduce a new approximation for the singles using the density matrix in the random phase approximation. We discuss the physical relevance and importance of singles alongside illustrative examples of simple weakly bonded systems, including rare gas solids (Ne, Ar, Xe), ice, adsorption of water on NaCl, and solid benzene. The effect of singles on covalently and metallically bonded systems is also discussed.

  10. Accuracy of Dual-Energy Virtual Monochromatic CT Numbers: Comparison between the Single-Source Projection-Based and Dual-Source Image-Based Methods.

    PubMed

    Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko

    2018-03-21

    The aim was to investigate the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue-equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield units (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  11. Invariant models in the inversion of gravity and magnetic fields and their derivatives

    NASA Astrophysics Data System (ADS)

    Ialongo, Simone; Fedi, Maurizio; Florio, Giovanni

    2014-11-01

    In potential field inversion problems we usually solve underdetermined systems, and realistic solutions may be obtained by introducing a depth-weighting function in the objective function. The choice of the exponent of such a power law is crucial. It has been suggested to determine it from the field decay due to a single source block; alternatively it has been defined as the structural index of the investigated source distribution. In both cases, when k-order derivatives of the potential field are considered, the depth-weighting exponent has to be increased by k with respect to that of the potential field itself, in order to obtain consistent source model distributions. We show instead that invariant and realistic source-distribution models are obtained using the same depth-weighting exponent for the magnetic field and for its k-order derivatives. A similar behavior also occurs in the gravity case. In practice we found that the depth-weighting exponent is invariant for a given source model and equal to that of the corresponding magnetic field, in the magnetic case, and of the 1st derivative of the gravity field, in the gravity case. In the case of the regularized inverse problem, with depth weighting and general constraints, the mathematical demonstration of such invariance is difficult, because of its non-linearity and of its variable form, due to the different constraints used. However, tests performed on a variety of synthetic cases seem to confirm the invariance of the depth-weighting exponent. A final consideration regards the role of the regularization parameter; we show that the regularization can severely affect the depth to the source, because the estimated depth tends to increase proportionally with the size of the regularization parameter. Hence, some care is needed in handling the combined effect of the regularization parameter and depth weighting.
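
    For reference, depth weighting typically enters the regularized objective in the standard form shown below. The notation is assumed here only to fix ideas; β is the depth-weighting exponent discussed in the abstract and z0 a small offset.

    ```latex
    % Depth-weighted, regularized least-squares objective (Li-and-Oldenburg-style
    % notation, given for orientation; lambda is the regularization parameter):
    \[
      \Phi(\mathbf{m}) \;=\; \lVert \mathbf{d} - \mathbf{G}\mathbf{m} \rVert_2^2
        \;+\; \lambda \,\lVert \mathbf{W}_z \mathbf{m} \rVert_2^2,
      \qquad
      \mathbf{W}_z \;=\; \operatorname{diag}\!\left[(z_j + z_0)^{-\beta/2}\right].
    \]
    ```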

  12. Quantification of source impact to PM using three-dimensional weighted factor model analysis on multi-site data

    NASA Astrophysics Data System (ADS)

    Shi, Guoliang; Peng, Xing; Huangfu, Yanqi; Wang, Wei; Xu, Jiao; Tian, Yingze; Feng, Yinchang; Ivey, Cesunica E.; Russell, Armistead G.

    2017-07-01

    Source apportionment technologies are used to understand the impacts of important sources of particulate matter (PM) on air quality, and are widely used for both scientific studies and air quality management. Generally, receptor models apportion speciated PM data from a single sampling site. With the development of large-scale monitoring networks, PM speciation is observed at multiple sites in an urban area. For these situations, the models should account for three factors, or dimensions, of the PM data: the chemical species concentrations, the sampling periods and the sampling sites, suggesting the potential power of a three-dimensional source apportionment approach. However, the three-dimensional parallel factor analysis (ordinary PARAFAC) model does not always work well in real environmental situations for multi-site receptor datasets. In this work, a new three-way receptor model, called the "multi-site three-way factor analysis" model, is proposed to deal with multi-site receptor datasets. Synthetic datasets were developed and introduced into the new model to test its performance. The average absolute error (AAE, between estimated and true contributions) for the extracted sources was less than 50% in all cases. Additionally, three-dimensional ambient datasets from a Chinese mega-city, Chengdu, were analyzed using the new model to assess its application. Four factors are extracted by the multi-site WFA3 model: secondary sources have the highest contributions (64.73 and 56.24 μg/m3), followed by vehicular exhaust (30.13 and 33.60 μg/m3), crustal dust (26.12 and 29.99 μg/m3) and coal combustion (10.73 and 14.83 μg/m3). The model was also compared to PMF, with general agreement, though PMF suggested a lower crustal contribution.
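
    The ordinary PARAFAC model referenced above is the trilinear decomposition written below in standard notation (symbols assumed here for orientation); the proposed multi-site weighted model builds on this basic form.

    ```latex
    % Trilinear (ordinary PARAFAC) decomposition of a three-way receptor dataset:
    \[
      x_{ijk} \;=\; \sum_{p=1}^{P} a_{ip}\, b_{jp}\, c_{kp} \;+\; e_{ijk},
    \]
    % where i indexes sampling periods, j chemical species, k sites, P is the number
    % of factors, and e_{ijk} is the residual term.
    ```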

  13. Using a pseudo-dynamic source inversion approach to improve earthquake source imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Song, S. G.; Dalguer, L. A.; Clinton, J. F.

    2014-12-01

    Imaging a high-resolution spatio-temporal slip distribution of an earthquake rupture is a core research goal in seismology. In general we expect to obtain a higher quality source image by improving the observational input data (e.g., using more, higher quality near-source stations). However, recent studies show that increasing the surface station density alone does not significantly improve source inversion results (Custodio et al. 2005; Zhang et al. 2014). We introduce correlation structures between the kinematic source parameters: slip, rupture velocity, and peak slip velocity (Song et al. 2009; Song and Dalguer 2013) in the non-linear source inversion. The correlation structures are physical constraints derived from rupture dynamics that effectively regularize the model space and may improve source imaging. We name this approach pseudo-dynamic source inversion. We investigate the effectiveness of this pseudo-dynamic source inversion method by inverting low frequency velocity waveforms from a synthetic dynamic rupture model of a buried vertical strike-slip event (Mw 6.5) in a homogeneous half space. In the inversion, we use a genetic algorithm in a Bayesian framework (Moneli et al. 2008), and a dynamically consistent regularized Yoffe function (Tinti et al. 2005) is used for a single-window slip velocity function. We search for local rupture velocity directly in the inversion, and calculate the rupture time using a ray-tracing technique. We implement both auto- and cross-correlation of slip, rupture velocity, and peak slip velocity in the prior distribution. Our results suggest that kinematic source model estimates capture the major features of the target dynamic model. The estimated rupture velocity closely matches the target distribution from the dynamic rupture model, and the derived rupture time is smoother than the one we searched directly. By implementing both auto- and cross-correlation of kinematic source parameters, in comparison to traditional smoothing constraints, we are in effect regularizing the model space in a more physics-based manner without losing resolution of the source image. Further investigation is needed to tune the related parameters of pseudo-dynamic source inversion and the relative weighting between the prior and the likelihood function in the Bayesian inversion.

  14. 40 CFR 421.295 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Tin... existing sources. The mass of wastewater pollutants in secondary tin process wastewater introduced into a POTW must not exceed the following values: (a) Tin smelter SO2 scrubber. PSES for the Secondary Tin...

  15. 40 CFR 421.295 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Tin... existing sources. The mass of wastewater pollutants in secondary tin process wastewater introduced into a POTW must not exceed the following values: (a) Tin smelter SO2 scrubber. PSES for the Secondary Tin...

  16. 40 CFR 421.295 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Tin... existing sources. The mass of wastewater pollutants in secondary tin process wastewater introduced into a POTW must not exceed the following values: (a) Tin smelter SO2 scrubber. PSES for the Secondary Tin...

  17. 40 CFR 421.295 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Tin... existing sources. The mass of wastewater pollutants in secondary tin process wastewater introduced into a POTW must not exceed the following values: (a) Tin smelter SO2 scrubber. PSES for the Secondary Tin...

  18. 40 CFR 421.295 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) EFFLUENT GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Tin... existing sources. The mass of wastewater pollutants in secondary tin process wastewater introduced into a POTW must not exceed the following values: (a) Tin smelter SO2 scrubber. PSES for the Secondary Tin...

  19. A Conceptual Framework for Primary Source Practices

    ERIC Educational Resources Information Center

    Ensminger, David C.; Fry, Michelle L.

    2012-01-01

    This article introduces a descriptive conceptual framework to provide teachers with a means of recognizing and describing instructional activities that use primary sources. The framework provides structure for professional development programs that have been established to train teachers to access and integrate primary sources into lessons. The…

  20. Interpreting Conjoined Noun Phrases and Conjoined Clauses: Collective vs. Distributive Preferences

    PubMed Central

    Clifton, Charles; Frazier, Lyn

    2012-01-01

    Two experiments are reported that show that introducing event participants in a conjoined noun phrase (NP) favors a single event (collective) interpretation while introducing them in separate clauses favors a separate events (distributive) interpretation. In Experiment 1, acceptability judgments were speeded when the bias of a predicate toward separate events vs. a single event matched the presumed bias of how the subjects’ referents were introduced (as conjoined noun phrases or in conjoined clauses). In Experiment 2, reading of a phrase containing an anaphor following conjoined noun phrases was facilitated when the anaphor was they, relative to when it was neither/each of them; the opposite pattern was found when the anaphor followed conjoined clauses. We argue that comprehension was facilitated when the form of an anaphor was appropriate for how its antecedents were introduced. These results address the very general problem of how we individuate entities and events when presented with a complex situation, and show that different linguistic forms can guide how we construe a situation. The results also indicate that there is no general penalty for introducing the entities or events separately – in distinct clauses as ‘split’ antecedents. PMID:22512324

  1. Behavior of a Single Langmuir Probe in a Magnetic Field.

    ERIC Educational Resources Information Center

    Pytlinski, J. T.; And Others

    1978-01-01

    Describes an experiment to demonstrate the influence of a magnetic field on the behavior of a single Langmuir probe. The experiment introduces the student to magnetically supported plasma and particle behavior in a magnetic field. (GA)

  2. Correction for the detector-dead-time effect on the second-order correlation of stationary sub-Poissonian light in a two-detector configuration

    NASA Astrophysics Data System (ADS)

    Ann, Byoung-moo; Song, Younghoon; Kim, Junki; Yang, Daeho; An, Kyungwon

    2015-08-01

    Exact measurement of the second-order correlation function g(2)(t) of a light source is essential when investigating the photon statistics and the light generation process of the source. For a stationary single-mode light source, the Mandel Q factor is directly related to g(2)(0). For a large mean photon number in the mode, the deviation of g(2)(0) from unity is so small that even a tiny error in measuring g(2)(0) would result in an inaccurate Mandel Q. In this work, we address the detector-dead-time effect on g(2)(0) of stationary sub-Poissonian light. It is then found that detector dead time can induce a serious error in g(2)(0) and thus in Mandel Q in those cases even in a two-detector configuration. Utilizing the cavity-QED microlaser, a well-established sub-Poissonian light source, we measured g(2)(0) with two different types of photodetectors with different dead times. We also introduced prolonged dead time by intentionally deleting the photodetection events following a preceding one within a specified time interval. We found that the observed Q of the cavity-QED microlaser was underestimated by 19% with respect to the dead-time-free Q when its mean photon number was about 600. We derived an analytic formula which well explains the behavior of the g(2)(0) as a function of the dead time.
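
    The "prolonged dead time" procedure described above, deleting detection events that follow a preceding event within a specified interval, can be emulated on recorded photon timestamps as in the sketch below. The timestamp format and function name are illustrative assumptions, not the authors' analysis code.

    ```python
    import numpy as np

    def apply_dead_time(timestamps, dead_time):
        """Keep only events separated from the last accepted event by at least
        `dead_time`, emulating a non-paralyzable detector dead time.

        timestamps : sorted 1-D array of photodetection times for one detector
        """
        kept = []
        last = -np.inf
        for t in timestamps:
            if t - last >= dead_time:
                kept.append(t)
                last = t
        return np.asarray(kept)

    # Example: artificially prolong a 50 ns dead time on simulated timestamps
    rng = np.random.default_rng(0)
    stamps = np.sort(rng.uniform(0.0, 1e-3, 100_000))   # seconds
    pruned = apply_dead_time(stamps, 50e-9)
    print(f"fraction of events removed: {1 - len(pruned) / len(stamps):.3f}")
    ```

    Running the same correlation analysis on the pruned and unpruned streams exposes how strongly the inferred g(2)(0), and hence the Mandel Q, depends on the assumed dead time.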

  3. Anomalous single-electron transfer in common-gate quadruple-dot single-electron devices with asymmetric junction capacitances

    NASA Astrophysics Data System (ADS)

    Imai, Shigeru; Ito, Masato

    2018-06-01

    In this paper, anomalous single-electron transfer in common-gate quadruple-dot turnstile devices with asymmetric junction capacitances is revealed. That is, the islands have the same total number of excess electrons at high and low gate voltages of the swing that transfers a single electron. In another situation, two electrons enter the islands from the source and two electrons leave the islands for the source and drain during a gate voltage swing cycle. First, stability diagrams of the turnstile devices are presented. Then, sequences of single-electron tunneling events by gate voltage swings are investigated, which demonstrate the above-mentioned anomalous single-electron transfer between the source and the drain. The anomalous single-electron transfer can be understood by regarding the four islands as “three virtual islands and a virtual source or drain electrode of a virtual triple-dot device”. The anomalous behaviors of the four islands are explained by the normal behavior of the virtual islands transferring a single electron and the behavior of the virtual electrode.

  4. 40 CFR 419.15 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... source subject to this subpart which introduces pollutants into a publicly owned treatment works must...). The following standards apply to the total refinery flow contribution to the POTW: Pollutant or...

  5. 40 CFR 419.45 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... source subject to this subpart which introduces pollutants into a publicly owned treatment works must...). The following standards apply to the total refinery flow contribution to the POTW: Pollutant or...

  6. 40 CFR 419.55 - Pretreatment standards for existing sources (PSES).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... source subject to this subpart which introduces pollutants into a publicly owned treatment works must... following standards apply to the total refinery flow contribution to the POTW: Pollutant or pollutant...

  7. Choosing the Best Method to Introduce Accounting.

    ERIC Educational Resources Information Center

    Guerrieri, Donald J.

    1988-01-01

    Of the traditional approaches to teaching accounting--single entry, journal, "T" account, balance sheet, and accounting equation--the author recommends the accounting equation approach. It is the foundation of the double entry system, new material is easy to introduce, and it provides students with a rationale for understanding basic concepts.…

  8. Generating single microwave photons in a circuit.

    PubMed

    Houck, A A; Schuster, D I; Gambetta, J M; Schreier, J A; Johnson, B R; Chow, J M; Frunzio, L; Majer, J; Devoret, M H; Girvin, S M; Schoelkopf, R J

    2007-09-20

    Microwaves have widespread use in classical communication technologies, from long-distance broadcasts to short-distance signals within a computer chip. Like all forms of light, microwaves, even those guided by the wires of an integrated circuit, consist of discrete photons. To enable quantum communication between distant parts of a quantum computer, the signals must also be quantum, consisting of single photons, for example. However, conventional sources can generate only classical light, not single photons. One way to realize a single-photon source is to collect the fluorescence of a single atom. Early experiments measured the quantum nature of continuous radiation, and further advances allowed triggered sources of photons on demand. To allow efficient photon collection, emitters are typically placed inside optical or microwave cavities, but these sources are difficult to employ for quantum communication on wires within an integrated circuit. Here we demonstrate an on-chip, on-demand single-photon source, where the microwave photons are injected into a wire with high efficiency and spectral purity. This is accomplished in a circuit quantum electrodynamics architecture, with a microwave transmission line cavity that enhances the spontaneous emission of a single superconducting qubit. When the qubit spontaneously emits, the generated photon acts as a flying qubit, transmitting the quantum information across a chip. We perform tomography of both the qubit and the emitted photons, clearly showing that both the quantum phase and amplitude are transferred during the emission. Both the average power and voltage of the photon source are characterized to verify performance of the system. This single-photon source is an important addition to a rapidly growing toolbox for quantum optics on a chip.

  9. Single-Sex School Boys' Perceptions of Coeducational Classroom Learning Environments

    ERIC Educational Resources Information Center

    Yates, Shirley M.

    2011-01-01

    Reviews in many countries have found little evidence of consistent advantages in either single-sex education or coeducation. Over the last three decades, coeducation has been introduced into many single-sex schools, but there is a dearth of evidence from the student perspective of the impact of such changes on the classroom learning environment.…

  10. Single Event Effects mitigation with TMRG tool

    NASA Astrophysics Data System (ADS)

    Kulis, S.

    2017-01-01

    Single Event Effects (SEEs) are a major concern for integrated circuits exposed to radiation. Several techniques have been proposed to protect circuits against radiation-induced upsets; among them, the Triple Modular Redundancy (TMR) technique is one of the most popular. The purpose of the Triple Modular Redundancy Generator (TMRG) tool is to automate the process of triplicating digital circuits, freeing the designer from introducing the TMR code manually at the implementation stage. It helps to ensure that triplicated logic is maintained through the design process. Finally, the tool streamlines the process of introducing SEEs into gate-level simulations for final verification.
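
    At its core, triple modular redundancy reduces to a bitwise 2-of-3 majority vote over three copies of the same logic; the toy Python sketch below illustrates the idea and a single-bit upset in one copy. It is purely conceptual and is not code generated by, or taken from, the TMRG tool.

    ```python
    def majority_vote(a: int, b: int, c: int) -> int:
        """Bitwise 2-of-3 majority over three redundant copies of a word."""
        return (a & b) | (a & c) | (b & c)

    def flip_bit(word: int, bit: int) -> int:
        """Model a single-event upset by flipping one bit of one copy."""
        return word ^ (1 << bit)

    original = 0b1011_0101
    corrupted = flip_bit(original, 3)                    # SEU in one redundant copy
    assert majority_vote(original, original, corrupted) == original
    ```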

  11. 40 CFR 421.95 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Plants Subcategory § 421.95 Pretreatment standards for existing sources. Except as provided in 40 CFR 403... standards for existing sources. The mass of wastewater pollutants in metallurgical acid plant blowdown introduced into a POTW shall not exceed the following values: Subpart I—Metallurgical Acid Plant—PSES...

  12. 40 CFR 421.95 - Pretreatment standards for existing sources.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Plants Subcategory § 421.95 Pretreatment standards for existing sources. Except as provided in 40 CFR 403... standards for existing sources. The mass of wastewater pollutants in metallurgical acid plant blowdown introduced into a POTW shall not exceed the following values: Subpart I—Metallurgical Acid Plant—PSES...

  13. An Exact Form of Lilley's Equation with a Velocity Quadrupole/Temperature Dipole Source Term

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2001-01-01

    There have been several attempts to introduce approximations into the exact form of Lilley's equation in order to express the source term as the sum of a quadrupole whose strength is quadratic in the fluctuating velocities and a dipole whose strength is proportional to the temperature fluctuations. The purpose of this note is to show that it is possible to choose the dependent (i.e., the pressure) variable so that this type of result can be derived directly from the Euler equations without introducing any additional approximations.

  14. The development of the time-keeping clock with TS-1 single chip microcomputer.

    NASA Astrophysics Data System (ADS)

    Zhou, Jiguang; Li, Yongan

    The authors have developed a time-keeping clock with Intel 8751 single chip microcomputer that has been successfully used in time-keeping station. The hard-soft ware design and performance of the clock are introduced.

  15. Isolation of anti-toxin single domain antibodies from a semi-synthetic spiny dogfish shark display library.

    PubMed

    Liu, Jinny L; Anderson, George P; Goldman, Ellen R

    2007-11-19

    Shark heavy chain antibody, also called new antigen receptor (NAR), consists of one single Variable domain (VH), containing only two complementarity-determining regions (CDRs). The antigen binding affinity and specificity are mainly determined by these two CDRs. The good solubility, excellent thermal stability and complex sequence variation of small single domain antibodies (sdAbs) make them attractive alternatives to conventional antibodies. In this report, we construct and characterize a diversity enhanced semi-synthetic NAR V display library based on naturally occurring NAR V sequences. A semi-synthetic shark sdAb display library with a complexity close to 1e9 was constructed. This was achieved by introducing size and sequence variations in CDR3 using randomized CDR3 primers of three different lengths. Binders against three toxins, staphylococcal enterotoxin B (SEB), ricin, and botulinum toxin A (BoNT/A) complex toxoid, were isolated from panning the display library. Soluble sdAbs from selected binders were purified and evaluated using direct binding and thermal stability assays on the Luminex 100. In addition, sandwich assays using sdAb as the reporter element were developed to demonstrate their utility for future sensor applications. We demonstrated the utility of a newly created hyper diversified shark NAR displayed library to serve as a source of thermal stable sdAbs against a variety of toxins.

  16. Single-shot imaging with higher-dimensional encoding using magnetic field monitoring and concomitant field correction.

    PubMed

    Testud, Frederik; Gallichan, Daniel; Layton, Kelvin J; Barmet, Christoph; Welz, Anna M; Dewdney, Andrew; Cocosco, Chris A; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim

    2015-03-01

    PatLoc (Parallel Imaging Technique using Localized Gradients) accelerates imaging and introduces a resolution variation across the field-of-view. Higher-dimensional encoding employs more spatial encoding magnetic fields (SEMs) than the corresponding image dimensionality requires, e.g. by applying two quadratic and two linear spatial encoding magnetic fields to reconstruct a 2D image. Images acquired with higher-dimensional single-shot trajectories can exhibit strong artifacts and geometric distortions. In this work, the source of these artifacts is analyzed and a reliable correction strategy is derived. A dynamic field camera was built for encoding field calibration. Concomitant fields of linear and nonlinear spatial encoding magnetic fields were analyzed. A combined basis consisting of spherical harmonics and concomitant terms was proposed and used for encoding field calibration and image reconstruction. A good agreement between the analytical solution for the concomitant fields and the magnetic field simulations of the custom-built PatLoc SEM coil was observed. Substantial image quality improvements were obtained using a dynamic field camera for encoding field calibration combined with the proposed combined basis. The importance of trajectory calibration for single-shot higher-dimensional encoding is demonstrated using the combined basis including spherical harmonics and concomitant terms, which treats the concomitant fields as an integral part of the encoding. © 2014 Wiley Periodicals, Inc.

  17. Isolation of anti-toxin single domain antibodies from a semi-synthetic spiny dogfish shark display library

    PubMed Central

    Liu, Jinny L; Anderson, George P; Goldman, Ellen R

    2007-01-01

    Background: Shark heavy chain antibody, also called new antigen receptor (NAR), consists of one single Variable domain (VH), containing only two complementarity-determining regions (CDRs). The antigen binding affinity and specificity are mainly determined by these two CDRs. The good solubility, excellent thermal stability and complex sequence variation of small single domain antibodies (sdAbs) make them attractive alternatives to conventional antibodies. In this report, we construct and characterize a diversity enhanced semi-synthetic NAR V display library based on naturally occurring NAR V sequences. Results: A semi-synthetic shark sdAb display library with a complexity close to 1e9 was constructed. This was achieved by introducing size and sequence variations in CDR3 using randomized CDR3 primers of three different lengths. Binders against three toxins, staphylococcal enterotoxin B (SEB), ricin, and botulinum toxin A (BoNT/A) complex toxoid, were isolated from panning the display library. Soluble sdAbs from selected binders were purified and evaluated using direct binding and thermal stability assays on the Luminex 100. In addition, sandwich assays using sdAb as the reporter element were developed to demonstrate their utility for future sensor applications. Conclusion: We demonstrated the utility of a newly created hyper diversified shark NAR displayed library to serve as a source of thermal stable sdAbs against a variety of toxins. PMID:18021450

  18. Sensitivity improvement of one-shot Fourier spectroscopic imager for realization of noninvasive blood glucose sensors in smartphones

    NASA Astrophysics Data System (ADS)

    Kawashima, Natsumi; Nogo, Kosuke; Hosono, Satsuki; Nishiyama, Akira; Wada, Kenji; Ishimaru, Ichiro

    2016-11-01

    The use of the wide-field-stop and beam-expansion method for sensitivity enhancement of one-shot Fourier spectroscopy is proposed to realize health care sensors installed in smartphones for daily monitoring. When measuring the spectral components of human bodies noninvasively, diffuse reflected light from biological membranes is too weak for detection using conventional hyperspectral cameras. One-shot Fourier spectroscopy is a spatial phase-shift-type interferometer that can determine the one-dimensional spectral characteristics from a single frame. However, this method has low sensitivity, so that only the spectral characteristics of light sources with direct illumination can be obtained, because a single slit is used as a field stop. The sensitivity of the proposed spectroscopic method is improved by using the wide-field-stop and beam-expansion method. The use of a wider field stop slit width increases the detected light intensity; however, this simultaneously narrows the diffraction angle. The narrower collimated objective beam diameter degrades the visibility of interferograms. Therefore, a plane-concave cylindrical lens between the objective plane and the single slit is introduced to expand the beam diameter. The resulting sensitivity improvement achieved when using the wide-field-stop and beam-expansion method allows the spectral characteristics of hemoglobin to be obtained noninvasively from a human palm using a midget lamp.

  19. What are single photons good for?

    NASA Astrophysics Data System (ADS)

    Sangouard, Nicolas; Zbinden, Hugo

    2012-10-01

    Photons are widely presumed to play a central role in present-day quantum technologies. But what exactly are sources that produce photons one by one good for? Contrary to what many suggest, we show that single-photon sources are not helpful for point-to-point quantum key distribution, because faint laser pulses do the job comfortably. However, there is no doubt about the usefulness of sources producing single photons for future quantum technologies. In particular, we show how single-photon sources could become the seed of a revolution in quantum communication, making the security of quantum key distribution device-independent or extending quantum communication over many hundreds of kilometers. Hopefully, these promising applications will provide a guideline for researchers to develop ever more efficient sources, producing narrowband, pure and indistinguishable photons at appropriate wavelengths.

  20. Dynamic radioactive particle source

    DOEpatents

    Moore, Murray E; Gauss, Adam Benjamin; Justus, Alan Lawrence

    2012-06-26

    A method and apparatus for providing a timed, synchronized dynamic alpha or beta particle source for testing the response of continuous air monitors (CAMs) for airborne alpha or beta emitters is provided. The method includes providing a radioactive source; placing the radioactive source inside the detection volume of a CAM; and introducing an alpha or beta-emitting isotope while the CAM is in a normal functioning mode.

  1. Molybdenum enhanced low-temperature deposition of crystalline silicon nitride

    DOEpatents

    Lowden, Richard A.

    1994-01-01

    A process for chemical vapor deposition of crystalline silicon nitride which comprises the steps of: introducing a mixture of a silicon source, a molybdenum source, a nitrogen source, and a hydrogen source into a vessel containing a suitable substrate; and thermally decomposing the mixture to deposit onto the substrate a coating comprising crystalline silicon nitride containing a dispersion of molybdenum silicide.

  2. THE EFFECT OF UNRESOLVED BINARIES ON GLOBULAR CLUSTER PROPER-MOTION DISPERSION PROFILES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bianchini, P.; Norris, M. A.; Ven, G. van de

    2016-03-20

    High-precision kinematic studies of globular clusters (GCs) require an accurate knowledge of all possible sources of contamination. Among other sources, binary stars can introduce systematic biases in the kinematics. Using a set of Monte Carlo cluster simulations with different concentrations and binary fractions, we investigate the effect of unresolved binaries on proper-motion dispersion profiles, treating the simulations like Hubble Space Telescope proper-motion samples. Since GCs evolve toward a state of partial energy equipartition, more-massive stars lose energy and decrease their velocity dispersion. As a consequence, on average, binaries have a lower velocity dispersion, since they are more-massive kinematic tracers. We show that, in the case of clusters with high binary fractions (initial binary fractions of 50%) and high concentrations (i.e., closer to energy equipartition), unresolved binaries introduce a color-dependent bias in the velocity dispersion of main-sequence stars of the order of 0.1–0.3 km s⁻¹ (corresponding to 1%−6% of the velocity dispersion), with the reddest stars having a lower velocity dispersion, due to the higher fraction of contaminating binaries. This bias depends on the ability to distinguish binaries from single stars, on the details of the color–magnitude diagram and the photometric errors. We apply our analysis to the HSTPROMO data set of NGC 7078 (M15) and show that no effect ascribable to binaries is observed, consistent with the low binary fraction of the cluster. Our work indicates that binaries do not significantly bias proper-motion velocity-dispersion profiles, but should be taken into account in the error budget of kinematic analyses.
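
    The direction of the reported bias follows from mixing two tracer populations with different dispersions; the toy Monte Carlo below makes this explicit, using the partial-equipartition scaling sigma ~ m**(-eta/2). All numbers are chosen for illustration only and are not the simulation parameters used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def observed_dispersion(sigma_single, binary_fraction, mass_ratio, eta, n=200_000):
        """1-D velocity dispersion measured from a mix of single stars and
        unresolved binaries under partial energy equipartition."""
        sigma_binary = sigma_single * mass_ratio ** (-eta / 2)  # heavier tracers move more slowly
        n_bin = int(binary_fraction * n)
        v = np.concatenate([rng.normal(0.0, sigma_single, n - n_bin),
                            rng.normal(0.0, sigma_binary, n_bin)])
        return v.std()

    # e.g. a 10 km/s single-star dispersion, 30% contaminating binaries that are
    # on average 1.5x more massive, and eta = 0.2
    print(observed_dispersion(10.0, 0.30, 1.5, 0.2))
    ```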

  3. Transient Expression of CRISPR/Cas9 Machinery Targeting TcNPR3 Enhances Defense Response in Theobroma cacao.

    PubMed

    Fister, Andrew S; Landherr, Lena; Maximova, Siela N; Guiltinan, Mark J

    2018-01-01

    Theobroma cacao, the source of cocoa, suffers significant losses to a variety of pathogens resulting in reduced incomes for millions of farmers in developing countries. Development of disease resistant cacao varieties is an essential strategy to combat this threat, but is limited by sources of genetic resistance and the slow generation time of this tropical tree crop. In this study, we present the first application of genome editing technology in cacao, using Agrobacterium-mediated transient transformation to introduce CRISPR/Cas9 components into cacao leaves and cotyledon cells. As a first proof of concept, we targeted the cacao Non-Expressor of Pathogenesis-Related 3 (TcNPR3) gene, a suppressor of the defense response. After demonstrating activity of designed single-guide RNAs (sgRNA) in vitro, we used Agrobacterium to introduce a CRISPR/Cas9 system into leaf tissue, and identified the presence of deletions in 27% of TcNPR3 copies in the treated tissues. The edited tissue exhibited an increased resistance to infection with the cacao pathogen Phytophthora tropicalis and elevated expression of downstream defense genes. Analysis of off-target mutagenesis in sequences similar to sgRNA target sites using high-throughput sequencing did not reveal mutations above background sequencing error rates. These results confirm the function of NPR3 as a repressor of the cacao immune system and demonstrate the application of CRISPR/Cas9 as a powerful functional genomics tool for cacao. Several stably transformed and genome edited somatic embryos were obtained via Agrobacterium-mediated transformation, and ongoing work will test the effectiveness of this approach at a whole plant level.

  4. Transient Expression of CRISPR/Cas9 Machinery Targeting TcNPR3 Enhances Defense Response in Theobroma cacao

    PubMed Central

    Fister, Andrew S.; Landherr, Lena; Maximova, Siela N.; Guiltinan, Mark J.

    2018-01-01

    Theobroma cacao, the source of cocoa, suffers significant losses to a variety of pathogens resulting in reduced incomes for millions of farmers in developing countries. Development of disease resistant cacao varieties is an essential strategy to combat this threat, but is limited by sources of genetic resistance and the slow generation time of this tropical tree crop. In this study, we present the first application of genome editing technology in cacao, using Agrobacterium-mediated transient transformation to introduce CRISPR/Cas9 components into cacao leaves and cotyledon cells. As a first proof of concept, we targeted the cacao Non-Expressor of Pathogenesis-Related 3 (TcNPR3) gene, a suppressor of the defense response. After demonstrating activity of designed single-guide RNAs (sgRNA) in vitro, we used Agrobacterium to introduce a CRISPR/Cas9 system into leaf tissue, and identified the presence of deletions in 27% of TcNPR3 copies in the treated tissues. The edited tissue exhibited an increased resistance to infection with the cacao pathogen Phytophthora tropicalis and elevated expression of downstream defense genes. Analysis of off-target mutagenesis in sequences similar to sgRNA target sites using high-throughput sequencing did not reveal mutations above background sequencing error rates. These results confirm the function of NPR3 as a repressor of the cacao immune system and demonstrate the application of CRISPR/Cas9 as a powerful functional genomics tool for cacao. Several stably transformed and genome edited somatic embryos were obtained via Agrobacterium-mediated transformation, and ongoing work will test the effectiveness of this approach at a whole plant level. PMID:29552023

  5. Improvement of a wind-tunnel sampling system for odour and VOCs.

    PubMed

    Wang, X; Jiang, J; Kaye, R

    2001-01-01

    Wind-tunnel systems are widely used for collecting odour emission samples from surface area sources. Consequently, a portable wind-tunnel system was developed at the University of New South Wales that was easy to handle and suitable for sampling from liquid surfaces. Development work was undertaken to ensure even air-flows above the emitting surface and to optimise air velocities to simulate real situations. However, recovery efficiencies for emissions have not previously been studied for wind-tunnel systems. A series of experiments was carried out for determining and improving the recovery rate of the wind-tunnel sampling system by using carbon monoxide as a tracer gas. It was observed by mass balance that carbon monoxide recovery rates were initially only 37% to 48% from a simulated surface area emission source. It was therefore apparent that further development work was required to improve recovery efficiencies. By analysing the aerodynamic character of air movement and CO transportation inside the wind-tunnel, it was determined that the apparent poor recoveries resulted from uneven mixing at the sample collection point. A number of modifications were made for the mixing chamber of the wind-tunnel system. A special sampling chamber extension and a sampling manifold with optimally distributed sampling orifices were developed for the wind-tunnel sampling system. The simulation experiments were repeated with the new sampling system. Over a series of experiments, the recovery efficiency of sampling was improved to 83-100% with an average of 90%, where the CO tracer gas was introduced at a single point and 92-102% with an average of 97%, where the CO tracer gas was introduced along a line transverse to the sweep air. The stability and accuracy of the new system were determined statistically and are reported.
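
    The mass-balance recovery calculation behind these percentages can be written out in a few lines. The variable names, the steady-state assumption and the well-mixed outlet sample are illustrative simplifications, not the authors' exact procedure.

    ```python
    def recovery_rate(c_out_ppm, c_in_ppm, sweep_flow_m3_s,
                      tracer_release_g_s, tracer_density_g_m3=1165.0):
        """Fraction of released CO tracer recovered at the wind-tunnel outlet.

        Assumes steady state and a well-mixed outlet sample:
        recovered mass flow = (C_out - C_in) * Q * rho_CO
        (rho_CO defaults to roughly 1165 g/m3 at about 20 degC and 1 atm).
        """
        delta_c = (c_out_ppm - c_in_ppm) * 1e-6          # ppm(v) -> volume fraction
        recovered_g_s = delta_c * sweep_flow_m3_s * tracer_density_g_m3
        return recovered_g_s / tracer_release_g_s

    # Example: 12 ppm above background at 0.01 m3/s sweep air, 0.16 mg/s released
    print(recovery_rate(12.0, 0.0, 0.01, 0.00016))
    ```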

  6. Advanced processing and simulation of MRS data using the FID appliance (FID-A)-An open source, MATLAB-based toolkit.

    PubMed

    Simpson, Robin; Devenyi, Gabriel A; Jezzard, Peter; Hennessy, T Jay; Near, Jamie

    2017-01-01

    To introduce a new toolkit for simulation and processing of magnetic resonance spectroscopy (MRS) data, and to demonstrate some of its novel features. The FID appliance (FID-A) is an open-source, MATLAB-based software toolkit for simulation and processing of MRS data. The software is designed specifically for processing data with multiple dimensions (eg, multiple radiofrequency channels, averages, spectral editing dimensions). It is equipped with functions for importing data in the formats of most major MRI vendors (eg, Siemens, Philips, GE, Agilent) and for exporting data into the formats of several common processing software packages (eg, LCModel, jMRUI, Tarquin). This paper introduces the FID-A software toolkit and uses examples to demonstrate its novel features, namely 1) the use of a spectral registration algorithm to carry out useful processing routines automatically, 2) automatic detection and removal of motion-corrupted scans, and 3) the ability to perform several major aspects of the MRS computational workflow from a single piece of software. This latter feature is illustrated through both high-level processing of in vivo GABA-edited MEGA-PRESS MRS data, as well as detailed quantum mechanical simulations to generate an accurate LCModel basis set for analysis of the same data. All of the described processing steps resulted in a marked improvement in spectral quality compared with unprocessed data. Fitting of MEGA-PRESS data using a customized basis set resulted in improved fitting accuracy compared with a generic MEGA-PRESS basis set. The FID-A software toolkit enables high-level processing of MRS data and accurate simulation of in vivo MRS experiments. Magn Reson Med 77:23-33, 2017. © 2015 Wiley Periodicals, Inc.
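
    The spectral-registration idea, aligning each transient to a reference by fitting a frequency and phase shift, can be sketched generically in Python. This is an illustration under simple assumptions (a single frequency shift and zero-order phase, fitted by time-domain least squares) and is not FID-A's MATLAB implementation or its function names.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def align_fid(fid, reference, t):
        """Align one complex FID to a reference by fitting a frequency shift (Hz)
        and a zero-order phase (rad), in the spirit of spectral registration."""
        def residual(params):
            freq, phase = params
            shifted = fid * np.exp(1j * (2 * np.pi * freq * t + phase))
            diff = shifted - reference
            return np.concatenate([diff.real, diff.imag])

        fit = least_squares(residual, x0=[0.0, 0.0])
        freq, phase = fit.x
        aligned = fid * np.exp(1j * (2 * np.pi * freq * t + phase))
        return aligned, freq, phase
    ```

    Averaging transients only after such an alignment step is what prevents frequency and phase drift from broadening the final spectrum.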

  7. Complete genome of Pieris rapae, a resilient alien, a cabbage pest, and a source of anti-cancer proteins

    PubMed Central

    Kinch, Lisa N.; Borek, Dominika; Otwinowski, Zbyszek; Grishin, Nick V.

    2016-01-01

    The Small Cabbage White (Pieris rapae) is originally a Eurasian butterfly. Accidentally introduced into North America, Australia, and New Zealand a century or more ago, it spread throughout these continents and rapidly became established as one of the most abundant butterfly species. Although it is a serious pest of cabbage and other mustard-family plants, with its caterpillars reducing crops to stems, it is also a source of pierisin, a protein unique to the Whites that shows cytotoxicity to cancer cells. To better understand the unusual biology of this omnipresent, agriculturally and medically important butterfly, we sequenced and annotated the complete genome from USA specimens. At 246 Mbp, it is among the smallest Lepidoptera genomes reported to date. While 1.5% of positions in the genome are heterozygous, they are distributed highly non-randomly along the scaffolds, and nearly 20% of segments longer than 1000 base pairs are SNP-free (median length: 38000 bp). Computational simulations of population evolutionary history suggest that American populations started from a very small number of introduced individuals, possibly a single fertilized female, which is in agreement with the historical literature. Comparison to other Lepidoptera genomes reveals several unique families of proteins that may contribute to the unusual resilience of Pieris. The nitrile-specifier proteins divert plant defense chemicals to non-toxic products. The apoptosis-inducing pierisins could offer a defense mechanism against parasitic wasps. While only two pierisins from Pieris rapae had been characterized before, the genome sequence revealed eight, offering additional candidates as anti-cancer drugs. The reference genome we obtained lays the foundation for future studies of the Cabbage White and other Pieridae species. PMID:28163896

  8. Source Detection with Bayesian Inference on ROSAT All-Sky Survey Data Sample

    NASA Astrophysics Data System (ADS)

    Guglielmetti, F.; Voges, W.; Fischer, R.; Boese, G.; Dose, V.

    2004-07-01

    We employ Bayesian inference for the joint estimation of sources and background on ROSAT All-Sky Survey (RASS) data. The probabilistic method allows for detection improvement of faint extended celestial sources compared to the Standard Analysis Software System (SASS). Background maps were estimated in a single step together with the detection of sources without pixel censoring. Consistent uncertainties of background and sources are provided. The source probability is evaluated for single pixels as well as for pixel domains to enhance source detection of weak and extended sources.
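
    The pixel-level decision such a method makes can be illustrated with a two-hypothesis Poisson mixture: "background only" versus "background plus source". The flat prior, the known source rate and the single-pixel treatment below are simplifying assumptions for illustration, not the RASS analysis itself.

    ```python
    from scipy.stats import poisson

    def source_probability(counts, background_rate, source_rate, prior_source=0.5):
        """Posterior probability that a pixel contains source photons, comparing
        Poisson likelihoods for 'background only' and 'background + source'."""
        like_bkg = poisson.pmf(counts, background_rate)
        like_src = poisson.pmf(counts, background_rate + source_rate)
        evidence = prior_source * like_src + (1.0 - prior_source) * like_bkg
        return prior_source * like_src / evidence

    # Example: 9 counts in a pixel whose expected background is 3 counts
    print(source_probability(9, background_rate=3.0, source_rate=4.0))
    ```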

  9. Developing a denoising filter for electron microscopy and tomography data in the cloud.

    PubMed

    Starosolski, Zbigniew; Szczepanski, Marek; Wahle, Manuel; Rusu, Mirabela; Wriggers, Willy

    2012-09-01

    The low radiation conditions and the predominantly phase-object image formation of cryo-electron microscopy (cryo-EM) result in extremely high noise levels and low contrast in the recorded micrographs. The process of single particle or tomographic 3D reconstruction does not completely eliminate this noise and is even capable of introducing new sources of noise during alignment or when correcting for instrument parameters. The recently developed Digital Paths Supervised Variance (DPSV) denoising filter uses local variance information to control regional noise in a robust and adaptive manner. The performance of the DPSV filter was evaluated in this review qualitatively and quantitatively using simulated and experimental data from cryo-EM and tomography in two and three dimensions. We also assessed the benefit of filtering experimental reconstructions for visualization purposes and for enhancing the accuracy of feature detection. The DPSV filter eliminates high-frequency noise artifacts (density gaps), which would normally preclude the accurate segmentation of tomography reconstructions or the detection of alpha-helices in single-particle reconstructions. This collaborative software development project was carried out entirely by virtual interactions among the authors using publicly available development and file sharing tools.
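
    A generic variance-adaptive smoothing step conveys the flavour of locally controlled denoising: smooth strongly where the local variance looks like noise, weakly where it indicates structure. The Lee-style filter below is a common textbook example used here for illustration only; it is not the DPSV algorithm.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def variance_adaptive_smooth(image, size=5, noise_var=None):
        """Lee-style adaptive filter driven by the local variance in a window."""
        local_mean = uniform_filter(image, size)
        local_sq_mean = uniform_filter(image ** 2, size)
        local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
        if noise_var is None:
            noise_var = np.median(local_var)             # crude global noise estimate
        gain = np.clip(1.0 - noise_var / np.maximum(local_var, 1e-12), 0.0, 1.0)
        return local_mean + gain * (image - local_mean)
    ```

    Because scipy.ndimage.uniform_filter works on arrays of any dimensionality, the same sketch applies to 2D micrographs and 3D tomographic volumes alike.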

  10. Security of two-state and four-state practical quantum bit-commitment protocols

    NASA Astrophysics Data System (ADS)

    Loura, Ricardo; Arsenović, Dušan; Paunković, Nikola; Popović, Duška B.; Prvanović, Slobodan

    2016-12-01

    We study cheating strategies against a practical four-state quantum bit-commitment protocol [A. Danan and L. Vaidman, Quant. Info. Proc. 11, 769 (2012)], 10.1007/s11128-011-0284-4 and its two-state variant [R. Loura et al., Phys. Rev. A 89, 052336 (2014)], 10.1103/PhysRevA.89.052336 when the underlying quantum channels are noisy and the cheating party is constrained to using single-qubit measurements only. We show that simply inferring the transmitted photons' states by using the Breidbart basis, optimal for ambiguous (minimum-error) state discrimination, does not directly produce an optimal cheating strategy for this bit-commitment protocol. We introduce a strategy, based on certain postmeasurement processes and show it to have better chances at cheating than the direct approach. We also study to what extent sending forged geographical coordinates helps a dishonest party in breaking the binding security requirement. Finally, we investigate the impact of imperfect single-photon sources in the protocols. Our study shows that, in terms of the resources used, the four-state protocol is advantageous over the two-state version. The analysis performed can be straightforwardly generalized to any finite-qubit measurement, with the same qualitative results.

  11. Single-silicon CCD-CMOS platform for multi-spectral detection from terahertz to x-rays.

    PubMed

    Shalaby, Mostafa; Vicario, Carlo; Hauri, Christoph P

    2017-11-15

    Charge-coupled devices (CCDs) are a well-established imaging technology in the visible and x-ray frequency ranges. However, the small quantum photon energies of terahertz radiation have hindered the use of this mature semiconductor technological platform in this frequency range, leaving terahertz imaging totally dependent on low-resolution bolometer technologies. Recently, it has been shown that silicon CCDs can detect terahertz photons at high fields, but the detection sensitivity is limited. Here we show that silicon complementary metal-oxide-semiconductor (CMOS) technology offers detection sensitivity enhanced by almost two orders of magnitude compared to CCDs. Our findings allow us to extend the low-frequency terahertz cutoff to less than 2 THz, nearly closing the technological gap with electronic imagers operating up to 1 THz. Furthermore, with silicon CCD/CMOS technology being sensitive in the mid-infrared (mid-IR) and x-ray ranges, we introduce silicon as a single detector platform from 1 EHz to 2 THz. This overcomes the present challenge of spatially overlapping terahertz/mid-IR pump and x-ray probe radiation at facilities such as free-electron lasers, synchrotrons, and laser-based x-ray sources.

  12. Intracolonial genetic variation in the scleractinian coral Seriatopora hystrix

    NASA Astrophysics Data System (ADS)

    Maier, E.; Buckenmaier, A.; Tollrian, R.; Nürnberger, B.

    2012-06-01

    In recent years, increasing numbers of studies revealed intraorganismal genetic variation, primarily in modular organisms like plants or colonial marine invertebrates. Two underlying mechanisms are distinguished: Mosaicism is caused by somatic mutation, whereas chimerism originates from allogeneic fusion. We investigated the occurrence of intracolonial genetic variation at microsatellite loci in five natural populations of the scleractinian coral Seriatopora hystrix on the Great Barrier Reef. This coral is a widely distributed, brooding species that is at present a target of intensive population genetic research on reproduction and dispersal patterns. From each of 155 S. hystrix colonies, either two or three samples were genotyped at five or six loci. Twenty-seven (~17%) genetically heterogeneous colonies were found. Statistical analyses indicated the occurrence of both mosaicism and chimerism. In most cases, intracolonial variation was found only at a single allele. Our analyses suggest that somatic mutations present a major source of genetic heterogeneity within a single colony. Moreover, we observed large, apparently stable chimeric colonies that harbored clearly distinct genotypes and contrast these findings with the patterns typically observed in laboratory-based experiments. We discuss the error that mosaicism and chimerism introduce into population genetic analyses.

  13. [Immobilization of introduced bacteria and degradation of pyrene and benzo(alpha) pyrene in soil by immobilized bacteria].

    PubMed

    Wang, Xin; Li, Peijun; Song, Shouzhi; Zhong, Yong; Zhang, Hui; Verkhozina, E V

    2006-11-01

    In this study, introduced bacteria were applied to the bioremediation of pyrene and benzo(alpha)pyrene in organic-pollutant-contaminated soils, with the aim of testing whether introducing bacteria is feasible in environmental engineering. Three introduced bacterial strains were immobilized separately or together to degrade pyrene and benzo(alpha)pyrene in soil, with dissociated (free) bacteria as the control and three indigenous strains for comparison. The results showed that the immobilized introduced bacteria, whether single or mixed, had higher degradation efficiency than dissociated bacteria. Compared with the indigenous bacteria, some introduced bacteria showed a degree of superiority. The introduced bacterial mixture had better degradation efficiency after being immobilized. The degradation rates of pyrene and benzo(alpha)pyrene after treatment with the immobilized bacterial mixture (B61-B67) for 96 hours were 43.49% and 38.55%, respectively.

  14. Resolution-Enhanced Harmonic and Interharmonic Measurement for Power Quality Analysis in Cyber-Physical Energy System.

    PubMed

    Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin

    2016-06-27

    Power quality analysis issues, especially the measurement of harmonic and interharmonic in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has introduced extra demands to distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads whose information is crucial to subsequent analyzing and control. This paper gives a detailed description of the power quality analysis framework in networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron network. The experiments show that the proposed method is time-efficient and leads to a better accuracy of the simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system.
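
    The "high-resolution frequency from three DFT samples" step can be illustrated with a standard three-point interpolation around the spectral peak (Jacobsen's estimator). This is a generic sketch for a rectangular window and is not necessarily the exact estimator used by the authors.

    ```python
    import numpy as np

    def interpolated_frequency(x, fs):
        """Estimate a dominant tone's frequency beyond the DFT bin spacing using
        the complex spectrum at the peak bin and its two neighbours."""
        X = np.fft.rfft(x)
        k = int(np.argmax(np.abs(X[1:-1]))) + 1          # peak bin, away from DC/Nyquist
        delta = np.real((X[k - 1] - X[k + 1]) / (2 * X[k] - X[k - 1] - X[k + 1]))
        return (k + delta) * fs / len(x)

    # Example: a 50.3 Hz tone sampled at 3.2 kHz for 0.2 s (bin spacing 5 Hz)
    fs, n = 3200, 640
    t = np.arange(n) / fs
    print(interpolated_frequency(np.cos(2 * np.pi * 50.3 * t), fs))
    ```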

  15. Resolution-Enhanced Harmonic and Interharmonic Measurement for Power Quality Analysis in Cyber-Physical Energy System

    PubMed Central

    Liu, Yanchi; Wang, Xue; Liu, Youda; Cui, Sujin

    2016-01-01

    Power quality analysis issues, especially the measurement of harmonic and interharmonic in cyber-physical energy systems, are addressed in this paper. As new situations are introduced to the power system, the impact of electric vehicles, distributed generation and renewable energy has introduced extra demands to distributed sensors, waveform-level information and power quality data analytics. Harmonics and interharmonics, as the most significant disturbances, require carefully designed detection methods for an accurate measurement of electric loads whose information is crucial to subsequent analyzing and control. This paper gives a detailed description of the power quality analysis framework in networked environment and presents a fast and resolution-enhanced method for harmonic and interharmonic measurement. The proposed method first extracts harmonic and interharmonic components efficiently using the single-channel version of Robust Independent Component Analysis (RobustICA), then estimates the high-resolution frequency from three discrete Fourier transform (DFT) samples with little additional computation, and finally computes the amplitudes and phases with the adaptive linear neuron network. The experiments show that the proposed method is time-efficient and leads to a better accuracy of the simulated and experimental signals in the presence of noise and fundamental frequency deviation, thus providing a deeper insight into the (inter)harmonic sources or even the whole system. PMID:27355946

  16. Specificity control for read alignments using an artificial reference genome-guided false discovery rate.

    PubMed

    Giese, Sven H; Zickmann, Franziska; Renard, Bernhard Y

    2014-01-01

    Accurate estimation, comparison and evaluation of read mapping error rates is a crucial step in the processing of next-generation sequencing data, as further analysis steps and interpretation assume the correctness of the mapping results. Current approaches are either focused on sensitivity estimation and thereby disregard specificity or are based on read simulations. Although continuously improving, read simulations are still prone to introduce a bias into the mapping error quantitation and cannot capture all characteristics of an individual dataset. We introduce ARDEN (artificial reference driven estimation of false positives in next-generation sequencing data), a novel benchmark method that estimates error rates of read mappers based on real experimental reads, using an additionally generated artificial reference genome. It allows a dataset-specific computation of error rates and the construction of a receiver operating characteristic curve. Thereby, it can be used for optimization of parameters for read mappers, selection of read mappers for a specific problem or for filtering alignments based on quality estimation. The use of ARDEN is demonstrated in a general read mapper comparison, a parameter optimization for one read mapper and an application example in single-nucleotide polymorphism discovery with a significant reduction in the number of false positive identifications. The ARDEN source code is freely available at http://sourceforge.net/projects/arden/.
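
    The core idea, estimating false positives from alignments against an artificially generated reference, reduces to a decoy-style false-discovery-rate estimate. The simple counting scheme below is a generic illustration of that principle, not ARDEN's actual procedure or scoring.

    ```python
    def estimated_fdr(scores_real, scores_artificial, threshold):
        """Estimate the FDR of alignments accepted at `threshold`, using hits to
        an artificial (decoy) reference as a proxy for false positives."""
        accepted_real = sum(s >= threshold for s in scores_real)
        accepted_artificial = sum(s >= threshold for s in scores_artificial)
        if accepted_real == 0:
            return 0.0
        return min(1.0, accepted_artificial / accepted_real)

    # Sweeping the threshold and recording (acceptance rate, estimated FDR) pairs
    # yields the ROC-like curve used to compare or tune read mappers.
    ```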

  17. Genetic population structure of the recently introduced Asian clam, Potamocorbula amurensis, in San Francisco Bay

    USGS Publications Warehouse

    Duda, T. F.

    1994-01-01

    The genetic population structure of the recently introduced Asian clam, Potamocorbula amurensis, in San Francisco Bay was described using starch gel electrophoresis at eight presumptive loci. Specimens were taken from five environmentally distinct sites located throughout the bay. The population maintains a high degree of genetic variation, with a mean heterozygosity of 0.295, a mean polymorphism of 0.75, and an average of 3.70 alleles per locus. The population is genetically homogeneous, as evidenced from genetic distance values and F-statistics. However, heterogeneity of populations was indicated from a contingency chi-square test. Significant deviations from Hardy-Weinberg equilibrium and heterozygote deficiencies were found at the Lap-1 locus for all populations and at the Lap-2 locus for a single population. High levels of variability could represent a universal characteristic of invading species, the levels of variability in the source population(s), and/or the dynamics of the introduction. Lack of differentiation between subpopulations may be due to the immaturity of the San Francisco Bay population, the “general purpose” phenotype genetic strategy of the species, high rates of gene flow in the population, and/or the selective neutrality of the loci investigated.

  18. Efficient particle-in-cell simulation of auroral plasma phenomena using a CUDA enabled graphics processing unit

    NASA Astrophysics Data System (ADS)

    Sewell, Stephen

    This thesis introduces a software framework that effectively utilizes low-cost commercially available Graphic Processing Units (GPUs) to simulate complex scientific plasma phenomena that are modeled using the Particle-In-Cell (PIC) paradigm. The software framework that was developed conforms to the Compute Unified Device Architecture (CUDA), a standard for general purpose graphic processing that was introduced by NVIDIA Corporation. This framework has been verified for correctness and applied to advance the state of understanding of the electromagnetic aspects of the development of the Aurora Borealis and Aurora Australis. For each phase of the PIC methodology, this research has identified one or more methods to exploit the problem's natural parallelism and effectively map it for execution on the graphic processing unit and its host processor. The sources of overhead that can reduce the effectiveness of parallelization for each of these methods have also been identified. One of the novel aspects of this research was the utilization of particle sorting during the grid interpolation phase. The final representation resulted in simulations that executed about 38 times faster than simulations that were run on a single-core general-purpose processing system. The scalability of this framework to larger problem sizes and future generation systems has also been investigated.
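
    The particle-sorting step mentioned above, grouping particles by their grid cell before interpolation so that neighbouring threads touch neighbouring memory, can be sketched in NumPy for a 1-D grid. The layout and names are illustrative; the actual framework performs the equivalent operations in CUDA.

    ```python
    import numpy as np

    def sort_particles_by_cell(positions, dx, n_cells):
        """Return a particle ordering and per-cell offsets so that particles in
        the same cell are contiguous (the layout that lets GPU gather/scatter
        phases coalesce their memory accesses)."""
        cells = np.clip((positions / dx).astype(int), 0, n_cells - 1)
        order = np.argsort(cells, kind='stable')
        counts = np.bincount(cells, minlength=n_cells)
        offsets = np.concatenate(([0], np.cumsum(counts)))  # cell i owns order[offsets[i]:offsets[i+1]]
        return order, offsets

    def deposit_charge(positions, weights, dx, n_cells):
        """Nearest-grid-point charge deposition over the sorted particles."""
        order, offsets = sort_particles_by_cell(positions, dx, n_cells)
        rho = np.zeros(n_cells)
        for cell in range(n_cells):
            sel = order[offsets[cell]:offsets[cell + 1]]
            rho[cell] = weights[sel].sum() / dx
        return rho
    ```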

  19. Near real-time PPP-based monitoring of the ionosphere using dual-frequency GPS/BDS/Galileo data

    NASA Astrophysics Data System (ADS)

    Liu, Zhinmin; Li, Yangyang; Li, Fei; Guo, Jinyun

    2018-03-01

    Ionospheric delay is very important in GNSS observations, since it is one of the main error sources that must be mitigated or even eliminated in order to determine reliable and precise positions. The ionosphere is a dispersive medium for radio signals, so the group delay or phase advance of a GNSS radio signal depends on the signal frequency. Ground-based GNSS stations have long been used for ionosphere monitoring and modeling. In this paper we introduce a novel approach suitable for single-receiver operation based on the precise point positioning (PPP) technique. One of its main characteristics is that only carrier-phase observations are used, avoiding the particular effects of pseudorange observations. The technique consists of introducing ionosphere ambiguity parameters obtained from the PPP filter into the geometry-free combination of observations to estimate ionospheric delays. Observational data from stations of the International GNSS Service (IGS) Multi-GNSS Experiments (MGEX) network that are capable of tracking GPS/BDS/Galileo are processed. For performance validation, ionospheric delay series derived from the novel approach are compared with the global ionospheric maps (GIMs) from the Ionospheric Associate Analysis Centers (IAACs). The results are encouraging and offer a potential solution for near real-time ionosphere monitoring.
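
    The geometry-free carrier-phase combination at the heart of this technique can be written out directly. The GPS L1/L2 frequencies below are standard constants, while the combined ambiguity/bias term is assumed to be supplied by the PPP filter as described; the function is an illustration rather than the authors' processing chain.

    ```python
    # GPS L1/L2 carrier frequencies (Hz)
    F1 = 1575.42e6
    F2 = 1227.60e6

    def slant_ionosphere(phase_l1_m, phase_l2_m, ambiguity_bias_m):
        """First-order slant ionospheric delay on L1 (metres) and slant TEC (TECU)
        from the geometry-free combination L4 = Phi1 - Phi2 (both in metres)."""
        l4 = phase_l1_m - phase_l2_m - ambiguity_bias_m
        delay_l1 = l4 * F2**2 / (F1**2 - F2**2)      # ionospheric delay on L1
        stec_tecu = delay_l1 * F1**2 / 40.3e16       # 1 TECU = 1e16 electrons/m^2
        return delay_l1, stec_tecu
    ```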

  20. Large-scale, thick, self-assembled, nacre-mimetic brick-walls as fire barrier coatings on textiles

    NASA Astrophysics Data System (ADS)

    Das, Paramita; Thomas, Helga; Moeller, Martin; Walther, Andreas

    2017-01-01

    Highly loaded polymer/clay nanocomposites with layered structures are emerging as robust fire-retardant surface coatings. However, time-intensive sequential deposition processes, e.g. layer-by-layer strategies, hinder obtaining large coating thicknesses and complicate implementation into existing technologies. Here, we demonstrate a single-step, water-borne approach to prepare thick, self-assembling, hybrid fire barrier coatings of sodium carboxymethyl cellulose (CMC)/montmorillonite (MTM) with a well-defined, bioinspired brick-wall nanostructure, and showcase their application on textiles. The coating thickness on the textile is tailored using different concentrations of CMC/MTM (1-5 wt%) in the coating bath. While lower concentrations impart conformal coatings of the fibers, thicker continuous coatings are obtained on the textile surface from the highest concentration. Comprehensive fire barrier and fire retardancy tests elucidate the increasing fire barrier and retardancy properties with increasing coating thickness. The materials are free of halogen and heavy metal atoms, and are sourced from sustainable and partly even renewable building blocks. We further introduce an amphiphobic surface modification on the coating to impart oil and water repellency, as well as self-cleaning features. Hence, our study presents a generic, environmentally friendly, scalable, one-pot coating approach that can be introduced into existing technologies to prepare bioinspired, thick, fire barrier nanocomposite coatings on diverse surfaces.

  1. Cost-effectiveness of Chlamydia antibody tests in subfertile women.

    PubMed

    Fiddelers, A A A; Land, J A; Voss, G; Kessels, A G H; Severens, J L

    2005-02-01

    For the evaluation of tubal function, Chlamydia antibody testing (CAT) has been introduced as a screening test. We compared six CAT screening strategies (five CAT tests and one combination of tests) with respect to their cost-effectiveness, using IVF pregnancy rate as the outcome measure. A decision-analytic model was developed based on a source population of 1715 subfertile women. The model incorporates hysterosalpingography (HSG), laparoscopy and IVF. IVF pregnancy rates, costs, effects, cost-effectiveness and incremental costs per effect were determined for the six CAT screening strategies. pELISA Medac turned out to be the most cost-effective CAT screening strategy (15 075 per IVF pregnancy), followed by MIF Anilabsystems (15 108). A combination of tests (pELISA Medac and MIF Anilabsystems; 15 127) did not improve the cost-effectiveness of the single strategies. Sensitivity analyses showed that the results are robust to changes in the baseline values of the model parameters. Only small differences in cost-effectiveness were found between the screening strategies, although pELISA Medac was the most cost-effective strategy. Before introducing a particular CAT test into clinical practice, one should consider the effects and consequences of the entire screening strategy, instead of only the diagnostic accuracy of the test used.

  2. Functional Interrupts and Destructive Failures from Single Event Effect Testing of Point-Of-Load Devices

    NASA Technical Reports Server (NTRS)

    Chen, Dakai; Phan, Anthony; Kim, Hak; Swonger, James; Musil, Paul; LaBel, Kenneth

    2013-01-01

    We show examples of single event functional interrupt and destructive failure in modern POL devices. The increasing complexity and diversity of the design and process introduce hard SEE modes that are triggered by various mechanisms.

  3. Quantum self-organization and nuclear collectivities

    NASA Astrophysics Data System (ADS)

    Otsuka, T.; Tsunoda, Y.; Togashi, T.; Shimizu, N.; Abe, T.

    2018-02-01

    Quantum self-organization is introduced as one of the major underlying mechanisms of quantum many-body systems. Taking atomic nuclei as an example, two types of nucleon motion, single-particle states and collective modes, dominate the structure of the nucleus. The outcome of the collective mode is basically determined by the balance between the effect of the mode-driving force (e.g., the quadrupole force for ellipsoidal deformation) and the resistance against it. The single-particle energies are one of the sources of such resistance: a coherent collective motion is more strongly hindered by larger gaps between the relevant single-particle states. Thus, the single-particle states and the collective mode are “enemies” of each other. However, the nuclear forces are shown to be rich enough to enhance the relevant collective mode by reducing this resistance, changing the single-particle energies for each eigenstate through monopole interactions. This is verified with a concrete example taken from Zr isotopes. Thus, when quantum self-organization occurs, single-particle energies can be self-organized, being enhanced by (i) two quantum liquids, e.g., protons and neutrons, and (ii) two major force components, e.g., the quadrupole interaction (to drive the collective mode) and the monopole interaction (to control the resistance). In other words, atomic nuclei are not necessarily like simple rigid vases containing almost free nucleons, in contrast to the naïve Fermi-liquid picture. Type II shell evolution is considered to be a simple visible case involving excitations across a (sub)magic gap. Quantum self-organization becomes more important in heavier nuclei, where the number of active orbits and the number of active nucleons are larger. Quantum self-organization is a general phenomenon and is expected to be found in other quantum systems.

  4. Foraging enrichment for stabled horses: effects on behaviour and selection.

    PubMed

    Goodwin, D; Davidson, H P B; Harris, P

    2002-11-01

    The restricted access to pasture experienced by many competition horses has been linked to the exhibition of stereotypic and redirected behaviour patterns. It has been suggested that racehorses provided with more than one source of forage are less likely to perform these patterns; however, the reasons for this are currently unclear. To investigate this, in 4 replicated trials up to 12 horses were introduced into each of 2 identical stables, containing either a single forage or 6 forages, for 5 min. To detect novelty effects, in the first and third trials the single forage was hay; in the second and fourth, it was the preferred forage from the preceding trial. Trials were videotaped and 12 mutually exclusive behaviour patterns compared. When hay was presented as the single forage (Trials 1 and 3), all recorded behaviour patterns were significantly different between stables; e.g. during Trial 3 in the 'Single' stable, horses looked over the stable door more frequently (P<0.001), moved for longer (P<0.001), foraged on straw bedding for longer (P<0.001), and more frequently exhibited behaviour indicative of motivation to search for alternative resources (P<0.001). When a previously preferred forage was presented as the single forage (Trials 2 and 4), behaviour was also significantly different between stables; e.g. in Trial 4 horses looked out over the stable door more frequently (P<0.005) and foraged for longer in their straw bedding (P<0.005). Further study is required to determine whether these effects persist over longer periods. However, these trials indicate that enrichment of the stable environment through provision of multiple forages may have welfare benefits for horses, in reducing straw consumption and facilitating the expression of highly motivated foraging behaviour.

  5. True resolution enhancement for optical spectroscopy

    NASA Astrophysics Data System (ADS)

    Cooper, Justin T.; Oleske, Jeffrey B.

    2018-02-01

    Resolving spectrally adjacent peaks is important for techniques such as tracking small shifts in Raman or fluorescence spectra, quantifying pharmaceutical polymorph ratios, or molecular orientation studies. Thus, suitable spectral resolution is a vital consideration when designing most spectroscopic systems. Most parameters that influence spectral resolution are fixed for a given system (spectrometer length, grating groove density, excitation source, CCD pixel size, etc.). Inflexible systems are unproblematic if the spectrometer is dedicated to a single purpose; however, these specifications cannot be optimized for different applications with a wider range of resolution requirements. Data processing techniques, including peak fitting, partial least squares, or principal component analysis, are typically used to achieve sub-optical resolution information. These techniques can be plagued by spectral artifacts introduced by post-processing as well as by the subjective implementation of statistical parameters. TruRes™, from Andor Technology, uses an innovative optical means to greatly improve and expand the range of spectral resolutions accessible on a single setup. True spectral resolution enhancement of >30% is achieved without mathematical spectral alteration, data processing, or spectrometer component changes. Discrete characteristic spectral lines from Laser-Induced Breakdown Spectroscopy (LIBS) and atomic calibration sources are now fully resolved from spectrally adjacent peaks under an otherwise identical configuration. TruRes™ has the added advantage of increasing the spectral resolution without sacrificing bandpass. Using TruRes™, the Kymera 328i resolution can approach that of a 500 mm focal-length spectrometer. Furthermore, the bandpass of a 500 mm spectrograph would be 50% narrower than that of the Kymera 328i with all other spectrometer components held constant. However, the Kymera 328i with TruRes™ is able to preserve a 50% wider bandpass.

  6. Evaluation of realistic layouts for next generation on-scalp MEG: spatial information density maps.

    PubMed

    Riaz, Bushra; Pfeiffer, Christoph; Schneiderman, Justin F

    2017-08-01

    While commercial magnetoencephalography (MEG) systems are the functional neuroimaging state-of-the-art in terms of spatio-temporal resolution, MEG sensors have not changed significantly since the 1990s. Interest in newer sensors that operate at less extreme temperatures, e.g., high critical temperature (high-Tc) SQUIDs, optically-pumped magnetometers, etc., is growing because they enable significant reductions in head-to-sensor standoff (on-scalp MEG). Various metrics quantify the advantages of on-scalp MEG, but a single straightforward one is lacking. Previous works have furthermore been limited to arbitrary and/or unrealistic sensor layouts. We introduce spatial information density (SID) maps for quantitative and qualitative evaluations of sensor arrays. SID-maps present the spatial distribution of the information a sensor array extracts from a source space while accounting for relevant source and sensor parameters. We use them in a systematic comparison of three practical on-scalp MEG sensor array layouts (based on high-Tc SQUIDs) and the standard Elekta Neuromag TRIUX magnetometer array. Results strengthen the case for on-scalp and specifically high-Tc SQUID-based MEG while providing a path for the practical design of future MEG systems. SID-maps are furthermore general to arbitrary magnetic sensor technologies and source spaces and can thus be used for the design and evaluation of sensor arrays for magnetocardiography, magnetic particle imaging, etc.

  7. Flexible embedding of networks

    NASA Astrophysics Data System (ADS)

    Fernandez-Gracia, Juan; Buckee, Caroline; Onnela, Jukka-Pekka

    We introduce a model for embedding one network into another, focusing on the case where network A is much bigger than network B. Nodes from network A are assigned to the nodes in network B using an algorithm in which we control the extent of localization of node placement in network B with a single parameter. Starting from an unassigned node in network A, called the source node, we first map this node to a randomly chosen node in network B, called the target node. We then assign the neighbors of the source node to the neighborhood of the target node using a random-walk-based approach: to assign each neighbor of the source node to one of the nodes in network B, we perform a random walk starting from the target node with stopping probability α. We repeat this process until all nodes in network A have been mapped to the nodes of network B. The simplicity of the model allows us to calculate key quantities of interest in closed form. By varying the parameter α, we are able to produce embeddings from very local (α = 1) to very global (α → 0). We show how our calculations fit the simulated results, and we apply the model to study how social networks are embedded in geography and how the neurons of C. elegans are embedded in the surrounding volume.
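
    A minimal Python sketch of the assignment procedure described above (not the authors' code; the networkx graphs, function name and parameter values are illustrative assumptions):

      import random
      import networkx as nx

      def embed(A, B, alpha, seed=None):
          # Illustrative sketch of the random-walk assignment: each neighbour
          # of a source node is placed at the node of B where a random walk,
          # started at the source's target and stopped with probability alpha
          # per step, terminates.
          rng = random.Random(seed)
          placement = {}
          b_nodes = list(B.nodes())
          for source in A.nodes():
              if source in placement:
                  continue
              placement[source] = rng.choice(b_nodes)   # random target in B
              for neighbour in A.neighbors(source):
                  if neighbour in placement:
                      continue
                  current = placement[source]
                  while rng.random() > alpha:           # stop w.p. alpha per step
                      current = rng.choice(list(B.neighbors(current)))
                  placement[neighbour] = current
          return placement

      # alpha = 1 keeps neighbours on the target itself (very local);
      # alpha -> 0 lets the walk wander far across B (very global).
      A = nx.erdos_renyi_graph(200, 0.05, seed=1)
      B = nx.grid_2d_graph(10, 10)
      mapping = embed(A, B, alpha=0.5, seed=1)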

  8. Chapter 6. Climate and terrain

    Treesearch

    James N. Davis

    2004-01-01

    Our knowledge of the physical requirements of cultivated plants is far advanced in contrast to that of the native and introduced species used in range plantings. Cultivated plants are usually grown as single varieties of a species under specific controlled conditions to ensure maximum yields. Native and introduced range plants often grow in species mixtures on sites...

  9. Circles and the Lines That Intersect Them

    ERIC Educational Resources Information Center

    Clay, Ellen L.; Rhee, Katherine L.

    2014-01-01

    In this article, Clay and Rhee use the mathematics topic of circles and the lines that intersect them to introduce the idea of looking at the single mathematical idea of relationships--in this case, between angles and arcs--across a group of problems. They introduce the mathematics that underlies these relationships, beginning with the questions…

  10. A structurally based analytic model of growth and biomass dynamics in single species stands of conifers

    Treesearch

    Robin J. Tausch

    2015-01-01

    A theoretically based analytic model of plant growth in single-species conifer communities, based on the species fully occupying a site and fully using the site resources, is introduced. Model derivations result in a single equation that simultaneously describes changes over both different site conditions (or resources available) and over time for each variable for each...

  11. CREMA-D: Crowd-sourced Emotional Multimodal Actors Dataset

    PubMed Central

    Cao, Houwei; Cooper, David G.; Keutmann, Michael K.; Gur, Ruben C.; Nenkova, Ani; Verma, Ragini

    2014-01-01

    People convey their emotional state in their face and voice. We present an audio-visual data set uniquely suited for the study of multi-modal emotion expression and perception. The data set consists of facial and vocal emotional expressions in sentences spoken in a range of basic emotional states (happy, sad, anger, fear, disgust, and neutral). 7,442 clips of 91 actors with diverse ethnic backgrounds were rated by multiple raters in three modalities: audio, visual, and audio-visual. Categorical emotion labels and real-valued intensity values for the perceived emotion were collected using crowd-sourcing from 2,443 raters. Human recognition of the intended emotion for the audio-only, visual-only, and audio-visual data is 40.9%, 58.2% and 63.6%, respectively. Recognition rates are highest for neutral, followed by happy, anger, disgust, fear, and sad. Average intensity levels of emotion are rated highest for visual-only perception. The accurate recognition of disgust and fear requires simultaneous audio-visual cues, while anger and happiness can be well recognized based on evidence from a single modality. The large dataset we introduce can be used to probe other questions concerning the audio-visual perception of emotion. PMID:25653738

  12. Executing SPARQL Queries over the Web of Linked Data

    NASA Astrophysics Data System (ADS)

    Hartig, Olaf; Bizer, Christian; Freytag, Johann-Christoph

    The Web of Linked Data forms a single, globally distributed dataspace. Due to the openness of this dataspace, it is not possible to know in advance all data sources that might be relevant for query answering. This openness poses a new challenge that is not addressed by traditional research on federated query processing. In this paper we present an approach to execute SPARQL queries over the Web of Linked Data. The main idea of our approach is to discover data that might be relevant for answering a query during the query execution itself. This discovery is driven by following RDF links between data sources based on URIs in the query and in partial results. The URIs are resolved over the HTTP protocol into RDF data which is continuously added to the queried dataset. This paper describes concepts and algorithms to implement our approach using an iterator-based pipeline. We introduce a formalization of the pipelining approach and show that classical iterators may cause blocking due to the latency of HTTP requests. To avoid blocking, we propose an extension of the iterator paradigm. The evaluation of our approach shows its strengths as well as the still existing challenges.
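
    A rough Python sketch of the link-traversal idea, using rdflib purely for illustration (the paper's implementation is an iterator-based pipeline, not the round-based loop shown here; the function name and round limit are assumptions):

      from rdflib import Graph, URIRef

      def traversal_query(query, seed_uris, max_rounds=3):
          # Dereference URIs over HTTP, merge the returned RDF into one growing
          # dataset, re-evaluate the SPARQL query, and follow URIs that appear
          # in the partial results.
          g = Graph()
          seen, frontier = set(), set(seed_uris)
          results = []
          for _ in range(max_rounds):
              for uri in frontier - seen:
                  try:
                      g.parse(uri)        # HTTP GET, content-negotiated RDF
                  except Exception:
                      pass                # skip unreachable or non-RDF sources
                  seen.add(uri)
              results = list(g.query(query))
              frontier = {term for row in results for term in row
                          if isinstance(term, URIRef)}
              if frontier <= seen:        # no new links discovered
                  break
          return results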

  13. ToxPi Graphical User Interface 2.0: Dynamic exploration, visualization, and sharing of integrated data models.

    PubMed

    Marvel, Skylar W; To, Kimberly; Grimm, Fabian A; Wright, Fred A; Rusyn, Ivan; Reif, David M

    2018-03-05

    Drawing integrated conclusions from diverse source data requires synthesis across multiple types of information. The ToxPi (Toxicological Prioritization Index) is an analytical framework that was developed to enable integration of multiple sources of evidence by transforming data into integrated, visual profiles. Methodological improvements have advanced ToxPi and expanded its applicability, necessitating a new, consolidated software platform to provide functionality while preserving flexibility for future updates. We detail the implementation of a new graphical user interface for ToxPi that provides interactive visualization, analysis, reporting, and portability. The interface is deployed as a stand-alone, platform-independent Java application, with a modular design to accommodate inclusion of future analytics. The new ToxPi interface introduces several features, from flexible data import formats (including legacy formats that permit backward compatibility) to similarity-based clustering to options for high-resolution graphical output. We present the new ToxPi interface for dynamic exploration, visualization, and sharing of integrated data models. The ToxPi interface is freely available as a single compressed download that includes the main Java executable, all libraries, example data files, and a complete user manual from http://toxpi.org.

  14. Detection of localized inclusions of gold nanoparticles in Intralipid-1% by point-radiance spectroscopy

    NASA Astrophysics Data System (ADS)

    Grabtchak, Serge; Palmer, Tyler J.; Whelan, William M.

    2011-07-01

    Interstitial fiber-optic-based approaches used in both diagnostic and therapeutic applications rely on localized light-tissue interactions. We present an optical technique to identify spectrally and spatially specific exogenous chromophores in highly scattering turbid media. Point radiance spectroscopy is based on directional light collection at a single point with a side-firing fiber that can be rotated up to 360 deg. A side-firing fiber accepts light within a well-defined solid angle, thus potentially providing improved spatial resolution. Measurements were performed using an 800-μm diameter isotropic spherical diffuser coupled to a halogen light source and a 600 μm, ~43 deg cleaved fiber (i.e., radiance detector). The background liquid scattering phantom was fabricated using 1% Intralipid. Light was collected in 1 deg increments through a 360 deg segment. Gold nanoparticles, placed into a 3.5-mm diameter capillary tube, were used as localized scatterers and absorbers introduced into the liquid phantom both on- and off-axis between source and detector. The localized optical inhomogeneity was detectable as an angular-resolved variation in the radiance polar plots. This technique is being investigated as a potential noninvasive optical modality for prostate cancer monitoring.

  15. A programmable metasurface with dynamic polarization, scattering and focusing control

    NASA Astrophysics Data System (ADS)

    Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia

    2016-10-01

    Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, which naturally operates on binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. In addition, an inverse fast Fourier transform (IFFT) technique is introduced to expedite the optimization process for a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications.
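
    A toy numpy sketch of the optimization loop described above: a binary coding matrix is scored through an FFT-based far-field estimate inside a simple evolutionary search. The array size, fitness definition and mutation-only update are illustrative assumptions, not the authors' algorithm settings:

      import numpy as np

      rng = np.random.default_rng(0)
      N, POP, GEN = 16, 40, 200          # 16x16 coding matrix, toy GA settings

      def fitness(code):
          # Lower peak scattering is better: estimate the far-field pattern of
          # a 0/pi coded aperture with a zero-padded 2-D FFT (the FFT/IFFT
          # shortcut mentioned above) and return the negative peak level.
          field = np.exp(1j * np.pi * code)          # element phase: 0 or pi
          pattern = np.abs(np.fft.fft2(field, (128, 128)))
          return -pattern.max()

      # mutation-only evolutionary loop for brevity (a full GA adds crossover)
      pop = rng.integers(0, 2, size=(POP, N, N))
      for _ in range(GEN):
          scores = np.array([fitness(c) for c in pop])
          parents = pop[np.argsort(scores)[-POP // 2:]]   # keep the best half
          kids = parents.copy()
          mask = rng.random(kids.shape) < 0.02            # bit-flip mutation
          kids[mask] ^= 1
          pop = np.concatenate([parents, kids])
      best = pop[np.argmax([fitness(c) for c in pop])]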

  16. KLYNAC: Compact linear accelerator with integrated power supply

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malyzhenkov, Alexander

    Accelerators and accelerator-based light sources have a wide range of applications in science, engineering technology and medicine. Today the scientific community is working towards improving the quality of the accelerated beam and its parameters while trying to develop technology for reducing accelerator size. This work describes a design of a compact linear accelerator (linac) prototype, the resonant Klynac device, which is a combined linear accelerator and its power supply - a klystron. The intended purpose of a Klynac device is to provide a compact and inexpensive alternative to a conventional 1 to 6 MeV accelerator, which typically requires a separate RF source, the accelerator itself and all the associated hardware. Because the Klynac is a single structure, it has the potential to be much less sensitive to temperature variations than a system with separate klystron and linac. We start by introducing a simplified theoretical model for a Klynac device. We then demonstrate how a prototype is designed step-by-step using particle-in-cell simulation studies for mono-resonant and bi-resonant structures. Finally, we discuss design options from a stability point of view and required input power as well as behavior of competing modes for the actual built device.

  17. Attosecond Coherent Control of the Photo-Dissociation of Oxygen Molecules

    NASA Astrophysics Data System (ADS)

    Sturm, Felix; Ray, Dipanwita; Wright, Travis; Shivaram, Niranjan; Bocharova, Irina; Slaughter, Daniel; Ranitovic, Predrag; Belkacem, Ali; Weber, Thorsten

    2016-05-01

    Attosecond coherent control has emerged in recent years as a technique to manipulate absorption and ionization in atoms, as well as the dissociation of molecules, on an attosecond time scale. Single attosecond pulses and attosecond pulse trains (APTs) can coherently excite multiple electronic states. The electronic and nuclear wave packets can then be coupled with a second pulse, forming multiple interfering quantum pathways. We have built a high-flux extreme ultraviolet (XUV) light source delivering APTs based on HHG that allows us to selectively excite neutral and ion states in molecules. Our beamline provides spectral selectivity and attosecond interferometric control of the pulses. In the study presented here, we use APTs, generated by high harmonic generation in a high-flux extreme ultraviolet light source, to ionize highly excited states of oxygen molecules. We identify the ionization/dissociation pathways, revealing vibrational structure, with ultra-high resolution ion 3D-momentum imaging spectroscopy. Furthermore, we introduce a delay between IR pulses and XUV/IR pulses to constructively or destructively interfere the ionization and dissociation pathways, thus enabling the manipulation of both the O2+ and O+ ion yields with attosecond precision. Supported by DOE under Contract No. DE-AC02-05CH11231.

  18. A programmable metasurface with dynamic polarization, scattering and focusing control

    PubMed Central

    Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia

    2016-01-01

    Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, which naturally operates on binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. In addition, an inverse fast Fourier transform (IFFT) technique is introduced to expedite the optimization process for a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications. PMID:27774997

  19. A programmable metasurface with dynamic polarization, scattering and focusing control.

    PubMed

    Yang, Huanhuan; Cao, Xiangyu; Yang, Fan; Gao, Jun; Xu, Shenheng; Li, Maokun; Chen, Xibi; Zhao, Yi; Zheng, Yuejun; Li, Sijia

    2016-10-24

    Diverse electromagnetic (EM) responses of a programmable metasurface with a relatively large scale have been investigated, where multiple functionalities are obtained on the same surface. The unit cell in the metasurface is integrated with one PIN diode, and thus a binary coded phase is realized for a single polarization. Exploiting this anisotropic characteristic, reconfigurable polarization conversion is presented first. Then the dynamic scattering performance for two kinds of sources, i.e. a plane wave and a point source, is carefully elaborated. To tailor the scattering properties, a genetic algorithm, which naturally operates on binary coding, is coupled with the scattering pattern analysis to optimize the coding matrix. In addition, an inverse fast Fourier transform (IFFT) technique is introduced to expedite the optimization process for a large metasurface. Since the coding control of each unit cell allows a local and direct modulation of the EM wave, various EM phenomena including anomalous reflection, diffusion, beam steering and beam forming are successfully demonstrated by both simulations and experiments. It is worthwhile to point out that a real-time switch among these functionalities is also achieved by using a field-programmable gate array (FPGA). All the results suggest that the proposed programmable metasurface has great potential for future applications.

  20. Multitasking vs. multiplexing: Toward a normative account of limitations in the simultaneous execution of control-demanding behaviors

    PubMed Central

    Feng, S. F.; Schwemmer, M.; Gershman, S. J.; Cohen, J. D.

    2014-01-01

    Why is it that behaviors that rely on control, so striking in their diversity and flexibility, are also subject to such striking limitations? Typically, people cannot engage in more than a few — and usually only a single — control-demanding task at a time. This limitation was a defining element in the earliest conceptualizations of controlled processing, it remains one of the most widely accepted axioms of cognitive psychology, and is even the basis for some laws (e.g., against the use of mobile devices while driving). Remarkably, however, the source of this limitation is still not understood. Here, we examine one potential source of this limitation, in terms of a tradeoff between the flexibility and efficiency of representation (“multiplexing”) and the simultaneous engagement of different processing pathways (“multitasking”). We show that even a modest amount of multiplexing rapidly introduces cross-talk among processing pathways, thereby constraining the number that can be productively engaged at once. We propose that, given the large number of advantages of efficient coding, the human brain has favored this over the capacity for multitasking of control-demanding processes. PMID:24481850

  1. Multitasking versus multiplexing: Toward a normative account of limitations in the simultaneous execution of control-demanding behaviors.

    PubMed

    Feng, S F; Schwemmer, M; Gershman, S J; Cohen, J D

    2014-03-01

    Why is it that behaviors that rely on control, so striking in their diversity and flexibility, are also subject to such striking limitations? Typically, people cannot engage in more than a few, and usually only a single, control-demanding task at a time. This limitation was a defining element in the earliest conceptualizations of controlled processing; it remains one of the most widely accepted axioms of cognitive psychology, and is even the basis for some laws (e.g., against the use of mobile devices while driving). Remarkably, however, the source of this limitation is still not understood. Here, we examine one potential source of this limitation, in terms of a trade-off between the flexibility and efficiency of representation ("multiplexing") and the simultaneous engagement of different processing pathways ("multitasking"). We show that even a modest amount of multiplexing rapidly introduces cross-talk among processing pathways, thereby constraining the number that can be productively engaged at once. We propose that, given the large number of advantages of efficient coding, the human brain has favored this over the capacity for multitasking of control-demanding processes.

  2. Affective Aspects of Perceived Loss of Control and Potential Implications for Brain-Computer Interfaces.

    PubMed

    Grissmann, Sebastian; Zander, Thorsten O; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter

    2017-01-01

    Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted mental processes can interfere with the signals used for classification and thereby introduce changes in the signal properties which could potentially impede BCI classification performance. To improve BCI performance, we propose deploying an approach that potentially allows us to describe different mental states that could influence BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics with sources located outside of, but affecting, the signal of interest originating from the primary motor areas, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates which were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios.

  3. An automated workflow for parallel processing of large multiview SPIM recordings

    PubMed Central

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-01-01

    Summary: Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, performed interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multi-illumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. Availability and implementation: The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:26628585
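
    Since the per-time-point steps are independent, the parallelization idea can be pictured with a few lines of standard-library Python (the file layout, function body and worker count are placeholders, not part of the published snakemake pipeline):

      from concurrent.futures import ProcessPoolExecutor
      from pathlib import Path

      def process_timepoint(tp):
          # Placeholder for one per-time-point step of such a pipeline
          # (e.g. registration or fusion of all views of time point `tp`).
          raw = Path(f"raw/spim_tp{tp:04d}.tif")          # hypothetical layout
          out = Path(f"processed/spim_tp{tp:04d}.tif")
          # ... load `raw`, register/fuse views, write `out` ...
          return out

      if __name__ == "__main__":
          timepoints = range(240)
          # each time point is independent, so the work parallelizes trivially,
          # whether across local cores or across HPC cluster jobs
          with ProcessPoolExecutor(max_workers=8) as pool:
              outputs = list(pool.map(process_timepoint, timepoints))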

  4. Detection of localized inclusions of gold nanoparticles in Intralipid-1% by point-radiance spectroscopy.

    PubMed

    Grabtchak, Serge; Palmer, Tyler J; Whelan, William M

    2011-07-01

    Interstitial fiber-optic-based approaches used in both diagnostic and therapeutic applications rely on localized light-tissue interactions. We present an optical technique to identify spectrally and spatially specific exogenous chromophores in highly scattering turbid media. Point radiance spectroscopy is based on directional light collection at a single point with a side-firing fiber that can be rotated up to 360 deg. A side-firing fiber accepts light within a well-defined solid angle, thus potentially providing improved spatial resolution. Measurements were performed using an 800-μm diameter isotropic spherical diffuser coupled to a halogen light source and a 600 μm, ∼43 deg cleaved fiber (i.e., radiance detector). The background liquid scattering phantom was fabricated using 1% Intralipid. Light was collected in 1 deg increments through a 360 deg segment. Gold nanoparticles, placed into a 3.5-mm diameter capillary tube, were used as localized scatterers and absorbers introduced into the liquid phantom both on- and off-axis between source and detector. The localized optical inhomogeneity was detectable as an angular-resolved variation in the radiance polar plots. This technique is being investigated as a potential noninvasive optical modality for prostate cancer monitoring.

  5. An automated workflow for parallel processing of large multiview SPIM recordings.

    PubMed

    Schmied, Christopher; Steinbach, Peter; Pietzsch, Tobias; Preibisch, Stephan; Tomancak, Pavel

    2016-04-01

    Selective Plane Illumination Microscopy (SPIM) makes it possible to image developing organisms in 3D at unprecedented temporal resolution over long periods of time. The resulting massive amounts of raw image data require extensive processing, performed interactively via dedicated graphical user interface (GUI) applications. The consecutive processing steps can be easily automated and the individual time points can be processed independently, which lends itself to trivial parallelization on a high performance computing (HPC) cluster. Here, we introduce an automated workflow for processing large multiview, multichannel, multi-illumination time-lapse SPIM data on a single workstation or in parallel on an HPC cluster. The pipeline relies on snakemake to resolve dependencies among consecutive processing steps and can be easily adapted to any cluster environment for processing SPIM data in a fraction of the time required to collect it. The code is distributed free and open source under the MIT license http://opensource.org/licenses/MIT. The source code can be downloaded from github: https://github.com/mpicbg-scicomp/snakemake-workflows. Documentation can be found here: http://fiji.sc/Automated_workflow_for_parallel_Multiview_Reconstruction. Contact: schmied@mpi-cbg.de. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  6. SCALEUS: Semantic Web Services Integration for Biomedical Applications.

    PubMed

    Sernadela, Pedro; González-Castro, Lorena; Oliveira, José Luís

    2017-04-01

    In recent years, we have witnessed an explosion of biological data resulting largely from the demands of life science research. The vast majority of these data are freely available via diverse bioinformatics platforms, including relational databases and conventional keyword search applications. This type of approach has achieved great results in the last few years, but proved to be unfeasible when information needs to be combined or shared among different and scattered sources. During recent years, many of these data distribution challenges have been solved with the adoption of the semantic web. Despite the evident benefits of this technology, its adoption introduced new challenges related to the migration process from existing systems to the semantic level. To facilitate this transition, we have developed Scaleus, a semantic web migration tool that can be deployed on top of traditional systems in order to bring knowledge, inference rules, and query federation to the existing data. Targeted at the biomedical domain, this web-based platform offers, in a single package, straightforward data integration and semantic web services that help developers and researchers in the creation of new semantically enhanced information systems. SCALEUS is available as open source at http://bioinformatics-ua.github.io/scaleus/.

  7. On-chip dual-comb source for spectroscopy

    PubMed Central

    Dutt, Avik; Joshi, Chaitanya; Ji, Xingchen; Cardenas, Jaime; Okawachi, Yoshitomo; Luke, Kevin; Gaeta, Alexander L.; Lipson, Michal

    2018-01-01

    Dual-comb spectroscopy is a powerful technique for real-time, broadband optical sampling of molecular spectra, which requires no moving components. Recent developments with microresonator-based platforms have enabled frequency combs at the chip scale. However, the need to precisely match the resonance wavelengths of distinct high quality-factor microcavities has hindered the development of on-chip dual combs. We report the simultaneous generation of two microresonator combs on the same chip from a single laser, drastically reducing experimental complexity. We demonstrate broadband optical spectra spanning 51 THz and low-noise operation of both combs by deterministically tuning into soliton mode-locked states using integrated microheaters, resulting in narrow (<10 kHz) microwave beat notes. We further use one comb as a reference to probe the formation dynamics of the other comb, thus introducing a technique to investigate comb evolution without auxiliary lasers or microwave oscillators. We demonstrate high signal-to-noise ratio absorption spectroscopy spanning 170 nm using the dual-comb source over a 20-μs acquisition time. Our device paves the way for compact and robust spectrometers at nanosecond time scales enabled by large beat-note spacings (>1 GHz). PMID:29511733

  8. Pollution monitoring using networks of honey bees

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bromenshenk, J.J.; Dewart, M.L.; Thomas, J.M.

    1983-08-01

    Each year thousands of chemicals in large quantities are introduced into the global environment, and the need for effective methods of monitoring these substances has steadily increased. Most monitoring programs rely upon instrumentation to measure specific contaminants in air, water, or soil. However, it has become apparent that humans and their environment are exposed to complex mixtures of chemicals rather than single entities. As our ability to detect ever smaller quantities of pollutants has increased, the biological significance of these findings has become more uncertain. Also, it is clear that monitoring efforts should shift from short-term studies of easily identifiable sources in localized areas to long-term studies of multiple sources over widespread regions. Our investigations aim at providing better tools to meet these exigencies. Honey bees are discussed as an effective, long-term, self-sustaining system for monitoring environmental impacts. Our results indicate that a regional, and possibly national or international, monitoring capability can be realized with the aid of beekeepers in obtaining samples and conducting measurements. This approach has the added advantage of public involvement in environmental problem solving and protection of human health and environmental quality.

  9. Flow-oriented dynamic assembly algorithm in TCP over OBS networks

    NASA Astrophysics Data System (ADS)

    Peng, Shuping; Li, Zhengbin; He, Yongqi; Xu, Anshi

    2008-11-01

    OBS is envisioned as a promising infrastructure for the next generation optical network, and TCP is likely to be the dominant transport protocol in the next generation network. Therefore, it is necessary to evaluate the performance of TCP over OBS networks. The assembly at the ingress edge nodes impacts network performance. Several Fixed Assembly Period (FAP) algorithms have been proposed. However, the assembly period in FAP is fixed and cannot be adjusted according to the network condition. Moreover, in FAP, packets from different TCP sources are assembled into one burst. In that case, if such a burst is dropped, the TCP windows of all the corresponding sources will shrink and the throughput will be reduced. In this paper, we introduce a flow-oriented Dynamic Assembly Period (DAP) algorithm for TCP over OBS networks. By comparing the previous and current burst lengths, DAP tracks the variation of the TCP window and dynamically updates the assembly period for the next assembly. The performance of DAP is evaluated over a single TCP connection and over multiple connections. The simulation results show that DAP performs better than FAP over almost the whole range of burst dropping probabilities.
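
    A hedged Python sketch of a burst-length-driven assembly period update (the abstract does not give the exact DAP update rule; the direction of adjustment, gain and bounds below are illustrative assumptions only):

      def update_assembly_period(period, prev_len, curr_len,
                                 gain=0.5, min_period=0.5e-3, max_period=8e-3):
          # Track the trend in burst length as a proxy for the flow's TCP
          # window: here a growing burst shortens the next per-flow assembly
          # period and a shrinking burst lengthens it. This specific rule is
          # an illustration, not the DAP rule from the paper.
          if prev_len > 0:
              ratio = curr_len / prev_len
              period = period / (1.0 + gain * (ratio - 1.0))
          return min(max(period, min_period), max_period)

      # flow-oriented assembly: each TCP source keeps its own period (seconds)
      periods = {}
      def on_burst_sent(flow_id, prev_len, curr_len, default=2e-3):
          periods[flow_id] = update_assembly_period(
              periods.get(flow_id, default), prev_len, curr_len)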

  10. Affective Aspects of Perceived Loss of Control and Potential Implications for Brain-Computer Interfaces

    PubMed Central

    Grissmann, Sebastian; Zander, Thorsten O.; Faller, Josef; Brönstrup, Jonas; Kelava, Augustin; Gramann, Klaus; Gerjets, Peter

    2017-01-01

    Most brain-computer interfaces (BCIs) focus on detecting single aspects of user states (e.g., motor imagery) in the electroencephalogram (EEG) in order to use these aspects as control input for external systems. This communication can be effective, but unaccounted mental processes can interfere with the signals used for classification and thereby introduce changes in the signal properties which could potentially impede BCI classification performance. To improve BCI performance, we propose deploying an approach that potentially allows us to describe different mental states that could influence BCI performance. To test this approach, we analyzed neural signatures of potential affective states in data collected in a paradigm where the complex user state of perceived loss of control (LOC) was induced. In this article, source localization methods were used to identify brain dynamics with sources located outside of, but affecting, the signal of interest originating from the primary motor areas, pointing to interfering processes in the brain during natural human-machine interaction. In particular, we found affective correlates which were related to perceived LOC. We conclude that additional context information about the ongoing user state might help to improve the applicability of BCIs to real-world scenarios. PMID:28769776

  11. Klynac: Compact Linear Accelerator with Integrated Power Supply

    NASA Astrophysics Data System (ADS)

    Malyzhenkov, A. V.

    Accelerators and accelerator-based light sources have a wide range of applications in science, engineering technology and medicine. Today the scientific community is working towards improving the quality of the accelerated beam and its parameters, while trying to develop technology for reducing accelerator size. This work describes a design of a compact linear accelerator (linac) prototype: the resonant Klynac device, which is a combined linear accelerator and its power supply - a klystron. The intended purpose of a Klynac device is to provide a compact and inexpensive alternative to a conventional 1 to 6 MeV accelerator, which typically requires a separate RF source, the accelerator itself and all the associated hardware. Because the Klynac is a single structure, it has the potential to be much less sensitive to temperature variations than a system with separate klystron and linac. We start by introducing a simplified theoretical model for a Klynac device. We then demonstrate how a prototype is designed step-by-step using Particle-In-Cell simulation studies for mono-resonant and bi-resonant structures. Finally, we discuss design options from a stability point of view and required input power as well as behavior of competing modes for the actual built device.

  12. Single sources in the low-frequency gravitational wave sky: properties and time to detection by pulsar timing arrays

    NASA Astrophysics Data System (ADS)

    Kelley, Luke Zoltan; Blecha, Laura; Hernquist, Lars; Sesana, Alberto; Taylor, Stephen R.

    2018-06-01

    We calculate the properties, occurrence rates and detection prospects of individually resolvable `single sources' in the low-frequency gravitational wave (GW) spectrum. Our simulations use the population of galaxies and massive black hole binaries from the Illustris cosmological hydrodynamic simulations, coupled to comprehensive semi-analytic models of the binary merger process. Using mock pulsar timing arrays (PTAs) with, for the first time, varying red-noise models, we calculate plausible detection prospects for GW single sources and the stochastic GW background (GWB). Contrary to previous results, we find that single sources are at least as detectable as the GW background. Using mock PTAs, we find that these `foreground' sources (also `deterministic'/`continuous') are likely to be detected with ˜20 yr total observing baselines. Detection prospects, and indeed the overall properties of single sources, are only moderately sensitive to binary evolution parameters - namely eccentricity and environmental coupling, which can lead to differences of ˜5 yr in times to detection. Red noise has a stronger effect, roughly doubling the time to detection of the foreground between a white-noise-only model (˜10-15 yr) and severe red noise (˜20-30 yr). The effect of red noise on the GWB is even stronger, suggesting that single-source detections may be more robust. We find that typical signal-to-noise ratios for the foreground peak near f = 0.1 yr^-1, and are much less sensitive to the continued addition of new pulsars to the PTAs.

  13. A stable wavelength-tunable triggered source of single photons and cascaded photon pairs at the telecom C-band

    NASA Astrophysics Data System (ADS)

    Zeuner, Katharina D.; Paul, Matthias; Lettner, Thomas; Reuterskiöld Hedlund, Carl; Schweickert, Lucas; Steinhauer, Stephan; Yang, Lily; Zichi, Julien; Hammar, Mattias; Jöns, Klaus D.; Zwiller, Val

    2018-04-01

    The implementation of fiber-based long-range quantum communication requires tunable sources of single photons at the telecom C-band. Stable and easy-to-implement wavelength-tunability of individual sources is crucial to (i) bring remote sources into resonance, (ii) define a wavelength standard, and (iii) ensure scalability to operate a quantum repeater. So far, the most promising sources for true telecom single photons are semiconductor quantum dots, due to their ability to deterministically and reliably emit single and entangled photons. However, the required wavelength-tunability is hard to attain. Here, we show a stable wavelength-tunable quantum light source by integrating strain-released InAs quantum dots on piezoelectric substrates. We present triggered single-photon emission at 1.55 μm with a multi-photon emission probability as low as 0.097, as well as photon pair emission from the radiative biexciton-exciton cascade. We achieve a tuning range of 0.25 nm, which will allow us to spectrally overlap remote quantum dots or to tune distant quantum dots into resonance with quantum memories. This opens up realistic avenues for the implementation of photonic quantum information processing applications at telecom wavelengths.

  14. The Cross-Wavelet Transform and Analysis of Quasi-periodic Behavior in the Pearson-Readhead VLBI Survey Sources

    NASA Astrophysics Data System (ADS)

    Kelly, Brandon C.; Hughes, Philip A.; Aller, Hugh D.; Aller, Margo F.

    2003-07-01

    We introduce an algorithm for applying a cross-wavelet transform to the analysis of quasi-periodic variations in a time series and introduce significance tests for the technique. We apply a continuous wavelet transform and the cross-wavelet algorithm to the Pearson-Readhead VLBI survey sources using data obtained from the University of Michigan 26 m paraboloid at observing frequencies of 14.5, 8.0, and 4.8 GHz. Thirty of the 62 sources were chosen as having sufficient data for analysis, with at least 100 data points for a given time series. Of these 30 sources, a little more than half exhibited evidence for quasi-periodic behavior in at least one observing frequency, with a mean characteristic period of 2.4 yr and a standard deviation of 1.3 yr. We find that, out of the 30 sources, there were about four timescales for every 10 time series, and about half of those sources showing quasi-periodic behavior repeated the behavior in at least one other observing frequency.
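
    The core cross-wavelet computation can be sketched in a few lines of Python with PyWavelets (an illustration of the general technique only; the wavelet choice, scales and toy data are assumptions, and the paper's significance tests are not reproduced):

      import numpy as np
      import pywt

      def cross_wavelet(x, y, dt, scales, wavelet="cmor1.5-1.0"):
          # W_xy = W_x * conj(W_y); shared quasi-periodic power shows up as
          # ridges of |W_xy| at the common scale in both series.
          wx, freqs = pywt.cwt(x, scales, wavelet, sampling_period=dt)
          wy, _ = pywt.cwt(y, scales, wavelet, sampling_period=dt)
          return np.abs(wx * np.conj(wy)), freqs

      # toy example: two noisy flux curves sharing a ~2.4 yr periodicity
      t = np.arange(0.0, 25.0, 0.1)                       # years
      x = np.sin(2 * np.pi * t / 2.4) + 0.5 * np.random.randn(t.size)
      y = np.sin(2 * np.pi * t / 2.4 + 1.0) + 0.5 * np.random.randn(t.size)
      power, freqs = cross_wavelet(x, y, dt=0.1, scales=np.arange(1, 128))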

  15. Integrated spatial multiplexing of heralded single-photon sources

    PubMed Central

    Collins, M.J.; Xiong, C.; Rey, I.H.; Vo, T.D.; He, J.; Shahnia, S.; Reardon, C.; Krauss, T.F.; Steel, M.J.; Clark, A.S.; Eggleton, B.J.

    2013-01-01

    The non-deterministic nature of photon sources is a key limitation for single-photon quantum processors. Spatial multiplexing overcomes this by enhancing the heralded single-photon yield without enhancing the output noise. Here the intrinsic statistical limit of an individual source is surpassed by spatially multiplexing two monolithic silicon-based correlated photon pair sources in the telecommunications band, demonstrating a 62.4% increase in the heralded single-photon output without an increase in unwanted multipair generation. We further demonstrate the scalability of this scheme by multiplexing photons generated in two waveguides pumped via an integrated coupler with a 63.1% increase in the heralded photon rate. This demonstration paves the way for a scalable architecture for multiplexing many photon sources in a compact integrated platform and achieving efficient two-photon interference, required at the core of optical quantum computing and quantum communication protocols. PMID:24107840

  16. Multi-photon absorption limits to heralded single photon sources

    PubMed Central

    Husko, Chad A.; Clark, Alex S.; Collins, Matthew J.; De Rossi, Alfredo; Combrié, Sylvain; Lehoucq, Gaëlle; Rey, Isabella H.; Krauss, Thomas F.; Xiong, Chunle; Eggleton, Benjamin J.

    2013-01-01

    Single photons are of paramount importance to future quantum technologies, including quantum communication and computation. Nonlinear photonic devices using parametric processes offer a straightforward route to generating photons, however additional nonlinear processes may come into play and interfere with these sources. Here we analyse spontaneous four-wave mixing (SFWM) sources in the presence of multi-photon processes. We conduct experiments in silicon and gallium indium phosphide photonic crystal waveguides which display inherently different nonlinear absorption processes, namely two-photon (TPA) and three-photon absorption (ThPA), respectively. We develop a novel model capturing these diverse effects which is in excellent quantitative agreement with measurements of brightness, coincidence-to-accidental ratio (CAR) and second-order correlation function g(2)(0), showing that TPA imposes an intrinsic limit on heralded single photon sources. We build on these observations to devise a new metric, the quantum utility (QMU), enabling further optimisation of single photon sources. PMID:24186400

  17. Rating the methodological quality of single-subject designs and n-of-1 trials: introducing the Single-Case Experimental Design (SCED) Scale.

    PubMed

    Tate, Robyn L; McDonald, Skye; Perdices, Michael; Togher, Leanne; Schultz, Regina; Savage, Sharon

    2008-08-01

    Rating scales that assess methodological quality of clinical trials provide a means to critically appraise the literature. Scales are currently available to rate randomised and non-randomised controlled trials, but there are none that assess single-subject designs. The Single-Case Experimental Design (SCED) Scale was developed for this purpose and evaluated for reliability. Six clinical researchers who were trained and experienced in rating methodological quality of clinical trials developed the scale and participated in reliability studies. The SCED Scale is an 11-item rating scale for single-subject designs, of which 10 items are used to assess methodological quality and use of statistical analysis. The scale was developed and refined over a 3-year period. Content validity was addressed by identifying items to reduce the main sources of bias in single-case methodology as stipulated by authorities in the field, which were empirically tested against 85 published reports. Inter-rater reliability was assessed using a random sample of 20/312 single-subject reports archived in the Psychological Database of Brain Impairment Treatment Efficacy (PsycBITE). Inter-rater reliability for the total score was excellent, both for individual raters (overall ICC = 0.84; 95% confidence interval 0.73-0.92) and for consensus ratings between pairs of raters (overall ICC = 0.88; 95% confidence interval 0.78-0.95). Item reliability was fair to excellent for consensus ratings between pairs of raters (range k = 0.48 to 1.00). The results were replicated with two independent novice raters who were trained in the use of the scale (ICC = 0.88, 95% confidence interval 0.73-0.95). The SCED Scale thus provides a brief and valid evaluation of methodological quality of single-subject designs, with the total score demonstrating excellent inter-rater reliability using both individual and consensus ratings. Items from the scale can also be used as a checklist in the design, reporting and critical appraisal of single-subject designs, thereby assisting to improve standards of single-case methodology.

  18. Synthesis and Characterization of the First Liquid Single Source Precursors for the Deposition of Ternary Chalcopyrite (CuInS2) Thin Film Materials

    NASA Technical Reports Server (NTRS)

    Banger, Kulbinder K.; Cowen, Jonathan; Hepp, Aloysius

    2002-01-01

    Molecular engineering of ternary single source precursors based on the [{PBu3}2Cu(SR')2In(SR')2] architecture has afforded the first liquid CIS ternary single source precursors (when R' = Et, n-Pr), which are suitable for low temperature deposition (< 350 C). Thermogravimetric analyses (TGA) and modulated differential scanning calorimetry (DSC) confirm their liquid phase and reduced stability. X-ray diffraction studies, energy dispersive spectroscopy (EDS), and scanning electron microscopy (SEM) support the formation of the single-phase chalcopyrite CuInS2 at low temperatures.

  19. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    NASA Astrophysics Data System (ADS)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data-vector portions of the noise-corrupted source vectors. Considerations regarding selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
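
    A schematic PyTorch sketch of a replicator-style network, i.e. a multilayer perceptron with three hidden layers trained to reproduce its own input, whose middle-layer activations play the role of the natural coordinates described above (layer sizes, activations and training details are illustrative assumptions, not the paper's configuration):

      import torch
      import torch.nn as nn

      class Replicator(nn.Module):
          # input -> hidden -> bottleneck -> hidden -> output, trained so that
          # output ~ input; the bottleneck activations act as (approximate)
          # natural coordinates of the data manifold.
          def __init__(self, dim_in=32, dim_hidden=64, dim_code=4):
              super().__init__()
              self.encode = nn.Sequential(
                  nn.Linear(dim_in, dim_hidden), nn.Tanh(),
                  nn.Linear(dim_hidden, dim_code), nn.Tanh())
              self.decode = nn.Sequential(
                  nn.Linear(dim_code, dim_hidden), nn.Tanh(),
                  nn.Linear(dim_hidden, dim_in))

          def forward(self, x):
              return self.decode(self.encode(x))

      net = Replicator()
      opt = torch.optim.Adam(net.parameters(), lr=1e-3)
      loss_fn = nn.MSELoss()                     # minimum mean squared error target
      x = torch.randn(256, 32)                   # stand-in for source vectors
      for _ in range(200):
          opt.zero_grad()
          loss = loss_fn(net(x), x)              # reconstruct the input
          loss.backward()
          opt.step()
      codes = net.encode(x).detach()             # approximate natural coordinates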

  20. [Advances of consolidated bioprocessing based on recombinant strategy].

    PubMed

    Zheng, Zongbao; Zhao, Meina; Chen, Tao; Zhao, Xueming

    2013-10-01

    Lignocellulosic biomass represents an abundant, low-cost and renewable source of potentially fermentable sugars. It is a candidate, besides petroleum, as a feedstock for fuel and chemical production. Recent research on utilizing lignocellulosics as feedstock has boosted the development of numerous promising processes for a variety of fuels and chemicals, such as biodiesel, biohydrogen and ethanol. However, the high cost of depolymerization is a primary obstacle preventing the use of lignocellulosic biomass as feedstock. Consolidated bioprocessing (CBP) refers to a bioprocess that converts lignocellulosic material into biochemicals directly, without any exogenous cellulolytic enzymes added; it could potentially avoid the cost of a dedicated enzyme-generation step by incorporating enzyme-generating, biomass-degrading and bioproduct-producing capabilities into a single organism through genetic engineering. There are two CBP strategies, the native strategy and the recombinant strategy. We mainly introduce the recombinant strategy, including its principle, its two corresponding styles, the contributions of synthetic biology and metabolic engineering, and future challenges.

  1. The effect of interstellar absorption on measurements of the baryon acoustic peak in the Lyman α forest

    DOE PAGES

    Vadai, Yishay; Poznanski, Dovi; Baron, Dalya; ...

    2017-08-14

    In recent years, the autocorrelation of the hydrogen Lyman α forest has been used to observe the baryon acoustic peak at redshift 2 < z < 3.5 using tens of thousands of QSO spectra from the BOSS survey. However, the interstellar medium of the Milky Way introduces absorption lines into the spectrum of any extragalactic source. These lines, while weak and undetectable in a single BOSS spectrum, could potentially bias the cosmological signal. In order to examine this, we generate absorption line maps by stacking over a million spectra of galaxies and QSOs. Here, we find that the systematics introduced are too small to affect the current accuracy of the baryon acoustic peak, but might be relevant to future surveys such as the Dark Energy Spectroscopic Instrument (DESI). We outline a method to account for this with future data sets.
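
    The stacking step itself is conceptually simple; a compact numpy illustration is given below (median-combining continuum-normalized spectra on a common wavelength grid; the arrays here are placeholders, and the paper's actual stacking and calibration procedure is more involved):

      import numpy as np

      def stack_spectra(wavelengths, fluxes, grid):
          # Resample each continuum-normalized spectrum onto a common wavelength
          # grid and take the median: weak Galactic absorption lines, invisible
          # in any single noisy spectrum, emerge in a stack of ~10^6 spectra.
          resampled = np.empty((len(fluxes), grid.size))
          for i, (wl, fl) in enumerate(zip(wavelengths, fluxes)):
              resampled[i] = np.interp(grid, wl, fl)
          return np.nanmedian(resampled, axis=0)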

  2. Machine learning-based augmented reality for improved surgical scene understanding.

    PubMed

    Pauly, Olivier; Diotte, Benoit; Fallavollita, Pascal; Weidert, Simon; Euler, Ekkehard; Navab, Nassir

    2015-04-01

    In orthopedic and trauma surgery, augmented reality (AR) technology can support surgeons in the challenging task of understanding the spatial relationships between the anatomy, the implants and their tools. In this context, we propose a novel augmented visualization of the surgical scene that intelligently mixes the different sources of information provided by a mobile C-arm combined with a Kinect RGB-Depth sensor. To this end, we introduce a learning-based paradigm that aims at (1) identifying the relevant objects or anatomy in both Kinect and X-ray data, and (2) creating an object-specific, pixel-wise alpha map that permits relevance-based fusion of the video and X-ray images within one single view. In 12 simulated surgeries, we show very promising results, providing surgeons with a better understanding of the surgical scene as well as improved depth perception. Copyright © 2014 Elsevier Ltd. All rights reserved.
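
    The relevance-based fusion step can be pictured with a few lines of numpy (a sketch only; in the paper the object-specific, pixel-wise alpha map is produced by the learned classification stage rather than supplied by hand as here):

      import numpy as np

      def fuse(video_rgb, xray_gray, alpha):
          # alpha is a per-pixel relevance map in [0, 1]: 1 keeps the video
          # (e.g. over tools and hands), 0 shows the registered X-ray
          # (e.g. over anatomy and implants).
          xray_rgb = np.repeat(xray_gray[..., None], 3, axis=-1)
          a = alpha[..., None]                 # broadcast over the RGB channels
          fused = a * video_rgb + (1.0 - a) * xray_rgb
          return fused.astype(video_rgb.dtype)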

  3. iQIST v0.7: An open source continuous-time quantum Monte Carlo impurity solver toolkit

    NASA Astrophysics Data System (ADS)

    Huang, Li

    2017-12-01

    In this paper, we present a new version of the iQIST software package, which is capable of solving various quantum impurity models by using the hybridization expansion (or strong coupling expansion) continuous-time quantum Monte Carlo algorithm. In the revised version, the software architecture has been completely redesigned. A new basis (the intermediate representation, or singular value decomposition representation) for the single-particle and two-particle Green's functions is introduced. Many useful physical observables are added, such as the charge susceptibility, fidelity susceptibility, Binder cumulant, and autocorrelation time. In particular, we optimize the measurement of the two-particle Green's functions; both the particle-hole and particle-particle channels are supported. In addition, the block structure of the two-particle Green's functions is exploited to accelerate the calculation. Finally, we fix some known bugs and limitations. The computational efficiency of the code is greatly enhanced.

  4. Structural basis for energy transduction by respiratory alternative complex III.

    PubMed

    Sousa, Joana S; Calisto, Filipa; Langer, Julian D; Mills, Deryck J; Refojo, Patrícia N; Teixeira, Miguel; Kühlbrandt, Werner; Vonck, Janet; Pereira, Manuela M

    2018-04-30

    Electron transfer in respiratory chains generates the electrochemical potential that serves as energy source for the cell. Prokaryotes can use a wide range of electron donors and acceptors and may have alternative complexes performing the same catalytic reactions as the mitochondrial complexes. This is the case for the alternative complex III (ACIII), a quinol:cytochrome c/HiPIP oxidoreductase. In order to understand the catalytic mechanism of this respiratory enzyme, we determined the structure of ACIII from Rhodothermus marinus at 3.9 Å resolution by single-particle cryo-electron microscopy. ACIII presents a so-far unique structure, for which we establish the arrangement of the cofactors (four iron-sulfur clusters and six c-type hemes) and propose the location of the quinol-binding site and the presence of two putative proton pathways in the membrane. Altogether, this structure provides insights into a mechanism for energy transduction and introduces ACIII as a redox-driven proton pump.

  5. Detection of Cherenkov Photons with Multi-Anode Photomultipliers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salazar, H.; Moreno, E.; Murrieta, T.

    2006-09-25

    The present paper describes the laboratory course given at the X Mexican Workshop on Particles and Fields. The main purpose of this exercise is to introduce the students to work with multi-anode photomultipliers such as the one used for this experiment (Hamamatsu R5900-M64), with which measurements requiring position-sensitive detection of single photons can be successfully performed. We present a short introduction to multi-anode photomultipliers (MAPMT) and describe the setup and the procedure used to measure the response of a MAPMT to a uniform source of light. Finally, we describe the setup and procedure used to measure the Cherenkov circles produced by cosmic muons upon traversal of a simple glass radiator system.

  6. LB3D: A parallel implementation of the Lattice-Boltzmann method for simulation of interacting amphiphilic fluids

    NASA Astrophysics Data System (ADS)

    Schmieschek, S.; Shamardin, L.; Frijters, S.; Krüger, T.; Schiller, U. D.; Harting, J.; Coveney, P. V.

    2017-08-01

    We introduce the lattice-Boltzmann code LB3D, version 7.1. Building on a parallel program and supporting tools which have enabled research utilising high performance computing resources for nearly two decades, LB3D version 7 provides a subset of the research code functionality as an open source project. Here, we describe the theoretical basis of the algorithm as well as computational aspects of the implementation. The software package is validated against simulations of meso-phases resulting from self-assembly in ternary fluid mixtures comprising immiscible and amphiphilic components such as water-oil-surfactant systems. The impact of the surfactant species on the dynamics of spinodal decomposition is tested, and a quantitative measurement of the permeability of a body centred cubic (BCC) model porous medium for a simple binary mixture is described. Single-core performance and scaling behaviour of the code are reported for simulations on current supercomputer architectures.

  7. Deflection Measurements of a Thermally Simulated Nuclear Core Using a High-Resolution CCD-Camera

    NASA Technical Reports Server (NTRS)

    Stanojev, B. J.; Houts, M.

    2004-01-01

    Space fission systems under consideration for near-term missions all use compact, fast-spectrum reactor cores. Reactor dimensional change with increasing temperature, which affects neutron leakage, is the dominant source of reactivity feedback in these systems. Accurately measuring core dimensional changes during realistic non-nuclear testing is therefore necessary for predicting the system's nuclear-equivalent behavior. This paper discusses one key technique being evaluated for measuring such changes. The proposed technique is to use a Charge-Coupled Device (CCD) sensor to obtain deformation readings of an electrically heated, prototypic reactor core geometry. This paper introduces a technique by which a single high-spatial-resolution CCD camera is used to measure core deformation in Real Time (RT). Initial system checkout results are presented along with a discussion of how additional cameras could be used to achieve a three-dimensional deformation profile of the core during testing.

  8. Limiting the public cost of stationary battery deployment by combining applications

    NASA Astrophysics Data System (ADS)

    Stephan, A.; Battke, B.; Beuse, M. D.; Clausdeinken, J. H.; Schmidt, T. S.

    2016-07-01

    Batteries could be central to low-carbon energy systems with high shares of intermittent renewable energy sources. However, the investment attractiveness of batteries is still perceived as low, eliciting calls for policy to support deployment. Here we show how the cost of battery deployment can potentially be minimized by introducing an aspect that has been largely overlooked in policy debates and underlying analyses: the fact that a single battery can serve multiple applications. Batteries thereby can not only tap into different value streams, but also combine different risk exposures. To address this gap, we develop a techno-economic model and apply it to the case of lithium-ion batteries serving multiple stationary applications in Germany. Our results show that batteries could be attractive for investors even now if non-market barriers impeding the combination of applications were removed. The current policy debate should therefore be refocused so as to encompass the removal of such barriers.

  9. All-photonic quantum repeaters

    PubMed Central

    Azuma, Koji; Tamaki, Kiyoshi; Lo, Hoi-Kwong

    2015-01-01

    Quantum communication holds promise for unconditionally secure transmission of secret messages and faithful transfer of unknown quantum states. Photons appear to be the medium of choice for quantum communication. Owing to photon losses, robust quantum communication over long lossy channels requires quantum repeaters. It is widely believed that a necessary and highly demanding requirement for quantum repeaters is the existence of matter quantum memories. Here we show that such a requirement is, in fact, unnecessary by introducing the concept of all-photonic quantum repeaters based on flying qubits. In particular, we present a protocol based on photonic cluster-state machine guns and a loss-tolerant measurement equipped with local high-speed active feedforwards. We show that, with such all-photonic quantum repeaters, the communication efficiency scales polynomially with the channel distance. Our result paves a new route towards quantum repeaters with efficient single-photon sources rather than matter quantum memories. PMID:25873153

  10. Tracing medical information over the Internet.

    PubMed

    Mutairi, S M

    2000-05-01

    The Internet has become, without doubt, a huge and valuable source of information for researchers. The wealth of information on the Internet is second to none, and medical information is no exception. Yet with the vast expansion of the Internet, and of the World Wide Web in particular, finding the kind of information one is looking for can require browsing thousands of web sites, an experience much like searching for a needle in a haystack. That is why search engines and subject indexes were introduced as means to overcome this problem and have grown so rapidly. In general, there are three approaches to retrieving data from the World Wide Web: subject directories, search engines and detailed subject indexes. However, no single search engine or directory is comprehensive, and it is recommended to use more than one, with different keywords and synonyms.

  11. Computer simulations of sympatric speciation in a simple food web

    NASA Astrophysics Data System (ADS)

    Luz-Burgoa, K.; Dell, Tony; de Oliveira, S. Moss

    2005-07-01

    Galapagos finches have motivated much theoretical research aimed at understanding the processes associated with the formation of species. Inspired by them, in this paper we investigate the process of sympatric speciation in a simple food web model. To do so, we modify the individual-based Penna model, which has been widely used to study aging as well as other evolutionary processes. Initially, our web consists of a primary food source and a single herbivore species that feeds on this resource. Subsequently we introduce a predator that feeds on the herbivore. In both instances we manipulate the basal resource distribution directly and monitor the changes in the populations. Sympatric speciation is obtained for the top species in both cases, and our results suggest that the speciation velocity depends on how far up the food chain the focus population is feeding. Simulations are done with three different sexual imprinting-like mechanisms, in order to discuss adaptation by natural selection.

  12. Design of Xen Hybrid Multiple Policy Model

    NASA Astrophysics Data System (ADS)

    Sun, Lei; Lin, Renhao; Zhu, Xianwei

    2017-10-01

    Virtualization technology has attracted more and more attention. As a popular open-source virtualization tool, Xen is used more and more frequently, and XSM, the Xen security model, has also received widespread attention. However, XSM does not establish a classification of security status, and it treats the virtual machine as the managed object, making Dom0 a unique administrative domain that violates the principle of least privilege. To address these issues, we design a hybrid multiple-policy model named SV_HMPMD that organically integrates multiple single security policy models, including DTE, RBAC and BLP. It can fulfill the confidentiality and integrity requirements of a security model and apply different granularity to different domains. To improve the practicability of BLP, the model introduces multi-level security labels. To divide privileges in detail, we combine DTE with RBAC. To avoid oversized privileges, we limit the privileges of Dom0.

  13. Soft X-ray imaging of thick carbon-based materials using the normal incidence multilayer optics.

    PubMed

    Artyukov, I A; Feschenko, R M; Vinogradov, A V; Bugayev, Ye A; Devizenko, O Y; Kondratenko, V V; Kasyanov, Yu S; Hatano, T; Yamamoto, M; Saveliev, S V

    2010-10-01

    The high transparency of carbon-containing materials in the spectral region of the "carbon window" (lambda approximately 4.5-5 nm) introduces new opportunities for various soft X-ray microscopy applications. The development of efficient multilayer-coated X-ray optics operating at wavelengths of about 4.5 nm has stimulated a series of our imaging experiments to study thick biological and synthetic objects. Our experimental set-up consisted of a laser plasma X-ray source generated with the 2nd harmonic of a Nd-glass laser, scandium-based thin-film filters, a Co/C multilayer mirror and X-ray film UF-4. All soft X-ray images were produced with a single nanosecond exposure and demonstrated appropriate absorption contrast and detector-limited spatial resolution. Special attention was paid to the 3D imaging of thick low-density foam materials to be used in the design of laser fusion targets.

  14. Multigroup Monte Carlo on GPUs: Comparison of history- and event-based algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Steven P.; Slattery, Stuart R.; Evans, Thomas M.

    This article presents an investigation of the performance of different multigroup Monte Carlo transport algorithms on GPUs with a discussion of both history-based and event-based approaches. Several algorithmic improvements are introduced for both approaches. By modifying the history-based algorithm that is traditionally favored in CPU-based MC codes to occasionally filter out dead particles to reduce thread divergence, performance exceeds that of either the pure history-based or event-based approaches. The impacts of several algorithmic choices are discussed, including performance studies on Kepler and Pascal generation NVIDIA GPUs for fixed source and eigenvalue calculations. Single-device performance equivalent to 20–40 CPU cores on the K40 GPU and 60–80 CPU cores on the P100 GPU is achieved. Finally, nearly perfect multi-device parallel weak scaling is demonstrated on more than 16,000 nodes of the Titan supercomputer.
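
    The dead-particle filtering idea described above can be illustrated with a small CPU-side toy model: a vectorized random walk whose particle array is periodically compacted so that later steps do no work on absorbed particles (the GPU analogue of reducing thread divergence). This is only a sketch under assumed parameters (absorption probability, filter interval), not the authors' GPU implementation.

```python
# Toy random walk with periodic dead-particle filtering (stream compaction).
# Hypothetical parameters; illustrates the idea, not the paper's GPU code.
import numpy as np

rng = np.random.default_rng(0)
pos = np.zeros(100_000)                    # particle positions
alive = np.ones(pos.size, dtype=bool)      # alive flags
p_absorb, filter_every = 0.05, 8           # assumed absorption prob. / interval

step = 0
while alive.any():
    pos[alive] += rng.normal(size=alive.sum())     # transport step
    alive &= rng.random(pos.size) >= p_absorb      # absorption kills some
    step += 1
    if step % filter_every == 0:
        # compact: keep only live particles so subsequent steps skip dead ones
        pos, alive = pos[alive], np.ones(np.count_nonzero(alive), dtype=bool)

print(f"all particles absorbed after {step} steps")
```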

  15. Long-distance thermal temporal ghost imaging over optical fibers

    NASA Astrophysics Data System (ADS)

    Yao, Xin; Zhang, Wei; Li, Hao; You, Lixing; Wang, Zhen; Huang, Yidong

    2018-02-01

    A thermal ghost imaging scheme between two distant parties is proposed and experimentally demonstrated over long-distance optical fibers. In the scheme, the weak thermal light is split into two paths. Photons in one path are spatially diffused according to their frequencies by a spatial dispersion component, then illuminate the object and record its spatial transmission information. Photons in the other path are temporally diffused by a temporal dispersion component. By coincidence measurement between photons of the two paths, the object can be imaged in a ghost-imaging manner, based on the frequency correlation between photons in the two paths. In the experiment, the weak thermal light source is prepared by spontaneous four-wave mixing in a silicon waveguide. The temporal dispersion is introduced by 50 km of single-mode fiber, which can also be regarded as a fiber link. Experimental results show that this scheme can be realized over long-distance optical fibers.

  16. The intrinsic role of membrane morphology to reduce defectivity in advanced photochemicals

    NASA Astrophysics Data System (ADS)

    Kohyama, Tetsu; Wu, Aiwen; Miura, Kozue; Ohyashiki, Yasushi

    2018-03-01

    Defect source reduction in leading-edge iArF resists is a critical requirement to improve device performance and overall yield in lithography manufacturing processes. It is believed that some polar polymers can aggregate and be responsible for single or multiple micro-bridge defects. Further investigation into the formation of these defects is needed. We have previously presented the effective removal of gel-like polymers using nylon media [1]. However, as the industry is moving to smaller feature sizes, there is a need to further improve the defect removal efficiency. In this paper, a filter comprised of a novel membrane called Azora, with unique morphology and high flow performance, is introduced. This new filter shows better on-wafer defect performance in an advanced ArF solution than conventional nylon and UPE media. In addition, it shows improved stability during chemical storage. Results and possible retention mechanisms are discussed.

  17. The effect of interstellar absorption on measurements of the baryon acoustic peak in the Lyman α forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vadai, Yishay; Poznanski, Dovi; Baron, Dalya

    In recent years, the autocorrelation of the hydrogen Lyman α forest has been used to observe the baryon acoustic peak at redshift 2 < z < 3.5 using tens of thousands of QSO spectra from the BOSS survey. However, the interstellar medium of the Milky Way introduces absorption lines into the spectrum of any extragalactic source. These lines, while weak and undetectable in a single BOSS spectrum, could potentially bias the cosmological signal. In order to examine this, we generate absorption line maps by stacking over a million spectra of galaxies and QSOs. Here, we find that the systematics introduced are too small to affect the current accuracy of the baryon acoustic peak, but might be relevant to future surveys such as the Dark Energy Spectroscopic Instrument (DESI). We outline a method to account for this with future data sets.

  18. Electromagnetic interference and shielding: An introduction (revised version of 1991-23)

    NASA Astrophysics Data System (ADS)

    Dehoop, A. T.; Quak, D.

    The basic equations of the electromagnetic field are summarized as far as they are needed in the theory of electromagnetic interference and shielding. Through the analysis of the planar electric current emitter, the propagation coefficient, attenuation coefficient, phase coefficient, wave-speed, wavelength, wave impedance, wave admittance, and power flow density of a wave are introduced. Next, the shielding effectiveness of a shielding plate and the shielding effectiveness of a shielding parallel-plate box are determined. In the latter, particular attention is given to the occurrence of internal resonance effects, which may degrade the shielding effectiveness. Further, a survey of some fundamental properties of a system of low frequency, multiconductor transmission lines is given. For a three conductor system with a plane of symmetry, the decomposition into the common mode and the differential mode of operation is discussed. Finally, expressions for the voltages and electric currents induced by external sources along a single transmission line are derived.

  19. Multigroup Monte Carlo on GPUs: Comparison of history- and event-based algorithms

    DOE PAGES

    Hamilton, Steven P.; Slattery, Stuart R.; Evans, Thomas M.

    2017-12-22

    This article presents an investigation of the performance of different multigroup Monte Carlo transport algorithms on GPUs with a discussion of both history-based and event-based approaches. Several algorithmic improvements are introduced for both approaches. By modifying the history-based algorithm that is traditionally favored in CPU-based MC codes to occasionally filter out dead particles to reduce thread divergence, performance exceeds that of either the pure history-based or event-based approaches. The impacts of several algorithmic choices are discussed, including performance studies on Kepler and Pascal generation NVIDIA GPUs for fixed source and eigenvalue calculations. Single-device performance equivalent to 20–40 CPU cores on the K40 GPU and 60–80 CPU cores on the P100 GPU is achieved. Finally, nearly perfect multi-device parallel weak scaling is demonstrated on more than 16,000 nodes of the Titan supercomputer.

  20. Pulsed energy synthesis and doping of silicon carbide

    DOEpatents

    Truher, J.B.; Kaschmitter, J.L.; Thompson, J.B.; Sigmon, T.W.

    1995-06-20

    A method for producing beta silicon carbide thin films by co-depositing thin films of amorphous silicon and carbon onto a substrate is disclosed, whereafter the films are irradiated by exposure to a pulsed energy source (e.g. excimer laser) to cause formation of the beta-SiC compound. Doped beta-SiC may be produced by introducing dopant gases during irradiation. Single layers up to a thickness of 0.5-1 micron have been produced, with thicker layers being produced by multiple processing steps. Since the electron transport properties of beta silicon carbide over a wide temperature range of 27-730 C are better than those of alpha silicon carbide, these films have wide application, such as in high-temperature semiconductors, including heterojunction bipolar transistors and power devices, as well as in high-bandgap solar arrays, ultra-hard coatings, light-emitting diodes, sensors, etc.

  1. Pulsed energy synthesis and doping of silicon carbide

    DOEpatents

    Truher, Joel B.; Kaschmitter, James L.; Thompson, Jesse B.; Sigmon, Thomas W.

    1995-01-01

    A method for producing beta silicon carbide thin films by co-depositing thin films of amorphous silicon and carbon onto a substrate, whereafter the films are irradiated by exposure to a pulsed energy source (e.g. excimer laser) to cause formation of the beta-SiC compound. Doped beta-SiC may be produced by introducing dopant gases during irradiation. Single layers up to a thickness of 0.5-1 micron have been produced, with thicker layers being produced by multiple processing steps. Since the electron transport properties of beta silicon carbide over a wide temperature range of 27°-730° C are better than those of alpha silicon carbide, these films have wide application, such as in high temperature semiconductors, including hetero-junction bipolar transistors and power devices, as well as in high bandgap solar arrays, ultra-hard coatings, light emitting diodes, sensors, etc.

  2. Secure self-calibrating quantum random-bit generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiorentino, M.; Santori, C.; Spillane, S. M.

    2007-03-15

    Random-bit generators (RBGs) are key components of a variety of information processing applications ranging from simulations to cryptography. In particular, cryptographic systems require 'strong' RBGs that produce high-entropy bit sequences, but traditional software pseudo-RBGs have very low entropy content and therefore are relatively weak for cryptography. Hardware RBGs yield entropy from chaotic or quantum physical systems and therefore are expected to exhibit high entropy, but in current implementations their exact entropy content is unknown. Here we report a quantum random-bit generator (QRBG) that harvests entropy by measuring single-photon and entangled two-photon polarization states. We introduce and implement a quantum tomographic method to measure a lower bound on the 'min-entropy' of the system, and we employ this value to distill a truly random-bit sequence. This approach is secure: even if an attacker takes control of the source of optical states, a secure random sequence can be distilled.

  3. A GIS-based approach for comparative analysis of potential fire risk assessment

    NASA Astrophysics Data System (ADS)

    Sun, Ying; Hu, Lieqiu; Liu, Huiping

    2007-06-01

    Urban fires are one of the most important sources of property loss and human casualties, and it is therefore necessary to assess the potential fire risk with consideration of urban community safety. Two evaluation models are proposed, both of which are integrated with GIS. One is a single-factor model concerning the accessibility of fire passages, and the other is a grey clustering approach based on a multifactor system. In the latter model, fourteen factors are introduced and divided into four categories involving security management, evacuation facility, construction resistance and fire fighting capability. A case study on the campus of Beijing Normal University is presented to illustrate the potential risk assessment models in detail. A comparative analysis of the two models is carried out to validate their accuracy. The results are approximately consistent with each other. Moreover, modeling with GIS improves the efficiency of the potential risk assessment.

  4. Spectroscopy of the three-photon laser excitation of cold Rubidium Rydberg atoms in a magneto-optical trap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Entin, V. M.; Yakshina, E. A.; Tretyakov, D. B.

    2013-05-15

    The spectra of the three-photon laser excitation 5S_{1/2} → 5P_{3/2} → 6S_{1/2} → nP of cold Rb Rydberg atoms in an operating magneto-optical trap based on continuous single-frequency lasers at each stage are studied. These spectra contain two partly overlapping peaks of different amplitudes, which correspond to coherent three-photon excitation and incoherent three-step excitation due to the presence of two different ways of excitation through the dressed states of intermediate levels. A four-level theoretical model based on optical Bloch equations is developed to analyze these spectra. Good agreement between the experimental and calculated data is achieved by introducing into the model additional decay of optical coherence induced by a finite laser line width and other broadening sources (stray electromagnetic fields, residual Doppler broadening, interatomic interactions).

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, David F.; Aguirre, James E.; Parsons, Aaron R.

    Experiments aimed at detecting highly-redshifted 21 cm emission from the epoch of reionization (EoR) are plagued by the contamination of foreground emission. A potentially important source of contaminating foregrounds may be Faraday-rotated, polarized emission, which leaks into the estimate of the intrinsically unpolarized EoR signal. While these foregrounds' intrinsic polarization may not be problematic, the spectral structure introduced by the Faraday rotation could be. To better understand and characterize these effects, we present a simulation of the polarized sky between 120 and 180 MHz. We compute a single visibility, and estimate the three-dimensional power spectrum from that visibility using the delay spectrum approach presented in Parsons et al. Using the Donald C. Backer Precision Array to Probe the Epoch of Reionization as an example instrument, we show the expected leakage into the unpolarized power spectrum to be several orders of magnitude above the expected 21 cm EoR signal.

  6. Coupling detrended fluctuation analysis for analyzing coupled nonstationary signals.

    PubMed

    Hedayatifar, L; Vahabi, M; Jafari, G R

    2011-08-01

    When many variables are coupled to each other, a single case study cannot give us thorough and precise information. When these time series are stationary, different methods of random matrix analysis and complex networks can be used. But in nonstationary cases, the multifractal-detrended-cross-correlation-analysis (MF-DXA) method was introduced for just two coupled time series. In this article, we have extended MF-DXA to the method of coupling detrended fluctuation analysis (CDFA) for the case when more than two series are correlated to each other. Here, we have calculated the multifractal properties of the coupled time series, and by comparing CDFA results of the original series with those of the shuffled and surrogate series, we can estimate the source of multifractality and the extent to which our series are coupled to each other. We illustrate the method by selected examples from air pollution and foreign exchange rates.
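
    For readers unfamiliar with the detrended-fluctuation machinery that MF-DXA and CDFA generalise, the following is a minimal single-series DFA-1 sketch (integrate the series, detrend each window, compute the RMS fluctuation versus scale). It is not the CDFA estimator of the paper; the data and scales are synthetic assumptions.

```python
# Minimal single-series DFA-1 sketch (not the coupled CDFA estimator).
import numpy as np

def dfa1(x, scales):
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        msq = []
        for i in range(n_seg):
            seg = y[i*s:(i+1)*s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend
            msq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(msq)))       # fluctuation function F(s)
    return np.array(F)

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)                   # white noise -> alpha ~ 0.5
scales = np.unique(np.logspace(1, 3, 15).astype(int))
F = dfa1(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"estimated scaling exponent alpha = {alpha:.2f}")
```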

  7. Coupling detrended fluctuation analysis for analyzing coupled nonstationary signals

    NASA Astrophysics Data System (ADS)

    Hedayatifar, L.; Vahabi, M.; Jafari, G. R.

    2011-08-01

    When many variables are coupled to each other, a single case study cannot give us thorough and precise information. When these time series are stationary, different methods of random matrix analysis and complex networks can be used. But in nonstationary cases, the multifractal-detrended-cross-correlation-analysis (MF-DXA) method was introduced for just two coupled time series. In this article, we have extended MF-DXA to the method of coupling detrended fluctuation analysis (CDFA) for the case when more than two series are correlated to each other. Here, we have calculated the multifractal properties of the coupled time series, and by comparing CDFA results of the original series with those of the shuffled and surrogate series, we can estimate the source of multifractality and the extent to which our series are coupled to each other. We illustrate the method by selected examples from air pollution and foreign exchange rates.

  8. Preparation of biomimetic nano-structured films with multi-scale roughness

    NASA Astrophysics Data System (ADS)

    Shelemin, A.; Nikitin, D.; Choukourov, A.; Kylián, O.; Kousal, J.; Khalakhan, I.; Melnichuk, I.; Slavínská, D.; Biederman, H.

    2016-06-01

    Biomimetic nano-structured films are valuable materials in various applications. In this study we introduce a fully vacuum-based approach for fabrication of such films. The method combines deposition of nanoparticles (NPs) by gas aggregation source and deposition of overcoat thin film that fixes the nanoparticles on a surface. This leads to the formation of nanorough surfaces which, depending on the chemical nature of the overcoat, may range from superhydrophilic to superhydrophobic. In addition, it is shown that by proper adjustment of the amount of NPs it is possible to tailor adhesive force on superhydrophobic surfaces. Finally, the possibility to produce NPs in a wide range of their size (45-240 nm in this study) makes it possible to produce surfaces not only with single scale roughness, but also with bi-modal or even multi-modal character. Such surfaces were found to be superhydrophobic with negligible water contact angle hysteresis and hence truly slippery.

  9. Molybdenum enhanced low-temperature deposition of crystalline silicon nitride

    DOEpatents

    Lowden, R.A.

    1994-04-05

    A process for chemical vapor deposition of crystalline silicon nitride is described which comprises the steps of: introducing a mixture of a silicon source, a molybdenum source, a nitrogen source, and a hydrogen source into a vessel containing a suitable substrate; and thermally decomposing the mixture to deposit onto the substrate a coating comprising crystalline silicon nitride containing a dispersion of molybdenum silicide. 5 figures.

  10. Quantum-limited evanescent single molecule sensing.

    NASA Astrophysics Data System (ADS)

    Bowen, Warwick; Mauranyapin, Nicolas; Madsen, Lars; Taylor, Michael; Waleed, Muhammad

    Sensors that are able to detect and track single unlabeled biomolecules are an important tool both to understand biomolecular dynamics and interactions, and for medical diagnostics operating at their ultimate detection limits. Recently, exceptional sensitivity has been achieved using the strongly enhanced evanescent fields provided by optical microcavities and plasmonic resonators. However, at high field intensities photodamage to the biological specimen becomes increasingly problematic. Here, we introduce a new approach that combines dark field illumination and heterodyne detection in an optical nanofibre. This allows operation at the fundamental precision limit introduced by quantisation of light. We achieve state-of-the-art sensitivity with a four order-of-magnitude reduction in optical intensity. This enables quantum noise limited tracking of single biomolecules as small as 3.5 nm and surface-molecule interactions to be monitored over extended periods. By achieving quantum noise limited precision, our approach provides a pathway towards quantum-enhanced single-molecule biosensors. We acknowledge financial support from AFOSR and AOARD.

  11. Single European currency and Monetary Union. Macroeconomic implications for pharmaceutical spending.

    PubMed

    Kanavos, P

    1998-01-01

    This article examines the potential implications of introducing a single currency among the Member States of the European Union for national pharmaceutical prices and spending. In doing so, it provides a brief account of the direct effects of introducing a single currency on pharmaceutical business. These are static in nature and include the elimination of exchange rate volatility and transaction costs, increased price transparency and limited potential for parallel trade. It subsequently analyses the potential medium and long term macroeconomic policy choices facing the Member States and their impact on pharmaceutical spending following the introduction of a single currency. These include policy directions in order to meet the Maastricht convergence criteria in the run-up to forming an Economic and Monetary Union (EMU) and the implications of EMU on national macroeconomic policy thereafter. This article argues that the necessity for tight fiscal policies across the EU and, in particular, in those Member States facing high budget deficits and overall debt levels, will continue to exert considerable downward pressure on pharmaceutical spending.

  12. A method to compute SEU fault probabilities in memory arrays with error correction

    NASA Technical Reports Server (NTRS)

    Gercek, Gokhan

    1994-01-01

    With the increasing packing densities in VLSI technology, Single Event Upsets (SEU) due to cosmic radiation are becoming more of a critical issue in the design of space avionics systems. In this paper, a method is introduced to compute the fault (mishap) probability for a computer memory of size M words. It is assumed that a Hamming code is used for each word to provide single error correction. It is also assumed that every time a memory location is read, single errors are corrected. Memory is read randomly, with a read distribution that is assumed to be known. In such a scenario, a mishap is defined as two SEUs corrupting the same memory location prior to a read. The paper introduces a method to compute the overall mishap probability for the entire memory for a mission duration of T hours.
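
    The mishap model described above (two upsets hitting the same word between scrubbing reads) can be cross-checked numerically. The sketch below is a hypothetical Monte Carlo estimate assuming Poisson SEU arrivals and Poisson reads per word; the rates, memory size and mission length are illustrative only, and the paper's analytical method is not reproduced.

```python
# Hypothetical Monte Carlo check of the "two SEUs before a read" mishap model.
import numpy as np

rng = np.random.default_rng(2)

M = 1024        # words of memory (hypothetical)
T = 1000.0      # mission duration, hours (hypothetical)
lam = 1e-4      # SEU rate per word per hour (hypothetical)
mu = 0.01       # read rate per word per hour, reads assumed Poisson

def word_mishap():
    """Simulate one word; True if >= 2 SEUs fall between consecutive reads."""
    seu_times = np.sort(rng.uniform(0, T, rng.poisson(lam * T)))
    read_times = np.sort(rng.uniform(0, T, rng.poisson(mu * T)))
    edges = np.concatenate(([0.0], read_times, [T]))
    hits_per_interval = np.histogram(seu_times, bins=edges)[0]
    return np.any(hits_per_interval >= 2)

n_trials = 20_000
p_word = np.mean([word_mishap() for _ in range(n_trials)])
p_memory = 1 - (1 - p_word) ** M      # any of the M words failing
print(f"per-word mishap prob ~ {p_word:.2e}, memory-level ~ {p_memory:.2e}")
```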

  13. Factors Influencing F/OSS Cloud Computing Software Product Success: A Quantitative Study

    ERIC Educational Resources Information Center

    Letort, D. Brian

    2012-01-01

    Cloud Computing introduces a new business operational model that allows an organization to shift information technology consumption from traditional capital expenditure to operational expenditure. This shift introduces challenges from both the adoption and creation vantage. This study evaluates factors that influence Free/Open Source Software…

  14. Astronomy in Denver: Probing Interstellar Circular Polarization with Polvis, a Full Stokes Single Shot Polarimeter

    NASA Astrophysics Data System (ADS)

    Wolfe, Tristan; Stencel, Robert E.

    2018-06-01

    Measurements of optical circular polarization (Stokes V) introduced by dust grains in the ISM are important for two main reasons. First of all, the polarization itself contains information about the metallic versus dielectric composition of the dust grains themselves (H. C. van de Hulst 1957, textbook). Additionally, circular polarization can help constrain the interstellar component of the polarization of any source that may have intrinsic polarization, which needs to be calibrated for astrophysical study. Though interstellar circular polarization has been observed (P. G. Martin 1972, MNRAS 159), most broadband measurements of ISM polarization include linear polarization only (Stokes Q and U), due to the relatively low circular polarization signal and the added instrumentation complexity of including V-measurement capability. Prior circular polarization measurements have also received very little follow-up in the past several decades, even as polarimeters have become more accurate due to advances in technology. The University of Denver is pursuing these studies with POLVIS, a prototype polarimeter that utilizes a stress-engineered optic ("SEO", A. K. Spilman and T. G. Brown 2007, Applied Optics IP 46) to produce polarization-dependent PSFs (A. M. Beckley and T. G. Brown 2010, Proc SPIE 7570). These PSFs are analyzed to provide simultaneous Stokes I, Q, U, and V measurements, in a single beam and single image, along the line-of-sight to point source-like objects. Polvis is the first polarimeter to apply these optics and measurement techniques for astronomical observations. We present the first results of this instrument in B, V, and R wavebands, providing a fresh look at full Stokes interstellar polarization. Importantly, this set of efforts will constrain the ISM contribution to the polarization with respect to intrinsic stellar components. The authors are grateful to the estate of William Herschel Womble for the support of astronomy at the University of Denver, and for funding provided by the Mt. Cuba Astronomical Foundation.

  15. Dosimetric effects of saline- versus water-filled balloon applicators for IORT using the model S700 electronic brachytherapy source.

    PubMed

    Redler, Gage; Templeton, Alistair; Zhen, Heming; Turian, Julius; Bernard, Damian; Chu, James C H; Griem, Katherine L; Liao, Yixiang

    The Xoft Axxent Electronic Brachytherapy System (Xoft, Inc., San Jose, CA) is a viable option for intraoperative radiation therapy (IORT) treatment of early-stage breast cancer. The low-energy (50-kVp) X-ray source simplifies shielding and increases relative biological effectiveness but increases dose distribution sensitivity to medium composition. Treatment planning systems typically assume homogenous water for brachytherapy dose calculations, including precalculated atlas plans for Xoft IORT. However, Xoft recommends saline for balloon applicator filling. This study investigates dosimetric differences due to the increased effective atomic number (Z_eff) of saline (Z_eff = 7.56) versus water (Z_eff = 7.42). Balloon applicator diameters range from 3 to 6 cm. Monte Carlo N-Particle software is used to calculate dose at the surface (D_s) of and 1 cm away (D_1cm) from the water-/saline-filled balloon applicator, using a single dwell at the applicator center as a simple estimation of the dosimetry and multiple dwells simulating the clinical dose distributions for the atlas plans. Single-dwell plans show a 4.4-6.1% decrease in D_s for the 3- to 6-cm diameter applicators due to the saline. Multidwell plans show similar results: 4.9% and 6.4% D_s decrease for the 4-cm and 6-cm diameter applicators, respectively. For the single-dwell plans, D_1cm decreases 3.6-5.2% for the 3- to 6-cm diameter applicators. For the multidwell plans, D_1cm decreases 3.3% and 5.3% for the 4-cm and 6-cm applicators, respectively. The dosimetric effect introduced by saline versus water filling for Xoft balloon applicator-based IORT treatments is ∼5%. Users should be aware of this in the context of both treatment planning and patient outcome studies. Copyright © 2017 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  16. Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging

    PubMed Central

    Patel, Tapan P.; Man, Karen; Firestein, Bonnie L.; Meaney, David F.

    2017-01-01

    Background: Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s-1000+ neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high speed fluorescence imaging data is lacking. New method: Here we introduce FluoroSNNAP, Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. Results: We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule associated protein tau and wild-type tau. Comparison with existing method(s): We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. Conclusions: We expect the dissemination of this software will facilitate a comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. PMID:25629800
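
    The template-based transient detection mentioned in the comparison above can be illustrated with a small stand-alone sketch: a canonical fast-rise, exponential-decay template is slid along a synthetic dF/F trace and onsets are flagged where the normalised correlation crosses a threshold. This only illustrates the idea under assumed constants (frame rate, time constants, threshold); it is not the FluoroSNNAP implementation, which is written in MATLAB.

```python
# Sketch of template-based calcium-transient detection via sliding correlation.
# All constants are illustrative assumptions, not FluoroSNNAP's values.
import numpy as np

fs = 10.0                                    # frames per second (assumed)
tt = np.arange(0, 2, 1/fs)                   # 2 s template support

def transient(t):                            # fast rise, slow exponential decay
    return (1 - np.exp(-t/0.05)) * np.exp(-t/0.5)

template = transient(tt)
template = (template - template.mean()) / template.std()

rng = np.random.default_rng(5)
trace = rng.normal(scale=0.05, size=600)     # synthetic dF/F trace
for onset in (100, 250, 400):                # inject three known transients
    trace[onset:onset+len(tt)] += 0.5 * transient(tt)

L = len(template)
corr = np.array([np.corrcoef(trace[i:i+L], template)[0, 1]
                 for i in range(len(trace) - L)])
above = corr > 0.6                           # illustrative threshold
onsets = np.flatnonzero(above & ~np.r_[False, above[:-1]])  # rising edges
print("detected transient onsets (frames):", onsets)
```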

  17. DeepQA: improving the estimation of single protein model quality with deep belief networks.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Hou, Jie; Cheng, Jianlin

    2016-12-05

    Protein quality assessment (QA) useful for ranking and selecting protein models has long been viewed as one of the major challenges for protein tertiary structure prediction. Especially, estimating the quality of a single protein model, which is important for selecting a few good models out of a large model pool consisting of mostly low-quality models, is still a largely unsolved problem. We introduce a novel single-model quality assessment method DeepQA based on deep belief network that utilizes a number of selected features describing the quality of a model from different perspectives, such as energy, physio-chemical characteristics, and structural information. The deep belief network is trained on several large datasets consisting of models from the Critical Assessment of Protein Structure Prediction (CASP) experiments, several publicly available datasets, and models generated by our in-house ab initio method. Our experiments demonstrate that deep belief network has better performance compared to Support Vector Machines and Neural Networks on the protein model quality assessment problem, and our method DeepQA achieves the state-of-the-art performance on CASP11 dataset. It also outperformed two well-established methods in selecting good outlier models from a large set of models of mostly low quality generated by ab initio modeling methods. DeepQA is a useful deep learning tool for protein single model quality assessment and protein structure prediction. The source code, executable, document and training/test datasets of DeepQA for Linux is freely available to non-commercial users at http://cactus.rnet.missouri.edu/DeepQA/ .

  18. 76 FR 46308 - Fiscal Year (FY) 2011 Funding Opportunity

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-02

    ... Administration, HHS. ACTION: Notice of intent to award a Single Source Grant to the National Association of State... Services Administration (SAMHSA) is seeking to award a single source grant to the National Association of State Alcohol and Drug Abuse Directors (NASADAD) to provide assistance to substance abuse Single State...

  19. MHODE: a local-homogeneity theory for improved source-parameter estimation of potential fields

    NASA Astrophysics Data System (ADS)

    Fedi, Maurizio; Florio, Giovanni; Paoletti, Valeria

    2015-08-01

    We describe a multihomogeneity theory for source-parameter estimation of potential fields. Similar to what happens for random source models, where the monofractal scaling law has been generalized into a multifractal law, we propose to generalize the homogeneity law into a multihomogeneity law. This allows a theoretically correct approach to studying real-world potential fields, which are inhomogeneous and so do not show scale invariance, except in the asymptotic regions (very near to or very far from their sources). Since the scaling properties of inhomogeneous fields change with the scale of observation, we show that they may be better studied at a set of scales than at a single scale and that a multihomogeneous model is needed to explain their complex scaling behaviour. In order to perform this task, we first introduce fractional-degree homogeneous fields, to show that: (i) homogeneous potential fields may have fractional or integer degree; (ii) the source distributions for a fractional degree are not confined to a bounded region, similarly to some integer-degree models, such as the infinite line mass; and (iii) differently from the integer-degree case, the fractional-degree source distributions are no longer uniform density functions. Using this enlarged set of homogeneous fields, real-world anomaly fields are studied at different scales by a simple search, at any local window W, for the best homogeneous field of either integer or fractional degree, yielding a multiscale set of local homogeneity degrees and depth estimates which we call the multihomogeneous model. This defines a new technique of source-parameter estimation (Multi-HOmogeneity Depth Estimation, MHODE), permitting retrieval of the parameters of complex sources. We test the method with inhomogeneous fields of finite sources, such as faults or cylinders, and show its effectiveness also in a real-case example. These applications show the usefulness of the new concepts, multihomogeneity and fractional homogeneity degree, in obtaining valid estimates of the source parameters in a consistent theoretical framework, thus overcoming the limitations imposed by global homogeneity on widespread methods such as Euler deconvolution.
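
    For reference, the classical (single-degree) homogeneity law that the abstract generalises can be written as follows; sign and structural-index conventions vary between authors, so this is the textbook form rather than the paper's exact notation:

$$
f(t\mathbf{r}) = t^{n} f(\mathbf{r})
\quad\Longrightarrow\quad
\mathbf{r}\cdot\nabla f = n\, f ,
$$

    and, for a compact source at position $\mathbf{r}_0$ with background field $B$, the Euler-deconvolution form

$$
(\mathbf{r}-\mathbf{r}_0)\cdot\nabla f = -N\,\bigl(f - B\bigr), \qquad N = -n ,
$$

    where $N$ is the structural index estimated, here window by window, together with $\mathbf{r}_0$.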

  20. Elevated blood-lead levels among children living in the rural Philippines.

    PubMed

    Riddell, Travis J; Solon, Orville; Quimbo, Stella A; Tan, Cheryl May C; Butrick, Elizabeth; Peabody, John W

    2007-09-01

    Generally, lead poisoning is not considered a significant environmental hazard for children in rural areas of developing countries. With a prospectively designed policy experiment, the research community and the government are conducting a broad-based investigation to introduce and evaluate the impact of health policy reforms on children in a rural area of the Philippines - the Quality Improvement Demonstration Study (QIDS). As part of this study, we researched lead exposure in children under the age of five. We sampled a population of children from the Visayas region in the central Philippines, covering approximately one third of the country's geographical area. From December 2003 to September 2004, the survey collected blood lead levels (BLL) together with demographic, socioeconomic and child health data points. Supplemental field-testing among a sub-sample of the most exposed children assessed the sources of environmental lead exposure. Among children in this study, 21% (601 of 2861 children) had BLL greater than 10 microg/dl. BLL were associated independently with age, haemoglobin concentration, water source, roofing material, expenditures and history of breastfeeding. A follow-up assessment of possible environmental exposures among the sub-sample of children with elevated BLL revealed no single or predominant exposure source. Instead, there appear to be multiple potential sources, such as fossil-fuel combustion, lead paint (in or around 38% of homes) and household items. Elevated BLL are common among children in the Visayas, and may signify an under-recognized threat to children living in rural areas of other developing nations. This setting has varied environmental sources of lead. Observed correlates of BLL may be of clinical, environmental and public health utility to identify and mitigate the consequences of lead toxicity.

  1. Practical system for the generation of pulsed quantum frequency combs.

    PubMed

    Roztocki, Piotr; Kues, Michael; Reimer, Christian; Wetzel, Benjamin; Sciara, Stefania; Zhang, Yanbing; Cino, Alfonso; Little, Brent E; Chu, Sai T; Moss, David J; Morandotti, Roberto

    2017-08-07

    The on-chip generation of large and complex optical quantum states will enable low-cost and accessible advances for quantum technologies, such as secure communications and quantum computation. Integrated frequency combs are on-chip light sources with a broad spectrum of evenly-spaced frequency modes, commonly generated by four-wave mixing in optically-excited nonlinear micro-cavities, whose recent use for quantum state generation has provided a solution for scalable and multi-mode quantum light sources. Pulsed quantum frequency combs are of particular interest, since they allow the generation of single-frequency-mode photons, required for scaling state complexity towards, e.g., multi-photon states, and for quantum information applications. However, generation schemes for such pulsed combs have, to date, relied on micro-cavity excitation via lasers external to the sources, being neither versatile nor power-efficient, and impractical for scalable realizations of quantum technologies. Here, we introduce an actively-modulated, nested-cavity configuration that exploits the resonance pass-band characteristic of the micro-cavity to enable a mode-locked and energy-efficient excitation. We demonstrate that the scheme allows the generation of high-purity photons at large coincidence-to-accidental ratios (CAR). Furthermore, by increasing the repetition rate of the excitation field via harmonic mode-locking (i.e. driving the cavity modulation at harmonics of the fundamental repetition rate), we managed to increase the pair production rates (i.e. source efficiency), while maintaining a high CAR and photon purity. Our approach represents a significant step towards the realization of fully on-chip, stable, and versatile sources of pulsed quantum frequency combs, crucial for the development of accessible quantum technologies.

  2. Multiligand Metal-Phenolic Assembly from Green Tea Infusions.

    PubMed

    Rahim, Md Arifur; Björnmalm, Mattias; Bertleff-Zieschang, Nadja; Ju, Yi; Mettu, Srinivas; Leeming, Michael G; Caruso, Frank

    2018-03-07

    The synthesis of hybrid functional materials using the coordination-driven assembly of metal-phenolic networks (MPNs) is of interest in diverse areas of materials science. To date, MPN assembly has been explored as monoligand systems (i.e., containing a single type of phenolic ligand) where the phenolic components are primarily obtained from natural sources via extraction, isolation, and purification processes. Herein, we demonstrate the fabrication of MPNs from a readily available, crude phenolic source-green tea (GT) infusions. We employ our recently introduced rust-mediated continuous assembly strategy to prepare these GT MPN systems. The resulting hollow MPN capsules contain multiple phenolic ligands and have a shell thickness that can be controlled through the reaction time. These multiligand MPN systems have different properties compared to the analogous MPN systems reported previously. For example, the Young's modulus (as determined using colloidal-probe atomic force microscopy) of the GT MPN system presented herein is less than half that of MPN systems prepared using tannic acid and iron salt solutions, and the disassembly kinetics are faster (∼50%) than other, comparable MPN systems under identical disassembly conditions. Additionally, the use of rust-mediated assembly enables the formation of stable capsules under conditions where the conventional approach (i.e., using iron salt solutions) results in colloidally unstable dispersions. These differences highlight how the choice of phenolic ligand and its source, as well as the assembly protocol (e.g., using solution-based or solid-state iron sources), can be used to tune the properties of MPNs. The strategy presented herein expands the toolbox of MPN assembly while also providing new insights into the nature and robustness of metal-phenolic interfacial assembly when using solution-based or solid-state metal sources.

  3. Application and development of ion-source technology for radiation-effects testing of electronics

    NASA Astrophysics Data System (ADS)

    Kalvas, T.; Javanainen, A.; Kettunen, H.; Koivisto, H.; Tarvainen, O.; Virtanen, A.

    2017-09-01

    Studies of heavy-ion induced single event effects (SEE) on space electronics are necessary to verify the operation of components in the harsh radiation environment. These studies are conducted by using high-energy heavy-ion beams to simulate the radiation effects in space. The ion beams are accelerated as so-called ion cocktails, containing several ion beam species with similar mass-to-charge ratios, covering a wide range of linear energy transfer (LET) values also present in space. The use of cocktails enables fast switching between beam species during testing. Production of these high-energy ion cocktails poses challenging requirements on the ion sources, because in most laboratories reaching the necessary beam energies requires very high charge state ions. There are two main technologies producing these beams: the electron beam ion source (EBIS) and the electron cyclotron resonance ion source (ECRIS). The EBIS is most suitable for pulsed accelerators, while the ECRIS is most suitable for use with cyclotrons, which are the most common accelerators used in these applications. At the Accelerator Laboratory of the University of Jyväskylä (JYFL), radiation effects testing is currently performed using a K130 cyclotron and a 14 GHz ECRIS at a beam energy of 9.3 MeV/u. A new 18 GHz ECRIS, pushing the limits of normal-conducting ECR technology, is under development at JYFL. The performances of existing 18 GHz ion sources have been compared, and based on this analysis, a 16.2 MeV/u beam cocktail, with 1999 MeV 126Xe44+ being the most challenging component, has been chosen for development at JYFL. The properties of the suggested beam cocktail are introduced and discussed.

  4. Ultrafast single photon emitting quantum photonic structures based on a nano-obelisk.

    PubMed

    Kim, Je-Hyung; Ko, Young-Ho; Gong, Su-Hyun; Ko, Suk-Min; Cho, Yong-Hoon

    2013-01-01

    A key issue in a single photon source is fast and efficient generation of a single photon flux with high light extraction efficiency. Significant progress toward high-efficiency single photon sources has been demonstrated by semiconductor quantum dots, especially using narrow bandgap materials. Meanwhile, there are many obstacles, which restrict the use of wide bandgap semiconductor quantum dots as practical single photon sources in ultraviolet-visible region, despite offering free space communication and miniaturized quantum information circuits. Here we demonstrate a single InGaN quantum dot embedded in an obelisk-shaped GaN nanostructure. The nano-obelisk plays an important role in eliminating dislocations, increasing light extraction, and minimizing a built-in electric field. Based on the nano-obelisks, we observed nonconventional narrow quantum dot emission and positive biexciton binding energy, which are signatures of negligible built-in field in single InGaN quantum dots. This results in efficient and ultrafast single photon generation in the violet color region.

  5. Coherent optical processing using noncoherent light after source masking.

    PubMed

    Boopathi, V; Vasu, R M

    1992-01-10

    Coherent optical processing starting with spatially noncoherent illumination is described. Good spatial coherence is introduced in the far field by modulating a noncoherent source when masks with sharp autocorrelation are used. The far-field mutual coherence function of light is measured and it is seen that, for the masks and the source size used here, we get a fairly large area over which the mutual coherence function is high and flat. We demonstrate traditional coherent processing operations such as Fourier transformation and image deblurring when coherent light that is produced in the above fashion is used. A coherence-redundancy merit function is defined for this type of processing system. It is experimentally demonstrated that the processing system introduced here has superior blemish tolerance compared with a traditional processor that uses coherent illumination.

  6. About the Modeling of Radio Source Time Series as Linear Splines

    NASA Astrophysics Data System (ADS)

    Karbon, Maria; Heinkelmann, Robert; Mora-Diaz, Julian; Xu, Minghui; Nilsson, Tobias; Schuh, Harald

    2016-12-01

    Many of the time series of radio sources observed in geodetic VLBI show variations, caused mainly by changes in source structure. However, until now it has been common practice to consider source positions as invariant, or to exclude known misbehaving sources from the datum conditions. This may lead to a degradation of the estimated parameters, as unmodeled apparent source position variations can propagate to the other parameters through the least squares adjustment. In this paper we will introduce an automated algorithm capable of parameterizing the radio source coordinates as linear splines.
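
    As an illustration of the parameterization, a coordinate time series can be modelled as a continuous piecewise-linear (linear-spline) function and fitted by least squares. The sketch below uses synthetic data and hand-picked knot epochs; it does not reproduce the paper's automated knot-selection algorithm.

```python
# Least-squares fit of a continuous linear spline with fixed (assumed) knots.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 200)                        # epochs (arbitrary units)
truth = 0.3*t - 0.8*np.maximum(t - 4, 0) + 0.5*np.maximum(t - 7, 0)
y = truth + rng.normal(scale=0.1, size=t.size)     # noisy "source coordinate"

knots = [4.0, 7.0]                                 # assumed knot epochs
A = np.column_stack([np.ones_like(t), t] +
                    [np.maximum(t - k, 0.0) for k in knots])  # hinge basis
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("estimated slope changes at the knots:", coef[2:])
```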

  7. Using Primary Sources on the Internet To Teach and Learn History. ERIC Digest.

    ERIC Educational Resources Information Center

    Shiroma, Deanne

    The Internet enables teachers to enhance the teaching and learning of history through quick and extensive access to primary sources. Introducing and using primary sources in the history classroom will almost certainly lead to active learning and development of critical thinking, reasoning, and problem solving. This Digest discusses: (1) types and…

  8. Protecting single-photon entanglement with practical entanglement source

    NASA Astrophysics Data System (ADS)

    Zhou, Lan; Ou-Yang, Yang; Wang, Lei; Sheng, Yu-Bo

    2017-06-01

    Single-photon entanglement (SPE) is important for quantum communication and quantum information processing. However, SPE is sensitive to photon loss. In this paper, we discuss a linear optical amplification protocol for protecting SPE. Different from previous protocols, we exploit a practical spontaneous parametric down-conversion (SPDC) source to realize the amplification, because an ideal entanglement source is unavailable with current quantum technology. Moreover, we prove that amplification using the entanglement generated from the SPDC source as an auxiliary resource is better than amplification assisted by single photons. The reason is that the vacuum state from the SPDC source does not affect the amplification, so it can be eliminated automatically. This protocol may be useful in future long-distance quantum communications.

  9. Particle swarm optimization and its application in MEG source localization using single time sliced data

    NASA Astrophysics Data System (ADS)

    Lin, Juan; Liu, Chenglian; Guo, Yongning

    2014-10-01

    The estimation of neural active sources from magnetoencephalography (MEG) data is a critical issue for both clinical neurology and research on brain function. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Source depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness in finding the global optimum at different depths in the brain when using the single equivalent current dipole (sECD) model and single time sliced data. The results show that PSO is an effective global optimization method for MEG source localization when a single dipole is placed at different depths.
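    The sketch below is a generic illustration of how PSO can search for a single dipole position by minimizing the misfit between measured and modeled sensor data. It is not the authors' implementation: the sensor layout, the 1/r^2 "forward model", and all parameter values are placeholders standing in for a real MEG lead field.

        import numpy as np

        def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimal particle swarm optimizer over a box-bounded search space."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, size=(n_particles, lo.size))    # particle positions
            v = np.zeros_like(x)                                    # particle velocities
            pbest = x.copy()
            pbest_f = np.array([objective(p) for p in x])
            gbest = pbest[np.argmin(pbest_f)].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.array([objective(p) for p in x])
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[np.argmin(pbest_f)].copy()
            return gbest, pbest_f.min()

        # Toy single-dipole localization: the sensor layout and 1/r^2 "forward model"
        # are placeholders, not a real MEG lead field.
        sensors = np.random.default_rng(1).uniform(-0.1, 0.1, size=(64, 3))
        true_pos = np.array([0.02, -0.01, 0.05])

        def forward(pos):
            return 1.0 / np.linalg.norm(sensors - pos, axis=1) ** 2

        measured = forward(true_pos)
        misfit = lambda pos: float(np.sum((forward(pos) - measured) ** 2))

        best, err = pso(misfit, (np.full(3, -0.1), np.full(3, 0.1)))
        print("estimated dipole position:", best, "misfit:", err)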

  10. Levitation of objects using acoustic energy

    NASA Technical Reports Server (NTRS)

    Whymark, R. R.

    1975-01-01

    An activated sound source establishes a standing-wave pattern in the gap between the source and an acoustic reflector. Solid or liquid material introduced into this region moves to one of the low-pressure areas produced at the antinodes and remains suspended as long as the acoustic signal is present.

  11. Development of Advanced Signal Processing and Source Imaging Methods for Superparamagnetic Relaxometry

    PubMed Central

    Huang, Ming-Xiong; Anderson, Bill; Huang, Charles W.; Kunde, Gerd J.; Vreeland, Erika C.; Huang, Jeffrey W.; Matlashov, Andrei N.; Karaulanov, Todor; Nettles, Christopher P.; Gomez, Andrew; Minser, Kayla; Weldon, Caroline; Paciotti, Giulio; Harsh, Michael; Lee, Roland R.; Flynn, Edward R.

    2017-01-01

    Superparamagnetic Relaxometry (SPMR) is a highly sensitive technique for the in vivo detection of tumor cells and may improve early stage detection of cancers. SPMR employs superparamagnetic iron oxide nanoparticles (SPION). After a brief magnetizing pulse is used to align the SPION, SPMR measures the time decay of the SPION signal using Superconducting Quantum Interference Device (SQUID) sensors. Substantial research has been carried out in developing the SQUID hardware and in improving the properties of the SPION. However, little research has been done on the pre-processing of sensor signals and post-processing source modeling in SPMR. In the present study, we illustrate new pre-processing tools that were developed to: 1) remove trials contaminated with artifacts, 2) evaluate and ensure that a single decay process associated with bound SPION exists in the data, 3) automatically detect and correct flux jumps, and 4) accurately fit the sensor signals with different decay models. Furthermore, we developed an automated approach based on a multi-start dipole imaging technique to obtain the locations and magnitudes of multiple magnetic sources without initial guesses from the users. A regularization process was implemented to resolve the ambiguity related to the SPMR source variables. A procedure based on a reduced chi-square cost function was introduced to objectively obtain the adequate number of dipoles that describe the data. The new pre-processing tools and the multi-start source imaging approach were successfully evaluated using phantom data. In conclusion, these tools and the multi-start source modeling approach substantially enhance the accuracy and sensitivity of detecting and localizing sources from SPMR signals. Furthermore, the multi-start approach with regularization provided robust and accurate solutions under poor SNR conditions corresponding to an SPMR detection sensitivity on the order of 1000 cells. We believe such algorithms will help establish industrial standards for SPMR when applying the technique in pre-clinical and clinical settings. PMID:28072579
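    As a rough sketch of the multi-start fitting and reduced chi-square model-order selection described above (not the authors' code), the example below fits a synthetic one-dimensional "sensor map" with an increasing number of sources and reports the reduced chi-square for each model order. The Gaussian-patch forward model and all numbers are placeholders.

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(0)

        # Synthetic "sensor map": two Gaussian patches standing in for the field
        # patterns of two bound-SPION sources (a placeholder, not the SPMR model).
        x = np.linspace(-1.0, 1.0, 200)

        def patch(params):
            amp, mu, sigma = params
            return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

        noise_sd = 0.05
        data = patch([1.0, -0.3, 0.15]) + patch([0.7, 0.4, 0.2]) + rng.normal(0, noise_sd, x.size)

        def fit_k_sources(k, n_starts=10):
            """Multi-start fit of k patches; keep the best of several random starts."""
            residual = lambda p: np.sum([patch(p[3*i:3*i+3]) for i in range(k)], axis=0) - data
            best = None
            for _ in range(n_starts):
                p0 = rng.uniform([0.1, -1.0, 0.05] * k, [2.0, 1.0, 0.5] * k)
                res = least_squares(residual, p0,
                                    bounds=([0.0, -1.5, 0.01] * k, [3.0, 1.5, 1.0] * k))
                if best is None or res.cost < best.cost:
                    best = res
            chi2 = np.sum((best.fun / noise_sd) ** 2)
            return chi2 / (x.size - 3 * k)          # reduced chi-square

        for k in (1, 2, 3):
            print(k, "source(s): reduced chi-square =", round(fit_k_sources(k), 2))
        # Keep the smallest k whose reduced chi-square is close to 1; adding more
        # sources beyond that no longer improves the fit meaningfully.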

  12. Tri-channel single-mode terahertz quantum cascade laser.

    PubMed

    Wang, Tao; Liu, Jun-Qi; Liu, Feng-Qi; Wang, Li-Jun; Zhang, Jin-Chuan; Wang, Zhan-Guo

    2014-12-01

    We report on a compact THz quantum cascade laser source emitting at three different, individually controllable wavelengths (92.6, 93.9, and 95.1 μm). This multiwavelength laser array can be used as a prototype emission source for a THz wavelength division multiplexing (WDM) wireless communication system. The source consists of three tapered single-mode distributed feedback (DFB) terahertz quantum cascade lasers fabricated monolithically on a single chip. All array elements maintain both longitudinal and lateral single-mode operation over the entire injection range. The peak output powers of the individual lasers are 42, 73, and 37 mW at 10 K, respectively.

  13. Thin Film CuInS2 Prepared by Spray Pyrolysis with Single-Source Precursors

    NASA Technical Reports Server (NTRS)

    Jin, Michael H.; Banger, Kulinder K.; Harris, Jerry D.; Cowen, Jonathan E.; Hepp, Aloysius F.; Lyons, Valerie (Technical Monitor)

    2002-01-01

    Both horizontal hot-wall and vertical cold-wall atmospheric chemical spray pyrolysis processes deposited near single-phase stoichiometric CuInS2 thin films. Single-source precursors developed for ternary chalcopyrite materials were used for this study, and a new liquid phase single-source precursor was tested with a vertical cold-wall reactor. The depositions were carried out under an argon atmosphere, and the substrate temperature was kept at 400 C. A columnar grain structure was obtained with vapor deposition, and a granular structure was obtained with (liquid) droplet deposition. Conductive films were deposited with planar electrical resistivities ranging from 1 to 30 Ω·cm.

  14. Fused Silica Ion Trap Chip with Efficient Optical Collection System for Timekeeping, Sensing, and Emulation

    DTIC Science & Technology

    2015-01-22

    The efficient optical collection system is intended for collecting ion fluorescence for motion/force sensing through Doppler velocimetry, and for the efficient collection of single photons from trapped ions for applications in fast single photon sources, quantum repeater circuitry, and high fidelity remote entanglement of atoms for quantum information protocols.

  15. 75 FR 62840 - Award of a Single-Source Expansion Supplement to the University of Southern Maine, Muskie School...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-13

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Administration for Children and Families Award of a Single...: Children's Bureau, ACYF, ACF, HHS. ACTION: Notice. CFDA Number: 93.658. Legislative Authority: Section 476..., the Administration for Children and Families (ACF), Children's Bureau (CB) is awarding a single-source...

  16. Fast Purcell-enhanced single photon source in 1,550-nm telecom band from a resonant quantum dot-cavity coupling

    PubMed Central

    Birowosuto, Muhammad Danang; Sumikura, Hisashi; Matsuo, Shinji; Taniyama, Hideaki; van Veldhoven, Peter J.; Nötzel, Richard; Notomi, Masaya

    2012-01-01

    High-bit-rate nanocavity-based single photon sources in the 1,550-nm telecom band are a key challenge in the development of fibre-based long-haul quantum communication networks. Here we report a very fast single photon source in the 1,550-nm telecom band, achieved through a large Purcell enhancement that results from the coupling of a single InAs quantum dot and an InP photonic crystal nanocavity. At resonance, the spontaneous emission rate was enhanced by a factor of 5, resulting in a record-fast emission lifetime of 0.2 ns at 1,550 nm. We also demonstrate that this emission exhibits an enhanced anti-bunching dip. This is the first realization of nanocavity-enhanced single photon emitters in the 1,550-nm telecom band. This coupled quantum dot-cavity system in the telecom band thus provides a bright, high-bit-rate, non-classical single photon source that offers appealing new opportunities for the development of a long-haul quantum telecommunication system via optical fibres. PMID:22432053
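    As a quick worked relation between the reported numbers (the off-resonance lifetime below is inferred, not stated in the abstract): a Purcell enhancement factor F shortens the radiative lifetime as tau_cavity = tau_free / F.

        # Purcell enhancement shortens the radiative lifetime: tau_cavity = tau_free / F
        F = 5.0                  # reported spontaneous-emission enhancement factor
        tau_cavity_ns = 0.2      # reported lifetime at resonance, in ns
        tau_free_ns = tau_cavity_ns * F
        print(f"implied lifetime without cavity enhancement: {tau_free_ns:.1f} ns")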

  17. Fast Purcell-enhanced single photon source in 1,550-nm telecom band from a resonant quantum dot-cavity coupling.

    PubMed

    Birowosuto, Muhammad Danang; Sumikura, Hisashi; Matsuo, Shinji; Taniyama, Hideaki; van Veldhoven, Peter J; Nötzel, Richard; Notomi, Masaya

    2012-01-01

    High-bit-rate nanocavity-based single photon sources in the 1,550-nm telecom band are a key challenge in the development of fibre-based long-haul quantum communication networks. Here we report a very fast single photon source in the 1,550-nm telecom band, achieved through a large Purcell enhancement that results from the coupling of a single InAs quantum dot and an InP photonic crystal nanocavity. At resonance, the spontaneous emission rate was enhanced by a factor of 5, resulting in a record-fast emission lifetime of 0.2 ns at 1,550 nm. We also demonstrate that this emission exhibits an enhanced anti-bunching dip. This is the first realization of nanocavity-enhanced single photon emitters in the 1,550-nm telecom band. This coupled quantum dot-cavity system in the telecom band thus provides a bright, high-bit-rate, non-classical single photon source that offers appealing new opportunities for the development of a long-haul quantum telecommunication system via optical fibres.

  18. Separating Turbofan Engine Noise Sources Using Auto and Cross Spectra from Four Microphones

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2008-01-01

    The study of core noise from turbofan engines has become more important as noise from other sources, such as the fan and jet, has been reduced. A multiple-microphone and acoustic-source modeling method to separate correlated and uncorrelated sources is discussed. The auto- and cross-spectra in the frequency range below 1000 Hz are fitted with a noise propagation model based on either a source couplet, consisting of a single incoherent monopole source with a single coherent monopole source, or a source triplet, consisting of a single incoherent monopole source with two coherent monopole point sources. Examples are presented using data from a Pratt & Whitney PW4098 turbofan engine. The method separates the low-frequency jet noise from the core noise at the nozzle exit. It is shown that at low power settings the core noise is a major contributor to the noise; even at higher power settings, it can be more important than jet noise. However, at low frequencies, uncorrelated broadband noise and jet noise become the important factors as the engine power setting is increased.
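    A toy two-microphone version of the idea (not the paper's four-microphone couplet/triplet model): a source seen coherently by both microphones can be estimated from the cross-spectrum, and the remainder of each auto-spectrum is attributed to uncorrelated noise. The signal levels and analysis band below are placeholders.

        import numpy as np
        from scipy.signal import welch, csd

        rng = np.random.default_rng(0)
        fs, n = 10_000, 200_000

        # One "core-like" source seen coherently by both microphones, plus independent
        # "jet-like" noise at each microphone (all levels are invented).
        core = rng.normal(size=n)
        mic1 = core + 0.8 * rng.normal(size=n)
        mic2 = core + 0.8 * rng.normal(size=n)

        f, S11 = welch(mic1, fs=fs, nperseg=4096)
        _, S22 = welch(mic2, fs=fs, nperseg=4096)
        _, S12 = csd(mic1, mic2, fs=fs, nperseg=4096)

        coherent = np.abs(S12)          # estimate of the shared (correlated) source spectrum
        incoherent1 = S11 - coherent    # residual uncorrelated noise at microphone 1

        band = f < 1000                 # the paper analyzes the band below 1000 Hz
        print("mean coherent fraction below 1 kHz:",
              round(float(np.mean(coherent[band] / S11[band])), 2))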

  19. Energy Harvesting Research: The Road from Single Source to Multisource.

    PubMed

    Bai, Yang; Jantunen, Heli; Juuti, Jari

    2018-06-07

    Energy harvesting technology may be considered an ultimate solution to replace batteries and provide a long-term power supply for wireless sensor networks. Looking back at its research history, individual energy harvesters for converting a single energy source into electricity were developed first, followed by hybrid counterparts designed for use with multiple energy sources. Very recently, the concept of a truly multisource energy harvester built from only a single piece of material as the energy conversion component has been proposed. This review, from the perspective of materials and device configurations, gives a detailed and wide-ranging overview of energy harvesting research. It covers single-source devices, including solar, thermal, kinetic and other types of energy harvesters; hybrid energy harvesting configurations for both single and multiple energy sources; and single-material, multisource energy harvesters. It also includes the energy conversion principles of photovoltaic, electromagnetic, piezoelectric, triboelectric, electrostatic, electrostrictive, thermoelectric, pyroelectric, magnetostrictive, and dielectric devices. This is one of the most comprehensive reviews conducted to date, focusing on the entire energy harvesting research scene and providing a guide for seeking deeper and more specific research references and resources from every corner of the scientific community. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp; Zhang, Xu

    2015-07-07

    Recent research has shown that the yield strength of metals increases steeply as sample size decreases. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation pile-up effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and it gives a more precise prediction than the current single-arm source model, especially for materials with low stacking fault energy.

  1. Method of Promoting Single Crystal Growth During Melt Growth of Semiconductors

    NASA Technical Reports Server (NTRS)

    Su, Ching-Hua (Inventor)

    2013-01-01

    The method of the invention promotes single crystal growth during fabrication of melt growth semiconductors. A growth ampoule and its tip have a semiconductor source material placed therein. The growth ampoule is placed in a first thermal environment that raises the temperature of the semiconductor source material to its liquidus temperature. The growth ampoule is then transitioned to a second thermal environment that causes the semiconductor source material in the growth ampoule's tip to attain a temperature that is below the semiconductor source material's solidus temperature. The growth ampoule so-transitioned is then mechanically perturbed to induce single crystal growth at the growth ampoule's tip.

  2. Towards the Experimental Assessment of the DQE in SPECT Scanners

    NASA Astrophysics Data System (ADS)

    Fountos, G. P.; Michail, C. M.

    2017-11-01

    The purpose of this work was to introduce the Detective Quantum Efficiency (DQE) for single photon emission computed tomography (SPECT) systems using a flood source. A Tc-99m-based flood source (Eγ = 140 keV), consisting of a radiopharmaceutical solution of dithiothreitol (DTT, 10^-3 M)/Tc-99m(III)-DMSA (40 mCi/40 ml) bound to the grains of an Agfa MammoRay HDR medical X-ray film, was prepared in the laboratory. The source was placed between two PMMA blocks, and images were obtained using the brain tomographic acquisition protocol (DatScan-brain). The Modulation Transfer Function (MTF) was evaluated using the Iterative 2D algorithm. All imaging experiments were performed on a Siemens e-Cam gamma camera. The Normalized Noise Power Spectra (NNPS) were obtained from the sagittal views of the source. The highest MTF values were obtained for the Flash Iterative 2D algorithm with 24 iterations and 20 subsets. The noise levels of the SPECT reconstructed images, in terms of the NNPS, were found to increase as the number of iterations increased. The behavior of the DQE was influenced by both the MTF and the NNPS: as the number of iterations increased, higher MTF values were obtained, but with a parallel increase in image noise, as reflected in the NNPS results. DQE values were found to be higher when the number of iterations led to resolution saturation. The method presented here is novel and easy to implement, requiring materials commonly found in clinical practice, and can be useful in the quality control of SPECT scanners.
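    A simplified sketch of how NNPS and DQE can be assembled from a flood image and an MTF curve; it is not the paper's pipeline, and the DQE expression shown is one commonly quoted form whose normalization may differ from the one used by the authors. The synthetic flood image, pixel size, fluence, and MTF curve are placeholders.

        import numpy as np

        def nnps_1d(flood, pixel_mm):
            """1-D normalized noise power spectrum from the rows of a uniform (flood) image.
            Simplified: the paper derives NNPS from sagittal views of the reconstructed source."""
            rows = flood - flood.mean(axis=1, keepdims=True)        # remove per-row mean
            nps = np.mean(np.abs(np.fft.rfft(rows, axis=1)) ** 2, axis=0)
            nps *= pixel_mm / rows.shape[1]                          # scale to mm
            freq = np.fft.rfftfreq(rows.shape[1], d=pixel_mm)        # cycles/mm
            return freq, nps / flood.mean() ** 2                     # normalize by mean signal^2

        def dqe(mtf, nnps, fluence):
            """One commonly quoted form: DQE(f) = MTF(f)^2 / (fluence * NNPS(f))."""
            return mtf ** 2 / (fluence * nnps)

        # Toy usage with a synthetic Poisson flood image and a placeholder MTF curve.
        flood = np.random.default_rng(0).poisson(1000, size=(128, 128)).astype(float)
        freq, nnps = nnps_1d(flood, pixel_mm=2.4)
        mtf = np.exp(-freq / 0.2)                                    # placeholder, not measured
        print(np.round(dqe(mtf[1:], nnps[1:], fluence=1000.0)[:5], 3))  # skip the DC bin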

  3. Quantitative optical imaging and sensing by joint design of point spread functions and estimation algorithms

    NASA Astrophysics Data System (ADS)

    Quirin, Sean Albert

    The joint application of tailored optical Point Spread Functions (PSF) and estimation methods is an important tool for designing quantitative imaging and sensing solutions. By enhancing the information transfer encoded by the optical waves into an image, matched post-processing algorithms are able to complete tasks with improved performance relative to conventional designs. In this thesis, new engineered PSF solutions with image processing algorithms are introduced and demonstrated for quantitative imaging using information-efficient signal processing tools and/or optical-efficient experimental implementations. The use of a 3D engineered PSF, the Double-Helix (DH-PSF), is applied as one solution for three-dimensional, super-resolution fluorescence microscopy. The DH-PSF is a tailored PSF engineered to have enhanced information transfer for the task of localizing point sources in three dimensions. Both an information- and optical-efficient implementation of the DH-PSF microscope are demonstrated here for the first time. This microscope is applied to image single molecules and microtubules located within a biological sample. A joint imaging/axial-ranging modality is demonstrated for quantifying sources of extended transverse and axial extent. The proposed implementation has improved optical efficiency relative to prior designs due to the use of serialized cycling through select engineered PSFs. This system is demonstrated for passive ranging, extended depth-of-field imaging, and digital refocusing of random objects under broadband illumination. Although the serialized engineered PSF solution is an improvement over prior designs for the joint imaging/passive-ranging modality, it requires the use of multiple PSFs, a potentially significant constraint. Therefore an alternative design is proposed, the Single-Helix PSF, in which only one engineered PSF is necessary and the chromatic behavior of objects under broadband illumination provides the necessary information transfer. The matched estimation algorithms are introduced along with an optically efficient experimental system to image and passively estimate the distance to a test object. An engineered PSF solution is proposed for improving the sensitivity of optical wave-front sensing with a Shack-Hartmann Wave-front Sensor (SHWFS). The performance limits of the classical SHWFS design are evaluated, and the engineered PSF system design is demonstrated to enhance performance. This system is fabricated, and the mechanism for additional information transfer is identified.

  4. Single-source mechanical loading system produces biaxial stresses in cylinders

    NASA Technical Reports Server (NTRS)

    Flower, J. F.; Stafford, R. L.

    1967-01-01

    Single-source mechanical loading system proportions axial-to-hoop tension loads applied to cylindrical specimens. The system consists of hydraulic, pneumatic, and lever arrangements which produce biaxial loading ratios.

  5. The design of a multisource americium-beryllium (Am-Be) neutron irradiation facility using MCNP for the neutronic performance calculation.

    PubMed

    Sogbadji, R B M; Abrefah, R G; Nyarko, B J B; Akaho, E H K; Odoi, H C; Attakorah-Birinkorang, S

    2014-08-01

    The americium-beryllium neutron irradiation facility at the National Nuclear Research Institute (NNRI), Ghana, was re-designed with four 20 Ci sources using the Monte Carlo N-Particle (MCNP) code to investigate the maximum flux produced by the combined sources. The results were compared with a single-source Am-Be irradiation facility. The main objective was to harness the maximum flux for the optimization of neutron activation analysis and to enable smaller samples to be irradiated. Using MCNP for the design construction and the neutronic performance calculation, it was found that the single-source Am-Be design produced a thermal neutron flux of (1.8±0.0007)×10^6 n/cm^2/s and the four-source Am-Be design produced a thermal neutron flux of (5.4±0.0007)×10^6 n/cm^2/s, a 3.5-fold increase over the single-source Am-Be design. The effective multiplication factor, k(eff), of the single-source and four-source Am-Be designs was found to be 0.00115±0.0008 and 0.00143±0.0008, respectively. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Introducing Synchrotrons Into the Classroom

    ScienceCinema

    Bloch, Ashley; Lanzirotti, Tony

    2018-06-08

    Brookhaven's Introducing Synchrotrons Into the Classroom (InSynC) program gives teachers and their students access to the National Synchrotron Light Source through a competitive proposal process. The first batch of InSynC participants included a group of students from Islip Middle School, who used the massive machine to study the effectiveness of different water filters.

  7. Vegetation component of geothermal EIS studies: Introduced plants, ecosystem stability, and geothermal development

    NASA Astrophysics Data System (ADS)

    1994-10-01

    This paper contributes new information about the impacts from introduced plant invasions on the native Hawaiian vegetation as consequences of land disturbance and geothermal development activities. In this regard, most geothermal development is expected to act as another recurring source of physical disturbance which favors the spread and maintenance of introduced organisms throughout the region. Where geothermal exploration and development activities extend beyond existing agricultural and residential development, they will become the initial or sole source of disturbance to the naturalized vegetation of the area. Kilauea has a unique ecosystem adapted to the dynamics of a volcanically active landscape. The characteristics of this ecosystem need to be realized in order to understand the major threats to the ecosystem and to evaluate the effects of and mitigation for geothermal development in Puna. The native Puna vegetation is well adapted to disturbances associated with volcanic eruption, but it is ill-adapted to compete with alien plant species in secondary disturbances produced by human activities. Introduced plant and animal species have become a major threat to the continued presence of the native biota in the Puna region.

  8. An Open Port Sampling Interface for Liquid Introduction Atmospheric Pressure Ionization Mass Spectrometry

    DOE PAGES

    Van Berkel, Gary J.; Kertesz, Vilmos

    2015-08-25

    RATIONALE: A simple method to introduce unprocessed samples into a solvent for rapid characterization by liquid introduction atmospheric pressure ionization mass spectrometry has been lacking. The continuous flow, self-cleaning open port sampling interface introduced here fills this void. METHODS: The open port sampling interface used a vertically aligned, co-axial tube arrangement enabling solvent delivery to the sampling end of the device through the tubing annulus and solvent aspiration down the center tube and into the mass spectrometer ionization source via the commercial APCI emitter probe. The solvent delivery rate to the interface was set to exceed the aspiration rate, creating a continuous sampling interface along with a constant, self-cleaning spillover of solvent from the top of the probe. RESULTS: Using the open port sampling interface with positive ion mode APCI and a hybrid quadrupole time of flight mass spectrometer, rapid, direct sampling and analysis possibilities are exemplified with plastics, ballpoint and felt tip ink pens, skin, and vegetable oils. These results demonstrated that the open port sampling interface could be used as a simple, versatile and self-cleaning system to rapidly introduce multiple types of unprocessed, sometimes highly concentrated and complex, samples into a solvent flow stream for subsequent ionization and analysis by mass spectrometry. The basic setup presented here could be incorporated with any self-aspirating liquid introduction ionization source (e.g., ESI, APCI, APPI, ICP, etc.) or any type of atmospheric pressure sampling ready mass spectrometer system. CONCLUSIONS: The open port sampling interface provides a means to introduce and quickly analyze unprocessed solid or liquid samples with a liquid introduction atmospheric pressure ionization source without fear of sampling interface or ionization source contamination.

  9. Single Quantum Dot with Microlens and 3D-Printed Micro-objective as Integrated Bright Single-Photon Source

    PubMed Central

    2017-01-01

    Integrated single-photon sources with high photon-extraction efficiency are key building blocks for applications in the field of quantum communications. We report on a bright single-photon source realized by on-chip integration of a deterministic quantum dot microlens with a 3D-printed multilens micro-objective. The device concept benefits from a sophisticated combination of in situ 3D electron-beam lithography to realize the quantum dot microlens and 3D femtosecond direct laser writing for creation of the micro-objective. In this way, we obtain a high-quality quantum device with broadband photon-extraction efficiency of (40 ± 4)% and high suppression of multiphoton emission events with g(2)(τ = 0) < 0.02. Our results highlight the opportunities that arise from tailoring the optical properties of quantum emitters using integrated optics with high potential for the further development of plug-and-play fiber-coupled single-photon sources. PMID:28670600

  10. Single Molecule Nano-Metronome

    PubMed Central

    Buranachai, Chittanon; McKinney, Sean A.; Ha, Taekjip

    2008-01-01

    We constructed a DNA-based nano-mechanical device called the nano-metronome. Our device is made by introducing complementary single stranded overhangs at the two arms of the DNA four-way junction. The ticking rates of this stochastic metronome depend on ion concentrations and can be changed by a set of DNA-based switches to deactivate/reactivate the sticky end. Since the device displays clearly distinguishable responses even with a single basepair difference, it may lead to a single molecule sensor of minute sequence differences of a target DNA. PMID:16522050

  11. Minimizing the Diameter of a Network Using Shortcut Edges

    NASA Astrophysics Data System (ADS)

    Demaine, Erik D.; Zadimoghaddam, Morteza

    We study the problem of minimizing the diameter of a graph by adding k shortcut edges, for speeding up communication in an existing network design. We develop constant-factor approximation algorithms for different variations of this problem. We also show how to improve the approximation ratios using resource augmentation to allow more than k shortcut edges. We observe a close relation between the single-source version of the problem, where we want to minimize the largest distance from a given source vertex, and the well-known k-median problem. First we show that our constant-factor approximation algorithms for the general case solve the single-source problem within a constant factor. Then, using a linear-programming formulation for the single-source version, we find a (1 + ε)-approximation using O(k log n) shortcut edges. To show the tightness of our result, we prove that any (3/2 − ε)-approximation for the single-source version must use Ω(k log n) shortcut edges, assuming P ≠ NP.
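    For the single-source version, a simple greedy heuristic (not the constant-factor approximation algorithms or the LP-based method of the paper) already conveys the flavor of the problem: repeatedly add a shortcut from the source to its currently farthest vertex. The sketch below runs it on a small path graph.

        from collections import deque

        def bfs_dist(adj, s):
            """Hop distances from s in an unweighted graph given as an adjacency dict."""
            dist, q = {s: 0}, deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            return dist

        def greedy_shortcuts(adj, source, k):
            """Greedy heuristic: repeatedly connect the source to its farthest vertex."""
            adj = {u: set(vs) for u, vs in adj.items()}
            added = []
            for _ in range(k):
                dist = bfs_dist(adj, source)
                far = max(dist, key=dist.get)
                if dist[far] <= 1:
                    break
                adj[source].add(far)
                adj[far].add(source)
                added.append((source, far))
            return added, max(bfs_dist(adj, source).values())

        # Path 0-1-...-9 with source 0: two shortcuts reduce its eccentricity from 9 to 3.
        path = {i: {j for j in (i - 1, i + 1) if 0 <= j <= 9} for i in range(10)}
        print(greedy_shortcuts(path, source=0, k=2))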

  12. Single molecule targeted sequencing for cancer gene mutation detection.

    PubMed

    Gao, Yan; Deng, Liwei; Yan, Qin; Gao, Yongqian; Wu, Zengding; Cai, Jinsen; Ji, Daorui; Li, Gailing; Wu, Ping; Jin, Huan; Zhao, Luyang; Liu, Song; Ge, Liangjin; Deem, Michael W; He, Jiankui

    2016-05-19

    With the rapid decline in the cost of sequencing, it is now affordable to examine multiple genes in a single disease-targeted clinical test using next generation sequencing. Current targeted sequencing methods require a separate targeted capture enrichment step during sample preparation before sequencing. Although fast sample preparation methods are available on the market, the library preparation process is still relatively complicated for physicians to use routinely. Here, we introduced an amplification-free Single Molecule Targeted Sequencing (SMTS) technology, which combines targeted capture and sequencing in one step. We demonstrated that this technology can detect low-frequency mutations using artificially synthesized DNA samples. SMTS has several potential advantages, including simple sample preparation and, because no PCR reaction is involved, freedom from amplification biases and errors. SMTS has the potential to be an easy and quick sequencing technology for clinical diagnosis such as cancer gene mutation detection, infectious disease detection, inherited condition screening and noninvasive prenatal diagnosis.

  13. Single Day Construction of Multigene Circuits with 3G Assembly.

    PubMed

    Halleran, Andrew D; Swaminathan, Anandh; Murray, Richard M

    2018-05-18

    The ability to rapidly design, build, and test prototypes is of key importance to every engineering discipline. DNA assembly often serves as a rate limiting step of the prototyping cycle for synthetic biology. Recently developed DNA assembly methods such as isothermal assembly and type IIS restriction enzyme systems take different approaches to accelerate DNA construction. We introduce a hybrid method, Golden Gate-Gibson (3G), that takes advantage of modular part libraries introduced by type IIS restriction enzyme systems and isothermal assembly's ability to build large DNA constructs in single pot reactions. Our method is highly efficient and rapid, facilitating construction of entire multigene circuits in a single day. Additionally, 3G allows generation of variant libraries enabling efficient screening of different possible circuit constructions. We characterize the efficiency and accuracy of 3G assembly for various construct sizes, and demonstrate 3G by characterizing variants of an inducible cell-lysis circuit.

  14. 40 CFR 421.206 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Mercury Subcategory... wastewater pollutants in secondary mercury process wastewater introduced into a POTW shall not exceed the following values: (a) Spent battery electrolyte. PSNS for the Secondary Mercury Subcategory Pollutant or...

  15. 40 CFR 421.206 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Mercury Subcategory... wastewater pollutants in secondary mercury process wastewater introduced into a POTW shall not exceed the following values: (a) Spent battery electrolyte. PSNS for the Secondary Mercury Subcategory Pollutant or...

  16. 40 CFR 421.206 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Mercury Subcategory... wastewater pollutants in secondary mercury process wastewater introduced into a POTW shall not exceed the following values: (a) Spent battery electrolyte. PSNS for the Secondary Mercury Subcategory Pollutant or...

  17. 40 CFR 421.206 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Mercury Subcategory... wastewater pollutants in secondary mercury process wastewater introduced into a POTW shall not exceed the following values: (a) Spent battery electrolyte. PSNS for the Secondary Mercury Subcategory Pollutant or...

  18. 40 CFR 421.326 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Uranium Subcategory... wastewater pollutants in secondary uranium process wastewater introduced into a POTW shall not exceed the following values: (a) Refinery sump filtrate. PSNS for the Secondary Uranium Subcategory Pollutant or...

  19. 40 CFR 421.326 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Uranium Subcategory... wastewater pollutants in secondary uranium process wastewater introduced into a POTW shall not exceed the following values: (a) Refinery sump filtrate. PSNS for the Secondary Uranium Subcategory Pollutant or...

  20. 40 CFR 421.326 - Pretreatment standards for new sources.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... GUIDELINES AND STANDARDS NONFERROUS METALS MANUFACTURING POINT SOURCE CATEGORY Secondary Uranium Subcategory... wastewater pollutants in secondary uranium process wastewater introduced into a POTW shall not exceed the following values: (a) Refinery sump filtrate. PSNS for the Secondary Uranium Subcategory Pollutant or...
