Sample records for called high efficiency

  1. High-Efficiency and High-Power Mid-Wave Infrared Cascade Lasers

    DTIC Science & Technology

    2012-10-01

    internal quantum efficiency () and factor (2) is usually called the optical extraction efficiency (). The optical extraction efficiency ... quantum efficiency involves more fundamental parameters corresponding to the microscopic processes of the device operation, nevertheless, it can be...deriving parameters such as the internal quantum efficiency of a QC laser, the entire injector miniband can be treated as a single virtual state

  2. Possible use of heterospecific food-associated calls of macaques by sika deer for foraging efficiency.

    PubMed

    Koda, Hiroki

    2012-09-01

Heterospecific communication signals sometimes convey relevant information for animal survival. For example, animals use or eavesdrop on heterospecific alarm calls concerning common predators. Indeed, most observations have been reported regarding anti-predator strategies. Use of heterospecific signals has rarely been observed as part of a foraging strategy. Here, I report empirical evidence, collected using playback experiments, showing that Japanese sika deer, Cervus nippon, use heterospecific food calls of Japanese macaques, Macaca fuscata yakui, for foraging efficiency. The deer and macaques both inhabit the wild forest of Yakushima Island with high population densities and share many food items. Anecdotal observations suggest that deer often wait to browse fruit falls under the tree where a macaque group is foraging. Furthermore, macaques frequently produce food calls during their foraging. If deer effectively obtain fruit from the leftovers of macaques, browsing fruit fall would provide a potential benefit to the deer, and, further, deer are likely to associate macaque food calls with feeding activity. The results showed that playback of macaque food calls under trees gathered significantly more deer than silent control periods. These results suggest that deer can associate macaque food calls with foraging activities and use heterospecific calls for foraging efficiency. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. ParticleCall: A particle filter for base calling in next-generation sequencing systems

    PubMed Central

    2012-01-01

Background Next-generation sequencing systems are capable of rapid and cost-effective DNA sequencing, thus enabling routine sequencing tasks and taking us one step closer to personalized medicine. Accuracy and lengths of their reads, however, are yet to surpass those provided by the conventional Sanger sequencing method. This motivates the search for computationally efficient algorithms capable of reliable and accurate detection of the order of nucleotides in short DNA fragments from the acquired data. Results In this paper, we consider Illumina’s sequencing-by-synthesis platform, which relies on reversible terminator chemistry, and describe the acquired signal by reformulating its mathematical model as a Hidden Markov Model. Relying on this model and sequential Monte Carlo methods, we develop a parameter estimation and base calling scheme called ParticleCall. ParticleCall is tested on a data set obtained by sequencing phiX174 bacteriophage using Illumina’s Genome Analyzer II. The results show that the developed base calling scheme is significantly more computationally efficient than the best-performing unsupervised method currently available, while achieving the same accuracy. Conclusions The proposed ParticleCall provides more accurate calls than Illumina’s base calling algorithm, Bustard. At the same time, ParticleCall is significantly more computationally efficient than other recent schemes with similar performance, rendering it more feasible for high-throughput sequencing data analysis. Improvement of base calling accuracy will have immediate beneficial effects on the performance of downstream applications such as SNP and genotype calling. ParticleCall is freely available at https://sourceforge.net/projects/particlecall. PMID:22776067
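The abstract frames base calling as inference in a Hidden Markov Model via sequential Monte Carlo. The idea can be illustrated with a generic bootstrap particle filter over a toy four-state HMM (states standing in for the nucleotides A, C, G, T; observations as noisy scalar intensities). This is only a sketch under assumed Gaussian emissions, not ParticleCall's actual signal model, and every name in it is hypothetical.

```python
import numpy as np

def bootstrap_particle_filter(observations, trans, emit_means, emit_sd,
                              n_particles=500, seed=0):
    """Generic bootstrap particle filter over a discrete-state HMM.

    States index the four nucleotides; observations are noisy scalar
    intensities. A toy stand-in for the sequencing signal model, not
    ParticleCall's actual model.
    """
    rng = np.random.default_rng(seed)
    n_states = trans.shape[0]
    # Initialise particles uniformly over the hidden states.
    particles = rng.integers(0, n_states, size=n_particles)
    calls = []
    for y in observations:
        # Propagate each particle through the transition kernel.
        particles = np.array([rng.choice(n_states, p=trans[s]) for s in particles])
        # Weight each particle by its Gaussian emission likelihood.
        w = np.exp(-0.5 * ((y - emit_means[particles]) / emit_sd) ** 2)
        w /= w.sum()
        # Call the base carrying the largest posterior mass.
        post = np.bincount(particles, weights=w, minlength=n_states)
        calls.append(int(post.argmax()))
        # Multinomial resampling to refresh the particle set.
        particles = rng.choice(particles, size=n_particles, p=w)
    return calls
```

With well-separated emission means, the weighted vote per cycle recovers the underlying state sequence; a real base caller would additionally estimate the emission and transition parameters from the data.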

  4. Group-based variant calling leveraging next-generation supercomputing for large-scale whole-genome sequencing studies.

    PubMed

    Standish, Kristopher A; Carland, Tristan M; Lockwood, Glenn K; Pfeiffer, Wayne; Tatineni, Mahidhar; Huang, C Chris; Lamberth, Sarah; Cherkas, Yauheniya; Brodmerkel, Carrie; Jaeger, Ed; Smith, Lance; Rajagopal, Gunaretnam; Curran, Mark E; Schork, Nicholas J

    2015-09-22

Next-generation sequencing (NGS) technologies have become much more efficient, allowing whole human genomes to be sequenced faster and cheaper than ever before. However, processing the raw sequence reads associated with NGS technologies requires care and sophistication in order to draw compelling inferences about phenotypic consequences of variation in human genomes. It has been shown that different approaches to variant calling from NGS data can lead to different conclusions. Ensuring appropriate accuracy and quality in variant calling can come at a computational cost. We describe our experience implementing and evaluating a group-based approach to calling variants on large numbers of whole human genomes. We explore the influence of many factors that may impact the accuracy and efficiency of group-based variant calling, including group size, the biogeographical backgrounds of the individuals who have been sequenced, and the computing environment used. We make efficient use of the Gordon supercomputer cluster at the San Diego Supercomputer Center by incorporating job-packing and parallelization considerations into our workflow while calling variants on 437 whole human genomes generated as part of a large association study. We ultimately find that our workflow resulted in high-quality variant calls in a computationally efficient manner. We argue that studies like ours should motivate further investigations combining hardware-oriented advances in computing systems with algorithmic developments to tackle emerging 'big data' problems in biomedical research brought on by the expansion of NGS technologies.

  5. Eprosartan

    MedlinePlus

    ... or in combination with other medications to treat high blood pressure. Eprosartan is in a class of medications called ... smoothly and the heart to pump more efficiently.High blood pressure is a common condition, and when not treated ...

  6. Generating Quitline calls during Australia's National Tobacco Campaign: effects of television advertisement execution and programme placement

    PubMed Central

    Carroll, T; Rock, B

    2003-01-01

Objective: The study sought to measure the relative efficiency of different television advertisements and types of television programmes in which advertisements were placed, in generating calls to Australia's national Quitline. Design: The study entailed an analysis of the number of calls generated to the Quitline relative to the weight of advertising exposure (in target audience rating points (TARPs)) for particular television advertisements and for placement of these advertisements in particular types of television programmes. A total of 238 television advertisement placements and 1769 calls to the Quitline were analysed in Sydney and Melbourne. Results: The more graphic "eye" advertisement conveying new information about the association between smoking and macular degeneration leading to blindness was more efficient in generating Quitline calls than the "tar" advertisement, which reinforced the message of tar in a smoker's lungs. Combining the health effects advertisements with a quitline modelling advertisement tended to increase the efficiency of generating Quitline calls. Placing advertisements in lower involvement programmes appears to provide greater efficiency in generating Quitline calls than in higher involvement programmes. Conclusions: Tobacco control campaign planners can increase the number of calls to telephone quitlines by assessing the efficiency of particular advertisements to generate such calls. Pairing of health effect and quitline modelling advertisements can increase efficiency in generating calls. Placement of advertisements in lower involvement programme types may increase efficiency in generating Quitline calls. PMID:12878772

  7. Generating Quitline calls during Australia's National Tobacco Campaign: effects of television advertisement execution and programme placement.

    PubMed

    Carroll, T; Rock, B

    2003-09-01

The study sought to measure the relative efficiency of different television advertisements and types of television programmes in which advertisements were placed, in generating calls to Australia's national Quitline. The study entailed an analysis of the number of calls generated to the Quitline relative to the weight of advertising exposure (in target audience rating points (TARPs)) for particular television advertisements and for placement of these advertisements in particular types of television programmes. A total of 238 television advertisement placements and 1769 calls to the Quitline were analysed in Sydney and Melbourne. The more graphic "eye" advertisement conveying new information about the association between smoking and macular degeneration leading to blindness was more efficient in generating Quitline calls than the "tar" advertisement, which reinforced the message of tar in a smoker's lungs. Combining the health effects advertisements with a quitline modelling advertisement tended to increase the efficiency of generating Quitline calls. Placing advertisements in lower involvement programmes appears to provide greater efficiency in generating Quitline calls than in higher involvement programmes. Tobacco control campaign planners can increase the number of calls to telephone quitlines by assessing the efficiency of particular advertisements to generate such calls. Pairing of health effect and quitline modelling advertisements can increase efficiency in generating calls. Placement of advertisements in lower involvement programme types may increase efficiency in generating Quitline calls.

  8. Fast High Resolution Volume Carving for 3D Plant Shoot Reconstruction

    PubMed Central

    Scharr, Hanno; Briese, Christoph; Embgenbroich, Patrick; Fischbach, Andreas; Fiorani, Fabio; Müller-Linow, Mark

    2017-01-01

Volume carving is a well established method for visual hull reconstruction and has been successfully applied in plant phenotyping, especially for 3D reconstruction of small plants and seeds. When imaging larger plants at still relatively high spatial resolution (≤1 mm), well known implementations become slow or have prohibitively large memory needs. Here we present and evaluate a computationally efficient algorithm for volume carving, allowing, e.g., 3D reconstruction of plant shoots. It combines a well-known multi-grid representation called "Octree" with an efficient image region integration scheme called "Integral image." Speedup with respect to less efficient octree implementations is about 2 orders of magnitude, due to the introduced refinement strategy "Mark and refine." Speedup is about a factor of 1.6 compared to a highly optimized GPU implementation using equidistant voxel grids, even without using any parallelization. We demonstrate the application of this method for trait derivation of banana and maize plants. PMID:29033961
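The "Integral image" the authors pair with the octree is the standard summed-area table, which turns any axis-aligned silhouette-region sum into four lookups. A minimal sketch, independent of the paper's implementation:

```python
import numpy as np

def integral_image(mask):
    """Summed-area table of a binary silhouette mask.

    ii[r, c] holds the sum of mask[:r, :c], so the sum over any
    axis-aligned rectangle needs only four lookups -- the property
    volume carving exploits to test whether a projected voxel
    footprint is fully inside, fully outside, or on the boundary of
    a silhouette.
    """
    ii = np.zeros((mask.shape[0] + 1, mask.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = mask.cumsum(axis=0).cumsum(axis=1)
    return ii

def region_sum(ii, r0, c0, r1, c1):
    """Sum of mask[r0:r1, c0:c1] in O(1) via the summed-area table."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```

Building the table is one pass over the image; afterwards every carving test is constant time regardless of the footprint size, which is what makes the octree refinement cheap.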

  9. Why Clone?

    MedlinePlus

    ... have been cloned already, including two relatives of cattle called the guar and the banteng, mouflon sheep, ... are underway to clone agricultural animals, such as cattle and pigs, that are efficient producers of high- ...

  10. Highly stable individual differences in the emission of separation calls during early development in the domestic cat.

    PubMed

    Hudson, Robyn; Chacha, Jimena; Bánszegi, Oxána; Szenczi, Péter; Rödel, Heiko G

    2017-04-01

    Study of the development of individuality is often hampered by rapidly changing behavioral repertoires and the need for minimally intrusive tests. We individually tested 33 kittens from eight litters of the domestic cat in an arena for 3 min once a week for the first 3 postnatal weeks, recording the number of separation calls and the duration of locomotor activity. Kittens showed consistent and stable individual differences on both measures across and within trials. Stable individual differences in the emission of separation calls across trials emerged already within the first 10 s of testing, and in locomotor activity within the first 30 s. Furthermore, individual kittens' emission of separation calls, but not their locomotor activity, was highly stable within trials. We conclude that separation calls provide an efficient, minimally intrusive and reliable measure of individual differences in behavior during development in the cat, and possibly in other species emitting such calls. © 2017 Wiley Periodicals, Inc.

  11. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 3, Issue 1

    DTIC Science & Technology

    2011-01-01

    release; distribution is unlimited. Multiscale Modeling of Materials The rotating reflector antenna associated with airport traffic control systems is...batteries and phased-array antennas . Power and efficiency studies evaluate on-board HPC systems and advanced image processing applications. 2010 marked...giving way in some applications to a newer technology called the phased array antenna system (sometimes called a beamformer, example shown at right

  12. Dynamic Forest: An Efficient Index Structure for NAND Flash Memory

    NASA Astrophysics Data System (ADS)

    Yang, Chul-Woong; Yong Lee, Ki; Ho Kim, Myoung; Lee, Yoon-Joon

    In this paper, we present an efficient index structure for NAND flash memory, called the Dynamic Forest (D-Forest). Since write operations incur high overhead on NAND flash memory, D-Forest is designed to minimize write operations for index updates. The experimental results show that D-Forest significantly reduces write operations compared to the conventional B+-tree.

  13. An episomal vector-based CRISPR/Cas9 system for highly efficient gene knockout in human pluripotent stem cells.

    PubMed

    Xie, Yifang; Wang, Daqi; Lan, Feng; Wei, Gang; Ni, Ting; Chai, Renjie; Liu, Dong; Hu, Shijun; Li, Mingqing; Li, Dajin; Wang, Hongyan; Wang, Yongming

    2017-05-24

Human pluripotent stem cells (hPSCs) represent a unique opportunity for understanding the molecular mechanisms underlying complex traits and diseases. CRISPR/Cas9 is a powerful tool to introduce genetic mutations into the hPSCs for loss-of-function studies. Here, we developed an episomal vector-based CRISPR/Cas9 system, which we called epiCRISPR, for highly efficient gene knockout in hPSCs. The epiCRISPR system enables generation of up to 100% Insertion/Deletion (indel) rates. In addition, the epiCRISPR system enables efficient double-gene knockout and genomic deletion. To minimize off-target cleavage, we combined the episomal vector technology with a double-nicking strategy and the recently developed high-fidelity Cas9. Thus, the epiCRISPR system offers a highly efficient platform for genetic analysis in hPSCs.

  14. Towards a flexible middleware for context-aware pervasive and wearable systems.

    PubMed

    Muro, Marco; Amoretti, Michele; Zanichelli, Francesco; Conte, Gianni

    2012-11-01

Ambient intelligence and wearable computing call for innovative hardware and software technologies, including a highly capable, flexible and efficient middleware, allowing for the reuse of existing pervasive applications when developing new ones. In the considered application domain, middleware should also support self-management, interoperability among different platforms, efficient communications, and context awareness. In the on-going "everything is networked" scenario, scalability appears to be a very important issue, for which the peer-to-peer (P2P) paradigm emerges as an appealing solution for connecting software components in an overlay network, allowing for efficient and balanced data distribution mechanisms. In this paper, we illustrate how all these concepts can be placed into a theoretical tool, called networked autonomic machine (NAM), implemented into a NAM-based middleware, and evaluated against practical problems of pervasive computing.

  15. Development of a targeted transgenesis strategy in highly differentiated cells: a powerful tool for functional genomic analysis.

    PubMed

    Puttini, Stefania; Ouvrard-Pascaud, Antoine; Palais, Gael; Beggah, Ahmed T; Gascard, Philippe; Cohen-Tannoudji, Michel; Babinet, Charles; Blot-Chabaud, Marcel; Jaisser, Frederic

    2005-03-16

Functional genomic analysis is a challenging step in the so-called post-genomic field. Identification of potential targets using large-scale gene expression analysis requires functional validation to identify those that are physiologically relevant. Genetically modified cell models are often used for this purpose, allowing up- or down-regulated expression of selected targets in a well-defined and if possible highly differentiated cell type. However, the generation of such models remains time-consuming and expensive. To streamline this step, we developed a strategy aimed at the rapid and efficient generation of genetically modified cell lines with conditional, inducible expression of various target genes. Efficient knock-in of various constructs, called targeted transgenesis, in a locus selected for its permissibility to the tet inducible system, was obtained through the stimulation of site-specific homologous recombination by the meganuclease I-SceI. Our results demonstrate that targeted transgenesis in a reference inducible locus greatly facilitated the functional analysis of the selected recombinant cells. The efficient screening strategy we have designed makes possible automation of the transfection and selection steps. Furthermore, this strategy could be applied to a variety of highly differentiated cells.

  16. 500 Watt Solar AMTEC Power System for Small Spacecraft.

    DTIC Science & Technology

    1995-03-01

    Thermal Modeling of High Efficiency AMTEC Cells ," Proceedings of the 24th National Heat Transfer Conference. Journal Article 12. SPACE...power flow calculation is the power required by the AMTEC cells which is the cell output power over the cell efficiency . The system model also...Converter ( AMTEC ) cell , called the multi-tube cell , integrated with an individual Thermal Energy Storage (TES) unit. The

  17. When seconds count: A study of communication variables in the opening segment of emergency calls.

    PubMed

    Penn, Claire; Koole, Tom; Nattrass, Rhona

    2017-09-01

    The opening sequence of an emergency call influences the efficiency of the ambulance dispatch time. The greeting sequences in 105 calls to a South African emergency service were analysed. Initial results suggested the advantage of a specific two-part opening sequence. An on-site experiment aimed at improving call efficiency was conducted during one shift (1100 calls). Results indicated reduced conversational repairs and a significant reduction of 4 seconds in mean call length. Implications for systems and training are derived.

  18. Triple-junction thin-film silicon solar cell fabricated on periodically textured substrate with a stabilized efficiency of 13.6%

    NASA Astrophysics Data System (ADS)

    Sai, Hitoshi; Matsui, Takuya; Koida, Takashi; Matsubara, Koji; Kondo, Michio; Sugiyama, Shuichiro; Katayama, Hirotaka; Takeuchi, Yoshiaki; Yoshida, Isao

    2015-05-01

    We report a high-efficiency triple-junction thin-film silicon solar cell fabricated with the so-called substrate configuration. It was verified whether the design criteria for developing single-junction microcrystalline silicon (μc-Si:H) solar cells are applicable to multijunction solar cells. Furthermore, a notably high short-circuit current density of 32.9 mA/cm2 was achieved in a single-junction μc-Si:H cell fabricated on a periodically textured substrate with a high-mobility front transparent contacting layer. These technologies were also combined into a-Si:H/μc-Si:H/μc-Si:H triple-junction cells, and a world record stabilized efficiency of 13.6% was achieved.

  19. A criterion autoscheduler for long range planning

    NASA Technical Reports Server (NTRS)

    Sponsler, Jeffrey L.

    1994-01-01

A constraint-based scheduling system called SPIKE is used to create long-term schedules for the Hubble Space Telescope. A meta-level scheduler called the Criterion Autoscheduler for Long range planning (CASL) was created to guide SPIKE's schedule generation according to the agenda of the planning scientists. It is proposed that sufficient flexibility exists in a schedule to allow high level planning heuristics to be applied without adversely affecting crucial constraints such as spacecraft efficiency. This hypothesis is supported by the test data described.

  20. Antennas for mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Huang, John

    1991-01-01

A NASA sponsored program, called the Mobile Satellite (MSAT) system, has prompted the development of several innovative antennas at L-band frequencies. In the space segment of the MSAT system, an efficient, lightweight, circularly polarized microstrip array that uses linearly polarized elements was developed as a multiple beam reflector feed system. In the ground segment, a low-cost, low-profile, and very efficient microstrip Yagi array was developed as a medium-gain mechanically steered vehicle antenna. Circularly shaped microstrip patches excited at higher-order modes were also developed as low-gain vehicle antennas. A more recent effort called for the development of a 20/30 GHz mobile terminal antenna for future-generation mobile satellite communications. To combat the high insertion loss encountered at 20/30 GHz, series-fed Monolithic Microwave Integrated Circuit (MMIC) microstrip array antennas are currently being developed. These MMIC arrays may lead to the development of several small but high-gain Ka-band antennas for the Personal Access Satellite Service planned for the 2000s.

  1. Investigating emergency room service quality using lean manufacturing.

    PubMed

    Abdelhadi, Abdelhakim

    2015-01-01

The purpose of this paper is to investigate a lean manufacturing metric called Takt time as a benchmark evaluation measure to evaluate a public hospital's service quality. Lean manufacturing is an established managerial philosophy with a proven track record in industry. A lean metric called Takt time is applied as a measure to compare the relative efficiency between two emergency departments (EDs) belonging to the same public hospital. Outcomes guide managers to improve patient services and increase hospital performance. The patient treatment lead time within the hospital's two EDs (one department serves male and the other female patients) are the study's focus. The Takt time metric is used to find the service's relative efficiency. Findings show that Takt time can be used as an effective way to measure service efficiency by analyzing relative efficiency and identifying bottlenecks in different departments providing the same services. The paper presents a new procedure to compare relative efficiency between two EDs. It can be applied to any healthcare facility.
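Takt time itself is simple arithmetic: available working time divided by demand over that time. A sketch with hypothetical ED figures (the abstract does not report the actual numbers):

```python
def takt_time(available_minutes, patients_served):
    """Takt time: available working time divided by demand.

    A department whose average treatment lead time exceeds its takt
    time cannot keep pace with patient arrivals, which is how the
    metric exposes bottlenecks when comparing departments.
    """
    return available_minutes / patients_served

# Hypothetical figures: a 12-hour shift (720 min) in two EDs,
# one serving 90 patients, the other 60.
takt_a = takt_time(720, 90)   # minutes available per patient in ED A
takt_b = takt_time(720, 60)   # minutes available per patient in ED B
```

Comparing each department's measured treatment lead time against its takt time gives the relative-efficiency judgment the paper describes: a lead time of, say, 10 minutes would overload ED A (takt 8 min/patient) but not ED B (takt 12 min/patient).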

  2. LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS

    PubMed Central

    Einstein, Daniel R.; Dyedov, Vladimir

    2010-01-01

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546

  3. LHCSR Expression under HSP70/RBCS2 Promoter as a Strategy to Increase Productivity in Microalgae.

    PubMed

    Perozeni, Federico; Stella, Giulio Rocco; Ballottari, Matteo

    2018-01-05

Microalgae are unicellular photosynthetic organisms considered potential alternative sources for biomass, biofuels or high value products. However, limited biomass productivity is commonly experienced in their cultivating systems despite their high potential. One of the reasons for this limitation is the high thermal dissipation of the light absorbed by the outer layers of the cultures exposed to high light, caused by the activation of a photoprotective mechanism called non-photochemical quenching (NPQ). In the model organism for green algae, Chlamydomonas reinhardtii, NPQ is triggered by pigment binding proteins called light-harvesting-complexes-stress-related (LHCSRs), which are over-accumulated in high light. It was recently reported that biomass productivity can be increased both in microalgae and higher plants by properly tuning NPQ induction. In this work, increased light use efficiency is reported by introducing in C. reinhardtii a LHCSR3 gene under the control of the Heat Shock Protein 70/RUBISCO small chain 2 promoter in a npq4 lhcsr1 background, a mutant strain knockout for all LHCSR genes. This complementation strategy leads to low expression of LHCSR3, causing a strong reduction of NPQ induction while still protecting from photodamage at high irradiance, resulting in improved photosynthetic efficiency and higher biomass accumulation.

  4. Water Efficient Installations - A New Army Guidance Document

    DTIC Science & Technology

    2010-06-01

    Toilets 1.28 gpf or less, 50 manuf., 500+ models Required in CA Dual flush options also available WaterSense program provides certification and...lose 8760 to 219,000 gal/year Broken flush valve on toilet can lose 40 gal/hour US Army Corps of Engineers® Engineer Research and Development Center...Engineer Research and Development Center Toilets and Urinals ULFTs Ultra-Low Flush Toilet , also called low flow 1.28 gpf to 1.6 gpf HETs High Efficiency

  5. GRIDSS: sensitive and specific genomic rearrangement detection using positional de Bruijn graph assembly

    PubMed Central

    Do, Hongdo; Molania, Ramyar

    2017-01-01

    The identification of genomic rearrangements with high sensitivity and specificity using massively parallel sequencing remains a major challenge, particularly in precision medicine and cancer research. Here, we describe a new method for detecting rearrangements, GRIDSS (Genome Rearrangement IDentification Software Suite). GRIDSS is a multithreaded structural variant (SV) caller that performs efficient genome-wide break-end assembly prior to variant calling using a novel positional de Bruijn graph-based assembler. By combining assembly, split read, and read pair evidence using a probabilistic scoring, GRIDSS achieves high sensitivity and specificity on simulated, cell line, and patient tumor data, recently winning SV subchallenge #5 of the ICGC-TCGA DREAM8.5 Somatic Mutation Calling Challenge. On human cell line data, GRIDSS halves the false discovery rate compared to other recent methods while matching or exceeding their sensitivity. GRIDSS identifies nontemplate sequence insertions, microhomologies, and large imperfect homologies, estimates a quality score for each breakpoint, stratifies calls into high or low confidence, and supports multisample analysis. PMID:29097403
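A plain de Bruijn graph, the structure underlying GRIDSS's break-end assembler, can be sketched in a few lines. GRIDSS's "positional" variant additionally tags each k-mer with its genomic coordinate so the assembly stays anchored; this toy version omits that refinement.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Build a simple de Bruijn graph from sequencing reads.

    Nodes are (k-1)-mers; a directed edge links the prefix and suffix
    (k-1)-mers of every k-mer observed in a read. Walking unambiguous
    paths through such a graph reconstructs contigs -- here, the
    assembled sequence around a putative breakpoint.
    """
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph
```

For example, overlapping reads that span a rearrangement junction contribute k-mers that chain into a single path across the break, which is then aligned back to the reference as split-read evidence.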

  6. Learning-Based Just-Noticeable-Quantization- Distortion Modeling for Perceptual Video Coding.

    PubMed

    Ki, Sehwan; Bae, Sung-Ho; Kim, Munchurl; Ko, Hyunsuk

    2018-07-01

Conventional predictive video coding-based approaches are reaching the limit of their potential coding efficiency improvements because of severely increasing computation complexity. As an alternative approach, perceptual video coding (PVC) has attempted to achieve high coding efficiency by eliminating perceptual redundancy, using just-noticeable-distortion (JND) directed PVC. The previous JNDs were modeled by adding white Gaussian noise or specific signal patterns into the original images, which were not appropriate in finding JND thresholds due to distortion with energy reduction. In this paper, we present a novel discrete cosine transform-based energy-reduced JND model, called ERJND, that is more suitable for JND-based PVC schemes. Then, the proposed ERJND model is extended to two learning-based just-noticeable-quantization-distortion (JNQD) models as preprocessing that can be applied for perceptual video coding. The two JNQD models can automatically adjust JND levels based on given quantization step sizes. One of the two JNQD models, called LR-JNQD, is based on linear regression and determines the model parameter for JNQD based on extracted handcraft features. The other JNQD model is based on a convolution neural network (CNN), called CNN-JNQD. To the best of our knowledge, this is the first approach to automatically adjust JND levels according to quantization step sizes for preprocessing the input to video encoders. In experiments, both the LR-JNQD and CNN-JNQD models were applied to high efficiency video coding (HEVC) and yielded maximum (average) bitrate reductions of 38.51% (10.38%) and 67.88% (24.91%), respectively, with little subjective video quality degradation, compared with the input without preprocessing applied.

  7. Easi-CRISPR for creating knock-in and conditional knockout mouse models using long ssDNA donors.

    PubMed

    Miura, Hiromi; Quadros, Rolen M; Gurumurthy, Channabasavaiah B; Ohtsuka, Masato

    2018-01-01

    CRISPR/Cas9-based genome editing can easily generate knockout mouse models by disrupting the gene sequence, but its efficiency for creating models that require either insertion of exogenous DNA (knock-in) or replacement of genomic segments is very poor. The majority of mouse models used in research involve knock-in (reporters or recombinases) or gene replacement (e.g., conditional knockout alleles containing exons flanked by LoxP sites). A few methods for creating such models have been reported that use double-stranded DNA as donors, but their efficiency is typically 1-10% and therefore not suitable for routine use. We recently demonstrated that long single-stranded DNAs (ssDNAs) serve as very efficient donors, both for insertion and for gene replacement. We call this method efficient additions with ssDNA inserts-CRISPR (Easi-CRISPR) because it is a highly efficient technology (efficiency is typically 30-60% and reaches as high as 100% in some cases). The protocol takes ∼2 months to generate the founder mice.

  8. Seizing Workplace Learning Affordances in High-Pressure Work Environments

    ERIC Educational Resources Information Center

    Gnaur, Dorina

    2010-01-01

    Work in call centres is often presented as a form of unskilled labour characterized by routinization, technological surveillance and tight management control aimed at reaching intensive performance targets. Beyond delivering business objectives, this control and efficiency strategy is often held to produce counterproductive effects with regard to…

  9. Halvade-RNA: Parallel variant calling from transcriptomic data using MapReduce.

    PubMed

    Decap, Dries; Reumers, Joke; Herzeel, Charlotte; Costanza, Pascal; Fostier, Jan

    2017-01-01

    Given the current cost-effectiveness of next-generation sequencing (NGS), the amount of DNA-seq and RNA-seq data generated is ever increasing. One of the primary objectives of NGS experiments is calling genetic variants. While highly accurate, most variant calling pipelines are not optimized to run efficiently on large data sets. However, as variant calling in genomic data has become common practice, several methods have been proposed to reduce runtime for DNA-seq analysis through the use of parallel computing. Determining the effectively expressed variants from transcriptomics (RNA-seq) data has only recently become possible, and as such does not yet benefit from efficiently parallelized workflows. We introduce Halvade-RNA, a parallel, multi-node RNA-seq variant calling pipeline based on the GATK Best Practices recommendations. Halvade-RNA makes use of the MapReduce programming model to create and manage parallel data streams on which multiple instances of existing tools such as STAR and GATK operate concurrently. Whereas the single-threaded processing of a typical RNA-seq sample requires ∼28 h, Halvade-RNA reduces this runtime to ∼2 h using a small cluster with two 20-core machines. Even on a single multi-core workstation, Halvade-RNA can significantly reduce runtime compared to using multi-threading, thus providing more cost-effective processing of RNA-seq data. Halvade-RNA is written in Java and uses the Hadoop MapReduce 2.0 API. It supports a wide range of Hadoop distributions, including Cloudera and Amazon EMR.
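
    As a toy illustration of the MapReduce split described above, the sketch below partitions reads into chunks, applies a per-chunk step, and merges the results. It is Python for brevity (Halvade-RNA itself is Java on Hadoop), and the per-chunk step is a hypothetical stand-in for running STAR/GATK.

```python
# Illustrative MapReduce-style sketch: partition the reads, run an
# independent "map" task per chunk, then "reduce" by merging per-chunk
# results. Function names and the toy per-chunk step are hypothetical.
def process_chunk(chunk):
    # Stand-in for aligning/calling one partition of the reads.
    return [read.upper() for read in chunk]

def split(reads, n_chunks):
    k = max(1, len(reads) // n_chunks)
    return [reads[i:i + k] for i in range(0, len(reads), k)]

def run_pipeline(reads, n_workers=4):
    chunks = split(reads, n_workers)
    mapped = [process_chunk(c) for c in chunks]    # map phase (parallel in Halvade)
    return [r for part in mapped for r in part]    # reduce phase: merge results

print(run_pipeline(["acgt", "ttga", "ccag", "gatc"], n_workers=2))
```

    In Halvade-RNA the map phase is distributed over a Hadoop cluster, which is where the ∼28 h to ∼2 h reduction comes from; the sequential loop here only shows the data flow.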

  10. The Significance of HBCUs to the Production of STEM Graduates: Answering the Call

    ERIC Educational Resources Information Center

    Owens, Emiel W.; Shelton, Andrea J.; Bloom, Collette M.; Cavil, J. Kenyatta

    2012-01-01

    Science, technology, engineering, and mathematics are areas designated as STEM disciplines. There is national and international attention being given to these fields as they are the foundation for partnerships and alliances in the global economy. Education beyond high school is necessary to achieve desired levels of competency and efficiency in…

  11. Estimate of Cost-Effective Potential for Minimum Efficiency Performance Standards in 13 Major World Economies Energy Savings, Environmental and Financial Impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Letschert, Virginie E.; Bojda, Nicholas; Ke, Jing

    2012-07-01

    This study analyzes the financial impacts on consumers of minimum efficiency performance standards (MEPS) for appliances that could be implemented in 13 major economies around the world. We use the Bottom-Up Energy Analysis System (BUENAS), developed at Lawrence Berkeley National Laboratory (LBNL), to analyze various appliance efficiency target levels to estimate the net present value (NPV) of policies designed to provide maximum energy savings while not penalizing consumers financially. These policies constitute what we call the “cost-effective potential” (CEP) scenario. The CEP scenario is designed to answer the question: How high can we raise the efficiency bar in mandatory programs while still saving consumers money?
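
    The NPV logic behind such a cost-effectiveness test can be sketched as follows; the function name and all figures are hypothetical illustrations, not BUENAS inputs or outputs.

```python
def npv_of_standard(incremental_cost, annual_kwh_saved, price_per_kwh,
                    lifetime_years, discount_rate):
    """Net present value to the consumer of a more efficient appliance:
    discounted bill savings over the product lifetime minus the extra
    purchase cost. All inputs here are hypothetical illustrations."""
    savings = sum(annual_kwh_saved * price_per_kwh / (1 + discount_rate) ** t
                  for t in range(1, lifetime_years + 1))
    return savings - incremental_cost

# A target level is "cost-effective" in this sense when NPV >= 0.
npv = npv_of_standard(incremental_cost=30.0, annual_kwh_saved=100.0,
                      price_per_kwh=0.12, lifetime_years=10,
                      discount_rate=0.05)
print(round(npv, 2))
```

    Raising the efficiency target raises the incremental cost and the savings together; the "cost-effective potential" is the highest target for which this NPV stays nonnegative.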

  12. 2.097μ Cth:YAG flashlamp pumped high energy high efficiency laser operation (patent pending)

    NASA Astrophysics Data System (ADS)

    Bar-Joseph, Dan

    2018-02-01

    Flashlamp pumped Cth:YAG lasers are mainly used in medical applications (urology). The main laser transition is at 2.13μ and is called a quasi-three-level transition, having an emission cross-section of 7×10⁻²¹ cm² and a ground state absorption of approximately 5%/cm. Because of the relatively low absorption, combined with a modest emission cross-section, the laser requires high reflectivity output coupling, and therefore high intra-cavity energy density, which limits the output to approximately 4 J/pulse for reliable operation. This paper will describe a method of efficiently generating high output energy at low intra-cavity energy density by using an alternative 2.097μ transition having an emission cross-section of 5×10⁻²¹ cm² and a ground level absorption of approximately 14%/cm.

  13. Beluga whale, Delphinapterus leucas, vocalizations from the Churchill River, Manitoba, Canada.

    PubMed

    Chmelnitsky, Elly G; Ferguson, Steven H

    2012-06-01

    Classification of animal vocalizations is often done by a human observer using aural and visual analysis, but more efficient, automated methods have also been utilized to reduce bias and increase reproducibility. Beluga whale, Delphinapterus leucas, calls were described from recordings collected in the summers of 2006-2008 in the Churchill River, Manitoba. Calls (n=706) were classified based on aural and visual analysis, and call characteristics were measured; calls were separated into 453 whistles (64.2%; 22 types), 183 pulsed/noisy calls (25.9%; 15 types), and 70 combined calls (9.9%; 7 types). Measured parameters varied within each call type, but less variation existed in pulsed and noisy call types and some combined call types than in whistles. A more efficient and repeatable hierarchical clustering method was applied to 200 randomly chosen whistles using six call characteristics as variables; twelve groups were identified. Call characteristics varied less in cluster analysis groups than in whistle types described by visual and aural analysis, and the results were similar to the whistle contours described. This study provided the first description of beluga calls in Hudson Bay, and using two methods provides more robust interpretations and an assessment of appropriate methods for future studies.
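
    A minimal sketch of hierarchical (agglomerative) clustering on per-call feature vectors, of the kind applied above to six whistle characteristics. The feature values below are invented for illustration, not the Churchill River measurements.

```python
import math

def dist(a, b):
    return math.dist(a, b)  # Euclidean distance between feature vectors

def agglomerative(points, n_clusters):
    """Plain single-linkage agglomerative clustering: start with every call
    in its own cluster and repeatedly merge the closest pair of clusters."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

# Toy six-dimensional feature vectors (e.g. start/end frequency, min/max
# frequency, duration, inflection count); values are illustrative only.
whistles = [(2.0, 3.0, 2.0, 4.0, 0.5, 1.0), (2.1, 3.1, 2.0, 4.1, 0.5, 1.0),
            (8.0, 9.0, 7.0, 9.9, 0.3, 3.0), (8.1, 9.1, 7.1, 10.0, 0.3, 3.0)]
print(agglomerative(whistles, 2))  # → [[0, 1], [2, 3]]
```

    Cutting the merge tree at a chosen level (here, two clusters) yields the call groups; the study's twelve whistle groups came from the same kind of cut on 200 whistles.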

  14. Simultaneous Genotype Calling and Haplotype Phasing Improves Genotype Accuracy and Reduces False-Positive Associations for Genome-wide Association Studies

    PubMed Central

    Browning, Brian L.; Yu, Zhaoxia

    2009-01-01

    We present a novel method for simultaneous genotype calling and haplotype-phase inference. Our method employs the computationally efficient BEAGLE haplotype-frequency model, which can be applied to large-scale studies with millions of markers and thousands of samples. We compare genotype calls made with our method to genotype calls made with the BIRDSEED, CHIAMO, GenCall, and ILLUMINUS genotype-calling methods, using genotype data from the Illumina 550K and Affymetrix 500K arrays. We show that our method has higher genotype-call accuracy and yields fewer uncalled genotypes than competing methods. We perform single-marker analysis of data from the Wellcome Trust Case Control Consortium bipolar disorder and type 2 diabetes studies. For bipolar disorder, the genotype calls in the original study yield 25 markers with apparent false-positive association with bipolar disorder at a p < 10⁻⁷ significance level, whereas genotype calls made with our method yield no associated markers at this significance threshold. Conversely, for markers with replicated association with type 2 diabetes, there is good concordance between genotype calls used in the original study and calls made by our method. Results from single-marker and haplotypic analysis of our method's genotype calls for the bipolar disorder study indicate that our method is highly effective at eliminating genotyping artifacts that cause false-positive associations in genome-wide association studies. Our new genotype-calling methods are implemented in the BEAGLE and BEAGLECALL software packages. PMID:19931040

  15. Optical Simulation and Fabrication of Pancharatnam (Geometric) Phase Devices from Liquid Crystals

    NASA Astrophysics Data System (ADS)

    Gao, Kun

    Pancharatnam made clear the concept of a phase-only device based on changes in the polarization state of light. A device of this type is sometimes called a circular polarization grating because of the polarization states of interfering light beams used to fabricate it by polarization holography. Here, we will call it a Pancharatnam (geometric) phase device to emphasize the fact that the phase of diffracted light does not have a discontinuous periodic profile but changes continuously. In this dissertation, using simulations and experiments, we have successfully demonstrated a 90% diffraction efficiency based on the Pancharatnam phase deflector (PPD) with the dual-twist structure. Unlike the conventional Pancharatnam phase deflector (c-PPD), which is limited to small diffraction angles, our work demonstrates that a device with a structural periodicity near the wavelength of light is highly efficient at deflecting light to large angles. Also, from a similar fabrication procedure, we have made an ultra-compact non-mechanical zoom lens system based on the Pancharatnam phase lens (PPL) with a low f-number and high efficiency. The dependence of image quality on wavelength is evaluated and shown to be satisfactory from red light to the near-infrared wavelengths used in machine vision systems. A demonstration device is shown with a 4x zoom ratio at a 633 nm wavelength. The unique characteristic of these devices is made possible through the use of azo-dye photoalignment materials to align a liquid crystal polymer (reactive mesogens). Furthermore, the proposed dual-twist design and fabrication opens the possibility of making a high-efficiency beam-steering device, a lens with an f-number less than 1.0, as well as a wide range of other potential applications in the optical and display industries. The details of simulation, fabrication, and characterization of these devices are shown in this dissertation.

  16. Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index.

    PubMed

    Xue, Wufeng; Zhang, Lei; Mou, Xuanqin; Bovik, Alan C

    2014-02-01

    It is an important task to faithfully evaluate the perceptual quality of output images in many applications, such as image compression, image restoration, and multimedia streaming. A good image quality assessment (IQA) model should not only deliver high prediction accuracy, but also be computationally efficient. The efficiency of IQA metrics is becoming particularly important due to the increasing proliferation of high-volume visual data in high-speed networks. We present a new effective and efficient IQA model, called gradient magnitude similarity deviation (GMSD). Image gradients are sensitive to image distortions, and different local structures in a distorted image suffer different degrees of degradation. This motivates us to explore the use of the global variation of a gradient-based local quality map for overall image quality prediction. We find that the pixel-wise gradient magnitude similarity (GMS) between the reference and distorted images, combined with a novel pooling strategy (the standard deviation of the GMS map), can accurately predict perceptual image quality. The resulting GMSD algorithm is much faster than most state-of-the-art IQA methods, and delivers highly competitive prediction accuracy. MATLAB source code of GMSD can be downloaded at http://www4.comp.polyu.edu.hk/~cslzhang/IQA/GMSD/GMSD.htm.
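
    A minimal Python sketch of the GMSD computation as described above: gradient magnitudes from Prewitt filters, a pixel-wise GMS map, and standard-deviation pooling. The stability constant and the toy images are illustrative assumptions, not the authors' MATLAB settings.

```python
import numpy as np

def _prewitt_mag(img):
    """Gradient magnitude from horizontal/vertical 3x3 Prewitt filters."""
    hx = np.array([[1, 0, -1]] * 3) / 3.0
    hy = hx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * hx).sum()
            gy[i, j] = (win * hy).sum()
    return np.sqrt(gx ** 2 + gy ** 2)

def gmsd(ref, dst, c=170.0):
    """GMS map between reference and distorted gradients, pooled by its
    standard deviation (0 means identical gradient structure). The
    constant c is an assumed stability term for 8-bit pixel values."""
    mr, md = _prewitt_mag(ref), _prewitt_mag(dst)
    gms = (2 * mr * md + c) / (mr ** 2 + md ** 2 + c)
    return gms.std()

ref = np.tile(np.arange(8.0), (8, 1)) * 30          # smooth ramp image
noisy = ref + np.random.default_rng(1).normal(0, 20, ref.shape)
print(gmsd(ref, ref))    # 0.0 for identical images
print(gmsd(ref, noisy))  # positive under distortion
```

    The standard-deviation pooling is the paper's key departure from mean pooling: it measures how unevenly the distortion damages local structures.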

  17. Honeybee economics: optimisation of foraging in a variable world.

    PubMed

    Stabentheiner, Anton; Kovac, Helmut

    2016-06-20

    In honeybees, fast and efficient exploitation of nectar and pollen sources is achieved by persistent endothermy throughout the foraging cycle, which means extremely high energy costs. The need for food promotes maximisation of the intake rate, and the high costs call for energetic optimisation. Experiments on how honeybees resolve this conflict have to consider that foraging takes place in a variable environment concerning microclimate and food quality and availability. Here we report, from simultaneous measurements of energy costs, gains, intake rate and efficiency, how honeybee foragers manage this challenge in their highly variable environment. If possible, during unlimited sucrose flow, they follow an 'investment-guided' ('time is honey') economic strategy promising increased returns. They maximise net intake rate by investing both own heat production and solar heat to increase body temperature to a level which guarantees a high suction velocity. They switch to an 'economizing' ('save the honey') optimisation of energetic efficiency if the intake rate is restricted by the food source, where an increased body temperature would not guarantee a high intake rate. With this flexible and graded change between economic strategies, honeybees can both maximise colony intake rate and optimise foraging efficiency in response to environmental variation.

  18. Detecting very low allele fraction variants using targeted DNA sequencing and a novel molecular barcode-aware variant caller.

    PubMed

    Xu, Chang; Nezami Ranjbar, Mohammad R; Wu, Zhong; DiCarlo, John; Wang, Yexun

    2017-01-03

    Detection of DNA mutations at very low allele fractions with high accuracy will significantly improve the effectiveness of precision medicine for cancer patients. To achieve this goal through next generation sequencing, researchers need a detection method that 1) captures rare mutation-containing DNA fragments efficiently in the mix of abundant wild-type DNA; 2) sequences the DNA library extensively to deep coverage; and 3) distinguishes low level true variants from amplification and sequencing errors with high accuracy. Targeted enrichment using PCR primers provides researchers with a convenient way to achieve deep sequencing for a small, yet most relevant, region using benchtop sequencers. Molecular barcoding (or indexing) provides a unique solution for reducing sequencing artifacts analytically. Although different molecular barcoding schemes have been reported in the recent literature, most variant calling has been done on limited targets, using simple custom scripts. The analytical performance of barcode-aware variant calling can be significantly improved by incorporating advanced statistical models. We present here a highly efficient, simple and scalable enrichment protocol that integrates molecular barcodes in multiplex PCR amplification. In addition, we developed smCounter, an open source, generic, barcode-aware variant caller based on a Bayesian probabilistic model. smCounter was optimized and benchmarked on two independent read sets with SNVs and indels at 5% and 1% allele fractions. Variants were called with very good sensitivity and specificity within coding regions. We demonstrated that we can accurately detect somatic mutations with allele fractions as low as 1% in coding regions using our enrichment protocol and variant caller.
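
    The barcode idea that underpins such callers can be sketched as below: reads sharing a molecular barcode are collapsed by per-position majority vote, so errors introduced after barcoding are voted out. This shows only the grouping/consensus step, not smCounter's Bayesian model; the read data are invented.

```python
from collections import Counter, defaultdict

def consensus_by_barcode(reads):
    """Collapse reads that share a molecular barcode into one consensus
    sequence: PCR/sequencing errors appear in only a few copies of a
    barcode family and lose the per-position majority vote."""
    by_bc = defaultdict(list)
    for barcode, seq in reads:
        by_bc[barcode].append(seq)
    out = {}
    for barcode, seqs in by_bc.items():
        cons = "".join(Counter(col).most_common(1)[0][0]
                       for col in zip(*seqs))
        out[barcode] = cons
    return out

reads = [("BC1", "ACGT"), ("BC1", "ACGT"), ("BC1", "ACTT"),  # one PCR error
         ("BC2", "ACAT"), ("BC2", "ACAT")]
print(consensus_by_barcode(reads))  # → {'BC1': 'ACGT', 'BC2': 'ACAT'}
```

    A variant supported by many distinct barcode families is likely real; one confined to a single family is likely an artifact, which is the evidence a statistical model like smCounter's weighs.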

  19. RAMA: A file system for massively parallel computers

    NASA Technical Reports Server (NTRS)

    Miller, Ethan L.; Katz, Randy H.

    1993-01-01

    This paper describes a file system design for massively parallel computers which makes very efficient use of a few disks per processor. This overcomes the traditional I/O bottleneck of massively parallel machines by storing the data on disks within the high-speed interconnection network. In addition, the file system, called RAMA, requires little inter-node synchronization, removing another common bottleneck in parallel processor file systems. Support for a large tertiary storage system can easily be integrated into the file system; in fact, RAMA runs most efficiently when tertiary storage is used.

  20. Improving real-time efficiency of case-based reasoning for medical diagnosis.

    PubMed

    Park, Yoon-Joo

    2014-01-01

    Conventional case-based reasoning (CBR) does not perform efficiently on high-volume datasets because of case-retrieval time. Some previous studies overcome this problem by clustering a case base into several small groups and retrieving neighbors within the group corresponding to a target case. However, this approach generally produces less accurate predictions than conventional CBR. This paper suggests a new case-based reasoning method called Clustering-Merging CBR (CM-CBR), which produces a similar level of predictive performance to conventional CBR at significantly less computational cost.
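
    The cluster-then-retrieve idea can be sketched as follows. This is a hypothetical illustration: a tiny k-means partitions the case base and retrieval searches only the nearest cluster; the paper's cluster-merging refinement, which recovers the lost accuracy, is omitted.

```python
import math

def kmeans(points, k, iters=20):
    """Tiny k-means used only to partition the case base."""
    centers = [list(points[i]) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda c: math.dist(p, centers[c]))].append(p)
        for c, g in enumerate(groups):
            if g:
                centers[c] = [sum(x) / len(g) for x in zip(*g)]
    return centers, groups

def retrieve(target, centers, groups, n=2):
    """Clustered-CBR-style retrieval: search only the cluster whose center
    is closest to the target instead of the whole case base."""
    c = min(range(len(centers)), key=lambda i: math.dist(target, centers[i]))
    return sorted(groups[c], key=lambda p: math.dist(target, p))[:n]

cases = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2),      # cluster near the origin
         (5.0, 5.1), (5.2, 5.0), (5.1, 5.2)]      # cluster near (5, 5)
centers, groups = kmeans(cases, k=2)
print(retrieve((5.0, 5.0), centers, groups, n=2))  # → [(5.0, 5.1), (5.2, 5.0)]
```

    The speedup comes from comparing the target against k centers plus one cluster's members rather than every stored case; the accuracy risk arises when true neighbors fall just across a cluster boundary, which is what CM-CBR's merging step addresses.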

  1. Efficient Ab initio Modeling of Random Multicomponent Alloys

    DOE PAGES

    Jiang, Chao; Uberuaga, Blas P.

    2016-03-08

    In this Letter, we present a novel small set of ordered structures (SSOS) method that allows extremely efficient ab initio modeling of random multi-component alloys. Using inverse II-III spinel oxides and equiatomic quinary bcc (so-called high entropy) alloys as examples, we demonstrate that a SSOS can achieve the same accuracy as a large supercell or a well-converged cluster expansion, but with significantly reduced computational cost. In particular, because of this efficiency, a large number of quinary alloy compositions can be quickly screened, leading to the identification of several new possible high entropy alloy chemistries. Furthermore, the SSOS method developed here can be broadly useful for the rapid computational design of multi-component materials, especially those with a large number of alloying elements, a challenging problem for other approaches.

  2. Performance of an 8 kW Hall Thruster

    DTIC Science & Technology

    2000-01-12

    For the purpose of either orbit raising and/or repositioning, the Hall thruster must be capable of delivering sufficient thrust to minimize transfer...time. This, coupled with the increasing on-board electric power capacity of military and commercial satellites, requires a high power Hall thruster that...development of a novel, high power Hall thruster , capable of efficient operation over a broad range of Isp and thrust. We call such a thruster the bi

  3. The New NASA-STD-4005 and NASA-HDBK-4006, Essentials for Direct-Drive Solar Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Ferguson, Dale C.

    2007-01-01

    High voltage solar arrays are necessary for direct-drive solar electric propulsion, which has many advantages, including simplicity and high efficiency. Even when direct-drive is not used, the use of high voltage solar arrays leads to power transmission and conversion efficiencies in electric propulsion Power Management and Distribution. Nevertheless, high voltage solar arrays may lead to temporary power disruptions, through the so-called primary electrostatic discharges, and may permanently damage arrays, through the so-called permanent sustained discharges between array strings. Design guidance is needed to prevent these solar array discharges, and to prevent high power drains through coupling between the electric propulsion devices and the high voltage solar arrays. While most electric propulsion systems may operate outside of Low Earth Orbit, the plasmas produced by their thrusters may interact with the high voltage solar arrays in many ways similarly to Low Earth Orbit plasmas. A brief description of previous experiences with high voltage electric propulsion systems will be given in this paper. There are two new official NASA documents available free through the NASA Standards website to help in designing and testing high voltage solar arrays for electric propulsion. They are NASA-STD-4005, the Low Earth Orbit Spacecraft Charging Design Standard, and NASA-HDBK-4006, the Low Earth Orbit Spacecraft Charging Design Handbook. Taken together, they can both educate the high voltage array designer in the engineering and science of spacecraft charging in the presence of dense plasmas and provide techniques for designing and testing high voltage solar arrays to prevent electrical discharges and power drains.

  4. Towards a flat 45%-efficient concentrator module

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mohedano, Rubén, E-mail: rmohedano@lpi-europe.com; Hernandez, Maikel; Vilaplana, Juan

    2015-09-28

    The so-called CCS4FK is an ultra-flat photovoltaic system of high concentration and high efficiency, with the potential to convert, ideally, the equivalent of 45% of direct solar radiation into electricity by optimizing the usage of the sun's spectrum and by collecting part of the diffuse radiation, as a flat plate does. LPI has recently finished a design based on this concept and is now developing a prototype based on this technology, thanks to the support of FUNDACION REPSOL-Fondo de Emprendedores, which promotes entrepreneur projects in different areas linked to energy. This work shows some details of the actual design and the preliminary potential performance expected, according to accurate spectral simulations.

  5. Towards a flat 45%-efficient concentrator module

    NASA Astrophysics Data System (ADS)

    Mohedano, Rubén; Hernandez, Maikel; Vilaplana, Juan; Chaves, Julio; Miñano, Juan C.; Benitez, Pablo; Sorgato, S.; Falicoff, Waqidi

    2015-09-01

    The so-called CCS4FK is an ultra-flat photovoltaic system of high concentration and high efficiency, with the potential to convert, ideally, the equivalent of 45% of direct solar radiation into electricity by optimizing the usage of the sun's spectrum and by collecting part of the diffuse radiation, as a flat plate does. LPI has recently finished a design based on this concept and is now developing a prototype based on this technology, thanks to the support of FUNDACION REPSOL-Fondo de Emprendedores, which promotes entrepreneur projects in different areas linked to energy. This work shows some details of the actual design and the preliminary potential performance expected, according to accurate spectral simulations.

  6. Sparse Contextual Activation for Efficient Visual Re-Ranking.

    PubMed

    Bai, Song; Bai, Xiang

    2016-03-01

    In this paper, we propose an extremely efficient algorithm for visual re-ranking. By considering the original pairwise distances in the contextual space, we develop a feature vector called sparse contextual activation (SCA) that encodes the local distribution of an image. Hence, the re-ranking task can be accomplished simply by vector comparison under the generalized Jaccard metric, which has its theoretical grounding in fuzzy set theory. To improve the time efficiency of the re-ranking procedure, an inverted index is introduced to speed up the computation of the generalized Jaccard metric. As a result, the average time cost of re-ranking for a given query can be kept within 1 ms. Furthermore, inspired by query expansion, we also develop an additional method called local consistency enhancement on top of the proposed SCA to improve the retrieval performance in an unsupervised manner. On the other hand, the retrieval performance using a single feature may not be satisfactory, which motivates us to fuse multiple complementary features for accurate retrieval. Based on SCA, a robust feature fusion algorithm is developed that preserves high time efficiency. We assess our proposed method on various visual re-ranking tasks. Experimental results on the Princeton shape benchmark (3D object), WM-SRHEC07 (3D competition), YAEL data set B (face), MPEG-7 data set (shape), and Ukbench data set (image) demonstrate the effectiveness and efficiency of SCA.
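
    The generalized Jaccard comparison used above is simple to state: the sum of element-wise minima over the sum of element-wise maxima of two nonnegative vectors. A minimal sketch (the example vectors are invented; SCA's actual vectors come from the contextual distance distribution):

```python
def generalized_jaccard(x, y):
    """Generalized Jaccard similarity between two nonnegative feature
    vectors: sum of element-wise minima over sum of element-wise maxima.
    Equals 1.0 for identical vectors and decreases as they diverge."""
    num = sum(min(a, b) for a, b in zip(x, y))
    den = sum(max(a, b) for a, b in zip(x, y))
    return num / den if den else 1.0

a = [0.5, 0.0, 0.3, 0.2]
b = [0.4, 0.1, 0.3, 0.2]
print(generalized_jaccard(a, a))  # identical vectors -> 1.0
print(generalized_jaccard(a, b))
```

    Because sparse vectors share few nonzero dimensions, an inverted index over the nonzero entries lets this comparison skip most of the database, which is how the sub-millisecond re-ranking times are achieved.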

  7. PIMS: Memristor-Based Processing-in-Memory-and-Storage.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Jeanine

    Continued progress in computing has augmented the quest for higher performance with a new quest for higher energy efficiency. This has led to the re-emergence of Processing-In-Memory (PIM) architectures that offer higher density and performance with some boost in energy efficiency. Past PIM work either integrated a standard CPU with a conventional DRAM to improve the CPU-memory link, or used a bit-level processor with Single Instruction Multiple Data (SIMD) control, but neither matched the energy consumption of the memory to the computation. We originally proposed to develop a new architecture derived from PIM that more effectively addressed energy efficiency for high performance scientific, data analytics, and neuromorphic applications. We also originally planned to implement a von Neumann architecture with arithmetic/logic units (ALUs) that matched the power consumption of an advanced storage array to maximize energy efficiency. Implementing this architecture in storage was our original idea, since by augmenting storage (instead of memory), the system could address both in-memory computation and applications that accessed larger data sets directly from storage, hence Processing-in-Memory-and-Storage (PIMS). However, as our research matured, we discovered several things that changed our original direction, the most important being that a PIM that implements a standard von Neumann-type architecture results in significant energy efficiency improvement, but only about a O(10) performance improvement. In addition to this, the emergence of new memory technologies moved us to proposing a non-von Neumann architecture, called Superstrider, implemented not in storage, but in a new DRAM technology called High Bandwidth Memory (HBM). HBM is a stacked DRAM technology that includes a logic layer where an architecture such as Superstrider could potentially be implemented.

  8. Word-Decoding Skill Interacts with Working Memory Capacity to Influence Inference Generation during Reading

    ERIC Educational Resources Information Center

    Hamilton, Stephen; Freed, Erin; Long, Debra L.

    2016-01-01

    The aim of this study was to examine predictions derived from a proposal about the relation between word-decoding skill and working memory capacity, called verbal efficiency theory. The theory states that poor word representations and slow decoding processes consume resources in working memory that would otherwise be used to execute high-level…

  9. Discrete call types referring to predation risk enhance the efficiency of the meerkat sentinel system

    PubMed Central

    Rauber, R.; Manser, M. B.

    2017-01-01

    Sentinel behaviour, a form of coordinated vigilance, occurs in a limited range of species, mostly in cooperative breeders. In some species sentinels confirm their presence vocally by giving a single sentinel call type, whereby the rate and subtle acoustic changes provide graded information on the variation of perceived predation risk. In contrast, meerkat (Suricata suricatta) sentinels produce six different sentinel call types. Here we show that manipulation of perception of danger has different effects on the likelihood of emitting these different call types, and that these call types affect foraging individuals differently. Increasing the perceived predation risk by playing back alarm calls decreased the production rate of the common short note calls and increased the production rate of the rare long calls. Playbacks of short note calls increased foraging behaviour and decreased vigilance in the rest of the group, whereas the opposite was observed when playing long calls. This suggests that the common call types act as all-clear signals, while the rare call types have a warning function. Therefore, meerkats increase the efficiency of their sentinel system by producing several discrete call types that represent changes in predation risk and lead to adjustments of the group’s vigilance behaviour. PMID:28303964

  10. Development of a Compact Eleven Feed Cryostat for the Patriot 12-m Antenna System

    NASA Technical Reports Server (NTRS)

    Beaudoin, Christopher; Kildal, Per-Simon; Yang, Jian; Pantaleev, Miroslav

    2010-01-01

    The Eleven antenna has constant beam width, constant phase center location, and low spillover over a decade bandwidth. Therefore, it can feed a reflector for high aperture efficiency (also called feed efficiency). It is equally important that the feed efficiency and its subefficiencies not be degraded significantly by installing the feed in a cryostat. The MIT Haystack Observatory, with guidance from Onsala Space Observatory and Chalmers University, has been working to integrate the Eleven antenna into a compact cryostat suitable for the Patriot 12-m antenna. Since the analysis of the feed efficiencies in this presentation is purely computational, we first demonstrate the validity of the computed results by comparing them to measurements. Subsequently, we analyze the dependence of the cryostat size on the feed efficiencies, and, lastly, the Patriot 12-m subreflector is incorporated into the computational model to assess the overall broadband efficiency of the antenna system.

  11. Phosphor chessboard packaging for white LEDs in high efficiency and high color performance

    NASA Astrophysics Data System (ADS)

    Nguyen, Quang-Khoi; Chang, Yu-Yu; Lu, Chun-Yan; Yang, Tsung-Hsun; Chung, Te-Yuan; Sun, Ching-Cherng

    2016-09-01

    We simulated white LED packages in which a white-light-converting phosphor layer with a chessboard structure covers a GaN die, comparing three chessboard structures referred to as types 1, 2 and 3. The simulations show that as the phosphor layer thickness increases, the output blue light power decreases, while the yellow light output changes negligibly. Type 3 achieves the highest packaging efficiency, 74.3%, compared with 72.5% for type 2 and 71.3% for type 1, and it also directs the most light forward. Notably, the type 3 chessboard structure reaches its 74.3% packaging efficiency at a daylight color temperature while substantially reducing the amount of phosphor required. The color temperatures of all three chessboard structures are above 5000 K, making them suitable for lighting applications. The angular correlated color temperature deviations (ACCTD) of types 1, 2 and 3 are 6500 K, 11500 K and 17000 K, respectively.

  12. The efficiency of asset management strategies to reduce urban flood risk.

    PubMed

    ten Veldhuis, J A E; Clemens, F H L R

    2011-01-01

    In this study, three asset management strategies were compared with respect to their efficiency in reducing flood risk. Data from call centres at two municipalities were used to quantify urban flood risks associated with three causes of urban flooding: gully pot blockage, sewer pipe blockage and sewer overloading. The efficiency of three flood reduction strategies was assessed based on their effect on the causes contributing to flood risk. The sensitivity of the results to uncertainty in the data source, citizens' calls, was analysed by incorporating uncertainty ranges taken from the customer complaint literature. Based on the available data, it could be shown that reducing gully pot blockage is the most efficient action to reduce flood risk, given data uncertainty. If differences between cause incidences are large, as in the presented case study, call data are sufficient to decide how flood risk can be most efficiently reduced. According to the results of this analysis, enlargement of sewer pipes is not an efficient strategy to reduce flood risk, because the flood risk associated with sewer overloading is small compared to that of other failure mechanisms.
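
    The risk-quantification step can be caricatured as incident frequency (from call counts) times an assumed consequence weight per cause. Every number and name below is hypothetical, not the municipal data from the study.

```python
def flood_risk_by_cause(call_counts, consequence_weight, years):
    """Toy risk estimate per failure cause: incident frequency derived
    from call-centre counts multiplied by an assumed consequence weight.
    All inputs are hypothetical illustrations."""
    return {cause: (n / years) * consequence_weight[cause]
            for cause, n in call_counts.items()}

calls = {"gully_pot_blockage": 420, "pipe_blockage": 160, "sewer_overloading": 35}
weights = {"gully_pot_blockage": 1.0, "pipe_blockage": 1.5, "sewer_overloading": 2.0}
risk = flood_risk_by_cause(calls, weights, years=7)
print(max(risk, key=risk.get))  # cause contributing the most risk
```

    Comparing strategies then amounts to asking which cause's risk term each strategy reduces, and by how much per unit of cost; when one cause dominates the call record, as in the study, the ranking is robust to the data uncertainty.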

  13. Bismuth Oxysulfide and Its Polymer Nanocomposites for Efficient Purification

    PubMed Central

    Luo, Yidong; Qiao, Lina; Wang, Huanchun; Lan, Shun; Shen, Yang; Lin, Yuanhua; Nan, Cewen

    2018-01-01

    The danger of toxic organic pollutants in both aquatic and air environments calls for high-efficiency purification materials. Herein, nanosheets of the layered bismuth copper oxychalcogenide BiCuSO, which show high photocatalytic activity, were introduced into PVDF (polyvinylidene fluoride). The fibrous membranes provide an easy, efficient, and recyclable way to purify organic pollutants. The physical and photophysical properties of BiCuSO and its polymer composite were characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD), ultraviolet-visible diffuse reflection spectroscopy (DRS), X-ray photoelectron spectroscopy (XPS), and electron paramagnetic resonance (EPR). Photocatalysis of Congo Red reveals that BiCuSO/PVDF shows a superior photocatalytic activity, with a 55% degradation rate in 70 min under visible light. The high photocatalytic activity is attributed to the exposed active {101} facets and the triple vacancy associates VBi‴VO••VBi‴. By engineering the intrinsic defects on the surface of bismuth oxysulfide, high solar-driven photocatalytic activity can be achieved. The successful fabrication of the bismuth oxysulfide and its polymer nanocomposites provides an easy and general approach to high-performance purification materials for various applications. PMID:29562701

  14. Evolution of Automotive Chopper Circuits Towards Ultra High Efficiency and Power Density

    NASA Astrophysics Data System (ADS)

    Pavlovsky, Martin; Tsuruta, Yukinori; Kawamura, Atsuo

    The automotive industry is considered one of the main contributors to environmental pollution and global warming. Therefore, many car manufacturers are planning to introduce hybrid electric vehicles (HEV), fuel cell electric vehicles (FCEV) and pure electric vehicles (EV) in the near future to make cars more environmentally friendly. These new vehicles require highly efficient and small power converters. In recent years, considerable improvements have been made in designing such converters. In this paper, an approach based on the so-called Snubber Assisted Zero Voltage and Zero Current Switching (SAZZ) topology is presented. This topology has evolved to be one of the leaders in the field of highly efficient converters with high power densities. The evolution and main features of this topology are briefly discussed. The capabilities of the topology are demonstrated on two case-study prototypes based on different design approaches. The prototypes are designed to be fully bi-directional for a peak power output of 30 kW. Both designs reached efficiencies close to 99% over a wide load range. Power densities over 40 kW/litre are attainable at the same time. The combination of MOSFET technology and the SAZZ topology is shown to be very beneficial for converters designed for EV applications.

  15. How nonlinear optics can merge interferometry for high resolution imaging

    NASA Astrophysics Data System (ADS)

    Ceus, D.; Reynaud, F.; Tonello, A.; Delage, L.; Grossard, L.

    2017-11-01

    High resolution stellar interferometers are very powerful instruments for gaining a better knowledge of our Universe through the spatial coherence analysis of light. For this purpose, the optical fields collected by each telescope Ti are mixed together. From the interferometric pattern, two quantities are extracted: the contrast Cij and the phase φij. These lead to the complex visibility Vij, with Vij = Cij exp(jφij). For each telescope doublet TiTj, it is possible to get a complex visibility Vij. The Van Cittert–Zernike theorem gives a relationship between the intensity distribution of the observed object and the complex visibility. The combination of the acquired complex visibilities and a reconstruction algorithm allows image reconstruction. To avoid many technical difficulties related to infrared optics (component transmission, thermal noise, thermal cooling…), our team proposes to explore the possibility of using nonlinear optical techniques. This is a promising alternative technique for detecting infrared optical signals. In this way, we experimentally demonstrate that frequency conversion does not introduce additional bias into the interferometric data supplied by a stellar interferometer. In this presentation, we report on wavelength conversion of the light collected by each telescope from the infrared domain to the visible. The interferometric pattern is observed in the visible domain with our so-called upconversion interferometer. Thereby, one can benefit from mature optical components mainly used in optical telecommunications (waveguides, couplers, multiplexers…) and efficient low-noise detection schemes up to the single-photon counting level.
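    The complex-visibility relation quoted in this abstract, Vij = Cij exp(jφij), can be sketched numerically. This is a minimal illustration with made-up contrast and phase values, not the instrument's actual data:

    ```python
    import numpy as np

    # Hypothetical measurements for one telescope pair (i, j):
    C_ij = 0.8            # fringe contrast, between 0 and 1
    phi_ij = np.pi / 6    # fringe phase, in radians

    # The complex visibility combines both quantities into one number.
    V_ij = C_ij * np.exp(1j * phi_ij)

    # The modulus and argument recover the contrast and phase.
    print(abs(V_ij), np.angle(V_ij))
    ```

    Per the Van Cittert–Zernike theorem, a set of such Vij values over many telescope pairs samples the Fourier transform of the source's intensity distribution, which is what the reconstruction algorithm inverts.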

  16. Efficient biprediction decision scheme for fast high efficiency video coding encoding

    NASA Astrophysics Data System (ADS)

    Park, Sang-hyo; Lee, Seung-ho; Jang, Euee S.; Jun, Dongsan; Kang, Jung-Won

    2016-11-01

    An efficient biprediction decision scheme for high efficiency video coding (HEVC) is proposed for fast-encoding applications. For low-delay video applications, bidirectional prediction can be used to increase compression performance efficiently with previous reference frames. However, at the same time, the computational complexity of the HEVC encoder is significantly increased due to the additional biprediction search. Although some research has attempted to reduce this complexity, whether biprediction is strongly related to both motion complexity and prediction modes in a coding unit has not yet been investigated. A method that avoids most compression-inefficient search points is proposed so that the computational complexity of the motion estimation process can be dramatically decreased. To determine whether biprediction is critical, the proposed method exploits the stochastic correlation of the context of prediction units (PUs): the direction of a PU and the accuracy of a motion vector. Experimental results showed that the time complexity of biprediction can be reduced to 30% on average, outperforming existing methods in terms of encoding time, number of function calls, and memory access.

  17. Using learning automata to determine proper subset size in high-dimensional spaces

    NASA Astrophysics Data System (ADS)

    Seyyedi, Seyyed Hossein; Minaei-Bidgoli, Behrouz

    2017-03-01

    In this paper, we offer a new method called FSLA (Finding the best candidate Subset using Learning Automata), which combines the filter and wrapper approaches for feature selection in high-dimensional spaces. Considering the difficulties of dimension reduction in high-dimensional spaces, FSLA's multi-objective goal is to determine, in an efficient manner, a feature subset that leads to an appropriate tradeoff between the learning algorithm's accuracy and efficiency. First, using an existing weighting function, the feature list is sorted and subsets of different sizes selected from the list are considered. Then, a learning automaton verifies the performance of each subset when it is used as the input space of the learning algorithm and estimates its fitness based on the algorithm's accuracy and the subset size, which determines the algorithm's efficiency. Finally, FSLA introduces the fittest subset as the best choice. We tested FSLA in the framework of text classification. The results confirm its promising performance in attaining the identified goal.

  18. Efficient system interrupt concept design at the microprogramming level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fakharzadeh, M.M.

    1989-01-01

    Over the past decade the demand for high-speed super microcomputers has increased tremendously. To satisfy this demand many high-speed 32-bit microcomputers have been designed. However, the currently available 32-bit systems do not provide an adequate solution to many highly demanding problems, such as multitasking and interrupt-driven applications, both of which require context switching. Systems for these purposes usually incorporate sophisticated software. In order to be efficient, a high-end microprocessor-based system must satisfy stringent software demands. Although these microprocessors use the latest fabrication technology and run at very high speed, they still suffer from insufficient hardware support for such applications. All too often, this lack is also the primary cause of execution inefficiency. In this dissertation a micro-programmable control unit and operation unit are considered in an advanced design. An automaton controller is designed for high-speed micro-level interrupt handling. Different stack models are designed for single-task and multitasking environments. The stacks are used for storage of various components of the processor state during interrupt calls, procedure calls, and task switching. A universal (as an example, seven-port) register file is designed for high-speed parameter passing and intertask communication in the multitasking environment. In addition, the register file provides a direct path between the ALU and peripheral data, which is important in real-time control applications. The overall system is a highly parallel architecture, with no pipeline or internal cache memory, which allows the designer to predict the processor's behavior during critical times.

  19. Low-loss and energy efficient modulation in silicon photonic waveguides by adiabatic elimination scheme

    NASA Astrophysics Data System (ADS)

    Mrejen, Michael; Suchowski, Haim; Bachelard, Nicolas; Wang, Yuan; Zhang, Xiang

    2017-07-01

    High-speed Silicon Photonics calls for solutions providing a small footprint, high density, and minimum crosstalk, as exemplified by the recent development of integrated optical modulators. Yet, the performance of such modulators is hindered by intrinsic material losses, which result in low energy efficiency. Using the concept of Adiabatic Elimination, here we introduce a scheme allowing for low-loss modulation in densely packed waveguides. Our system is composed of two waveguides whose coupling is mediated by an intermediate third waveguide. The signal is carried by the two outer modes, while the active control of their coupling is achieved via the intermediate dark mode. The modulation is performed by manipulating the central-waveguide mode index, leaving the signal-carrying waveguides unaffected by the loss. We discuss how Adiabatic Elimination provides a solution for mitigating signal losses and designing relatively compact, broadband, and energy-efficient integrated optical modulators.

  20. A series connection architecture for large-area organic photovoltaic modules with a 7.5% module efficiency.

    PubMed

    Hong, Soonil; Kang, Hongkyu; Kim, Geunjin; Lee, Seongyu; Kim, Seok; Lee, Jong-Hoon; Lee, Jinho; Yi, Minjin; Kim, Junghwan; Back, Hyungcheol; Kim, Jae-Ryoung; Lee, Kwanghee

    2016-01-05

    The fabrication of organic photovoltaic modules via printing techniques has been the greatest challenge for their commercial manufacture. The current module architecture, which is based on a monolithic geometry consisting of serially interconnected stripe-patterned subcells with finite widths, requires highly sophisticated patterning processes that significantly increase the complexity of printing production lines and cause serious reductions in module efficiency due to so-called aperture loss in the series connection regions. Herein we demonstrate an innovative module structure that can simultaneously reduce both patterning processes and aperture loss. By using a charge recombination feature that occurs at contacts between electron- and hole-transport layers, we devise a series connection method that facilitates module fabrication without patterning the charge transport layers. With the successive deposition of component layers using slot-die and doctor-blade printing techniques, we achieve a high module efficiency reaching 7.5% with an area of 4.15 cm².

  1. JPRS Report, China.

    DTIC Science & Technology

    1988-06-24

    departments incapable of managing social economic development in Zhejiang. The broad introduction of the personal responsibility system and other... SOCIAL Wang Feng Addresses Conference on Civil Disputes, Commends Mediation Work [FAZHIBAO 14 Jul] 43 Educational Spending per High School Student Tar...RIBAO 11 Jul] 46 Zhejiang Governor Shen Zulun Calls for Personal, Administrative Efficiency [ZHEJIANG RIBAO 22 Jul] 46 HONG KONG, MACAO Li Yu

  2. An Investigation of the Effects of Opportunities to Respond and Intelligence on Sight Word Retention Using Incremental Rehearsal

    ERIC Educational Resources Information Center

    Johnson, Kade Ryan

    2012-01-01

    High opportunities to respond (OTR) have been touted as being a key factor in a popular and effective drill procedure called incremental rehearsal (IR). However, IR has also been criticized because it takes more instructional time than other drill procedures and can be less time efficient. The current study compared the effectiveness and…

  3. Ambient noise induces independent shifts in call frequency and amplitude within the Lombard effect in echolocating bats

    PubMed Central

    Hage, Steffen R.; Jiang, Tinglei; Berquist, Sean W.; Feng, Jiang; Metzner, Walter

    2013-01-01

    The Lombard effect, an involuntary rise in call amplitude in response to masking ambient noise, represents one of the most efficient mechanisms to optimize signal-to-noise ratio. The Lombard effect occurs in birds and mammals, including humans, and is often associated with several other vocal changes, such as call frequency and duration. Most studies, however, have focused on noise-dependent changes in call amplitude. It is therefore still largely unknown how the adaptive changes in call amplitude relate to associated vocal changes such as frequency shifts, how the underlying mechanisms are linked, and if auditory feedback from the changing vocal output is needed. Here, we examined the Lombard effect and the associated changes in call frequency in a highly vocal mammal, echolocating horseshoe bats. We analyzed how bandpass-filtered noise (BFN; bandwidth 20 kHz) affected their echolocation behavior when BFN was centered on different frequencies within their hearing range. Call amplitudes increased only when BFN was centered on the dominant frequency component of the bats’ calls. In contrast, call frequencies increased for all but one BFN center frequency tested. Both amplitude and frequency rises were extremely fast and occurred in the first call uttered after noise onset, suggesting that no auditory feedback was required. The different effects that varying the BFN center frequency had on amplitude and frequency rises indicate different neural circuits and/or mechanisms underlying these changes. PMID:23431172

  4. The Productivity Analysis of Chennai Automotive Industry Cluster

    NASA Astrophysics Data System (ADS)

    Bhaskaran, E.

    2014-07-01

    Chennai, also called the Detroit of India, is India's second fastest growing auto market and exports auto components and vehicles to the US, Germany, Japan and Brazil. For inclusive growth and sustainable development, 250 auto component industries in the Ambattur, Thirumalisai and Thirumudivakkam Industrial Estates located in Chennai have adopted the Cluster Development Approach, called the Automotive Component Cluster. The objective is to study the value chain, correlation and data envelopment analysis by determining the technical efficiency, peer weights, and input and output slacks of 100 auto component industries in the three estates. The methodology adopted is Data Envelopment Analysis with the output-oriented Banker–Charnes–Cooper model, taking net worth, fixed assets and employment as inputs and gross output as the output. Non-zero values represent the weights of efficient peers. Higher slacks reveal excess net worth, fixed assets or employment and shortage in gross output. To conclude, the variables are highly correlated, and the inefficient industries should increase their gross output or decrease their fixed assets or employment. Moreover, for sustainable development, the cluster should strengthen infrastructure, technology, procurement, production and marketing interrelationships to decrease costs and to increase productivity and efficiency to compete in the indigenous and export markets.
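    The output-oriented Banker–Charnes–Cooper model named in this abstract can be written as a small linear program. The sketch below uses made-up data for three hypothetical firms (one input, one output), not the study's 100 industries, and assumes SciPy is available:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy data (hypothetical): one input and one output per DMU.
    x = np.array([2.0, 4.0, 6.0])   # inputs (e.g. fixed assets)
    y = np.array([2.0, 5.0, 4.0])   # outputs (e.g. gross output)
    k = 2                           # evaluate DMU 2 (x=6, y=4)

    # Output-oriented BCC: maximize theta subject to
    #   sum_j lam_j * x_j <= x_k          (input constraint)
    #   sum_j lam_j * y_j >= theta * y_k  (output constraint)
    #   sum_j lam_j = 1, lam_j >= 0       (variable returns to scale)
    # Decision vector: [theta, lam_0, lam_1, lam_2]; linprog minimizes,
    # so the objective is -theta.
    c = np.array([-1.0, 0.0, 0.0, 0.0])
    A_ub = np.array([
        [0.0, *x],        # lam . x <= x_k
        [y[k], *(-y)],    # theta*y_k - lam . y <= 0
    ])
    b_ub = np.array([x[k], 0.0])
    A_eq = np.array([[0.0, 1.0, 1.0, 1.0]])  # convexity constraint
    b_eq = np.array([1.0])

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 4)
    theta = res.x[0]
    print(theta)  # theta > 1 means the DMU could expand output: inefficient
    ```

    The non-zero lambda values in `res.x[1:]` are the peer weights the abstract refers to, and slack in the input constraint corresponds to the excess input the analysis reports.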

  5. Superficially Porous Particles with 1000 Å Pores for Large Biomolecule High Performance Liquid Chromatography and Polymer Size Exclusion Chromatography

    PubMed Central

    Wagner, Brian M.; Schuster, Stephanie A.; Boyes, Barry E.; Shields, Taylor J.; Miles, William L.; Haynes, Mark J.; Moran, Robert E.; Kirkland, Joseph J.; Schure, Mark R.

    2017-01-01

    To facilitate mass transport and column efficiency, solutes must have free access to particle pores to facilitate interactions with the stationary phase. To ensure this feature, particles should be used for HPLC separations which have pores sufficiently large to accommodate the solute without restricted diffusion. This paper describes the design and properties of superficially porous (also called Fused-Core®, core shell or porous shell) particles with very large (1000 Å) pores specifically developed for separating very large biomolecules and polymers. Separations of DNA fragments, monoclonal antibodies, large proteins and large polystyrene standards are used to illustrate the utility of these particles for efficient, high-resolution applications. PMID:28213987

  6. Superficially porous particles with 1000Å pores for large biomolecule high performance liquid chromatography and polymer size exclusion chromatography.

    PubMed

    Wagner, Brian M; Schuster, Stephanie A; Boyes, Barry E; Shields, Taylor J; Miles, William L; Haynes, Mark J; Moran, Robert E; Kirkland, Joseph J; Schure, Mark R

    2017-03-17

    To facilitate mass transport and column efficiency, solutes must have free access to particle pores to facilitate interactions with the stationary phase. To ensure this feature, particles should be used for HPLC separations which have pores sufficiently large to accommodate the solute without restricted diffusion. This paper describes the design and properties of superficially porous (also called Fused-Core®, core shell or porous shell) particles with very large (1000Å) pores specifically developed for separating very large biomolecules and polymers. Separations of DNA fragments, monoclonal antibodies, large proteins and large polystyrene standards are used to illustrate the utility of these particles for efficient, high-resolution applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Development of a Low-Lift Chiller Controller and Simplified Precooling Control Algorithm - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gayeski, N.; Armstrong, Peter; Alvira, M.

    2011-11-30

    KGS Buildings LLC (KGS) and Pacific Northwest National Laboratory (PNNL) have developed a simplified control algorithm and prototype low-lift chiller controller suitable for model-predictive control in a demonstration project of low-lift cooling. Low-lift cooling is a highly efficient cooling strategy conceived to enable low or net-zero energy buildings. A low-lift cooling system consists of a high efficiency low-lift chiller, radiant cooling, thermal storage, and model-predictive control to pre-cool thermal storage overnight on an optimal cooling rate trajectory. We call the properly integrated and controlled combination of these elements a low-lift cooling system (LLCS). This document is the final report for that project.

  8. A new efficient mixture screening design for optimization of media.

    PubMed

    Rispoli, Fred; Shah, Vishal

    2009-01-01

    Screening ingredients for the optimization of media is an important first step to reduce the many potential ingredients down to the vital few components. In this study, we propose a new method of screening for mixture experiments called the centroid screening design. Comparison of the proposed design with Plackett-Burman, fractional factorial, simplex lattice, and modified mixture designs shows that the centroid screening design is the most efficient of all the designs in terms of the number of experimental runs needed and its ability to detect high-order interactions among ingredients. (c) 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009.

  9. The future of almanac services --- an HMNAO perspective

    NASA Astrophysics Data System (ADS)

    Bell, S.; Nelmes, S.; Prema, P.; Whittaker, J.

    2015-08-01

    This talk will explore the means for delivering almanac data currently under consideration by HM Nautical Almanac Office in the near to medium term. While there will be a need to continue producing printed almanacs, almanac data must be available in a variety of forms, ranging from paper almanacs to traditional web services through to applications for mobile devices and smartphones. The supply of data using applications may call for a different philosophy in supplying ephemeris data, one that differentiates between an application that calls on a web server for its data and one that has built-in ephemerides. These ephemerides need to be of reasonably high precision while maintaining a modest machine footprint. These services also need to support a wide range of applications, ranging from traditional sunrise/set data through to more specialized services such as celestial navigation. The work necessary to meet these goals involves efficient programming, intuitive user interfaces, compact and efficient ephemerides and a suitable range of tools to meet the user's needs.

  10. Efficient identification and referral of low-income women at high risk for hereditary breast cancer: a practice-based approach.

    PubMed

    Joseph, G; Kaplan, C; Luce, J; Lee, R; Stewart, S; Guerra, C; Pasick, R

    2012-01-01

    Identification of low-income women with the rare but serious risk of hereditary cancer and their referral to appropriate services presents an important public health challenge. We report the results of formative research to reach thousands of women for efficient identification of those at high risk and expedient access to free genetic services. External validity is maximized by emphasizing intervention fit with the two end-user organizations who must connect to make this possible. This study phase informed the design of a subsequent randomized controlled trial. We conducted a randomized controlled pilot study (n = 38) to compare two intervention models for feasibility and impact. The main outcome was receipt of genetic counseling during a two-month intervention period. Model 1 was based on the usual outcall protocol of an academic hospital genetic risk program, and Model 2 drew on the screening and referral procedures of a statewide toll-free phone line through which large numbers of high-risk women can be identified. In Model 1, the risk program proactively calls patients to schedule genetic counseling; for Model 2, women are notified of their eligibility for counseling and make the call themselves. We also developed and pretested a family history screener for administration by phone to identify women appropriate for genetic counseling. There was no statistically significant difference in receipt of genetic counseling between women randomized to Model 1 (3/18) compared with Model 2 (3/20) during the intervention period. However, when unresponsive women in Model 2 were called after 2 months, 7 more obtained counseling; 4 women from Model 1 were also counseled after the intervention. Thus, the intervention model that closely aligned with the risk program's outcall to high-risk women was found to be feasible and brought more low-income women to free genetic counseling. 
Our screener was easy to administer by phone and appeared to identify high-risk callers effectively. The model and screener are now in use in the main trial to test the effectiveness of this screening and referral intervention. A validation analysis of the screener is also underway. Identification of intervention strategies and tools, and their systematic comparison for impact and efficiency in the context where they will ultimately be used are critical elements of practice-based research. Copyright © 2012 S. Karger AG, Basel.

  11. Efficient massively parallel simulation of dynamic channel assignment schemes for wireless cellular communications

    NASA Technical Reports Server (NTRS)

    Greenberg, Albert G.; Lubachevsky, Boris D.; Nicol, David M.; Wright, Paul E.

    1994-01-01

    Fast, efficient parallel algorithms are presented for discrete event simulations of dynamic channel assignment schemes for wireless cellular communication networks. The driving events are call arrivals and departures, in continuous time, to cells geographically distributed across the service area. A dynamic channel assignment scheme decides which call arrivals to accept, and which channels to allocate to the accepted calls, attempting to minimize call blocking while ensuring co-channel interference is tolerably low. Specifically, the scheme ensures that the same channel is used concurrently at different cells only if the pairwise distances between those cells are sufficiently large. Much of the complexity of the system comes from ensuring this separation. The network is modeled as a system of interacting continuous time automata, each corresponding to a cell. To simulate the model, conservative methods are used; i.e., methods in which no errors occur in the course of the simulation and so no rollback or relaxation is needed. Implemented on a 16K processor MasPar MP-1, an elegant and simple technique provides speedups of about 15 times over an optimized serial simulation running on a high speed workstation. A drawback of this technique, typical of conservative methods, is that processor utilization is rather low. To overcome this, new methods were developed that exploit slackness in event dependencies over short intervals of time, thereby raising the utilization to above 50 percent and the speedup over the optimized serial code to about 120 times.

  12. High-sensitivity HLA typing by Saturated Tiling Capture Sequencing (STC-Seq).

    PubMed

    Jiao, Yang; Li, Ran; Wu, Chao; Ding, Yibin; Liu, Yanning; Jia, Danmei; Wang, Lifeng; Xu, Xiang; Zhu, Jing; Zheng, Min; Jia, Junling

    2018-01-15

    Highly polymorphic human leukocyte antigen (HLA) genes are responsible for fine-tuning the adaptive immune system. High-resolution HLA typing is important for the treatment of autoimmune and infectious diseases. Additionally, it is routinely performed for identifying matched donors in transplantation medicine. Although many HLA typing approaches have been developed, the complexity, low efficiency and high cost of current HLA-typing assays limit their application in population-based high-throughput HLA typing for donors, which is required for creating large-scale databases for transplantation and precision medicine. Here, we present a cost-efficient Saturated Tiling Capture Sequencing (STC-Seq) approach to capturing 14 HLA class I and II genes. The highly efficient capture (an approximately 23,000-fold enrichment) of these genes allows for simplified allele calling. Tests on five genes (HLA-A/B/C/DRB1/DQB1) from 31 human samples and 351 datasets using STC-Seq showed results that were 98% consistent with the known two-field (field 1 and field 2) genotypes. Additionally, STC can capture genomic DNA fragments longer than 3 kb from HLA loci, making the library compatible with third-generation sequencing. STC-Seq is a highly accurate and cost-efficient method for HLA typing which can be used to facilitate the establishment of population-based HLA databases for precision and transplantation medicine.

  13. Dynamic Call Admission Control Scheme Based on Predictive User Mobility Behavior for Cellular Networks

    NASA Astrophysics Data System (ADS)

    Intarasothonchun, Silada; Thipchaksurat, Sakchai; Varakulsiripunth, Ruttikorn; Onozato, Yoshikuni

    In this paper, we propose a modified scheme of MSODB and PMS, called Predictive User Mobility Behavior (PUMB), to improve the performance of resource reservation and call admission control for cellular networks. In this algorithm, bandwidth is allocated more efficiently to neighboring cells based on key mobility parameters in order to provide QoS guarantees for transferring traffic. Probability is used to form a cluster of cells, the shadow cluster, which a mobile unit is likely to visit. When a mobile unit changes direction and migrates to a cell that does not belong to its shadow cluster, we can support it by making efficient use of predicted nonconforming calls. Concomitantly, to ensure continuity of on-going calls with better utilization of resources, bandwidth is borrowed from predicted nonconforming calls and existing adaptive calls without affecting the minimum QoS guarantees. The performance of PUMB is demonstrated by simulation results in terms of new call blocking probability, handoff call dropping probability, bandwidth utilization, call success probability, and overhead message transmission when the arrival rate and speed of mobile units are varied. Our results show that PUMB performs better than MSODB and PMS under different traffic conditions.

  14. The design of an air-cooled metallic high temperature radial turbine

    NASA Technical Reports Server (NTRS)

    Snyder, Philip H.; Roelke, Richard J.

    1988-01-01

    Recent trends in small advanced gas turbine engines call for higher turbine inlet temperatures. Advances in radial turbine technology have opened the way for a cooled metallic radial turbine capable of withstanding turbine inlet temperatures of 2500 F while meeting the challenge of high efficiency in this small flow size range. In response to this need, a small air-cooled radial turbine has been designed utilizing internal blade coolant passages. The coolant flow passage design is uniquely tailored to simultaneously meet rotor cooling needs and rotor fabrication constraints. The rotor flow-path design seeks to realize improved aerodynamic blade loading characteristics and high efficiency while satisfying rotor life requirements. An up-scaled version of the final engine rotor is currently under fabrication and, after instrumentation, will be tested in the warm turbine test facility at the NASA Lewis Research Center.

  15. Characterisation of Geiger-mode avalanche photodiodes for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Britvitch, I.; Johnson, I.; Renker, D.; Stoykov, A.; Lorenz, E.

    2007-02-01

    Recently developed multipixel Geiger-mode avalanche photodiodes (G-APDs) are very promising candidates for the detection of light in medical imaging instruments (e.g. positron emission tomography) as well as in high-energy physics experiments and astrophysical applications. G-APDs are especially well suited for morpho-functional imaging (multimodality PET/CT, SPECT/CT, PET/MRI, SPECT/MRI). G-APDs have many advantages over conventional photosensors such as photomultiplier tubes because of their compact size, low power consumption, high quantum efficiency and insensitivity to magnetic fields. Compared to avalanche photodiodes and PIN diodes, they are advantageous because of their high gain, reduced sensitivity to pickup and to the so-called nuclear counter effect, and lower noise. We present measurements of the basic G-APD characteristics: photon detection efficiency, gain, inter-cell crosstalk, dynamic range, recovery time and dark count rate.

  16. Innovative use of global navigation satellite systems for flight inspection

    NASA Astrophysics Data System (ADS)

    Kim, Eui-Ho

    The International Civil Aviation Organization (ICAO) mandates flight inspection in every country to provide safety during flight operations. Among the many criteria of flight inspection, airborne inspection of Instrument Landing Systems (ILS) is very important because the ILS is the primary landing guidance system worldwide. During flight inspection of the ILS, accuracy in ILS landing guidance is checked by using a Flight Inspection System (FIS). Therefore, a flight inspection system must have high accuracy in its positioning capability to detect any deviation, so that accurate guidance of the ILS can be maintained. Currently, there are two Automated Flight Inspection Systems (AFIS): one is called Inertial-based AFIS, and the other is called Differential GPS-based (DGPS-based) AFIS. The Inertial-based AFIS enables efficient flight inspection procedures, but its drawback is high cost because it requires a navigation-grade Inertial Navigation System (INS). On the other hand, the DGPS-based AFIS has relatively low cost, but its flight inspection procedures require landing and setting up a reference receiver. Most countries use one of the two systems based on their own preferences. There are around 1200 ILS in the U.S., and each ILS must be inspected every 6 to 9 months. Therefore, it is important to manage the airborne inspection of the ILS in a very efficient manner. For this reason, the Federal Aviation Administration (FAA) mainly uses the Inertial-based AFIS, which has better efficiency than the DGPS-based AFIS in spite of its high cost. Obviously, the FAA spends tremendous resources on flight inspection. This thesis investigates the value of GPS, and of the FAA's augmentation to GPS for civil aviation called the Wide Area Augmentation System (WAAS), for flight inspection. 
Because standard GPS or WAAS position outputs cannot meet the required accuracy for flight inspection, in this thesis, various algorithms are developed to improve the positioning ability of Flight Inspection Systems (FIS) by using GPS and WAAS in novel manners. The algorithms include Adaptive Carrier Smoothing (ACS), optimizing WAAS accuracy and stability, and reference point-based precise relative positioning for real-time and near-real-time applications. The developed systems are WAAS-aided FIS, WAAS-based FIS, and stand-alone GPS-based FIS. These systems offer both high efficiency and low cost, and they have different advantages over one another in terms of accuracy, integrity, and worldwide availability. The performance of each system is tested with experimental flight test data and shown to have accuracy that is sufficient for flight inspection and superior to the current Inertial-based AFIS.
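    The carrier-smoothing idea behind these algorithms can be illustrated with the classical Hatch filter, a minimal generic sketch (the thesis's Adaptive Carrier Smoothing extends this idea; function and variable names here are illustrative):

```python
def hatch_filter(code_ranges, carrier_phases, window=100):
    """Classical Hatch filter: blend noisy code pseudoranges with the
    much less noisy (but ambiguous) carrier-phase increments."""
    smoothed = [code_ranges[0]]
    for k in range(1, len(code_ranges)):
        n = min(k + 1, window)                      # growing, capped window
        delta = carrier_phases[k] - carrier_phases[k - 1]
        predicted = smoothed[-1] + delta            # propagate with carrier
        smoothed.append(predicted + (code_ranges[k] - predicted) / n)
    return smoothed
```

The window cap limits the influence of carrier-phase drift (e.g. ionospheric divergence), which is one of the issues an adaptive scheme must manage.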

  17. Distributed Efficient Similarity Search Mechanism in Wireless Sensor Networks

    PubMed Central

    Ahmed, Khandakar; Gregory, Mark A.

    2015-01-01

    The Wireless Sensor Network similarity search problem has received considerable research attention due to sensor hardware imprecision and environmental parameter variations. Most state-of-the-art distributed data centric storage (DCS) schemes lack optimization for similarity queries of events. In this paper, a DCS scheme with metric-based similarity searching (DCSMSS) is proposed. DCSMSS draws its motivation from a vector distance index called iDistance, which transforms the problem of similarity searching into an interval search in one dimension. In addition, a sector-based distance routing algorithm is used to route messages efficiently. Extensive simulation results reveal that DCSMSS is highly efficient and significantly outperforms previous approaches in processing similarity search queries. PMID:25751081

  18. Using an integrated information system to reduce interruptions and the number of non-relevant contacts in the inpatient pharmacy at tertiary hospital.

    PubMed

    Binobaid, Saleh; Almeziny, Mohammed; Fan, Ip-Shing

    2017-07-01

    Patient care is provided by a multidisciplinary team of healthcare professionals with the aim of high-quality and safe patient care. Accordingly, the team must work synergistically and communicate efficiently. In many hospitals, nursing and pharmacy communication relies mainly on telephone calls. Numerous studies have reported telephone calls as a source of interruption for both pharmacy and nursing operations; the workload therefore increases and the chance of errors rises. This report describes the implementation of an integrated information system intended to reduce telephone calls by providing real-time tracking capabilities and sorting prescriptions by urgency, thus significantly improving the traceability of all prescriptions inside the pharmacy. The research design is based on a quasi-experiment with pre-post testing using the continuous improvement approach. The improvement project was performed using a six-step method. A survey was conducted in Prince Sultan Military Medical City (PSMMC) to measure the volume and types of telephone calls before and after implementation in order to evaluate the impact of the new system. Before the system implementation, during the two-week measurement period, all pharmacies received 4466 calls, the majority of which were follow-up calls. After the integrated system rollout, there was a significant reduction (p < 0.001) in the volume of telephone calls, to 2630 calls; in addition, the nature of the calls shifted toward professional inquiries (p < 0.001). As a result, avoidable interruptions and workload decreased.

  19. Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess; Mount, David M.

    2007-01-01

    Interpolating scattered data points is a problem of wide-ranging interest. A number of approaches to interpolation have been proposed, both from theoretical domains such as computational geometry and from application fields such as geostatistics. Our motivation arises from geological and mining applications. In many instances data can be costly to compute and are available only at nonuniformly scattered positions. Because of the high cost of collecting measurements, high accuracy is required in the interpolants. One of the most popular interpolation methods in this field is called ordinary kriging. It is popular because it is a best linear unbiased estimator. The price for its statistical optimality is that the estimator is computationally very expensive. This is because the value of each interpolant is given by the solution of a large dense linear system. In practice, kriging problems have been solved approximately by restricting the domain to a small local neighborhood of points that lie near the query point. Determining the proper size for this neighborhood is solved by ad hoc methods, and it has been shown that this approach leads to undesirable discontinuities in the interpolant. Recently a more principled approach to approximating kriging has been proposed based on a technique called covariance tapering. This process achieves its efficiency by replacing the large dense kriging system with a much sparser linear system. This technique has previously been applied to a restriction of our problem, called simple kriging, which is not unbiased for general data sets. In this paper we generalize these results by showing how to apply covariance tapering to the more general problem of ordinary kriging. Through experimentation we demonstrate the space and time efficiency and accuracy of approximating ordinary kriging through the use of covariance tapering combined with iterative methods for solving large sparse systems.
We demonstrate our approach on large data sizes arising both from synthetic sources and from real applications.
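    A small dense sketch of ordinary kriging with a compactly supported (Wendland) covariance taper; in the paper's setting the tapered matrix is sparse and solved iteratively, whereas this illustration uses a direct dense solve (the covariance model and taper range are illustrative assumptions):

```python
import numpy as np

def wendland_taper(h, theta):
    # Compactly supported taper: zero for h >= theta, so the tapered
    # covariance matrix becomes sparse on large data sets.
    t = np.minimum(h / theta, 1.0)
    return (1.0 - t) ** 4 * (4.0 * t + 1.0)

def ordinary_kriging(pts, vals, query, cov, theta):
    pts = np.asarray(pts, float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = cov(d) * wendland_taper(d, theta)     # Schur product keeps C valid
    d0 = np.linalg.norm(pts - np.asarray(query, float), axis=-1)
    c0 = cov(d0) * wendland_taper(d0, theta)
    # Ordinary-kriging system with a Lagrange multiplier enforcing sum(w) = 1,
    # which is what makes the estimator unbiased for an unknown constant mean.
    A = np.block([[C, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    b = np.concatenate([c0, [1.0]])
    w = np.linalg.solve(A, b)[:n]
    return float(w @ np.asarray(vals, float))
```

Because the tapered covariance is still valid, kriging remains an exact interpolator: querying at a data location returns that datum.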

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pope, C.S.

    The future of the photovoltaic industry is discussed. The success of a small New Jersey high-technology solar firm, Chronar, is described. The company started a modern, efficient commercial facility for the manufacture of amorphous silicon solar cells with a capacity of 1 megawatt. The batch manufacturing process consists of the deposition of the amorphous silicon layers in a machine called a "6-pack," named for the six identical glow-discharge chambers operated simultaneously by a mini-computer.

  1. Least Reliable Bits Coding (LRBC) for high data rate satellite communications

    NASA Technical Reports Server (NTRS)

    Vanderaar, Mark; Wagner, Paul; Budinger, James

    1992-01-01

    An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
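    The quoted spectral efficiency follows directly from the modulation order and the ensemble code rate; a one-line check (assuming ideal Nyquist signalling):

```python
import math

def spectral_efficiency(m_ary, code_rate):
    """Information bits per second per hertz for M-ary modulation with
    the given overall code rate (ideal Nyquist signalling assumed)."""
    return math.log2(m_ary) * code_rate

# LRB-coded 8PSK with ensemble rate 8/9, as quoted above:
eta = spectral_efficiency(8, 8 / 9)    # log2(8) * (8/9) = 2.67 bps/Hz
```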

  2. Biofilm-Related Infections: Bridging the Gap between Clinical Management and Fundamental Aspects of Recalcitrance toward Antibiotics

    PubMed Central

    Lebeaux, David; Ghigo, Jean-Marc

    2014-01-01

    SUMMARY Surface-associated microbial communities, called biofilms, are present in all environments. Although biofilms play an important positive role in a variety of ecosystems, they also have many negative effects, including biofilm-related infections in medical settings. The ability of pathogenic biofilms to survive in the presence of high concentrations of antibiotics is called “recalcitrance” and is a characteristic property of the biofilm lifestyle, leading to treatment failure and infection recurrence. This review presents our current understanding of the molecular mechanisms of biofilm recalcitrance toward antibiotics and describes how recent progress has improved our capacity to design original and efficient strategies to prevent or eradicate biofilm-related infections. PMID:25184564

  3. Extending green technology innovations to enable greener fabs

    NASA Astrophysics Data System (ADS)

    Takahisa, Kenji; Yoo, Young Sun; Fukuda, Hitomi; Minegishi, Yuji; Enami, Tatsuo

    2015-03-01

    The semiconductor manufacturing industry has growing concerns over future environmental impacts as fabs expand and new generations of equipment become more powerful. In particular, the supply and price of rare gases are prime concerns for the operation of high volume manufacturing (HVM) fabs. Over the past year it has come to our attention that Helium and Neon gas supplies could be unstable and become a threat to HVM fabs. To address these concerns, Gigaphoton has implemented various green technologies under its EcoPhoton program. One of the initiatives is the GigaTwin deep ultraviolet (DUV) lithography laser design, which enables highly efficient and stable operation. Under this design, laser systems run with 50% less electric energy and gas consumption compared to conventional laser designs. In 2014 we developed two technologies to further reduce electric energy and gas consumption. The electric energy reduction technology is called eGRYCOS (enhanced Gigaphoton Recycled Chamber Operation System), and it reduces electric energy by 15% without compromising laser performance. The eGRYCOS system has a sophisticated gas flow design that allows the cross-flow-fan rotation speed to be reduced. The gas reduction technology is called eTGM (enhanced Total Gas Manager); it improves the gas management system by optimizing the gas injection and exhaust amounts based on laser performance, resulting in 50% gas savings. The next steps in our technology roadmap are indicated, and we call for potential partners to work with us based on the OPEN INNOVATION concept to develop faster and better solutions in all possible areas where green innovation may exist.

  4. Efficiency Analysis of Public Universities in Thailand

    ERIC Educational Resources Information Center

    Kantabutra, Saranya; Tang, John C. S.

    2010-01-01

    This paper examines the performance of Thai public universities in terms of efficiency, using a non-parametric approach called data envelopment analysis. Two efficiency models, the teaching efficiency model and the research efficiency model, are developed and the analysis is conducted at the faculty level. Further statistical analyses are also…
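    For illustration, the input-oriented CCR model underlying data envelopment analysis can be written as a small linear program; this generic sketch (not the paper's specific teaching/research models) scores each decision-making unit against the efficient frontier:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0 (envelopment form):
    minimise theta s.t. X.T @ lam <= theta * x0, Y.T @ lam >= y0, lam >= 0.
    X: inputs (n_units x n_inputs), Y: outputs (n_units x n_outputs)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n_units = X.shape[0]
    c = np.zeros(n_units + 1)
    c[0] = 1.0                                    # objective: minimise theta
    # Input rows: sum_j lam_j x_ij - theta x_i0 <= 0
    A_inputs = np.hstack([-X[j0][:, None], X.T])
    # Output rows: -sum_j lam_j y_rj <= -y_r0
    A_outputs = np.hstack([np.zeros((Y.shape[1], 1)), -Y.T])
    A_ub = np.vstack([A_inputs, A_outputs])
    b_ub = np.concatenate([np.zeros(X.shape[1]), -Y[j0]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n_units + 1))
    return float(res.x[0])
```

A unit on the frontier scores 1.0; a unit using twice the input of a peer for the same output scores 0.5.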

  5. Inexpensive and Highly Reproducible Cloud-Based Variant Calling of 2,535 Human Genomes

    PubMed Central

    Shringarpure, Suyash S.; Carroll, Andrew; De La Vega, Francisco M.; Bustamante, Carlos D.

    2015-01-01

    Population scale sequencing of whole human genomes is becoming economically feasible; however, data management and analysis remains a formidable challenge for many research groups. Large sequencing studies, like the 1000 Genomes Project, have improved our understanding of human demography and the effect of rare genetic variation in disease. Variant calling on datasets of hundreds or thousands of genomes is time-consuming, expensive, and not easily reproducible given the myriad components of a variant calling pipeline. Here, we describe a cloud-based pipeline for joint variant calling in large samples using the Real Time Genomics population caller. We deployed the population caller on the Amazon cloud with the DNAnexus platform in order to achieve low-cost variant calling. Using our pipeline, we were able to identify 68.3 million variants in 2,535 samples from Phase 3 of the 1000 Genomes Project. By performing the variant calling in a parallel manner, the data was processed within 5 days at a compute cost of $7.33 per sample (a total cost of $18,590 for completed jobs and $21,805 for all jobs). Analysis of cost dependence and running time on the data size suggests that, given near linear scalability, cloud computing can be a cheap and efficient platform for analyzing even larger sequencing studies in the future. PMID:26110529

  6. Integrated on-line system for DNA sequencing by capillary electrophoresis: From template to called bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ton, H.; Yeung, E.S.

    1997-02-15

    An integrated on-line prototype for coupling a microreactor to capillary electrophoresis for DNA sequencing has been demonstrated. A dye-labeled terminator cycle-sequencing reaction is performed in a fused-silica capillary. Subsequently, the sequencing ladder is directly injected into a size-exclusion chromatographic column operated at nearly 95°C for purification. On-line injection to a capillary for electrophoresis is accomplished at a junction set at nearly 70°C. High temperature at the purification column and injection junction prevents the renaturation of DNA fragments during on-line transfer without affecting the separation. The high solubility of DNA in and the relatively low ionic strength of 1x TE buffer permit both effective purification and electrokinetic injection of the DNA sample. The system is compatible with highly efficient separations by a replaceable poly(ethylene oxide) polymer solution in uncoated capillary tubes. Future automation and adaptation to a multiple-capillary array system should allow high-speed, high-throughput DNA sequencing from templates to called bases in one step. 32 refs., 5 figs.

  7. L-shaped fiber-chip grating couplers with high directionality and low reflectivity fabricated with deep-UV lithography.

    PubMed

    Benedikovic, Daniel; Alonso-Ramos, Carlos; Pérez-Galacho, Diego; Guerber, Sylvain; Vakarin, Vladyslav; Marcaud, Guillaume; Le Roux, Xavier; Cassan, Eric; Marris-Morini, Delphine; Cheben, Pavel; Boeuf, Frédéric; Baudot, Charles; Vivien, Laurent

    2017-09-01

    Grating couplers enable position-friendly interfacing of silicon chips by optical fibers. The conventional coupler designs call upon comparatively complex architectures to afford efficient light coupling to sub-micron silicon-on-insulator (SOI) waveguides. Conversely, the blazing effect in double-etched gratings provides high coupling efficiency with reduced fabrication intricacy. In this Letter, we demonstrate for the first time, to the best of our knowledge, the realization of an ultra-directional L-shaped grating coupler, seamlessly fabricated by using 193 nm deep-ultraviolet (deep-UV) lithography. We also include a subwavelength index engineered waveguide-to-grating transition that provides an eight-fold reduction of the grating reflectivity, down to 1% (-20  dB). A measured coupling efficiency of -2.7  dB (54%) is achieved, with a bandwidth of 62 nm. These results open promising prospects for the implementation of efficient, robust, and cost-effective coupling interfaces for sub-micrometric SOI waveguides, as desired for large-volume applications in silicon photonics.

  8. EMPRESS: A European Project to Enhance Process Control Through Improved Temperature Measurement

    NASA Astrophysics Data System (ADS)

    Pearce, J. V.; Edler, F.; Elliott, C. J.; Rosso, L.; Sutton, G.; Andreu, A.; Machin, G.

    2017-08-01

    A new European project called EMPRESS, funded by the EURAMET program `European Metrology Program for Innovation and Research,' is described. The 3 year project, which started in the summer of 2015, is intended to substantially augment the efficiency of high-value manufacturing processes by improving temperature measurement techniques at the point of use. The project consortium has 18 partners and 5 external collaborators, from the metrology sector, high-value manufacturing, sensor manufacturing, and academia. Accurate control of temperature is key to ensuring process efficiency and product consistency and is often not achieved to the level required for modern processes. Enhanced efficiency of processes may take several forms including reduced product rejection/waste; improved energy efficiency; increased intervals between sensor recalibration/maintenance; and increased sensor reliability, i.e., reduced amount of operator intervention. Traceability of temperature measurements to the International Temperature Scale of 1990 (ITS-90) is a critical factor in establishing low measurement uncertainty and reproducible, consistent process control. Introducing such traceability in situ (i.e., within the industrial process) is a theme running through this project.

  9. An approach to efficient mobility management in intelligent networks

    NASA Technical Reports Server (NTRS)

    Murthy, K. M. S.

    1995-01-01

    Providing personal communication systems that support full mobility requires intelligent networks for tracking mobile users and facilitating outgoing and incoming calls over different physical and network environments. Databases play a major role in realizing the intelligent network functionalities. Currently proposed network architectures envision using the SS7-based signaling network for linking these databases and also for interconnecting databases with switches. If the network has to support ubiquitous, seamless mobile services, then it must additionally support mobile application parts, viz., mobile origination calls, mobile destination calls, mobile location updates and inter-switch handovers. These functions will generate a significant amount of data and require it to be transferred between databases (HLR, VLR) and switches (MSCs) very efficiently. In the future, users (fixed or mobile) may use and communicate with sophisticated CPEs (e.g. multimedia, multipoint and multisession calls) which may require complex signaling functions. This will generate voluminous service-handling data and require efficient transfer of these messages between databases and switches. Consequently, network providers would be able to add new services and capabilities to their networks incrementally, quickly and cost-effectively.
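    The HLR/VLR bookkeeping described above can be sketched with a toy in-memory model (class and method names are illustrative, not from any standard):

```python
class MobilityDB:
    """Toy HLR/VLR model: the HLR records which switch area (VLR)
    currently serves each subscriber; call delivery is an HLR lookup
    followed by a confirmation query to that VLR."""

    def __init__(self):
        self.hlr = {}      # subscriber -> serving area
        self.vlr = {}      # area -> set of visiting subscribers

    def location_update(self, subscriber, new_area):
        old_area = self.hlr.get(subscriber)
        if old_area is not None:
            self.vlr[old_area].discard(subscriber)   # cancel old VLR entry
        self.hlr[subscriber] = new_area
        self.vlr.setdefault(new_area, set()).add(subscriber)

    def route_call(self, subscriber):
        area = self.hlr.get(subscriber)              # HLR query
        if area is not None and subscriber in self.vlr.get(area, set()):
            return area                              # deliver via this switch
        return None
```

Every location update and call delivery touches both databases, which is why the abstract stresses efficient database-to-switch signaling transfer.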

  10. Comprehensive benchmarking of SNV callers for highly admixed tumor data

    PubMed Central

    Bohnert, Regina; Vivas, Sonia

    2017-01-01

    Precision medicine attempts to individualize cancer therapy by matching tumor-specific genetic changes with effective targeted therapies. A crucial first step in this process is the reliable identification of cancer-relevant variants, which is considerably complicated by the impurity and heterogeneity of clinical tumor samples. We compared the impact of admixture of non-cancerous cells and low somatic allele frequencies on the sensitivity and precision of 19 state-of-the-art SNV callers. We studied both whole exome and targeted gene panel data and up to 13 distinct parameter configurations for each tool. We found vast differences among callers. Based on our comprehensive analyses we recommend joint tumor-normal calling with MuTect, EBCall or Strelka for whole exome somatic variant calling, and HaplotypeCaller or FreeBayes for whole exome germline calling. For targeted gene panel data on a single tumor sample, LoFreqStar performed best. We further found that tumor impurity and admixture had a negative impact on precision, and in particular, sensitivity in whole exome experiments. At admixture levels of 60% to 90% sometimes seen in pathological biopsies, sensitivity dropped significantly, even when variants were originally present in the tumor at 100% allele frequency. Sensitivity to low-frequency SNVs improved with targeted panel data, but whole exome data allowed more efficient identification of germline variants. Effective somatic variant calling requires high-quality pathological samples with minimal admixture, a consciously selected sequencing strategy, and the appropriate variant calling tool with settings optimized for the chosen type of data. PMID:29020110

  11. A Game Map Complexity Measure Based on Hamming Distance

    NASA Astrophysics Data System (ADS)

    Li, Yan; Su, Pan; Li, Wenliang

    With the booming PC game market, game AI has attracted more and more research. The interest and difficulty of a game are related to the map used in its scenarios. Besides, the path-finding efficiency in a game is also affected by the complexity of the map. In this paper, a novel complexity measure based on Hamming distance, called the Hamming complexity, is introduced. This measure is able to estimate the complexity of a binary tileworld. We experimentally demonstrate that Hamming complexity is highly correlated with the efficiency of the A* algorithm, and it is therefore a useful reference for the designer when developing a game map.
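    One plausible reading of a Hamming-distance map complexity is the average Hamming distance between adjacent rows and columns of a binary tile map; this sketch is an illustrative assumption, not necessarily the paper's exact definition:

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary sequences."""
    return sum(x != y for x, y in zip(a, b))

def map_complexity(grid):
    """Average per-cell Hamming distance between adjacent rows and
    adjacent columns of a binary tile map (0 = walkable, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    row_d = sum(hamming(grid[r], grid[r + 1]) for r in range(rows - 1))
    col_d = sum(hamming([grid[r][c] for r in range(rows)],
                        [grid[r][c + 1] for r in range(rows)])
                for c in range(cols - 1))
    pairs = (rows - 1) * cols + (cols - 1) * rows
    return (row_d + col_d) / pairs
```

A uniform map scores 0, a checkerboard scores 1, and A* tends to expand more nodes as the score rises.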

  12. Interaction sorting method for molecular dynamics on multi-core SIMD CPU architecture.

    PubMed

    Matvienko, Sergey; Alemasov, Nikolay; Fomin, Eduard

    2015-02-01

    Molecular dynamics (MD) is widely used in computational biology for studying binding mechanisms of molecules, molecular transport, conformational transitions, protein folding, etc. The method is computationally expensive; thus, the demand for novel, much more efficient algorithms remains high. The interaction sorting (IS) algorithm, designed in 2007, therefore attracted clear interest, as it outperformed the most efficient MD algorithms. In this work, a new IS modification is proposed which allows the algorithm to utilize SIMD processor instructions. This paper shows that the improvement provides an additional performance gain of 9% to 45% in comparison to the original IS method.

  13. Earth system feedback statistically extracted from the Indian Ocean deep-sea sediments recording Eocene hyperthermals.

    PubMed

    Yasukawa, Kazutaka; Nakamura, Kentaro; Fujinaga, Koichiro; Ikehara, Minoru; Kato, Yasuhiro

    2017-09-12

    Multiple transient global warming events occurred during the early Palaeogene. Although these events, called hyperthermals, have been reported from around the globe, geologic records for the Indian Ocean are limited. In addition, the recovery processes from relatively modest hyperthermals are less constrained than those from the severest and well-studied hothouse called the Palaeocene-Eocene Thermal Maximum. In this study, we constructed a new and high-resolution geochemical dataset of deep-sea sediments clearly recording multiple Eocene hyperthermals in the Indian Ocean. We then statistically analysed the high-dimensional data matrix and extracted independent components corresponding to the biogeochemical responses to the hyperthermals. The productivity feedback commonly controls and efficiently sequesters the excess carbon in the recovery phases of the hyperthermals via an enhanced biological pump, regardless of the magnitude of the events. Meanwhile, this negative feedback is independent of nannoplankton assemblage changes generally recognised in relatively large environmental perturbations.

  14. Halbach array motor/generators: A novel generalized electric machine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merritt, B.T.; Post, R.F.; Dreifuerst, G.R.

    1995-02-01

    For many years Klaus Halbach has been investigating novel designs for permanent magnet arrays, using advanced analytical approaches and employing a keen insight into such systems. One of his motivations for this research was to find more efficient means for the utilization of permanent magnets for use in particle accelerators and in the control of particle beams. As a result of his pioneering work, high power free-electron laser systems, such as the ones built at the Lawrence Livermore Laboratory, became feasible, and his arrays have been incorporated into other particle-focusing systems of various types. This paper reports another, quite different, application of Klaus' work, in the design of high power, high efficiency, electric generators and motors. When tested, these motor/generator systems display some rather remarkable properties. Their success derives from the special properties which these arrays, which the authors choose to call "Halbach arrays," possess.

  15. User-Defined Data Distributions in High-Level Programming Languages

    NASA Technical Reports Server (NTRS)

    Diaconescu, Roxana E.; Zima, Hans P.

    2006-01-01

    One of the characteristic features of today's high performance computing systems is a physically distributed memory. Efficient management of locality is essential for meeting key performance requirements for these architectures. The standard technique for dealing with this issue has involved the extension of traditional sequential programming languages with explicit message passing, in the context of a processor-centric view of parallel computation. This has resulted in complex and error-prone assembly-style codes in which algorithms and communication are inextricably interwoven. This paper presents a high-level approach to the design and implementation of data distributions. Our work is motivated by the need to improve the current parallel programming methodology by introducing a paradigm supporting the development of efficient and reusable parallel code. This approach is currently being implemented in the context of a new programming language called Chapel, which is designed in the HPCS project Cascade.
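    The core idea of a user-defined data distribution is a mapping from a global index to an (owner, local index) pair; a minimal block-distribution sketch in Python (Chapel's actual distribution interface is richer than this):

```python
def block_map(i, n, p):
    """Standard block distribution of n elements over p processors:
    return the (owner processor, local index) pair for global index i."""
    block = -(-n // p)            # ceil(n / p) elements per processor
    return i // block, i % block
```

A user-defined distribution generalizes exactly this mapping, letting the programmer trade locality against load balance without rewriting communication code.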

  16. New amplifying laser concept for inertial fusion driver

    NASA Astrophysics Data System (ADS)

    Mourou, G. A.; Labaune, C.; Hulin, D.; Galvanauskas, A.

    2008-05-01

    This paper presents a new amplifying laser concept designed to produce high energy in either short or long pulses using coherent or incoherent addition of a few million fibers. These are called, respectively, CAN, for Coherent Amplification Network, and FAN, for Fiber Amplification Network. The fibers would be large-core or Large Mode Area (LMA) fibers, which have demonstrated up to 10 mJ of output energy per fiber. Such a system could meet the driver criteria of Inertial Fusion Energy (IFE) power plants based on Inertial Confinement Fusion (ICF), in particular high efficiency and high repetition rate.

  17. A large high-efficiency multi-layered Micromegas thermal neutron detector

    NASA Astrophysics Data System (ADS)

    Tsiledakis, G.; Delbart, A.; Desforge, D.; Giomataris, I.; Menelle, A.; Papaevangelou, T.

    2017-09-01

    Due to the so-called 3He shortage crisis, many detection techniques used nowadays for thermal neutrons are based on alternative converters. Thin films of 10B or 10B4C are used to convert neutrons into ionizing particles which are subsequently detected in gas proportional counters, but only for small or medium sensitive areas so far. The micro-pattern gaseous detector Micromegas has been developed for several years at Saclay and is used in a wide variety of neutron experiments combining high accuracy, high rate capability, excellent timing properties and robustness. We propose here a large high-efficiency Micromegas-based neutron detector with several 10B4C thin layers mounted inside the gas volume for thermal neutron detection. The principle and the fabrication of a single detector unit prototype with an overall dimension of ~15 × 15 cm² and the flexibility to modify the number of layers of 10B4C neutron converters are described, and simulated results are reported, demonstrating that typically five 10B4C layers of 1-2 μm thickness can lead to a detection efficiency of 20-40% for thermal neutrons and a sub-mm spatial resolution. The design is well adapted to large sizes, making possible the construction of a mosaic of several such detector units with large area coverage and high detection efficiency, showing the good potential of this novel technique.
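    The stacking argument can be sketched with a simple series model: if each converter layer captures a fixed fraction of the neutrons reaching it, the stack efficiency is 1 - (1 - p)^N. The ~7% per-layer figure below is an illustrative assumption, not a number from the paper, which accounts for thickness-dependent conversion and escape probabilities:

```python
def stack_efficiency(per_layer, n_layers):
    """Detection efficiency of n identical converter layers in series,
    assuming each layer converts a fraction `per_layer` of the neutrons
    that reach it and all other losses are neglected."""
    return 1.0 - (1.0 - per_layer) ** n_layers

# With an assumed ~7% per 10B4C layer, five layers give roughly 30%,
# inside the 20-40% range quoted for the multi-layer design:
five_layer = stack_efficiency(0.07, 5)
```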

  18. High-order computational fluid dynamics tools for aircraft design

    PubMed Central

    Wang, Z. J.

    2014-01-01

    Most forecasts predict an annual airline traffic growth rate between 4.5 and 5% in the foreseeable future. To sustain that growth, the environmental impact of aircraft cannot be ignored. Future aircraft must have much better fuel economy, dramatically less greenhouse gas emissions and noise, in addition to better performance. Many technical breakthroughs must take place to achieve the aggressive environmental goals set up by governments in North America and Europe. One of these breakthroughs will be physics-based, highly accurate and efficient computational fluid dynamics and aeroacoustics tools capable of predicting complex flows over the entire flight envelope and through an aircraft engine, and computing aircraft noise. Some of these flows are dominated by unsteady vortices of disparate scales, often highly turbulent, and they call for higher-order methods. As these tools will be integral components of a multi-disciplinary optimization environment, they must be efficient to impact design. Ultimately, the accuracy, efficiency, robustness, scalability and geometric flexibility will determine which methods will be adopted in the design process. This article explores these aspects and identifies pacing items. PMID:25024419

  19. Multiplex real-time PCR using temperature sensitive primer-supplying hydrogel particles and its application for malaria species identification

    PubMed Central

    Byoun, Mun Sub; Yoo, Changhoon; Sim, Sang Jun; Lim, Chae Seung; Kim, Sung Woo

    2018-01-01

    Real-time PCR, also called quantitative PCR (qPCR), has been a powerful analytical tool for the detection of nucleic acids since its development. For both biological research and diagnostic needs, qPCR techniques have in recent years required the capacity to detect multiple genes. Solid-phase PCR (SP-PCR), in which one or both directional primers are immobilized on solid substrates, can analyze multiple genetic targets. However, conventional SP-PCR has been limited in application by low PCR efficiency and quantitative resolution. Here we introduce an advanced qPCR with a primer-incorporated network (PIN). Primers in one direction are immobilized in the porous hydrogel particle by covalent bonds, and primers in the other direction are temporarily immobilized in so-called 'Supplimers'. The Supplimers release their primers into the aqueous phase of the hydrogel during PCR thermal cycling. This yielded high PCR efficiency, over 92%, with high reliability. It reduced the formation of primer dimers and improved the selectivity of qPCR thanks to the strategy of 'right primers supplied to the right place only'. By conducting a six-plex qPCR in 30 minutes, we analyzed DNA samples from malaria patients and successfully identified malaria species in a single reaction. PMID:29293604

  20. Metasurface holograms reaching 80% efficiency.

    PubMed

    Zheng, Guoxing; Mühlenbernd, Holger; Kenney, Mitchell; Li, Guixin; Zentgraf, Thomas; Zhang, Shuang

    2015-04-01

    Surfaces covered by ultrathin plasmonic structures--so-called metasurfaces--have recently been shown to be capable of completely controlling the phase of light, representing a new paradigm for the design of innovative optical elements such as ultrathin flat lenses, directional couplers for surface plasmon polaritons and wave plate vortex beam generation. Among the various types of metasurfaces, geometric metasurfaces, which consist of an array of plasmonic nanorods with spatially varying orientations, have shown superior phase control due to the geometric nature of their phase profile. Metasurfaces have recently been used to make computer-generated holograms, but the hologram efficiency remained too low at visible wavelengths for practical purposes. Here, we report the design and realization of a geometric metasurface hologram reaching diffraction efficiencies of 80% at 825 nm and a broad bandwidth between 630 nm and 1,050 nm. The 16-level-phase computer-generated hologram demonstrated here combines the advantages of a geometric metasurface for the superior control of the phase profile and of reflectarrays for achieving high polarization conversion efficiency. Specifically, the design of the hologram integrates a ground metal plane with a geometric metasurface that enhances the conversion efficiency between the two circular polarization states, leading to high diffraction efficiency without complicating the fabrication process. Because of these advantages, our strategy could be viable for various practical holographic applications.

  1. Tn5-Mob transposon mediated transfer of salt tolerance and symbiotic characteristics between Rhizobia genera.

    PubMed

    Yang, S; Wu, Z; Gao, W; Li, J

    1993-01-01

    Rhizobium meliloti 042B is a fast-growing, salt-tolerant and highly efficient nitrogen-fixing symbiont of alfalfa. Bradyrhizobium japonicum USDA110 grows slowly and cannot grow on YMA medium containing 0.1 M NaCl, but nodulates and fixes nitrogen efficiently with soybean. Eighty-six transconjugants, called SR, were obtained by inserting Tn5-Mob randomly into the genome of 042B using pSUP5011 and the helper plasmid RP4. Four SR strains were selected at random and their DNA fragments were introduced into USDA110 with the helper plasmid R68.45 by triparental mating, yielding 106 transconjugants, called BSR. Most BSR strains had the fast-growing phenotype and generally tolerated 0.3-0.5 M NaCl; some of them produced melanin. When soybean and alfalfa were inoculated with these BSR transconjugants, 47 out of 90 BSR strains nodulated both plants, but no nitrogenase activity was observed with alfalfa; 26 strains could only nodulate and fix nitrogen in soybean; 13 strains could nodulate alfalfa but did not fix nitrogen; and 4 strains failed to nodulate either soybean or alfalfa. Among them, 4 transconjugants that tolerated salt and fixed nitrogen efficiently in soybean were obtained.

  2. Entropy Beacon: A Hairpin-Free DNA Amplification Strategy for Efficient Detection of Nucleic Acids

    PubMed Central

    2015-01-01

    Here, we propose an efficient strategy for enzyme- and hairpin-free nucleic acid detection called an entropy beacon (abbreviated as Ebeacon). Different from previously reported DNA hybridization/displacement-based strategies, Ebeacon is driven forward by increases in the entropy of the system, instead of free energy released from new base-pair formation. Ebeacon shows high sensitivity, with a detection limit of 5 pM target DNA in buffer and 50 pM in cellular homogenate. Ebeacon also benefits from the hairpin-free amplification strategy and zero-background, excellent thermostability from 20 °C to 50 °C, as well as good resistance to complex environments. In particular, based on the huge difference between the breathing rate of a single base pair and two adjacent base pairs, Ebeacon also shows high selectivity toward base mutations, such as substitution, insertion, and deletion and, therefore, is an efficient nucleic acid detection method, comparable to most reported enzyme-free strategies. PMID:26505212

  3. Bell violation using entangled photons without the fair-sampling assumption.

    PubMed

    Giustina, Marissa; Mech, Alexandra; Ramelow, Sven; Wittmann, Bernhard; Kofler, Johannes; Beyer, Jörn; Lita, Adriana; Calkins, Brice; Gerrits, Thomas; Nam, Sae Woo; Ursin, Rupert; Zeilinger, Anton

    2013-05-09

    The violation of a Bell inequality is an experimental observation that forces the abandonment of a local realistic viewpoint--namely, one in which physical properties are (probabilistically) defined before and independently of measurement, and in which no physical influence can propagate faster than the speed of light. All such experimental violations require additional assumptions depending on their specific construction, making them vulnerable to so-called loopholes. Here we use entangled photons to violate a Bell inequality while closing the fair-sampling loophole, that is, without assuming that the sample of measured photons accurately represents the entire ensemble. To do this, we use the Eberhard form of Bell's inequality, which is not vulnerable to the fair-sampling assumption and which allows a lower collection efficiency than other forms. Technical improvements of the photon source and high-efficiency transition-edge sensors were crucial for achieving a sufficiently high collection efficiency. Our experiment makes the photon the first physical system for which each of the main loopholes has been closed, albeit in different experiments.

  4. CRISPR-Cas9, a tool to efficiently increase the development of recombinant African swine fever viruses.

    PubMed

    Borca, Manuel V; Holinka, Lauren G; Berggren, Keith A; Gladue, Douglas P

    2018-02-16

    African swine fever virus (ASFV) causes a highly contagious disease called African swine fever. This disease is often lethal for domestic pigs, causing extensive losses for the swine industry. ASFV is a large and complex double-stranded DNA virus. Currently there is no commercially available treatment or vaccine to prevent this devastating disease. Development of recombinant ASFV for producing live-attenuated vaccines or studying the involvement of specific genes in virus virulence has relied on the relatively rare event of homologous recombination in primary swine macrophages, making it difficult to purify the recombinant virus from the wild-type parental ASFV. Here we present the use of the CRISPR-Cas9 gene editing system as a more robust and efficient way to produce recombinant ASFVs. Using CRISPR-Cas9, a recombinant virus was efficiently developed by deleting the non-essential gene 8-DR from the genome of the highly virulent field strain Georgia07 using swine macrophages as the cell substrate.

  5. Single-Layer Halide Perovskite Light-Emitting Diodes with Sub-Band Gap Turn-On Voltage and High Brightness.

    PubMed

    Li, Junqiang; Shan, Xin; Bade, Sri Ganesh R; Geske, Thomas; Jiang, Qinglong; Yang, Xin; Yu, Zhibin

    2016-10-03

    Charge-carrier injection into an emissive semiconductor thin film can result in electroluminescence and is generally achieved by using a multilayer device structure, which requires an electron-injection layer (EIL) between the cathode and the emissive layer and a hole-injection layer (HIL) between the anode and the emissive layer. The recent advancement of halide perovskite semiconductors opens up a new path to electroluminescent devices with a greatly simplified device structure. We report cesium lead tribromide light-emitting diodes (LEDs) without the aid of an EIL or HIL. These so-called single-layer LEDs have exhibited a sub-band gap turn-on voltage. The devices obtained a brightness of 591,197 cd m-2 at 4.8 V, with an external quantum efficiency of 5.7% and a power efficiency of 14.1 lm W-1. Such an advancement demonstrates that very high efficiency of electron and hole injection can be obtained in perovskite LEDs even without using an EIL or HIL.

  6. A series connection architecture for large-area organic photovoltaic modules with a 7.5% module efficiency

    PubMed Central

    Hong, Soonil; Kang, Hongkyu; Kim, Geunjin; Lee, Seongyu; Kim, Seok; Lee, Jong-Hoon; Lee, Jinho; Yi, Minjin; Kim, Junghwan; Back, Hyungcheol; Kim, Jae-Ryoung; Lee, Kwanghee

    2016-01-01

    The fabrication of organic photovoltaic modules via printing techniques has been the greatest challenge for their commercial manufacture. Current module architecture, which is based on a monolithic geometry consisting of serially interconnecting stripe-patterned subcells with finite widths, requires highly sophisticated patterning processes that significantly increase the complexity of printing production lines and cause serious reductions in module efficiency due to so-called aperture loss in series connection regions. Herein we demonstrate an innovative module structure that can simultaneously reduce both patterning processes and aperture loss. By using a charge recombination feature that occurs at contacts between electron- and hole-transport layers, we devise a series connection method that facilitates module fabrication without patterning the charge transport layers. With the successive deposition of component layers using slot-die and doctor-blade printing techniques, we achieve a high module efficiency reaching 7.5% with area of 4.15 cm2. PMID:26728507

  7. High-Price And Low-Price Physician Practices Do Not Differ Significantly On Care Quality Or Efficiency

    PubMed Central

    Roberts, Eric T.; Mehrotra, Ateev; McWilliams, J. Michael

    2017-01-01

    Provider consolidation has intensified concerns that providers with market power may be able to charge higher prices without having to deliver better care. Providers have argued that higher prices cover the costs of delivering higher-quality care. We examined the relationship between physician practice prices for outpatient services and the quality and efficiency of care provided to their patients. Using commercial claims, we classified practices as high-priced or low-priced. We compared care quality, utilization, and spending between high-priced and low-priced practices in the same areas using data from the Consumer Assessment of Health Care Providers and Systems survey and linked claims for Medicare beneficiaries. Compared with low-priced practices, high-priced practices were much larger and received 36% higher prices. Patients of high-priced practices reported significantly higher scores on some measures of care coordination and management, but did not differ meaningfully in their overall care ratings, other domains of patient experiences (including physician ratings and access to care), receipt of mammography, vaccinations, or diabetes services, acute care use, or total Medicare spending. These findings suggest an overall weak relationship between practices’ prices and the quality and efficiency of care they provide, calling into question claims that high-priced providers deliver substantially higher-value care. PMID:28461352

  8. Application of porous medium for efficiency improvement of a concentrated solar air heating system

    NASA Astrophysics Data System (ADS)

    Prasartkaew, Boonrit

    2018-01-01

    The objective of this study is to evaluate the thermal efficiency of a concentrated solar collector for a high-temperature air heating system. The proposed system consists of a 25-m2 focused multi-flat-mirror solar heliostat equipped with a porous-medium solar collector/receiver installed on top of a 3-m tower, called a ‘tower receiver’. To determine how the system efficiency could be improved by using a porous medium, the system was tested with and without the porous medium and a comparative study was performed. The experimental results reveal that the application of a porous medium is promising: the efficiency can be increased about twofold compared to the conventional system. In addition, the porous medium used in this study was a waste material with very low cost. In summary, the substantial efficiency improvement and very low investment cost of the proposed system make it a viable measure for addressing energy issues.

  9. Quantifying the flow efficiency in constant-current capacitive deionization.

    PubMed

    Hawks, Steven A; Knipe, Jennifer M; Campbell, Patrick G; Loeb, Colin K; Hubert, McKenzie A; Santiago, Juan G; Stadermann, Michael

    2018-02-01

    Here we detail a previously unappreciated loss mechanism inherent to capacitive deionization (CDI) cycling operation that plays a substantial role in determining performance. This mechanism reflects the fact that desalinated water inside a cell is partially lost to re-salination if desorption is carried out immediately after adsorption. We describe such effects by a parameter called the flow efficiency, and show that this efficiency is distinct from, yet multiplicative with, other highly studied adsorption efficiencies. Flow losses can be minimized by flowing more feed solution through the cell during desalination; however, this also results in less effluent concentration reduction. While the rationale outlined here is applicable to all CDI cell architectures that rely on cycling, we validate our model with a flow-through electrode CDI device operated in constant-current mode. We find excellent agreement between flow efficiency model predictions and experimental results, thus giving researchers simple equations by which they can estimate this distinct loss process for their operation. Copyright © 2017 Elsevier Ltd. All rights reserved.
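    The multiplicative relationship described in the abstract can be sketched in a few lines; the numerical values below are hypothetical and for illustration only, not from the paper:

```python
def overall_efficiency(flow_eff: float, adsorption_eff: float) -> float:
    """The flow efficiency is described as distinct from, yet
    multiplicative with, other adsorption efficiencies; the overall
    efficiency is then simply their product."""
    return flow_eff * adsorption_eff

# Hypothetical values: an 80% flow efficiency combined with a 90%
# adsorption efficiency yields a 72% overall efficiency.
print(round(overall_efficiency(0.80, 0.90), 2))
```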

  10. Efficient Identification of Low-Income Asian American Women at High Risk for Hepatitis B

    PubMed Central

    Joseph, Galen; Nguyen, Kim; Nguyen, Tung; Stewart, Susan; Davis, Sharon; Kevany, Sebastian; Marquez, Titas; Pasick, Rena

    2015-01-01

    Hepatitis B disproportionately affects Asian Americans. Because outreach to promote testing and vaccination can be intensive and costly, we assessed the feasibility of an efficient strategy to identify Asian Americans at risk. Prior research with California’s statewide toll-free phone service where low-income women call for free cancer screening found 50% of English- and Spanish-speaking callers were willing to participate in a study on health topics other than cancer screening. The current study ascertained whether Asian Americans could be recruited. Among 200 eligible callers, 50% agreed to take part (95% confidence interval 43%–57%), a rate comparable to our previous study. Subsequent qualitative interviews revealed that receptivity to recruitment was due to trust in the phone service and women’s need for health services and information. This was a relatively low-intensity intervention in that, on average, only five minutes additional call time was required to identify women at risk and provide a brief educational message. Underserved women from diverse backgrounds may be reached in large numbers through existing communication channels. PMID:24185165

  11. A Note on Improving Process Efficiency in Panel Surveys with Paradata

    ERIC Educational Resources Information Center

    Kreuter, Frauke; Müller, Gerrit

    2015-01-01

    Call scheduling is a challenge for surveys around the world. Unlike cross-sectional surveys, panel surveys can use information from prior waves to enhance call-scheduling algorithms. Past observational studies showed the benefit of calling panel cases at times that had been successful in the past. This article is the first to experimentally assign…

  12. Cycle analysis of MCFC/gas turbine system

    NASA Astrophysics Data System (ADS)

    Musa, Abdullatif; Alaktiwi, Abdulsalam; Talbi, Mosbah

    2017-11-01

    High-temperature fuel cells such as the solid oxide fuel cell (SOFC) and the molten carbonate fuel cell (MCFC) are considered extremely suitable for electrical power plant applications. The performance of the molten carbonate fuel cell (MCFC) is evaluated using a validated model for the internally reformed (IR) fuel cell. This model is integrated in Aspen Plus™, and several MCFC/gas turbine systems are introduced and investigated. One of these, a new cycle, is called the heat recovery (HR) cycle. In the HR cycle, a regenerator is used to preheat water with the compressor outlet air, so the waste heat of the compressor outlet air and of the turbine exhaust gases is recovered and used to produce steam. This steam is injected into the gas turbine, resulting in a high specific power and a high thermal efficiency. The cycles are simulated in order to evaluate and compare their performances. Moreover, the effects of important parameters such as the ambient air temperature on cycle performance are evaluated. The simulation results show that the HR cycle has high efficiency.

  13. Outsourcing your medical practice call center: how to choose a vendor to ensure regulatory compliance.

    PubMed

    Johnson, Bill

    2014-01-01

    Medical practices receive hundreds if not thousands of calls every week from patients, payers, pharmacies, and others. Outsourcing call centers can be a smart move to improve efficiency, lower costs, improve customer care, ensure proper payer management, and ensure regulatory compliance. This article discusses how to know when it's time to move to an outsourced call center, the benefits of making the move, how to choose the right call center, and how to make the transition. It also provides tips on how to manage the call center to ensure the objectives are being met.

  14. Neural network for control of rearrangeable Clos networks.

    PubMed

    Park, Y K; Cherkassky, V

    1994-09-01

    Rapid evolution in the field of communication networks requires high speed switching technologies. This involves a high degree of parallelism in switching control and routing performed at the hardware level. The multistage crossbar networks have always been attractive to switch designers. In this paper a neural network approach to controlling a three-stage Clos network in real time is proposed. This controller provides optimal routing of communication traffic requests on a call-by-call basis by rearranging existing connections, with a minimum length of rearrangement sequence so that a new blocked call request can be accommodated. The proposed neural network controller uses Paull's rearrangement algorithm, along with the special (least used) switch selection rule in order to minimize the length of rearrangement sequences. The functional behavior of our model is verified by simulations and it is shown that the convergence time required for finding an optimal solution is constant, regardless of the switching network size. The performance is evaluated for random traffic with various traffic loads. Simulation results show that applying the least used switch selection rule increases the efficiency in switch rearrangements, reducing the network convergence time. The implementation aspects are also discussed to show the feasibility of the proposed approach.

  15. An Update on the CCSDS Optical Communications Working Group

    NASA Technical Reports Server (NTRS)

    Edwards, Bernard L.; Schulz, Klaus-Juergen; Hamkins, Jonathan; Robinson, Bryan; Alliss, Randall; Daddato, Robert; Schmidt, Christopher; Giggebach, Dirk; Braatz, Lena

    2017-01-01

    International space agencies around the world are currently developing optical communication systems for Near Earth and Deep Space applications for both robotic and human rated spacecraft. These applications include both links between spacecraft and links between spacecraft and ground. The Interagency Operation Advisory Group (IOAG) has stated that there is a strong business case for international cross support of spacecraft optical links. It further concluded that in order to enable cross support the links must be standardized. This paper will overview the history and structure of the space communications international standards body, the Consultative Committee for Space Data Systems (CCSDS), that will develop the standards and provide an update on the proceedings of the Optical Communications Working Group within CCSDS. This paper will also describe the set of optical communications standards being developed and outline some of the issues that must be addressed in the next few years. The paper will address in particular the ongoing work on application scenarios for deep space to ground called High Photon Efficiency, for LEO to ground called Low Complexity, for inter-satellite and near Earth to ground called High Data Rate, as well as associated atmospheric measurement techniques and link operations concepts.

  16. On modelling three-dimensional piezoelectric smart structures with boundary spectral element method

    NASA Astrophysics Data System (ADS)

    Zou, Fangxin; Aliabadi, M. H.

    2017-05-01

    The computational efficiency of the boundary element method in elastodynamic analysis can be significantly improved by employing high-order spectral elements for boundary discretisation. In this work, for the first time, the so-called boundary spectral element method is utilised to model the piezoelectric smart structures that are widely used in structural health monitoring (SHM) applications. The resultant boundary spectral element formulation has been validated by the finite element method (FEM) and physical experiments. The new formulation has demonstrated a lower demand on computational resources and a higher numerical stability than commercial FEM packages. Compared to the conventional boundary element formulation, a significant reduction in computational expenses has been achieved. In summary, the boundary spectral element formulation presented in this paper provides a highly efficient and stable mathematical tool for the development of SHM applications.

  17. Molecular thermal transistor: Dimension analysis and mechanism

    NASA Astrophysics Data System (ADS)

    Behnia, S.; Panahinia, R.

    2018-04-01

    Recently, considerable effort has been devoted to realizing highly efficient thermal transistors. The outstanding properties of DNA make it an excellent nanomaterial for future technologies. In this paper, we introduce a highly efficient DNA-based thermal transistor. The thermal transistor operates when the system shows an increase in thermal flux despite a decreasing temperature gradient, a phenomenon called negative differential thermal resistance (NDTR). Based on multifractal analysis, we could distinguish regions in the NDTR state from those in the non-NDTR state. Moreover, based on the dimension spectrum of the system, we detected that the NDTR state is accompanied by a ballistic transport regime. The generalized correlation sum (analogous to the specific heat) shows that an irregular decrease in the specific heat induces an increase in the mean free path (mfp) of phonons, which leads to the occurrence of NDTR.

  18. Process membership in asynchronous environments

    NASA Technical Reports Server (NTRS)

    Ricciardi, Aleta M.; Birman, Kenneth P.

    1993-01-01

    The development of reliable distributed software is simplified by the ability to assume a fail-stop failure model. The emulation of such a model in an asynchronous distributed environment is discussed. The solution proposed, called Strong-GMP, can be supported through a highly efficient protocol, and was implemented as part of a distributed systems software project at Cornell University. The precise definition of the problem, the protocol, correctness proofs, and an analysis of costs are addressed.

  19. Cryogenic Eyesafer Laser Optimization for Use Without Liquid Nitrogen

    DTIC Science & Technology

    2014-02-01

    liquid cryogens. This calls for optimal performance around 125–150 K—high enough for reasonably efficient operation of a Stirling cooler. We...state laser system with an optimum operating temperature somewhat higher—ideally 125–150 K—can be identified, then a Stirling cooler can be used to...needed to optimize laser performance in the desired temperature range. This did not include actual use of Stirling coolers, but rather involved both

  20. Inverse problem of radiofrequency sounding of ionosphere

    NASA Astrophysics Data System (ADS)

    Velichko, E. N.; Grishentsev, A. Yu.; Korobeynikov, A. G.

    2016-01-01

    An algorithm for the solution of the inverse problem of vertical ionosphere sounding and a mathematical model of noise filtering are presented. An automated system for processing and analysis of spectrograms of vertical ionosphere sounding based on our algorithm is described. It is shown that the algorithm we suggest has a rather high efficiency. This is supported by the data obtained at the ionospheric stations of the so-called “AIS-M” type.

  1. Seasonal and Diel Vocalization Patterns of Antarctic Blue Whale (Balaenoptera musculus intermedia) in the Southern Indian Ocean: A Multi-Year and Multi-Site Study.

    PubMed

    Leroy, Emmanuelle C; Samaran, Flore; Bonnel, Julien; Royer, Jean-Yves

    2016-01-01

    Passive acoustic monitoring is an efficient way to provide insights on the ecology of large whales. This approach allows for long-term and species-specific monitoring over large areas. In this study, we examined six years (2010 to 2015) of continuous acoustic recordings at up to seven different locations in the Central and Southern Indian Basin to assess the peak periods of presence, seasonality and migration movements of Antarctic blue whales (Balaenoptera musculus intermedia). An automated method is used to detect the Antarctic blue whale stereotyped call, known as Z-call. Detection results are analyzed in terms of distribution, seasonal presence and diel pattern of emission at each site. Z-calls are detected year-round at each site, except for one located in the equatorial Indian Ocean, and display highly seasonal distribution. This seasonality is stable across years for every site, but varies between sites. Z-calls are mainly detected during autumn and spring at the subantarctic locations, suggesting that these sites are on the Antarctic blue whale migration routes, and mostly during winter at the subtropical sites. In addition to these seasonal trends, there is a significant diel pattern in Z-call emission, with more Z-calls in daytime than in nighttime. This diel pattern may be related to the blue whale feeding ecology.

  2. Seasonal and Diel Vocalization Patterns of Antarctic Blue Whale (Balaenoptera musculus intermedia) in the Southern Indian Ocean: A Multi-Year and Multi-Site Study

    PubMed Central

    Leroy, Emmanuelle C.; Samaran, Flore; Bonnel, Julien; Royer, Jean-Yves

    2016-01-01

    Passive acoustic monitoring is an efficient way to provide insights on the ecology of large whales. This approach allows for long-term and species-specific monitoring over large areas. In this study, we examined six years (2010 to 2015) of continuous acoustic recordings at up to seven different locations in the Central and Southern Indian Basin to assess the peak periods of presence, seasonality and migration movements of Antarctic blue whales (Balaenoptera musculus intermedia). An automated method is used to detect the Antarctic blue whale stereotyped call, known as Z-call. Detection results are analyzed in terms of distribution, seasonal presence and diel pattern of emission at each site. Z-calls are detected year-round at each site, except for one located in the equatorial Indian Ocean, and display highly seasonal distribution. This seasonality is stable across years for every site, but varies between sites. Z-calls are mainly detected during autumn and spring at the subantarctic locations, suggesting that these sites are on the Antarctic blue whale migration routes, and mostly during winter at the subtropical sites. In addition to these seasonal trends, there is a significant diel pattern in Z-call emission, with more Z-calls in daytime than in nighttime. This diel pattern may be related to the blue whale feeding ecology. PMID:27828976

  3. Invert biopanning: A novel method for efficient and rapid isolation of scFvs by phage display technology.

    PubMed

    Rahbarnia, Leila; Farajnia, Safar; Babaei, Hossein; Majidi, Jafar; Veisi, Kamal; Tanomand, Asghar; Akbari, Bahman

    2016-11-01

    Phage display is a prominent screening technique for the development of novel high-affinity antibodies against almost any antigen. However, removing false positive clones during screening remains a challenge. The aim of this study was to develop an efficient and rapid method for isolating high-affinity scFvs by removing non-specific binders (NSBs) without losing rare specific clones. To this end, a novel two-round strategy called invert biopanning was developed for isolating high-affinity scFvs against the EGFRvIII antigen from a human scFv library. The efficiency of the invert biopanning method (procedure III) was analyzed by comparison with the results of conventional biopanning methods (procedures I and II). According to the results of polyclonal ELISA, the second round of procedure III displayed the highest binding affinity against the EGFRvIII peptide, accompanied by the lowest NSB, compared to the other two procedures. Several positive clones that displayed high affinity to the EGFRvIII antigen were identified among the output phages of procedure III by monoclonal phage ELISA. In conclusion, the results of our study indicate that invert biopanning is an efficient method for avoiding NSBs and conserving rare specific clones during the screening of an scFv phage library. The novel anti-EGFRvIII scFv isolated here could be a promising candidate for potential use in the treatment of EGFRvIII-expressing cancers. Copyright © 2016 International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.

  4. Statistical properties of proportional residual energy intake as a new measure of energetic efficiency.

    PubMed

    Zamani, Pouya

    2017-08-01

    Traditional ratio measures of efficiency, including feed conversion ratio (FCR), gross milk efficiency (GME), gross energy efficiency (GEE) and net energy efficiency (NEE), may have some statistical problems, including high correlations with milk yield. Residual energy intake (REI) or residual feed intake (RFI) is another criterion, proposed to overcome the problems attributed to the traditional ratio criteria, but it does not account for production or intake levels. For example, the same REI value could be considerable for low-producing and negligible for high-producing cows. The aim of this study was to propose a new measure of efficiency to overcome the problems attributed to the previous criteria. A total of 1478 monthly records of 268 lactating Holstein cows were used for this study. In addition to FCR, GME, GEE, NEE and REI, a new criterion called proportional residual energy intake (PREI) was calculated as the ratio of REI to net energy intake and defined as the proportion of net energy intake lost as REI. The PREI had an average of -0.02 and a range of -0.36 to 0.27, meaning that the least efficient cow lost 0.27 of her net energy intake as REI, while the most efficient animal saved 0.36 of her net energy intake as less REI. Traditional ratio criteria (FCR, GME, GEE and NEE) had high correlations with milk and fat-corrected milk yields (absolute values from 0.469 to 0.816), while the REI and PREI had low correlations (0.000 to 0.069) with milk production. The results showed that the traditional ratio criteria (FCR, GME, GEE and NEE) are highly influenced by production traits, while the REI and PREI are independent of production level. Moreover, the PREI adjusts the REI magnitude for intake level. It seems that the PREI could be considered a worthwhile measure of efficiency for future studies.
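    The PREI definition given in the abstract (REI expressed as a proportion of net energy intake) reduces to a one-line calculation; the example values below are hypothetical:

```python
def prei(residual_energy_intake: float, net_energy_intake: float) -> float:
    """Proportional residual energy intake (PREI) as defined in the
    abstract: REI divided by net energy intake. Negative values mean
    the animal consumed less energy than predicted (more efficient)."""
    return residual_energy_intake / net_energy_intake

# Hypothetical example: a cow with REI = -5 MJ/day and a net energy
# intake of 100 MJ/day saves 5% of her intake (PREI = -0.05).
print(prei(-5.0, 100.0))
```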

  5. Thermal Characterization of Nanostructures and Advanced Engineered Materials

    NASA Astrophysics Data System (ADS)

    Goyal, Vivek Kumar

    Continuous downscaling of Si complementary metal-oxide semiconductor (CMOS) technology and progress in high-power electronics demand more efficient heat removal techniques to handle the increasing power density and rising temperature of hot spots. For this reason, it is important to investigate thermal properties of materials at the nanometer scale and identify materials with extremely high or extremely low thermal conductivity for applications as heat spreaders or heat insulators in the next generation of integrated circuits. The thin films used in microelectronic and photonic devices need to have high thermal conductivity in order to transfer the dissipated power to heat sinks more effectively. On the other hand, thermoelectric devices call for materials or structures with low thermal conductivity because the performance of thermoelectric devices is determined by the figure of merit Z = S²σ/K, where S is the Seebeck coefficient and K and σ are the thermal and electrical conductivity, respectively. Nanostructured superlattices can have drastically reduced thermal conductivity as compared to their bulk counterparts, making them promising candidates for high-efficiency thermoelectric materials. Other applications calling for thin films with low thermal conductivity are high-temperature coatings for engines. Thus, materials with both high thermal conductivity and low thermal conductivity are technologically important. The increasing temperature of the hot spots in state-of-the-art chips stimulates the search for innovative methods of heat removal. One promising approach is to incorporate materials which have high thermal conductivity into the chip design. Two suitable candidates for such applications are diamond and graphene. Another approach is to integrate high-efficiency thermoelectric elements for on-spot cooling.
In addition, there is strong motivation for improved thermal interface materials (TIMs) for heat transfer from the heat-generating chip to heat-sinking units. This dissertation presents results of the experimental investigation and theoretical interpretation of thermal transport in the advanced engineered materials, which include thin films for thermal management of nanoscale devices, nanostructured superlattices as promising candidates for high-efficiency thermoelectric materials, and improved TIMs with graphene and metal particles as fillers providing enhanced thermal conductivity. The advanced engineered materials studied include chemical vapor deposition (CVD) grown ultrananocrystalline diamond (UNCD) and microcrystalline diamond (MCD) films on Si substrates, directly integrated nanocrystalline diamond (NCD) films on GaN, free-standing polycrystalline graphene (PCG) films, graphene oxide (GOx) films, and "pseudo-superlattices" of the mechanically exfoliated Bi2Te3 topological insulator films, and thermal interface materials (TIMs) with graphene fillers.
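    The thermoelectric figure of merit quoted in the abstract, Z = S²σ/K, is straightforward to evaluate. In the sketch below the material constants are illustrative order-of-magnitude values for a Bi2Te3-like material, not measurements from this work:

```python
# Thermoelectric figure of merit Z = S^2 * sigma / K, as defined in the
# abstract above. The numbers are illustrative Bi2Te3-like values,
# not data from the dissertation.

def figure_of_merit(seebeck, electrical_conductivity, thermal_conductivity):
    """Z = S^2 * sigma / K, in units of 1/K."""
    return seebeck ** 2 * electrical_conductivity / thermal_conductivity

S = 200e-6     # Seebeck coefficient S, V/K (illustrative)
sigma = 1.0e5  # electrical conductivity sigma, S/m (illustrative)
K = 1.5        # thermal conductivity K, W/(m*K) (illustrative)

Z = figure_of_merit(S, sigma, K)
# The dimensionless form ZT is obtained by multiplying by absolute temperature.
print(f"Z = {Z:.2e} 1/K, ZT at 300 K = {Z * 300.0:.2f}")
```

Reducing K while keeping S and sigma fixed raises Z directly, which is why the superlattices discussed above aim at suppressed thermal conductivity.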

  6. Soybean stem growth under high-pressure sodium with supplemental blue lighting

    NASA Technical Reports Server (NTRS)

    Wheeler, R. M.; Mackowiak, C. L.; Sager, J. C.

    1991-01-01

    To study high-pressure sodium (HPS) lamps used for plant lighting because of their high energy conversion efficiencies, 'McCall' soybean plants were grown for 28 days in growth chambers utilizing HPS lamps, with/without supplemental light from blue phosphor fluorescent lamps. Total photosynthetic photon flux levels, including blue fluorescent, were maintained near 300 or 500 micromol/sq m s. Results indicate that employment of HPS or other blue-deficient sources for lighting at low to moderate photosynthetic photon flux levels may cause abnormal stem elongation, but this can be prevented by the addition of a small amount of supplemental blue light.

  7. Highly efficient generation of GGTA1 biallelic knockout inbred mini-pigs with TALENs.

    PubMed

    Xin, Jige; Yang, Huaqiang; Fan, Nana; Zhao, Bentian; Ouyang, Zhen; Liu, Zhaoming; Zhao, Yu; Li, Xiaoping; Song, Jun; Yang, Yi; Zou, Qingjian; Yan, Quanmei; Zeng, Yangzhi; Lai, Liangxue

    2013-01-01

    Inbred mini-pigs are ideal organ donors for future human xenotransplantations because of their clear genetic background, high homozygosity, and high inbreeding endurance. In this study, we chose fibroblast cells from a highly inbred pig line called the Banna mini-pig inbred line (BMI) as donor nuclei for nuclear transfer, combined with transcription activator-like effector nucleases (TALENs), and successfully generated α-1,3-galactosyltransferase (GGTA1) gene biallelic knockout (KO) pigs. To validate the efficiency of TALEN vectors, in vitro-transcribed TALEN mRNAs were microinjected into one-cell stage parthenogenetically activated porcine embryos. The efficiency of indel mutations at the GGTA1-targeting loci was as high as 73.1% (19/26) among the parthenogenetic blastocysts. TALENs were co-transfected into porcine fetal fibroblasts of BMI with a plasmid containing the neomycin gene. The targeting efficiency reached 89.5% (187/209) among the surviving cell clones after a 10 d selection. More remarkably, 27.8% (58/209) of colonies were biallelic KO. Five fibroblast cell lines with biallelic KO were chosen as nuclear donors for somatic cell nuclear transfer (SCNT). Three miniature piglets with biallelic mutations of the GGTA1 gene were achieved. Gal epitopes on the surface of cells from all three biallelic KO piglets were completely absent. The fibroblasts from the GGTA1-null piglets were more resistant to lysis by pooled complement-preserved normal human serum than those from wild-type pigs. These results indicate that a combination of TALEN technology with SCNT can generate biallelic KO pigs directly with high efficiency. The GGTA1-null piglets with inbred features created in this study can provide a new organ source for xenotransplantation research.

  8. Feasibility of using a pediatric call center as part of a quality improvement effort to prevent hospital readmission.

    PubMed

    Kirsch, Sallie Davis; Wilson, Lauren S; Harkins, Michelle; Albin, Dawn; Del Beccaro, Mark A

    2015-01-01

    The primary aim of this intervention was to assess the feasibility of using call center nurses who are experts in telephone triage to conduct post-discharge telephone calls, as part of a quality improvement effort to prevent hospital readmission. Families of patients with bronchiolitis were called between 24 and 48 hours after discharge. The calls conducted by the nurses were efficient (average time was 12 minutes), and their assessments helped to identify gaps in inpatient family education. Overall, the project demonstrated efficacy in readmission prevention by using nurses who staff a call center to conduct post-hospitalization telephone calls. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Experimental Results of the First Two Stages of an Advanced Transonic Core Compressor Under Isolated and Multi-Stage Conditions

    NASA Technical Reports Server (NTRS)

    Prahst, Patricia S.; Kulkarni, Sameer; Sohn, Ki H.

    2015-01-01

    NASA's Environmentally Responsible Aviation (ERA) Program calls for investigation of the technology barriers associated with improved fuel efficiency of large gas turbine engines. Under ERA, the task for a High Pressure Ratio Core Technology program calls for a higher overall pressure ratio of 60 to 70. This means that the HPC would have to almost double in pressure ratio while keeping its high level of efficiency. The challenge is how to match the corrected mass flow rate of the front two supersonic, high-reaction, high corrected tip speed stages with a total pressure ratio of 3.5. NASA and GE teamed to address this challenge by using the initial geometry of an advanced GE compressor design to meet the requirements of the first two stages of the very high pressure ratio core compressor. The rig was configured to run as a two-stage machine: the strut and IGV, rotor 1, and stator 1 were first run as an independent test, followed by the addition of the second stage. The goal is to fully understand the stage performances under isolated and multi-stage conditions, to understand any differences, and to provide a detailed aerodynamic data set for CFD validation. Full use was made of steady and unsteady measurement methods to isolate fluid dynamic loss source mechanisms due to interaction and endwalls. The paper will present the description of the compressor test article, its predicted performance and operability, and the experimental results for both the single-stage and two-stage configurations. We focus the detailed measurements on 97% and 100% of design speed at three vane setting angles.

  10. Design and Evaluation of a Net Zero Energy Low-Income Residential Housing Development in Lafayette, Colorado

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dean, J.; VanGeet, O.; Simkus, S.

    This report outlines the lessons learned and sub-metered energy performance of an ultra-low-energy single family ranch home and duplex unit, called the Paradigm Pilot Project, and presents the final design recommendations for a 153-unit net zero energy residential development called the Josephine Commons Project. Affordable housing development authorities throughout the United States continually struggle to find the most cost-effective pathway to provide quality, durable, and sustainable housing. The challenge for these authorities is to achieve the mission of delivering affordable housing at the lowest cost per square foot in environments that may be rural, urban, suburban, or within a designated redevelopment district. With the challenges the U.S. faces regarding energy, the environmental impacts of consumer use of fossil fuels and the increased focus on reducing greenhouse gas emissions, housing authorities are pursuing the goal of constructing affordable, energy efficient and sustainable housing at the lowest life-cycle cost of ownership. In addition to describing the results of the performance monitoring from the pilot project, this report describes the recommended design process of (1) setting performance goals for energy efficiency and renewable energy on a life-cycle cost basis, (2) using an integrated, whole building design approach, and (3) incorporating systems-built housing, a green jobs training program, and renewable energy technologies into a replicable high performance, low-income housing project development model.

  11. Solar cells utilizing pulsed-energy crystallized microcrystalline/polycrystalline silicon

    DOEpatents

    Kaschmitter, J.L.; Sigmon, T.W.

    1995-10-10

    A process for producing multi-terminal devices such as solar cells wherein a pulsed high energy source is used to melt and crystallize amorphous silicon deposited on a substrate which is intolerant to high processing temperatures, whereby the amorphous silicon is converted into a microcrystalline/polycrystalline phase. Dopant and hydrogenation can be added during the fabrication process which provides for fabrication of extremely planar, ultra shallow contacts which results in reduction of non-current collecting contact volume. The use of the pulsed energy beams results in the ability to fabricate high efficiency microcrystalline/polycrystalline solar cells on the so-called low-temperature, inexpensive plastic substrates which are intolerant to high processing temperatures.

  12. Solar cells utilizing pulsed-energy crystallized microcrystalline/polycrystalline silicon

    DOEpatents

    Kaschmitter, James L.; Sigmon, Thomas W.

    1995-01-01

    A process for producing multi-terminal devices such as solar cells wherein a pulsed high energy source is used to melt and crystallize amorphous silicon deposited on a substrate which is intolerant to high processing temperatures, whereby the amorphous silicon is converted into a microcrystalline/polycrystalline phase. Dopant and hydrogenation can be added during the fabrication process which provides for fabrication of extremely planar, ultra shallow contacts which results in reduction of non-current collecting contact volume. The use of the pulsed energy beams results in the ability to fabricate high efficiency microcrystalline/polycrystalline solar cells on the so-called low-temperature, inexpensive plastic substrates which are intolerant to high processing temperatures.

  13. High-Voltage High-Energy Stretched Lens Array Square-Rigger (SLASR) for Direct-Drive Solar Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Howell, Joe T.; O'Neill, Mark J.; Mankins, John C.

    2006-01-01

    Development is underway on a unique high-voltage, high energy solar concentrator array called Stretched Lens Array Square-Rigger (SLASR) for direct drive electric propulsion. The SLASR performance attributes closely match the critical needs of solar electric propulsion (SEP) systems, which may be used for space tugs to fuel efficiently transport cargo from low earth orbit (LEO) to low lunar orbit (LLO), in support of NASA's robotic and human exploration missions. Later SEP systems may similarly transport cargo from the earth-moon neighborhood to the Mars neighborhood. This paper will describe the SLASR technology, discuss SLASR developments and ground testing, and outline plans for future SLASR technology maturation.

  14. High-Voltage High-Energy Stretched Lens Array Square-Rigger (SLASR) for Direct-Drive Solar Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Howell, Joe T.; O'Neill, Mark; Mankins, John C.

    2006-01-01

    Development is underway on a unique high-voltage, high-energy solar concentrator array called Stretched Lens Array Square-Rigger (SLASR) for direct drive electric propulsion. The SLASR performance attributes closely match the critical needs of solar electric propulsion (SEP) systems, which may be used for space tugs to fuel-efficiently transport cargo from low earth orbit (LEO) to low lunar orbit (LLO), in support of NASA's robotic and human exploration missions. Later SEP systems may similarly transport cargo from the earth-moon neighborhood to the Mars neighborhood. This paper will describe the SLASR technology, discuss SLASR developments and ground testing, and outline plans for future SLASR technology maturation.

  15. Anonymizing 1:M microdata with high utility

    PubMed Central

    Gong, Qiyuan; Luo, Junzhou; Yang, Ming; Ni, Weiwei; Li, Xiao-Bai

    2016-01-01

    Preserving privacy and utility during data publishing and data mining is essential for individuals, data providers and researchers. However, studies in this area typically assume that one individual has only one record in a dataset, which is unrealistic in many applications. Having multiple records for an individual leads to new privacy leakages. We call such a dataset a 1:M dataset. In this paper, we propose a novel privacy model called (k, l)-diversity that addresses disclosure risks in 1:M data publishing. Based on this model, we develop an efficient algorithm named 1:M-Generalization to preserve privacy and data utility, and compare it with alternative approaches. Extensive experiments on real-world data show that our approach outperforms the state-of-the-art technique, in terms of data utility and computational cost. PMID:28603388
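    The 1:M situation motivating the paper can be illustrated with a toy dataset. The (k, l)-diversity model itself is more involved than this sketch, which only shows why record-level counting misleads when individuals own several records:

```python
# Toy illustration of a 1:M dataset: one individual can contribute several
# records, so privacy models that assume one record per person break down.
# The data and the counts below are illustrative only; (k, l)-diversity as
# defined in the paper is a stronger, more detailed model.
from collections import defaultdict

records = [  # (individual_id, sensitive_value)
    ("alice", "flu"), ("alice", "cough"),
    ("bob", "flu"),
    ("carol", "flu"), ("carol", "asthma"), ("carol", "cough"),
]

values_by_person = defaultdict(set)
for person, value in records:
    values_by_person[person].add(value)

# Six records, but only three individuals: linking any one of carol's three
# records back to her exposes all of her sensitive values at once.
print(len(records), len(values_by_person))  # 6 3
print(max(len(v) for v in values_by_person.values()))  # 3
```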

  16. How the Supplement-Not-Supplant Requirement Can Work against the Policy Goals of Title I: A Case for Using Title I, Part A, Education Funds More Effectively and Efficiently. Tightening Up Title I

    ERIC Educational Resources Information Center

    Junge, Melissa; Krvaric, Sheara

    2012-01-01

    Title I of the Elementary and Secondary Education Act, a federal program to provide additional assistance to academically struggling students in high-poverty areas, has long contained a provision called the "supplement-not-supplant" requirement. This provision was designed to ensure Title I funds were spent on extra educational services for…

  17. Single-feature polymorphism discovery in the barley transcriptome

    PubMed Central

    Rostoks, Nils; Borevitz, Justin O; Hedley, Peter E; Russell, Joanne; Mudie, Sharon; Morris, Jenny; Cardle, Linda; Marshall, David F; Waugh, Robbie

    2005-01-01

    A probe-level model for analysis of GeneChip gene-expression data is presented which identified more than 10,000 single-feature polymorphisms (SFP) between two barley genotypes. The method has good sensitivity, as 67% of known single-nucleotide polymorphisms (SNP) were called as SFPs. This method is applicable to all oligonucleotide microarray data, accounts for SNP effects in gene-expression data and represents an efficient and versatile approach for highly parallel marker identification in large genomes. PMID:15960806

  18. Nonlinear acoustics in cicada mating calls enhance sound propagation.

    PubMed

    Hughes, Derke R; Nuttall, Albert H; Katz, Richard A; Carter, G Clifford

    2009-02-01

    An analysis of cicada mating calls, measured in field experiments, indicates that the very high levels of acoustic energy radiated by this relatively small insect are mainly attributed to the nonlinear characteristics of the signal. The cicada emits one of the loudest sounds in all of the insect population with a sound production system occupying a physical space typically less than 3 cc. The sounds made by tymbals are amplified by the hollow abdomen, functioning as a tuned resonator, but models of the signal based solely on linear techniques do not fully account for a sound radiation capability that is so disproportionate to the insect's size. The nonlinear behavior of the cicada signal is demonstrated by combining the mutual information and surrogate data techniques; the results obtained indicate decorrelation when the phase-randomized and non-phase-randomized data separate. The Volterra expansion technique is used to fit the nonlinearity in the insect's call. The second-order Volterra estimate provides further evidence that the cicada mating calls are dominated by nonlinear characteristics and also suggests that the medium contributes to the cicada's efficient sound propagation. Application of the same principles has the potential to improve radiated sound levels for sonar applications.
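    The surrogate-data technique mentioned above can be sketched with a phase-randomized surrogate: a copy that preserves the signal's amplitude spectrum (its linear correlations) while destroying phase structure, so statistics that separate the real signal from its surrogates point to nonlinearity. The test signal below is synthetic, not cicada data:

```python
# Minimal sketch of phase-randomized surrogate data, one ingredient of the
# nonlinearity analysis described above. The surrogate keeps the amplitude
# spectrum of the input but assigns random phases.
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Return a surrogate of real signal x with the same amplitude spectrum."""
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=spectrum.shape)
    phases[0] = 0.0    # keep the DC bin real
    phases[-1] = 0.0   # keep the Nyquist bin real (even-length signals)
    surrogate_spectrum = np.abs(spectrum) * np.exp(1j * phases)
    return np.fft.irfft(surrogate_spectrum, n=len(x))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1024, endpoint=False)
x = np.sin(2.0 * np.pi * 50.0 * t) ** 3  # toy signal with harmonic structure

s = phase_randomized_surrogate(x, rng)

# Same power spectrum, different waveform.
print(np.allclose(np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(s)), atol=1e-8))  # True
print(np.allclose(x, s))  # False
```

A nonlinearity statistic (mutual information, a Volterra-model fit error, etc.) computed on x and on an ensemble of such surrogates would then be compared; a clear separation is the evidence of nonlinearity the authors describe.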

  19. Supersonic N-Crowdions in a Two-Dimensional Morse Crystal

    NASA Astrophysics Data System (ADS)

    Dmitriev, S. V.; Korznikova, E. A.; Chetverikov, A. P.

    2018-03-01

    An interstitial atom placed in a close-packed atomic row of a crystal is called a crowdion. Such defects are highly mobile; they can move along the row, transferring mass and energy. We generalize the concept of a classical supersonic crowdion to an N-crowdion, in which not one but N atoms move simultaneously with high velocity. Using molecular dynamics simulations for a close-packed two-dimensional Morse crystal, we show that N-crowdions transfer mass much more efficiently, because they are capable of covering large distances while having a lower total energy than that of a classical 1-crowdion.

  20. NCI Helps Children’s Hospital of Philadelphia to Identify and Treat New Target in Pediatric Cancer | Poster

    Cancer.gov

    There may be a new, more effective method for treating high-risk neuroblastoma, according to scientists at the Children’s Hospital of Philadelphia and collaborators in the Cancer and Inflammation Program at NCI at Frederick. Together, the groups published a study describing a previously unrecognized protein on neuroblastoma cells, called GPC2, as well as the creation of a novel antibody-drug conjugate, a combination of a human antibody and a naturally occurring anticancer drug, that locates and binds to GPC2 in a highly efficient way.

  1. CRISPR-Cas9-Edited Site Sequencing (CRES-Seq): An Efficient and High-Throughput Method for the Selection of CRISPR-Cas9-Edited Clones.

    PubMed

    Veeranagouda, Yaligara; Debono-Lagneaux, Delphine; Fournet, Hamida; Thill, Gilbert; Didier, Michel

    2018-01-16

    The emergence of clustered regularly interspaced short palindromic repeats-Cas9 (CRISPR-Cas9) gene editing systems has enabled the creation of specific mutants at low cost, in a short time and with high efficiency, in eukaryotic cells. Since a CRISPR-Cas9 system typically creates an array of mutations in targeted sites, a successful gene editing project requires careful selection of edited clones. This process can be very challenging, especially when working with multiallelic genes and/or polyploid cells (such as cancer and plant cells). Here we describe a next-generation sequencing method called CRISPR-Cas9 Edited Site Sequencing (CRES-Seq) for the efficient and high-throughput screening of CRISPR-Cas9-edited clones. CRES-Seq facilitates the precise genotyping of up to 96 CRISPR-Cas9-edited sites (CRES) in a single MiniSeq (Illumina) run with an approximate sequencing cost of $6/clone. CRES-Seq is particularly useful when multiple genes are simultaneously targeted by CRISPR-Cas9, and also for screening of clones generated from multiallelic genes/polyploid cells. © 2018 by John Wiley & Sons, Inc.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knittel, Christopher; Wolfran, Catherine; Gandhi, Raina

    A wide range of climate plans rely on energy efficiency to generate energy and carbon emissions reductions, but conventional wisdom holds that consumers have historically underinvested in energy efficiency upgrades. This underinvestment may occur for a variety of reasons, one of which is that consumers are not adequately informed about the benefits of energy efficiency. To address this, the U.S. Department of Energy created a tool called the Home Energy Score (HEScore) to act as a simple, low-cost means to provide clear information about a home’s energy efficiency and motivate homeowners and homebuyers to invest in energy efficiency. The Department of Energy is in the process of conducting four evaluations assessing the impact of the Home Energy Score on residential energy efficiency investments and program participation. This paper describes one of these evaluations: a randomized controlled trial conducted in New Jersey in partnership with New Jersey Natural Gas. The evaluation randomly provides the Home Energy Score to homeowners who received an audit between May 2014 and October 2015, either because they recently replaced their furnace, boiler, and/or gas water heater with a high-efficiency model and participated in a free audit to access an incentive, or because they requested an independent audit.

  3. Study on high-resolution representation of terraces in Shanxi Loess Plateau area

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ma, Lei

    2008-10-01

    A new elevation point sampling method, namely the TIN-based Sampling Method (TSM), and a new visualization method called the Elevation Addition Method (EAM) are put forth for representing the typical terraces of the Shanxi loess plateau area. The DEM Feature Points and Lines Classification (DEPLC), put forth by the authors in 2007, is refined for depicting the main path in the study area. The EAM is used to visualize the terraces and the path in the study area. 406 key elevation points and 15 feature constrained lines sampled by this method are used to construct CD-TINs, which can depict the terraces and path correctly and effectively. Our case study shows that the new sampling method, TSM, is reasonable and feasible. Complicated micro-terrains such as terraces and paths can be represented successfully, with high resolution and high efficiency, by use of the refined DEPLC, TSM and CD-TINs, and both the terraces and the main path are visualized well by use of EAM, even when the terrace height is no more than 1 m.

  4. A High Performance COTS Based Computer Architecture

    NASA Astrophysics Data System (ADS)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the COTS components' behavior. In the frame of the ESA funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  5. ICI Showcase House Prototype

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-02-16

    Building Science Corporation collaborated with ICI Homes in Daytona Beach, FL on a 2008 prototype Showcase House that demonstrates the energy efficiency and durability upgrades that ICI currently promotes through its in-house efficiency program called EFactor.

  6. High-Price And Low-Price Physician Practices Do Not Differ Significantly On Care Quality Or Efficiency.

    PubMed

    Roberts, Eric T; Mehrotra, Ateev; McWilliams, J Michael

    2017-05-01

    Consolidation of physician practices has intensified concerns that providers with greater market power may be able to charge higher prices without having to deliver better care, compared to providers with less market power. Providers have argued that higher prices cover the costs of delivering higher-quality care. We examined the relationship between physician practice prices for outpatient services and practices' quality and efficiency of care. Using commercial claims data, we classified practices as being high- or low-price. We used national data from the Consumer Assessment of Healthcare Providers and Systems survey and linked claims for Medicare beneficiaries to compare high- and low-price practices in the same geographic area in terms of care quality, utilization, and spending. Compared with low-price practices, high-price practices were much larger and received 36 percent higher prices. Patients of high-price practices reported significantly higher scores on some measures of care coordination and management but did not differ meaningfully in their overall care ratings, other domains of patient experiences (including physician ratings and access to care), receipt of preventive services, acute care use, or total Medicare spending. This suggests an overall weak relationship between practice prices and the quality and efficiency of care and calls into question claims that high-price providers deliver substantially higher-value care. Project HOPE—The People-to-People Health Foundation, Inc.

  7. Exploring Infiniband Hardware Virtualization in OpenNebula towards Efficient High-Performance Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pais Pitta de Lacerda Ruivo, Tiago; Bernabeu Altayo, Gerard; Garzoglio, Gabriele

    2014-11-11

    It has been widely accepted that software virtualization has a big negative impact on high-performance computing (HPC) application performance. This work explores the potential use of Infiniband hardware virtualization in an OpenNebula cloud towards the efficient support of MPI-based workloads. We have implemented, deployed, and tested an Infiniband network on the FermiCloud private Infrastructure-as-a-Service (IaaS) cloud. To avoid software virtualization and minimize the virtualization overhead, we employed a technique called Single Root Input/Output Virtualization (SR-IOV). Our solution spanned modifications to the Linux hypervisor as well as the OpenNebula manager. We evaluated the performance of the hardware virtualization on up to 56 virtual machines connected by up to 8 DDR Infiniband network links, with micro-benchmarks (latency and bandwidth) as well as with an MPI-intensive application (the HPL Linpack benchmark).

  8. High particle export over the continental shelf of the west Antarctic Peninsula

    NASA Astrophysics Data System (ADS)

    Buesseler, Ken O.; McDonnell, Andrew M. P.; Schofield, Oscar M. E.; Steinberg, Deborah K.; Ducklow, Hugh W.

    2010-11-01

    Drifting cylindrical traps and the flux proxy 234Th indicate more than an order of magnitude higher sinking fluxes of particulate carbon and 234Th in January 2009 than measured by a time-series conical trap used regularly on the shelf of the west Antarctic Peninsula (WAP). The higher fluxes measured in this study have several implications for our understanding of the WAP ecosystem. Larger sinking fluxes result in a revised export efficiency of at least 10% (C flux/net primary production) and a requisite lower regeneration efficiency in surface waters. High fluxes also result in a large supply of sinking organic matter to support subsurface and benthic food webs on the continental shelf. These new findings call into question the magnitude of seasonal and interannual variability in particle flux and reaffirm the difficulty of using moored conical traps as a quantitative flux collector in shallow waters.
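    The revised export efficiency above is simply the ratio of sinking particulate-carbon flux to net primary production. A minimal sketch with illustrative numbers (chosen to reproduce the 10% floor reported in the abstract, not the study's measured values):

```python
# Export efficiency as used above: sinking particulate-carbon flux divided
# by net primary production (NPP). Both inputs must share the same units.

def export_efficiency(carbon_flux, net_primary_production):
    """C flux / NPP, a dimensionless fraction."""
    return carbon_flux / net_primary_production

# Illustrative: a sinking flux of 100 mg C m^-2 d^-1 against an NPP of
# 1000 mg C m^-2 d^-1 gives an export efficiency of 10%.
print(export_efficiency(100.0, 1000.0))  # 0.1
```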

  9. The drive for Aircraft Energy Efficiency

    NASA Technical Reports Server (NTRS)

    James, R. L., Jr.; Maddalon, D. V.

    1984-01-01

    NASA's Aircraft Energy Efficiency (ACEE) program, which began in 1976, has mounted a development effort in four major transport aircraft technology fields: laminar flow systems, advanced aerodynamics, flight controls, and composite structures. ACEE has explored two basic methods for achieving drag-reducing boundary layer laminarization: the use of suction through the wing structure (via slots or perforations) to remove boundary layer turbulence, and the encouragement of natural laminar flow maintenance through refined design practices. Wind tunnel tests have been conducted for wide bodied aircraft equipped with high aspect ratio supercritical wings and winglets. Maneuver load control and pitch-active stability augmentation control systems reduce fuel consumption by reducing the drag associated with high aircraft stability margins. Composite structures yield lighter airframes that in turn call for smaller wing and empennage areas, reducing induced drag for a given payload. In combination, all four areas of development are expected to yield a fuel consumption reduction of 40 percent.

  10. Tree crickets optimize the acoustics of baffles to exaggerate their mate-attraction signal.

    PubMed

    Mhatre, Natasha; Malkin, Robert; Deb, Rittik; Balakrishnan, Rohini; Robert, Daniel

    2017-12-11

    Object manufacture in insects is typically inherited, and believed to be highly stereotyped. Optimization, the ability to select the functionally best material and modify it appropriately for a specific function, implies flexibility and is usually thought to be incompatible with inherited behaviour. Here, we show that tree crickets optimize acoustic baffles, objects that are used to increase the effective loudness of mate-attraction calls. We quantified the acoustic efficiency of all baffles within the naturally feasible design space using finite-element modelling and found that design affects efficiency significantly. We tested the baffle-making behaviour of tree crickets in a series of experimental contexts. We found that, given the opportunity, tree crickets optimized baffle acoustics; they selected the best-sized object and modified it appropriately to make a near-optimal baffle. Surprisingly, optimization could be achieved in a single attempt, and is likely to be achieved through an inherited yet highly accurate behavioural heuristic.

  11. SNSPD with parallel nanowires (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Ejrnaes, Mikkel; Parlato, Loredana; Gaggero, Alessandro; Mattioli, Francesco; Leoni, Roberto; Pepe, Giampiero; Cristiano, Roberto

    2017-05-01

    Superconducting nanowire single-photon detectors (SNSPDs) have been shown to be promising in applications such as quantum communication and computation, quantum optics, imaging, metrology, and sensing. They offer the advantages of a low dark count rate, high efficiency, a broadband response, short timing jitter, a high repetition rate, and no need for gated-mode operation. Several SNSPD designs have been proposed in the literature. Here, we discuss the so-called parallel-nanowire configurations. They were introduced with the aim of improving a particular SNSPD property, such as detection efficiency, speed, signal-to-noise ratio, or photon-number resolution. Although apparently similar, the various parallel designs are not equivalent: no single design improves all of these properties at once. Each design has its own characteristics, with specific advantages and drawbacks. In this work, we discuss the various designs, outlining their peculiarities and possible improvements.

  12. A desalination battery.

    PubMed

    Pasta, Mauro; Wessells, Colin D; Cui, Yi; La Mantia, Fabio

    2012-02-08

    Water desalination is an important approach for providing fresh water around the world, although its high energy consumption, and thus high cost, call for new, efficient technology. Here, we demonstrate the novel concept of a "desalination battery", which operates by performing cycles in reverse on our previously reported mixing entropy battery. Rather than generating electricity from salinity differences, as in mixing entropy batteries, desalination batteries use an electrical energy input to extract sodium and chloride ions from seawater and to generate fresh water. The desalination battery comprises a Na(2-x)Mn(5)O(10) nanorod positive electrode and an Ag/AgCl negative electrode. Here, we demonstrate an energy consumption of 0.29 Wh l(-1) for the removal of 25% of the salt using this novel desalination battery, which is promising when compared to reverse osmosis (~ 0.2 Wh l(-1)), the most efficient technique presently available. © 2012 American Chemical Society

  13. Distributed collaborative probabilistic design for turbine blade-tip radial running clearance using support vector machine of regression

    NASA Astrophysics Data System (ADS)

    Fei, Cheng-Wei; Bai, Guang-Chen

    2014-12-01

    To improve the computational precision and efficiency of probabilistic design for mechanical dynamic assemblies such as the blade-tip radial running clearance (BTRRC) of a gas turbine, a distributed collaborative probabilistic design method based on support vector machine regression (called DCSRM) is proposed by integrating the distributed collaborative response surface method with a support vector machine regression model. The mathematical model of DCSRM is established and the probabilistic design idea behind DCSRM is introduced. The dynamic assembly probabilistic design of an aeroengine high-pressure turbine (HPT) BTRRC is accomplished to verify the proposed DCSRM. The analysis results show that an optimal static blade-tip clearance of the HPT is obtained for designing the BTRRC and for improving the performance and reliability of the aeroengine. A comparison of methods shows that DCSRM has high computational accuracy and high computational efficiency in BTRRC probabilistic analysis. The present research offers an effective way for the reliability design of mechanical dynamic assemblies and enriches mechanical reliability theory and methods.

  14. Free-piston Stirling technology for space power

    NASA Technical Reports Server (NTRS)

    Slaby, Jack G.

    1989-01-01

    An overview is presented of the NASA Lewis Research Center free-piston Stirling engine activities directed toward space power. This work is being carried out under NASA's new Civil Space Technology Initiative (CSTI). The overall goal of CSTI's High Capacity Power element is to develop the technology base needed to meet the long duration, high capacity power requirements for future NASA space missions. The Stirling cycle offers an attractive power conversion concept for space power needs. Discussed here is the completion of Space Power Demonstrator Engine (SPDE) testing, culminating in the generation of 25 kW of engine power from a dynamically balanced opposed-piston Stirling engine at a temperature ratio of 2.0. Engine efficiency was approximately 22 percent. The SPDE has recently been divided into two separate single-cylinder engines, called the Space Power Research Engine (SPRE), that now serve as test beds for the evaluation of key technology disciplines. These disciplines include hydrodynamic gas bearings, high-efficiency linear alternators, space-qualified heat pipe heat exchangers, oscillating flow code validation, and engine loss understanding.
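The quoted figures can be put in context against the Carnot limit at the SPDE's operating point; a minimal sketch using only the numbers reported in the abstract (the function name is ours):

```python
def carnot_efficiency(temperature_ratio):
    """Carnot limit for a heat engine with hot/cold temperature ratio Th/Tc."""
    return 1.0 - 1.0 / temperature_ratio

# SPDE figures from the abstract: temperature ratio 2.0, ~22% engine efficiency.
carnot = carnot_efficiency(2.0)       # 0.5, the thermodynamic ceiling
fraction_of_carnot = 0.22 / carnot    # the SPDE reached ~44% of the Carnot limit
```

At a temperature ratio of 2.0 the Carnot ceiling is 50%, so the reported 22% corresponds to roughly 44% of the theoretical maximum.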

  15. Extending the BEAGLE library to a multi-FPGA platform.

    PubMed

    Jin, Zheming; Bakos, Jason D

    2013-01-19

    Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein's pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein's pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform's peak memory bandwidth and the implementation's memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE's CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE's GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. 
To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints matched to the target platform. We found that using low-latency floating-point operators can significantly reduce FPGA area while still meeting timing requirements on the target platform, and that this design methodology can achieve performance exceeding that of a GPU-based coprocessor.
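The abstract's throughput estimate follows directly from its stated roofline-style relation; a minimal sketch reproducing the arithmetic (the function name is ours; all figures are from the abstract):

```python
def sustained_gflops(arithmetic_intensity, peak_bandwidth_gbs, memory_efficiency):
    """Bandwidth-bound throughput estimate:
    Gflops = (flops per byte of I/O) x (achieved memory bandwidth in GB/s)."""
    return arithmetic_intensity * peak_bandwidth_gbs * memory_efficiency

# Figures from the abstract: 2.03 ops/byte, 76.8 GB/s peak, ~50% efficiency.
throughput = sustained_gflops(2.03, 76.8, 0.5)  # ~78 Gflops, as reported
```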

  16. Comprehensive performance comparison of high-resolution array platforms for genome-wide Copy Number Variation (CNV) analysis in humans.

    PubMed

    Haraksingh, Rajini R; Abyzov, Alexej; Urban, Alexander Eckehart

    2017-04-24

    High-resolution microarray technology is routinely used in basic research and clinical practice to efficiently detect copy number variants (CNVs) across the entire human genome. A new generation of arrays combining high probe densities with optimized designs will comprise essential tools for genome analysis in the coming years. We systematically compared the genome-wide CNV detection power of all 17 available array designs from the Affymetrix, Agilent, and Illumina platforms by hybridizing the well-characterized genome of 1000 Genomes Project subject NA12878 to all arrays, and performing data analysis using both manufacturer-recommended and platform-independent software. We benchmarked the resulting CNV call sets from each array using a gold standard set of CNVs for this genome derived from 1000 Genomes Project whole genome sequencing data. The arrays tested comprise both SNP and aCGH platforms with varying designs and contain from ~0.5 to ~4.6 million probes. Across the arrays CNV detection varied widely in number of CNV calls (4-489), CNV size range (~40 bp to ~8 Mbp), and percentage of non-validated CNVs (0-86%). We discovered strikingly strong effects of specific array design principles on performance. For example, some SNP array designs with the largest numbers of probes and extensive exonic coverage produced a considerable number of CNV calls that could not be validated, compared to designs with probe numbers that are sometimes an order of magnitude smaller. This effect was only partially ameliorated using different analysis software and optimizing data analysis parameters. High-resolution microarrays will continue to be used as reliable, cost- and time-efficient tools for CNV analysis. However, different applications tolerate different limitations in CNV detection. Our study quantified how these arrays differ in total number and size range of detected CNVs as well as sensitivity, and determined how each array balances these attributes.
This analysis will inform appropriate array selection for future CNV studies, and allow better assessment of the CNV-analytical power of both published and ongoing array-based genomics studies. Furthermore, our findings emphasize the importance of concurrent use of multiple analysis algorithms and independent experimental validation in array-based CNV detection studies.

  17. Economic Risk Analysis of Agricultural Tillage Systems Using the SMART Stochastic Efficiency Software Package

    USDA-ARS?s Scientific Manuscript database

    Recently, a variant of stochastic dominance called stochastic efficiency with respect to a function (SERF) has been developed and applied. Unlike traditional stochastic dominance approaches, SERF uses the concept of certainty equivalents (CEs) to rank a set of risk-efficient alternatives instead of...
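The certainty-equivalent ranking that SERF performs can be sketched under a negative-exponential (CARA) utility assumption; the tillage returns and risk-aversion coefficients below are illustrative, not from the database record:

```python
import math

def certainty_equivalent(outcomes, r):
    """CE of equally likely monetary outcomes under CARA utility.

    For r > 0 (risk averse): CE = -(1/r) * ln(mean(exp(-r * x))).
    For r == 0 (risk neutral) the CE is simply the mean outcome.
    """
    if r == 0:
        return sum(outcomes) / len(outcomes)
    mean_u = sum(math.exp(-r * x) for x in outcomes) / len(outcomes)
    return -math.log(mean_u) / r

# Illustrative net returns ($/acre) for two tillage systems.
no_till      = [120.0, 90.0, 150.0, 60.0]
conventional = [110.0, 105.0, 115.0, 100.0]

# SERF compares CEs across a range of risk-aversion coefficients;
# at each r, the alternative with the higher CE is preferred.
ranking = {r: (certainty_equivalent(no_till, r),
               certainty_equivalent(conventional, r))
           for r in (0.0, 0.01, 0.05)}
```

As risk aversion r grows, the CE of the riskier alternative falls faster, which is how SERF can reverse a ranking that holds for risk-neutral decision makers.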

  18. Design and Verification of Remote Sensing Image Data Center Storage Architecture Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Tang, D.; Zhou, X.; Jing, Y.; Cong, W.; Li, C.

    2018-04-01

    The data center is a new concept of data processing and application proposed in recent years. It is a new processing method built on data, parallel computing, and compatibility with different hardware clusters. While optimizing the data storage management structure, it fully utilizes the computing nodes of cluster resources and improves the efficiency of parallel data applications. This paper used mature Hadoop technology to build a large-scale distributed image management architecture for remote sensing imagery. Using MapReduce parallel processing technology, it invoked many computing nodes to process image storage blocks and pyramids in the background, improving the efficiency of image reading and application and solving the need for concurrent, multi-user, high-speed access to remotely sensed data. The rationality, reliability, and superiority of the system design were verified by testing the storage efficiency for different image data and multiple users, and by analyzing how the distributed storage architecture improves the application efficiency of remote sensing images through an actual Hadoop service system.
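The map/reduce pattern the system applies to image blocks can be sketched without Hadoop itself; the tile names and the per-block statistic below are illustrative (in Hadoop the map step would run on many nodes and the reduce step would gather the results):

```python
from functools import reduce

def map_block(block):
    """Map step: process each image block independently.
    Here the per-block work is just a mean pixel value."""
    name, pixels = block
    return (name, sum(pixels) / len(pixels))

def reduce_stats(acc, item):
    """Reduce step: merge per-block results into one summary dict."""
    name, mean = item
    acc[name] = mean
    return acc

# Illustrative image blocks (tile name, pixel values).
blocks = [("tile_0_0", [10, 20, 30]), ("tile_0_1", [40, 60])]

mapped = map(map_block, blocks)           # distributed across nodes in Hadoop
stats = reduce(reduce_stats, mapped, {})  # gathered on the reducer
```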

  19. Study of relationships of material properties and high efficiency solar cell performance on material composition

    NASA Technical Reports Server (NTRS)

    Sah, C. T.

    1983-01-01

    The performance improvements obtainable from extending the traditionally thin back-surface-field (BSF) layer deep into the base of silicon solar cells under terrestrial solar illumination (AM1) are analyzed. This extended BSF cell is also known as the back-drift-field cell. About 100 silicon cells were analyzed, each with a different emitter or base dopant impurity distribution whose selection was based on physically anticipated improvements. The four principal performance parameters (the open-circuit voltage, the short-circuit current, the fill factor, and the maximum efficiency) are computed using a FORTRAN program called Circuit Technique for Semiconductor-device Analysis (CTSA), which numerically solves the six Shockley equations under AM1 solar illumination at 88.92 mW/cm², at an optimum cell thickness of 50 um. The results show that very significant performance improvements can be realized by extending the BSF layer thickness from 2 um (18% efficiency) to 40 um (20% efficiency).
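The four performance parameters in the record combine in the standard way, η = Voc · Jsc · FF / Pin. A minimal sketch: only the 88.92 mW/cm² AM1 input power is from the record; the sample Voc, Jsc, and FF values are illustrative, not Sah's computed results:

```python
def cell_efficiency(voc_volts, jsc_ma_per_cm2, fill_factor, pin_mw_per_cm2=88.92):
    """Conversion efficiency from open-circuit voltage, short-circuit current
    density, and fill factor, at the AM1 input power used in the record."""
    p_max_mw = voc_volts * jsc_ma_per_cm2 * fill_factor  # V * mA/cm^2 = mW/cm^2
    return p_max_mw / pin_mw_per_cm2

# Illustrative numbers for a good silicon cell (not from the record):
eta = cell_efficiency(voc_volts=0.62, jsc_ma_per_cm2=36.0, fill_factor=0.80)
# eta is ~0.20, i.e. a ~20% cell, in the range the record discusses
```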

  20. Estimation of optimum density and temperature for maximum efficiency of tin ions in Z discharge extreme ultraviolet sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masnavi, Majid; Nakajima, Mitsuo; Hotta, Eiki

    Extreme ultraviolet (EUV) discharge-based lamps for EUV lithography need to generate extremely high power in the narrow spectral band of 13.5 ± 0.135 nm. A simplified collisional-radiative model and a radiative transfer solution for an isotropic medium were utilized to investigate the wavelength-integrated light output of tin (Sn) plasma. Detailed calculations using the Hebrew University-Lawrence Livermore atomic code were employed to determine the necessary atomic data for the Sn4+ to Sn13+ charge states. The result of the model is compared with experimental spectra from a Sn-based discharge-produced plasma. The analysis reveals that an efficiency considerably larger than the so-called black-body radiator efficiency is obtained at an electron density of ~10^18 cm^-3. At higher electron densities, the spectral efficiency of Sn plasma decreases due to the saturation of resonance transitions.

  1. The Creation of a CPU Timer for High Fidelity Programs

    NASA Technical Reports Server (NTRS)

    Dick, Aidan A.

    2011-01-01

    Using the C and C++ programming languages, a tool was developed that measures the efficiency of a program by recording the amount of CPU time that various functions consume. By inserting the tool between lines of code in the program, one can receive a detailed report of the absolute and relative time consumption associated with each section. After adapting the generic tool for a high-fidelity launch vehicle simulation program called MAVERIC, the components of a frequently used function called "derivatives ( )" were measured. Out of the 34 sub-functions in "derivatives ( )", it was found that the top 8 sub-functions made up 83.1% of the total time spent. In order to decrease the overall run time of MAVERIC, a change was implemented in the sub-function "Event_Controller ( )". Reformatting "Event_Controller ( )" led to a 36.9% decrease in the total CPU time spent by that sub-function, and a 3.2% decrease in the total CPU time spent by the overarching function "derivatives ( )".
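The tool's core idea, bracketing a section of code with CPU-time reads and accumulating absolute and relative totals per section, can be sketched as follows (class and method names are ours; MAVERIC's actual tool is written in C/C++):

```python
import time
from collections import defaultdict

class CpuTimer:
    """Accumulate CPU time (not wall-clock time) per named code section."""

    def __init__(self):
        self.totals = defaultdict(float)

    def measure(self, name, func, *args, **kwargs):
        start = time.process_time()          # CPU time, like clock() in C
        result = func(*args, **kwargs)
        self.totals[name] += time.process_time() - start
        return result

    def report(self):
        """Return {section: (seconds, percent of total)}."""
        grand = sum(self.totals.values()) or 1.0
        return {name: (t, 100.0 * t / grand) for name, t in self.totals.items()}

timer = CpuTimer()
timer.measure("busy", lambda: sum(i * i for i in range(200_000)))
report = timer.report()
```

With several instrumented sections, the percentage column directly reproduces the kind of breakdown described for "derivatives ( )".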

  2. Multi Sensor Fusion Using Fitness Adaptive Differential Evolution

    NASA Astrophysics Data System (ADS)

    Giri, Ritwik; Ghosh, Arnob; Chowdhury, Aritra; Das, Swagatam

    The rising popularity of multi-source, multi-sensor networks supporting real-life applications calls for an efficient and intelligent approach to information fusion. Traditional optimization techniques often fail to meet these demands. The evolutionary approach provides a valuable alternative due to its inherently parallel nature and its ability to deal with difficult problems. We present a new evolutionary approach based on a modified version of Differential Evolution (DE), called Fitness Adaptive Differential Evolution (FiADE). FiADE treats sensors in the network as distributed intelligent agents with various degrees of autonomy. Existing approaches based on intelligent agents cannot completely answer the question of how their agents could coordinate their decisions in a complex environment. The proposed approach is formulated to produce good results for problems that are high-dimensional, highly nonlinear, and random, and it gives better results for the optimal allocation of sensors. Its performance is compared with that of the coordination generalized particle model (C-GPM), another evolutionary algorithm.
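The DE/rand/1/bin scheme that FiADE builds on can be sketched as follows; this is the standard algorithm with fixed F and Cr (FiADE's contribution is to adapt these per individual from fitness, which is not reproduced here), and all parameter values are illustrative:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, Cr=0.9, generations=100):
    """Minimize f over a box using classic DE/rand/1/bin."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # Mutation: v = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 != i.
            r1, r2, r3 = random.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = random.randrange(dim)   # guarantees >= 1 mutated component
            trial = []
            for j in range(dim):
                if random.random() < Cr or j == j_rand:   # binomial crossover
                    v = pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)               # clip to the box
                else:
                    v = pop[i][j]
                trial.append(v)
            ft = f(trial)
            if ft <= fit[i]:                              # greedy selection
                pop[i], fit[i] = trial, ft
    best = min(range(pop_size), key=lambda i: fit[i])
    return pop[best], fit[best]

random.seed(1)  # for a reproducible run
# Sphere function: global minimum 0 at the origin.
x, fx = differential_evolution(lambda x: sum(v * v for v in x), [(-5, 5)] * 3)
```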

  3. RGB-Stack Light Emitting Diode Modules with Transparent Glass Circuit Board and Oil Encapsulation

    PubMed Central

    Li, Ying-Chang; Chang, Yuan-Hsiao; Singh, Preetpal; Chang, Liann-Be; Yeh, Der-Hwa; Chao, Ting-Yu; Jian, Si-Yun; Li, Yu-Chi; Lai, Chao-Sung; Ying, Shang-Ping

    2018-01-01

    The light emitting diode (LED) is widely used in modern solid-state lighting applications, and its output efficiency is closely related to the submounts’ material properties. Most submounts used today, such as low-power printed circuit boards (PCBs) or high-power metal core printed circuit boards (MCPCBs), are not transparent and seriously decrease the output light extraction. To meet the requirements of high light output and better color mixing, a three-dimensional (3-D) stacked flip-chip (FC) LED module is proposed and demonstrated. To realize light penetration and mixing, the 3-D vertically stacked RGB LEDs use transparent glass FC package submounts called glass circuit boards (GCBs). Light emitted from each stacked GCB LED passes through the others, yielding good output efficiency and homogeneous light-mixing characteristics. In this work, the parasitic problem of heat accumulation, which is caused by the poor thermal conductivity of the GCB and leads to a serious decrease in output efficiency, is solved by a proposed transparent cooling oil encapsulation (OCP) method. PMID:29494534

  4. RGB-Stack Light Emitting Diode Modules with Transparent Glass Circuit Board and Oil Encapsulation.

    PubMed

    Li, Ying-Chang; Chang, Yuan-Hsiao; Singh, Preetpal; Chang, Liann-Be; Yeh, Der-Hwa; Chao, Ting-Yu; Jian, Si-Yun; Li, Yu-Chi; Tan, Cher Ming; Lai, Chao-Sung; Chow, Lee; Ying, Shang-Ping

    2018-03-01

    The light emitting diode (LED) is widely used in modern solid-state lighting applications, and its output efficiency is closely related to the submounts' material properties. Most submounts used today, such as low-power printed circuit boards (PCBs) or high-power metal core printed circuit boards (MCPCBs), are not transparent and seriously decrease the output light extraction. To meet the requirements of high light output and better color mixing, a three-dimensional (3-D) stacked flip-chip (FC) LED module is proposed and demonstrated. To realize light penetration and mixing, the 3-D vertically stacked RGB LEDs use transparent glass FC package submounts called glass circuit boards (GCBs). Light emitted from each stacked GCB LED passes through the others, yielding good output efficiency and homogeneous light-mixing characteristics. In this work, the parasitic problem of heat accumulation, which is caused by the poor thermal conductivity of the GCB and leads to a serious decrease in output efficiency, is solved by a proposed transparent cooling oil encapsulation (OCP) method.

  5. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors for faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  6. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors for faults during address generation have not been widely researched (especially in the context of indexing large arrays). We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Ensuring the propagation of errors allows one to place detectors at loop exit points and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  7. Electrodeposition of hierarchically structured three-dimensional nickel–iron electrodes for efficient oxygen evolution at high current densities

    PubMed Central

    Lu, Xunyu; Zhao, Chuan

    2015-01-01

    Large-scale industrial application of electrolytic water splitting has called for the development of oxygen evolution electrodes that are inexpensive, robust, and able to deliver large current densities (>500 mA cm−2) at low applied potentials. Here we show that an efficient oxygen electrode can be developed by electrodepositing amorphous mesoporous nickel–iron composite nanosheets directly onto macroporous nickel foam substrates. The as-prepared oxygen electrode exhibits high catalytic activity towards water oxidation in alkaline solutions, requiring an overpotential of only 200 mV to initiate the reaction, and is capable of delivering current densities of 500 and 1,000 mA cm−2 at overpotentials of 240 and 270 mV, respectively. The electrode also shows prolonged stability against bulk water electrolysis at large currents. Collectively, the as-prepared three-dimensional structured electrode is, to the best of our knowledge, the most efficient oxygen evolution electrode in alkaline electrolytes reported to date, and can potentially be applied for industrial-scale water electrolysis. PMID:25776015

  8. Microencapsulation of soybean oil by spray drying using oleosomes

    NASA Astrophysics Data System (ADS)

    Maurer, S.; Ghebremedhin, M.; Zielbauer, B. I.; Knorr, D.; Vilgis, T. A.

    2016-02-01

    The food industry has discovered that oleosomes are beneficial as carriers of bioactive ingredients. Oleosomes are subcellular oil droplets typically found in plant seeds. Within seeds, they exist as pre-emulsified oil high in unsaturated fatty acids, stabilised by a monolayer of phospholipids and proteins called oleosins. Oleosins are anchored into the oil core by a hydrophobic domain, while the hydrophilic domains remain on the oleosome surface. To preserve the nutritional value of the oil and the function of oleosomes, microencapsulation by means of spray drying is a promising technique. For the microencapsulation of oleosomes, maltodextrin was used. To achieve a high oil encapsulation efficiency, optimal process parameters needed to be established. In order to better understand the drying mechanisms behind powder formation and the associated powder properties, the findings obtained from different microscopic and spectroscopic measurements were correlated with each other. It was found that spray drying of pure oleosome emulsions resulted in excessive component segregation and thus in a poor encapsulation efficiency. With the addition of maltodextrin, the oil encapsulation efficiency was significantly improved.

  9. Efficiency of energy recovery from waste incineration, in the light of the new Waste Framework Directive.

    PubMed

    Grosso, Mario; Motta, Astrid; Rigamonti, Lucia

    2010-07-01

    This paper deals with a key issue related to municipal waste incineration: the efficiency of energy recovery. A strong driver for improving the energy performance of waste-to-energy plants is the recent Waste Framework Directive (Directive 2008/98/EC of the European Parliament and of the Council of 19 November 2008 on waste and repealing certain Directives), which allows high-efficiency installations to benefit from a status of "recovery" rather than "disposal". The change in designation means a step up in the waste hierarchy, where the lowest level of priority is now restricted to landfilling and low-efficiency waste incineration. The so-called "R1 formula" reported in the Directive, which accounts for both power and heat production, is critically analyzed and correlated with the more scientifically based approach of exergy efficiency. The results obtained for waste-to-energy plants currently operating in Europe reveal some significant differences in their performance, mainly related to the average size and to the availability of a heat market (district heating). Copyright (c) 2010 Elsevier Ltd. All rights reserved.
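The R1 formula from Annex II of Directive 2008/98/EC can be sketched as follows; the weighting factors are those stated in the Directive, while the plant figures below are illustrative, not taken from the paper:

```python
def r1_efficiency(ep_electricity, ep_heat, ef, ew, ei):
    """R1 energy-efficiency formula from Annex II of Directive 2008/98/EC:

        R1 = (Ep - (Ef + Ei)) / (0.97 * (Ew + Ef))

    Ep weights exported electricity by 2.6 and commercially used heat by 1.1;
    Ef is the energy input from fuels contributing to steam production, Ew the
    energy contained in the treated waste, and Ei other imported energy. The
    factor 0.97 accounts for losses in bottom ash and radiation. All inputs
    are annual energies (GJ/year).
    """
    ep = 2.6 * ep_electricity + 1.1 * ep_heat
    return (ep - (ef + ei)) / (0.97 * (ew + ef))

# Illustrative plant (values are ours, not from the paper):
r1 = r1_efficiency(ep_electricity=180_000, ep_heat=250_000,
                   ef=20_000, ew=1_000_000, ei=10_000)
# Plants permitted after 2008 qualify as "recovery" when R1 >= 0.65.
```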

  10. Interpretation of Extinction in Gaussian-Beam Scattering

    NASA Technical Reports Server (NTRS)

    Lock, James A.

    1995-01-01

    The extinction efficiency for the interaction of a plane wave with a large nonabsorbing spherical particle is approximately 2.0. When a Gaussian beam of half-width w(sub 0) is incident upon a spherical particle of radius a with w(sub 0)/a less than 1, the extinction efficiency attains unexpectedly high or low values, contrary to intuitive expectations. The reason for this is associated with the so-called compensating term in the scattered field, which cancels the field of the Gaussian beam behind the particle, thereby producing the particle's shadow. I introduce a decomposition of the total exterior field into incoming and outgoing portions that are free of compensating terms. It is then shown that a suitably defined interaction efficiency has the intuitively expected asymptotic values of 2.0 for w(sub 0)/a much greater than 1 and 1.0 for w(sub 0)/a much less than 1.

  11. Modular Stirling Radioisotope Generator

    NASA Technical Reports Server (NTRS)

    Schmitz, Paul C.; Mason, Lee S.; Schifer, Nicholas A.

    2016-01-01

    High-efficiency radioisotope power generators will play an important role in future NASA space exploration missions. Stirling Radioisotope Generators (SRGs) have been identified as a candidate generator technology capable of providing mission designers with an efficient, high-specific-power electrical generator. SRGs high conversion efficiency has the potential to extend the limited Pu-238 supply when compared with current Radioisotope Thermoelectric Generators (RTGs). Due to budgetary constraints, the Advanced Stirling Radioisotope Generator (ASRG) was canceled in the fall of 2013. Over the past year a joint study by NASA and the Department of Energy (DOE) called the Nuclear Power Assessment Study (NPAS) recommended that Stirling technologies continue to be explored. During the mission studies of the NPAS, spare SRGs were sometimes required to meet mission power system reliability requirements. This led to an additional mass penalty and increased isotope consumption levied on certain SRG-based missions. In an attempt to remove the spare power system, a new generator architecture is considered, which could increase the reliability of a Stirling generator and provide a more fault-tolerant power system. This new generator called the Modular Stirling Radioisotope Generator (MSRG) employs multiple parallel Stirling convertor/controller strings, all of which share the heat from the General Purpose Heat Source (GPHS) modules. For this design, generators utilizing one to eight GPHS modules were analyzed, which provided about 50 to 450 W of direct current (DC) to the spacecraft, respectively. Four Stirling convertors are arranged around each GPHS module resulting in from 4 to 32 Stirling/controller strings. The convertors are balanced either individually or in pairs, and are radiatively coupled to the GPHS modules. Heat is rejected through the housing/radiator, which is similar in construction to the ASRG. 
Mass and power analysis for these systems indicate that specific power may be slightly lower than the ASRG and similar to the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG). However, the reliability should be significantly increased compared to ASRG.

  12. Modular Stirling Radioisotope Generator

    NASA Technical Reports Server (NTRS)

    Schmitz, Paul C.; Mason, Lee S.; Schifer, Nicholas A.

    2015-01-01

    High-efficiency radioisotope power generators will play an important role in future NASA space exploration missions. Stirling Radioisotope Generators (SRGs) have been identified as a candidate generator technology capable of providing mission designers with an efficient, high-specific-power electrical generator. SRGs' high conversion efficiency has the potential to extend the limited Pu-238 supply when compared with current Radioisotope Thermoelectric Generators (RTGs). Due to budgetary constraints, the Advanced Stirling Radioisotope Generator (ASRG) was canceled in the fall of 2013. Over the past year a joint study by NASA and DOE called the Nuclear Power Assessment Study (NPAS) recommended that Stirling technologies continue to be explored. During the mission studies of the NPAS, spare SRGs were sometimes required to meet mission power system reliability requirements. This led to an additional mass penalty and increased isotope consumption levied on certain SRG-based missions. In an attempt to remove the spare power system, a new generator architecture is considered, which could increase the reliability of a Stirling generator and provide a more fault-tolerant power system. This new generator, called the Modular Stirling Radioisotope Generator (MSRG), employs multiple parallel Stirling convertor/controller strings, all of which share the heat from the General Purpose Heat Source (GPHS) modules. For this design, generators utilizing one to eight GPHS modules were analyzed, which provide about 50 to 450 watts DC to the spacecraft, respectively. Four Stirling convertors are arranged around each GPHS module, resulting in 4 to 32 Stirling/controller strings. The convertors are balanced either individually or in pairs, and are radiatively coupled to the GPHS modules. Heat is rejected through the housing/radiator, which is similar in construction to the ASRG. 
Mass and power analysis for these systems indicate that specific power may be slightly lower than the ASRG and similar to the MMRTG. However, the reliability should be significantly increased compared to ASRG.

  13. Technology needs for high speed rotorcraft (3)

    NASA Technical Reports Server (NTRS)

    Detore, Jack; Conway, Scott

    1991-01-01

    The spectrum of vertical takeoff and landing (VTOL) aircraft is examined to determine which are most likely to achieve high subsonic cruise speeds while retaining hover qualities similar to those of a helicopter. Two civil mission profiles are considered: a 600-n.mi. mission for a 15-passenger and for a 30-passenger payload. Applying current technology, only the 15- and 30-passenger tiltfold aircraft are capable of attaining the 450-knot design goal. The two tiltfold aircraft at 450 knots and a 30-passenger tiltrotor at 375 knots were further developed for the Task II technology analysis. A program called High-Speed Total Envelope Proprotor (HI-STEP), based on the tiltrotor concept, is recommended to address several of the resulting technology issues; its goals are to investigate propulsive efficiency, maneuver loads, and aeroelastic stability. A program called Tiltfold System (TFS) is recommended based on the tiltfold concept. A task is also identified to resolve the best design speed from productivity and demand considerations, based on the technology that emerges from the recommended programs. Programs currently in progress that may meet the other technology needs include the Integrated High Performance Turbine Engine Technology (IHPTET) program (NASA Lewis) and the Advanced Structural Concepts Program funded through NASA Langley.

  14. Fast single-pass alignment and variant calling using sequencing data

    USDA-ARS?s Scientific Manuscript database

    Sequencing research requires efficient computation. Few programs use already known information about DNA variants when aligning sequence data to the reference map. New program findmap.f90 reads the previous variant list before aligning sequence, calling variant alleles, and summing the allele counts...
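
    The single-pass idea above — consult an already-known variant list while streaming aligned reads and tally allele counts — can be sketched in a few lines. This is a toy illustration with invented positions, reads, and function names, not the findmap.f90 implementation or its file formats:

```python
# Toy sketch of single-pass allele counting against a known variant list
# (positions, reads, and structure invented; not findmap.f90's actual format).
known_variants = {1002: ("A", "G"), 1010: ("C", "T")}  # pos -> (ref, alt)

def count_alleles(reads, variants):
    """reads: (start_position, aligned_sequence) pairs; one pass, no re-discovery."""
    counts = {pos: {"ref": 0, "alt": 0} for pos in variants}
    for start, seq in reads:
        for pos, (ref, alt) in variants.items():
            offset = pos - start
            if 0 <= offset < len(seq):
                if seq[offset] == ref:
                    counts[pos]["ref"] += 1
                elif seq[offset] == alt:
                    counts[pos]["alt"] += 1
    return counts

reads = [(1000, "GGAGTTTTCTC"), (1005, "TTTTCTTG")]
counts = count_alleles(reads, known_variants)
```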

  15. Extreme triple asymmetric (ETAS) epitaxial designs for increased efficiency at high powers in 9xx-nm diode lasers

    NASA Astrophysics Data System (ADS)

    Kaul, T.; Erbert, G.; Maaßdorf, A.; Martin, D.; Crump, P.

    2018-02-01

    Broad area lasers that are tailored to be most efficient at the highest achievable optical output power are sought by industry to decrease operation costs and improve system performance. Devices using Extreme-Double-ASymmetric (EDAS) epitaxial designs are promising candidates for improved efficiency at high optical output powers due to low series resistance, low optical loss and low carrier leakage. However, EDAS designs leverage ultra-thin p-side waveguides, meaning that the optical mode is shifted into the n-side waveguide, resulting in a low optical confinement in the active region, low gain and hence high threshold current, limiting peak performance. We introduce here explicit design considerations that enable EDAS-based devices to be developed with increased optical confinement in the active layer without changing the p-side layer thicknesses. Specifically, this is realized by introducing a third asymmetric component in the vicinity of the quantum well. We call this approach Extreme-Triple-ASymmetric (ETAS) design. A series of ETAS-based vertical designs were fabricated into broad area lasers that deliver up to 63% power conversion efficiency at 14 W CW optical output power from a 100 μm stripe laser, which corresponds to the operation point of a kW optical output power in a laser bar. The design process, the impact of structural changes on power saturation mechanisms and finally devices with improved performance will be presented.

  16. WhopGenome: high-speed access to whole-genome variation and sequence data in R.

    PubMed

    Wittelsbürger, Ulrich; Pfeifer, Bastian; Lercher, Martin J

    2015-02-01

    The statistical programming language R has become a de facto standard for the analysis of many types of biological data and is well suited to the rapid development of new algorithms. However, variant call data from population-scale resequencing projects are typically too large to be read and processed efficiently with R's built-in I/O capabilities. WhopGenome can efficiently read whole-genome variation data stored in the widely used variant call format (VCF) into several R data types. VCF files can be accessed either on local hard drives or on remote servers. WhopGenome can associate variants with annotations such as those available from the UCSC genome browser, and can accelerate the reading process by filtering loci according to user-defined criteria. WhopGenome can also read other Tabix-indexed files and create indices to allow fast selective access to FASTA-formatted sequence files. The WhopGenome R package is available on CRAN at http://cran.r-project.org/web/packages/WhopGenome/. A Bioconductor package has been submitted. Contact: lercher@cs.uni-duesseldorf.de. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. A New Signaling Architecture THREP with Autonomous Radio-Link Control for Wireless Communications Systems

    NASA Astrophysics Data System (ADS)

    Hirono, Masahiko; Nojima, Toshio

    This paper presents a new signaling architecture for radio-access control in wireless communications systems. Called THREP (for THREe-phase link set-up Process), it enables systems with low-cost configurations to provide tetherless access and wide-ranging mobility by using autonomous radio-link controls for fast cell searching and distributed call management. A signaling architecture generally consists of a radio-access part and a service-entity-access part. In THREP, the latter part is divided into two steps: preparing a communication channel, and sustaining it. Access control in THREP is thus composed of three separated parts, or protocol phases. The specifications of each phase are determined independently according to system requirements. In the proposed architecture, the first phase uses autonomous radio-link control because we want to construct low-power indoor wireless communications systems. Evaluation of channel usage efficiency and hand-over loss probability in the personal handy-phone system (PHS) shows that THREP makes the radio-access sub-system operations in a practical application model highly efficient, and the results of a field experiment show that THREP provides sufficient protection against severe fast CNR degradation in practical indoor propagation environments.

  18. Overview of Single-Molecule Speckle (SiMS) Microscopy and Its Electroporation-Based Version with Efficient Labeling and Improved Spatiotemporal Resolution.

    PubMed

    Yamashiro, Sawako; Watanabe, Naoki

    2017-07-06

    Live-cell single-molecule imaging was introduced more than a decade ago and has provided critical information on remodeling of the actin cytoskeleton, the motion of plasma membrane proteins, and the dynamics of molecular motor proteins. Actin remodeling has been the best target for this approach because actin and its associated proteins stop diffusing when assembled, allowing visualization of single molecules of fluorescently labeled proteins in a state-specific manner. The approach based on this simple principle is called Single-Molecule Speckle (SiMS) microscopy. For instance, the spatiotemporal regulation of actin polymerization and the lifetime distribution of actin filaments can be monitored directly by tracking actin SiMS. In combination with fluorescently labeled probes of various actin regulators, SiMS microscopy has contributed to clarifying the processes underlying recycling, motion, and remodeling of the live-cell actin network. Recently, we introduced an electroporation-based method called eSiMS microscopy, with high labeling efficiency, ease of use, and improved spatiotemporal precision. In this review, we describe the application of live-cell single-molecule imaging to cellular actin dynamics and discuss the advantages of eSiMS microscopy over previous SiMS microscopy.

  19. DROMPA: easy-to-handle peak calling and visualization software for the computational analysis and validation of ChIP-seq data.

    PubMed

    Nakato, Ryuichiro; Itoh, Takehiko; Shirahige, Katsuhiko

    2013-07-01

    Chromatin immunoprecipitation with high-throughput sequencing (ChIP-seq) can identify genomic regions that bind proteins involved in various chromosomal functions. Although the development of next-generation sequencers offers the technology needed to identify these protein-binding sites, the analysis can be computationally challenging because sequencing data sometimes consist of >100 million reads/sample. Herein, we describe a cost-effective and time-efficient protocol that is generally applicable to ChIP-seq analysis; this protocol uses a novel peak-calling program termed DROMPA to identify peaks and an additional program, parse2wig, to preprocess read-map files. This two-step procedure drastically reduces computational time and memory requirements compared with other programs. DROMPA enables the identification of protein localization sites in repetitive sequences and efficiently identifies both broad and sharp protein localization peaks. Specifically, DROMPA outputs a protein-binding profile map in pdf or png format, which can be easily manipulated by users who have a limited background in bioinformatics. © 2013 The Authors Genes to Cells © 2013 by the Molecular Biology Society of Japan and Wiley Publishing Asia Pty Ltd.
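
    The two-step flow described above — preprocess read-map files into binned coverage, then call peaks against a background — can be illustrated with a small sketch. The function names, bin size, and thresholds below are invented for illustration; they are not DROMPA's or parse2wig's actual interface:

```python
from collections import Counter

def bin_reads(read_positions, bin_size=100):
    """parse2wig-like step (hypothetical interface): reduce raw mapped read
    positions to per-bin counts, so peak calling never touches raw reads."""
    return Counter(p // bin_size for p in read_positions)

def call_peaks(bins, background=1.0, fold=2.0):
    """DROMPA-like step (hypothetical interface): flag bins whose coverage
    is at least `fold` times the background level."""
    return sorted(b for b, c in bins.items() if c >= fold * background)

bins = bin_reads([310, 320, 330, 350, 900])   # reads clustered in bin 3
peaks = call_peaks(bins)                      # -> [3]
```

    Summarizing reads into bins once, then running all downstream analysis on the much smaller binned file, is what makes the two-step procedure cheap in time and memory.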

  20. Effects of different analysis techniques and recording duty cycles on passive acoustic monitoring of killer whales.

    PubMed

    Riera, Amalis; Ford, John K; Ross Chapman, N

    2013-09-01

    Killer whales in British Columbia are at risk, and little is known about their winter distribution. Passive acoustic monitoring of their year-round habitat is a valuable supplement to traditional visual and photographic surveys. However, long-term acoustic studies of odontocetes have some limitations, including the generation of large amounts of data that require highly time-consuming processing. There is a need to develop tools and protocols to maximize the efficiency of such studies. Here, two types of analysis, real-time analysis and long-term spectral averages, were compared to assess their performance at detecting killer whale calls in long-term acoustic recordings. In addition, two different duty cycles, 1/3 and 2/3, were tested. Both the use of long-term spectral averages and a lower duty cycle resulted in a decrease in call detection and positive pod identification, leading to underestimates of the amount of time the whales were present. The impact of these limitations should be considered in future killer whale acoustic surveys. A compromise between a lower-resolution data processing method and a higher duty cycle is suggested for maximum methodological efficiency.
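
    The duty-cycle effect can be illustrated with a toy calculation: a call is missed whenever it falls in the "off" part of each recording cycle, so a 1/3 duty cycle detects fewer calls than a 2/3 one. The cycle length and call times below are invented, not from the study:

```python
def detected(call_times, cycle=60, on_fraction=2/3):
    """Keep only calls landing in the 'on' part of each recording cycle
    (times in minutes; each cycle is `cycle` minutes long)."""
    return [t for t in call_times if (t % cycle) < cycle * on_fraction]

calls = list(range(0, 180, 7))                   # invented call times over 3 h
hi = len(detected(calls, on_fraction=2/3))       # 2/3 duty cycle
lo = len(detected(calls, on_fraction=1/3))       # 1/3 duty cycle
```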

  1. Calligraphic Poling for WGM Resonators

    NASA Technical Reports Server (NTRS)

    Mohageg, Makan; Strekalov, Dmitry; Savchenkov, Anatoliy; Matsko, Andrey; Ilchenko, Vladimir; Maleki, Lute

    2007-01-01

    By engineering the geometry of a nonlinear optical crystal, the effective efficiency of all nonlinear optical oscillations can be increased dramatically. Specifically, sphere- and disk-shaped crystal resonators have been used to demonstrate nonlinear optical oscillations at sub-milliwatt input power when light propagates in a Whispering Gallery Mode (WGM) of such a resonant cavity. In terms of both device production and experimentation in quantum optics, some nonlinear optical effects with naturally high efficiency can occult the desired nonlinear scattering process unless a poling structure is imposed on the crystal resonator. In this paper, I will discuss a new method for generating poling structures in ferroelectric crystal resonators called calligraphic poling. The details of the poling apparatus, experimental results, and speculation on future applications will be discussed.

  2. An efficient approach to the analysis of rail surface irregularities accounting for dynamic train-track interaction and inelastic deformations

    NASA Astrophysics Data System (ADS)

    Andersson, Robin; Torstensson, Peter T.; Kabo, Elena; Larsson, Fredrik

    2015-11-01

    A two-dimensional computational model for assessment of rolling contact fatigue induced by discrete rail surface irregularities, especially in the context of so-called squats, is presented. Dynamic excitation in a wide frequency range is considered in computationally efficient time-domain simulations of high-frequency dynamic vehicle-track interaction accounting for transient non-Hertzian wheel-rail contact. Results from dynamic simulations are mapped onto a finite element model to resolve the cyclic, elastoplastic stress response in the rail. Ratcheting under multiple wheel passages is quantified. In addition, low cycle fatigue impact is quantified using the Jiang-Sehitoglu fatigue parameter. The functionality of the model is demonstrated by numerical examples.

  3. Exploring high dimensional free energy landscapes: Temperature accelerated sliced sampling

    NASA Astrophysics Data System (ADS)

    Awasthi, Shalini; Nair, Nisanth N.

    2017-03-01

    Biased sampling of collective variables is widely used to accelerate rare events in molecular simulations and to explore free energy surfaces. However, computational efficiency of these methods decreases with increasing number of collective variables, which severely limits the predictive power of the enhanced sampling approaches. Here we propose a method called Temperature Accelerated Sliced Sampling (TASS) that combines temperature accelerated molecular dynamics with umbrella sampling and metadynamics to sample the collective variable space in an efficient manner. The presented method can sample a large number of collective variables and is advantageous for controlled exploration of broad and unbound free energy basins. TASS is also shown to achieve quick free energy convergence and is practically usable with ab initio molecular dynamics techniques.

  4. Enabling technologies and green processes in cyclodextrin chemistry.

    PubMed

    Cravotto, Giancarlo; Caporaso, Marina; Jicsinszky, Laszlo; Martina, Katia

    2016-01-01

    The design of efficient synthetic green strategies for the selective modification of cyclodextrins (CDs) is still a challenging task. Outstanding results have been achieved in recent years by means of so-called enabling technologies, such as microwaves, ultrasound and ball mills, that have become irreplaceable tools in the synthesis of CD derivatives. Several examples of sonochemical selective modification of native α-, β- and γ-CDs have been reported including heterogeneous phase Pd- and Cu-catalysed hydrogenations and couplings. Microwave irradiation has emerged as the technique of choice for the production of highly substituted CD derivatives, CD grafted materials and polymers. Mechanochemical methods have successfully furnished greener, solvent-free syntheses and efficient complexation, while flow microreactors may well improve the repeatability and optimization of critical synthetic protocols.

  5. Using Decision Procedures to Build Domain-Specific Deductive Synthesis Systems

    NASA Technical Reports Server (NTRS)

    VanBaalen, Jeffrey; Roach, Steven; Lau, Sonie (Technical Monitor)

    1998-01-01

    This paper describes a class of decision procedures that we have found useful for efficient, domain-specific deductive synthesis. These procedures are called closure-based ground literal satisfiability procedures. We argue that this is a large and interesting class of procedures and show how to interface these procedures to a theorem prover for efficient deductive synthesis. Finally, we describe some results we have observed from our implementation. Amphion/NAIF is a domain-specific, high-assurance software synthesis system. It takes an abstract specification of a problem in solar system mechanics, such as 'when will a signal sent from the Cassini spacecraft to Earth be blocked by the planet Saturn?', and automatically synthesizes a FORTRAN program to solve it.

  6. Delivery of multiple siRNAs using lipid-coated PLGA nanoparticles for treatment of prostate cancer.

    PubMed

    Hasan, Warefta; Chu, Kevin; Gullapalli, Anuradha; Dunn, Stuart S; Enlow, Elizabeth M; Luft, J Christopher; Tian, Shaomin; Napier, Mary E; Pohlhaus, Patrick D; Rolland, Jason P; DeSimone, Joseph M

    2012-01-11

    Nanotechnology can provide a critical advantage in developing strategies for cancer management and treatment by helping to improve the safety and efficacy of novel therapeutic delivery vehicles. This paper reports the fabrication of poly(lactic acid-co-glycolic acid)/siRNA nanoparticles coated with lipids for use as prostate cancer therapeutics made via a unique soft lithography particle molding process called Particle Replication In Nonwetting Templates (PRINT). The PRINT process enables high encapsulation efficiency of siRNA into neutral and monodisperse PLGA particles (32-46% encapsulation efficiency). Lipid-coated PLGA/siRNA PRINT particles were used to deliver therapeutic siRNA in vitro to knockdown genes relevant to prostate cancer. © 2011 American Chemical Society

  7. Efficient tiled calculation of over-10-gigapixel holograms using ray-wavefront conversion.

    PubMed

    Igarashi, Shunsuke; Nakamura, Tomoya; Matsushima, Kyoji; Yamaguchi, Masahiro

    2018-04-16

    In the calculation of large-scale computer-generated holograms, an approach called "tiling," which divides the hologram plane into small rectangles, is often employed due to limitations on computational memory. However, the total amount of computational complexity severely increases with the number of divisions. In this paper, we propose an efficient method for calculating tiled large-scale holograms using ray-wavefront conversion. In experiments, the effectiveness of the proposed method was verified by comparing its calculation cost with that using the previous method. Additionally, a hologram of 128K × 128K pixels was calculated and fabricated by a laser-lithography system, and a high-quality 105 mm × 105 mm 3D image including complicated reflection and translucency was optically reconstructed.

  8. Methods for Multiplex Template Sampling in Digital PCR Assays

    PubMed Central

    Petriv, Oleh I.; Heyries, Kevin A.; VanInsberghe, Michael; Walker, David; Hansen, Carl L.

    2014-01-01

    The efficient use of digital PCR (dPCR) for precision copy number analysis requires high concentrations of target molecules that may be difficult or impossible to obtain from clinical samples. To solve this problem we present a strategy, called Multiplex Template Sampling (MTS), that effectively increases template concentrations by detecting multiple regions of fragmented target molecules. Three alternative assay approaches are presented for implementing MTS analysis of chromosome 21, providing a 10-fold concentration enhancement while preserving assay precision. PMID:24854517

  9. Methods for multiplex template sampling in digital PCR assays.

    PubMed

    Petriv, Oleh I; Heyries, Kevin A; VanInsberghe, Michael; Walker, David; Hansen, Carl L

    2014-01-01

    The efficient use of digital PCR (dPCR) for precision copy number analysis requires high concentrations of target molecules that may be difficult or impossible to obtain from clinical samples. To solve this problem we present a strategy, called Multiplex Template Sampling (MTS), that effectively increases template concentrations by detecting multiple regions of fragmented target molecules. Three alternative assay approaches are presented for implementing MTS analysis of chromosome 21, providing a 10-fold concentration enhancement while preserving assay precision.
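
    The concentration argument behind MTS can be sketched numerically: in dPCR the mean number of targets per partition is estimated from the fraction of positive partitions, and assaying n regions of each fragmented target molecule raises the effective target concentration roughly n-fold. The numbers below are illustrative, not from the paper:

```python
import math

def dpcr_lambda(positive_partitions, total_partitions):
    """Poisson estimate of mean targets per partition from the positive fraction."""
    return -math.log(1 - positive_partitions / total_partitions)

lam = dpcr_lambda(393, 1000)   # ~0.5 targets per partition (invented counts)

# If each fragmented molecule exposes `regions` independently detectable
# targets, the effective target concentration scales roughly with `regions`.
molecule_lambda = 0.05         # invented molecule concentration
regions = 10
effective_lambda = molecule_lambda * regions
```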

  10. Severity Summarization and Just in Time Alert Computation in mHealth Monitoring.

    PubMed

    Pathinarupothi, Rahul Krishnan; Alangot, Bithin; Rangan, Ekanath

    2017-01-01

    Mobile health is fast evolving into a practical solution to remotely monitor high-risk patients and deliver timely intervention in case of emergencies. Building upon our previous work on a fast and power-efficient summarization framework for remote health monitoring applications, called RASPRO (Rapid Alerts Summarization for Effective Prognosis), we have developed a real-time criticality detection technique that ensures meeting physician-defined intervention times. We also present the results from initial testing of this technique.

  11. A front-end automation tool supporting design, verification and reuse of SOC.

    PubMed

    Yan, Xiao-lang; Yu, Long-li; Wang, Jie-bing

    2004-09-01

    This paper describes an in-house developed language tool called VPerl that was used in developing a 250 MHz 32-bit high-performance, low-power embedded CPU core. The authors show that use of this tool can compress the Verilog code by more than a factor of 5, increase the efficiency of the front-end design, and reduce the bug rate significantly. The tool can also be used to enhance the reusability of an intellectual property model and to facilitate porting designs to different platforms.

  12. General Monte Carlo reliability simulation code including common mode failures and HARP fault/error-handling

    NASA Technical Reports Server (NTRS)

    Platt, M. E.; Lewis, E. E.; Boehm, F.

    1991-01-01

    A Monte Carlo Fortran computer program was developed that uses two variance reduction techniques for computing the reliability of very large, highly reliable fault-tolerant systems. The program is consistent with the hybrid automated reliability predictor (HARP) code, which employs behavioral decomposition and complex fault/error-handling models. This new capability, called MC-HARP, efficiently solves reliability models with non-constant failure rates (Weibull). Common-mode failure modeling is also supported.
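
    The core Monte Carlo idea — sample Weibull component lifetimes and count the trials in which the system survives — can be sketched for a toy 2-out-of-3 system. Parameters and system structure are invented; MC-HARP itself handles far richer fault/error-handling models and variance reduction:

```python
import random

def system_reliability(t, n_trials=20000, shape=1.5, scale=1000.0, seed=1):
    """Monte Carlo estimate that a 2-out-of-3 system survives to time t,
    with Weibull(scale, shape) component lifetimes (toy parameters)."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_trials):
        lifetimes = [rng.weibullvariate(scale, shape) for _ in range(3)]
        if sum(life > t for life in lifetimes) >= 2:
            survived += 1
    return survived / n_trials

r = system_reliability(500.0)   # analytic value is about 0.787
```

    With shape > 1 the Weibull hazard rate grows over time, which is exactly the non-constant-failure-rate case the abstract highlights.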

  13. Silicon web process development

    NASA Technical Reports Server (NTRS)

    Duncan, C. S.; Seidensticker, R. G.; Mchugh, J. P.; Skutch, M. E.; Driggers, J. M.; Hopkins, R. H.

    1981-01-01

    The silicon web process takes advantage of natural crystallographic stabilizing forces to grow long, thin single crystal ribbons directly from liquid silicon. The ribbon, or web, is formed by the solidification of a liquid film supported by surface tension between two silicon filaments, called dendrites, which border the edges of the growing strip. The ribbon can be propagated indefinitely by replenishing the liquid silicon as it is transformed to crystal. The dendritic web process has several advantages for achieving low cost, high efficiency solar cells. These advantages are discussed.

  14. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.

  15. Effectiveness and cost effectiveness of television, radio and print advertisements in promoting the New York smokers' quitline.

    PubMed

    Farrelly, Matthew C; Hussin, Altijani; Bauer, Ursula E

    2007-12-01

    This study assessed the relative effectiveness and cost effectiveness of television, radio and print advertisements to generate calls to the New York smokers' quitline. Regression analysis was used to link total county level monthly quitline calls to television, radio and print advertising expenditures. Based on regression results, standardised measures of the relative effectiveness and cost effectiveness of expenditures were computed. There was a positive and statistically significant relation between call volume and expenditures for television (p<0.01) and radio (p<0.001) advertisements and a marginally significant effect for expenditures on newspaper advertisements (p<0.065). The largest effect was for television advertising. However, because of differences in advertising costs, for every $1000 increase in television, radio and newspaper expenditures, call volume increased by 0.1%, 5.7% and 2.8%, respectively. Television, radio and print media all effectively increased calls to the New York smokers' quitline. Although increases in expenditures for television were the most effective, their relatively high costs suggest they are not currently the most cost effective means to promote a quitline. This implies that a more efficient mix of media would place greater emphasis on radio than television. However, because the current study does not adequately assess the extent to which radio expenditures would sustain their effectiveness with substantial expenditure increases, it is not feasible to determine a more optimal mix of expenditures.
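
    The reported per-$1000 effects support rough, linear what-if arithmetic. The sketch below uses only the percentages quoted above; the baseline call volume and spending figures are invented, and the paper's actual regression is at the county-month level:

```python
# Reported call-volume increase per $1000 of spending, by medium (from the
# abstract above): 0.1% television, 5.7% radio, 2.8% newspaper.
per_1000 = {"television": 0.001, "radio": 0.057, "newspaper": 0.028}

def extra_calls(baseline_monthly_calls, medium, spend_thousands):
    """Linear approximation of the call increase implied by the coefficients."""
    return baseline_monthly_calls * per_1000[medium] * spend_thousands

most_cost_effective = max(per_1000, key=per_1000.get)   # 'radio'
bump = extra_calls(10000, "radio", 5)                   # invented baseline/spend
```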

  16. TPMG Northern California appointments and advice call center.

    PubMed

    Conolly, Patricia; Levine, Leslie; Amaral, Debra J; Fireman, Bruce H; Driscoll, Tom

    2005-08-01

    Kaiser Permanente (KP) has been developing its use of call centers as a way to provide an expansive set of healthcare services to KP members efficiently and cost effectively. Since 1995, when The Permanente Medical Group (TPMG) began to consolidate primary care phone services into three physical call centers, the TPMG Appointments and Advice Call Center (AACC) has become the "front office" for primary care services across approximately 89% of Northern California. The AACC provides primary care phone service for approximately 3 million Kaiser Foundation Health Plan members in Northern California and responds to approximately 1 million calls per month across the three AACC sites. A database records each caller's identity as well as the day, time, and duration of each call; reason for calling; services provided to callers as a result of calls; and clinical outcomes of calls. We here summarize this information for the period 2000 through 2003.

  17. 75 FR 6435 - Sunshine Act Meeting Notice

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-09

    ... will answer questions from the news media following the Board meeting. Status: Open. Agenda Old... Efficiency Committee. For more information: Please call TVA Media Relations at (865) 632- 6000, Knoxville, Tennessee. People who plan to attend the meeting and have special needs should call (865) 632-6000. Anyone...

  18. Bandwidth-Efficient Communication through 225 MHz Ka-band Relay Satellite Channel

    NASA Technical Reports Server (NTRS)

    Downey, Joseph; Downey, James; Reinhart, Richard C.; Evans, Michael Alan; Mortensen, Dale John

    2016-01-01

    The communications and navigation space infrastructure of the National Aeronautics and Space Administration (NASA) consists of a constellation of relay satellites (called Tracking and Data Relay Satellites (TDRS)) and a global set of ground stations to receive and deliver data to researchers around the world from mission spacecraft throughout the solar system. Planning is underway to enhance and transform the infrastructure over the coming decade. Key to the upgrade will be the simultaneous and efficient use of relay transponders to minimize cost and operations while supporting science and exploration spacecraft. Efficient use of transponders necessitates bandwidth-efficient communications to best use and maximize data throughput within the allocated spectrum. Experiments conducted with NASA's Space Communication and Navigation (SCaN) Testbed on the International Space Station provide a unique opportunity to evaluate advanced communication techniques, such as bandwidth-efficient modulations, in an operational flight system. Demonstrations of these new techniques in realistic flight conditions provide critical experience and reduce the risk of using these techniques in future missions. Efficient use of spectrum is enabled by using high-order modulations coupled with efficient forward error correction codes. This paper presents a high-rate, bandwidth-efficient waveform operating over the 225 MHz Ka-band service of the TDRS System (TDRSS). The testing explores the application of Gaussian Minimum Shift Keying (GMSK), 2/4/8-phase shift keying (PSK) and 16/32-amplitude PSK (APSK), providing over three bits-per-second-per-Hertz (3 b/s/Hz) modulation combined with various LDPC encoding rates to maximize throughput. With a symbol rate of 200 Mbaud, coded data rates of 1000 Mbps were tested in the laboratory and up to 800 Mbps over the TDRS 225 MHz channel. This paper will present the high-rate waveform design, channel characteristics, performance results, compensation techniques for filtering and equalization, and architecture considerations going forward for efficient use of NASA's infrastructure.
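
    The quoted throughputs follow from the usual relation: information rate = symbol rate × bits per symbol × FEC code rate. The sketch below reproduces the 800 Mbps figure under an assumed rate-4/5 code with 32-APSK; the actual modulation/coding combinations tested may differ:

```python
import math

def coded_rate_mbps(symbol_rate_mbaud, modulation_order, code_rate):
    """Information rate = symbol rate * log2(M) * FEC code rate."""
    return symbol_rate_mbaud * math.log2(modulation_order) * code_rate

lab = coded_rate_mbps(200, 32, 1.0)     # uncoded-equivalent rate: 1000 Mbps
link = coded_rate_mbps(200, 32, 4 / 5)  # assumed rate-4/5 code: 800 Mbps
```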

  19. Bandwidth-Efficient Communication through 225 MHz Ka-band Relay Satellite Channel

    NASA Technical Reports Server (NTRS)

    Downey, Joseph A.; Downey, James M.; Reinhart, Richard C.; Evans, Michael A.; Mortensen, Dale J.

    2016-01-01

    The communications and navigation space infrastructure of the National Aeronautics and Space Administration (NASA) consists of a constellation of relay satellites (called Tracking and Data Relay Satellites (TDRS)) and a global set of ground stations to receive and deliver data to researchers around the world from mission spacecraft throughout the solar system. Planning is underway to enhance and transform the infrastructure over the coming decade. Key to the upgrade will be the simultaneous and efficient use of relay transponders to minimize cost and operations while supporting science and exploration spacecraft. Efficient use of transponders necessitates bandwidth efficient communications to best use and maximize data throughput within the allocated spectrum. Experiments conducted with NASA's Space Communication and Navigation (SCaN) Testbed on the International Space Station provides a unique opportunity to evaluate advanced communication techniques, such as bandwidth-efficient modulations, in an operational flight system. Demonstrations of these new techniques in realistic flight conditions provides critical experience and reduces the risk of using these techniques in future missions. Efficient use of spectrum is enabled by using high-order modulations coupled with efficient forward error correction codes. This paper presents a high-rate, bandwidth-efficient waveform operating over the 225 MHz Ka-band service of the TDRS System (TDRSS). The testing explores the application of Gaussian Minimum Shift Keying (GMSK), 2/4/8-phase shift keying (PSK) and 16/32- amplitude PSK (APSK) providing over three bits-per-second-per-Hertz (3 b/s/Hz) modulation combined with various LDPC encoding rates to maximize through- put. With a symbol rate of 200 M-band, coded data rates of 1000 Mbps were tested in the laboratory and up to 800 Mbps over the TDRS 225 MHz channel. 
This paper will present on the high-rate waveform design, channel characteristics, performance results, compensation techniques for filtering and equalization, and architecture considerations going forward for efficient use of NASA's infrastructure.
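
    The quoted rates follow from the usual relation data rate = symbol rate × bits per symbol × code rate; a small sketch with the abstract's numbers (the 0.9 code rate is an assumed value used only to illustrate spectral efficiency, not a figure from the paper):

```python
import math

def info_rate(symbol_rate_baud, bits_per_symbol, code_rate=1.0):
    """Information rate in bit/s for a linearly modulated carrier."""
    return symbol_rate_baud * bits_per_symbol * code_rate

# 32-APSK carries log2(32) = 5 bits per symbol.
bits_per_symbol = int(math.log2(32))

# 200 Mbaud at 5 bits/symbol gives the 1000 Mbps coded rate quoted above.
coded = info_rate(200e6, bits_per_symbol)

# Spectral efficiency of the information rate over the 225 MHz TDRSS
# channel, for an assumed code rate of 0.9.
spectral_eff = info_rate(200e6, bits_per_symbol, 0.9) / 225e6

print(coded / 1e6, round(spectral_eff, 2))  # prints: 1000.0 4.0
```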

  20. An index-based algorithm for fast on-line query processing of latent semantic analysis

    PubMed Central

    Li, Pohan; Wang, Wei

    2017-01-01

    Latent Semantic Analysis (LSA) is widely used for finding documents whose semantics are similar to a query of keywords. Although LSA yields promising similarity results, the existing LSA algorithms involve many unnecessary operations in similarity computation and candidate checking during on-line query processing, which is expensive in terms of time cost and cannot efficiently respond to query requests, especially when the dataset becomes large. In this paper, we study the efficiency problem of on-line query processing for LSA towards efficiently searching for the documents similar to a given query. We rewrite the similarity equation of LSA using an intermediate value called the partial similarity, which is stored in a designed index called the partial index. To reduce the search space, we give an approximate form of the similarity equation, and then develop an efficient algorithm for building the partial index, which skips the partial similarities lower than a given threshold θ. Based on the partial index, we develop an efficient algorithm called ILSA for supporting fast on-line query processing. The given query is transformed into a pseudo-document vector, and the similarities between the query and candidate documents are computed by accumulating the partial similarities obtained from the index nodes corresponding to non-zero entries in the pseudo-document vector. Compared to the LSA algorithm, ILSA reduces the time cost of on-line query processing by pruning candidate documents that are not promising and skipping operations that contribute little to similarity scores. Extensive experiments comparing against LSA demonstrate the efficiency and effectiveness of our proposed algorithm. PMID:28520747
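
    The partial-index scheme described above can be sketched as an inverted structure keyed by vector dimension: index building skips partial similarities below θ, and querying accumulates scores only over the query's non-zero entries. Identifiers and data are illustrative, not from the paper:

```python
from collections import defaultdict

def build_partial_index(doc_vectors, theta=0.0):
    """Map each dimension to (doc_id, partial similarity), skipping
    entries whose magnitude falls below the threshold theta."""
    index = defaultdict(list)
    for doc_id, vec in doc_vectors.items():
        for dim, weight in vec.items():
            if abs(weight) >= theta:
                index[dim].append((doc_id, weight))
    return index

def query(index, query_vec):
    """Accumulate partial similarities only over the query's non-zero dims."""
    scores = defaultdict(float)
    for dim, q_weight in query_vec.items():
        for doc_id, p_sim in index.get(dim, []):
            scores[doc_id] += q_weight * p_sim
    return dict(scores)

docs = {"d1": {0: 0.9, 1: 0.05}, "d2": {1: 0.8}}
idx = build_partial_index(docs, theta=0.1)   # skips d1's 0.05 entry
print(query(idx, {0: 1.0, 1: 0.5}))
```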

  1. First-order irreversible thermodynamic approach to a simple energy converter

    NASA Astrophysics Data System (ADS)

    Arias-Hernandez, L. A.; Angulo-Brown, F.; Paez-Hernandez, R. T.

    2008-01-01

    Several authors have shown that dissipative thermal cycle models based on finite-time thermodynamics exhibit loop-shaped curves of power output versus efficiency, as occurs with actual dissipative thermal engines. Within the context of first-order irreversible thermodynamics (FOIT), in this work we show that for an energy converter consisting of two coupled fluxes it is also possible to find loop-shaped curves of both power output and the so-called ecological function versus efficiency. In a previous work Stucki [J. W. Stucki, Eur. J. Biochem. 109, 269 (1980)] used a FOIT approach to describe the modes of thermodynamic performance of oxidative phosphorylation involved in adenosine triphosphate (ATP) synthesis within mitochondria. In that work the author did not use the mentioned loop-shaped curves and he proposed that oxidative phosphorylation operates in a steady state at both minimum entropy production and maximum efficiency simultaneously, by means of a conductance matching condition between extreme states of zero and infinite conductances, respectively. In the present work we show that all of Stucki’s results on the energetics of oxidative phosphorylation can be obtained without the so-called conductance matching condition. On the other hand, we also show that the minimum entropy production state implies both null power output and null efficiency, and therefore this state is not fulfilled by the performance of oxidative phosphorylation. Our results suggest that actual efficiency values of oxidative phosphorylation performance are better described by a mode of operation consisting of the simultaneous maximization of both the so-called ecological function and the efficiency.
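
    In the standard normalized form of such a two-flux linear converter (Kedem-Caplan/Stucki notation, assumed here rather than taken from the paper: q the degree of coupling, x the reduced force ratio), output power scales as -x(x+q) and efficiency is η = -x(x+q)/(qx+1); sweeping x between level flow (x = 0) and static head (x = -q) traces the loop-shaped power-efficiency curve:

```python
def efficiency(x, q):
    """Normalized efficiency of a linear two-flux converter:
    eta = -x(x+q) / (qx + 1)."""
    return -x * (x + q) / (q * x + 1.0)

def power(x, q):
    """Output power normalized by L22 * X2**2: p = -x(x+q)."""
    return -x * (x + q)

q = 0.95  # illustrative degree of coupling
# Sweep the operating region between level flow (x = 0) and static head
# (x = -q); the (efficiency, power) pairs trace the loop-shaped curve.
loop = [(efficiency(x, q), power(x, q))
        for x in (i * -q / 100 for i in range(101))]
print(max(e for e, _ in loop))  # maximum efficiency along the loop
```

    Both endpoints give zero power and zero efficiency, which is why power-versus-efficiency closes into a loop rather than a single-valued curve.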

  2. An index-based algorithm for fast on-line query processing of latent semantic analysis.

    PubMed

    Zhang, Mingxi; Li, Pohan; Wang, Wei

    2017-01-01

    Latent Semantic Analysis (LSA) is widely used for finding the documents whose semantic is similar to the query of keywords. Although LSA yield promising similar results, the existing LSA algorithms involve lots of unnecessary operations in similarity computation and candidate check during on-line query processing, which is expensive in terms of time cost and cannot efficiently response the query request especially when the dataset becomes large. In this paper, we study the efficiency problem of on-line query processing for LSA towards efficiently searching the similar documents to a given query. We rewrite the similarity equation of LSA combined with an intermediate value called partial similarity that is stored in a designed index called partial index. For reducing the searching space, we give an approximate form of similarity equation, and then develop an efficient algorithm for building partial index, which skips the partial similarities lower than a given threshold θ. Based on partial index, we develop an efficient algorithm called ILSA for supporting fast on-line query processing. The given query is transformed into a pseudo document vector, and the similarities between query and candidate documents are computed by accumulating the partial similarities obtained from the index nodes corresponds to non-zero entries in the pseudo document vector. Compared to the LSA algorithm, ILSA reduces the time cost of on-line query processing by pruning the candidate documents that are not promising and skipping the operations that make little contribution to similarity scores. Extensive experiments through comparison with LSA have been done, which demonstrate the efficiency and effectiveness of our proposed algorithm.

  3. Call transmission efficiency in native and invasive anurans: competing hypotheses of divergence in acoustic signals.

    PubMed

    Llusia, Diego; Gómez, Miguel; Penna, Mario; Márquez, Rafael

    2013-01-01

    Invasive species are a leading cause of the current biodiversity decline, and hence examining the major traits favouring invasion is a key and long-standing goal of invasion biology. Despite the prominent role of advertisement calls in sexual selection and reproduction, very little attention has been paid to the features of acoustic communication of invasive species in nonindigenous habitats and their potential impacts on native species. Here we compare for the first time the transmission efficiency of the advertisement calls of native and invasive species, searching for competitive advantages for acoustic communication and reproduction of introduced taxa, and providing insights into competing hypotheses on the evolutionary divergence of acoustic signals: acoustic adaptation vs. morphological constraints. Using sound propagation experiments, we measured the attenuation rates of pure tones (0.2-5 kHz) and playback calls (Lithobates catesbeianus and Pelophylax perezi) across four distances (1, 2, 4, and 8 m) and over two substrates (water and soil) in seven Iberian localities. All factors considered (signal type, distance, substrate, and locality) affected the transmission efficiency of acoustic signals, which was maximized with lower-frequency sounds, shorter distances, and over the water surface. Despite being broadcast in nonindigenous habitats, the advertisement calls of invasive L. catesbeianus propagated more efficiently than those of the native species, in both aquatic and terrestrial substrates, and in most of the study sites. This implies the absence of an optimal relationship between native environments and the propagation of acoustic signals in anurans, in contrast to what is predicted by the acoustic adaptation hypothesis, and it might render these vertebrates particularly vulnerable to intrusion by invasive species producing low-frequency signals, such as L. catesbeianus. 
Our findings suggest that mechanisms optimizing sound transmission in native habitat can play a less significant role than other selective forces or biological constraints in evolutionary design of anuran acoustic signals.
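
    Attenuation rates of the kind measured above are typically expressed as excess attenuation: the level drop between two distances minus the 6 dB per doubling expected from spherical spreading alone. A minimal sketch with hypothetical levels (not the study's data):

```python
import math

def excess_attenuation_db(level_d1, level_d2, d1, d2):
    """Attenuation beyond spherical spreading between distances d1 and d2.

    Spherical spreading alone predicts a 20*log10(d2/d1) dB drop
    (6 dB per doubling of distance); anything extra is excess attenuation
    from the substrate, vegetation, and the atmosphere."""
    measured_loss = level_d1 - level_d2          # measured dB drop
    spreading_loss = 20 * math.log10(d2 / d1)    # geometric part
    return measured_loss - spreading_loss

# Hypothetical levels (dB SPL) at two of the experimental distances, 1 and 2 m.
print(excess_attenuation_db(80.0, 68.0, 1, 2))
```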

  4. Call Transmission Efficiency in Native and Invasive Anurans: Competing Hypotheses of Divergence in Acoustic Signals

    PubMed Central

    Llusia, Diego; Gómez, Miguel; Penna, Mario; Márquez, Rafael

    2013-01-01

    Invasive species are a leading cause of the current biodiversity decline, and hence examining the major traits favouring invasion is a key and long-standing goal of invasion biology. Despite the prominent role of advertisement calls in sexual selection and reproduction, very little attention has been paid to the features of acoustic communication of invasive species in nonindigenous habitats and their potential impacts on native species. Here we compare for the first time the transmission efficiency of the advertisement calls of native and invasive species, searching for competitive advantages for acoustic communication and reproduction of introduced taxa, and providing insights into competing hypotheses on the evolutionary divergence of acoustic signals: acoustic adaptation vs. morphological constraints. Using sound propagation experiments, we measured the attenuation rates of pure tones (0.2–5 kHz) and playback calls (Lithobates catesbeianus and Pelophylax perezi) across four distances (1, 2, 4, and 8 m) and over two substrates (water and soil) in seven Iberian localities. All factors considered (signal type, distance, substrate, and locality) affected the transmission efficiency of acoustic signals, which was maximized with lower-frequency sounds, shorter distances, and over the water surface. Despite being broadcast in nonindigenous habitats, the advertisement calls of invasive L. catesbeianus propagated more efficiently than those of the native species, in both aquatic and terrestrial substrates, and in most of the study sites. This implies the absence of an optimal relationship between native environments and the propagation of acoustic signals in anurans, in contrast to what is predicted by the acoustic adaptation hypothesis, and it might render these vertebrates particularly vulnerable to intrusion by invasive species producing low-frequency signals, such as L. catesbeianus. 
Our findings suggest that mechanisms optimizing sound transmission in native habitat can play a less significant role than other selective forces or biological constraints in evolutionary design of anuran acoustic signals. PMID:24155940

  5. Implementation and quality assessment of a pharmacy services call center for outpatient pharmacies and specialty pharmacy services in an academic health system.

    PubMed

    Rim, Matthew H; Thomas, Karen C; Chandramouli, Jane; Barrus, Stephanie A; Nickman, Nancy A

    2018-05-15

    The implementation and quality assessment of a pharmacy services call center (PSCC) for outpatient pharmacies and specialty pharmacy services within an academic health system are described. Prolonged wait times in outpatient pharmacies or hold times on the phone affect the ability of pharmacies to capture and retain prescriptions. To support outpatient pharmacy operations and improve quality, a PSCC was developed to centralize handling of all outpatient and specialty pharmacy calls. The purpose of the PSCC was to improve the quality of pharmacy telephone services by (1) decreasing the call abandonment rate, (2) improving the speed of answer, (3) increasing first-call resolution, (4) centralizing all specialty pharmacy and prior authorization calls, (5) increasing labor efficiency and pharmacy capacities, (6) implementing a quality evaluation program, and (7) improving workplace satisfaction and retention of outpatient pharmacy staff. The PSCC centralized pharmacy calls from 9 pharmacy locations, 2 outpatient clinics, and a specialty pharmacy. Since implementation, the PSCC has achieved and maintained program goals, including improved abandonment rate, speed of answer, and first-call resolution. A centralized 24-7 support line for specialty pharmacy patients was also successfully established. A quality calibration program was implemented to ensure service quality and excellent patient experience. Additional ongoing evaluations measure the impact of the PSCC on improving workplace satisfaction and retention of outpatient pharmacy staff. The design and implementation of the PSCC have significantly improved the health system's patient experiences, efficiency, and quality. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
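
    The quality metrics tracked by the PSCC (abandonment rate, speed of answer, first-call resolution) can be computed directly from a call log; a minimal sketch with illustrative field names and records, not the health system's actual schema:

```python
def call_center_metrics(calls):
    """Compute abandonment rate, average speed of answer (ASA), and
    first-call resolution (FCR) from a list of call records.
    Field names are illustrative, not taken from the article."""
    offered = len(calls)
    abandoned = sum(1 for c in calls if c["abandoned"])
    answered = [c for c in calls if not c["abandoned"]]
    asa = sum(c["wait_s"] for c in answered) / len(answered)
    fcr = sum(1 for c in answered if c["resolved_first_call"]) / len(answered)
    return {"abandonment_rate": abandoned / offered,
            "asa_seconds": asa,
            "fcr": fcr}

calls = [
    {"abandoned": False, "wait_s": 12, "resolved_first_call": True},
    {"abandoned": False, "wait_s": 30, "resolved_first_call": False},
    {"abandoned": True,  "wait_s": 95, "resolved_first_call": False},
    {"abandoned": False, "wait_s": 18, "resolved_first_call": True},
]
print(call_center_metrics(calls))
```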

  6. A loophole-free Bell's inequality experiment

    NASA Astrophysics Data System (ADS)

    Kwiat, Paul G.; Steinberg, Aephraim M.; Chiao, Raymond Y.; Eberhard, Philippe H.

    1994-05-01

    The proof of Nature's nonlocality through Bell-type experiments is a topic of longstanding interest. Nevertheless, no experiments performed thus far have avoided the so-called 'detection loophole,' arising from low detector efficiencies and angular-correlation difficulties. In fact, most, if not all, of the systems employed to date can never close this loophole, even with perfect detectors. In addition, another loophole involving the non-rapid, non-random switching of various parameter settings exists in all past experiments. We discuss a proposal for a potentially loophole-free Bell's inequality experiment. The source of the EPR-correlated pairs consists of two simultaneously-pumped type-2 phase-matched nonlinear crystals and a polarizing beam splitter. The feasibility of such a scheme with current detector technology seems high, and will be discussed. We also present a single-crystal version, motivated by other work presented at this conference. In a separate experiment, we have measured the absolute detection efficiency and time response of four single-photon detectors. The highest observed efficiencies were 70.7 ± 1.9% (at 633 nm, with a device from Rockwell International) and 76.4 ± 2.3% (at 702 nm, with an EG&G counting module). Possible efficiencies as high as 90% were implied. The EG&G devices displayed sub-nanosecond time resolution.
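
    For context, the detector-efficiency thresholds behind the detection loophole are standard results not stated in the abstract: a CHSH test with maximally entangled pairs requires an overall detection efficiency

```latex
\eta > \frac{2}{1+\sqrt{2}} \approx 82.8\%,
```

    whereas Eberhard showed that non-maximally entangled states relax the requirement to η > 2/3 ≈ 66.7%, which is why the 70-76% single-photon efficiencies measured here (and the implied 90%) bear directly on the feasibility claim.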

  7. Geochemical modeling of trivalent chromium migration in saline-sodic soil during Lasagna process: impact on soil physicochemical properties.

    PubMed

    Lukman, Salihu; Bukhari, Alaadin; Al-Malack, Muhammad H; Mu'azu, Nuhu D; Essa, Mohammed H

    2014-01-01

    Trivalent Cr is one of the heavy metals that are difficult to remove from soil in electrokinetic studies because of its geochemical properties. High-buffering-capacity soil is expected to reduce the mobility of trivalent Cr and subsequently reduce the remedial efficiency, thereby complicating the remediation process. In this study, geochemical modeling and migration of trivalent Cr in saline-sodic soil (high buffering capacity and alkaline) during integrated electrokinetics-adsorption remediation, called the Lasagna process, were investigated. The remedial efficiency of trivalent Cr in addition to the impacts of the Lasagna process on the physicochemical properties of the soil was studied. Box-Behnken design was used to study the interaction effects of voltage gradient, initial contaminant concentration, and polarity reversal rate on the soil pH, electroosmotic volume, soil electrical conductivity, current, and remedial efficiency of trivalent Cr in saline-sodic soil that was artificially spiked with Cr, Cu, Cd, Pb, Hg, phenol, and kerosene. Overall desirability of 0.715 was attained at the following optimal conditions: voltage gradient 0.36 V/cm; polarity reversal rate 17.63 hr; soil pH 10.0. Under these conditions, the expected trivalent Cr remedial efficiency is 64.75%.
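
    The Box-Behnken design named above places experimental runs at the midpoints of the edges of the factor cube plus centre replicates; for the study's three factors (voltage gradient, initial concentration, polarity reversal rate) that is 12 edge runs plus centre points. A minimal sketch in coded units (the number of centre runs is an assumption):

```python
from itertools import combinations, product

def box_behnken(n_factors, n_center=3):
    """Edge-midpoint runs of a Box-Behnken design: every pair of factors
    takes all +/-1 combinations while the remaining factors sit at 0."""
    runs = []
    for pair in combinations(range(n_factors), 2):
        for levels in product((-1, 1), repeat=2):
            run = [0] * n_factors
            run[pair[0]], run[pair[1]] = levels
            runs.append(run)
    runs.extend([0] * n_factors for _ in range(n_center))  # centre points
    return runs

design = box_behnken(3)
print(len(design))  # 3 factor pairs x 4 sign combos + 3 centre runs = 15
```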

  8. Geochemical Modeling of Trivalent Chromium Migration in Saline-Sodic Soil during Lasagna Process: Impact on Soil Physicochemical Properties

    PubMed Central

    Bukhari, Alaadin; Al-Malack, Muhammad H.; Mu'azu, Nuhu D.; Essa, Mohammed H.

    2014-01-01

    Trivalent Cr is one of the heavy metals that are difficult to remove from soil in electrokinetic studies because of its geochemical properties. High-buffering-capacity soil is expected to reduce the mobility of trivalent Cr and subsequently reduce the remedial efficiency, thereby complicating the remediation process. In this study, geochemical modeling and migration of trivalent Cr in saline-sodic soil (high buffering capacity and alkaline) during integrated electrokinetics-adsorption remediation, called the Lasagna process, were investigated. The remedial efficiency of trivalent Cr in addition to the impacts of the Lasagna process on the physicochemical properties of the soil was studied. Box-Behnken design was used to study the interaction effects of voltage gradient, initial contaminant concentration, and polarity reversal rate on the soil pH, electroosmotic volume, soil electrical conductivity, current, and remedial efficiency of trivalent Cr in saline-sodic soil that was artificially spiked with Cr, Cu, Cd, Pb, Hg, phenol, and kerosene. Overall desirability of 0.715 was attained at the following optimal conditions: voltage gradient 0.36 V/cm; polarity reversal rate 17.63 hr; soil pH 10.0. Under these conditions, the expected trivalent Cr remedial efficiency is 64.75%. PMID:25152905

  9. A loophole-free Bell's inequality experiment

    NASA Technical Reports Server (NTRS)

    Kwiat, Paul G.; Steinberg, Aephraim M.; Chiao, Raymond Y.; Eberhard, Philippe H.

    1994-01-01

    The proof of Nature's nonlocality through Bell-type experiments is a topic of longstanding interest. Nevertheless, no experiments performed thus far have avoided the so-called 'detection loophole,' arising from low detector efficiencies and angular-correlation difficulties. In fact, most, if not all, of the systems employed to date can never close this loophole, even with perfect detectors. In addition, another loophole involving the non-rapid, non-random switching of various parameter settings exists in all past experiments. We discuss a proposal for a potentially loophole-free Bell's inequality experiment. The source of the EPR-correlated pairs consists of two simultaneously-pumped type-2 phase-matched nonlinear crystals and a polarizing beam splitter. The feasibility of such a scheme with current detector technology seems high, and will be discussed. We also present a single-crystal version, motivated by other work presented at this conference. In a separate experiment, we have measured the absolute detection efficiency and time response of four single-photon detectors. The highest observed efficiencies were 70.7 ± 1.9% (at 633 nm, with a device from Rockwell International) and 76.4 ± 2.3% (at 702 nm, with an EG&G counting module). Possible efficiencies as high as 90% were implied. The EG&G devices displayed sub-nanosecond time resolution.

  10. Optical analysis of a curved-slats fixed-mirror solar concentrator by a forward ray-tracing procedure.

    PubMed

    Pujol Nadal, Ramon; Martínez Moll, Víctor

    2013-10-20

    Fixed-mirror solar concentrators (FMSCs) use a static reflector and a moving receiver. They are easily installable on building roofs. However, for high-concentration factors, several flat mirrors would be needed. If curved mirrors are used instead, high-concentration levels can be achieved, and such a solar concentrator is called a curved-slats fixed-mirror solar concentrator (CSFMSC), on which little information is available. Herein, a methodology is proposed to characterize the CSFMSC using 3D ray-tracing tools. The CSFMSC shows better optical characteristics than the FMSC, as it needs fewer reflector segments for achieving the same concentration and optical efficiency.
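
    A forward ray-tracing procedure propagates rays from the source and bounces them off each mirror slat toward the receiver; the elementary operation is specular reflection of the ray direction about the local surface normal, r' = r - 2(r·n)n. A minimal 3D sketch of that step (not the authors' code):

```python
def reflect(ray, normal):
    """Reflect direction vector `ray` about the unit surface `normal`:
    r' = r - 2 (r . n) n, the core step of a forward ray tracer."""
    d = sum(r * n for r, n in zip(ray, normal))
    return tuple(r - 2 * d * n for r, n in zip(ray, normal))

# A ray travelling straight down onto a horizontal mirror bounces straight up.
print(reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0)))  # (0.0, 0.0, 1.0)
```

    A full simulation repeats this at every mirror intersection and tallies the rays reaching the absorber to estimate optical efficiency.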

  11. Fast and accurate de novo genome assembly from long uncorrected reads

    PubMed Central

    Vaser, Robert; Sović, Ivan; Nagarajan, Niranjan

    2017-01-01

    The assembly of long reads from Pacific Biosciences and Oxford Nanopore Technologies typically requires resource-intensive error-correction and consensus-generation steps to obtain high-quality assemblies. We show that the error-correction step can be omitted and that high-quality consensus sequences can be generated efficiently with a SIMD-accelerated, partial-order alignment–based, stand-alone consensus module called Racon. Based on tests with PacBio and Oxford Nanopore data sets, we show that Racon coupled with miniasm enables consensus genomes with similar or better quality than state-of-the-art methods while being an order of magnitude faster. PMID:28100585

  12. The use of an automated interactive voice response system to manage medication identification calls to a poison center.

    PubMed

    Krenzelok, Edward P; Mrvos, Rita

    2009-05-01

    In 2007, medication identification requests (MIRs) accounted for 26.2% of all calls to U.S. poison centers. MIRs are documented with minimal information, but they still require an inordinate amount of work by specialists in poison information (SPI). An analysis was undertaken to identify options to reduce the impact of MIRs on both human and financial resources. All MIRs (2003-2007) to a certified regional poison information center were analyzed to determine call patterns and staffing. The data were used to justify an efficient and cost-effective solution. MIRs represented 42.3% of the 2007 call volume. Optimal staffing would require hiring an additional four full-time equivalent SPI. An interactive voice response (IVR) system was developed to respond to the MIRs. The IVR was used to develop the Medication Identification System that allowed the diversion of up to 50% of the MIRs, enhancing surge capacity and allowing specialists to address the more emergent poison exposure calls. This technology is an entirely voice-activated response call management system that collects zip code, age, gender and drug data and stores all responses as .csv files for reporting purposes. The query bank includes the 200 most common MIRs, and the system features text-to-voice synthesis that allows easy modification of the drug identification menu. Callers always have the option of engaging a SPI at any time during the IVR call flow. The IVR is an efficient and effective alternative that creates better staff utilization.

  13. Deposition Nucleation or Pore Condensation and Freezing?

    NASA Astrophysics Data System (ADS)

    David, Robert O.; Mahrt, Fabian; Marcolli, Claudia; Fahrni, Jonas; Brühwiler, Dominik; Lohmann, Ulrike; Kanji, Zamin A.

    2017-04-01

    Ice nucleation plays an important role in moderating Earth's climate and precipitation formation. Over the last century of research, several mechanisms for the nucleation of ice have been identified. Of the known mechanisms for ice nucleation, only deposition nucleation occurs below water saturation. Deposition nucleation is defined as the formation of ice from supersaturated water vapor on an insoluble particle without the prior formation of liquid. However, recent work has found that the efficiency of so-called deposition nucleation shows a dependence on the homogeneous freezing temperature of water even though no liquid phase is presumed to be present. Additionally, the ability of certain particles to nucleate ice more efficiently after being pre-cooled (pre-activation) raises questions on the true mechanism when ice nucleation occurs below water saturation. In an attempt to explain the dependence of the efficiency of so-called deposition nucleation on the onset of homogeneous freezing of liquid water, pore condensation and freezing has been proposed. Pore condensation and freezing suggests that the liquid phase can exist under sub-saturated conditions with respect to liquid in narrow confinements or pores due to the inverse Kelvin effect. Once the liquid phase condenses, it is capable of nucleating ice either homogeneously or heterogeneously. The role of pore condensation and freezing is assessed in the Zurich Ice Nucleation Chamber, a continuous flow diffusion chamber, using spherical nonporous and mesoporous silica particles. The mesoporous silica particles have a well-defined particle size range of 400 to 600 nm with discrete pore sizes of 2.5, 2.8, 3.5, and 3.8 nm. Experiments conducted between 218 K and 238 K show that so-called deposition nucleation only occurs below the homogeneous freezing temperature of water and is highly dependent on the presence of pores and their size. 
The results strongly support pore condensation and freezing, questioning the role of deposition nucleation as an ice nucleation pathway.
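
    The inverse Kelvin effect invoked here follows from the Kelvin equation: for a concave water meniscus of effective radius r inside a pore, the equilibrium vapour pressure is depressed, so liquid can condense below bulk water saturation. In its standard form (symbols as conventionally defined, not from the abstract):

```latex
\ln\frac{p}{p_0} = -\frac{2\gamma V_m \cos\theta}{r\,R\,T}
```

    with γ the surface tension of water, V_m its molar volume, θ the contact angle, R the gas constant, and T the temperature; for pores of a few nanometres, p/p0 drops substantially below 1, which is why the 2.5-3.8 nm pores can fill with liquid, and subsequently freeze, under ice-nucleation conditions that are subsaturated with respect to water.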

  14. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
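
    The role of the sketching matrix can be illustrated in a few lines: a k × m random matrix compresses m observations into k sketched values while approximately preserving norms and inner products (the Johnson-Lindenstrauss flavour of randomized linear algebra). This is a generic pure-Python illustration of the idea, not the RGA/PCGA implementation:

```python
import random

def sketch_matrix(k, m, seed=0):
    """k x m random Gaussian sketch, scaled so E[||S d||^2] = ||d||^2."""
    rng = random.Random(seed)
    scale = 1.0 / k ** 0.5
    return [[rng.gauss(0.0, 1.0) * scale for _ in range(m)] for _ in range(k)]

def apply_sketch(S, data):
    """Reduce an m-vector of observations to k sketched values."""
    return [sum(s_ij * d_j for s_ij, d_j in zip(row, data)) for row in S]

m, k = 10_000, 50                      # many observations, small sketch
data = [(-1.0) ** j / (j + 1) for j in range(m)]
S = sketch_matrix(k, m)
sketched = apply_sketch(S, data)
print(len(data), "->", len(sketched))  # 10000 -> 50
```

    Downstream computations then work with the k sketched values, so cost scales with the retained information content rather than with m.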

  15. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share the common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
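
    The scalability analysis mentioned above rests on Amdahl's law, speedup = 1 / ((1 - p) + p/n) for a parallel fraction p on n cores; a 12-fold gain on 12 cores implies a near-fully parallelized workload. A quick sketch (the 0.95 fraction is an illustrative assumption, not a figure from the paper):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n)."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / cores)

# Even a highly parallel (95%) workload saturates well below the core count.
print(round(amdahl_speedup(0.95, 12), 2))   # 7.74
for n in (12, 48, 1_000_000):
    print(n, round(amdahl_speedup(0.95, n), 2))  # limit is 1/(1-p) = 20x
```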

  16. 76 FR 54748 - State Energy Advisory Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-02

    ... DEPARTMENT OF ENERGY Energy Efficiency and Renewable Energy State Energy Advisory Board AGENCY: Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of open teleconference. SUMMARY: This notice announces a teleconference call of the State Energy Advisory Board (STEAB). The...

  17. 76 FR 16763 - State Energy Advisory Board (STEAB)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-25

    ... DEPARTMENT OF ENERGY Energy Efficiency and Renewable Energy State Energy Advisory Board (STEAB) AGENCY: Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of open teleconference. SUMMARY: This notice announces a teleconference call of the State Energy Advisory Board (STEAB...

  18. 76 FR 60012 - State Energy Advisory Board (STEAB)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-28

    ... DEPARTMENT OF ENERGY Energy Efficiency and Renewable Energy State Energy Advisory Board (STEAB) AGENCY: Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of open teleconference. SUMMARY: This notice announces a teleconference call of the State Energy Advisory Board (STEAB...

  19. 76 FR 25317 - State Energy Advisory Board (STEAB)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-04

    ... DEPARTMENT OF ENERGY Energy Efficiency and Renewable Energy State Energy Advisory Board (STEAB) AGENCY: Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of Open Teleconference. SUMMARY: This notice announces a teleconference call of the State Energy Advisory Board (STEAB...

  20. 76 FR 75876 - State Energy Advisory Board (STEAB)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-05

    ... DEPARTMENT OF ENERGY Energy Efficiency and Renewable Energy State Energy Advisory Board (STEAB) AGENCY: Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of open teleconference. SUMMARY: This notice announces a teleconference call of the State Energy Advisory Board (STEAB...

  1. Global warming alters sound transmission: differential impact on the prey detection ability of echolocating bats

    PubMed Central

    Luo, Jinhong; Koselj, Klemen; Zsebők, Sándor; Siemers, Björn M.; Goerlitz, Holger R.

    2014-01-01

    Climate change impacts the biogeography and phenology of plants and animals, yet the underlying mechanisms are little known. Here, we present a functional link between rising temperature and the prey detection ability of echolocating bats. The maximum distance for echo-based prey detection is physically determined by sound attenuation. Attenuation is more pronounced for high-frequency sound, such as echolocation, and is a nonlinear function of both call frequency and ambient temperature. Hence, the prey detection ability, and thus possibly the foraging efficiency, of echolocating bats is susceptible to rising temperatures through climate change. Using present-day climate data and projected temperature rises, we modelled this effect for the entire range of bat call frequencies and climate zones around the globe. We show that depending on call frequency, the prey detection volume of bats will either decrease or increase: species calling above a crossover frequency will lose and species emitting lower frequencies will gain prey detection volume, with crossover frequency and magnitude depending on the local climatic conditions. Within local species assemblages, this may cause a change in community composition. Global warming can thus directly affect the prey detection ability of individual bats and indirectly their interspecific interactions with competitors and prey. PMID:24335559
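
    The link between atmospheric absorption and detection distance can be made concrete with a toy sonar-equation model: the echo's round-trip loss is spherical spreading plus absorption, 2(20 log10 d + αd) dB, and the detection range is the distance at which this loss exhausts a fixed budget. All numbers below are illustrative, not from the paper:

```python
import math

def detection_range(alpha_db_per_m, budget_db, lo=0.01, hi=100.0):
    """Distance d (m) where round-trip spreading + absorption loss,
    2 * (20*log10(d) + alpha*d), equals the available budget (dB).
    Solved by bisection; a toy model, not the authors' model."""
    def loss(d):
        return 2.0 * (20.0 * math.log10(d) + alpha_db_per_m * d)
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if loss(mid) < budget_db:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Higher absorption (a higher call frequency, or warmer and more humid air)
# shrinks the maximum prey-detection distance.
for alpha in (0.5, 1.0, 2.0):
    print(alpha, round(detection_range(alpha, budget_db=60.0), 2))
```

    Because α depends nonlinearly on both frequency and temperature, warming can push a given species' range in either direction, which is the crossover behaviour the study models.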

  2. Global warming alters sound transmission: differential impact on the prey detection ability of echolocating bats.

    PubMed

    Luo, Jinhong; Koselj, Klemen; Zsebok, Sándor; Siemers, Björn M; Goerlitz, Holger R

    2014-02-06

    Climate change impacts the biogeography and phenology of plants and animals, yet the underlying mechanisms are little known. Here, we present a functional link between rising temperature and the prey detection ability of echolocating bats. The maximum distance for echo-based prey detection is physically determined by sound attenuation. Attenuation is more pronounced for high-frequency sound, such as echolocation, and is a nonlinear function of both call frequency and ambient temperature. Hence, the prey detection ability, and thus possibly the foraging efficiency, of echolocating bats is susceptible to rising temperatures through climate change. Using present-day climate data and projected temperature rises, we modelled this effect for the entire range of bat call frequencies and climate zones around the globe. We show that depending on call frequency, the prey detection volume of bats will either decrease or increase: species calling above a crossover frequency will lose and species emitting lower frequencies will gain prey detection volume, with crossover frequency and magnitude depending on the local climatic conditions. Within local species assemblages, this may cause a change in community composition. Global warming can thus directly affect the prey detection ability of individual bats and indirectly their interspecific interactions with competitors and prey.
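
    The core argument above, that attenuation grows nonlinearly with call frequency so warming shifts the maximum detection distance, can be sketched numerically. The attenuation coefficient below is purely illustrative and monotone in temperature, so it reproduces only the losing, high-frequency branch of the paper's crossover result; the actual study rests on real atmospheric-absorption models, not this toy formula.

```python
import math

def attenuation_db_per_m(freq_khz, temp_c):
    # Purely illustrative coefficient: absorption grows roughly with
    # frequency squared and rises with temperature here. This is NOT
    # the atmospheric-absorption model used in the study.
    return 1e-4 * freq_khz**2 * (1.0 + 0.01 * (temp_c - 20.0))

def detection_distance(freq_khz, temp_c, budget_db=80.0):
    # Maximum one-way distance at which the echo still clears the
    # hearing budget: two-way spherical spreading loss plus two-way
    # atmospheric absorption. The loss is monotone in distance, so
    # bisection finds where it exhausts the budget.
    def loss(d):
        return 40.0 * math.log10(d) + 2.0 * attenuation_db_per_m(freq_khz, temp_c) * d
    lo, hi = 0.1, 1000.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if loss(mid) < budget_db:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for f in (25, 50, 100):  # call frequency, kHz
    print(f"{f:3d} kHz: {detection_distance(f, 15.0):5.1f} m at 15 C, "
          f"{detection_distance(f, 25.0):5.1f} m at 25 C")
```

With this toy coefficient, detection distance shrinks with warming at every frequency; in the paper, the non-monotone temperature dependence of real atmospheric absorption is what produces the crossover frequency separating winners from losers.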

  3. Allele-specific copy-number discovery from whole-genome and whole-exome sequencing

    PubMed Central

    Wang, WeiBo; Wang, Wei; Sun, Wei; Crowley, James J.; Szatkiewicz, Jin P.

    2015-01-01

    Copy-number variants (CNVs) are a major form of genetic variation and a risk factor for various human diseases, so it is crucial to accurately detect and characterize them. It is conceivable that allele-specific reads from high-throughput sequencing data could be leveraged to both enhance CNV detection and produce allele-specific copy number (ASCN) calls. Although statistical methods have been developed to detect CNVs using whole-genome sequence (WGS) and/or whole-exome sequence (WES) data, information from allele-specific read counts has not yet been adequately exploited. In this paper, we develop an integrated method, called AS-GENSENG, which incorporates allele-specific read counts in CNV detection and estimates ASCN using either WGS or WES data. To evaluate the performance of AS-GENSENG, we conducted extensive simulations, generated empirical data using existing WGS and WES data sets and validated predicted CNVs using an independent methodology. We conclude that AS-GENSENG not only predicts accurate ASCN calls but also improves the accuracy of total copy number calls, owing to its unique ability to exploit information from both total and allele-specific read counts while accounting for various experimental biases in sequence data. Our novel, user-friendly and computationally efficient method and a complete analytic protocol are freely available at https://sourceforge.net/projects/asgenseng/. PMID:25883151

  4. A three-parameter model for classifying anurans into four genera based on advertisement calls.

    PubMed

    Gingras, Bruno; Fitch, William Tecumseh

    2013-01-01

    The vocalizations of anurans are innate in structure and may therefore contain indicators of phylogenetic history. Thus, advertisement calls of species which are more closely related phylogenetically are predicted to be more similar than those of distant species. This hypothesis was evaluated by comparing several widely used machine-learning algorithms. Recordings of advertisement calls from 142 species belonging to four genera were analyzed. A logistic regression model, using mean values for dominant frequency, coefficient of variation of root-mean square energy, and spectral flux, correctly classified advertisement calls with regard to genus with an accuracy above 70%. Similar accuracy rates were obtained using these parameters with a support vector machine model, a K-nearest neighbor algorithm, and a multivariate Gaussian distribution classifier, whereas a Gaussian mixture model performed slightly worse. In contrast, models based on mel-frequency cepstral coefficients did not fare as well. Comparable accuracy levels were obtained on out-of-sample recordings from 52 of the 142 original species. The results suggest that a combination of low-level acoustic attributes is sufficient to discriminate efficiently between the vocalizations of these four genera, thus supporting the initial premise and validating the use of high-throughput algorithms on animal vocalizations to evaluate phylogenetic hypotheses.
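
    One of the simpler baselines the study compares, a per-class multivariate Gaussian classifier over the three call features, can be sketched as follows. The genus labels, feature values and cluster spreads below are invented stand-ins for illustration, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for the three call features used in the study:
# dominant frequency (kHz), CV of RMS energy, spectral flux.
genus_means = {
    "GenusA": (1.2, 0.30, 0.10),
    "GenusB": (3.0, 0.15, 0.25),
    "GenusC": (2.0, 0.45, 0.05),
    "GenusD": (0.8, 0.20, 0.18),
}
X, y = [], []
for label, mu in genus_means.items():
    X.append(rng.normal(mu, (0.2, 0.04, 0.03), size=(50, 3)))
    y += [label] * 50
X = np.vstack(X)

# Fit one diagonal Gaussian per genus; classify by highest log-likelihood.
params = {}
for label in genus_means:
    sub = X[[i for i, g in enumerate(y) if g == label]]
    params[label] = (sub.mean(axis=0), sub.var(axis=0))

def classify(x):
    def loglik(mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return max(params, key=lambda g: loglik(*params[g]))

acc = np.mean([classify(x) == g for x, g in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```

The point of the sketch is only that a handful of low-level acoustic attributes already separates well-clustered classes; the study's out-of-sample accuracy figures come from real recordings and several other classifiers.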

  5. Social Communication and Vocal Recognition in Free-Ranging Rhesus Monkeys

    NASA Astrophysics Data System (ADS)

    Rendall, Christopher Andrew

    Kinship and individual identity are key determinants of primate sociality, and the capacity for vocal recognition of individuals and kin is hypothesized to be an important adaptation facilitating intra-group social communication. Research was conducted on adult female rhesus monkeys on Cayo Santiago, Puerto Rico to test this hypothesis for three acoustically distinct calls characterized by varying selective pressures on communicating identity: coos (contact calls), grunts (close range social calls), and noisy screams (agonistic recruitment calls). Vocalization playback experiments confirmed a capacity for both individual and kin recognition of coos, but not screams (grunts were not tested). Acoustic analyses, using traditional spectrographic methods as well as linear predictive coding techniques, indicated that coos (but not grunts or screams) were highly distinctive, and that the effects of vocal tract filtering--formants --contributed more to statistical discriminations of both individuals and kin groups than did temporal or laryngeal source features. Formants were identified from very short (23 ms.) segments of coos and were stable within calls, indicating that formant cues to individual and kin identity were available throughout a call. This aspect of formant cues is predicted to be an especially important design feature for signaling identity efficiently in complex acoustic environments. Results of playback experiments involving manipulated coo stimuli provided preliminary perceptual support for the statistical inference that formant cues take precedence in facilitating vocal recognition. The similarity of formants among female kin suggested a mechanism for the development of matrilineal vocal signatures from the genetic and environmental determinants of vocal tract morphology shared among relatives. 
The fact that screams--calls strongly expected to communicate identity--were neither individually distinctive nor recognized suggested that their acoustic structure and role in signaling identity might be constrained by functional or morphological design requirements associated with their role in signaling submission.

  6. The Plasmodium falciparum pseudoprotease SERA5 regulates the kinetics and efficiency of malaria parasite egress from host erythrocytes

    PubMed Central

    Hackett, Fiona; Atid, Jonathan; Tan, Michele Ser Ying

    2017-01-01

    Egress of the malaria parasite Plasmodium falciparum from its host red blood cell is a rapid, highly regulated event that is essential for maintenance and completion of the parasite life cycle. Egress is protease-dependent and is temporally associated with extensive proteolytic modification of parasite proteins, including a family of papain-like proteins called SERA that are expressed in the parasite parasitophorous vacuole. Previous work has shown that the most abundant SERA, SERA5, plays an important but non-enzymatic role in asexual blood stages. SERA5 is extensively proteolytically processed by a parasite serine protease called SUB1 as well as an unidentified cysteine protease just prior to egress. However, neither the function of SERA5 nor the role of its processing is known. Here we show that conditional disruption of the SERA5 gene, or of both the SERA5 and related SERA4 genes simultaneously, results in a dramatic egress and replication defect characterised by premature host cell rupture and the failure of daughter merozoites to efficiently disseminate, instead being transiently retained within residual bounding membranes. SERA5 is not required for poration (permeabilization) or vesiculation of the host cell membrane at egress, but the premature rupture phenotype requires the activity of a parasite or host cell cysteine protease. Complementation of SERA5 null parasites by ectopic expression of wild-type SERA5 reversed the egress defect, whereas expression of a SERA5 mutant refractory to processing failed to rescue the phenotype. Our findings implicate SERA5 as an important regulator of the kinetics and efficiency of egress and suggest that proteolytic modification is required for SERA5 function. In addition, our study reveals that efficient egress requires tight control of the timing of membrane rupture. PMID:28683142

  7. Promoted decomposition of NOx in automotive diesel-like exhausts by electro-catalytic honeycombs.

    PubMed

    Huang, Ta-Jen; Chiang, De-Yi; Shih, Chi; Lee, Cheng-Chin; Mao, Chih-Wei; Wang, Bo-Chung

    2015-03-17

    NO and NO2 (collectively called NOx) are major air pollutants in automotive emissions. More effective and easier treatments of NOx than those achieved by the present methods can offer better protection of human health and higher fuel efficiency that can reduce greenhouse gas emissions. However, currently commercialized technologies for automotive NOx emission control cannot effectively treat diesel-like exhausts with high NOx concentrations. Thus, exhaust gas recirculation (EGR) has been used extensively, which reduces fuel efficiency and increases particulate emission considerably. Our results show that the electro-catalytic honeycomb (ECH) promotes the decomposition of NOx to nitrogen and oxygen, without consuming reagents or other resources. NOx can be converted to nitrogen and oxygen almost completely. The ECHs are shown to effectively remove NOx from gasoline-fueled diesel-like exhausts. A very high NO concentration is preferred in the engine exhaust, especially during engine cold-start. Promoted NOx decomposition (PND) technology for real-world automotive applications is established in this study by using the ECH. With PND, EGR is no longer needed. Diesel-like engines can therefore achieve superior fuel efficiency, and all major automotive pollutants can be easily treated due to high concentration of oxygen in the diesel-like exhausts, leading to zero pollution.

  8. Energy efficiency analysis and implementation of AES on an FPGA

    NASA Astrophysics Data System (ADS)

    Kenney, David

    The Advanced Encryption Standard (AES) was developed by Joan Daemen and Vincent Rijmen and endorsed by the National Institute of Standards and Technology in 2001. It was designed to replace the aging Data Encryption Standard (DES) and be useful for a wide range of applications with varying throughput, area, power dissipation and energy consumption requirements. Field Programmable Gate Arrays (FPGAs) are flexible and reconfigurable integrated circuits that are useful for many different applications including the implementation of AES. Though they are highly flexible, FPGAs are often less efficient than Application Specific Integrated Circuits (ASICs); they tend to operate slower, take up more space and dissipate more power. There have been many FPGA AES implementations that focus on obtaining high throughput or low area usage, but very little research has been done in the area of low-power or energy-efficient FPGA-based AES; in fact, it is rare for estimates on power dissipation to be made at all. This thesis presents a methodology to evaluate the energy efficiency of FPGA based AES designs and proposes a novel FPGA AES implementation which is highly flexible and energy efficient. The proposed methodology is implemented as part of a novel scripting tool, the AES Energy Analyzer, which is able to fully characterize the power dissipation and energy efficiency of FPGA based AES designs. Additionally, this thesis introduces a new FPGA power reduction technique called Opportunistic Combinational Operand Gating (OCOG) which is used in the proposed energy efficient implementation. The AES Energy Analyzer was able to estimate the power dissipation and energy efficiency of the proposed AES design during its most commonly performed operations. It was found that the proposed implementation consumes less energy per operation than any previous FPGA based AES implementations that included power estimations. 
Finally, the use of Opportunistic Combinational Operand Gating on an AES cipher was found to reduce its dynamic power consumption by up to 17% when compared to an identical design that did not employ the technique.
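
    The headline metric, energy per operation, reduces to average power divided by operation throughput. A minimal helper illustrating that arithmetic; the example figures are invented, not measurements from the thesis:

```python
def energy_per_op_nj(power_mw, throughput_mbps, block_bits=128):
    """Energy in nanojoules to encrypt one block at the given average
    power and throughput. AES processes 128-bit blocks."""
    blocks_per_s = throughput_mbps * 1e6 / block_bits
    return power_mw * 1e-3 / blocks_per_s * 1e9  # W / (ops/s) -> J -> nJ

# Invented example figures: 120 mW average power at 500 Mbps throughput.
print(energy_per_op_nj(power_mw=120.0, throughput_mbps=500.0), "nJ per block")
```

This is why a slower but lower-power design can still win on energy efficiency: the metric trades power against time per operation.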

  9. High call volume at poison control centers: identification and implications for communication

    PubMed Central

    CARAVATI, E. M.; LATIMER, S.; REBLIN, M.; BENNETT, H. K. W.; CUMMINS, M. R.; CROUCH, B. I.; ELLINGTON, L.

    2016-01-01

    Context: High volume surges in health care are uncommon and unpredictable events. Their impact on health system performance and capacity is difficult to study. Objectives: To identify time periods that exhibited very busy conditions at a poison control center and to determine whether cases and communication during high volume call periods are different from cases during low volume periods. Methods: Call data from a US poison control center over twelve consecutive months was collected via a call logger and an electronic case database (Toxicall®). Variables evaluated for high call volume conditions were: (1) call duration; (2) number of cases; and (3) number of calls per staff member per 30 minute period. Statistical analyses identified peak periods as busier than 99% of all other 30 minute time periods and low volume periods as slower than 70% of all other 30 minute periods. Case and communication characteristics of high volume and low volume calls were compared using logistic regression. Results: A total of 65,364 incoming calls occurred over 12 months. One hundred high call volume and 4885 low call volume 30 minute periods were identified. High volume periods were more common between 1500 and 2300 hours and during the winter months. Coded verbal communication data were evaluated for 42 high volume and 296 low volume calls. The mean (standard deviation) call length of these calls during high volume and low volume periods was 3 minutes 27 seconds (1 minute 46 seconds) and 3 minutes 57 seconds (2 minutes 11 seconds), respectively. Regression analyses revealed a trend for fewer overall verbal statements and fewer staff questions during peak periods, but no other significant differences for staff-caller communication behaviors were found. Conclusion: Peak activity for poison center call volume can be identified by statistical modeling. Calls during high volume periods were similar to low volume calls. 
Communication was more concise yet staff was able to maintain a good rapport with callers during busy call periods. This approach allows evaluation of poison exposure call characteristics and communication during high volume periods. PMID:22889059

  10. High call volume at poison control centers: identification and implications for communication.

    PubMed

    Caravati, E M; Latimer, S; Reblin, M; Bennett, H K W; Cummins, M R; Crouch, B I; Ellington, L

    2012-09-01

    High volume surges in health care are uncommon and unpredictable events. Their impact on health system performance and capacity is difficult to study. To identify time periods that exhibited very busy conditions at a poison control center and to determine whether cases and communication during high volume call periods are different from cases during low volume periods. Call data from a US poison control center over twelve consecutive months was collected via a call logger and an electronic case database (Toxicall®). Variables evaluated for high call volume conditions were: (1) call duration; (2) number of cases; and (3) number of calls per staff member per 30 minute period. Statistical analyses identified peak periods as busier than 99% of all other 30 minute time periods and low volume periods as slower than 70% of all other 30 minute periods. Case and communication characteristics of high volume and low volume calls were compared using logistic regression. A total of 65,364 incoming calls occurred over 12 months. One hundred high call volume and 4885 low call volume 30 minute periods were identified. High volume periods were more common between 1500 and 2300 hours and during the winter months. Coded verbal communication data were evaluated for 42 high volume and 296 low volume calls. The mean (standard deviation) call length of these calls during high volume and low volume periods was 3 minutes 27 seconds (1 minute 46 seconds) and 3 minutes 57 seconds (2 minutes 11 seconds), respectively. Regression analyses revealed a trend for fewer overall verbal statements and fewer staff questions during peak periods, but no other significant differences for staff-caller communication behaviors were found. Peak activity for poison center call volume can be identified by statistical modeling. Calls during high volume periods were similar to low volume calls. Communication was more concise yet staff was able to maintain a good rapport with callers during busy call periods. 
This approach allows evaluation of poison exposure call characteristics and communication during high volume periods.
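
    The thresholding step described in both records, flagging 30-minute periods busier than 99% of all periods as high volume and those slower than 70% of periods as low volume, can be sketched on synthetic call timestamps (the call counts and surge below are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic call timestamps (seconds) over one week, plus an evening surge.
base = rng.uniform(0, 7 * 86400, size=4000)
surge = rng.normal(3 * 86400 + 19 * 3600, 1800, size=400)  # day 3, ~19:00
calls = np.concatenate([base, surge])

# Bin into 30-minute periods and count calls per period.
period = 1800
counts = np.bincount((calls // period).astype(int), minlength=7 * 48)

# High volume: busier than 99% of periods. Low volume: slower than 70%
# of periods, i.e. in the bottom 30% of counts.
hi_thresh = np.percentile(counts, 99)
lo_thresh = np.percentile(counts, 30)
high = np.flatnonzero(counts > hi_thresh)
low = np.flatnonzero(counts < lo_thresh)
print(len(high), "high-volume periods;", len(low), "low-volume periods")
```

On real data the same binning was applied per staff member and combined with call duration; this sketch shows only the percentile-threshold idea.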

  11. Contextuality and Cultural Texts: A Case Study of Workplace Learning in Call Centres

    ERIC Educational Resources Information Center

    Crouch, Margaret

    2006-01-01

    Purpose: The paper seeks to show the contextualisation of call centres as a work-specific ethnographically and culturally based community, which, in turn, influences pedagogical practices through the encoding and decoding of cultural texts in relation to two logics: cost-efficiency and customer-orientation. Design/methodology/approach: The paper…

  12. 76 FR 36103 - State Energy Advisory Board (STEAB)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-21

    ... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy State Energy Advisory Board... Open Teleconference. SUMMARY: This notice announces an open teleconference call of the State Energy... Energy Efficiency and Renewable Energy, 1000 Independence Ave, SW., Washington DC, 20585 or telephone...

  13. 77 FR 43067 - State Energy Advisory Board

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-23

    ... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy State Energy Advisory Board AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of open teleconference. SUMMARY: This notice announces a teleconference call of the State Energy Advisory Board (STEAB...

  14. Efficient Kriging Algorithms

    NASA Technical Reports Server (NTRS)

    Memarsadeghi, Nargess

    2011-01-01

    More efficient versions of an interpolation method, called kriging, have been introduced in order to reduce its traditionally high computational cost. Written in C++, these approaches were tested on both synthetic and real data. Kriging is a best linear unbiased estimator and suitable for interpolation of scattered data points. Kriging has long been used in the geostatistics and mining communities, but is now being researched for use in the image fusion of remotely sensed data. This allows a combination of data from various locations to be used to fill in any missing data from any single location. To arrive at the faster algorithms, a sparse SYMMLQ iterative solver, covariance tapering, Fast Multipole Methods (FMM), and nearest neighbor searching techniques were used. These implementations were used when the coefficient matrix in the linear system is symmetric, but not necessarily positive-definite.
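
    For reference, the baseline that the faster algorithms improve on is a dense ordinary-kriging solve. A minimal NumPy sketch with an assumed Gaussian covariance model (not the article's optimized C++ implementation); the dense solve costs O(n^3), which is what tapering, FMM and iterative solvers attack:

```python
import numpy as np

def ordinary_kriging(xs, ys, x0, length=1.0, sill=1.0):
    """Predict at x0 from scattered 1-D samples (xs, ys) by ordinary
    kriging with a Gaussian covariance model (assumed for illustration)."""
    xs = np.asarray(xs, float)
    ys = np.asarray(ys, float)
    n = len(xs)

    def cov(a, b):
        d = np.subtract.outer(a, b)
        return sill * np.exp(-(d / length) ** 2)

    # Ordinary-kriging system: sample covariances bordered by a Lagrange
    # multiplier row/column enforcing that the weights sum to one
    # (the unbiasedness constraint).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(xs, xs)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(xs, np.array([x0]))[:, 0]
    w = np.linalg.solve(A, b)
    return w[:n] @ ys

xs = [0.0, 1.0, 2.5, 4.0]
ys = [1.0, 2.0, 0.5, 1.5]
# With no nugget effect, kriging interpolates exactly at sample locations.
print(ordinary_kriging(xs, ys, 1.0))
```

The same bordered system generalizes directly to 2-D and 3-D sample coordinates; only the distance computation inside `cov` changes.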

  15. Flexo-photovoltaic effect

    NASA Astrophysics Data System (ADS)

    Yang, Ming-Min; Kim, Dong Jik; Alexe, Marin

    2018-05-01

    It is highly desirable to discover photovoltaic mechanisms that enable enhanced efficiency of solar cells. Here we report that the bulk photovoltaic effect, which is free from the thermodynamic Shockley-Queisser limit but usually manifested only in noncentrosymmetric (piezoelectric or ferroelectric) materials, can be realized in any semiconductor, including silicon, by mediation of flexoelectric effect. We used either an atomic force microscope or a micrometer-scale indentation system to introduce strain gradients, thus creating very large photovoltaic currents from centrosymmetric single crystals of strontium titanate, titanium dioxide, and silicon. This strain gradient–induced bulk photovoltaic effect, which we call the flexo-photovoltaic effect, functions in the absence of a p-n junction. This finding may extend present solar cell technologies by boosting the solar energy conversion efficiency from a wide pool of established semiconductors.

  16. Characteristics of blue organic light emitting diodes with different thick emitting layers

    NASA Astrophysics Data System (ADS)

    Li, Chong; Tsuboi, Taiju; Huang, Wei

    2014-08-01

    We fabricated blue organic light emitting diodes (called blue OLEDs) with emitting layer (EML) of diphenylanthracene derivative 9,10-di(2-naphthyl)anthracene (ADN) doped with blue-emitting DSA-ph (1-4-di-[4-(N,N-di-phenyl)amino]styryl-benzene) to investigate how the thickness of EML and hole injection layer (HIL) influences the electroluminescence characteristics. The driving voltage was observed to increase with increasing EML thickness from 15 nm to 70 nm. The maximum external quantum efficiency of 6.2% and the maximum current efficiency of 14 cd/A were obtained from the OLED with 35 nm thick EML and 75 nm thick HIL. High luminance of 120,000 cd/m2 was obtained at 7.5 V from OLED with 15 nm thick EML.

  17. Enabling technologies and green processes in cyclodextrin chemistry

    PubMed Central

    Caporaso, Marina; Jicsinszky, Laszlo; Martina, Katia

    2016-01-01

    Summary The design of efficient synthetic green strategies for the selective modification of cyclodextrins (CDs) is still a challenging task. Outstanding results have been achieved in recent years by means of so-called enabling technologies, such as microwaves, ultrasound and ball mills, that have become irreplaceable tools in the synthesis of CD derivatives. Several examples of sonochemical selective modification of native α-, β- and γ-CDs have been reported including heterogeneous phase Pd- and Cu-catalysed hydrogenations and couplings. Microwave irradiation has emerged as the technique of choice for the production of highly substituted CD derivatives, CD grafted materials and polymers. Mechanochemical methods have successfully furnished greener, solvent-free syntheses and efficient complexation, while flow microreactors may well improve the repeatability and optimization of critical synthetic protocols. PMID:26977187

  18. Multi-partitioning for ADI-schemes on message passing architectures

    NASA Technical Reports Server (NTRS)

    Vanderwijngaart, Rob F.

    1994-01-01

    A kind of discrete-operator splitting called Alternating Direction Implicit (ADI) has been found to be useful in simulating fluid flow problems. In particular, it is being used to study the effects of hot exhaust jets from high performance aircraft on landing surfaces. Decomposition techniques that minimize load imbalance and message-passing frequency are described. Three strategies that are investigated for implementing the NAS Scalar Penta-diagonal Parallel Benchmark (SP) are transposition, pipelined Gaussian elimination, and multipartitioning. The multipartitioning strategy, which was used on Ethernet, was found to be the most efficient, although it was considered only a moderate success because of Ethernet's limited communication properties. The efficiency derived largely from the coarse granularity of the strategy, which reduced latencies and allowed overlap of communication and computation.
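
    Each implicit sweep of an ADI step reduces to independent banded solves along one grid direction (pentadiagonal in the SP benchmark; tridiagonal in the simplest case). A sketch of the serial per-line kernel, the Thomas algorithm, which is the recurrence that the pipelined and multipartition strategies must distribute across processors:

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main- and super-diagonals
    a, b, c (a[0] and c[-1] are unused) and right-hand side d, via the
    standard forward-elimination / back-substitution sweep."""
    n = len(d)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):            # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Example: one implicit 1-D diffusion step, (I - r*Laplacian) x = d.
n, r = 8, 0.4
a = np.full(n, -r); b = np.full(n, 1 + 2 * r); c = np.full(n, -r)
d = np.ones(n)
x = thomas(a, b, c, d)
print(np.round(x, 4))
```

Because the forward sweep carries a dependency along the whole line, distributing it is what forces the transposition, pipelining or multipartitioning choices compared in the paper.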

  19. Cell optoporation with a sub-15 fs and a 250-fs laser

    NASA Astrophysics Data System (ADS)

    Breunig, Hans Georg; Batista, Ana; Uchugonova, Aisada; König, Karsten

    2016-06-01

    We employed two commercially available femtosecond lasers, a Ti:sapphire and a ytterbium-based oscillator, to directly compare, from a user's practical point of view and in one common experimental setup, the efficiencies of transient laser-induced cell membrane permeabilization, i.e., of so-called optoporation. The experimental setup consisted of a modified multiphoton laser-scanning microscope employing high-NA focusing optics. An automatic cell irradiation procedure was realized with custom-made software that identified cell positions and controlled relevant hardware components. The Ti:sapphire and ytterbium-based oscillators generated broadband sub-15-fs pulses around 800 nm and 250-fs pulses at 1044 nm, respectively. A higher optoporation rate and posttreatment viability were observed for the shorter fs pulses, confirming the importance of multiphoton effects for efficient optoporation.


  20. Practical Efficiency of Photovoltaic Panel Used for Solar Vehicles

    NASA Astrophysics Data System (ADS)

    Koyuncu, T.

    2017-08-01

    In this experimental investigation, the practical efficiency of the semi-flexible monocrystalline silicon solar panels used for a solar-powered car called “Firat Force” and a solar-powered minibus called “Commagene” was determined. Firat Force has 6 solar PV modules, a maintenance-free long-life gel battery pack and a regenerative brushless DC electric motor; Commagene has 12 solar PV modules, a maintenance-free long-life gel battery pack and a regenerative brushless DC electric motor. In addition, both solar vehicles have an MPPT (maximum power point tracker), an ECU (electronic control unit), differential, instrument panel, steering system, brake system, brake and gas pedals, mechanical equipment, chassis and frame. These two solar vehicles were used for people transportation in Adiyaman city, Turkey, during a one-year test (June 2010-May 2011). As a result, the practical efficiency of the semi-flexible monocrystalline silicon solar panel used for Firat Force and Commagene was determined as 13%, despite the efficiency of 18% (at 1000 W/m2 and 25 °C) given by the producer company. Besides, the total efficiency (from PV panels to vehicle wheel) of the system was also determined as 9%.

  1. Estimating means and variances: The comparative efficiency of composite and grab samples.

    PubMed

    Brumelle, S; Nemetz, P; Casey, D

    1984-03-01

    This paper compares the efficiencies of two sampling techniques for estimating a population mean and variance. One procedure, called grab sampling, consists of collecting and analyzing one sample per period. The second procedure, called composite sampling, collects n samples per period, which are then pooled and analyzed as a single sample. We review the well-known fact that composite sampling provides a superior estimate of the mean. However, it is somewhat surprising that composite sampling does not always generate a more efficient estimate of the variance. For populations with platykurtic distributions, grab sampling gives a more efficient estimate of the variance, whereas composite sampling is better for leptokurtic distributions. These conditions on kurtosis can be related to peakedness and skewness. For example, a necessary condition for composite sampling to provide a more efficient estimate of the variance is that the population density function evaluated at the mean (i.e. f(μ)) be greater than [Formula: see text]. If [Formula: see text], then a grab sample is more efficient. In spite of this result, however, composite sampling does provide a smaller estimate of standard error than does grab sampling in the context of estimating population means.
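
    The uncontested part of the result, that compositing improves the estimate of the mean, is easy to verify by simulation. The distribution and sample sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pool, periods, trials = 8, 12, 2000

# Invented skewed "pollutant concentration" population.
pop = lambda size: rng.lognormal(0.0, 0.5, size)

grab_means, comp_means = [], []
for _ in range(trials):
    # Grab sampling: one analysed sample per period.
    grab_means.append(pop(periods).mean())
    # Composite sampling: n samples pooled per period; the single
    # analysed measurement is the pool's average.
    comp_means.append(pop((periods, n_pool)).mean(axis=1).mean())

print("grab SE:", np.std(grab_means), " composite SE:", np.std(comp_means))
```

The composite estimator's standard error shrinks by roughly sqrt(n_pool), matching the paper's claim for the mean; the subtle kurtosis-dependent behaviour concerns the variance estimator, not the mean.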

  2. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
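
    The saving behind the alternating-directions idea can be illustrated with a separable correlation matrix: when C is the Kronecker product of small 1-D correlation matrices in the three coordinate directions, its decomposition can be assembled from the 1-D decompositions without ever factoring the full matrix. The grid sizes and Gaussian correlation model here are invented for illustration:

```python
import numpy as np

def corr_1d(n, length):
    # 1-D Gaussian correlation matrix on a regular grid of n points.
    d = np.subtract.outer(np.arange(n), np.arange(n))
    return np.exp(-(d / length) ** 2)

# Small 1-D correlation matrices in the three coordinate directions.
Cx, Cy, Cz = corr_1d(6, 2.0), corr_1d(5, 1.5), corr_1d(4, 1.0)
C = np.kron(np.kron(Cx, Cy), Cz)   # 120 x 120 separable correlation

def sqrt_sym(M):
    # Symmetric square root from the eigendecomposition of a small factor.
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.sqrt(np.maximum(w, 0.0))) @ V.T

# Mixed-product property: the square root of the big matrix is the
# Kronecker product of the small square roots -- no 120x120 eigensolve.
S = np.kron(np.kron(sqrt_sym(Cx), sqrt_sym(Cy)), sqrt_sym(Cz))
print(np.allclose(S @ S.T, C))  # True
```

Three eigensolves of size 6, 5 and 4 replace one of size 120; the paper adds EOF truncation at low resolution and spline interpolation on top of this basic structure.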

  3. Wind increases leaf water use efficiency.

    PubMed

    Schymanski, Stanislaus J; Or, Dani

    2016-07-01

    A widespread perception is that, with increasing wind speed, transpiration from plant leaves increases. However, evidence suggests that increasing wind speed enhances carbon dioxide (CO2 ) uptake while reducing transpiration because of more efficient convective cooling (under high solar radiation loads). We provide theoretical and experimental evidence that leaf water use efficiency (WUE, carbon uptake per water transpired) commonly increases with increasing wind speed, thus improving plants' ability to conserve water during photosynthesis. Our leaf-scale analysis suggests that the observed global decrease in near-surface wind speeds could have reduced WUE at a magnitude similar to the increase in WUE attributed to global rise in atmospheric CO2 concentrations. However, there is indication that the effect of long-term trends in wind speed on leaf gas exchange may be compensated for by the concurrent reduction in mean leaf sizes. These unintuitive feedbacks between wind, leaf size and water use efficiency call for re-evaluation of the role of wind in plant water relations and potential re-interpretation of temporal and geographic trends in leaf sizes. © 2015 The Authors. Plant, Cell & Environment published by John Wiley & Sons Ltd.

  4. Hybrid polylingual object model: an efficient and seamless integration of Java and native components on the Dalvik virtual machine.

    PubMed

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed to have low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. A metadata injection mechanism is designed to support automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled automatically in the HPO-Dalvik virtual machine. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, while requiring no JNI bridging code.

  5. Efficient Hardware Implementation of the Lightweight Block Encryption Algorithm LEA

    PubMed Central

    Lee, Donggeon; Kim, Dong-Chan; Kwon, Daesung; Kim, Howon

    2014-01-01

    Recently, with the spread of resource-constrained devices such as smartphones and other smart devices, the computing environment has been changing. Because our daily life is deeply intertwined with ubiquitous networks, the importance of security is growing. A lightweight encryption algorithm is essential for secure communication between these kinds of resource-constrained devices, and many researchers have been investigating this field. Recently, a lightweight block cipher called LEA was proposed. LEA was originally targeted at efficient implementation on microprocessors: it is fast when implemented in software and has a small memory footprint. To suit recent processors, all required calculations use 32-bit wide operations. In addition, the algorithm is composed not of complex S-box-like structures but of simple addition, rotation, and XOR (ARX) operations. To the best of our knowledge, this paper is the first report on a comprehensive hardware implementation of LEA. We present various hardware structures and their implementation results according to key sizes. Even though LEA was originally targeted at software efficiency, it also shows high efficiency when implemented as hardware. PMID:24406859
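    The ARX structure can be made concrete with a single LEA-style encryption round on four 32-bit words. The rotation amounts (9, 5, 3) follow the published LEA specification, but this is only an illustrative sketch: the key schedule is omitted and any round keys fed in are arbitrary. The inverse round is included to show that the structure is trivially invertible.

```python
MASK = 0xFFFFFFFF

def rol(x, r):
    """32-bit rotate left."""
    return ((x << r) | (x >> (32 - r))) & MASK

def ror(x, r):
    """32-bit rotate right."""
    return ((x >> r) | (x << (32 - r))) & MASK

def lea_round(x, rk):
    """One LEA-style encryption round on four 32-bit words.
    rk holds six 32-bit round-key words (key schedule omitted here)."""
    x0, x1, x2, x3 = x
    return (rol(((x0 ^ rk[0]) + (x1 ^ rk[1])) & MASK, 9),
            ror(((x1 ^ rk[2]) + (x2 ^ rk[3])) & MASK, 5),
            ror(((x2 ^ rk[4]) + (x3 ^ rk[5])) & MASK, 3),
            x0)

def lea_round_inv(y, rk):
    """Inverse of lea_round: recovers the four input words."""
    y0, y1, y2, y3 = y
    x0 = y3
    x1 = ((ror(y0, 9) - (x0 ^ rk[0])) & MASK) ^ rk[1]
    x2 = ((rol(y1, 5) - (x1 ^ rk[2])) & MASK) ^ rk[3]
    x3 = ((rol(y2, 3) - (x2 ^ rk[4])) & MASK) ^ rk[5]
    return (x0, x1, x2, x3)
```

    Because every operation is a 32-bit add, rotate, or XOR, the round maps directly onto simple hardware datapaths, which is consistent with the hardware efficiency reported above.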

  6. Multi-Year Program Plan FY'09-FY'15 Solid-State Lighting Research and Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2009-03-01

    President Obama's energy and environment agenda calls for deployment of 'the Cheapest, Cleanest, Fastest Energy Source - Energy Efficiency.' The Department of Energy's (DOE) Office of Energy Efficiency and Renewable Energy (EERE) plays a critical role in advancing the President's agenda by helping the United States advance toward an energy-efficient future. Lighting in the United States is projected to consume nearly 10 quads of primary energy by 2012. A nation-wide move toward solid-state lighting (SSL) for general illumination could save a total of 32.5 quads of primary energy between 2012 and 2027. No other lighting technology offers the DOE and our nation so much potential to save energy and enhance the quality of our built environment. The DOE has set forth the following mission statement for the SSL R&D Portfolio: Guided by a Government-industry partnership, the mission is to create a new, U.S.-led market for high-efficiency, general illumination products through the advancement of semiconductor technologies, to save energy, reduce costs and enhance the quality of the lighted environment.

  7. BLESS 2: accurate, memory-efficient and fast error correction method.

    PubMed

    Heo, Yun; Ramachandran, Anand; Hwu, Wen-Mei; Ma, Jian; Chen, Deming

    2016-08-01

    The most important features of error correction tools for sequencing data are accuracy, memory efficiency and fast runtime. The previous version of BLESS was highly memory-efficient and accurate, but it was too slow to handle reads from large genomes. We have developed a new version of BLESS to improve runtime and accuracy while maintaining a small memory usage. The new version, called BLESS 2, has an error correction algorithm that is more accurate than BLESS, and the algorithm has been parallelized using hybrid MPI and OpenMP programming. BLESS 2 was compared with five top-performing tools, and it was found to be the fastest when executed on two computing nodes using MPI, with each node containing twelve cores. Also, BLESS 2 showed at least 11% higher gain while retaining the memory efficiency of the previous version for large genomes. Availability and implementation: freely available at https://sourceforge.net/projects/bless-ec. Contact: dchen@illinois.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
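    The k-mer-spectrum reasoning behind error correctors in this family can be sketched in a few lines. This is a generic illustration of the idea (k-mers seen often across the reads are "solid"; read positions covered by no solid k-mer are error candidates), not BLESS 2's actual algorithm or data structures:

```python
from collections import Counter

def kmers(seq, k):
    """All overlapping k-mers of a read."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def solid_kmers(reads, k, min_count=2):
    """k-mers occurring at least min_count times are assumed error-free."""
    counts = Counter(km for r in reads for km in kmers(r, k))
    return {km for km, c in counts.items() if c >= min_count}

def error_candidates(read, solid, k):
    """Positions of a read covered by no solid k-mer."""
    covered = [False] * len(read)
    for i in range(len(read) - k + 1):
        if read[i:i + k] in solid:
            for j in range(i, i + k):
                covered[j] = True
    return [i for i, c in enumerate(covered) if not c]

# Toy data: three identical correct reads plus one read with an error at position 4
reads = ["ACGTACGT"] * 3 + ["ACGTTCGT"]
solid = solid_kmers(reads, 4)
```

    In the toy data, all k-mers spanning the erroneous base are absent from the solid set, so the suspect region of the fourth read is flagged.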

  8. Generation of µW level plateau harmonics at high repetition rate.

    PubMed

    Hädrich, S; Krebs, M; Rothhardt, J; Carstens, H; Demmler, S; Limpert, J; Tünnermann, A

    2011-09-26

    The process of high harmonic generation allows for coherent transfer of infrared laser light to the extreme ultraviolet spectral range, opening up a variety of applications. The low conversion efficiency of this process calls for optimized, high-repetition-rate, intense ultrashort-pulse lasers. Here we present state-of-the-art fiber laser systems for the generation of high harmonics at up to 1 MHz repetition rate. We performed measurements of the average power with a calibrated spectrometer and achieved µW-level harmonics between 45 nm and 61 nm (H23-H17) at a repetition rate of 50 kHz. Additionally, we show the potential of few-cycle pulses at high average power and repetition rate, which may enable water-window harmonics at unprecedented repetition rates. © 2011 Optical Society of America

  9. A "caliper" type of controlled-source, frequency-domain, electromagnetic sounding method

    NASA Astrophysics Data System (ADS)

    Zhang, W.; Lin, J.; Zhou, F.; Liu, C.; Chen, J.; Xue, K.; Liu, L.; Wu, Y.

    2011-12-01

    We developed a special measurement scheme for the controlled-source, frequency-domain electromagnetic sounding method, called the "caliper" scheme, which improves both resolution and efficiency. It is based on our array electromagnetic system DPS-I, which consists of 53 channels and can cover a 2500 m survey line in one arrangement. The method is applied in several steps. First, a rough measurement is carried out using a large dynamic range but sparse frequencies. The ratio of adjacent frequencies is set to 2 or 4, and the frequency points cover the entire band required by the geological environment, distributed almost equidistantly on a logarithmic axis. Receiver arrays are arranged along one or more survey lines to measure the amplitude and phase of the electromagnetic field components simultaneously. After all frequency points of the rough measurement are acquired, the data in each sub-receiver are transmitted to the controller, the apparent resistivity and phase are calculated quickly in the field, and their pseudosection diagrams are drawn. From these pseudosections we can roughly locate the anomalous zone and determine the frequency band required for its detailed investigation. Next, a measurement with a high density of frequencies in this band is carried out, which we call the "detailed measurement". The ratio of adjacent frequencies is now m, which lies between 1 and 2; the exact value of m depends on how much detail the user requires. After the detailed measurement is finished, pseudosection diagrams of apparent resistivity and phase are drawn in the same way as in the first step. We can then see more detailed information about the anomalous zone and decide whether further measurement is necessary; if so, the second step can be repeated with a smaller m until the resolution is sufficient to distinguish the target.
    Simulations show that a high density of frequencies does improve resolution, but the improvement is limited: adding frequencies helps little once they are already dense enough. The method therefore improves both efficiency and the ability to distinguish the anomalous body. Because this measurement mode, consisting of a rough measurement followed by a detailed measurement, resembles measuring a length with a caliper, we call it the "caliper" type. It is accurate and fast, can be applied to frequency-domain sounding methods such as controlled-source audio-frequency magnetotellurics (CSAMT), and can also be extended to the spectral induced polarization method. With this measurement scheme, high resolution and high efficiency can be expected.
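    The two-pass frequency plan is easy to sketch: a rough sweep with an adjacent-frequency ratio of 2, then a detailed sweep over the locked band with a ratio m between 1 and 2. The band limits and the value m = 2**0.25 below are illustrative choices, not values from the text:

```python
def sweep(f_start, f_stop, ratio):
    """Geometrically spaced frequencies: each point is `ratio` times the last,
    so the points are equidistant on a logarithmic axis."""
    freqs = [f_start]
    while freqs[-1] * ratio <= f_stop:
        freqs.append(freqs[-1] * ratio)
    return freqs

# Rough measurement: sparse points (ratio 2) over the whole band
rough = sweep(1.0, 8192.0, 2.0)

# Detailed measurement: dense points (m = 2**0.25) over the anomalous band
detailed = sweep(64.0, 1024.0, 2 ** 0.25)
```

    Shrinking m toward 1 in further passes corresponds to closing the "caliper" jaws for a finer reading over the anomaly.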

  10. Development of solar concentrators for high-power solar-pumped lasers.

    PubMed

    Dinh, T H; Ohkubo, T; Yabe, T

    2014-04-20

    We have developed unique solar concentrators for solar-pumped solid-state lasers to improve both efficiency and laser output power. Natural sunlight is collected by a primary concentrator, a 2 m × 2 m Fresnel lens, and confined by a cone-shaped hybrid concentrator. The solar power is coupled into a laser rod by a coolant-surrounded cylinder called a liquid light-guide lens (LLGL). The performance of the cylindrical LLGL has been characterized analytically and experimentally. Since a 14 mm diameter LLGL generates efficient and uniform pumping along a Nd:YAG rod 6 mm in diameter and 100 mm in length, 120 W cw laser output is achieved with a beam quality factor M2 of 137 and an overall slope efficiency of 4.3%. The collection efficiency is 30.0 W/m2, which is 1.5 times larger than the previous record. The overall conversion efficiency is more than 3.2%, which is comparable to a commercial lamp-pumped solid-state laser. The concept of the light-guide lens can be applied to concentrator photovoltaics and other solar energy optics.
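    The reported collection efficiency can be checked by arithmetic: 120 W of laser output divided by the 4 m2 lens aperture gives 30 W/m2. The conversion figure additionally requires a direct solar irradiance value, which the abstract does not state; an assumed ~940 W/m2 reproduces the reported >3.2%:

```python
area = 2.0 * 2.0          # Fresnel lens aperture, m^2
laser_out = 120.0         # cw laser output, W

collection = laser_out / area                 # W of laser output per m^2 of aperture

irradiance = 940.0        # assumed direct solar irradiance, W/m^2 (not stated in the abstract)
conversion = laser_out / (area * irradiance)  # overall sunlight-to-laser efficiency
```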

  11. A novel bioinformatics method for efficient knowledge discovery by BLSOM from big genomic sequence data.

    PubMed

    Bai, Yu; Iwasaki, Yuki; Kanaya, Shigehiko; Zhao, Yue; Ikemura, Toshimichi

    2014-01-01

    With the remarkable increase of genomic sequence data from a wide range of species, novel tools are needed for comprehensive analyses of the big sequence data. The Self-Organizing Map (SOM) is an effective tool for clustering and visualizing high-dimensional data, such as oligonucleotide composition, on a single map. By modifying the conventional SOM, we previously developed the Batch-Learning SOM (BLSOM), which allows classification of sequence fragments according to species, depending solely on oligonucleotide composition. In the present study, we introduce the oligonucleotide BLSOM as used for characterization of vertebrate genome sequences. We first analyzed pentanucleotide compositions in 100 kb sequences derived from a wide range of vertebrate genomes, and then the compositions in the human and mouse genomes, in order to investigate an efficient method for detecting differences between closely related genomes. BLSOM can recognize the species-specific key combination of oligonucleotide frequencies in each genome, called a "genome signature," as well as regions specifically enriched in transcription-factor-binding sequences. Because its classification and visualization power is very high, BLSOM is an efficient and powerful tool for extracting a wide range of information from massive amounts of genomic sequences (i.e., big sequence data).
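    The input to an oligonucleotide BLSOM is a frequency vector over all k-mers of a sequence fragment. A minimal sketch of building such a pentanucleotide composition vector (the BLSOM training itself is not shown):

```python
from collections import Counter
from itertools import product

def oligo_composition(seq, k=5):
    """Normalized k-mer frequency vector over all 4**k possible k-mers,
    i.e. the kind of feature vector a BLSOM would be trained on."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {"".join(p): counts["".join(p)] / total
            for p in product("ACGT", repeat=k)}
```

    For pentanucleotides (k = 5) this yields a 1024-dimensional vector per fragment; in the study above each fragment is a 100 kb genomic window.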

  12. Efficient computation of the joint sample frequency spectra for multiple populations.

    PubMed

    Kamm, John A; Terhorst, Jonathan; Song, Yun S

    2017-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
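    The SFS itself is simple to compute from data; the hard part addressed by momi is computing its expectation under a demographic model. As a reminder of the statistic, a pure-Python sketch that tallies the derived-allele SFS from a 0/1 genotype matrix (toy data, single population):

```python
from collections import Counter

def sfs(genotypes):
    """Derived-allele sample frequency spectrum.
    genotypes: rows = variant sites, columns = haploid samples (0/1).
    Returns, for i = 1 .. n-1, the number of sites where the derived
    allele appears in exactly i of the n samples."""
    n = len(genotypes[0])
    counts = Counter(sum(site) for site in genotypes)
    return [counts[i] for i in range(1, n)]
```

    The joint SFS across populations generalizes this to a multidimensional array indexed by the derived-allele count in each population.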

  13. Nonlinear detection for a high rate extended binary phase shift keying system.

    PubMed

    Chen, Xian-Qing; Wu, Le-Nan

    2013-03-28

    The algorithm and results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results showed that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. Both detectors detect the received signals together with the special impacting filter (SIF), which can improve the energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs to the LDPC decoder. The complexity of this detector is addressed by using four features and simplifying the decision function. In addition, a bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to the sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding.
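    The baseline TD detector amounts to a sign decision on the received sample, while the SVM's advantage is its soft posterior output. A toy antipodal-signalling sketch (a stand-in for the EBPSK/SIF chain, with arbitrary amplitude and noise parameters) showing both a hard threshold decision and a model-based posterior of the kind an LDPC decoder would consume:

```python
import math
import random

random.seed(1)

def transmit(bits, amplitude=1.0, noise_sigma=0.5):
    """Antipodal signalling through an AWGN channel (toy stand-in for EBPSK)."""
    return [(amplitude if b else -amplitude) + random.gauss(0.0, noise_sigma)
            for b in bits]

def threshold_detect(samples):
    """Hard TD-style decision: sign of the received sample."""
    return [1 if s > 0.0 else 0 for s in samples]

def posterior(s, amplitude=1.0, noise_sigma=0.5):
    """P(bit = 1 | sample) under the Gaussian channel model - a soft
    output analogous to the PPEs fed to the LDPC decoder."""
    llr = 2.0 * amplitude * s / noise_sigma ** 2
    return 1.0 / (1.0 + math.exp(-llr))

bits = [random.randint(0, 1) for _ in range(10_000)]
rx = transmit(bits)
decided = threshold_detect(rx)
ber = sum(b != d for b, d in zip(bits, decided)) / len(bits)
```

    An SVM detector trained on received features plays the role of `posterior` here without assuming the channel model explicitly.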

  14. Nonlinear Detection for a High Rate Extended Binary Phase Shift Keying System

    PubMed Central

    Chen, Xian-Qing; Wu, Le-Nan

    2013-01-01

    The algorithm and results of a nonlinear detector using a machine learning technique called the support vector machine (SVM) on an efficient modulation system with high data rate and low energy consumption are presented in this paper. Simulation results showed that the performance achieved by the SVM detector is comparable to that of a conventional threshold decision (TD) detector. Both detectors detect the received signals together with the special impacting filter (SIF), which can improve the energy utilization efficiency. However, unlike the TD detector, the SVM detector concentrates not only on reducing the BER but also on providing accurate posterior probability estimates (PPEs), which can be used as soft inputs to the LDPC decoder. The complexity of this detector is addressed by using four features and simplifying the decision function. In addition, a bandwidth-efficient transmission is analyzed with both the SVM and TD detectors. The SVM detector is more robust to the sampling rate than the TD detector. We find that the SVM is suitable for extended binary phase shift keying (EBPSK) signal detection and can provide accurate posterior probabilities for LDPC decoding. PMID:23539034

  15. PS-FW: A Hybrid Algorithm Based on Particle Swarm and Fireworks for Global Optimization

    PubMed Central

    Chen, Shuangqing; Wei, Lixin; Guan, Bing

    2018-01-01

    Particle swarm optimization (PSO) and the fireworks algorithm (FWA) are two recently developed optimization methods which have been applied in various areas due to their simplicity and efficiency. However, when applied to high-dimensional optimization problems, the PSO algorithm may be trapped in local optima owing to its lack of powerful global exploration capability, and the fireworks algorithm is difficult to converge in some cases because of its relatively low local exploitation efficiency for non-core fireworks. In this paper, a hybrid algorithm called PS-FW is presented, in which the modified operators of FWA are embedded into the solving process of PSO. In the iteration process, an abandonment and supplement mechanism is adopted to balance the exploration and exploitation ability of PS-FW, and a modified explosion operator and a novel mutation operator are proposed to speed up global convergence and to avoid prematurity. To verify the performance of the proposed PS-FW algorithm, 22 high-dimensional benchmark functions were employed, and it was compared with the PSO, FWA, stdPSO, CPSO, CLPSO, FIPS, Frankenstein, and ALWPSO algorithms. Results show that the PS-FW algorithm is an efficient, robust, and fast-converging optimization method for solving global optimization problems. PMID:29675036
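    For reference, the PSO half of the hybrid can be written in a few dozen lines. This is a plain global-best PSO with conventional parameter values, not the PS-FW algorithm itself (the fireworks explosion and mutation operators are omitted):

```python
import random

random.seed(0)

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0)):
    """Minimal global-best PSO minimizing f over a box."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    w, c1, c2 = 0.72, 1.49, 1.49                # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function in 5 dimensions as a toy benchmark
best, best_val = pso(lambda x: sum(v * v for v in x), dim=5)
```

    PS-FW replaces part of this loop with fireworks-style explosion and mutation operators to improve exploitation around promising particles.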

  16. Efficient computation of the joint sample frequency spectra for multiple populations

    PubMed Central

    Kamm, John A.; Terhorst, Jonathan; Song, Yun S.

    2016-01-01

    A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity. PMID:28239248

  17. Zebra: A striped network file system

    NASA Technical Reports Server (NTRS)

    Hartman, John H.; Ousterhout, John K.

    1992-01-01

    The design of Zebra, a striped network file system, is presented. Zebra applies ideas from log-structured file system (LFS) and RAID research to network file systems, resulting in a network file system that has scalable performance, uses its servers efficiently even when its applications are using small files, and provides high availability. Zebra stripes file data across multiple servers, so that the file transfer rate is not limited by the performance of a single server. High availability is achieved by maintaining parity information for the file system. If a server fails, its contents can be reconstructed using the contents of the remaining servers and the parity information. Zebra differs from existing striped file systems in the way it stripes file data: Zebra does not stripe on a per-file basis; instead it stripes the stream of bytes written by each client. Clients write to the servers in units called stripe fragments, which are analogous to segments in an LFS. Stripe fragments contain file blocks that were written recently, without regard to which file they belong. This method of striping has numerous advantages over per-file striping, including increased server efficiency, efficient parity computation, and elimination of the parity update problem.
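    The availability mechanism described above is ordinary XOR parity across the fragments of a stripe: if any one fragment is lost, XORing the survivors with the parity fragment recovers it. A minimal sketch:

```python
def parity(fragments):
    """XOR parity across stripe fragments (equal-length byte strings)."""
    out = bytearray(len(fragments[0]))
    for frag in fragments:
        for i, b in enumerate(frag):
            out[i] ^= b
    return bytes(out)

def reconstruct(surviving, parity_frag):
    """Rebuild one lost fragment from the survivors plus the parity fragment:
    XORing everything that remains cancels out all but the missing data."""
    return parity(surviving + [parity_frag])
```

    Because Zebra writes whole stripes of freshly written data at once, the parity fragment can be computed on the client before anything is sent, avoiding the read-modify-write parity update of per-file striping.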

  18. Applying mobile and pervasive computer technology to enhance coordination of work in a surgical ward.

    PubMed

    Hansen, Thomas Riisgaard; Bardram, Jakob E

    2007-01-01

    Collaboration, coordination, and communication are crucial in maintaining an efficient and smooth flow of work in an operating ward. This coordination, however, often comes at a high price in terms of unsuccessful attempts to get hold of people, disturbing telephone calls, searching for colleagues, and unnecessary stress. To accommodate this situation and to increase the quality of work in operating wards, we have designed a set of pervasive computer systems which support what we call context-mediated communication and awareness. These systems use large interactive displays, video streaming from key locations, tracking systems, and mobile devices to support social awareness and different types of communication modalities relevant to the current context. In this paper we report qualitative data from a one-year deployment of the system in a local hospital. Overall, this study shows that 75% of the participants strongly agreed that these systems had made their work easier.

  19. Performance studies of resistive Micromegas chambers for the upgrade of the ATLAS Muon Spectrometer

    NASA Astrophysics Data System (ADS)

    Ntekas, Konstantinos

    2018-02-01

    The ATLAS collaboration at the LHC has endorsed the resistive Micromegas (MM) technology, along with the small-strip Thin Gap Chambers (sTGC), for the high-luminosity upgrade of the first muon station in the high-rapidity region, the so-called New Small Wheel (NSW) project. The NSW requires fully efficient MM chambers, up to a particle rate of ~15 kHz/cm2, with spatial resolution better than 100 μm, independent of the track incidence angle and the magnetic field (B ≤ 0.3 T). Along with precise tracking, the MM should be able to provide a trigger signal complementary to the sTGC, so adequate timing resolution is also required. Several tests have been performed on small (10 × 10 cm2) MM chambers using medium (10 GeV/c) and high (150 GeV/c) momentum hadron beams at CERN. Results on the efficiency and position resolution measured during these tests are presented, demonstrating the excellent characteristics of the MM that fulfil the NSW requirements. Exploiting the ability of the MM to work as a Time Projection Chamber, a novel method called the μTPC has been developed for the case of inclined tracks, allowing precise segment reconstruction using a single detection plane. A detailed description of the method, along with thorough studies towards refining its performance, is shown. Finally, during 2014 the first MM quadruplet (MMSW) following the NSW design scheme, comprising four detection planes in a stereo readout configuration, was realised at CERN. Test-beam results from this prototype are discussed and compared to theoretical expectations.
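    The μTPC idea can be illustrated with a toy segment fit: each strip hit gives a position and a drift time, the time is converted to a drift distance with an assumed constant drift velocity, and a least-squares line through the points gives the local track slope within a single detection plane. The drift velocity below (0.047 mm/ns) is an assumed, illustrative value:

```python
def fit_segment(strips_mm, times_ns, v_drift=0.047):
    """Least-squares line through (strip position, drift distance) points.
    v_drift (mm/ns) is an assumed constant drift velocity; the returned
    value is the local track slope dz/dx in the drift gap."""
    zs = [t * v_drift for t in times_ns]   # drift distance for each strip hit
    n = len(strips_mm)
    mx = sum(strips_mm) / n
    mz = sum(zs) / n
    return (sum((x - mx) * (z - mz) for x, z in zip(strips_mm, zs))
            / sum((x - mx) ** 2 for x in strips_mm))
```

    For an inclined track the fitted slope directly encodes the incidence angle, which is why a single MM plane can reconstruct a track segment.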

  20. Screening of a Brassica napus bacterial artificial chromosome library using highly parallel single nucleotide polymorphism assays

    PubMed Central

    2013-01-01

    Background Efficient screening of bacterial artificial chromosome (BAC) libraries with polymerase chain reaction (PCR)-based markers is feasible provided that a multidimensional pooling strategy is implemented. Single nucleotide polymorphisms (SNPs) can be screened in multiplexed format, therefore this marker type lends itself particularly well for medium- to high-throughput applications. Combining the power of multiplex-PCR assays with a multidimensional pooling system may prove to be especially challenging in a polyploid genome. In polyploid genomes two classes of SNPs need to be distinguished, polymorphisms between accessions (intragenomic SNPs) and those differentiating between homoeologous genomes (intergenomic SNPs). We have assessed whether the highly parallel Illumina GoldenGate® Genotyping Assay is suitable for the screening of a BAC library of the polyploid Brassica napus genome. Results A multidimensional screening platform was developed for a Brassica napus BAC library which is composed of almost 83,000 clones. Intragenomic and intergenomic SNPs were included in Illumina’s GoldenGate® Genotyping Assay and both SNP classes were used successfully for screening of the multidimensional BAC pools of the Brassica napus library. An optimized scoring method is proposed which is especially valuable for SNP calling of intergenomic SNPs. Validation of the genotyping results by independent methods revealed a success of approximately 80% for the multiplex PCR-based screening regardless of whether intra- or intergenomic SNPs were evaluated. Conclusions Illumina’s GoldenGate® Genotyping Assay can be efficiently used for screening of multidimensional Brassica napus BAC pools. SNP calling was specifically tailored for the evaluation of BAC pool screening data. The developed scoring method can be implemented independently of plant reference samples. It is demonstrated that intergenomic SNPs represent a powerful tool for BAC library screening of a polyploid genome. 
PMID:24010766
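    Multidimensional pool screening decodes a positive clone by intersecting the positive pools along each axis. A minimal sketch with a hypothetical three-axis (plate, row, column) layout; the clone names and coordinates are invented for illustration:

```python
def decode(positives_by_axis, layout):
    """Candidate clones whose coordinates are positive on every axis.
    positives_by_axis: one set of positive pool indices per axis.
    layout: clone id -> (plate, row, column) coordinates."""
    return [clone for clone, coords in layout.items()
            if all(c in pos for c, pos in zip(coords, positives_by_axis))]

# Hypothetical layout of three BAC clones in a (plate, row, column) cube
layout = {"BAC1": (0, 1, 2), "BAC2": (1, 1, 2), "BAC3": (0, 0, 0)}
```

    With well-chosen pool dimensions a single multiplexed assay per axis narrows tens of thousands of clones down to one candidate, which is then validated independently (as in the ~80% validation rate reported above).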

  1. Effectiveness and cost effectiveness of television, radio and print advertisements in promoting the New York smokers' quitline

    PubMed Central

    Farrelly, Matthew C; Hussin, Altijani; Bauer, Ursula E

    2007-01-01

    Objectives This study assessed the relative effectiveness and cost effectiveness of television, radio and print advertisements to generate calls to the New York smokers' quitline. Methods Regression analysis was used to link total county level monthly quitline calls to television, radio and print advertising expenditures. Based on regression results, standardised measures of the relative effectiveness and cost effectiveness of expenditures were computed. Results There was a positive and statistically significant relation between call volume and expenditures for television (p<0.01) and radio (p<0.001) advertisements and a marginally significant effect for expenditures on newspaper advertisements (p<0.065). The largest effect was for television advertising. However, because of differences in advertising costs, for every $1000 increase in television, radio and newspaper expenditures, call volume increased by 0.1%, 5.7% and 2.8%, respectively. Conclusions Television, radio and print media all effectively increased calls to the New York smokers' quitline. Although increases in expenditures for television were the most effective, their relatively high costs suggest they are not currently the most cost effective means to promote a quitline. This implies that a more efficient mix of media would place greater emphasis on radio than television. However, because the current study does not adequately assess the extent to which radio expenditures would sustain their effectiveness with substantial expenditure increases, it is not feasible to determine a more optimal mix of expenditures. PMID:18048625

  2. 77 FR 39690 - State Energy Advisory Board (STEAB)

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-05

    ... DEPARTMENT OF ENERGY State Energy Advisory Board (STEAB) AGENCY: Energy Efficiency and Renewable... teleconference call of the State Energy Advisory Board (STEAB). The Federal Advisory Committee Act (Pub. L. 92... Energy, Office of Energy Efficiency and Renewable Energy, 1000 Independence Ave. SW., Washington, DC...

  3. 77 FR 55218 - Homeland Security Advisory Council

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-07

    ... childhood arrivals program. The HSAC will also receive a report from the Sustainability and Efficiency Task Force, review and discuss the task forces' report, and formulate recommendations for the Department. The.... HSAC conference call details and the Sustainability and Efficiency Task Force report will be provided...

  4. An efficient algorithm to compute marginal posterior genotype probabilities for every member of a pedigree with loops

    PubMed Central

    2009-01-01

    Background Marginal posterior genotype probabilities need to be computed for genetic analyses such as genetic counseling in humans and selective breeding in animal and plant species. Methods In this paper, we describe a peeling-based, deterministic, exact algorithm to efficiently compute genotype probabilities for every member of a pedigree with loops, without recourse to junction-tree methods from graph theory. The efficiency in computing the likelihood by peeling comes from storing intermediate results in multidimensional tables called cutsets. Computing marginal genotype probabilities for individual i requires recomputing the likelihood for each of the possible genotypes of individual i. This can be done efficiently by storing intermediate results in two types of cutsets, called anterior and posterior cutsets, and reusing these intermediate results to compute the likelihood. Examples A small example is used to illustrate the theoretical concepts discussed in this paper, and marginal genotype probabilities are computed at a monogenic disease locus for every member in a real cattle pedigree. PMID:19958551

  5. Extending the BEAGLE library to a multi-FPGA platform

    PubMed Central

    2013-01-01

    Background Maximum Likelihood (ML)-based phylogenetic inference using Felsenstein’s pruning algorithm is a standard method for estimating the evolutionary relationships amongst a set of species based on DNA sequence data, and is used in popular applications such as RAxML, PHYLIP, GARLI, BEAST, and MrBayes. The Phylogenetic Likelihood Function (PLF) and its associated scaling and normalization steps comprise the computational kernel for these tools. These computations are data intensive but contain fine grain parallelism that can be exploited by coprocessor architectures such as FPGAs and GPUs. A general purpose API called BEAGLE has recently been developed that includes optimized implementations of Felsenstein’s pruning algorithm for various data parallel architectures. In this paper, we extend the BEAGLE API to a multiple Field Programmable Gate Array (FPGA)-based platform called the Convey HC-1. Results The core calculation of our implementation, which includes both the phylogenetic likelihood function (PLF) and the tree likelihood calculation, has an arithmetic intensity of 130 floating-point operations per 64 bytes of I/O, or 2.03 ops/byte. Its performance can thus be calculated as a function of the host platform’s peak memory bandwidth and the implementation’s memory efficiency, as 2.03 × peak bandwidth × memory efficiency. Our FPGA-based platform has a peak bandwidth of 76.8 GB/s and our implementation achieves a memory efficiency of approximately 50%, which gives an average throughput of 78 Gflops. This represents a ~40X speedup when compared with BEAGLE’s CPU implementation on a dual Xeon 5520 and 3X speedup versus BEAGLE’s GPU implementation on a Tesla T10 GPU for very large data sizes. The power consumption is 92 W, yielding a power efficiency of 1.7 Gflops per Watt. 
Conclusions The use of data parallel architectures to achieve high performance for likelihood-based phylogenetic inference requires high memory bandwidth and a design methodology that emphasizes high memory efficiency. To achieve this objective, we integrated 32 pipelined processing elements (PEs) across four FPGAs. For the design of each PE, we developed a specialized synthesis tool to generate a floating-point pipeline with resource and throughput constraints to match the target platform. We have found that using low-latency floating-point operators can significantly reduce FPGA area and still meet timing requirement on the target platform. We found that this design methodology can achieve performance that exceeds that of a GPU-based coprocessor. PMID:23331707
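    The throughput claim in the Results section follows directly from roofline-style arithmetic: arithmetic intensity times peak memory bandwidth times memory efficiency. A quick check of the numbers given in the text:

```python
flops_per_call = 130.0   # floating-point operations per PLF computation
bytes_per_call = 64.0    # bytes of I/O per computation
intensity = flops_per_call / bytes_per_call   # ops per byte (~2.03)

peak_bw = 76.8           # GB/s, peak memory bandwidth of the FPGA platform
mem_eff = 0.50           # reported memory efficiency
throughput = intensity * peak_bw * mem_eff    # Gflops
```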

  6. High-efficiency genome editing and allele replacement in prototrophic and wild strains of Saccharomyces.

    PubMed

    Alexander, William G; Doering, Drew T; Hittinger, Chris Todd

    2014-11-01

    Current genome editing techniques available for Saccharomyces yeast species rely on auxotrophic markers, limiting their use in wild and industrial strains and species. Taking advantage of the ancient loss of thymidine kinase in the fungal kingdom, we have developed the herpes simplex virus thymidine kinase gene as a selectable and counterselectable marker that forms the core of novel genome engineering tools called the Haploid Engineering and Replacement Protocol (HERP) cassettes. Here we show that these cassettes allow a researcher to rapidly generate heterogeneous populations of cells with thousands of independent chromosomal allele replacements using mixed PCR products. We further show that the high efficiency of this approach enables the simultaneous replacement of both alleles in diploid cells. Using these new techniques, many of the most powerful yeast genetic manipulation strategies are now available in wild, industrial, and other prototrophic strains from across the diverse Saccharomyces genus. Copyright © 2014 by the Genetics Society of America.

  7. Tree crickets optimize the acoustics of baffles to exaggerate their mate-attraction signal

    PubMed Central

    Balakrishnan, Rohini; Robert, Daniel

    2017-01-01

    Object manufacture in insects is typically inherited, and believed to be highly stereotyped. Optimization, the ability to select the functionally best material and modify it appropriately for a specific function, implies flexibility and is usually thought to be incompatible with inherited behaviour. Here, we show that tree crickets optimize acoustic baffles, objects that are used to increase the effective loudness of mate-attraction calls. We quantified the acoustic efficiency of all baffles within the naturally feasible design space using finite-element modelling and found that design affects efficiency significantly. We tested the baffle-making behaviour of tree crickets in a series of experimental contexts. We found that given the opportunity, tree crickets optimized baffle acoustics; they selected the best sized object and modified it appropriately to make a near optimal baffle. Surprisingly, optimization could be achieved in a single attempt, and is likely to be achieved through an inherited yet highly accurate behavioural heuristic. PMID:29227246

  8. Semiconductor photoelectrochemistry

    NASA Technical Reports Server (NTRS)

    Buoncristiani, A. M.; Byvik, C. E.

    1983-01-01

    Semiconductor photoelectrochemical reactions are investigated. A model of the charge transport processes in the semiconductor, based on semiconductor device theory, is presented. It incorporates the nonlinear processes characterizing the diffusion and reaction of charge carriers in the semiconductor. The model is used to study conditions limiting useful energy conversion, specifically the saturation of current flow due to high light intensity. Numerical results describing charge distributions in the semiconductor and their effects on the electrolyte are obtained. Experimental results include: an estimate of the rate at which a semiconductor photoelectrode is capable of converting electromagnetic energy into chemical energy; the effect of cell temperature on the efficiency; a method for determining the point of zero zeta potential for macroscopic semiconductor samples; a technique using platinized titanium dioxide powders and ultraviolet radiation to produce chlorine, bromine, and iodine from solutions containing their respective ions; the photoelectrochemical properties of a class of layered compounds called transition metal thiophosphates; and a technique used to produce high conversion efficiency from laser radiation to chemical energy.

  9. OPPORTUNITIES AND RESPONSIBILITIES OF THE GENERAL PRACTITIONER

    PubMed Central

    Truman, Stanley R.

    1949-01-01

    We believe in the ability of general practice to serve the greatest number of people with the best medical care, most efficiently and most economically. We believe that the physician in general practice receives the utmost in personal satisfaction for a job well done. We believe in the necessity for specialists and hold in high regard and deep admiration their fund of knowledge and fine technical skills. Who would call one branch of medicine more necessary than another? All are necessary, all must be nurtured, each commanding and deserving the respect of the others. PMID:18122336

  10. Design and evaluation of Nemesis, a scalable, low-latency, message-passing communication subsystem.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buntinas, D.; Mercier, G.; Gropp, W.

    2005-12-02

    This paper presents a new low-level communication subsystem called Nemesis. Nemesis has been designed and implemented to be scalable and efficient both in the intranode communication context using shared-memory and in the internode communication case using high-performance networks and is natively multimethod-enabled. Nemesis has been integrated in MPICH2 as a CH3 channel and delivers better performance than other dedicated communication channels in MPICH2. Furthermore, the resulting MPICH2 architecture outperforms other MPI implementations in point-to-point benchmarks.

  11. Portable data collection terminal in the automated power consumption measurement system

    NASA Astrophysics Data System (ADS)

    Vologdin, S. V.; Shushkov, I. D.; Bysygin, E. K.

    2018-01-01

    Increasing the efficiency and automation of electric-energy data collection and processing is an important current goal. The high cost of classic electric-energy billing systems has prevented their mass adoption. The Udmurtenergo Branch of IDGC of Center and Volga Region developed an electronic automated system called “Mobile Energy Billing” based on data collection terminals. The system combines electronic components built on a service-oriented architecture using WCF services. At present, all parts of the Udmurtenergo Branch electric network are connected to the “Mobile Energy Billing” project, and the system's capabilities can be expanded thanks to its flexible architecture.

  12. Information technology in the foxhole.

    PubMed

    Eyestone, S M

    1995-08-01

    The importance of digital data capture at the point of health care service within the military environment is highlighted. Current paper-based data capture does not allow for efficient data reuse throughout the medical support information domain. A simple, high-level process and data flow model is used to demonstrate the importance of data capture at point of service. The Department of Defense is developing a personal digital assistant, called MEDTAG, that accomplishes point of service data capture in the field using a prototype smart card as a data store in austere environments.

  13. Hidden Markov induced Dynamic Bayesian Network for recovering time evolving gene regulatory networks

    NASA Astrophysics Data System (ADS)

    Zhu, Shijia; Wang, Yadong

    2015-12-01

    Dynamic Bayesian Networks (DBN) have been widely used to recover gene regulatory relationships from time-series data in computational systems biology. Their standard assumption is ‘stationarity’, and therefore, several research efforts have recently been proposed to relax this restriction. However, those methods suffer from three challenges: long running time, low accuracy and reliance on parameter settings. To address these problems, we propose a novel non-stationary DBN model by extending each hidden node of a Hidden Markov Model into a DBN (called HMDBN), which properly handles the underlying time-evolving networks. Correspondingly, an improved structural EM algorithm is proposed to learn the HMDBN. It dramatically reduces the search space, thereby substantially improving computational efficiency. Additionally, we derived a novel generalized Bayesian Information Criterion under the non-stationary assumption (called BWBIC), which can help significantly improve reconstruction accuracy and largely reduce over-fitting. Moreover, the re-estimation formulas for all parameters of our model are derived, enabling us to avoid reliance on parameter settings. Compared to the state-of-the-art methods, experimental evaluation of our proposed method on both synthetic and real biological data demonstrates consistently high prediction accuracy and significantly improved computational efficiency, even with no prior knowledge or parameter settings.
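
The BWBIC criterion above generalizes the standard Bayesian Information Criterion (BIC) to the non-stationary setting. The paper's exact form is not given here, but the plain BIC trade-off it builds on can be sketched as follows (a minimal illustration with hypothetical likelihoods, not the authors' BWBIC):

```python
import math

def bic(log_likelihood, n_params, n_samples):
    """Standard BIC score: lower is better; penalizes model complexity."""
    return n_params * math.log(n_samples) - 2.0 * log_likelihood

# Compare two hypothetical fits of the same 100-point dataset: the complex
# model fits slightly better but pays a larger complexity penalty.
simple = bic(log_likelihood=-120.0, n_params=3, n_samples=100)
complex_ = bic(log_likelihood=-118.0, n_params=9, n_samples=100)
print(simple < complex_)  # True: the simpler model wins despite a worse fit
```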

  14. Transmission and Distribution Efficiency Improvement Research and Development Survey.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brooks, C.L.; Westinghouse Electric Corporation. Advanced Systems Technology.

    The purpose of this study was to identify and quantify those technologies for improving transmission and distribution (T and D) system efficiency that could provide the greatest benefits for utility customers in the Pacific Northwest. Improving the efficiency of transmission and distribution systems offers a potential source of conservation within the utility sector. An extensive review of this field resulted in a list of 49 state-of-the-art technologies and 39 future technologies. Of these, 15 from the former list and 7 from the latter were chosen as the most promising and then submitted to an evaluative test - a modeled sample system for Benton County PUD, a utility with characteristics typical of a BPA customer system. Reducing end-use voltage on secondary distribution systems to decrease the energy consumption of electrical users when possible, called ''Conservation Voltage Reduction,'' was found to be the most cost-effective state-of-the-art technology. Volt-ampere reactive (var) optimization is a similarly cost-effective alternative. The most significant reduction in losses on the transmission and distribution system would be achieved through the replacement of standard transformers with high-efficiency transformers, such as amorphous-steel transformers. Of the future technologies assessed, the ''Distribution Static VAR Generator'' appears to have the greatest potential for technological breakthroughs and, therefore in time, commercialization. ''Improved Dielectric Materials,'' with a relatively low cost and high potential for efficiency improvement, warrant R and D consideration. ''Extruded Three-Conductor Cable'' and ''Six- and Twelve-Phase Transmission'' programs provide only limited gains in efficiency and applicability and are therefore the least cost effective.

  15. Evaluation of the Multi-Chambered Treatment Train, a retrofit water-quality management device

    USGS Publications Warehouse

    Corsi, Steven R.; Greb, Steven R.; Bannerman, Roger T.; Pitt, Robert E.

    1999-01-01

    This paper presents the results of an evaluation of the benefits and efficiencies of a device called the Multi-Chambered Treatment Train (MCTT), which was installed below the pavement surface at a municipal maintenance garage and parking facility in Milwaukee, Wisconsin. Flow-weighted water samples were collected at the inlet and outlet of the device during 15 storms, and the efficiency of the device was based on reductions in the loads of 68 chemical constituents and organic compounds. High reduction efficiencies were achieved for all particulate-associated constituents, including total suspended solids (98 percent), total phosphorus (88 percent), and total recoverable zinc (91 percent). Reduction rates for dissolved fractions of the constituents were substantial, but somewhat lower (dissolved solids, 13 percent; dissolved phosphorus, 78 percent; dissolved zinc, 68 percent). The total dissolved solids load, which originated from road salt storage, was more than four times the total suspended solids load. No appreciable difference was detected between particle-size distributions in inflow and outflow samples.
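
The percent reductions above are load-based removal efficiencies. A minimal sketch (with hypothetical inlet/outlet loads, not the study's data) shows the calculation:

```python
def removal_efficiency(load_in, load_out):
    """Percent reduction in constituent load between device inlet and outlet."""
    return 100.0 * (load_in - load_out) / load_in

# Hypothetical loads (kg) summed over the monitored storms:
print(round(removal_efficiency(50.0, 1.0), 1))   # 98.0 - e.g. total suspended solids
print(round(removal_efficiency(10.0, 1.2), 1))   # 88.0 - e.g. total phosphorus
```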

  16. A novel superparamagnetic surface molecularly imprinted nanoparticle adopting dummy template: an efficient solid-phase extraction adsorbent for bisphenol A.

    PubMed

    Lin, Zhenkun; Cheng, Wenjing; Li, Yanyan; Liu, Zhiren; Chen, Xiangping; Huang, Changjiang

    2012-03-30

    Leakage of the residual template molecules is one of the biggest challenges for application of molecularly imprinted polymer (MIP) in solid-phase extraction (SPE). In this study, bisphenol F (BPF) was adopted as a dummy template to prepare MIP of bisphenol A (BPA) with a superparamagnetic core-shell nanoparticle as the supporter, aiming to avoid residual template leakage and to increase the efficiency of SPE. Characterization and testing of the obtained products (called mag-DMIP beads) revealed that these novel nanoparticles not only had excellent magnetic properties but also displayed high selectivity for the target molecule BPA. When mag-DMIP beads were adopted as the adsorbents of solid-phase extraction for detecting BPA in real water samples, the recoveries of spiked samples ranged from 84.7% to 93.8% with a limit of detection of 2.50 pg mL(-1), revealing that mag-DMIP beads were efficient SPE adsorbents. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Efficient Agrobacterium-mediated transformation of the liverwort Marchantia polymorpha using regenerating thalli.

    PubMed

    Kubota, Akane; Ishizaki, Kimitsune; Hosaka, Masashi; Kohchi, Takayuki

    2013-01-01

    The thallus, the gametophyte body of the liverwort Marchantia polymorpha, develops clonal progenies called gemmae that are useful in the isolation and propagation of isogenic plants. Developmental timing is critical to Agrobacterium-mediated transformation, and high transformation efficiency has been achieved only with sporelings. Here we report an Agrobacterium-mediated transformation system for M. polymorpha using regenerating thalli. Thallus regeneration was induced by cutting the mature thallus across the apical-basal axis and incubating the basal portion of the thallus for 3 d. Regenerating thalli were infected with Agrobacterium carrying binary vector that contained a selection marker, the hygromycin phosphotransferase gene, and hygromycin-resistant transformants were obtained with an efficiency of over 60%. Southern blot analysis verified random integration of 1 to 4 copies of the T-DNA into the M. polymorpha genome. This Agrobacterium-mediated transformation system for M. polymorpha should provide opportunities to perform genetic transformation without preparing spores and to generate a sufficient number of transformants with isogenic background.

  18. Absorption, scattering, and radiation force efficiencies in the longitudinal wave scattering by a small viscoelastic particle in an isotropic solid.

    PubMed

    Lopes, J H; Leão-Neto, J P; Silva, G T

    2017-11-01

    Analytical expressions of the absorption, scattering, and elastic radiation force efficiency factors are derived for the longitudinal plane wave scattering by a small viscoelastic particle in a lossless solid matrix. The particle is assumed to be much smaller than the incident wavelength, i.e., the so-called long-wavelength (Rayleigh) approximation. The efficiencies are dimensionless quantities that represent the absorbed and scattered powers and the elastic radiation force on the particle. In the quadrupole approximation, they are expressed in terms of contrast functions (bulk and shear moduli, and density) between the particle and solid matrix. The results for a high-density polyethylene particle embedded in an aluminum matrix agree with those obtained with the partial wave expansion method. Additionally, the connection between the elastic radiation force and the forward scattering function is established through the optical theorem. The present results should be useful for ultrasound characterization of particulate composites, and the development of implanted devices activated by radiation force.

  19. Fabrication and application of a non-contact double-tapered optical fiber tweezers.

    PubMed

    Liu, Z L; Liu, Y X; Tang, Y; Zhang, N; Wu, F P; Zhang, B

    2017-09-18

    A double-tapered optical fiber tweezers (DOFTs) was fabricated by a chemical etching method called interfacial layer etching. In this method, the second taper angle (STA) of the DOFTs can be controlled easily by the interfacial layer etching time. Application of the DOFTs to the optical trapping of yeast cells is presented. Effects of the STA on the axial trapping efficiency and the trapping position were investigated experimentally and theoretically. The experimental results are in good agreement with the theoretical ones. The results demonstrate that non-contact capture can be realized for a large STA (e.g., 90 deg) and that there is an optimal axial trapping efficiency as the STA increases. In order to obtain a more accurate measurement of the trapping force, a correction factor to the Stokes drag coefficient was introduced. This work provides a way of designing and fabricating optical fiber tweezers (OFTs) with high trapping efficiency or non-contact capture.

  20. [New distribution forms for pharmaceuticals--a logistic perspective].

    PubMed

    Grund, J; Vartdal, T E

    1998-11-10

    Pharmaceuticals are an important input in health care. As a complement to other modes of treatment and as a substitute for hospitalisation, they affect the health of individuals and populations. Enormous public financial resources are spent on pharmaceuticals, and halting the growth in expenditures is a political objective. Factors with room for improvement include drug use efficiency, cost-efficient prescription, purchasing prices and distribution. High distribution costs affect prices and, thus, the assessment of product cost vs. utility. Changes in the distribution system may be important for three reasons. First, increased capital costs call for higher efficiency. Second, increased competition requires improved logistics. And third, information technology has opened up new supply-chain solutions. Direct sales solutions are being considered, and were discussed in a Norwegian public report on the matter, but no final conclusion has been reached. This article discusses changes in the supply of pharmaceuticals and the development of the market. Alternative supply chains are outlined, including what role the postal service may play in a deregulated pharmaceutical market.

  1. Irrigation Without Waste

    ERIC Educational Resources Information Center

    Shea, Kevin P.

    1975-01-01

    A new means of irrigation, called the drip or trickle system, has proven more efficient and less wasteful than the current system of flood irrigation. With the drip system, fertilizer-use efficiency is improved, and crop yield is sometimes increased in some crops and never decreased. (MA)

  2. Does Competition Improve Public School Efficiency? A Spatial Analysis

    ERIC Educational Resources Information Center

    Misra, Kaustav; Grimes, Paul W.; Rogers, Kevin E.

    2012-01-01

    Advocates for educational reform frequently call for policies to increase competition between schools because it is argued that market forces naturally lead to greater efficiencies, including improved student learning, when schools face competition. Researchers examining this issue are confronted with difficulties in defining reasonable measures…

  3. Broadband terahertz-power extracting by using electron cyclotron maser.

    PubMed

    Pan, Shi; Du, Chao-Hai; Qi, Xiang-Bo; Liu, Pu-Kun

    2017-08-04

    Terahertz applications urgently require high-performance, room-temperature terahertz sources. The gyrotron, based on the principle of the electron cyclotron maser, is able to generate watt-to-megawatt level terahertz radiation, and plays an exceptional role at the frontiers of energy, security and biomedicine. However, under normal conditions, a terahertz gyrotron can generate terahertz radiation with high efficiency at a single frequency, or with low efficiency over a relatively narrow tuning band. Here a frequency tuning scheme for the terahertz gyrotron, utilizing sequential switching among several whispering-gallery modes, is proposed to reach high performance with broadband, coherent and high-power operation simultaneously. Such a mode-switching gyrotron has the potential to generate broadband radiation with 100-GHz-level bandwidth. Even wider bandwidth is limited by the frequency-dependent effective electrical length of the cavity. A preliminary investigation applies a pre-bunched circuit to single-mode wide-band tuning. Then, broader sweeping is produced by mode switching under large-range magnetic tuning. The effects of mode competition, as well as of critical engineering techniques, on frequency tuning are discussed to confirm feasibility for realistic cases. This multi-mode-switching scheme could make the gyrotron a promising device towards bridging the so-called terahertz gap.

  4. Advanced thermal management of high-power quantum cascade laser arrays for infrared countermeasures

    NASA Astrophysics Data System (ADS)

    Barletta, Philip; Diehl, Laurent; North, Mark T.; Yang, Bao; Baldasaro, Nick; Temple, Dorota

    2017-10-01

    Next-generation infrared countermeasure (IRCM) systems call for compact and lightweight high-power laser sources. Specifically, optical output power of tens of Watts in the mid-wave infrared (MWIR) is desired. Monolithically fabricated arrays of quantum cascade lasers (QCLs) have the potential to meet these requirements. Single MWIR QCL emitters operating in continuous wave at room temperature have demonstrated multi-Watt power levels with wall-plug efficiency of up to 20%. However, tens of Watts of output power from an array of QCLs translates into the necessity of removing hundreds of Watts per cm2, a formidable thermal management challenge. A potential thermal solution for such high-power QCL arrays is active cooling based on high-performance thin-film thermoelectric coolers (TFTECs), in conjunction with pumped porous-media heat exchangers. The use of active cooling via TFTECs makes it possible to not only pump the heat away, but also to lower the QCL junction temperature, thus improving the wall-plug efficiency of the array. TFTECs have shown the ability to pump >250W/cm2 at ΔT=0K, which is 25 times greater than that typically seen in commercially available bulk thermoelectric devices.

  5. Low nitrous oxide production through nitrifier-denitrification in intermittent-feed high-rate nitritation reactors.

    PubMed

    Su, Qingxian; Ma, Chun; Domingo-Félez, Carlos; Kiil, Anne Sofie; Thamdrup, Bo; Jensen, Marlene Mark; Smets, Barth F

    2017-10-15

    Nitrous oxide (N2O) production from autotrophic nitrogen conversion processes, especially nitritation systems, can be significant, requires understanding and calls for mitigation. In this study, the rates and pathways of N2O production were quantified in two lab-scale sequencing batch reactors operated with intermittent feeding and demonstrating long-term and high-rate nitritation. The resulting reactor biomass was highly enriched in ammonia-oxidizing bacteria, and converted ∼93 ± 14% of the oxidized ammonium to nitrite. The low DO set-point combined with intermittent feeding was sufficient to maintain high nitritation efficiency and high nitritation rates at 20-26 °C over a period of ∼300 days. Even at the high nitritation efficiencies, net N2O production was low (∼2% of the oxidized ammonium). Net N2O production rates transiently increased with a rise in pH after each feeding, suggesting a potential effect of pH on N2O production. In situ application of 15N-labeled substrates revealed nitrifier denitrification as the dominant pathway of N2O production. Our study highlights operational conditions that minimize N2O emission from two-stage autotrophic nitrogen removal systems. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. High-voltage Array Ground Test for Direct-drive Solar Electric Propulsion

    NASA Technical Reports Server (NTRS)

    Howell, Joe T.; Mankins, John C.; O'Neill, Mark J.

    2005-01-01

    Development is underway on a unique high-power solar concentrator array called the Stretched Lens Array (SLA) for direct-drive electric propulsion. The SLA's performance attributes closely match the critical needs of solar electric propulsion (SEP) systems, which may be used for "space tugs" to fuel-efficiently transport cargo from low earth orbit (LEO) to low lunar orbit (LLO) in support of NASA's robotic and human exploration missions. Later SEP systems may similarly transport cargo from the earth-moon neighborhood to the Mars neighborhood. This paper will describe the SLA SEP technology, discuss ground tests already completed, and present plans for future ground tests and future flight tests of SLA SEP systems.

  7. Optoelectronic engineering of colloidal quantum-dot solar cells beyond the efficiency black hole: a modeling approach

    NASA Astrophysics Data System (ADS)

    Mahpeykar, Seyed Milad; Wang, Xihua

    2017-02-01

    Colloidal quantum dot (CQD) solar cells have been under the spotlight in recent years, mainly due to their potential for low-cost solution-processed fabrication and efficient light harvesting through multiple exciton generation (MEG) and a tunable absorption spectrum via the quantum size effect. Despite the impressive advances achieved in the charge carrier mobility of quantum dot solids and in the cells' light trapping capabilities, recent progress in CQD solar cell efficiencies has been slow, leaving them behind other competing solar cell technologies. In this work, using comprehensive optoelectronic modeling and simulation, we demonstrate the presence of a strong efficiency loss mechanism, here called the "efficiency black hole", that can significantly hold back the improvements achieved by any efficiency enhancement strategy. We show that this efficiency black hole is the result of a sole focus on enhancing either the light absorption or the charge extraction capabilities of CQD solar cells. This means that for a given thickness of the CQD layer, improvements accomplished exclusively in the optical or electronic aspects of CQD solar cells do not necessarily translate into tangible efficiency gains. The results suggest that for CQD solar cells to escape this black hole, simultaneous incorporation of an effective light trapping strategy and a high-quality CQD film is essential. Using the developed optoelectronic model, the requirements for this incorporation approach and the expected efficiencies after its implementation are predicted as a roadmap for the CQD solar cell research community.

  8. Contraction coupling efficiency of human first dorsal interosseous muscle.

    PubMed

    Jubrias, Sharon A; Vollestad, Nina K; Gronka, Rod K; Kushmerick, Martin J

    2008-04-01

    During working contractions, chemical energy in the form of ATP is converted to external work. The efficiency of this conversion, called 'contraction coupling efficiency', is calculated as the ratio of work output to energy input from ATP splitting. Experiments on isolated muscles and permeabilized fibres show the efficiency of this conversion has a wide range, 0.2-0.7. We measured the work output in contractions of a single human hand muscle in vivo and the ATP cost of that work to calculate the contraction coupling efficiency of the muscle. Five subjects performed six bouts of rapid voluntary contractions every 1.5 s for 42 s (28 contractions, each with time to peak force < 150 ms). The bouts encompassed a 7-fold range of workloads. The ATP cost during work was quantified by measuring the extent of chemical changes within the muscle from 31P magnetic resonance spectra. Contraction coupling efficiency was determined as the slope of paired measurements of work output and ATP cost at the graded workloads. The results show that 0.68 of the chemical energy available from ATP splitting was converted to external work output. A plausible mechanism to account for this high value is a substantially lower efficiency for mitochondrial ATP synthesis. The method described here can be used to analyse changes in the overall efficiency determined from oxygen consumption during exercise that can occur in disease or with age, and to test the hypothesis that such changes are due to reduced contraction coupling efficiency.
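
The slope-based estimate described above can be sketched with ordinary least squares. The paired measurements below are hypothetical, constructed only to illustrate how a 0.68 coupling efficiency would emerge from work-vs-ATP-cost data:

```python
# Contraction coupling efficiency as the slope of work output vs ATP cost,
# estimated by ordinary least squares over paired measurements.

def slope(xs, ys):
    """Least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical data: ATP cost (x) and work output (y) in the same energy units,
# constructed so that work is roughly 0.68 * cost, the value in the abstract.
atp_cost = [10.0, 20.0, 30.0, 40.0, 50.0]
work = [6.9, 13.5, 20.5, 27.1, 34.1]
print(round(slope(atp_cost, work), 2))  # 0.68
```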

  9. Reuse of imputed data in microarray analysis increases imputation efficiency

    PubMed Central

    Kim, Ki-Yeol; Kim, Byoung-Jin; Yi, Gwan-Su

    2004-01-01

    Background The imputation of missing values is necessary for the efficient use of DNA microarray data, because many clustering algorithms and some statistical analyses require a complete data set. A few imputation methods for DNA microarray data have been introduced, but their efficiency was low and the validity of the imputed values had not been fully checked. Results We developed a new cluster-based imputation method called the sequential K-nearest neighbor (SKNN) method. This imputes the missing values sequentially, starting from the gene having the fewest missing values, and uses the imputed values for later imputation. Although it reuses imputed values, the new method greatly improves on the conventional KNN-based method and on other methods based on maximum likelihood estimation in both accuracy and computational complexity. The performance of SKNN was particularly higher than that of other imputation methods for data with high missing rates and a large number of experiments. Application of Expectation Maximization (EM) to the SKNN method improved the accuracy, but increased computational time in proportion to the number of iterations. The Multiple Imputation (MI) method, which is well known but had not previously been applied to microarray data, showed a similarly high accuracy as the SKNN method, with slightly higher dependency on the types of data sets. Conclusions Sequential reuse of imputed data in KNN-based imputation greatly increases the efficiency of imputation. The SKNN method should be practically useful for saving the data of microarray experiments that have high amounts of missing entries. The SKNN method generates reliable imputed values which can be used for further cluster-based analysis of microarray data. PMID:15504240
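
The core idea of SKNN, imputing genes in order of missingness and reusing already-imputed rows as neighbors for later genes, can be sketched as follows. This is a simplified illustration of the description above, not the authors' implementation; the value of `k`, the Euclidean distance over observed entries, and the row-per-gene layout are assumptions:

```python
import math

def sknn_impute(data, k=2):
    """Sequential KNN imputation sketch: genes (rows) with the fewest missing
    values (None) are imputed first, and each imputed row then joins the
    reference pool used to impute later rows."""
    rows = sorted(range(len(data)), key=lambda r: sum(v is None for v in data[r]))
    complete = [r for r in rows if all(v is not None for v in data[r])]
    for r in rows:
        if r in complete:
            continue
        for c, v in enumerate(data[r]):
            if v is None:
                # Distance to each reference row over this row's observed entries.
                def dist(other):
                    return math.sqrt(sum((a - b) ** 2
                                         for a, b in zip(data[r], data[other])
                                         if a is not None))
                nearest = sorted(complete, key=dist)[:k]
                data[r][c] = sum(data[n][c] for n in nearest) / len(nearest)
        complete.append(r)   # reuse the freshly imputed gene for later genes
    return data

matrix = [[1.0, 2.0, 3.0],
          [1.1, 2.1, None],
          [0.9, None, None]]
filled = sknn_impute([row[:] for row in matrix], k=1)
print(filled[1][2])  # 3.0, copied from the single nearest complete gene
```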

  10. A surrogate-based sensitivity quantification and Bayesian inversion of a regional groundwater flow model

    NASA Astrophysics Data System (ADS)

    Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.; Amerjeed, Mansoor

    2018-02-01

    Bayesian inference using Markov Chain Monte Carlo (MCMC) provides an explicit framework for stochastic calibration of hydrogeologic models accounting for uncertainties; however, MCMC sampling entails a large number of model calls and can easily become computationally unwieldy if the high-fidelity hydrogeologic simulation is time-consuming. This study proposes a surrogate-based Bayesian framework to address this notorious issue, and illustrates the methodology by inverse modeling of a regional MODFLOW model. The high-fidelity groundwater model is approximated by a fast statistical model using the Bagging Multivariate Adaptive Regression Spline (BMARS) algorithm, so that MCMC sampling can be performed efficiently. In this study, the MODFLOW model is developed to simulate groundwater flow in an arid region of Oman consisting of mountain-coast aquifers, and is used to run representative simulations to generate a training dataset for BMARS model construction. A BMARS-based Sobol' method is also employed to efficiently calculate input parameter sensitivities, which are used to evaluate and rank their importance for the groundwater flow model system. According to the sensitivity analysis, insensitive parameters are screened out of the Bayesian inversion of the MODFLOW model, further saving computing effort. The posterior probability distribution of the input parameters is efficiently inferred from the prescribed prior distribution using observed head data, demonstrating that the presented BMARS-based Bayesian framework is an efficient tool to reduce parameter uncertainties of a groundwater system.
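
The surrogate-based workflow, training a cheap statistical emulator on a few expensive model runs and then handing MCMC the emulator instead of the simulator, can be sketched as below. A least-squares line stands in for the BMARS surrogate, and the model, observation, and noise level are all hypothetical:

```python
import math, random

random.seed(0)

def expensive_model(theta):
    """Stand-in for a costly groundwater simulation: head as a function of theta."""
    return 2.0 * theta + 1.0

# 1) Build a cheap surrogate from a handful of training runs.  The paper uses
#    BMARS; a least-squares line stands in for it in this sketch.
train_x = [0.0, 1.0, 2.0, 3.0, 4.0]
train_y = [expensive_model(x) for x in train_x]
n = len(train_x)
mx, my = sum(train_x) / n, sum(train_y) / n
b = sum((x - mx) * (y - my) for x, y in zip(train_x, train_y)) / \
    sum((x - mx) ** 2 for x in train_x)
a = my - b * mx
surrogate = lambda theta: a + b * theta     # never calls expensive_model again

# 2) Metropolis sampling against the surrogate (observed head 5.0, noise sd 0.5).
def log_post(theta):
    return -0.5 * ((5.0 - surrogate(theta)) / 0.5) ** 2

theta, samples = 0.0, []
for _ in range(5000):
    prop = theta + random.gauss(0.0, 0.5)
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)

# Posterior mean after burn-in; expected near 2.0 for this setup.
print(round(sum(samples[1000:]) / len(samples[1000:]), 1))
```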

  11. Structural identifiability of cyclic graphical models of biological networks with latent variables.

    PubMed

    Wang, Yulin; Lu, Na; Miao, Hongyu

    2016-06-13

    Graphical models have long been used to describe biological networks for a variety of important tasks, such as the determination of key biological parameters, and the structure of a graphical model ultimately determines whether such unknown parameters can be unambiguously obtained from experimental observations (i.e., the identifiability problem). Limited by resources or technical capacity, complex biological networks are usually only partially observed in experiments, which introduces latent variables into the corresponding graphical models. A number of previous studies have tackled the parameter identifiability problem for graphical models such as linear structural equation models (SEMs), with or without latent variables. However, the limited resolution and efficiency of existing approaches call for further development of novel structural identifiability analysis algorithms. An efficient structural identifiability analysis algorithm is developed in this study for a broad range of network structures. The proposed method adopts Wright's path coefficient method to generate identifiability equations in the form of symbolic polynomials, and then converts these symbolic equations to binary matrices (called identifiability matrices). Several matrix operations are introduced to reduce the identifiability matrices while maintaining system equivalence. Based on the reduced identifiability matrices, the structural identifiability of each parameter is determined. A number of benchmark models are used to verify the validity of the proposed approach. Finally, the network module for influenza A virus replication is employed as a real example to illustrate the application of the proposed approach in practice. The proposed approach can deal with cyclic networks with latent variables. The key advantage is that it intentionally avoids symbolic computation and is thus highly efficient. The method is also capable of determining the identifiability of each individual parameter and thus offers higher resolution than many existing approaches. Overall, this study provides a basis for systematic examination and refinement of graphical models of biological networks from the identifiability point of view, and it has significant potential to be extended to more complex network structures or high-dimensional systems.
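A toy version of the binary-matrix step might look as follows. This is a hedged stand-in: it builds a small 0/1 matrix (rows = equations, columns = parameter terms) and computes its rank over GF(2) by Gaussian elimination, whereas the paper's reduction uses its own equivalence-preserving operations on identifiability matrices.

```python
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination.
    A deficient rank signals redundant identifiability equations."""
    M = M.copy() % 2
    rank, row = 0, 0
    for col in range(M.shape[1]):
        pivot = next((r for r in range(row, M.shape[0]) if M[r, col]), None)
        if pivot is None:
            continue
        M[[row, pivot]] = M[[pivot, row]]   # move pivot row up
        for r in range(M.shape[0]):
            if r != row and M[r, col]:
                M[r] ^= M[row]              # XOR-eliminate below and above
        row += 1
        rank += 1
    return rank

B = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])   # third row is the XOR of the first two
r = gf2_rank(B)
```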

  12. Does Competition Improve Public School Efficiency? A Spatial Analysis

    ERIC Educational Resources Information Center

    Misra, Kaustav

    2010-01-01

    Proponents of educational reform often call for policies to increase competition between schools. It is argued that market forces naturally lead to greater efficiencies, including improved student learning, when schools face competition. In many parts of the country, public schools experience significant competition from private schools; however,…

  13. Extracting Hot spots of Topics from Time Stamped Documents

    PubMed Central

    Chen, Wei; Chundi, Parvathi

    2011-01-01

    Identifying time periods with a burst of activities related to a topic has been an important problem in analyzing time-stamped documents. In this paper, we propose an approach to extract a hot spot of a given topic in a time-stamped document set. Topics can be basic, containing a simple list of keywords, or complex. Logical relationships such as and, or, and not are used to build complex topics from basic topics. A concept of presence measure of a topic based on fuzzy set theory is introduced to compute the amount of information related to the topic in the document set. Each interval in the time period of the document set is associated with a numeric value which we call the discrepancy score. A high discrepancy score indicates that the documents in the time interval are more focused on the topic than those outside of the time interval. A hot spot of a given topic is defined as a time interval with the highest discrepancy score. We first describe a naive implementation for extracting hot spots. We then construct an algorithm called EHE (Efficient Hot Spot Extraction) using several efficient strategies to improve performance. We also introduce the notion of a topic DAG to facilitate an efficient computation of presence measures of complex topics. The proposed approach is illustrated by several experiments on a subset of the TDT-Pilot Corpus and DBLP conference data set. The experiments show that the proposed EHE algorithm significantly outperforms the naive one, and the extracted hot spots of given topics are meaningful. PMID:21765568
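The discrepancy-score idea admits a compact brute-force sketch. The scoring rule below (mean presence inside an interval minus mean presence outside it) is a plausible simplification, not the paper's exact fuzzy-set measure, and the O(n^2) scan corresponds to the naive method rather than the EHE algorithm.

```python
# presence[i] = fraction of documents in time slice i related to the topic
# (a hypothetical simplification of the fuzzy presence measure)
def hot_spot(presence):
    n = len(presence)
    total = sum(presence)
    best, best_score = (0, 0), float("-inf")
    for i in range(n):
        acc = 0.0
        for j in range(i, n):                 # interval [i, j]
            acc += presence[j]
            inside = acc / (j - i + 1)
            width = j - i + 1
            outside = (total - acc) / (n - width) if width < n else 0.0
            score = inside - outside          # discrepancy score
            if score > best_score:
                best_score, best = score, (i, j)
    return best, best_score

# burst of topic activity in slices 2..4
span, score = hot_spot([0.1, 0.1, 0.8, 0.9, 0.7, 0.1])
```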

  14. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

    Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models many times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, the computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; in contrast, PLHS generates a series of smaller sub-sets (also called `slices') such that: (1) each sub-set is a Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the union of progressively added sub-sets remains a Latin hypercube; and thus (3) the entire sample set is a Latin hypercube. PLHS therefore preserves the intended sampling properties throughout the sampling procedure, and it is advantageous over existing methods particularly because it largely avoids over- and under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help minimize total simulation time by running only the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
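For reference, the one-stage LHS that PLHS generalizes, together with the maximum-stratification check that each PLHS slice and their union must pass, can be written as follows (the slice-wise PLHS construction itself is more involved and is not shown).

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, d, rng):
    """One-stage LHS on [0,1)^d: exactly one point per stratum in every
    one-dimensional projection."""
    strata = np.tile(np.arange(n), (d, 1))
    u = (rng.permuted(strata, axis=1).T + rng.random((n, d))) / n
    return u

def is_latin(sample):
    """Check maximum 1-D stratification: each of the n strata holds
    exactly one point in every dimension."""
    n = sample.shape[0]
    idx = np.floor(sample * n).astype(int)
    return all(len(set(idx[:, j])) == n for j in range(sample.shape[1]))

X = latin_hypercube(8, 3, rng)
```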

  15. TotalReCaller: improved accuracy and performance via integrated alignment and base-calling.

    PubMed

    Menges, Fabian; Narzisi, Giuseppe; Mishra, Bud

    2011-09-01

    Currently, re-sequencing approaches use multiple modules serially to interpret raw sequencing data from next-generation sequencing platforms, while remaining oblivious to the genomic information until the final alignment step. Such approaches fail to exploit the full information from both the raw sequencing data and the reference genome that can yield better-quality sequence reads, SNP calls, variant detection, as well as alignment at the best possible location in the reference genome. Thus, there is a need for novel reference-guided bioinformatics algorithms that interpret analog signals representing sequences of the bases ({A, C, G, T}) while simultaneously aligning possible sequence reads to a source reference genome whenever available. Here, we propose a new base-calling algorithm, TotalReCaller, to achieve improved performance. A linear error model for the raw intensity data and Burrows-Wheeler transform (BWT) based alignment are combined using a Bayesian score function, which is then globally optimized over all possible genomic locations using an efficient branch-and-bound approach. The algorithm has been implemented in software and in hardware [a field-programmable gate array (FPGA)] to achieve real-time performance. Empirical results on real high-throughput Illumina data were used to evaluate TotalReCaller's performance relative to its peers (Bustard, BayesCall, Ibis and Rolexa) based on several criteria, particularly those important in clinical and scientific applications. Namely, it was evaluated for (i) its base-calling speed and throughput, (ii) its read accuracy and (iii) its specificity and sensitivity in variant calling. A software implementation of TotalReCaller, as well as additional information, is available at http://bioinformatics.nyu.edu/wordpress/projects/totalrecaller/. Contact: fabian.menges@nyu.edu.

  16. Higher Intelligence Is Associated with Less Task-Related Brain Network Reconfiguration

    PubMed Central

    Cole, Michael W.

    2016-01-01

    The human brain is able to exceed modern computers on multiple computational demands (e.g., language, planning) using a small fraction of the energy. The mystery of how the brain can be so efficient is compounded by recent evidence that all brain regions are constantly active as they interact in so-called resting-state networks (RSNs). To investigate the brain's ability to process complex cognitive demands efficiently, we compared functional connectivity (FC) during rest and multiple highly distinct tasks. We found previously that RSNs are present during a wide variety of tasks and that tasks only minimally modify FC patterns throughout the brain. Here, we tested the hypothesis that, although subtle, these task-evoked FC updates from rest nonetheless contribute strongly to behavioral performance. One might expect that larger changes in FC reflect optimization of networks for the task at hand, improving behavioral performance. Alternatively, smaller changes in FC could reflect optimization for efficient (i.e., small) network updates, reducing processing demands to improve behavioral performance. We found across three task domains that high-performing individuals exhibited more efficient brain connectivity updates in the form of smaller changes in functional network architecture between rest and task. These smaller changes suggest that individuals with an optimized intrinsic network configuration for domain-general task performance experience more efficient network updates generally. Confirming this, network update efficiency correlated with general intelligence. The brain's reconfiguration efficiency therefore appears to be a key feature contributing to both its network dynamics and general cognitive ability. SIGNIFICANCE STATEMENT The brain's network configuration varies based on current task demands. For example, functional brain connections are organized in one way when one is resting quietly but in another way if one is asked to make a decision. 
We found that the efficiency of these updates in brain network organization is positively related to general intelligence, the ability to perform a wide variety of cognitively challenging tasks well. Specifically, we found that brain network configuration at rest was already closer to a wide variety of task configurations in intelligent individuals. This suggests that the ability to modify network connectivity efficiently when task demands change is a hallmark of high intelligence. PMID:27535904

  17. The mechanics of locomotion in the squid Loligo pealei: locomotory function and unsteady hydrodynamics of the jet and intramantle pressure.

    PubMed

    Anderson, E J; DeMont, M E

    2000-09-01

    High-speed, high-resolution digital video recordings of swimming squid (Loligo pealei) were acquired. These recordings were used to determine very accurate swimming kinematics, body deformations and mantle cavity volume. The time-varying squid profile was digitized automatically from the acquired swimming sequences. Mantle cavity volume flow rates were determined under the assumption of axisymmetry and the condition of incompressibility. The data were then used to calculate jet velocity, jet thrust and intramantle pressure, including unsteady effects. Because of the accurate measurements of volume flow rate, the standard use of estimated discharge coefficients was avoided. Equations for jet and whole-cycle propulsive efficiency were developed, including a general equation incorporating unsteady effects. Squid were observed to eject up to 94 % of their intramantle working fluid at relatively high swimming speeds. As a result, the standard use of the so-called large-reservoir approximation in the determination of intramantle pressure by the Bernoulli equation leads to significant errors in calculating intramantle pressure from jet velocity and vice versa. The failure of this approximation in squid locomotion also implies that pressure variation throughout the mantle cannot be ignored. In addition, the unsteady terms of the Bernoulli equation and the momentum equation proved to be significant to the determination of intramantle pressure and jet thrust. Equations of propulsive efficiency derived for squid did not resemble Froude efficiency. Instead, they resembled the equation of rocket motor propulsive efficiency. The Froude equation was found to underestimate the propulsive efficiency of the jet period of the squid locomotory cycle and to overestimate whole-cycle propulsive efficiency when compared with efficiencies calculated from equations derived with the squid locomotory apparatus in mind. 
The equations for squid propulsive efficiency reveal that the refill period of squid plays a greater role, and the jet period a lesser role, in the low whole-cycle efficiencies predicted in squid and similar jet-propelled organisms. These findings offer new perspectives on locomotory hydrodynamics, intramantle pressure measurements and functional morphology with regard to squid and other jet-propelled organisms.

  18. Stratified Diffractive Optic Approach for Creating High Efficiency Gratings

    NASA Technical Reports Server (NTRS)

    Chambers, Diana M.; Nordin, Gregory P.

    1998-01-01

    Gratings with high efficiency in a single diffracted order can be realized with both volume holographic and diffractive optical elements. However, each method has limitations that restrict the applications in which they can be used. For example, high efficiency volume holographic gratings require an appropriate combination of thickness and permittivity modulation throughout the bulk of the material. Possible combinations of those two characteristics are limited by properties of currently available materials, thus restricting the range of applications for volume holographic gratings. Efficiency of a diffractive optic grating is dependent on its approximation of an ideal analog profile using discrete features. The size of constituent features and, consequently, the number that can be used within a required grating period restricts the applications in which diffractive optic gratings can be used. These limitations imply that there are applications which cannot be addressed by either technology. In this paper we propose to address a number of applications in this category with a new method of creating high efficiency gratings which we call stratified diffractive optic gratings. In this approach diffractive optic techniques are used to create an optical structure that emulates volume grating behavior. To illustrate the stratified diffractive optic grating concept we consider a specific application, a scanner for a space-based coherent wind lidar, with requirements that would be difficult to meet by either volume holographic or diffractive optic methods. The lidar instrument design specifies a transmissive scanner element with the input beam normally incident and the exiting beam deflected at a fixed angle from the optical axis. The element will be rotated about the optical axis to produce a conical scan pattern. The wavelength of the incident beam is 2.06 microns and the required deflection angle is 30 degrees, implying a grating period of approximately 4 microns. 
Creating a high efficiency volume grating with these parameters would require a grating thickness that cannot be attained with current photosensitive materials. For a diffractive optic grating, the number of binary steps necessary to produce high efficiency combined with the grating period requires feature sizes and alignment tolerances that are also unattainable with current techniques. Rotation of the grating and integration into a space-based lidar system impose the additional requirements that it be insensitive to polarization orientation, that its mass be minimized and that it be able to withstand launch and space environments.

  19. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence-level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
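The variogram idea underlying VARS can be illustrated on a one-dimensional response surface: gamma(h) is half the mean squared difference of model outputs at points a distance h apart, so a response that varies quickly yields a larger variogram than a nearly flat one. The two functions below are arbitrary placeholders; STAR-VARS additionally handles multi-dimensional star-based sampling and bootstrap confidence estimates.

```python
import numpy as np

def variogram(f, xs, h):
    """gamma(h) = 0.5 * E[(f(x + h) - f(x))^2], estimated on the grid xs."""
    d = f(xs + h) - f(xs)
    return 0.5 * np.mean(d ** 2)

xs = np.linspace(0.0, 1.0, 1000)
g_fast = variogram(np.sin, xs, 0.1)             # rapidly varying response
g_slow = variogram(lambda x: 0.1 * x, xs, 0.1)  # nearly flat response
```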

  20. In-depth analysis of chloride treatments for thin-film CdTe solar cells

    PubMed Central

    Major, J. D.; Al Turkestani, M.; Bowen, L.; Brossard, M.; Li, C.; Lagoudakis, P.; Pennycook, S. J.; Phillips, L. J.; Treharne, R. E.; Durose, K.

    2016-01-01

    CdTe thin-film solar cells are now the main industrially established alternative to silicon-based photovoltaics. These cells remain reliant on the so-called chloride activation step in order to achieve high conversion efficiencies. Here, by comparison of effective and ineffective chloride treatments, we show the main role of the chloride process to be the modification of grain boundaries through chlorine accumulation, which leads to an increase in the carrier lifetime. It is also demonstrated that while improvements in fill factor and short-circuit current may be achieved through use of the ineffective chlorides, or indeed simple air annealing, voltage improvement is linked directly to chlorine incorporation at the grain boundaries. This suggests that a focus on improved or more controlled grain boundary treatments may provide a route to achieving higher cell voltages and thus efficiencies. PMID:27775037

  1. Efficient modification of the myostatin gene in porcine somatic cells and generation of knockout piglets.

    PubMed

    Rao, Shengbin; Fujimura, Tatsuya; Matsunari, Hitomi; Sakuma, Tetsushi; Nakano, Kazuaki; Watanabe, Masahito; Asano, Yoshinori; Kitagawa, Eri; Yamamoto, Takashi; Nagashima, Hiroshi

    2016-01-01

    Myostatin (MSTN) is a negative regulator of myogenesis, and disruption of its function causes increased muscle mass in various species. Here, we report the generation of MSTN-knockout (KO) pigs using genome editing technology combined with somatic-cell nuclear transfer (SCNT). Transcription activator-like effector nuclease (TALEN) with non-repeat-variable di-residue variations, called Platinum TALEN, was highly efficient in modifying genes in porcine somatic cells, which were then used for SCNT to create MSTN KO piglets. These piglets exhibited a double-muscled phenotype, possessing a higher body weight and longissimus muscle mass measuring 170% that of wild-type piglets, with double the number of muscle fibers. These results demonstrate that loss of MSTN increases muscle mass in pigs, which may help increase pork production for consumption in the future. © 2015 Wiley Periodicals, Inc.

  2. An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors

    PubMed Central

    Li, Jian; Wei, Xinguo; Zhang, Guangjun

    2017-01-01

    Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according to the predicted image motion are accessed using a catalog partition table to speed up tracking, a technique called star mapping. Software simulations and a night-sky experiment were performed to validate the efficiency and reliability of the proposed method. PMID:28825684
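The predict/update cycle at the core of such a tracker follows the standard EKF equations. The sketch below applies them to a generic linear toy system (position and rate), not to the paper's quaternion-plus-angular-velocity state model.

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One generic EKF cycle: f/h are the (possibly nonlinear) state and
    measurement maps, F/H their Jacobians (here constant)."""
    # predict
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # update with measurement z
    y = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy constant-velocity tracker: state = [position, rate]
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])
f = lambda x: F @ x
H = np.array([[1.0, 0.0]])
h = lambda x: H @ x
Q = 1e-4 * np.eye(2)
R = np.array([[0.01]])

x, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(50):
    z = np.array([1.0 * dt * (k + 1)])   # noiseless position measurements
    x, P = ekf_step(x, P, z, f, F, h, H, Q, R)
```

After 50 steps the filter has recovered the true position (5.0) and rate (1.0) from position-only measurements.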

  3. Superconducting thermoelectric generator

    DOEpatents

    Metzger, J.D.; El-Genk, M.S.

    1998-05-05

    An apparatus and method for producing electricity from heat is disclosed. The present invention is a thermoelectric generator that uses materials with substantially no electrical resistance, often called superconductors, to efficiently convert heat into electrical energy without resistive losses. Preferably, an array of superconducting elements is encased within a second material with a high thermal conductivity. The second material is preferably a semiconductor. Alternatively, the superconducting material can be doped on a base semiconducting material, or the superconducting material and the semiconducting material can exist as alternating, interleaved layers of waferlike materials. A temperature gradient imposed across the boundary of the two materials establishes an electrical potential related to the magnitude of the temperature gradient. The superconducting material carries the resulting electrical current at zero resistivity, thereby eliminating resistive losses. The elimination of resistive losses significantly increases the conversion efficiency of the thermoelectric device. 4 figs.

  4. Superconducting thermoelectric generator

    DOEpatents

    Metzger, J.D.; El-Genk, M.S.

    1996-01-01

    An apparatus and method for producing electricity from heat. The present invention is a thermoelectric generator that uses materials with substantially no electrical resistance, often called superconductors, to efficiently convert heat into electrical energy without resistive losses. Preferably, an array of superconducting elements is encased within a second material with a high thermal conductivity. The second material is preferably a semiconductor. Alternatively, the superconducting material can be doped on a base semiconducting material, or the superconducting material and the semiconducting material can exist as alternating, interleaved layers of waferlike materials. A temperature gradient imposed across the boundary of the two materials establishes an electrical potential related to the magnitude of the temperature gradient. The superconducting material carries the resulting electrical current at zero resistivity, thereby eliminating resistive losses. The elimination of resistive losses significantly increases the conversion efficiency of the thermoelectric device.

  5. Toward of a highly integrated probe for improving wireless network quality

    NASA Astrophysics Data System (ADS)

    Ding, Fei; Song, Aiguo; Wu, Zhenyang; Pan, Zhiwen; You, Xiaohu

    2016-10-01

    Quality of service and customer perception are central concerns of the telecommunications industry. This paper proposes a low-cost approach to acquiring terminal data from LTE networks using a Java-based soft probe. The soft probe supports fast calls through a referenced library and can be integrated into various Android-based applications to automatically monitor exception events in the network. Soft-probe-based acquisition of terminal data is low cost and can be applied at large scale. Experiments show that a soft probe can efficiently obtain terminal network data, from which the quality of service of LTE networks can be determined. This work contributes to efficient network optimization and the analysis of abnormal network events.

  6. An Extended Kalman Filter-Based Attitude Tracking Algorithm for Star Sensors.

    PubMed

    Li, Jian; Wei, Xinguo; Zhang, Guangjun

    2017-08-21

    Efficiency and reliability are key issues when a star sensor operates in tracking mode. In the case of high attitude dynamics, the performance of existing attitude tracking algorithms degenerates rapidly. In this paper an extended Kalman filtering-based attitude tracking algorithm is presented. The star sensor is modeled as a nonlinear stochastic system with the state estimate providing the three degree-of-freedom attitude quaternion and angular velocity. The star positions in the star image are predicted and measured to estimate the optimal attitude. Furthermore, all the cataloged stars observed in the sensor field-of-view according to the predicted image motion are accessed using a catalog partition table to speed up tracking, a technique called star mapping. Software simulations and a night-sky experiment were performed to validate the efficiency and reliability of the proposed method.

  7. Superconducting thermoelectric generator

    DOEpatents

    Metzger, John D.; El-Genk, Mohamed S.

    1998-01-01

    An apparatus and method for producing electricity from heat. The present invention is a thermoelectric generator that uses materials with substantially no electrical resistance, often called superconductors, to efficiently convert heat into electrical energy without resistive losses. Preferably, an array of superconducting elements is encased within a second material with a high thermal conductivity. The second material is preferably a semiconductor. Alternatively, the superconducting material can be doped on a base semiconducting material, or the superconducting material and the semiconducting material can exist as alternating, interleaved layers of waferlike materials. A temperature gradient imposed across the boundary of the two materials establishes an electrical potential related to the magnitude of the temperature gradient. The superconducting material carries the resulting electrical current at zero resistivity, thereby eliminating resistive losses. The elimination of resistive losses significantly increases the conversion efficiency of the thermoelectric device.

  8. The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.

    PubMed

    Pang, Haotian; Liu, Han; Vanderbei, Robert

    2014-02-01

    We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1-Minimization Estimator). Compared with existing packages for this problem, such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as a stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
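A single column of the CLIME estimator can be posed as an ordinary LP and handed to a general-purpose solver, which illustrates the problem fastclime addresses; fastclime instead traces the entire regularization path with the parametric simplex method. The Python formulation below is an illustrative stand-in for the R package: minimize ||b||_1 subject to ||S b - e_j||_inf <= lambda, with b split into nonnegative parts u - v.

```python
import numpy as np
from scipy.optimize import linprog

def clime_column(S, j, lam):
    """Solve column j of CLIME as an LP:
    min 1'(u + v)  s.t.  -lam <= S(u - v) - e_j <= lam,  u, v >= 0."""
    p = S.shape[0]
    e = np.zeros(p)
    e[j] = 1.0
    c = np.ones(2 * p)                      # objective: sum(u) + sum(v)
    A = np.vstack([np.hstack([S, -S]),      #  S b - e <= lam
                   np.hstack([-S, S])])     # -S b + e <= lam
    ub = np.concatenate([lam + e, lam - e])
    res = linprog(c, A_ub=A, b_ub=ub, bounds=[(0, None)] * (2 * p))
    u, v = res.x[:p], res.x[p:]
    return u - v

S = np.array([[1.0, 0.3], [0.3, 1.0]])      # toy sample covariance
b0 = clime_column(S, 0, lam=0.1)            # first column of the precision est.
```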

  9. dDocent: a RADseq, variant-calling pipeline designed for population genomics of non-model organisms.

    PubMed

    Puritz, Jonathan B; Hollenbeck, Christopher M; Gold, John R

    2014-01-01

    Restriction-site associated DNA sequencing (RADseq) has become a powerful and useful approach for population genomics. Currently, no software exists that utilizes both paired-end reads from RADseq data to efficiently produce population-informative variant calls, especially for non-model organisms with large effective population sizes and high levels of genetic polymorphism. dDocent is an analysis pipeline with a user-friendly, command-line interface designed to process individually barcoded RADseq data (with double cut sites) into informative SNPs/Indels for population-level analyses. The pipeline, written in BASH, uses data reduction techniques and other stand-alone software packages to perform quality trimming and adapter removal, de novo assembly of RAD loci, read mapping, SNP and Indel calling, and baseline data filtering. Double-digest RAD data from population pairings of three different marine fishes were used to compare dDocent with Stacks, the first generally available, widely used pipeline for analysis of RADseq data. dDocent consistently identified more SNPs shared across greater numbers of individuals and with higher levels of coverage. This is because dDocent quality-trims reads rather than filtering them, and incorporates both forward and reverse reads (including reads with INDEL polymorphisms) in assembly, mapping, and SNP calling. The pipeline and a comprehensive user guide can be found at http://dDocent.wordpress.com.

  10. dDocent: a RADseq, variant-calling pipeline designed for population genomics of non-model organisms

    PubMed Central

    Hollenbeck, Christopher M.; Gold, John R.

    2014-01-01

    Restriction-site associated DNA sequencing (RADseq) has become a powerful and useful approach for population genomics. Currently, no software exists that utilizes both paired-end reads from RADseq data to efficiently produce population-informative variant calls, especially for non-model organisms with large effective population sizes and high levels of genetic polymorphism. dDocent is an analysis pipeline with a user-friendly, command-line interface designed to process individually barcoded RADseq data (with double cut sites) into informative SNPs/Indels for population-level analyses. The pipeline, written in BASH, uses data reduction techniques and other stand-alone software packages to perform quality trimming and adapter removal, de novo assembly of RAD loci, read mapping, SNP and Indel calling, and baseline data filtering. Double-digest RAD data from population pairings of three different marine fishes were used to compare dDocent with Stacks, the first generally available, widely used pipeline for analysis of RADseq data. dDocent consistently identified more SNPs shared across greater numbers of individuals and with higher levels of coverage. This is because dDocent quality-trims reads rather than filtering them, and incorporates both forward and reverse reads (including reads with INDEL polymorphisms) in assembly, mapping, and SNP calling. The pipeline and a comprehensive user guide can be found at http://dDocent.wordpress.com. PMID:24949246

  11. Allele-specific copy-number discovery from whole-genome and whole-exome sequencing.

    PubMed

    Wang, WeiBo; Wang, Wei; Sun, Wei; Crowley, James J; Szatkiewicz, Jin P

    2015-08-18

    Copy-number variants (CNVs) are a major form of genetic variation and a risk factor for various human diseases, so it is crucial to accurately detect and characterize them. It is conceivable that allele-specific reads from high-throughput sequencing data could be leveraged to both enhance CNV detection and produce allele-specific copy number (ASCN) calls. Although statistical methods have been developed to detect CNVs using whole-genome sequence (WGS) and/or whole-exome sequence (WES) data, information from allele-specific read counts has not yet been adequately exploited. In this paper, we develop an integrated method, called AS-GENSENG, which incorporates allele-specific read counts in CNV detection and estimates ASCN using either WGS or WES data. To evaluate the performance of AS-GENSENG, we conducted extensive simulations, generated empirical data using existing WGS and WES data sets and validated predicted CNVs using an independent methodology. We conclude that AS-GENSENG not only predicts accurate ASCN calls but also improves the accuracy of total copy number calls, owing to its unique ability to exploit information from both total and allele-specific read counts while accounting for various experimental biases in sequence data. Our novel, user-friendly and computationally efficient method and a complete analytic protocol is freely available at https://sourceforge.net/projects/asgenseng/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
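The final allele-split step can be caricatured in one function: given an estimated total copy number and the B-allele read fraction at heterozygous sites, pick the nearest integer split. This is a deliberately naive stand-in; AS-GENSENG infers total and allele-specific copy numbers jointly while accounting for experimental biases in the read counts.

```python
def ascn(total_cn, baf):
    """Naive allele-specific copy-number split (illustrative only):
    round total_cn * B-allele fraction to the nearest B-allele count.
    Note: Python's round() uses banker's rounding at exact .5 ties."""
    b = round(total_cn * baf)
    return total_cn - b, b

# e.g. a 3-copy region with ~1/3 of reads carrying the B allele -> (2, 1)
split = ascn(3, 0.33)
```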

  12. Viewing the Impact of Shared Services through the Four Frames of Bolman and Deal

    ERIC Educational Resources Information Center

    Schumacher, Kyle A.

    2011-01-01

    On March 31, 2011, Governor Quinn of Illinois called for schools to consolidate in order to become more financially and administratively efficient. This call for massive school reform is not new. Although consolidation, or reducing the number of school districts to save administrative costs, seemed radical to some, the idea of sharing services to…

  13. Projection methods for line radiative transfer in spherical media.

    NASA Astrophysics Data System (ADS)

    Anusha, L. S.; Nagendra, K. N.

    An efficient numerical method called the Preconditioned Bi-Conjugate Gradient (Pre-BiCG) method is presented for the solution of the radiative transfer equation in spherical geometry. A variant of this method called the Stabilized Preconditioned Bi-Conjugate Gradient (Pre-BiCG-STAB) method is also presented. These methods are based on projections onto subspaces of the n-dimensional Euclidean space R^n called Krylov subspaces. The methods are shown to converge faster than contemporary iterative methods such as Jacobi, Gauss-Seidel, and Successive Over-Relaxation (SOR).
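    For readers unfamiliar with the stabilized variant, a minimal BiCGSTAB iteration (van der Vorst's method) can be sketched as follows. This is a generic, unpreconditioned sketch on a small dense system; the paper's preconditioner and radiative transfer operator are not reproduced.

```python
import numpy as np

# Minimal BiCGSTAB sketch (stabilized bi-conjugate gradient), shown on a
# plain dense matrix for clarity.

def bicgstab(A, b, x0=None, tol=1e-10, maxiter=200):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    r = b - A @ x
    r_hat = r.copy()                     # fixed shadow residual
    rho_old = alpha = omega = 1.0
    v = p = np.zeros(n)
    for _ in range(maxiter):
        rho = r_hat @ r
        beta = (rho / rho_old) * (alpha / omega)
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)
        s = r - alpha * v                # intermediate residual
        if np.linalg.norm(s) < tol:      # converged before stabilisation
            x = x + alpha * p
            break
        t = A @ s
        omega = (t @ s) / (t @ t)        # stabilisation step
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            break
        rho_old = rho
    return x

# Nonsymmetric, diagonally dominant test system.
A = np.array([[4.0, 1.0, 0.0], [2.0, 5.0, 1.0], [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
x = bicgstab(A, b)
print(np.allclose(A @ x, b))  # True
```

    In practice a preconditioner (as in Pre-BiCG-STAB) replaces A with a better-conditioned operator, which is what drives the convergence-rate advantage over Jacobi, Gauss-Seidel and SOR.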

  14. The diffusive finite state projection algorithm for efficient simulation of the stochastic reaction-diffusion master equation.

    PubMed

    Drawert, Brian; Lawson, Michael J; Petzold, Linda; Khammash, Mustafa

    2010-02-21

    We have developed a computational framework for accurate and efficient simulation of stochastic spatially inhomogeneous biochemical systems. The new computational method employs a fractional step hybrid strategy. A novel formulation of the finite state projection (FSP) method, called the diffusive FSP method, is introduced for the efficient and accurate simulation of diffusive transport. Reactions are handled by the stochastic simulation algorithm.
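    The stochastic simulation algorithm (Gillespie's direct method) that handles the reaction step can be sketched for a single well-mixed birth-death reaction. The rates and names below are illustrative, not from the paper, and the diffusive FSP coupling is not modelled.

```python
import random

# Sketch of the stochastic simulation algorithm (Gillespie's direct method)
# for a birth-death process: 0 -> X at rate k_birth, X -> 0 at rate k_death*x.

def ssa_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0, seed=1):
    random.seed(seed)
    t, x, trajectory = 0.0, x0, [(0.0, x0)]
    while t < t_end:
        a_birth = k_birth                 # propensity of 0 -> X
        a_death = k_death * x             # propensity of X -> 0
        a_total = a_birth + a_death
        # Time to the next reaction is exponential with rate a_total.
        t += random.expovariate(a_total)
        # Pick which reaction fires, proportional to its propensity.
        if random.random() * a_total < a_birth:
            x += 1
        else:
            x -= 1
        trajectory.append((t, x))
    return trajectory

traj = ssa_birth_death()
# Stationary mean of this process is k_birth / k_death = 100 molecules.
print(traj[-1])
```

    Each event requires sampling one waiting time and one reaction index, which is why spatial problems (many subvolumes, diffusion jumps as extra "reactions") motivate the hybrid fractional-step treatment described above.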

  15. Global Interconnectivity Between Mobile Satellite and Terrestrial Users: Call Signalling Issues and Challenges

    NASA Technical Reports Server (NTRS)

    Estabrook, Polly; Moon, Todd; Spade, Rob

    1996-01-01

    This paper will discuss some of the challenges in connecting mobile satellite users and mobile terrestrial users in a cost-efficient manner and with a grade of service comparable to that of satellite-to-fixed-user calls. Issues arising from the translation between the mobility management protocols resident at the satellite Earth station and those resident at cellular switches - either GSM (Groupe Spécial Mobile) or IS-41 (used by U.S. digital cellular systems) type - will be discussed. The impact of GSM call routing procedures on the call setup of a satellite call to a roaming GSM user will be described. Challenges facing the provision of seamless call handoff between satellite and cellular systems will be given. A summary of the issues explored in the paper is listed and future work outlined.

  16. Pairing call-response surveys and distance sampling for a mammalian carnivore

    USGS Publications Warehouse

    Hansen, Sara J. K.; Frair, Jacqueline L.; Underwood, Harold B.; Gibbs, James P.

    2015-01-01

    Density estimates accounting for differential animal detectability are difficult to acquire for wide-ranging and elusive species such as mammalian carnivores. Pairing distance sampling with call-response surveys may provide an efficient means of tracking changes in populations of coyotes (Canis latrans), a species of particular interest in the eastern United States. Blind field trials in rural New York State indicated 119-m linear error for triangulated coyote calls, and a 1.8-km distance threshold for call detectability, which was sufficient to estimate a detection function with precision using distance sampling. We conducted statewide road-based surveys with sampling locations spaced ≥6 km apart from June to August 2010. Each detected call (whether from a single animal or a group) counted as a single object, representing 1 territorial pair, because of uncertainty in the number of vocalizing animals. From 524 survey points and 75 detections, we estimated the probability of detecting a calling coyote to be 0.17 ± 0.02 SE, yielding a detection-corrected index of 0.75 pairs/10 km2 (95% CI: 0.52–1.1, 18.5% CV) for a minimum of 8,133 pairs across rural New York State. Importantly, we consider this an index rather than a true estimate of abundance given the unknown probability of coyote availability for detection during our surveys. Even so, pairing distance sampling with call-response surveys provided a novel, efficient, and noninvasive means of monitoring populations of wide-ranging and elusive, albeit reliably vocal, mammalian carnivores. Our approach offers an effective new means of tracking species like coyotes, one that is readily extendable to other species and geographic extents, provided key assumptions of distance sampling are met.
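    As a rough plausibility check on these figures, one can assume (unlike the fitted detection function used in the study) a uniform detection probability of 0.17 out to the 1.8-km truncation distance:

```python
import math

# Back-of-envelope check of the coyote density index using the figures
# reported in the abstract.  This flat-detection-probability assumption is a
# simplification; the published analysis fits a distance-sampling detection
# function, so this rough check is not expected to reproduce 0.75 exactly.

n_points     = 524            # survey points
n_detections = 75             # detected calling pairs
p_detect     = 0.17           # mean detection probability
w            = 1.8            # truncation radius, km

area_surveyed = n_points * math.pi * w ** 2          # km^2 effectively sampled
density_per_km2 = n_detections / (p_detect * area_surveyed)
print(round(10 * density_per_km2, 2), "pairs per 10 km^2")  # 0.83 pairs per 10 km^2
```

    The result, about 0.83 pairs/10 km^2, is of the same order as the reported 0.75 pairs/10 km^2; the difference is expected because the published analysis fits a distance-dependent detection function rather than applying a flat average.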

  17. Development of "Purple Endosperm Rice" by Engineering Anthocyanin Biosynthesis in the Endosperm with a High-Efficiency Transgene Stacking System.

    PubMed

    Zhu, Qinlong; Yu, Suize; Zeng, Dongchang; Liu, Hongmei; Wang, Huicong; Yang, Zhongfang; Xie, Xianrong; Shen, Rongxin; Tan, Jiantao; Li, Heying; Zhao, Xiucai; Zhang, Qunyu; Chen, Yuanling; Guo, Jingxing; Chen, Letian; Liu, Yao-Guang

    2017-07-05

    Anthocyanins have high antioxidant activities, and engineering of anthocyanin biosynthesis in staple crops, such as rice (Oryza sativa L.), could provide health-promoting foods. However, engineering metabolic pathways for biofortification remains difficult, and previous attempts to engineer anthocyanin production in rice endosperm failed because of the sophisticated genetic regulatory network of its biosynthetic pathway. In this study, we developed a high-efficiency vector system for transgene stacking and used it to engineer anthocyanin biosynthesis in rice endosperm. We made a construct containing eight anthocyanin-related genes (two regulatory genes from maize and six structural genes from Coleus) driven by endosperm-specific promoters, plus a selectable marker and a gene for marker excision. Transformation of rice with this construct generated a novel biofortified germplasm "Purple Endosperm Rice" (called "Zijingmi" in Chinese), which has high anthocyanin contents and antioxidant activity in the endosperm. This anthocyanin production results from expression of the transgenes and the resulting activation (or enhancement) of expression of 13 endogenous anthocyanin biosynthesis genes that are silenced or expressed at low levels in wild-type rice endosperm. This study provides an efficient, versatile toolkit for transgene stacking and demonstrates its use for successful engineering of a sophisticated biological pathway, suggesting the potential utility of this toolkit for synthetic biology and improvement of agronomic traits in plants. Copyright © 2017 The Author. Published by Elsevier Inc. All rights reserved.

  18. High-performance analysis of filtered semantic graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buluc, Aydin; Fox, Armando; Gilbert, John R.

    2012-01-01

    High performance is a crucial consideration when executing a complex analytic query on a massive semantic graph. In a semantic graph, vertices and edges carry "attributes" of various types. Analytic queries on semantic graphs typically depend on the values of these attributes; thus, the computation must either view the graph through a filter that passes only those individual vertices and edges of interest, or else must first materialize a subgraph or subgraphs consisting of only the vertices and edges of interest. The filtered approach is superior due to its generality, ease of use, and memory efficiency, but may carry a performance cost. In the Knowledge Discovery Toolbox (KDT), a Python library for parallel graph computations, the user writes filters in a high-level language, but those filters result in relatively low performance due to the bottleneck of having to call into the Python interpreter for each edge. In this work, we use the Selective Embedded JIT Specialization (SEJITS) approach to automatically translate filters defined by programmers into a lower-level efficiency language, bypassing the upcall into Python. We evaluate our approach by comparing it with the high-performance C++ /MPI Combinatorial BLAS engine, and show that the productivity gained by using a high-level filtering language comes without sacrificing performance.
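    The filtered-versus-materialized trade-off can be made concrete on a toy attributed graph. This sketch only illustrates the query semantics; KDT, SEJITS, and the per-edge Python upcall cost they address are not modelled, and all names and attributes are invented.

```python
# Two query strategies over an attributed graph stored as an adjacency dict:
# filter edges on the fly during traversal, or materialize the subgraph first.

graph = {
    "a": [("b", {"type": "cites", "year": 2011}),
          ("c", {"type": "authored", "year": 2009})],
    "b": [("c", {"type": "cites", "year": 2012})],
    "c": [],
}

def neighbors_filtered(g, v, keep):
    """Filtered view: apply the predicate per edge during traversal."""
    return [u for u, attrs in g[v] if keep(attrs)]

def materialize(g, keep):
    """Materialized subgraph: pay memory up front, traverse cheaply after."""
    return {v: [(u, a) for u, a in edges if keep(a)] for v, edges in g.items()}

recent_cites = lambda a: a["type"] == "cites" and a["year"] >= 2011

print(neighbors_filtered(graph, "a", recent_cites))      # ['b']
sub = materialize(graph, recent_cites)
print([u for u, _ in sub["b"]])                          # ['c']
```

    The filtered view never copies the graph, which is the memory advantage cited above; the cost is evaluating `keep` once per edge visited, which is exactly the per-edge interpreter upcall that SEJITS compiles away.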

  19. Variations in killer whale food-associated calls produced during different prey behavioural contexts.

    PubMed

    Samarra, Filipa I P

    2015-07-01

    Killer whales produce herding calls to increase herring school density, but previous studies suggested that these calls were made only when feeding upon spawning herring. Herring schools less densely when spawning than when overwintering; therefore, producing herding calls may be advantageous only when feeding upon less dense spawning schools. To investigate whether herding calls were produced across different prey behavioural contexts and whether structural variants occurred and correlated with prey behaviour, this study recorded killer whales feeding upon spawning and overwintering herring. Herding calls were produced by whales feeding on both spawning and overwintering herring; however, calls recorded during overwintering had significantly different duration and peak frequency from those recorded during spawning. Calls recorded in herring overwintering grounds were more variable and sometimes included nonlinear phenomena. Thus, herding calls were not produced exclusively when feeding upon spawning herring, likely because the call increases feeding efficiency regardless of herring school density or behaviour. Variations in herding call structure were observed between prey behavioural contexts and did not appear to be adapted to prey characteristics. Herding call structural variants may more likely result from individual or group variation than reflect properties of the food source. Copyright © 2015. Published by Elsevier B.V.

  20. The experiences of frequent users of crisis helplines: A qualitative interview study.

    PubMed

    Middleton, Aves; Gunn, Jane; Bassilios, Bridget; Pirkis, Jane

    2016-11-01

    To understand why some users call crisis helplines frequently. Nineteen semi-structured telephone interviews were conducted with callers to Lifeline Australia who reported calling 20 times or more in the past month and provided informed consent. Interviews were audio-recorded and transcribed verbatim. Inductive thematic analysis was used to generate common themes. Approval was granted by The University of Melbourne Human Research Ethics Committee. Three overarching themes emerged from the data and included reasons for calling, service response and calling behaviours. Respondents called seeking someone to talk to, help with their mental health issues and assistance with negative life events. When they called, they found short-term benefits in the unrestricted support offered by the helpline. Over time they called about similar issues and described reactive, support-seeking and dependent calling behaviours. Frequent users of crisis helplines call about ongoing issues. They have developed distinctive calling behaviours which appear to occur through an interaction between their reasons for calling and the response they receive from the helpline. The ongoing nature of the issues prompting frequent users to call suggests that a service model that includes a continuity of care component may be more efficient in meeting their needs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  1. Phase structure rewrite systems in information retrieval

    NASA Technical Reports Server (NTRS)

    Klingbiel, P. H.

    1985-01-01

    Operational level automatic indexing requires an efficient means of normalizing natural language phrases. Subject switching requires an efficient means of translating one set of authorized terms to another. A phrase structure rewrite system called a Lexical Dictionary is explained that performs these functions. Background, operational use, other applications and ongoing research are explained.

  2. From Taylor to Tyler to "No Child Left Behind": Legitimating Educational Standards

    ERIC Educational Resources Information Center

    Waldow, Florian

    2015-01-01

    In the early 20th century, proponents of the so-called "social efficiency movement" in the United States tried to apply methods and concepts for enhancing efficiency in industrial production to the organization of teaching and learning processes. This included the formulation of "educational standards" analogous to industrial…

  3. An imperialist competitive algorithm for virtual machine placement in cloud computing

    NASA Astrophysics Data System (ADS)

    Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza

    2017-05-01

    Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role in the resource utilisation and power efficiency of cloud computing environments. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem, called ICA-VMPLC. ICA is chosen as the base optimisation algorithm because of its ease of neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods, such as grouping genetic and ant colony-based algorithms, as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
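    One of the baselines mentioned, the bin-packing heuristic, is easy to sketch; minimizing the number of powered-on hosts is a common proxy for minimizing power consumption. This is not the ICA-VMPLC algorithm itself, the capacities and demands are invented, and a single CPU dimension stands in for the multi-resource case.

```python
# Baseline sketch for VM placement: first-fit-decreasing bin packing.
# Fewer active hosts is used as a proxy for lower power consumption.

def first_fit_decreasing(vm_demands, host_capacity):
    """Place VMs (one CPU-demand dimension, for brevity) onto identical hosts.

    Returns a list of hosts, each a list of the VM demands placed on it.
    """
    hosts = []                                   # one entry per powered-on host
    for demand in sorted(vm_demands, reverse=True):
        for host in hosts:                       # first host that still fits
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:                                    # no host fits: power on a new one
            hosts.append([demand])
    return hosts

vms = [5, 7, 3, 2, 4, 6, 1]                      # illustrative CPU demands
placement = first_fit_decreasing(vms, host_capacity=10)
print(len(placement))  # 3 hosts for a total demand of 28
```

    Metaheuristics such as ICA improve on this kind of greedy packing by searching placements that trade off several objectives (power and multi-resource wastage) at once rather than greedily filling one dimension.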

  4. Storying energy consumption: Collective video storytelling in energy efficiency social marketing.

    PubMed

    Gordon, Ross; Waitt, Gordon; Cooper, Paul; Butler, Katherine

    2018-05-01

    Despite calls for more socio-technical research on energy, there is little practical advice on how narratives collected through qualitative research may be melded with technical knowledge from the physical sciences, such as engineering, and then applied in energy efficiency social action strategies. This is despite established knowledge in the environmental management literature on domestic energy use regarding the utility of social practice theory and of narrative framings that socialise everyday consumption. Storytelling is positioned in this paper both as a focus for socio-technical energy research and as one practical tool that can arguably enhance energy efficiency interventions. We draw upon the literature on everyday social practices and storytelling to present a framework called 'collective video storytelling', which combines scientific and lay knowledge about domestic energy use to offer a practical tool for energy efficiency management. Collective video storytelling is discussed in the context of Energy+Illawarra, a 3-year cross-disciplinary collaboration between social marketers, human geographers, and engineers to target energy behavioural change within older low-income households in regional NSW, Australia. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. Generating high-speed dynamic running gaits in a quadruped robot using an evolutionary search.

    PubMed

    Krasny, Darren P; Orin, David E

    2004-08-01

    Over the past several decades, there has been a considerable interest in investigating high-speed dynamic gaits for legged robots. While much research has been published in both the biomechanics and engineering fields regarding the analysis of these gaits, no single study has adequately characterized the dynamics of high-speed running as can be achieved in a realistic, yet simple, robotic system. The goal of this paper is to find the most energy-efficient, natural, and unconstrained gallop that can be achieved using a simulated quadrupedal robot with articulated legs, asymmetric mass distribution, and compliant legs. For comparison purposes, we also implement the bound and canter. The model used here is planar, although we will show that it captures much of the predominant dynamic characteristics observed in animals. While it is not our goal to prove anything about biological locomotion, the dynamic similarities between the gaits we produce and those found in animals do indicate a similar underlying dynamic mechanism. Thus, we will show that achieving natural, efficient high-speed locomotion is possible even with a fairly simple robotic system. To generate the high-speed gaits, we use an efficient evolutionary algorithm called set-based stochastic optimization. This algorithm finds open-loop control parameters to generate periodic trajectories for the body. Several alternative methods are tested to generate periodic trajectories for the legs. The combined solutions found by the evolutionary search and the periodic-leg methods, over a range of speeds up to 10.0 m/s, reveal "biological" characteristics that are emergent properties of the underlying gaits.
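    The general shape of such an evolutionary search, mutating open-loop control parameters and keeping the fittest candidate, can be sketched as follows. This is a generic elitist evolution strategy on an invented quadratic stand-in for the gait-efficiency objective, not the paper's set-based stochastic optimization or its quadruped simulation.

```python
import random

# Generic evolutionary-search sketch: open-loop control parameters are
# mutated and selected on a fitness score.  The quadratic "energy cost"
# objective below is a stand-in for simulating and scoring one gait cycle.

def fitness(params):
    # Invented objective: best parameters are (1.0, 2.0, 3.0) by construction.
    target = (1.0, 2.0, 3.0)
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(n_params=3, pop=20, generations=300, sigma=0.5, decay=0.99, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(n_params)]
    for _ in range(generations):
        # Mutate the incumbent into a set of candidates; keep the fittest
        # (elitism: the incumbent itself is always a candidate).
        candidates = [best] + [
            [p + rng.gauss(0, sigma) for p in best] for _ in range(pop)
        ]
        best = max(candidates, key=fitness)
        sigma *= decay                   # anneal the mutation step size
    return best

solution = evolve()
print([round(p, 2) for p in solution])
```

    In the paper's setting the fitness evaluation is a full dynamic simulation of the gait, so the search's sample efficiency (how few candidate evaluations it needs) matters far more than in this toy example.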

  6. A compact high power Er:Yb:glass eyesafe laser for infrared remote sensing applications

    NASA Astrophysics Data System (ADS)

    Vitiello, Marco; Pizzarulli, Andrea; Ruffini, Andrea

    2010-10-01

    The key features and performance of a compact, lightweight, high-power Er3+:Yb3+ glass laser transmitter are reported. The theory employed to obtain an optimal design of the device is also described. In the free-running regime, high energies of about 15 mJ in 3 ms long pulses were obtained, with an optical efficiency close to 85%. When Q-switched by a Co:MALO crystal of carefully selected initial transmittance, a high peak power in excess of 500 kW was obtained with a pulse duration of about 9 ns, at an optical efficiency of 60%. The laser ran with no significant power losses at repetition rates up to 5 Hz, thanks to a carefully designed heat sink that allowed efficient conduction cooling of both the diode bars and the phosphate glass. The transmitter emits at a wavelength of 1535 nm in the so-called "eyesafe" region of the light spectrum, making it highly attractive for any application involving the risk of human injury, as is typically the case in remote sensing activities. Moreover, the spectral band around 1.5 µm corresponds to a peak in the atmospheric transmittance, making it more effective in adverse weather conditions than other wavelengths. The device has been successfully integrated into a rangefinder system, allowing reliable and precise detection of small targets at distances up to 20 km. Moreover, the transmitter was used in a state-of-the-art infrared laser illuminator for night vision, allowing even the recognition of a human being at distances in excess of 5 km.

  7. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system

    PubMed Central

    Schrode, Katrina M.; Bee, Mark A.

    2015-01-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male–male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. PMID:25617467

  8. A Time Study of Plastic Surgery Residents.

    PubMed

    Lau, Frank H; Sinha, Indranil; Jiang, Wei; Lipsitz, Stuart R; Eriksson, Elof

    2016-05-01

    Resident work hours are under scrutiny and have been subject to multiple restrictions. The studies supporting these changes have not included data on surgical residents. We studied the workday of a team of plastic surgery residents to establish prospective time-study data of plastic surgery (PRS) residents at a single tertiary-care academic medical center. Five trained research assistants observed all residents (n = 8) on a PRS service for 10 weeks and produced minute-by-minute activity logs. Data collection began when the team first met in the morning and continued until the resident being followed completed all non-call activities. We analyzed our data from 3 perspectives: 1) time spent in direct patient care (DPC), indirect patient care, and didactic activities; 2) time spent in high education-value activities (HEAs) versus low education-value activities (LEAs); and 3) resident efficiency. We defined HEAs as activities that surgeons must master; all other activities were LEAs. We quantified resident efficiency in terms of time fragmentation and time spent waiting. A total of 642.4 hours of data across 50 workdays were collected. Excluding call, residents worked an average of 64.2 hours per week. Approximately 50.7% of surgical resident time was allotted to DPC, with surgery accounting for the largest segment of this time (34.8%). Time spent on HEAs trended upward with higher resident level (P = 0.086). Time spent in surgery was significantly associated with higher resident levels (P < 0.0001); 57.7% of activities required 4 minutes or less, suggesting that resident work was highly fragmented. Residents spent 10.7% of their workdays waiting for other services. In this first time-study of PRS residents, we found that compared with medicine trainees, surgical residents spent 3.23 times more time on DPC. High education-value activities comprised most of our residents' workdays. Surgery was the leading component of both DPC and HEAs.
Our residents were highly efficient, but their work was highly fragmented, with the majority of activities requiring 4 minutes or less. Residents spent a large portion of their time waiting for other services. In light of these data, we suggest that future changes to residency programs be pilot tested, with preimplementation and postimplementation time studies performed to quantify the changes' impact.

  9. Semiconductor solar cells: Recent progress in terrestrial applications

    NASA Astrophysics Data System (ADS)

    Avrutin, V.; Izyumskaya, N.; Morkoç, H.

    2011-04-01

    In the last decade, the photovoltaic industry grew at a rate exceeding 30% per year. Currently, solar-cell modules based on single-crystal and large-grain polycrystalline silicon wafers comprise more than 80% of the market. Bulk Si photovoltaics, which benefits from the highly advanced growth and fabrication processes developed for the microelectronics industry, is a mature technology. The light-to-electric power conversion efficiency of the best modules offered on the market is over 20%. While there is still room for improvement, the device performance is approaching the thermodynamic limit of ~28% for single-junction Si solar cells. The major challenge that bulk Si solar cells face is, however, cost reduction. The potential for price reduction of electrical power generated by wafer-based Si modules is limited by the cost of bulk Si wafers, making the electrical power cost substantially higher than that generated by combustion of fossil fuels. One major strategy to bring down the cost of electricity generated by photovoltaic modules is thin-film solar cells, whose production does not require expensive semiconductor substrates and very high temperatures and thus allows decreasing the cost per unit area while retaining a reasonable efficiency. Thin-film solar cells based on amorphous, microcrystalline, and polycrystalline Si as well as cadmium telluride and copper indium diselenide compound semiconductors have already proved their commercial viability and their market share is increasing rapidly. Another avenue to reduce the cost of photovoltaic electricity is to increase the cell efficiency beyond the Shockley-Queisser limit. A variety of concepts proposed along this avenue forms the basis of the so-called third-generation photovoltaics technologies. Among these approaches, high-efficiency multi-junction solar cells based on III-V compound semiconductors, which initially found uses in space applications, are now being developed for terrestrial applications. 
In this article, we discuss the progress, outstanding problems, and environmental issues associated with bulk Si, thin-film, and high-efficiency multi-junction solar cells.

  10. New Detector Developments for Future UV Space Missions

    NASA Astrophysics Data System (ADS)

    Werner, Klaus; Kappelmann, Norbert

    Ultraviolet (UV) astronomy is facing “dark ages”: After the shutdown of the Hubble Space Telescope only the WSO/UV mission will be operable in the UV wavelength region with efficient instruments. Improved optics and detectors are necessary for future successor missions to tackle new scientific goals. This drives our development of microchannel plate (MCP) UV detectors with high quantum efficiency, high spatial resolution and low-power readout electronics. To enhance the quantum efficiency and the lifetime of the MCP detectors we are developing new cathodes and new anodes for these detectors. To achieve high quantum efficiency, we will use caesium-activated gallium nitride as semitransparent photocathodes, with a much higher efficiency than the default CsI/CsTe cathodes in this wavelength range. The new anodes will be cross-strip anodes with 64 horizontal and 64 vertical electrodes. This type of anode requires a lower gain and leads to an increased lifetime of the detector, compared to MCP detectors with other anode types. The heart of the newly developed front-end electronics for this type of anode is the so-called “BEETLE chip”, which was designed by the MPI für Kernphysik Heidelberg for the LHCb experiment at CERN. This chip provides 128 input channels with charge-sensitive preamplifiers and shapers. Our design of the complete front-end readout electronics enables a total power consumption of less than 10 W. The MCP detector is intrinsically solar blind, single photon counting and has a very low readout noise. To qualify this new type of detector we are presently planning to build a small UV telescope for use on the German Technology Experimental Carrier (TET). Furthermore, we are involved in the new German initiative for a Public Telescope, a space telescope equipped with an 80 cm mirror. One of the main instruments will be a high-resolution UV-Echelle Spectrograph that will be built by the University of Tübingen. 
The launch of this mission is scheduled for 2017.

  11. Church ownership and hospital efficiency.

    PubMed

    White, K R; Ozcan, Y A

    1996-01-01

    Using a sample of California hospitals, the effect of church ownership on nonprofit hospital efficiency was examined. Efficiency scores were computed using a nonparametric method called data envelopment analysis (DEA). Controlling for hospital size, location, system membership, and type of church ownership, church-owned hospitals were found to fall into the efficient category more frequently than their secular nonprofit counterparts. The outcomes have policy implications for reducing healthcare expenditures by focusing on increasing outputs or decreasing inputs, as appropriate, and bolster the case for church-sponsored hospitals to retain their tax-exempt status, given their ability to manage resources as efficiently as (or more efficiently than) secular hospitals.
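    In its simplest special case (one input, one output, constant returns to scale), the DEA efficiency score reduces to each unit's output/input ratio divided by the best observed ratio. The sketch below uses invented hospital names and figures; the study's actual DEA solves a linear program per hospital over multiple inputs and outputs.

```python
# DEA idea in its one-input, one-output, constant-returns-to-scale special
# case: CCR efficiency = (output/input) / best (output/input).  Hospital
# names and figures are made up for illustration.

hospitals = {                      # name: (FTE staff input, discharges output)
    "St. Mary's":   (250, 9000),
    "County Gen.":  (400, 12000),
    "Mercy":        (180, 7200),
}

ratios = {h: out / inp for h, (inp, out) in hospitals.items()}
best = max(ratios.values())
scores = {h: r / best for h, r in ratios.items()}

for h, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{h}: {s:.2f}")         # a score of 1.00 marks the efficient frontier
```

    With multiple inputs (beds, staff, expenses) and outputs (discharges, visits), each unit instead gets the weights most favorable to it via a linear program, which is what makes DEA nonparametric.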

  12. Portable parallel portfolio optimization in the Aurora Financial Management System

    NASA Astrophysics Data System (ADS)

    Laure, Erwin; Moritsch, Hans

    2001-07-01

    Financial planning problems are formulated as large-scale, stochastic, multiperiod, tree-structured optimization problems. An efficient technique for solving problems of this kind is the nested Benders decomposition method. In this paper we present a parallel, portable, asynchronous implementation of this technique. To achieve our portability goals we chose the programming language Java for our implementation and used a high-level Java-based framework, called OpusJava, for expressing the parallelism potential as well as synchronization constraints. Our implementation is embedded within a modular decision support tool for portfolio and asset liability management, the Aurora Financial Management System.

  13. Hybrid PolyLingual Object Model: An Efficient and Seamless Integration of Java and Native Components on the Dalvik Virtual Machine

    PubMed Central

    Huang, Yukun; Chen, Rong; Wei, Jingbo; Pei, Xilong; Cao, Jing; Prakash Jayaraman, Prem; Ranjan, Rajiv

    2014-01-01

    JNI in the Android platform is often observed with low efficiency and high coding complexity. Although many researchers have investigated the JNI mechanism, few of them solve the efficiency and the complexity problems of JNI in the Android platform simultaneously. In this paper, a hybrid polylingual object (HPO) model is proposed to allow a CAR object to be accessed as a Java object, and vice versa, in the Dalvik virtual machine. It is an acceptable substitute for JNI to reuse CAR-compliant components in Android applications in a seamless and efficient way. The metadata injection mechanism is designed to support the automatic mapping and reflection between CAR objects and Java objects. A prototype virtual machine, called HPO-Dalvik, is implemented by extending the Dalvik virtual machine to support the HPO model. Lifespan management, garbage collection, and data type transformation of HPO objects are also handled in the HPO-Dalvik virtual machine automatically. The experimental results show that the HPO model outperforms standard JNI, with lower overhead on the native side and better execution performance, and requires no JNI bridging code. PMID:25110745

  14. MinT: Middleware for Cooperative Interaction of Things

    PubMed Central

    Jeon, Soobin; Jung, Inbum

    2017-01-01

    This paper proposes an Internet of Things (IoT) middleware called Middleware for Cooperative Interaction of Things (MinT). MinT supports a fully distributed IoT environment in which IoT devices directly connect to peripheral devices, easily construct a local or global network, and share their data in an energy-efficient manner. MinT provides a sensor abstraction layer, a system layer and an interaction layer. These enable integrated sensing device operations, efficient resource management, and active interconnection between peripheral IoT devices. In addition, MinT provides a high-level API so that developers can build IoT devices easily. We aim to enhance the energy efficiency and performance of IoT devices through the performance improvements offered by MinT resource management and request processing. The experimental results show that the average request rate increased by 25% compared to Californium, a high-performance middleware for efficient interaction in IoT environments, that the average response time decreased by 90% when resource management was used, and that power consumption decreased by up to 68%. Finally, the proposed platform can reduce the latency and power consumption of IoT devices. PMID:28632182

  15. MinT: Middleware for Cooperative Interaction of Things.

    PubMed

    Jeon, Soobin; Jung, Inbum

    2017-06-20

    This paper proposes an Internet of Things (IoT) middleware called Middleware for Cooperative Interaction of Things (MinT). MinT supports a fully distributed IoT environment in which IoT devices directly connect to peripheral devices, easily construct a local or global network, and share their data in an energy-efficient manner. MinT provides a sensor abstraction layer, a system layer and an interaction layer. These enable integrated sensing device operations, efficient resource management, and active interconnection between peripheral IoT devices. In addition, MinT provides a high-level API to help developers build IoT devices easily. We aim to enhance the energy efficiency and performance of IoT devices through the performance improvements offered by MinT resource management and request processing. The experimental results show that the average request rate increased by 25% compared to Californium, a high-performance middleware for efficient interaction in IoT environments; the average response time decreased by 90% when resource management was used; and power consumption decreased by up to 68%. Finally, the proposed platform can reduce the latency and power consumption of IoT devices.

  16. Efficient color mixing through étendue conservation using freeform optics

    NASA Astrophysics Data System (ADS)

    Sorgato, Simone; Mohedano, Rubén; Chaves, Julio; Cvetkovic, Aleksandra; Hernández, Maikel; Benitez, Pablo; Miñano, Juan C.; Thienpont, Hugo; Duerr, Fabian

    2015-08-01

    Today's SSL illumination market shows a clear trend toward high-flux packages with higher efficiency and higher CRI, realized by means of multiple color chips and phosphors. Such light sources require the optics to provide both near- and far-field color mixing. This design problem is particularly challenging for collimated luminaires, since traditional diffusers cannot be employed without enlarging the exit aperture and reducing brightness. Furthermore, diffusers compromise the light output ratio (efficiency) of the lamps to which they are applied. A solution based on Köhler integration, consisting of a spherical cap comprising spherical microlenses on both its interior and exterior sides, was presented in 2012. The diameter of this so-called Shell-Mixer was 3 times that of the chip array footprint. This work presents a new version of the Shell-Mixer, based on the Edge Ray Principle and conservation of étendue, in which neither the outer shape of the cap nor the surfaces of the lenses are constrained to spheres or 2D Cartesian ovals. The new shell is freeform, only twice as large as the original chip array, and equals the original model in terms of color uniformity, brightness and efficiency.

  17. The effect of water-containing electrolyte on lithium-sulfur batteries

    NASA Astrophysics Data System (ADS)

    Wu, Heng-Liang; Haasch, Richard T.; Perdue, Brian R.; Apblett, Christopher A.; Gewirth, Andrew A.

    2017-11-01

    Dissolved polysulfides, formed during Li-S battery operation, freely migrate and react with both the Li anode and the sulfur cathode. These soluble polysulfides shuttle between the anode and cathode - the so-called shuttle effect - resulting in an infinite recharge process and poor Coulombic efficiency. In this study, water present as an additive in the Li-S battery electrolyte is found to reduce the shuttle effect in Li-S batteries. Batteries where the water content was below 50 ppm exhibited a substantial shuttle effect and low charge capacity. Alternatively, addition of 250 ppm water led to stable charge/discharge behavior with high Coulombic efficiency. XPS results show that H2O addition results in the formation of a solid electrolyte interphase (SEI) film with more LiOH on the Li anode, which protects the Li anode from the polysulfides. Batteries cycled without water form an SEI film with more Li2CO3, likely formed by direct contact between the Li metal and the solvent. Intermediate quantities of H2O in the electrolyte result in high cycle efficiency for the first few cycles, which then rapidly decays. This suggests that H2O is consumed during battery cycling, likely by interaction with freshly exposed Li metal formed during Li deposition.
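    The Coulombic efficiency discussed above is simply the ratio of discharge capacity to charge capacity per cycle; a minimal sketch, with purely illustrative capacity values (not data from the study):

    ```python
    def coulombic_efficiency(discharge_mah, charge_mah):
        """Coulombic efficiency: fraction of the charge capacity recovered on discharge."""
        return discharge_mah / charge_mah

    # A pronounced shuttle effect inflates charge capacity relative to discharge,
    # driving the efficiency down (values below are invented for illustration).
    ce_shuttling = coulombic_efficiency(discharge_mah=800.0, charge_mah=1600.0)
    ce_stable = coulombic_efficiency(discharge_mah=950.0, charge_mah=1000.0)
    ```

    A cell with a strong shuttle effect keeps drawing recharge current without storing it, so ce_shuttling comes out well below ce_stable.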

  18. Delay-Aware Energy-Efficient Routing towards a Path-Fixed Mobile Sink in Industrial Wireless Sensor Networks.

    PubMed

    Wu, Shaobo; Chou, Wusheng; Niu, Jianwei; Guizani, Mohsen

    2018-03-18

    Wireless sensor networks (WSNs) increasingly involve mobile elements as they become widespread in industry. Exploiting the mobility present in WSNs for data collection can effectively improve network performance. However, when the sink (i.e., data collector) path is fixed and the movement is uncontrollable, existing schemes fail to guarantee delay requirements while achieving high energy efficiency. This paper proposes a delay-aware energy-efficient routing algorithm for WSNs with a path-fixed mobile sink, named DERM, which can strike a desirable balance between delivery latency and energy conservation. We characterize the objective of DERM as realizing energy-optimal anycast to time-varying destination regions, and introduce a location-based forwarding technique tailored for this problem. To reduce the control overhead, a lightweight sink location calibration method is devised, which cooperates with a rough estimation based on the mobility pattern to determine the sink location. We also design a fault-tolerant mechanism called track routing to tackle location errors and ensure reliable and on-time data delivery. We comprehensively evaluate DERM by comparing it with two canonical routing schemes and a baseline solution presented in this work. Extensive evaluation results demonstrate that DERM can provide considerable energy savings while meeting the delay constraint and maintaining a high delivery ratio.
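    The location-based forwarding idea (each node relays toward the estimated, possibly moving, sink position) can be sketched as a greedy geographic rule. The abstract does not give DERM's exact rule; the function and node names below are hypothetical:

    ```python
    import math

    def next_hop(current, neighbors, sink_estimate):
        """Greedy geographic forwarding: pick the neighbor that makes the most
        progress toward the estimated sink location.

        current: (x, y) of this node; neighbors: {name: (x, y)}; sink_estimate: (x, y).
        """
        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        # Only consider neighbors strictly closer to the sink than we are.
        closer = [n for n in neighbors
                  if dist(neighbors[n], sink_estimate) < dist(current, sink_estimate)]
        if not closer:
            # Local minimum: a recovery mechanism (e.g. DERM's track routing)
            # would have to take over here.
            return None
        return min(closer, key=lambda n: dist(neighbors[n], sink_estimate))

    hop = next_hop((0.0, 0.0), {"a": (1.0, 1.0), "b": (3.0, 0.5)},
                   sink_estimate=(5.0, 0.0))  # "b" makes more progress
    ```

    In DERM the sink estimate itself is refreshed by the lightweight calibration method, so the target of this greedy step shifts over time.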

  19. Delay-Aware Energy-Efficient Routing towards a Path-Fixed Mobile Sink in Industrial Wireless Sensor Networks

    PubMed Central

    Wu, Shaobo; Chou, Wusheng; Niu, Jianwei; Guizani, Mohsen

    2018-01-01

    Wireless sensor networks (WSNs) increasingly involve mobile elements as they become widespread in industry. Exploiting the mobility present in WSNs for data collection can effectively improve network performance. However, when the sink (i.e., data collector) path is fixed and the movement is uncontrollable, existing schemes fail to guarantee delay requirements while achieving high energy efficiency. This paper proposes a delay-aware energy-efficient routing algorithm for WSNs with a path-fixed mobile sink, named DERM, which can strike a desirable balance between delivery latency and energy conservation. We characterize the objective of DERM as realizing energy-optimal anycast to time-varying destination regions, and introduce a location-based forwarding technique tailored for this problem. To reduce the control overhead, a lightweight sink location calibration method is devised, which cooperates with a rough estimation based on the mobility pattern to determine the sink location. We also design a fault-tolerant mechanism called track routing to tackle location errors and ensure reliable and on-time data delivery. We comprehensively evaluate DERM by comparing it with two canonical routing schemes and a baseline solution presented in this work. Extensive evaluation results demonstrate that DERM can provide considerable energy savings while meeting the delay constraint and maintaining a high delivery ratio. PMID:29562628

  20. Enhanced thermoelectric efficiency via orthogonal electrical and thermal conductances in phosphorene.

    PubMed

    Fei, Ruixiang; Faghaninia, Alireza; Soklaski, Ryan; Yan, Jia-An; Lo, Cynthia; Yang, Li

    2014-11-12

    Thermoelectric devices that utilize the Seebeck effect convert heat flow into electrical energy and are highly desirable for the development of portable, solid state, passively powered electronic systems. The conversion efficiencies of such devices are quantified by the dimensionless thermoelectric figure of merit (ZT), which is proportional to the ratio of a device's electrical conductance to its thermal conductance. In this paper, a recently fabricated two-dimensional (2D) semiconductor called phosphorene (monolayer black phosphorus) is assessed for its thermoelectric capabilities. First-principles and model calculations reveal not only that phosphorene possesses a spatially anisotropic electrical conductance, but that its lattice thermal conductance exhibits a pronounced spatial-anisotropy as well. The prominent electrical and thermal conducting directions are orthogonal to one another, enhancing the ratio of these conductances. As a result, ZT may reach the criterion for commercial deployment along the armchair direction of phosphorene at T = 500 K and is close to 1 even at room temperature given moderate doping (∼2 × 10^16 m^-2 or 2 × 10^12 cm^-2). Ultimately, phosphorene hopefully stands out as an environmentally sound thermoelectric material with unprecedented qualities. Intrinsically, it is a mechanically flexible material that converts heat energy with high efficiency at low temperatures (∼300 K), one whose performance does not require any sophisticated engineering techniques.
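    The figure of merit quoted above follows the standard definition ZT = S²σT/κ. The sketch below evaluates it separately along two orthogonal directions to show why it helps that the good electrical direction coincides with the poor thermal one; the numerical values are illustrative, not the paper's computed results:

    ```python
    def figure_of_merit(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, temp_k):
        """ZT = S^2 * sigma * T / kappa (dimensionless)."""
        return seebeck_v_per_k**2 * sigma_s_per_m * temp_k / kappa_w_per_mk

    # Illustrative anisotropic values: the armchair direction conducts electricity
    # well but heat poorly, while the orthogonal direction is the reverse.
    zt_armchair = figure_of_merit(2.5e-4, 8.0e4, 5.0, 500.0)
    zt_zigzag = figure_of_merit(2.5e-4, 2.0e4, 20.0, 500.0)
    ```

    With these numbers the armchair ZT is sixteen times the zigzag ZT, entirely from the factor σ/κ, which is the "orthogonal conductances" enhancement the title refers to.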

  1. Bridging the "green gap" of LEDs: giant light output enhancement and directional control of LEDs via embedded nano-void photonic crystals.

    PubMed

    Tsai, Yu-Lin; Liu, Che-Yu; Krishnan, Chirenjeevi; Lin, Da-Wei; Chu, You-Chen; Chen, Tzu-Pei; Shen, Tien-Lin; Kao, Tsung-Sheng; Charlton, Martin D B; Yu, Peichen; Lin, Chien-Chung; Kuo, Hao-Chung; He, Jr-Hau

    2016-01-14

    Green LEDs do not show the same level of performance as their blue and red cousins, a shortfall known as the "green gap" that greatly hinders solid-state lighting development. In this work, nano-void photonic crystals (NVPCs) were fabricated and embedded within GaN/InGaN green LEDs by using epitaxial lateral overgrowth (ELO) and nano-sphere lithography techniques. The NVPCs act as an efficient scattering back-reflector that outcouples the guided and downward photons, which not only boosts the light extraction efficiency of the LEDs by 78% but also collimates the view angle of the LEDs from 131.5° to 114.0°. This is attributed to the highly scattering nature of the NVPCs, which reduces the interference that gives rise to Fabry-Perot resonance. Moreover, due to threading dislocation suppression and strain relief by the NVPCs, the internal quantum efficiency was increased by 25% and the droop behavior was reduced from 37.4% to 25.9%. A light output power enhancement as high as 151% was achieved at a driving current of 350 mA. Giant light output enhancement and directional control via NVPCs point the way towards a promising avenue for solid-state lighting.

  2. Dynamic frame resizing with convolutional neural network for efficient video compression

    NASA Astrophysics Data System (ADS)

    Kim, Jaehwan; Park, Youngo; Choi, Kwang Pyo; Lee, JongSeok; Jeon, Sunyoung; Park, JeongHoon

    2017-09-01

    In the past, video codecs such as VC-1 and H.263 used a technique that encodes reduced-resolution video and restores the original resolution at the decoder to improve coding efficiency. The techniques in VC-1 and H.263 Annex Q are called dynamic frame resizing and reduced-resolution update mode, respectively. However, these techniques have not been widely used because their performance improvements are limited and appear only under specific conditions. In this paper, a video frame resizing (reduction/restoration) technique based on machine learning is proposed to improve coding efficiency. In the proposed method, the encoder produces low-resolution video using a convolutional neural network (CNN), and the decoder reconstructs the original resolution using a CNN. The proposed method shows improved subjective performance on the high-resolution videos that dominate current consumption. To assess the subjective quality of the proposed method, Video Multi-method Assessment Fusion (VMAF), which has shown high reliability among subjective measurement tools, was used as the subjective metric. Moreover, to assess general performance, diverse bitrates were tested. Experimental results showed that the BD-rate based on VMAF improved by about 51% compared to conventional HEVC. In particular, VMAF values improved significantly at low bitrates. The method also gave better subjective visual quality at a similar bit rate.
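    The reduce-encode-restore pipeline can be illustrated with simple average-pooling downscaling and nearest-neighbour upscaling standing in for the paper's encoder- and decoder-side CNNs (a toy proxy, not the proposed networks):

    ```python
    def downscale_2x(frame):
        """Average-pool a grayscale frame (list of rows) by 2x in each dimension,
        standing in for the encoder-side reduction network."""
        h, w = len(frame), len(frame[0])
        return [[(frame[y][x] + frame[y][x + 1] + frame[y + 1][x] + frame[y + 1][x + 1]) / 4
                 for x in range(0, w, 2)]
                for y in range(0, h, 2)]

    def upscale_2x(frame):
        """Nearest-neighbour upscale, standing in for the decoder-side
        restoration network."""
        out = []
        for row in frame:
            wide = [v for v in row for _ in (0, 1)]
            out.append(wide)
            out.append(list(wide))
        return out

    frame = [[0, 0, 8, 8],
             [0, 0, 8, 8],
             [4, 4, 4, 4],
             [4, 4, 4, 4]]
    restored = upscale_2x(downscale_2x(frame))  # back to the original 4x4 size
    ```

    The codec would compress the small frame between the two steps; the gain comes from spending bits on a quarter as many pixels, with the restoration network recovering detail that this crude nearest-neighbour stand-in cannot.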

  3. Efficient conceptual design for LED-based pixel light vehicle headlamps

    NASA Astrophysics Data System (ADS)

    Held, Marcel Philipp; Lachmayer, Roland

    2017-12-01

    High-resolution vehicle headlamps represent a future-oriented technology that can be used to increase traffic safety and driving comfort. As a further development to the current Matrix Beam headlamps, LED-based pixel light systems enable ideal lighting functions (e.g. projection of navigation information onto the road) to be activated in any given driving scenario. Moreover, compared to other light-modulating elements such as DMDs and LCDs, instantaneous LED on-off toggling provides a decisive advantage in efficiency. To generate highly individualized light distributions for automotive applications, a number of approaches using an LED array may be pursued. One approach is to vary the LED density in the array so as to output the desired light distribution. Another notable approach makes use of an equidistant arrangement of the individual LEDs together with distortion optics to formulate the desired light distribution. The optical system adjusts the light distribution in a manner that improves resolution and increases luminous intensity of the desired area. An efficient setup for pixel generation calls for one lens per LED. Taking into consideration the limited space requirements of the system, this implies that the luminous flux, efficiency and resolution image parameters are primarily controlled by the lens dimensions. In this paper a concept for an equidistant LED array arrangement utilizing distortion optics is presented. The paper is divided into two parts. The first part discusses the influence of lens geometry on the system efficiency whereas the second part investigates the correlation between resolution and luminous flux based on the lens dimensions.

  4. Manipulating the Electronic Excited State Energies of Pyrimidine-Based Thermally Activated Delayed Fluorescence Emitters To Realize Efficient Deep-Blue Emission.

    PubMed

    Komatsu, Ryutaro; Ohsawa, Tatsuya; Sasabe, Hisahiro; Nakao, Kohei; Hayasaka, Yuya; Kido, Junji

    2017-02-08

    The development of efficient and robust deep-blue emitters is one of the key issues in organic light-emitting devices (OLEDs) for environmentally friendly, large-area displays or general lighting. As a promising technology that realizes 100% conversion from electrons to photons, thermally activated delayed fluorescence (TADF) emitters have attracted considerable attention. However, only a handful of examples of deep-blue TADF emitters have been reported to date, and the emitters generally show large efficiency roll-off at practical luminance over several hundreds to thousands of cd m^-2, most likely because of the long delayed fluorescence lifetime (τ_d). To overcome this problem, we molecularly manipulated the electronic excited state energies of pyrimidine-based TADF emitters to realize deep-blue emission and reduced τ_d. We then systematically investigated the relationships among the chemical structure, properties, and device performances. The resultant novel pyrimidine emitters, called Ac-XMHPMs (X = 1, 2, and 3), contain different numbers of bulky methyl substituents at acceptor moieties, increasing the excited singlet (E_S) and triplet state (E_T) energies. Among them, Ac-3MHPM, with a high E_T of 2.95 eV, exhibited a high external quantum efficiency (η_ext,max) of 18% and an η_ext of 10% at 100 cd m^-2 with Commission Internationale de l'Eclairage chromaticity coordinates of (0.16, 0.15). These efficiencies are among the highest values to date for deep-blue TADF OLEDs. Our molecular design strategy provides fundamental guidance for designing novel deep-blue TADF emitters.

  5. On-call work and physicians' turnover intention: the moderating effect of job strain.

    PubMed

    Heponiemi, Tarja; Presseau, Justin; Elovainio, Marko

    2016-01-01

    Physician shortage and turnover are major problems worldwide. On-call duties may be among the risk factors of high turnover rates among physicians. We investigated whether having on-call duties is associated with physicians' turnover intention and whether job strain variables moderate this association. The present study was a cross-sectional questionnaire study among 3324 (61.6% women) Finnish physicians. The analyses were conducted using analyses of covariance adjusted for age, gender, response format, specialization status and employment sector. The results showed that job strain moderated the association between being on-call and turnover intention. The highest levels of turnover intention were among those who had on-call duties and high level of job strain characterized by high demands and low control opportunities. The lowest levels of turnover intention were among those who were not on-call and who had low strain involving low demands and high control. Also, job demands moderated the association between being on-call and turnover intention; turnover intention levels were higher among those with on-call duties and high demands than those being on-call and low demands. To conclude, working on-call was related to physicians' turnover intention particularly in those with high job strain. Health care organizations should focus more attention on working arrangements and scheduling of on-call work, provide a suitable working pace and implement means to increase physicians' participation and control over their job.

  6. Enhanced trigger for the NIFFTE fissionTPC in presence of high-rate alpha backgrounds

    NASA Astrophysics Data System (ADS)

    Bundgaard, Jeremy; Niffte Collaboration

    2015-10-01

    Nuclear physics and nuclear energy communities call for new, high precision measurements to improve existing fission models and design next generation reactors. The Neutron Induced Fission Fragment Tracking experiment (NIFFTE) has developed the fission Time Projection Chamber (fissionTPC) to measure neutron induced fission with unrivaled precision. The fissionTPC is annually deployed to the Weapons Neutron Research facility at Los Alamos Neutron Science Center where it operates with a neutron beam passing axially through the drift volume, irradiating heavy actinide targets to induce fission. The fissionTPC was developed at the Lawrence Livermore National Laboratory's TPC lab, where it measures spontaneous fission from radioactive sources to characterize detector response, improve performance, and evolve the design. To measure 244Cm, we've developed a fission trigger to reduce the data rate from alpha tracks while maintaining a high fission detection efficiency. In beam, alphas from 239Pu are a large background when detecting fission fragments; implementing the fission trigger will greatly reduce this background. The implementation of the cathode fission trigger in the fissionTPC will be presented along with a detailed study of its efficiency.

  7. FIB/SEM technology and high-throughput 3D reconstruction of dendritic spines and synapses in GFP-labeled adult-generated neurons.

    PubMed

    Bosch, Carles; Martínez, Albert; Masachs, Nuria; Teixeira, Cátia M; Fernaud, Isabel; Ulloa, Fausto; Pérez-Martínez, Esther; Lois, Carlos; Comella, Joan X; DeFelipe, Javier; Merchán-Pérez, Angel; Soriano, Eduardo

    2015-01-01

    The fine analysis of synaptic contacts is usually performed using transmission electron microscopy (TEM) and its combination with neuronal labeling techniques. However, the complex 3D architecture of neuronal samples calls for their reconstruction from serial sections. Here we show that focused ion beam/scanning electron microscopy (FIB/SEM) allows efficient, complete, and automatic 3D reconstruction of identified dendrites, including their spines and synapses, from GFP/DAB-labeled neurons, with a resolution comparable to that of TEM. We applied this technology to analyze the synaptogenesis of labeled adult-generated granule cells (GCs) in mice. 3D reconstruction of dendritic spines in GCs aged 3-4 and 8-9 weeks revealed two different stages of dendritic spine development and unexpected features of synapse formation, including vacant and branched dendritic spines and presynaptic terminals establishing synapses with up to 10 dendritic spines. Given the reliability, efficiency, and high resolution of FIB/SEM technology and the wide use of DAB in conventional EM, we consider FIB/SEM fundamental for the detailed characterization of identified synaptic contacts in neurons in a high-throughput manner.

  8. FIB/SEM technology and high-throughput 3D reconstruction of dendritic spines and synapses in GFP-labeled adult-generated neurons

    PubMed Central

    Bosch, Carles; Martínez, Albert; Masachs, Nuria; Teixeira, Cátia M.; Fernaud, Isabel; Ulloa, Fausto; Pérez-Martínez, Esther; Lois, Carlos; Comella, Joan X.; DeFelipe, Javier; Merchán-Pérez, Angel; Soriano, Eduardo

    2015-01-01

    The fine analysis of synaptic contacts is usually performed using transmission electron microscopy (TEM) and its combination with neuronal labeling techniques. However, the complex 3D architecture of neuronal samples calls for their reconstruction from serial sections. Here we show that focused ion beam/scanning electron microscopy (FIB/SEM) allows efficient, complete, and automatic 3D reconstruction of identified dendrites, including their spines and synapses, from GFP/DAB-labeled neurons, with a resolution comparable to that of TEM. We applied this technology to analyze the synaptogenesis of labeled adult-generated granule cells (GCs) in mice. 3D reconstruction of dendritic spines in GCs aged 3–4 and 8–9 weeks revealed two different stages of dendritic spine development and unexpected features of synapse formation, including vacant and branched dendritic spines and presynaptic terminals establishing synapses with up to 10 dendritic spines. Given the reliability, efficiency, and high resolution of FIB/SEM technology and the wide use of DAB in conventional EM, we consider FIB/SEM fundamental for the detailed characterization of identified synaptic contacts in neurons in a high-throughput manner. PMID:26052271

  9. Efficient experimental design of high-fidelity three-qubit quantum gates via genetic programming

    NASA Astrophysics Data System (ADS)

    Devra, Amit; Prabhu, Prithviraj; Singh, Harpreet; Arvind; Dorai, Kavita

    2018-03-01

    We have designed efficient quantum circuits for the three-qubit Toffoli (controlled-controlled-NOT) and the Fredkin (controlled-SWAP) gate, optimized via genetic programming methods. The gates thus obtained were experimentally implemented on a three-qubit NMR quantum information processor, with a high fidelity. Toffoli and Fredkin gates in conjunction with the single-qubit Hadamard gates form a universal gate set for quantum computing and are an essential component of several quantum algorithms. Genetic algorithms are stochastic search algorithms based on the logic of natural selection and biological genetics and have been widely used for quantum information processing applications. We devised a new selection mechanism within the genetic algorithm framework to select individuals from a population. We call this mechanism the "Luck-Choose" mechanism and were able to achieve faster convergence to a solution using this mechanism, as compared to existing selection mechanisms. The optimization was performed under the constraint that the experimentally implemented pulses are of short duration and can be implemented with high fidelity. We demonstrate the advantage of our pulse sequences by comparing our results with existing experimental schemes and other numerical optimization methods.
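    The abstract names a custom "Luck-Choose" selection mechanism but does not specify it. For context, the sketch below shows the standard fitness-proportional (roulette-wheel) selection that such mechanisms are typically compared against; the function and individual names are mine, not the authors':

    ```python
    import random

    def roulette_select(population, fitnesses, rng):
        """Classic fitness-proportional selection: an individual is chosen with
        probability proportional to its fitness (here, e.g., a gate fidelity)."""
        total = sum(fitnesses)
        pick = rng.uniform(0.0, total)
        acc = 0.0
        for individual, fit in zip(population, fitnesses):
            acc += fit
            if pick <= acc:
                return individual
        return population[-1]  # guard against floating-point round-off

    rng = random.Random(7)
    chosen = roulette_select(["circ_a", "circ_b", "circ_c"], [0.90, 0.99, 0.50], rng)
    ```

    A modified selection rule changes only this step of the genetic loop; the fitness evaluation (pulse fidelity under the short-duration constraint) stays the same.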

  10. Quick Vegas: Improving Performance of TCP Vegas for High Bandwidth-Delay Product Networks

    NASA Astrophysics Data System (ADS)

    Chan, Yi-Cheng; Lin, Chia-Liang; Ho, Cheng-Yuan

    An important issue in designing a TCP congestion control algorithm is that it should allow the protocol to quickly adjust the end-to-end communication rate to the bandwidth on the bottleneck link. However, the TCP congestion control may function poorly in high bandwidth-delay product networks because of its slow response with large congestion windows. In this paper, we propose an enhanced version of TCP Vegas called Quick Vegas, in which we present an efficient congestion window control algorithm for a TCP source. Our algorithm improves the slow-start and congestion avoidance techniques of original Vegas. Simulation results show that Quick Vegas significantly improves the performance of connections as well as remaining fair when the bandwidth-delay product increases.
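    Original TCP Vegas adjusts its congestion window from the gap between expected and actual throughput; the sketch below shows that baseline rule (Quick Vegas modifies these slow-start and congestion-avoidance updates, whose exact form the abstract does not give):

    ```python
    def vegas_update(cwnd, base_rtt, current_rtt, alpha=1.0, beta=3.0):
        """One congestion-avoidance step of classic TCP Vegas.
        diff estimates the number of extra segments queued in the network."""
        expected = cwnd / base_rtt      # throughput if the path were empty
        actual = cwnd / current_rtt     # measured throughput
        diff = (expected - actual) * base_rtt
        if diff < alpha:
            return cwnd + 1             # too little queued: grow linearly
        if diff > beta:
            return cwnd - 1             # too much queued: back off
        return cwnd                     # within [alpha, beta]: hold steady

    print(vegas_update(cwnd=10, base_rtt=0.1, current_rtt=0.1))  # no queueing -> 11
    ```

    The one-segment-per-RTT growth visible here is exactly why Vegas reacts slowly with large windows on high bandwidth-delay product paths, which is the behavior Quick Vegas targets.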

  11. High-performance multiprocessor architecture for a 3-D lattice gas model

    NASA Technical Reports Server (NTRS)

    Lee, F.; Flynn, M.; Morf, M.

    1991-01-01

    The lattice gas method has recently emerged as a promising discrete particle simulation method in areas such as fluid dynamics. We present a very high-performance scalable multiprocessor architecture, called ALGE, proposed for the simulation of a realistic 3-D lattice gas model, Henon's 24-bit FCHC isometric model. Each of these VLSI processors is as powerful as a CRAY-2 for this application. ALGE is scalable in the sense that it achieves linear speedup for both fixed and increasing problem sizes with more processors. The core computation of a lattice gas model consists of many repetitions of two alternating phases: particle collision and propagation. Functional decomposition by symmetry group and virtual move are the respective keys to efficient implementation of collision and propagation.
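    The alternating collision/propagation structure described above can be illustrated with a toy one-dimensional lattice gas, where each site holds at most one left-moving and one right-moving particle (a didactic model, far simpler than the 24-bit FCHC model ALGE implements):

    ```python
    def step(right, left):
        """One lattice-gas timestep on a periodic 1D lattice: collide, then propagate.
        right[i]/left[i] are occupation bits of the two velocity channels at site i."""
        n = len(right)
        # Collision phase: in 1D, a head-on pair at one site swaps directions,
        # which for indistinguishable particles is the same as passing through,
        # so this toy collision leaves the state unchanged (mass and momentum
        # are conserved either way).
        # Propagation phase: shift each channel one site in its direction of motion.
        new_right = [right[(i - 1) % n] for i in range(n)]
        new_left = [left[(i + 1) % n] for i in range(n)]
        return new_right, new_left

    right, left = [1, 0, 0, 0], [0, 0, 1, 0]
    right, left = step(right, left)  # both movers advance one site toward each other
    ```

    The hardware mapping in ALGE exploits exactly this split: collisions are purely local (one site at a time), while propagation is a fixed communication pattern between neighboring sites.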

  12. Clustering high dimensional data using RIA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aziz, Nazrina

    2015-05-15

    Clustering may simply represent a convenient method for organizing a large data set so that it can easily be understood and information can efficiently be retrieved. However, identifying clusters in high-dimensionality data sets is a difficult task because of the curse of dimensionality. Another challenge in clustering is that some traditional functions cannot capture the pattern dissimilarity among objects. In this article, we used an alternative dissimilarity measurement called Robust Influence Angle (RIA) in the partitioning method. RIA is developed using the eigenstructure of the covariance matrix and robust principal component scores. We notice that it can obtain clusters easily and hence avoids the curse of dimensionality. It also manages to cluster large data sets with mixed numeric and categorical values.
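    The abstract does not give RIA's formula. As an illustration of the general kind of angle-based dissimilarity it describes, the sketch below measures the angle between two observation vectors; this is a generic stand-in, not the published RIA definition:

    ```python
    import math

    def angle_dissimilarity(x, y):
        """Angle (radians) between two observation vectors: 0 for parallel
        profiles, pi/2 for orthogonal ones. Unlike Euclidean distance, it is
        insensitive to the overall magnitude of each vector."""
        dot = sum(a * b for a, b in zip(x, y))
        nx = math.sqrt(sum(a * a for a in x))
        ny = math.sqrt(sum(b * b for b in y))
        # Clamp to [-1, 1] to guard acos against round-off.
        return math.acos(max(-1.0, min(1.0, dot / (nx * ny))))

    print(round(angle_dissimilarity([1.0, 0.0], [0.0, 1.0]), 4))  # 1.5708 (pi/2)
    ```

    RIA additionally robustifies this idea through the covariance eigenstructure and robust principal component scores, per the abstract.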

  13. Multiresolution molecular mechanics: Implementation and efficiency

    NASA Astrophysics Data System (ADS)

    Biyikli, Emre; To, Albert C.

    2017-01-01

    Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. The speed-up of the MMM method is shown to be directly proportional to the reduction in the number of atoms visited in force computation. Finally, an adaptive MMM analysis of a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3-8.5 times in efficiency over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.

  14. Delivery of dsRNA through topical feeding for RNA interference in the citrus sap piercing-sucking hemipteran, Diaphorina citri.

    PubMed

    Killiny, Nabil; Kishk, Abdelaziz

    2017-06-01

    RNA interference (RNAi) is a powerful means to study functional genomics in insects. The delivery of dsRNA is a challenging step in the development of an RNAi assay. Here, we describe a new delivery method to increase the effectiveness of RNAi in the Asian citrus psyllid Diaphorina citri. Bromophenol blue droplets were topically applied to fifth-instar nymphs and adults on the ventral side of the thorax between the three pairs of legs. In addition to video recordings that showed sucking of the bromophenol blue by the stylets, dissected guts turned blue, indicating that the uptake was through feeding. Thus, we called the method topical feeding. We targeted the abnormal wing disc gene (awd), also called nucleoside diphosphate kinase (NDPK), as a reporter gene to prove the uptake of dsRNA via this method of delivery. Our results showed that dsRNA-awd caused reduction of awd expression and nymph mortality. Survival and lifespan were affected both in adults that emerged from treated nymphs and in treated adults. Silencing awd caused wing malformation in the adults that emerged from treated nymphs. Topical feeding as a delivery route for dsRNA is highly efficient for both nymphs and adults. The described method could be used to increase the efficiency of RNAi in D. citri and other sap piercing-sucking hemipterans. © 2017 Wiley Periodicals, Inc.

  15. Bovine somatic cell nuclear transfer.

    PubMed

    Ross, Pablo J; Cibelli, Jose B

    2010-01-01

    Somatic cell nuclear transfer (SCNT) is a technique by which the nucleus of a differentiated cell is introduced into an oocyte from which its genetic material has been removed by a process called enucleation. In mammals, the reconstructed embryo is artificially induced to initiate embryonic development (activation). The oocyte turns the somatic cell nucleus into an embryonic nucleus. This process is called nuclear reprogramming and involves an important change of cell fate, by which the somatic cell nucleus becomes capable of generating all the cell types required for the formation of a new individual, including extraembryonic tissues. Therefore, after transfer of a cloned embryo to a surrogate mother, an offspring genetically identical to the animal from which the somatic cells were isolated is born. Cloning by nuclear transfer has potential applications in agriculture and biomedicine but is limited by low efficiency. Cattle were the second mammalian species to be cloned after Dolly the sheep, and they are probably the most widely used species for SCNT experiments. This is due, in part, to the high availability of bovine oocytes and the relatively higher efficiency levels usually obtained in cattle. Given the wide utilization of this species for cloning, several alternatives to this basic protocol can be found in the literature. Here we describe a basic protocol for bovine SCNT currently being used in our laboratory, which is amenable to the use of the nuclear transplantation technique for research or commercial purposes.

  16. Aerodynamic Performance Measurements for a Forward Swept Low Noise Fan

    NASA Technical Reports Server (NTRS)

    Fite, E. Brian

    2006-01-01

    One source of noise in high-tip-speed turbofan engines, caused by shocks, is called multiple pure tone (MPT) noise. A new fan, called the Quiet High Speed Fan (QHSF), showed reduced noise over the part-speed operating range, where MPTs occur. The QHSF showed improved performance in most respects relative to a baseline fan; however, a part-speed instability discovered during testing reduced the operating range below acceptable limits. The measured QHSF adiabatic efficiency on the fixed-nozzle acoustic operating line was 85.1 percent, versus 82.9 percent for the baseline fan, a 2.2-percentage-point improvement. The operating-line pressure rise at design-point rotational speed and mass flow was 1.764 for the QHSF and 1.755 for the baseline fan. Weight flow at design-point speed was 98.28 lbm/sec for the QHSF and 97.97 lbm/sec for the baseline fan. The operability margin for the QHSF approached 0 percent at the 75-percent-speed operating condition. The baseline fan maintained sufficient margin throughout the operating range, as expected. Based on the stage aerodynamic measurements, this concept shows promise for improved performance over current technology if the operability limitations can be solved.

  17. The hidden genomic landscape of acute myeloid leukemia: subclonal structure revealed by undetected mutations

    PubMed Central

    Bodini, Margherita; Ronchini, Chiara; Giacò, Luciano; Russo, Anna; Melloni, Giorgio E. M.; Luzi, Lucilla; Sardella, Domenico; Volorio, Sara; Hasan, Syed K.; Ottone, Tiziana; Lavorgna, Serena; Lo-Coco, Francesco; Candoni, Anna; Fanin, Renato; Toffoletti, Eleonora; Iacobucci, Ilaria; Martinelli, Giovanni; Cignetti, Alessandro; Tarella, Corrado; Bernard, Loris; Pelicci, Pier Giuseppe

    2015-01-01

    The analyses carried out using 2 different bioinformatics pipelines (SomaticSniper and MuTect) on the same set of genomic data from 133 acute myeloid leukemia (AML) patients, sequenced within the Cancer Genome Atlas project, gave discrepant results. We subsequently tested these 2 variant-calling pipelines on 20 leukemia samples from our series (19 primary AMLs and 1 secondary AML). By validating many of the predicted somatic variants (variant allele frequencies ranging from 100% to 5%), we observed significantly different calling efficiencies. In particular, despite relatively high specificity, sensitivity was poor in both pipelines, resulting in a high rate of false negatives. Our findings raise the possibility that the landscapes of AML genomes might be more complex than previously reported and characterized by the presence of hundreds of genes mutated at low variant allele frequency, suggesting that the application of genome sequencing to the clinic requires a careful and critical evaluation. We think that improvements in technology and workflow standardization, through the generation of clear experimental and bioinformatics guidelines, are fundamental to translate the use of next-generation sequencing from research to the clinic and to transform genomic information into better diagnosis and outcomes for the patient. PMID:25499761
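    The disagreement between two callers on the same data can be quantified by treating each caller's output as a set of sites and comparing both against a validated truth set. The sketch below is a hypothetical illustration (the coordinates and call sets are invented, not from the paper) of how sensitivity and caller overlap are computed:

```python
def caller_sensitivity(called, truth):
    """Fraction of true variant sites recovered by a caller."""
    called, truth = set(called), set(truth)
    return len(called & truth) / len(truth)

# Invented (chromosome, position) sites for illustration only.
truth = {("chr1", 100), ("chr1", 2000), ("chr7", 55), ("chr17", 7577)}
pipeline_a = {("chr1", 100), ("chr7", 55)}                  # misses 2 true sites
pipeline_b = {("chr1", 100), ("chr17", 7577), ("chr9", 1)}  # 1 false positive

print(caller_sensitivity(pipeline_a, truth))  # 0.5
print(caller_sensitivity(pipeline_b, truth))  # 0.5
print(pipeline_a & pipeline_b)                # {('chr1', 100)}
```

    Even with identical sensitivity, the two call sets can overlap on only a fraction of sites, which is exactly the kind of discrepancy the abstract reports.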

  18. Removing Fats, Oils and Greases from Grease Trap by Hybrid AOPs (Ozonation and Sonication)

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Michal Piotr; Satoh, Saburoh; Yamabe, Chobei; Ihara, Satoshi; Nieda, Masanori

    The purpose of this study was to investigate the electrical energy requirements of environmental applications using AOPs (advanced oxidation processes) combining ozonation and sonication to remove FOG (fats, oils and greases) from sewage-system wastewater. This study focused on FOG removal from a grease trap using the hybrid AOPs. Fatty acids (linoleic, oleic, stearic and palmitic acids) were used as representative standards of FOG. The studies were conducted experimentally in a glass reactor under various operational conditions. The oxidation efficiency of the combination of ozonation and sonication was determined by the KI dosimetry method and the calorimetry method. Fatty acid concentrations were measured by GC/MS. A local reaction field of high temperature and high pressure, the so-called hot spot, is generated by the quasi-adiabatic collapse of bubbles produced in water under sonication, a phenomenon called cavitation. Mixing ozone bubbles into the water under acoustic cavitation increased the formation of OH radicals. Mechanical effects of acoustic cavitation, such as microstreaming and shock waves, influence the probability of reactions of ozone and radicals with fatty acids.

  19. Comparing architectural solutions of IPT application SDKs utilizing H.323 and SIP

    NASA Astrophysics Data System (ADS)

    Keskinarkaus, Anja; Korhonen, Jani; Ohtonen, Timo; Kilpelanaho, Vesa; Koskinen, Esa; Sauvola, Jaakko J.

    2001-07-01

    This paper presents two approaches to efficient service development for Internet Telephony. The first approach considers services ranging from core call-signaling features and media control, as specified in ITU-T's H.323, to end-user services that support user interaction. The second approach builds on IETF's SIP protocol. We compare the two from differing architectural perspectives and from the standpoint of economical network and terminal development, and propose efficient architecture models for both protocols. The main design criteria were component independence, lightweight operation and portability in heterogeneous end-to-end environments. In the proposed architecture, the vertical division of call-signaling and streaming-media control logic allows the components to be used either individually or in combination, depending on the level of functionality required by an application.

  20. ACFA 2020 - An FP7 project on active control of flexible fuel efficient aircraft configurations

    NASA Astrophysics Data System (ADS)

    Maier, R.

    2013-12-01

    This paper gives an overview of the project ACFA 2020, which is funded by the European Commission within the 7th framework program. The acronym ACFA 2020 stands for Active Control for Flexible Aircraft 2020. The project deals with the design of highly fuel-efficient aircraft configurations and, in particular, with innovative active control concepts aimed at reducing loads and structural weight. The major focus lies on blended wing body (BWB) aircraft. Blended-wing-body aircraft configurations are seen as the most promising future concept for fulfilling the so-called ACARE (Advisory Council for Aeronautics Research in Europe) Vision 2020 goals with regard to reducing fuel consumption and external noise. The paper discusses in some detail the overall goals and how they are addressed in the work plan. Furthermore, the major achievements of the project are outlined and a short outlook on the remaining work is given.

  1. Radioisotope Power System Pool Concept

    NASA Technical Reports Server (NTRS)

    Rusick, Jeffrey J.; Bolotin, Gary S.

    2015-01-01

    Advanced Radioisotope Power Systems (RPS) for NASA deep space science missions have historically used static thermoelectric-based designs because they are highly reliable and their radioisotope heat sources can be passively cooled throughout the mission life cycle. Recently, a significant effort to develop a dynamic RPS, the Advanced Stirling Radioisotope Generator (ASRG), was conducted by NASA and the Department of Energy, because Stirling-based designs offer energy conversion efficiencies four times higher than heritage thermoelectric designs; this efficiency would proportionately reduce the amount of radioisotope fuel needed for the same power output. However, the long-term reliability of a Stirling-based design is a concern compared to thermoelectric designs, because for certain Stirling system architectures the radioisotope heat sources must be actively cooled via the dynamic operation of Stirling converters throughout the mission life cycle. To address this reliability concern, a new dynamic Stirling-cycle RPS architecture is proposed, called the RPS Pool Concept.

  2. Aggregative Learning Method and Its Application for Communication Quality Evaluation

    NASA Astrophysics Data System (ADS)

    Akhmetov, Dauren F.; Kotaki, Minoru

    2007-12-01

    In this paper, the so-called Aggregative Learning Method (ALM) is proposed to improve and simplify the learning and classification abilities of different data-processing systems. It provides a universal basis for the design and analysis of a wide class of mathematical models. A procedure was elaborated for time-series model reconstruction and analysis in both linear and nonlinear cases. Data approximation accuracy (during the learning phase) and data classification quality (during the recall phase) are estimated from the introduced statistical parameters. The validity and efficiency of the proposed approach have been demonstrated through its application to the monitoring of wireless communication quality, namely for a Fixed Wireless Access (FWA) system. The procedure was shown to require little memory and few computational resources, especially in the data classification (recall) stage. Characterized by high computational efficiency and a simple decision-making procedure, the derived approaches can be useful for simple and reliable real-time surveillance and control system design.

  3. Unconventional Hamilton-type variational principle in phase space and symplectic algorithm

    NASA Astrophysics Data System (ADS)

    Luo, En; Huang, Weijiang; Zhang, Hexin

    2003-06-01

    Using a novel approach proposed by Luo, the unconventional Hamilton-type variational principle in phase space for the elastodynamics of multi-degree-of-freedom systems is established in this paper. It not only fully characterizes the initial-value problem of these dynamics, but also has a natural symplectic structure. Based on this variational principle, a symplectic algorithm called the symplectic time-subdomain method is proposed. A non-difference scheme is constructed by applying Lagrange interpolation polynomials to the time subdomain. Furthermore, it is proved that the presented symplectic algorithm is unconditionally stable. The results of two numerical examples of different types show that the accuracy and computational efficiency of the new method clearly exceed those of the widely used Wilson-θ and Newmark-β methods. Therefore, the new algorithm is a highly efficient one with better computational performance.
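    The practical advantage of a symplectic integrator is long-time stability of the conserved energy. The sketch below is not the paper's time-subdomain method; it is a minimal standard illustration, using a harmonic oscillator, of why a symplectic update (here symplectic Euler) keeps the energy bounded while a non-symplectic one (explicit Euler) lets it drift:

```python
import math

def explicit_euler(q, p, dt, steps, w=1.0):
    """Non-symplectic: both updates use the old state; energy grows."""
    for _ in range(steps):
        q, p = q + dt * p, p - dt * w * w * q
    return q, p

def symplectic_euler(q, p, dt, steps, w=1.0):
    """Symplectic: momentum first, then position with the new momentum."""
    for _ in range(steps):
        p = p - dt * w * w * q
        q = q + dt * p
    return q, p

def energy(q, p, w=1.0):
    return 0.5 * p * p + 0.5 * w * w * q * q

q0, p0, dt, n = 1.0, 0.0, 0.01, 10000   # exact energy is 0.5
print(energy(*explicit_euler(q0, p0, dt, n)))    # drifts well above 0.5
print(energy(*symplectic_euler(q0, p0, dt, n)))  # stays near 0.5
```

    The same bounded-energy behavior is what makes symplectic schemes attractive for long elastodynamic simulations.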

  4. Spectroscopic investigation of the Cr to Tm energy transfer in Yttrium Aluminum Garnet (YAG) crystals

    NASA Technical Reports Server (NTRS)

    Dibartolo, B.

    1988-01-01

    New and interesting schemes have recently been considered for the efficient operation of solid-state ionic laser systems. Often the available data on these systems were obtained only because they seemed directly related to the laser performance, and they provide no insight into the physical processes. A more systematic approach is desirable, in which more attention is devoted to the elementary basic processes and to the nature of the mechanisms at work. It is with this aim that we have undertaken the present study. Yttrium Aluminum Garnet (Y3Al5O12), called YAG, has two desirable properties as a host for rare-earth impurities: (1) trivalent rare-earth ions can replace yttrium without any charge-compensation problem, and (2) YAG crystals have high cutoff energies. The results of measurements and calculations indicate that the Cr(3+) ion in YAG can be used to efficiently sensitize the Tm(3+) ion.

  5. Recent developments in terahertz sensing technology

    NASA Astrophysics Data System (ADS)

    Shur, Michael

    2016-05-01

    Terahertz technology has found numerous applications in the detection of biological and chemical hazardous agents, medical diagnostics, detection of explosives, providing security in buildings, airports, and other public spaces, short-range covert communications (in the THz and sub-THz windows), and radio astronomy and space research. The expansion of these applications will depend on the development of efficient electronic terahertz sources and sensitive low-noise terahertz detectors. Schottky diode frequency multipliers have emerged as a viable THz source technology reaching a few THz. High-speed three-terminal electronic devices (FETs and HBTs) have entered the THz range (with cutoff frequencies and maximum frequencies of operation above 1 THz). A new approach called plasma wave electronics recently demonstrated efficient terahertz detection in GaAs-based and GaN-based HEMTs and in Si MOS, SOI, FinFETs and FET arrays. This progress in THz electronic technology holds promise for a significant expansion of THz applications.

  6. Preparation of Chitosan Coated Magnetic Hydroxyapatite Nanoparticles and Application for Adsorption of Reactive Blue 19 and Ni2+ Ions

    PubMed Central

    Nguyen, Van Cuong; Pho, Quoc Hue

    2014-01-01

    An adsorbent called chitosan-coated magnetic hydroxyapatite nanoparticles (CS-MHAP) was prepared by coprecipitation to improve the removal of Ni2+ ions and textile dye. The structure and properties of CS-MHAP were characterized by scanning electron microscopy (SEM), X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), and vibrating sample magnetometry (VSM). The weight percent of chitosan was determined by thermal gravimetric analysis (TGA). The prepared CS-MHAP shows a significant improvement in the removal efficiency of Ni2+ ions and reactive blue 19 dye (RB19) in comparison with chitosan and with magnetic hydroxyapatite nanoparticles alone. Moreover, the adsorption capacities were affected by several parameters, such as contact time, initial concentration, adsorbent dosage, and initial pH. Interestingly, the prepared adsorbent could easily be recovered from aqueous solution with an external magnet and reused for adsorption with high removal efficiency. PMID:24592158

  7. MetaBAT, an efficient tool for accurately reconstructing single genomes from complex microbial communities

    DOE PAGES

    Kang, Dongwan D.; Froula, Jeff; Egan, Rob; ...

    2015-01-01

    Grouping large genomic fragments assembled from shotgun metagenomic sequences to deconvolute complex microbial communities, or metagenome binning, enables the study of individual organisms and their interactions. Because of the complex nature of these communities, existing metagenome binning methods often miss a large number of microbial species. In addition, most of the tools are not scalable to large datasets. Here we introduce automated software called MetaBAT that integrates empirical probabilistic distances of genome abundance and tetranucleotide frequency for accurate metagenome binning. MetaBAT outperforms alternative methods in accuracy and computational efficiency on both synthetic and real metagenome datasets. Lastly, it automatically forms hundreds of high-quality genome bins from a very large assembly consisting of millions of contigs in a matter of hours on a single node. MetaBAT is open-source software and available at https://bitbucket.org/berkeleylab/metabat.
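    One of the two signals MetaBAT combines is the tetranucleotide frequency profile of each contig. The sketch below is a simplified illustration of computing that 256-dimensional profile; it is not MetaBAT's implementation, and the empirical probabilistic distance the tool actually uses is more involved:

```python
from itertools import product

def tetranucleotide_freq(seq):
    """Normalized tetranucleotide (4-mer) frequency vector of a contig."""
    kmers = ["".join(p) for p in product("ACGT", repeat=4)]  # 256 tetramers
    counts = {k: 0 for k in kmers}
    for i in range(len(seq) - 3):          # slide a 4-base window
        k = seq[i:i + 4]
        if k in counts:                    # skip windows with N or gaps
            counts[k] += 1
    total = sum(counts.values()) or 1
    return [counts[k] / total for k in kmers]

vec = tetranucleotide_freq("ACGTACGTACGTAAAA")
print(len(vec))  # 256
```

    Contigs from the same genome tend to have similar profiles, so a distance between such vectors (combined with abundance) drives the binning.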

  8. In-depth analysis of chloride treatments for thin-film CdTe solar cells

    DOE PAGES

    Major, J. D.; Al Turkestani, M.; Bowen, L.; ...

    2016-10-24

    CdTe thin-film solar cells are now the main industrially established alternative to silicon-based photovoltaics. These cells remain reliant on the so-called chloride activation step in order to achieve high conversion efficiencies. Here, by comparison of effective and ineffective chloride treatments, we show the main role of the chloride process to be the modification of grain boundaries through chlorine accumulation, which leads to an increase in the carrier lifetime. It is also demonstrated that, while improvements in fill factor and short-circuit current may be achieved through use of the ineffective chlorides, or indeed simple air annealing, voltage improvement is linked directly to chlorine incorporation at the grain boundaries. Lastly, this suggests that a focus on improved or more controlled grain-boundary treatments may provide a route to achieving higher cell voltages and thus efficiencies.

  9. Conceptual search in electronic patient record.

    PubMed

    Baud, R H; Lovis, C; Ruch, P; Rassinoux, A M

    2001-01-01

    Search by content in a large corpus of free texts in the medical domain is, today, only partially solved. The so-called GREP approach (Global Regular Expression Print), based on highly efficient string-matching techniques, is subject to inherent limitations, especially its inability to recognize domain-specific knowledge. Such methods oblige the user to formulate his or her query in a logical, Boolean style; if this constraint is not fulfilled, the results are poor. The authors present an enhancement to string-matching search by adding a light conceptual model behind the word lexicon. The new system accepts any sentence as a query and radically improves the quality of the results. Efficiency in execution time is obtained at the expense of implementing advanced indexing algorithms in a pre-processing phase. The method is described and discussed, and a brief account of the results illustrates this paper.
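    The gap between plain string matching and a lexicon-backed conceptual model can be shown in a few lines. The sketch below is a hypothetical illustration, not the authors' system: the mini-lexicon, its entries, and the function names are all assumptions made for the example:

```python
import re

# Hypothetical mini-lexicon (assumption for illustration): a concept
# mapped to its known surface forms.
LEXICON = {
    "myocardial infarction": ["myocardial infarction", "heart attack", "MI"],
}

def grep_search(query, documents):
    """Plain string matching: synonyms are invisible to it."""
    pattern = re.compile(r"\b" + re.escape(query) + r"\b", re.IGNORECASE)
    return [d for d in documents if pattern.search(d)]

def conceptual_search(query, documents):
    """Expand the query into all known surface forms before matching."""
    forms = LEXICON.get(query.lower(), [query])
    patterns = [re.compile(r"\b" + re.escape(f) + r"\b", re.IGNORECASE)
                for f in forms]
    return [d for d in documents if any(p.search(d) for p in patterns)]

docs = ["Patient admitted after a heart attack.",
        "History of myocardial infarction in 1998."]
print(len(grep_search("myocardial infarction", docs)))        # 1
print(len(conceptual_search("myocardial infarction", docs)))  # 2
```

    The conceptual variant recalls the record that uses a synonym, which string matching alone cannot do.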

  10. Flexo-photovoltaic effect.

    PubMed

    Yang, Ming-Min; Kim, Dong Jik; Alexe, Marin

    2018-05-25

    It is highly desirable to discover photovoltaic mechanisms that enable enhanced efficiency of solar cells. Here we report that the bulk photovoltaic effect, which is free from the thermodynamic Shockley-Queisser limit but usually manifested only in noncentrosymmetric (piezoelectric or ferroelectric) materials, can be realized in any semiconductor, including silicon, by mediation of flexoelectric effect. We used either an atomic force microscope or a micrometer-scale indentation system to introduce strain gradients, thus creating very large photovoltaic currents from centrosymmetric single crystals of strontium titanate, titanium dioxide, and silicon. This strain gradient-induced bulk photovoltaic effect, which we call the flexo-photovoltaic effect, functions in the absence of a p-n junction. This finding may extend present solar cell technologies by boosting the solar energy conversion efficiency from a wide pool of established semiconductors. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  11. Adaptive Gait Control for a Quadruped Robot on 3D Path Planning

    NASA Astrophysics Data System (ADS)

    Igarashi, Hiroshi; Kakikura, Masayoshi

    A legged walking robot can not only move on irregular terrain but also change its posture; for example, the robot can pass under overhead obstacles by crouching. The purpose of our research is to realize efficient path planning with a quadruped robot. This mobility allows path planning to be extended into three dimensions. However, several issues with quadruped robots, namely instability, workspace limitation, deadlock and slippage, complicate such an application. In order to address these issues and reinforce mobility, a new static gait pattern for a quadruped robot, called TFG (Trajectory Following Gait), is proposed. The TFG aims to obtain high controllability, like that of a wheeled robot. Additionally, the TFG allows the robot to change its posture during the walk. In this paper, experimental results show that the TFG addresses these issues and is suitable for efficient locomotion in three-dimensional environments.

  12. Efficient steady-state solver for hierarchical quantum master equations

    NASA Astrophysics Data System (ADS)

    Zhang, Hou-Dao; Qiao, Qin; Xu, Rui-Xue; Zheng, Xiao; Yan, YiJing

    2017-07-01

    Steady states play pivotal roles in many equilibrium and non-equilibrium open system studies. Their accurate evaluations call for exact theories with rigorous treatment of system-bath interactions. Therein, the hierarchical equations-of-motion (HEOM) formalism is a nonperturbative and non-Markovian quantum dissipation theory, which can faithfully describe the dissipative dynamics and nonlinear response of open systems. Nevertheless, solving the steady states of open quantum systems via HEOM is often a challenging task, due to the vast number of dynamical quantities involved. In this work, we propose a self-consistent iteration approach that quickly solves the HEOM steady states. We demonstrate its high efficiency with accurate and fast evaluations of low-temperature thermal equilibrium of a model Fenna-Matthews-Olson pigment-protein complex. Numerically exact evaluation of thermal equilibrium Rényi entropies and stationary emission line shapes is presented with detailed discussion.
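    The self-consistent iteration idea can be illustrated on a much smaller problem than the HEOM hierarchy. The sketch below is a stand-in, not the paper's solver: it finds the steady state of a toy three-state classical rate equation dP/dt = KP (an assumed model) by fixed-point iteration, which is the same "iterate until the state stops changing" pattern:

```python
import numpy as np

# Toy rate matrix K (columns sum to zero, so probability is conserved).
# The steady state P satisfies K @ P = 0.
K = np.array([[-0.2,  0.1,  0.0],
              [ 0.2, -0.3,  0.4],
              [ 0.0,  0.2, -0.4]])

P = np.full(3, 1.0 / 3.0)      # uniform initial guess
dt = 0.5
for _ in range(2000):          # iterate P <- P + dt * K @ P to convergence
    P = P + dt * (K @ P)
    P /= P.sum()               # keep the probabilities normalized

print(np.allclose(K @ P, 0.0, atol=1e-8))  # True: steady state reached
```

    The HEOM case is far larger (one such vector per auxiliary density operator), which is why an efficient iteration scheme matters.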

  13. LSAH: a fast and efficient local surface feature for point cloud registration

    NASA Astrophysics Data System (ADS)

    Lu, Rongrong; Zhu, Feng; Wu, Qingxiao; Kong, Yanzi

    2018-04-01

    Point cloud registration is a fundamental task in high-level three-dimensional applications. Noise, uneven point density and varying point cloud resolutions are the three main challenges for point cloud registration. In this paper, we design a robust and compact local surface descriptor called the Local Surface Angles Histogram (LSAH) and propose an effective coarse-to-fine algorithm for point cloud registration. The LSAH descriptor is formed by concatenating five normalized sub-histograms into one histogram; each sub-histogram is created by accumulating a different type of angle from a local surface patch. The experimental results show that our LSAH is more robust to uneven point density and varying point cloud resolutions than four state-of-the-art local descriptors in terms of feature matching. Moreover, we tested our LSAH-based coarse-to-fine algorithm for point cloud registration. The experimental results demonstrate that our algorithm is robust and efficient as well.
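    The concatenate-normalized-sub-histograms construction is easy to sketch. The code below is a generic illustration of that pattern, not the LSAH implementation: which five angles are measured and the bin count are assumptions here (the abstract does not specify them):

```python
import numpy as np

def histogram_descriptor(angle_sets, bins=8):
    """Concatenate one normalized histogram per angle type into a descriptor.

    `angle_sets` stands in for the five angle types LSAH accumulates from
    a local surface patch; the 8-bin count is an assumption.
    """
    parts = []
    for angles in angle_sets:
        hist, _ = np.histogram(angles, bins=bins, range=(0.0, np.pi))
        hist = hist.astype(float)
        s = hist.sum()
        parts.append(hist / s if s else hist)   # normalize each sub-histogram
    return np.concatenate(parts)

rng = np.random.default_rng(0)
five_angle_sets = [rng.uniform(0.0, np.pi, 50) for _ in range(5)]
desc = histogram_descriptor(five_angle_sets)
print(desc.shape)  # (40,) = 5 sub-histograms x 8 bins
```

    Normalizing each sub-histogram separately is what makes such a descriptor insensitive to the absolute number of points in the patch, and hence to point density and resolution.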

  14. Enhancing photovoltaic output power by 3-band spectrum-splitting and concentration using a diffractive micro-optic

    DOE PAGES

    Mohammad, Nabil; Wang, Peng; Friedman, Daniel J.; ...

    2014-09-17

    We report the enhancement of photovoltaic output power by separating the incident spectrum into 3 bands and concentrating these bands onto 3 different photovoltaic cells. The spectrum splitting and concentration are achieved via a thin, planar micro-optical element that demonstrates high optical efficiency over the entire spectrum of interest. The optic (which we call a polychromat) was designed using a modified version of the direct-binary-search algorithm and fabricated using grayscale lithography. Rigorous optical characterization demonstrates excellent agreement with simulation results. Electrical characterization of the solar cells made from GaInP, GaAs and Si indicates increases in the peak output power density of 43.63%, 30.84% and 30.86%, respectively, compared to normal operation without the polychromat. This represents an overall increase of 35.52% in output power density. As a result, the potential for cost-effective large-area manufacturing and for high system efficiencies makes our approach a strong candidate for low-cost solar power.

  15. The radiation gas detectors with novel nanoporous converter for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Zarei, H.; Saramad, S.

    2018-02-01

    Many efforts have been made to improve the quantum efficiency (QE) of position-sensitive gas detectors. For energetic X-rays, imaging systems usually consist of a bulk converter and a gas amplification region, but bulk converters have an inherent limitation: the converter thickness should be increased to achieve greater detection efficiency, yet in that case the chance of the photoelectrons escaping is reduced. To overcome this limitation, a new type of converter, called a nanoporous converter, such as an Anodic Aluminum Oxide (AAO) membrane with a higher surface-to-volume ratio, is proposed. According to simulation results with the GATE code, for this nanoporous converter with 1 mm thickness and an inter-pore distance of 627 nm, for 20-100 keV X-ray energies at a reasonable gas pressure and various pore diameters, the QE can be one order of magnitude greater than that of bulk converters. This is a new approach to high-QE position-sensitive gas detectors for medical imaging applications and also for high-energy physics.

  16. Coevolution at protein complex interfaces can be detected by the complementarity trace with important impact for predictive docking

    PubMed Central

    Madaoui, Hocine; Guerois, Raphaël

    2008-01-01

    Protein surfaces are under significant selection pressure to maintain interactions with their partners throughout evolution. Capturing how selection pressure acts at the interfaces of protein–protein complexes is a fundamental issue with high interest for the structural prediction of macromolecular assemblies. We tackled this issue under the assumption that, throughout evolution, mutations should minimally disrupt the physicochemical compatibility between specific clusters of interacting residues. This constraint drove the development of the so-called Surface COmplementarity Trace in Complex History score (SCOTCH), which was found to discriminate with high efficiency the structure of biological complexes. SCOTCH performances were assessed not only with respect to other evolution-based approaches, such as conservation and coevolution analyses, but also with respect to statistically based scoring methods. Validated on a set of 129 complexes of known structure exhibiting both permanent and transient intermolecular interactions, SCOTCH appears as a robust strategy to guide the prediction of protein–protein complex structures. Of particular interest, it also provides a basic framework to efficiently track how protein surfaces could evolve while keeping their partners in contact. PMID:18511568

  17. Fast algorithm of adaptive Fourier series

    NASA Astrophysics Data System (ADS)

    Gao, You; Ku, Min; Qian, Tao

    2018-05-01

    Adaptive Fourier decomposition (AFD, precisely 1-D AFD or Core-AFD) originated with the goal of positive-frequency representations of signals. It achieved that goal and at the same time offered fast decompositions of signals. Several types of AFD subsequently arose. AFD merged with the greedy-algorithm idea and, in particular, motivated the so-called pre-orthogonal greedy algorithm (Pre-OGA), which was proven to be the most efficient greedy algorithm. The cost of the advantages of the AFD-type decompositions is, however, high computational complexity, due to the maximal selections of the dictionary parameters. The present paper offers a formulation of the 1-D AFD algorithm that builds the FFT algorithm into it. Accordingly, the algorithm complexity is reduced from the original $\mathcal{O}(MN^2)$ to $\mathcal{O}(MN\log_2 N)$, where $N$ denotes the number of discretization points on the unit circle and $M$ denotes the number of points in $[0,1)$. This greatly enhances the applicability of AFD. Experiments are carried out to show the high efficiency of the proposed algorithm.
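    The $N^2 \to N\log_2 N$ reduction follows the standard pattern of replacing $N$ separate inner-product evaluations with one FFT. The sketch below is not the AFD algorithm itself; it illustrates the generic mechanism (an assumption of this example) on circular cross-correlation, where all $N$ shifted inner products are obtained at once via the convolution theorem:

```python
import numpy as np

def correlations_naive(signal, kernel):
    """All N circular correlations by direct summation: O(N^2)."""
    N = len(signal)
    return np.array([sum(signal[(n + m) % N] * kernel[m] for m in range(N))
                     for n in range(N)])

def correlations_fft(signal, kernel):
    """Same N values via the convolution theorem: O(N log N)."""
    X = np.fft.fft(signal)
    Y = np.fft.fft(kernel)
    return np.real(np.fft.ifft(X * np.conj(Y)))

x = np.random.default_rng(1).standard_normal(256)
y = np.random.default_rng(2).standard_normal(256)
print(np.allclose(correlations_naive(x, y), correlations_fft(x, y)))  # True
```

    In AFD the quantities being batch-evaluated are the maximal-selection criteria over the dictionary parameters, but the cost structure of the speedup is the same.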

  18. iClimate: a climate data and analysis portal

    NASA Astrophysics Data System (ADS)

    Goodman, P. J.; Russell, J. L.; Merchant, N.; Miller, S. J.; Juneja, A.

    2015-12-01

    We will describe a new climate data and analysis portal called iClimate that facilitates direct comparisons between available climate observations and climate simulations. Modeled after the successful iPlant Collaborative Discovery Environment (www.iplantcollaborative.org) that allows plant scientists to trade and share environmental, physiological and genetic data and analyses, iClimate provides an easy-to-use platform for large-scale climate research, including the storage, sharing, automated preprocessing, analysis and high-end visualization of large and often disparate observational and model datasets. iClimate will promote data exploration and scientific discovery by providing: efficient and high-speed transfer of data from nodes around the globe (e.g. PCMDI and NASA); standardized and customized data/model metrics; efficient subsampling of datasets based on temporal period, geographical region or variable; and collaboration tools for sharing data, workflows, analysis results, and data visualizations with collaborators or with the community at large. We will present iClimate's capabilities, and demonstrate how it will simplify and enhance the ability to do basic or cutting-edge climate research by professionals, laypeople and students.

  19. Additive Manufacturing/Diagnostics via the High Frequency Induction Heating of Metal Powders: The Determination of the Power Transfer Factor for Fine Metallic Spheres

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rios, Orlando; Radhakrishnan, Balasubramaniam; Caravias, George

    2015-03-11

    Grid Logic Inc. is developing a method for sintering and melting fine metallic powders for additive manufacturing using spatially compact, high-frequency magnetic fields, called Micro-Induction Sintering (MIS). One of the challenges in advancing MIS technology for additive manufacturing is understanding the power transfer to the particles in a powder bed. This knowledge is important to achieving the efficient power transfer, control, and selective particle heating during the MIS process needed for commercialization of the technology. The project's work provided a rigorous physics-based model for the induction heating of fine spherical particles as a function of frequency and particle size. This simulation improved upon Grid Logic's earlier models and provides guidance that will make the MIS technology more effective. The project model will be incorporated into Grid Logic's power-control circuit of the MIS 3D printer product and its diagnostics technology to optimize the sintering process for part quality and energy efficiency.

  20. Waiting time effect of a GM type orifice pulse tube refrigerator

    NASA Astrophysics Data System (ADS)

    Zhu, Shaowei; Kakimi, Yasuhiro; Matsubara, Yoichi

    In a typical GM-type orifice pulse tube refrigerator, there are two short periods in each cycle during which both the high-pressure valve and the low-pressure valve are closed. We call each short period the `waiting time'. With a long waiting time, the pressure differences across the high-pressure valve and the low-pressure valve are decreased, so the pressure-difference loss is reduced. Thus, the cooling capacity and the efficiency are increased, and the no-load temperature is decreased. The mechanism of the waiting time is discussed with numerical analysis and verified by experiments. Experiments show that there is an optimum waiting time for each of the no-load temperature, the cooling capacity and the efficiency. A no-load temperature of 40.3 K was achieved with a 90° waiting time. A cooling capacity of 58 W at 80 K was achieved with a 60° waiting time. A no-load temperature of 45.1 K and a cooling capacity of 45 W at 80 K were achieved with a 1° waiting time.

  1. An efficient and scalable analysis framework for variant extraction and refinement from population-scale DNA sequence data.

    PubMed

    Jun, Goo; Wing, Mary Kate; Abecasis, Gonçalo R; Kang, Hyun Min

    2015-06-01

    The analysis of next-generation sequencing data is computationally and statistically challenging because of the massive volume of data and imperfect data quality. We present GotCloud, a pipeline for efficiently detecting and genotyping high-quality variants from large-scale sequencing data. GotCloud automates sequence alignment, sample-level quality control, variant calling, filtering of likely artifacts using machine-learning techniques, and genotype refinement using haplotype information. The pipeline can process thousands of samples in parallel and requires less computational resources than current alternatives. Experiments with whole-genome and exome-targeted sequence data generated by the 1000 Genomes Project show that the pipeline provides effective filtering against false positive variants and high power to detect true variants. Our pipeline has already contributed to variant detection and genotyping in several large-scale sequencing projects, including the 1000 Genomes Project and the NHLBI Exome Sequencing Project. We hope it will now prove useful to many medical sequencing studies. © 2015 Jun et al.; Published by Cold Spring Harbor Laboratory Press.
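    Variant filtering of the kind GotCloud automates can be illustrated with a toy hard filter over call records. The sketch below is only a hypothetical illustration: the field names and thresholds are assumptions, and GotCloud's actual filtering uses machine-learning techniques and haplotype information rather than fixed cutoffs:

```python
def hard_filter(variants, min_qual=30.0, max_strand_bias=0.9):
    """Keep calls passing simple quality cutoffs (illustrative only)."""
    return [v for v in variants
            if v["qual"] >= min_qual and v["strand_bias"] <= max_strand_bias]

# Invented call records for illustration.
calls = [
    {"pos": 101, "qual": 50.0, "strand_bias": 0.5},   # passes
    {"pos": 250, "qual": 12.0, "strand_bias": 0.4},   # low quality
    {"pos": 377, "qual": 45.0, "strand_bias": 0.97},  # reads all on one strand
]
print([v["pos"] for v in hard_filter(calls)])  # [101]
```

    Learned filters generalize this idea by scoring each call from many such features instead of thresholding them one at a time.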

  2. Application of M-type cathodes to high-power cw klystrons

    NASA Astrophysics Data System (ADS)

    Isagawa, S.; Higuchi, T.; Kobayashi, K.; Miyake, S.; Ohya, K.; Yoshida, M.

    1999-05-01

    Two types of high-power cw klystrons have been widely used at KEK in both the TRISTAN and KEKB e⁺e⁻ collider projects: one is a 0.8 MW/1.0 MW tube, called YK1302/YK1303 (Philips); the other is a 1.2 MW tube, called E3786/E3732 (Toshiba). Normally, dispenser cathodes of the `B-type' and the `S-type' have been used, respectively, but in improved versions they have been replaced by low-temperature cathodes, called the `M-type'. An Os/Ru coating was applied to the former, whereas an Ir coating was applied to the latter. To date, all upgraded tubes fitted with M-type cathodes (9 and 8 in number, respectively) have worked successfully without any dropout, and positive experience concerning the lifetime under real operating conditions has been obtained. M-type cathodes are, however, more easily poisoned. One tube fitted with an Os/Ru-coated cathode showed a gradual, then sudden, decrease in emission during an underheating test, although the emission could fortunately be recovered by aging at the KEK test field. Once sufficiently aged, the emission of an Ir-coated cathode proved to be very high and stable, and its lifetime is expected to be very long. One disadvantage of this cathode, however, is its susceptibility to gas poisoning and the need for long initial aging. New techniques for shortening the initial aging period, such as ion milling and fine-grained tungsten top layers, were not as successful as their smaller-scale applications had suggested. A burn-in process at higher cathode loading was effective in reactivating poisoned cathodes and in decreasing unwanted Wehnelt emission. In addition, emission cooling, and thus the thermal conductivity near the emitting layer, could play an important role in large-current cathodes such as ours.

  3. An Accuracy--Response Time Capacity Assessment Function that Measures Performance against Standard Parallel Predictions

    ERIC Educational Resources Information Center

    Townsend, James T.; Altieri, Nicholas

    2012-01-01

    Measures of human efficiency under increases in mental workload or attentional limitations are vital in studying human perception, cognition, and action. Assays of efficiency as workload changes have typically been confined to either reaction times (RTs) or accuracy alone. Within the realm of RTs, a nonparametric measure called the "workload…

  4. Improving Safety, Quality and Efficiency through the Management of Emerging Processes: The TenarisDalmine Experience

    ERIC Educational Resources Information Center

    Bonometti, Patrizia

    2012-01-01

    Purpose: The aim of this contribution is to describe a new complexity-science-based approach for improving safety, quality and efficiency and the way it was implemented by TenarisDalmine. Design/methodology/approach: This methodology is called "a safety-building community". It consists of a safety-behaviour social self-construction…

  5. Using a Polytope to Estimate Efficient Production Functions of Joint Product Processes.

    ERIC Educational Resources Information Center

    Simpson, William A.

    In the last decade, a modeling technique has been developed to handle complex input/output analyses where outputs involve joint products and there are no known mathematical relationships linking the outputs or inputs. The technique uses the geometrical concept of a six-dimensional shape called a polytope to analyze the efficiency of each…

  6. Deicing System Protects General Aviation Aircraft

    NASA Technical Reports Server (NTRS)

    2007-01-01

    Kelly Aerospace Thermal Systems LLC worked with researchers at Glenn Research Center on deicing technology with assistance from the Small Business Innovation Research (SBIR) program. Kelly Aerospace acquired Northcoast Technologies Ltd., a firm that had conducted work on a graphite foil heating element under a NASA SBIR contract and developed a lightweight, easy-to-install, reliable wing and tail deicing system. Kelly Aerospace engineers combined their experiences with those of the Northcoast engineers, leading to the certification and integration of a thermoelectric deicing system called Thermawing, a DC-powered air conditioner for single-engine aircraft called Thermacool, and high-output alternators to run them both. Thermawing, a reliable anti-icing and deicing system, allows pilots to safely fly through ice encounters and provides pilots of single-engine aircraft the heated wing technology usually reserved for larger, jet-powered craft. Thermacool, an innovative electric air conditioning system, uses a new compressor whose rotary pump design runs off an energy-efficient, brushless DC motor and allows pilots to use the air conditioner before the engine even starts.

  7. The ALICE analysis train system

    NASA Astrophysics Data System (ADS)

    Zimmermann, Markus; ALICE Collaboration

    2015-05-01

    In the ALICE experiment, hundreds of users analyze large datasets on a Grid system. High throughput and short turn-around times are achieved by a centralized system called the LEGO trains. This system combines analyses from different users into so-called analysis trains, which are executed within the same Grid jobs, thereby reducing the number of times the data needs to be read from the storage systems. The centralized trains improve performance, usability, and bookkeeping compared with single-user analysis. The train system builds upon existing ALICE tools, i.e. the analysis framework and the Grid submission and monitoring infrastructure. The entry point to the train system is a web interface, which is used to configure the analysis and the desired datasets as well as to test and submit the train. Several measures have been implemented to reduce the time a train needs to finish and to increase the CPU efficiency.
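
    The core idea of the train, combining many user analyses into a single pass over the data, can be sketched in a few lines of Python. The `Task`/`run_train` names and the event structure below are illustrative, not the actual ALICE analysis framework API:

```python
# Sketch of the "analysis train" idea: several user tasks (wagons) share a
# single pass over the dataset, so each event is read from storage only once.

class Task:
    def __init__(self, name, selector):
        self.name = name
        self.selector = selector  # per-event predicate supplied by the user
        self.accepted = 0

    def process(self, event):
        if self.selector(event):
            self.accepted += 1

def run_train(events, tasks):
    """One loop over the data; every wagon sees each event."""
    for event in events:
        for task in tasks:
            task.process(event)
    return {t.name: t.accepted for t in tasks}

events = [{"pt": pt} for pt in (0.3, 1.2, 2.5, 4.0, 0.8)]
wagons = [
    Task("low_pt", lambda e: e["pt"] < 1.0),
    Task("high_pt", lambda e: e["pt"] >= 2.0),
]
print(run_train(events, wagons))  # {'low_pt': 2, 'high_pt': 2}
```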

  8. Parallelizing serial code for a distributed processing environment with an application to high frequency electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Work, Paul R.

    1991-12-01

    This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.
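
    The thesis's modified load-balancing algorithm is not reproduced here, but the general idea of distributing unequal work units (such as ray bundles) across processors can be illustrated with the classic longest-processing-time-first heuristic. This is a generic sketch, not the algorithm used in the thesis:

```python
import heapq

def lpt_schedule(task_costs, n_workers):
    """Longest-processing-time-first: a classic static load-balancing
    heuristic. Each task goes to the currently least-loaded worker."""
    heap = [(0.0, w) for w in range(n_workers)]  # (load, worker id)
    heapq.heapify(heap)
    assignment = {w: [] for w in range(n_workers)}
    for cost in sorted(task_costs, reverse=True):
        load, w = heapq.heappop(heap)
        assignment[w].append(cost)
        heapq.heappush(heap, (load + cost, w))
    return assignment

# Ray bundles of unequal cost spread over 3 workers:
plan = lpt_schedule([7, 5, 4, 3, 2, 2, 1], 3)
loads = sorted(sum(v) for v in plan.values())
print(loads)  # [7, 8, 9] -- close to the ideal 24/3 = 8 per worker
```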

  9. Novel design of high voltage pulse source for efficient dielectric barrier discharge generation by using silicon diodes for alternating current.

    PubMed

    Truong, Hoa Thi; Hayashi, Misaki; Uesugi, Yoshihiko; Tanaka, Yasunori; Ishijima, Tatsuo

    2017-06-01

    This work focuses on the design, construction, and optimization of the configuration of a novel high-voltage pulse power source for large-scale dielectric barrier discharge (DBD) generation. The pulses were generated by exploiting the high-speed switching characteristic of an inexpensive device called the silicon diode for alternating current and the self-terminating characteristic of the DBD. The source is powered by a primary DC low-voltage supply, which can flexibly be a commercial DC power supply, a battery, or the DC output of a stand-alone photovoltaic system, with no transformer required. This flexible connection to different types of primary power supply offers a promising solution for DBD applications, especially in areas without a power-grid connection. The simple modular structure, the absence of control circuitry, the elimination of the transformer, and the minimal number of voltage-conversion stages lead to reduced size and weight, simple maintenance, low installation cost, and high scalability of the DBD generator. The performance of the pulse source was validated with a resistive load, and good agreement between theoretically estimated and experimentally measured responses was achieved. The pulse source has also been successfully applied to efficient DBD plasma generation.

  11. Tar analysis from biomass gasification by means of online fluorescence spectroscopy

    NASA Astrophysics Data System (ADS)

    Baumhakl, Christoph; Karellas, Sotirios

    2011-07-01

    Optical methods for gas analysis are very valuable, mainly because of their non-intrusive character, which makes them suitable for in-situ or online measurements with only optical access to the measurement volume. In processes such as biomass gasification, it is very important to monitor gas quality so that the product gas can be used in appropriate machines for energy production, meeting the restrictions on gas composition while also improving gas quality, which leads to highly efficient systems. One of the main problems in the biomass gasification process is the formation of tars; these higher hydrocarbons can cause problems in the operation of the energy system. To date, the state-of-the-art method widely used for the determination of tars is a standardized offline measurement system, the so-called "Tar Protocol". The aim of this work is to describe an innovative online optical method for determining the tar content of the product gas by means of fluorescence spectroscopy. This method uses optical sources and detectors that are commercially available at low cost, and it is therefore very attractive, especially for industrial applications where cost efficiency combined with medium-to-high precision is important.

  12. Read count-based method for high-throughput allelic genotyping of transposable elements and structural variants.

    PubMed

    Kuhn, Alexandre; Ong, Yao Min; Quake, Stephen R; Burkholder, William F

    2015-07-08

    Like other structural variants, transposable element insertions can be highly polymorphic across individuals. Their functional impact, however, remains poorly understood. Current genome-wide approaches for genotyping insertion-site polymorphisms based on targeted or whole-genome sequencing remain very expensive and can lack accuracy, hence new large-scale genotyping methods are needed. We describe a high-throughput method for genotyping transposable element insertions and other types of structural variants that can be assayed by breakpoint PCR. The method relies on next-generation sequencing of multiplex, site-specific PCR amplification products and read count-based genotype calls. We show that this method is flexible, efficient (it does not require rounds of optimization), cost-effective and highly accurate. This method can benefit a wide range of applications from the routine genotyping of animal and plant populations to the functional study of structural variants in humans.
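
    The read-count genotyping step can be illustrated with a deliberately simplified sketch: classify each sample at a breakpoint from its reference- and alternate-supporting read counts. The thresholds and function name below are illustrative, not those of the published method:

```python
def call_genotype(ref_reads, alt_reads, min_depth=10, het_band=(0.2, 0.8)):
    """Toy read-count genotype call at a structural-variant breakpoint.
    Thresholds are illustrative; the published method counts reads from
    multiplexed, site-specific breakpoint-PCR amplicons."""
    depth = ref_reads + alt_reads
    if depth < min_depth:
        return "no_call"            # too few reads to call confidently
    frac_alt = alt_reads / depth
    if frac_alt < het_band[0]:
        return "ref/ref"
    if frac_alt > het_band[1]:
        return "alt/alt"
    return "ref/alt"

print(call_genotype(95, 5))    # ref/ref
print(call_genotype(52, 48))   # ref/alt
print(call_genotype(2, 88))    # alt/alt
print(call_genotype(3, 2))     # no_call
```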

  13. Two-photon imaging of spatially extended neuronal network dynamics with high temporal resolution.

    PubMed

    Lillis, Kyle P; Eng, Alfred; White, John A; Mertz, Jerome

    2008-07-30

    We describe a simple two-photon fluorescence imaging strategy, called targeted path scanning (TPS), to monitor the dynamics of spatially extended neuronal networks with high spatiotemporal resolution. Our strategy combines the advantages of mirror-based scanning, minimized dead time, ease of implementation, and compatibility with high-resolution low-magnification objectives. To demonstrate the performance of TPS, we monitor the calcium dynamics distributed across an entire juvenile rat hippocampus (>1.5 mm), at scan rates of 100 Hz, with single cell resolution and single action potential sensitivity. Our strategy for fast, efficient two-photon microscopy over spatially extended regions provides a particularly attractive solution for monitoring neuronal population activity in thick tissue, without sacrificing the signal-to-noise ratio or high spatial resolution associated with standard two-photon microscopy. Finally, we provide the code to make our technique generally available.

  14. Batched matrix computations on hardware accelerators based on GPUs

    DOE PAGES

    Haidar, Azzam; Dong, Tingxing; Luszczek, Piotr; ...

    2015-02-09

    Scientific applications require solvers that work on many small-size problems that are independent from each other. At the same time, high-end hardware evolves rapidly and becomes ever more throughput-oriented, and thus there is an increasing need for an effective approach to developing energy-efficient, high-performance codes for these small matrix problems, which we call batched factorizations. The many applications that need this functionality could especially benefit from the use of GPUs, which currently are four to five times more energy efficient than multicore CPUs on important scientific workloads. This study, consequently, describes the development of the most common one-sided factorizations, Cholesky, LU, and QR, for a set of small dense matrices. The algorithms we present, together with their implementations, are by design inherently parallel. In particular, our approach is based on representing the process as a sequence of batched BLAS routines that are executed entirely on a GPU. Importantly, this is unlike the LAPACK and the hybrid MAGMA factorization algorithms, which work under drastically different assumptions about hardware design and the efficiency of execution of the various computational kernels involved in the implementation. Thus, our approach is more efficient than what works for a combination of multicore CPUs and GPUs for the problem sizes of interest in the application use cases. The paradigm in which a single chip (a GPU or a CPU) factorizes a single problem at a time is not at all efficient in our applications' context. We illustrate all of these claims through a detailed performance analysis. With the help of profiling and tracing tools, we guide our development of batched factorizations to achieve up to a two-fold speedup and a three-fold improvement in energy efficiency compared with our highly optimized batched CPU implementations based on the MKL library. Finally, the tested system featured two sockets of Intel Sandy Bridge CPUs; compared with the batched LU factorization featured in the CUBLAS library for GPUs, we achieve up to a 2.5× speedup on the NVIDIA K40 GPU.
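
    The batched pattern, many independent small factorizations processed as one operation, can be sketched in plain Python. This is a CPU illustration only; the paper's implementations map the same idea onto batched BLAS kernels executed on the GPU:

```python
import math
import random

def cholesky(a):
    """Lower-triangular Cholesky factor of one small SPD matrix."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(a[i][i] - s) if i == j else (a[i][j] - s) / L[j][j]
    return L

def batched_cholesky(batch):
    """The batched pattern: many independent small factorizations.
    On a GPU these would be grouped into single batched-kernel launches;
    here each problem is simply mapped over on the CPU."""
    return [cholesky(a) for a in batch]

random.seed(1)

def random_spd(n):
    m = [[random.random() for _ in range(n)] for _ in range(n)]
    # M M^T + n I is symmetric positive definite
    return [[sum(m[i][k] * m[j][k] for k in range(n)) + (n if i == j else 0)
             for j in range(n)] for i in range(n)]

batch = [random_spd(4) for _ in range(100)]
factors = batched_cholesky(batch)

# Verify L L^T reproduces the first matrix
L = factors[0]
recon = [[sum(L[i][k] * L[j][k] for k in range(4)) for j in range(4)] for i in range(4)]
ok = all(abs(recon[i][j] - batch[0][i][j]) < 1e-9 for i in range(4) for j in range(4))
print(len(factors), ok)  # 100 True
```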

  15. Transpiration efficiency: new insights into an old story.

    PubMed

    Vadez, Vincent; Kholova, Jana; Medina, Susan; Kakkera, Aparna; Anderberg, Hanna

    2014-11-01

    Producing more food per unit of water has never been as important as it is at present, and the demand for water by economic sectors other than agriculture will necessarily put a great deal of pressure on a dwindling resource, leading to a call for increases in the productivity of water in agriculture. This topic has been given high priority in the research agenda for the last 30 years, but with the exception of a few specific cases, such as water-use-efficient wheat in Australia, breeding crops for water-use efficiency has yet to be accomplished. Here, we review the efforts to harness transpiration efficiency (TE); that is, the genetic component of water-use efficiency. As TE is difficult to measure, especially in the field, evaluations of TE have relied mostly on surrogate traits, although this has most likely resulted in over-dependence on the surrogates. A new lysimetric method for assessing TE gravimetrically throughout the entire cropping cycle has revealed high genetic variation in different cereals and legumes. Across species, water regimes, and a wide range of genotypes, this method has clearly established an absence of relationships between TE and total water use, which dismisses previous claims that high TE may lead to a lower production potential. More excitingly, a tight link has been found between these large differences in TE in several crops and attributes of plants that make them restrict water losses under high vapour-pressure deficits. This trait provides new insight into the genetics of TE, especially from the perspective of plant hydraulics, probably with close involvement of aquaporins, and opens new possibilities for achieving genetic gains via breeding focused on this trait. Last but not least, small amounts of water used in specific periods of the crop cycle, such as during grain filling, may be critical. We assessed the efficiency of water use at these critical stages. © The Author 2014. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
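
    The gravimetric principle behind the lysimetric method can be sketched as a simple mass balance. The function and numbers below are illustrative, not the authors' protocol:

```python
def transpiration_efficiency(biomass_g, pot_weights_g, water_added_g):
    """Gravimetric TE sketch: water transpired over the cycle is the sum
    of water additions minus the net change in pot weight (soil evaporation
    is assumed prevented, e.g. by covering the soil surface)."""
    transpired = sum(water_added_g) - (pot_weights_g[-1] - pot_weights_g[0])
    return biomass_g / transpired  # g biomass per g of water transpired

# Illustrative numbers: 12 kg of water added, pot ends 0.5 kg heavier
te = transpiration_efficiency(
    biomass_g=45.0,
    pot_weights_g=[8000.0, 8500.0],
    water_added_g=[4000.0, 4000.0, 4000.0],
)
print(round(te * 1000, 2), "g biomass per kg water")  # 3.91 g biomass per kg water
```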

  16. Identification of high-efficiency 3'GG gRNA motifs in indexed FASTA files with ngg2.

    PubMed

    Roberson, Elisha D O

    CRISPR/Cas9 is emerging as one of the most-used methods of genome modification in organisms ranging from bacteria to human cells. However, the efficiency of editing varies tremendously site-to-site. A recent report identified a novel motif, called the 3'GG motif, which substantially increases the efficiency of editing at all sites tested in C. elegans. Furthermore, they highlighted that previously published gRNAs with high editing efficiency also had this motif. I designed a python command-line tool, ngg2, to identify 3'GG gRNA sites from indexed FASTA files. As a proof-of-concept, I screened for these motifs in six model genomes: Saccharomyces cerevisiae, Caenorhabditis elegans, Drosophila melanogaster, Danio rerio, Mus musculus, and Homo sapiens. I also scanned the genomes of pig (Sus scrofa) and African elephant (Loxodonta africana) to demonstrate the utility in non-model organisms. I identified more than 60 million single match 3'GG motifs in these genomes. Greater than 61% of all protein coding genes in the reference genomes had at least one unique 3'GG gRNA site overlapping an exon. In particular, more than 96% of mouse and 93% of human protein coding genes have at least one unique, overlapping 3'GG gRNA. These identified sites can be used as a starting point in gRNA selection, and the ngg2 tool provides an important ability to identify 3'GG editing sites in any species with an available genome sequence.
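
    The motif search itself reduces to scanning for 20-nt protospacers that end in GG immediately upstream of an NGG PAM. A minimal forward-strand sketch follows; ngg2 itself also scans the reverse complement and works from indexed FASTA, which this illustration omits:

```python
import re

def find_3pGG_sites(seq):
    """Forward-strand scan for 20-nt Cas9 protospacers whose last two
    bases are GG (the 3'GG motif), immediately followed by an NGG PAM.
    A real tool also scans the reverse complement and checks uniqueness."""
    sites = []
    # Overlapping matches: group 1 is the 20-nt protospacer ending in GG;
    # the inner lookahead requires the NGG PAM right after it.
    for m in re.finditer(r"(?=([ACGT]{18}GG(?=[ACGT]GG)))", seq.upper()):
        sites.append((m.start(), m.group(1)))
    return sites

seq = "TTT" + "ACGTACGTACGTACGTAC" + "GG" + "TGG" + "AAA"
print(find_3pGG_sites(seq))  # [(3, 'ACGTACGTACGTACGTACGG')]
```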

  17. A dynamic model to assess tradeoffs in power production and riverine ecosystem protection.

    PubMed

    Miara, Ariel; Vörösmarty, Charles J

    2013-06-01

    Major strategic planning decisions loom as society aims to balance energy security, economic development and environmental protection. To achieve such balance, decisions involving the so-called water-energy nexus must necessarily embrace a regional multi-power plant perspective. We present here the Thermoelectric Power & Thermal Pollution Model (TP2M), a simulation model that simultaneously quantifies thermal pollution of rivers and estimates efficiency losses in electricity generation as a result of fluctuating intake temperatures and river flows typically encountered across the temperate zone. We demonstrate the model's theoretical framework by carrying out sensitivity tests based on energy, physical and environmental settings. We simulate a series of five thermoelectric plants aligned along a hypothetical river, where we find that warm ambient temperatures, acting both as a physical constraint and as a trigger for regulatory limits on plant operations directly reduce electricity generation. As expected, environmental regulation aimed at reducing thermal loads at a single plant reduces power production at that plant, but ironically can improve the net electricity output from multiple plants when they are optimally co-managed. On the technology management side, high efficiency can be achieved through the use of natural gas combined cycle plants, which can raise the overall efficiency of the aging population of plants, including that of coal. Tradeoff analysis clearly shows the benefit of attaining such high efficiencies, in terms of both limiting thermal loads that preserve ecosystem services and increasing electricity production that benefits economic development.

  18. BALSA: integrated secondary analysis for whole-genome and whole-exome sequencing, accelerated by GPU.

    PubMed

    Luo, Ruibang; Wong, Yiu-Lun; Law, Wai-Chun; Lee, Lap-Kei; Cheung, Jeanno; Liu, Chi-Man; Lam, Tak-Wah

    2014-01-01

    This paper reports an integrated solution, called BALSA, for the secondary analysis of next generation sequencing data; it exploits the computational power of GPU and an intricate memory management to give a fast and accurate analysis. From raw reads to variants (including SNPs and Indels), BALSA, using just a single computing node with a commodity GPU board, takes 5.5 h to process 50-fold whole genome sequencing (∼750 million 100 bp paired-end reads), or just 25 min for 210-fold whole exome sequencing. BALSA's speed is rooted in its parallel algorithms to effectively exploit a GPU to speed up processes like alignment, realignment and statistical testing. BALSA incorporates a 16-genotype model to support the calling of SNPs and Indels and achieves competitive variant calling accuracy and sensitivity when compared to the ensemble of six popular variant callers. BALSA also supports efficient identification of somatic SNVs and CNVs; experiments showed that BALSA recovers all the previously validated somatic SNVs and CNVs, and it is more sensitive for somatic Indel detection. BALSA outputs variants in VCF format. A pileup-like SNAPSHOT format, while maintaining the same fidelity as BAM in variant calling, enables efficient storage and indexing, and facilitates the development of downstream analysis applications. BALSA is available at: http://sourceforge.net/p/balsa.

  19. FastBit Reference Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Kesheng

    2007-08-02

    An index in a database system is a data structure that uses redundant information about the base data to speed up common searching and retrieval operations. The most commonly used indexes are variants of B-trees, such as the B+-tree and the B*-tree. FastBit implements a set of alternative indexes called compressed bitmap indexes. Compared with B-tree variants, these indexes provide very efficient searching and retrieval operations at the cost of less efficient updates after the modification of an individual record. In addition to the well-known strengths of bitmap indexes, FastBit has a particular strength stemming from the bitmap compression scheme it uses, called the Word-Aligned Hybrid (WAH) code. WAH reduces the bitmap indexes to reasonable sizes while allowing very efficient bitwise logical operations directly on the compressed bitmaps. Compared with well-known compression methods such as LZ77 and the Byte-aligned Bitmap Code (BBC), WAH sacrifices some space efficiency for a significant improvement in operational efficiency. Since bitwise logical operations are the most important operations needed to answer queries, using WAH compression has been shown to answer queries significantly faster than other compression schemes. Theoretical analyses show that WAH-compressed bitmap indexes are optimal for one-dimensional range queries; only the most efficient indexing schemes, such as the B+-tree and the B*-tree, share this optimality property. Bitmap indexes are superior, however, because they can efficiently answer multi-dimensional range queries by combining the answers to one-dimensional queries.
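
    The query-answering idea, bitwise logic on per-bin bitmaps, can be sketched with plain Python integers standing in for bitmaps. The bitmaps here are uncompressed; FastBit performs the same logical operations directly on WAH-compressed words:

```python
def bitmap_index(values, bins):
    """Build one bitmap per bin: bit i is set if row i falls in the bin.
    Python ints stand in for (uncompressed) bitmaps."""
    maps = {b: 0 for b in bins}
    for i, v in enumerate(values):
        for lo, hi in bins:
            if lo <= v < hi:
                maps[(lo, hi)] |= 1 << i
    return maps

temps = [12, 35, 27, 41, 33]
press = [1.0, 0.7, 0.8, 0.3, 0.9]
t_idx = bitmap_index(temps, [(0, 30), (30, 50)])
p_idx = bitmap_index(press, [(0.0, 0.5), (0.5, 1.5)])

# A multi-dimensional range query, "temp in [30, 50) AND pressure in
# [0.5, 1.5)", is a single bitwise AND of two per-dimension bitmaps:
hits = t_idx[(30, 50)] & p_idx[(0.5, 1.5)]
rows = [i for i in range(len(temps)) if hits >> i & 1]
print(rows)  # [1, 4]
```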

  20. An Extensible "SCHEMA-LESS" Database Framework for Managing High-Throughput Semi-Structured Documents

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.

    2003-01-01

    An object-relational database management system is an integrated, hybrid approach that combines the best practices of the relational model, with its SQL queries, and of the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword searches of records spanning both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to handle the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput, open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.

  1. An Extensible Schema-less Database Framework for Managing High-throughput Semi-Structured Documents

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.; La, Tracy; Clancy, Daniel (Technical Monitor)

    2002-01-01

    An object-relational database management system is an integrated, hybrid approach that combines the best practices of the relational model, with its SQL queries, and of the object-oriented, semantic paradigm for supporting complex data creation. In this paper, a highly scalable, information-on-demand database framework called NETMARK is introduced. NETMARK takes advantage of the Oracle 8i object-relational database, using physical-address data types for very efficient keyword searches of records for both context and content. NETMARK was originally developed in early 2000 as a research and development prototype to handle the vast amounts of unstructured and semi-structured documents existing within NASA enterprises. Today, NETMARK is a flexible, high-throughput, open database framework for managing, storing, and searching unstructured or semi-structured arbitrary hierarchical models, such as XML and HTML.

  2. Hard X-ray and gamma-ray imaging spectroscopy for the next solar maximum

    NASA Technical Reports Server (NTRS)

    Hudson, H. S.; Crannell, C. J.; Dennis, B. R.; Spicer, D. S.; Davis, J. M.; Hurford, G. J.; Lin, R. P.

    1990-01-01

    The objectives and principles of a single spectroscopic imaging package that can provide effective imaging in the hard X-ray and gamma-ray ranges are described. Called the High-Energy Solar Physics (HESP) mission instrument for solar investigation, the device is based on rotating modulation collimators with germanium semiconductor spectrometers. The instrument is planned to incorporate thick modulation plates, and the range of coverage is discussed. The optics permit the coverage of high-contrast hard X-ray images from small- and medium-sized flares with large signal-to-noise ratios. The detectors allow angular resolution of less than 1 arcsec, time resolution of less than 1 s, and spectral resolution of about 1 keV. The HESP package is considered an effective and important instrument for efficiently investigating the high-energy solar events of the near-term future.

  3. Spin-orbit torques in high-resistivity-W/CoFeB/MgO

    NASA Astrophysics Data System (ADS)

    Takeuchi, Yutaro; Zhang, Chaoliang; Okada, Atsushi; Sato, Hideo; Fukami, Shunsuke; Ohno, Hideo

    2018-05-01

    Magnetic heterostructures consisting of high-resistivity (238 ± 5 µΩ cm)-W/CoFeB/MgO are prepared by sputtering and their spin-orbit torques are evaluated as a function of W thickness through an extended harmonic measurement. W thickness dependence of the spin-orbit torque with the Slonczewski-like symmetry is well described by the drift-diffusion model with an efficiency parameter, the so-called effective spin Hall angle, of -0.62 ± 0.03. In contrast, the field-like spin-orbit torque is one order of magnitude smaller than the Slonczewski-like torque and shows no appreciable dependence on the W thickness, suggesting a different origin from the Slonczewski-like torque. The results indicate that high-resistivity W is promising for low-current and reliable spin-orbit torque-controlled devices.
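
    The drift-diffusion thickness dependence mentioned above has the standard closed form ξ(t) = θ_SH[1 − sech(t/λ_s)]. The sketch below uses the paper's fitted effective spin Hall angle of −0.62 but an assumed spin-diffusion length (λ_s = 2 nm is an illustrative value, not from the abstract) to show the saturation with W thickness:

```python
import math

def dl_efficiency(t_nm, theta_sh, lambda_s_nm):
    """Drift-diffusion thickness dependence of the damping-like SOT
    efficiency: xi(t) = theta_SH * (1 - sech(t / lambda_s))."""
    return theta_sh * (1.0 - 1.0 / math.cosh(t_nm / lambda_s_nm))

theta, lam = -0.62, 2.0  # effective angle from the paper; lambda assumed
for t in (1, 2, 4, 8):
    print(t, round(dl_efficiency(t, theta, lam), 3))
# -0.07, -0.218, -0.455, -0.597: approaching the bulk value of -0.62
```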

  4. Optical and heat transfer performance of a novel non-imaging concentrator

    NASA Astrophysics Data System (ADS)

    Sellami, Nazmi; Meng, Xian-long; Xia, Xin-Lin; Knox, Andrew R.; Mallick, Tapas K.

    2015-09-01

    In this study, the crossed compound parabolic concentrator (CCPC) is modified to demonstrate, for the first time, a new generation of solar concentrators working simultaneously as an electricity generator and a thermal collector. It is designed to have two complementary surfaces, one reflective and one absorptive, and is called an absorptive/reflective CCPC (AR-CCPC). Usually, the height of a CCPC is truncated with a minor sacrifice of geometric concentration; here, rather than being eliminated, the truncated surfaces are replaced with absorbent surfaces that collect heat from solar radiation. The optical, thermal, and total efficiencies of the AR-CCPC were simulated and compared for geometric concentration ratios varying from 3.6x to 4x. It was found that the combined electrical and thermal efficiency of the AR-CCPC 3.6x/4x remains high and nearly constant throughout the day, with an overall efficiency reaching up to 94%. In addition, the temperature distributions of the AR-CCPC surfaces and the assembled solar cell were simulated based on the corresponding heat-flux boundary conditions, showing that adding a thermally absorbent surface appreciably increases the wall temperature.

  5. Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiao, Xiangmin; Einstein, Daniel R.; Dyedov, Volodymyr

    2010-03-24

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer-assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques including eigenvalue analysis, weighted least squares approximations, and numerical minimization, resulting in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods.

  6. Dopant Distribution in Atomic Layer Deposited ZnO:Al Films Visualized by Transmission Electron Microscopy and Atom Probe Tomography.

    PubMed

    Wu, Yizhi; Giddings, A Devin; Verheijen, Marcel A; Macco, Bart; Prosa, Ty J; Larson, David J; Roozeboom, Fred; Kessels, Wilhelmus M M

    2018-02-27

    The maximum conductivity achievable in Al-doped ZnO thin films prepared by atomic layer deposition (ALD) is limited by the low doping efficiency of Al. To better understand the limiting factors for the doping efficiency, the three-dimensional distribution of Al atoms in the ZnO host matrix has been examined on the atomic scale using a combination of high-resolution transmission electron microscopy (TEM) and atom probe tomography (APT). Although the Al distribution in ZnO films prepared by so-called "ALD supercycles" is often presented as atomically flat δ-doped layers, in reality a broadening of the Al-dopant layers is observed, with a full-width at half-maximum of ∼2 nm. In addition, an enrichment of Al at grain boundaries is observed. The low doping efficiency for local Al densities > ∼1 nm⁻³ can be ascribed to the Al solubility limit in ZnO and to the suppression of the ionization of Al dopants by adjacent Al donors.

  7. Dopant Distribution in Atomic Layer Deposited ZnO:Al Films Visualized by Transmission Electron Microscopy and Atom Probe Tomography

    PubMed Central

    2018-01-01

    The maximum conductivity achievable in Al-doped ZnO thin films prepared by atomic layer deposition (ALD) is limited by the low doping efficiency of Al. To better understand the limiting factors for the doping efficiency, the three-dimensional distribution of Al atoms in the ZnO host matrix has been examined on the atomic scale using a combination of high-resolution transmission electron microscopy (TEM) and atom probe tomography (APT). Although the Al distribution in ZnO films prepared by so-called “ALD supercycles” is often presented as atomically flat δ-doped layers, in reality a broadening of the Al-dopant layers is observed, with a full-width at half-maximum of ∼2 nm. In addition, an enrichment of Al at grain boundaries is observed. The low doping efficiency for local Al densities > ∼1 nm⁻³ can be ascribed to the Al solubility limit in ZnO and to the suppression of the ionization of Al dopants by adjacent Al donors. PMID:29515290

  8. Catching errors with patient-specific pretreatment machine log file analysis.

    PubMed

    Rangaraj, Dharanipathy; Zhu, Mingyao; Yang, Deshan; Palaniswaamy, Geethpriya; Yaddanapudi, Sridhar; Wooten, Omar H; Brame, Scott; Mutic, Sasa

    2013-01-01

    A robust, efficient, and reliable quality assurance (QA) process is highly desired for modern external beam radiation therapy treatments. Here, we report the results of a semiautomatic, pretreatment, patient-specific QA process based on dynamic machine log file analysis, clinically implemented for intensity-modulated radiation therapy (IMRT) treatments delivered by high-energy linear accelerators (Varian 2100/2300 EX, Trilogy, iX-D, Varian Medical Systems Inc, Palo Alto, CA). The multileaf collimator (MLC) machine log files are called Dynalog files by Varian. Using an in-house computer program called "Dynalog QA," we automatically compare the beam delivery parameters in the log files, which are generated during pretreatment point dose verification measurements, with the treatment plan to determine any discrepancies in IMRT deliveries. Fluence maps are constructed and compared between the delivered and planned beams. Between clinical introduction in June 2009 and the end of 2010, 912 machine log file QA analyses were performed. Among these, 14 errors causing dosimetric deviation were detected and required further investigation and intervention. These errors were the result of human operating mistakes, flawed treatment planning, and data modification during plan file transfer. Minor errors were also reported in 174 other log file analyses, some of which stemmed from false positives and unreliable results; their origins are discussed herein. It has been demonstrated that machine log file analysis is a robust, efficient, and reliable QA process capable of detecting errors originating from human mistakes, flawed planning, and data transfer problems. The probability of detecting these errors with point and planar dosimetric measurements alone is low. Copyright © 2013 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  9. Efficient topological chaos embedded in the blinking vortex system.

    PubMed

    Kin, Eiko; Sakajo, Takashi

    2005-06-01

    We consider particle mixing in the plane by two vortex points appearing one after the other, the so-called blinking vortex system. Mathematical and numerical studies of the system reveal that chaotic particle mixing, i.e., chaotic advection, occurs due to homoclinic chaos, but the mixing region is restricted to a local neighborhood of the vortex points. The present article shows that it is possible to realize global and efficient chaotic advection in the blinking vortex system with the help of the Thurston-Nielsen theory, which classifies periodic orbits for homeomorphisms in the plane into three types: periodic, reducible, and pseudo-Anosov (pA). It is mathematically shown that periodic orbits of pA type generate complicated dynamics, called topological chaos. We show that the combination of the local chaotic mixing due to the topological chaos and the dipole-like return orbits realizes efficient and global particle mixing in the blinking vortex system.

  10. Oregon aviation plan

    DOT National Transportation Integrated Search

    2000-02-01

    The 1992 Oregon Transportation Plan created policies and investment strategies for Oregon's multimodal transportation system. The statewide plan called for a transportation system marked by modal balance, efficiency, accessibility, environmental resp...

  11. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system.

    PubMed

    Schrode, Katrina M; Bee, Mark A

    2015-03-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. © 2015. Published by The Company of Biologists Ltd.

  12. Sleep, activity and fatigue reported by postgraduate year 1 residents: a prospective cohort study comparing the effects of night-float versus traditional overnight on-call.

    PubMed

    Low, Jia Ming; Tan, Mae Yue; See, Kay Choong; Aw, Marion M

    2018-03-19

    As traditional overnight on-call was shown to contribute to fatigue, Singapore moved to implement a shift system in 2014. We aimed to compare activity levels, sleep (using a wrist actigraph), fatigue and professional quality of life between residents working on night-float and those working on traditional overnight on-call. All postgraduate year 1 (PGY1) residents at our institution were invited to participate. Participants were required to wear a wrist actigraph for four months and to complete two validated surveys (Epworth Sleepiness Scale [ESS] and Professional Quality of Life Scale [ProQOL]) once each at the start and at the end of the study. 49 residents were recruited. Residents on night-float and on-call showed comparable median (range) numbers of steps (10,061 [1,195-15,923] vs. 10,649 [308-21,910]; p = 0.429), amount of sleep logged (361 [149-630] minutes vs. 380 [175-484] minutes; p = 0.369) and time taken to fall asleep (6 [0-14] minutes vs. 6 [0-45] minutes; p = 0.726), respectively. Residents on night-float had less efficient sleep, with 90.5% of participants achieving over 85% sleep efficiency compared with 100% of residents on on-call (p = 0.127). More residents on night-float reported ESS > 10 (73.8% vs. 38.5%) and higher burnout scores on the ProQOL (41.4% vs. 21.4%) at the start of the study; the figures at the end of the study were similar, and the differences were not statistically significant. The physical activity and amount of sleep of residents on the night-float and on-call rotas were not significantly different. Residents on night-float reported comparatively higher fatigue and burnout.

  13. High-aspect ratio zone plate fabrication for hard x-ray nanoimaging

    NASA Astrophysics Data System (ADS)

    Parfeniukas, Karolis; Giakoumidis, Stylianos; Akan, Rabia; Vogt, Ulrich

    2017-08-01

    We present our results in fabricating Fresnel zone plate optics for the NanoMAX beamline at the fourth-generation synchrotron radiation facility MAX IV, to be used in the energy range of 6-10 keV. The results and challenges of tungsten nanofabrication are discussed, and an alternative approach using metal-assisted chemical etching (MACE) of silicon is showcased. We successfully manufactured diffraction-limited zone plates in tungsten with 30 nm outermost zone width and an aspect ratio of 21:1. These optics were used for nanoimaging experiments at NanoMAX. However, we found it challenging to further improve resolution and diffraction efficiency using tungsten. High efficiency is desirable to fully exploit the increased coherence available to the optics at MAX IV. We therefore started to investigate MACE of silicon for the nanofabrication of high-resolution, high-efficiency zone plates. The first type of structure we propose uses the silicon directly as the phase-shifting material; we have achieved 6 μm deep, dense vertical structures with 100 nm linewidth. The second type of optic uses iridium as the phase material: the structures in the silicon substrate act as a mold for iridium coating via atomic layer deposition (ALD). A semi-dense pattern with a line-to-space ratio of 1:3 is used for a so-called frequency-doubled zone plate. This way, it is possible to produce smaller structures, with the tradeoff of the additional ALD step. We have fabricated 45 nm-wide, 3.6 μm-tall silicon/iridium structures.

  14. The improving efficiency frontier of inpatient rehabilitation hospitals.

    PubMed

    Harrison, Jeffrey P; Kirkpatrick, Nicole

    2011-01-01

    This study uses a linear programming technique called data envelopment analysis (DEA) to identify changes in the efficiency frontier of inpatient rehabilitation hospitals after implementation of the prospective payment system (PPS). The study provides a time-series analysis of the efficiency frontier for inpatient rehabilitation hospitals in 2003, immediately after implementation of PPS, and again in 2006. Results indicate that the efficiency frontier of inpatient rehabilitation hospitals increased from 84% in 2003 to 85% in 2006. Similarly, an analysis of slack, or inefficiency, shows improvements in output efficiency over the study period. This clearly documents that efficiency in the inpatient rehabilitation hospital industry is improving after implementation of PPS. Hospital executives, health care policymakers, taxpayers, and other stakeholders benefit from studies that improve health care efficiency.
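    In the special case of a single input and a single output, the constant-returns-to-scale (CCR) DEA model described above reduces to each unit's output/input ratio normalized by the best observed ratio; the general case requires a linear program per unit. A minimal sketch of the single-input, single-output case with hypothetical numbers (not the study's data):

    ```python
    def dea_ccr_scores(inputs, outputs):
        """CCR DEA efficiency for the single-input, single-output case:
        each decision-making unit's output/input ratio, normalized by
        the best observed ratio, so efficient units score 1.0."""
        ratios = [o / i for i, o in zip(inputs, outputs)]
        best = max(ratios)
        return [r / best for r in ratios]

    # Toy data: three hypothetical hospitals (resource used -> output produced).
    scores = dea_ccr_scores([2.0, 4.0, 2.0], [2.0, 2.0, 1.0])
    print(scores)  # [1.0, 0.5, 0.5]
    ```

    The first unit defines the efficiency frontier; the other two are measured against it, which is exactly the "relative efficiency" notion the study reports as frontier percentages.
    
    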

  15. A Good Beginning Makes a Good Market: The Effect of Different Market Opening Structures on Market Quality

    PubMed Central

    Hinterleitner, Gernot; Leopold-Wildburger, Ulrike

    2015-01-01

    This paper deals with the market structure at the opening of the trading day and its influence on subsequent trading. We compare a single continuous double auction and two complement markets with different call auction designs as opening mechanisms in a unified experimental framework. The call auctions differ with respect to their levels of transparency. We find that a call auction not only improves market efficiency and liquidity at the beginning of the trading day when compared to the stand-alone continuous double auction, but also causes positive spillover effects on subsequent trading. Concerning the design of the opening call auction, we find no significant differences between the transparent and nontransparent specification with respect to opening prices and liquidity. In the course of subsequent continuous trading, however, market quality is slightly higher after a nontransparent call auction. PMID:26351653

  16. A Good Beginning Makes a Good Market: The Effect of Different Market Opening Structures on Market Quality.

    PubMed

    Hinterleitner, Gernot; Leopold-Wildburger, Ulrike; Mestel, Roland; Palan, Stefan

    2015-01-01

    This paper deals with the market structure at the opening of the trading day and its influence on subsequent trading. We compare a single continuous double auction and two complement markets with different call auction designs as opening mechanisms in a unified experimental framework. The call auctions differ with respect to their levels of transparency. We find that a call auction not only improves market efficiency and liquidity at the beginning of the trading day when compared to the stand-alone continuous double auction, but also causes positive spillover effects on subsequent trading. Concerning the design of the opening call auction, we find no significant differences between the transparent and nontransparent specification with respect to opening prices and liquidity. In the course of subsequent continuous trading, however, market quality is slightly higher after a nontransparent call auction.

  17. Iodine Hall Thruster

    NASA Technical Reports Server (NTRS)

    Szabo, James

    2015-01-01

    Iodine enables dramatic mass and cost savings for lunar and Mars cargo missions, including Earth escape and near-Earth space maneuvers. The demonstrated throttling ability of iodine is important for a singular thruster that might be called upon to propel a spacecraft from Earth to Mars or Venus. The ability to throttle efficiently is even more important for missions beyond Mars. In the Phase I project, Busek Company, Inc., tested an existing Hall thruster, the BHT-8000, on iodine propellant. The thruster was fed by a high-flow iodine feed system and supported by an existing Busek hollow cathode flowing xenon gas. The Phase I propellant feed system was evolved from a previously demonstrated laboratory feed system. Throttling of the thruster between 2 and 11 kW at 200 to 600 V was demonstrated. Testing showed that the efficiency of iodine fueled BHT-8000 is the same as with xenon, with iodine delivering a slightly higher thrust-to-power (T/P) ratio. In Phase II, a complete iodine-fueled system was developed, including the thruster, hollow cathode, and iodine propellant feed system. The nominal power of the Phase II system is 8 kW; however, it can be deeply throttled as well as clustered to much higher power levels. The technology also can be scaled to greater than 100 kW per thruster to support megawatt-class missions. The target thruster efficiency for the full-scale system is 65 percent at high specific impulse (Isp) (approximately 3,000 s) and 60 percent at high thrust (Isp approximately 2,000 s).

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biyikli, Emre; To, Albert C., E-mail: albertto@pitt.edu

    Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called the Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the software implemented along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. Speed-up of the MMM method is shown to be directly proportional to the reduction of the number of the atoms visited in force computation. Finally, an adaptive MMM analysis on a nanoindentation problem, containing over a million atoms, is performed, yielding an improvement of 6.3–8.5 times in efficiency, over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.

  19. Comparison of two hardware-based hit filtering methods for trackers in high-pileup environments

    NASA Astrophysics Data System (ADS)

    Gradin, J.; Mårtensson, M.; Brenner, R.

    2018-04-01

    As experiments in high energy physics aim to measure increasingly rare processes, the experiments continually strive to increase the expected signal yields. In the case of the High Luminosity upgrade of the LHC, the luminosity is raised by increasing the number of simultaneous proton-proton interactions, so-called pile-up. This increases the expected yields of signal and background processes alike. The signal is embedded in a large background of processes that mimic signal events. It is therefore imperative for the experiments to develop new triggering methods to effectively distinguish the interesting events from the background. We present a comparison of two methods for filtering detector hits to be used for triggering on particle tracks: one based on a pattern matching technique using Associative Memory (AM) chips and the other based on the Hough transform. Their efficiency and hit rejection are evaluated for proton-proton collisions with varying amounts of pile-up using a simulation of a generic silicon tracking detector. It is found that, while both methods are feasible options for a track trigger with single-muon efficiencies around 98–99%, the AM-based pattern matching produces a lower number of hit combinations than the Hough transform whilst keeping more of the true signal hits. We also present the effect on the two methods of increasing the amount of support material in the detector and of introducing inefficiencies by deactivating detector modules. The increased support material has negligible effects on the efficiency for both methods, while dropping 5% (10%) of the available modules decreases the efficiency to about 95% (87%) for both methods, irrespective of the amount of pile-up.
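    The Hough-transform idea behind the second method can be sketched in one dimension: hits from a straight track through the origin share the slope y/x, so binning slopes and keeping only hits in well-populated bins rejects isolated noise. This is a deliberately simplified toy with hypothetical hit coordinates, not the paper's detector simulation:

    ```python
    from collections import Counter

    def hough_filter(hits, bin_width=0.1, threshold=3):
        """Keep only hits whose slope bin (Hough accumulator cell)
        collects at least `threshold` hits; others are treated as noise."""
        bins = [round((y / x) / bin_width) for x, y in hits]
        counts = Counter(bins)
        return [h for h, b in zip(hits, bins) if counts[b] >= threshold]

    hits = [(1, 1), (2, 2), (3, 3),   # track with slope 1
            (1, 2), (2, 4), (3, 6),   # track with slope 2
            (2, 5)]                   # isolated noise hit (slope 2.5)
    kept = hough_filter(hits)
    print(len(kept))  # 6 of the 7 hits survive; the noise hit is rejected
    ```

    A real track trigger parameterizes curved tracks in (curvature, angle) space and must trade accumulator granularity against the fake-combination rate, which is exactly the comparison metric used against the AM-chip approach above.
    
    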

  20. Chroma intra prediction based on inter-channel correlation for HEVC.

    PubMed

    Zhang, Xingyu; Gisquet, Christophe; François, Edouard; Zou, Feng; Au, Oscar C

    2014-01-01

    In this paper, we investigate a new inter-channel coding mode, called LM mode, proposed for the next-generation video coding standard High Efficiency Video Coding (HEVC). This mode exploits inter-channel correlation by using reconstructed luma to predict chroma linearly, with parameters derived from neighboring reconstructed luma and chroma pixels at both encoder and decoder to avoid overhead signaling. We analyze the LM mode and prove that the LM parameters for predicting original chroma and reconstructed chroma are statistically the same. We also analyze the error sensitivity of the LM parameters. We identify some situations in which the LM mode is problematic and propose three novel LM-like modes, called LMA, LML, and LMO, to address them. To limit the increase in complexity due to the LM-like modes, we propose fast algorithms with the help of new cost functions. We further identify some potentially problematic conditions in the parameter estimation (including the regression dilution problem) and introduce a novel model correction technique to detect and correct those conditions. Simulation results suggest that considerable BD-rate reduction can be achieved by the proposed LM-like modes and model correction technique. In addition, the performance gains of the two techniques appear to be essentially additive when combined.
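    The linear model at the heart of LM mode fits chroma ≈ α·luma + β by least squares over the neighboring reconstructed pixels. A minimal floating-point sketch of that fit (HEVC itself uses equivalent integer arithmetic, omitted here; the sample values are hypothetical):

    ```python
    def lm_parameters(luma, chroma):
        """Least-squares fit of chroma = alpha * luma + beta from
        neighboring reconstructed samples, as both encoder and decoder
        can do, avoiding any signaling overhead for alpha and beta."""
        n = len(luma)
        sx, sy = sum(luma), sum(chroma)
        sxx = sum(l * l for l in luma)
        sxy = sum(l * c for l, c in zip(luma, chroma))
        alpha = (n * sxy - sx * sy) / (n * sxx - sx * sx)
        beta = (sy - alpha * sx) / n
        return alpha, beta

    # Hypothetical neighbor samples lying exactly on chroma = 2*luma + 5.
    alpha, beta = lm_parameters([10, 20, 30, 40], [25, 45, 65, 85])
    print(alpha, beta)        # 2.0 5.0
    print(alpha * 50 + beta)  # 105.0: predicted chroma for a luma sample of 50
    ```

    Because both sides derive (α, β) from the same already-decoded neighbors, no parameters are transmitted; the error-sensitivity analysis in the paper concerns how noise in those neighbors perturbs this fit.
    
    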

  1. Females that experience threat are better teachers

    PubMed Central

    Kleindorfer, Sonia; Evans, Christine; Colombelli-Négrel, Diane

    2014-01-01

    Superb fairy-wren (Malurus cyaneus) females use an incubation call to teach their embryos a vocal password to solicit parental feeding care after hatching. We previously showed that high call rate by the female was correlated with high call similarity in fairy-wren chicks, but not in cuckoo chicks, and that parent birds more often fed chicks with high call similarity. Hosts should be selected to increase their defence behaviour when the risk of brood parasitism is highest, such as when cuckoos are present in the area. Therefore, we experimentally test whether hosts increase call rate to embryos in the presence of a singing Horsfield's bronze-cuckoo (Chalcites basalis). Female fairy-wrens increased incubation call rate when we experimentally broadcast cuckoo song near the nest. Embryos had higher call similarity when females had higher incubation call rate. We interpret the findings of increased call rate as increased teaching effort in response to a signal of threat. PMID:24806422

  2. Efficiency of primary care in rural Burkina Faso. A two-stage DEA analysis

    PubMed Central

    2011-01-01

    Background Providing health care services in Africa is hampered by a severe scarcity of personnel, medical supplies and financial funds. Consequently, managers of health care institutions are called upon to measure and improve the efficiency of their facilities in order to provide the best possible services with their resources. However, very little is known about the efficiency of health care facilities in Africa, and instruments of performance measurement are hardly applied in this context. Objective This study determines the relative efficiency of primary care facilities in Nouna, a rural health district in Burkina Faso. Furthermore, it analyses the factors influencing the efficiency of these institutions. Methodology We apply a two-stage Data Envelopment Analysis (DEA) based on data from a comprehensive provider and household information system. In the first stage, the relative efficiency of each institution is calculated by a traditional DEA model. In the second stage, we identify the reasons for inefficiency by regression techniques. Results The DEA projections suggest that inefficiency is mainly a result of poor utilization of health care facilities, as they were either too big or the demand was too low. Regression results showed that distance is an important factor influencing the efficiency of a health care institution. Conclusions Compared to the findings of existing one-stage DEA analyses of health facilities in Africa, the share of relatively efficient units is slightly higher. The difference might be explained by the rather homogeneous structure of the primary care facilities in the Burkina Faso sample. The study also indicates that improving the accessibility of primary care facilities will have a major impact on the efficiency of these institutions. Thus, health decision-makers are called upon to overcome the demand-side barriers in accessing health care. PMID:22828358

  3. Increased Mechanical Cost of Walking in Children with Diplegia: The Role of the Passenger Unit Cannot Be Neglected

    ERIC Educational Resources Information Center

    Van de Walle, P.; Hallemans, A.; Truijen, S.; Gosselink, R.; Heyrman, L.; Molenaers, G.; Desloovere, K.

    2012-01-01

    Gait efficiency in children with cerebral palsy is decreased. To date, most research did not include the upper body as a separate functional unit when exploring these changes in gait efficiency. Since children with spastic diplegia often experience problems with trunk control, they could benefit from separate evaluation of the so-called "passenger…

  4. Proposing a Master's Programme on Participatory Integrated Assessment of Energy Systems to Promote Energy Access and Energy Efficiency in Southern Africa

    ERIC Educational Resources Information Center

    Kiravu, Cheddi; Diaz-Maurin, François; Giampietro, Mario; Brent, Alan C.; Bukkens, Sandra G.F.; Chiguvare, Zivayi; Gasennelwe-Jeffrey, Mandu A.; Gope, Gideon; Kovacic, Zora; Magole, Lapologang; Musango, Josephine Kaviti; Ruiz-Rivas Hernando, Ulpiano; Smit, Suzanne; Vázquez Barquero, Antonio; Yunta Mezquita, Felipe

    2018-01-01

    Purpose: This paper aims to present a new master's programme for promoting energy access and energy efficiency in Southern Africa. Design/methodology/approach: A transdisciplinary approach called "participatory integrated assessment of energy systems" (PARTICIPIA) was used for the development of the curriculum. This approach is based on…

  5. [Organization of clinical emergency units. Mission and environmental factors determine the organizational concept].

    PubMed

    Genewein, U; Jakob, M; Bingisser, R; Burla, S; Heberer, M

    2009-02-01

    Mission and organization of emergency units were analysed to understand the underlying principles and concepts. The recent literature (2000-2007) on organizational structures and functional concepts of clinical emergency units was reviewed. An organizational portfolio based on the criteria of specialization (presence of medical specialists in the emergency unit) and integration (integration of the emergency unit into the hospital structure) was established. The resulting organizational archetypes were comparatively assessed based on established efficiency criteria (efficiency of resource utilization, process efficiency, market efficiency). Clinical emergency units differ with regard to autonomy (within the hospital structure), range of services and service depth (horizontal and vertical integration). The "specialization"-"integration" portfolio enabled the definition of typical organizational patterns (so-called archetypes): profit centres primarily driven by economic objectives, service centres operating on the basis of agreements with the hospital board, functional clinical units integrated into medical specialty units (e.g., surgery, gynaecology), and modular organizations characterized by small emergency teams that call specialists immediately after triage and initial diagnostics. There is no "one size fits all" concept for the organization of clinical emergency units. Instead, a number of well-characterized organizational concepts are available, enabling a rational choice based on a hospital's mission and demand.

  6. Development of a Mammographic Image Processing Environment Using MATLAB.

    DTIC Science & Technology

    1994-12-01

    Correctness, reliability, efficiency, integrity, usability (Figure 1.2 - McCall's software quality factors [Pressman, 1987]). Each quality factor is itself related to independent attributes called criteria [Cooper and Fisher, 1979], or metrics [Pressman, 1987], that can be used to judge, define, and measure quality [Cooper and Fisher, 1979]. Figure 3.9 shows the criteria that are used

  7. Electrodeposition of ZnO-doped films as window layer for Cd-free CIGS-based solar cells

    NASA Astrophysics Data System (ADS)

    Tsin, Fabien; Vénérosy, Amélie; Hildebrandt, Thibaud; Hariskos, Dimitrios; Naghavi, Negar; Lincot, Daniel; Rousset, Jean

    2016-02-01

    The Cu(In,Ga)Se2 (CIGS) thin-film solar cell technology has made steady progress within the last decade, reaching efficiencies of up to 22.3% at the laboratory scale and thus surpassing the highest efficiency for polycrystalline silicon solar cells. High-efficiency CIGS modules employ a so-called buffer layer of cadmium sulfide (CdS) deposited by chemical bath deposition (CBD), whose presence, together with the resulting Cd-containing waste, raises environmental concerns. A second potential bottleneck for CIGS technology is its window layer made of i-ZnO/ZnO:Al, which is deposited by sputtering and thus requires expensive vacuum equipment. A non-vacuum deposition of the transparent conductive oxide (TCO), relying on simpler equipment with lower investment costs, would be more economically attractive and could increase the competitiveness of CIGS-based modules with the mainstream silicon-based technologies. In the framework of the Novazolar project, we have developed a low-cost, aqueous-solution, photo-assisted electrodeposition process for the ZnO-based window layer of high-efficiency CIGS-based solar cells. The window layer deposition was first optimized on a classical CdS buffer layer, leading to cells with efficiencies similar to those measured for the sputtered references on the same absorber (15%). The optimized ZnO-doped layer was then adapted to cadmium-free devices, in which the CdS is replaced by a chemical-bath-deposited zinc oxysulfide Zn(S,O) buffer layer. The effect of different growth parameters has been studied on CBD-Zn(S,O)-plated co-evaporated Cu(In,Ga)Se2 substrates provided by the Zentrum für Sonnenenergie- und Wasserstoff-Forschung (ZSW). This optimization of the electrodeposition of ZnO:Cl on CIGS/Zn(S,O) stacks led to a record efficiency of 14%, while the reference cell with a sputtered (Zn,Mg)O/ZnO:Al window layer has an efficiency of 15.2%.

  8. Does Vessel Noise Affect Oyster Toadfish Calling Rates?

    PubMed

    Luczkovich, Joseph J; Krahforst, Cecilia S; Hoppe, Harry; Sprague, Mark W

    2016-01-01

    The question we addressed in this study is whether oyster toadfish respond to vessel disturbances by calling less when vessels with lower frequency spectra are present in a sound recording and afterward. Long-term data recorders were deployed in the Neuse (high vessel-noise site) and Pamlico (low vessel-noise site) Rivers. There were many fewer toadfish detections at the high vessel-noise site than at the low vessel-noise site. Calling rates were lower in the high-boat-traffic area, suggesting that toadfish cannot call over loud vessel noise, which reduces the overall calling rate, and that they may have to call more often when vessels are not present.

  9. Design of a 3D Navigation Technique Supporting VR Interaction

    NASA Astrophysics Data System (ADS)

    Boudoin, Pierre; Otmane, Samir; Mallem, Malik

    2008-06-01

    Multimodality is a powerful paradigm for increasing the realism and ease of interaction in Virtual Environments (VEs). In particular, the search for new metaphors and techniques for 3D interaction adapted to the navigation task is an important stage in the realization of future 3D interaction systems that support multimodality, in order to increase efficiency and usability. In this paper we propose a new multimodal 3D interaction model called Fly Over, devoted especially to the navigation task. We present a qualitative comparison between Fly Over and a classical navigation technique called gaze-directed steering. Results from a preliminary evaluation on the IBISC semi-immersive Virtual Reality/Augmented Reality EVR@ platform show that Fly Over is a user-friendly and efficient navigation technique.

  10. Call-handlers' working conditions and their subjective experience of work: a transversal study.

    PubMed

    Croidieu, Sophie; Charbotel, Barbara; Vohito, Michel; Renaud, Liliane; Jaussaud, Joelle; Bourboul, Christian; Ardiet, Dominique; Imbard, Isabelle; Guerin, Anne Céline; Bergeret, Alain

    2008-10-01

    The present study sought to describe call-center working conditions and call-handlers' subjective experience of their work. A transversal study was performed in companies followed by the 47 occupational physicians taking part. A dedicated questionnaire included one part on working conditions (work-station organization, task types, work schedules, and controls) and another on the perception of working conditions. Psychosocial risk factors were explored via three dimensions of the Karasek questionnaire: decision latitude, psychological demands, and social support. A descriptive stage characterized the population and quantified the frequency of the various types of work organization, working conditions, and perception. Certain working-conditions data were cross-tabulated with perception data. The total sample comprised 2,130 call-handlers from around 100 different companies. The population was 71.9% female, with a mean age of 32.4 years. The general educational level was high, with 1,443 (68.2%) of call-handlers having at least 2 years' higher education; 1,937 of the workers (91.2%) had permanent work contracts. Some working situations were found to be associated with low decision latitude and high psychological demands: i.e., where the schedule (full-time or part-time) was imposed, where the call-handlers had not chosen to work in a call-center, or where they received prior warning of controls. Moreover, the rate of low decision latitude and high psychological demands increased with seniority in the job. The rate of low decision latitude increased with the size of the company and was higher when call duration was imposed and when the call-handlers handled only incoming calls. The rate of high psychological demands was higher when call-handlers handled both incoming and outgoing calls. This study confirmed the high rate of psychosocial constraints for call-handlers and identified work situations at risk.

  11. Efficient free energy calculations by combining two complementary tempering sampling methods.

    PubMed

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-14

    Although energy barriers can be crossed efficiently in reaction coordinate (RC) guided sampling, this type of method suffers from difficulty in identifying the correct RCs or from the high dimensionality of the RCs required for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height may remain in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden-barrier situation, here we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, sampling along the major RCs with high energy barriers is guided by TAMD, and sampling of the remaining DOFs with lower but non-negligible barriers is enhanced by ITS. The performance of ITS-TAMD has been examined on three systems whose processes involve hidden barriers. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least fivefold even in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of the necessary RCs can be reduced. Our work demonstrates further potential applications of the ITS-TAMD method as an efficient and powerful tool for investigating a broad range of interesting cases.

  12. Efficient free energy calculations by combining two complementary tempering sampling methods

    NASA Astrophysics Data System (ADS)

    Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun

    2017-01-01

    Although energy barriers can be crossed efficiently in reaction coordinate (RC) guided sampling, this type of method suffers from difficulty in identifying the correct RCs or from the high dimensionality of the RCs required for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height may remain in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden-barrier situation, here we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, sampling along the major RCs with high energy barriers is guided by TAMD, and sampling of the remaining DOFs with lower but non-negligible barriers is enhanced by ITS. The performance of ITS-TAMD has been examined on three systems whose processes involve hidden barriers. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least fivefold even in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of the necessary RCs can be reduced. Our work demonstrates further potential applications of the ITS-TAMD method as an efficient and powerful tool for investigating a broad range of interesting cases.

  13. Transitioning to High Performance Homes: Successes and Lessons Learned From Seven Builders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widder, Sarah H.; Kora, Angela R.; Baechler, Michael C.

    2013-03-01

    As homebuyers become increasingly concerned about rising energy costs and the impact of fossil fuels as a major source of greenhouse gases, the returning new-home market is beginning to demand energy-efficient, comfortable, high-performance homes. In response, some innovative builders are gaining market share because they are able to market their homes' comfort, better indoor air quality, and aesthetics, in addition to energy efficiency. The success and marketability of these high-performance homes is creating builder demand for house plans and information about how to design, build, and sell their own low-energy homes. To help make these and other builders more successful in the transition to high-performance construction techniques, Pacific Northwest National Laboratory (PNNL) partnered with seven interested builders in the hot-humid and mixed-humid climates to provide technical and design assistance through two building science firms, Florida Home Energy and Resources Organization (FL HERO) and Calcs-Plus, and a designer that offers a line of stock plans designed specifically for energy efficiency, called Energy Smart Home Plans (ESHP). This report summarizes the findings of research on cost-effective high-performance whole-house solutions, focusing on real-world implementation and challenges and identifying effective solutions. The ensuing sections provide project background, profile each of the builders who participated in the program, describe their houses' construction characteristics and the key challenges the builders encountered during the construction and transaction process, and present primary lessons learned to be applied to future projects. As a result of this technical assistance, 17 homes have been built featuring climate-appropriate efficient envelopes, ducts in conditioned space, and correctly sized and controlled heating, ventilation, and air-conditioning systems.
In addition, most builders intend to integrate high-performance features into most or all of their homes in the future. As these seven builders have demonstrated, affordable high-performance homes are possible, but they require attention to detail and flexibility in design to accommodate specific regional geographic or market-driven constraints that can increase cost. With better information on how energy-efficiency trade-offs or design choices affect overall home performance, builders can make informed decisions regarding home design and construction to minimize cost without sacrificing performance and energy savings.

  14. Two-photon polymerization as a structuring technology in production: future or fiction?

    NASA Astrophysics Data System (ADS)

    Harnisch, Emely Marie; Schmitt, Robert

    2017-02-01

    Two-photon polymerization (TPP) has become an established generative fabrication technique for individual, up to three-dimensional, micro- and nanostructures. Owing to its high resolution beyond the diffraction limit, its writing speed is limited, and in most cases very special structures are fabricated in small quantities. With regard to the optical market's trends towards higher efficiencies, miniaturization, and higher functionalities, there is high demand for so-called intelligent light management systems, including individual optical elements. Here, TPP could offer a fabrication technique enabling higher structural complexity than conventional cutting and lithographic technologies do. But how can TPP be opened up for production? Against this background, some approaches to establishing TPP as a mastering technique for molding are presented in the following.

  15. What can we learn about lyssavirus genomes using 454 sequencing?

    PubMed

    Höper, Dirk; Finke, Stefan; Freuling, Conrad M; Hoffmann, Bernd; Beer, Martin

    2012-01-01

    The main task of individual project number four, "Whole genome sequencing, virus-host adaptation, and molecular epidemiological analyses of lyssaviruses", within the network "Lyssaviruses--a potential re-emerging public health threat", is to provide high-quality complete genome sequences of lyssaviruses. These sequences are analysed in depth with regard to the diversity of the viral populations, both as quasi-species and as so-called defective interfering RNAs. Moreover, the sequence data will facilitate further epidemiological analyses, provide insight into the evolution of lyssaviruses, and form the basis for the design of novel nucleic-acid-based diagnostics. The first results presented here indicate that not only can high-quality full-length lyssavirus genome sequences be generated, but efficient analysis of the viral population also becomes feasible.

  16. Information filtering via biased heat conduction.

    PubMed

    Liu, Jian-Guo; Zhou, Tao; Guo, Qiang

    2011-09-01

    The process of heat conduction has recently found application in personalized recommendation [Zhou et al., Proc. Natl. Acad. Sci. USA 107, 4511 (2010)], which is of high diversity but low accuracy. By decreasing the temperatures of small-degree objects, we present an improved algorithm, called biased heat conduction, which can simultaneously enhance accuracy and diversity. Extensive experimental analyses demonstrate that accuracy on the MovieLens, Netflix, and Delicious datasets can be improved by 43.5%, 55.4%, and 19.2%, respectively, compared with the standard heat conduction algorithm, while diversity is increased or approximately unchanged. Further statistical analyses suggest that the present algorithm can simultaneously identify users' mainstream and special tastes, resulting in better performance than the standard heat conduction algorithm. This work provides a credible route to highly efficient information filtering.
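
    The degree-biased averaging step described above can be sketched on a toy user-object bipartite graph. This is a minimal illustration only: applying the bias as an exponent lam on the object degree is an assumption for the sketch (lam = 1 recovers standard heat conduction), not necessarily the paper's exact parameterization.

```python
import numpy as np

# Toy user-object adjacency matrix A (rows: users, columns: objects).
A = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
], dtype=float)

def biased_heat_conduction(A, user, lam=0.8):
    """Score unseen objects for `user` by two-step heat conduction,
    biasing the object-side average with a degree exponent `lam`
    (lam = 1 is standard heat conduction; lam here is illustrative)."""
    k_user = A.sum(axis=1)            # user degrees
    k_obj = A.sum(axis=0)             # object degrees
    f = A[user].astype(float)         # initial "temperature" on collected objects
    h = (A @ f) / k_user              # each user averages its objects' temperatures
    f_new = (A.T @ h) / k_obj**lam    # each object averages its users', biased by degree
    f_new[A[user] > 0] = -np.inf      # exclude already-collected objects
    return f_new

scores = biased_heat_conduction(A, user=0)
```

    With the bias exponent below 1, large-degree (popular) objects are penalized less than in plain averaging, which is how the temperature of small-degree objects is effectively decreased relative to them.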

  17. High-performance multi-functional reverse osmosis membranes obtained by carbon nanotube·polyamide nanocomposite

    PubMed Central

    Inukai, Shigeki; Cruz-Silva, Rodolfo; Ortiz-Medina, Josue; Morelos-Gomez, Aaron; Takeuchi, Kenji; Hayashi, Takuya; Tanioka, Akihiko; Araki, Takumi; Tejima, Syogo; Noguchi, Toru; Terrones, Mauricio; Endo, Morinobu

    2015-01-01

    Clean water, obtained by desalinating sea water or by purifying wastewater, constitutes a major technological objective in the so-called water century. In this work, a high-performance reverse osmosis (RO) thin composite membrane using multi-walled carbon nanotubes (MWCNT) and aromatic polyamide (PA) was successfully prepared by interfacial polymerization. The effect of MWCNT on the chlorine resistance, antifouling, and desalination performance of the nanocomposite membranes was studied. We found that a suitable amount of MWCNT in PA, 15.5 wt.%, not only improves the membrane performance in terms of flow and antifouling, but also inhibits chlorine degradation of these membranes. Therefore, the present results clearly establish a solid foundation towards more efficient large-scale water desalination and other water treatment processes. PMID:26333385

  18. Chemiluminescent Nanomicelles for Imaging Hydrogen Peroxide and Self-Therapy in Photodynamic Therapy

    PubMed Central

    Chen, Rui; Zhang, Luzhong; Gao, Jian; Wu, Wei; Hu, Yong; Jiang, Xiqun

    2011-01-01

    Hydrogen peroxide is a signaling molecule of the tumor, and its overproduction leads to a higher concentration in tumor tissue than in normal tissue. Based on the fact that peroxalates produce chemiluminescence with high efficiency in the presence of hydrogen peroxide, we developed nanomicelles composed of peroxalate ester oligomers and fluorescent dyes, called peroxalate nanomicelles (POMs), which could image hydrogen peroxide with high sensitivity and stability. The potential application of the POMs in photodynamic therapy (PDT) for cancer was also investigated. It was found that the PDT-drug-loaded POMs were sensitive to hydrogen peroxide, and the PDT drug could be stimulated by the chemiluminescence from the reaction between POMs and hydrogen peroxide, enabling self-therapy of the tumor without an external laser light source. PMID:21765637

  19. Efficiency optimization of a fast Poisson solver in beam dynamics simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula

    2016-01-01

    Calculating the solution of Poisson's equation for the space-charge force is still the dominant time cost in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver used in beam dynamics simulations, the integrated Green's function method, and introduce three optimizations of the classical solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy, providing a fast routine for high-performance calculation of space-charge effects in accelerators.
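
    The baseline that the third optimization replaces, an explicitly zero-padded FFT convolution, can be illustrated with a minimal sketch. The generic kernel here merely stands in for the (integrated) Green's function; the result is checked against direct summation to confirm that the padding makes the circular FFT convolution equal to the linear one.

```python
import numpy as np

def fft_convolve2d(kernel, rho):
    """Linear (non-circular) 2D convolution via explicitly zero-padded
    FFTs -- the classical way a space-charge solver applies the
    Green's function kernel to the charge density rho."""
    ky, kx = kernel.shape
    ry, rx = rho.shape
    py, px = ky + ry - 1, kx + rx - 1            # padded size prevents wrap-around
    K = np.fft.rfft2(kernel, s=(py, px))         # s=... zero-pads before the FFT
    R = np.fft.rfft2(rho, s=(py, px))
    return np.fft.irfft2(K * R, s=(py, px))

def direct_convolve2d(kernel, rho):
    """Reference direct-summation convolution (slow, for verification)."""
    ky, kx = kernel.shape
    ry, rx = rho.shape
    out = np.zeros((ky + ry - 1, kx + rx - 1))
    for i in range(ry):
        for j in range(rx):
            out[i:i + ky, j:j + kx] += rho[i, j] * kernel
    return out
```

    The zero-padding doubles the work arrays, which is exactly the overhead the paper's fast convolution routine aims to reduce.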

  20. Learning moment-based fast local binary descriptor

    NASA Astrophysics Data System (ADS)

    Bellarbi, Abdelkader; Zenati, Nadia; Otmane, Samir; Belghit, Hayet

    2017-03-01

    Recently, binary descriptors have attracted significant attention due to their speed and low memory consumption; however, using intensity differences to calculate the binary descriptive vector is not efficient enough. We propose an approach to binary description called POLAR_MOBIL, in which we perform binary tests between geometrical and statistical information using moments in the patch instead of the classical intensity binary test. In addition, we introduce a learning technique used to select an optimized set of binary tests with low correlation and high variance. This approach offers high distinctiveness against affine transformations and appearance changes. An extensive evaluation on well-known benchmark datasets reveals the robustness and the effectiveness of the proposed descriptor, as well as its good performance in terms of low computation complexity when compared with state-of-the-art real-time local descriptors.
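
    The core idea, replacing intensity binary tests with comparisons of patch moments, can be sketched as follows. The sub-region sampling scheme and the choice of the raw moment m_11 are hypothetical illustrations, not the POLAR_MOBIL construction or its learned test selection.

```python
import numpy as np

def raw_moment(patch, p, q):
    """Raw image moment m_pq = sum over pixels of x^p * y^q * I(x, y)."""
    h, w = patch.shape
    y, x = np.mgrid[0:h, 0:w]
    return float((x**p * y**q * patch).sum())

def moment_binary_descriptor(patch, pairs, p=1, q=1):
    """One bit per pair of sub-regions: set when the first sub-region's
    moment exceeds the second's (moment-based analogue of an intensity
    binary test; illustrative only)."""
    bits = [1 if raw_moment(patch[r1], p, q) > raw_moment(patch[r2], p, q) else 0
            for (r1, r2) in pairs]
    return np.array(bits, dtype=np.uint8)

# Hypothetical sampling pattern: 256 random pairs of 8x8 sub-regions
# inside a 32x32 patch.
rng = np.random.default_rng(42)
def random_region():
    y0, x0 = rng.integers(0, 25, size=2)   # 0..24, so the 8-pixel window fits
    return (slice(y0, y0 + 8), slice(x0, x0 + 8))

pairs = [(random_region(), random_region()) for _ in range(256)]
patch = rng.random((32, 32))
desc = moment_binary_descriptor(patch, pairs)
```

    In the actual method, a learning step would select the subset of such tests with low correlation and high variance rather than sampling them at random.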

  1. Enabling MEMS technologies for communications systems

    NASA Astrophysics Data System (ADS)

    Lubecke, Victor M.; Barber, Bradley P.; Arney, Susanne

    2001-11-01

    Modern communications demands have been growing steadily not only in size but also in sophistication. Phone calls over copper wires have evolved into high-definition video conferencing over optical fibers and wireless internet browsing. The technology used to meet these demands is under constant pressure to provide increased capacity, speed, and efficiency, all with reduced size and cost. Various MEMS technologies have shown great promise for meeting these challenges by extending the performance of conventional circuitry and introducing radical new systems approaches. A variety of strategic MEMS structures, including various cost-effective free-space optics and high-Q RF components, are described, along with related practical implementation issues. These components are rapidly becoming essential for enabling the development of progressive new communications systems technologies, including all-optical networks and low-cost multi-system wireless terminals and base stations.

  2. A new uniformly valid asymptotic integration algorithm for elasto-plastic creep and unified viscoplastic theories including continuum damage

    NASA Technical Reports Server (NTRS)

    Chulya, Abhisak; Walker, Kevin P.

    1991-01-01

    A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.
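
    The implicit, iterative treatment of stiffness described above can be illustrated with a generic backward-Euler step solved by Newton iteration on a scalar stiff ODE. This is a minimal stand-in for the idea of an implicit iterative integrator, not the authors' uniformly valid asymptotic scheme or the HYPELA interface.

```python
def backward_euler(f, dfdy, y0, t0, t1, h):
    """Backward Euler with Newton iteration at each step: solve
    y_new - y - h * f(t + h, y_new) = 0 implicitly, which stays
    stable for large steps on stiff problems."""
    y, t = y0, t0
    while t < t1 - 1e-12:
        y_new = y                              # initial Newton guess
        for _ in range(20):
            g = y_new - y - h * f(t + h, y_new)
            dg = 1.0 - h * dfdy(t + h, y_new)  # residual derivative
            step = g / dg
            y_new -= step
            if abs(step) < 1e-12:
                break
        y, t = y_new, t + h
    return y

# Stiff test problem y' = -1000 y on [0, 1]: h = 0.1 is far outside the
# explicit (forward Euler) stability limit h < 2/1000, yet the implicit
# step remains stable and decays monotonically toward zero.
lam = 1000.0
y_end = backward_euler(lambda t, y: -lam * y, lambda t, y: -lam, 1.0, 0.0, 1.0, 0.1)
```

    On the same problem, forward Euler with h = 0.1 would multiply the solution by (1 - 100) each step and diverge, which is the comparison the abstract alludes to against a self-adaptive forward Euler method.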

  3. A high performance load balance strategy for real-time multicore systems.

    PubMed

    Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing

    2014-01-01

    Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm simultaneously considers multiple criteria, a novel factor, and task deadlines, and is called power and deadline-aware multicore scheduling (PDAMS). Experimental results show that the proposed algorithm can greatly reduce energy consumption (by up to 54.2%) and the number of missed deadlines, as compared to the other scheduling algorithms outlined in this paper.
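
    As a minimal illustration of deadline-aware load balancing (not the PDAMS algorithm itself, whose criteria are not specified here), a greedy sketch that serves urgent tasks first and always assigns the next task to the currently least-loaded core:

```python
import heapq

def balance_tasks(tasks, n_cores):
    """Greedy deadline-aware load balancing sketch.
    `tasks` is a list of (exec_time, deadline) pairs; returns the
    task-to-core assignment and the indices of tasks that would
    finish after their deadline on their assigned core."""
    # Earliest deadline first; among equal deadlines, longer tasks first.
    order = sorted(range(len(tasks)), key=lambda i: (tasks[i][1], -tasks[i][0]))
    heap = [(0.0, core) for core in range(n_cores)]   # (current load, core id)
    heapq.heapify(heap)
    assignment, missed = {}, []
    for i in order:
        exec_time, deadline = tasks[i]
        load, core = heapq.heappop(heap)              # least-loaded core
        finish = load + exec_time
        assignment[i] = core
        if finish > deadline:
            missed.append(i)
        heapq.heappush(heap, (finish, core))
    return assignment, missed

tasks = [(2.0, 4.0), (1.0, 2.0), (3.0, 6.0), (1.0, 3.0)]
assignment, missed = balance_tasks(tasks, n_cores=2)
```

    A power-aware scheduler would additionally weigh per-core energy cost when picking the target core; that extension is beyond this sketch.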

  4. The extreme ultraviolet spectrograph: A radial groove grating, sounding rocket-borne, astronomical instrument

    NASA Technical Reports Server (NTRS)

    Wilkinson, Erik; Green, James C.; Cash, Webster

    1993-01-01

    The design, calibration, and sounding rocket flight performance of a novel spectrograph suitable for moderate-resolution EUV spectroscopy are presented. The sounding rocket-borne instrument uses a radial groove grating to maintain a high system efficiency while controlling the aberrations induced when doing spectroscopy in a converging beam. The instrument has a resolution of approximately 2 A across the 200-330 A bandpass with an average effective area of 2 sq cm. The instrument, called the Extreme Ultraviolet Spectrograph, acquired the first EUV spectra in this wavelength region of the hot white dwarf G191-B2B and the late-type star Capella.

  5. myBlackBox: Blackbox Mobile Cloud Systems for Personalized Unusual Event Detection.

    PubMed

    Ahn, Junho; Han, Richard

    2016-05-23

    We demonstrate the feasibility of constructing a novel and practical real-world mobile cloud system, called myBlackBox, that efficiently fuses multimodal smartphone sensor data to identify and log unusual personal events in mobile users' daily lives. The system incorporates a hybrid architectural design that combines unsupervised classification of audio, accelerometer and location data with supervised joint fusion classification to achieve high accuracy, customization, convenience and scalability. We show the feasibility of myBlackBox by implementing and evaluating this end-to-end system that combines Android smartphones with cloud servers, deployed for 15 users over a one-month period.

  6. Modified harmony search

    NASA Astrophysics Data System (ADS)

    Mohamed, Najihah; Lutfi Amri Ramli, Ahmad; Majid, Ahmad Abd; Piah, Abd Rahni Mt

    2017-09-01

    A metaheuristic algorithm called Harmony Search (HS) is widely applied to parameter optimization in many areas. HS is a derivative-free real-parameter optimization algorithm that draws inspiration from the musical improvisation process of searching for a perfect state of harmony. In this paper we propose a Modified Harmony Search (MHS) for solving optimization problems, which employs concepts from the genetic algorithm and particle swarm optimization to generate new solution vectors, enhancing the performance of the HS algorithm. The performances of MHS and HS are investigated on ten benchmark optimization problems in order to make a comparison that reflects the efficiency of MHS in terms of final accuracy, convergence speed, and robustness.
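
    The classical HS improvisation loop that MHS builds on can be sketched as follows; the parameter values (memory size, HMCR, PAR, bandwidth) are common illustrative defaults, not those used in the paper.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1, iters=2000, seed=1):
    """Minimal classical Harmony Search for minimizing f over box bounds.
    hmcr: probability a component is drawn from harmony memory;
    par:  probability a memory-drawn component is pitch-adjusted by +/- bw."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(x) for x in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:            # memory consideration
                x = rng.choice(memory)[d]
                if rng.random() < par:         # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                              # random selection
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))    # clamp to bounds
        worst = max(range(hms), key=scores.__getitem__)
        s = f(new)
        if s < scores[worst]:                  # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = harmony_search(sphere, bounds=[(-5.0, 5.0)] * 3)
```

    The MHS variant described in the abstract would alter how `new` is generated, borrowing recombination from genetic algorithms and velocity-like guidance from particle swarm optimization, while keeping the same memory-update skeleton.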

  7. myBlackBox: Blackbox Mobile Cloud Systems for Personalized Unusual Event Detection

    PubMed Central

    Ahn, Junho; Han, Richard

    2016-01-01

    We demonstrate the feasibility of constructing a novel and practical real-world mobile cloud system, called myBlackBox, that efficiently fuses multimodal smartphone sensor data to identify and log unusual personal events in mobile users’ daily lives. The system incorporates a hybrid architectural design that combines unsupervised classification of audio, accelerometer and location data with supervised joint fusion classification to achieve high accuracy, customization, convenience and scalability. We show the feasibility of myBlackBox by implementing and evaluating this end-to-end system that combines Android smartphones with cloud servers, deployed for 15 users over a one-month period. PMID:27223292

  8. Modulating complex beams in amplitude and phase using fast tilt-micromirror arrays and phase masks.

    PubMed

    Roth, Matthias; Heber, Jörg; Janschek, Klaus

    2018-06-15

    The Letter proposes a system for the spatial modulation of light in amplitude and phase at kilohertz frame rates and high spatial resolution. The focus is on fast spatial light modulators (SLMs) consisting of continuously tiltable micromirrors. We investigate the use of such SLMs in combination with a static phase mask in a 4f setup. The phase mask enables complex beam modulation in a linear optical arrangement. Furthermore, adding so-called phase steps to the phase mask increases both the number of image pixels at constant SLM resolution and the optical efficiency. We illustrate our concept with numerical simulations.

  9. Cyclical parthenogenesis algorithm for layout optimization of truss structures with frequency constraints

    NASA Astrophysics Data System (ADS)

    Kaveh, A.; Zolghadr, A.

    2017-08-01

    Structural optimization with frequency constraints is seen as a challenging problem because it is associated with highly nonlinear, discontinuous and non-convex search spaces consisting of several local optima. Therefore, competent optimization algorithms are essential for addressing these problems. In this article, a newly developed metaheuristic method called the cyclical parthenogenesis algorithm (CPA) is used for layout optimization of truss structures subjected to frequency constraints. CPA is a nature-inspired, population-based metaheuristic algorithm, which imitates the reproductive and social behaviour of some animal species such as aphids, which alternate between sexual and asexual reproduction. The efficiency of the CPA is validated using four numerical examples.

  10. A new uniformly valid asymptotic integration algorithm for elasto-plastic-creep and unified viscoplastic theories including continuum damage

    NASA Technical Reports Server (NTRS)

    Chulya, A.; Walker, K. P.

    1989-01-01

    A new scheme to integrate a system of stiff differential equations for both the elasto-plastic creep and the unified viscoplastic theories is presented. The method has high stability, allows large time increments, and is implicit and iterative. It is suitable for use with continuum damage theories. The scheme was incorporated into MARC, a commercial finite element code through a user subroutine called HYPELA. Results from numerical problems under complex loading histories are presented for both small and large scale analysis. To demonstrate the scheme's accuracy and efficiency, comparisons to a self-adaptive forward Euler method are made.

  11. A High Performance Load Balance Strategy for Real-Time Multicore Systems

    PubMed Central

    Cho, Keng-Mao; Tsai, Chun-Wei; Chiu, Yi-Shiuan; Yang, Chu-Sing

    2014-01-01

    Finding ways to distribute workloads to each processor core and efficiently reduce power consumption is of vital importance, especially for real-time systems. In this paper, a novel scheduling algorithm is proposed for real-time multicore systems to balance the computation loads and save power. The developed algorithm simultaneously considers multiple criteria, a novel factor, and task deadlines, and is called power and deadline-aware multicore scheduling (PDAMS). Experimental results show that the proposed algorithm can greatly reduce energy consumption (by up to 54.2%) and the number of missed deadlines, as compared to the other scheduling algorithms outlined in this paper. PMID:24955382

  12. Catalyst and electrode research for phosphoric acid fuel cells

    NASA Technical Reports Server (NTRS)

    Antoine, A. C.; King, R. B.

    1987-01-01

    An account is given of the development status of high-performance catalyst and electrode materials for phosphoric acid fuel cells. Binary alloys have been identified which outperform the baseline platinum catalyst; it has also become apparent that pressurized operation is required to reach the desired efficiencies, calling in turn for the use of graphitized carbon blacks as catalyst supports. Efforts to improve cell performance and reduce catalyst costs have led to the investigation of a class of organometallic cathode catalysts represented by the tetraazaannulenes, and of a mixed catalyst comprising carbons catalyzed with an organometallic and a noble metal.

  13. Multi-species call-broadcast improved detection of endangered Yuma clapper rail compared to single-species call-broadcast

    USGS Publications Warehouse

    Nadeau, Christopher P.; Conway, Courtney J.; Piest, Linden; Burger, William P.

    2013-01-01

    Broadcasting calls of marsh birds during point-count surveys increases their detection probability and decreases variation in the number of birds detected across replicate surveys. However, multi-species monitoring using call-broadcast may reduce these benefits if birds are reluctant to call once they hear broadcasted calls of other species. We compared a protocol that uses call-broadcast for only one species (Yuma clapper rail [Rallus longirostris yumanensis]) to a protocol that uses call-broadcast for multiple species. We detected more of each of the following species using the multi-species protocol: 25 % more pied-billed grebes, 160 % more American bitterns, 52 % more least bitterns, 388 % more California black rails, 12 % more Yuma clapper rails, 156 % more Virginia rails, 214 % more soras, and 19 % more common gallinules. Moreover, the coefficient of variation was smaller when using the multi-species protocol: 10 % smaller for pied-billed grebes, 38 % smaller for American bitterns, 19 % smaller for least bitterns, 55 % smaller for California black rails, 5 % smaller for Yuma clapper rails, 38 % smaller for Virginia rails, 44 % smaller for soras, and 8 % smaller for common gallinules. Our results suggest that multi-species monitoring approaches may be more effective and more efficient than single-species approaches even when using call-broadcast.

  14. Photosymbiotic giant clams are transformers of solar flux.

    PubMed

    Holt, Amanda L; Vahidinia, Sanaz; Gagnon, Yakir Luc; Morse, Daniel E; Sweeney, Alison M

    2014-12-06

    'Giant' tridacnid clams have evolved a three-dimensional, spatially efficient, photodamage-preventing system for photosymbiosis. We discovered that the mantle tissue of giant clams, which harbours symbiotic nutrition-providing microalgae, contains a layer of iridescent cells called iridocytes that serve to distribute photosynthetically productive wavelengths by lateral and forward-scattering of light into the tissue while back-reflecting non-productive wavelengths with a Bragg mirror. The wavelength- and angle-dependent scattering from the iridocytes is geometrically coupled to the vertically pillared microalgae, resulting in an even re-distribution of the incoming light along the sides of the pillars, thus enabling photosynthesis deep in the tissue. There is a physical analogy between the evolved function of the clam system and an electric transformer, which changes energy flux per area in a system while conserving total energy. At incident light levels found on shallow coral reefs, this arrangement may allow algae within the clam system to both efficiently use all incident solar energy and avoid the photodamage and efficiency losses due to non-photochemical quenching that occur in the reef-building coral photosymbiosis. Both intra-tissue radiometry and multiscale optical modelling support our interpretation of the system's photophysics. This highly evolved 'three-dimensional' biophotonic system suggests a strategy for more efficient, damage-resistant photovoltaic materials and more spatially efficient solar production of algal biofuels, foods and chemicals.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatt, A.

    The 60th anniversary of the discovery of neutron activation analysis (NAA) by Hevesy and Levi is being celebrated in 1996. With the availability of nuclear reactors capable of producing fluxes of the order of 10^12 to 10^14 n/cm^2·s, the development of high-resolution and high-efficiency conventional and anticoincidence gamma-ray detectors, multichannel pulse-height analyzers, and personal computer-based software, NAA has become an extremely valuable analytical technique, especially for the simultaneous determination of multielement concentrations. This technique can be used in a number of ways, depending on the nature of the matrix, the major elements in the sample, and on the elements of interest. In most cases, several elements can be determined without any chemical pretreatment of the sample; the technique is then called instrumental NAA (INAA). In other cases, an element can be concentrated from an interfering matrix prior to irradiation; the technique is then termed preconcentration NAA (PNAA). In opposite instances, the irradiation is followed by a chemical separation of the desired element; the technique is then called radiochemical NAA (RNAA). All three forms of NAA can provide elemental concentrations of high accuracy and precision with excellent sensitivity. The number of research reactors in developing countries has increased steadily from 17 in 1955 through 71 in 1975 to 89 in 1995. Low flux reactors such as SLOWPOKE and the Chinese MNSR are primarily used for NAA.

  16. Multimodal observational assessment of quality and productivity benefits from the implementation of wireless technology for out of hours working

    PubMed Central

    Blakey, John D; Guy, Debbie; Simpson, Carl; Fearn, Andrew; Cannaby, Sharon; Wilson, Petra

    2012-01-01

    Objectives The authors investigated if a wireless system of call handling and task management for out of hours care could replace a standard pager-based system and improve markers of efficiency, patient safety and staff satisfaction. Design Prospective assessment using both quantitative and qualitative methods, including interviews with staff, a standard satisfaction questionnaire, independent observation, data extraction from work logs and incident reporting systems and analysis of hospital committee reports. Setting A large teaching hospital in the UK. Participants Hospital at night co-ordinators, clinical support workers and junior doctors handling approximately 10 000 tasks requested out of hours per month. Outcome measures Length of hospital stay, incidents reported, co-ordinator call logging activity, user satisfaction questionnaire, staff interviews. Results Users were more satisfied with the new system (satisfaction score 62/90 vs 82/90, p=0.0080). With the new system over 70 h/week of co-ordinator time was released, and there were fewer untoward incidents related to handover and medical response (OR=0.30, p=0.02). Broad clinical measures (cardiac arrest calls for peri-arrest situations and length of hospital stay) improved significantly in the areas covered by the new system. Conclusions The introduction of call handling software and mobile technology over a medical-grade wireless network improved staff satisfaction with the Hospital at Night system. Improvements in efficiency and information flow have been accompanied by a reduction in untoward incidents, length of stay and peri-arrest calls. PMID:22466035

  17. Calling Chromosome Alterations, DNA Methylation Statuses, and Mutations in Tumors by Simple Targeted Next-Generation Sequencing: A Solution for Transferring Integrated Pangenomic Studies into Routine Practice?

    PubMed

    Garinet, Simon; Néou, Mario; de La Villéon, Bruno; Faillot, Simon; Sakat, Julien; Da Fonseca, Juliana P; Jouinot, Anne; Le Tourneau, Christophe; Kamal, Maud; Luscap-Rondof, Windy; Boeva, Valentina; Gaujoux, Sebastien; Vidaud, Michel; Pasmant, Eric; Letourneur, Franck; Bertherat, Jérôme; Assié, Guillaume

    2017-09-01

    Pangenomic studies identified distinct molecular classes for many cancers, with major clinical applications. However, routine use requires cost-effective assays. We assessed whether targeted next-generation sequencing (NGS) could call chromosomal alterations and DNA methylation status. A training set of 77 tumors and a validation set of 449 (43 tumor types) were analyzed by targeted NGS and single-nucleotide polymorphism (SNP) arrays. Thirty-two tumors were analyzed by NGS after bisulfite conversion, and compared to methylation array or methylation-specific multiplex ligation-dependent probe amplification. Considering allelic ratios, correlation was strong between targeted NGS and SNP arrays (r = 0.88). In contrast, considering DNA copy number, for variations of one DNA copy, correlation was weaker between read counts and SNP array (r = 0.49). Thus, we generated TARGOMICs, optimized for detecting chromosome alterations by combining allelic ratios and read counts generated by targeted NGS. Sensitivity for calling normal, lost, and gained chromosomes was 89%, 72%, and 31%, respectively. Specificity was 81%, 93%, and 98%, respectively. These results were confirmed in the validation set. Finally, TARGOMICs could efficiently align and compute proportions of methylated cytosines from bisulfite-converted DNA from targeted NGS. In conclusion, beyond calling mutations, targeted NGS efficiently calls chromosome alterations and methylation status in tumors. A single run and minor design/protocol adaptations are sufficient. Optimizing targeted NGS should expand translation of genomics to clinical routine. Copyright © 2017 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
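
    As a rough illustration of how the two signals can be combined, the toy function below calls a chromosome arm from a read-count log2 ratio and SNP B-allele frequencies. The rule and its thresholds are illustrative assumptions, not TARGOMICs' actual algorithm:

```python
def call_arm(log2_ratio, het_bafs, ratio_thr=0.2, baf_thr=0.1):
    """Toy chromosome-arm caller combining the two signals targeted NGS
    provides: a read-count log2 ratio and B-allele frequencies (BAF) of
    heterozygous SNPs. A one-copy loss or gain shifts BAFs away from 0.5
    and moves the coverage ratio down or up. Thresholds are illustrative."""
    mean_baf_shift = sum(abs(b - 0.5) for b in het_bafs) / len(het_bafs)
    if log2_ratio < -ratio_thr and mean_baf_shift > baf_thr:
        return "lost"
    if log2_ratio > ratio_thr and mean_baf_shift > baf_thr:
        return "gained"
    return "normal"
```

    Requiring both signals reflects the abstract's finding that read counts alone correlate only weakly with copy number (r = 0.49), so allelic ratios are needed to make the call robust.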

  18. Need for recovery among male technical distal on-call workers.

    PubMed

    van de Ven, Hardy A; Bültmann, Ute; de Looze, Michiel P; Koolhaas, Wendy; Kantermann, Thomas; Brouwer, Sandra; van der Klink, Jac J L

    2015-01-01

    The objectives of this study were to (1) examine whether need for recovery differs between workers (i) not on-call, (ii) on-call but not called and (iii) on-call and called, and (2) investigate the associations between age, health, work and social characteristics with need for recovery for the three scenarios (i-iii). Cross-sectional data of N = 169 Dutch distal on-call workers were analysed with multivariate logistic regression. Need for recovery differed significantly between the three scenarios (i-iii), with lowest need for recovery for scenario (i) 'not on-call' and highest need for recovery for scenario (iii) 'on-call and called'. Poor mental health and high work-family interference were associated with higher need for recovery in all three scenarios (i-iii), whereas high work demands was only associated with being on-call (ii and iii). The results suggest that the mere possibility of being called affects the need for recovery, especially in workers reporting poor mental health, high-work demands and work-family interference. Practitioner summary: On-call work is a scarcely studied but demanding working time arrangement. We examined need for recovery and its associations with age, health, work and social characteristics among distal on-call workers. The results suggest that the mere possibility of being called can affect worker well-being and need for recovery.

  19. Analysis of entropy extraction efficiencies in random number generation systems

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; Han, Zheng-Fu

    2016-05-01

    Random numbers (RNs) have applications in many areas: lottery games, gambling, computer simulation, and, most importantly, cryptography [N. Gisin et al., Rev. Mod. Phys. 74 (2002) 145]. In cryptography theory, the theoretical security of the system calls for high quality RNs. Therefore, developing methods for producing unpredictable RNs with adequate speed is an attractive topic. Early on, despite the lack of theoretical support, pseudo RNs generated by algorithmic methods performed well and satisfied reasonable statistical requirements. However, as implemented, those pseudorandom sequences were completely determined by mathematical formulas and initial seeds, which cannot introduce extra entropy or information. In these cases, “random” bits are generated that are not at all random. Physical random number generators (RNGs), which, in contrast to algorithmic methods, are based on unpredictable physical random phenomena, have attracted considerable research interest. However, the way that we extract random bits from those physical entropy sources has a large influence on the efficiency and performance of the system. In this manuscript, we will review and discuss several randomness extraction schemes that are based on radiation or photon arrival times. We analyze the robustness, post-processing requirements and, in particular, the extraction efficiency of those methods to aid in the construction of efficient, compact and robust physical RNG systems.
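
    One simple comparison-based scheme of the kind reviewed extracts bits by comparing disjoint pairs of photon inter-arrival intervals: a shorter-then-longer ordering yields 0, the reverse yields 1, and ties are discarded. A sketch on simulated Poisson-process arrivals (illustrative, not any specific system from the paper):

```python
import random

def bits_from_arrival_times(timestamps):
    """Compare successive inter-arrival intervals in disjoint pairs:
    t1 < t2 -> 0, t1 > t2 -> 1, ties discarded. For i.i.d. exponential
    intervals the two outcomes are equally likely, giving unbiased bits."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    bits = []
    for t1, t2 in zip(intervals[::2], intervals[1::2]):  # disjoint pairs
        if t1 < t2:
            bits.append(0)
        elif t1 > t2:
            bits.append(1)
    return bits

# Simulated photon arrival times from a Poisson process:
random.seed(0)
t, times = 0.0, []
for _ in range(2000):
    t += random.expovariate(1.0)
    times.append(t)
raw_bits = bits_from_arrival_times(times)
```

    Note the extraction efficiency of this scheme is at most 0.5 bits per interval, which is exactly the kind of trade-off between robustness and efficiency the paper analyzes.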

  20. A novel generation of 3D SAR-based passive micromixer: efficient mixing and low pressure drop at a low Reynolds number

    NASA Astrophysics Data System (ADS)

    Viktorov, Vladimir; Nimafar, Mohammad

    2013-05-01

    This study introduces a novel generation of 3D splitting and recombination (SAR) passive micromixer with microstructures placed on the top and bottom floors of microchannels, called a 'chain mixer'. Both experimental verification and numerical analysis of the flow structure of this type of passive micromixer have been performed to evaluate the mixing performance and pressure drop of the microchannel, respectively. We propose here two types of chain mixer, chain 1 and chain 2, and compare their mixing performance and pressure drop with other micromixers: T-, o- and tear-drop micromixers. Experimental tests were carried out in the laminar flow regime over a low Reynolds number range, 0.083 ≤ Re ≤ 4.166, and image-based techniques were used to evaluate the mixing efficiency. In addition, the computational fluid dynamics code ANSYS FLUENT 13.0 was used to analyze the flow and pressure drop in the microchannel. Experimental results show that the efficiency of the chain and tear-drop mixers is very high because of the SAR process: specifically, an efficiency of up to 98% can be achieved at the tested Reynolds numbers. The results also show that chain mixers require a lower pressure drop than a tear-drop micromixer.
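
    Image-based mixing evaluation typically reduces to an intensity-uniformity index. A toy version, assuming grayscale pixel intensities normalised to [0, 1] and two equal fluid streams (so a fully unmixed image has standard deviation 0.5); this is illustrative, not the authors' exact formula:

```python
def mixing_efficiency(intensities):
    """Mixing index 1 - sigma/sigma_max for normalised pixel intensities,
    where sigma is the population standard deviation and sigma_max = 0.5
    corresponds to fully segregated streams of two equal fluids.
    Returns 1.0 for a perfectly mixed (uniform) image, 0.0 for unmixed."""
    n = len(intensities)
    mean = sum(intensities) / n
    sigma = (sum((x - mean) ** 2 for x in intensities) / n) ** 0.5
    return 1.0 - sigma / 0.5
```

    An efficiency near 98%, as reported for the chain mixers, would correspond to a near-uniform intensity field at the outlet.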

  1. Efficiently sampling conformations and pathways using the concurrent adaptive sampling (CAS) algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Surl-Hee; Grate, Jay W.; Darve, Eric F.

    Molecular dynamics (MD) simulations are useful in obtaining thermodynamic and kinetic properties of biomolecules but are limited by the timescale barrier, i.e., we may be unable to efficiently obtain properties because we need to run microseconds or longer simulations using femtosecond time steps. While there are several existing methods to overcome this timescale barrier and efficiently sample thermodynamic and/or kinetic properties, problems remain in regard to being able to sample unknown systems, deal with a high-dimensional space of collective variables, and focus the computational effort on slow timescales. Hence, a new sampling method, called the “Concurrent Adaptive Sampling (CAS) algorithm,” has been developed to tackle these three issues and efficiently obtain conformations and pathways. The method is not constrained to use only one or two collective variables, unlike most reaction coordinate-dependent methods. Instead, it can use a large number of collective variables and uses macrostates (a partition of the collective variable space) to enhance the sampling. The exploration is done by running a large number of short simulations, and a clustering technique is used to accelerate the sampling. In this paper, we introduce the new methodology and show results from two-dimensional models and biomolecules, such as penta-alanine and triazine polymer.
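
    The bookkeeping at the heart of such macrostate-based adaptive sampling can be sketched as a split/merge resampling step that keeps a fixed number of walkers per occupied macrostate while conserving total probability weight. This is a generic weighted-ensemble-style sketch under that assumption, not the CAS implementation (which adds clustering and collective-variable machinery on top):

```python
import random

def resample_walkers(walkers, n_per_bin, bin_of, rng):
    """Split/merge walkers so each occupied macrostate bin holds exactly
    n_per_bin walkers, conserving the total probability weight per bin.
    Each walker is a (position, weight) pair; bin_of maps a position to
    its macrostate label."""
    bins = {}
    for pos, w in walkers:
        bins.setdefault(bin_of(pos), []).append((pos, w))
    new = []
    for members in bins.values():
        total = sum(w for _, w in members)
        positions = [p for p, _ in members]
        weights = [w for _, w in members]
        # draw n_per_bin walkers proportionally to weight, equal new weights
        for _ in range(n_per_bin):
            pos = rng.choices(positions, weights=weights)[0]
            new.append((pos, total / n_per_bin))
    return new

rng = random.Random(0)
walkers = [(0.1, 0.5), (0.2, 0.25), (1.3, 0.25)]   # two walkers in bin 0, one in bin 1
new = resample_walkers(walkers, 2, bin_of=lambda x: int(x), rng=rng)
```

    Between resampling steps, each walker would be propagated by a short MD simulation, which is where the "large number of short simulations" in the abstract comes in.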

  2. Fast global image smoothing based on weighted least squares.

    PubMed

    Min, Dongbo; Choi, Sunghwan; Lu, Jiangbo; Ham, Bumsub; Sohn, Kwanghoon; Do, Minh N

    2014-12-01

    This paper presents an efficient technique for performing spatially inhomogeneous edge-preserving image smoothing, called the fast global smoother. Focusing on sparse Laplacian matrices consisting of a data term and a prior term (typically defined using four or eight neighbors for a 2D image), our approach efficiently solves such global objective functions. In particular, we approximate the solution of the memory- and computation-intensive large linear system, defined over a d-dimensional spatial domain, by solving a sequence of 1D subsystems. Our separable implementation enables applying a linear-time tridiagonal matrix algorithm to solve d three-point Laplacian matrices iteratively. Our approach combines the best of two paradigms, i.e., efficient edge-preserving filters and optimization-based smoothing. Our method has a runtime comparable to the fast edge-preserving filters, but its global optimization formulation overcomes many limitations of the local filtering approaches. Our method also achieves results of quality comparable to the state-of-the-art optimization-based techniques, but runs ∼10-30 times faster. Besides, considering the flexibility in defining an objective function, we further propose generalized fast algorithms that perform Lγ norm smoothing (0 < γ < 2) and support an aggregated (robust) data term for handling imprecise data constraints. We demonstrate the effectiveness and efficiency of our techniques in a range of image processing and computer graphics applications.
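
    The 1D building block is a three-point (tridiagonal) system solved in linear time. Below is a sketch of one such subsystem, assuming the weighted least squares objective sum_i (u_i - f_i)^2 + lam * sum_i w_i (u_{i+1} - u_i)^2 and solving its normal equations with the Thomas algorithm; the paper's method sweeps a solver of this kind across each dimension of the image, which this 1D sketch does not show:

```python
def wls_smooth_1d(f, weights, lam):
    """Solve (I + lam * L_w) u = f for a weighted 1D Laplacian L_w using the
    linear-time Thomas algorithm. weights[i] couples samples i and i+1;
    a small weight preserves an edge there (edge-aware smoothing)."""
    n = len(f)
    a, b, c = [0.0] * n, [0.0] * n, [0.0] * n   # sub-, main, super-diagonal
    for i in range(n):
        left = weights[i - 1] if i > 0 else 0.0
        right = weights[i] if i < n - 1 else 0.0
        b[i] = 1.0 + lam * (left + right)
        a[i] = -lam * left
        c[i] = -lam * right
    # forward sweep
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]
    dp[0] = f[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (f[i] - a[i] * dp[i - 1]) / m
    # back substitution
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# Uniform weights smooth across the step; a zero weight preserves it exactly.
u_smooth = wls_smooth_1d([0, 0, 0, 1, 1, 1], [1.0] * 5, 1.0)
u_edge = wls_smooth_1d([0, 0, 0, 1, 1, 1], [1.0, 1.0, 0.0, 1.0, 1.0], 1.0)
```

    Setting the coupling weight to zero across the step decouples the system into two constant halves, so the edge survives untouched; this is the edge-preserving behavior the prior term provides.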

  3. An adaptive proper orthogonal decomposition method for model order reduction of multi-disc rotor system

    NASA Astrophysics Data System (ADS)

    Jin, Yulin; Lu, Kuan; Hou, Lei; Chen, Yushu

    2017-12-01

    The proper orthogonal decomposition (POD) method is a main and efficient tool for order reduction of high-dimensional complex systems in many research fields. However, the robustness problem of this method remains unsolved, although some modified POD methods have been proposed to address it. In this paper, a new adaptive POD method called the interpolation Grassmann manifold (IGM) method is proposed to address the weakness of the local property of the interpolation tangent-space of Grassmann manifold (ITGM) method in a wider parametric region. This method is demonstrated here on a nonlinear rotor system of 33 degrees of freedom (DOFs) with a pair of liquid-film bearings and a pedestal looseness fault. The motion region of the rotor system is divided into two parts: a simple motion region and a complex motion region. The adaptive POD method is compared with the ITGM method for large and small spans of the parameter in the two parametric regions to present the advantage of this method and the disadvantage of the ITGM method. The comparisons of the responses are applied to verify the accuracy and robustness of the adaptive POD method, and the computational efficiency is also analyzed. As a result, the new adaptive POD method has strong robustness and high computational efficiency and accuracy in a wide scope of the parameter.
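
    The POD core that these variants share extracts dominant modes from simulation snapshots. A pure-Python sketch of the method of snapshots, using power iteration to find the leading mode (illustrative only; the paper's contribution is the adaptive IGM interpolation built on top of this, which is not shown):

```python
def dominant_pod_mode(snapshots, iters=200):
    """Method of snapshots: the leading POD mode is the dominant eigenvector
    of the snapshot correlation matrix C[i][j] = <x_i, x_j> / m, lifted back
    to state space and normalised. Found here by power iteration."""
    m = len(snapshots)
    n = len(snapshots[0])
    C = [[sum(a * b for a, b in zip(snapshots[i], snapshots[j])) / m
          for j in range(m)] for i in range(m)]
    v = [1.0] * m
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(m)) for i in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # lift the eigenvector to a spatial mode and normalise it
    mode = [sum(v[k] * snapshots[k][i] for k in range(m)) for i in range(n)]
    norm = sum(x * x for x in mode) ** 0.5
    return [x / norm for x in mode]

# Snapshots that are all multiples of one direction have that direction as mode.
mode = dominant_pod_mode([[3, 0, 4], [6, 0, 8], [-3, 0, -4]])
```

    In a reduced-order model, the high-dimensional state is then projected onto a handful of such modes, which is what makes the order reduction of the 33-DOF rotor system tractable.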

  4. Development of the Tensoral Computer Language

    NASA Technical Reports Server (NTRS)

    Ferziger, Joel; Dresselhaus, Eliot

    1996-01-01

    The research scientist or engineer wishing to perform large scale simulations or to extract useful information from existing databases is required to have expertise in the details of the particular database, the numerical methods and the computer architecture to be used. This poses a significant practical barrier to the use of simulation data. The goal of this research was to develop a high-level computer language called Tensoral, designed to remove this barrier. The Tensoral language provides a framework in which efficient generic data manipulations can be easily coded and implemented. First of all, Tensoral is general. The fundamental objects in Tensoral represent tensor fields and the operators that act on them. The numerical implementation of these tensors and operators is completely and flexibly programmable. New mathematical constructs and operators can be easily added to the Tensoral system. Tensoral is compatible with existing languages. Tensoral tensor operations co-exist in a natural way with a host language, which may be any sufficiently powerful computer language such as Fortran, C, or Vectoral. Tensoral is very-high-level. Tensor operations in Tensoral typically act on entire databases (i.e., arrays) at one time and may, therefore, correspond to many lines of code in a conventional language. Tensoral is efficient. Tensoral is a compiled language. Database manipulations are simplified, optimized, and scheduled by the compiler, eventually resulting in efficient machine code to implement them.

  5. Home teleradiology system

    NASA Astrophysics Data System (ADS)

    Komo, Darmadi; Garra, Brian S.; Freedman, Matthew T.; Mun, Seong K.

    1997-05-01

    The Home Teleradiology Server system has been developed and installed at the Department of Radiology, Georgetown University Medical Center. The main purpose of the system is to provide a service for on-call physicians to view patients' medical images at home during off-hours. This service will reduce the overhead time required by on-call physicians to travel to the hospital, thereby increasing the efficiency of patient care and improving the total quality of the health care. Typically when a new case is conducted, the medical images generated from CT, US, and/or MRI modalities are transferred to a central server at the hospital via DICOM messages over an existing hospital network. The server has a DICOM network agent that listens to DICOM messages sent by CT, US, and MRI modalities and stores them into separate DICOM files for sending purposes. The server also has general-purpose, flexible scheduling software that can be configured to send image files to specific user(s) at certain times on any day(s) of the week. The server will then distribute the medical images to on-call physicians' homes via a high-speed modem. All file transmissions occur in the background without human interaction after the scheduling software is pre-configured accordingly. At the receiving end, the physicians' computers consist of high-end workstations that have high-speed modems to receive the medical images sent by the central server from the hospital, and DICOM compatible viewer software to view the transmitted medical images in DICOM format. A technician from the hospital will notify the physician(s) after all the image files have been completely sent. The physician(s) will then examine the medical images and decide if it is necessary to travel to the hospital for further examination of the patients. Overall, the Home Teleradiology system provides the on-call physicians with a cost-effective and convenient environment for viewing patients' medical images at home.

  6. AmpliVar: mutation detection in high-throughput sequence from amplicon-based libraries.

    PubMed

    Hsu, Arthur L; Kondrashova, Olga; Lunke, Sebastian; Love, Clare J; Meldrum, Cliff; Marquis-Nicholson, Renate; Corboy, Greg; Pham, Kym; Wakefield, Matthew; Waring, Paul M; Taylor, Graham R

    2015-04-01

    Conventional means of identifying variants in high-throughput sequencing align each read against a reference sequence, and then call variants at each position. Here, we demonstrate an orthogonal means of identifying sequence variation by grouping the reads as amplicons prior to any alignment. We used AmpliVar to make key-value hashes of sequence reads and group reads as individual amplicons using a table of flanking sequences. Low-abundance reads were removed according to a selectable threshold, and reads above this threshold were aligned as groups, rather than as individual reads, permitting the use of sensitive alignment tools. We show that this approach is more sensitive, more specific, and more computationally efficient than comparable methods for the analysis of amplicon-based high-throughput sequencing data. The method can be extended to enable alignment-free confirmation of variants seen in hybridization capture target-enrichment data. © 2015 WILEY PERIODICALS, INC.
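
    A minimal sketch of the pre-alignment grouping idea: hash identical reads, assign them to amplicons via a table of flanking sequences, and drop low-abundance variants before any alignment. The function names, flank-table format, and threshold are illustrative, not AmpliVar's actual interface:

```python
from collections import defaultdict

def group_reads_by_amplicon(reads, flanks):
    """Group raw reads into amplicons by matching known flanking sequences,
    counting identical reads in a key-value hash (AmpliVar-style sketch).
    flanks maps an amplicon name to its (left, right) flanking sequences."""
    groups = defaultdict(lambda: defaultdict(int))  # amplicon -> read -> count
    for read in reads:
        for name, (left, right) in flanks.items():
            if read.startswith(left) and read.endswith(right):
                groups[name][read] += 1
                break
    return groups

def filter_low_abundance(groups, min_count=2):
    """Drop read variants below a selectable abundance threshold, so only
    the surviving groups (not individual reads) need alignment."""
    return {name: {seq: n for seq, n in seqs.items() if n >= min_count}
            for name, seqs in groups.items()}

flanks = {"AMP1": ("ACGT", "TTAA")}                      # hypothetical amplicon
reads = ["ACGTGGGGTTAA"] * 3 + ["ACGTGCGGTTAA"]          # one low-abundance variant
kept = filter_low_abundance(group_reads_by_amplicon(reads, flanks), min_count=2)
```

    Because each surviving group is aligned once rather than read-by-read, slower but more sensitive aligners become affordable, which is the efficiency argument the abstract makes.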

  7. Using a small hybrid pulse power transformer unit as component of a high-current opening switch for a railgun

    NASA Astrophysics Data System (ADS)

    Leung, E. M. W.; Bailey, R. E.; Michels, P. H.

    1989-03-01

    The hybrid pulse power transformer (HPPT) is a unique concept utilizing the ultrafast superconducting-to-normal transition process of a superconductor. Used in the form of a hybrid transformer current-zero switch (HTCS), it provides an approach in which the large, high-power, high-current opening switch in a conventional railgun system can be eliminated. This represents an innovative application of superconductivity to the pulsed power conditioning required for the Strategic Defense Initiative (SDI). The authors explain the working principles of a 100-kJ unit capable of switching up to 500 kA at a frequency of 0.5 Hz and with a system efficiency of greater than 90 percent. Circuit analysis using a computer code called SPICE PLUS was used to verify the HTCS concept. This concept can be scaled up to applications at the several mega-ampere level.

  8. Investigations of HID Lamp Electrodes under HF Operation

    NASA Astrophysics Data System (ADS)

    Reinelt, Jens; Langenscheidt, Oliver; Westermeier, Michael; Mentel, Juergen; Awakowicz, Peter

    2007-10-01

    Low-pressure lamps have been operated at high frequencies for many years to improve the efficiency of the lamps and their drivers. For high-pressure discharge lamps this operating mode has not been established yet. Generally it can be assumed that there are changes in the electrode physics which may lead to undesired lamp behavior if HID lamps are operated at a high frequency. To gain insight into these fundamental changes, the so-called Bochum Model Lamp is used. It is a simple system which allows fundamental research on HID electrode behavior and the near-electrode region without the occurrence of acoustic resonances. For the investigation, phase-resolved photography, pyrometry, and spectrometry are used. The presented results describe changes in the electrode temperature and in the kind of arc attachment on the electrodes (diffuse and spot mode) depending on frequency. Measurements of the electrode-sheath voltage (ESV) as a function of frequency are also presented.

  9. Cox-nnet: An artificial neural network method for prognosis prediction of high-throughput omics data.

    PubMed

    Ching, Travers; Zhu, Xun; Garmire, Lana X

    2018-04-01

    Artificial neural networks (ANN) are computing architectures with many interconnections of simple neural-inspired computing elements, and have been applied to biomedical fields such as imaging analysis and diagnosis. We have developed a new ANN framework called Cox-nnet to predict patient prognosis from high throughput transcriptomics data. In 10 TCGA RNA-Seq data sets, Cox-nnet achieves the same or better predictive accuracy compared to other methods, including Cox-proportional hazards regression (with LASSO, ridge, and minimax concave penalty), Random Forests Survival and CoxBoost. Cox-nnet also reveals richer biological information, at both the pathway and gene levels. The outputs from the hidden layer node provide an alternative approach for survival-sensitive dimension reduction. In summary, we have developed a new method for accurate and efficient prognosis prediction on high throughput data, with functional biological insights. The source code is freely available at https://github.com/lanagarmire/cox-nnet.
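
    Cox-type models, whether linear or neural, are trained against the Cox partial likelihood: an observed event is scored by its risk relative to everyone still at risk at that time. A minimal sketch of that loss for precomputed risk scores, with no censoring-tie handling (illustrative; the actual implementation is in the repository linked above):

```python
import math

def cox_neg_log_partial_likelihood(risk_scores, times, events):
    """Negative log Cox partial likelihood. risk_scores are the model's
    log-hazard predictions; events[i] is 1 if the event was observed for
    patient i, 0 if censored. Only observed events contribute terms."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    total = 0.0
    for idx, i in enumerate(order):
        if events[i]:
            risk_set = order[idx:]   # patients still under observation
            log_sum = math.log(sum(math.exp(risk_scores[j]) for j in risk_set))
            total += risk_scores[i] - log_sum
    return -total

# A model that ranks the earliest event highest should score a lower loss.
good = cox_neg_log_partial_likelihood([2.0, 1.0, 0.0], [1, 2, 3], [1, 1, 0])
bad = cox_neg_log_partial_likelihood([0.0, 1.0, 2.0], [1, 2, 3], [1, 1, 0])
```

    In a Cox-nnet-style model the risk scores would come from a neural network's output node, and this loss would be minimized by gradient descent with regularization.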

  10. Phase-detected Brillouin optical correlation-domain reflectometry

    NASA Astrophysics Data System (ADS)

    Mizuno, Yosuke; Hayashi, Neisei; Fukuda, Hideyuki; Nakamura, Kentaro

    2018-05-01

    Optical fiber sensing techniques based on Brillouin scattering have been extensively studied for structural health monitoring owing to their capability of distributed strain and temperature measurement. Although a higher signal-to-noise ratio (leading to high spatial resolution and high-speed measurement) is generally obtained for two-end-access systems, they reduce the degree of freedom in embedding the sensors into structures, and render the measurement no longer feasible when extremely high loss or breakage occurs at a point of the sensing fiber. To overcome these drawbacks, a one-end-access sensing technique called Brillouin optical correlation-domain reflectometry (BOCDR) has been developed. BOCDR has a high spatial resolution and cost efficiency, but its conventional configuration suffered from relatively low-speed operation. In this paper, we review the recently developed high-speed configurations of BOCDR, including phase-detected BOCDR, with which we demonstrate real-time distributed measurement by tracking a propagating mechanical wave. We also demonstrate breakage detection with a wide strain dynamic range.

  12. Milestones Toward 50% Efficient Solar Cell Modules

    DTIC Science & Technology

    2007-09-01

    efficiency, both at solar cells and module level. The optical system consists of a tiled nonimaging concentrating system, coupled with a spectral...which combines a nonimaging optical concentrator (which does not require tracking and is called a static concentrator) with spectral splitting...DESIGN AND RESULTS The optical design is based on non-symmetric, nonimaging optics, tiled into an array. The central issues in the optical system

  13. optGpSampler: an improved tool for uniformly sampling the solution-space of genome-scale metabolic networks.

    PubMed

    Megchelenbrink, Wout; Huynen, Martijn; Marchiori, Elena

    2014-01-01

    Constraint-based models of metabolic networks are typically underdetermined, because they contain more reactions than metabolites. Therefore the solutions to this system do not consist of unique flux rates for each reaction, but rather a space of possible flux rates. By uniformly sampling this space, an estimated probability distribution for each reaction's flux in the network can be obtained. However, sampling a high dimensional network is time-consuming. Furthermore, the constraints imposed on the network give rise to an irregularly shaped solution space. Therefore more tailored, efficient sampling methods are needed. We propose an efficient sampling algorithm (called optGpSampler), which implements the Artificial Centering Hit-and-Run algorithm in a different manner than the sampling algorithm implemented in the COBRA Toolbox for metabolic network analysis, here called gpSampler. Results of extensive experiments on different genome-scale metabolic networks show that optGpSampler is up to 40 times faster than gpSampler. Application of existing convergence diagnostics on small network reconstructions indicate that optGpSampler converges roughly ten times faster than gpSampler towards similar sampling distributions. For networks of higher dimension (i.e. containing more than 500 reactions), we observed significantly better convergence of optGpSampler and a large deviation between the samples generated by the two algorithms. optGpSampler for Matlab and Python is available for non-commercial use at: http://cs.ru.nl/~wmegchel/optGpSampler/.
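
    The hit-and-run family of samplers that optGpSampler and gpSampler implement repeatedly picks a direction, finds the feasible segment through the current point, and jumps to a uniform point on it; Artificial Centering draws directions through a running center of previous samples. A toy sketch on a box-shaped flux space (real metabolic models add steady-state equality constraints Sv = 0, omitted here for brevity; this is not optGpSampler's code):

```python
import random

def achr_sample(lower, upper, n_samples, seed=1):
    """Artificial Centering Hit-and-Run on the box lower[i] <= v[i] <= upper[i].
    Directions run from a stored sample through the running center, so the
    walk adapts to the shape of the explored region."""
    rng = random.Random(seed)
    dim = len(lower)
    x = [(l + u) / 2 for l, u in zip(lower, upper)]
    stored = [x[:]]
    samples = []
    for _ in range(n_samples):
        center = [sum(s[i] for s in stored) / len(stored) for i in range(dim)]
        ref = rng.choice(stored)
        d = [r - c for r, c in zip(ref, center)]
        if all(abs(di) < 1e-12 for di in d):           # degenerate direction
            d = [rng.gauss(0, 1) for _ in range(dim)]
        # feasible step interval [t_min, t_max] along d within the box
        t_min, t_max = -float("inf"), float("inf")
        for xi, di, l, u in zip(x, d, lower, upper):
            if abs(di) < 1e-12:
                continue
            lo, hi = (l - xi) / di, (u - xi) / di
            if di < 0:
                lo, hi = hi, lo
            t_min, t_max = max(t_min, lo), min(t_max, hi)
        t = rng.uniform(t_min, t_max)
        x = [xi + t * di for xi, di in zip(x, d)]
        stored.append(x[:])
        samples.append(x[:])
    return samples

samples = achr_sample([0.0, 0.0], [1.0, 2.0], 500)
```

    The per-reaction flux histograms the abstract describes are then just marginals of these samples along each coordinate.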

  14. Expectations of iPad use in an internal medicine residency program: is it worth the "hype"?

    PubMed

    Luo, Nancy; Chapman, Christopher G; Patel, Bhakti K; Woodruff, James N; Arora, Vineet M

    2013-05-08

    While early reports highlight the benefits of tablet computing in hospitals, introducing any new technology can result in inflated expectations. The aim of the study is to compare anticipated expectations of Apple iPad use with perceptions after deployment among residents. 115 internal medicine residents received Apple iPads in October 2010. Residents completed matched surveys on anticipated usage 1 month prior to deployment and on perceptions 4 months after deployment. In total, 99% (114/115) of residents responded. Prior to deployment, most residents believed that the iPad would improve patient care and efficiency on the wards; however, fewer residents "strongly agreed" after deployment (34% vs 15% for patient care, P<.001; 41% vs 24% for efficiency, P=.005). Residents with higher expectations were more likely to report using the iPad for placing orders post call and during admission (71% vs 44% post call, P=.01, and 16% vs 0% admission, P=.04). Previous Apple iOS product owners were also more likely to use the iPad in key areas. Overall, 84% of residents thought the iPad was a good investment for the residency program, and over half of residents (58%) reported that patients commented on the iPad in a positive way. While the use of tablets such as the iPad by residents is generally well received, high initial expectations highlight the danger of implementing new technologies. Education on the realistic expectations of iPad benefits may be warranted.

  15. Clustering methods applied in the detection of Ki67 hot-spots in whole tumor slide images: an efficient way to characterize heterogeneous tissue-based biomarkers.

    PubMed

    Lopez, Xavier Moles; Debeir, Olivier; Maris, Calliope; Rorive, Sandrine; Roland, Isabelle; Saerens, Marco; Salmon, Isabelle; Decaestecker, Christine

    2012-09-01

    Whole-slide scanners allow the digitization of an entire histological slide at very high resolution. This new acquisition technique opens a wide range of possibilities for addressing challenging image analysis problems, including the identification of tissue-based biomarkers. In this study, we use whole-slide scanner technology for imaging the proliferating activity patterns in tumor slides based on Ki67 immunohistochemistry. Faced with large images, pathologists require tools that can help them identify tumor regions that exhibit high proliferating activity, called "hot-spots" (HSs). Pathologists need tools that can quantitatively characterize these HS patterns. To respond to this clinical need, the present study investigates various clustering methods with the aim of identifying Ki67 HSs in whole tumor slide images. This task requires a method capable of identifying an unknown number of clusters, which may be highly variable in terms of shape, size, and density. We developed a hybrid clustering method, referred to as Seedlink. Compared to manual HS selections by three pathologists, we show that Seedlink provides an efficient way of detecting Ki67 HSs and improves the agreement among pathologists when identifying HSs. Copyright © 2012 International Society for Advancement of Cytometry.

  16. Study on the near-field non-linearity (SMILE) of high power diode laser arrays

    NASA Astrophysics Data System (ADS)

    Zhang, Hongyou; Jia, Yangtao; Li, Changxuan; Zah, Chung-en; Liu, Xingsheng

    2018-02-01

    High power laser diodes have found a wide range of industrial, space, and medical applications, characterized by high conversion efficiency, small size, light weight and long lifetime. However, due to thermally induced stress, each emitter in a semiconductor laser bar or array is displaced along the p-n junction, so that the emitters do not lie on a single line; this is called Near-field Non-linearity. Near-field Non-linearity along the laser bar (also known as "SMILE") determines the outcome of optical coupling and beam shaping [1]. The SMILE of a laser array is the main obstacle to obtaining good optical coupling efficiency and beam shaping from a laser array. A larger SMILE value causes a larger divergence angle after collimation and a wider line after focusing. In this letter, we simulate two different package structures based on MCC (Micro Channel Cooler) with indium and AuSn solders, including the distribution of normal stress and the SMILE value. According to the theoretical results, the normal stress on the laser bar is largest in the middle and drops rapidly near both ends. Finally, we performed an additional experiment showing that the SMILE value of a laser bar is mainly determined by the die bonding process rather than by the operating condition.

  17. Beta-Cell Replacement: Pancreas and Islet Cell Transplantation.

    PubMed

    Niclauss, Nadja; Meier, Raphael; Bédat, Benoît; Berishvili, Ekaterine; Berney, Thierry

    2016-01-01

    Pancreas and islet transplantation are 2 types of beta-cell replacement therapies for type 1 diabetes mellitus. Since 1966, when pancreas transplantation was first performed, it has evolved to become a highly efficient procedure with high success rates, thanks to advances in surgical technique and immunosuppression. Pancreas transplantation is mostly performed as simultaneous pancreas-kidney transplantation in patients with end-stage nephropathy secondary to diabetes. In spite of its efficiency, pancreas transplantation is still a major surgical procedure burdened by high morbidity, which has prompted the development of less invasive, less hazardous ways of replacing beta-cell function. Islet transplantation was developed in the 1970s as a minimally invasive procedure with initially poor outcomes. However, since the report of the 'Edmonton protocol' in 2000, the functional results of islet transplantation have substantially and constantly improved and are about to match those of whole pancreas transplantation. Islet transplantation is primarily performed alone in nonuremic patients with severe hypoglycemia. Both pancreas transplantation and islet transplantation are able to abolish hypoglycemia and to prevent or slow down the development of secondary complications of diabetes. Pancreas transplantation and islet transplantation should be seen as two complementary, rather than competing, therapeutic approaches for beta-cell replacement that are able to optimize organ donor use and patient care. © 2016 S. Karger AG, Basel.

  18. Analyses of mode filling factor of a laser end-pumped by a LD with high-order transverse modes

    NASA Astrophysics Data System (ADS)

    Han, Juhong; Wang, You; An, Guofei; Rong, Kepeng; Yu, Hang; Wang, Shunyan; Zhang, Wei; Cai, He; Xue, Liangping; Wang, Hongyuan; Zhou, Jie

    2017-05-01

    Although the concept of the mode filling factor (also known as the "mode-matching efficiency") was thoroughly discussed decades ago, the so-called overlap coefficient still often confuses laser technicians, because several different formulae exist for different engineering purposes. Furthermore, LD-pumped configurations have become the mainstream of solid-state lasers owing to their compact size, high optical-to-optical efficiency, low heat generation, etc. As the beam quality of LDs is usually poor, it is necessary to investigate how the mode filling factor of a laser system is affected by a high-powered LD pump source. In this paper, theoretical analyses of an end-pumped laser are carried out based on the normalized overlap coefficient formalism. The study provides a convenient tool for describing the intrinsically complex interaction between the laser mode and the end-pumped source. The mode filling factor has been studied for many cases in which the pump mode and the laser mode are considered together in calculations based on analyses of the rate equations. The results can be applied to the analysis of other types of lasers with a similar optical geometry.

  19. Three-Dimensional High-Order Spectral Finite Volume Method for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel; Wang, Z. J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    Many areas require a very high-order accurate numerical solution of conservation laws for complex shapes. This paper deals with the extension to three dimensions of the Spectral Finite Volume (SV) method for unstructured grids, which was developed to solve such problems. We first summarize the limitations of traditional methods such as finite-difference and finite-volume methods for both structured and unstructured grids. We then describe the basic formulation of the spectral finite volume method. What distinguishes the SV method from conventional high-order finite-volume methods for unstructured triangular or tetrahedral grids is the data reconstruction. Instead of using a large stencil of neighboring cells to perform a high-order reconstruction, the stencil is constructed by partitioning each grid cell, called a spectral volume (SV), into 'structured' sub-cells, called control volumes (CVs). One can show that if all the SV cells are partitioned into polygonal or polyhedral CV sub-cells in a geometrically similar manner, the reconstructions for all the SVs become universal, irrespective of their shapes, sizes, orientations, or locations. It follows that the reconstruction is reduced to a weighted sum of unknowns involving just a few simple adds and multiplies, and those weights are universal and can be pre-determined once and for all. The method is thus very efficient, accurate, and yet geometrically flexible. The most critical part of the SV method is the partitioning of the SV into CVs. In this paper we present the partitioning of a tetrahedral SV into polyhedral CVs with one free parameter for polynomial reconstructions up to a degree of precision of five. (Note that the order of accuracy of the method is one order higher than the reconstruction degree of precision.) The free parameter is determined by minimizing the Lebesgue constant of the reconstruction matrix or similar criteria to obtain optimized partitions.
The details of an efficient, parallelizable code to solve three-dimensional problems for any order of accuracy are then presented. Important aspects of the data structure are discussed. Comparisons with the Discontinuous Galerkin (DG) method are made. Numerical examples for wave propagation problems are presented.
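    The reduction of the reconstruction to a universal weighted sum can be illustrated with a one-dimensional analog (a simplified sketch with invented numbers, not the paper's 3D code): each spectral volume on the reference cell [0, 1] is split into two equal control volumes, and a linear polynomial is recovered from the two CV means. Because every cell is partitioned in a geometrically similar way, the weights depend only on the local coordinate and are computed once and for all.

```python
def reconstruction_weights(xi):
    """Weights w with p(xi) = w[0]*mean_left + w[1]*mean_right for a linear
    polynomial p(x) = a + b*x on [0, 1] with CVs [0, 0.5] and [0.5, 1].

    From mean_left = a + b/4 and mean_right = a + 3b/4:
        b = 2*(mean_right - mean_left),  a = mean_left - b/4.
    """
    b = [-2.0, 2.0]                      # coefficients of b in (mL, mR)
    a = [1.0 - b[0] / 4.0, -b[1] / 4.0]  # coefficients of a in (mL, mR)
    return [a[0] + b[0] * xi, a[1] + b[1] * xi]

w_face = reconstruction_weights(1.0)     # right face of the reference cell

# Apply the same universal weights on a physical cell [2, 4] holding the
# linear field u(x) = 3 + 5x; the CV means over [2, 3] and [3, 4] are the
# midpoint values 15.5 and 20.5, and the weighted sum recovers u(4) = 23.
means = [15.5, 20.5]
u_right = sum(w * m for w, m in zip(w_face, means))
```

    The same two precomputed weights serve every cell regardless of its size or location, which is the efficiency argument made in the abstract.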

  20. UGbS-Flex, a novel bioinformatics pipeline for imputation-free SNP discovery in polyploids without a reference genome: finger millet as a case study.

    PubMed

    Qi, Peng; Gimode, Davis; Saha, Dipnarayan; Schröder, Stephan; Chakraborty, Debkanta; Wang, Xuewen; Dida, Mathews M; Malmberg, Russell L; Devos, Katrien M

    2018-06-15

    Research on orphan crops is often hindered by a lack of genomic resources. With the advent of affordable sequencing technologies, genotyping an entire genome or, for large-genome species, a representative fraction of the genome has become feasible for any crop. Nevertheless, most genotyping-by-sequencing (GBS) methods are geared towards obtaining large numbers of markers at low sequence depth, which excludes their application in heterozygous individuals. Furthermore, bioinformatics pipelines often lack the flexibility to deal with paired-end reads or to be applied in polyploid species. UGbS-Flex combines publicly available software with in-house python and perl scripts to efficiently call SNPs from genotyping-by-sequencing reads irrespective of the species' ploidy level, breeding system and availability of a reference genome. Noteworthy features of the UGbS-Flex pipeline are an ability to use paired-end reads as input, an effective approach to cluster reads across samples with enhanced outputs, and maximization of SNP calling. We demonstrate use of the pipeline for the identification of several thousand high-confidence SNPs with high representation across samples in an F3-derived F2 population in the allotetraploid finger millet. Robust high-density genetic maps were constructed using the time-tested mapping program MAPMAKER, which we upgraded to run efficiently and in a semi-automated manner in a Windows Command Prompt environment. We exploited comparative GBS with one of the diploid ancestors of finger millet to assign linkage groups to subgenomes and demonstrate the presence of chromosomal rearrangements. The paper combines GBS protocol modifications, a novel flexible GBS analysis pipeline, UGbS-Flex, recommendations to maximize SNP identification, updated genetic mapping software, and the first high-density maps of finger millet.
The modules used in the UGbS-Flex pipeline and for genetic mapping were applied to finger millet, an allotetraploid selfing species without a reference genome, as a case study. The UGbS-Flex modules, which can be run independently, are easily transferable to species with other breeding systems or ploidy levels.

  1. Uncertainty Quantification in High Throughput Screening ...

    EPA Pesticide Factsheets

    Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
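    The bootstrap idea can be sketched in a few lines. In the following toy example, a straight-line fit with invented data stands in for the Hill-style concentration-response models actually fit in ToxCast; pairs are resampled with replacement, the model is refit on each resample, and a 95% confidence interval for a fitted parameter is read off the percentiles of the refits:

```python
import random

random.seed(42)

# Invented "concentration-response" data: y = 2x + 1 plus uniform noise.
xs = list(range(20))
ys = [2 * x + 1 + random.uniform(-1, 1) for x in xs]

def fit_slope(xs, ys):
    """Ordinary least-squares slope estimate."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Pair bootstrap: resample (x, y) pairs with replacement and refit each time,
# giving an empirical distribution of the slope estimate.
slopes = sorted(
    fit_slope([xs[i] for i in idx], [ys[i] for i in idx])
    for idx in ([random.randrange(len(xs)) for _ in xs]
                for _ in range(1000))
)
ci_low, ci_high = slopes[25], slopes[974]   # 95% percentile interval
```

    In the ToxCast setting, the same resample-and-refit loop is applied to each concentration-response curve, and the spread of refit parameters (and of the resulting hit calls) quantifies the uncertainty that a single hard-cutoff fit hides.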

  2. Foraging decisions in wild versus domestic Mus musculus: What does life in the lab select for?

    PubMed

    Troxell-Smith, Sandra M; Tutka, Michal J; Albergo, Jessica M; Balu, Deebika; Brown, Joel S; Leonard, John P

    2016-01-01

    What does domestication select for in terms of foraging and anti-predator behaviors? We applied principles of patch use and foraging theory to test foraging strategies and fear responses of three strains of Mus musculus: wild-caught, control laboratory, and genetically modified strains. Foraging choices were quantified using giving-up densities (GUDs) under three foraging scenarios: (1) patches varying in microhabitat (covered versus open) and initial resource density (low versus high); (2) daily variation in auditory cues (aerial predator and control calls); (3) patches with varying seed aggregations. Overall, both domestic strains harvested significantly more food than wild mice. Each strain revealed a significant preference for foraging under cover compared to the open, and predator calls had no detectable effects on foraging. Both domestic strains biased their harvest toward high quality patches; wild mice did not. In terms of exploiting favorable and avoiding unfavorable distributions of seeds within patches, the lab strain performed best, the wild strain worst, and the mutant strain in between. Our study provides support for the hypothesis that domestic animals have more energy-efficient foraging strategies than their wild counterparts, but retain residual fear responses. Furthermore, patch-use studies can reveal the aptitudes and priorities of both domestic and wild animals. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. [Patient safety in primary care: PREFASEG project].

    PubMed

    Catalán, Arantxa; Borrell, Francesc; Pons, Angels; Amado, Ester; Baena, José Miguel; Morales, Vicente

    2014-07-01

    The Institut Català de la Salut (ICS) has designed and integrated into the electronic clinical workstation of primary care a new software tool to support the prescription of drugs, which can detect certain medication errors on-line. The software, called PREFASEG (an acronym for secure drug prescription), aims to prevent adverse events related to medication use in the field of primary health care (PHC). The study was conducted on the computerized medical record system called CPT, which is used by all 3,750 PHC physicians in our institution, who prescribe through it. PREFASEG was integrated into eCAP in July 2010, and six months later we performed a cross-sectional study to evaluate its usefulness and refine its design. The software alerts on-line along 5 dimensions: drug-drug interactions, redundant treatments, allergies, drug-disease contraindications, and drugs advised against in patients over 75 years. PREFASEG generated 1,162,765 alerts (one per 10 new treatments), with therapeutic duplication (62%) the most frequently alerted. The overall acceptance rate was 35%; pharmacological redundancies (43%) and allergies (26%) were the most accepted. A total of 10,808 professionals (doctors and nurses) accepted some of the recommendations of the program. PREFASEG is a feasible and highly efficient strategy for achieving an objective of the Quality Plan for the NHS. Copyright © 2014. Published by Elsevier Espana.

  4. PANGEA: pipeline for analysis of next generation amplicons

    PubMed Central

    Giongo, Adriana; Crabb, David B; Davis-Richardson, Austin G; Chauliac, Diane; Mobberley, Jennifer M; Gano, Kelsey A; Mukherjee, Nabanita; Casella, George; Roesch, Luiz FW; Walts, Brandon; Riva, Alberto; King, Gary; Triplett, Eric W

    2010-01-01

    High-throughput DNA sequencing can identify organisms and describe population structures in many environmental and clinical samples. Current technologies generate millions of reads in a single run, requiring extensive computational strategies to organize, analyze and interpret those sequences. A series of bioinformatics tools for high-throughput sequencing analysis, including preprocessing, clustering, database matching and classification, have been compiled into a pipeline called PANGEA. The PANGEA pipeline was written in Perl and can be run on Mac OSX, Windows or Linux. With PANGEA, sequences obtained directly from the sequencer can be processed quickly to provide the files needed for sequence identification by BLAST and for comparison of microbial communities. Two different sets of bacterial 16S rRNA sequences were used to show the efficiency of this workflow. The first set of 16S rRNA sequences is derived from various soils from Hawaii Volcanoes National Park. The second set is derived from stool samples collected from diabetes-resistant and diabetes-prone rats. The workflow described here allows the investigator to quickly assess libraries of sequences on personal computers with customized databases. PANGEA is provided for users as individual scripts for each step in the process or as a single script where all processes, except the χ2 step, are joined into one program called the ‘backbone’. PMID:20182525

  5. PANGEA: pipeline for analysis of next generation amplicons.

    PubMed

    Giongo, Adriana; Crabb, David B; Davis-Richardson, Austin G; Chauliac, Diane; Mobberley, Jennifer M; Gano, Kelsey A; Mukherjee, Nabanita; Casella, George; Roesch, Luiz F W; Walts, Brandon; Riva, Alberto; King, Gary; Triplett, Eric W

    2010-07-01

    High-throughput DNA sequencing can identify organisms and describe population structures in many environmental and clinical samples. Current technologies generate millions of reads in a single run, requiring extensive computational strategies to organize, analyze and interpret those sequences. A series of bioinformatics tools for high-throughput sequencing analysis, including pre-processing, clustering, database matching and classification, have been compiled into a pipeline called PANGEA. The PANGEA pipeline was written in Perl and can be run on Mac OSX, Windows or Linux. With PANGEA, sequences obtained directly from the sequencer can be processed quickly to provide the files needed for sequence identification by BLAST and for comparison of microbial communities. Two different sets of bacterial 16S rRNA sequences were used to show the efficiency of this workflow. The first set of 16S rRNA sequences is derived from various soils from Hawaii Volcanoes National Park. The second set is derived from stool samples collected from diabetes-resistant and diabetes-prone rats. The workflow described here allows the investigator to quickly assess libraries of sequences on personal computers with customized databases. PANGEA is provided for users as individual scripts for each step in the process or as a single script where all processes, except the chi(2) step, are joined into one program called the 'backbone'.

  6. CdTe Based Hard X-ray Imager Technology For Space Borne Missions

    NASA Astrophysics Data System (ADS)

    Limousin, Olivier; Delagnes, E.; Laurent, P.; Lugiez, F.; Gevin, O.; Meuris, A.

    2009-01-01

    CEA Saclay has recently developed an innovative technology for CdTe based Pixelated Hard X-Ray Imagers with high spectral performance and high timing resolution for efficient background rejection when the camera is coupled to an active veto shield. This development has been done in an R&D program supported by CNES (French National Space Agency) and has been optimized towards the Simbol-X mission requirements. In the latter telescope, the hard X-ray imager is 64 cm² and is equipped with 625 µm pitch pixels (16384 independent channels) operating at -40°C in the range of 4 to 80 keV. The camera we demonstrate in this paper consists of a mosaic of 64 independent cameras, divided into 8 independent sectors. Each elementary detection unit, called Caliste, is the hybridization of a 256-pixel Cadmium Telluride (CdTe) detector with full custom front-end electronics into a unique 1 cm² component, juxtaposable on its four sides. Recently, promising results have been obtained from the first micro-camera prototypes, called Caliste 64; they are presented to illustrate the capabilities of the device as well as the expected performance of an instrument based on it. The modular design of Caliste makes it possible to consider extended developments toward an IXO-type mission, according to its specific scientific requirements.

  7. Slurry combustion. Volume 2: Appendices, Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Essenhigh, R.

    1993-06-01

    Volume II contains the following appendices: coal analyses and slurryability characteristics; listings of programs used to call and file experimental data, and to reduce data in enthalpy and efficiency calculations; and tabulated data sets.

  8. RCQ-GA: RDF Chain Query Optimization Using Genetic Algorithms

    NASA Astrophysics Data System (ADS)

    Hogenboom, Alexander; Milea, Viorel; Frasincar, Flavius; Kaymak, Uzay

    The application of Semantic Web technologies in an Electronic Commerce environment implies a need for good support tools. Fast query engines are needed for efficient querying of large amounts of data, usually represented using RDF. We focus on optimizing a special class of SPARQL queries, the so-called RDF chain queries. For this purpose, we devise a genetic algorithm called RCQ-GA that determines the order in which joins need to be performed for an efficient evaluation of RDF chain queries. The approach is benchmarked against a two-phase optimization algorithm previously proposed in the literature. The more complex a query is, the more RCQ-GA outperforms the benchmark in solution quality, execution time, and consistency of solution quality. When the algorithms are constrained by a time limit, the overall performance of RCQ-GA compared to the benchmark further improves.
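    The abstract does not specify RCQ-GA's operators, so the following is only a generic, hypothetical sketch of a genetic algorithm for join ordering: permutations of relations evolve under order crossover and swap mutation against an invented cost model for a chain query, where only adjacent relations share a join predicate and joining non-adjacent ones incurs a cross product. All sizes and selectivities are made up for illustration.

```python
import random

random.seed(7)

# Toy chain query R0 ⋈ R1 ⋈ ... ⋈ R4 (invented sizes and selectivity).
SIZES = [100, 50, 200, 10, 80]
SEL = 0.01

def cost(order):
    """Sum of intermediate result sizes when joining relations in `order`."""
    joined = {order[0]}
    inter = SIZES[order[0]]
    total = 0.0
    for r in order[1:]:
        # Adjacent relation in the chain: selective join; otherwise cross product.
        factor = SEL if any(abs(r - j) == 1 for j in joined) else 1.0
        inter *= SIZES[r] * factor
        joined.add(r)
        total += inter
    return total

def order_crossover(a, b):
    """OX: copy a slice from parent a, fill the rest in parent b's order."""
    i, j = sorted(random.sample(range(len(a)), 2))
    hole = a[i:j]
    rest = [x for x in b if x not in hole]
    return rest[:i] + hole + rest[i:]

def evolve(pop_size=60, generations=100):
    n = len(SIZES)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)                      # elitist: best orders survive
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors) - 8:
            child = order_crossover(*random.sample(survivors, 2))
            if random.random() < 0.3:           # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        # A few random immigrants keep the search exploring.
        pop = survivors + children + [random.sample(range(n), n)
                                      for _ in range(8)]
    return min(pop, key=cost)

best = evolve()
```

    On five relations the space is small enough to check exhaustively; the point of the GA is that the same loop still works when the permutation space is far too large to enumerate.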

  9. An efficient, versatile and scalable pattern growth approach to mine frequent patterns in unaligned protein sequences.

    PubMed

    Ye, Kai; Kosters, Walter A; Ijzerman, Adriaan P

    2007-03-15

    Pattern discovery in protein sequences is often based on multiple sequence alignments (MSA). The procedure can be computationally intensive and often requires manual adjustment, which may be particularly difficult for a set of deviating sequences. In contrast, two algorithms, PRATT2 (http://www.ebi.ac.uk/pratt/) and TEIRESIAS (http://cbcsrv.watson.ibm.com/), directly identify frequent patterns from unaligned biological sequences without attempting to align them. Here we propose a new algorithm that is more efficient and offers more functionality than both PRATT2 and TEIRESIAS, and discuss some of its applications to G protein-coupled receptors, a protein family of important drug targets. In this study, we designed and implemented six algorithms to mine three different pattern types from either one or two datasets using a pattern growth approach. We compared our approach to PRATT2 and TEIRESIAS in efficiency, completeness and the diversity of pattern types. Compared to PRATT2, our approach is faster, capable of processing large datasets and able to identify the so-called type III patterns. Our approach is comparable to TEIRESIAS in the discovery of the so-called type I patterns but has additional functionality such as mining the so-called type II and type III patterns and finding discriminating patterns between two datasets. The source code for the pattern growth algorithms and their pseudo-code are available at http://www.liacs.nl/home/kosters/pg/.
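    The pattern growth idea — extend a pattern one symbol at a time, pruning as soon as its support drops below threshold, which is safe because support is anti-monotone — can be sketched for the simplest case of contiguous patterns (the paper's type I–III patterns are richer; this is an illustrative reduction, not the authors' algorithm):

```python
def frequent_substrings(seqs, min_support):
    """Pattern-growth mining of substrings present in >= min_support sequences.

    Depth-first prefix extension: a pattern is only grown if it is itself
    frequent, since any extension of an infrequent pattern is infrequent too.
    """
    alphabet = sorted({c for s in seqs for c in s})
    results = {}

    def support(p):
        return sum(1 for s in seqs if p in s)

    def grow(prefix):
        for c in alphabet:
            cand = prefix + c
            sup = support(cand)
            if sup >= min_support:       # frequent: record and keep growing
                results[cand] = sup
                grow(cand)               # anti-monotone pruning happens here

    grow("")
    return results

# Toy "protein" sequences sharing the motif AB(C).
patterns = frequent_substrings(["ABCD", "ABXD", "ZABC"], min_support=2)
```

    On this toy input the miner returns the motif ABC (support 2) together with all its frequent sub-patterns, while branches like BX or CD are pruned immediately because their support falls below threshold.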

  10. EB66 cell line, a duck embryonic stem cell-derived substrate for the industrial production of therapeutic monoclonal antibodies with enhanced ADCC activity.

    PubMed

    Olivier, Stéphane; Jacoby, Marine; Brillon, Cédric; Bouletreau, Sylvana; Mollet, Thomas; Nerriere, Olivier; Angel, Audrey; Danet, Sévérine; Souttou, Boussad; Guehenneux, Fabienne; Gauthier, Laurent; Berthomé, Mathilde; Vié, Henri; Beltraminelli, Nicola; Mehtali, Majid

    2010-01-01

    Monoclonal antibodies (mAbs) represent the fastest growing class of therapeutic proteins. The increasing demand for mAb manufacturing and the associated high production costs call for the pharmaceutical industry to improve its current production processes or develop more efficient alternative production platforms. The experimental control of IgG fucosylation to enhance antibody-dependent cell cytotoxicity (ADCC) activity constitutes one of the promising strategies to improve the efficacy of monoclonal antibodies and to potentially reduce the therapeutic cost. We report here that the EB66 cell line, derived from duck embryonic stem cells, can be efficiently genetically engineered to produce mAbs at yields beyond 1 g/L, as suspension cells grown in serum-free culture media. EB66 cells display additional attractive growth characteristics such as a very short population doubling time of 12 to 14 hours, the capacity to reach very high cell densities (>30 million cells/mL) and a unique metabolic profile resulting in low ammonium and lactate accumulation and low glutamine consumption, even at high cell densities. Furthermore, mAbs produced in EB66 cells display a naturally reduced fucose content resulting in strongly enhanced ADCC activity. EB66 cells therefore have the potential to evolve into a novel cellular platform for the production of high-potency therapeutic antibodies.

  11. Energy efficiency of substance and energy recovery of selected waste fractions.

    PubMed

    Fricke, Klaus; Bahr, Tobias; Bidlingmaier, Werner; Springer, Christian

    2011-04-01

    In order to reduce the ecological impact of resource exploitation, the EU calls for sustainable options to increase the efficiency and productivity of the utilization of natural resources. This target can only be achieved by considering resource recovery from waste comprehensively. However, waste management measures have to be investigated critically and all aspects of substance-related recycling and energy recovery have to be carefully balanced. This article compares recovery methods for selected waste fractions with regard to their energy efficiency. Whether material recycling or energy recovery is the most energy efficient solution, is a question of particular relevance with regard to the following waste fractions: paper and cardboard, plastics and biowaste and also indirectly metals. For the described material categories material recycling has advantages compared to energy recovery. In accordance with the improved energy efficiency of substance opposed to energy recovery, substance-related recycling causes lower emissions of green house gases. For the fractions paper and cardboard, plastics, biowaste and metals it becomes apparent, that intensification of the separate collection systems in combination with a more intensive use of sorting technologies can increase the extent of material recycling. Collection and sorting systems must be coordinated. The objective of the overall system must be to achieve an optimum of the highest possible recovery rates in combination with a high quality of recyclables. The energy efficiency of substance related recycling of biowaste can be increased by intensifying the use of anaerobic technologies. In order to increase the energy efficiency of the overall system, the energy efficiencies of energy recovery plants must be increased so that the waste unsuitable for substance recycling is recycled or treated with the highest possible energy yield. Copyright © 2010 Elsevier Ltd. All rights reserved.

  12. Energy efficiency of substance and energy recovery of selected waste fractions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fricke, Klaus, E-mail: klaus.fricke@tu-bs.de; Bahr, Tobias, E-mail: t.bahr@tu-bs.de; Bidlingmaier, Werner, E-mail: werner.bidlingmaier@uni-weimar.de

    In order to reduce the ecological impact of resource exploitation, the EU calls for sustainable options to increase the efficiency and productivity of the utilization of natural resources. This target can only be achieved by considering resource recovery from waste comprehensively. However, waste management measures have to be investigated critically and all aspects of substance-related recycling and energy recovery have to be carefully balanced. This article compares recovery methods for selected waste fractions with regard to their energy efficiency. Whether material recycling or energy recovery is the most energy efficient solution, is a question of particular relevance with regard to the following waste fractions: paper and cardboard, plastics and biowaste and also indirectly metals. For the described material categories material recycling has advantages compared to energy recovery. In accordance with the improved energy efficiency of substance opposed to energy recovery, substance-related recycling causes lower emissions of green house gases. For the fractions paper and cardboard, plastics, biowaste and metals it becomes apparent, that intensification of the separate collection systems in combination with a more intensive use of sorting technologies can increase the extent of material recycling. Collection and sorting systems must be coordinated. The objective of the overall system must be to achieve an optimum of the highest possible recovery rates in combination with a high quality of recyclables. The energy efficiency of substance related recycling of biowaste can be increased by intensifying the use of anaerobic technologies. In order to increase the energy efficiency of the overall system, the energy efficiencies of energy recovery plants must be increased so that the waste unsuitable for substance recycling is recycled or treated with the highest possible energy yield.

  13. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters that need to be estimated using physical observations. Furthermore, the conventional calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
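    For reference, the L2 calibration estimator selects the parameter value that minimizes the L2 distance between a nonparametric estimate of the true physical response and the computer model output (notation assumed here for illustration, not quoted from the paper):

```latex
\hat{\theta}_{L_2} = \operatorname*{arg\,min}_{\theta \in \Theta}
  \bigl\lVert \hat{\zeta}(\cdot) - f_s(\cdot,\theta) \bigr\rVert_{L_2(\Omega)}
```

    where \hat{\zeta} is a nonparametric (e.g. kernel or Gaussian-process) estimate of the true physical process fitted to the observations, f_s(\cdot,\theta) is the computer model response at parameter \theta, and \Omega is the input domain. Because the distance is taken to the estimated true process rather than to the noisy data directly, the estimator remains well defined even when the computer model is imperfect.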

  14. Optimization of the Controlled Evaluation of Closed Relational Queries

    NASA Astrophysics Data System (ADS)

    Biskup, Joachim; Lochner, Jan-Hendrik; Sonntag, Sebastian

    For relational databases, controlled query evaluation is an effective inference control mechanism preserving confidentiality regarding a previously declared confidentiality policy. Implementations of controlled query evaluation usually lack efficiency due to costly theorem prover calls. Suitably constrained controlled query evaluation can be implemented efficiently, but is not flexible enough from the perspective of database users and security administrators. In this paper, we propose an optimized framework for controlled query evaluation in relational databases that is efficiently implementable on the one hand and relaxes the constraints of previous approaches on the other.

  15. [The effects of instruction about strategies for efficient calculation].

    PubMed

    Suzuki, Masayuki; Ichikawa, Shin'ichi

    2016-06-01

    Calculation problems such as "12x7÷3" can be solved rapidly and easily by using certain techniques; we call these problems "efficient calculation problems." However, it has been pointed out that many students do not always solve them efficiently. In the present study, we examined the effects of an intervention on 35 seventh grade students (23 males, 12 females). The students were instructed to use an overview strategy that stated, "Think carefully about the whole expression", and were then taught three sub-strategies. The results showed that students solved similar problems efficiently after the intervention and the effects were preserved for five months.
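    The article's own example makes the strategy concrete: surveying the whole expression first reveals that 12 is divisible by 3, so the division can be done before the multiplication and the intermediate values stay small.

```python
# 12 x 7 ÷ 3, evaluated left to right, forms the larger product 84 first;
# reordering to do the division first keeps every intermediate result small.
left_to_right = 12 * 7 // 3    # 84 // 3
reordered = (12 // 3) * 7      # 4 * 7
```

    Both routes give 28, but the second needs only single-digit-by-one-digit steps, which is what the taught sub-strategies aim for.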

  16. Ends-in Vs. Ends-Out Recombination in Yeast

    PubMed Central

    Hastings, P. J.; McGill, C.; Shafer, B.; Strathern, J. N.

    1993-01-01

    Integration of linearized plasmids into yeast chromosomes has been used as a model system for the study of recombination initiated by double-strand breaks. The linearized plasmid DNA recombines efficiently into sequences homologous to the ends of the DNA. This efficient recombination occurs both for the configuration in which the break is in a contiguous region of homology (herein called the ends-in configuration) and for "omega" insertions in which plasmid sequences interrupt a linear region of homology (herein called the ends-out configuration). The requirements for integration of these two configurations are expected to be different. We compared these two processes in a yeast strain containing an ends-in target and an ends-out target for the same cut plasmid. Recovery of ends-in events exceeds ends-out events by two- to threefold. Possible causes for the origin of this small bias are discussed. The lack of an extreme difference in frequency implies that cooperativity between the two ends does not contribute to the efficiency with which cut circular plasmids are integrated. This may also be true for the repair of chromosomal double-strand breaks. PMID:8307337

  17. Call intercalation in dyadic interactions in natural choruses of Johnstone's whistling frog Eleutherodactylus johnstonei (Anura: Eleutherodactylidae).

    PubMed

    Tárano, Zaida; Carballo, Luisana

    2016-05-01

    Communal signaling increases the likelihood of acoustic interference and impairs mate choice; consequently, mechanisms of interference avoidance are expected. Adjustment of the timing of calls between signalers, specifically call alternation, is probably the most efficient strategy. For this reason, in the present study we analyzed call timing in dyads of males of E. johnstonei in six natural assemblages. We addressed whether males entrain their calls with those of other males in the assemblage and whether they show selective attention in relation to the perceived amplitude of the other males' calls, inter-male distance, or intrinsic call features (call duration, period, or dominant frequency). We expected males to selectively attend to closer or louder males and/or to those of higher or similar attractiveness to females than themselves, because those would be their strongest competitors. We found that most males intercalated their calls with those of at least one male. In assemblages of three individuals, males seemed to attend to a fixed number of males regardless of their characteristics. In assemblages of more than three individuals, the perceived amplitude of the neighboring male's call was higher, and the call periods of the males were more similar, in alternating dyads than in non-alternating ones. At the proximate level, selective attention based on perceived amplitude may relate to behavioral hearing thresholds. Selective attention based on the similarity of call periods may relate to the properties of the call oscillators controlling calling rhythms. At the ultimate level, selective attention may be related to the likelihood of acoustic competition for females. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. An Energy-Efficient and High-Quality Video Transmission Architecture in Wireless Video-Based Sensor Networks.

    PubMed

    Aghdasi, Hadi S; Abbaspour, Maghsoud; Moghadam, Mohsen Ebrahimi; Samei, Yasaman

    2008-08-04

    Technological progress in the fields of Micro Electro-Mechanical Systems (MEMS) and wireless communications, and also the availability of CMOS cameras, microphones, and small-scale array sensors that may ubiquitously capture multimedia content from the field, have fostered the development of low-cost, limited-resource Wireless Video-based Sensor Networks (WVSN). Given the constraints of video-based sensor nodes and wireless sensor networks, supporting a video stream is not easy with present sensor network protocols. In this paper, a thorough architecture is presented for video transmission over WVSN, called the Energy-efficient and high-Quality Video transmission Architecture (EQV-Architecture). This architecture spans three layers of the communication protocol stack and accounts for the constraints of wireless video sensor nodes, such as limited processing and energy resources, while video quality is preserved at the receiver side. A compression protocol, transport protocol, and routing protocol are proposed at the application, transport, and network layers, respectively; a dropping scheme is also presented at the network layer. Simulation results over various environments with dissimilar conditions revealed the effectiveness of the architecture in improving the lifetime of the network as well as preserving the video quality.

  19. Proteomic Investigations into Hemodialysis Therapy

    PubMed Central

    Bonomini, Mario; Sirolli, Vittorio; Pieroni, Luisa; Felaco, Paolo; Amoroso, Luigi; Urbani, Andrea

    2015-01-01

    The retention of a number of solutes that may cause adverse biochemical/biological effects, called uremic toxins, characterizes uremic syndrome. Uremia therapy is based on renal replacement therapy, hemodialysis being the most commonly used modality. The membrane contained in the hemodialyzer represents the ultimate determinant of the success and quality of hemodialysis therapy. A membrane’s performance can be evaluated in terms of its removal efficiency for unwanted solutes and excess fluid, and the minimization of negative interactions between the membrane material and blood components, which define the membrane’s bio(in)compatibility. Given the high concentration of plasma proteins and the complexity of structural functional relationships of this class of molecules, the performance of a membrane is highly influenced by its interaction with the plasma protein repertoire. Proteomic investigations have been increasingly applied to describe the protein uremic milieu, to compare the blood purification efficiency of different dialyzer membranes or different extracorporeal techniques, and to evaluate the adsorption of plasma proteins onto hemodialysis membranes. In this article, we aim to highlight investigations in the hemodialysis setting making use of recent developments in proteomic technologies. Examples are presented of why proteomics may be helpful to nephrology and may possibly affect future directions in renal research. PMID:26690416

  20. Adapting hierarchical bidirectional inter prediction on a GPU-based platform for 2D and 3D H.264 video coding

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van

    2013-12-01

    The H.264/AVC video coding standard introduces some improved tools in order to increase compression efficiency. Moreover, the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one which contributes to high coding efficiency. Furthermore, it defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques have been proposed in the literature over the last few years which are aimed at accelerating the inter prediction process, but there are no works focusing on bidirectional prediction or hierarchical prediction. In this article, with the emergence of many-core processors or accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.
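    As background for why inter prediction dominates encoder complexity, the basic block-matching cost can be sketched generically (this is textbook H.264 material, not code from the article). Each candidate motion vector requires a sum of absolute differences (SAD) between a block in the current frame and a shifted block in a reference frame; variable block sizes (16×16 down to 4×4 in H.264) multiply this work, which is why the search maps well onto GPUs:

    ```python
    def sad(current, reference):
        """Sum of absolute differences between two equally sized pixel blocks,
        given as lists of rows of integer sample values."""
        return sum(
            abs(c - r)
            for row_c, row_r in zip(current, reference)
            for c, r in zip(row_c, row_r)
        )

    # Tiny 2x2 example blocks (hypothetical sample values).
    block_a = [[10, 12], [14, 16]]
    block_b = [[11, 12], [13, 18]]
    print(sad(block_a, block_b))  # 1 + 0 + 1 + 2 = 4
    ```

    Since each candidate position is evaluated independently, the per-candidate SAD computations are embarrassingly parallel, which is the property GPU implementations of motion estimation exploit.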
