Sample records for ultra-large scale integrated

  1. Atomic-order thermal nitridation of group IV semiconductors for ultra-large-scale integration

    NASA Astrophysics Data System (ADS)

    Murota, Junichi; Le Thanh, Vinh

    2015-03-01

    One of the main requirements for ultra-large-scale integration (ULSI) is atomic-order control of process technology. Our concept of atomically controlled processing for group IV semiconductors is based on atomic-order surface reaction control in Si-based CVD epitaxial growth. In the atomic-order surface nitridation by NH3 of a few-nm-thick Ge layer on about 4-nm-thick Si0.5Ge0.5/Si(100), it is found that N atoms diffuse through the nm-order-thick Ge layer into the Si0.5Ge0.5/Si(100) substrate and form Si nitride, even at 500 °C. Under subsequent H2 heat treatment, although the amount of N in the Ge layer is reduced drastically, the reduction of the Si nitride is slight. This suggests that N diffusion in the Ge layer is suppressed by the formation of Si nitride and that a Ge/atomic-order N layer/Si1-xGex/Si(100) heterostructure is formed. These results demonstrate the capability of CVD technology for atomically controlled nitridation of group IV semiconductors for ultra-large-scale integration. Invited talk at the 7th International Workshop on Advanced Materials Science and Nanotechnology IWAMSN2014, 2-6 November 2014, Ha Long, Vietnam.

  2. Impurity engineering of Czochralski silicon used for ultra large-scaled-integrated circuits

    NASA Astrophysics Data System (ADS)

    Yang, Deren; Chen, Jiahe; Ma, Xiangyang; Que, Duanlin

    2009-01-01

    Impurities in Czochralski silicon (Cz-Si) used for ultra-large-scale-integrated (ULSI) circuits have generally been believed to deteriorate device performance. In this paper, we review recent progress from our investigation of internal gettering in Cz-Si wafers doped with nitrogen, germanium and/or a high content of carbon. It is suggested that these impurities enhance oxygen precipitation, creating both denser bulk microdefects and a denuded zone of the desired width, which benefits the internal gettering of metal contamination. Based on the experimental facts, a potential mechanism for the effect of impurity doping on the internal gettering structure is interpreted, and a new concept of 'impurity engineering' for Cz-Si used for ULSI is proposed.

  3. Research on precision grinding technology of large scale and ultra thin optics

    NASA Astrophysics Data System (ADS)

    Zhou, Lian; Wei, Qiancai; Li, Jie; Chen, Xianhua; Zhang, Qinghua

    2018-03-01

    The flatness and parallelism errors of large-scale, ultra-thin optics have an important influence on subsequent polishing efficiency and accuracy. In order to realize high-precision grinding of these elements, a low-deformation vacuum chuck was designed first, which clamps the optic with high supporting rigidity over the full aperture. The optic was then plane-ground under vacuum adsorption. After machining, the vacuum system was turned off and the form error of the optic was measured on-machine with a displacement sensor, after elastic restitution. The flatness is then converged to high accuracy by compensation machining, whose tool trajectories are generated from the measurement result. To obtain high parallelism, the optic was turned over and compensation-ground using the measured form error of the vacuum chuck. Finally, a grinding experiment on a large-scale, ultra-thin fused silica optic with an aperture of 430 mm × 430 mm × 10 mm was performed. The best P-V flatness of the optic was below 3 μm, and the parallelism was below 3″. This machining technique has been applied in batch grinding of large-scale, ultra-thin optics.
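    The compensation step described above can be pictured as a simple map subtraction: the removal target is the measured optic form minus the measured chuck form, offset so that removal is everywhere non-negative. A schematic sketch, not the authors' algorithm; the height values and units are made up for illustration:

```python
import numpy as np

def compensation_target(optic_map, chuck_map):
    """Removal map for compensation grinding: subtract the measured chuck
    form error from the on-machine optic measurement, then offset so the
    minimum removal is zero (the tool can only remove material)."""
    residual = optic_map - chuck_map
    return residual - residual.min()

def pv(surface):
    """Peak-to-valley (P-V) flatness of a height map."""
    return float(surface.max() - surface.min())

# Assumed toy height maps in microns (2x2 grid for illustration)
optic = np.array([[0.0, 2.0], [1.0, 3.0]])
chuck = np.array([[0.0, 1.0], [0.0, 1.0]])
removal = compensation_target(optic, chuck)  # grinding this map flattens the optic
```

Subtracting the chuck map is what lets the parallelism converge even though the optic is measured while deformed onto an imperfect chuck.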

  4. Nonlinear modulation of the HI power spectrum on ultra-large scales. I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Umeh, Obinna; Maartens, Roy; Santos, Mario, E-mail: umeobinna@gmail.com, E-mail: roy.maartens@gmail.com, E-mail: mgrsantos@uwc.ac.za

    2016-03-01

    Intensity mapping of the neutral hydrogen brightness temperature promises to provide a three-dimensional view of the universe on very large scales. Nonlinear effects are typically thought to alter only the small-scale power, but we show how they may bias the extraction of cosmological information contained in the power spectrum on ultra-large scales. For linear perturbations to remain valid on large scales, we need to renormalize perturbations at higher order. In the case of intensity mapping, the second-order contribution to clustering from weak lensing dominates the nonlinear contribution at high redshift. Renormalization modifies the mean brightness temperature and therefore the evolution bias. It also introduces a term that mimics white noise. These effects may influence forecasting analysis on ultra-large scales.

  5. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  6. Probing Inflation Using Galaxy Clustering On Ultra-Large Scales

    NASA Astrophysics Data System (ADS)

    Dalal, Roohi; de Putter, Roland; Dore, Olivier

    2018-01-01

    A detailed understanding of curvature perturbations in the universe is necessary to constrain theories of inflation. In particular, measurements of the local non-Gaussianity parameter, f_NL^loc, enable us to distinguish between two broad classes of inflationary theories, single-field and multi-field inflation. While most single-field theories predict f_NL^loc ≈ -5/12 (n_s - 1), in multi-field theories f_NL^loc is not constrained to this value and is allowed to be observably large. Achieving σ(f_NL^loc) = 1 would give us discovery potential for detecting multi-field inflation, while finding f_NL^loc = 0 would rule out a good fraction of interesting multi-field models. We study the use of galaxy clustering on ultra-large scales to achieve this level of constraint on f_NL^loc. Upcoming surveys such as Euclid and LSST will give us galaxy catalogs from which we can construct the galaxy power spectrum and hence infer a value of f_NL^loc. We consider two possible methods of determining the galaxy power spectrum from a catalog of galaxy positions: the traditional Feldman-Kaiser-Peacock (FKP) power spectrum estimator, and an optimal quadratic estimator (OQE). We implemented and tested each method using mock galaxy catalogs, and compared the resulting constraints on f_NL^loc. We find that the FKP estimator can measure f_NL^loc in an unbiased way, but there remains room for improvement in its precision. We also find that the OQE is not computationally fast, but remains a promising option due to its ability to isolate the power spectrum at large scales. We plan to extend this research to study alternative methods, such as pixel-based likelihood functions. We also plan to study the impact of general relativistic effects at these scales on our ability to measure f_NL^loc.
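    Stripped of the survey-specific weighting (the FKP estimator additionally weights galaxies by w = 1/(1 + n̄P) and subtracts shot noise), the core of any such estimator is binning |δ_k|² from a Fourier transform of the overdensity field. A toy 1-D periodic sketch with an arbitrary normalization, not the abstract's pipeline:

```python
import numpy as np

def power_spectrum_1d(delta, boxsize):
    """Toy periodic 1-D power spectrum from an overdensity field:
    P(k) = |delta_k|^2 * L / N^2 (plain FFT, no FKP weights, no shot noise)."""
    n = delta.size
    dk = np.fft.rfft(delta)
    k = 2 * np.pi * np.fft.rfftfreq(n, d=boxsize / n)
    pk = (np.abs(dk) ** 2) * boxsize / n ** 2
    return k, pk

# A single plane wave at integer mode m = 8 puts all power in the m = 8 bin
x = np.arange(256) / 256.0
k, pk = power_spectrum_1d(np.cos(2 * np.pi * 8 * x), boxsize=1.0)
```

Real estimators differ mainly in how they weight and deconvolve the survey window, which is exactly where the FKP and OQE approaches compared in the abstract diverge.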

  7. Ultra-large nonlinear parameter in graphene-silicon waveguide structures.

    PubMed

    Donnelly, Christine; Tan, Dawn T H

    2014-09-22

    Mono-layer graphene integrated with optical waveguides is studied for the purpose of maximizing E-field interaction with the graphene layer, for the generation of ultra-large nonlinear parameters. It is shown that the common approach of minimizing the waveguide effective modal area does not accurately predict the configuration with the maximum nonlinear parameter. Both photonic and plasmonic waveguide configurations and graphene integration techniques realizable with today's fabrication tools are studied. Importantly, nonlinear parameters exceeding 10^4 W^-1/m, two orders of magnitude larger than in silicon-on-insulator waveguides without graphene, are obtained for the quasi-TE mode in silicon waveguides incorporating mono-layer graphene in the evanescent part of the optical field. Dielectric-loaded surface plasmon polariton waveguides incorporating mono-layer graphene are observed to generate nonlinear parameters as large as 10^5 W^-1/m, three orders of magnitude larger than in silicon-on-insulator waveguides without graphene. The ultra-large nonlinear parameters make such waveguides promising platforms for nonlinear integrated optics at ultra-low powers, and for previously unobserved nonlinear optical effects to be studied in a waveguide platform.
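    The nonlinear parameter in question is commonly defined as γ = 2π n2 / (λ A_eff), which is why shrinking the effective modal area A_eff is the usual heuristic the abstract critiques. A sketch with assumed textbook-order values for a bare silicon waveguide at 1550 nm (the numbers are illustrative, not from the paper):

```python
import math

def nonlinear_parameter(n2_m2_per_W, wavelength_m, a_eff_m2):
    """Waveguide nonlinear parameter gamma = 2*pi*n2 / (lambda * A_eff),
    returned in W^-1 m^-1."""
    return 2 * math.pi * n2_m2_per_W / (wavelength_m * a_eff_m2)

# Assumed values: n2 ~ 4.5e-18 m^2/W for silicon, A_eff ~ 0.1 um^2
gamma_si = nonlinear_parameter(4.5e-18, 1.55e-6, 0.1e-12)
```

This lands around a few hundred W^-1/m, consistent with the abstract's claim that the graphene-loaded structures at 10^4-10^5 W^-1/m are two to three orders of magnitude larger.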

  8. Mems: Platform for Large-Scale Integrated Vacuum Electronic Circuits

    DTIC Science & Technology

    2017-03-20

    Final Report: MEMS Platform for Large-Scale Integrated Vacuum Electronic Circuits (LIVEC), 1 Jul 2014 - 30 Jun 2015. The objective of the LIVEC advanced study project was to develop a platform for large-scale integrated vacuum electronic circuits. Contract No: W911NF-14-C-0093. COR: Dr. James Harvey, U.S. ARO, RTP, NC 27709-2211. Distribution unlimited.

  9. Studying Teacher Selection of Resources in an Ultra-Large Scale Interactive System: Does Metadata Guide the Way?

    ERIC Educational Resources Information Center

    Abramovich, Samuel; Schunn, Christian

    2012-01-01

    Ultra-large-scale interactive systems on the Internet have begun to change how teachers prepare for instruction, particularly in regards to resource selection. Consequently, it is important to look at how teachers are currently selecting resources beyond content or keyword search. We conducted a two-part observational study of an existing popular…

  10. Topological Properties of Some Integrated Circuits for Very Large Scale Integration Chip Designs

    NASA Astrophysics Data System (ADS)

    Swanson, S.; Lanzerotti, M.; Vernizzi, G.; Kujawski, J.; Weatherwax, A.

    2015-03-01

    This talk presents topological properties of integrated circuits for Very Large Scale Integration chip designs. These circuits can be implemented in very large scale integrated circuits, such as those in high performance microprocessors. Prior work considered basic combinational logic functions and produced a mathematical framework based on algebraic topology for integrated circuits composed of logic gates. Prior work also produced an historically-equivalent interpretation of Mr. E. F. Rent's work for today's complex circuitry in modern high performance microprocessors, where a heuristic linear relationship was observed between the number of connections and number of logic gates. This talk will examine topological properties and connectivity of more complex functionally-equivalent integrated circuits. The views expressed in this article are those of the author and do not reflect the official policy or position of the United States Air Force, Department of Defense or the U.S. Government.
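    The heuristic relationship credited to Rent here is usually written as Rent's rule, T = t·G^p (external terminals versus gate count), which is linear on log-log axes. A minimal sketch of recovering the Rent exponent from partition data; the synthetic numbers are illustrative, not from the talk:

```python
import numpy as np

def fit_rent(gates, terminals):
    """Fit Rent's rule T = t * G**p by least squares in log-log space.
    Returns (t, p)."""
    log_g, log_t = np.log(gates), np.log(terminals)
    p, log_t0 = np.polyfit(log_g, log_t, 1)  # slope = p, intercept = log t
    return np.exp(log_t0), p

# Synthetic partitions obeying T = 3 * G**0.6 exactly (illustrative only)
G = np.array([4, 16, 64, 256, 1024], dtype=float)
T = 3.0 * G ** 0.6
t, p = fit_rent(G, T)
```

Typical microprocessor logic shows p well below 1, which is what makes the "heuristic linear relationship" on log-log axes useful for wiring estimates.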

  11. Progress on Ultra-Dense Quantum Communication Using Integrated Photonic Architecture

    DTIC Science & Technology

    2012-05-09

    Progress on Ultra-Dense Quantum Communication Using Integrated Photonic Architecture. The goals of the project include the development of a large-alphabet quantum key distribution protocol that uses measurements in mutually unbiased bases. Keywords: quantum information, integrated optics, photonic integrated chip. Investigators: Dirk Englund, Karl Berggren, Jeffrey Shapiro, Chee Wei Wong, Franco Wong, and Gregory…

  12. Organic field effect transistor with ultra high amplification

    NASA Astrophysics Data System (ADS)

    Torricelli, Fabrizio

    2016-09-01

    High-gain transistors are essential for large-scale circuit integration, high-sensitivity sensors and signal amplification in sensing systems. Unfortunately, organic field-effect transistors show limited gain, usually on the order of tens, because of large contact resistance and channel-length modulation. Here we show organic transistors fabricated on plastic foils that enable unipolar amplifiers with ultra-high gain. The proposed approach is general and opens up new opportunities for ultra-large signal amplification in organic circuits and sensors.
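    For context on why channel-length modulation caps the gain: in the standard FET small-signal model the intrinsic gain is A0 = gm·r_out, and channel-length modulation sets r_out = 1/(λ·I_D). A sketch with assumed, illustrative values (not taken from the paper):

```python
def intrinsic_gain(gm_S, lambda_per_V, i_d_A):
    """Intrinsic gain A0 = gm * r_out, with r_out = 1/(lambda * I_D) from
    channel-length modulation in the standard FET small-signal model."""
    r_out = 1.0 / (lambda_per_V * i_d_A)
    return gm_S * r_out

# Assumed numbers: gm = 1 uS, lambda = 0.1 V^-1, Id = 1 uA
a0 = intrinsic_gain(1e-6, 0.1, 1e-6)
```

With these placeholder values A0 comes out at ten, i.e. the "order of tens" regime the abstract describes; reducing λ (or the contact resistance that degrades gm) is what pushes the gain up.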

  13. Studies of Sub-Synchronous Oscillations in Large-Scale Wind Farm Integrated System

    NASA Astrophysics Data System (ADS)

    Yue, Liu; Hang, Mend

    2018-01-01

    With the rapid development and construction of large-scale wind farms and their grid-connected operation, series-compensated AC transmission of wind power is gradually becoming the main way to improve wind power availability and grid stability, but the integration of wind farms changes the sub-synchronous oscillation (SSO) damping characteristics of the synchronous generator system. Regarding the SSO problems caused by the integration of large-scale wind farms, this paper focuses on wind farms based on the doubly fed induction generator (DFIG) and summarizes the SSO mechanisms in large-scale wind power integrated systems with series compensation, which can be classified into three types: sub-synchronous control interaction (SSCI), sub-synchronous torsional interaction (SSTI), and sub-synchronous resonance (SSR). Then, SSO modelling and analysis methods are categorized and compared by their applicable areas. Furthermore, this paper summarizes the suppression measures of actual SSO projects based on different control objectives. Finally, research prospects in this field are explored.

  14. Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Massie, Michael J.; Morris, A. Terry

    2010-01-01

    Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.

  15. Integrated computational study of ultra-high heat flux cooling using cryogenic micro-solid nitrogen spray

    NASA Astrophysics Data System (ADS)

    Ishimoto, Jun; Oh, U.; Tan, Daisuke

    2012-10-01

    A new type of ultra-high heat flux cooling system using an atomized spray of cryogenic micro-solid nitrogen (SN2) particles produced by a superadiabatic two-fluid nozzle was developed and numerically investigated for application to next-generation supercomputer processor thermal management. The fundamental heat transfer and cooling performance characteristics of a micro-solid nitrogen particulate spray impinging on a heated substrate were numerically investigated and experimentally measured by a new type of integrated computational-experimental technique. The employed Computational Fluid Dynamics (CFD) analysis, based on the Euler-Lagrange model, is focused on the cryogenic spray behavior of atomized particulate micro-solid nitrogen and on its ultra-high heat flux cooling characteristics. Based on the numerically predicted performance, a new type of cryogenic spray cooling technique for ultra-high heat power density devices was developed. The present integrated computation clarifies that the cryogenic micro-solid spray cooling characteristics are affected by several factors in the heat transfer process of the micro-solid spray impinging on the heated surface, as well as by the atomization behavior of the micro-solid particles. When micro-SN2 spray cooling was used, an ultra-high cooling heat flux level was achieved during operation, a better cooling performance than that with liquid nitrogen (LN2) spray cooling. As micro-SN2 cooling has the advantage of direct latent heat transport, which avoids the film boiling state, ultra-short time scale heat transfer in a thin boundary layer is more feasible than in LN2 spray. The present numerical prediction of the micro-SN2 spray cooling heat flux profile reasonably reproduces the measured cooling wall heat flux profiles. The application of micro-solid spray as a refrigerant for next-generation computer processors is anticipated, and its ultra-high heat flux technology is expected…
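    In an Euler-Lagrange spray model of the kind referred to above, the gas phase is solved on a grid (Euler) while each particle is integrated along its own trajectory (Lagrange). The simplest form of that Lagrangian step is the Stokes-drag momentum equation with a particle response time τ_p; a minimal sketch, not the authors' CFD model:

```python
def track_particle(u_gas, tau_p, v0, dt, steps):
    """Explicit-Euler integration of the Stokes-drag momentum equation
    dv/dt = (u_gas - v) / tau_p for one Lagrangian particle, where tau_p
    is the particle response time. Returns the final particle velocity."""
    v = float(v0)
    for _ in range(steps):
        v += dt * (u_gas - v) / tau_p
    return v

# Assumed numbers: 50 m/s gas jet, tau_p = 1 ms, particle starts at rest;
# after 20 response times the particle should have relaxed to the gas speed.
v_end = track_particle(u_gas=50.0, tau_p=1e-3, v0=0.0, dt=1e-5, steps=2000)
```

Production spray codes add gravity, evaporation/sublimation source terms and two-way coupling back to the gas, but the relaxation toward the carrier velocity is the same mechanism.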

  16. Recent developments in microfluidic large scale integration.

    PubMed

    Araci, Ismail Emre; Brisk, Philip

    2014-02-01

    In 2002, Thorsen et al. integrated thousands of micromechanical valves on a single microfluidic chip and demonstrated that the control of fluidic networks can be simplified through multiplexors [1]. This enabled the realization of highly parallel and automated fluidic processes with a substantial sample-economy advantage. Moreover, the fabrication of these devices by multilayer soft lithography was easy and reliable, which contributed to the power of the technology: microfluidic large-scale integration (mLSI). Since then, mLSI has found use in a wide variety of applications in biology and chemistry. In the meantime, efforts to improve the technology have been ongoing. These efforts mostly focus on novel materials, components, micromechanical valve actuation methods, and chip architectures for mLSI. In this review, these technological advances are discussed and recent examples of mLSI applications are summarized. Copyright © 2013 Elsevier Ltd. All rights reserved.
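    The multiplexor simplification credited to Thorsen et al. comes from binary addressing: n flow channels can be selected with only 2·log2(n) control lines, because each address bit gets a complementary pair of control channels. A schematic sketch of the bookkeeping (the bit-ordering convention here is an assumption for illustration, not from the review):

```python
import math

def mux_control_lines(n_flow):
    """Control lines for a binary microfluidic multiplexer: each of the
    ceil(log2 n) address bits needs a complementary pair of channels."""
    return 2 * math.ceil(math.log2(n_flow))

def address(channel, n_flow):
    """Address bits (LSB first) selecting one flow channel; within each
    complementary pair, the bit says which of the two lines to pressurize."""
    bits = math.ceil(math.log2(n_flow))
    return [(channel >> b) & 1 for b in range(bits)]
```

So 1024 chambers need only 20 control lines instead of 1024 dedicated valves, which is the economy that made mLSI chips practical.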

  17. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, such simulations often require excessively long simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better way to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solutes, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large-system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly, by counting the particle number fluctuations in small open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to those of the more costly radial distribution function method.
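    The fluctuation route described above can be sketched for the simplest single-species case, where the like-species Kirkwood-Buff integral follows from particle-number fluctuations in open sub-volumes: G = V_sub(⟨N²⟩ − ⟨N⟩² − ⟨N⟩)/⟨N⟩². For an ideal gas the counts are Poisson (variance equals mean), so G vanishes; that makes a convenient sanity check. The function names and tolerances below are mine, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def kb_integral(positions, box, sub, n_samples=2000):
    """Like-species Kirkwood-Buff integral estimated from particle-number
    fluctuations in randomly placed cubic sub-volumes of side `sub`:
        G = V_sub * (<N^2> - <N>^2 - <N>) / <N>^2
    For ideal-gas (Poisson) statistics, var = mean and G -> 0."""
    counts = np.empty(n_samples)
    for i in range(n_samples):
        corner = rng.uniform(0.0, box - sub, size=3)
        inside = np.all((positions >= corner) & (positions < corner + sub), axis=1)
        counts[i] = inside.sum()
    m = counts.mean()
    return sub ** 3 * (counts.var() - m) / m ** 2

# Ideal-gas check: uniform random points should give G very close to zero
pts = rng.uniform(0.0, 1.0, size=(20000, 3))
g = kb_integral(pts, box=1.0, sub=0.1)
```

In a real liquid the sub-volume counts are correlated through the pair structure, and finite size scaling extrapolates the sub-volume estimates to the thermodynamic limit.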

  18. Integral criteria for large-scale multiple fingerprint solutions

    NASA Astrophysics Data System (ADS)

    Ushmaev, Oleg S.; Novikov, Sergey O.

    2004-08-01

    We propose the definition and analysis of the optimal integral similarity score criterion for large-scale multimodal civil ID systems. First, the general properties of score distributions for genuine and impostor matches for different systems and input devices are investigated. The empirical statistics were taken from real biometric tests. We then carry out the analysis of simultaneous score distributions for a number of combined biometric tests, primarily for multiple fingerprint solutions. Explicit and approximate relations for the optimal integral score, which provides the least value of the FRR while the FAR is predefined, have been obtained. The results of a real multiple fingerprint test show good correspondence with the theoretical results over a wide range of False Acceptance and False Rejection Rates.
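    The criterion above fixes the FAR and asks what FRR the corresponding threshold implies. For a single similarity score the empirical version is just a quantile of the impostor score distribution; a sketch with synthetic Gaussian scores (the distributions are assumptions for illustration, not the paper's data):

```python
import numpy as np

def frr_at_fixed_far(genuine, impostor, far_target):
    """Choose the score threshold giving FAR = far_target on impostor
    scores (higher score = better match), then report the FRR that
    threshold implies on genuine scores."""
    thr = np.quantile(impostor, 1.0 - far_target)
    frr = float(np.mean(genuine < thr))
    return thr, frr

# Assumed well-separated score distributions, 10k trials each
rng = np.random.default_rng(1)
genuine = rng.normal(0.8, 0.1, 10000)
impostor = rng.normal(0.2, 0.1, 10000)
thr, frr = frr_at_fixed_far(genuine, impostor, far_target=0.001)
```

The paper's integral criterion generalizes this to a combined score over multiple fingers, where the joint genuine/impostor distributions replace the one-dimensional ones here.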

  19. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    PubMed

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing has resulted in a shortage of efficient ultra-large-scale biological sequence alignment approaches for coping with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large (e.g. files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on large-scale DNA and protein data sets, with files of more than 1 GB, showed that HAlign-II saves time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences, shows extremely high memory efficiency, and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II with open-source codes and datasets was established at http://lab.malab.cn/soft/halign.

  20. Do large-scale assessments measure students' ability to integrate scientific knowledge?

    NASA Astrophysics Data System (ADS)

    Lee, Hee-Sun

    2010-03-01

    Large-scale assessments are used as means to diagnose the current status of student achievement in science and to compare students across schools, states, and countries. For efficiency, multiple-choice items and dichotomously scored open-ended items are pervasively used in large-scale assessments such as the Trends in International Math and Science Study (TIMSS). This study investigated how well these items measure secondary school students' ability to integrate scientific knowledge. The study collected responses of 8400 students to 116 multiple-choice and 84 open-ended items and applied an Item Response Theory analysis based on the Rasch Partial Credit Model. Results indicate that most multiple-choice items and dichotomously scored open-ended items can determine whether students hold normative ideas about science topics, but cannot measure whether students integrate multiple pieces of relevant science ideas. Only when the scoring rubric is redesigned to capture subtle nuances of student open-ended responses do open-ended items become a valid and reliable tool to assess students' knowledge integration ability.
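    For reference, the Rasch Partial Credit Model used in this analysis assigns each response category a probability built from cumulative step difficulties: P(X = h) ∝ exp(Σ_{j≤h}(θ − δ_j)). A minimal sketch of the category probabilities; the ability and step-difficulty values are illustrative, not estimates from the study:

```python
import numpy as np

def pcm_probs(theta, deltas):
    """Category probabilities under the Rasch Partial Credit Model.
    deltas[j] is the difficulty of the step from category j to j+1;
    returns P(X = 0..len(deltas)) for a respondent of ability theta."""
    # Cumulative logits: 0 for category 0, then running sum of (theta - delta_j)
    steps = np.concatenate([[0.0], np.cumsum(theta - np.asarray(deltas, float))])
    w = np.exp(steps - steps.max())  # shift for numerical stability
    return w / w.sum()

# Assumed 4-category item with ordered step difficulties, ability theta = 0.5
p = pcm_probs(theta=0.5, deltas=[-1.0, 0.0, 1.0])
```

Redesigning a rubric to award partial credit amounts to adding intermediate categories (more δ_j steps), which is what lets the model separate partial from full knowledge integration.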

  1. Ultra-Large Solar Sail

    NASA Technical Reports Server (NTRS)

    Burton, Rodney; Coverstone, Victoria

    2009-01-01

    UltraSail is a next-generation ultra-large (km2 class) sail system. Analysis of the launch, deployment, stabilization, and control of these sails shows that high-payload-mass fractions for interplanetary and deep-space missions are possible. UltraSail combines propulsion and control systems developed for formation-flying microsatellites with a solar sail architecture to achieve controllable sail areas approaching 1 km2. Electrically conductive CP-1 polyimide film results in sail subsystem area densities as low as 5 g/m2. UltraSail produces thrust levels many times those of ion thrusters used for comparable deep-space missions. The primary innovation involves the near-elimination of sail-supporting structures by attaching each blade tip to a formation- flying microsatellite, which deploys the sail and then articulates the sail to provide attitude control, including spin stabilization and precession of the spin axis. These microsatellite tips are controlled by microthrusters for sail-film deployment and mission operations. UltraSail also avoids the problems inherent in folded sail film, namely stressing, yielding, or perforating, by storing the film in a roll for launch and deployment. A 5-km long by 2 micrometer thick film roll on a mandrel with a 1 m circumference (32 cm diameter) has a stored thickness of 5 cm. A 5 m-long mandrel can store a film area of 25,000 m2, and a four-blade system has an area of 0.1 sq km.

  2. MAINTAINING DATA QUALITY IN THE PERFORMANCE OF A LARGE SCALE INTEGRATED MONITORING EFFORT

    EPA Science Inventory

    Macauley, John M. and Linda C. Harwell. In press. Maintaining Data Quality in the Performance of a Large Scale Integrated Monitoring Effort (Abstract). To be presented at EMAP Symposium 2004: Integrated Monitoring and Assessment for Effective Water Quality Management, 3-7 May 200...

  3. The INTEGRAL long monitoring of persistent ultra compact X-ray bursters

    NASA Astrophysics Data System (ADS)

    Fiocchi, M.; Bazzano, A.; Ubertini, P.; Bird, A. J.; Natalucci, L.; Sguera, V.

    2008-12-01

    Context: The combination of compact objects, short-period variability and peculiar chemical composition makes the ultra compact X-ray binaries a very interesting laboratory to study accretion processes and thermonuclear burning on the neutron star surface. Improved large optical telescopes and more sensitive X-ray satellites have increased the number of known ultra compact X-ray binaries, allowing their study with unprecedented detail. Aims: We analyze the average properties common to all ultra compact bursters observed by INTEGRAL from 0.2 keV to 150 keV. Methods: We have performed a systematic analysis of the INTEGRAL public data and Key-Program proprietary observations of a sample of the ultra compact X-ray binaries. In order to study their average properties in a very broad energy band, we combined INTEGRAL with BeppoSAX and SWIFT data whenever possible. For sources not showing any significant flux variations during the INTEGRAL monitoring, we built the average spectrum by combining all available data; in the case of variable fluxes, we used simultaneous INTEGRAL and SWIFT observations when available. Otherwise we compared IBIS and PDS data to check the variability and combined BeppoSAX with INTEGRAL/IBIS data. Results: All spectra are well represented by a two-component model consisting of a disk blackbody and Comptonised emission. The majority of these compact sources spend most of the time in a canonical low/hard state, with a dominating Comptonised component and an accretion rate Mdot lower than 10^-9 M_sun/yr, independent of the model used to fit the data. INTEGRAL is an ESA project with instruments and Science Data Center funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain), Czech Republic and Poland, and with the participation of Russia and the USA.

  4. Integration of Host Strain Bioengineering and Bioprocess Development Using Ultra-Scale Down Studies to Select the Optimum Combination: An Antibody Fragment Primary Recovery Case Study

    PubMed Central

    Aucamp, Jean P; Davies, Richard; Hallet, Damien; Weiss, Amanda; Titchener-Hooker, Nigel J

    2014-01-01

    An ultra scale-down primary recovery sequence was established for a platform E. coli Fab production process and used to evaluate the process robustness of various bioengineered strains. Centrifugal discharge in the initial dewatering stage was determined to be the major cause of cell breakage. The ability of cells to resist breakage was dependent on a combination of factors including host strain, vector, and fermentation strategy. Periplasmic extraction studies were conducted in shake flasks, and it was demonstrated that key performance parameters such as Fab titre and nucleic acid concentrations were mimicked. The shake flask system also captured particle aggregation effects seen in a large-scale stirred vessel, reproducing the fine particle size distribution that impacts the final centrifugal clarification stage. The use of scale-down primary recovery process sequences makes it possible to screen a larger number of engineered strains. This can lead to closer integration with, and better feedback between, strain development, fermentation development, and primary recovery studies. Biotechnol. Bioeng. 2014;111: 1971-1981. © 2014 Wiley Periodicals, Inc. PMID:24838387

  5. Multidimensional quantum entanglement with large-scale integrated optics.

    PubMed

    Wang, Jianwei; Paesani, Stefano; Ding, Yunhong; Santagati, Raffaele; Skrzypczyk, Paul; Salavrakos, Alexia; Tura, Jordi; Augusiak, Remigiusz; Mančinska, Laura; Bacco, Davide; Bonneau, Damien; Silverstone, Joshua W; Gong, Qihuang; Acín, Antonio; Rottwitt, Karsten; Oxenløwe, Leif K; O'Brien, Jeremy L; Laing, Anthony; Thompson, Mark G

    2018-04-20

    The ability to control multidimensional quantum systems is central to the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control, and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimensions up to 15 × 15 on a large-scale silicon photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality, and controllability of our multidimensional technology, and further exploit these abilities to demonstrate previously unexplored quantum applications, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development of multidimensional quantum technologies. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  6. Microfluidic large-scale integration: the evolution of design rules for biological automation.

    PubMed

    Melin, Jessica; Quake, Stephen R

    2007-01-01

    Microfluidic large-scale integration (mLSI) refers to the development of microfluidic chips with thousands of integrated micromechanical valves and control components. This technology is utilized in many areas of biology and chemistry and is a candidate to replace today's conventional automation paradigm, which consists of fluid-handling robots. We review the basic development of mLSI and then discuss design principles of mLSI to assess the capabilities and limitations of the current state of the art and to facilitate the application of mLSI to areas of biology. Many design and practical issues, including economies of scale, parallelization strategies, multiplexing, and multistep biochemical processing, are discussed. Several microfluidic components used as building blocks to create effective, complex, and highly integrated microfluidic networks are also highlighted.

  7. KA-SB: from data integration to large scale reasoning

    PubMed Central

    Roldán-García, María del Mar; Navas-Delgado, Ismael; Kerzazi, Amine; Chniber, Othmane; Molina-Castro, Joaquín; Aldana-Montes, José F

    2009-01-01

    Background The analysis of information in the biological domain is usually focused on the analysis of data from single on-line data sources. Unfortunately, studying a biological process requires having access to disperse, heterogeneous, autonomous data sources. In this context, an analysis of the information is not possible without the integration of such data. Methods KA-SB is a querying and analysis system for final users based on combining a data integration solution with a reasoner. Thus, the tool has been created with a process divided into two steps: 1) KOMF, the Khaos Ontology-based Mediator Framework, is used to retrieve information from heterogeneous and distributed databases; 2) the integrated information is crystallized in a (persistent and high performance) reasoner (DBOWL). This information can be further analyzed later (by means of querying and reasoning). Results In this paper we present a novel system that combines the use of a mediation system with the reasoning capabilities of a large scale reasoner to provide a way of finding new knowledge and of analyzing the integrated information from different databases, which is retrieved as a set of ontology instances. This tool uses a graphical query interface to build user queries easily, which shows a graphical representation of the ontology and allows users to build queries by clicking on the ontology concepts. Conclusion These kinds of systems (based on KOMF) will provide users with very large amounts of information (interpreted as ontology instances once retrieved), which cannot be managed using traditional main memory-based reasoners. We propose a process for creating persistent and scalable knowledge bases from sets of OWL instances obtained by integrating heterogeneous data sources with KOMF. This process has been applied to develop a demo tool, which uses the BioPax Level 3 ontology as the integration schema, and integrates UNIPROT, KEGG, CHEBI, BRENDA and SABIORK databases. PMID:19796402

  8. Large-scale data integration framework provides a comprehensive view on glioblastoma multiforme.

    PubMed

    Ovaska, Kristian; Laakso, Marko; Haapa-Paananen, Saija; Louhimo, Riku; Chen, Ping; Aittomäki, Viljami; Valo, Erkka; Núñez-Fontarnau, Javier; Rantanen, Ville; Karinen, Sirkku; Nousiainen, Kari; Lahesmaa-Korpinen, Anna-Maria; Miettinen, Minna; Saarinen, Lilli; Kohonen, Pekka; Wu, Jianmin; Westermarck, Jukka; Hautaniemi, Sampsa

    2010-09-07

    Coordinated efforts to collect large-scale data sets provide a basis for systems level understanding of complex diseases. In order to translate these fragmented and heterogeneous data sets into knowledge and medical benefits, advanced computational methods for data analysis, integration and visualization are needed. We introduce a novel data integration framework, Anduril, for translating fragmented large-scale data into testable predictions. The Anduril framework allows rapid integration of heterogeneous data with state-of-the-art computational methods and existing knowledge in bio-databases. Anduril automatically generates thorough summary reports and a website that shows the most relevant features of each gene at a glance, allows sorting of data based on different parameters, and provides direct links to more detailed data on genes, transcripts or genomic regions. Anduril is open-source; all methods and documentation are freely available. We have integrated multidimensional molecular and clinical data from 338 subjects with glioblastoma multiforme, one of the deadliest and most poorly understood cancers, using Anduril. The central objective of our approach is to identify genetic loci and genes that have a significant survival effect. Our results suggest several novel genetic alterations linked to glioblastoma multiforme progression and, more specifically, reveal Moesin as a novel glioblastoma multiforme-associated gene that has a strong survival effect and whose depletion in vitro significantly inhibited cell proliferation. All analysis results are available as a comprehensive website. Our results demonstrate that integrated analysis and visualization of multidimensional and heterogeneous data by Anduril enables drawing conclusions on functional consequences of large-scale molecular data. Many of the identified genetic loci and genes with a significant survival effect have not been reported earlier in the context of glioblastoma multiforme.
Thus, in addition to

  9. Ultra-stiff large-area carpets of carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Meysami, Seyyed Shayan; Dallas, Panagiotis; Britton, Jude; Lozano, Juan G.; Murdock, Adrian T.; Ferraro, Claudio; Gutierrez, Eduardo Saiz; Rijnveld, Niek; Holdway, Philip; Porfyrakis, Kyriakos; Grobert, Nicole

    2016-06-01

    Herewith, we report the influence of post-synthesis heat treatment (<=2350 °C and plasma temperatures) on the crystal structure, defect density, purity, alignment and dispersibility of free-standing large-area (several cm2) carpets of ultra-long (several mm) vertically aligned multi-wall carbon nanotubes (VA-MWCNTs). VA-MWCNTs were produced in large quantities (20-30 g per batch) using a semi-scaled-up aerosol-assisted chemical vapour deposition (AACVD) setup. Electron and X-ray diffraction showed that heat treatment at 2350 °C under inert atmosphere purifies the MWCNTs, removes residual catalyst particles, and partially aligns adjacent single crystals (crystallites) in polycrystalline MWCNTs. The purification and improvement in crystallite alignment within the MWCNTs resulted in reduced dispersibility of the VA-MWCNTs in liquid media. High-resolution microscopy revealed that the crystallinity is improved on scales of a few tens of nanometres while the point defects remain largely unaffected. The heat treatment also had a marked benefit on the mechanical properties of the carpets. For the first time, we report compression moduli as high as 120 MPa for VA-MWCNT carpets, i.e. an order of magnitude higher than previously reported figures. The application of higher temperatures (arc-discharge plasma, >=4000 °C) resulted in the formation of a novel graphite-matrix composite reinforced with CVD and arc-discharge-like carbon nanotubes.

  10. Optical correlator using very-large-scale integrated circuit/ferroelectric-liquid-crystal electrically addressed spatial light modulators

    NASA Technical Reports Server (NTRS)

    Turner, Richard M.; Jared, David A.; Sharp, Gary D.; Johnson, Kristina M.

    1993-01-01

    The use of 2-kHz 64 x 64 very-large-scale integrated circuit/ferroelectric-liquid-crystal electrically addressed spatial light modulators as the input and filter planes of a VanderLugt-type optical correlator is discussed. Liquid-crystal layer thickness variations that are present in the devices are analyzed, and the effects on correlator performance are investigated through computer simulations. Experimental results from the very-large-scale-integrated/ferroelectric-liquid-crystal optical-correlator system are presented and are consistent with the level of performance predicted by the simulations.

  11. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.

    PubMed

    Rangan, Aaditya V; Cai, David

    2007-02-01

    We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure of firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models-for example, those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of single neuron spike-time via a polynomial interpolation as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rate, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. For networks operating in realistic dynamical regimes, such as
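The integrating-factor idea in point (i) can be illustrated with a single conductance-based neuron: holding conductances fixed over a step, the membrane equation has a closed-form update that remains stable even when the total conductance is large. This is a generic exponential-integrator sketch, not the authors' full scheme, and all parameter values are assumed for illustration:

```python
import math

def if_step(V, g_exc, dt, g_L=0.05, E_L=-70.0, E_exc=0.0,
            V_thresh=-55.0, V_reset=-70.0):
    """One exponential-integrating-factor step for a conductance-based
    integrate-and-fire neuron (illustrative parameters, not the paper's).

    With conductances frozen over the step, dV/dt = -G*(V - V_inf) has the
    exact solution V -> V_inf + (V - V_inf)*exp(-G*dt), which stays stable
    even in stiff, high-conductance regimes where forward Euler blows up.
    """
    G = g_L + g_exc                          # total conductance
    V_inf = (g_L * E_L + g_exc * E_exc) / G  # steady-state voltage
    V_new = V_inf + (V - V_inf) * math.exp(-G * dt)
    spiked = V_new >= V_thresh
    return (V_reset if spiked else V_new), spiked

# A strong excitatory conductance drives the neuron over threshold; note that
# forward Euler at this dt (1 - G*dt = -4.25) would oscillate unstably.
V, spiked = if_step(-70.0, g_exc=1.0, dt=5.0)
```

The spike-spike correction and clustering steps of the paper then operate on top of such stable single-neuron updates.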

  12. Quantifying the Impacts of Large Scale Integration of Renewables in Indian Power Sector

    NASA Astrophysics Data System (ADS)

    Kumar, P.; Mishra, T.; Banerjee, R.

    2017-12-01

    India's power sector is responsible for nearly 37 percent of India's greenhouse gas emissions. For a fast-emerging economy like India, whose population and energy consumption are poised to rise rapidly in the coming decades, renewable energy can play a vital role in decarbonizing the power sector. In this context, India has targeted a 33-35 percent reduction in emission intensity (with respect to 2005 levels) along with large-scale renewable energy targets (100 GW solar, 60 GW wind, and 10 GW biomass energy by 2022) in the INDCs submitted under the Paris Agreement. But large-scale integration of renewable energy is a complex process which faces a number of challenges, such as capital intensiveness, matching intermittent generation to load with limited storage capacity, and reliability. This study attempts to assess the technical feasibility of integrating renewables into the Indian electricity mix by 2022 and analyze the implications for power sector operations. The study uses TIMES, a bottom-up energy optimization model with unit commitment and dispatch features. We model coal- and gas-fired units discretely, with region-wise representation of wind and solar resources. The dispatch features are used for operational analysis of power plant units under ramp-rate and minimum-generation constraints. The study analyzes India's electricity sector transition for the year 2022 with three scenarios: a base case (no RE addition), an INDC scenario (100 GW solar, 60 GW wind, 10 GW biomass), and a low-RE scenario (50 GW solar, 30 GW wind). The results provide insights into the trade-offs and investment decisions involved in achieving mitigation targets. The study also examines the operational reliability and flexibility requirements of the system for integrating renewables.
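The operational tension the study models — minimum-generation and ramp-rate limits on thermal units forcing renewable curtailment — can be shown with a deliberately tiny toy dispatcher. All names and numbers are hypothetical and far simpler than a TIMES unit-commitment run:

```python
def dispatch(demand, renewables, coal_cap, coal_min, ramp):
    """Toy hourly dispatch: renewables serve load first, one aggregate coal
    fleet covers the residual subject to minimum-generation and ramp limits.
    Returns (coal schedule, curtailed renewable energy). Illustrative only."""
    coal_prev = coal_min          # hypothetical initial operating point
    schedule, curtailed = [], 0.0
    for d, re in zip(demand, renewables):
        residual = max(d - re, 0.0)
        # target respects the minimum stable level and the capacity limit
        target = min(max(residual, coal_min), coal_cap)
        # then clamp to what the fleet can actually ramp to in one hour
        coal = min(max(target, coal_prev - ramp), coal_prev + ramp)
        if coal > residual:       # forced generation above the residual load
            curtailed += min(coal - residual, re)
        schedule.append(coal)
        coal_prev = coal
    return schedule, curtailed

# A midday renewable surge pushes coal against its minimum level, so 20 units
# of renewable energy must be spilled even though demand is constant.
schedule, spilled = dispatch(demand=[100, 100, 100], renewables=[0, 80, 0],
                             coal_cap=120, coal_min=40, ramp=30)
print(schedule, spilled)  # -> [70, 40, 70] 20.0
```

Even this caricature reproduces the qualitative result: flexibility (lower minimum generation, faster ramps) is what converts renewable capacity into delivered energy.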

  13. St. Louis Initiative for Integrated Care Excellence (SLI(2)CE): integrated-collaborative care on a large scale model.

    PubMed

    Brawer, Peter A; Martielli, Richard; Pye, Patrice L; Manwaring, Jamie; Tierney, Anna

    2010-06-01

    The primary care health setting is in crisis. Increasing demand for services, with dwindling numbers of providers, has resulted in decreased access and decreased satisfaction for both patients and providers. Moreover, the overwhelming majority of primary care visits are for behavioral and mental health concerns rather than issues of a purely medical etiology. Integrated-collaborative models of health care delivery offer possible solutions to this crisis. The purpose of this article is to review the data available after 2 years of the St. Louis Initiative for Integrated Care Excellence, an example of integrated-collaborative care on a large-scale model within a regional Veterans Affairs Health Care System. There is clear evidence that the SLI(2)CE initiative rather dramatically increased access to health care and modified primary care practitioners' willingness to address mental health issues within the primary care setting. In addition, the data suggest strong fidelity to a model of integrated-collaborative care which has been successful in the past. Integrated-collaborative care offers unique advantages over the traditional view and practice of medical care. Through careful implementation and practice, success is possible on a large-scale model.

  14. Retaining large and adjustable elastic strains of kilogram-scale Nb nanowires [Better Superconductor by Elastic Strain Engineering: Kilogram-scale Free-Standing Niobium Metal Composite with Large Retained Elastic Strains]

    DOE PAGES

    Hao, Shijie; Cui, Lishan; Wang, Hua; ...

    2016-02-10

    Crystals held at ultrahigh elastic strains and stresses may exhibit exceptional physical and chemical properties. Individual metallic nanowires can sustain ultra-large elastic strains of 4-7%. However, retaining elastic strains of such magnitude in kilogram-scale nanowires is challenging. Here, we find that under active load, ~5.6% elastic strain can be achieved in Nb nanowires in a composite material. Moreover, large tensile (2.8%) and compressive (-2.4%) elastic strains can be retained in kilogram-scale Nb nanowires when the composite is unloaded to a free-standing condition. It is then demonstrated that the retained tensile elastic strains of Nb nanowires significantly increase their superconducting transition temperature and critical magnetic fields, corroborating ab initio calculations based on BCS theory. This free-standing nanocomposite design paradigm opens new avenues for retaining ultra-large elastic strains in great quantities of nanowires and elastic-strain engineering at industrial scale.

  15. Ultra-Low Loss Waveguides with Application to Photonic Integrated Circuits

    NASA Astrophysics Data System (ADS)

    Bauters, Jared F.

    The integration of photonic components using a planar platform promises advantages in cost, size, weight, and power consumption for optoelectronic systems. Yet, the typical propagation loss of 5-10 dB/m in a planar silica waveguide is nearly five orders-of-magnitude larger than that in low loss optical fibers. For some applications, the miniaturization of the photonic system and resulting smaller propagation lengths from integration are enough to overcome the increase in propagation loss. For other more demanding systems or applications, such as those requiring long optical time delays or high-quality-factor (Q factor) resonators, the high propagation loss can degrade system performance to a degree that trumps the potential advantages offered by integration. Thus, the reduction of planar waveguide propagation loss in a Si3N4 based waveguide platform is a primary focus of this dissertation. The ultra-low loss stoichiometric Si3N4 waveguide platform offers the additional advantages of fabrication process stability and repeatability. Yet, active devices such as lasers, amplifiers, and photodetectors have not been monolithically integrated with ultra-low loss waveguides due to the incompatibility of the active and ultra-low loss processing thermal budgets (ultra-low loss waveguides are annealed at temperatures exceeding 1000 °C in order to drive out impurities). So a platform that enables the integration of active devices with the ultra-low losses of the Si3N4 waveguide platform is this dissertation's second focus. The work enables the future fabrication of sensor, gyroscope, true time delay, and low phase noise oscillator photonic integrated circuits.
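The practical weight of propagation loss can be made concrete through the intrinsic resonator quality factor it implies, Q = 2πn_g/(λα). A short sketch of the conversion (the group index and the loss figures are illustrative assumptions, not values from the dissertation):

```python
import math

def intrinsic_q(loss_db_per_m: float, wavelength_m: float = 1.55e-6,
                group_index: float = 1.5) -> float:
    """Intrinsic resonator Q implied by a waveguide propagation loss:
    Q = 2*pi*n_g / (lambda * alpha), with alpha converted from dB/m
    to 1/m. Group index and wavelength are assumed, illustrative values."""
    alpha = loss_db_per_m * math.log(10) / 10.0   # dB/m -> 1/m
    return 2 * math.pi * group_index / (wavelength_m * alpha)

# Five orders of magnitude in loss is five orders of magnitude in Q:
q_planar = intrinsic_q(5.0)     # typical planar silica, ~5 dB/m
q_fiber = intrinsic_q(2e-4)     # low-loss fiber, ~0.2 dB/km
```

This is why the "five orders-of-magnitude" loss gap matters for the high-Q resonator and long-delay applications the abstract names: the achievable Q and storage time scale inversely with α.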

  16. An innovative large scale integration of silicon nanowire-based field effect transistors

    NASA Astrophysics Data System (ADS)

    Legallais, M.; Nguyen, T. T. T.; Mouis, M.; Salem, B.; Robin, E.; Chenevier, P.; Ternon, C.

    2018-05-01

    Since the early 2000s, silicon nanowire field effect transistors have been emerging as ultrasensitive biosensors offering label-free, portable and rapid detection. Nevertheless, their large-scale production remains an ongoing challenge due to time-consuming, complex and costly technology. In order to bypass these issues, we report here on the first integration of silicon nanowire networks, called nanonets, into long-channel field effect transistors using a standard microelectronic process. Special attention is paid to the silicidation of the contacts, which involves a large number of SiNWs. The electrical characteristics of these FETs, constituted by randomly oriented silicon nanowires, are also studied. Compatible integration on the back-end of CMOS readout and promising electrical performance open new opportunities for sensing applications.

  17. Integrative approaches for large-scale transcriptome-wide association studies

    PubMed Central

    Gusev, Alexander; Ko, Arthur; Shi, Huwenbo; Bhatia, Gaurav; Chung, Wonil; Penninx, Brenda W J H; Jansen, Rick; de Geus, Eco JC; Boomsma, Dorret I; Wright, Fred A; Sullivan, Patrick F; Nikkola, Elina; Alvarez, Marcus; Civelek, Mete; Lusis, Aldons J.; Lehtimäki, Terho; Raitoharju, Emma; Kähönen, Mika; Seppälä, Ilkka; Raitakari, Olli T.; Kuusisto, Johanna; Laakso, Markku; Price, Alkes L.; Pajukanta, Päivi; Pasaniuc, Bogdan

    2016-01-01

    Many genetic variants influence complex traits by modulating gene expression, thus altering the abundance levels of one or multiple proteins. Here, we introduce a powerful strategy that integrates gene expression measurements with summary association statistics from large-scale genome-wide association studies (GWAS) to identify genes whose cis-regulated expression is associated with complex traits. We leverage expression imputation to perform a transcriptome-wide association scan (TWAS) to identify significant expression-trait associations. We applied our approaches to expression data from blood and adipose tissue measured in ~3,000 individuals overall. We imputed gene expression into GWAS data from over 900,000 phenotype measurements to identify 69 novel genes significantly associated with obesity-related traits (BMI, lipids, and height). Many of the novel genes are associated with relevant phenotypes in the Hybrid Mouse Diversity Panel. Our results showcase the power of integrating genotype, gene expression and phenotype to gain insights into the genetic basis of complex traits. PMID:26854917
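The imputation step — predicting cis-regulated expression from genotype dosages with reference-panel weights, then testing the predicted expression against a trait — can be sketched as follows. The dosages, weights, and the simplified correlation-based z-statistic are all illustrative, not the authors' implementation:

```python
def impute_expression(genotypes, weights):
    """Impute cis-regulated expression for each individual as a weighted sum
    of genotype dosages. The weights would come from an eQTL reference
    panel; here they are made-up numbers for illustration."""
    return [sum(g * w for g, w in zip(row, weights)) for row in genotypes]

def association_z(imputed, phenotype):
    """Correlation-based association statistic between imputed expression
    and a phenotype (a simplified stand-in for the TWAS test)."""
    n = len(imputed)
    mx, my = sum(imputed) / n, sum(phenotype) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(imputed, phenotype))
    vx = sum((x - mx) ** 2 for x in imputed)
    vy = sum((y - my) ** 2 for y in phenotype)
    r = cov / (vx * vy) ** 0.5
    return r * (n ** 0.5)     # rough z-score approximation for large n

genos = [[0, 1, 2], [1, 1, 0], [2, 0, 1], [0, 2, 2]]  # dosages at 3 cis-SNPs
weights = [0.5, -0.2, 0.1]                            # eQTL effect sizes
expr = impute_expression(genos, weights)
z = association_z(expr, [0.1, 0.4, 1.0, -0.1])        # toy phenotype
```

Because only the weights (not individual-level expression) are needed at test time, the approach scales to GWAS with summary statistics, which is the key design choice of the paper.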

  18. Strategies for Large Scale Implementation of a Multiscale, Multiprocess Integrated Hydrologic Model

    NASA Astrophysics Data System (ADS)

    Kumar, M.; Duffy, C.

    2006-05-01

    Distributed models simulate hydrologic state variables in space and time while taking into account the heterogeneities in terrain, surface and subsurface properties, and meteorological forcings. The computational cost and complexity associated with these models increase with their tendency to accurately simulate the large number of interacting physical processes at fine spatio-temporal resolution in a large basin. A hydrologic model run on a coarse spatial discretization of the watershed with a limited number of physical processes imposes a lower computational load, but this negatively affects the accuracy of model results and restricts physical realization of the problem. So it is imperative to have an integrated modeling strategy (a) which can be universally applied at various scales in order to study the tradeoffs between computational complexity (determined by spatio-temporal resolution), accuracy and predictive uncertainty in relation to various approximations of physical processes; (b) which can be applied at adaptively different spatial scales in the same domain by taking into account the local heterogeneity of topography and hydrogeologic variables; and (c) which is flexible enough to incorporate different numbers and approximations of process equations depending on model purpose and computational constraints. An efficient implementation of this strategy becomes all the more important for the Great Salt Lake river basin, which is relatively large (~89,000 sq. km) and complex in terms of hydrologic and geomorphic conditions. Also, the types and time scales of hydrologic processes which are dominant in different parts of the basin are different. Part of the snowmelt runoff generated in the Uinta Mountains infiltrates and contributes as base flow to the Great Salt Lake over a time scale of decades to centuries. The adaptive strategy helps capture the steep topographic and climatic gradient along the Wasatch front. Here we present the aforesaid modeling strategy along with an associated

  19. XLinkDB 2.0: integrated, large-scale structural analysis of protein crosslinking data

    PubMed Central

    Schweppe, Devin K.; Zheng, Chunxiang; Chavez, Juan D.; Navare, Arti T.; Wu, Xia; Eng, Jimmy K.; Bruce, James E.

    2016-01-01

    Motivation: Large-scale chemical cross-linking with mass spectrometry (XL-MS) analyses are quickly becoming a powerful means for high-throughput determination of protein structural information and protein–protein interactions. Recent studies have garnered thousands of cross-linked interactions, yet the field lacks an effective tool to compile experimental data or access the network and structural knowledge for these large scale analyses. We present XLinkDB 2.0 which integrates tools for network analysis, Protein Databank queries, modeling of predicted protein structures and modeling of docked protein structures. The novel, integrated approach of XLinkDB 2.0 enables the holistic analysis of XL-MS protein interaction data without limitation to the cross-linker or analytical system used for the analysis. Availability and Implementation: XLinkDB 2.0 can be found here, including documentation and help: http://xlinkdb.gs.washington.edu/. Contact: jimbruce@uw.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153666

  20. Performance Improvement of Receivers Based on Ultra-Tight Integration in GNSS-Challenged Environments

    PubMed Central

    Qin, Feng; Zhan, Xingqun; Du, Gang

    2013-01-01

    Ultra-tight integration was first proposed by Abbott in 2003 with the purpose of integrating a global navigation satellite system (GNSS) and an inertial navigation system (INS). This technology can improve the tracking performance of a receiver by reconfiguring the tracking loops in GNSS-challenged environments. In this paper, models of all error sources known to date in the phase lock loops (PLLs) of a standard receiver and of an ultra-tightly integrated GNSS/INS receiver are built. Based on these models, the tracking performance of the two receivers is compared to verify the improvement due to ultra-tight integration. Meanwhile, the PLL error distributions of the two receivers are also depicted to analyze the error changes in the tracking loops. These results show that the tracking error is significantly reduced in the ultra-tightly integrated GNSS/INS receiver, since the receiver's dynamics are estimated and compensated by an INS. Moreover, the mathematical relationship between the tracking performance of the ultra-tightly integrated GNSS/INS receiver and the quality of the selected inertial measurement unit (IMU) is derived from the error models and verified by error comparisons of four ultra-tightly integrated GNSS/INS receivers aided by different-grade IMUs.
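The benefit of INS aiding can be quantified with the standard PLL thermal-noise jitter expression found in GNSS textbooks (e.g. Kaplan & Hegarty): aiding lets the loop bandwidth be narrowed, since the loop no longer has to track vehicle dynamics, and narrower bandwidth means less jitter at the same C/N0. The bandwidths and C/N0 below are illustrative, not from the paper:

```python
import math

def pll_thermal_jitter_deg(cn0_dbhz, bn_hz=18.0, t_coh=0.020):
    """1-sigma PLL thermal-noise jitter in degrees, using the standard
    textbook expression. bn_hz is the loop noise bandwidth and t_coh the
    coherent integration time; defaults are illustrative assumptions."""
    cn0 = 10 ** (cn0_dbhz / 10.0)   # dB-Hz -> linear ratio (1/s)
    var = (bn_hz / cn0) * (1.0 + 1.0 / (2.0 * t_coh * cn0))
    return (360.0 / (2.0 * math.pi)) * math.sqrt(var)

# At a weak 25 dB-Hz signal, an INS-aided (ultra-tight) loop narrowed from
# 18 Hz to 3 Hz cuts thermal jitter by more than half:
unaided = pll_thermal_jitter_deg(25.0, bn_hz=18.0)
aided = pll_thermal_jitter_deg(25.0, bn_hz=3.0)
```

This single-formula view already captures the paper's qualitative conclusion; the paper's full error budget adds oscillator, vibration, and dynamic-stress terms on top.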

  1. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we

  2. Integration and segregation of large-scale brain networks during short-term task automatization

    PubMed Central

    Mohr, Holger; Wolfensteller, Uta; Betzel, Richard F.; Mišić, Bratislav; Sporns, Olaf; Richiardi, Jonas; Ruge, Hannes

    2016-01-01

    The human brain is organized into large-scale functional networks that can flexibly reconfigure their connectivity patterns, supporting both rapid adaptive control and long-term learning processes. However, it has remained unclear how short-term network dynamics support the rapid transformation of instructions into fluent behaviour. Comparing fMRI data of a learning sample (N=70) with a control sample (N=67), we find that increasingly efficient task processing during short-term practice is associated with a reorganization of large-scale network interactions. Practice-related efficiency gains are facilitated by enhanced coupling between the cingulo-opercular network and the dorsal attention network. Simultaneously, short-term task automatization is accompanied by decreasing activation of the fronto-parietal network, indicating a release of high-level cognitive control, and a segregation of the default mode network from task-related networks. These findings suggest that short-term task automatization is enabled by the brain's ability to rapidly reconfigure its large-scale network organization involving complementary integration and segregation processes. PMID:27808095

  3. 76 FR 14688 - In the Matter of Certain Large Scale Integrated Circuit Semiconductor Chips and Products...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-17

    ... Integrated Circuit Semiconductor Chips and Products Containing the Same; Notice of a Commission Determination... certain large scale integrated circuit semiconductor chips and products containing same by reason of... existence of a domestic industry. The Commission's notice of investigation named several respondents...

  4. Gravity at the horizon: on relativistic effects, CMB-LSS correlations and ultra-large scales in Horndeski's theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Renk, Janina; Zumalacárregui, Miguel; Montanari, Francesco, E-mail: renk@thphys.uni-heidelberg.de, E-mail: miguel.zumalacarregui@nordita.org, E-mail: francesco.montanari@helsinki.fi

    2016-07-01

    We address the impact of consistent modifications of gravity on the largest observable scales, focusing on relativistic effects in galaxy number counts and the cross-correlation between the matter large scale structure (LSS) distribution and the cosmic microwave background (CMB). Our analysis applies to a very broad class of general scalar-tensor theories encoded in the Horndeski Lagrangian and is fully consistent on linear scales, retaining the full dynamics of the scalar field and not assuming quasi-static evolution. As particular examples we consider self-accelerating Covariant Galileons, Brans-Dicke theory and parameterizations based on the effective field theory of dark energy, using the hi_class code to address the impact of these models on relativistic corrections to LSS observables. We find that especially effects which involve integrals along the line of sight (lensing convergence, time delay and the integrated Sachs-Wolfe effect—ISW) can be considerably modified, and even lead to O(1000%) deviations from General Relativity in the case of the ISW effect for Galileon models, for which standard probes such as the growth function only vary by O(10%). These effects become dominant when correlating galaxy number counts at different redshifts and can lead to ∼ 50% deviations in the total signal that might be observable by future LSS surveys. Because of their integrated nature, these deep-redshift cross-correlations are sensitive to modifications of gravity even when probing eras much before dark energy domination. We further isolate the ISW effect using the cross-correlation between LSS and CMB temperature anisotropies and use current data to further constrain Horndeski models. Forthcoming large-volume galaxy surveys using multiple-tracers will search for all these effects, opening a new window to probe gravity and cosmic acceleration at the largest scales available in our universe.

  5. Ten-channel InP-based large-scale photonic integrated transmitter fabricated by SAG technology

    NASA Astrophysics Data System (ADS)

    Zhang, Can; Zhu, Hongliang; Liang, Song; Cui, Xiao; Wang, Huitao; Zhao, Lingjuan; Wang, Wei

    2014-12-01

    A 10-channel InP-based large-scale photonic integrated transmitter was fabricated by selective area growth (SAG) technology combined with butt-joint regrowth (BJR) technology. The SAG technology was utilized to fabricate the electroabsorption-modulated distributed feedback (DFB) laser (EML) arrays at the same time. A coplanar electrode design for the electroabsorption modulators (EAMs) was used for the flip-chip bonding package. The lasing wavelength of each DFB laser can be tuned by an integrated micro-heater to match the ITU grid, which needs only one electrode pad. The average output power of each channel is 250 μW at an injection current of 200 mA. The static extinction ratios of the EAMs for the 10 channels tested range from 15 to 27 dB at a reverse bias of 6 V. The 3 dB bandwidth of the chip for each channel is around 14 GHz. The novel design and simple fabrication process show enormous potential for reducing the cost of large-scale photonic integrated circuit (LS-PIC) transmitters with high chip yields.

  6. Reducing the two-loop large-scale structure power spectrum to low-dimensional, radial integrals

    DOE PAGES

    Schmittfull, Marcel; Vlah, Zvonimir

    2016-11-28

    Modeling the large-scale structure of the universe on nonlinear scales has the potential to substantially increase the science return of upcoming surveys by increasing the number of modes available for model comparisons. One way to achieve this is to model nonlinear scales perturbatively. Unfortunately, this involves high-dimensional loop integrals that are cumbersome to evaluate. Here, trying to simplify this, we show how two-loop (next-to-next-to-leading order) corrections to the density power spectrum can be reduced to low-dimensional, radial integrals. Many of those can be evaluated with a one-dimensional fast Fourier transform, which is significantly faster than the five-dimensional Monte-Carlo integrals that are needed otherwise. The general idea of this fast Fourier transform perturbation theory method is to switch between Fourier and position space to avoid convolutions and integrate over orientations, leaving only radial integrals. This reformulation is independent of the underlying shape of the initial linear density power spectrum and should easily accommodate features such as those from baryonic acoustic oscillations. We also discuss how to account for halo bias and redshift space distortions.
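    The core idea named above — switching between Fourier and position space so that convolutions become pointwise products — can be illustrated in one dimension. The kernels and grid below are hypothetical stand-ins, not the paper's perturbation-theory integrands:

    ```python
    import numpy as np

    # Hypothetical 1D stand-in for the convolution integrals appearing in
    # loop corrections: a direct O(N^2) convolution versus the O(N log N)
    # FFT route (transform, multiply pointwise, transform back).
    N = 256
    x = np.linspace(-10.0, 10.0, N, endpoint=False)
    f = np.exp(-x**2)          # stand-in kernel
    g = np.exp(-0.5 * x**2)    # second stand-in kernel

    # Direct circular convolution on the grid: O(N^2)
    idx = np.arange(N)
    direct = np.array([np.sum(f * g[(n - idx) % N]) for n in range(N)])

    # FFT-based convolution: O(N log N), identical up to rounding
    fft_conv = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

    assert np.allclose(direct, fft_conv)
    ```

    The same space-switching, combined with integrating out orientations, is what lets the two-loop terms collapse to low-dimensional radial integrals.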

  7. The prospect of modern thermomechanics in structural integrity calculations of large-scale pressure vessels

    NASA Astrophysics Data System (ADS)

    Fekete, Tamás

    2018-05-01

    Structural integrity calculations play a crucial role in designing large-scale pressure vessels. Used in the electric power generation industry, these kinds of vessels undergo extensive safety analyses and certification procedures before being deemed feasible for future long-term operation. The calculations are nowadays directed and supported by international standards and guides based on state-of-the-art results of applied research and technical development. However, their ability to predict a vessel's behavior under accidental circumstances after long-term operation is largely limited by the strong dependence of the analysis methodology on empirical models that are correlated to the behavior of structural materials and their changes during material aging. Recently a new scientific engineering paradigm, structural integrity, has been developing that is essentially a synergistic collaboration between a number of scientific and engineering disciplines, modeling, experiments and numerics. Although the application of the structural integrity paradigm has highly contributed to improving the accuracy of safety evaluations of large-scale pressure vessels, the predictive power of the analysis methodology has not yet improved significantly. This is because existing structural integrity calculation methodologies are based on the widespread and commonly accepted 'traditional' engineering thermal stress approach, which is essentially based on the weakly coupled model of thermomechanics and fracture mechanics. Recently, research has been initiated at MTA EK with the aim of reviewing and evaluating current methodologies and models applied in structural integrity calculations, including their scope of validity. The research intends to come to a better understanding of the physical problems that are inherently present in the pool of structural integrity problems of reactor pressure vessels, and to ultimately find a theoretical framework that could serve as a well

  8. Wafer integrated micro-scale concentrating photovoltaics

    NASA Astrophysics Data System (ADS)

    Gu, Tian; Li, Duanhui; Li, Lan; Jared, Bradley; Keeler, Gordon; Miller, Bill; Sweatt, William; Paap, Scott; Saavedra, Michael; Das, Ujjwal; Hegedus, Steve; Tauke-Pedretti, Anna; Hu, Juejun

    2017-09-01

    Recent development of a novel micro-scale PV/CPV technology is presented. The Wafer Integrated Micro-scale PV approach (WPV) seamlessly integrates multijunction micro-cells with a multi-functional silicon platform that provides optical micro-concentration, hybrid photovoltaics, and mechanical micro-assembly. The wafer-embedded micro-concentrating elements are shown to considerably improve the concentration-acceptance-angle product, potentially leading to dramatically reduced module materials and fabrication costs, sufficient angular tolerance for low-cost trackers, and an ultra-compact optical architecture, which makes the WPV module compatible with commercial flat panel infrastructures. The PV/CPV hybrid architecture further allows the collection of both direct and diffuse sunlight, thus extending the geographic and market domains for cost-effective PV system deployment. The WPV approach can potentially benefit from both the high performance of multijunction cells and the low cost of flat plate Si PV systems.
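    For context on why the concentration-acceptance-angle product matters: étendue conservation bounds an ideal 3D concentrator by C ≤ (n/sin θ)². A minimal sketch of this standard trade-off (the angles below are illustrative, not the WPV design values):

    ```python
    import math

    def max_concentration(theta_deg: float, n: float = 1.0) -> float:
        """Etendue bound C_max = (n / sin(theta))^2 for an ideal 3D
        concentrator in a medium of index n with acceptance half-angle theta."""
        return (n / math.sin(math.radians(theta_deg))) ** 2

    # Widening acceptance from 1 deg to 10 deg lowers the bound by ~100x,
    # which is the trade-off a better concentration-acceptance product relaxes.
    ratio = max_concentration(1.0) / max_concentration(10.0)
    ```

    This is why improving the product simultaneously enables higher concentration and looser (cheaper) tracking tolerances.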

  9. Ultra-Scale Computing for Emergency Evacuation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhaduri, Budhendra L; Nutaro, James J; Liu, Cheng

    2010-01-01

    Emergency evacuations are carried out in anticipation of a disaster, such as hurricane landfall or flooding, and in response to a disaster that strikes without warning. Existing emergency evacuation modeling and simulation tools are primarily designed for evacuation planning and are of limited value in operational support for real-time evacuation management. In order to align with desktop computing, these models reduce the data and computational complexities through simple approximations and representations of real network conditions and traffic behaviors, which rarely represent real-world scenarios. With the emergence of high resolution physiographic, demographic, and socioeconomic data and supercomputing platforms, it is possible to develop micro-simulation based emergency evacuation models that can foster development of novel algorithms for human behavior and traffic assignments, and can simulate evacuation of millions of people over a large geographic area. However, such advances in evacuation modeling and simulation demand computational capacity beyond desktop scales and can be supported by high performance computing platforms. This paper explores the motivation and feasibility of ultra-scale computing for increasing the speed of high resolution emergency evacuation simulations.

  10. 75 FR 24742 - In the Matter of Certain Large Scale Integrated Circuit Semiconductor Chips and Products...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-05

    ... Integrated Circuit Semiconductor Chips and Products Containing Same; Notice of Investigation AGENCY: U.S... of certain large scale integrated circuit semiconductor chips and products containing same by reason... alleges that an industry in the United States exists as required by subsection (a)(2) of section 337. The...

  11. A Methodology for Integrated, Multiregional Life Cycle Assessment Scenarios under Large-Scale Technological Change.

    PubMed

    Gibon, Thomas; Wood, Richard; Arvesen, Anders; Bergesen, Joseph D; Suh, Sangwon; Hertwich, Edgar G

    2015-09-15

    Climate change mitigation demands large-scale technological change on a global level and, if successfully implemented, will significantly affect how products and services are produced and consumed. In order to anticipate the life cycle environmental impacts of products under climate mitigation scenarios, we present the modeling framework of an integrated hybrid life cycle assessment model covering nine world regions. Life cycle assessment databases and multiregional input-output tables are adapted using forecasted changes in technology and resources up to 2050 under a 2 °C scenario. We call the result of this modeling "technology hybridized environmental-economic model with integrated scenarios" (THEMIS). As a case study, we apply THEMIS in an integrated environmental assessment of concentrating solar power. Life-cycle greenhouse gas emissions for this plant range from 33 to 95 g CO2 eq./kWh across different world regions in 2010, falling to 30-87 g CO2 eq./kWh in 2050. Using regional life cycle data yields insightful results. More generally, these results also highlight the need for systematic life cycle frameworks that capture the actual consequences and feedback effects of large-scale policies in the long term.

  12. Measurements of Ultra-fine and Fine Aerosol Particles over Siberia: Large-scale Airborne Campaigns

    NASA Astrophysics Data System (ADS)

    Arshinov, Mikhail; Paris, Jean-Daniel; Stohl, Andreas; Belan, Boris; Ciais, Philippe; Nédélec, Philippe

    2010-05-01

    In this paper we discuss the results of in-situ measurements of ultra-fine and fine aerosol particles carried out in the troposphere from 500 to 7000 m in the framework of several International and Russian State Projects. Number concentrations of ultra-fine and fine aerosol particles measured during intensive airborne campaigns are presented. Measurements carried out over a great part of Siberia were focused on particles with diameters from 3 to 21 nm to study new particle formation in the free/upper troposphere over middle and high latitudes of Asia, which is the most unexplored region of the Northern Hemisphere. Joint International airborne surveys were performed along the following routes: Novosibirsk-Salekhard-Khatanga-Chokurdakh-Pevek-Yakutsk-Mirny-Novosibirsk (YAK-AEROSIB/POLARCAT 2008 Project) and Novosibirsk-Mirny-Yakutsk-Lensk-Bratsk-Novosibirsk (YAK-AEROSIB Project). The flights over Lake Baikal were conducted under a Russian State contract. Concentrations of ultra-fine and fine particles were measured with an automated diffusion battery (ADB, designed by ICKC SB RAS, Novosibirsk, Russia) modified for airborne applications. The airborne ADB coupled with a CPC has an additional aspiration unit to compensate for ambient pressure and changing flow rate. It enabled classification of nanoparticles into three size ranges: 3-6 nm, 6-21 nm, and 21-200 nm. To identify new particle formation events we used specific criteria similar to Young et al. (2007): (1) N3-6 > 10 cm-3, (2) R1 = N3-6/N6-21 > 1, and (3) R2 = N3-21/N21-200 > 0.5. When either ratio R1 or R2 decreases toward these limits, new particle formation is weakened. It is important to note that the spatial scale over which new particle formation was observed is rather large. All the events revealed in the FT occurred under clean air conditions (low CO mixing ratios). Measurements carried out in the atmospheric boundary layer over Lake Baikal did not reveal any event of new particle formation. Concentrations of ultra
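    The three event criteria above can be written as a small classifier. Note that the bin ratios are read here as R1 = N(3-6)/N(6-21) and R2 = N(3-21)/N(21-200), an interpretation of the abstract's run-together subscripts, so treat the exact bin pairings as an assumption:

    ```python
    def npf_event(n3_6: float, n6_21: float, n21_200: float) -> bool:
        """Classify an observation as a new-particle-formation (NPF) event.

        Inputs are number concentrations (cm^-3) in the ADB's three size
        ranges: 3-6 nm, 6-21 nm and 21-200 nm.
        """
        if n3_6 <= 10.0:                      # criterion 1: fresh nucleation mode present
            return False
        r1 = n3_6 / n6_21 if n6_21 > 0 else float("inf")
        # N(3-21) is taken as the sum of the two smallest bins (assumption)
        r2 = (n3_6 + n6_21) / n21_200 if n21_200 > 0 else float("inf")
        return r1 > 1.0 and r2 > 0.5          # criteria 2 and 3

    # A burst of 3-6 nm particles relative to the larger modes qualifies:
    assert npf_event(50.0, 20.0, 30.0)
    assert not npf_event(5.0, 20.0, 30.0)
    ```

    As the abstract notes, an event weakens as either ratio approaches its threshold.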

  13. Searching for Ultra-cool Objects at the Limits of Large-scale Surveys

    NASA Astrophysics Data System (ADS)

    Pinfield, D. J.; Patel, K.; Zhang, Z.; Gomes, J.; Burningham, B.; Day-Jones, A. C.; Jenkins, J.

    2011-12-01

    We have made a search (to Y=19.6) of the UKIDSS Large Area Survey (LAS DR7) for objects detected only in the Y-band. We have identified and removed contamination due to solar system objects, dust specks in the WFCAM optical path, persistence in the WFCAM detectors, and other sources of spurious single-source Y-detections in the UKIDSS LAS database. In addition to our automated selection procedure we have visually inspected the ˜600 automatically selected candidates to provide an additional level of quality filtering. This has resulted in 55 good candidates that await follow-up observations to confirm their nature. Ultra-cool LAS Y-only objects would have blue Y-J colours combined with very red optical-NIR SEDs - characteristics shared by Jupiter, and suggested by an extrapolation of the Y-J colour trend seen for the latest T dwarfs currently known.

  14. Report on phase 1 of the Microprocessor Seminar. [and associated large scale integration

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Proceedings of a seminar on microprocessors and associated large scale integrated (LSI) circuits are presented. The potential for commonality of device requirements, candidate processes and mechanisms for qualifying candidate LSI technologies for high reliability applications, and specifications for testing and testability were among the topics discussed. Various programs and tentative plans of the participating organizations in the development of high reliability LSI circuits are given.

  15. Van der Waals epitaxial growth and optoelectronics of large-scale WSe2/SnS2 vertical bilayer p-n junctions.

    PubMed

    Yang, Tiefeng; Zheng, Biyuan; Wang, Zhen; Xu, Tao; Pan, Chen; Zou, Juan; Zhang, Xuehong; Qi, Zhaoyang; Liu, Hongjun; Feng, Yexin; Hu, Weida; Miao, Feng; Sun, Litao; Duan, Xiangfeng; Pan, Anlian

    2017-12-04

    High-quality two-dimensional atomic layered p-n heterostructures are essential for high-performance integrated optoelectronics. The studies to date have been largely limited to exfoliated and restacked flakes, and the controlled growth of such heterostructures remains a significant challenge. Here we report the direct van der Waals epitaxial growth of large-scale WSe2/SnS2 vertical bilayer p-n junctions on SiO2/Si substrates, with lateral sizes reaching up to the millimeter scale. Multi-electrode field-effect transistors have been integrated on a single heterostructure bilayer. Electrical transport measurements indicate that the field-effect transistors of the junction show an ultra-low off-state leakage current of 10^-14 A and a highest on-off ratio of up to 10^7. Optoelectronic characterizations show prominent photoresponse, with a fast response time of 500 μs, faster than all directly grown vertical 2D heterostructures. The direct growth of high-quality van der Waals junctions marks an important step toward high-performance integrated optoelectronic devices and systems.

  16. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    NASA Astrophysics Data System (ADS)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

    Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately, and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means to ensure the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to manage their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It permits defining the behaviour of several components assembled to process a flow of data, using BIT. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
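    A minimal sketch of the virtual-component idea, using hypothetical components rather than the authors' framework: several data-flow stages are assembled into one composite unit that carries its own built-in integration test case.

    ```python
    from typing import Callable, List

    Component = Callable[[float], float]

    def virtual_component(parts: List[Component]) -> Component:
        """Assemble data-flow components into one testable composite:
        each stage feeds its output to the next, as in the data-flow style."""
        def composed(value: float) -> float:
            for part in parts:
                value = part(value)
            return value
        return composed

    # Hypothetical stages of a flow
    scale = lambda x: 2.0 * x
    shift = lambda x: x + 1.0

    vc = virtual_component([scale, shift])

    # Built-in integration test case on the assembled flow:
    assert vc(3.0) == 7.0
    ```

    The composite behaves like a single component, so the same BIT machinery that tests one block can test the whole flow.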

  17. A Simplified Baseband Prefilter Model with Adaptive Kalman Filter for Ultra-Tight COMPASS/INS Integration

    PubMed Central

    Luo, Yong; Wu, Wenqi; Babu, Ravindra; Tang, Kanghua; Luo, Bing

    2012-01-01

    COMPASS is an indigenously developed Chinese global navigation satellite system and will share many features in common with GPS (Global Positioning System). Since ultra-tight GPS/INS (Inertial Navigation System) integration shows its advantage over independent GPS receivers in many scenarios, the federated ultra-tight COMPASS/INS integration has been investigated in this paper, in particular by proposing a simplified prefilter model. Compared with a traditional prefilter model, the state space of this simplified system contains only carrier phase, carrier frequency and carrier frequency rate tracking errors. A two-quadrant arctangent discriminator output is used as a measurement. Since the code tracking error related parameters were excluded from the state space of traditional prefilter models, the code/carrier divergence would destroy the carrier tracking process; therefore, an adaptive Kalman filter algorithm that tunes the process noise covariance matrix based on the state correction sequence was incorporated to compensate for the divergence. The federated ultra-tight COMPASS/INS integration was implemented with a hardware COMPASS intermediate frequency (IF) and INS accelerometer and gyroscope signal sampling system. Field and simulation test results showed almost similar tracking and navigation performances for both the traditional prefilter model and the proposed system; however, the latter largely decreased the computational load. PMID:23012564
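    The adaptive step can be sketched with a scalar toy filter (all values hypothetical; the real prefilter state holds carrier phase, frequency and frequency rate). The process noise is re-estimated from the recent state-correction sequence, inflating Q when corrections grow:

    ```python
    import numpy as np

    def adaptive_kf(measurements, q0=1e-4, r=1e-2, window=10):
        """Scalar Kalman filter whose process noise q adapts to the
        empirical variance of the last `window` state corrections."""
        x, p, q = 0.0, 1.0, q0
        corrections = []
        for z in measurements:
            p += q                      # predict (random-walk state model)
            k = p / (p + r)             # Kalman gain
            dx = k * (z - x)            # state correction
            x += dx
            p *= 1.0 - k
            corrections.append(dx)
            if len(corrections) >= window:
                # the state-correction sequence drives the process noise
                q = max(q0, np.var(corrections[-window:]))
        return x

    estimate = adaptive_kf(np.full(50, 1.0))   # converges toward the constant signal
    ```

    When the corrections stay small the filter relaxes back to the nominal q0, mimicking how the proposed scheme compensates for code/carrier divergence only when it appears.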

  18. Integrated fringe projection 3D scanning system for large-scale metrology based on laser tracker

    NASA Astrophysics Data System (ADS)

    Du, Hui; Chen, Xiaobo; Zhou, Dan; Guo, Gen; Xi, Juntong

    2017-10-01

    Large-scale components exist widely in the advanced manufacturing industry, where 3D profilometry plays a pivotal role in quality control. This paper proposes a flexible, robust large-scale 3D scanning system that integrates a robot with a binocular structured light scanner and a laser tracker. The measurement principle and system construction of the integrated system are introduced, and a mathematical model is established for the global data fusion. Subsequently, a flexible and robust method and mechanism are introduced for the establishment of the end coordinate system. Based on this method, a virtual robot noumenon is constructed for hand-eye calibration, and the transformation matrix between the end coordinate system and the world coordinate system is solved. A validation experiment was implemented to verify the proposed algorithms. First, the hand-eye transformation matrix was solved. Then a car body rear was measured 16 times to verify the global data fusion algorithm, and the 3D shape of the rear was reconstructed successfully.

  19. Ultra Compact Optical Pickup with Integrated Optical System

    NASA Astrophysics Data System (ADS)

    Nakata, Hideki; Nagata, Takayuki; Tomita, Hironori

    2006-08-01

    Smaller and thinner optical pickups are needed for portable audio-visual (AV) products and notebook personal computers (PCs). We have newly developed an ultra compact recordable optical pickup for Mini Disc (MD) that measures less than 4 mm from the disc surface to the bottom of the optical pickup, making the optical system markedly compact. We have integrated all the optical components into an objective lens actuator moving unit, while fully satisfying recording and playback performance requirements. In this paper, we propose an ultra compact optical pickup applicable to portable MD recorders.

  20. Hierarchical hybrid control of manipulators: Artificial intelligence in large scale integrated circuits

    NASA Technical Reports Server (NTRS)

    Greene, P. H.

    1972-01-01

    Both in practical engineering and in control of muscular systems, low level subsystems automatically provide crude approximations to the proper response. Through low level tuning of these approximations, the proper response variant can emerge from standardized high level commands. Such systems are expressly suited to emerging large scale integrated circuit technology. A computer, using symbolic descriptions of subsystem responses, can select and shape responses of low level digital or analog microcircuits. A mathematical theory that reveals significant informational units in this style of control and software for realizing such information structures are formulated.

  1. Large Scale Screening of Low Cost Ferritic Steel Designs For Advanced Ultra Supercritical Boiler Using First Principles Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ouyang, Lizhi

    Advanced Ultra Supercritical Boiler (AUSC) technology requires materials that can operate in a corrosive environment at temperatures and pressures as high as 760°C (or 1400°F) and 5000 psi, respectively, while maintaining good ductility at low temperature. We develop automated simulation software tools to enable fast, large-scale screening studies of candidate designs. While direct evaluation of creep rupture strength and ductility is currently not feasible, properties such as energy, elastic constants, surface energy, interface energy, and stacking fault energy can be used to assess relative ductility and creep strength. We implemented software to automate the complex calculations to minimize human input in the tedious screening studies, which involve model structure generation, settings for first principles calculations, results analysis and reporting. The software developed in the project, along with the library of computed mechanical properties of phases found in ferritic steels (many of them complex solid solutions estimated for the first time), will certainly help the development of low cost ferritic steel for AUSC.

  2. Optimal Wind Energy Integration in Large-Scale Electric Grids

    NASA Astrophysics Data System (ADS)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates challenges which affect electric grid reliability and economic operations: 1. congestion of transmission lines, 2. transmission line expansion, 3. large-scale wind energy integration, and 4. optimal placement of Phasor Measurement Units (PMUs) for the highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, expansion of transmission line capacity must be evaluated with methods that ensure optimal electric grid operation: the expansion must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "where" to add, "how much transmission line capacity" to add, and "at which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding new transmission capacity will help the system relieve transmission congestion, create

  3. A topology visualization early warning distribution algorithm for large-scale network security incidents.

    PubMed

    He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    Researching early warning systems for large-scale network security incidents is of great significance: such a system can improve the network's emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithm and technology of the system are mainly discussed. Plane visualization of the large-scale network system is realized based on divide-and-conquer. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, a subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into one topology based on the automatic distribution algorithm of force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topologies.
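    The divide-and-conquer pipeline (partition, lay out each subgraph, then distribute the sub-layouts) can be sketched as follows. This toy uses a circle-per-partition layout and a grid distribution rather than the MLkP/CR and force-analysis algorithms of the paper:

    ```python
    import math

    # Hedged sketch of the divide-and-conquer layout idea: each partition of
    # nodes gets its own small circular layout, and the sub-layouts are then
    # distributed on a coarse grid, so partitions can be laid out in parallel.
    def layout(partitions, spacing=10.0, radius=3.0):
        positions = {}
        cols = math.ceil(math.sqrt(len(partitions)))
        for i, part in enumerate(partitions):
            cx, cy = (i % cols) * spacing, (i // cols) * spacing  # grid cell
            for j, node in enumerate(part):
                angle = 2 * math.pi * j / len(part)
                positions[node] = (cx + radius * math.cos(angle),
                                   cy + radius * math.sin(angle))
        return positions

    pos = layout([["a", "b", "c"], ["d", "e"], ["f"]])
    ```

    Because each partition is processed independently, the per-partition step parallelizes trivially, which is the property the paper exploits for ultra-large topologies.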

  4. Neural networks supporting audiovisual integration for speech: A large-scale lesion study.

    PubMed

    Hickok, Gregory; Rogalsky, Corianne; Matchin, William; Basilakos, Alexandra; Cai, Julia; Pillay, Sara; Ferrill, Michelle; Mickelsen, Soren; Anderson, Steven W; Love, Tracy; Binder, Jeffrey; Fridriksson, Julius

    2018-06-01

    Auditory and visual speech information are often strongly integrated, resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched (the McGurk-MacDonald effect). Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure (auditory vs visual capture) can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor-related frontal regions do not appear to play a role in AV speech integration. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. An integrated model for assessing both crop productivity and agricultural water resources at a large scale

    NASA Astrophysics Data System (ADS)

    Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2012-12-01

    Agricultural production utilizes regional resources (e.g. river water and groundwater) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate changes and increasing demand due to population increases and economic developments would intensely affect the availability of water resources for agricultural production. While many studies have assessed the impacts of climate change on agriculture, there are few studies that dynamically account for changes in water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Moreover, irrigation management in response to subseasonal variability in weather and crop response varies for each region and each crop. To deal with such variations, we used the Markov Chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimations. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consists of five sub-models for the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model was based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoffs simulated by the land surface sub-model were input to the river routing sub-model of the H08 model. A part of the regional water resources available for agriculture, simulated by the H08 model, was input as irrigation water to the land surface sub-model. The timing and amount of irrigation water were simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed. Additionally, the model accurately reproduced the trends and interannual variations of crop yields. 
To demonstrate the usefulness of the integrated model, we compared two types of impact assessment of

  6. Ultra-large suspended graphene as a highly elastic membrane for capacitive pressure sensors

    NASA Astrophysics Data System (ADS)

    Chen, Yu-Min; He, Shih-Ming; Huang, Chi-Hsien; Huang, Cheng-Chun; Shih, Wen-Pin; Chu, Chun-Lin; Kong, Jing; Li, Ju; Su, Ching-Yuan

    2016-02-01

    In this work, we fabricate ultra-large suspended graphene membranes, where stacks of a few layers of graphene could be suspended over a circular hole with a diameter of up to 1.5 mm, with a diameter-to-thickness aspect ratio of 3 × 10^5, which is the record for free-standing graphene membranes. The process is based on large crystalline graphene (~55 μm) obtained using a chemical vapor deposition (CVD) method, followed by a gradual solvent replacement technique. Combining a hydrogen bubbling transfer approach with thermal annealing to reduce polymer residue results in an extremely clean surface, where the ultra-large suspended graphene retains the intrinsic features of graphene, including phonon response and an enhanced carrier mobility (200% higher than that of graphene on a substrate). The highly elastic mechanical properties of the graphene membrane are demonstrated, and the Q-factor under 2 MHz stimulation is measured to be 200-300. A graphene-based capacitive pressure sensor is fabricated, where it shows a linear response and a high sensitivity of 15.15 aF Pa^-1, which is 770% higher than that of frequently used silicon-based membranes. The reported approach is universal and could be employed to fabricate other suspended 2D materials with macro-scale sizes on versatile support substrates, such as arrays of Si nano-pillars and deep trenches.

  7. Ultra-large scale AFM of lipid droplet arrays: investigating the ink transfer volume in dip pen nanolithography.

    PubMed

    Förste, Alexander; Pfirrmann, Marco; Sachs, Johannes; Gröger, Roland; Walheim, Stefan; Brinkmann, Falko; Hirtz, Michael; Fuchs, Harald; Schimmel, Thomas

    2015-05-01

    There are only a few quantitative studies of the writing process in dip-pen nanolithography with lipids. Lipids are important carrier ink molecules for the delivery of bio-functional patterns in bio-nanotechnology. In order to better understand and control the writing process, more information on the transfer of lipid material from the tip to the substrate is needed. The dependence of the transferred ink volume on the dwell time of the tip on the substrate was investigated by topography measurements with an atomic force microscope (AFM) that is characterized by an ultra-large scan range of 800 × 800 μm². For this purpose, arrays of dots of the phospholipid 1,2-dioleoyl-sn-glycero-3-phosphocholine were written onto planar glass substrates and the resulting pattern was imaged by large-scan-area AFM. Two writing regimes were identified, characterized by either a steady decline or a constant ink-volume transfer per dot feature. For the steady-state ink transfer, a linear relationship between the dwell time and the dot volume was determined, which is characterized by a flow rate of about 16 femtoliters per second. A dependence of the ink transport on the length of pauses before and in between writing the structures was observed and should be taken into account during pattern design when aiming at the best writing homogeneity. The ultra-large scan range of the utilized AFM allowed for a simultaneous study of the entire preparation area of almost 1 mm², yielding good statistics.
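    The steady-state regime reported above implies simple arithmetic: dot volume grows linearly with dwell time at the quoted ~16 fL/s. The intercept term below is a hypothetical free parameter, not a value from the study:

    ```python
    FLOW_RATE_FL_PER_S = 16.0  # steady-state flow rate quoted in the abstract (fL/s)

    def dot_volume_fl(dwell_time_s: float, offset_fl: float = 0.0) -> float:
        """Linear dwell-time model V = V0 + Q * t (V0 is hypothetical)."""
        return offset_fl + FLOW_RATE_FL_PER_S * dwell_time_s

    # A 0.5 s dwell transfers about 8 fL of lipid ink at the quoted rate.
    half_second_volume = dot_volume_fl(0.5)
    ```

    Such a model applies only in the steady-state regime; the abstract's other regime (a steady decline per dot) and the pause-length dependence would need extra terms.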

  8. Ultra-large scale AFM of lipid droplet arrays: investigating the ink transfer volume in dip pen nanolithography

    NASA Astrophysics Data System (ADS)

    Förste, Alexander; Pfirrmann, Marco; Sachs, Johannes; Gröger, Roland; Walheim, Stefan; Brinkmann, Falko; Hirtz, Michael; Fuchs, Harald; Schimmel, Thomas

    2015-05-01

    There are only a few quantitative studies of the writing process in dip-pen nanolithography with lipids. Lipids are important carrier-ink molecules for the delivery of bio-functional patterns in bio-nanotechnology. In order to better understand and control the writing process, more information on the transfer of lipid material from the tip to the substrate is needed. The dependence of the transferred ink volume on the dwell time of the tip on the substrate was investigated by topography measurements with an atomic force microscope (AFM) characterized by an ultra-large scan range of 800 × 800 μm^2. For this purpose, arrays of dots of the phospholipid 1,2-dioleoyl-sn-glycero-3-phosphocholine were written onto planar glass substrates and the resulting pattern was imaged by large-scan-area AFM. Two writing regimes were identified, characterized by either a steady decline or a constant ink-volume transfer per dot feature. For the steady-state ink transfer, a linear relationship between the dwell time and the dot volume was determined, characterized by a flow rate of about 16 femtoliters per second. A dependence of the ink transport on the length of pauses before and between writing the structures was observed and should be taken into account during pattern design when aiming at the best writing homogeneity. The ultra-large scan range of the AFM allowed a simultaneous study of the entire preparation area of almost 1 mm^2, yielding good statistics.

  9. SMUVS: Spitzer Matching survey of the UltraVISTA ultra-deep Stripes

    NASA Astrophysics Data System (ADS)

    Caputi, Karina; Ashby, Matthew; Fazio, Giovanni; Huang, Jiasheng; Dunlop, James; Franx, Marijn; Le Fevre, Olivier; Fynbo, Johan; McCracken, Henry; Milvang-Jensen, Bo; Muzzin, Adam; Ilbert, Olivier; Somerville, Rachel; Wechsler, Risa; Behroozi, Peter; Lu, Yu

    2014-12-01

    We request 2026.5 hours to homogenize the matching ultra-deep IRAC data of the UltraVISTA ultra-deep stripes, producing a final area of ~0.6 square degrees with the deepest near- and mid-IR coverage existing in any such large area of the sky (H, Ks, [3.6], [4.5] ~ 25.3-26.1 AB mag; 5 sigma). The UltraVISTA ultra-deep stripes are contained within the larger COSMOS field, which has a rich collection of multi-wavelength, ancillary data, making it ideal for studying different aspects of galaxy evolution with high statistical significance and excellent redshift accuracy. The UltraVISTA ultra-deep stripes are the region of the COSMOS field where these studies can be pushed to the highest redshifts, but securely identifying high-z galaxies, and determining their stellar masses, will only be possible if ultra-deep mid-IR data are available. Our IRAC observations will allow us to: 1) extend the galaxy stellar mass function at redshifts z=3 to z=5 to the intermediate mass regime (M~5x10^9-10^10 Msun), which is critical to constrain galaxy formation models; 2) gain a factor of six in the area where it is possible to effectively search for z>=6 galaxies and study their properties; and 3) measure, for the first time, the large-scale structure traced by an unbiased galaxy sample at z=5 to z=7, and make the link to their host dark matter haloes. This cannot be done in any other field of the sky, as the UltraVISTA ultra-deep stripes form a quasi-contiguous, regular-shaped field with a unique combination of large area and photometric depth. The data will also 4) provide a unique resource for the selection of secure z>5 targets for JWST and ALMA follow-up. Our observations will have an enormous legacy value which amply justifies this new observing-time investment in the COSMOS field. Spitzer cannot miss this unique opportunity to open up a large 0.6 square-degree window to the early Universe.

  10. Large-scale electrophysiology: acquisition, compression, encryption, and storage of big data.

    PubMed

    Brinkmann, Benjamin H; Bower, Mark R; Stengel, Keith A; Worrell, Gregory A; Stead, Matt

    2009-05-30

    The use of large-scale electrophysiology to obtain high-spatiotemporal-resolution brain recordings (>100 channels) capable of probing the range of neural activity from local field potential oscillations to single-neuron action potentials presents new challenges for data acquisition, storage, and analysis. Our group is currently performing continuous, long-term electrophysiological recordings in human subjects undergoing evaluation for epilepsy surgery, using hybrid intracranial electrodes composed of up to 320 micro- and clinical macroelectrode arrays. DC-capable amplifiers, sampling at 32 kHz per channel with 18 bits of A/D resolution, are capable of resolving extracellular voltages spanning single-neuron action potentials, high-frequency oscillations, and high-amplitude ultra-slow activity, but this approach generates 3 terabytes of data per day (at 4 bytes per sample) using current data formats. Data compression can provide several practical benefits, but only if data can be compressed and appended to files in real time in a format that allows random access to data segments of varying size. Here we describe a state-of-the-art, scalable electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data. Data are stored in a file format that incorporates lossless data compression using range-encoded differences, a 32-bit cyclic redundancy check to ensure data integrity, and 128-bit encryption for protection of patient information.
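    The acquisition figures above can be checked with simple arithmetic, and the storage idea (differences plus a 32-bit integrity check) can be sketched in a few lines. This is an illustration of the principle only, not the actual file format: real range encoding of the differences and the 128-bit encryption layer are omitted here.

```python
import zlib

# Back-of-envelope check of the acquisition rate described in the abstract.
channels, fs_hz, bytes_per_sample = 320, 32_000, 4
bytes_per_day = channels * fs_hz * bytes_per_sample * 86_400
print(f"{bytes_per_day / 1e12:.2f} TB/day")  # ~3.5 TB/day, consistent with the ~3 TB figure

# Sketch of the storage idea: store successive samples as differences (small,
# highly compressible values) and guard each block with a 32-bit CRC.
def encode_block(samples):
    diffs = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    payload = b"".join(d.to_bytes(4, "little", signed=True) for d in diffs)
    return payload, zlib.crc32(payload)

def decode_block(payload, crc):
    if zlib.crc32(payload) != crc:
        raise ValueError("data integrity check failed")
    diffs = [int.from_bytes(payload[i:i + 4], "little", signed=True)
             for i in range(0, len(payload), 4)]
    samples = [diffs[0]]
    for d in diffs[1:]:
        samples.append(samples[-1] + d)
    return samples

payload, crc = encode_block([100, 103, 101, 99])
assert decode_block(payload, crc) == [100, 103, 101, 99]
```

    The round trip shows why random access to blocks is practical: each block carries its own CRC and can be decoded independently of the rest of the file.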

  11. Multi-format all-optical processing based on a large-scale, hybridly integrated photonic circuit.

    PubMed

    Bougioukos, M; Kouloumentas, Ch; Spyropoulou, M; Giannoulis, G; Kalavrouziotis, D; Maziotis, A; Bakopoulos, P; Harmon, R; Rogers, D; Harrison, J; Poustie, A; Maxwell, G; Avramopoulos, H

    2011-06-06

    We investigate, through numerical studies and experiments, the performance of a large-scale, silica-on-silicon photonic integrated circuit for multi-format regeneration and wavelength conversion. The circuit encompasses a monolithically integrated array of four SOAs inside two parallel Mach-Zehnder structures, four delay interferometers and a large number of silica waveguides and couplers. Exploiting phase-incoherent techniques, the circuit is capable of processing OOK signals at variable bit rates, DPSK signals at 22 or 44 Gb/s and DQPSK signals at 44 Gbaud. Simulation studies reveal the wavelength-conversion potential of the circuit, with enhanced regenerative capabilities for the OOK and DPSK modulation formats and acceptable quality degradation for the DQPSK format. Regeneration of 22 Gb/s OOK signals with amplified spontaneous emission (ASE) noise, and of DPSK data signals degraded with amplitude, phase and ASE noise, is experimentally validated, demonstrating a power penalty improvement of up to 1.5 dB.

  12. Femtosecond laser machining and lamination for large-area flexible organic microfluidic chips

    NASA Astrophysics Data System (ADS)

    Malek, C. Khan; Robert, L.; Salut, R.

    2009-04-01

    A hybrid process compatible with reel-to-reel manufacturing is developed for ultra-low-cost, large-scale manufacture of disposable microfluidic chips. It combines ultra-short-pulse laser microstructuring and lamination technology. Microchannels in polyester foils were formed using focused, high-intensity femtosecond laser pulses. Lamination using a commercial SU8-epoxy resist layer was used to seal the microchannel layer to the cover foil. This hybrid process also enables heterogeneous material structuring and integration.

  13. Real-time adaptive ramp metering : phase I, MILOS proof of concept (multi-objective, integrated, large-scale, optimized system).

    DOT National Transportation Integrated Search

    2006-12-01

    Over the last several years, researchers at the University of Arizona's ATLAS Center have developed an adaptive ramp metering system referred to as MILOS (Multi-Objective, Integrated, Large-Scale, Optimized System). The goal of this project is ...

  14. Flexible, transparent and ultra-broadband photodetector based on large-area WSe2 film for wearable devices

    NASA Astrophysics Data System (ADS)

    Zheng, Zhaoqiang; Zhang, Tanmei; Yao, Jiandong; Zhang, Yi; Xu, Jiarui; Yang, Guowei

    2016-06-01

    Although two-dimensional (2D) materials have attracted considerable research interest for the development of innovative wearable optoelectronic systems, the integrated optoelectronic performance of 2D-material photodetectors, including flexibility, transparency, broadband response and stability in air, remains quite low to date. Here, we demonstrate a flexible, transparent, high-stability and ultra-broadband photodetector made using large-area and highly crystalline WSe2 films prepared by pulsed-laser deposition (PLD). Benefiting from the 2D physics of WSe2 films, this device exhibits an excellent average transparency of 72% in the visible range and superior photoresponse characteristics, including an ultra-broadband detection spectral range from 370 to 1064 nm, reversible photoresponsivity approaching 0.92 A W^-1, external quantum efficiency of up to 180% and a relatively fast response time of 0.9 s. The fabricated photodetector also demonstrates outstanding mechanical flexibility and durability in air. Moreover, because of the wide compatibility of the PLD-grown WSe2 film, we can fabricate photodetectors on multiple flexible or rigid substrates, all of which exhibit distinctive switching behavior and superior responsivity. These results indicate a new strategy for the design and integration of flexible, transparent and broadband photodetectors based on large-area WSe2 films, with great potential for practical applications in wearable optoelectronic devices.
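    The quoted responsivity and external quantum efficiency are linked by the standard relation EQE = R·hc/(eλ). A quick consistency check, under the assumption (ours, not the abstract's) that the peak responsivity is reached near 635 nm:

```python
# External quantum efficiency from responsivity: EQE = R * h * c / (e * lambda).
# The 635 nm wavelength below is an assumed operating point for illustration.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
E = 1.602176634e-19  # elementary charge, C

def eqe(responsivity_a_per_w, wavelength_nm):
    """External quantum efficiency (dimensionless) from responsivity in A/W."""
    return responsivity_a_per_w * H * C / (E * wavelength_nm * 1e-9)

# R = 0.92 A/W near 635 nm gives EQE of roughly 1.8, i.e. ~180% as quoted.
print(f"EQE = {eqe(0.92, 635):.0%}")
```

    An EQE above 100% indicates internal gain in the photodetector, i.e. more than one carrier collected per incident photon.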

  15. Skin Friction Reduction Through Large-Scale Forcing

    NASA Astrophysics Data System (ADS)

    Bhatt, Shibani; Artham, Sravan; Gnanamanickam, Ebenezer

    2017-11-01

    Flow structures in a turbulent boundary layer larger than an integral length scale (δ), referred to as large scales, interact with the finer scales in a non-linear manner. By targeting these large scales and exploiting this non-linear interaction, wall shear stress (WSS) reductions of over 10% have been achieved. The plane wall jet (PWJ), a boundary layer with highly energetic large scales that become turbulent independently of the near-wall finer scales, is the chosen model flow field. Its unique configuration allows independent control of the large scales through acoustic forcing. Perturbation wavelengths from about 1 δ to 14 δ were considered, with a reduction in WSS for all wavelengths considered. This reduction, over a large subset of the wavelengths, scales with both inner and outer variables, indicating a mixed scaling of the underlying physics, while also showing dependence on the PWJ global properties. A triple decomposition of the velocity fields shows an increase in coherence due to forcing, with a clear organization of the small-scale turbulence with respect to the introduced large scale. The maximum reduction in WSS occurs when the introduced large scale acts so as to reduce the turbulent activity in the very-near-wall region. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-16-1-0194, monitored by Dr. Douglas Smith.

  16. On-Chip Integrated Distributed Amplifier and Antenna Systems in SiGe BiCMOS for Transceivers with Ultra-Large Bandwidth

    NASA Astrophysics Data System (ADS)

    Valerio Testa, Paolo; Klein, Bernhard; Hahnel, Ronny; Plettemeier, Dirk; Carta, Corrado; Ellinger, Frank

    2017-09-01

    This paper presents an overview of the research work currently being performed within the frame of the project DAAB and its successor DAAB-TX towards the integration of ultra-wideband transceivers operating at mm-wave frequencies and capable of data rates up to 100 Gbit/s. Two basic system architectures are being considered: integrating a broadband antenna with a distributed amplifier, and integrating antennas centered at adjacent frequencies with broadband active combiners or dividers. The paper discusses in detail the design of such systems and their components, from the distributed amplifiers and combiners to the broadband silicon antennas and their single-chip integration. All components are designed for fabrication in a commercially available SiGe:C BiCMOS technology. The presented results represent the state of the art in their respective areas: 170 GHz is the highest reported bandwidth for distributed amplifiers integrated in silicon; 89 GHz is the widest reported bandwidth for integrated-system antennas; and the simulated performance of the two-antenna integrated receiver spans 105 GHz centered at 148 GHz, which would improve the state of the art by a factor in excess of 4, even against III-V implementations, if confirmed by measurements.

  17. Why self-catalyzed nanowires are most suitable for large-scale hierarchical integrated designs of nanowire nanoelectronics

    NASA Astrophysics Data System (ADS)

    Noor Mohammad, S.

    2011-10-01

    Nanowires are grown by a variety of mechanisms, including vapor-liquid-solid, vapor-quasiliquid-solid or vapor-quasisolid-solid, oxide-assisted growth, and self-catalytic growth (SCG) mechanisms. A critical analysis of the suitability of self-catalyzed nanowires, as compared to other nanowires, for next-generation technology development has been carried out. The basic causes of the superiority of self-catalyzed (SCG) nanowires over other nanowires are described. Polytypism in nanowires has been studied, and a model for polytypism has been proposed. The model predicts polytypism in good agreement with available experiments. This model, together with various pieces of evidence, demonstrates lower densities of defects, dislocations, and stacking faults in SCG nanowires than in other nanowires. Calculations of carrier mobility limited by dislocation scattering, ionized impurity scattering, and acoustic phonon scattering explain the impact of defects, dislocations, and stacking faults on carrier transport in SCG and other nanowires. Analyses of the mechanisms governing nanowire growth directions indicate that SCG nanowires exhibit the most controlled growth directions. In-depth investigation uncovers the fundamental physics underlying the control of growth direction by the SCG mechanism. Self-organization of nanowires in large hierarchical arrays is crucial for ultra-large-scale integration (ULSI). The unique features and advantages of self-organized SCG nanowires for ULSI, which other nanowires lack, are discussed. Investigations of nanowire dimensions indicate that self-catalyzed nanowires offer better control of dimension, higher stability, and higher probability, even for thinner structures. Theoretical calculations show that self-catalyzed nanowires, unlike catalyst-mediated nanowires, can have higher growth rates and lower growth temperatures. Nanowire and nanotube characteristics have also been found to dictate the performance of nanoelectromechanical systems. Defects, such as

  18. Small-scale monitoring - can it be integrated with large-scale programs?

    Treesearch

    C. M. Downes; J. Bart; B. T. Collins; B. Craig; B. Dale; E. H. Dunn; C. M. Francis; S. Woodley; P. Zorn

    2005-01-01

    There are dozens of programs and methodologies for monitoring and inventory of bird populations, differing in geographic scope, species focus, field methods and purpose. However, most of the emphasis has been placed on large-scale monitoring programs. People interested in assessing bird numbers and long-term trends in small geographic areas such as a local birding area...

  19. Ultra-wideband WDM VCSEL arrays by lateral heterogeneous integration

    NASA Astrophysics Data System (ADS)

    Geske, Jon

    Advancements in heterogeneous integration are a driving factor in the development of ever more sophisticated and functional electronic and photonic devices. Such advancements will merge the optical and electronic capabilities of different material systems onto a common integrated device platform. This thesis presents a new lateral heterogeneous integration technology called nonplanar wafer bonding. The technique is capable of integrating multiple dissimilar semiconductor device structures on the surface of a substrate in a single wafer-bond step, leaving different integrated device structures adjacent to each other on the wafer surface. Material characterization and numerical simulations confirm that the material quality is not compromised during the process. Nonplanar wafer bonding is used to fabricate ultra-wideband wavelength division multiplexed (WDM) vertical-cavity surface-emitting laser (VCSEL) arrays. The optically-pumped VCSEL arrays span 140 nm from 1470 to 1610 nm, a record wavelength span for devices operating in this wavelength range. The array uses eight wavelength channels to span the 140 nm, with all channels separated by precisely 20 nm. All channels in the array operate single mode to at least 65°C with an output power uniformity of +/- 1 dB. The ultra-wideband WDM VCSEL arrays are a significant first step toward the development of a single-chip source for optical networks based on coarse WDM (CWDM), a low-cost alternative to traditional dense WDM. The CWDM VCSEL arrays make use of fully-oxidized distributed Bragg reflectors (DBRs) to provide the wideband reflectivity required for optical feedback and lasing across 140 nm. In addition, a novel optically-pumped active-region design is presented. It is demonstrated, with an analytical model and experimental results, that the new active-region design significantly improves the carrier uniformity in the quantum wells and results in a 50% lasing threshold reduction and a 20°C improvement in the peak

  20. PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-04-26

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full-scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste; therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with the development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives, including the properties needed to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump-out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high-temperature operation. The slurry properties that are most important to performance testing and scaling depend on the test objective and rheological classification of the slurry (i

  1. Vertical resonant tunneling transistors with molecular quantum dots for large-scale integration.

    PubMed

    Hayakawa, Ryoma; Chikyow, Toyohiro; Wakayama, Yutaka

    2017-08-10

    Quantum molecular devices have the potential to enable new data-processing architectures that cannot be achieved using current complementary metal-oxide-semiconductor (CMOS) technology. The relevant basic quantum transport properties have been examined by specific methods such as scanning-probe and break-junction techniques. However, these methodologies are not compatible with current CMOS applications, and the development of practical molecular devices remains a persistent challenge. Here, we demonstrate a new vertical resonant tunneling transistor for large-scale integration. The transistor channel comprises a MOS structure with C60 molecules as quantum dots, and the structure behaves like a double tunnel junction. Notably, the transistors enabled the observation of stepwise drain currents, which originated from resonant tunneling via the discrete molecular orbitals. Applying side-gate voltages produced depletion layers in the Si substrates, achieving effective modulation of the drain currents and obvious peak shifts in the differential conductance curves. Our device configuration thus provides a promising means of integrating molecular functions into future CMOS applications.

  2. The Plant Phenology Ontology: A New Informatics Resource for Large-Scale Integration of Plant Phenology Data.

    PubMed

    Stucky, Brian J; Guralnick, Rob; Deck, John; Denny, Ellen G; Bolmgren, Kjell; Walls, Ramona

    2018-01-01

    Plant phenology - the timing of plant life-cycle events, such as flowering or leafing out - plays a fundamental role in the functioning of terrestrial ecosystems, including human agricultural systems. Because plant phenology is often linked with climatic variables, there is widespread interest in developing a deeper understanding of global plant phenology patterns and trends. Although phenology data from around the world are currently available, truly global analyses of plant phenology have so far been difficult because the organizations producing large-scale phenology data are using non-standardized terminologies and metrics during data collection and data processing. To address this problem, we have developed the Plant Phenology Ontology (PPO). The PPO provides the standardized vocabulary and semantic framework that is needed for large-scale integration of heterogeneous plant phenology data. Here, we describe the PPO, and we also report preliminary results of using the PPO and a new data processing pipeline to build a large dataset of phenology information from North America and Europe.
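    A standardized vocabulary turns integration of heterogeneous phenology records into a mapping problem. The sketch below shows the general idea with invented source networks, source terms and PPO-style target labels; it is not the actual PPO pipeline:

```python
# Illustrative sketch of vocabulary harmonization for phenology records.
# The networks, local terms and target labels here are hypothetical examples.
TERM_MAP = {
    ("network_a", "first bloom"): "flower present",
    ("network_b", "open flowers observed"): "flower present",
    ("network_a", "leaf out"): "true leaf present",
}

def harmonize(records):
    """Map (source, local_term, date) tuples onto a shared phenophase vocabulary."""
    out = []
    for source, term, date in records:
        standard = TERM_MAP.get((source, term))
        if standard is not None:  # drop terms we cannot yet map
            out.append({"phenophase": standard, "date": date, "source": source})
    return out

raw = [("network_a", "first bloom", "2016-04-12"),
       ("network_b", "open flowers observed", "2016-04-15")]
for rec in harmonize(raw):
    print(rec)
```

    Once both networks' records carry the same phenophase label, they can be pooled and analyzed together, which is the large-scale integration the ontology enables.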

  3. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, Christopher P.; Olden, Julian D.; Lytle, David A.; Melis, Theodore S.; Schmidt, John C.; Bray, Erin N.; Freeman, Mary C.; Gido, Keith B.; Hemphill, Nina P.; Kennard, Mark J.; McMullen, Laura E.; Mims, Meryl C.; Pyron, Mark; Robinson, Christopher T.; Williams, John G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems.

  4. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    NASA Astrophysics Data System (ADS)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region is comprised of two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale-intensive data requirements and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, and to present strengths and weaknesses of integrated modeling at such a large scale, along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential

  5. Design of a broadband ultra-large area acoustic cloak based on a fluid medium

    NASA Astrophysics Data System (ADS)

    Zhu, Jian; Chen, Tianning; Liang, Qingxuan; Wang, Xiaopeng; Jiang, Ping

    2014-10-01

    A broadband ultra-large area acoustic cloak based on fluid medium was designed and numerically implemented with homogeneous metamaterials according to the transformation acoustics. In the present work, fluid medium as the body of the inclusion could be tuned by changing the fluid to satisfy the variant acoustic parameters instead of redesign the whole cloak. The effective density and bulk modulus of the composite materials were designed to agree with the parameters calculated from the coordinate transformation methodology by using the effective medium theory. Numerical simulation results showed that the sound propagation and scattering signature could be controlled in the broadband ultra-large area acoustic invisibility cloak, and good cloaking performance has been achieved and physically realized with homogeneous materials. The broadband ultra-large area acoustic cloaking properties have demonstrated great potentials in the promotion of the practical applications of acoustic cloak.

  6. Ultra-Dense Quantum Communication Using Integrated Photonic Architecture: First Annual Report

    DTIC Science & Technology

    2011-08-24

    The goal of this program is to establish a fundamental information-theoretic understanding of quantum secure communication and to devise a practical, scalable implementation of quantum key distribution protocols in an integrated photonic architecture. We report our progress on experimental and

  7. Ontological modelling of knowledge management for human-machine integrated design of ultra-precision grinding machine

    NASA Astrophysics Data System (ADS)

    Hong, Haibo; Yin, Yuehong; Chen, Xing

    2016-11-01

    Despite the rapid development of computer science and information technology, an efficient human-machine integrated enterprise information system for designing complex mechatronic products has yet to be fully realized, partly because of inharmonious communication among collaborators. Therefore, one challenge in human-machine integration is how to establish an appropriate knowledge management (KM) model to support the integration and sharing of heterogeneous product knowledge. Aiming at the diversity of design knowledge, this article proposes an ontology-based model to reach an unambiguous and normative representation of knowledge. First, an ontology-based human-machine integrated design framework is described, and corresponding ontologies and sub-ontologies are established according to different purposes and scopes. Second, a similarity-calculation-based ontology integration method composed of ontology mapping and ontology merging is introduced. An ontology-searching-based knowledge sharing method is then developed. Finally, a case of human-machine integrated design of a large ultra-precision grinding machine is used to demonstrate the effectiveness of the method.
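    As a rough illustration of what a similarity-calculation step in ontology mapping can look like: the article's actual measure is not given in the abstract, so this sketch uses a simple token-set (Jaccard) score, and the concept labels and threshold are invented:

```python
# Minimal sketch of similarity-based ontology mapping: score concept-label
# pairs by token-set (Jaccard) similarity and propose mappings above a
# threshold. Labels and the 0.5 threshold are illustrative assumptions.
def jaccard(label_a, label_b):
    """Token-set Jaccard similarity between two concept labels."""
    a, b = set(label_a.lower().split()), set(label_b.lower().split())
    return len(a & b) / len(a | b)

def propose_mappings(onto_a, onto_b, threshold=0.5):
    """Candidate concept alignments between two ontologies' label lists."""
    return [(x, y, jaccard(x, y))
            for x in onto_a for y in onto_b
            if jaccard(x, y) >= threshold]

design_onto = ["grinding wheel spindle", "hydrostatic guideway"]
process_onto = ["wheel spindle assembly", "machine base"]
for a, b, s in propose_mappings(design_onto, process_onto):
    print(f"{a!r} <-> {b!r} (score {s:.2f})")
```

    Mappings that pass the threshold would then feed the merging step, where the matched concepts are unified into the shared ontology.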

  8. ReSeqTools: an integrated toolkit for large-scale next-generation sequencing based resequencing analysis.

    PubMed

    He, W; Zhao, S; Liu, X; Dong, S; Lv, J; Liu, D; Wang, J; Meng, Z

    2013-12-04

    Large-scale next-generation sequencing (NGS)-based resequencing detects sequence variations, constructs evolutionary histories, and identifies phenotype-related genotypes. However, NGS-based resequencing studies generate extraordinarily large amounts of data, making computations difficult. Effective use and analysis of these data for NGS-based resequencing studies remains a difficult task for individual researchers. Here, we introduce ReSeqTools, a full-featured toolkit for NGS (Illumina sequencing)-based resequencing analysis, which processes raw data, interprets mapping results, and identifies and annotates sequence variations. ReSeqTools provides abundant scalable functions for routine resequencing analysis in different modules to facilitate customization of the analysis pipeline. ReSeqTools is designed to use compressed data files as input or output to save storage space and facilitates faster and more computationally efficient large-scale resequencing studies in a user-friendly manner. It offers abundant practical functions and generates useful statistics during the analysis pipeline, which significantly simplifies resequencing analysis. Its integrated algorithms and abundant sub-functions provide a solid foundation for special demands in resequencing projects. Users can combine these functions to construct their own pipelines for other purposes.
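    The compressed-input design described above can be illustrated in a few lines of Python: reads are streamed directly from a gzip-compressed FASTQ file, so the decompressed data never touches disk. The records and the helper below are a generic sketch, not part of the ReSeqTools API:

```python
import gzip
import io

# Streaming reads from gzip-compressed FASTQ (4 lines per record) without
# writing decompressed data to disk. The records here are made up.
fastq = b"@read1\nACGT\n+\nIIII\n@read2\nTTGA\n+\nIIII\n"
compressed = gzip.compress(fastq)

def iter_fastq(handle):
    """Yield (name, sequence) pairs from a binary FASTQ stream."""
    while True:
        name = handle.readline().strip()
        if not name:
            return
        seq = handle.readline().strip()
        handle.readline()  # '+' separator line
        handle.readline()  # quality string (ignored in this sketch)
        yield name.decode().lstrip("@"), seq.decode()

# gzip.open accepts an existing file object, so the archive is decompressed
# on the fly as the parser consumes it.
with gzip.open(io.BytesIO(compressed), "rb") as fh:
    reads = list(iter_fastq(fh))
print(reads)  # [('read1', 'ACGT'), ('read2', 'TTGA')]
```

    The same pattern applies on the output side: results can be written through a compressing stream, which is how a pipeline keeps storage costs down at resequencing scale.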

  9. System design and integration of the large-scale advanced prop-fan

    NASA Technical Reports Server (NTRS)

    Huth, B. P.

    1986-01-01

    In recent years, considerable attention has been directed toward improving aircraft fuel consumption. Studies have shown that blades with thin airfoils and aerodynamic sweep extend the inherent efficiency advantage that turboprop propulsion systems have demonstrated to the higher speeds of today's aircraft. Hamilton Standard has designed a 9-foot diameter single-rotation Prop-Fan. It will test the hardware on a static test stand, in low-speed and high-speed wind tunnels, and on a research aircraft. The major objective of this testing is to establish the structural integrity of large-scale Prop-Fans of advanced construction, in addition to evaluating the aerodynamic performance and the aeroacoustic design. The coordination efforts performed to ensure smooth operation and assembly of the Prop-Fan are summarized. A summary of the loads used to size the system components, the methodology used to establish material allowables, and a review of the key analytical results are given.

  10. A convenient method for large-scale STM mapping of freestanding atomically thin conductive membranes

    NASA Astrophysics Data System (ADS)

    Uder, B.; Hartmann, U.

    2017-06-01

    Two-dimensional atomically flat sheets with high flexibility are very attractive as ultrathin membranes but are also inherently challenging for microscopic investigations. We report on a method using Scanning Tunneling Microscopy (STM) under ultra-high vacuum conditions for large-scale mapping of several-micrometer-sized freestanding single- and multilayer graphene membranes. This is achieved by operating the STM at unusual parameters: we found that large-scale scanning on atomically thin membranes delivers valuable results when very high tip-scan speeds are combined with high feedback-loop gain and low tunneling currents. The method ultimately relies on the particular behavior of the freestanding membrane in the STM, which differs markedly from that of a solid substrate.

  11. A Large-Scale Design Integration Approach Developed in Conjunction with the Ares Launch Vehicle Program

    NASA Technical Reports Server (NTRS)

    Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.

    2012-01-01

    This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope-and-interface approach and applying it to modern three-dimensional computer-aided design (3D CAD) systems. Today, design integration with 3D models often relies on a digital mockup of the overall vehicle in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured by the excessive amount of information, making them difficult to discern. In contrast, the envelope-and-interface method reduces both the amount and the complexity of information necessary for design integration while yielding significant savings in time and effort on today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefited from reduced development and design cycle time include: creation of analysis models for the aerodynamics discipline; vehicle-to-ground interface development; and documentation development for the vehicle assembly.

  12. Applications of Magnetic Suspension Technology to Large Scale Facilities: Progress, Problems and Promises

    NASA Technical Reports Server (NTRS)

    Britcher, Colin P.

    1997-01-01

    This paper will briefly review previous work in wind tunnel Magnetic Suspension and Balance Systems (MSBS) and will examine the handful of systems around the world currently known to be in operational condition or undergoing recommissioning. Technical developments emerging from research programs at NASA and elsewhere will be reviewed briefly, where there is potential impact on large-scale MSBSs. The likely aerodynamic applications for large MSBSs will be addressed, since these applications should properly drive system designs. A recently proposed application to ultra-high Reynolds number testing will then be addressed in some detail. Finally, some opinions on the technical feasibility and usefulness of a large MSBS will be given.

  13. Chip-scale integrated optical interconnects: a key enabler for future high-performance computing

    NASA Astrophysics Data System (ADS)

    Haney, Michael; Nair, Rohit; Gu, Tian

    2012-01-01

    High Performance Computing (HPC) systems are putting ever-increasing demands on the throughput efficiency of their interconnection fabrics. In this paper, the limits of conventional metal trace-based inter-chip interconnect fabrics are examined in the context of state-of-the-art HPC systems, which currently operate near the 1 GFLOPS/W level. The analysis suggests that conventional metal trace interconnects will limit performance to approximately 6 GFLOPS/W in larger HPC systems that require many computer chips to be interconnected in parallel processing architectures. As the HPC communications bottlenecks push closer to the processing chips, integrated Optical Interconnect (OI) technology may provide the ultra-high bandwidths needed at the inter- and intra-chip levels. With inter-chip photonic link energies projected to be less than 1 pJ/bit, integrated OI is projected to enable HPC architecture scaling to the 50 GFLOPS/W level and beyond - providing a path to Peta-FLOPS-level HPC within a single rack, and potentially even Exa-FLOPS-level HPC for large systems. A new hybrid integrated chip-scale OI approach is described and evaluated. The concept integrates a high-density polymer waveguide fabric directly on top of a multiple quantum well (MQW) modulator array that is area-bonded to the Silicon computing chip. Grayscale lithography is used to fabricate 5 μm × 5 μm polymer waveguides and associated novel small-footprint total internal reflection-based vertical input/output couplers directly onto a layer containing an array of GaAs MQW devices configured to be either absorption modulators or photodetectors. An external continuous wave optical "power supply" is coupled into the waveguide links. Contrast ratios were measured using a test rider chip in place of a Silicon processing chip. The results suggest that sub-pJ/b chip-scale communication is achievable with this concept. When integrated into high-density integrated optical interconnect fabrics, it could provide

  14. Large-scale quantum photonic circuits in silicon

    NASA Astrophysics Data System (ADS)

    Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk

    2016-08-01

    Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from ∼30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. 
Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards

  15. A cellphone based system for large-scale monitoring of black carbon

    NASA Astrophysics Data System (ADS)

    Ramanathan, N.; Lukac, M.; Ahmed, T.; Kar, A.; Praveen, P. S.; Honles, T.; Leong, I.; Rehman, I. H.; Schauer, J. J.; Ramanathan, V.

    2011-08-01

    Black carbon aerosols are a major component of soot and a major contributor to global and regional climate change. Reliable and cost-effective systems to measure near-surface black carbon (BC) mass concentrations (hereafter denoted [BC]) globally are necessary to validate air pollution and climate models and to evaluate the effectiveness of BC mitigation actions. Toward this goal we describe a new wireless, low-cost, ultra-low-power, cellphone-based BC monitoring system (BC_CBM). BC_CBM integrates a Miniaturized Aerosol filter Sampler (MAS) with a cellphone for filter image collection, transmission, and image analysis for determining [BC] in real time. BC aerosols in the air accumulate on the MAS quartz filter, resulting in a coloration of the filter. A photograph of the filter is captured by the cellphone camera and transmitted to the analytics component of BC_CBM, which compares the image with a calibrated reference scale (also included in the photograph) to estimate [BC]. We demonstrate with field data collected from vastly differing environments, ranging from southern California to rural regions in the Indo-Gangetic plains of northern India, that the total BC deposited on the filter is directly and uniquely related to the reflectance of the filter in the red wavelength, irrespective of its source or how the particles were deposited. [BC] varied from 0.1 to 1 μg m-3 in southern California and from 10 to 200 μg m-3 in rural India in our field studies. In spite of this three-orders-of-magnitude variation in [BC], the BC_CBM system determined [BC] well within the experimental error of two independent reference instruments for both indoor air and outdoor ambient air. Accurate, global-scale measurements of [BC] in urban and remote rural locations, enabled by the wireless, low-cost, ultra-low-power operation of BC_CBM, will make it possible to better capture the large spatial and temporal variations in
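    The reflectance-to-[BC] mapping described above can be sketched as a simple Beer-Lambert-style calibration. A minimal illustration; the blank reflectance R0 and constant k below are hypothetical placeholders, not values from the study:

    ```python
    import math

    def bc_loading_from_reflectance(R, R0=0.95, k=4.0):
        """Toy Beer-Lambert-style calibration: infer relative BC filter
        loading from red-channel reflectance R against a blank filter R0.
        R0 and k are hypothetical calibration constants, not values
        reported in the paper."""
        return math.log(R0 / R) / k

    # A blank filter implies zero loading; darker filters imply more BC.
    print(bc_loading_from_reflectance(0.95))  # 0.0
    ```

    In the actual system this mapping is established against the calibrated reference scale photographed alongside the filter.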

  16. Ultra-sensitive all-fibre photothermal spectroscopy with large dynamic range

    PubMed Central

    Jin, Wei; Cao, Yingchun; Yang, Fan; Ho, Hoi Lut

    2015-01-01

    Photothermal interferometry is an ultra-sensitive spectroscopic means for trace chemical detection in gas- and liquid-phase materials. Previous photothermal interferometry systems used free-space optics and have limitations in efficiency of light–matter interaction, size and optical alignment, and integration into photonic circuits. Here we exploit photothermal-induced phase change in a gas-filled hollow-core photonic bandgap fibre, and demonstrate an all-fibre acetylene gas sensor with a noise equivalent concentration of 2 p.p.b. (2.3 × 10−9 cm−1 in absorption coefficient) and an unprecedented dynamic range of nearly six orders of magnitude. The realization of photothermal interferometry with low-cost near infrared semiconductor lasers and fibre-based technology allows a class of optical sensors with compact size, ultra sensitivity and selectivity, applicability to harsh environment, and capability for remote and multiplexed multi-point detection and distributed sensing. PMID:25866015

  17. Evolution of clustering length, large-scale bias, and host halo mass at 2 < z < 5 in the VIMOS Ultra Deep Survey (VUDS)⋆

    NASA Astrophysics Data System (ADS)

    Durkalec, A.; Le Fèvre, O.; Pollo, A.; de la Torre, S.; Cassata, P.; Garilli, B.; Le Brun, V.; Lemaux, B. C.; Maccagni, D.; Pentericci, L.; Tasca, L. A. M.; Thomas, R.; Vanzella, E.; Zamorani, G.; Zucca, E.; Amorín, R.; Bardelli, S.; Cassarà, L. P.; Castellano, M.; Cimatti, A.; Cucciati, O.; Fontana, A.; Giavalisco, M.; Grazian, A.; Hathi, N. P.; Ilbert, O.; Paltani, S.; Ribeiro, B.; Schaerer, D.; Scodeggio, M.; Sommariva, V.; Talia, M.; Tresse, L.; Vergani, D.; Capak, P.; Charlot, S.; Contini, T.; Cuby, J. G.; Dunlop, J.; Fotopoulou, S.; Koekemoer, A.; López-Sanjuan, C.; Mellier, Y.; Pforr, J.; Salvato, M.; Scoville, N.; Taniguchi, Y.; Wang, P. W.

    2015-11-01

    We investigate the evolution of galaxy clustering for galaxies in the redshift range 2.0 < z < 5.0 in the VIMOS Ultra Deep Survey (VUDS). We present the projected (real-space) two-point correlation function wp(rp) measured using 3022 galaxies with robust spectroscopic redshifts in two independent fields (COSMOS and VVDS-02h) covering in total 0.8 deg2. We quantify how the clustering amplitude r0 changes with redshift, making use of mock samples to evaluate and correct the survey selection function. Using a power-law model ξ(r) = (r/r0)-γ we find that the correlation function for the general population is best fit by a model with clustering length r0 = 3.95+0.48-0.54 h-1 Mpc and slope γ = 1.8+0.02-0.06 at z ~ 2.5, and r0 = 4.35 ± 0.60 h-1 Mpc and γ = 1.6+0.12-0.13 at z ~ 3.5. We use these clustering parameters to derive the large-scale linear galaxy bias bLPL between galaxies and dark matter. We find bLPL = 2.68 ± 0.22 at redshift z ~ 3 (assuming σ8 = 0.8), significantly higher than found at intermediate and low redshifts for similarly general galaxy populations. We fit a halo occupation distribution (HOD) model to the data and obtain an average halo mass at redshift z ~ 3 of Mh = 1011.75 ± 0.23 h-1 M⊙. From this fit we confirm that the large-scale linear galaxy bias is relatively high, at bLHOD = 2.82 ± 0.27. Comparing these measurements with similar measurements at lower redshifts, we infer that the star-forming population of galaxies at z ~ 3 should evolve into the massive and bright (Mr < -21.5) galaxy population, which typically occupies haloes of mass ⟨Mh⟩ = 1013.9 h-1 M⊙ at redshift z = 0. Based on data obtained with the European Southern Observatory Very Large Telescope, Paranal, Chile, under Large Program 185.A-0791. Appendices are available in electronic form at http://www.aanda.org
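    The power-law correlation function quoted above is straightforward to evaluate numerically; a minimal sketch using the z ~ 2.5 best-fit values from the abstract:

    ```python
    def xi_power_law(r, r0=3.95, gamma=1.8):
        """Power-law two-point correlation function xi(r) = (r/r0)**(-gamma).
        Defaults are the VUDS best-fit values at z ~ 2.5 quoted above;
        r and r0 are in units of h^-1 Mpc."""
        return (r / r0) ** (-gamma)

    # By construction xi(r0) = 1, and clustering strength falls off
    # with increasing separation.
    print(xi_power_law(3.95))                       # 1.0
    print(xi_power_law(10.0) < xi_power_law(5.0))   # True
    ```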

  18. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large-scale, global natural phenomena needs to be improved, and new observing platforms are expected. We have studied the concept of the Moon as an Earth observation platform in recent years. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and has the following advantages: large observation range, variable view angle, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena including large-scale atmospheric change, large-scale ocean change, large-scale land surface dynamic change, solid Earth dynamic change, etc. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth science phenomena; sensor parameter optimization and methods of Moon-based Earth observation; site selection and environment of Moon-based Earth observation; the Moon-based Earth observation platform; and a Moon-based Earth observation fundamental scientific framework.

  19. Hydrometeorological variability on a large french catchment and its relation to large-scale circulation across temporal scales

    NASA Astrophysics Data System (ADS)

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    In the present context of global change, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, and provide future scenarios of water resources. With the aim of better understanding hydrological changes, it is of crucial importance to determine how and to what extent the trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insight into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) defining those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. 
This approach
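    The discrete wavelet multiresolution decomposition at the heart of the ESD approach can be illustrated with a single Haar analysis/synthesis step (a generic sketch, not the authors' code):

    ```python
    import numpy as np

    def haar_step(x):
        """One level of a Haar discrete wavelet transform: split an
        even-length signal into coarse approximation and detail
        coefficients (the two halves of the multiresolution pyramid)."""
        x = np.asarray(x, dtype=float)
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        return approx, detail

    def haar_inverse(approx, detail):
        """Exact inverse of haar_step, recombining both coefficient bands."""
        x = np.empty(2 * len(approx))
        x[0::2] = (approx + detail) / np.sqrt(2)
        x[1::2] = (approx - detail) / np.sqrt(2)
        return x
    ```

    Repeatedly applying `haar_step` to the approximation band yields the multiresolution decomposition on which scale-by-scale large-scale predictors can be fitted.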

  20. The large-scale organization of metabolic networks

    NASA Astrophysics Data System (ADS)

    Jeong, H.; Tombor, B.; Albert, R.; Oltvai, Z. N.; Barabási, A.-L.

    2000-10-01

    In a cell or microorganism, the processes that generate mass, energy, information transfer and cell-fate specification are seamlessly integrated through a complex network of cellular constituents and reactions. However, despite the key role of these networks in sustaining cellular functions, their large-scale structure is essentially unknown. Here we present a systematic comparative mathematical analysis of the metabolic networks of 43 organisms representing all three domains of life. We show that, despite significant variation in their individual constituents and pathways, these metabolic networks have the same topological scaling properties and show striking similarities to the inherent organization of complex non-biological systems. This may indicate that metabolic organization is not only identical for all living organisms, but also complies with the design principles of robust and error-tolerant scale-free networks, and may represent a common blueprint for the large-scale organization of interactions among all cellular constituents.
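    The scale-free topology the authors report can be mimicked with a minimal preferential-attachment generator; this is a generic illustration of the network class, not the paper's analysis code:

    ```python
    import random

    def preferential_attachment_graph(n, m, seed=0):
        """Grow an undirected graph in which each new node attaches to m
        targets chosen with probability proportional to degree; this
        produces the heavy-tailed (scale-free) degree distributions
        discussed above."""
        rng = random.Random(seed)
        adj = {i: set() for i in range(n)}
        targets = list(range(m))   # seed nodes for the first arrival
        repeated = []              # nodes listed once per unit of degree
        for new in range(m, n):
            for t in set(targets):
                adj[new].add(t)
                adj[t].add(new)
            repeated.extend(targets)
            repeated.extend([new] * m)
            targets = [rng.choice(repeated) for _ in range(m)]
        return adj

    g = preferential_attachment_graph(200, 2)
    degrees = sorted((len(nbrs) for nbrs in g.values()), reverse=True)
    # A few hub nodes accumulate far more links than the typical node.
    ```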

  1. Convolutional auto-encoder for image denoising of ultra-low-dose CT.

    PubMed

    Nishio, Mizuho; Nagashima, Chihiro; Hirabayashi, Saori; Ohnishi, Akinori; Sasaki, Kaori; Sagawa, Tomoyuki; Hamada, Masayuki; Yamashita, Tatsuo

    2017-08-01

    The purpose of this study was to validate a patch-based image denoising method for ultra-low-dose CT images. Neural network with convolutional auto-encoder and pairs of standard-dose CT and ultra-low-dose CT image patches were used for image denoising. The performance of the proposed method was measured by using a chest phantom. Standard-dose and ultra-low-dose CT images of the chest phantom were acquired. The tube currents for standard-dose and ultra-low-dose CT were 300 and 10 mA, respectively. Ultra-low-dose CT images were denoised with our proposed method using neural network, large-scale nonlocal mean, and block-matching and 3D filtering. Five radiologists and three technologists assessed the denoised ultra-low-dose CT images visually and recorded their subjective impressions of streak artifacts, noise other than streak artifacts, visualization of pulmonary vessels, and overall image quality. For the streak artifacts, noise other than streak artifacts, and visualization of pulmonary vessels, the results of our proposed method were statistically better than those of block-matching and 3D filtering (p-values < 0.05). On the other hand, the difference in the overall image quality between our proposed method and block-matching and 3D filtering was not statistically significant (p-value = 0.07272). The p-values obtained between our proposed method and large-scale nonlocal mean were all less than 0.05. Neural network with convolutional auto-encoder could be trained using pairs of standard-dose and ultra-low-dose CT image patches. According to the visual assessment by radiologists and technologists, the performance of our proposed method was superior to that of large-scale nonlocal mean and block-matching and 3D filtering.
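    The patch-based training setup described above starts from paired image patches; a minimal sketch of the patch-extraction stage (generic code, not the authors' implementation):

    ```python
    import numpy as np

    def extract_patches(img, size, stride):
        """Slice a 2-D image into overlapping square patches, as would be
        used to build (standard-dose, ultra-low-dose) training pairs for
        a patch-based denoising auto-encoder."""
        h, w = img.shape
        return np.array([img[i:i + size, j:j + size]
                         for i in range(0, h - size + 1, stride)
                         for j in range(0, w - size + 1, stride)])

    # Patches cut at the same positions from registered standard-dose and
    # ultra-low-dose scans would then serve as target/input pairs.
    ```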

  2. Ultra-fast photon counting with a passive quenching silicon photomultiplier in the charge integration regime

    NASA Astrophysics Data System (ADS)

    Zhang, Guoqing; Lina, Liu

    2018-02-01

    An ultra-fast photon counting method is proposed based on the charge integration of the output electrical pulses of passive-quenching silicon photomultipliers (SiPMs). Numerical analysis with actual SiPM parameters shows that the maximum photon counting rate of a state-of-the-art passive-quenching SiPM can reach the ~THz level, which is much higher than that of existing photon counting devices. An experimental procedure based on this method is proposed. This photon counting regime of SiPMs is promising in many fields, such as large-dynamic-range light power detection.
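    The charge-integration idea reduces to dividing the integrated output charge by the charge contributed per fired microcell. A toy numerical sketch; the gain of 1e6 is a typical illustrative value, not a measured device parameter:

    ```python
    Q_E = 1.602176634e-19  # elementary charge, in coulombs

    def photon_count(integrated_charge, gain=1.0e6):
        """Estimate the number of fired SiPM microcells (~ detected
        photons) from the time-integrated output charge: each avalanche
        contributes roughly gain * Q_E coulombs. The gain value is an
        illustrative assumption."""
        return integrated_charge / (gain * Q_E)

    # 1000 detected photons at gain 1e6 integrate to about 0.16 nC.
    q = 1000 * 1.0e6 * Q_E
    print(round(photon_count(q)))  # 1000
    ```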

  3. Large-scale 3D inversion of marine controlled source electromagnetic data using the integral equation method

    NASA Astrophysics Data System (ADS)

    Zhdanov, M. S.; Cuma, M.; Black, N.; Wilson, G. A.

    2009-12-01

    The marine controlled source electromagnetic (MCSEM) method has become widely used in offshore oil and gas exploration. Interpretation of MCSEM data is still a very challenging problem, especially if one would like to take into account the realistic 3D structure of the subsurface. The inversion of MCSEM data is complicated by the fact that the EM response of a hydrocarbon-bearing reservoir is very weak in comparison with the background EM fields generated by an electric dipole transmitter in complex geoelectrical structures formed by a conductive sea-water layer and the terranes beneath it. In this paper, we present a review of the recent developments in the area of large-scale 3D EM forward modeling and inversion. Our approach is based on using a new integral form of Maxwell’s equations allowing for an inhomogeneous background conductivity, which results in a numerically effective integral representation for 3D EM field. This representation provides an efficient tool for the solution of 3D EM inverse problems. To obtain a robust inverse model of the conductivity distribution, we apply regularization based on a focusing stabilizing functional which allows for the recovery of models with both smooth and sharp geoelectrical boundaries. The method is implemented in a fully parallel computer code, which makes it possible to run large-scale 3D inversions on grids with millions of inversion cells. This new technique can be effectively used for active EM detection and monitoring of the subsurface targets.
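    The regularized inversion step can be illustrated in its simplest smooth-stabilizer form. The paper's focusing stabilizer is nonquadratic and solved iteratively, so this quadratic Tikhonov sketch is only a stand-in:

    ```python
    import numpy as np

    def tikhonov_solve(A, d, lam):
        """Solve min_m ||A m - d||^2 + lam * ||m||^2 via the normal
        equations (A^T A + lam I) m = A^T d. A focusing stabilizer, as
        used in the paper to recover sharp boundaries, would replace the
        quadratic lam*||m||^2 term."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ d)

    # With lam -> 0 and a well-posed operator, the data are fit exactly;
    # for ill-posed EM inversion, lam > 0 trades data fit for stability.
    ```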

  4. Ultra Small Integrated Optical Fiber Sensing System

    PubMed Central

    Van Hoe, Bram; Lee, Graham; Bosman, Erwin; Missinne, Jeroen; Kalathimekkad, Sandeep; Maskery, Oliver; Webb, David J.; Sugden, Kate; Van Daele, Peter; Van Steenberge, Geert

    2012-01-01

    This paper introduces a revolutionary way to interrogate optical fiber sensors based on fiber Bragg gratings (FBGs) and to integrate the necessary driving optoelectronic components with the sensor elements. Low-cost optoelectronic chips are used to interrogate the optical fibers, creating a portable dynamic sensing system as an alternative for the traditionally bulky and expensive fiber sensor interrogation units. The possibility to embed these laser and detector chips is demonstrated resulting in an ultra thin flexible optoelectronic package of only 40 μm, provided with an integrated planar fiber pigtail. The result is a fully embedded flexible sensing system with a thickness of only 1 mm, based on a single Vertical-Cavity Surface-Emitting Laser (VCSEL), fiber sensor and photodetector chip. Temperature, strain and electrodynamic shaking tests have been performed on our system, not limited to static read-out measurements but dynamically reconstructing full spectral information datasets.

  5. GIGGLE: a search engine for large-scale integrated genome analysis.

    PubMed

    Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R

    2018-02-01

    GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation.
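    GIGGLE's core operation, counting interval overlaps between a query and indexed interval files, can be sketched with sorted endpoint arrays; a toy stand-in for its index, assuming half-open [start, end) intervals:

    ```python
    import bisect

    def count_overlaps(q_start, q_end, intervals):
        """Count how many half-open intervals [s, e) overlap the query
        [q_start, q_end). An interval fails to overlap only if it ends
        at or before the query starts, or starts at or after it ends."""
        starts = sorted(s for s, _ in intervals)
        ends = sorted(e for _, e in intervals)
        n = len(intervals)
        ended_before = bisect.bisect_right(ends, q_start)
        start_after = n - bisect.bisect_left(starts, q_end)
        return n - ended_before - start_after

    ivals = [(0, 5), (3, 8), (10, 12)]
    print(count_overlaps(4, 6, ivals))   # 2
    print(count_overlaps(8, 10, ivals))  # 0
    ```

    GIGGLE itself adds a unified index across thousands of files and ranks the statistical significance of the overlap counts.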

  6. The Emergence of Dominant Design(s) in Large Scale Cyber-Infrastructure Systems

    ERIC Educational Resources Information Center

    Diamanti, Eirini Ilana

    2012-01-01

    Cyber-infrastructure systems are integrated large-scale IT systems designed with the goal of transforming scientific practice by enabling multi-disciplinary, cross-institutional collaboration. Their large scale and socio-technical complexity make design decisions for their underlying architecture practically irreversible. Drawing on three…

  7. GIGGLE: a search engine for large-scale integrated genome analysis

    PubMed Central

    Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R

    2018-01-01

    GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation. PMID:29309061

  8. Large scale integration of graphene transistors for potential applications in the back end of the line

    NASA Astrophysics Data System (ADS)

    Smith, A. D.; Vaziri, S.; Rodriguez, S.; Östling, M.; Lemme, M. C.

    2015-06-01

    A chip- to wafer-scale, CMOS-compatible method of graphene device fabrication has been established, which can be integrated into the back end of the line (BEOL) of conventional semiconductor process flows. In this paper, we present experimental results for graphene field effect transistors (GFETs) fabricated using this wafer-scalable method. The carrier mobilities in these transistors reach up to several hundred cm2 V-1 s-1. Further, these devices exhibit current saturation regions similar to those of graphene devices fabricated using mechanical exfoliation. The overall performance of the GFETs cannot yet compete with the record values reported for devices based on mechanically exfoliated material. Nevertheless, this large-scale approach is an important step towards reliability and variability studies, as well as optimization of device aspects such as electrical contacts and dielectric interfaces, with statistically relevant numbers of devices. It is also an important milestone towards introducing graphene into wafer-scale process lines.

  9. A large-scale clinical validation of an integrated monitoring system in the emergency department.

    PubMed

    Clifton, David A; Wong, David; Clifton, Lei; Wilson, Sarah; Way, Rob; Pullinger, Richard; Tarassenko, Lionel

    2013-07-01

    We consider an integrated patient monitoring system, combining electronic patient records with high-rate acquisition of patient physiological data. There remain many challenges in increasing the robustness of "e-health" applications to a level at which they are clinically useful, particularly in the use of automated algorithms used to detect and cope with artifact in data contained within the electronic patient record, and in analyzing and communicating the resultant data for reporting to clinicians. There is a consequential "plague of pilots," in which engineering prototype systems do not enter into clinical use. This paper describes an approach in which, for the first time, the Emergency Department (ED) of a major research hospital has adopted such systems for use during a large clinical trial. We describe the disadvantages of existing evaluation metrics when applied to such large trials, and propose a solution suitable for large-scale validation. We demonstrate that machine learning technologies embedded within healthcare information systems can provide clinical benefit, with the potential to improve patient outcomes in the busy environment of a major ED and other high-dependence areas of patient care.

  10. Weight optimization of ultra large space structures

    NASA Technical Reports Server (NTRS)

    Reinert, R. P.

    1979-01-01

    The paper describes the optimization of a solar power satellite structure for minimum mass and system cost. The solar power satellite is an ultra-large, low-frequency, lightly damped space structure; deriving its structural design requirements required accommodating the gravity gradient torques that impose the primary loads, a life of up to 100 years in the rigorous geosynchronous-orbit radiation environment, and the prevention of continuous wave motion in a solar array blanket suspended from a huge, lightly damped structure subject to periodic excitations. The structural design required a parametric study of structural configurations and consideration of fabrication and assembly techniques, resulting in a final structure that met all requirements at a structural mass fraction of 10%.

  11. Stochastic inflation lattice simulations - Ultra-large scale structure of the universe

    NASA Technical Reports Server (NTRS)

    Salopek, D. S.

    1991-01-01

    Non-Gaussian fluctuations for structure formation may arise in inflation from the nonlinear interaction of long wavelength gravitational and scalar fields. Long wavelength fields have spatial gradients (of order a-1) small compared to the Hubble radius, and they are described in terms of classical random fields that are fed by short wavelength quantum noise. Lattice Langevin calculations are given for a toy model with a scalar field interacting with an exponential potential, where one can obtain exact analytic solutions of the Fokker-Planck equation. For single scalar field models that are consistent with current microwave background fluctuations, the fluctuations are Gaussian. However, for scales much larger than our observable Universe, one expects large metric fluctuations that are non-Gaussian. This example illuminates non-Gaussian models involving multiple scalar fields which are consistent with current microwave background limits.

  12. Large-scale network integration in the human brain tracks temporal fluctuations in memory encoding performance.

    PubMed

    Keerativittayayut, Ruedeerat; Aoki, Ryuta; Sarabi, Mitra Taghizadeh; Jimura, Koji; Nakahara, Kiyoshi

    2018-06-18

    Although activation/deactivation of specific brain regions have been shown to be predictive of successful memory encoding, the relationship between time-varying large-scale brain networks and fluctuations of memory encoding performance remains unclear. Here we investigated time-varying functional connectivity patterns across the human brain in periods of 30-40 s, which have recently been implicated in various cognitive functions. During functional magnetic resonance imaging, participants performed a memory encoding task, and their performance was assessed with a subsequent surprise memory test. A graph analysis of functional connectivity patterns revealed that increased integration of the subcortical, default-mode, salience, and visual subnetworks with other subnetworks is a hallmark of successful memory encoding. Moreover, multivariate analysis using the graph metrics of integration reliably classified the brain network states into the period of high (vs. low) memory encoding performance. Our findings suggest that a diverse set of brain systems dynamically interact to support successful memory encoding. © 2018, Keerativittayayut et al.

  13. Large-scale complementary macroelectronics using hybrid integration of carbon nanotubes and IGZO thin-film transistors.

    PubMed

    Chen, Haitian; Cao, Yu; Zhang, Jialu; Zhou, Chongwu

    2014-06-13

    Carbon nanotubes and metal oxide semiconductors have emerged as important materials for p-type and n-type thin-film transistors, respectively; however, realizing sophisticated macroelectronics operating in complementary mode has been challenging due to the difficulty in making n-type carbon nanotube transistors and p-type metal oxide transistors. Here we report a hybrid integration of p-type carbon nanotube and n-type indium-gallium-zinc-oxide thin-film transistors to achieve large-scale (>1,000 transistors for 501-stage ring oscillators) complementary macroelectronic circuits on both rigid and flexible substrates. This approach of hybrid integration allows us to combine the strength of p-type carbon nanotube and n-type indium-gallium-zinc-oxide thin-film transistors, and offers high device yield and low device variation. Based on this approach, we report the successful demonstration of various logic gates (inverter, NAND and NOR gates), ring oscillators (from 51 stages to 501 stages) and dynamic logic circuits (dynamic inverter, NAND and NOR gates).

  14. Ultra-weak sector, Higgs boson mass, and the dilaton

    DOE PAGES

    Allison, Kyle; Hill, Christopher T.; Ross, Graham G.

    2014-09-26

    The Higgs boson mass may arise from a portal coupling to a singlet fieldmore » $$\\sigma$$ which has a very large VEV $$f \\gg m_\\text{Higgs}$$. This requires a sector of "ultra-weak" couplings $$\\zeta_i$$, where $$\\zeta_i \\lesssim m_\\text{Higgs}^2 / f^2$$. Ultra-weak couplings are technically naturally small due to a custodial shift symmetry of $$\\sigma$$ in the $$\\zeta_i \\rightarrow 0$$ limit. The singlet field $$\\sigma$$ has properties similar to a pseudo-dilaton. We engineer explicit breaking of scale invariance in the ultra-weak sector via a Coleman-Weinberg potential, which requires hierarchies amongst the ultra-weak couplings.« less

  15. Large area scanning probe microscope in ultra-high vacuum demonstrated for electrostatic force measurements on high-voltage devices.

    PubMed

    Gysin, Urs; Glatzel, Thilo; Schmölzer, Thomas; Schöner, Adolf; Reshanov, Sergey; Bartolf, Holger; Meyer, Ernst

    2015-01-01

    The resolution in electrostatic force microscopy (EFM), a descendant of atomic force microscopy (AFM), has reached nanometre dimensions, necessary to investigate integrated circuits in modern electronic devices. However, the characterization of conducting or semiconducting power devices with EFM methods requires an accurate and reliable technique from the nanometre up to the micrometre scale. For high force sensitivity it is indispensable to operate the microscope under high to ultra-high vacuum (UHV) conditions to suppress viscous damping of the sensor. Furthermore, UHV environment allows for the analysis of clean surfaces under controlled environmental conditions. Because of these requirements we built a large area scanning probe microscope operating under UHV conditions at room temperature allowing to perform various electrical measurements, such as Kelvin probe force microscopy, scanning capacitance force microscopy, scanning spreading resistance microscopy, and also electrostatic force microscopy at higher harmonics. The instrument incorporates beside a standard beam deflection detection system a closed loop scanner with a scan range of 100 μm in lateral and 25 μm in vertical direction as well as an additional fibre optics. This enables the illumination of the tip-sample interface for optically excited measurements such as local surface photo voltage detection. We present Kelvin probe force microscopy (KPFM) measurements before and after sputtering of a copper alloy with chromium grains used as electrical contact surface in ultra-high power switches. In addition, we discuss KPFM measurements on cross sections of cleaved silicon carbide structures: a calibration layer sample and a power rectifier. To demonstrate the benefit of surface photo voltage measurements, we analysed the contact potential difference of a silicon carbide p/n-junction under illumination.

  16. Gas-Enhanced Ultra-High Shear Mixing: A Concept and Applications

    NASA Astrophysics Data System (ADS)

    Czerwinski, Frank; Birsan, Gabriel

    2017-04-01

    The processes of mixing, homogenizing, and deagglomeration are of paramount importance in many industries for modifying properties of liquids or liquid-based dispersions at room temperature and treatment of molten or semi-molten alloys at high temperatures, prior to their solidification. To implement treatments, a variety of technologies based on mechanical, electromagnetic, and ultrasonic principles are used commercially or tested at the laboratory scale. In a large number of techniques, especially those tailored toward metallurgical applications, the vital role is played by cavitation, generation of gas bubbles, and their interaction with the melt. This paper describes a novel concept exploring an integration of gas injection into the shear zone with ultra-high shear mixing. As revealed via experiments with a prototype of the cylindrical rotor-stator apparatus and transparent media, gases injected radially through the high-speed rotor generate highly refined bubbles of high concentration directly in the shear zone of the mixer. It is believed that an interaction of large volume of fine gas bubbles with the liquid, superimposed on ultra-high shear, will enhance mixing capabilities and cause superior refining and homogenizing of the liquids or solid-liquid slurries, thus allowing their effective property modification.

  17. Sustainable p-type copper selenide solar material with ultra-large absorption coefficient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Erica M.; Williams, Logan; Olvera, Alan

    We report the synthesis of CTSe, a p-type titanium copper selenide semiconductor. Its band gap (1.15 eV) and its ultra-large absorption coefficient (10 5 cm −1 ) in the entire visible range make it a promising Earth-abundant solar absorber material.

  18. Sustainable p-type copper selenide solar material with ultra-large absorption coefficient

    DOE PAGES

    Chen, Erica M.; Williams, Logan; Olvera, Alan; ...

    2018-01-01

    We report the synthesis of CTSe, a p-type titanium copper selenide semiconductor. Its band gap (1.15 eV) and its ultra-large absorption coefficient (10 5 cm −1 ) in the entire visible range make it a promising Earth-abundant solar absorber material.

  19. A Magnetic Bead-Integrated Chip for the Large Scale Manufacture of Normalized esiRNAs

    PubMed Central

    Wang, Zhao; Huang, Huang; Zhang, Hanshuo; Sun, Changhong; Hao, Yang; Yang, Junyu; Fan, Yu; Xi, Jianzhong Jeff

    2012-01-01

    The chemically-synthesized siRNA duplex has become a powerful and widely used tool for RNAi loss-of-function studies, but suffers from a high off-target effect problem. Recently, endoribonulease-prepared siRNA (esiRNA) has been shown to be an attractive alternative due to its lower off-target effect and cost effectiveness. However, the current manufacturing method for esiRNA is complicated, mainly in regards to purification and normalization on a large-scale level. In this study, we present a magnetic bead-integrated chip that can immobilize amplification or transcription products on beads and accomplish transcription, digestion, normalization and purification in a robust and convenient manner. This chip is equipped to manufacture ready-to-use esiRNAs on a large-scale level. Silencing specificity and efficiency of these esiRNAs were validated at the transcriptional, translational and functional levels. Manufacture of several normalized esiRNAs in a single well, including those silencing PARP1 and BRCA1, was successfully achieved, and the esiRNAs were subsequently utilized to effectively investigate their synergistic effect on cell viability. A small esiRNA library targeting 68 tyrosine kinase genes was constructed for a loss-of-function study, and four genes were identified in regulating the migration capability of Hela cells. We believe that this approach provides a more robust and cost-effective choice for manufacturing esiRNAs than current approaches, and therefore these heterogeneous RNA strands may have utility in most intensive and extensive applications. PMID:22761791

  20. SSI/MSI/LSI/VLSI/ULSI.

    ERIC Educational Resources Information Center

    Alexander, George

    1984-01-01

    Discusses small-scale integrated (SSI), medium-scale integrated (MSI), large-scale integrated (LSI), very large-scale integrated (VLSI), and ultra large-scale integrated (ULSI) chips. The development and properties of these chips, uses of gallium arsenide, Josephson devices (two superconducting strips sandwiching a thin insulator), and future…

  1. Large Scale Processes and Extreme Floods in Brazil

    NASA Astrophysics Data System (ADS)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in the last years as a new tool to improve the traditional, stationary based approach in flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies and the role of large scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space as obtained by machine learning techniques, particularly supervised kernel principal component analysis. In such reduced dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activities. We investigate for individual sites the exceedance probability in which large scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs large scale).

  2. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

    Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  3. A vision for an ultra-high resolution integrated water cycle observation and prediction system

    NASA Astrophysics Data System (ADS)

    Houser, P. R.

    2013-05-01

    Society's welfare, progress, and sustainable economic growth—and life itself—depend on the abundance and vigorous cycling and replenishing of water throughout the global environment. The water cycle operates on a continuum of time and space scales and exchanges large amounts of energy as water undergoes phase changes and is moved from one part of the Earth system to another. We must move toward an integrated observation and prediction paradigm that addresses broad local-to-global science and application issues by realizing synergies associated with multiple, coordinated observations and prediction systems. A central challenge of a future water and energy cycle observation strategy is to progress from single variable water-cycle instruments to multivariable integrated instruments in electromagnetic-band families. The microwave range in the electromagnetic spectrum is ideally suited for sensing the state and abundance of water because of water's dielectric properties. Eventually, a dedicated high-resolution water-cycle microwave-based satellite mission may be possible based on large-aperture antenna technology that can harvest the synergy that would be afforded by simultaneous multichannel active and passive microwave measurements. A partial demonstration of these ideas can even be realized with existing microwave satellite observations to support advanced multivariate retrieval methods that can exploit the totality of the microwave spectral information. The simultaneous multichannel active and passive microwave retrieval would allow improved-accuracy retrievals that are not possible with isolated measurements. Furthermore, the simultaneous monitoring of several of the land, atmospheric, oceanic, and cryospheric states brings synergies that will substantially enhance understanding of the global water and energy cycle as a system. The multichannel approach also affords advantages to some constituent retrievals—for instance, simultaneous retrieval of vegetation

  4. Removal of two large-scale cosmic microwave background anomalies after subtraction of the integrated Sachs-Wolfe effect

    NASA Astrophysics Data System (ADS)

    Rassat, A.; Starck, J.-L.; Dupé, F.-X.

    2013-09-01

    Context. Although there is currently a debate over the significance of the claimed large-scale anomalies in the cosmic microwave background (CMB), their existence is not totally dismissed. In parallel to the debate over their statistical significance, recent work has also focussed on masks and secondary anisotropies as potential sources of these anomalies. Aims: In this work we investigate simultaneously the impact of the method used to account for masked regions as well as the impact of the integrated Sachs-Wolfe (ISW) effect, which is the large-scale secondary anisotropy most likely to affect the CMB anomalies. In this sense, our work is an update of previous works. Our aim is to identify trends in CMB data from different years and with different mask treatments. Methods: We reconstruct the ISW signal due to 2 Micron All-Sky Survey (2MASS) and NRAO VLA Sky Survey (NVSS) galaxies, effectively reconstructing the low-redshift ISW signal out to z ~ 1. We account for regions of missing data using the sparse inpainting technique. We test sparse inpainting of the CMB, large scale structure and ISW and find that it constitutes a bias-free reconstruction method suitable to study large-scale statistical isotropy and the ISW effect. Results: We focus on three large-scale CMB anomalies: the low quadrupole, the quadrupole/octopole alignment, and the octopole planarity. After sparse inpainting, the low quadrupole becomes more anomalous, whilst the quadrupole/octopole alignment becomes less anomalous. The significance of the low quadrupole is unchanged after subtraction of the ISW effect, while the trend amongst the CMB maps is that both the low quadrupole and the quadrupole/octopole alignment have reduced significance, yet other hypotheses remain possible as well (e.g. exotic physics). Our results also suggest that both of these anomalies may be due to the quadrupole alone. 
While the octopole planarity significance is reduced after inpainting and after ISW subtraction, however

  5. A Review of Large-Scale Fracture Experiments Relevant to Pressure Vessel Integrity Under Pressurized Thermal Shock Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pugh, C.E.

    2001-01-29

    Numerous large-scale fracture experiments have been performed over the past thirty years to advance fracture mechanics methodologies applicable to thick-wall pressure vessels. This report first identifies major factors important to nuclear reactor pressure vessel (RPV) integrity under pressurized thermal shock (PTS) conditions. It then covers 20 key experiments that have contributed to identifying fracture behavior of RPVs and to validating applicable assessment methodologies. The experiments are categorized according to four types of specimens: (1) cylindrical specimens, (2) pressurized vessels, (3) large plate specimens, and (4) thick beam specimens. These experiments were performed in laboratories in six different countries. This reportmore » serves as a summary of those experiments, and provides a guide to references for detailed information.« less

  6. Disposable photonic integrated circuits for evanescent wave sensors by ultra-high volume roll-to-roll method.

    PubMed

    Aikio, Sanna; Hiltunen, Jussi; Hiitola-Keinänen, Johanna; Hiltunen, Marianne; Kontturi, Ville; Siitonen, Samuli; Puustinen, Jarkko; Karioja, Pentti

    2016-02-08

    Flexible photonic integrated circuit technology is an emerging field expanding the usage possibilities of photonics, particularly in sensor applications, by enabling the realization of conformable devices and introduction of new alternative production methods. Here, we demonstrate that disposable polymeric photonic integrated circuit devices can be produced in lengths of hundreds of meters by ultra-high volume roll-to-roll methods on a flexible carrier. Attenuation properties of hundreds of individual devices were measured confirming that waveguides with good and repeatable performance were fabricated. We also demonstrate the applicability of the devices for the evanescent wave sensing of ambient refractive index. The production of integrated photonic devices using ultra-high volume fabrication, in a similar manner as paper is produced, may inherently expand methods of manufacturing low-cost disposable photonic integrated circuits for a wide range of sensor applications.

  7. 1 million-Q optomechanical microdisk resonators for sensing with very large scale integration

    NASA Astrophysics Data System (ADS)

    Hermouet, M.; Sansa, M.; Banniard, L.; Fafin, A.; Gely, M.; Allain, P. E.; Santos, E. Gil; Favero, I.; Alava, T.; Jourdan, G.; Hentz, S.

    2018-02-01

    Cavity optomechanics have become a promising route towards the development of ultrasensitive sensors for a wide range of applications including mass, chemical and biological sensing. In this study, we demonstrate the potential of Very Large Scale Integration (VLSI) with state-of-the-art low-loss performance silicon optomechanical microdisks for sensing applications. We report microdisks exhibiting optical Whispering Gallery Modes (WGM) with 1 million quality factors, yielding high displacement sensitivity and strong coupling between optical WGMs and in-plane mechanical Radial Breathing Modes (RBM). Such high-Q microdisks with mechanical resonance frequencies in the 102 MHz range were fabricated on 200 mm wafers with Variable Shape Electron Beam lithography. Benefiting from ultrasensitive readout, their Brownian motion could be resolved with good Signal-to-Noise ratio at ambient pressure, as well as in liquid, despite high frequency operation and large fluidic damping: the mechanical quality factor reduced from few 103 in air to 10's in liquid, and the mechanical resonance frequency shifted down by a few percent. Proceeding one step further, we performed an all-optical operation of the resonators in air using a pump-probe scheme. Our results show our VLSI process is a viable approach for the next generation of sensors operating in vacuum, gas or liquid phase.

  8. Ultra High Bypass Integrated System Test

    NASA Image and Video Library

    2015-09-14

    NASA’s Environmentally Responsible Aviation Project, in collaboration with the Federal Aviation Administration (FAA) and Pratt & Whitney, completed testing of an Ultra High Bypass Ratio Turbofan Model in the 9’ x 15’ Low Speed Wind Tunnel at NASA Glenn Research Center. The fan model is representative of the next generation of efficient and quiet Ultra High Bypass Ratio Turbofan Engine designs.

  9. Monolithic Ge-on-Si lasers for large-scale electronic-photonic integration

    NASA Astrophysics Data System (ADS)

    Liu, Jifeng; Kimerling, Lionel C.; Michel, Jurgen

    2012-09-01

    A silicon-based monolithic laser source has long been envisioned as a key enabling component for large-scale electronic-photonic integration in future generations of high-performance computation and communication systems. In this paper we present a comprehensive review on the development of monolithic Ge-on-Si lasers for this application. Starting with a historical review of light emission from the direct gap transition of Ge dating back to the 1960s, we focus on the rapid progress in band-engineered Ge-on-Si lasers in the past five years after a nearly 30-year gap in this research field. Ge has become an interesting candidate for active devices in Si photonics in the past decade due to its pseudo-direct gap behavior and compatibility with Si complementary metal oxide semiconductor (CMOS) processing. In 2007, we proposed combing tensile strain with n-type doping to compensate the energy difference between the direct and indirect band gap of Ge, thereby achieving net optical gain for CMOS-compatible diode lasers. Here we systematically present theoretical modeling, material growth methods, spontaneous emission, optical gain, and lasing under optical and electrical pumping from band-engineered Ge-on-Si, culminated by recently demonstrated electrically pumped Ge-on-Si lasers with >1 mW output in the communication wavelength window of 1500-1700 nm. The broad gain spectrum enables on-chip wavelength division multiplexing. A unique feature of band-engineered pseudo-direct gap Ge light emitters is that the emission intensity increases with temperature, exactly opposite to conventional direct gap semiconductor light-emitting devices. This extraordinary thermal anti-quenching behavior greatly facilitates monolithic integration on Si microchips where temperatures can reach up to 80 °C during operation. The same band-engineering approach can be extended to other pseudo-direct gap semiconductors, allowing us to achieve efficient light emission at wavelengths previously

  10. Visual influence on path integration in darkness indicates a multimodal representation of large-scale space

    PubMed Central

    Tcheang, Lili; Bülthoff, Heinrich H.; Burgess, Neil

    2011-01-01

    Our ability to return to the start of a route recently performed in darkness is thought to reflect path integration of motion-related information. Here we provide evidence that motion-related interoceptive representations (proprioceptive, vestibular, and motor efference copy) combine with visual representations to form a single multimodal representation guiding navigation. We used immersive virtual reality to decouple visual input from motion-related interoception by manipulating the rotation or translation gain of the visual projection. First, participants walked an outbound path with both visual and interoceptive input, and returned to the start in darkness, demonstrating the influences of both visual and interoceptive information in a virtual reality environment. Next, participants adapted to visual rotation gains in the virtual environment, and then performed the path integration task entirely in darkness. Our findings were accurately predicted by a quantitative model in which visual and interoceptive inputs combine into a single multimodal representation guiding navigation, and are incompatible with a model of separate visual and interoceptive influences on action (in which path integration in darkness must rely solely on interoceptive representations). Overall, our findings suggest that a combined multimodal representation guides large-scale navigation, consistent with a role for visual imagery or a cognitive map. PMID:21199934

  11. Large-scale PACS implementation.

    PubMed

    Carrino, J A; Unkel, P J; Miller, I D; Bowser, C L; Freckleton, M W; Johnson, T G

    1998-08-01

    The transition to filmless radiology is a much more formidable task than making the request for proposal to purchase a (Picture Archiving and Communications System) PACS. The Department of Defense and the Veterans Administration have been pioneers in the transformation of medical diagnostic imaging to the electronic environment. Many civilian sites are expected to implement large-scale PACS in the next five to ten years. This presentation will related the empirical insights gleaned at our institution from a large-scale PACS implementation. Our PACS integration was introduced into a fully operational department (not a new hospital) in which work flow had to continue with minimal impact. Impediments to user acceptance will be addressed. The critical components of this enormous task will be discussed. The topics covered during this session will include issues such as phased implementation, DICOM (digital imaging and communications in medicine) standard-based interaction of devices, hospital information system (HIS)/radiology information system (RIS) interface, user approval, networking, workstation deployment and backup procedures. The presentation will make specific suggestions regarding the implementation team, operating instructions, quality control (QC), training and education. The concept of identifying key functional areas is relevant to transitioning the facility to be entirely on line. Special attention must be paid to specific functional areas such as the operating rooms and trauma rooms where the clinical requirements may not match the PACS capabilities. The printing of films may be necessary for certain circumstances. The integration of teleradiology and remote clinics into a PACS is a salient topic with respect to the overall role of the radiologists providing rapid consultation. A Web-based server allows a clinician to review images and reports on a desk-top (personal) computer and thus reduce the number of dedicated PACS review workstations. 
This session

  12. Ultra-Structure database design methodology for managing systems biology data and analyses

    PubMed Central

    Maier, Christopher W; Long, Jeffrey G; Hemminger, Bradley M; Giddings, Morgan C

    2009-01-01

    Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogenous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion We find Ultra-Structure offers

  13. A unified large/small-scale dynamo in helical turbulence

    NASA Astrophysics Data System (ADS)

    Bhat, Pallavi; Subramanian, Kandaswamy; Brandenburg, Axel

    2016-09-01

    We use high resolution direct numerical simulations (DNS) to show that helical turbulence can generate significant large-scale fields even in the presence of strong small-scale dynamo action. During the kinematic stage, the unified large/small-scale dynamo grows fields with a shape-invariant eigenfunction, with most power peaked at small scales or large k, as in Subramanian & Brandenburg. Nevertheless, the large-scale field can be clearly detected as an excess power at small k in the negatively polarized component of the energy spectrum for a forcing with positively polarized waves. Its strength overline{B}, relative to the total rms field Brms, decreases with increasing magnetic Reynolds number, ReM. However, as the Lorentz force becomes important, the field generated by the unified dynamo orders itself by saturating on successively larger scales. The magnetic integral scale for the positively polarized waves, characterizing the small-scale field, increases significantly from the kinematic stage to saturation. This implies that the small-scale field becomes as coherent as possible for a given forcing scale, which averts the ReM-dependent quenching of overline{B}/B_rms. These results are obtained for 10243 DNS with magnetic Prandtl numbers of PrM = 0.1 and 10. For PrM = 0.1, overline{B}/B_rms grows from about 0.04 to about 0.4 at saturation, aided in the final stages by helicity dissipation. For PrM = 10, overline{B}/B_rms grows from much less than 0.01 to values of the order the 0.2. Our results confirm that there is a unified large/small-scale dynamo in helical turbulence.

  14. Large Scale Integrated Circuits for Military Applications.

    DTIC Science & Technology

    1977-05-01

    The economic incentive for narrowing this gap is examined. (U) Two categories of cost are analyzed: the direct life cycle cost of the integrated circuit... dependence of these costs on the physical characteristics of the integrated circuits is discussed. (U) The economic and physical characteristics of...

  15. Compact component for integrated quantum optic processing

    PubMed Central

    Sahu, Partha Pratim

    2015-01-01

    Quantum interference is indispensable to derive integrated quantum optic technologies (1–2). For further progress in large scale integration of quantum optic circuits, we introduce, for the first time, a two mode interference (TMI) coupler as an ultra compact component. The quantum interference varying with coupling length, corresponding to the coupling ratio, is studied, and the largest HOM dip, with peak visibility ~0.963 ± 0.009, is found at the half coupling length of the TMI coupler. Our results also demonstrate complex quantum interference with high fabrication tolerance and quantum visibility in the TMI coupler. PMID:26584759
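The reported peak visibility at the half coupling length is consistent with the textbook behavior of Hong-Ou-Mandel interference at a lossless two-port coupler, where visibility V = 2RT/(R² + T²) with T = 1 − R peaks at the 50:50 point. This is a generic sketch of that standard formula, not the paper's TMI device model:

```python
import numpy as np

# Hong-Ou-Mandel dip visibility for a lossless two-port coupler with power
# coupling ratio R (textbook result): V = 2RT / (R^2 + T^2), T = 1 - R.
def hom_visibility(R):
    T = 1.0 - R
    return 2 * R * T / (R**2 + T**2)

ratios = np.linspace(0.01, 0.99, 99)
best = ratios[np.argmax(hom_visibility(ratios))]
print(round(best, 2))  # 0.5 — maximal visibility at the 50:50 ratio
```

In a TMI coupler the 50:50 coupling ratio occurs at the half coupling length, which is why the deepest dip appears there.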

  16. Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Dean N.; Silva, Claudio

    2013-09-30

    For the past three years, a large analysis and visualization effort—funded by the Department of Energy’s Office of Biological and Environmental Research (BER), the National Aeronautics and Space Administration (NASA), and the National Oceanic and Atmospheric Administration (NOAA)—has brought together a wide variety of industry-standard scientific computing libraries and applications to create Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) to serve the global climate simulation and observational research communities. To support interactive analysis and visualization, all components connect through a provenance application programming interface to capture meaningful history and workflow. Components can be loosely coupled into the framework for fast integration or tightly coupled for greater system functionality and communication with other components. The overarching goal of UV-CDAT is to provide a new paradigm for access to and analysis of massive, distributed scientific data collections by leveraging distributed data architectures located throughout the world. The UV-CDAT framework addresses challenges in analysis and visualization and incorporates new opportunities, including parallelism for better efficiency, higher speed, and more accurate scientific inferences. Today, it provides more than 600 users access to more analysis and visualization products than any other single source.
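The idea of components connecting through a provenance interface can be sketched generically: every analysis step is routed through a wrapper that records what ran, with which inputs, and when. This is a hypothetical miniature, not UV-CDAT's actual (much richer) provenance API:

```python
import functools
import time

# Minimal provenance-capture sketch (hypothetical API; UV-CDAT's real
# interface records far more context). Each wrapped call appends a record.
PROVENANCE = []

def record_provenance(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        PROVENANCE.append({
            "step": func.__name__,
            "args": args,
            "kwargs": kwargs,
            "timestamp": time.time(),
        })
        return result
    return wrapper

@record_provenance
def regrid(dataset, resolution):
    return f"{dataset}@{resolution}"

@record_provenance
def plot(dataset):
    return f"plot({dataset})"

plot(regrid("tas_Amon", "1x1"))
print([p["step"] for p in PROVENANCE])  # ['regrid', 'plot']
```

Because every component passes through the same wrapper, the recorded sequence reconstructs the workflow history that the abstract describes.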

  17. The XMM Large Scale Structure Survey

    NASA Astrophysics Data System (ADS)

    Pierre, Marguerite

    2005-10-01

    We propose to complete, by an additional 5 deg2, the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg2. The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.

  18. Ultra-Fine Scale Spatially-Integrated Mapping of Habitat and Occupancy Using Structure-From-Motion.

    PubMed

    McDowall, Philip; Lynch, Heather J

    2017-01-01

    Organisms respond to and often simultaneously modify their environment. While these interactions are apparent at the landscape extent, the driving mechanisms often occur at very fine spatial scales. Structure-from-Motion (SfM), a computer vision technique, allows the simultaneous mapping of organisms and fine scale habitat, and will greatly improve our understanding of habitat suitability, ecophysiology, and the bi-directional relationship between geomorphology and habitat use. SfM can be used to create high-resolution (centimeter-scale) three-dimensional (3D) habitat models at low cost. These models can capture the abiotic conditions formed by terrain and simultaneously record the position of individual organisms within that terrain. While coloniality is common in seabird species, we have a poor understanding of the extent to which dense breeding aggregations are driven by fine-scale active aggregation or limited suitable habitat. We demonstrate the use of SfM for fine-scale habitat suitability by reconstructing the locations of nests in a gentoo penguin colony and fitting models that explicitly account for conspecific attraction. The resulting digital elevation models (DEMs) are used as covariates in an inhomogeneous hybrid point process model. We find that gentoo penguin nest site selection is a function of the topography of the landscape, but that nests are far more aggregated than would be expected based on terrain alone, suggesting a strong role of behavioral aggregation in driving coloniality in this species. This integrated mapping of organisms and fine scale habitat will greatly improve our understanding of fine-scale habitat suitability, ecophysiology, and the complex bi-directional relationship between geomorphology and habitat use.
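The paper's central comparison, whether nests aggregate more than terrain alone predicts, can be illustrated with a much simpler spatial statistic: the Clark-Evans index, the ratio of the observed mean nearest-neighbour distance to its expectation under complete spatial randomness. This is a stand-in for the inhomogeneous hybrid point process model actually used, with synthetic coordinates:

```python
import numpy as np

# Clark-Evans aggregation index R = mean nearest-neighbour distance /
# expected distance under complete spatial randomness (0.5 / sqrt(density)).
# R < 1 indicates aggregation; purely illustrative, not the paper's model.
def clark_evans(points, area):
    points = np.asarray(points, dtype=float)
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # ignore self-distances
    observed = d.min(axis=1).mean()
    expected = 0.5 / np.sqrt(n / area)
    return observed / expected

rng = np.random.default_rng(0)
random_pts = rng.uniform(0, 10, size=(200, 2))   # CSR-like scatter
clustered = rng.normal(5, 0.5, size=(200, 2))    # tight aggregation
print(clark_evans(clustered, 100.0) < clark_evans(random_pts, 100.0))  # True
```

The hybrid point process in the paper goes further by conditioning on the SfM-derived terrain covariates, so that the residual aggregation can be attributed to conspecific attraction rather than shared habitat preference.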

  19. Ultra-Fine Scale Spatially-Integrated Mapping of Habitat and Occupancy Using Structure-From-Motion

    PubMed Central

    McDowall, Philip; Lynch, Heather J.

    2017-01-01

    Organisms respond to and often simultaneously modify their environment. While these interactions are apparent at the landscape extent, the driving mechanisms often occur at very fine spatial scales. Structure-from-Motion (SfM), a computer vision technique, allows the simultaneous mapping of organisms and fine scale habitat, and will greatly improve our understanding of habitat suitability, ecophysiology, and the bi-directional relationship between geomorphology and habitat use. SfM can be used to create high-resolution (centimeter-scale) three-dimensional (3D) habitat models at low cost. These models can capture the abiotic conditions formed by terrain and simultaneously record the position of individual organisms within that terrain. While coloniality is common in seabird species, we have a poor understanding of the extent to which dense breeding aggregations are driven by fine-scale active aggregation or limited suitable habitat. We demonstrate the use of SfM for fine-scale habitat suitability by reconstructing the locations of nests in a gentoo penguin colony and fitting models that explicitly account for conspecific attraction. The resulting digital elevation models (DEMs) are used as covariates in an inhomogeneous hybrid point process model. We find that gentoo penguin nest site selection is a function of the topography of the landscape, but that nests are far more aggregated than would be expected based on terrain alone, suggesting a strong role of behavioral aggregation in driving coloniality in this species. This integrated mapping of organisms and fine scale habitat will greatly improve our understanding of fine-scale habitat suitability, ecophysiology, and the complex bi-directional relationship between geomorphology and habitat use. PMID:28076351

  20. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta-programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super-scalability and portability of the approach are demonstrated on several parallel computers.

  1. Networking for large-scale science: infrastructure, provisioning, transport and application mapping

    NASA Astrophysics Data System (ADS)

    Rao, Nageswara S.; Carter, Steven M.; Wu, Qishi; Wing, William R.; Zhu, Mengxia; Mezzacappa, Anthony; Veeraraghavan, Malathi; Blondin, John M.

    2005-01-01

    Large-scale science computations and experiments require unprecedented network capabilities in the form of large bandwidth and dynamically stable connections to support data transfers, interactive visualizations, and monitoring and steering operations. A number of component technologies dealing with the infrastructure, provisioning, transport and application mappings must be developed and/or optimized to achieve these capabilities. We present a brief account of the following technologies that contribute toward achieving these network capabilities: (a) DOE UltraScienceNet and NSF CHEETAH network testbeds that provide on-demand and scheduled dedicated network connections; (b) experimental results on transport protocols that achieve close to 100% utilization on dedicated 1 Gbps wide-area channels; (c) a scheme for optimally mapping a visualization pipeline onto a network to minimize the end-to-end delays; and (d) interconnect configurations and protocols that provide multiple Gbps flows from a Cray X1 to external hosts.
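Item (c), mapping a pipeline onto a network to minimize end-to-end delay, admits a simple dynamic-programming formulation: for each stage and candidate host, keep the cheapest way to have reached that host. The sketch below uses invented delay numbers and is a generic illustration, not the authors' scheme:

```python
# Dynamic-programming sketch of assigning pipeline stages to network hosts
# so total compute + transfer delay is minimal (all numbers invented).
def map_pipeline(compute, transfer):
    """compute[s][h]: delay of stage s on host h;
    transfer[p][h]: link delay from host p to host h."""
    n_stages, n_hosts = len(compute), len(compute[0])
    best = list(compute[0])   # best[h]: min delay with stage 0 on host h
    back = []
    for s in range(1, n_stages):
        new_best, bk = [], []
        for h in range(n_hosts):
            p = min(range(n_hosts), key=lambda q: best[q] + transfer[q][h])
            bk.append(p)
            new_best.append(best[p] + transfer[p][h] + compute[s][h])
        best = new_best
        back.append(bk)
    last = min(range(n_hosts), key=lambda h: best[h])
    assignment = [last]
    for bk in reversed(back):           # backtrack the optimal hosts
        assignment.append(bk[assignment[-1]])
    assignment.reverse()
    return best[assignment[-1]], assignment

compute = [[2, 9], [9, 2], [2, 9]]     # 3 stages on 2 hosts
transfer = [[0, 1], [1, 0]]            # unit delay between distinct hosts
delay, hosts = map_pipeline(compute, transfer)
print(delay, hosts)  # 8 [0, 1, 0]
```

The DP runs in O(stages × hosts²), which is why such mappings can be recomputed quickly as link delays change.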

  2. Ultra Efficient Engine Technology Systems Integration and Environmental Assessment

    NASA Technical Reports Server (NTRS)

    Daggett, David L.; Geiselhart, Karl A. (Technical Monitor)

    2002-01-01

    This study documents the design and analysis of four types of advanced technology commercial transport airplane configurations (small, medium, large, and very large) with an assumed technology readiness date of 2010. These airplane configurations were used as a platform to evaluate the design concept and installed performance of advanced technology engines being developed under the NASA Ultra Efficient Engine Technology (UEET) program. Upon installation of the UEET engines onto the UEET advanced technology airframes, the small and medium airplanes both achieved an additional 16% increase in fuel efficiency when using GE advanced turbofan engines. The large airplane achieved an 18% increase in fuel efficiency when using the P&W geared fan engine. The very large airplane (i.e., the BWB), also using P&W geared fan engines, achieved only an additional 16%, which was attributed to a non-optimized airplane/engine combination.

  3. Construction of large scale switch matrix by interconnecting integrated optical switch chips with EDFAs

    NASA Astrophysics Data System (ADS)

    Liao, Mingle; Wu, Baojian; Hou, Jianhong; Qiu, Kun

    2018-03-01

    Large scale optical switches are essential components in optical communication networks. We aim to build up a large scale optical switch matrix by the interconnection of silicon-based optical switch chips using a 3-stage CLOS structure, where EDFAs are needed to compensate for the insertion loss of the chips. The optical signal-to-noise ratio (OSNR) performance of the resulting large scale optical switch matrix is investigated for TE-mode light, and the experimental results are in agreement with the theoretical analysis. We build up a 64×64 switch matrix by use of 16×16 optical switch chips, and the OSNR and receiver sensitivity can be improved by 0.6 dB and 0.2 dB, respectively, by optimizing the gain configuration of the EDFAs.
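For a cascade of amplified stages like the 3-stage CLOS matrix above, noise accumulates stage by stage, and the standard engineering rule is that the reciprocal linear OSNRs add: 1/OSNR_total = Σ 1/OSNR_i. The per-stage OSNR below is an assumed round number, not the paper's measurement:

```python
import math

# Standard cascade rule for amplified links (not the paper's exact model):
# 1/OSNR_total = sum of 1/OSNR_i in linear units.
def cascade_osnr_db(stage_osnrs_db):
    inv_total = sum(10 ** (-osnr / 10.0) for osnr in stage_osnrs_db)
    return -10.0 * math.log10(inv_total)

# Three CLOS stages, each chip+EDFA giving ~30 dB per-stage OSNR (assumed).
print(round(cascade_osnr_db([30.0, 30.0, 30.0]), 2))  # 25.23 dB
```

Three identical stages cost about 10·log10(3) ≈ 4.8 dB relative to one stage, which is why per-stage EDFA gain tuning, as in the abstract, pays off at the matrix output.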

  4. Fabrication of the HIAD Large-Scale Demonstration Assembly

    NASA Technical Reports Server (NTRS)

    Swanson, G. T.; Johnson, R. K.; Hughes, S. J.; DiNonno, J. M.; Cheatwood, F. M.

    2017-01-01

    Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology. This effort has included multiple ground test campaigns and flight tests culminating in the HIAD project's second-generation (Gen-2) deployable aeroshell system and associated analytical tools. NASA's HIAD project team has developed, fabricated, and tested inflatable structures (IS) integrated with a flexible thermal protection system (F-TPS), ranging in diameter from 3-6 m, with cone angles of 60 and 70 deg. In 2015, United Launch Alliance (ULA) announced that they will use a HIAD (10-12 m) as part of their Sensible, Modular, Autonomous Return Technology (SMART) for their upcoming Vulcan rocket. ULA expects SMART reusability, coupled with other advancements for Vulcan, will substantially reduce the cost of access to space. The first booster engine recovery via HIAD is scheduled for 2024. To meet this near-term need, as well as future NASA applications, the HIAD team is investigating taking the technology to the 10-15 m diameter scale. In the last year, many significant development and fabrication efforts have been accomplished, culminating in the construction of a large-scale inflatable structure demonstration assembly. This assembly incorporated the first three tori for a 12 m Mars Human-Scale Pathfinder HIAD conceptual design that was constructed with the current state-of-the-art material set. Numerous design trades and torus fabrication demonstrations preceded this effort. In 2016, three large-scale tori (0.61 m cross-section) and six subscale tori (0.25 m cross-section) were manufactured to demonstrate fabrication techniques using the newest candidate material sets. These tori were tested to evaluate durability and load capacity. This work led to the selection of the inflatable structure's third-generation (Gen-3) structural liner. In late 2016, the three tori required for the large-scale demonstration assembly were fabricated, and then

  5. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics by retaining the efficiency of individual disciplines. Computational domain independence of individual disciplines is maintained using a meta-programming approach. The process integrates disciplines without affecting the combined performance. Results are demonstrated for large scale aerospace problems on several supercomputers. The super-scalability and portability of the approach are demonstrated on several parallel computers.

  6. Data Integration: Charting a Path Forward to 2035

    DTIC Science & Technology

    2011-02-14

    New York, NY: Gotham Books, 2004. Seligman, Len. Mitre Corporation, e-mail interview, 6 Dec 2010. Singer, P.W. Wired for War: The Robotics... articles.aspx (accessed 4 Dec 2010). Ultra-Large-Scale Systems: The Software Challenge of the Future. Study lead Linda Northrup. Pittsburgh, PA: Carnegie Mellon Software...

  7. Towards a uniform and large-scale deposition of MoS2 nanosheets via sulfurization of ultra-thin Mo-based solid films.

    PubMed

    Vangelista, Silvia; Cinquanta, Eugenio; Martella, Christian; Alia, Mario; Longo, Massimo; Lamperti, Alessio; Mantovan, Roberto; Basset, Francesco Basso; Pezzoli, Fabio; Molle, Alessandro

    2016-04-29

    Large-scale integration of MoS2 in electronic devices requires the development of reliable and cost-effective deposition processes, leading to uniform MoS2 layers on a wafer scale. Here we report on the detailed study of the heterogeneous vapor-solid reaction between a pre-deposited molybdenum solid film and sulfur vapor, thus resulting in a controlled growth of MoS2 films onto SiO2/Si substrates with a tunable thickness and cm^2-scale uniformity. Based on Raman spectroscopy and photoluminescence, we show that the degree of crystallinity in the MoS2 layers is dictated by the deposition temperature and thickness. In particular, the MoS2 structural disorder observed at low temperature (<750 °C) and low thickness (two layers) evolves to a more ordered crystalline structure at high temperature (1000 °C) and high thickness (four layers). From an atomic force microscopy investigation prior to and after sulfurization, this parametrical dependence is associated with the inherent granularity of the MoS2 nanosheet that is inherited by the pristine morphology of the pre-deposited Mo film. This work paves the way to a closer control of the synthesis of wafer-scale and atomically thin MoS2, potentially extendable to other transition metal dichalcogenides and hence targeting massive and high-volume production for electronic device manufacturing.

  8. Space transportation booster engine thrust chamber technology, large scale injector

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.

    1993-01-01

    The objective of the Large Scale Injector (LSI) program was to deliver a 21 inch diameter, 600,000 lbf thrust class injector to NASA/MSFC for hot fire testing. The hot fire test program would demonstrate the feasibility and integrity of the full scale injector, including combustion stability, chamber wall compatibility (thermal management), and injector performance. The 21 inch diameter injector was delivered in September of 1991.

  9. Comparison of Cryopreserved Human Sperm between Ultra Rapid Freezing and Slow Programmable Freezing: Effect on Motility, Morphology and DNA Integrity.

    PubMed

    Tongdee, Pattama; Sukprasert, Matchuporn; Satirapod, Chonticha; Wongkularb, Anna; Choktanasiri, Wicham

    2015-05-01

    Cryopreservation of sperm is a common method to preserve male fertility. Reports on sperm freezing suggest that slow programmable freezing causes less change in sperm morphology than freezing in the vapor of liquid nitrogen. Ultra rapid freezing is easy to perform, requires less time and lower cost, and does not demand extensive experience. To compare the effect on sperm motility, morphology and DNA integrity of post-thawed sperm after ultra rapid freezing and slow programmable freezing methods. Experimental study at the laboratory of the infertility unit, Department of Obstetrics and Gynecology, Faculty of Medicine Ramathibodi Hospital. Thirty-seven semen samples with normal semen analysis according to World Health Organization (WHO) 1999 criteria [normal sperm volume (≥ 2 ml), normal sperm concentration (≥ 20 x 10^6/ml) and sperm motility (≥ 50%)]. Semen samples were washed. Then each semen sample was divided into six cryovials. Two cryovials, 0.5 ml each, were cryopreserved by slow programmable freezing. Four 0.25 ml cryovials were cryopreserved by the ultra rapid freezing method. After cryopreservation for 1 month, the thawing process was carried out at room temperature. Main outcomes: sperm motility was determined by Computer-Assisted Semen Analysis (CASA), sperm morphology was determined by eosin-methylene blue staining, and sperm DNA integrity was assessed by TUNEL assay. Sperm motility was reduced significantly by both methods, from 70.4 (9.0)% to 29.1 (12.3)% in slow programmable freezing and to 19.7 (9.8)% in ultra rapid freezing (p < 0.05). Sperm motility decreased significantly more by ultra rapid freezing (p < 0.001). The percentage of normal sperm morphology and DNA integrity were also reduced significantly by both methods. However, no significant difference between the two methods was found (p > 0.05). Cryopreservation of human sperm for 1 month significantly decreased sperm motility, morphology and DNA integrity in both methods. However, sperm motility was decreased more by ultra rapid
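Because each of the 37 samples was split across both freezing methods, the comparison above is a paired one. A minimal paired t-statistic sketch, on synthetic numbers shaped like the reported means and standard deviations, not the study's raw data, shows the form of the test:

```python
import numpy as np

# Paired comparison sketch (synthetic data mimicking the reported summary
# statistics: motility ~70.4% fresh, ~29.1% after slow, ~19.7% after rapid).
rng = np.random.default_rng(1)
fresh = rng.normal(70.4, 9.0, size=37)
slow = fresh - rng.normal(41.3, 8.0, size=37)    # slow programmable freezing
rapid = fresh - rng.normal(50.7, 8.0, size=37)   # ultra rapid freezing

def paired_t(a, b):
    """Paired t statistic for H0: mean(a - b) = 0."""
    d = np.asarray(a) - np.asarray(b)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# A large positive t indicates slow freezing preserves more motility.
print(paired_t(slow, rapid) > 2.0)
```

Pairing within samples removes the large between-donor variability, which is what lets a 37-sample study detect the difference between methods at p < 0.001.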

  10. Energy transfers in large-scale and small-scale dynamos

    NASA Astrophysics Data System (ADS)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) using numerical simulations of MHD turbulence for Pm = 20 (SSD) and Pm = 0.2 (LSD) on a 1024^3 grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic fields at the large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy flux, similar to SSD.

  11. Integrating weather and geotechnical monitoring data for assessing the stability of large scale surface mining operations

    NASA Astrophysics Data System (ADS)

    Steiakakis, Chrysanthos; Agioutantis, Zacharias; Apostolou, Evangelia; Papavgeri, Georgia; Tripolitsiotis, Achilles

    2016-01-01

    The geotechnical challenges for safe slope design in large scale surface mining operations are enormous. Sometimes one degree of slope inclination can significantly reduce the overburden to ore ratio and therefore dramatically improve the economics of the operation, while large scale slope failures may have a significant impact on human lives. Furthermore, adverse weather conditions, such as high precipitation rates, may unfavorably affect the already delicate balance between operations and safety. Geotechnical, weather and production parameters should be systematically monitored and evaluated in order to safely operate such pits. Appropriate data management, processing and storage are critical to ensure timely and informed decisions. This paper presents an integrated data management system which was developed over a number of years, as well as the advantages demonstrated through a specific application. The presented case study illustrates how the high production slopes of a mine that exceed depths of 100-120 m were successfully mined with an average displacement rate of 10-20 mm/day, approaching a slow to moderate landslide velocity. Monitoring data of the past four years are included in the database and can be analyzed to produce valuable results. Time-series data correlations of movements, precipitation records, etc. are evaluated and presented in this case study. The results can be used to successfully manage mine operations and ensure the safety of the mine and the workforce.
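The time-series correlation of movements against precipitation records mentioned above is typically done at a range of lags, since slopes respond to rainfall with a delay. A sketch on synthetic series (the assumed 3-day response lag and all magnitudes are invented, not the mine's data):

```python
import numpy as np

# Lagged correlation between daily precipitation and slope displacement
# rate (synthetic one-year series; the real records span four years).
rng = np.random.default_rng(42)
days = 365
rain = rng.gamma(shape=0.5, scale=10.0, size=days)       # mm/day
lag_true = 3                                             # assumed response lag
disp = 15.0 + 0.4 * np.roll(rain, lag_true) + rng.normal(0, 2.0, size=days)

def best_lag(x, y, max_lag=10):
    """Lag of y behind x with the highest Pearson correlation."""
    corrs = [np.corrcoef(x[:len(x) - L], y[L:])[0, 1]
             for L in range(max_lag + 1)]
    return int(np.argmax(corrs))

print(best_lag(rain, disp))  # recovers the assumed 3-day lag
```

Identifying such a lag from the monitoring database is what turns raw records into an early-warning rule: displacement alarms can be anticipated a few days after heavy precipitation.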

  12. A Proposal for Six Sigma Integration for Large-Scale Production of Penicillin G and Subsequent Conversion to 6-APA.

    PubMed

    Nandi, Anirban; Pan, Sharadwata; Potumarthi, Ravichandra; Danquah, Michael K; Sarethy, Indira P

    2014-01-01

    Six Sigma methodology has been successfully applied to daily operations by several leading global private firms including GE and Motorola, to leverage their net profits. Comparatively, limited studies have been conducted to find out whether this highly successful methodology can be applied to research and development (R&D). In the current study, we have reviewed and proposed a process for a probable integration of Six Sigma methodology to large-scale production of Penicillin G and its subsequent conversion to 6-aminopenicillanic acid (6-APA). It is anticipated that the important aspects of quality control and quality assurance will highly benefit from the integration of Six Sigma methodology in mass production of Penicillin G and/or its conversion to 6-APA.
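A concrete entry point for applying Six Sigma quality control to a fermentation step is the process capability index Cpk, a core Six Sigma metric. The specification limits and titre values below are invented for illustration, not taken from any Penicillin G process:

```python
import statistics

# Process capability index, a standard Six Sigma metric:
# Cpk = min(USL - mean, mean - LSL) / (3 * sigma).
def cpk(samples, lsl, usl):
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return min(usl - mu, mu - lsl) / (3 * sigma)

# Hypothetical titres for a Penicillin G fermentation batch (made-up units).
titres = [41.2, 40.8, 41.5, 40.9, 41.1, 41.3, 40.7, 41.0]
print(round(cpk(titres, lsl=39.0, usl=43.0), 2))  # 2.42 for this sample
```

A Cpk of 1.33 is a common minimum target and 2.0 corresponds to "six sigma" capability, so tracking this index per batch is one way the quality-assurance benefits anticipated in the abstract would be quantified.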

  13. A Proposal for Six Sigma Integration for Large-Scale Production of Penicillin G and Subsequent Conversion to 6-APA

    PubMed Central

    Nandi, Anirban; Danquah, Michael K.

    2014-01-01

    Six Sigma methodology has been successfully applied to daily operations by several leading global private firms including GE and Motorola, to leverage their net profits. Comparatively, limited studies have been conducted to find out whether this highly successful methodology can be applied to research and development (R&D). In the current study, we have reviewed and proposed a process for a probable integration of Six Sigma methodology to large-scale production of Penicillin G and its subsequent conversion to 6-aminopenicillanic acid (6-APA). It is anticipated that the important aspects of quality control and quality assurance will highly benefit from the integration of Six Sigma methodology in mass production of Penicillin G and/or its conversion to 6-APA. PMID:25057428

  14. Ultra-High Density Single Nanometer-Scale Anodic Alumina Nanofibers Fabricated by Pyrophosphoric Acid Anodizing

    NASA Astrophysics Data System (ADS)

    Kikuchi, Tatsuya; Nishinaga, Osamu; Nakajima, Daiki; Kawashima, Jun; Natsui, Shungo; Sakaguchi, Norihito; Suzuki, Ryosuke O.

    2014-12-01

    Anodic oxide fabricated by anodizing has been widely used for nanostructural engineering, but the nanomorphology is limited to only two oxides: anodic barrier and porous oxides. Therefore, the discovery of an additional anodic oxide with a unique nanofeature would expand the applicability of anodizing. Here we demonstrate the fabrication of a third-generation anodic oxide, specifically, anodic alumina nanofibers, by anodizing in a new electrolyte, pyrophosphoric acid. Ultra-high density single nanometer-scale anodic alumina nanofibers (10^10 nanofibers/cm^2) consisting of an amorphous, pure aluminum oxide were successfully fabricated via pyrophosphoric acid anodizing. The nanomorphologies of the anodic nanofibers can be controlled by the electrochemical conditions. Anodic tungsten oxide nanofibers can also be fabricated by pyrophosphoric acid anodizing. The aluminum surface covered by the anodic alumina nanofibers exhibited ultra-fast superhydrophilic behavior, with a contact angle of less than 1°, within 1 second. Such ultra-narrow nanofibers can be used for various nanoapplications including catalysts, wettability control, and electronic devices.

  15. Ultra-High Density Single Nanometer-Scale Anodic Alumina Nanofibers Fabricated by Pyrophosphoric Acid Anodizing

    PubMed Central

    Kikuchi, Tatsuya; Nishinaga, Osamu; Nakajima, Daiki; Kawashima, Jun; Natsui, Shungo; Sakaguchi, Norihito; Suzuki, Ryosuke O.

    2014-01-01

    Anodic oxide fabricated by anodizing has been widely used for nanostructural engineering, but the nanomorphology is limited to only two oxides: anodic barrier and porous oxides. Therefore, the discovery of an additional anodic oxide with a unique nanofeature would expand the applicability of anodizing. Here we demonstrate the fabrication of a third-generation anodic oxide, specifically, anodic alumina nanofibers, by anodizing in a new electrolyte, pyrophosphoric acid. Ultra-high density single nanometer-scale anodic alumina nanofibers (10^10 nanofibers/cm^2) consisting of an amorphous, pure aluminum oxide were successfully fabricated via pyrophosphoric acid anodizing. The nanomorphologies of the anodic nanofibers can be controlled by the electrochemical conditions. Anodic tungsten oxide nanofibers can also be fabricated by pyrophosphoric acid anodizing. The aluminum surface covered by the anodic alumina nanofibers exhibited ultra-fast superhydrophilic behavior, with a contact angle of less than 1°, within 1 second. Such ultra-narrow nanofibers can be used for various nanoapplications including catalysts, wettability control, and electronic devices. PMID:25491282

  16. Atypical language laterality is associated with large-scale disruption of network integration in children with intractable focal epilepsy.

    PubMed

    Ibrahim, George M; Morgan, Benjamin R; Doesburg, Sam M; Taylor, Margot J; Pang, Elizabeth W; Donner, Elizabeth; Go, Cristina Y; Rutka, James T; Snead, O Carter

    2015-04-01

    Epilepsy is associated with disruption of integration in distributed networks, together with altered localization for functions such as expressive language. The relation between atypical network connectivity and altered localization is unknown. In the current study we tested whether atypical expressive language laterality was associated with the alteration of large-scale network integration in children with medically-intractable localization-related epilepsy (LRE). Twenty-three right-handed children (age range 8-17) with medically-intractable LRE performed a verb generation task in fMRI. Language network activation was identified and the Laterality index (LI) was calculated within the pars triangularis and pars opercularis. Resting-state data from the same cohort were subjected to independent component analysis. Dual regression was used to identify associations between resting-state integration and LI values. Higher positive values of the LI, indicating typical language localization were associated with stronger functional integration of various networks including the default mode network (DMN). The normally symmetric resting-state networks showed a pattern of lateralized connectivity mirroring that of language function. The association between atypical language localization and network integration implies a widespread disruption of neural network development. These findings may inform the interpretation of localization studies by providing novel insights into reorganization of neural networks in epilepsy.
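The laterality index used above is conventionally computed as LI = (L − R)/(L + R) from left- and right-hemisphere activation measures, giving values in [−1, 1] with positive values indicating typical left-lateralized language. The voxel counts below are invented for illustration:

```python
# Laterality index as typically computed from fMRI activation measures
# (the formula is standard; these voxel counts are hypothetical):
# LI = (L - R) / (L + R), in [-1, 1]; positive = left (typical) language.
def laterality_index(left_voxels, right_voxels):
    return (left_voxels - right_voxels) / (left_voxels + right_voxels)

print(laterality_index(820, 310))   # positive: typical left lateralization
print(laterality_index(300, 640))   # negative: atypical lateralization
```

In the study, this index was computed within the pars triangularis and pars opercularis and then related to resting-state network integration via dual regression.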

  17. Large Scale Integration of Renewable Power Sources into the Vietnamese Power System

    NASA Astrophysics Data System (ADS)

    Kies, Alexander; Schyska, Bruno; Thanh Viet, Dinh; von Bremen, Lueder; Heinemann, Detlev; Schramm, Stefan

    2017-04-01

    The Vietnamese power system is expected to expand considerably in upcoming decades. Installed power capacities are projected to grow from 39 GW in 2015 to 129.5 GW by 2030. Installed wind power capacities are expected to grow to 6 GW (0.8 GW in 2015) and solar power capacities to 12 GW (0.85 GW in 2015). This goes hand in hand with an increase of the renewable penetration in the power mix from 1.3% from wind and photovoltaics (PV) in 2015 to 5.4% by 2030. The overall potential for wind power in Vietnam is estimated to be around 24 GW. Moreover, the up-scaling of renewable energy sources was formulated as one of the prioritized targets of the Vietnamese government in the National Power Development Plan VII. In this work, we investigate the transition of the Vietnamese power system towards high shares of renewables. For this purpose, we jointly optimise the expansion of renewable generation facilities for wind and PV, and the transmission grid, within renewable build-up pathways until 2030 and beyond. To simulate the Vietnamese power system and its generation from renewable sources, we use highly spatially and temporally resolved historical weather and load data and the open source modelling toolbox Python for Power System Analysis (PyPSA). We show that the highest potential of renewable generation for wind and PV is observed in southern Vietnam and discuss the resulting need for transmission grid extensions in dependence on the optimal pathway. Furthermore, we show that the smoothing effect of wind power has several considerable beneficial effects and that the Vietnamese hydro power potential can be efficiently used to provide balancing opportunities. This work is part of the R&D project "Analysis of the Large Scale Integration of Renewable Power into the Future Vietnamese Power System" (GIZ, 2016-2018).
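The joint wind/PV expansion optimisation described above is, at its core, a linear program: choose capacities minimising cost subject to meeting demand in every time step given capacity factors. The toy LP below is a generic stand-in for the PyPSA model (all capacity factors, demands, and costs are invented):

```python
import numpy as np
from scipy.optimize import linprog

# Toy capacity-expansion LP in the spirit of joint wind/PV optimisation
# (a stand-in for the PyPSA model; all numbers are invented):
#   minimise  c_wind * x_wind + c_pv * x_pv
#   s.t.      cf_wind[t] * x_wind + cf_pv[t] * x_pv >= demand[t]  for all t.
cf = np.array([
    [0.6, 0.0],   # night: wind only
    [0.3, 0.8],   # midday: strong solar
    [0.4, 0.3],   # evening shoulder
])
demand = np.array([6.0, 10.0, 8.0])   # GW per representative hour
cost = np.array([1200.0, 700.0])      # relative capital costs

# linprog expects A_ub @ x <= b_ub, so negate the >= constraints.
res = linprog(c=cost, A_ub=-cf, b_ub=-demand,
              bounds=[(0, None)] * 2, method="highs")
wind_gw, pv_gw = res.x
print(res.status == 0, round(wind_gw, 2), round(pv_gw, 2))
```

Here the night hour pins the wind capacity (0.6·x_wind ≥ 6 forces 10 GW) and the cheaper PV then covers the remaining constraints; full models such as PyPSA add transmission, storage, and hydro balancing on top of exactly this structure.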

  18. Data Portal for the Library of Integrated Network-based Cellular Signatures (LINCS) program: integrated access to diverse large-scale cellular perturbation response data

    PubMed Central

    Koleti, Amar; Terryn, Raymond; Stathias, Vasileios; Chung, Caty; Cooper, Daniel J; Turner, John P; Vidović, Dušica; Forlin, Michele; Kelley, Tanya T; D’Urso, Alessandro; Allen, Bryce K; Torre, Denis; Jagodnik, Kathleen M; Wang, Lily; Jenkins, Sherry L; Mader, Christopher; Niu, Wen; Fazel, Mehdi; Mahi, Naim; Pilarczyk, Marcin; Clark, Nicholas; Shamsaei, Behrouz; Meller, Jarek; Vasiliauskas, Juozas; Reichard, John; Medvedovic, Mario; Ma’ayan, Avi; Pillai, Ajay

    2018-01-01

The Library of Integrated Network-based Cellular Signatures (LINCS) program is a national consortium funded by the NIH to generate a diverse and extensive reference library of cell-based perturbation-response signatures, along with novel data analytics tools to improve our understanding of human diseases at the systems level. In contrast to other large-scale data generation efforts, LINCS Data and Signature Generation Centers (DSGCs) employ a wide range of assay technologies cataloging diverse cellular responses. Integration of, and unified access to, LINCS data has therefore been particularly challenging. The Big Data to Knowledge (BD2K) LINCS Data Coordination and Integration Center (DCIC) has developed data standards specifications, data processing pipelines, and a suite of end-user software tools to integrate and annotate LINCS-generated data and to make LINCS signatures searchable and usable for different types of users. Here, we describe the LINCS Data Portal (LDP) (http://lincsportal.ccs.miami.edu/), a unified web interface to access datasets generated by the LINCS DSGCs, and its underlying database, the LINCS Data Registry (LDR). LINCS data served on the LDP contain extensive metadata and curated annotations. We highlight the features of the LDP user interface, which is designed to enable search, browsing, exploration, download and analysis of LINCS data and related curated content. PMID:29140462

  19. Adaptive Fault-Tolerant Control of Uncertain Nonlinear Large-Scale Systems With Unknown Dead Zone.

    PubMed

    Chen, Mou; Tao, Gang

    2016-08-01

In this paper, an adaptive neural fault-tolerant control scheme is proposed and analyzed for a class of uncertain nonlinear large-scale systems with unknown dead zone and external disturbances. To tackle the unknown nonlinear interaction functions in the large-scale system, a radial basis function neural network (RBFNN) is employed to approximate them. To further handle the unknown approximation errors and the effects of the unknown dead zone and external disturbances, integrated as compounded disturbances, corresponding disturbance observers are developed to estimate them. Based on the outputs of the RBFNN and the disturbance observers, the adaptive neural fault-tolerant control scheme is designed for uncertain nonlinear large-scale systems using a decentralized backstepping technique. The closed-loop stability of the adaptive control system is rigorously proved via Lyapunov analysis, and satisfactory tracking performance is achieved under the integrated effects of unknown dead zone, actuator faults, and unknown external disturbances. Simulation results for a mass-spring-damper system are given to illustrate the effectiveness of the proposed adaptive neural fault-tolerant control scheme for uncertain nonlinear large-scale systems.
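The RBFNN approximation underlying such schemes can be sketched in a few lines: an unknown nonlinearity f(x) is represented as W^T h(x) with Gaussian basis functions h. The sketch below fits the ideal weights offline by least squares on a hypothetical target function; the paper instead adapts the weights online via a Lyapunov-based update law, which is not reproduced here:

```python
import numpy as np

# Generic RBFNN approximator: f(x) ~ W^T h(x), with Gaussian basis
# h_i(x) = exp(-(x - c_i)^2 / (2 b^2)). The target below is a stand-in
# for an unknown interaction function, not one from the paper.
def rbf_features(x, centers, width):
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

x = np.linspace(-2, 2, 200)
f_true = np.sin(2 * x) + 0.3 * x ** 2           # hypothetical unknown nonlinearity
centers = np.linspace(-2, 2, 15)                # fixed RBF centers
H = rbf_features(x, centers, width=0.4)
W, *_ = np.linalg.lstsq(H, f_true, rcond=None)  # ideal weight vector W*
err = np.max(np.abs(H @ W - f_true))            # residual approximation error
print(f"max approximation error: {err:.4f}")
```

The residual `err` plays the role of the bounded approximation error that the disturbance observers are designed to absorb.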

  20. Large-scale fabrication of micro-lens array by novel end-fly-cutting-servo diamond machining.

    PubMed

    Zhu, Zhiwei; To, Suet; Zhang, Shaojian

    2015-08-10

Fast/slow tool servo (FTS/STS) diamond turning is a very promising technique for the generation of micro-lens arrays (MLAs). However, it remains a challenge to process MLAs at large scale due to certain inherent limitations of the technique. In the present study, a novel ultra-precision diamond cutting method, the end-fly-cutting-servo (EFCS) system, is adopted and investigated for large-scale generation of MLAs. After a detailed discussion of its characteristic advantages for processing MLAs, an optimal toolpath generation strategy for the EFCS is developed with consideration of the geometry and installation pose of the diamond tool. A typical aspheric MLA over a large area is experimentally fabricated, and the resulting form accuracy, surface micro-topography and machining efficiency are critically investigated. The results indicate that an MLA with homogeneous quality over the whole area is obtained. Besides, high machining efficiency, an extremely small number of control points for the toolpath, and optimal use of the machine tool's system dynamics during the whole cutting process are simultaneously achieved.

  1. Hierarchical Engine for Large-scale Infrastructure Co-Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-04-24

HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.
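The co-iteration idea mentioned above can be illustrated without the HELICS API itself: at each time step, two coupled models are repeatedly solved with each other's latest interface values until those values stop changing. The two model functions below are hypothetical stand-ins, not HELICS calls:

```python
# Toy co-iteration between two coupled "federates" at a single time step:
# each model consumes the other's last output until the shared interface
# variables converge. Illustrative only; HELICS manages this exchange
# across separate processes and tools.
def power_model(comm_delay):
    # hypothetical: bus voltage degrades with control-signal latency
    return 1.0 - 0.05 * comm_delay

def comm_model(voltage):
    # hypothetical: network delay depends on traffic driven by voltage
    return 0.2 + 0.1 * voltage

voltage, delay = 1.0, 0.0
for iteration in range(100):
    new_delay = comm_model(voltage)
    new_voltage = power_model(new_delay)
    if abs(new_voltage - voltage) < 1e-9 and abs(new_delay - delay) < 1e-9:
        break
    voltage, delay = new_voltage, new_delay
print(f"converged after {iteration} iterations: V={voltage:.6f}")
```

Because the coupling here is a contraction, the fixed-point iteration converges in a handful of rounds; co-iteration in a real co-simulation plays the same role at every shared time step.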

  2. Ultra-large distance modification of gravity from Lorentz symmetry breaking at the Planck scale

    NASA Astrophysics Data System (ADS)

    Gorbunov, Dmitry S.; Sibiryakov, Sergei M.

    2005-09-01

We present an extension of the Randall-Sundrum model in which, due to spontaneous Lorentz symmetry breaking, the graviton mixes with bulk vector fields and becomes quasilocalized. The masses of the KK modes comprising the four-dimensional graviton are naturally exponentially small. This allows the Lorentz-breaking scale to be pushed as high as a few tenths of the Planck mass. The model contains neither ghosts nor tachyons and does not exhibit the van Dam-Veltman-Zakharov discontinuity. The gravitational attraction between static point masses gradually weakens with increasing separation and is replaced by repulsion (antigravity) at exponentially large distances.

  3. Large Scale System Safety Integration for Human Rated Space Vehicles

    NASA Astrophysics Data System (ADS)

    Massie, Michael J.

    2005-12-01

Since the 1960s man has searched for ways to establish a human presence in space. Unfortunately, the development and operation of human spaceflight vehicles carry significant safety risks that are not always well understood. As a result, the countries with human space programs have felt the pain of loss of lives in the attempt to develop human space travel systems. Integrated System Safety is a process developed through years of experience (since before Apollo and Soyuz) as a way to assess the risks involved in space travel and prevent such losses. The intent of Integrated System Safety is to look at an entire program and put together all the pieces in such a way that the risks can be identified, understood and dispositioned by program management. This process has many inherent challenges, and they need to be explored, understood and addressed. In order to prepare a truly integrated analysis, safety professionals must gain a level of technical understanding of all of the project's pieces and how they interact. Next, they must find a way to present the analysis so the customer can understand the risks and make decisions about managing them. However, every organization in a large-scale project can have different ideas about what is or is not a hazard, what is or is not an appropriate hazard control, and what is or is not adequate hazard control verification. NASA provides some direction on these topics, but interpretations of those instructions can vary widely. Even more challenging is the fact that every individual/organization involved in a project has a different level of risk tolerance. When the discrete hazard controls of the contracts and agreements cannot be met, additional risk must be accepted. However, when one has left the arena of compliance with the known rules, there can no longer be specific ground rules on which to base a decision as to what is acceptable and what is not. The integrator must find common ground between all parties to achieve

  4. Disordered Nanohole Patterns in Metal-Insulator Multilayer for Ultra-broadband Light Absorption: Atomic Layer Deposition for Lithography Free Highly repeatable Large Scale Multilayer Growth.

    PubMed

    Ghobadi, Amir; Hajian, Hodjat; Dereshgi, Sina Abedini; Bozok, Berkay; Butun, Bayram; Ozbay, Ekmel

    2017-11-08

In this paper, we demonstrate a facile, lithography-free, and large-scale-compatible fabrication route to synthesize an ultra-broadband wide-angle perfect absorber based on a metal-insulator-metal-insulator (MIMI) stack design. We first conduct a simulation and theoretical modeling study of the impact of different geometries on overall stack absorption. Then, a Pt-Al2O3 multilayer is fabricated using a single atomic layer deposition (ALD) step that offers high repeatability and simplicity in fabrication. In the best case, we obtain an absorption bandwidth (BW) of 600 nm covering the range of 400 nm-1000 nm. A substantial improvement in the absorption BW is attained by incorporating a plasmonic design into the middle Pt layer. Our characterization results demonstrate that the best configuration has absorption over 0.9 across a wavelength span of 400 nm-1490 nm, a BW 1.8 times broader than that of the planar design. Moreover, the proposed structure retains high absorption at angles as wide as 70°. The results presented here can serve as a beacon for future performance-enhanced multilayer designs, where a simple fabrication step can boost the overall device response without changing its overall thickness or fabrication simplicity.

  5. Modified Fabry-Perot interferometer for displacement measurement in ultra large measuring range

    NASA Astrophysics Data System (ADS)

    Chang, Chung-Ping; Tung, Pi-Cheng; Shyu, Lih-Horng; Wang, Yung-Cheng; Manske, Eberhard

    2013-05-01

Laser interferometers have demonstrated outstanding measuring performance for high-precision positioning and dimensional measurements in the precision industry, especially in length measurement. Due to a non-common-optical-path structure, appreciable measurement errors can easily be induced under ordinary measurement conditions, limiting convenience for in situ industrial applications. To minimize environmental and mechanical effects, a new interferometric displacement measuring system with a common-optical-path structure and tolerance to tilt angles is proposed. With the integration of optomechatronic modules in this novel interferometric system, picometer-order resolution, high precision, and an ultra-large measuring range have been realized. For signal stabilization of the displacement measurement, an automatic gain control module has been proposed, and a self-developed interpolation model has been employed to enhance the resolution. The novel interferometer thus holds the advantages of high resolution and a large measuring range simultaneously. Experimental verification has proven that an actual resolution of 2.5 nm can be achieved over a measuring range of 500 mm. In comparison experiments, the maximal standard deviation of the difference between the self-developed Fabry-Perot interferometer and a reference commercial Michelson interferometer is 0.146 μm over a traveling range of 500 mm. With these prominent measuring characteristics, this should be the largest dynamic measurement range of a Fabry-Perot interferometer to date.

  6. Structured approaches to large-scale systems: Variational integrators for interconnected Lagrange-Dirac systems and structured model reduction on Lie groups

    NASA Astrophysics Data System (ADS)

    Parks, Helen Frances

This dissertation presents two projects related to the structured integration of large-scale mechanical systems. Structured integration uses the considerable differential geometric structure inherent in mechanical motion to inform the design of numerical integration schemes. This process improves the qualitative properties of simulations and becomes especially valuable as a measure of accuracy over long-time simulations, in which traditional Gronwall accuracy estimates lose their meaning. Often, structured integration schemes replicate continuous symmetries and their associated conservation laws at the discrete level. Such is the case for variational integrators, which discretely replicate the process of deriving equations of motion from variational principles. This results in the conservation of momenta associated with symmetries in the discrete system and conservation of a symplectic form when applicable. In the case of Lagrange-Dirac systems, variational integrators preserve a discrete analogue of the Dirac structure preserved in the continuous flow. In the first project of this thesis, we extend Dirac variational integrators to accommodate interconnected systems. We hope this work will find use in the field of control, where a controlled system can be thought of as a "plant" system joined to its controller, and in the approach to very large systems, where modular modeling may prove easier than monolithically modeling the entire system. The second project of the thesis considers a different approach to large systems. Given a detailed model of the full system, can we reduce it to a more computationally efficient model without losing essential geometric structures in the system? Asked without the reference to structure, this is the essential question of the field of model reduction. The answer there has been a resounding yes, with Proper Orthogonal Decomposition (POD) with snapshots rising as one of the most successful methods. Our project builds on previous work
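Snapshot POD, the baseline method the thesis builds on, can be sketched in a few lines: collect state snapshots, take an SVD, and keep the leading left singular vectors as a reduced basis. This is the generic (non-structure-preserving) version, on a synthetic low-rank snapshot matrix:

```python
import numpy as np

# Snapshot POD: build a reduced basis from simulation snapshots via SVD.
# The snapshot matrix here is synthetic (low rank plus small noise),
# standing in for states sampled from a full-order simulation.
rng = np.random.default_rng(3)
n_dof, n_snap = 400, 60
modes = rng.standard_normal((n_dof, 3))          # 3 dominant "true" modes
coeffs = rng.standard_normal((3, n_snap))
X = modes @ coeffs + 1e-3 * rng.standard_normal((n_dof, n_snap))

U, s, _ = np.linalg.svd(X, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1      # modes for 99.9% energy
basis = U[:, :r]                                 # reduced POD basis
X_red = basis @ (basis.T @ X)                    # project snapshots onto basis
rel_err = np.linalg.norm(X - X_red) / np.linalg.norm(X)
print(f"kept {r} modes, relative reconstruction error {rel_err:.2e}")
```

The structured variant studied in the thesis constrains such a basis so that the reduced model stays on the relevant Lie group, which plain SVD truncation does not guarantee.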

  7. Analysis of Large-scale Anisotropy of Ultra-high Energy Cosmic Rays in HiRes Data

    NASA Astrophysics Data System (ADS)

    Abbasi, R. U.; Abu-Zayyad, T.; Allen, M.; Amann, J. F.; Archbold, G.; Belov, K.; Belz, J. W.; Bergman, D. R.; Blake, S. A.; Brusova, O. A.; Burt, G. W.; Cannon, C.; Cao, Z.; Deng, W.; Fedorova, Y.; Findlay, J.; Finley, C. B.; Gray, R. C.; Hanlon, W. F.; Hoffman, C. M.; Holzscheiter, M. H.; Hughes, G.; Hüntemeyer, P.; Ivanov, D.; Jones, B. F.; Jui, C. C. H.; Kim, K.; Kirn, M. A.; Koers, H.; Loh, E. C.; Maestas, M. M.; Manago, N.; Marek, L. J.; Martens, K.; Matthews, J. A. J.; Matthews, J. N.; Moore, S. A.; O'Neill, A.; Painter, C. A.; Perera, L.; Reil, K.; Riehle, R.; Roberts, M. D.; Rodriguez, D.; Sasaki, M.; Schnetzer, S. R.; Scott, L. M.; Sinnis, G.; Smith, J. D.; Sokolsky, P.; Song, C.; Springer, R. W.; Stokes, B. T.; Stratton, S. R.; Thomas, J. R.; Thomas, S. B.; Thomson, G. B.; Tinyakov, P.; Tupa, D.; Wiencke, L. R.; Zech, A.; Zhang, X.; High Resolution Fly's Eye Collaboration

    2010-04-01

Stereo data collected by the HiRes experiment over a six-year period are examined for large-scale anisotropy related to the inhomogeneous distribution of matter in the nearby universe. We consider the generic case of small cosmic-ray deflections and a large number of sources tracing the matter distribution. In this matter tracer model the expected cosmic-ray flux depends essentially on a single free parameter, the typical deflection angle θ_s. We find that the HiRes data with threshold energies of 40 EeV and 57 EeV are incompatible with the matter tracer model at a 95% confidence level unless θ_s > 10° and are compatible with an isotropic flux. The data set above 10 EeV is compatible with both the matter tracer model and an isotropic flux.

  8. Hexagonal boron nitride intercalated multi-layer graphene: a possible ultimate solution to ultra-scaled interconnect technology

    NASA Astrophysics Data System (ADS)

    Li, Yong-Jun; Sun, Qing-Qing; Chen, Lin; Zhou, Peng; Wang, Peng-Fei; Ding, Shi-Jin; Zhang, David Wei

    2012-03-01

We propose the intercalation of hexagonal boron nitride (hBN) in multilayer graphene to improve its performance in ultra-scaled interconnects for integrated circuits. The effect of an intercalated hBN layer in bilayer graphene is investigated using non-equilibrium Green's functions. We find that hBN-intercalated bilayer graphene exhibits enhanced transport properties compared with the pristine bilayer, an improvement attributed to the suppression of interlayer scattering and the good planar bonding of the intervening hBN layer. Based on these results, we propose a via structure that not only benefits from suppressed interlayer scattering between multilayer graphene, but also sustains the unique electrical properties of graphene when many graphene layers are stacked together. The ideal current density across the structure can be as high as 4.6×10^9 A/cm^2 at 1 V, which is very promising for future high-performance interconnects.

  9. Linking crop yield anomalies to large-scale atmospheric circulation in Europe.

    PubMed

    Ceglar, Andrej; Turco, Marco; Toreti, Andrea; Doblas-Reyes, Francisco J

    2017-06-15

Understanding the effects of climate variability and extremes on crop growth and development represents a necessary step in assessing the resilience of agricultural systems to changing climate conditions. This study investigates the links between large-scale atmospheric circulation and crop yields in Europe, providing the basis for developing seasonal crop yield forecasting and thus enabling more effective and dynamic adaptation to climate variability and change. Four dominant modes of large-scale atmospheric variability have been used: the North Atlantic Oscillation, Eastern Atlantic, Scandinavian and Eastern Atlantic-Western Russia patterns. Large-scale atmospheric circulation explains on average 43% of inter-annual winter wheat yield variability, ranging between 20% and 70% across countries. For grain maize, the average explained variability is 38%, ranging between 20% and 58%. Spatially, the skill of the developed statistical models strongly depends on the impact of large-scale atmospheric variability on weather at the regional level, especially during the most sensitive growth stages of flowering and grain filling. Our results also suggest that preceding atmospheric conditions might provide an important source of predictability, especially for maize yields in south-eastern Europe. Since the seasonal predictability of large-scale atmospheric patterns is generally higher than that of surface weather variables (e.g. precipitation) in Europe, seasonal crop yield prediction could benefit from the integration of the derived statistical models with dynamical seasonal forecasts of large-scale atmospheric circulation.
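The "explained variability" figures above correspond to the R² of a regression of yield anomalies on the circulation-mode indices. A minimal sketch with synthetic data (the study uses national detrended yield series and the four named pattern indices, none of which are reproduced here):

```python
import numpy as np

# Regress yield anomalies on four circulation indices (stand-ins for the
# NAO, EA, SCA and EAWR patterns) and report the explained variance R^2.
# All data below are synthetic placeholders.
rng = np.random.default_rng(1)
n_years = 30
X = rng.standard_normal((n_years, 4))            # four mode indices per year
beta_true = np.array([0.6, -0.4, 0.3, 0.2])      # hypothetical sensitivities
yield_anom = X @ beta_true + 0.8 * rng.standard_normal(n_years)

X1 = np.column_stack([np.ones(n_years), X])       # design matrix with intercept
beta, *_ = np.linalg.lstsq(X1, yield_anom, rcond=None)
resid = yield_anom - X1 @ beta
r2 = 1 - resid.var() / yield_anom.var()
print(f"explained variance R^2: {r2:.2f}")
```

Repeating this per country and crop yields the 20%-70% range reported in the abstract; seasonal forecasts of the indices can then drive yield predictions through the fitted coefficients.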

  10. Development and Verification of a Novel Robot-Integrated Fringe Projection 3D Scanning System for Large-Scale Metrology.

    PubMed

    Du, Hui; Chen, Xiaobo; Xi, Juntong; Yu, Chengyi; Zhao, Bao

    2017-12-12

    Large-scale surfaces are prevalent in advanced manufacturing industries, and 3D profilometry of these surfaces plays a pivotal role for quality control. This paper proposes a novel and flexible large-scale 3D scanning system assembled by combining a robot, a binocular structured light scanner and a laser tracker. The measurement principle and system construction of the integrated system are introduced. A mathematical model is established for the global data fusion. Subsequently, a robust method is introduced for the establishment of the end coordinate system. As for hand-eye calibration, the calibration ball is observed by the scanner and the laser tracker simultaneously. With this data, the hand-eye relationship is solved, and then an algorithm is built to get the transformation matrix between the end coordinate system and the world coordinate system. A validation experiment is designed to verify the proposed algorithms. Firstly, a hand-eye calibration experiment is implemented and the computation of the transformation matrix is done. Then a car body rear is measured 22 times in order to verify the global data fusion algorithm. The 3D shape of the rear is reconstructed successfully. To evaluate the precision of the proposed method, a metric tool is built and the results are presented.
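The global data fusion step described above amounts to chaining homogeneous transforms: a point measured in the scanner frame passes through the hand-eye transform and the tracked robot pose into the world frame. A minimal sketch with hypothetical 4x4 matrices (the actual matrices come from the paper's calibration-ball procedure):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hypothetical calibration results (obtained in the paper by observing a
# calibration ball with the scanner and the laser tracker simultaneously):
T_world_end = make_T(rot_z(0.3), [1.0, 0.2, 0.5])   # robot end in world frame
T_end_cam = make_T(rot_z(-0.1), [0.05, 0.0, 0.1])   # hand-eye transform

p_cam = np.array([0.2, -0.1, 0.8, 1.0])             # measured point, cam frame
p_world = T_world_end @ T_end_cam @ p_cam           # fused into world frame
print(p_world[:3])
```

Scans taken from many robot poses are mapped into the common world frame this way, which is what allows the 22 partial scans of the car body rear to be merged into one model.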

  11. ANALYSIS OF LARGE-SCALE ANISOTROPY OF ULTRA-HIGH ENERGY COSMIC RAYS IN HiRes DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbasi, R. U.; Abu-Zayyad, T.; Allen, M.

    2010-04-10

Stereo data collected by the HiRes experiment over a six-year period are examined for large-scale anisotropy related to the inhomogeneous distribution of matter in the nearby universe. We consider the generic case of small cosmic-ray deflections and a large number of sources tracing the matter distribution. In this matter tracer model the expected cosmic-ray flux depends essentially on a single free parameter, the typical deflection angle θ_s. We find that the HiRes data with threshold energies of 40 EeV and 57 EeV are incompatible with the matter tracer model at a 95% confidence level unless θ_s > 10° and are compatible with an isotropic flux. The data set above 10 EeV is compatible with both the matter tracer model and an isotropic flux.

  12. Does Scale Really Matter? Ultra-Large-Scale Systems Seven Years after the Study

    DTIC Science & Technology

    2013-05-24

Slide excerpts from this retrospective talk recall the context of 2006, when the original ULS Systems study appeared (Beyonce Knowles releases second consecutive No. 1 album and fourth No. 1 single in the US; BlackBerry users numbered 4,900,000 in March 2006), and conclude: "And yet... there is a fast growing gap between our research and reality." (Linda Northrop, Does Scale Really Matter?: ULS Systems Seven Years Later)

  13. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm in diameter, could be used to correctly simulate the overall or PNL noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5×10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large-jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  14. Large-scale geomorphology: Classical concepts reconciled and integrated with contemporary ideas via a surface processes model

    NASA Astrophysics Data System (ADS)

    Kooi, Henk; Beaumont, Christopher

    1996-02-01

Linear systems analysis is used to investigate the response of a surface processes model (SPM) to tectonic forcing. The SPM calculates subcontinental-scale denudational landscape evolution on geological timescales (1 to hundreds of millions of years) as the result of simultaneous hillslope transport, modeled by diffusion, and fluvial transport, modeled by advection and reaction. The tectonically forced SPM accommodates the large-scale behavior envisaged in classical and contemporary conceptual geomorphic models and provides a framework for their integration and unification. The following three model scales are considered: micro-, meso-, and macroscale. The concepts of dynamic equilibrium and grade are quantified at the microscale for segments of uniform gradient subject to tectonic uplift. At the larger meso- and macroscales (which represent individual interfluves and landscapes including a number of drainage basins, respectively) the system response to tectonic forcing is linear for uplift geometries that are symmetric with respect to baselevel and which impose a fully integrated drainage to baselevel. For these linear models, the response time and the transfer function as a function of scale characterize the model behavior. Numerical experiments show that the styles of landscape evolution depend critically on the timescales of the tectonic processes in relation to the response time of the landscape. When tectonic timescales are much longer than the landscape response time, the resulting dynamic equilibrium landscapes correspond to those envisaged by Hack (1960). When tectonic timescales are of the same order as the landscape response time, and when tectonic variations take the form of pulses (much shorter than the response time), evolving landscapes conform to the Penck (1972) type and to the Davis (1889, 1899) and King (1953, 1962) type frameworks, respectively. The behavior of the SPM highlights the importance of phase shifts or delays of the landform response and
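The transport law at the heart of such an SPM combines hillslope diffusion with fluvial advection under tectonic uplift. A one-dimensional explicit finite-difference sketch of this combination is given below; the coefficients, uplift rate and grid are illustrative, not Kooi and Beaumont's calibrated values, and the fluvial reaction term is omitted:

```python
import numpy as np

# 1D landscape evolution sketch: uplift U, hillslope diffusion D,
# fluvial advection v, with fixed baselevel at both domain ends:
#   dh/dt = U + D * d2h/dx2 - v * dh/dx
# Explicit Euler with an upwind advection term; parameters illustrative.
nx, dx, dt = 101, 100.0, 50.0          # grid points, m, yr
D, v, U = 0.01, 0.002, 1e-4            # m^2/yr, m/yr, m/yr
h = np.zeros(nx)                       # initial flat surface at baselevel

for _ in range(2000):                  # integrate 100 kyr
    hxx = (np.roll(h, -1) - 2 * h + np.roll(h, 1)) / dx**2
    hx = (h - np.roll(h, 1)) / dx      # upwind gradient
    h += dt * (U + D * hxx - v * hx)
    h[0] = h[-1] = 0.0                 # fixed baselevel boundaries

print(f"max elevation after 100 kyr: {h.max():.2f} m")
```

Driving this system with uplift histories that are slow, comparable to, or fast relative to the landscape response time reproduces the Hack-, Penck- and Davis/King-type behaviors contrasted in the paper.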

  15. Inkjet printing ultra-large graphene oxide flakes

    NASA Astrophysics Data System (ADS)

    He, Pei; Derby, Brian

    2017-06-01

Graphene oxide 2D material inks with a mean flake diameter of 36 µm can be inkjet printed with no significant blockage of the printer or apparent damage to the flakes, despite the mean flake size being >50% of the printer nozzle diameter and the ink containing individual flakes considerably larger than the nozzle. Printed flakes show a similar level of wrinkle and fold defects as observed in flakes deposited by drop casting. Polarised light imaging of the ink in the printhead prior to printing shows alignment of the flakes in the shear flow, which is believed to allow passage without agglomeration or blocking of the nozzle. The bulk electrical conductivity of these ultra-large-flake printed films is 2.48×10^4 S m^-1 after reduction, which is comparable to that reported for printed pristine graphene. The conductivity of the printed films increases slightly with increasing flake size, indicating that there is no increase in damage to electrical properties as the flakes approach and exceed the nozzle diameter.

  16. A Piezoelectric Unimorph Deformable Mirror Concept by Wafer Transfer for Ultra Large Space Telescopes

    NASA Technical Reports Server (NTRS)

    Yang, Eui-Hyeok; Shcheglov, Kirill

    2002-01-01

Future concepts for ultra large space telescopes include segmented silicon mirrors and inflatable polymer mirrors. Primary mirrors for these systems cannot meet optical surface figure requirements and are likely to generate wavefront errors of several microns. In order to correct for these large wavefront errors, high-stroke optical-quality deformable mirrors are required. JPL has recently developed a new technology for transferring an entire wafer-level mirror membrane from one substrate to another. A thin membrane, 100 mm in diameter, has been successfully transferred without using adhesives or polymers. The measured peak-to-valley surface error of a transferred and patterned membrane (1 mm x 1 mm x 0.016 mm) is only 9 nm. The mirror element actuation principle is based on a piezoelectric unimorph. A voltage applied to the piezoelectric layer induces stress in the longitudinal direction, causing the film to deform and pull on the mirror connected to it. The advantage of this approach is that the small longitudinal strains obtainable from a piezoelectric material at modest voltages are translated into large vertical displacements. Modeling is performed for a unimorph membrane consisting of a clamped rectangular membrane with a PZT layer of variable dimensions. The membrane transfer technology is combined with the piezoelectric unimorph actuator concept to constitute a compact deformable mirror device with large-stroke actuation of a continuous mirror membrane, resulting in compact AO systems for use in ultra large space telescopes.

  17. New probes of Cosmic Microwave Background large-scale anomalies

    NASA Astrophysics Data System (ADS)

    Aiola, Simone

Fifty years of Cosmic Microwave Background (CMB) data played a crucial role in constraining the parameters of the ΛCDM model, in which Dark Energy, Dark Matter, and Inflation are the three most important pillars not yet understood. Inflation prescribes an isotropic universe on large scales, and it generates spatially-correlated density fluctuations over the whole Hubble volume. CMB temperature fluctuations on scales bigger than a degree in the sky, affected by modes on super-horizon scale at the time of recombination, are a clean snapshot of the universe after inflation. In addition, the accelerated expansion of the universe, driven by Dark Energy, leaves a hardly detectable imprint in the large-scale temperature sky at late times. Such fundamental predictions have been tested with current CMB data and found to be in tension with what we expect from our simple ΛCDM model. Is this tension just a random fluke or a fundamental issue with the present model? In this thesis, we present a new framework to probe the lack of large-scale correlations in the temperature sky using CMB polarization data. Our analysis shows that if a suppression in the CMB polarization correlations is detected, it will provide compelling evidence for new physics on super-horizon scale. To further analyze the statistical properties of the CMB temperature sky, we constrain the degree of statistical anisotropy of the CMB in the context of the observed large-scale dipole power asymmetry. We find evidence for a scale-dependent dipolar modulation at 2.5σ. To isolate late-time signals from the primordial ones, we test the anomalously high Integrated Sachs-Wolfe effect signal generated by superstructures in the universe. We find that the detected signal is in tension with the expectations from ΛCDM at the 2.5σ level, which is somewhat smaller than what has been previously argued. To conclude, we describe the current status of CMB observations on small scales, highlighting the

  18. Statistical Analysis of Large-Scale Structure of Universe

    NASA Astrophysics Data System (ADS)

    Tugay, A. V.

While galaxy cluster catalogs were compiled many decades ago, other structural elements of the cosmic web have been detected with confidence only in recent works. For example, extragalactic filaments have been described by the velocity field and the SDSS galaxy distribution in recent years. The large-scale structure of the Universe could also be mapped in the future using ATHENA observations in X-rays and SKA in the radio band. Until detailed observations become available for most of the volume of the Universe, some integral statistical parameters can be used for its description. Methods such as the galaxy correlation function, the power spectrum, statistical moments and peak statistics are commonly used for this purpose. The parameters of the power spectrum and other statistics are important for constraining models of dark matter, dark energy, inflation and brane cosmology. In the present work we describe the growth of large-scale density fluctuations in the one- and three-dimensional cases with Fourier harmonics of hydrodynamical parameters. As a result, we obtain a power-law relation for the matter power spectrum.
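The power-spectrum description used above can be sketched in one dimension: the power spectrum is the squared modulus of the Fourier harmonics of the density contrast, P(k) = |δ_k|², and a power-law spectrum appears as a straight line in log-log space. The field below is a synthetic Gaussian realisation standing in for a galaxy density field, with an assumed (not derived) input slope:

```python
import numpy as np

# Estimate the power spectrum P(k) = |delta_k|^2 of a 1D density-contrast
# field and recover its power-law slope. The field is synthetic: white
# noise filtered in Fourier space to an assumed P(k) ~ k^-1.8.
rng = np.random.default_rng(2)
n, box = 4096, 1000.0                      # samples, box size (illustrative)
k = np.fft.rfftfreq(n, d=box / n) * 2 * np.pi
delta_k = rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size)
delta_k[1:] *= k[1:] ** -0.9               # amplitude ~ k^-0.9 => P ~ k^-1.8
delta_k[0] = 0.0                           # zero-mean density contrast
delta = np.fft.irfft(delta_k, n=n)         # real-space field

pk = np.abs(np.fft.rfft(delta)) ** 2       # periodogram estimate of P(k)
slope = np.polyfit(np.log(k[1:]), np.log(pk[1:]), 1)[0]
print(f"fitted spectral slope: {slope:.2f}")
```

The fitted slope recovers the input power law to within sample variance; the same estimator applied to real survey data is what constrains the cosmological models mentioned in the abstract.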

  19. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  20. Achieving ultra-high temperatures with a resistive emitter array

    NASA Astrophysics Data System (ADS)

    Danielson, Tom; Franks, Greg; Holmes, Nicholas; LaVeigne, Joe; Matis, Greg; McHugh, Steve; Norton, Dennis; Vengel, Tony; Lannon, John; Goodwin, Scott

    2016-05-01

    The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to also develop larger-format infrared emitter arrays to support the testing of systems incorporating these detectors. In addition to larger formats, many scene projector users require much higher simulated temperatures than can be generated with current technology in order to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024 x 1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During earlier phases of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1400 K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. A 'scalable' Read In Integrated Circuit (RIIC) is also being developed under the same UHT program to drive the high temperature pixels. This RIIC will utilize through-silicon via (TSV) and Quilt Packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the yield limitations inherent in large-scale integrated circuits. Results of design verification testing of the completed RIIC will be presented and discussed.

  1. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm in diameter, could be used to correctly simulate the overall or perceived noise level (PNL) of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.
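The Reynolds-number criterion cited above reduces to simple arithmetic, Re = U·D/ν based on exhaust diameter. The sketch below checks where a 59 mm model jet and a 1 m full-scale jet fall relative to the 5 x 10(exp 6) threshold; the roughly sonic exhaust velocity and the sea-level air viscosity are assumed, illustrative values.

```python
def jet_reynolds(velocity_m_s, diameter_m, kinematic_viscosity_m2_s=1.46e-5):
    """Reynolds number based on exhaust diameter: Re = U * D / nu.
    The default nu is nominal sea-level air viscosity (an assumption)."""
    return velocity_m_s * diameter_m / kinematic_viscosity_m2_s

# 59 mm model jet at an assumed ~340 m/s (roughly sonic) exhaust velocity
re_model = jet_reynolds(340.0, 0.059)

# Full-scale 1 m jet at the same velocity
re_full = jet_reynolds(340.0, 1.0)
```

Under these assumptions the 59 mm jet sits near 1.4 x 10(exp 6), below the broad-band-noise threshold, while the full-scale jet is well above it, which illustrates why small models can match PNL yet miss spectral detail.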

  2. Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks.

    PubMed

    Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio

    2008-11-24

    Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.

  3. Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks

    PubMed Central

    Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio

    2008-01-01

    Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper. PMID:27873941

  4. Efficient data management in a large-scale epidemiology research project.

    PubMed

    Meyer, Jens; Ostrzinski, Stefan; Fredrich, Daniel; Havemann, Christoph; Krafczyk, Janina; Hoffmann, Wolfgang

    2012-09-01

This article describes the concept of a "Central Data Management" (CDM) and its implementation within the large-scale population-based medical research project "Personalized Medicine". The CDM can be summarized as a conjunction of data capturing, data integration, data storage, data refinement, and data transfer. A wide spectrum of reliable "Extract Transform Load" (ETL) software for automatic integration of data, as well as "electronic Case Report Forms" (eCRFs), was developed in order to integrate decentralized and heterogeneously captured data. Due to the high sensitivity of the captured data, high system resource availability, data privacy, data security, and quality assurance are of utmost importance. A complex data model was developed and implemented using an Oracle database in high-availability cluster mode in order to integrate different types of participant-related data. Intelligent data capturing and storage mechanisms improve the quality of the data. Data privacy is ensured by a multi-layered role/right system for access control and by de-identification of identifying data. A well-defined backup process prevents data loss. Over a period of one and a half years, the CDM has captured a wide variety of data, approximately 5 terabytes in total, without any critical incidents of system breakdown or loss of data. The aim of this article is to demonstrate one possible way of establishing a Central Data Management in large-scale medical and epidemiological studies. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  5. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  6. Density and temperature characterization of long-scale length, near-critical density controlled plasma produced from ultra-low density plastic foam

    PubMed Central

Chen, S. N.; Iwawaki, T.; Morita, K.; Antici, P.; Baton, S. D.; Filippi, F.; Habara, H.; Nakatsutsumi, M.; Nicolaï, P.; Nazarov, W.; Rousseaux, C.; Starodubstev, M.; Tanaka, K. A.; Fuchs, J.

    2016-01-01

The ability to produce long-scale-length (i.e. millimeter-scale-length), homogeneous plasmas is of interest for studying a wide range of fundamental plasma processes. We present here a validated experimental platform to create and diagnose uniform plasmas with a density close to or above the critical density. The target consists of a polyimide tube filled with an ultra-low-density plastic foam, which was heated by x-rays produced by a long-pulse laser irradiating a copper foil placed at one end of the tube. The density and temperature of the ionized foam were retrieved using x-ray radiography, and proton radiography was used to verify the uniformity of the plasma. Plasma temperatures of 5–10 eV and densities around 10²¹ cm⁻³ are measured. This well-characterized platform of uniform density and temperature is of interest for experiments using large-scale laser platforms conducting High Energy Density Physics investigations. PMID:26923471

  7. Integrating an agent-based model into a large-scale hydrological model for evaluating drought management in California

    NASA Astrophysics Data System (ADS)

    Sheffield, J.; He, X.; Wada, Y.; Burek, P.; Kahil, M.; Wood, E. F.; Oppenheimer, M.

    2017-12-01

California has endured record-breaking drought since winter 2011 and will likely experience more severe and persistent drought in the coming decades under a changing climate. At the same time, human water management practices can also affect drought frequency and intensity, which underscores the importance of human behaviour in effective drought adaptation and mitigation. Currently, although a few large-scale hydrological and water resources models (e.g., PCR-GLOBWB) consider human water use and management practices (e.g., irrigation, reservoir operation, groundwater pumping), none of them includes the dynamic feedback between local human behaviors/decisions and the natural hydrological system. It is, therefore, vital to integrate social and behavioral dimensions into current hydrological modeling frameworks. This study applies the agent-based modeling (ABM) approach and couples it with a large-scale hydrological model (the Community Water Model, CWatM) in order to achieve a balanced representation of social, environmental, and economic factors and a more realistic representation of the bi-directional interactions and feedbacks in coupled human and natural systems. We focus on drought management in California and consider two types of agents, (groups of) farmers and state management authorities, whose objectives are assumed to be maximizing net crop profit and maintaining a sufficient water supply, respectively. Farmers' behaviors are linked with local agricultural practices such as cropping patterns and deficit irrigation. More precisely, farmers' decisions are incorporated into CWatM across different time scales, in terms of daily irrigation amounts, seasonal/annual decisions on crop types and irrigated area, as well as long-term investment in irrigation infrastructure. This simulation-based optimization framework is further applied by performing different sets of scenarios to investigate and evaluate the effectiveness

  8. Large Scale Metal Additive Techniques Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nycz, Andrzej; Adediran, Adeola I; Noakes, Mark W

    2016-01-01

In recent years additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with large-scale polymer deposition. This paper is a review of metal additive techniques in the context of building large structures. Current commercial devices can print metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. To follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper focuses on the current state of the art of large-scale metal additive technology, with a focus on expanding the geometric limits.

  9. Dissociable effects of local inhibitory and excitatory theta-burst stimulation on large-scale brain dynamics

    PubMed Central

    Sale, Martin V.; Lord, Anton; Zalesky, Andrew; Breakspear, Michael; Mattingley, Jason B.

    2015-01-01

    Normal brain function depends on a dynamic balance between local specialization and large-scale integration. It remains unclear, however, how local changes in functionally specialized areas can influence integrated activity across larger brain networks. By combining transcranial magnetic stimulation with resting-state functional magnetic resonance imaging, we tested for changes in large-scale integration following the application of excitatory or inhibitory stimulation on the human motor cortex. After local inhibitory stimulation, regions encompassing the sensorimotor module concurrently increased their internal integration and decreased their communication with other modules of the brain. There were no such changes in modular dynamics following excitatory stimulation of the same area of motor cortex nor were there changes in the configuration and interactions between core brain hubs after excitatory or inhibitory stimulation of the same area. These results suggest the existence of selective mechanisms that integrate local changes in neural activity, while preserving ongoing communication between brain hubs. PMID:25717162

  10. Dynamic subfilter-scale stress model for large-eddy simulations

    NASA Astrophysics Data System (ADS)

    Rouhi, A.; Piomelli, U.; Geurts, B. J.

    2016-08-01

We present a modification of the integral length-scale approximation (ILSA) model originally proposed by Piomelli et al. [Piomelli et al., J. Fluid Mech. 766, 499 (2015), 10.1017/jfm.2015.29] and apply it to plane channel flow and a backward-facing step. In the ILSA models the length scale is expressed in terms of the integral length scale of turbulence and is determined by the flow characteristics, decoupled from the simulation grid. In the original formulation the model coefficient was constant, determined by requiring a desired global contribution of the unresolved subfilter scales (SFSs) to the dissipation rate, known as SFS activity; its value was found by a set of coarse-grid calculations. Here we develop two modifications. We define a measure of SFS activity (based on turbulent stresses), which adds to the robustness of the model, particularly at high Reynolds numbers, and removes the need for the prior coarse-grid calculations: The model coefficient can be computed dynamically and adapt to large-scale unsteadiness. Furthermore, the desired level of SFS activity is now enforced locally (and not integrated over the entire volume, as in the original model), providing better control over model activity and also improving the near-wall behavior of the model. Application of the local ILSA to channel flow and a backward-facing step and comparison with the original ILSA and with the dynamic model of Germano et al. [Germano et al., Phys. Fluids A 3, 1760 (1991), 10.1063/1.857955] show better control over the model contribution in the local ILSA, while the positive properties of the original formulation (including its higher accuracy compared to the dynamic model on coarse grids) are maintained. The backward-facing step also highlights the advantage of the decoupling of the model length scale from the mesh.
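The central idea, a model length scale tied to the turbulence integral scale ℓ = k^(3/2)/ε rather than to the grid spacing, can be sketched in a few lines. This is a schematic illustration only: the coefficient value and function names are assumptions, and in the actual ILSA model the coefficient is determined dynamically from the desired local SFS activity rather than fixed.

```python
import numpy as np

def ilsa_length_scale(k_resolved, dissipation, coeff=0.05):
    """Subfilter model length scale tied to the turbulence integral scale,
    ell = k^(3/2) / eps, decoupled from the simulation grid.
    `coeff` stands in for the ILSA coefficient; the value 0.05 is purely
    illustrative (the real model computes it dynamically)."""
    ell = np.asarray(k_resolved) ** 1.5 / np.asarray(dissipation)
    return coeff * ell

def eddy_viscosity(length_scale, strain_rate_mag):
    """Smagorinsky-like closure evaluated with the ILSA length scale:
    nu_t = (C * ell)^2 * |S|."""
    return length_scale ** 2 * strain_rate_mag

# Doubling the resolved kinetic energy scales the length scale by 2^1.5,
# with no reference to any mesh spacing.
l1 = ilsa_length_scale(1.0, 1.0)
l2 = ilsa_length_scale(2.0, 1.0)
nu_t = eddy_viscosity(l1, 10.0)
```

The point of the sketch is the decoupling emphasized in the abstract: refining or coarsening the mesh changes nothing in these expressions, only the resolved flow statistics do.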

  11. Concordant integrative gene set enrichment analysis of multiple large-scale two-sample expression data sets.

    PubMed

    Lai, Yinglei; Zhang, Fanni; Nayak, Tapan K; Modarres, Reza; Lee, Norman H; McCaffrey, Timothy A

    2014-01-01

    improve detection power and discovery consistency through a concordant integrative analysis of multiple large-scale two-sample gene expression data sets.

  12. Using Hybrid Techniques for Generating Watershed-scale Flood Models in an Integrated Modeling Framework

    NASA Astrophysics Data System (ADS)

    Saksena, S.; Merwade, V.; Singhofen, P.

    2017-12-01

There is an increasing global trend towards developing large-scale flood models that account for spatial heterogeneity at watershed scales to drive future flood risk planning. Integrated surface water-groundwater modeling procedures can capture all the hydrologic processes at play during a flood event and so provide accurate flood outputs. Even though the advantages of integrated modeling are widely acknowledged, the complexity of integrated process representation, the computation time, and the number of input parameters required have deterred its application to flood inundation mapping, especially for large watersheds. This study presents a faster approach for creating watershed-scale flood models using a hybrid design that breaks the watershed down into multiple regions of variable spatial resolution by prioritizing higher-order streams. The methodology involves creating a hybrid model for the Upper Wabash River Basin in Indiana using Interconnected Channel and Pond Routing (ICPR) and comparing its performance with a fully-integrated 2D hydrodynamic model. The hybrid approach involves simplification procedures such as 1D channel-2D floodplain coupling; hydrologic basin (HUC-12) integration with 2D groundwater for rainfall-runoff routing; and varying the spatial resolution of 2D overland flow based on stream order. The results for a 50-year return period storm event show that the hybrid model's performance (NSE=0.87) is similar to that of the 2D integrated model (NSE=0.88), but the computational time is halved. The results suggest that significant computational efficiency can be gained while maintaining model accuracy for large-scale flood models by using hybrid approaches for model creation.
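The NSE values quoted above are Nash-Sutcliffe model efficiencies, the standard goodness-of-fit measure for comparing simulated and observed hydrographs. A minimal sketch of the metric follows; the sample series are invented for illustration.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency:
    NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; values near 0.87-0.88, as reported above,
    indicate close agreement between model and observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    num = np.sum((observed - simulated) ** 2)
    den = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - num / den

obs = np.array([10.0, 30.0, 80.0, 55.0, 20.0])   # illustrative hydrograph
sim = np.array([12.0, 28.0, 75.0, 58.0, 22.0])   # illustrative model output
score = nse(obs, sim)
```

Note that NSE is not symmetric in its arguments: the denominator is the variance of the observations, so it measures skill relative to predicting the observed mean.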

  13. A Systematic Multi-Time Scale Solution for Regional Power Grid Operation

    NASA Astrophysics Data System (ADS)

    Zhu, W. J.; Liu, Z. G.; Cheng, T.; Hu, B. Q.; Liu, X. Z.; Zhou, Y. F.

    2017-10-01

    Many aspects need to be taken into consideration in a regional grid while making schedule plans. In this paper, a systematic multi-time scale solution for regional power grid operation considering large scale renewable energy integration and Ultra High Voltage (UHV) power transmission is proposed. In the time scale aspect, we discuss the problem from month, week, day-ahead, within-day to day-behind, and the system also contains multiple generator types including thermal units, hydro-plants, wind turbines and pumped storage stations. The 9 subsystems of the scheduling system are described, and their functions and relationships are elaborated. The proposed system has been constructed in a provincial power grid in Central China, and the operation results further verified the effectiveness of the system.

  14. Advancing flood risk analysis by integrating adaptive behaviour in large-scale flood risk assessments

    NASA Astrophysics Data System (ADS)

    Haer, T.; Botzen, W.; Aerts, J.

    2016-12-01

In the last four decades the global population living in the 1/100-year flood zone has doubled from approximately 500 million to a little less than 1 billion people. Urbanization in low-lying, flood-prone cities further increases the exposed assets, such as buildings and infrastructure. Moreover, climate change will further exacerbate flood risk in the future. Accurate flood risk assessments are important to inform policy-makers and society about current and future flood risk levels. However, these assessments suffer from a major flaw in the way they estimate flood vulnerability and the adaptive behaviour of individuals and governments. Current flood risk projections commonly either assume that vulnerability remains constant or try to mimic vulnerability through an external scenario. Such a static approach leads to a misrepresentation of future flood risk, as humans respond adaptively to flood events, flood risk communication, and incentives to reduce risk. In our study, we integrate adaptive behaviour into a large-scale European flood risk framework through an agent-based modelling approach. This allows for the inclusion of heterogeneous agents, which dynamically respond to each other and to a changing environment. We integrate state-of-the-art flood risk maps based on climate scenarios (RCPs) and socio-economic scenarios (SSPs) with government and household agents, which behave autonomously based on (micro-)economic behaviour rules. We show for the first time that excluding adaptive behaviour leads to a major misrepresentation of future flood risk. The methodology is applied to flood risk, but has similar implications for other research in the field of natural hazards. While more research is needed, this multi-disciplinary study advances our understanding of how future flood risk will develop.

  15. Ultra-Smooth, Fully Solution-Processed Large-Area Transparent Conducting Electrodes for Organic Devices

    PubMed Central

    Jin, Won-Yong; Ginting, Riski Titian; Ko, Keum-Jin; Kang, Jae-Wook

    2016-01-01

A novel approach for the fabrication of ultra-smooth and highly bendable substrates consisting of metal grid-conducting polymers that are fully embedded into transparent substrates (ME-TCEs) was successfully demonstrated. The fully printed ME-TCEs exhibited ultra-smooth surfaces (surface roughness ~1.0 nm), were highly transparent (~90% transmittance at a wavelength of 550 nm), highly conductive (sheet resistance ~4 Ω ◻−1), and relatively stable under ambient air (retaining ~96% initial resistance up to 30 days). The ME-TCE substrates were used to fabricate flexible organic solar cells and organic light-emitting diodes exhibiting device efficiencies comparable to devices fabricated on ITO/glass substrates. Additionally, the flexibility of the organic devices did not degrade their performance even after being bent to a bending radius of ~1 mm. Our findings suggest that ME-TCEs are a promising alternative to indium tin oxide and show potential for application toward large-area optoelectronic devices via fully printing processes. PMID:27808221

  16. Ultra-Smooth, Fully Solution-Processed Large-Area Transparent Conducting Electrodes for Organic Devices

    NASA Astrophysics Data System (ADS)

    Jin, Won-Yong; Ginting, Riski Titian; Ko, Keum-Jin; Kang, Jae-Wook

    2016-11-01

A novel approach for the fabrication of ultra-smooth and highly bendable substrates consisting of metal grid-conducting polymers that are fully embedded into transparent substrates (ME-TCEs) was successfully demonstrated. The fully printed ME-TCEs exhibited ultra-smooth surfaces (surface roughness ~1.0 nm), were highly transparent (~90% transmittance at a wavelength of 550 nm), highly conductive (sheet resistance ~4 Ω ◻-1), and relatively stable under ambient air (retaining ~96% initial resistance up to 30 days). The ME-TCE substrates were used to fabricate flexible organic solar cells and organic light-emitting diodes exhibiting device efficiencies comparable to devices fabricated on ITO/glass substrates. Additionally, the flexibility of the organic devices did not degrade their performance even after being bent to a bending radius of ~1 mm. Our findings suggest that ME-TCEs are a promising alternative to indium tin oxide and show potential for application toward large-area optoelectronic devices via fully printing processes.

  17. Ultra-Smooth, Fully Solution-Processed Large-Area Transparent Conducting Electrodes for Organic Devices.

    PubMed

    Jin, Won-Yong; Ginting, Riski Titian; Ko, Keum-Jin; Kang, Jae-Wook

    2016-11-03

A novel approach for the fabrication of ultra-smooth and highly bendable substrates consisting of metal grid-conducting polymers that are fully embedded into transparent substrates (ME-TCEs) was successfully demonstrated. The fully printed ME-TCEs exhibited ultra-smooth surfaces (surface roughness ~1.0 nm), were highly transparent (~90% transmittance at a wavelength of 550 nm), highly conductive (sheet resistance ~4 Ω ◻-1), and relatively stable under ambient air (retaining ~96% initial resistance up to 30 days). The ME-TCE substrates were used to fabricate flexible organic solar cells and organic light-emitting diodes exhibiting device efficiencies comparable to devices fabricated on ITO/glass substrates. Additionally, the flexibility of the organic devices did not degrade their performance even after being bent to a bending radius of ~1 mm. Our findings suggest that ME-TCEs are a promising alternative to indium tin oxide and show potential for application toward large-area optoelectronic devices via fully printing processes.
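Transparent electrodes trade transmittance against sheet resistance, and a common single-number comparison is the Haacke figure of merit, φ_TC = T^10 / R_sheet. The sketch below applies it to the ME-TCE values reported above; the ITO comparison value is an assumed, typical number, not one taken from the paper.

```python
def haacke_fom(transmittance, sheet_resistance_ohm_per_sq):
    """Haacke figure of merit for transparent electrodes: T^10 / R_sheet.
    Higher is better; T is fractional transmittance at 550 nm."""
    return transmittance ** 10 / sheet_resistance_ohm_per_sq

fom_me_tce = haacke_fom(0.90, 4.0)    # ~90% T, ~4 ohm/sq (reported above)
fom_ito    = haacke_fom(0.90, 10.0)   # assumed typical ITO sheet resistance
```

At equal transmittance the figure of merit reduces to a sheet-resistance comparison, which is why the ~4 Ω per square of the ME-TCE is the headline number.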

  18. Pass-transistor very large scale integration

    NASA Technical Reports Server (NTRS)

    Maki, Gary K. (Inventor); Bhatia, Prakash R. (Inventor)

    2004-01-01

    Logic elements are provided that permit reductions in layout size and avoidance of hazards. Such logic elements may be included in libraries of logic cells. A logical function to be implemented by the logic element is decomposed about logical variables to identify factors corresponding to combinations of the logical variables and their complements. A pass transistor network is provided for implementing the pass network function in accordance with this decomposition. The pass transistor network includes ordered arrangements of pass transistors that correspond to the combinations of variables and complements resulting from the logical decomposition. The logic elements may act as selection circuits and be integrated with memory and buffer elements.

  19. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

The organization of high-technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with systems theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long-range perspective. Long-range planning has great potential to improve productivity through a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large-scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  20. Topographically Engineered Large Scale Nanostructures for Plasmonic Biosensing

    NASA Astrophysics Data System (ADS)

    Xiao, Bo; Pradhan, Sangram K.; Santiago, Kevin C.; Rutherford, Gugu N.; Pradhan, Aswini K.

    2016-04-01

We demonstrate that a nanostructured metal thin film can achieve enhanced transmission efficiency and sharp resonances and use a large-scale and high-throughput nanofabrication technique for the plasmonic structures. The fabrication technique combines the features of nanoimprint and soft lithography to topographically construct metal thin films with nanoscale patterns. Metal nanogratings developed using this method show significantly enhanced optical transmission (up to a one-order-of-magnitude enhancement) and sharp resonances with full width at half maximum (FWHM) of ~15 nm in the zero-order transmission using an incoherent white light source. These nanostructures are sensitive to the surrounding environment, and the resonance can shift as the refractive index changes. We derive an analytical method using a spatial Fourier transformation to understand the enhancement phenomenon and the sensing mechanism. The use of real-time monitoring of protein-protein interactions in microfluidic cells integrated with these nanostructures is demonstrated to be effective for biosensing. The perpendicular transmission configuration and large-scale structures provide a feasible platform without sophisticated optical instrumentation to realize label-free surface plasmon resonance (SPR) sensing.

  1. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  2. Chip-integrated optical power limiter based on an all-passive micro-ring resonator

    NASA Astrophysics Data System (ADS)

    Yan, Siqi; Dong, Jianji; Zheng, Aoling; Zhang, Xinliang

    2014-10-01

    Recent progress in silicon nanophotonics has dramatically advanced the prospects for large-scale integration of on-chip optical interconnects. Adopting photons as information carriers can break through performance bottlenecks of electronic integrated circuits, such as serious thermal losses and limited processing rates. However, in integrated photonic circuits, few reported works can impose an upper limit on optical power and thereby protect optical devices from damage caused by high power. In this study, we experimentally demonstrate a feasible integrated scheme based on a single all-passive micro-ring resonator that realizes optical power limiting, with a function similar to that of a current-limiting circuit in electronics. We also analyze the performance of the optical power limiter at various signal bit rates. The results show that the proposed device can limit the signal power effectively at bit rates up to 20 Gbit/s without deteriorating the signal. Meanwhile, this ultra-compact silicon device is fully compatible with electronic technology (typically complementary metal-oxide-semiconductor technology), which may pave the way toward very-large-scale integrated photonic circuits for all-optical information processors and artificial intelligence systems.

  3. High-power picosecond laser with 400W average power for large scale applications

    NASA Astrophysics Data System (ADS)

    Du, Keming; Brüning, Stephan; Gillner, Arnold

    2012-03-01

    Laser processing is generally known for low thermal influence, precise energy delivery, and the ability to ablate every type of material independent of hardness and vaporization temperature. The use of ultra-short pulsed lasers offers new possibilities in the manufacturing of high-end products with very high processing quality. To achieve a sufficient and economical processing speed, high average power is needed. To scale the power for industrial use, a picosecond laser system has been developed consisting of a seeder, a preamplifier, and an end amplifier. With this oscillator/amplifier system, more than 400 W average power and a maximum pulse energy of 1 mJ were obtained. To study high-speed processing of large embossing metal rollers, two different ps laser systems have been integrated into a cylinder engraving machine, one with an average power of 80 W and the other with 300 W. With the high-power ps laser, fluences of up to 30 J/cm2 at pulse repetition rates in the multi-MHz range have been achieved. Different materials (Cu, Ni, Al, steel) have been explored with respect to parameters such as ablation rate per pulse, ablation geometry, surface roughness, influence of pulse overlap, and number of loops. An enhanced ablation quality and an effective ablation rate of 4 mm3/min have been achieved by using different scanning systems and an optimized processing strategy. The maximum achieved volume rate is 20 mm3/min.
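    A couple of the quoted figures can be cross-checked with simple arithmetic; the spot area below is derived assuming full-pulse-energy operation, not a value measured in the paper:

```python
# Back-of-the-envelope checks on the ps-laser figures quoted in the abstract.
def rep_rate_hz(avg_power_w: float, pulse_energy_j: float) -> float:
    """Repetition rate implied by a given average power and pulse energy."""
    return avg_power_w / pulse_energy_j

def spot_area_cm2(pulse_energy_j: float, fluence_j_cm2: float) -> float:
    """Spot area that turns a given pulse energy into a given peak fluence."""
    return pulse_energy_j / fluence_j_cm2

# 400 W at the maximum 1 mJ pulse energy implies a 400 kHz repetition rate;
# reaching 30 J/cm2 with a full 1 mJ pulse implies a ~3.3e-5 cm2 spot.
print(f"{rep_rate_hz(400.0, 1e-3) / 1e3:.0f} kHz")
print(f"{spot_area_cm2(1e-3, 30.0):.1e} cm^2")
```

    At the multi-MHz rates mentioned in the abstract the per-pulse energy is correspondingly lower, so the 30 J/cm2 fluence there implies a smaller spot than this full-energy estimate.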

  4. Large-Scale medical image analytics: Recent methodologies, applications and Future directions.

    PubMed

    Zhang, Shaoting; Metaxas, Dimitris

    2016-10-01

    Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative, large-scale data science techniques in medical image analytics that will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that image retrieval systems should be scaled up significantly, to the point at which interactive systems become effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real time, incorporate expert feedback, and cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.

  5. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    NASA Technical Reports Server (NTRS)

    Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dunner, R.; Essinger-Hileman, T.; Eimer, J.; et al.

    2015-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe approx. 70% of the sky. A variable-delay polarization modulator provides modulation of the polarization at approx. 10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that spans both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  6. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    NASA Astrophysics Data System (ADS)

    Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dünner, R.; Essinger-Hileman, T.; Eimer, J.; Fluxa, P.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G.; Hubmayr, J.; Iuliano, J.; Marriage, T. A.; Miller, N.; Moseley, S. H.; Mumby, G.; Petroff, M.; Reintsema, C.; Rostem, K.; U-Yen, K.; Watts, D.; Wagner, E.; Wollack, E. J.; Xu, Z.; Zeng, L.

    2016-08-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe ~70% of the sky. A variable-delay polarization modulator provides modulation of the polarization at ~10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that spans both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  7. Networks and landscapes: a framework for setting goals and evaluating performance at the large landscape scale

    Treesearch

    R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove

    2016-01-01

    The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...

  8. Study on data model of large-scale urban and rural integrated cadastre

    NASA Astrophysics Data System (ADS)

    Peng, Liangyong; Huang, Quanyi; Gao, Dequan

    2008-10-01

    Urban and Rural Integrated Cadastre (URIC) has been the subject of great interest for modern cadastre management, and it is highly desirable to develop a rational data model for establishing a URIC information system. In this paper, the old cadastral management mode in China is first introduced, its limitations are analyzed, and the concept of URIC and its development course in China are described. Then, based on the requirements of cadastre management in developed regions, the goal of URIC and two key ideas for realizing it are proposed, a conceptual management mode is studied, and a data model of URIC is designed. Finally, based on the raw data of a 1:1000-scale land-use survey and a 1:500-scale conventional urban cadastral survey in Jiangyin city, a well-defined URIC information system was established according to the data model, and uniform management of land use, use rights, and landownership in urban and rural areas was successfully realized, demonstrating the model's feasibility and practicability.

  9. Large-scale protein-protein interactions detection by integrating big biosensing data with computational model.

    PubMed

    You, Zhu-Hong; Li, Shuai; Gao, Xin; Luo, Xin; Ji, Zhen

    2014-01-01

    Protein-protein interactions (PPIs) are the basis of biological functions, and studying these interactions at the molecular level is of crucial importance for understanding the functionality of a living cell. During the past decade, biosensors have emerged as an important tool for the high-throughput identification of proteins and their interactions. However, high-throughput experimental methods for identifying PPIs are both time-consuming and expensive, and the resulting data are often associated with high false-positive and false-negative rates. To address these problems, we propose a method for PPI detection that integrates biosensor-based PPI data with a novel computational model. The method is based on the extreme learning machine algorithm combined with a novel protein sequence descriptor. When applied to a large-scale human protein interaction dataset, the proposed method achieved 84.8% prediction accuracy, with 84.08% sensitivity at a specificity of 85.53%. We conducted more extensive experiments to compare the proposed method with a state-of-the-art technique, the support vector machine. The results demonstrate that our approach is very promising for detecting new PPIs and can be a helpful supplement to biosensor-based PPI detection.

  10. Integration of climatic water deficit and fine-scale physiography in process-based modeling of forest landscape resilience to large-scale tree mortality

    NASA Astrophysics Data System (ADS)

    Yang, J.; Weisberg, P.; Dilts, T.

    2016-12-01

    Climate warming can lead to large-scale drought-induced tree mortality events and greatly affect forest landscape resilience. Climatic water deficit (CWD) and its physiographic variation provide a key mechanism driving landscape dynamics in response to climate change. Although CWD has been successfully applied in niche-based species distribution models, its application in process-based forest landscape models is still scarce. Here we present a framework that incorporates the fine-scale influence of terrain on ecohydrology in modeling forest landscape dynamics. We integrated CWD with a forest landscape succession and disturbance model (LANDIS-II) to evaluate how tree species distributions might shift under different climate-fire scenarios across an elevation-aspect gradient in a semi-arid montane landscape of northeastern Nevada, USA. Our simulations indicated that drought-intolerant tree species such as quaking aspen could experience greatly reduced distributions in the more arid portions of their existing ranges due to water stress under future climate warming scenarios. However, even in the most xeric portions of its range, aspen is likely to persist in certain environmental settings due to unique and often fine-scale combinations of resource availability, species interactions, and disturbance regime. The modeling approach presented here allowed identification of these refugia. In addition, it helped quantify how the direction and magnitude of fire influences on species distributions vary across topoclimatic gradients, and it furthers our understanding of the role of environmental conditions, fire, and inter-specific competition in shaping potential responses of landscape resilience to climate change.
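    Climatic water deficit is commonly computed as the accumulated unmet evaporative demand, potential minus actual evapotranspiration. A minimal sketch under that common definition (the paper's exact formulation is not given in the abstract, and the monthly values below are hypothetical):

```python
# Annual climatic water deficit (CWD) under the common PET - AET definition.
# Monthly values are hypothetical, illustrating a semi-arid site with a
# summer-peaked deficit.
def annual_cwd(pet_monthly_mm, aet_monthly_mm):
    """Sum of monthly unmet evaporative demand (PET - AET), floored at zero."""
    return sum(max(pet - aet, 0) for pet, aet in zip(pet_monthly_mm, aet_monthly_mm))

pet = [10, 15, 30, 60, 95, 130, 150, 140, 90, 45, 20, 10]   # mm/month
aet = [10, 15, 28, 50, 70, 60, 40, 35, 50, 40, 20, 10]      # mm/month
print(f"annual CWD: {annual_cwd(pet, aet)} mm")              # 367 mm
```

    Because AET depends on soil water storage, which in turn varies with slope, aspect, and soil depth, CWD computed this way carries the fine-scale physiographic signal that the framework feeds into the landscape model.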

  11. VisIO: enabling interactive visualization of ultra-scale, time-series data via high-bandwidth distributed I/O systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitchell, Christopher J; Ahrens, James P; Wang, Jun

    2010-10-15

    Petascale simulations compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. Interactive visualization of this time series is a desired step before starting a new run. The I/O subsystem and associated network are often a significant impediment to interactive visualization of time-varying data, as they are not configured or provisioned to provide the necessary I/O read rates. In this paper, we propose a new I/O library for visualization applications: VisIO. Visualization applications commonly use N-to-N reads within their parallel-enabled readers, which provides an incentive for a shared-nothing approach to I/O, similar to other data-intensive approaches such as Hadoop. However, unlike other data-intensive applications, visualization requires: (1) interactive performance for large data volumes, (2) compatibility with MPI and POSIX file system semantics for compatibility with existing infrastructure, and (3) use of existing file formats and their stipulated data partitioning rules. VisIO provides a mechanism for using a non-POSIX distributed file system to achieve linear scaling of I/O bandwidth. In addition, we introduce a novel scheduling algorithm that helps to co-locate visualization processes on nodes with the requested data. Testing of VisIO integrated into ParaView was conducted using the Hadoop Distributed File System (HDFS) on TACC's Longhorn cluster. A representative dataset, VPIC, across 128 nodes showed a 64.4% read performance improvement compared to the provided Lustre installation. Also tested was a dataset representing a global ocean salinity simulation, which showed a 51.4% improvement in read performance over Lustre when using VisIO. VisIO provides powerful high-performance I/O services to visualization applications, allowing for interactive performance with ultra-scale, time-series data.

  12. In-situ device integration of large-area patterned organic nanowire arrays for high-performance optical sensors

    PubMed Central

    Wu, Yiming; Zhang, Xiujuan; Pan, Huanhuan; Deng, Wei; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng

    2013-01-01

    Single-crystalline organic nanowires (NWs) are important building blocks for future low-cost and efficient nano-optoelectronic devices due to their extraordinary properties. However, it remains a critical challenge to achieve large-scale organic NW array assembly and device integration. Herein, we demonstrate a feasible one-step method for large-area patterned growth of cross-aligned single-crystalline organic NW arrays and their in-situ device integration for optical image sensors. The integrated image sensor circuitry contained a 10 × 10 pixel array in an area of 1.3 × 1.3 mm2, showing high spatial resolution, excellent stability, and reproducibility. More importantly, 100% of the pixels operated successfully, with high response speed and relatively small pixel-to-pixel variation. The high yield and high spatial resolution of the operational pixels, along with the high integration level of the device, clearly demonstrate the great potential of this one-step NW array growth and device construction approach for large-scale optoelectronic device integration. PMID:24287887

  13. Large-Scale Outflows in Seyfert Galaxies

    NASA Astrophysics Data System (ADS)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (>~1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in >~1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  14. Aqueous Two-Phase Systems at Large Scale: Challenges and Opportunities.

    PubMed

    Torres-Acosta, Mario A; Mayolo-Deloisa, Karla; González-Valdez, José; Rito-Palomares, Marco

    2018-06-07

    Aqueous two-phase systems (ATPS) have proved to be an efficient and integrative operation for enhancing recovery of industrially relevant bioproducts. Since the discovery of ATPS, a variety of works have been published regarding their scaling from 10 to 1000 L. Although ATPS have achieved high recovery and purity yields, there is still a gap between their bench-scale use and potential industrial applications. In this context, this review critically analyzes ATPS scale-up strategies to enhance the potential for industrial adoption. In particular, it discusses large-scale operation considerations, different phase separation procedures, the available optimization techniques (univariate, response surface methodology, and genetic algorithms) to maximize recovery and purity, and economic modeling to predict large-scale costs. ATPS intensification to increase the amount of sample processed per system, the development of recycling strategies, and the creation of highly efficient predictive models remain areas of great significance that can be further exploited with the use of high-throughput techniques. Moreover, the development of novel ATPS can maximize specificity, increasing the possibilities for future industrial adoption. This review attempts to present the areas of opportunity for increasing the attractiveness of ATPS at the industrial level. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Ultra-smooth glassy graphene thin films for flexible transparent circuits

    PubMed Central

    Dai, Xiao; Wu, Jiang; Qian, Zhicheng; Wang, Haiyan; Jian, Jie; Cao, Yingjie; Rummeli, Mark H.; Yi, Qinghua; Liu, Huiyun; Zou, Guifu

    2016-01-01

    Large-area graphene thin films are prized for use in flexible and transparent devices. We report on a type of glassy graphene that is in an intermediate state between glassy carbon and graphene, with high crystallinity but curly lattice planes. A polymer-assisted approach is introduced to grow an ultra-smooth (roughness <0.7 nm) glassy graphene thin film at the inch scale. Owing to the advantages inherited from graphene and glassy carbon, the glassy graphene thin film exhibits conductivity, transparency, and flexibility comparable to those of graphene, as well as glassy carbon-like mechanical and chemical stability. Moreover, glassy graphene-based circuits are fabricated using a laser direct writing approach. The circuits are transferred to flexible substrates and are shown to perform reliably. The glassy graphene thin film should stimulate the application of flexible transparent conductive materials in integrated circuits. PMID:28138535

  16. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge-based agents (KBAs) which engage in a collaborative problem-solving effort. The method provides a comprehensive and integrated approach to the development of this type of system, including a systematic analysis of user requirements as well as a structured approach to generating a system design that exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefit of this approach is that requirements are traceable into design components and code, thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  17. Synchronization of coupled large-scale Boolean networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Fangfei, E-mail: li-fangfei@163.com

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
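    Complete synchronization of a drive-response pair of Boolean networks can be illustrated on a toy two-node example; the sketch below uses the simplest full-state coupling (the response updates from the drive's state) and does not reproduce the paper's aggregation algorithm:

```python
from itertools import product

def drive_step(x1, x2):
    """Drive network: a tiny two-node Boolean network."""
    return x2, x1 and x2

def response_step(y1, y2, x1, x2):
    """Response network, fully coupled to the drive's state (simplest case)."""
    return x2, x1 and x2

def synchronized(steps=3):
    """Complete synchronization: drive and response states agree after a
    transient, from every possible initial state."""
    for x1, x2, y1, y2 in product([False, True], repeat=4):
        for _ in range(steps):
            nx = drive_step(x1, x2)
            ny = response_step(y1, y2, x1, x2)  # uses the drive's previous state
            (x1, x2), (y1, y2) = nx, ny
        if (x1, x2) != (y1, y2):
            return False
    return True

print(synchronized())  # prints True
```

    With full-state coupling the response copies the drive after one step; partial synchronization would instead require only a subset of node states to agree, and weaker couplings generally need a state-transition analysis like the one developed in the paper.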

  18. Ultra-high-throughput Production of III-V/Si Wafer for Electronic and Photonic Applications

    PubMed Central

    Geum, Dae-Myeong; Park, Min-Su; Lim, Ju Young; Yang, Hyun-Duk; Song, Jin Dong; Kim, Chang Zoo; Yoon, Euijoon; Kim, SangHyeon; Choi, Won Jun

    2016-01-01

    Si-based integrated circuits have been intensively developed over the past several decades through ultimate device scaling. However, Si technology has reached the physical limits of scaling. These limits have fuelled the search for alternative active materials (for transistors) and the introduction of optical interconnects (called "Si photonics"). A series of attempts to circumvent the limits of Si technology are based on the use of III-V compound semiconductors due to their superior properties, such as high electron mobility and direct bandgaps. To exploit these properties on a Si platform, the formation of high-quality III-V films on Si (III-V/Si) is the basic technology; however, implementing it with a high-throughput process is not easy. Here, we report new concepts for ultra-high-throughput heterogeneous integration of high-quality III-V films on Si using wafer bonding and the epitaxial lift-off (ELO) technique. We describe ultra-fast ELO and the re-use of the III-V donor wafer after III-V/Si formation. These approaches provide ultra-high-throughput fabrication of III-V/Si substrates with high-quality films, leading to a dramatic cost reduction. As proof-of-concept devices, this paper demonstrates GaAs-based high electron mobility transistors (HEMTs), solar cells, and hetero-junction phototransistors on Si substrates. PMID:26864968

  19. Ultra-compact air-mode photonic crystal nanobeam cavity integrated with bandstop filter for refractive index sensing.

    PubMed

    Sun, Fujun; Fu, Zhongyuan; Wang, Chunhong; Ding, Zhaoxiang; Wang, Chao; Tian, Huiping

    2017-05-20

    We propose and investigate an ultra-compact air-mode photonic crystal nanobeam cavity (PCNC) with an ultra-high quality-factor-to-mode-volume ratio (Q/V), obtained by quadratically tapering the lattice spacing of the rectangular holes from the center to both ends while keeping the other parameters unchanged. Using the three-dimensional finite-difference time-domain method, an optimized geometry yields a simulated Q of 7.2×10^6 and a V of ~1.095(λ/n_Si)^3, resulting in an ultra-high Q/V ratio of about 6.5×10^6 (λ/n_Si)^-3. When the number of holes on either side is 8, the cavity possesses a high sensitivity of 252 nm/RIU (refractive index unit), a high calculated Q-factor of 1.27×10^5, and an ultra-small effective V of ~0.758(λ/n_Si)^3 at the fundamental resonant wavelength of 1521.74 nm. Notably, the footprint is only about 8×0.7 μm^2. However, our proposed PCNC inevitably has several higher-order resonant modes in the transmission spectrum, which makes it difficult to use for multiplexed sensing. Thus, a well-designed bandstop filter with weak sidelobes and broad bandwidth, based on a photonic crystal nanobeam waveguide, is connected to the PCNC to filter out the higher-order modes. The integrated structure presented in this work is therefore promising for building ultra-compact lab-on-chip sensor arrays with high density and parallel-multiplexing capability.
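    The headline figures quoted above are mutually consistent, which a few lines of arithmetic confirm; the linewidth and figure of merit below are derived quantities, not values stated in the abstract:

```python
# Consistency check of the quoted nanobeam-cavity figures.
Q_sim = 7.2e6        # simulated quality factor
V_sim = 1.095        # mode volume, in units of (lambda/n_Si)^3
S = 252.0            # sensitivity, nm per refractive-index unit
Q_meas = 1.27e5      # Q at the fundamental resonance
wavelength = 1521.74 # fundamental resonant wavelength, nm

print(f"Q/V  = {Q_sim / V_sim:.2e} (lambda/n_Si)^-3")  # consistent with the quoted ~6.5e6
print(f"FWHM = {wavelength / Q_meas:.3f} nm")          # linewidth implied by Q = lambda/FWHM
print(f"FOM  = {S * Q_meas / wavelength:.0f} RIU^-1")  # derived figure of merit, S / FWHM
```

    The derived figure of merit (sensitivity divided by linewidth) is the quantity that matters for resolving small index changes, which is why the high Q is as important for sensing as the 252 nm/RIU sensitivity itself.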

  20. The Saskatchewan River Basin - a large scale observatory for water security research (Invited)

    NASA Astrophysics Data System (ADS)

    Wheater, H. S.

    2013-12-01

    The 336,000 km2 Saskatchewan River Basin (SaskRB) in Western Canada illustrates many of the issues of water security faced world-wide. It poses globally important science challenges due to the diversity of its hydro-climate and ecological zones. With one of the world's more extreme climates, it embodies environments of global significance, including the Rocky Mountains (source of the major rivers in Western Canada), the Boreal Forest (representing 30% of Canada's land area) and the Prairies (home to 80% of Canada's agriculture). Management concerns include: provision of water resources to more than three million inhabitants, including indigenous communities; balancing competing needs for water between different uses, such as urban centres, industry, agriculture, hydropower and environmental flows; issues of water allocation between upstream and downstream users in the three prairie provinces; managing the risks of floods and droughts; and assessing water quality impacts of discharges from major cities and intensive agricultural production. Superimposed on these issues is the need to understand and manage uncertain water futures, including effects of economic growth and environmental change, in a highly fragmented water governance environment. Key science questions focus on understanding and predicting the effects of land and water management and environmental change on water quantity and quality. To address the science challenges, observational data are necessary across multiple scales. This requires focussed research at intensively monitored sites and small watersheds to improve process understanding and fine-scale models. To understand large-scale effects on river flows and quality, land-atmosphere feedbacks, and regional climate, integrated monitoring, modelling and analysis is needed at large basin scale. And to support water management, new tools are needed for operational management and scenario-based planning that can be implemented across multiple scales and

  1. Dissecting the large-scale galactic conformity

    NASA Astrophysics Data System (ADS)

    Seo, Seongu

    2018-01-01

    Galactic conformity is the observed phenomenon that galaxies located in the same region have similar properties, such as star formation rate, color, and gas fraction. Conformity was first observed among galaxies within the same halos ("one-halo conformity"), which can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, have further revealed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities even though they do not share common halos ("two-halo conformity" or "large-scale conformity"). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, backsplash galaxies are likely responsible for the large-scale conformity: they have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and now happen to reside within a ~5 Mpc sphere. Second, galaxies in the strong tidal fields induced by large-scale structure also seem to give rise to the large-scale conformity, as the strong tides suppress star formation in these galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  2. Integration of a wave rotor to an ultra-micro gas turbine (UmuGT)

    NASA Astrophysics Data System (ADS)

    Iancu, Florin

    2005-12-01

    Wave rotor technology has shown significant potential for performance improvement of thermodynamic cycles. The wave rotor is an unsteady flow machine that utilizes shock waves to transfer energy from a high-energy fluid to a low-energy fluid, increasing both the temperature and the pressure of the low-energy fluid. Used initially as a high-pressure stage for a gas turbine locomotive engine, the wave rotor was commercialized only as a supercharging device for internal combustion engines, but recently there has been a stronger research effort on implementing wave rotors as topping units or pressure-gain combustors for gas turbines. At the same time, Ultra Micro Gas Turbines (UmuGT) are expected to be a next-generation power source for applications from propulsion to power generation, from the aerospace industry to the electronics industry. Starting in 1995, with the MIT "Micro Gas Turbine" project, the mechanical engineering research community has increasingly explored the idea of "Power MEMS". Microfabricated turbomachinery such as turbines, compressors and pumps, but also electric generators, heat exchangers, internal combustion engines and rocket engines, have been on the focus list of researchers for the past 10 years. The reason is simple: the output power is proportional to the mass flow rate of the working fluid through the engine, and hence to the cross-sectional area, while the mass or volume of the engine is proportional to the cube of the characteristic length; thus the power density tends to increase at small scales (Power/Mass ∝ L⁻¹). This is the so-called "cube-square law". This work investigates the possibilities of incorporating a wave rotor into a UmuGT and discusses the advantages of wave rotors as topping units for gas turbines, especially at microscale. Based on documented wave rotor efficiencies at larger scale, and supported by both a gasdynamic model that includes wall friction and a CFD model, the wave rotor compression efficiency at microfabrication scale could be estimated.
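The "cube-square law" scaling argument above can be sketched numerically. The function below is purely illustrative and assumes only what the abstract states: power scales with cross-sectional area (~L²) while mass scales with volume (~L³).

```python
# Illustrative sketch of the "cube-square law": output power scales with
# flow cross-sectional area (~L^2), engine mass scales with volume (~L^3),
# so specific power (power/mass) scales as L^-1.

def specific_power_ratio(scale_factor: float) -> float:
    """Specific power at a reduced characteristic length, relative to a
    full-scale baseline normalized to 1."""
    power = scale_factor ** 2   # ~ cross-sectional area
    mass = scale_factor ** 3    # ~ volume
    return power / mass         # = scale_factor ** -1

# Shrinking the characteristic length 1000x (meters -> millimeters)
# raises the ideal specific power by the same factor:
print(specific_power_ratio(1e-3))  # ≈ 1000.0
```

This is of course an idealization; the gasdynamic and CFD models in the thesis exist precisely because friction and other losses grow at small scales and erode this ideal gain.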

  3. An integrated approach to reconstructing genome-scale transcriptional regulatory networks

    DOE PAGES

    Imam, Saheed; Noguera, Daniel R.; Donohue, Timothy J.; ...

    2015-02-27

    Transcriptional regulatory networks (TRNs) program cells to dynamically alter their gene expression in response to changing internal or environmental conditions. In this study, we develop a novel workflow for generating large-scale TRN models that integrates comparative genomics data, global gene expression analyses, and intrinsic properties of transcription factors (TFs). An assessment of this workflow using benchmark datasets for the well-studied γ-proteobacterium Escherichia coli showed that it outperforms expression-based inference approaches, having a significantly larger area under the precision-recall curve. Further analysis indicated that this integrated workflow captures different aspects of the E. coli TRN than expression-based approaches, potentially making them highly complementary. We leveraged this new workflow and observations to build a large-scale TRN model for the α-proteobacterium Rhodobacter sphaeroides that comprises 120 gene clusters, 1211 genes (including 93 TFs), 1858 predicted protein-DNA interactions and 76 DNA binding motifs. We found that ~67% of the predicted gene clusters in this TRN are enriched for functions ranging from photosynthesis or central carbon metabolism to environmental stress responses. We also found that members of many of the predicted gene clusters were consistent with prior knowledge in R. sphaeroides and/or other bacteria. Experimental validation of predictions from this R. sphaeroides TRN model showed that high precision and recall were also obtained for TFs involved in photosynthesis (PpsR), carbon metabolism (RSP_0489) and iron homeostasis (RSP_3341). In addition, this integrative approach enabled generation of TRNs with increased information content relative to R. sphaeroides TRN models built via other approaches. We also show how this approach can be used to simultaneously produce TRN models for each related organism used in the comparative genomics analysis. Our results highlight the advantages of this integrative approach.
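The benchmark comparison above scores predicted TF-gene interactions by precision and recall. A toy sketch of the metric follows; the interaction pairs below are invented for illustration and are not from the E. coli benchmark.

```python
# Toy precision/recall computation for predicted regulatory interactions
# (TF, target-gene) against a gold-standard set. Pairs are hypothetical.
predicted = {("fnr", "geneA"), ("fnr", "geneB"), ("crp", "geneC")}
gold      = {("fnr", "geneA"), ("crp", "geneC"), ("crp", "geneD")}

tp = len(predicted & gold)          # true positives: predicted and real
precision = tp / len(predicted)     # fraction of predictions that are real
recall = tp / len(gold)             # fraction of real interactions found

print(precision, recall)  # 0.6666666666666666 0.6666666666666666
```

The area under the precision-recall curve cited in the abstract generalizes this by sweeping a confidence threshold over ranked predictions and integrating precision against recall.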

  4. A new large-scale manufacturing platform for complex biopharmaceuticals.

    PubMed

    Vogel, Jens H; Nguyen, Huong; Giovannini, Roberto; Ignowski, Jolene; Garger, Steve; Salgotra, Anil; Tom, Jennifer

    2012-12-01

    Complex biopharmaceuticals, such as recombinant blood coagulation factors, are addressing critical medical needs and represent a growing multibillion-dollar market. For commercial manufacturing of such sometimes inherently unstable molecules, it is important to minimize product residence time in non-ideal milieu in order to obtain acceptable yields and consistently high product quality. Continuous perfusion cell culture allows minimization of residence time in the bioreactor, but also brings unique challenges in product recovery, which require innovative solutions. In order to maximize yield, process efficiency, and facility and equipment utilization, we have developed, scaled up and successfully implemented a new integrated manufacturing platform at commercial scale. This platform consists of a (semi-)continuous cell separation process, based on a disposable flow path and integrated with the upstream perfusion operation, followed by membrane chromatography on large-scale adsorber capsules in rapid cycling mode. Implementation of the platform at commercial scale for a new product candidate led to a yield improvement of 40% compared to the conventional process technology, while product quality has been shown to be more consistently high. Over 1,000,000 L of cell culture harvest have been processed with a 100% success rate to date, demonstrating the robustness of the new platform process in GMP manufacturing. While membrane chromatography is well established for polishing in flow-through mode, this is its first commercial-scale application for bind/elute chromatography in the biopharmaceutical industry and demonstrates its potential in particular for manufacturing of potent, low-dose biopharmaceuticals. Copyright © 2012 Wiley Periodicals, Inc.

  5. Implementation Status of an Ultra-Wideband Receiver Package for the next-generation Very Large Array

    NASA Astrophysics Data System (ADS)

    Lazio, T. Joseph W.; Velazco, Jose; Soriano, Melissa; Hoppe, Daniel; Russell, Damon; D'Addario, Larry; Long, Ezra; Bowen, James; Samoska, Lorene; Janzen, Andrew

    2017-01-01

    The next-generation Very Large Array (ngVLA) is a concept for a radio astronomical interferometric array operating in the frequency range 1.2 GHz to 116 GHz and designed to provide substantial improvements in sensitivity, angular resolution, and frequency coverage over the current Very Large Array (VLA). As notional design goals, it would have continuous frequency coverage from 1.2 GHz to 48 GHz and be 10 times more sensitive than the VLA (and 25 times more sensitive than a 34 m diameter antenna of the Deep Space Network [DSN]). One of the key goals for the ngVLA is to reduce operating costs without sacrificing performance. We are designing an ultra-wideband receiver package to operate across the 8 to 48 GHz frequency range, in contrast to the current VLA, which covers this frequency range with five receiver packages. Reducing the number of receiving systems required to cover the full frequency range would reduce operating costs, and the objective of this work is to develop a prototype integrated feed-receiver package with a sensitivity performance comparable to current narrower-band systems on radio telescopes and the DSN, but with a design that meets the requirement of low long-term operational costs. The ultra-wideband receiver package consists of a feed horn, low-noise amplifier (LNA), and down-converters to analog intermediate frequencies. Key features of this design are a quad-ridge feed horn with dielectric loading and a cryogenic receiver with a noise temperature of no more than 30 K at the low end of the band. We will report on the status of this receiver package development, including the feed design and LNA implementation. We will present simulation studies of the feed horn, including the insertion of dielectric components for improved illumination efficiencies across the band of interest. In addition, we will show experimental results of low-noise 35 nm InP HEMT amplifier testing performed across the 8-50 GHz frequency range.

  6. A large-scale perspective on stress-induced alterations in resting-state networks

    NASA Astrophysics Data System (ADS)

    Maron-Katz, Adi; Vaisvaser, Sharon; Lin, Tamar; Hendler, Talma; Shamir, Ron

    2016-02-01

    Stress is known to induce large-scale neural modulations. However, its neural effect once the stressor is removed, and how it relates to subjective experience, are not fully understood. Here we used a statistically sound data-driven approach to investigate alterations in large-scale resting-state functional connectivity (rsFC) induced by acute social stress. We compared rsfMRI profiles of 57 healthy male subjects before and after stress induction. Using a parcellation-based univariate statistical analysis, we identified a large-scale rsFC change involving 490 parcel-pairs. Aiming to characterize this change, we employed statistical enrichment analysis, identifying anatomic structures that were significantly interconnected by these pairs. This analysis revealed strengthening of thalamo-cortical connectivity and weakening of cross-hemispheral parieto-temporal connectivity. These alterations were further found to be associated with changes in subjective stress reports. Integrating report-based information on stress sustainment 20 minutes post induction revealed a single significant rsFC change, between the right amygdala and the precuneus, which inversely correlated with the level of subjective recovery. Our study demonstrates the value of enrichment analysis for exploring large-scale network reorganization patterns, and provides new insight into stress-induced neural modulations and their relation to subjective experience.

  7. The Large-scale Distribution of Galaxies

    NASA Astrophysics Data System (ADS)

    Flin, Piotr

    A review of the large-scale structure of the Universe is given. A connection is made with the titanic work by Johannes Kepler in many areas of astronomy and cosmology. Special attention is given to the spatial distribution of galaxies, voids and walls (the cellular structure of the Universe). Finally, the author concludes that the large-scale structure of the Universe can be observed on a much greater scale than was thought twenty years ago.

  8. New-type steel plate with ultra high crack-arrestability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishikawa, T.; Nomiyama, Y.; Hagiwara, Y.

    1995-12-31

    A new-type steel plate has been developed by controlling the microstructure of the surface layers. The surface layer consists of an ultra-fine-grain ferrite microstructure, which provides excellent fracture toughness even at cryogenic temperature. When an unstable brittle crack propagates in the developed steel plate, shear-lips can be easily formed due to the surface layers with ultra-fine-grain microstructure. Since unstable running crack behavior is strongly affected by side-ligaments (shear-lips), which are associated with extensive plastic deformation, enhanced formation of the shear-lips can improve crack arrestability. This paper describes the developed steel plates of HT500MPa tensile strength class for shipbuilding use. Fracture mechanics investigations using large-scale fracture tests (including ultrawide duplex ESSO tests) clarified that the developed steel plates have ultra high crack-arrestability. It was also confirmed that the plates possess sufficient properties, including weldability and workability, for shipbuilding use.

  9. Compositional variations at ultra-structure length scales in coral skeleton

    NASA Astrophysics Data System (ADS)

    Meibom, Anders; Cuif, Jean-Pierre; Houlbreque, Fanny; Mostefaoui, Smail; Dauphin, Yannicke; Meibom, Karin L.; Dunbar, Robert

    2008-03-01

    Distributions of Mg and Sr in the skeletons of a deep-sea coral (Caryophyllia ambrosia) and a shallow-water, reef-building coral (Pavona clavus) have been obtained with a spatial resolution of 150 nm, using the NanoSIMS ion microprobe at the Muséum National d'Histoire Naturelle in Paris. These trace element analyses focus on the two primary ultra-structural components in the skeleton: centers of calcification (COC) and fibrous aragonite. In fibrous aragonite, the trace element variations are typically on the order of 10% or more, on length scales on the order of 1-10 μm. Sr/Ca and Mg/Ca variations are not correlated. However, Mg/Ca variations in Pavona are strongly correlated with the layered organization of the skeleton. These data allow for a direct comparison of trace element variations in zooxanthellate and non-zooxanthellate corals. In both corals, all trace elements show variations far beyond what can be attributed to variations in the marine environment. Furthermore, the observed trace element variations in the fibrous (bulk) part of the skeletons are not related to the activity of zooxanthellae, but result from other biological activity in the coral organism. To a large degree, this biological forcing is independent of the ambient marine environment, which is essentially constant on the growth timescales considered here. Finally, we discuss the possible detection of a new high-Mg calcium carbonate phase, which appears to be present in both deep-sea and reef-building corals and is neither aragonite nor calcite.

  10. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    NASA Astrophysics Data System (ADS)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis, (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  11. Construction, wind tunnel testing and data analysis for a 1/5 scale ultra-light wing model

    NASA Technical Reports Server (NTRS)

    James, Michael D.; Smith, Howard W.

    1993-01-01

    This report documents the construction, wind tunnel testing, and data analysis of a 1/5 scale ultra-light wing section. Wind tunnel testing provided accurate and meaningful lift, drag, and pitching moment data. These data were processed and graphically presented as follows: C_L vs. gamma; C_D vs. gamma; C_M vs. gamma; and C_L vs. C_D. The wing fabric flexure was found to be significant, and its possible effects on the aerodynamic data were discussed. The fabric flexure is directly related to wing angle of attack and airspeed. Different wing section shapes created by fabric flexure are presented, with explanations of the types of pressures that act upon the wing surface. This report provides conclusive aerodynamic data for ultra-light wings.

  12. Very large scale heterogeneous integration (VLSHI) and wafer-level vacuum packaging for infrared bolometer focal plane arrays

    NASA Astrophysics Data System (ADS)

    Forsberg, Fredrik; Roxhed, Niclas; Fischer, Andreas C.; Samel, Björn; Ericsson, Per; Hoivik, Nils; Lapadatu, Adriana; Bring, Martin; Kittilsland, Gjermund; Stemme, Göran; Niklaus, Frank

    2013-09-01

    Imaging in the long wavelength infrared (LWIR) range from 8 to 14 μm is an extremely useful tool for non-contact measurement and imaging of temperature in many industrial, automotive and security applications. However, the cost of the infrared (IR) imaging components has to be significantly reduced to make IR imaging a viable technology for many cost-sensitive applications. This paper demonstrates new and improved fabrication and packaging technologies for next-generation IR imaging detectors based on uncooled IR bolometer focal plane arrays. The proposed technologies include very large scale heterogeneous integration for combining high-performance SiGe quantum-well bolometers with electronic integrated read-out circuits, and CMOS-compatible wafer-level vacuum packaging. The fabrication and characterization of bolometers with a pitch of 25 μm × 25 μm that are arranged on read-out wafers in arrays with 320 × 240 pixels are presented. The bolometers contain a multi-layer quantum-well SiGe thermistor with a temperature coefficient of resistance of -3.0%/K. The proposed CMOS-compatible wafer-level vacuum packaging technology uses Cu-Sn solid-liquid interdiffusion (SLID) bonding. The presented technologies are suitable for implementation in cost-efficient fabless business models, with the potential to bring about the cost reduction needed to enable low-cost IR imaging products for industrial, security and automotive applications.
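The temperature coefficient of resistance (TCR) quoted above is the fractional resistance change per kelvin of absorbed-scene temperature rise. A minimal sketch of what -3.0 %/K means for a pixel readout follows; everything except the TCR value itself (the baseline resistance and temperature step) is an assumed example, not a figure from the paper.

```python
# Linearized thermistor response R(T) = R0 * (1 + TCR * dT), using the
# reported TCR of -3.0 %/K. Baseline resistance and dT are hypothetical.
TCR = -0.030  # fractional resistance change per kelvin

def resistance(r0_ohm: float, delta_t_k: float, tcr: float = TCR) -> float:
    """Thermistor resistance after a small temperature change delta_t_k."""
    return r0_ohm * (1.0 + tcr * delta_t_k)

# A hypothetical 100 kΩ pixel warmed by 0.1 K drops by 0.3 % (300 Ω):
print(resistance(100e3, 0.1))  # ≈ 99700.0 ohms
```

The large negative TCR is what makes the quantum-well SiGe thermistor attractive: a bigger resistance swing per kelvin means a stronger signal for the read-out circuit.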

  13. ORNL Pre-test Analyses of A Large-scale Experiment in STYLE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Paul T; Yin, Shengjun; Klasky, Hilda B

    Oak Ridge National Laboratory (ORNL) is conducting a series of numerical analyses to simulate a large-scale mock-up experiment planned within the European Network for Structural Integrity for Lifetime Management non-RPV Components (STYLE). STYLE is a European cooperative effort to assess the structural integrity of (non-reactor pressure vessel) reactor coolant pressure boundary components relevant to ageing and lifetime management and to integrate the knowledge created in the project into mainstream nuclear industry assessment codes. ORNL contributes work-in-kind support to STYLE Work Package 2 (Numerical Analysis/Advanced Tools) and Work Package 3 (Engineering Assessment Methods/LBB Analyses). This paper summarizes the current status of ORNL analyses of the STYLE Mock-Up3 large-scale experiment to simulate and evaluate crack growth in a cladded ferritic pipe. The analyses are being performed in two parts. In the first part, advanced fracture mechanics models are being developed and exercised to evaluate several experiment designs, taking into account the capabilities of the test facility while satisfying the test objectives. These advanced fracture mechanics models will then be utilized to simulate crack growth in the large-scale mock-up test. For the second part, the recently developed ORNL SIAM-PFM open-source, cross-platform, probabilistic computational tool will be used to generate an alternative assessment for comparison with the advanced fracture mechanics model results. The SIAM-PFM probabilistic analysis of the Mock-Up3 experiment will utilize fracture modules that are installed into a general probabilistic framework. The probabilistic results of the Mock-Up3 experiment obtained from SIAM-PFM will be compared to those generated using the deterministic 3D nonlinear finite-element modeling approach. The objective of the probabilistic analysis is to provide uncertainty bounds that will assist in assessing the more detailed 3D finite-element results.

  14. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  15. Large Scale Software Building with CMake in ATLAS

    NASA Astrophysics Data System (ADS)

    Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration

    2017-10-01

    The offline software of the ATLAS experiment at the Large Hadron Collider (LHC) serves as the platform for detector data reconstruction, simulation and analysis. It is also used in the detector’s trigger system to select LHC collision events during data taking. The ATLAS offline software consists of several million lines of C++ and Python code organized in a modular design of more than 2000 specialized packages. Because of different workflows, many stable numbered releases are in parallel production use. To accommodate specific workflow requests, software patches with modified libraries are distributed on top of existing software releases on a daily basis. The different ATLAS software applications also require a flexible build system that strongly supports unit and integration tests. Within the last year this build system was migrated to CMake. A CMake configuration has been developed that allows one to easily set up and build the above mentioned software packages. This also makes it possible to develop and test new and modified packages on top of existing releases. The system also allows one to detect and execute partial rebuilds of the release based on single package changes. The build system makes use of CPack for building RPM packages out of the software releases, and CTest for running unit and integration tests. We report on the migration and integration of the ATLAS software to CMake and show working examples of this large scale project in production.

  16. A survey on routing protocols for large-scale wireless sensor networks.

    PubMed

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

    With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs) based the cooperation among the significant amount of nodes have become a hot topic. "Large-scale" means mainly large area or high density of a network. Accordingly the routing protocols must scale well to the network scope extension and node density increases. A sensor node is normally energy-limited and cannot be recharged, and thus its energy consumption has a quite significant effect on the scalability of the protocol. To the best of our knowledge, currently the mainstream methods to solve the energy problem in large-scale WSNs are the hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes within the high level are responsible for data aggregation and management work, and the low level nodes for sensing their surroundings and collecting information. The hierarchical routing protocols are proved to be more energy-efficient than flat ones in which all the nodes play the same role, especially in terms of the data aggregation and the flooding of the control packets. With focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to the different objectives, the protocols are generally classified based on different criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to gain a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages. Moreover a comparison of each routing protocol is conducted to demonstrate the differences between the protocols in terms of message complexity, memory requirements, localization, data aggregation, clustering manner and

  17. Output Control Technologies for a Large-scale PV System Considering Impacts on a Power Grid

    NASA Astrophysics Data System (ADS)

    Kuwayama, Akira

    The mega-solar demonstration project named “Verification of Grid Stabilization with Large-scale PV Power Generation systems” had been completed in March 2011 at Wakkanai, the northernmost city of Japan. The major objectives of this project were to evaluate adverse impacts of large-scale PV power generation systems connected to the power grid and develop output control technologies with integrated battery storage system. This paper describes the outline and results of this project. These results show the effectiveness of battery storage system and also proposed output control methods for a large-scale PV system to ensure stable operation of power grids. NEDO, New Energy and Industrial Technology Development Organization of Japan conducted this project and HEPCO, Hokkaido Electric Power Co., Inc managed the overall project.

  18. RAID-2: Design and implementation of a large scale disk array controller

    NASA Technical Reports Server (NTRS)

    Katz, R. H.; Chen, P. M.; Drapeau, A. L.; Lee, E. K.; Lutz, K.; Miller, E. L.; Seshan, S.; Patterson, D. A.

    1992-01-01

    We describe the implementation of a large scale disk array controller and subsystem incorporating over 100 high performance 3.5 inch disk drives. It is designed to provide 40 MB/s sustained performance and 40 GB capacity in three 19 inch racks. The array controller forms an integral part of a file server that attaches to a Gb/s local area network. The controller implements a high bandwidth interconnect between an interleaved memory, an XOR calculation engine, the network interface (HIPPI), and the disk interfaces (SCSI). The system is now functionally operational, and we are tuning its performance. We review the design decisions, history, and lessons learned from this three year university implementation effort to construct a truly large scale system assembly.
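The XOR calculation engine mentioned above computes the parity that lets a disk array tolerate a single drive failure. A minimal sketch of that parity scheme in software follows; it is illustrative only, and the block contents and sizes are assumptions, not the RAID-2 hardware design.

```python
# Bytewise XOR parity across equal-length data blocks, plus single-block
# reconstruction: XOR-ing the survivors with the parity recovers the
# missing block, because every other block cancels itself out.
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """Bytewise XOR of equal-length blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def reconstruct(surviving: list[bytes], parity_block: bytes) -> bytes:
    """Recover the single missing block from survivors plus parity."""
    return parity(surviving + [parity_block])

data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)
# Lose the middle block, then rebuild it from the rest:
rebuilt = reconstruct([data[0], data[2]], p)
print(rebuilt)  # b'BBBB'
```

In the actual controller this XOR runs in dedicated hardware on the high-bandwidth interconnect, since doing it on the host CPU would bottleneck the 40 MB/s sustained target.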

  19. Transition from large-scale to small-scale dynamo.

    PubMed

    Ponty, Y; Plunian, F

    2011-04-15

    The dynamo equations are solved numerically with a helical forcing corresponding to the Roberts flow. In the fully turbulent regime the flow behaves as a Roberts flow on long time scales, plus turbulent fluctuations at short time scales. The dynamo onset is controlled by the long time scales of the flow, in agreement with the former Karlsruhe experimental results. The dynamo mechanism is governed by a generalized α effect, which includes both the usual α effect and turbulent diffusion, plus all higher-order effects. Beyond the onset we find that this generalized α effect scales as O(Rm⁻¹), suggesting the takeover of small-scale dynamo action. This is confirmed by simulations in which dynamo occurs even if the large-scale field is artificially suppressed.

  20. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    PubMed

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for the management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. It covers the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, extending and using seqdb_demo for the storage of sequence similarity search results, and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
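The "sequence library subset" idea above can be sketched with a small in-memory database: store sequences with annotations, then carve out only those records likely to contain homologs and emit them as a search library. The schema, accessions, and taxa below are hypothetical and do not reproduce the actual seqdb_demo schema.

```python
# Hypothetical sketch: a tiny protein table, queried to build a
# taxon-restricted subset library for a similarity search program.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE protein (
        acc   TEXT PRIMARY KEY,
        taxon TEXT NOT NULL,
        seq   TEXT NOT NULL
    );
""")
con.executemany(
    "INSERT INTO protein VALUES (?, ?, ?)",
    [("P1", "E. coli", "MKV..."),
     ("P2", "E. coli", "MAD..."),
     ("P3", "H. sapiens", "MSG...")],
)

# Subset library: only sequences from the taxon of interest.
subset = con.execute(
    "SELECT acc, seq FROM protein WHERE taxon = ?", ("E. coli",)
).fetchall()
for acc, seq in subset:
    print(f">{acc}\n{seq}")   # emit as FASTA for the search program
```

Restricting the library this way is what improves the statistical significance of a search: the expectation value of a hit scales with the size of the library actually searched.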

  1. Large Scale Wind and Solar Integration in Germany

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ernst, Bernhard; Schreirer, Uwe; Berster, Frank

    2010-02-28

    This report provides key information concerning the German experience with integrating 25 gigawatts of wind and 7 gigawatts of solar power capacity and mitigating their impacts on the electric power system. The report has been prepared based on information provided by Amprion GmbH and 50Hertz Transmission GmbH managers and engineers to Bonneville Power Administration (BPA) and Pacific Northwest National Laboratory representatives during their visit to Germany in October 2009. The trip and this report have been sponsored by the BPA Technology Innovation office. Learning from the German experience could help Bonneville Power Administration engineers to compare and evaluate potential new solutions for managing higher penetrations of wind energy resources in their control area. A broader dissemination of this experience will benefit wind and solar resource integration efforts in the United States.

  2. Large-scale integrative network-based analysis identifies common pathways disrupted by copy number alterations across cancers

    PubMed Central

    2013-01-01

    Background Many large-scale studies analyzed high-throughput genomic data to identify altered pathways essential to the development and progression of specific types of cancer. However, no previous study has been extended to provide a comprehensive analysis of pathways disrupted by copy number alterations across different human cancers. Towards this goal, we propose a network-based method to integrate copy number alteration data with human protein-protein interaction networks and pathway databases to identify pathways that are commonly disrupted in many different types of cancer. Results We applied our approach to a data set of 2,172 cancer patients across 16 different types of cancers, and discovered a set of commonly disrupted pathways, which are likely essential for tumor formation in the majority of the cancers. We also identified pathways that are only disrupted in specific cancer types, providing molecular markers for different human cancers. Analysis with independent microarray gene expression datasets confirms that the commonly disrupted pathways can be used to identify patient subgroups with significantly different survival outcomes. We also provide a network view of disrupted pathways to explain how copy number alterations affect pathways that regulate cell growth, cell cycle, and differentiation for tumorigenesis. Conclusions In this work, we demonstrated that the network-based integrative analysis can help to identify pathways disrupted by copy number alterations across 16 types of human cancers, which are not readily identifiable by conventional overrepresentation-based and other pathway-based methods. All the results and source code are available at http://compbio.cs.umn.edu/NetPathID/. PMID:23822816

  3. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase-sized portable test stands assembled for demonstration of hybrids. They show the safety of hybrid rockets to the audiences. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations; however, the questions that are always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has it been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity to conduct large-scale hybrid rocket motor tests to validate the burn rate from the small motors to application size has been documented in several places. Comparison of small-scale hybrid data to that of larger-scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this is occurring would make a great paper, study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary-layer driven, the larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.

  4. Hierarchical Address Event Routing for Reconfigurable Large-Scale Neuromorphic Systems.

    PubMed

    Park, Jongkil; Yu, Theodore; Joshi, Siddharth; Maier, Christoph; Cauwenberghs, Gert

    2017-10-01

    We present a hierarchical address-event routing (HiAER) architecture for scalable communication of neural and synaptic spike events between neuromorphic processors, implemented with five Xilinx Spartan-6 field-programmable gate arrays and four custom analog neuromorphic integrated circuits serving 262k neurons and 262M synapses. The architecture extends the single-bus address-event representation protocol to a hierarchy of multiple nested buses, routing events across increasing scales of spatial distance. The HiAER protocol provides individually programmable axonal delay in addition to strength for each synapse, lending itself toward biologically plausible neural network architectures, and scales across a range of hierarchies suitable for multichip and multiboard systems in reconfigurable large-scale neuromorphic systems. We show approximately linear scaling of net global synaptic event throughput with the number of routing nodes in the network, at 3.6×10^7 synaptic events per second per 16k-neuron node in the hierarchy.
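
    The routing scheme can be sketched in a few lines: in a balanced tree of routers, an address event climbs only as many levels as needed to reach a router whose subtree contains the destination, so local traffic never loads the upper buses. A toy model (our simplification, with illustrative names, not the HiAER hardware):

```python
# Toy model of hierarchical address-event routing: leaves are neuron
# arrays, internal nodes are routers, and an event ascends until source
# and destination share a subtree. Topology and names are illustrative.

def levels_climbed(src_leaf: int, dst_leaf: int) -> int:
    """Levels an event ascends in a balanced binary router tree."""
    level = 0
    # Climb until src and dst fall under the same router at this level.
    while (src_leaf >> level) != (dst_leaf >> level):
        level += 1
    return level  # hops up equals hops back down

# Sibling leaves exchange events on the lowest bus...
print(levels_climbed(0b0010, 0b0011))  # 1
# ...while leaves in opposite halves of the machine traverse the root.
print(levels_climbed(0b0000, 0b1111))  # 4
```

    Because connectivity in biologically inspired networks is mostly local, most events terminate low in the hierarchy, which is consistent with the reported near-linear scaling of aggregate throughput with node count.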

  5. Why small-scale cannabis growers stay small: five mechanisms that prevent small-scale growers from going large scale.

    PubMed

    Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy

    2012-11-01

    Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large scale and 35 on a small scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high workload and a high risk of detection, and thus demand a higher level of organizational skills than small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production would imply having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed.

  6. Generation of Large-Scale Magnetic Fields by Small-Scale Dynamo in Shear Flows.

    PubMed

    Squire, J; Bhattacharjee, A

    2015-10-23

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.
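
    The mechanism can be placed in standard mean-field notation (our paraphrase of the usual framework, not the authors' exact equations): the small-scale fields contribute to the turbulent electromotive force, whose off-diagonal resistivity components couple the components of the mean field in the presence of shear.

```latex
% Mean-field induction equation with turbulent EMF \overline{\mathcal{E}}:
\partial_t \overline{B}
  = \nabla \times \left( \overline{U} \times \overline{B}
  + \overline{\mathcal{E}} \right)
  + \eta \nabla^2 \overline{B},
\qquad
\overline{\mathcal{E}}_i
  = \alpha_{ij} \overline{B}_j
  - \eta_{ij} \left( \nabla \times \overline{B} \right)_j .
% With mean shear \overline{U} = S x \, \hat{y} and no net helicity
% (\alpha_{ij} = 0), a suitable sign of the off-diagonal component
% \eta_{yx} couples \overline{B}_x to \overline{B}_y and permits mean-field
% growth: the "shear-current" effect, here driven by small-scale
% magnetic fluctuations rather than kinematic turbulence.
```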

  7. III-V Ultra-Thin-Body InGaAs/InAs MOSFETs for Low Standby Power Logic Applications

    NASA Astrophysics Data System (ADS)

    Huang, Cheng-Ying

    As device scaling continues to the sub-10-nm regime, III-V InGaAs/InAs metal-oxide-semiconductor field-effect transistors (MOSFETs) are promising candidates for replacing Si-based MOSFETs for future very-large-scale integration (VLSI) logic applications. III-V InGaAs materials have low electron effective mass and high electron velocity, allowing higher on-state current at lower VDD and reducing the switching power consumption. However, III-V InGaAs materials have a narrower band gap and higher permittivity, leading to large band-to-band tunneling (BTBT) leakage or gate-induced drain leakage (GIDL) at the drain end of the channel, and large subthreshold leakage due to worse electrostatic integrity. To utilize III-V MOSFETs in future logic circuits, III-V MOSFETs must have higher on-state performance than Si MOSFETs as well as very low leakage current and low standby power consumption. In this dissertation, we will report InGaAs/InAs ultra-thin-body MOSFETs. Three techniques for reducing the leakage currents in InGaAs/InAs MOSFETs are reported, as described below. 1) Wide band-gap barriers: We developed AlAs0.44Sb0.56 barriers lattice-matched to InP by molecular beam epitaxy (MBE), and studied the electron transport in In0.53Ga0.47As/AlAs0.44Sb0.56 heterostructures. The InGaAs channel MOSFETs using AlAs0.44Sb0.56 bottom barriers or p-doped In0.52Al0.48As barriers were demonstrated, showing significant suppression of the back-barrier leakage. 2) Ultra-thin channels: We investigated the electron transport in InGaAs and InAs ultra-thin quantum wells and ultra-thin-body MOSFETs (tch ~ 2-4 nm). For high-performance logic, InAs channels enable higher on-state current, while for low-power logic, InGaAs channels allow lower BTBT leakage current. 3) Source/Drain engineering: We developed raised InGaAs and recessed InP source/drain spacers.
The raised InGaAs source/drain spacers improve electrostatics, reducing subthreshold leakage, and smooth the electric field near drain, reducing

  8. Generation of a large volume of clinically relevant nanometre-sized ultra-high-molecular-weight polyethylene wear particles for cell culture studies

    PubMed Central

    Ingham, Eileen; Fisher, John; Tipper, Joanne L

    2014-01-01

    It has recently been shown that the wear of ultra-high-molecular-weight polyethylene in hip and knee prostheses leads to the generation of nanometre-sized particles, in addition to micron-sized particles. The biological activity of nanometre-sized ultra-high-molecular-weight polyethylene wear particles has not, however, previously been studied due to difficulties in generating sufficient volumes of nanometre-sized ultra-high-molecular-weight polyethylene wear particles suitable for cell culture studies. In this study, wear simulation methods were investigated to generate a large volume of endotoxin-free clinically relevant nanometre-sized ultra-high-molecular-weight polyethylene wear particles. Both single-station and six-station multidirectional pin-on-plate wear simulators were used to generate ultra-high-molecular-weight polyethylene wear particles under sterile and non-sterile conditions. Microbial contamination and endotoxin levels in the lubricants were determined. The results indicated that microbial contamination was absent and endotoxin levels were low and within acceptable limits for the pharmaceutical industry, when a six-station pin-on-plate wear simulator was used to generate ultra-high-molecular-weight polyethylene wear particles in a non-sterile environment. Different pore-sized polycarbonate filters were investigated to isolate nanometre-sized ultra-high-molecular-weight polyethylene wear particles from the wear test lubricants. The use of the filter sequence of 10, 1, 0.1, 0.1 and 0.015 µm pore sizes allowed successful isolation of ultra-high-molecular-weight polyethylene wear particles with a size range of < 100 nm, which was suitable for cell culture studies. PMID:24658586

  9. Generation of a large volume of clinically relevant nanometre-sized ultra-high-molecular-weight polyethylene wear particles for cell culture studies.

    PubMed

    Liu, Aiqin; Ingham, Eileen; Fisher, John; Tipper, Joanne L

    2014-04-01

    It has recently been shown that the wear of ultra-high-molecular-weight polyethylene in hip and knee prostheses leads to the generation of nanometre-sized particles, in addition to micron-sized particles. The biological activity of nanometre-sized ultra-high-molecular-weight polyethylene wear particles has not, however, previously been studied due to difficulties in generating sufficient volumes of nanometre-sized ultra-high-molecular-weight polyethylene wear particles suitable for cell culture studies. In this study, wear simulation methods were investigated to generate a large volume of endotoxin-free clinically relevant nanometre-sized ultra-high-molecular-weight polyethylene wear particles. Both single-station and six-station multidirectional pin-on-plate wear simulators were used to generate ultra-high-molecular-weight polyethylene wear particles under sterile and non-sterile conditions. Microbial contamination and endotoxin levels in the lubricants were determined. The results indicated that microbial contamination was absent and endotoxin levels were low and within acceptable limits for the pharmaceutical industry, when a six-station pin-on-plate wear simulator was used to generate ultra-high-molecular-weight polyethylene wear particles in a non-sterile environment. Different pore-sized polycarbonate filters were investigated to isolate nanometre-sized ultra-high-molecular-weight polyethylene wear particles from the wear test lubricants. The use of the filter sequence of 10, 1, 0.1, 0.1 and 0.015 µm pore sizes allowed successful isolation of ultra-high-molecular-weight polyethylene wear particles with a size range of < 100 nm, which was suitable for cell culture studies.

  10. Fabrication of the HIAD Large-Scale Demonstration Assembly and Upcoming Mission Applications

    NASA Technical Reports Server (NTRS)

    Swanson, G. T.; Johnson, R. K.; Hughes, S. J.; Dinonno, J. M.; Cheatwood, F. M.

    2017-01-01

    Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology. This effort has included multiple ground test campaigns and flight tests culminating in the HIAD project's second-generation (Gen-2) deployable aeroshell system and associated analytical tools. NASA's HIAD project team has developed, fabricated, and tested inflatable structures (IS) integrated with a flexible thermal protection system (F-TPS), ranging in diameter from 3-6m, with cone angles of 60 and 70 deg. In 2015, United Launch Alliance (ULA) announced that they will use a HIAD (10-12m) as part of their Sensible, Modular, Autonomous Return Technology (SMART) for their upcoming Vulcan rocket. ULA expects SMART reusability, coupled with other advancements for Vulcan, will substantially reduce the cost of access to space. The first booster engine recovery via HIAD is scheduled for 2024. To meet this near-term need, as well as future NASA applications, the HIAD team is investigating taking the technology to the 10-15m diameter scale. In the last year, many significant development and fabrication efforts have been accomplished, culminating in the construction of a large-scale inflatable structure demonstration assembly. This assembly incorporated the first three tori for a 12m Mars Human-Scale Pathfinder HIAD conceptual design that was constructed with the current state-of-the-art material set. Numerous design trades and torus fabrication demonstrations preceded this effort. In 2016, three large-scale tori (0.61m cross-section) and six subscale tori (0.25m cross-section) were manufactured to demonstrate fabrication techniques using the newest candidate material sets. These tori were tested to evaluate durability and load capacity. This work led to the selection of the inflatable structure's third-generation (Gen-3) structural liner. In late 2016, the three tori required for the large-scale demonstration assembly were fabricated, and then

  11. Large-Scale Document Automation: The Systems Integration Issue.

    ERIC Educational Resources Information Center

    Kalthoff, Robert J.

    1985-01-01

    Reviews current technologies for electronic imaging and its recording and transmission, including digital recording, optical data disks, automated image-delivery micrographics, high-density-magnetic recording, and new developments in telecommunications and computers. The role of the document automation systems integrator, who will bring these…

  12. Large Scale Traffic Simulations

    DOT National Transportation Integrated Search

    1997-01-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computation speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between t...

  13. A VLSI Neural Monitoring System With Ultra-Wideband Telemetry for Awake Behaving Subjects.

    PubMed

    Greenwald, E; Mollazadeh, M; Hu, C; Wei Tang; Culurciello, E; Thakor, V

    2011-04-01

    Long-term monitoring of neuronal activity in awake behaving subjects can provide fundamental information about brain dynamics for neuroscience and neuroengineering applications. Here, we present a miniature, lightweight, and low-power recording system for monitoring neural activity in awake behaving animals. The system integrates two custom designed very-large-scale integrated chips, a neural interface module fabricated in 0.5 μm complementary metal-oxide semiconductor technology and an ultra-wideband transmitter module fabricated in a 0.5 μm silicon-on-sapphire (SOS) technology. The system amplifies, filters, digitizes, and transmits 16 channels of neural data at a rate of 1 Mb/s. The entire system, which includes the VLSI circuits, a digital interface board, a battery, and a custom housing, is small and lightweight (24 g) and, thus, can be chronically mounted on small animals. The system consumes 4.8 mA and records continuously for up to 40 h powered by a 3.7-V, 200-mAh rechargeable lithium-ion battery. Experimental benchtop characterizations as well as in vivo multichannel neural recordings from awake behaving rats are presented here.
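
    The quoted endurance follows directly from the stated cell capacity and average draw; a quick back-of-envelope check (ideal discharge assumed, no capacity derating):

```python
# Sanity check of the reported battery life from the stated figures:
# a 200 mAh cell discharged at an average 4.8 mA.
capacity_mah = 200.0
average_draw_ma = 4.8
runtime_h = capacity_mah / average_draw_ma
print(round(runtime_h, 1))  # 41.7 h, consistent with "up to 40 h"
```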

  14. Generation of large-scale magnetic fields by small-scale dynamo in shear flows

    DOE PAGES

    Squire, J.; Bhattacharjee, A.

    2015-10-20

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Furthermore, given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.

  15. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    NASA Astrophysics Data System (ADS)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever-larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component in the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
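
    "Strong scaling" of the kind reported for the TS-AWP is conventionally measured by fixing the problem size and increasing the processor count; a minimal sketch with illustrative numbers (not measurements from the chapter):

```python
# Strong-scaling metrics for a fixed-size run: speedup relative to a
# reference processor count, and parallel efficiency (1.0 = perfect).
def strong_scaling(t_ref_s: float, p_ref: int, t_p_s: float, p: int):
    speedup = t_ref_s / t_p_s
    efficiency = speedup * p_ref / p
    return speedup, efficiency

# Hypothetical timings: 1000 s on 1024 cores vs 32 s on 40960 cores.
s, e = strong_scaling(1000.0, 1024, 32.0, 40960)
print(f"speedup {s:.2f}x, efficiency {e:.1%}")  # speedup 31.25x, efficiency 78.1%
```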

  16. Moditored unsaturated soil transport processes as a support for large scale soil and water management

    NASA Astrophysics Data System (ADS)

    Vanclooster, Marnik

    2010-05-01

    The current societal demand for sustainable soil and water management is very large. The drivers of global and climate change exert many pressures on the soil and water ecosystems, endangering appropriate ecosystem functioning. Unsaturated soil transport processes play a key role in soil-water system functioning, as they control the fluxes of water and nutrients from the soil to plants (the pedo-biosphere link), the infiltration flux of precipitated water to groundwater and the evaporative flux, and hence the feedback from the soil to the climate system. Yet, unsaturated soil transport processes are difficult to quantify since they are affected by huge variability of the governing properties at different space-time scales and by the intrinsic non-linearity of the transport processes. The incompatibility between the scale at which processes can reasonably be characterized, the scale at which the theory correctly describes them, and the scale at which the soil and water system needs to be managed calls for further development of scaling procedures in unsaturated zone science. It also calls for a better integration of theoretical and modelling approaches to elucidate transport processes at the appropriate scales, compatible with the objective of sustainable soil and water management. Moditoring science, i.e. the interdisciplinary research domain where modelling and monitoring science are linked, is currently evolving significantly in the unsaturated zone hydrology area. In this presentation, a review of current moditoring strategies/techniques will be given and illustrated for solving large-scale soil and water management problems. This will also allow identifying research needs in the interdisciplinary domain of modelling and monitoring and improving the integration of unsaturated zone science in solving soil and water management issues. A focus will be given to examples of large-scale soil and water management problems in Europe.
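
    The unsaturated transport processes referred to above are classically described by Richards' equation (standard background, not stated in the abstract), whose strongly nonlinear coefficients are precisely what makes upscaling difficult:

```latex
% Richards' equation (mixed form): \theta volumetric water content,
% h pressure head, K(h) unsaturated hydraulic conductivity, z elevation.
\frac{\partial \theta(h)}{\partial t}
  = \nabla \cdot \bigl[ K(h) \, \nabla (h + z) \bigr].
% Both \theta(h) and K(h) vary over orders of magnitude with h, so
% small-scale heterogeneity does not average out trivially to the
% management scale: this is the scale-incompatibility problem discussed
% in the abstract.
```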

  17. Toward Increasing Fairness in Score Scale Calibrations Employed in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; von Davier, Matthias

    2014-01-01

    In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…

  18. A Survey on Routing Protocols for Large-Scale Wireless Sensor Networks

    PubMed Central

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

    With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs) based on the cooperation among a significant number of nodes have become a hot topic. “Large-scale” mainly means a large area or a high density of network nodes. Accordingly, the routing protocols must scale well to network scope extension and node density increases. A sensor node is normally energy-limited and cannot be recharged, and thus its energy consumption has a quite significant effect on the scalability of the protocol. To the best of our knowledge, currently the mainstream methods to solve the energy problem in large-scale WSNs are the hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes within the high level are responsible for data aggregation and management work, and the low-level nodes for sensing their surroundings and collecting information. The hierarchical routing protocols are proved to be more energy-efficient than flat ones in which all the nodes play the same role, especially in terms of the data aggregation and the flooding of the control packets. With focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on different criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to gain a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages. Moreover, a comparison of each routing protocol is conducted to demonstrate the differences between the protocols in terms of message complexity, memory requirements, localization, data aggregation, clustering manner
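
    As a concrete instance of the hierarchical idea, the widely cited LEACH protocol rotates the cluster-head role so the energy cost of aggregation is shared across nodes; its round-by-round election threshold is easy to state (the classic published formula; variable names are ours, and LEACH is our example, not one singled out by this survey):

```python
# LEACH cluster-head election threshold T(n). A node that has not yet
# served as head in the current epoch elects itself with probability
# T(n), which rises as the epoch proceeds so that on average every node
# serves once per ~1/p rounds.
def leach_threshold(p: float, r: int, eligible: bool = True) -> float:
    if not eligible:  # node already served as head this epoch
        return 0.0
    epoch = round(1 / p)  # rounds per rotation epoch
    return p / (1 - p * (r % epoch))

print(leach_threshold(0.1, 0))  # 0.1 at the start of an epoch
print(leach_threshold(0.1, 9))  # ~1.0 in the epoch's final round
```

    The rising threshold is what balances energy drain: nodes that postponed serving as cluster head become certain to serve before the epoch resets.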

  19. Large Scale Flood Risk Analysis using a New Hyper-resolution Population Dataset

    NASA Astrophysics Data System (ADS)

    Smith, A.; Neal, J. C.; Bates, P. D.; Quinn, N.; Wing, O.

    2017-12-01

    Here we present the first national-scale flood risk analyses using high-resolution Facebook Connectivity Lab population data and data from a hyper-resolution flood hazard model. In recent years the field of large-scale hydraulic modelling has been transformed by new remotely sensed datasets, improved process representation, highly efficient flow algorithms and increases in computational power. These developments have allowed flood risk analysis to be undertaken in previously unmodeled territories and from continental to global scales. Flood risk analyses are typically conducted via the integration of modelled water depths with an exposure dataset. Over large scales and in data-poor areas, these exposure data typically take the form of a gridded population dataset, estimating population density using remotely sensed data and/or locally available census data. The local nature of flooding dictates that for robust flood risk analysis to be undertaken, both hazard and exposure data should sufficiently resolve local-scale features. Global flood frameworks are enabling flood hazard data to be produced at 90m resolution, resulting in a mismatch with available population datasets, which are typically more coarsely resolved. Moreover, these exposure data are typically focused on urban areas and struggle to represent rural populations. In this study we integrate a new population dataset with a global flood hazard model. The population dataset was produced by the Connectivity Lab at Facebook, providing gridded population data at 5m resolution, representing a resolution increase over previous countrywide datasets of multiple orders of magnitude. Flood risk analyses undertaken over a number of developing countries are presented, along with a comparison of flood risk analyses undertaken using pre-existing population datasets.
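
    The core overlay operation described above, intersecting modelled water depths with a gridded population dataset, reduces to a masked sum on co-registered grids. A minimal sketch with toy arrays (our illustration, not the study's code or data):

```python
# Grid-based flood exposure: count people in cells where modelled water
# depth exceeds a chosen hazard threshold. Grids are assumed to be
# co-registered (same extent, resolution, and alignment).
import numpy as np

depth_m = np.array([[0.0, 0.2, 1.5],
                    [0.0, 0.6, 0.1],
                    [0.0, 0.0, 2.3]])
population = np.array([[10,  5, 20],
                       [ 0, 50, 30],
                       [ 8,  0, 12]])

threshold_m = 0.3  # depth above which a cell counts as flooded
exposed = population[depth_m > threshold_m].sum()
print(int(exposed))  # 20 + 50 + 12 = 82 people exposed
```

    The resolution mismatch the abstract discusses enters here: if the population grid is much coarser than the hazard grid, exposure in small flooded areas is systematically misallocated.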

  20. Solutions of large-scale electromagnetics problems involving dielectric objects with the parallel multilevel fast multipole algorithm.

    PubMed

    Ergül, Özgür

    2011-11-01

    Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively by using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. Efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.

  1. A Fine-Grained Pipelined Implementation for Large-Scale Matrix Inversion on FPGA

    NASA Astrophysics Data System (ADS)

    Zhou, Jie; Dou, Yong; Zhao, Jianxun; Xia, Fei; Lei, Yuanwu; Tang, Yuxing

    Large-scale matrix inversion plays an important role in many applications. However, to the best of our knowledge, there is no FPGA-based implementation. In this paper, we explore the possibility of accelerating large-scale matrix inversion on FPGA. To exploit the computational potential of FPGA, we introduce a fine-grained parallel algorithm for matrix inversion. A scalable linear array of processing elements (PEs), which is the core component of the FPGA accelerator, is proposed to implement this algorithm. A total of 12 PEs can be integrated into an Altera StratixII EP2S130F1020C5 FPGA on our self-designed board. Experimental results show that a speedup factor of 2.6 and a maximum power-performance ratio of 41 can be achieved compared to a Pentium Dual CPU with double SSE threads.
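
    To make the computation concrete, here is the kind of algorithm such an accelerator targets: matrix inversion by Gauss-Jordan elimination, whose regular column-sweep structure maps naturally onto a linear array of PEs. This is a generic reference sketch in Python, not the paper's fine-grained FPGA algorithm:

```python
# Gauss-Jordan inversion with partial pivoting: augment [A | I], sweep
# the columns, and read the inverse off the right half. Each column
# sweep is the unit of work a pipelined PE array would stream through.
import numpy as np

def invert(a: np.ndarray) -> np.ndarray:
    n = a.shape[0]
    aug = np.hstack([a.astype(float), np.eye(n)])
    for col in range(n):
        # Partial pivoting: bring the largest remaining entry up.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]          # normalize the pivot row
        for row in range(n):               # eliminate the column elsewhere
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]

a = np.array([[4.0, 7.0], [2.0, 6.0]])
print(np.allclose(a @ invert(a), np.eye(2)))  # True
```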

  2. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole-system aircraft simulation and whole-system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  3. Ultra-low-loss and broadband mode converters in Si3N4 technology

    NASA Astrophysics Data System (ADS)

    Mu, Jinfeng; Dijkstra, Meindert; de Goede, Michiel; Yong, Yean-Sheng; García-Blanco, Sonia M.

    2017-02-01

    Si3N4 grown by low-pressure chemical vapor deposition (LPCVD) on thermally oxidized silicon wafers is widely utilized for creating integrated photonic devices due to its ultra-low propagation loss and large transparency window (400 nm to 2350 nm). In this paper, an ultra-low-loss and broadband mode converter for monolithic integration of different materials onto the passive Si3N4 photonic technology platform is presented. The mode size converter is constructed with a vertically tapered Si3N4 waveguide that is then buried by a polymer or an Al2O3 waveguide. The influence of the various design parameters on the converter characteristics is investigated. Optimal designs are proposed, in which the thickness of the Si3N4 waveguide is tapered from 200 nm to 40 nm. The calculated losses of the mode converters at 976 nm and 1550 nm wavelengths are well below 0.1 dB for the Si3N4-polymer coupler and below 0.3 dB for the Si3N4-Al2O3 coupler. The preliminary experimental results show good agreement with the design values, indicating that the mode converters can be utilized for the low-loss integration of different materials.
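
    The reason tapering matters is that coupling loss between two waveguides is set by the overlap of their mode fields. A textbook estimate for Gaussian mode approximations of 1/e widths w1 and w2 gives a feel for the numbers (our simplified model, not the paper's full mode solver):

```python
# Coupling loss between two Gaussian mode approximations of widths
# w1, w2: overlap efficiency eta = (2 w1 w2 / (w1^2 + w2^2))^2,
# expressed in dB. A taper's job is to bring the widths close.
import math

def gaussian_overlap_loss_db(w1_um: float, w2_um: float) -> float:
    eta = (2 * w1_um * w2_um / (w1_um**2 + w2_um**2)) ** 2
    return 10 * math.log10(1 / eta)

print(round(gaussian_overlap_loss_db(1.0, 1.0), 3))  # matched modes: 0.0 dB
print(round(gaussian_overlap_loss_db(1.0, 2.0), 2))  # 2x mismatch: 1.94 dB
```

    Even a modest 2x mode-size mismatch costs nearly 2 dB, which is why an adiabatic taper to a matched mode size is needed to reach the sub-0.1 dB figures quoted above.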

  4. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large-scale systems such as power networks, communication networks and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large-scale systems. A very similar classification is used, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  5. Fractionally Integrated Flux model and Scaling Laws in Weather and Climate

    NASA Astrophysics Data System (ADS)

    Schertzer, Daniel; Lovejoy, Shaun

    2013-04-01

    The Fractionally Integrated Flux (FIF) model has been extensively used to model intermittent observables, such as the velocity field, by defining them through a fractional integration of a conservative (i.e. strictly scale-invariant) flux, such as the turbulent energy flux. It is a well-defined model that yields the observed scaling laws. Generalised Scale Invariance (GSI) enables FIF to deal with anisotropic fractional integrations and has been rather successful in defining and modelling a unique regime of scaling anisotropic turbulence up to planetary scales. This turbulence has an effective dimension of 23/9=2.55... instead of the classically hypothesised 2D and 3D turbulent regimes for large and small spatial scales, respectively. It therefore theoretically eliminates an implausible "dimension transition" between these two regimes and the resulting requirement of a turbulent energy "mesoscale gap", whose empirical evidence has been brought more and more into question. More recently, GSI-FIF was used to analyse climate, and therefore much larger time scales. Indeed, the 23/9-dimensional regime necessarily breaks up at the outer spatial scales. The corresponding transition range, which can be called "macroweather", seems to have many interesting properties; e.g., it corresponds roughly to a fractional differentiation in time with a roughly flat frequency spectrum. Furthermore, this transition yields the possibility of scaling space-time climate fluctuations at much larger time scales, with a much stronger scaling anisotropy between time and space. Lovejoy, S. and D. Schertzer (2013). The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge Press (in press). Schertzer, D. et al. (1997). Fractals 5(3): 427-471. Schertzer, D. and S. Lovejoy (2011). International Journal of Bifurcation and Chaos 21(12): 3417-3456.
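
    In standard multifractal notation (our summary of the published framework, not equations quoted from this abstract), the FIF construction and the scaling laws it yields read:

```latex
% Observable v built by a fractional integration of order H of a
% conservative multifractal flux \epsilon_\lambda at scale ratio \lambda:
v_\lambda(\underline{x})
  = \epsilon_\lambda^{\,a}(\underline{x}) * |\underline{x}|^{-(D - H)},
% which yields the structure-function scaling
\langle |\Delta v(\Delta x)|^{q} \rangle \propto \Delta x^{\,\zeta(q)},
\qquad \zeta(q) = qH - K(aq),
% where K(q) is the moment-scaling function of the flux. For the
% turbulent velocity field, a = 1/3 and H = 1/3, recovering the
% Kolmogorov value \zeta(1) = 1/3 up to the intermittency correction.
```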

  6. A Large-Scale Super-Structure at z=0.65 in the UKIDSS Ultra-Deep Survey Field

    NASA Astrophysics Data System (ADS)

    Galametz, Audrey; Candels Clustering Working Group

    2017-07-01

    In hierarchical structure formation scenarios, galaxies accrete along high-density filaments. Superclusters represent the largest density enhancements in the cosmic web, with scales of 100 to 200 Mpc. As the largest components of large-scale structure (LSS), they are very powerful tools for constraining cosmological models. Since they also span a wide range of densities, from infalling groups to high-density cluster cores, they are the perfect laboratory for studying the influence of environment on galaxy evolution. I will present a newly discovered large-scale structure at z=0.65 in the UKIDSS UDS field. Although statistically predicted, the presence of such a structure in UKIDSS, one of the most extensively covered and studied extragalactic fields, remains serendipitous. Our follow-up confirmed more than 15 group members, including at least three galaxy clusters with M200 ~ 10^14 Msol. Deep spectroscopy of the quiescent core galaxies reveals that the most massive structure knots are at very different formation stages, with a range of red-sequence properties. Statistics allow us to map formation age across the structure's denser knots and identify where quenching is most probably occurring across the LSS. Spectral diagnostic analysis also reveals an interesting population of transition galaxies that we suspect are transforming from star-forming to quiescent galaxies.

  7. Large-Scale 3D Printing: The Way Forward

    NASA Astrophysics Data System (ADS)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, and numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields, including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends in large-scale 3D printing, particularly pertaining to (1) technological solutions for additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  8. Novel method to construct large-scale design space in lubrication process utilizing Bayesian estimation based on a small-scale design-of-experiment and small sets of large-scale manufacturing data.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-12-01

    A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X(1)) and blending times (X(2)) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y(1)), tablet hardness (Y(2)), and dissolution rate (Y(3)) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
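
    The core idea, correcting small-scale predictions with only a handful of large-scale runs, can be sketched as a conjugate Gaussian update on a scale-up bias term. This is a hypothetical illustration; the paper's actual estimation corrects multivariate spline response surfaces, and all numbers below are invented:

```python
import numpy as np

def bias_posterior(y_small, y_large, prior_var=1.0, noise_var=0.25):
    """Posterior mean/variance of a scale-up bias with a N(0, prior_var) prior.

    y_small: small-scale model predictions at the large-scale run conditions
    y_large: the few measured large-scale responses
    """
    resid = np.asarray(y_large) - np.asarray(y_small)
    n = resid.size
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)   # conjugate Gaussian update
    post_mean = post_var * resid.sum() / noise_var
    return post_mean, post_var

# invented data: three large-scale runs vs. small-scale predictions
bias, var = bias_posterior([92.0, 88.0, 95.0], [90.0, 86.5, 93.0])
```

    The corrected large-scale response surface is then the small-scale surface shifted by the posterior bias, with the posterior variance quantifying its remaining uncertainty.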

  9. Integrated amateur band and ultra-wide band monopole antenna with multiple band-notched

    NASA Astrophysics Data System (ADS)

    Srivastava, Kunal; Kumar, Ashwani; Kanaujia, B. K.; Dwari, Santanu

    2018-05-01

    This paper presents an integrated amateur-band and ultra-wide-band (UWB) monopole antenna with multiple band-notched characteristics. It is designed to avoid potential interference at the frequencies 3.99 GHz (3.83 GHz-4.34 GHz), 4.86 GHz (4.48 GHz-5.63 GHz), 7.20 GHz (6.10 GHz-7.55 GHz) and 8.0 GHz (7.62 GHz-8.47 GHz), with VSWR of 4.9, 11.5, 6.4 and 5.3, respectively. Equivalent parallel resonant circuits are presented for each band-notched frequency of the antenna. The antenna operates in the 1.2 GHz amateur band (1.05 GHz-1.3 GHz) and in the UWB band from 3.2 GHz to 13.9 GHz. Different substrates are used to verify the working of the proposed antenna. An integrated GSM band from 0.6 GHz to 1.8 GHz can also be achieved by changing the radius of the radiating patch. The antenna gain varies from 1.4 dBi to 9.8 dBi. Measured results are presented to validate the antenna performance.

  10. Integrating Remote Sensing Information Into A Distributed Hydrological Model for Improving Water Budget Predictions in Large-scale Basins through Data Assimilation.

    PubMed

    Qin, Changbo; Jia, Yangwen; Su, Z; Zhou, Zuhao; Qiu, Yaqin; Suhui, Shen

    2008-07-29

    This paper investigates whether remote sensing evapotranspiration estimates can be integrated, by means of data assimilation, into a distributed hydrological model to improve the predictions of spatial water distribution over a large river basin with an area of 317,800 km2. A series of available MODIS satellite images over the Haihe River basin in China are used for the year 2005. Evapotranspiration is retrieved from these 1×1 km resolution images using the SEBS (Surface Energy Balance System) algorithm. The physically based distributed model WEP-L (Water and Energy transfer Process in Large river basins) is used to compute the water balance of the Haihe River basin in the same year. Comparison between the model-derived and remote-sensing-retrieved basin-averaged evapotranspiration estimates shows a good piecewise linear relationship, but their spatial distributions within the Haihe basin differ: the remote sensing derived evapotranspiration shows variability at finer scales. An extended Kalman filter (EKF) data assimilation algorithm, suitable for non-linear problems, is used. Assimilation results indicate that remote sensing observations have a potentially important role in providing spatial information to the assimilation system for the spatially optimal hydrological parameterization of the model. This is especially important for large basins, such as the Haihe River basin in this study. Combining and integrating the capabilities of and information from model simulation and remote sensing techniques may provide the best spatial and temporal characteristics for hydrological states/fluxes, and would be both appealing and necessary for improving our knowledge of fundamental hydrological processes and for addressing important water resource management problems.
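
    The analysis step of such an EKF scheme can be sketched as follows. This is the generic textbook update, not the WEP-L implementation; the two-component state and the observation operator below are invented for illustration:

```python
import numpy as np

def ekf_update(x, P, z, h, H_jac, R):
    """One extended Kalman filter analysis step.

    x, P  : forecast state and its error covariance
    z     : observation vector (e.g. SEBS-retrieved evapotranspiration)
    h     : nonlinear observation operator; H_jac(x) is its Jacobian at x
    R     : observation-error covariance
    """
    H = H_jac(x)
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_a = x + K @ (z - h(x))             # analysis state
    P_a = (np.eye(len(x)) - K @ H) @ P   # analysis covariance
    return x_a, P_a

# invented 2-state example: only the first component is observed
x = np.array([3.0, 1.0])
P = np.eye(2)
h = lambda s: np.array([s[0]])
H_jac = lambda s: np.array([[1.0, 0.0]])
x_a, P_a = ekf_update(x, P, np.array([3.5]), h, H_jac, np.array([[0.5]]))
```

    The observed component is nudged toward the observation in proportion to the gain, and its analysis variance shrinks; unobserved components are untouched here because the toy Jacobian carries no cross-terms.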

  12. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses

    PubMed Central

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-01-01

    With the coming deluge of genome data, the need to store and process large-scale genome data, provide easy access to biomedical analysis tools, and share and retrieve data efficiently presents significant challenges. The variability in data volume results in variable computing and storage requirements, so biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tool integration), automatic deployment on the Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via the HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases, as well as a performance evaluation, are presented to validate the feasibility of the proposed approach. PMID:24462600

  13. A Resonant Tunneling Nanowire Field Effect Transistor with Physical Contractions: A Negative Differential Resistance Device for Low Power Very Large Scale Integration Applications

    NASA Astrophysics Data System (ADS)

    Molaei Imen Abadi, Rouzbeh; Saremi, Mehdi

    2018-02-01

    In this paper, the influence of ultra-scaled, physically symmetrical contractions on the electrical characteristics of ultra-thin silicon-on-insulator nanowires with a circular gate-all-around structure is investigated using a 3D Atlas numerical quantum simulator based on the non-equilibrium Green's function formalism. It is demonstrated that a local cross-section variation in a nanowire transistor results in the establishment of tunnel energy barriers at the source-channel and drain-channel junctions, which changes the device physics and causes a transition from a quantum wire (1-D) to a floating quantum-dot nanowire (0-D), introducing the resonant tunneling nanowire FET (RT-NWFET) as an interesting concept for nanoscale MOSFETs. The barriers construct resonance energy levels in the channel region of the nanowire because of the longitudinal confinement in three directions, causing some fluctuation in the ID-VGS characteristic. In addition, these barriers remarkably improve the subthreshold swing and minimize the ON/OFF-current ratio degradation at a low operating voltage of 0.5 V. As a result, RT-NWFETs are intrinsically protected from drain-source tunneling and are an interesting candidate for extending the roadmap below 10 nm.

  14. A Latin-cross-shaped integrated resonant cantilever with second torsion-mode resonance for ultra-resoluble bio-mass sensing

    NASA Astrophysics Data System (ADS)

    Xia, Xiaoyuan; Zhang, Zhixiang; Li, Xinxin

    2008-03-01

    Second torsion-mode resonance is proposed for microcantilever biosensors for ultra-high mass-weighing sensitivity and resolution. By increasing both the resonant frequency and Q-factor, the higher mode torsional resonance is favorable for improving the mass-sensing performance. For the first time, a Latin-cross-shaped second-mode resonant cantilever is constructed and optimally designed for both signal-readout and resonance-exciting elements. The cantilever sensor is fabricated by using silicon micromachining techniques. The transverse piezoresistive sensing element and the specific-shaped resonance-exciting loop are successfully integrated in the cantilever. Alpha-fetoprotein (AFP) antibody-antigen specific binding is implemented for the sensing experiment. The proposed cantilever sensor is designed with significantly superior sensitivity to the previously reported first torsion-mode one. After analysis with an Allan variance algorithm, which can be easily embedded in the sensing system, the Latin-cross-shaped second torsion-mode resonant cantilever is evaluated with ultra-high mass resolution. Therefore, the high-performance integrated micro-sensor is promising for on-the-spot bio-molecule detection.
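
    The Allan-variance evaluation of a resonance-frequency readout, as used above to quantify mass resolution, can be sketched as follows. This is a generic non-overlapping estimator applied to simulated white-noise data, not the authors' embedded implementation:

```python
import numpy as np

def allan_variance(y, m=1):
    """Non-overlapping Allan variance of fractional-frequency samples,
    block-averaged over m points (averaging time tau = m * t_sample)."""
    y = np.asarray(y, float)
    n = y.size // m
    y_bar = y[: n * m].reshape(n, m).mean(axis=1)   # tau-averaged samples
    return 0.5 * np.mean(np.diff(y_bar) ** 2)       # half mean-square successive difference

# hypothetical readout: white noise of ~1 Hz rms around a 1 MHz resonance
rng = np.random.default_rng(1)
f = 1e6 + rng.standard_normal(8192)                 # frequency samples in Hz
sigma_y = np.sqrt(allan_variance(f / 1e6))          # fractional frequency stability
```

    For a resonant mass sensor, the detectable mass change scales with the fractional frequency stability (roughly delta_m ~ 2 * m_eff * sigma_y), which is why raising both the resonant frequency and the Q-factor improves resolution.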

  15. Ultra-low temperature (≤300 °C) growth of Ge-rich SiGe by solid-liquid-coexisting annealing of a-GeSn/c-Si structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sadoh, Taizoh, E-mail: sadoh@ed.kyushu-u.ac.jp; Chikita, Hironori; Miyao, Masanobu

    2015-09-07

    Ultra-low temperature (≤300 °C) growth of Ge-rich SiGe on Si substrates is strongly desired to realize advanced electronic and optical devices that can be merged onto Si large-scale integrated circuits (LSI). To achieve this, the annealing characteristics of a-GeSn/c-Si structures are investigated over wide ranges of initial Sn concentration (0%-26%) and annealing conditions (300-1000 °C, 1 s-48 h). Epitaxial growth triggered by SiGe mixing is observed after annealing, where the annealing temperature necessary for epitaxial growth significantly decreases with increasing initial Sn concentration and/or annealing time. As a result, Ge-rich (∼80%) SiGe layers with Sn concentrations of ∼2% are realized by ultra-low temperature annealing (300 °C, 48 h) for a sample with an initial Sn concentration of 26%. The annealing temperature (300 °C) is in the solid-liquid coexisting temperature region of the phase diagram for the Ge-Sn system. From detailed analysis of the crystallization characteristics and composition profiles in the grown layers, it is suggested that SiGe mixing is generated by a liquid-phase reaction even at ultra-low temperatures far below the melting temperature of a-GeSn. This ultra-low-temperature growth technique for Ge-rich SiGe on Si substrates is expected to be useful for realizing next-generation LSI, where various multi-functional devices are integrated on Si substrates.

  16. Final Report: Large-Scale Optimization for Bayesian Inference in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghattas, Omar

    2013-10-15

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focuses on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. Our research is directed in three complementary areas: efficient approximations of the Hessian operator, reductions in the complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. Our efforts are integrated in the context of a challenging testbed problem that considers subsurface reacting flow and transport. The MIT component of the SAGUARO Project addresses the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their speedups.

  17. Opportunities for Breakthroughs in Large-Scale Computational Simulation and Design

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Alter, Stephen J.; Atkins, Harold L.; Bey, Kim S.; Bibb, Karen L.; Biedron, Robert T.; Carpenter, Mark H.; Cheatwood, F. McNeil; Drummond, Philip J.; Gnoffo, Peter A.

    2002-01-01

    Opportunities for breakthroughs in the large-scale computational simulation and design of aerospace vehicles are presented. Computational fluid dynamics tools to be used within multidisciplinary analysis and design methods are emphasized. The opportunities stem from speedups and robustness improvements in the underlying unit operations associated with simulation (geometry modeling, grid generation, physical modeling, analysis, etc.). Further, an improved programming environment can synergistically integrate these unit operations to leverage the gains. The speedups result from reducing the problem setup time through geometry modeling and grid generation operations, and from reducing the solution time by lowering the operation counts associated with solving the discretized equations to sufficient accuracy. The opportunities are addressed only at a general level here, but an extensive list of references containing further details is included. The opportunities discussed are being addressed through the Fast Adaptive Aerospace Tools (FAAST) element of the Advanced Systems Concept to Test (ASCoT) and the third Generation Reusable Launch Vehicles (RLV) projects at NASA Langley Research Center. The overall goal is to enable greater inroads into the design process with large-scale simulations.

  18. Vertically integrated visible and near-infrared metasurfaces enabling an ultra-broadband and highly angle-resolved anomalous reflection.

    PubMed

    Gao, Song; Lee, Sang-Shin; Kim, Eun-Soo; Choi, Duk-Yong

    2018-06-21

    An optical device with minimized dimensions, capable of efficiently resolving an ultra-broad spectrum into a wide splitting angle while incurring no spectrum overlap, is of importance in advancing the development of spectroscopy. Unfortunately, this challenging task cannot be easily addressed through conventional geometrical or diffractive optical elements. Herein, we propose and demonstrate vertically integrated visible and near-infrared metasurfaces which render an ultra-broadband and highly angle-resolved anomalous reflection. The proposed metasurface capitalizes on a supercell that comprises two vertically concatenated trapezoid-shaped aluminum antennae, which are paired with a metallic ground plane via a dielectric layer. Under normal incidence, reflected light within a spectral bandwidth of 1000 nm, ranging from λ = 456 nm to 1456 nm, is efficiently angle-resolved to a single diffraction order with no spectrum overlap via the anomalous reflection, exhibiting an average reflection efficiency over 70% and a substantial angular splitting of 58°. Given a supercell pitch of 1500 nm, to the best of our knowledge, this micron-scale bandwidth is the largest ever reported. The substantially wide bandwidth has been accomplished by taking advantage of spectrally selective vertical coupling effects between the antennae and the ground plane. In the visible regime, the upper antenna primarily renders an anomalous reflection by cooperating with the lower antenna, which in turn cooperates with the ground plane and produces the phase variations leading to an anomalous reflection in the near-infrared regime. Misalignments between the two antennae were specifically inspected and found not to adversely affect the anomalous reflection, thus guaranteeing enhanced structural tolerance of the proposed metasurface.

  19. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  20. Uncovering a facile large-scale synthesis of LiNi1/3Co1/3Mn1/3O2 nanoflowers for high power lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Hua, Wei-Bo; Guo, Xiao-Dong; Zheng, Zhuo; Wang, Yan-Jie; Zhong, Ben-He; Fang, Baizeng; Wang, Jia-Zhao; Chou, Shu-Lei; Liu, Heng

    2015-02-01

    Developing advanced electrode materials that deliver high energy at ultra-fast charge and discharge rates is crucial to meeting the increasing large-scale market demand for high-power lithium-ion batteries (LIBs). A three-dimensional (3D) nanoflower structure is successfully developed in the large-scale synthesis of LiNi1/3Co1/3Mn1/3O2 material for the first time. Fast co-precipitation is the key technique for preparing the nanoflower structure in our method. After heat treatment, the obtained LiNi1/3Co1/3Mn1/3O2 nanoflowers (NL333) present a pristine flower-like nano-architecture and provide fast pathways for the transport of Li-ions and electrons. As a cathode material in a LIB, the prepared NL333 electrode demonstrates an outstanding high-rate capability. In particular, in a narrow voltage range of 2.7-4.3 V, the discharge capacity at an ultra-fast charge-discharge rate (20C) is up to 126 mAh g-1, which reaches 78% of that at 0.2C and is much higher than that (44.17%) of the traditional bulk LiNi1/3Co1/3Mn1/3O2.

  1. Ultra-low-power and robust digital-signal-processing hardware for implantable neural interface microsystems.

    PubMed

    Narasimhan, S; Chiel, H J; Bhunia, S

    2011-04-01

    Implantable microsystems for monitoring or manipulating brain activity typically require on-chip real-time processing of multichannel neural data using ultra low-power, miniaturized electronics. In this paper, we propose an integrated-circuit/architecture-level hardware design framework for neural signal processing that exploits the nature of the signal-processing algorithm. First, we consider different power reduction techniques and compare the energy efficiency between the ultra-low frequency subthreshold and conventional superthreshold design. We show that the superthreshold design operating at a much higher frequency can achieve comparable energy dissipation by taking advantage of extensive power gating. It also provides significantly higher robustness of operation and yield under large process variations. Next, we propose an architecture level preferential design approach for further energy reduction by isolating the critical computation blocks (with respect to the quality of the output signal) and assigning them higher delay margins compared to the noncritical ones. Possible delay failures under parameter variations are confined to the noncritical components, allowing graceful degradation in quality under voltage scaling. Simulation results using prerecorded neural data from the sea-slug (Aplysia californica) show that the application of the proposed design approach can lead to significant improvement in total energy, without compromising the output signal quality under process variations, compared to conventional design approaches.

  2. Integrating land and resource management plans and applied large-scale research on two national forests

    Treesearch

    Callie Jo Schweitzer; Stacy Clark; Glen Gaines; Paul Finke; Kurt Gottschalk; David Loftis

    2008-01-01

    Researchers working out of the Southern and Northern Research Stations have partnered with two National Forests to conduct two large-scale studies designed to assess the effectiveness of silvicultural techniques used to restore and maintain upland oak (Quercus spp.)-dominated ecosystems in the Cumberland Plateau Region of the southeastern United...

  3. The multi-phase winds of Markarian 231: from the hot, nuclear, ultra-fast wind to the galaxy-scale, molecular outflow

    NASA Astrophysics Data System (ADS)

    Feruglio, C.; Fiore, F.; Carniani, S.; Piconcelli, E.; Zappacosta, L.; Bongiorno, A.; Cicone, C.; Maiolino, R.; Marconi, A.; Menci, N.; Puccetti, S.; Veilleux, S.

    2015-11-01

    Mrk 231 is a nearby ultra-luminous IR galaxy exhibiting a kpc-scale, multi-phase AGN-driven outflow. This galaxy represents the best target to investigate in detail the morphology and energetics of powerful outflows, as well as their still poorly understood expansion mechanism and impact on the host galaxy. In this work, we present the best-sensitivity and angular-resolution maps to date of the molecular disk and outflow of Mrk 231, as traced by CO(2-1) and (3-2) observations obtained with the IRAM/PdBI. In addition, we analyze archival deep Chandra and NuSTAR X-ray observations. We use this unprecedented combination of multi-wavelength data sets to constrain the physical properties of both the molecular disk and outflow, the presence of a highly ionized ultra-fast nuclear wind, and their connection. The molecular CO(2-1) outflow has a size of 1 kpc and extends in all directions around the nucleus, being more prominent along the south-west to north-east direction, suggesting a wide-angle biconical geometry. The maximum projected velocity of the outflow is nearly constant out to 1 kpc, implying that the density of the outflowing material must decrease from the nucleus outwards as r^-2. This suggests that either a large part of the gas leaves the flow during its expansion or that the bulk of the outflow has not yet reached out to 1 kpc, thus implying a limit on its age of 1 Myr. Mapping the mass and energy rates of the molecular outflow yields Ṁ_OF = 500-1000 M⊙ yr^-1 and Ė_kin,OF = (7-10) × 10^43 erg s^-1. The total kinetic energy of the outflow, E_kin,OF, is of the same order as the total energy of the molecular disk, E_disk. Remarkably, our analysis of the X-ray data reveals a nuclear ultra-fast outflow (UFO) with velocity -20,000 km s^-1, Ṁ_UFO = 0.3-2.1 M⊙ yr^-1, and momentum load Ṗ_UFO/Ṗ_rad = 0.2-1.6. We find Ė_kin,UFO ≈ Ė_kin,OF, as predicted for outflows undergoing an energy-conserving expansion. This suggests that most of the UFO

  4. Analyzing large scale genomic data on the cloud with Sparkhit

    PubMed Central

    Huang, Liren; Krüger, Jan

    2018-01-01

    Abstract Motivation The increasing amount of next-generation sequencing data poses a fundamental challenge on large scale genomic analytics. Existing tools use different distributed computational platforms to scale-out bioinformatics workloads. However, the scalability of these tools is not efficient. Moreover, they have heavy run time overheads when pre-processing large amounts of data. To address these limitations, we have developed Sparkhit: a distributed bioinformatics framework built on top of the Apache Spark platform. Results Sparkhit integrates a variety of analytical methods. It is implemented in the Spark extended MapReduce model. It runs 92–157 times faster than MetaSpark on metagenomic fragment recruitment and 18–32 times faster than Crossbow on data pre-processing. We analyzed 100 terabytes of data across four genomic projects in the cloud in 21 h, which includes the run times of cluster deployment and data downloading. Furthermore, our application on the entire Human Microbiome Project shotgun sequencing data was completed in 2 h, presenting an approach to easily associate large amounts of public datasets with reference data. Availability and implementation Sparkhit is freely available at: https://rhinempi.github.io/sparkhit/. Contact asczyrba@cebitec.uni-bielefeld.de Supplementary information Supplementary data are available at Bioinformatics online. PMID:29253074

  5. Large-Scale Overlays and Trends: Visually Mining, Panning and Zooming the Observable Universe.

    PubMed

    Luciani, Timothy Basil; Cherinka, Brian; Oliphant, Daniel; Myers, Sean; Wood-Vasey, W Michael; Labrinidis, Alexandros; Marai, G Elisabeta

    2014-07-01

    We introduce a web-based computing infrastructure to assist the visual integration, mining and interactive navigation of large-scale astronomy observations. Following an analysis of the application domain, we design a client-server architecture to fetch distributed image data and to partition local data into a spatial index structure that allows prefix-matching of spatial objects. In conjunction with hardware-accelerated pixel-based overlays and an online cross-registration pipeline, this approach allows the fetching, displaying, panning and zooming of gigabit panoramas of the sky in real time. To further facilitate the integration and mining of spatial and non-spatial data, we introduce interactive trend images: compact visual representations for identifying outlier objects and for studying trends within large collections of spatial objects of a given class. In a demonstration, images from three sky surveys (SDSS, FIRST and simulated LSST results) are cross-registered and integrated as overlays, allowing cross-spectrum analysis of astronomy observations. Trend images are interactively generated from catalog data and used to visually mine astronomy observations of similar type. The front-end of the infrastructure uses the web technologies WebGL and HTML5 to enable cross-platform, web-based functionality. Our approach attains interactive rendering framerates; its power and flexibility enable it to serve the needs of the astronomy community. Evaluation on three case studies, as well as feedback from domain experts, emphasizes the benefits of this visual approach to the observational astronomy field, and its potential benefits to large-scale geospatial visualization in general.
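
    The prefix-matching idea behind such a spatial index can be sketched with a simple quadtree key: interleaving normalized RA/Dec digits yields keys whose shared prefixes correspond to shared sky tiles. This is an illustrative toy, not the paper's actual index; the object names and coordinates are invented:

```python
def quadkey(ra, dec, depth=10):
    """Encode an (RA, Dec) position in degrees as a quadtree tile key."""
    x = ra / 360.0                                   # normalize RA to [0, 1)
    y = (dec + 90.0) / 180.0                         # normalize Dec to [0, 1]
    key = ""
    for _ in range(depth):
        x, y = x * 2.0, y * 2.0
        ix, iy = min(int(x), 1), min(int(y), 1)      # clamp the +90 deg edge
        key += str(ix + 2 * iy)                      # one of four children: 0..3
        x -= ix
        y -= iy
    return key

# objects indexed by key; querying a tile is a key-prefix match
catalog = {"a": (150.1, 2.2), "b": (150.2, 2.3), "c": (30.0, -45.0)}
index = {name: quadkey(ra, dec) for name, (ra, dec) in catalog.items()}
tile = quadkey(150.15, 2.25, depth=3)                # a coarse tile near a and b
in_tile = sorted(n for n, k in index.items() if k.startswith(tile))
```

    Because deeper keys extend shallower ones, a single sorted key column supports range or prefix scans at any zoom level, which is what makes this layout convenient for pan-and-zoom clients.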

  6. Sound production due to large-scale coherent structures

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.

    1979-01-01

    The acoustic pressure fluctuations due to large-scale finite amplitude disturbances in a free turbulent shear flow are calculated. The flow is decomposed into three component scales; the mean motion, the large-scale wave-like disturbance, and the small-scale random turbulence. The effect of the large-scale structure on the flow is isolated by applying both a spatial and phase average on the governing differential equations and by initially taking the small-scale turbulence to be in energetic equilibrium with the mean flow. The subsequent temporal evolution of the flow is computed from global energetic rate equations for the different component scales. Lighthill's theory is then applied to the region with the flowfield as the source and an observer located outside the flowfield in a region of uniform velocity. Since the time history of all flow variables is known, a minimum of simplifying assumptions for the Lighthill stress tensor is required, including no far-field approximations. A phase average is used to isolate the pressure fluctuations due to the large-scale structure, and also to isolate the dynamic process responsible. Variation of mean square pressure with distance from the source is computed to determine the acoustic far-field location and decay rate, and, in addition, spectra at various acoustic field locations are computed and analyzed. Also included are the effects of varying the growth and decay of the large-scale disturbance on the sound produced.

  7. The Challenge of Large-Scale Literacy Improvement

    ERIC Educational Resources Information Center

    Levin, Ben

    2010-01-01

    This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…

  8. Large-scale influences in near-wall turbulence.

    PubMed

    Hutchins, Nicholas; Marusic, Ivan

    2007-03-15

    Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.
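The amplitude-modulation diagnostic described above (decompose the velocity signal into large- and small-scale parts, then compare the small-scale envelope with the large-scale motion) can be sketched on synthetic data; the frequencies, filter widths, and modulation depth below are hypothetical, not values from the paper:

```python
import math

def moving_average(x, w):
    """Simple box filter used as a crude low-pass."""
    half = w // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def correlation(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

n = 4000
t = [i / n for i in range(n)]
large = [math.sin(2 * math.pi * 3 * ti) for ti in t]        # large-scale motion
small = [(1 + 0.5 * l) * math.sin(2 * math.pi * 200 * ti)   # small scales whose
         for ti, l in zip(t, large)]                        # amplitude it modulates
signal = [l + s for l, s in zip(large, small)]

# Decompose: low-pass for the large scales, residual for the small scales,
# then correlate the rectified-and-smoothed small-scale envelope with them.
low = moving_average(signal, 201)
small_part = [x - y for x, y in zip(signal, low)]
envelope = moving_average([abs(x) for x in small_part], 201)
r = correlation(low, envelope)   # strongly positive when modulation is present
```

A correlation near zero on the same decomposition would indicate no modulating influence; the paper's analysis is of course far more careful about scale separation and filter choice.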

  9. Generation of large-scale magnetic fields by small-scale dynamo in shear flows

    NASA Astrophysics Data System (ADS)

    Squire, Jonathan; Bhattacharjee, Amitava

    2015-11-01

    A new mechanism for turbulent mean-field dynamo is proposed, in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the ``shear-current'' effect. The dynamo is studied using a variety of computational and analytic techniques, both when the magnetic fluctuations arise self-consistently through the small-scale dynamo and in lower Reynolds number regimes. Given the inevitable existence of non-helical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help to explain generation of large-scale magnetic fields across a wide range of astrophysical objects. This work was supported by a Procter Fellowship at Princeton University, and the US Department of Energy Grant DE-AC02-09-CH11466.

  10. PKI security in large-scale healthcare networks.

    PubMed

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years, many PKI (Public Key Infrastructure) schemes have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKI infrastructures face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI infrastructure addresses the trust issues that arise in a large-scale healthcare network, including multi-domain PKI infrastructures.

  11. Integrating cell on chip—Novel waveguide platform employing ultra-long optical paths

    NASA Astrophysics Data System (ADS)

    Fohrmann, Lena Simone; Sommer, Gerrit; Pitruzzello, Giampaolo; Krauss, Thomas F.; Petrov, Alexander Yu.; Eich, Manfred

    2017-09-01

    Optical waveguides are the most fundamental building blocks of integrated optical circuits. They are extremely well understood, yet there is still room for surprises. Here, we introduce a novel 2D waveguide platform which affords a strong interaction of the evanescent tail of a guided optical wave with an external medium while only employing a very small geometrical footprint. The key feature of the platform is its ability to integrate the ultra-long path lengths by combining low propagation losses in a silicon slab with multiple reflections of the guided wave from photonic crystal (PhC) mirrors. With a reflectivity of 99.1% of our tailored PhC-mirrors, we achieve interaction paths of 25 cm within an area of less than 10 mm2. This corresponds to 0.17 dB/cm effective propagation which is much lower than the state-of-the-art loss of approximately 1 dB/cm of single mode silicon channel waveguides. In contrast to conventional waveguides, our 2D-approach leads to a decay of the guided wave power only inversely proportional to the optical path length. This entirely different characteristic is the major advantage of the 2D integrating cell waveguide platform over the conventional channel waveguide concepts that obey the Beer-Lambert law.
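The loss figures quoted above can be sanity-checked numerically. In the sketch below, the mirror reflectivity (99.1%), total path (25 cm), and the 1 dB/cm channel-waveguide comparison come from the abstract, while the per-bounce slab length and the intrinsic slab propagation loss are assumed values chosen only for illustration:

```python
import math

# Figures from the abstract; cavity length per bounce and slab loss are assumptions.
mirror_R = 0.991        # PhC mirror reflectivity
path_cm = 25.0          # total interaction path length
cavity_cm = 0.3         # assumed slab length traversed between mirror bounces
slab_db_per_cm = 0.06   # assumed intrinsic slab propagation loss

bounces = path_cm / cavity_cm                      # ~83 reflections (approximate)
mirror_loss_db = -10 * math.log10(mirror_R) * bounces
slab_loss_db = slab_db_per_cm * path_cm
effective_db_per_cm = (mirror_loss_db + slab_loss_db) / path_cm  # ~0.2 dB/cm

# Conventional channel waveguide at ~1 dB/cm over the same 25 cm path
channel_loss_db = 1.0 * path_cm
channel_transmission = 10 ** (-channel_loss_db / 10)
cell_transmission = 10 ** (-(mirror_loss_db + slab_loss_db) / 10)
```

Under these assumptions the integrating cell retains roughly a third of the power over 25 cm, while the Beer-Lambert decay of the 1 dB/cm channel waveguide leaves well under one percent, consistent with the advantage the abstract claims.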

  12. An integrated approach for monitoring efficiency and investments of activated sludge-based wastewater treatment plants at large spatial scale.

    PubMed

    De Gisi, Sabino; Sabia, Gianpaolo; Casella, Patrizia; Farina, Roberto

    2015-08-01

    WISE, the Water Information System for Europe, is the web portal of the European Commission that disseminates the quality state of receiving water bodies and the efficiency of municipal wastewater treatment plants (WWTPs), in order to monitor advances in the application of both the Water Framework Directive (WFD) and the Urban Wastewater Treatment Directive (UWWTD). With the intention of developing WISE applications, the aim of this work was to define and apply an integrated approach capable of monitoring the efficiency and investments of activated sludge-based WWTPs located in a large spatial area, providing the following outcomes useful to decision-makers: (i) the identification of critical facilities and their critical processes by means of a Performance Assessment System (PAS); (ii) the choice of the most suitable upgrading actions, through a scenario analysis; (iii) the assessment of the investment costs to upgrade the critical WWTPs; and (iv) the prioritization of the critical facilities by means of a multi-criteria approach that includes stakeholder involvement, along with the integration of technical, environmental, economic and health aspects. The implementation of the proposed approach on a large number of municipal WWTPs highlighted how the PAS was able to identify critical processes, with particular effectiveness in identifying critical nutrient-removal processes. In addition, a simplified approach that considers the costs of a basic configuration and those for WWTP integration made it possible to link the identified critical processes to investment costs. Finally, the questionnaire for the acquisition of data (such as that provided by the Italian Institute of Statistics), the PAS defined and the database on costs, if properly adapted, may allow the extension of the integrated approach to an EU scale, providing useful information to water utilities as well as institutions. Copyright © 2015 Elsevier

  13. Development of an ultra-high temperature infrared scene projector at Santa Barbara Infrared Inc.

    NASA Astrophysics Data System (ADS)

    Franks, Greg; Laveigne, Joe; Danielson, Tom; McHugh, Steve; Lannon, John; Goodwin, Scott

    2015-05-01

    The rapid development of very-large format infrared detector arrays has challenged the IR scene projector community to develop correspondingly larger-format infrared emitter arrays to support the testing needs of systems incorporating these detectors. As with most integrated circuits, fabrication yields for the read-in integrated circuit (RIIC) that drives the emitter pixel array are expected to drop dramatically with increasing size, making monolithic RIICs larger than the current 1024x1024 format impractical and unaffordable. Additionally, many scene projector users require much higher simulated temperatures than current technology can generate to fully evaluate the performance of their systems and associated processing algorithms. Under the Ultra High Temperature (UHT) development program, Santa Barbara Infrared Inc. (SBIR) is developing a new infrared scene projector architecture capable of producing both very large format (>1024x1024) resistive emitter arrays and improved emitter pixel technology capable of simulating very high apparent temperatures. During an earlier phase of the program, SBIR demonstrated materials with MWIR apparent temperatures in excess of 1000K. New emitter materials have subsequently been selected to produce pixels that achieve even higher apparent temperatures. Test results from pixels fabricated using the new material set will be presented and discussed. Also in development under the same UHT program is a 'scalable' RIIC that will be used to drive the high temperature pixels. This RIIC will utilize through-silicon vias (TSVs) and quilt packaging (QP) technologies to allow seamless tiling of multiple chips to fabricate very large arrays, and thus overcome the inherent yield limitations of very-large-scale integrated circuits. Current status of the RIIC development effort will also be presented.

  14. Large-scale velocities and primordial non-Gaussianity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Fabian

    2010-09-15

    We study the peculiar velocities of density peaks in the presence of primordial non-Gaussianity. Rare, high-density peaks in the initial density field can be identified with tracers such as galaxies and clusters in the evolved matter distribution. The distribution of relative velocities of peaks is derived in the large-scale limit using two different approaches based on a local biasing scheme. Both approaches agree, and show that halos still stream with the dark matter locally as well as statistically, i.e. they do not acquire a velocity bias. Nonetheless, even a moderate degree of (not necessarily local) non-Gaussianity induces a significant skewness (~0.1-0.2) in the relative velocity distribution, making it a potentially interesting probe of non-Gaussianity on intermediate to large scales. We also study two-point correlations in redshift space. The well-known Kaiser formula is still a good approximation on large scales, if the Gaussian halo bias is replaced with its (scale-dependent) non-Gaussian generalization. However, there are additional terms not encompassed by this simple formula which become relevant on smaller scales (k ≳ 0.01 h/Mpc). Depending on the allowed level of non-Gaussianity, these could be of relevance for future large spectroscopic surveys.

  15. Ultra Reliability Workshop Introduction

    NASA Technical Reports Server (NTRS)

    Shapiro, Andrew A.

    2006-01-01

    This plan is the accumulation of substantial work by a large number of individuals. The Ultra-Reliability team consists of representatives from each center who have agreed to champion the program and be the focal point for their center. A number of individuals from NASA, government agencies (including the military), universities, industry and non-governmental organizations also contributed significantly to this effort. Most of their names may be found on the Ultra-Reliability PBMA website.

  16. Active Self-Testing Noise Measurement Sensors for Large-Scale Environmental Sensor Networks

    PubMed Central

    Domínguez, Federico; Cuong, Nguyen The; Reinoso, Felipe; Touhafi, Abdellah; Steenhaut, Kris

    2013-01-01

    Large-scale noise pollution sensor networks consist of hundreds of spatially distributed microphones that measure environmental noise. These networks provide historical and real-time environmental data to citizens and decision makers and are therefore a key technology to steer environmental policy. However, the high cost of certified environmental microphone sensors renders large-scale environmental networks prohibitively expensive. Several environmental network projects have started using off-the-shelf low-cost microphone sensors to reduce their costs, but these sensors have higher failure rates and produce lower quality data. To offset this disadvantage, we developed a low-cost noise sensor that actively checks its condition and, indirectly, the integrity of the data it produces. The main design concept is to embed a 13 mm speaker in the noise sensor casing and, by regularly scheduling a frequency sweep, estimate the evolution of the microphone's frequency response over time. This paper presents our noise sensor's hardware and software design together with the results of a test deployment in a large-scale environmental network in Belgium. Our middle-range-value sensor (around €50) effectively detected all experienced malfunctions, in laboratory tests and outdoor deployments, with a few false positives. Future improvements could further lower the cost of our sensor below €10. PMID:24351634
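The self-test principle described above (drive the embedded speaker through a frequency sweep and track drift in the microphone's frequency response) can be sketched as follows; the function names, tolerance, and fault model are hypothetical, not the paper's implementation:

```python
import math

def sweep_response(mic_gain, freqs_hz):
    """Hypothetical measurement: play a tone per sweep frequency through the
    embedded speaker and record the microphone's response in dB."""
    return {f: 20 * math.log10(mic_gain(f)) for f in freqs_hz}

def self_test(baseline_db, measured_db, tol_db=3.0):
    """Pass if no band has drifted more than tol_db from the factory baseline."""
    return all(abs(measured_db[f] - baseline_db[f]) <= tol_db for f in baseline_db)

freqs = [125, 250, 500, 1000, 2000, 4000, 8000]
healthy = lambda f: 1.0                          # flat, nominal response
degraded = lambda f: 0.1 if f >= 4000 else 1.0   # high-frequency rolloff fault

baseline = sweep_response(healthy, freqs)
ok_healthy = self_test(baseline, sweep_response(healthy, freqs))
ok_degraded = self_test(baseline, sweep_response(degraded, freqs))
```

Here the simulated 20 dB high-frequency rolloff trips the test while the nominal response passes; a deployed sensor would additionally need to reject sweeps contaminated by loud ambient noise.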

  17. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    PubMed

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    Due to the coming deluge of genome data, the need to store and process large-scale genome data, provide easy access to biomedical analysis tools, and support efficient data sharing and retrieval presents significant challenges. The variability in data volume results in variable computing and storage requirements; biomedical researchers are therefore pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on the Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via the HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Large-scale regions of antimatter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  19. Large Scale Production of Densified Hydrogen Using Integrated Refrigeration and Storage

    NASA Technical Reports Server (NTRS)

    Notardonato, William U.; Swanger, Adam Michael; Jumper, Kevin M.; Fesmire, James E.; Tomsik, Thomas M.; Johnson, Wesley L.

    2017-01-01

    Recent demonstration of advanced liquid hydrogen storage techniques using Integrated Refrigeration and Storage (IRAS) technology at NASA Kennedy Space Center led to the production of large quantities of solid, densified liquid, and slush hydrogen in a 125,000 L tank. Production of densified hydrogen was performed at three different liquid levels, and LH2 temperatures were measured by twenty silicon diode temperature sensors. System energy balances and solid mass fractions are calculated. Experimental data reveal that hydrogen temperatures dropped well below the triple point during testing (by up to 1 K) and were continuing to trend downward prior to system shutdown. Sub-triple-point temperatures were seen to evolve in a time-dependent manner along the length of the horizontal, cylindrical vessel. Readings from the twenty sensors were recorded over approximately one month of testing at two different fill levels (33% and 67%). The phenomenon, observed at both fill levels, is described and explained in detail herein. The implications of using IRAS for energy storage, propellant densification, and future cryofuel systems are discussed.

  20. An Integrative Structural Health Monitoring System for the Local/Global Responses of a Large-Scale Irregular Building under Construction

    PubMed Central

    Park, Hyo Seon; Shin, Yunah; Choi, Se Woon; Kim, Yousok

    2013-01-01

    In this study, a practical and integrative SHM system was developed and applied to a large-scale irregular building under construction, where many challenging issues exist. In the proposed sensor network, customized energy-efficient wireless sensing units (sensor nodes, repeater nodes, and master nodes) were employed and comprehensive communications from the sensor node to the remote monitoring server were conducted through wireless communications. The long-term (13-month) monitoring results recorded from a large number of sensors (75 vibrating wire strain gauges, 10 inclinometers, and three laser displacement sensors) indicated that the construction event exhibiting the largest influence on structural behavior was the removal of bents that were temporarily installed to support the free end of the cantilevered members during their construction. The safety of each member could be confirmed based on the quantitative evaluation of each response. Furthermore, it was also confirmed that the relation between these responses (i.e., deflection, strain, and inclination) can provide information about the global behavior of structures induced from specific events. Analysis of the measurement results demonstrates the proposed sensor network system is capable of automatic and real-time monitoring and can be applied and utilized for both the safety evaluation and precise implementation of buildings under construction. PMID:23860317

  1. Large-scale Map of Millimeter-wavelength Hydrogen Radio Recombination Lines around a Young Massive Star Cluster

    NASA Astrophysics Data System (ADS)

    Nguyen-Luong, Q.; Anderson, L. D.; Motte, F.; Kim, Kee-Tae; Schilke, P.; Carlhoff, P.; Beuther, H.; Schneider, N.; Didelon, P.; Kramer, C.; Louvet, F.; Nony, T.; Bihr, S.; Rugel, M.; Soler, J.; Wang, Y.; Bronfman, L.; Simon, R.; Menten, K. M.; Wyrowski, F.; Walmsley, C. M.

    2017-08-01

    We report the first map of large-scale (10 pc in length) emission of millimeter-wavelength hydrogen recombination lines (mm-RRLs) toward the giant H II region around the W43-Main young massive star cluster (YMC). Our mm-RRL data come from the IRAM 30 m telescope and are analyzed together with radio continuum and cm-RRL data from the Karl G. Jansky Very Large Array and HCO+ 1-0 line emission data from the IRAM 30 m. The mm-RRLs reveal an expanding wind-blown ionized gas shell with an electron density of ~70-1500 cm^-3 driven by the WR/OB cluster, which produces a total Lyα photon flux of 1.5 × 10^50 s^-1. This shell is interacting with the dense neutral molecular gas in the W43-Main dense cloud. Combining the high spectral and angular resolution mm-RRL and cm-RRL cubes, we derive the two-dimensional relative distributions of dynamical and pressure broadening of the ionized gas emission and find that the RRL line shapes are dominated by pressure broadening (4-55 km s^-1) near the YMC and by dynamical broadening (8-36 km s^-1) near the shell's edge. Ionized gas clumps hosting ultra-compact H II regions found at the edge of the shell suggest that large-scale ionized gas motion triggers the formation of a new generation of stars near the periphery of the shell.

  2. Large-scale structure perturbation theory without losing stream crossing

    NASA Astrophysics Data System (ADS)

    McDonald, Patrick; Vlah, Zvonimir

    2018-01-01

    We suggest an approach to perturbative calculations of large-scale clustering in the Universe that includes from the start the stream crossing (multiple velocities for mass elements at a single position) that is lost in traditional calculations. Starting from a functional integral over displacement, the perturbative series expansion is in deviations from (truncated) Zel'dovich evolution, with terms that can be computed exactly even for stream-crossed displacements. We evaluate the one-loop formulas for displacement and density power spectra numerically in 1D, finding dramatic improvement in agreement with N-body simulations compared to the Zel'dovich power spectrum (which is exact in 1D up to stream crossing). Beyond 1D, our approach could represent an improvement over previous expansions even aside from the inclusion of stream crossing, but we have not investigated this numerically. In the process we show how to achieve effective-theory-like regulation of small-scale fluctuations without free parameters.
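The stream crossing the authors retain can be seen directly in the 1D Zel'dovich mapping x = q + D·ψ(q), which stops being monotonic once the Jacobian 1 + D·ψ′(q) goes negative. A stdlib-only sketch with a hypothetical single-mode displacement field (illustrating the phenomenon, not the paper's perturbative expansion):

```python
import math

def zeldovich_positions(D, n=400, L=1.0, amp=0.05):
    """Map Lagrangian coordinates q to Eulerian positions x = q + D*psi(q)
    for a single sinusoidal displacement mode psi(q) = amp*sin(2*pi*q/L)."""
    qs = [L * i / n for i in range(n)]
    return [q + D * amp * math.sin(2 * math.pi * q / L) for q in qs]

def has_stream_crossing(D, **kw):
    """Streams have crossed once the q -> x mapping stops being monotonic,
    i.e. neighboring mass elements have swapped order."""
    xs = zeldovich_positions(D, **kw)
    return any(b < a for a, b in zip(xs, xs[1:]))

# The Jacobian minimum is 1 - D*amp*2*pi/L, so crossing sets in at D ≈ 3.18 here:
early, late = has_stream_crossing(0.5), has_stream_crossing(5.0)
```

Past the crossing threshold a single Eulerian position carries multiple velocities, which is exactly the information lost when a perturbative scheme truncates at the single-stream level.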

  3. Large-scale structure perturbation theory without losing stream crossing

    DOE PAGES

    McDonald, Patrick; Vlah, Zvonimir

    2018-01-10

    Here, we suggest an approach to perturbative calculations of large-scale clustering in the Universe that includes from the start the stream crossing (multiple velocities for mass elements at a single position) that is lost in traditional calculations. Starting from a functional integral over displacement, the perturbative series expansion is in deviations from (truncated) Zel’dovich evolution, with terms that can be computed exactly even for stream-crossed displacements. We evaluate the one-loop formulas for displacement and density power spectra numerically in 1D, finding dramatic improvement in agreement with N-body simulations compared to the Zel’dovich power spectrum (which is exact in 1D up to stream crossing). Beyond 1D, our approach could represent an improvement over previous expansions even aside from the inclusion of stream crossing, but we have not investigated this numerically. In the process we show how to achieve effective-theory-like regulation of small-scale fluctuations without free parameters.

  4. Large-scale structure perturbation theory without losing stream crossing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McDonald, Patrick; Vlah, Zvonimir

    Here, we suggest an approach to perturbative calculations of large-scale clustering in the Universe that includes from the start the stream crossing (multiple velocities for mass elements at a single position) that is lost in traditional calculations. Starting from a functional integral over displacement, the perturbative series expansion is in deviations from (truncated) Zel’dovich evolution, with terms that can be computed exactly even for stream-crossed displacements. We evaluate the one-loop formulas for displacement and density power spectra numerically in 1D, finding dramatic improvement in agreement with N-body simulations compared to the Zel’dovich power spectrum (which is exact in 1D up to stream crossing). Beyond 1D, our approach could represent an improvement over previous expansions even aside from the inclusion of stream crossing, but we have not investigated this numerically. In the process we show how to achieve effective-theory-like regulation of small-scale fluctuations without free parameters.

  5. Towards large scale multi-target tracking

    NASA Astrophysics Data System (ADS)

    Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus

    2014-06-01

    Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.

  6. The Expanded Large Scale Gap Test

    DTIC Science & Technology

    1987-03-01

    NSWC TR 86-32, March 1987: "The Expanded Large Scale Gap Test" by T. P. Liddiard and D. Price, Research and Technology Department. Approved for public release. (The remainder of this record is OCR-garbled report-documentation boilerplate; the legible fragments indicate the expanded test arises to reduce the spread in the LSGT 50% gap value, the worst charges being those with the highest or lowest densities and the largest re-pressed ...)

  7. Full-band error control and crack-free surface fabrication techniques for ultra-precision fly cutting of large-aperture KDP crystals

    NASA Astrophysics Data System (ADS)

    Zhang, F. H.; Wang, S. F.; An, C. H.; Wang, J.; Xu, Q.

    2017-06-01

    Large-aperture potassium dihydrogen phosphate (KDP) crystals are widely used in the laser path of inertial confinement fusion (ICF) systems. The most common method of manufacturing half-meter KDP crystals is ultra-precision fly cutting. When processing KDP crystals by ultra-precision fly cutting, the dynamic characteristics of the fly cutting machine and fluctuations in the fly cutting environment are translated into surface errors at different spatial frequency bands. These machining errors should be suppressed effectively to guarantee that KDP crystals meet the full-band machining accuracy specified in the evaluation index. In this study, the anisotropic machinability of KDP crystals and the causes of typical surface errors in ultra-precision fly cutting of the material are investigated. The structures of the fly cutting machine and existing processing parameters are optimized to improve the machined surface quality. The findings are theoretically and practically important in the development of high-energy laser systems in China.

  8. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  9. Analysis of Radar and Optical Space Borne Data for Large Scale Topographical Mapping

    NASA Astrophysics Data System (ADS)

    Tampubolon, W.; Reinhardt, W.

    2015-03-01

    Normally, in order to provide high resolution 3 Dimension (3D) geospatial data, large scale topographical mapping needs input from conventional airborne campaigns which are in Indonesia bureaucratically complicated especially during legal administration procedures i.e. security clearance from military/defense ministry. This often causes additional time delays besides technical constraints such as weather and limited aircraft availability for airborne campaigns. Of course the geospatial data quality is an important issue for many applications. The increasing demand of geospatial data nowadays consequently requires high resolution datasets as well as a sufficient level of accuracy. Therefore an integration of different technologies is required in many cases to gain the expected result especially in the context of disaster preparedness and emergency response. Another important issue in this context is the fast delivery of relevant data which is expressed by the term "Rapid Mapping". In this paper we present first results of an on-going research to integrate different data sources like space borne radar and optical platforms. Initially the orthorectification of Very High Resolution Satellite (VHRS) imagery i.e. SPOT-6 has been done as a continuous process to the DEM generation using TerraSAR-X/TanDEM-X data. The role of Ground Control Points (GCPs) from GNSS surveys is mandatory in order to fulfil geometrical accuracy. In addition, this research aims on providing suitable processing algorithm of space borne data for large scale topographical mapping as described in section 3.2. Recently, radar space borne data has been used for the medium scale topographical mapping e.g. for 1:50.000 map scale in Indonesian territories. The goal of this on-going research is to increase the accuracy of remote sensing data by different activities, e.g. 
the integration of different data sources (optical and radar) or the usage of GCPs in both the optical and the radar satellite data.

  10. Chip Scale Atomic Resonator Frequency Stabilization System With Ultra-Low Power Consumption for Optoelectronic Oscillators.

    PubMed

    Zhao, Jianye; Zhang, Yaolin; Lu, Haoyuan; Hou, Dong; Zhang, Shuangyou; Wang, Zhong

    2016-07-01

    We present a long-term chip-scale stabilization scheme for optoelectronic oscillators (OEOs) based on a rubidium coherent population trapping (CPT) atomic resonator. By locking a single mode of an OEO to the ^85Rb 3.035-GHz CPT resonance utilizing an improved phase-locked loop (PLL) with a PID regulator, we achieved a chip-scale frequency stabilization system for the OEO. The fractional frequency stability of the stabilized OEO, measured by overlapping Allan deviation, reaches 6.2×10^-11 (1 s) and ~1.45×10^-11 (1000 s). This scheme avoids the degradation of phase-noise performance induced by the electronic connection between the OEO and the microwave reference in common injection-locking schemes. The total physical package of the stabilization system is [Formula: see text] and the total power consumption is 400 mW, which provides a chip-scale, portable frequency stabilization approach with ultra-low power consumption for OEOs.
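The overlapping Allan deviation quoted above is computed from phase (time-error) samples with the standard second-difference formula; a stdlib-only sketch on synthetic data (the sample count and noise levels are arbitrary illustrative choices):

```python
import math
import random

def overlapping_adev(phase, tau0, m):
    """Overlapping Allan deviation from phase samples x_i spaced tau0 seconds
    apart, at averaging time tau = m*tau0:
    sigma_y^2(tau) = sum (x[i+2m] - 2x[i+m] + x[i])^2 / (2*(N-2m)*tau^2)."""
    n = len(phase)
    tau = m * tau0
    diffs = [phase[i + 2 * m] - 2 * phase[i + m] + phase[i]
             for i in range(n - 2 * m)]
    avar = sum(d * d for d in diffs) / (2 * (n - 2 * m) * tau * tau)
    return math.sqrt(avar)

# A constant fractional-frequency offset gives a linear phase ramp, whose
# second differences vanish: the Allan deviation is (numerically) zero.
ramp = [1e-9 * i for i in range(1000)]
adev_ramp = overlapping_adev(ramp, tau0=1.0, m=10)

# Adding white phase noise yields a nonzero deviation.
random.seed(0)
noisy = [x + random.gauss(0, 1e-10) for x in ramp]
adev_noisy = overlapping_adev(noisy, tau0=1.0, m=10)
```

Sweeping m over decades of tau and plotting sigma_y(tau) on log-log axes produces the familiar stability curve from which figures like 6.2×10^-11 at 1 s are read off.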

  11. Ultra-fast all-optical plasmon induced transparency in a metal–insulator–metal waveguide containing two Kerr nonlinear ring resonators

    NASA Astrophysics Data System (ADS)

    Nurmohammadi, Tofiq; Abbasian, Karim; Yadipour, Reza

    2018-05-01

    In this work, an ultra-fast all-optical plasmon induced transparency based on a metal–insulator–metal nanoplasmonic waveguide with two Kerr nonlinear ring resonators is studied. Two-dimensional simulations utilizing the finite-difference time-domain method are used to show an obvious optical bistability and significant switching mechanisms of the signal light by varying the pump-light intensity. The proposed all-optical switching based on plasmon induced transparency demonstrates femtosecond-scale feedback time (90 fs), meaning ultra-fast switching can be achieved. The presented all-optical switch may have potential significant applications in integrated optical circuits.

  12. Photonic generation of ultra-wideband signals by direct current modulation on SOA section of an SOA-integrated SGDBR laser.

    PubMed

    Lv, Hui; Yu, Yonglin; Shu, Tan; Huang, Dexiu; Jiang, Shan; Barry, Liam P

    2010-03-29

    Photonic ultra-wideband (UWB) pulses are generated by direct current modulation of a semiconductor optical amplifier (SOA) section of an SOA-integrated sampled grating distributed Bragg reflector (SGDBR) laser. Modulation responses of the SOA section of the laser are first simulated with a microwave equivalent circuit model. Simulated results show a resonance behavior indicating the possibility to generate UWB signals with complex shapes in the time domain. The UWB pulse generation is then experimentally demonstrated for different selected wavelength channels with an SOA-integrated SGDBR laser.

  13. Chip Scale Ultra-Stable Clocks: Miniaturized Phonon Trap Timing Units for PNT of CubeSats

    NASA Technical Reports Server (NTRS)

    Rais-Zadeh, Mina; Altunc, Serhat; Hunter, Roger C.; Petro, Andrew

    2016-01-01

    The Chip Scale Ultra-Stable Clocks (CSUSC) project aims to provide a superior alternative to current solutions for low size, weight, and power timing devices. Currently available quartz-based clocks have problems adjusting to the high temperature and extreme acceleration found in space applications, especially when scaled down to match small spacecraft size, weight, and power requirements. The CSUSC project aims to utilize dual-mode resonators on an ovenized platform to achieve the exceptional temperature stability required for these systems. The dual-mode architecture utilizes a temperature sensitive and temperature stable mode simultaneously driven on the same device volume to eliminate ovenization error while maintaining extremely high performance. Using this technology it is possible to achieve parts-per-billion (ppb) levels of temperature stability with multiple orders of magnitude smaller size, weight, and power.

  14. On large-scale dynamo action at high magnetic Reynolds number

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cattaneo, F.; Tobias, S. M., E-mail: smt@maths.leeds.ac.uk

    2014-07-01

    We consider the generation of magnetic activity—dynamo waves—in the astrophysical limit of very large magnetic Reynolds number. We consider kinematic dynamo action for a system consisting of helical flow and large-scale shear. We demonstrate that large-scale dynamo waves persist at high Rm if the helical flow is characterized by a narrow band of spatial scales and the shear is large enough. However, for a wide band of scales the dynamo becomes small scale with a further increase of Rm, with dynamo waves re-emerging only if the shear is then increased. We show that at high Rm, the key effect of the shear is to suppress small-scale dynamo action, allowing large-scale dynamo action to be observed. We conjecture that this supports a general 'suppression principle'—large-scale dynamo action can only be observed if there is a mechanism that suppresses the small-scale fluctuations.

  15. Large-scale dynamos in rapidly rotating plane layer convection

    NASA Astrophysics Data System (ADS)

    Bushby, P. J.; Käpylä, P. J.; Masada, Y.; Brandenburg, A.; Favier, B.; Guervilly, C.; Käpylä, M. J.

    2018-05-01

    Context. Convectively driven flows play a crucial role in the dynamo processes that are responsible for producing magnetic activity in stars and planets. It is still not fully understood why many astrophysical magnetic fields have a significant large-scale component. Aims: Our aim is to investigate the dynamo properties of compressible convection in a rapidly rotating Cartesian domain, focusing upon a parameter regime in which the underlying hydrodynamic flow is known to be unstable to a large-scale vortex instability. Methods: The governing equations of three-dimensional non-linear magnetohydrodynamics (MHD) are solved numerically. Different numerical schemes are compared and we propose a possible benchmark case for other similar codes. Results: In keeping with previous related studies, we find that convection in this parameter regime can drive a large-scale dynamo. The components of the mean horizontal magnetic field oscillate, leading to a continuous overall rotation of the mean field. Whilst the large-scale vortex instability dominates the early evolution of the system, the large-scale vortex is suppressed by the magnetic field and makes a negligible contribution to the mean electromotive force that is responsible for driving the large-scale dynamo. The cycle period of the dynamo is comparable to the ohmic decay time, with longer cycles for dynamos in convective systems that are closer to onset. In these particular simulations, large-scale dynamo action is found only when vertical magnetic field boundary conditions are adopted at the upper and lower boundaries. Strongly modulated large-scale dynamos are found at higher Rayleigh numbers, with periods of reduced activity (grand minima-like events) occurring during transient phases in which the large-scale vortex temporarily re-establishes itself, before being suppressed again by the magnetic field.

  16. Tracking Architecture Based on Dual-Filter with State Feedback and Its Application in Ultra-Tight GPS/INS Integration

    PubMed Central

    Zhang, Xi; Miao, Lingjuan; Shao, Haijun

    2016-01-01

    If a Kalman Filter (KF) is applied to Global Positioning System (GPS) baseband signal preprocessing, the estimates of signal phase and frequency can have low variance, even in highly dynamic situations. This paper presents a novel preprocessing scheme based on a dual-filter structure. Compared with the traditional model utilizing a single KF, this structure avoids carrier tracking being subjected to code tracking errors. Meanwhile, as the loop filters are completely removed, state feedback values are adopted to generate local carrier and code. Although local carrier frequency has a wide fluctuation, the accuracy of Doppler shift estimation is improved. In the ultra-tight GPS/Inertial Navigation System (INS) integration, the carrier frequency derived from the external navigation information is not viewed as the local carrier frequency directly. That facilitates retaining the design principle of state feedback. However, under harsh conditions, the GPS outputs may still bear large errors which can destroy the estimation of INS errors. Thus, an innovative integrated navigation filter is constructed by modeling the non-negligible errors in the estimated Doppler shifts, to ensure INS is properly calibrated. Finally, field test and semi-physical simulation based on telemetered missile trajectory validate the effectiveness of methods proposed in this paper. PMID:27144570

  17. Tracking Architecture Based on Dual-Filter with State Feedback and Its Application in Ultra-Tight GPS/INS Integration.

    PubMed

    Zhang, Xi; Miao, Lingjuan; Shao, Haijun

    2016-05-02

    If a Kalman Filter (KF) is applied to Global Positioning System (GPS) baseband signal preprocessing, the estimates of signal phase and frequency can have low variance, even in highly dynamic situations. This paper presents a novel preprocessing scheme based on a dual-filter structure. Compared with the traditional model utilizing a single KF, this structure avoids carrier tracking being subjected to code tracking errors. Meanwhile, as the loop filters are completely removed, state feedback values are adopted to generate local carrier and code. Although local carrier frequency has a wide fluctuation, the accuracy of Doppler shift estimation is improved. In the ultra-tight GPS/Inertial Navigation System (INS) integration, the carrier frequency derived from the external navigation information is not viewed as the local carrier frequency directly. That facilitates retaining the design principle of state feedback. However, under harsh conditions, the GPS outputs may still bear large errors which can destroy the estimation of INS errors. Thus, an innovative integrated navigation filter is constructed by modeling the non-negligible errors in the estimated Doppler shifts, to ensure INS is properly calibrated. Finally, field test and semi-physical simulation based on telemetered missile trajectory validate the effectiveness of methods proposed in this paper.
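
    The preprocessing idea above, in which a Kalman filter state estimate rather than a loop filter steers the local carrier, can be illustrated with a deliberately simplified sketch. This is not the paper's dual-filter design: it is a single 2-state KF (phase, frequency) tracking noisy phase measurements, with invented noise parameters.

```python
import numpy as np

def kf_track(meas, dt, q=1e-4, r=1e-2):
    """Return the final frequency estimate from phase measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # phase advances by freq*dt
    H = np.array([[1.0, 0.0]])                 # we observe phase only
    Q = q * np.eye(2)                          # assumed process noise
    R = np.array([[r]])                        # assumed measurement noise
    x = np.zeros((2, 1))
    P = np.eye(2)
    for z in meas:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                    # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ (np.array([[z]]) - H @ x)  # update with measurement
        P = (np.eye(2) - K @ H) @ P
    return float(x[1, 0])

# Track a clean linear phase ramp whose true frequency is 2.0 rad/s.
dt = 0.01
meas = [2.0 * dt * k for k in range(500)]
print(kf_track(meas, dt))
```

    The returned frequency estimate converges to the slope of the phase ramp; in a receiver, that fed-back state would drive the local carrier generator directly.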

  18. Large-scale anisotropy of the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Silk, J.; Wilson, M. L.

    1981-01-01

    Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.

  19. Economically viable large-scale hydrogen liquefaction

    NASA Astrophysics Data System (ADS)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  20. Large-Scale Coronal Heating from the Solar Magnetic Network

    NASA Technical Reports Server (NTRS)

    Falconer, David A.; Moore, Ronald L.; Porter, Jason G.; Hathaway, David H.

    1999-01-01

    In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular. In Falconer et al. (1998, ApJ, 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. The emission of the coronal network and bright points contributes only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the large-scale corona, the supergranular and larger-scale structure that we had previously treated as a background, and that emits 95% of the total Fe XII emission. We compare the dim and bright halves of the large-scale corona and find that the bright half is 1.5 times brighter than the dim half, has an order of magnitude greater area of bright point coverage, has three times brighter coronal network, and has about 1.5 times more magnetic flux than the dim half. These results suggest that the brightness of the large-scale corona is more closely related to the large-scale total magnetic flux than to bright point activity. We conclude that in the quiet Sun: (1) magnetic flux is modulated (concentrated/diluted) on size scales larger than supergranules; (2) the large-scale enhanced magnetic flux gives an enhanced, more active, magnetic network and an increased incidence of network bright point formation; (3) the heating of the large-scale corona is dominated by more widespread, but weaker, network activity than that which heats the bright points. This work was funded by the Solar Physics Branch of NASA's Office of Space Science through the SR&T Program and the SEC Guest Investigator Program.

  1. Extending SME to Handle Large-Scale Cognitive Modeling.

    PubMed

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to the Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log n); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.
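
    Technique (a), greedy merging, follows a pattern that can be sketched generically (this is not SME's actual implementation; it only illustrates the greedy idea: sort candidate correspondences by score, then accept each one that stays consistent, here simplified to a one-to-one mapping constraint):

```python
def greedy_merge(candidates):
    """candidates: list of (score, base_item, target_item) tuples."""
    candidates = sorted(candidates, reverse=True)  # best score first
    used_base, used_target, mapping = set(), set(), {}
    for score, b, t in candidates:
        if b not in used_base and t not in used_target:
            mapping[b] = t                         # accept this match
            used_base.add(b)
            used_target.add(t)
    return mapping

# Toy analogy: solar system -> atom, with conflicting candidate matches.
cands = [(0.9, "sun", "nucleus"), (0.8, "planet", "electron"),
         (0.5, "sun", "electron"), (0.3, "planet", "nucleus")]
print(greedy_merge(cands))  # {'sun': 'nucleus', 'planet': 'electron'}
```

    Sorting the candidate matches dominates the cost, which is consistent with the polynomial bound the abstract states, though SME's real consistency test involves structural constraints rather than simple one-to-one exclusion.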

  2. Large- and Very-Large-Scale Motions in Katabatic Flows Over Steep Slopes

    NASA Astrophysics Data System (ADS)

    Giometto, M. G.; Fang, J.; Salesky, S.; Parlange, M. B.

    2016-12-01

    Evidence of large- and very-large-scale motions populating the boundary layer in katabatic flows over steep slopes is presented via direct numerical simulations (DNSs). DNSs are performed at a modified Reynolds number (Rem = 967), considering four sloping angles (α = 60°, 70°, 80° and 90°). Large coherent structures prove to be strongly dependent on the inclination of the underlying surface. Spectra and co-spectra consistently show signatures of large-scale motions (LSMs), with streamwise extension on the order of the boundary layer thickness. A second low-wavenumber mode characterizes pre-multiplied spectra and co-spectra when the slope angle is below 70°, indicative of very-large-scale motions (VLSMs). In addition, conditional sampling and averaging shows how LSMs and VLSMs are induced by counter-rotating roll modes, in agreement with findings from canonical wall-bounded flows. VLSMs contribute to the stream-wise velocity variance and shear stress in the above-jet regions up to 30% and 45% respectively, whereas both LSMs and VLSMs are inactive in the near-wall regions.

  3. Implementation of a large-scale hospital information infrastructure for multi-unit health-care services.

    PubMed

    Yoo, Sun K; Kim, Dong Keun; Kim, Jung C; Park, Youn Jung; Chang, Byung Chul

    2008-01-01

    With the increase in demand for high quality medical services, the need for an innovative hospital information system has become essential. An improved system has been implemented in all hospital units of the Yonsei University Health System. Interoperability between multi-units required appropriate hardware infrastructure and software architecture. This large-scale hospital information system encompassed PACS (Picture Archiving and Communications Systems), EMR (Electronic Medical Records) and ERP (Enterprise Resource Planning). It involved two tertiary hospitals and 50 community hospitals. The monthly data production rate by the integrated hospital information system is about 1.8 TByte and the total quantity of data produced so far is about 60 TByte. Large scale information exchange and sharing will be particularly useful for telemedicine applications.

  4. Multi-GNSS PPP-RTK: From Large- to Small-Scale Networks.

    PubMed

    Nadarajah, Nandakumaran; Khodabandeh, Amir; Wang, Kan; Choudhury, Mazher; Teunissen, Peter J G

    2018-04-03

    Precise point positioning (PPP) and its integer ambiguity resolution-enabled variant, PPP-RTK (real-time kinematic), can benefit enormously from the integration of multiple global navigation satellite systems (GNSS). In such a multi-GNSS landscape, the positioning convergence time is expected to be reduced considerably as compared to the one obtained by a single-GNSS setup. It is therefore the goal of the present contribution to provide numerical insights into the role taken by multi-GNSS integration in delivering fast and high-precision positioning solutions (sub-decimeter and centimeter levels) using PPP-RTK. To that end, we employ the Curtin PPP-RTK platform and process data-sets of GPS, the BeiDou Navigation Satellite System (BDS) and Galileo in stand-alone and combined forms. The data-sets are collected by various receiver types, ranging from high-end multi-frequency geodetic receivers to low-cost single-frequency mass-market receivers. The corresponding stations form a large-scale (Australia-wide) network as well as a small-scale network with inter-station distances of less than 30 km. In the case of the Australia-wide GPS-only ambiguity-float setup, 90% of the horizontal positioning errors (kinematic mode) are shown to become less than five centimeters after 103 min. The stated required time is reduced to 66 min for the corresponding GPS + BDS + Galileo setup. The time is further reduced to 15 min by applying single-receiver ambiguity resolution. The outcomes are supported by the positioning results of the small-scale network.

  5. Generation of large-scale density fluctuations by buoyancy

    NASA Technical Reports Server (NTRS)

    Chasnov, J. R.; Rogallo, R. S.

    1990-01-01

    The generation of fluid motion from a state of rest by buoyancy forces acting on a homogeneous isotropic small-scale density field is considered. Nonlinear interactions between the generated fluid motion and the initial isotropic small-scale density field are found to create an anisotropic large-scale density field with spectrum proportional to κ⁴. This large-scale density field is observed to result in an increasing Reynolds number of the fluid turbulence in its final period of decay.

  6. Ultra-Long-Distance Hybrid BOTDA/Φ-OTDR

    PubMed Central

    Fu, Yun; Zhu, Richeng; Xue, Naitian; Lu, Chongyu; Zhang, Bin; Yang, Le; Atubga, David; Rao, Yunjiang

    2018-01-01

    In the distributed optical fiber sensing (DOFS) domain, simultaneous measurement of vibration and temperature/strain based on Rayleigh scattering and Brillouin scattering in fiber could have wide applications. However, there are certain challenges for the case of ultra-long sensing range, including the interplay of different scattering mechanisms, the interaction of two types of sensing signals, and the competition for pump power. In this paper, a hybrid DOFS system, which can simultaneously measure temperature/strain and vibration over 150 km, is elaborately designed via integrating the Brillouin optical time-domain analyzer (BOTDA) and phase-sensitive optical time-domain reflectometry (Φ-OTDR). Distributed Raman and Brillouin amplifications, frequency division multiplexing (FDM), wavelength division multiplexing (WDM), and time division multiplexing (TDM) are delicately fused to accommodate ultra-long-distance BOTDA and Φ-OTDR. Consequently, the sensing range of the hybrid system is 150.62 km, and the spatial resolutions of BOTDA and Φ-OTDR are 9 m and 30 m, respectively. The measurement uncertainty of the BOTDA is ±0.82 MHz. To the best of our knowledge, this is the first time that such a hybrid DOFS has been realized at a hundred-kilometer length scale. PMID:29587407

  7. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2009-09-30

    Modeling of Burning Emissions (FLAMBE) project, and other related parameters. Our plans to embed NAAPS inside NOGAPS may need to be put on hold... AOD, FLAMBE and FAROP at FNMOC are supported by 6.4 funding from PMW-120 for "Large-scale Atmospheric Models" and "Small-scale Atmospheric Models".

  8. Mnemonic discrimination relates to perforant path integrity: An ultra-high resolution diffusion tensor imaging study.

    PubMed

    Bennett, Ilana J; Stark, Craig E L

    2016-03-01

    Pattern separation describes the orthogonalization of similar inputs into unique, non-overlapping representations. This computational process is thought to serve memory by reducing interference and to be mediated by the dentate gyrus of the hippocampus. Using ultra-high in-plane resolution diffusion tensor imaging (hrDTI) in older adults, we previously demonstrated that integrity of the perforant path, which provides input to the dentate gyrus from entorhinal cortex, was associated with mnemonic discrimination, a behavioral outcome designed to load on pattern separation. The current hrDTI study assessed the specificity of this perforant path integrity-mnemonic discrimination relationship relative to other cognitive constructs (identified using a factor analysis) and white matter tracts (hippocampal cingulum, fornix, corpus callosum) in 112 healthy adults (20-87 years). Results revealed age-related declines in integrity of the perforant path and other medial temporal lobe (MTL) tracts (hippocampal cingulum, fornix). Controlling for global effects of brain aging, perforant path integrity related only to the factor that captured mnemonic discrimination performance. Comparable integrity-mnemonic discrimination relationships were also observed for the hippocampal cingulum and fornix. Thus, whereas perforant path integrity specifically relates to mnemonic discrimination, mnemonic discrimination may be mediated by a broader MTL network. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Ultra-wideband Ge-rich silicon germanium integrated Mach-Zehnder interferometer for mid-infrared spectroscopy.

    PubMed

    Vakarin, Vladyslav; Ramírez, Joan Manel; Frigerio, Jacopo; Ballabio, Andrea; Le Roux, Xavier; Liu, Qiankun; Bouville, David; Vivien, Laurent; Isella, Giovanni; Marris-Morini, Delphine

    2017-09-01

    This Letter explores the use of Ge-rich Si0.2Ge0.8 waveguides on a graded Si1-xGex substrate for the demonstration of ultra-wideband photonic integrated circuits in the mid-infrared (mid-IR) wavelength range. We designed, fabricated, and characterized broadband Mach-Zehnder interferometers fully covering a range of 3 μm in the mid-IR band. The fabricated devices operate indistinctly in quasi-TE and quasi-TM polarizations, and have an extinction ratio higher than 10 dB over the entire operating wavelength range. The obtained results are in good correlation with theoretical predictions, while numerical simulations indicate that the device bandwidth can reach one octave with low additional losses. This Letter paves the way for further realization of mid-IR integrated spectrometers using low-index-contrast Si1-xGex waveguides with high germanium concentration.

  10. Information Tailoring Enhancements for Large-Scale Social Data

    DTIC Science & Technology

    2016-06-15

    Intelligent Automation Incorporated, Progress Report No. 3: Information Tailoring Enhancements for Large-Scale Social Data. Work performed within this reporting period includes enhanced Named Entity Recognition (NER).

  11. An Integrated Scale for Measuring an Organizational Learning System

    ERIC Educational Resources Information Center

    Jyothibabu, C.; Farooq, Ayesha; Pradhan, Bibhuti Bhusan

    2010-01-01

    Purpose: The purpose of this paper is to develop an integrated measurement scale for an organizational learning system by capturing the learning enablers, learning results and performance outcome in an organization. Design/methodology/approach: A new measurement scale was developed by integrating and modifying two existing scales, identified…

  12. SiGN: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .

  13. Off the scale: a new species of fish-scale gecko (Squamata: Gekkonidae: Geckolepis) with exceptionally large scales

    PubMed Central

    Daza, Juan D.; Köhler, Jörn; Vences, Miguel; Glaw, Frank

    2017-01-01

    The gecko genus Geckolepis, endemic to Madagascar and the Comoro archipelago, is taxonomically challenging. One reason is its members' ability to autotomize a large portion of their scales when grasped or touched, most likely to escape predation. Based on an integrative taxonomic approach including external morphology, morphometrics, genetics, pholidosis, and osteology, we here describe the first new species from this genus in 75 years: Geckolepis megalepis sp. nov. from the limestone karst of Ankarana in northern Madagascar. The new species has the largest known body scales of any gecko (both relatively and absolutely), which come off with exceptional ease. We provide a detailed description of the skeleton of the genus Geckolepis based on micro-Computed Tomography (micro-CT) analysis of the new species, the holotype of G. maculata, the recently resurrected G. humbloti, and a specimen belonging to an operational taxonomic unit (OTU) recently suggested to represent G. maculata. Geckolepis is characterized by highly mineralized, imbricated scales, paired frontals, and unfused subolfactory processes of the frontals, among other features. We identify diagnostic characters in the osteology of these geckos that help define our new species and show that the OTU assigned to G. maculata is probably not conspecific with it, leaving the taxonomic identity of this species unclear. We discuss possible reasons for the extremely enlarged scales of G. megalepis in the context of an anti-predator defence mechanism, and the future of Geckolepis taxonomy. PMID:28194313

  14. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  15. Computation of Large-Scale Structure Jet Noise Sources With Weak Nonlinear Effects Using Linear Euler

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Hixon, Ray; Mankbadi, Reda R.

    2003-01-01

    An approximate technique is presented for the prediction of the large-scale turbulent structure sound source in a supersonic jet. A linearized Euler equations code is used to solve for the flow disturbances within and near a jet with a given mean flow. Assuming a normal mode composition for the wave-like disturbances, the linear radial profiles are used in an integration of the Navier-Stokes equations. This results in a set of ordinary differential equations representing the weakly nonlinear self-interactions of the modes along with their interaction with the mean flow. Solutions are then used to correct the amplitude of the disturbances that represent the source of large-scale turbulent structure sound in the jet.

  16. Evolution of scaling emergence in large-scale spatial epidemic spreading.

    PubMed

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Zipf's law and Heaps' law are two representative scaling concepts, which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has hardly been clarified. In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at larger times, before reaching a stable state in which Heaps' law still holds while strict Zipf's law disappears. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results of pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model for simulating pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analyses of large-scale spatial epidemic spreading help in understanding the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of the epidemic spread indicates the significance of performing targeted containment strategies at the early stage of a pandemic disease.
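
    The two scalings discussed above can be reproduced with a toy simulation (all parameters invented, unrelated to the epidemic data in the paper): tokens are drawn from a Zipf-like 1/rank distribution and the vocabulary growth V(n) is checked for Heaps-type sublinearity, V(n) ~ n^β with β < 1.

```python
import math
import random

random.seed(1)
R = 5000                                       # size of underlying lexicon
weights = [1.0 / r for r in range(1, R + 1)]   # Zipf: frequency ~ 1/rank
tokens = random.choices(range(R), weights=weights, k=20000)

seen, growth = set(), []
for t in tokens:
    seen.add(t)
    growth.append(len(seen))                   # V(n): vocabulary after n tokens

# Heaps' law: V(n) grows sublinearly; estimate beta from two points.
beta = math.log(growth[-1] / growth[199]) / math.log(len(tokens) / 200)
print(growth[-1] < len(tokens), 0 < beta < 1)  # sublinear growth, 0 < beta < 1
```

    The sublinear exponent emerges directly from the heavy-tailed rank-frequency distribution, illustrating the dependence between the two laws that the article analyzes.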

  17. Large-Scale Cubic-Scaling Random Phase Approximation Correlation Energy Calculations Using a Gaussian Basis.

    PubMed

    Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg

    2016-12-13

    We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring [Formula: see text] operations and [Formula: see text] memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key to the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.

  18. Large-Scale Structure and Hyperuniformity of Amorphous Ices

    NASA Astrophysics Data System (ADS)

    Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto

    2017-09-01

    We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.

  19. An ultra scale-down approach to study the interaction of fermentation, homogenization, and centrifugation for antibody fragment recovery from rec E. coli.

    PubMed

    Li, Qiang; Mannall, Gareth J; Ali, Shaukat; Hoare, Mike

    2013-08-01

    Escherichia coli is frequently used as a microbial host to express recombinant proteins but it lacks the ability to secrete proteins into medium. One option for protein release is to use high-pressure homogenization followed by a centrifugation step to remove cell debris. While this does not give selective release of proteins in the periplasmic space, it does provide a robust process. An ultra scale-down (USD) approach based on focused acoustics is described to study rec E. coli cell disruption by high-pressure homogenization for recovery of an antibody fragment (Fab') and the impact of fermentation harvest time. This approach is followed by microwell-based USD centrifugation to study the removal of the resultant cell debris. Successful verification of this USD approach is achieved using pilot scale high-pressure homogenization and pilot scale, continuous flow, disc stack centrifugation comparing performance parameters such as the fraction of Fab' release, cell debris size distribution and the carryover of cell debris fine particles in the supernatant. The integration of fermentation and primary recovery stages is examined using USD monitoring of different phases of cell growth. Increasing susceptibility of the cells to disruption is observed with time following induction. For a given recovery process this results in a higher fraction of product release and a greater proportion of fine cell debris particles that are difficult to remove by centrifugation. Such observations are confirmed at pilot scale. Copyright © 2013 Wiley Periodicals, Inc.

  20. Low-loss integrated electrical surface plasmon source with ultra-smooth metal film fabricated by polymethyl methacrylate 'bond and peel' method.

    PubMed

    Liu, Wenjie; Hu, Xiaolong; Zou, Qiushun; Wu, Shaoying; Jin, Chongjun

    2018-06-15

    External light sources are mostly employed to functionalize plasmonic components, resulting in a bulky footprint. Electrically driven integrated plasmonic devices, combining ultra-compact critical feature sizes with extremely high transmission speeds and low power consumption, can link plasmonics with the present-day electronic world. In an effort to achieve this prospect, suppressing the losses in plasmonic devices becomes a pressing issue. In this work, we developed a novel polymethyl methacrylate 'bond and peel' method to fabricate metal films with sub-nanometer smooth surfaces on semiconductor wafers. Based on this method, we further fabricated a compact plasmonic source containing a metal-insulator-metal (MIM) waveguide with an ultra-smooth metal surface on a GaAs-based light-emitting diode wafer. An increase in the propagation length of the SPP mode by a factor of 2.95 was achieved as compared with a conventional device containing a relatively rough metal surface. Numerical calculations further confirmed that the propagation length is comparable to the theoretical prediction for an MIM waveguide with perfectly smooth metal surfaces. This method facilitates low-loss, highly integrated, electrically driven plasmonic devices, thus providing an immediate opportunity for the practical application of on-chip integrated plasmonic circuits.

  1. Chip-scale sensor system integration for portable health monitoring.

    PubMed

    Jokerst, Nan M; Brooke, Martin A; Cho, Sang-Yeon; Shang, Allan B

    2007-12-01

    The revolution in integrated circuits over the past 50 years has produced inexpensive computing and communications systems that are powerful and portable. The technologies for these integrated chip-scale sensing systems, which will be miniature, lightweight, and portable, are emerging with the integration of sensors with electronics, optical systems, micromachines, microfluidics, and the integration of chemical and biological materials (soft/wet material integration with traditional dry/hard semiconductor materials). Hence, we stand at a threshold for health monitoring technology that promises to provide wearable biochemical sensing systems that are comfortable, unobtrusive, wireless, and battery-operated, yet that continuously monitor health status, and can transmit compressed data signals at regular intervals, or alarm conditions immediately. In this paper, we explore recent results in chip-scale sensor integration technology for health monitoring. The development of inexpensive chip-scale biochemical optical sensors, such as microresonators, that are customizable for high sensitivity coupled with rapid prototyping will be discussed. Ground-breaking work in the integration of chip-scale optical systems to support these optical sensors will be highlighted, and the development of inexpensive Si complementary metal-oxide semiconductor circuitry (which makes up the vast majority of computational systems today) for signal processing and wireless communication with local receivers that lie directly on the chip-scale sensor head itself will be examined.

  2. Large Scale Cross Drive Correlation Of Digital Media

    DTIC Science & Technology

    2016-03-01

    NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Thesis: Large Scale Cross-Drive Correlation of Digital Media, by Joseph Van Bruaene, March 2016. …the ability to make large scale cross-drive correlations among a large corpus of digital media becomes increasingly important. We propose a…

  3. Botswana water and surface energy balance research program. Part 2: Large scale moisture and passive microwaves

    NASA Technical Reports Server (NTRS)

    Vandegriend, A. A.; Owe, M.; Chang, A. T. C.

    1992-01-01

    The Botswana water and surface energy balance research program was developed to study and evaluate the integrated use of multispectral satellite remote sensing for monitoring the hydrological status of the Earth's surface. The research program consisted of two major, mutually related components: a surface energy balance modeling component, built around an extensive field campaign; and a passive microwave research component, which consisted of a retrospective study of large scale moisture conditions and Nimbus scanning multichannel microwave radiometer signatures. The integrated approach of both components is explained in general, and activities performed within the passive microwave research component are summarized. The microwave theory is discussed taking into account: soil dielectric constant, emissivity, soil roughness effects, vegetation effects, optical depth, single scattering albedo, and wavelength effects. The study site is described. The soil moisture data and its processing are considered. The relation between observed large scale soil moisture and normalized brightness temperatures is discussed. Vegetation characteristics and inverse modeling of soil emissivity are considered.
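
    The simplest piece of the microwave theory listed above, the link between soil dielectric constant and emissivity, can be sketched for a smooth surface at nadir via the Fresnel reflectivity (the dielectric values below are illustrative assumptions, not data from the study):

```python
import numpy as np

def nadir_emissivity(eps_r: complex) -> float:
    """Smooth-surface emissivity at nadir from the Fresnel reflectivity:
    r = (1 - sqrt(eps)) / (1 + sqrt(eps)),  e = 1 - |r|^2."""
    n = np.sqrt(eps_r)
    r = (1 - n) / (1 + n)
    return 1 - abs(r) ** 2

# Illustrative complex dielectric constants (assumed values):
dry_soil, wet_soil = 4 + 0.5j, 20 + 3j
print(nadir_emissivity(dry_soil))  # dry soil: higher emissivity
print(nadir_emissivity(wet_soil))  # wet soil: lower emissivity
```

    The monotone drop of emissivity with moisture (via the dielectric constant) is what makes passive microwave brightness temperatures usable as a large-scale soil moisture proxy.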

  4. Proposal for a better integration of bacterial lysis into the production of plasmid DNA at large scale.

    PubMed

    O'Mahony, Kevin; Freitag, Ruth; Hilbrig, Frank; Müller, Patrick; Schumacher, Ivo

    2005-09-23

    The paper addresses the question of how to achieve bacterial lysis in large-scale plasmid DNA production processes, where conventional alkaline lysis may become awkward to handle. Bacteria were grown in shaker flasks and a bioreactor. Suboptimal growth conditions were found advantageous for stable plasmid production at high copy numbers (up to 25 mg/L could be achieved). Cells were harvested by filtration in the presence of a filter aid. A linear relationship between the biomass and the optimal filter aid concentration in terms of back pressure could be established. Bacteria-containing filter cakes were washed with isotonic buffer and lysis was achieved in situ by a two-step protocol calling for fragilisation of the cells followed by heat lysis in a suitable buffer. RNA and other soluble cell components were washed out of the cake during this step, while the plasmid DNA was retained. Afterwards, a clear lysate containing relatively pure plasmid DNA could be eluted from the cake, mostly as the desired supercoiled topoisomer, while cell debris and genomic DNA were retained. Lysis is, thus, integrated not only with cell capture but also with a significant degree of isolation/purification, as most impurities were considerably reduced during the procedure.

  5. Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams

    NASA Astrophysics Data System (ADS)

    Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping

    2018-06-01

    A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but nonhomogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach using volumetric radiative properties in the equivalent participating medium and to the direct discrete-scale approach employing the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used for computations of the absorbed radiative fluxes and the apparent radiative behaviors of metal foams. The results obtained by the three approaches are in tenable agreement. The scale-coupled approach is fully validated in calculating the apparent radiative behaviors of metal foams composed of very absorbing to very reflective struts and those composed of very rough to very smooth struts. This new approach leads to a reduction in computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on the local geometry-dependent features and, at the same time, the equivalent features in an integrated simulation. This new approach promises to combine the advantages of the continuous-scale approach (rapid calculations) and the direct discrete-scale approach (accurate prediction of local radiative quantities).
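
    The Monte Carlo ray-tracing idea behind the absorbed-flux computations can be sketched in its simplest setting, a homogeneous, purely absorbing slab rather than a foam geometry (all parameters are illustrative; this is not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(42)

# Rays enter a purely absorbing homogeneous slab of thickness L with
# absorption coefficient kappa (optical thickness tau = kappa * L).
kappa, L, n_rays = 2.0, 1.0, 200_000

# Beer-Lambert sampling: free path s ~ Exponential(mean 1/kappa);
# a ray is absorbed inside the slab if s < L.
paths = rng.exponential(scale=1.0 / kappa, size=n_rays)
absorbed_mc = np.mean(paths < L)

absorbed_exact = 1.0 - np.exp(-kappa * L)   # analytic absorptance
print(f"MC:    {absorbed_mc:.4f}")
print(f"Exact: {absorbed_exact:.4f}")
```

    The same path-sampling kernel, combined with scattering events and the real strut geometry, underlies the discrete-scale and scale-coupled computations described in the abstract.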

  6. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.
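
    The two-point correlation function mentioned above can be estimated in its simplest "natural" form, xi(r) = DD/RR - 1, by brute-force pair counting; the sketch below uses a synthetic clustered catalog rather than survey data (all sizes and scales are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def pair_count(points: np.ndarray, r: float) -> int:
    """Number of distinct pairs closer than r (brute force, O(N^2))."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return int(np.count_nonzero(np.triu(d < r, k=1)))

# Toy clustered "galaxy" catalog in a unit box: 40 cluster centers,
# 10 members each with Gaussian scatter.
centers = rng.random((40, 3))
data = (centers[:, None, :] + 0.02 * rng.normal(size=(40, 10, 3))).reshape(-1, 3)
randoms = rng.random(data.shape)  # uniform comparison catalog, same size

r = 0.05
dd, rr = pair_count(data, r), pair_count(randoms, r)
xi = dd / rr - 1          # natural estimator xi(r) = DD/RR - 1
print(f"xi({r}) = {xi:.1f}")  # strongly positive for clustered points
```

    A clustered distribution yields many more close data-data pairs than random-random pairs, hence xi well above zero at small separations; for a uniform catalog xi would fluctuate around zero.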

  7. Bridging the gap between small and large scale sediment budgets? - A scaling challenge in the Upper Rhone Basin, Switzerland

    NASA Astrophysics Data System (ADS)

    Schoch, Anna; Blöthe, Jan; Hoffmann, Thomas; Schrott, Lothar

    2016-04-01

    -regions cover all three litho-tectonic units of the URB (Helvetic nappes, Penninic nappes, External massifs) and different catchment sizes to capture the inherent variability. Different parameters characterizing topography, surface characteristics, and vegetation cover are analyzed for each storage type. The data is then used in geostatistical models (PCA, stepwise logistic regression) to predict the spatial distribution of sediment storage for the whole URB. We further conduct morphometric analyses of the URB to gain information on the varying degree of glacial imprint and postglacial landscape evolution and their control on the spatial distribution of sediment storage in a large scale drainage basin. Geophysical methods (ground penetrating radar and electrical resistivity tomography) are applied on different sediment storage types on the local scale to estimate mean thicknesses. Additional data from published studies are used to complement our dataset. We integrate the local data in the statistical model on the spatial distribution of sediment storages for the whole URB. Hence, we can extrapolate the stored sediment volumes to the regional scale in order to bridge the gap between small and large scale studies.
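
    The logistic-regression step of the storage-prediction workflow can be sketched on synthetic data (the predictor names, scalings, and coefficients below are hypothetical, not the study's parameters):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic training set (illustrative only, not URB data): predict
# presence/absence of a sediment storage type from two terrain predictors.
n = 1000
slope = rng.uniform(0, 45, n)        # local slope [degrees]
elev = rng.uniform(500, 3000, n)     # elevation [m]
X = np.column_stack([np.ones(n), slope / 45, elev / 3000])

# Hypothetical "true" model used to simulate labels: storage is likelier
# on gentle, low-lying terrain.
logits_true = 2.0 - 3.0 * X[:, 1] - 2.0 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits_true))).astype(float)

# Logistic regression fitted by plain gradient descent on the
# negative log-likelihood.
w = np.zeros(3)
for _ in range(20_000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 1.0 * X.T @ (p - y) / n

print("fitted coefficients:", np.round(w, 2))  # signs match the simulator
```

    In the study's setting, the fitted probabilities would then be mapped across the basin to extrapolate locally measured storage volumes to the regional scale.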

  8. Research on the impacts of large-scale electric vehicles integration into power grid

    NASA Astrophysics Data System (ADS)

    Su, Chuankun; Zhang, Jian

    2018-06-01

    Because of their distinctive energy supply mode, electric vehicles can improve the efficiency of energy utilization and reduce environmental pollution, and they are therefore receiving more and more attention. However, the charging behavior of electric vehicles is random and intermittent. If electric vehicles charge in an uncoordinated manner on a large scale, they place great pressure on the structure and operation of the power grid and affect its safe and economical operation. With the development of vehicle-to-grid (V2G) technology, the study of the charging and discharging characteristics of electric vehicles is of great significance for improving the safe operation of the power grid and the efficiency of energy utilization.
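
    The pressure exerted by uncoordinated ("disordered") charging can be illustrated with a toy aggregate-load simulation (fleet size, charger rating, and plug-in time distribution are assumptions, not data from the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy fleet: 1000 EVs, 7 kW chargers, 3 h of charging each,
# over a 24 h horizon discretized in 15-min steps (96 steps).
n_ev, p_kw, hours, steps = 1000, 7.0, 3, 96
load = np.zeros(steps)

# Uncoordinated charging: everyone plugs in around 18:00 (step 72),
# coinciding with the evening demand peak.
starts = np.clip(rng.normal(72, 6, n_ev).astype(int), 0, steps - 1)
for s in starts:
    load[s:s + 4 * hours] += p_kw   # 12 quarter-hour steps at 7 kW

avg = n_ev * p_kw * 4 * hours / steps  # load if perfectly spread out
print(f"peak {load.max():.0f} kW vs spread-out {avg:.0f} kW")
```

    The concentrated plug-in times produce a peak several times the level that coordinated (e.g. V2G-scheduled) charging could achieve by spreading the same energy over the day.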

  9. Large-scale environments of narrow-line Seyfert 1 galaxies

    NASA Astrophysics Data System (ADS)

    Järvelä, E.; Lähteenmäki, A.; Lietzen, H.; Poudel, A.; Heinämäki, P.; Einasto, M.

    2017-09-01

    Studying the large-scale environments of narrow-line Seyfert 1 (NLS1) galaxies gives a new perspective on their properties, particularly their radio loudness. The large-scale environment is believed to have an impact on the evolution and intrinsic properties of galaxies; however, NLS1 sources have not been studied in this context before. We have a large and diverse sample of 1341 NLS1 galaxies and three separate environment data sets constructed using the Sloan Digital Sky Survey. We use various statistical methods to investigate how the properties of NLS1 galaxies are connected to the large-scale environment, and compare the large-scale environments of NLS1 galaxies with other active galactic nuclei (AGN) classes, for example, other jetted AGN and broad-line Seyfert 1 (BLS1) galaxies, to study how they are related. NLS1 galaxies reside in less dense environments than any of the comparison samples, thus confirming their young age. The average large-scale environment density and environmental distribution of NLS1 sources are clearly different compared to BLS1 galaxies; thus it is improbable that they could be the parent population of NLS1 galaxies and unified by orientation. Within the NLS1 class there is a trend of increasing radio loudness with increasing large-scale environment density, indicating that the large-scale environment affects their intrinsic properties. Our results suggest that the NLS1 class of sources is not homogeneous, and furthermore, that a considerable fraction of them are misclassified. We further support a published proposal to replace the traditional classification into radio-loud, and radio-quiet or radio-silent sources with a division into jetted and non-jetted sources.

  10. The application of liquid air energy storage for large scale long duration solutions to grid balancing

    NASA Astrophysics Data System (ADS)

    Brett, Gareth; Barnett, Matthew

    2014-12-01

    Liquid Air Energy Storage (LAES) provides large scale, long duration energy storage at the point of demand in the 5 MW/20 MWh to 100 MW/1,000 MWh range. LAES combines mature components from the industrial gas and electricity industries assembled in a novel process and is one of the few storage technologies that can be delivered at large scale, with no geographical constraints. The system uses no exotic materials or scarce resources and all major components have a proven lifetime of 25+ years. The system can also integrate low grade waste heat to increase power output. Founded in 2005, Highview Power Storage is a UK-based developer of LAES. The company has taken the concept from academic analysis, through laboratory testing, and in 2011 commissioned the world's first fully integrated system at pilot plant scale (300 kW/2.5 MWh) hosted at SSE's (Scottish & Southern Energy) 80 MW Biomass Plant in Greater London, which was partly funded by a Department of Energy and Climate Change (DECC) grant. Highview is now working with commercial customers to deploy multi-MW commercial reference plants in the UK and abroad.

  11. Ultra-Low Noise Germanium Neutrino Detection system (ULGeN).

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabrera-Palmer, Belkis; Barton, Paul

    Monitoring nuclear power plant operation by measuring the antineutrino flux has become an active research field for safeguards and non-proliferation. We describe various efforts to demonstrate the feasibility of reactor monitoring based on the detection of the Coherent Neutrino Nucleus Scattering (CNNS) process with High Purity Germanium (HPGe) technology. CNNS detection for reactor antineutrino energies requires lowering the electronic noise in low-capacitance kg-scale HPGe detectors below 100 eV, as well as stringent reduction in other particle backgrounds. Existing state-of-the-art detectors are limited to an electronic noise of 95 eV-FWHM. In this work, we employed an ultra-low capacitance point-contact detector with a commercial integrated circuit preamplifier-on-a-chip in an ultra-low vibration mechanically cooled cryostat to achieve an electronic noise of 39 eV-FWHM at 43 K. We also present the results of a background measurement campaign at the Spallation Neutron Source to select the area with sufficiently low background to allow a successful first-time measurement of the CNNS process.

  12. Double inflation - A possible resolution of the large-scale structure problem

    NASA Technical Reports Server (NTRS)

    Turner, Michael S.; Villumsen, Jens V.; Vittorio, Nicola; Silk, Joseph; Juszkiewicz, Roman

    1987-01-01

    A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Omega = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of about 100 Mpc, while the small-scale structure over less than about 10 Mpc resembles that in a low-density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations.

  13. Large-scale behaviour of local and entanglement entropy of the free Fermi gas at any temperature

    NASA Astrophysics Data System (ADS)

    Leschke, Hajo; Sobolev, Alexander V.; Spitzer, Wolfgang

    2016-07-01

    The leading asymptotic large-scale behaviour of the spatially bipartite entanglement entropy (EE) of the free Fermi gas infinitely extended in multidimensional Euclidean space at zero absolute temperature, T = 0, is by now well understood. Here, we present and discuss the first rigorous results for the corresponding EE of thermal equilibrium states at T > 0. The leading large-scale term of this thermal EE turns out to be twice the first-order finite-size correction to the infinite-volume thermal entropy (density). Not surprisingly, this correction is just the thermal entropy on the interface of the bipartition. However, it is given by a rather complicated integral derived from a semiclassical trace formula for a certain operator on the underlying one-particle Hilbert space. But in the zero-temperature limit T ↓ 0, the leading large-scale term of the thermal EE considerably simplifies and displays a ln(1/T) singularity, which one may identify with the known logarithmic enhancement at T = 0 of the so-called area-law scaling.

  14. Modeling multi-scale aerosol dynamics and micro-environmental air quality near a large highway intersection using the CTAG model.

    PubMed

    Wang, Yan Jason; Nguyen, Monica T; Steffens, Jonathan T; Tong, Zheming; Wang, Yungang; Hopke, Philip K; Zhang, K Max

    2013-01-15

    A new methodology, referred to as the multi-scale structure, integrates "tailpipe-to-road" (i.e., on-road domain) and "road-to-ambient" (i.e., near-road domain) simulations to elucidate the environmental impacts of particulate emissions from traffic sources. The multi-scale structure is implemented in the CTAG model to 1) generate process-based on-road emission rates of ultrafine particles (UFPs) by explicitly simulating the effects of exhaust properties, traffic conditions, and meteorological conditions and 2) characterize the impacts of traffic-related emissions on micro-environmental air quality near a highway intersection in Rochester, NY. The performance of CTAG, evaluated against field measurements, shows adequate agreement in capturing the dispersion of carbon monoxide (CO) and the number concentrations of UFPs in the near-road micro-environment. As a proof-of-concept case study, we also apply CTAG to separate the relative impacts of the shutdown of a large coal-fired power plant (CFPP) and the adoption of ultra-low-sulfur diesel (ULSD) on UFP concentrations in the intersection micro-environment. Although CTAG is still computationally expensive compared to the widely used parameterized dispersion models, it has the potential to advance our capability to predict the impacts of UFP emissions and the spatial/temporal variations of air pollutants in complex environments. Furthermore, for the on-road simulations, CTAG can serve as a process-based emission model; combining the on-road and near-road simulations, CTAG becomes a "plume-in-grid" model for mobile emissions. The processed emission profiles can potentially improve regional air quality and climate predictions accordingly. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Measuring the Large-scale Solar Magnetic Field

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. T.; Scherrer, P. H.; Peterson, E.; Svalgaard, L.

    2017-12-01

    The Sun's large-scale magnetic field is important for determining the global structure of the corona and for quantifying the evolution of the polar field, which is sometimes used for predicting the strength of the next solar cycle. Having confidence in the determination of the large-scale magnetic field of the Sun is difficult because the field is often near the detection limit, the various observing methods all measure something a little different, and various systematic effects can be very important. We compare resolved and unresolved observations of the large-scale magnetic field from the Wilcox Solar Observatory (WSO), the Helioseismic and Magnetic Imager (HMI), the Michelson Doppler Imager (MDI), and SOLIS. Cross comparison does not enable us to establish an absolute calibration, but it does allow us to discover and compensate for instrument problems, such as the sensitivity decrease seen in the WSO measurements in late 2016 and early 2017.

  16. Wafer-Scale Integration of Graphene-based Electronic, Optoelectronic and Electroacoustic Devices

    PubMed Central

    Tian, He; Yang, Yi; Xie, Dan; Cui, Ya-Long; Mi, Wen-Tian; Zhang, Yuegang; Ren, Tian-Ling

    2014-01-01

    By virtue of their superior properties, graphene-based devices have enormous potential to supplement or replace conventional silicon-based devices in various applications. However, the functionality of graphene devices is still limited due to the high cost, low efficiency, and limited quality of graphene growth and patterning techniques. We proposed a simple one-step laser scribing fabrication method to integrate wafer-scale high-performance graphene-based in-plane transistors, photodetectors, and loudspeakers. The in-plane graphene transistors have a large on/off ratio of up to 5.34. Graphene photodetector arrays were achieved with a photoresponsivity as high as 0.32 A/W. The graphene loudspeakers realize wide-band sound generation from 1 to 50 kHz. These results demonstrated that laser-scribed graphene could be used for wafer-scale integration of a variety of graphene-based electronic, optoelectronic and electroacoustic devices. PMID:24398542

  17. Spectral fingerprints of large-scale neuronal interactions.

    PubMed

    Siegel, Markus; Donner, Tobias H; Engel, Andreas K

    2012-01-11

    Cognition results from interactions among functionally specialized but widely distributed brain regions; however, neuroscience has so far largely focused on characterizing the function of individual brain regions and neurons therein. Here we discuss recent studies that have instead investigated the interactions between brain regions during cognitive processes by assessing correlations between neuronal oscillations in different regions of the primate cerebral cortex. These studies have opened a new window onto the large-scale circuit mechanisms underlying sensorimotor decision-making and top-down attention. We propose that frequency-specific neuronal correlations in large-scale cortical networks may be 'fingerprints' of canonical neuronal computations underlying cognitive processes.

  18. Fabrication of a wide-field NIR integral field unit for SWIMS using ultra-precision cutting

    NASA Astrophysics Data System (ADS)

    Kitagawa, Yutaro; Yamagata, Yutaka; Morita, Shin-ya; Motohara, Kentaro; Ozaki, Shinobu; Takahashi, Hidenori; Konishi, Masahiro; Kato, Natsuko M.; Kobayakawa, Yutaka; Terao, Yasunori; Ohashi, Hirofumi

    2016-07-01

    We describe an overview of fabrication methods and measurement results from test fabrications of optical surfaces for an integral field unit (IFU) for the Simultaneous color Wide-field Infrared Multi-object Spectrograph (SWIMS), a first-generation instrument for the University of Tokyo Atacama Observatory 6.5-m telescope. The SWIMS-IFU provides the entire near-infrared spectrum from 0.9 to 2.5 μm simultaneously, covering a wider field of view of 17" × 13" than current near-infrared IFUs. We investigate an ultra-precision cutting technique to monolithically fabricate optical surfaces of IFU optics such as an image slicer. Using a 4- or 5-axis ultra-precision machine, we compare the milling process and the shaper cutting process to find the best way to fabricate image slicers. The measurement results show that the surface roughness almost satisfies our requirement for both methods. Moreover, we also obtain an ideal surface form with the shaper cutting process. This method will be adopted for the other mirror arrays (i.e., the pupil mirror and slit mirror), and such monolithic fabrication will also help us to considerably reduce the alignment procedure for each optical element.

  19. Modulation of Small-scale Turbulence Structure by Large-scale Motions in the Absence of Direct Energy Transfer.

    NASA Astrophysics Data System (ADS)

    Brasseur, James G.; Juneja, Anurag

    1996-11-01

    Previous DNS studies indicate that small-scale structure can be directly altered through "distant" dynamical interactions by energetic forcing of the large scales. To remove the possibility of stimulating energy transfer between the large- and small-scale motions in these long-range interactions, we here perturb the large-scale structure without altering its energy content by suddenly altering only the phases of large-scale Fourier modes. Scale-dependent changes in turbulence structure appear as a nonzero difference field between two simulations from identical initial conditions of isotropic decaying turbulence, one perturbed and one unperturbed. We find that the large-scale phase perturbations leave the evolution of the energy spectrum virtually unchanged relative to the unperturbed turbulence. The difference field, on the other hand, is strongly affected by the perturbation. Most importantly, the time scale τ characterizing the change in turbulence structure at spatial scale r, shortly after initiating a change in large-scale structure, decreases with decreasing turbulence scale r. Thus, structural information is transferred directly from the large- to the smallest-scale motions in the absence of direct energy transfer, a long-range effect which cannot be explained by a linear mechanism such as rapid distortion theory. * Supported by ARO grant DAAL03-92-G-0117
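
    The phase-only perturbation can be sketched in one dimension: scrambling the phases of the low-wavenumber Fourier modes of a field changes the field itself but leaves the energy spectrum untouched (grid size and mode cutoff below are illustrative, not the DNS parameters):

```python
import numpy as np

rng = np.random.default_rng(5)

# 1D sketch of the large-scale phase perturbation: only the phases of
# low-wavenumber modes are scrambled; amplitudes (hence the energy
# spectrum) are preserved exactly.
n, k_large = 256, 8
u = rng.normal(size=n)
u_hat = np.fft.rfft(u)

phases = rng.uniform(0, 2 * np.pi, k_large)
u_hat_p = u_hat.copy()
u_hat_p[1:k_large + 1] *= np.exp(1j * phases)   # modes k = 1..k_large
u_p = np.fft.irfft(u_hat_p, n)

spec = np.abs(np.fft.rfft(u)) ** 2
spec_p = np.abs(np.fft.rfft(u_p)) ** 2
print("spectrum change:", np.max(np.abs(spec - spec_p)))  # ~ machine zero
print("field change:   ", np.max(np.abs(u - u_p)))
```

    Multiplying a Fourier coefficient by a unit-modulus factor changes its phase but not |û(k)|², which is why the perturbed and unperturbed runs start from identical spectra yet different structures.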

  20. Evolution of Scaling Emergence in Large-Scale Spatial Epidemic Spreading

    PubMed Central

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Background Zipf's law and Heaps' law are two representative scaling concepts, which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has yet to be fully clarified. Methodology/Principal Findings In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times, before a stable state is reached in which Heaps' law still holds while strict Zipf's law disappears. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and empirical results from pandemic diseases support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model for simulating pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. Conclusions/Significance The analyses of large-scale spatial epidemic spreading help understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies at the early stage of a pandemic disease. PMID:21747932

  1. A Functional Model for Management of Large Scale Assessments.

    ERIC Educational Resources Information Center

    Banta, Trudy W.; And Others

    This functional model for managing large-scale program evaluations was developed and validated in connection with the assessment of Tennessee's Nutrition Education and Training Program. Management of such a large-scale assessment requires the development of a structure for the organization; distribution and recovery of large quantities of…

  2. Multi-GNSS PPP-RTK: From Large- to Small-Scale Networks

    PubMed Central

    Nadarajah, Nandakumaran; Wang, Kan; Choudhury, Mazher

    2018-01-01

    Precise point positioning (PPP) and its integer ambiguity resolution-enabled variant, PPP-RTK (real-time kinematic), can benefit enormously from the integration of multiple global navigation satellite systems (GNSS). In such a multi-GNSS landscape, the positioning convergence time is expected to be reduced considerably compared to that of a single-GNSS setup. It is therefore the goal of the present contribution to provide numerical insights into the role played by multi-GNSS integration in delivering fast and high-precision positioning solutions (sub-decimeter and centimeter levels) using PPP-RTK. To that end, we employ the Curtin PPP-RTK platform and process data-sets of GPS, BeiDou Navigation Satellite System (BDS) and Galileo in stand-alone and combined forms. The data-sets are collected by various receiver types, ranging from high-end multi-frequency geodetic receivers to low-cost single-frequency mass-market receivers. The corresponding stations form a large-scale (Australia-wide) network as well as a small-scale network with inter-station distances of less than 30 km. In the case of the Australia-wide GPS-only ambiguity-float setup, 90% of the horizontal positioning errors (kinematic mode) are shown to become less than five centimeters after 103 min. The required time is reduced to 66 min for the corresponding GPS + BDS + Galileo setup, and is further reduced to 15 min by applying single-receiver ambiguity resolution. The outcomes are supported by the positioning results of the small-scale network. PMID:29614040
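    The convergence times quoted in this abstract follow from a simple definition: the first epoch after which a chosen percentile of the horizontal error stays below a threshold. The sketch below (hypothetical; it is not the Curtin platform's actual procedure, and the synthetic error model is invented for illustration) computes such a convergence time over many sessions.

    ```python
    import math
    import random

    def percentile(values, q):
        """Nearest-rank percentile of a list (q in [0, 100])."""
        s = sorted(values)
        idx = min(len(s) - 1, max(0, math.ceil(q / 100.0 * len(s)) - 1))
        return s[idx]

    def convergence_time(sessions, threshold=0.05, q=90):
        """First epoch index at which the q-th percentile of horizontal
        error across sessions drops below `threshold` (metres) and stays
        below it for the remainder of the series; None if never."""
        n_epochs = len(sessions[0])
        conv = None
        for t in range(n_epochs):
            p = percentile([s[t] for s in sessions], q)
            if p < threshold:
                if conv is None:
                    conv = t
            else:
                conv = None           # exceeded again: reset
        return conv

    # Synthetic sessions: error decays from ~0.5 m toward a few centimetres.
    rng = random.Random(3)
    sessions = [[0.5 * math.exp(-t / 20.0) + rng.uniform(0.0, 0.03)
                 for t in range(120)] for _ in range(50)]
    t_conv = convergence_time(sessions)
    ```

    The "stays below" condition matters: a percentile series that dips under the threshold and bounces back should not count as converged, so the candidate epoch is reset whenever the threshold is exceeded again.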

  3. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  4. Going the distance: spatial scale of athletic experience affects the accuracy of path integration.

    PubMed

    Smith, Alastair D; Howard, Christina J; Alcock, Niall; Cater, Kirsten

    2010-09-01

    Evidence suggests that athletically trained individuals are more accurate than untrained individuals in updating their spatial position through idiothetic cues. We assessed whether training at different spatial scales affects the accuracy of path integration. Groups of rugby players (large-scale training) and martial artists (small-scale training) participated in a triangle-completion task: they were led (blindfolded) along two sides of a right-angled triangle and were required to complete the hypotenuse by returning to the origin. The groups did not differ in their assessment of the distance to the origin, but rugby players were more accurate than martial artists in assessing the correct angle to turn (heading), and landed significantly closer to the origin. These data support evidence that distance and heading components can be dissociated. Furthermore, they suggest that the spatial scale at which an individual is trained may affect the accuracy of one component of path integration but not the other.
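    The geometry of the triangle-completion task, and the dissociation between the distance and heading components reported above, can be made concrete with a short sketch (numbers are illustrative, not the study's stimuli): given the two outbound legs, compute the correct homing distance and turn, and the landing error produced by a heading error alone.

    ```python
    import math

    def homing(leg1, leg2):
        """Outbound path: walk `leg1` m, turn 90 degrees (say, right), walk
        `leg2` m. Returns the straight-line distance home and the magnitude
        of the turn between the current heading and the homing direction."""
        # Start at the origin facing +y; after the right turn, heading is +x.
        x, y = leg2, leg1                      # final position
        dist = math.hypot(x, y)
        turn = math.degrees(math.acos(-x / dist))  # angle to direction (-x, -y)
        return dist, turn

    def landing_error(leg1, leg2, heading_error_deg):
        """Distance from the origin if the walker estimates the homing
        distance correctly but turns with a heading error: the endpoint is
        rotated about the start of the return leg, so the miss distance is
        the chord 2*d*sin(err/2)."""
        d = math.hypot(leg1, leg2)
        return 2.0 * d * math.sin(math.radians(heading_error_deg) / 2.0)

    dist, turn = homing(3.0, 4.0)   # 3-4-5 triangle
    ```

    This separation is why the two groups could match on distance yet differ in landing accuracy: with a correct distance estimate, the miss distance grows with the heading error alone.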

  5. Ultra-low thermal conductivities in large-area Si-Ge nanomeshes for thermoelectric applications

    PubMed Central

    Perez-Taborda, Jaime Andres; Muñoz Rojo, Miguel; Maiz, Jon; Neophytou, Neophytos; Martin-Gonzalez, Marisol

    2016-01-01

    In this work, we measure the thermal and thermoelectric properties of large-area Si0.8Ge0.2 nano-meshed films fabricated by DC sputtering of Si0.8Ge0.2 on highly ordered porous alumina matrices. The Si0.8Ge0.2 film replicated the porous alumina structure, resulting in nano-meshed films. Very good control of the nanomesh geometrical features (pore diameter, pitch, neck) was achieved through the alumina template, with pore diameters ranging from 294 ± 5 nm down to 31 ± 4 nm. The method we developed provides large areas of nano-meshes in a simple and reproducible way, and is easily scalable for industrial applications. Most importantly, the thermal conductivity of the films was reduced as the pore diameter became smaller, to values varying from κ = 1.54 ± 0.27 W K−1m−1 down to the ultra-low value κ = 0.55 ± 0.10 W K−1m−1. The latter is well below the amorphous limit, while the Seebeck coefficient and electrical conductivity of the material were retained. These properties, together with our large-area fabrication approach, can provide an important route towards achieving high-conversion-efficiency, large-area, and highly scalable thermoelectric materials. PMID:27650202

  6. Multi-time-scale hydroclimate dynamics of a regional watershed and links to large-scale atmospheric circulation: Application to the Seine river catchment, France

    NASA Astrophysics Data System (ADS)

    Massei, N.; Dieppois, B.; Hannah, D. M.; Lavers, D. A.; Fossa, M.; Laignel, B.; Debret, M.

    2017-03-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, and provide future scenarios of water resources. For a better understanding of hydrological changes, it is of crucial importance to determine how, and to what extent, trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating correlation between large and local scales, empirical statistical downscaling, and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed and of North Atlantic sea level pressure (SLP), in order to gain additional insight into the atmospheric patterns associated with the regional hydrology. We hypothesized that: (i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and (ii) defining those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the links between large and local scales were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach, which integrated discrete wavelet multiresolution analysis for reconstructing monthly regional hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector). This
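    The discrete wavelet multiresolution step used above splits a monthly series into a coarse approximation plus scale-by-scale detail signals. A minimal pure-Python sketch using the Haar wavelet (an assumption: the abstract does not name the wavelet family used) shows the decomposition and its perfect reconstruction.

    ```python
    def haar_analyze(x):
        """Haar multiresolution analysis of a length-2**k series: returns
        the final approximation (overall mean) and detail coefficients at
        each scale, finest first, using the average/difference convention."""
        assert len(x) & (len(x) - 1) == 0, "length must be a power of two"
        a, details = list(x), []
        while len(a) > 1:
            approx = [(a[2*i] + a[2*i+1]) / 2.0 for i in range(len(a) // 2)]
            detail = [(a[2*i] - a[2*i+1]) / 2.0 for i in range(len(a) // 2)]
            details.append(detail)
            a = approx
        return a[0], details

    def haar_synthesize(approx, details):
        """Invert haar_analyze: x[2i] = a[i] + d[i], x[2i+1] = a[i] - d[i]."""
        a = [approx]
        for detail in reversed(details):
            nxt = []
            for ai, di in zip(a, detail):
                nxt.extend([ai + di, ai - di])
            a = nxt
        return a

    series = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
    mean, details = haar_analyze(series)
    recon = haar_synthesize(mean, details)
    ```

    Because each detail list isolates one temporal wavelength, a scale-dependent link to a large-scale predictor can be fitted per level, which is the idea behind combining multiresolution analysis with statistical downscaling.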

  7. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  8. Integrating adaptive behaviour in large-scale flood risk assessments: an Agent-Based Modelling approach

    NASA Astrophysics Data System (ADS)

    Haer, Toon; Aerts, Jeroen

    2015-04-01

    Between 1998 and 2009, Europe suffered over 213 major damaging floods, causing 1126 deaths and displacing around half a million people. In this period, floods caused at least 52 billion euro in insured economic losses, making floods the most costly natural hazard faced in Europe. In many low-lying areas, the main strategy for coping with floods is to reduce the risk of the hazard through flood defence structures, such as dikes and levees. However, it has been suggested that part of the responsibility for flood protection needs to shift to households and businesses in areas at risk, and that governments and insurers can effectively stimulate the implementation of individual protective measures. Yet adaptive behaviour towards flood risk reduction, and the interaction between governments, insurers, and individuals, has hardly been studied in large-scale flood risk assessments. In this study, a European Agent-Based Model is developed, including agent representatives for the administrative stakeholders of European Member States, insurer and reinsurer markets, and individuals following complex behaviour models. The Agent-Based Modelling approach allows for an in-depth analysis of the interaction between heterogeneous autonomous agents and the resulting (non-)adaptive behaviour. Existing flood damage models are part of the European Agent-Based Model, allowing a dynamic response of both the agents and the environment to changing flood risk and protective efforts. By following an Agent-Based Modelling approach, this study is a first contribution to overcoming the limitations of traditional large-scale flood risk models, in which the influence of individual adaptive behaviour towards flood risk reduction is often lacking.
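    The kind of household-level adaptive behaviour discussed above can be sketched with a toy agent-based model. Everything here is hypothetical (the agent rules, thresholds, and flood schedule are invented for illustration, not taken from the European model): each household holds a perceived flood probability that jumps after experiencing a flood and decays otherwise (an availability-heuristic rule), and adopts a protective measure once perception exceeds its personal threshold.

    ```python
    import random

    class Household:
        """Minimal agent: adopts a protective measure once its perceived
        flood probability, updated by experience, exceeds its threshold."""
        def __init__(self, rng):
            self.perceived_p = rng.uniform(0.0, 0.02)
            self.threshold = rng.uniform(0.03, 0.10)
            self.protected = False

        def step(self, flooded):
            # Availability heuristic: perception jumps after a flood,
            # decays during quiet years.
            if flooded:
                self.perceived_p = min(1.0, self.perceived_p + 0.08)
            else:
                self.perceived_p *= 0.95
            if not self.protected and self.perceived_p > self.threshold:
                self.protected = True

    def run(n_households=1000, years=50, flood_years=(10, 30), seed=11):
        rng = random.Random(seed)
        agents = [Household(rng) for _ in range(n_households)]
        adoption = []                      # fraction protected per year
        for year in range(years):
            flooded = year in flood_years
            for a in agents:
                a.step(flooded)
            adoption.append(sum(a.protected for a in agents) / n_households)
        return adoption

    adoption = run()
    ```

    Even this caricature reproduces the qualitative point of coupling damage models to behaviour: adoption is event-driven and path-dependent, so the protection level entering the risk calculation changes over time rather than being a fixed scenario input.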

  9. The Large Scale Structure of the Galactic Magnetic Field and High Energy Cosmic Ray Anisotropy

    NASA Astrophysics Data System (ADS)

    Alvarez-Muñiz, Jaime; Stanev, Todor

    2006-10-01

    Measurements of the magnetic field in our Galaxy are complex and usually difficult to interpret. A spiral regular field in the disk is favored by observations, however the number of field reversals is still under debate. Measurements of the parity of the field across the Galactic plane are also very difficult due to the presence of the disk field itself. In this work we demonstrate that cosmic ray protons in the energy range 10^18 to 10^19 eV, if accelerated near the center of the Galaxy, are sensitive to the large scale structure of the Galactic Magnetic Field (GMF). In particular if the field is of even parity, and the spiral field is bi-symmetric (BSS), ultra high energy protons will predominantly come from the Southern Galactic hemisphere, and predominantly from the Northern Galactic hemisphere if the field is of even parity and axi-symmetric (ASS). There is no sensitivity to the BSS or ASS configurations if the field is of odd parity.

  10. Space-Time Controls on Carbon Sequestration Over Large-Scale Amazon Basin

    NASA Technical Reports Server (NTRS)

    Smith, Eric A.; Cooper, Harry J.; Gu, Jiujing; Grose, Andrew; Norman, John; daRocha, Humberto R.; Starr, David O. (Technical Monitor)

    2002-01-01

    A major research focus of the LBA Ecology Program is an assessment of the carbon budget and the carbon sequestering capacity of the large scale forest-pasture system that dominates the Amazonia landscape, and its time-space heterogeneity manifest in carbon fluxes across the large scale Amazon basin ecosystem. Quantification of these processes requires a combination of in situ measurements, remotely sensed measurements from space, and a realistically forced hydrometeorological model coupled to a carbon assimilation model, capable of simulating details within the surface energy and water budgets along with the principal modes of photosynthesis and respiration. Here we describe the results of an investigation concerning the space-time controls of carbon sources and sinks distributed over the large scale Amazon basin. The results are derived from a carbon-water-energy budget retrieval system for the large scale Amazon basin, which uses a coupled carbon assimilation-hydrometeorological model as an integrating system, forced by both in situ meteorological measurements and remotely sensed radiation fluxes and precipitation retrieved from a combination of GOES, SSM/I, TOMS, and TRMM satellite measurements. A brief discussion is given concerning validation of (a) retrieved surface radiation fluxes and precipitation, based on 30-min averaged surface measurements taken at Ji-Parana in Rondonia and Manaus in Amazonas, and (b) modeled carbon fluxes, based on tower CO2 flux measurements taken at Reserva Jaru, Manaus and Fazenda Nossa Senhora. The space-time controls on carbon sequestration are partitioned into sets of factors classified by: (1) above-canopy meteorology, (2) incoming surface radiation, (3) precipitation interception, and (4) indigenous stomatal processes, varied over the different land covers of pristine rainforest, partially and fully logged rainforests, and pasture lands. These are the principal meteorological, thermodynamical, hydrological, and biophysical

  11. Seismic safety in conducting large-scale blasts

    NASA Astrophysics Data System (ADS)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises to prepare hard rocks for excavation a drilling and blasting method is used. With the approach of mining operations to settlements the negative effect of large-scale blasts increases. To assess the level of seismic impact of large-scale blasts the scientific staff of Siberian State Industrial University carried out expertise for coal mines and iron ore enterprises. Determination of the magnitude of surface seismic vibrations caused by mass explosions was performed using seismic receivers, an analog-digital converter with recording on a laptop. The registration results of surface seismic vibrations during production of more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth’s surface vibrations are determined. The safety evaluation of seismic effect was carried out according to the permissible value of vibration velocity. For cases with exceedance of permissible values recommendations were developed to reduce the level of seismic impact.

  12. An Integrated Assessment of Location-Dependent Scaling for Microalgae Biofuel Production Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, Andre M.; Abodeely, Jared; Skaggs, Richard

    Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting/design through processing/upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are addressed in part by applying the Integrated Assessment Framework (IAF)—an integrated multi-scale modeling, analysis, and data management suite—to address key issues in developing and operating an open-pond facility by analyzing how variability and uncertainty in space and time affect algal feedstock production rates, and determining the site-specific “optimum” facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. The IAF was applied to a set of sites previously identified as having the potential to cumulatively produce 5 billion-gallons/year in the southeastern U.S. and results indicate costs can be reduced by selecting the most effective processing technology pathway and scaling downstream processing capabilities to fit site-specific growing conditions, available resources, and algal strains.

  13. An integrated assessment of location-dependent scaling for microalgae biofuel production facilities

    DOE PAGES

    Coleman, André M.; Abodeely, Jared M.; Skaggs, Richard L.; ...

    2014-06-19

    Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting and design through processing and upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are partially addressed by applying the Integrated Assessment Framework (IAF) – an integrated multi-scale modeling, analysis, and data management suite – to address key issues in developing and operating an open-pond microalgae production facility. This is done by analyzing how variability and uncertainty over space and through time affect feedstock production rates, and determining the site-specific “optimum” facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. To provide a baseline analysis, the IAF was applied in this paper to a set of sites in the southeastern U.S. with the potential to cumulatively produce 5 billion gallons per year. Finally, the results indicate costs can be reduced by scaling downstream processing capabilities to fit site-specific growing conditions, available and economically viable resources, and specific microalgal strains.

  14. Potential for geophysical experiments in large scale tests.

    USGS Publications Warehouse

    Dieterich, J.H.

    1981-01-01

    Potential research applications for large-specimen geophysical experiments include measurements of scale dependence of physical parameters and examination of interactions with heterogeneities, especially flaws such as cracks. In addition, increased specimen size provides opportunities for improved recording resolution and greater control of experimental variables. Large-scale experiments using a special purpose low stress (100 MPa). -Author

  15. Great Thermal Conductivity Enhancement of Silicone Composite with Ultra-Long Copper Nanowires.

    PubMed

    Zhang, Liye; Yin, Junshan; Yu, Wei; Wang, Mingzhu; Xie, Huaqing

    2017-12-01

    In this paper, ultra-long copper nanowires (CuNWs) were successfully synthesized at large scale by hydrothermal reduction of divalent copper ions using oleylamine and oleic acid as dual ligands. The CuNWs are rigid and linear, clearly different from graphene nanoplatelets (GNPs) and multi-wall carbon nanotubes (MWCNTs). The thermal properties of silicone composites with each of the three nanomaterials were investigated and modeled. The maximum thermal conductivity enhancement reaches 215% at only 1.0 vol.% CuNW loading, much higher than with GNPs or MWCNTs. This is attributed to the ultra-long CuNWs, whose lengths of more than 100 μm facilitate the formation of effective thermal-conductive networks, resulting in a great enhancement of thermal conductivity.

  16. Great Thermal Conductivity Enhancement of Silicone Composite with Ultra-Long Copper Nanowires

    NASA Astrophysics Data System (ADS)

    Zhang, Liye; Yin, Junshan; Yu, Wei; Wang, Mingzhu; Xie, Huaqing

    2017-07-01

    In this paper, ultra-long copper nanowires (CuNWs) were successfully synthesized at large scale by hydrothermal reduction of divalent copper ions using oleylamine and oleic acid as dual ligands. The CuNWs are rigid and linear, clearly different from graphene nanoplatelets (GNPs) and multi-wall carbon nanotubes (MWCNTs). The thermal properties of silicone composites with each of the three nanomaterials were investigated and modeled. The maximum thermal conductivity enhancement reaches 215% at only 1.0 vol.% CuNW loading, much higher than with GNPs or MWCNTs. This is attributed to the ultra-long CuNWs, whose lengths of more than 100 μm facilitate the formation of effective thermal-conductive networks, resulting in a great enhancement of thermal conductivity.

  17. Plans for Embedding ICTs into Teaching and Learning through a Large-Scale Secondary Education Reform in the Country of Georgia

    ERIC Educational Resources Information Center

    Richardson, Jayson W.; Sales, Gregory; Sentocnik, Sonja

    2015-01-01

    Integrating ICTs into international development projects is common. However, focusing on how ICTs support leading, teaching, and learning is often overlooked. This article describes a team's approach to technology integration into the design of a large-scale, five year, teacher and leader professional development project in the country of Georgia.…

  18. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  19. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large-scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small-scale (10-100 m) habitat variability on large-scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
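    A standard way the small-scale variation enters the large-scale parameters in homogenization of ecological diffusion (where the motility μ(x) sits inside the Laplacian, u_t = Δ(μu)) is through a harmonic-type average of the patch motilities, which is dominated by slow, high-residence-time habitat. The sketch below illustrates that averaging choice on a toy patchy landscape; treat the harmonic-average form as an assumption here rather than the paper's exact derivation, and the motility values as invented.

    ```python
    def arithmetic_mean(mu):
        return sum(mu) / len(mu)

    def harmonic_mean(mu):
        """Candidate effective large-scale motility for equally sized
        patches: n / sum(1/mu_i). Slow patches dominate because animals
        accumulate residence time where motility is low."""
        return len(mu) / sum(1.0 / m for m in mu)

    # Patchy habitat: mostly fast matrix (motility 100) with one slow refuge.
    mu = [100.0] * 9 + [1.0]
    mu_eff = harmonic_mean(mu)
    ```

    The contrast with the arithmetic mean (about 90 here versus roughly 9 for the harmonic average) shows why small-scale habitat structure cannot simply be averaged away: a single 10% slow patch pulls the effective large-scale motility down by an order of magnitude.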

  20. A large-scale circuit mechanism for hierarchical dynamical processing in the primate cortex

    PubMed Central

    Chaudhuri, Rishidev; Knoblauch, Kenneth; Gariel, Marie-Alice; Kennedy, Henry; Wang, Xiao-Jing

    2015-01-01

    We developed a large-scale dynamical model of the macaque neocortex, which is based on recently acquired directed- and weighted-connectivity data from tract-tracing experiments, and which incorporates heterogeneity across areas. A hierarchy of timescales naturally emerges from this system: sensory areas show brief, transient responses to input (appropriate for sensory processing), whereas association areas integrate inputs over time and exhibit persistent activity (suitable for decision-making and working memory). The model displays multiple temporal hierarchies, as evidenced by contrasting responses to visual versus somatosensory stimulation. Moreover, slower prefrontal and temporal areas have a disproportionate impact on global brain dynamics. These findings establish a circuit mechanism for “temporal receptive windows” that are progressively enlarged along the cortical hierarchy, suggest an extension of time integration in decision-making from local to large circuits, and should prompt a re-evaluation of the analysis of functional connectivity (measured by fMRI or EEG/MEG) by taking into account inter-areal heterogeneity. PMID:26439530
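    The hierarchy of timescales described in this abstract can be illustrated, in a drastically reduced form, with two uncoupled leaky integrators: a "sensory" unit with a short time constant and an "association" unit with a long one (the time constants below are hypothetical, not the model's fitted values). The fast unit responds briefly and transiently to a pulse; the slow unit integrates it and retains activity long after the input ends.

    ```python
    def leaky_integrator(tau, pulse_steps=10, total_steps=200, dt=1.0):
        """Euler-integrate dr/dt = (-r + input)/tau for a brief unit input
        pulse; returns the activity trace."""
        r, trace = 0.0, []
        for t in range(total_steps):
            stim = 1.0 if t < pulse_steps else 0.0
            r += dt / tau * (-r + stim)
            trace.append(r)
        return trace

    sensory = leaky_integrator(tau=5.0)       # fast, transient response
    association = leaky_integrator(tau=50.0)  # slow, persistent integration
    ```

    The fast unit reaches a higher peak during the pulse (it tracks the input), while the slow unit ends the simulation with far more residual activity, which is the single-unit caricature of a "temporal receptive window" that widens along the hierarchy.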

  1. Low frequency steady-state brain responses modulate large scale functional networks in a frequency-specific means.

    PubMed

    Wang, Yi-Feng; Long, Zhiliang; Cui, Qian; Liu, Feng; Jing, Xiu-Juan; Chen, Heng; Guo, Xiao-Nan; Yan, Jin H; Chen, Hua-Fu

    2016-01-01

    Neural oscillations are essential for brain functions. Research has suggested that the frequency of neural oscillations is lower for more integrative and remote communications. In this vein, some resting-state studies have suggested that large-scale networks function in the very low frequency range (<1 Hz). However, it is difficult to determine the frequency characteristics of brain networks because both resting-state studies and conventional frequency tagging approaches cannot simultaneously capture multiple large-scale networks in controllable cognitive activities. In this preliminary study, we aimed to examine whether large-scale networks can be modulated by task-induced low frequency steady-state brain responses (lfSSBRs) in a frequency-specific pattern. In a revised attention network test, lfSSBRs were evoked in the triple-network system and the sensory-motor system, indicating that large-scale networks can be modulated by frequency tagging. Furthermore, inter- and intranetwork synchronization and coherence were increased at the fundamental frequency and the first harmonic, but not at other frequency bands, indicating frequency-specific modulation of information communication. However, there was no difference among attention conditions, indicating that lfSSBRs modulate the general attention state much more strongly than they distinguish attention conditions. This study provides insights into the advantage and mechanism of lfSSBRs and, more importantly, paves a new way to investigate frequency-specific large-scale brain activities. © 2015 Wiley Periodicals, Inc.
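    The core of a frequency-tagging analysis like the one above is checking that spectral power concentrates at the stimulation (fundamental) frequency and its first harmonic but not elsewhere. The sketch below is a generic, hypothetical illustration on a synthetic signal (not the study's fMRI pipeline): a plain DFT evaluated at the tagged bins.

    ```python
    import cmath
    import math
    import random

    def dft_power(x, k):
        """Power of the length-N series x at DFT bin k (cycles per window)."""
        n = len(x)
        coeff = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        return abs(coeff) ** 2 / n

    # Synthetic response: fundamental at 4 cycles/window, first harmonic at
    # 8, plus white noise (an illustrative stand-in for an lfSSBR recording).
    rng = random.Random(2)
    N = 256
    signal = [math.sin(2 * math.pi * 4 * t / N)
              + 0.5 * math.sin(2 * math.pi * 8 * t / N)
              + 0.2 * rng.gauss(0.0, 1.0)
              for t in range(N)]

    p_fund = dft_power(signal, 4)    # tagged fundamental
    p_harm = dft_power(signal, 8)    # first harmonic
    p_other = dft_power(signal, 6)   # untagged control bin
    ```

    Power at the two tagged bins rises orders of magnitude above the untagged control bin, which is the frequency-specific signature the study exploits to separate tagged network activity from broadband background.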

  2. Steps Towards Understanding Large-scale Deformation of Gas Hydrate-bearing Sediments

    NASA Astrophysics Data System (ADS)

    Gupta, S.; Deusner, C.; Haeckel, M.; Kossel, E.

    2016-12-01

    Marine sediments bearing gas hydrates are typically characterized by heterogeneity in the gas hydrate distribution and anisotropy in the sediment-gas hydrate fabric properties. Gas hydrates also contribute to the strength and stiffness of the marine sediment, and any disturbance in the thermodynamic stability of the gas hydrates is likely to affect the geomechanical stability of the sediment. Understanding mechanisms and triggers of large-strain deformation and failure of marine gas hydrate-bearing sediments is an area of extensive research, particularly in the context of marine slope-stability and industrial gas production. The ultimate objective is to predict severe deformation events such as regional-scale slope failure or excessive sand production by using numerical simulation tools. The development of such tools essentially requires a careful analysis of thermo-hydro-chemo-mechanical behavior of gas hydrate-bearing sediments at lab-scale, and its stepwise integration into reservoir-scale simulators through definition of effective variables, use of suitable constitutive relations, and application of scaling laws. One of the focus areas of our research is to understand the bulk coupled behavior of marine gas hydrate systems with contributions from micro-scale characteristics, transport-reaction dynamics, and structural heterogeneity through experimental flow-through studies using high-pressure triaxial test systems and advanced tomographical tools (CT, ERT, MRI). We combine these studies to develop mathematical model and numerical simulation tools which could be used to predict the coupled hydro-geomechanical behavior of marine gas hydrate reservoirs in a large-strain framework. Here we will present some of our recent results from closely co-ordinated experimental and numerical simulation studies with an objective to capture the large-deformation behavior relevant to different gas production scenarios. We will also report on a variety of mechanically relevant

  3. Large-scale weakly supervised object localization via latent category learning.

    PubMed

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image conditions, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy that evaluates each category's discrimination. Finally, we propose online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision that outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.

  4. A Bio-Realistic Analog CMOS Cochlea Filter With High Tunability and Ultra-Steep Roll-Off.

    PubMed

    Wang, Shiwei; Koickal, Thomas Jacob; Hamilton, Alister; Cheung, Rebecca; Smith, Leslie S

    2015-06-01

    This paper presents the design and experimental results of a cochlea filter in analog very large scale integration (VLSI) which closely resembles the physiologically measured response of the mammalian cochlea. The filter consists of three specialized sub-filter stages which respectively provide a passive response at low frequencies, an actively tunable response at mid-band frequencies and an ultra-steep roll-off at the transition from pass-band to stop-band. The sub-filters are implemented in a balanced ladder topology using floating active inductors. Measured results from the fabricated chip show that a wide range of mid-band tuning, including gain tuning of over 20 dB and Q factor tuning from 2 to 19, as well as the bio-realistic center frequency shift, is achieved by adjusting only one circuit parameter. In addition, the filter has an ultra-steep roll-off exceeding 300 dB/dec. By changing biasing currents, the filter can be configured to operate with center frequencies from 31 Hz to 8 kHz. The filter is 9th order, consumes 59.5 ∼ 90.0 μW power and occupies 0.9 mm2 chip area. A parallel bank of the proposed filter can be used as the front-end in hearing prosthesis devices, speech processors and other bio-inspired auditory systems owing to its bio-realistic behavior, low power consumption and small size.

  5. Low-loss integrated electrical surface plasmon source with ultra-smooth metal film fabricated by polymethyl methacrylate ‘bond and peel’ method

    NASA Astrophysics Data System (ADS)

    Liu, Wenjie; Hu, Xiaolong; Zou, Qiushun; Wu, Shaoying; Jin, Chongjun

    2018-06-01

    External light sources are mostly employed to functionalize the plasmonic components, resulting in a bulky footprint. Electrically driven integrated plasmonic devices, combining ultra-compact critical feature sizes with extremely high transmission speeds and low power consumption, can link plasmonics with the present-day electronic world. In an effort to achieve this prospect, suppressing the losses in the plasmonic devices becomes a pressing issue. In this work, we developed a novel polymethyl methacrylate ‘bond and peel’ method to fabricate metal films with sub-nanometer smooth surfaces on semiconductor wafers. Based on this method, we further fabricated a compact plasmonic source containing a metal-insulator-metal (MIM) waveguide with an ultra-smooth metal surface on a GaAs-based light-emitting diode wafer. An increase in propagation length of the SPP mode by a factor of 2.95 was achieved as compared with the conventional device containing a relatively rough metal surface. Numerical calculations further confirmed that the propagation length is comparable to the theoretical prediction on the MIM waveguide with perfectly smooth metal surfaces. This method facilitates low-loss and high-integration of electrically driven plasmonic devices, thus provides an immediate opportunity for the practical application of on-chip integrated plasmonic circuits.

  6. Ultra-bright emission from hexagonal boron nitride defects as a new platform for bio-imaging and bio-labelling

    NASA Astrophysics Data System (ADS)

    Elbadawi, Christopher; Tran, Trong Toan; Shimoni, Olga; Totonjian, Daniel; Lobo, Charlene J.; Grosso, Gabriele; Moon, Hyowan; Englund, Dirk R.; Ford, Michael J.; Aharonovich, Igor; Toth, Milos

    2016-12-01

    Bio-imaging requires robust ultra-bright probes that do not cause any toxicity to the cellular environment, maintain their stability, and are chemically inert. In this work we present hexagonal boron nitride (hBN) nanoflakes which host narrowband ultra-bright single-photon emitters [1]. The emitters are optically stable at room temperature and under ambient environment. hBN has also been noted to be non-cytotoxic and has seen significant advances in functionalization with biomolecules [2,3]. We further demonstrate two methods of engineering this new range of extremely robust multicolour emitters across the visible and near-infrared spectral ranges for large-scale sensing and biolabelling applications.

  7. Large-scale structure of randomly jammed spheres

    NASA Astrophysics Data System (ADS)

    Ikeda, Atsushi; Berthier, Ludovic; Parisi, Giorgio

    2017-05-01

    We numerically analyze the density field of three-dimensional randomly jammed packings of monodisperse soft frictionless spherical particles, paying special attention to fluctuations occurring at large length scales. We study in detail the two-point static structure factor at low wave vectors in Fourier space. We also analyze the nature of the density field in real space by studying the large-distance behavior of the two-point pair correlation function, of density fluctuations in subsystems of increasing sizes, and of the direct correlation function. We show that such real space analysis can be greatly improved by introducing a coarse-grained density field to disentangle genuine large-scale correlations from purely local effects. Our results confirm that both Fourier and real space signatures of vanishing density fluctuations at large scale are absent, indicating that randomly jammed packings are not hyperuniform. In addition, we establish that the pair correlation function displays a surprisingly complex structure at large distances, which is however not compatible with the long-range negative correlation of hyperuniform systems but fully compatible with an analytic form for the structure factor. This implies that the direct correlation function is short ranged, as we also demonstrate directly. Our results reveal that density fluctuations in jammed packings do not follow the behavior expected for random hyperuniform materials, but display instead a more complex behavior.
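The low-wave-vector structure-factor test at the heart of this analysis can be illustrated on synthetic data. The snippet below (a toy Poisson point pattern, not the jammed packings of the study) estimates S(k) at the smallest allowed wave vectors; a hyperuniform pattern would give S(k) → 0, while an ideal-gas-like pattern gives S(k) ≈ 1:

```python
import numpy as np

# S(k) = |sum_j exp(-i k.r_j)|^2 / N for a 3D Poisson point pattern
# in a periodic box of side L. Allowed wave vectors are multiples of
# 2*pi/L; we average over the few smallest along each axis.
rng = np.random.default_rng(1)
N, L = 4000, 10.0
r = rng.random((N, 3)) * L

S_vals = []
for m in (1, 2, 3):
    k = 2.0 * np.pi * m / L
    for axis in range(3):
        kvec = np.zeros(3)
        kvec[axis] = k
        rho_k = np.exp(-1j * (r @ kvec)).sum()   # collective density mode
        S_vals.append(abs(rho_k) ** 2 / N)

S_low_k = float(np.mean(S_vals))
print(f"S(k->0) estimate: {S_low_k:.3f}")  # statistically near 1 for Poisson
```

Running the same estimator on jammed-packing coordinates is how one checks whether density fluctuations vanish at large scale.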

  8. Large-scale thermal energy storage using sodium hydroxide /NaOH/

    NASA Technical Reports Server (NTRS)

    Turner, R. H.; Truscello, V. C.

    1977-01-01

    A technique employing NaOH phase change material for large-scale thermal energy storage to 900 F (482 C) is described; the concept consists of a 12-foot-diameter by 60-foot-long cylindrical steel shell with closely spaced internal tubes, similar to a shell-and-tube heat exchanger. The NaOH heat storage medium fills the space between the tubes and the outer shell. To charge the system, superheated steam flowing through the tubes melts and raises the temperature of the NaOH; for discharge, pressurized water flows through the same tube bundle. A technique for system design and cost estimation is shown. General technical and economic properties of the storage unit integrated into a solar power plant are discussed.

  9. A Novel Architecture of Large-scale Communication in IoT

    NASA Astrophysics Data System (ADS)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things (IoT) and networked physical systems. However, few have provided a detailed visualization of large-scale communication architectures in the IoT. In fact, the non-uniform technology between IPv6 and access points has led to a lack of broad principles for large-scale communication architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communication in the IoT.

  10. Real-Time Large-Scale Dense Mapping with Surfels

    PubMed Central

    Fu, Xingyin; Zhu, Feng; Wu, Qingxiao; Sun, Yunlei; Lu, Rongrong; Yang, Ruigang

    2018-01-01

    Real-time dense mapping systems have been developed since the birth of consumer RGB-D cameras. Currently, there are two commonly used models in dense mapping systems: the truncated signed distance function (TSDF) and the surfel. State-of-the-art dense mapping systems usually work fine in small-sized regions, but the generated dense surface may be unsatisfactory around loop closures when the system tracking drift grows large. In addition, the efficiency of a surfel-based system slows down when the number of model points in the map becomes large. In this paper, we propose to use two maps in the dense mapping system. The RGB-D images are integrated into a local surfel map. Old surfels that were reconstructed earlier and lie far from the camera frustum are moved from the local map to the global map. The number of surfels updated in the local map as each frame arrives is thus kept bounded. Therefore, in our system, the scene that can be reconstructed is very large, and the frame rate of our system remains high. We detect loop closures and optimize the pose graph to distribute system tracking drift. The positions and normals of the surfels in the map are also corrected using an embedded deformation graph so that they are consistent with the updated poses. In order to deal with large surface deformations, we propose a new method for constructing constraints with system trajectories and loop closure keyframes. The proposed new method stabilizes large-scale surface deformation. Experimental results show that our system performs better than prior state-of-the-art dense mapping systems. PMID:29747450
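The two-map bookkeeping can be sketched as follows (hypothetical data layout and a simple distance criterion; the actual system uses the camera frustum and surfel timestamps):

```python
import numpy as np

# Keep a bounded "local" surfel map near the camera and demote distant
# surfels to a "global" map so that per-frame updates stay cheap.
local_map = []    # surfels as (position, normal, timestamp) tuples
global_map = []

def integrate_frame(points, camera_pos, t, radius=5.0):
    """Add new surfels for this frame, then demote surfels whose
    distance to the camera exceeds `radius` (frustum stand-in)."""
    global local_map, global_map
    for p in points:
        local_map.append((np.asarray(p, float), np.array([0.0, 0.0, 1.0]), t))
    keep, demote = [], []
    for s in local_map:
        (keep if np.linalg.norm(s[0] - camera_pos) <= radius else demote).append(s)
    local_map = keep
    global_map.extend(demote)

# Camera moves along +x; surfels left behind are demoted to the global map.
for t, x in enumerate([0.0, 4.0, 8.0, 12.0]):
    cam = np.array([x, 0.0, 0.0])
    integrate_frame([[x, 0.0, 1.0]], cam, t)

print(len(local_map), len(global_map))  # → 2 2
```

The point of the split is that loop-closure corrections (pose graph plus deformation graph) can then touch the global map asynchronously while frame integration stays real-time.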

  11. Gravitational lenses and large scale structure

    NASA Technical Reports Server (NTRS)

    Turner, Edwin L.

    1987-01-01

    Four possible statistical tests of the large scale distribution of cosmic material are described. Each is based on gravitational lensing effects. The current observational status of these tests is also summarized.

  12. A fiber-optic ice detection system for large-scale wind turbine blades

    NASA Astrophysics Data System (ADS)

    Kim, Dae-gil; Sampath, Umesh; Kim, Hyunjin; Song, Minho

    2017-09-01

    Icing causes substantial problems in the integrity of large-scale wind turbines. In this work, a fiber-optic sensor system for detection of icing with an arrayed waveguide grating is presented. The sensor system detects Fresnel reflections from the ends of the fibers. The transition in Fresnel reflection due to icing gives peculiar intensity variations, which categorizes the ice, the water, and the air medium on the wind turbine blades. From the experimental results, with the proposed sensor system, the formation of icing conditions and thickness of ice were identified successfully in real time.
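The medium classification rests on normal-incidence Fresnel reflectance at the fiber end face. A minimal sketch, assuming a silica fiber index of about 1.45 and textbook refractive indices for the three media (illustrative values, not the paper's calibration):

```python
# Normal-incidence Fresnel reflectance R = ((n1 - n2) / (n1 + n2))**2.
# The distinct R levels for air, water, and ice are what let the sensor
# distinguish the medium at the blade surface.
n_fiber = 1.45
media = {"air": 1.00, "water": 1.33, "ice": 1.31}

reflectance = {}
for name, n in media.items():
    reflectance[name] = ((n_fiber - n) / (n_fiber + n)) ** 2
    print(f"{name:5s} R = {reflectance[name] * 100:.2f}%")
```

Air returns roughly an order of magnitude more light than water or ice, and water and ice differ slightly, which is why the intensity transition during icing is distinctive.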

  13. Large-Scale 1:1 Computing Initiatives: An Open Access Database

    ERIC Educational Resources Information Center

    Richardson, Jayson W.; McLeod, Scott; Flora, Kevin; Sauers, Nick J.; Kannan, Sathiamoorthy; Sincar, Mehmet

    2013-01-01

    This article details the spread and scope of large-scale 1:1 computing initiatives around the world. What follows is a review of the existing literature around 1:1 programs followed by a description of the large-scale 1:1 database. Main findings include: 1) the XO and the Classmate PC dominate large-scale 1:1 initiatives; 2) if professional…

  14. Neurodevelopmental alterations of large-scale structural networks in children with new-onset epilepsy

    PubMed Central

    Bonilha, Leonardo; Tabesh, Ali; Dabbs, Kevin; Hsu, David A.; Stafstrom, Carl E.; Hermann, Bruce P.; Lin, Jack J.

    2014-01-01

    Recent neuroimaging and behavioral studies have revealed that children with new onset epilepsy already exhibit brain structural abnormalities and cognitive impairment. How the organization of large-scale brain structural networks is altered near the time of seizure onset and whether network changes are related to cognitive performances remain unclear. Recent studies also suggest that regional brain volume covariance reflects synchronized brain developmental changes. Here, we test the hypothesis that epilepsy during early-life is associated with abnormalities in brain network organization and cognition. We used graph theory to study structural brain networks based on regional volume covariance in 39 children with new-onset seizures and 28 healthy controls. Children with new-onset epilepsy showed a suboptimal topological structural organization with enhanced network segregation and reduced global integration compared to controls. At the regional level, structural reorganization was evident with redistributed nodes from the posterior to more anterior head regions. The epileptic brain network was more vulnerable to targeted but not random attacks. Finally, a subgroup of children with epilepsy, namely those with lower IQ and poorer executive function, had a reduced balance between network segregation and integration. Taken together, the findings suggest that the neurodevelopmental impact of new onset childhood epilepsies alters large-scale brain networks, resulting in greater vulnerability to network failure and cognitive impairment. PMID:24453089
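The segregation and integration measures used in such graph-theoretical analyses can be computed directly from an adjacency matrix. A self-contained sketch on a toy network (two connected triangle "modules", not the volume-covariance networks of the study):

```python
import numpy as np
from collections import deque

def clustering_coefficient(A):
    """Mean local clustering: fraction of each node's neighbor pairs
    that are themselves connected (a segregation measure)."""
    n = len(A)
    cc = []
    for i in range(n):
        nb = np.flatnonzero(A[i])
        k = len(nb)
        if k < 2:
            cc.append(0.0)
            continue
        links = A[np.ix_(nb, nb)].sum() / 2          # edges among neighbors
        cc.append(links / (k * (k - 1) / 2))
    return float(np.mean(cc))

def characteristic_path_length(A):
    """Mean shortest-path length over connected node pairs
    (an integration measure), via breadth-first search."""
    n = len(A)
    total, pairs = 0, 0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for t in range(s + 1, n):
            if dist[t] > 0:
                total += dist[t]
                pairs += 1
    return total / pairs

# Two triangles (0-1-2 and 3-4-5) joined by the bridge edge 2-3.
A = np.zeros((6, 6), dtype=int)
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1

print(round(clustering_coefficient(A), 3),
      round(characteristic_path_length(A), 3))  # → 0.778 1.8
```

A "reduced balance between segregation and integration", as reported for the lower-IQ subgroup, corresponds to these two quantities shifting in opposite directions relative to matched random networks.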

  15. Neurodevelopmental alterations of large-scale structural networks in children with new-onset epilepsy.

    PubMed

    Bonilha, Leonardo; Tabesh, Ali; Dabbs, Kevin; Hsu, David A; Stafstrom, Carl E; Hermann, Bruce P; Lin, Jack J

    2014-08-01

    Recent neuroimaging and behavioral studies have revealed that children with new onset epilepsy already exhibit brain structural abnormalities and cognitive impairment. How the organization of large-scale brain structural networks is altered near the time of seizure onset and whether network changes are related to cognitive performances remain unclear. Recent studies also suggest that regional brain volume covariance reflects synchronized brain developmental changes. Here, we test the hypothesis that epilepsy during early-life is associated with abnormalities in brain network organization and cognition. We used graph theory to study structural brain networks based on regional volume covariance in 39 children with new-onset seizures and 28 healthy controls. Children with new-onset epilepsy showed a suboptimal topological structural organization with enhanced network segregation and reduced global integration compared with controls. At the regional level, structural reorganization was evident with redistributed nodes from the posterior to more anterior head regions. The epileptic brain network was more vulnerable to targeted but not random attacks. Finally, a subgroup of children with epilepsy, namely those with lower IQ and poorer executive function, had a reduced balance between network segregation and integration. Taken together, the findings suggest that the neurodevelopmental impact of new onset childhood epilepsies alters large-scale brain networks, resulting in greater vulnerability to network failure and cognitive impairment. Copyright © 2014 Wiley Periodicals, Inc.

  16. A high-speed, tunable silicon photonic ring modulator integrated with ultra-efficient active wavelength control.

    PubMed

    Zheng, Xuezhe; Chang, Eric; Amberg, Philip; Shubin, Ivan; Lexau, Jon; Liu, Frankie; Thacker, Hiren; Djordjevic, Stevan S; Lin, Shiyun; Luo, Ying; Yao, Jin; Lee, Jin-Hyoung; Raj, Kannan; Ho, Ron; Cunningham, John E; Krishnamoorthy, Ashok V

    2014-05-19

    We report the first complete 10G silicon photonic ring modulator with an integrated ultra-efficient CMOS driver and closed-loop wavelength control. A selective substrate removal technique was used to improve the ring tuning efficiency. Limited by the thermal tuner driver output power, a maximum open-loop tuning range of about 4.5 nm was measured with about 14 mW of total tuning power, including the heater driver circuit power consumption. Stable wavelength locking was achieved with a low-power mixed-signal closed-loop wavelength controller. An active wavelength tracking range of > 500 GHz was demonstrated with a controller energy cost of only 20 fJ/bit.

  17. Scaling of graphene integrated circuits.

    PubMed

    Bianchi, Massimiliano; Guerriero, Erica; Fiocco, Marco; Alberti, Ruggero; Polloni, Laura; Behnam, Ashkan; Carrion, Enrique A; Pop, Eric; Sordan, Roman

    2015-05-07

    The influence of transistor size reduction (scaling) on the speed of realistic multi-stage integrated circuits (ICs) represents the main performance metric of a given transistor technology. Despite extensive interest in graphene electronics, scaling efforts have so far focused on individual transistors rather than multi-stage ICs. Here we study the scaling of graphene ICs based on transistors from 3.3 to 0.5 μm gate lengths and with different channel widths, access lengths, and lead thicknesses. The shortest gate delay of 31 ps per stage was obtained in sub-micron graphene ring oscillators (ROs) oscillating at 4.3 GHz, which is the highest oscillation frequency obtained in any strictly low-dimensional material to date. We also derived the fundamental Johnson limit, showing that scaled graphene ICs could be used at high frequencies in applications with small voltage swing.

  18. Spatiotemporal property and predictability of large-scale human mobility

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin

    2018-04-01

    Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied due to their application potential in human behavior prediction and recommendation, and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and a high potential for prediction. Furthermore, a scale-free mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
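The power-law dwelling-time analysis relies on fitting power-law exponents to empirical data. A minimal sketch (synthetic single-segment data, not the mobility data set) using inverse-transform sampling and the standard maximum-likelihood estimator:

```python
import numpy as np

# Sample dwelling times from P(t) ~ t^(-alpha) for t >= t_min via the
# inverse CDF t = t_min * (1 - u)^(-1/(alpha - 1)), then recover the
# exponent with the maximum-likelihood (Hill) estimator:
#   alpha_hat = 1 + n / sum(log(t_i / t_min))
rng = np.random.default_rng(42)
alpha, t_min, n = 2.5, 1.0, 200_000

u = rng.random(n)
t = t_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

alpha_hat = 1.0 + n / np.log(t / t_min).sum()
print(f"true alpha = {alpha}, estimated = {alpha_hat:.3f}")
```

For a two-segment distribution like the one reported, the same estimator would be applied piecewise on either side of the crossover scale.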

  19. Wedge measures parallax separations...on large-scale 70-mm

    Treesearch

    Steven L. Wert; Richard J. Myhre

    1967-01-01

    A new parallax wedge (range: 1.5 to 2 inches) has been designed for use with large-scale 70-mm aerial photographs. The narrow separation of the wedge allows the user to measure the small parallax separations that are characteristic of large-scale photographs.

  20. Scaling relationships of channel networks at large scales: Examples from two large-magnitude watersheds in Brittany, France

    NASA Astrophysics Data System (ADS)

    Crave, A.; Davy, P.

    1997-01-01

    We present a statistical analysis of two watersheds in French Brittany whose drainage areas are about 10,000 and 2000 km2. The channel system was analysed from the digitised blue lines of the 1:100,000 map and from a 250-m DEM. Link lengths follow an exponential distribution, consistent with the Markovian model of channel branching proposed by Smart (1968). The departure from the exponential distribution for small lengths, which has been extensively discussed before, results from a statistical effect due to the finite number of channels and junctions. The Strahler topology applied to channels defines a self-similar organisation whose similarity dimension is about 1.7, which is clearly smaller than the value of 2 expected for a random organisation. The similarity dimension is consistent with an independent measurement of the Horton ratios of stream numbers and lengths. The variables defined by an upstream integral (drainage area, mainstream length, upstream length) follow power-law distributions limited at large scales by a finite size effect due to the finite area of the watersheds. A special emphasis is given to the exponent of the drainage area, a_A, which has been previously discussed in the context of different aggregation models relevant to channel network growth. We show that a_A is consistent with 4/3, a value that was obtained and analytically demonstrated from directed random walk aggregating models inspired by the model of Scheidegger (1967). The drainage density and mainstream length present no simple scaling with area, except at large areas where they tend to trivial values: constant density and square root of drainage area, respectively. These asymptotic limits necessarily imply that the space dimension of channel networks is 2, equal to the embedding space. The limits are reached for drainage areas larger than 100 km2. For smaller areas, the asymptotic limit represents either a lower bound (drainage density) or an upper bound (mainstream length) of the
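The similarity dimension mentioned above follows from the Horton ratios as D = ln(R_B)/ln(R_L), where R_B is the bifurcation ratio of stream numbers and R_L the stream-length ratio. A quick check with illustrative ratio values (hypothetical, chosen to reproduce D ≈ 1.7; not the measured Brittany ratios):

```python
import math

# Similarity dimension of a Strahler-ordered channel network:
#   D = ln(R_B) / ln(R_L)
R_B, R_L = 4.0, 2.26   # hypothetical Horton ratios for illustration
D = math.log(R_B) / math.log(R_L)
print(f"similarity dimension D = {D:.2f}")  # → 1.70
```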

  1. Feasibility analysis of large length-scale thermocapillary flow experiment for the International Space Station

    NASA Astrophysics Data System (ADS)

    Alberts, Samantha J.

    The investigation of microgravity fluid dynamics emerged out of necessity with the advent of space exploration. In particular, capillary research took a leap forward in the 1960s with regard to liquid settling and interfacial dynamics. Due to inherent temperature variations in large spacecraft liquid systems, such as fuel tanks, forces develop on gas-liquid interfaces which induce thermocapillary flows. To date, thermocapillary flows have been studied in small, idealized research geometries, usually under terrestrial conditions. The 1 to 3 m lengths in current and future large tanks and hardware are designed based on hardware rather than research, which leaves spaceflight systems designers without the technological tools to effectively create safe and efficient designs. This thesis focused on the design and feasibility of a large length-scale thermocapillary flow experiment, which utilizes temperature variations to drive a flow. The design of a helical channel geometry ranging from 1 to 2.5 m in length permits a large length-scale thermocapillary flow experiment to fit in a seemingly small International Space Station (ISS) facility such as the Fluids Integrated Rack (FIR). An initial investigation determined that the proposed experiment produced measurable data while adhering to the FIR facility limitations. The computational portion of this thesis focused on the investigation of functional geometries of fuel tanks and depots using Surface Evolver. This work outlines the design of a large length-scale thermocapillary flow experiment for the ISS FIR. The results from this work improve the understanding of thermocapillary flows and thus improve technological tools for predicting heat and mass transfer in large length-scale thermocapillary flows. Without the tools to understand the thermocapillary flows in these systems, engineers are forced to design larger, heavier vehicles to assure safety and mission success.

  2. A Web-based Distributed Voluntary Computing Platform for Large Scale Hydrological Computations

    NASA Astrophysics Data System (ADS)

    Demir, I.; Agliamzanov, R.

    2014-12-01

    Distributed volunteer computing can enable researchers and scientists to form large parallel computing environments that utilize the computing power of the millions of computers on the Internet, and use them towards running large-scale environmental simulations and models to serve the common good of local communities and the world. Recent developments in web technologies and standards allow client-side scripting languages to run at speeds close to native applications and to utilize the power of Graphics Processing Units (GPUs). Using a client-side scripting language like JavaScript, we have developed an open distributed computing framework that makes it easy for researchers to write their own hydrologic models and run them on volunteer computers. Users can easily enable their websites so that visitors can volunteer their computing resources to run advanced hydrological models and simulations. Using a web-based system allows users to start volunteering their computational resources within seconds, without installing any software. The framework distributes the model simulation to thousands of nodes in small spatial and computational sizes. A relational database system is utilized for managing data connections and queue management for the distributed computing nodes. In this paper, we present a web-based distributed volunteer computing platform to enable large-scale hydrological simulations and model runs in an open and integrated environment.

  3. Vertical integration from the large Hilbert space

    NASA Astrophysics Data System (ADS)

    Erler, Theodore; Konopka, Sebastian

    2017-12-01

    We develop an alternative description of the procedure of vertical integration based on the observation that amplitudes can be written in BRST exact form in the large Hilbert space. We relate this approach to the description of vertical integration given by Sen and Witten.

  4. Large-Scale NASA Science Applications on the Columbia Supercluster

    NASA Technical Reports Server (NTRS)

    Brooks, Walter

    2005-01-01

    Columbia, NASA's newest 61 teraflops supercomputer that became operational late last year, is a highly integrated Altix cluster of 10,240 processors, and was named to honor the crew of the Space Shuttle lost in early 2003. Constructed in just four months, Columbia increased NASA's computing capability ten-fold, and revitalized the Agency's high-end computing efforts. Significant cutting-edge science and engineering simulations in the areas of space and Earth sciences, as well as aeronautics and space operations, are already occurring on this largest operational Linux supercomputer, demonstrating its capacity and capability to accelerate NASA's space exploration vision. The presentation will describe how an integrated environment consisting not only of next-generation systems, but also modeling and simulation, high-speed networking, parallel performance optimization, and advanced data analysis and visualization, is being used to reduce design cycle time, accelerate scientific discovery, conduct parametric analysis of multiple scenarios, and enhance safety during the life cycle of NASA missions. The talk will conclude by discussing how NAS partnered with various NASA centers, other government agencies, computer industry, and academia, to create a national resource in large-scale modeling and simulation.

  5. Auxiliary basis expansions for large-scale electronic structure calculations

    PubMed Central

    Jung, Yousung; Sodt, Alex; Gill, Peter M. W.; Head-Gordon, Martin

    2005-01-01

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems. PMID:15845767

  6. Auxiliary basis expansions for large-scale electronic structure calculations.

    PubMed

    Jung, Yousung; Sodt, Alex; Gill, Peter M W; Head-Gordon, Martin

    2005-05-10

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.

  7. Grid-Scale Energy Storage Demonstration of Ancillary Services Using the UltraBattery Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seasholtz, Jeff

    2015-08-20

    The collaboration described in this document is being done as part of a cooperative research agreement under the Department of Energy’s Smart Grid Demonstration Program. This document represents the Final Technical Performance Report, from July 2012 through April 2015, for the East Penn Manufacturing Smart Grid Program demonstration project. This Smart Grid Demonstration project demonstrates Distributed Energy Storage for Grid Support, in particular the economic and technical viability of a grid-scale, advanced energy storage system using UltraBattery® technology for frequency regulation ancillary services and demand management services. This project entailed the construction of a dedicated facility on the East Penn campus in Lyon Station, PA that is being used as a working demonstration to provide regulation ancillary services to PJM and demand management services to Metropolitan Edison (Met-Ed).

  8. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schanen, Michel; Marin, Oana; Zhang, Hong

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at a large scale of 50k+ cores.
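The disk/memory interplay of two-level checkpointing can be sketched in a few lines (a toy schedule with a fixed disk stride, not Nek5000's tuned binomial implementation; a binomial schedule would additionally bound the memory slots within each segment):

```python
# Two-level adjoint checkpointing sketch: sparse "disk" checkpoints bound
# recomputation, and within each disk segment every state is recomputed
# into "memory" before the reverse sweep visits it.
def adjoint_with_checkpoints(n_steps, disk_stride):
    disk = {}                      # step -> state (cheap but slow tier)
    state = 0                      # toy state: number of steps taken
    recomputed = 0

    # Forward sweep: store only every disk_stride-th state.
    for step in range(n_steps):
        if step % disk_stride == 0:
            disk[step] = state
        state += 1                 # stand-in for one PDE time step

    # Reverse sweep: replay each segment forward from its disk checkpoint,
    # then consume it backward for the adjoint.
    adjoint_visits = []
    for seg_start in sorted(disk, reverse=True):
        seg_end = min(seg_start + disk_stride, n_steps)
        memory = []                # fast tier holding the replayed segment
        s = disk[seg_start]
        for _ in range(seg_start, seg_end):
            memory.append(s)
            s += 1
            recomputed += 1
        for step in reversed(range(seg_start, seg_end)):
            adjoint_visits.append(step)   # adjoint uses memory[step - seg_start]
    return adjoint_visits, recomputed

visits, recomputed = adjoint_with_checkpoints(n_steps=10, disk_stride=4)
print(visits)       # steps visited in exact reverse order
print(recomputed)   # each step recomputed exactly once here
```

With this single-replay schedule each forward step is recomputed once; the paper's contribution is making the disk tier asynchronous and bandwidth-aware while a binomial schedule manages the memory tier.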

  9. Planarized thick copper gate polycrystalline silicon thin film transistors for ultra-large AMOLED displays

    NASA Astrophysics Data System (ADS)

    Yun, Seung Jae; Lee, Yong Woo; Son, Se Wan; Byun, Chang Woo; Reddy, A. Mallikarjuna; Joo, Seung Ki

    2012-08-01

    A planarized thick copper (Cu) gate low temperature polycrystalline silicon (LTPS) thin film transistor (TFT) is fabricated for ultra-large active-matrix organic light-emitting diode (AMOLED) displays. We introduce a damascene and chemical mechanical polishing process to embed a planarized Cu gate of 500 nm thickness into a trench, a Si3N4/SiO2 multilayer gate insulator to prevent the Cu gate from diffusing into the silicon (Si) layer at 550°C, and metal-induced lateral crystallization (MILC) technology to crystallize the amorphous Si layer. The poly-Si TFT with a planarized thick Cu gate exhibits a field effect mobility of 5 cm2/Vs, a threshold voltage of -9 V, and a subthreshold swing (S) of 1.4 V/dec.

  10. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency managed through a decade marked by a rapid expansion of funds and manpower in the first half and an almost equally rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  11. Diagenetic Changes in Macro- to Nano-Scale Porosity in the St. Peter Sandstone: An (Ultra) Small Angle Neutron Scattering and Backscattered Electron Imaging Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anovitz, Lawrence; Cole, David; Rother, Gernot

    2013-01-01

    Small- and Ultra-Small Angle Neutron Scattering (SANS and USANS) provide powerful tools for quantitative analysis of porous rocks, yielding bulk statistical information over a wide range of length scales. This study utilized (U)SANS to characterize shallowly buried quartz arenites from the St. Peter Sandstone. Backscattered electron imaging was also used to extend the data to larger scales. These samples contain significant volumes of large-scale porosity, modified by quartz overgrowths, and neutron scattering results show significant sub-micron porosity. While previous scattering data from sandstones suggest scattering is dominated by surface fractal behavior over many orders of magnitude, careful analysis of our data shows both fractal and pseudo-fractal behavior. The scattering curves are composed of subtle steps, modeled as polydispersed assemblages of pores with log-normal distributions. However, in some samples an additional surface-fractal overprint is present, while in others there is no such structure, and scattering can be explained by summation of non-fractal structures. Combined with our work on other rock-types, these data suggest that microporosity is more prevalent, and may play a much more important role than previously thought in fluid/rock interactions.

  12. US National Large-scale City Orthoimage Standard Initiative

    USGS Publications Warehouse

    Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.

    2003-01-01

    The early procedures and algorithms for national digital orthophoto generation in the National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (derived in the early 1920's), the quarter-quadrangle-centered format (3.75 minutes of longitude and latitude in geographic extent), 1:40,000 aerial photographs, and 2.5D digital elevation models. However, large-scale city orthophotos produced with these early procedures have disclosed many shortcomings, e.g., ghost images, occlusions, and shadows. Thus, providing the technical base (algorithms, procedures) and experience needed for large-scale city digital orthophoto creation is essential for the upcoming national large-scale digital orthophoto deployment and the revision of the Standards for National Large-scale City Digital Orthophoto in the NDOP. This paper reports our initial research results as follows: (1) high-precision 3D city DSM generation through LIDAR data processing; (2) spatial object/feature extraction using surface material information and high-accuracy 3D DSM data; (3) 3D city model development; (4) algorithm development for generation of DTM-based and DBM-based orthophotos; (5) true orthophoto generation by merging DBM-based and DTM-based orthophotos; and (6) automatic mosaicking by optimizing and combining imagery from many perspectives.

  13. Large-scale structure in superfluid Chaplygin gas cosmology

    NASA Astrophysics Data System (ADS)

    Yang, Rongjia

    2014-03-01

    We investigate the growth of the large-scale structure in the superfluid Chaplygin gas (SCG) model. Both linear and nonlinear growth, such as σ8 and the skewness S3, are discussed. We find the growth factor of SCG reduces to the Einstein-de Sitter case at early times while it differs from the cosmological constant model (ΛCDM) case in the large a limit. We also find there will be more structure growth on large scales in the SCG scenario than in ΛCDM, and that the variations of σ8 and S3 between SCG and ΛCDM cannot be discriminated.

  14. Robust decentralized hybrid adaptive output feedback fuzzy control for a class of large-scale MIMO nonlinear systems and its application to AHS.

    PubMed

    Huang, Yi-Shao; Liu, Wei-Ping; Wu, Min; Wang, Zheng-Wu

    2014-09-01

    This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. Then, the design of the hybrid adaptive fuzzy controller is extended to address a general large-scale uncertain nonlinear system. It is shown that the resulting closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The advantages of our scheme are demonstrated by simulations. Copyright © 2014. Published by Elsevier Ltd.

  15. Geospatial Optimization of Siting Large-Scale Solar Projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macknick, Jordan; Quinby, Ted; Caulfield, Emmet

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.

  16. Critical Issues in Large-Scale Assessment: A Resource Guide.

    ERIC Educational Resources Information Center

    Redfield, Doris

    The purpose of this document is to provide practical guidance and support for the design, development, and implementation of large-scale assessment systems that are grounded in research and best practice. Information is included about existing large-scale testing efforts, including national testing programs, state testing programs, and…

  17. High-Performance Monitoring Architecture for Large-Scale Distributed Systems Using Event Filtering

    NASA Technical Reports Server (NTRS)

    Maly, K.

    1998-01-01

    Monitoring is an essential process to observe and improve the reliability and the performance of large-scale distributed (LSD) systems. In an LSD environment, a large number of events are generated by the system components during execution or interaction with external objects (e.g. users or processes). Monitoring such events is necessary for observing the run-time behavior of LSD systems and providing status information required for debugging, tuning and managing such applications. However, correlated events are generated concurrently and may be distributed across various locations in the application environment, which complicates the management decision process and thereby makes monitoring LSD systems an intricate task. We propose a scalable high-performance monitoring architecture for LSD systems to detect and classify interesting local and global events and disseminate the monitoring information to the corresponding endpoint management applications, such as debugging and reactive control tools, to improve the application performance and reliability. A large volume of events may be generated due to the extensive demands of the monitoring applications and the high interaction of LSD systems. The monitoring architecture employs a high-performance event filtering mechanism to efficiently process the large volume of event traffic generated by LSD systems and minimize the intrusiveness of the monitoring process by reducing the event traffic flow in the system and distributing the monitoring computation. Our architecture also supports dynamic and flexible reconfiguration of the monitoring mechanism via its instrumentation and subscription components. As a case study, we show how our monitoring architecture can be utilized to improve the reliability and the performance of the Interactive Remote Instruction (IRI) system which is a large-scale distributed system for collaborative distance learning. The filtering mechanism represents an intrinsic component integrated
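The subscription-driven event filtering described above can be sketched generically as a predicate-based dispatcher that drops uninteresting events near the source (class and field names here are illustrative assumptions, not the IRI implementation):

```python
class EventFilter:
    """Route monitoring events only to subscribers whose predicate
    matches, so uninteresting events are filtered out instead of
    flooding the endpoint management applications."""

    def __init__(self):
        self._subs = []  # list of (predicate, handler) pairs

    def subscribe(self, predicate, handler):
        """Register interest in events satisfying `predicate`."""
        self._subs.append((predicate, handler))

    def publish(self, event):
        """Deliver `event` to every matching subscriber."""
        for predicate, handler in self._subs:
            if predicate(event):
                handler(event)

f = EventFilter()
alerts = []
f.subscribe(lambda e: e.get("severity", 0) >= 3, alerts.append)
f.publish({"src": "node-1", "severity": 1})   # filtered out
f.publish({"src": "node-2", "severity": 4})   # delivered
print(len(alerts))  # → 1
```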

  18. Large scale structure formation of the normal branch in the DGP brane world model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Yong-Seon

    2008-06-15

    In this paper, we study the large scale structure formation of the normal branch in the DGP model (Dvali, Gabadadze, and Porrati brane world model) by applying the scaling method developed by Sawicki, Song, and Hu for solving the coupled perturbed equations of motion of on-brane and off-brane. There is a detectable departure of perturbed gravitational potential from the cold dark matter model with vacuum energy even at the minimal deviation of the effective equation of state w{sub eff} below -1. The modified perturbed gravitational potential weakens the integrated Sachs-Wolfe effect which is strengthened in the self-accelerating branch DGP model. Additionally, we discuss the validity of the scaling solution in the de Sitter limit at late times.

  19. A continental-scale hydrology and water quality model for Europe: Calibration and uncertainty of a high-resolution large-scale SWAT model

    NASA Astrophysics Data System (ADS)

    Abbaspour, K. C.; Rouholahnejad, E.; Vaghefi, S.; Srinivasan, R.; Yang, H.; Kløve, B.

    2015-05-01

    A combination of driving forces is increasing pressure on local, national, and regional water supplies needed for irrigation, energy production, industrial uses, domestic purposes, and the environment. In many parts of Europe groundwater quantity, and in particular quality, have come under severe degradation and water levels have decreased, resulting in negative environmental impacts. Rapid improvements in the economies of the eastern European bloc of countries and uncertainties with regard to freshwater availability create challenges for water managers. At the same time, climate change adds a new level of uncertainty with regard to freshwater supplies. In this research we build and calibrate an integrated hydrological model of Europe using the Soil and Water Assessment Tool (SWAT) program. Different components of water resources are simulated and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at subbasin level with monthly time intervals. Leaching of nitrate into groundwater is also simulated at a finer spatial level (HRU). The use of large-scale, high-resolution water resources models enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation. In this article we discuss issues with data availability, calibration of large-scale distributed models, and outline procedures for model calibration and uncertainty analysis. The calibrated model and results provide information support to the European Water Framework Directive and lay the basis for further assessment of the impact of climate change on water availability and quality. The approach and methods developed are general and can be applied to any large region around the world.

  20. State of the Art in Large-Scale Soil Moisture Monitoring

    NASA Technical Reports Server (NTRS)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.

    2013-01-01

    Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  1. Techniques for Large-Scale Bacterial Genome Manipulation and Characterization of the Mutants with Respect to In Silico Metabolic Reconstructions.

    PubMed

    diCenzo, George C; Finan, Turlough M

    2018-01-01

    The rate at which all genes within a bacterial genome can be identified far exceeds the ability to characterize these genes. To assist in associating genes with cellular functions, a large-scale bacterial genome deletion approach can be employed to rapidly screen tens to thousands of genes for desired phenotypes. Here, we provide a detailed protocol for the generation of deletions of large segments of bacterial genomes that relies on the activity of a site-specific recombinase. In this procedure, two recombinase recognition target sequences are introduced into known positions of a bacterial genome through single cross-over plasmid integration. Subsequent expression of the site-specific recombinase mediates recombination between the two target sequences, resulting in the excision of the intervening region and its loss from the genome. We further illustrate how this deletion system can be readily adapted to function as a large-scale in vivo cloning procedure, in which the region excised from the genome is captured as a replicative plasmid. We next provide a procedure for the metabolic analysis of bacterial large-scale genome deletion mutants using the Biolog Phenotype MicroArray™ system. Finally, a pipeline is described, and a sample Matlab script is provided, for the integration of the obtained data with a draft metabolic reconstruction for the refinement of the reactions and gene-protein-reaction relationships in a metabolic reconstruction.

  2. Nonlinear Generation of shear flows and large scale magnetic fields by small scale turbulence in the ionosphere

    NASA Astrophysics Data System (ADS)

    Aburjania, G.

    2009-04-01

    EGU2009-233. Nonlinear generation of shear flows and large-scale magnetic fields by small-scale turbulence in the ionosphere, by G. Aburjania. Contact: George Aburjania, g.aburjania@gmail.com, aburj@mymail.ge

  3. HD-MTL: Hierarchical Deep Multi-Task Learning for Large-Scale Visual Recognition.

    PubMed

    Fan, Jianping; Zhao, Tianyi; Kuang, Zhenzhong; Zheng, Yu; Zhang, Ji; Yu, Jun; Peng, Jinye

    2017-02-09

    In this paper, a hierarchical deep multi-task learning (HD-MTL) algorithm is developed to support large-scale visual recognition (e.g., recognizing thousands or even tens of thousands of atomic object classes automatically). First, multiple sets of multi-level deep features are extracted from different layers of deep convolutional neural networks (deep CNNs), and they are used to achieve more effective accomplishment of the coarse-to-fine tasks for hierarchical visual recognition. A visual tree is then learned by assigning the visually-similar atomic object classes with similar learning complexities into the same group, which can provide a good environment for determining the interrelated learning tasks automatically. By leveraging the inter-task relatedness (inter-class similarities) to learn more discriminative group-specific deep representations, our deep multi-task learning algorithm can train more discriminative node classifiers for distinguishing the visually-similar atomic object classes effectively. Our hierarchical deep multi-task learning (HD-MTL) algorithm can integrate two discriminative regularization terms to control the inter-level error propagation effectively, and it can provide an end-to-end approach for jointly learning more representative deep CNNs (for image representation) and more discriminative tree classifiers (for large-scale visual recognition) and updating them simultaneously. Our incremental deep learning algorithms can effectively adapt both the deep CNNs and the tree classifier to the new training images and the new object classes. Our experimental results have demonstrated that our HD-MTL algorithm can achieve very competitive results on improving the accuracy rates for large-scale visual recognition.

  4. Temporal integration and 1/f power scaling in a circuit model of cerebellar interneurons.

    PubMed

    Maex, Reinoud; Gutkin, Boris

    2017-07-01

    Inhibitory interneurons interconnected via electrical and chemical (GABA A receptor) synapses form extensive circuits in several brain regions. They are thought to be involved in timing and synchronization through fast feedforward control of principal neurons. Theoretical studies have shown, however, that whereas self-inhibition does indeed reduce response duration, lateral inhibition, in contrast, may generate slow response components through a process of gradual disinhibition. Here we simulated a circuit of interneurons (stellate and basket cells) of the molecular layer of the cerebellar cortex and observed circuit time constants that could rise, depending on parameter values, to >1 s. The integration time scaled both with the strength of inhibition, vanishing completely when inhibition was blocked, and with the average connection distance, which determined the balance between lateral and self-inhibition. Electrical synapses could further enhance the integration time by limiting heterogeneity among the interneurons and by introducing a slow capacitive current. The model can explain several observations, such as the slow time course of OFF-beam inhibition, the phase lag of interneurons during vestibular rotation, or the phase lead of Purkinje cells. Interestingly, the interneuron spike trains displayed power that scaled approximately as 1/f at low frequencies. In conclusion, stellate and basket cells in cerebellar cortex, and interneuron circuits in general, may not only provide fast inhibition to principal cells but also act as temporal integrators that build a very short-term memory. NEW & NOTEWORTHY The most common function attributed to inhibitory interneurons is feedforward control of principal neurons. In many brain regions, however, the interneurons are densely interconnected via both chemical and electrical synapses but the function of this coupling is largely unknown. Based on large-scale simulations of an interneuron circuit of cerebellar cortex, we

  5. Scale-adaptive compressive tracking with feature integration

    NASA Astrophysics Data System (ADS)

    Liu, Wei; Li, Jicheng; Chen, Xiao; Li, Shuxin

    2016-05-01

    Numerous tracking-by-detection methods have been proposed for robust visual tracking, among which compressive tracking (CT) has obtained some promising results. A scale-adaptive CT method based on multifeature integration is presented to improve the robustness and accuracy of CT. We introduce a keypoint-based model to achieve accurate scale estimation, which can additionally give a prior location of the target. Furthermore, by the high efficiency of a data-independent random projection matrix, multiple features are integrated into an effective appearance model to construct the naïve Bayes classifier. Finally, an adaptive update scheme is proposed to update the classifier conservatively. Experiments on various challenging sequences demonstrate substantial improvements by our proposed tracker over CT and other state-of-the-art trackers in terms of dealing with scale variation, abrupt motion, deformation, and illumination changes.
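The data-independent random projection at the heart of CT-style feature compression can be sketched as a very sparse random matrix in the Achlioptas/Li style (the dimensions, sparsity parameter, and function name here are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

def sparse_random_projection(dim_in, dim_out, s=3, seed=None):
    """Data-independent very sparse random projection matrix:
    entries are +/-sqrt(s) with probability 1/(2s) each and 0
    otherwise, so most of the matrix is zero and projection is
    cheap. A generic sketch of the kind of measurement matrix
    compressive tracking relies on."""
    rng = np.random.default_rng(seed)
    return rng.choice(
        [np.sqrt(s), 0.0, -np.sqrt(s)],
        size=(dim_out, dim_in),
        p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)],
    )

R = sparse_random_projection(10_000, 50, seed=0)   # compress 10k-dim features to 50
x = np.random.default_rng(1).normal(size=10_000)   # a high-dimensional feature vector
z = R @ x                                          # 50-dim compressed representation
print(z.shape)  # → (50,)
```

Because R is fixed once and never learned from data, the same compressed feature space can be reused across frames, which is what makes the appearance-model update cheap.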

  6. A large-scale integrated karst-vegetation recharge model to understand the impact of climate and land cover change

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Hartmann, Andreas; Pianosi, Francesca; Wagener, Thorsten

    2017-04-01

    Karst aquifers are an important source of drinking water in many regions of the world, but their resources are likely to be affected by changes in climate and land cover. Karst areas are highly permeable and produce large amounts of groundwater recharge, while surface runoff is typically negligible. As a result, recharge in karst systems may be particularly sensitive to environmental changes compared to other less permeable systems. However, current large-scale hydrological models poorly represent karst specificities. They tend to provide an erroneous water balance and to underestimate groundwater recharge over karst areas. A better understanding of karst hydrology and estimating karst groundwater resources at a large-scale is therefore needed for guiding water management in a changing world. The first objective of the present study is to introduce explicit vegetation processes into a previously developed karst recharge model (VarKarst) to better estimate evapotranspiration losses depending on the land cover characteristics. The novelty of the approach for large-scale modelling lies in the assessment of model output uncertainty, and parameter sensitivity to avoid over-parameterisation. We find that the model so modified is able to produce simulations consistent with observations of evapotranspiration and soil moisture at Fluxnet sites located in carbonate rock areas. Secondly, we aim to determine the model sensitivities to climate and land cover characteristics, and to assess the relative influence of changes in climate and land cover on aquifer recharge. We perform virtual experiments using synthetic climate inputs, and varying the value of land cover parameters. In this way, we can control for variations in climate input characteristics (e.g. precipitation intensity, precipitation frequency) and vegetation characteristics (e.g. canopy water storage capacity, rooting depth), and we can isolate the effect that each of these quantities has on recharge. 
Our results

  7. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    NASA Astrophysics Data System (ADS)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.

  8. Large-Scale Coherent Vortex Formation in Two-Dimensional Turbulence

    NASA Astrophysics Data System (ADS)

    Orlov, A. V.; Brazhnikov, M. Yu.; Levchenko, A. A.

    2018-04-01

    The evolution of a vortex flow excited by an electromagnetic technique in a thin layer of a conducting liquid was studied experimentally. Small-scale vortices, excited at the pumping scale, merge with time due to the nonlinear interaction and produce large-scale structures—the inverse energy cascade is formed. The dependence of the energy spectrum in the developed inverse cascade is well described by the Kraichnan law k^(-5/3). At large scales, the inverse cascade is limited by cell sizes, and a large-scale coherent vortex flow is formed, which occupies almost the entire area of the experimental cell. The radial profile of the azimuthal velocity of the coherent vortex immediately after the pumping was switched off has been established for the first time. Inside the vortex core, the azimuthal velocity grows linearly along a radius and reaches a constant value outside the core, which agrees well with the theoretical prediction.

  9. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
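The robust-regression idea can be illustrated with a Huber-type iteratively reweighted least squares fit (a self-contained numpy sketch; a real neuroimaging pipeline would use a vetted library implementation, and the tuning constant 1.345 is the standard Huber default, not a value taken from the study):

```python
import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50):
    """Huber-type robust linear regression via iteratively
    reweighted least squares: residuals beyond `delta` robust-scale
    units are down-weighted rather than squared, so gross outliers
    cannot dominate the fit."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]                       # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # MAD scale
        u = np.abs(r) / (delta * scale)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)                          # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), np.linspace(0, 1, 200)])
y = X @ np.array([1.0, 2.0]) + 0.05 * rng.normal(size=200)
y[:10] += 25.0                                   # 5% gross outliers
ols = np.linalg.lstsq(X, y, rcond=None)[0]
rob = huber_irls(X, y)
print(ols[1], rob[1])  # the robust slope stays near the true value 2.0
```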

  10. Intensive agriculture erodes β-diversity at large scales.

    PubMed

    Karp, Daniel S; Rominger, Andrew J; Zook, Jim; Ranganathan, Jai; Ehrlich, Paul R; Daily, Gretchen C

    2012-09-01

    Biodiversity is declining from unprecedented land conversions that replace diverse, low-intensity agriculture with vast expanses under homogeneous, intensive production. Despite documented losses of species richness, consequences for β-diversity, changes in community composition between sites, are largely unknown, especially in the tropics. Using a 10-year data set on Costa Rican birds, we find that low-intensity agriculture sustained β-diversity across large scales on a par with forest. In high-intensity agriculture, low local (α) diversity inflated β-diversity as a statistical artefact. Therefore, at small spatial scales, intensive agriculture appeared to retain β-diversity. Unlike in forest or low-intensity systems, however, high-intensity agriculture also homogenised vegetation structure over large distances, thereby decoupling the fundamental ecological pattern of bird communities changing with geographical distance. This ~40% decline in species turnover indicates a significant decline in β-diversity at large spatial scales. These findings point the way towards multi-functional agricultural systems that maintain agricultural productivity while simultaneously conserving biodiversity. © 2012 Blackwell Publishing Ltd/CNRS.
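The α/β/γ partition underlying the abstract's statistical-artefact point can be made concrete with Whittaker's multiplicative β (regional richness over mean local richness), which shows mechanically how low local α inflates β. A minimal sketch, not the authors' estimator:

```python
def whittaker_beta(site_species):
    """Whittaker's multiplicative beta diversity: gamma (regional
    species richness) divided by mean alpha (local richness per
    site). `site_species` is a list of per-site species sets."""
    gamma = len(set().union(*site_species))
    mean_alpha = sum(len(s) for s in site_species) / len(site_species)
    return gamma / mean_alpha

# Two sites sharing no species: maximal turnover for two sites.
print(whittaker_beta([{"a", "b"}, {"c", "d"}]))      # → 2.0
# Same regional pool split over species-poor sites: beta is inflated
# purely because local (alpha) richness dropped.
print(whittaker_beta([{"a"}, {"c"}, {"b"}, {"d"}]))  # → 4.0
```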

  11. Implementation of the Large-Scale Operations Management Test in the State of Washington.

    DTIC Science & Technology

    1982-12-01

    During FY 79, the U.S. Army Engineer Waterways Experiment Station (WES), Vicksburg, Miss., completed the first phase of its 3-year Large-Scale Operations Management Test (LSOMT). The LSOMT was designed to develop an operational plan to identify methodologies that can be implemented by the U.S. Army Engineer District, Seattle (NPS), to prevent the exotic aquatic macrophyte Eurasian watermilfoil (Myriophyllum spicatum L.) from reaching problem-level proportions in water bodies in the state of Washington. The WES developed specific plans as integral elements

  12. Tools for Large-Scale Mobile Malware Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bierma, Michael

Analyzing mobile applications for malicious behavior is an important area of research, and is made difficult, in part, by the increasingly large number of applications available for the major operating systems. There are currently over 1.2 million apps available in both the Google Play and Apple App stores (the respective official marketplaces for the Android and iOS operating systems) [1, 2]. Our research provides two large-scale analysis tools to aid in the detection and analysis of mobile malware. The first tool we present, Andlantis, is a scalable dynamic analysis system capable of processing over 3000 Android applications per hour. Traditionally, Android dynamic analysis techniques have been relatively limited in scale due to the computational resources required to emulate the full Android system to achieve accurate execution. Andlantis is the most scalable Android dynamic analysis framework to date, and is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples through the system, and provide examples of malware analysis performed with the resulting data. While techniques exist to perform static analysis on a large number of applications, large-scale analysis of iOS applications has been relatively small scale due to the closed nature of the iOS ecosystem, and the difficulty of acquiring applications for analysis. The second tool we present, iClone, addresses the challenges associated with iOS research in order to detect application clones within a dataset of over 20,000 iOS applications.

  13. Role of point defects and HfO2/TiN interface stoichiometry on effective work function modulation in ultra-scaled complementary metal-oxide-semiconductor devices

    NASA Astrophysics Data System (ADS)

    Pandey, R. K.; Sathiyanarayanan, Rajesh; Kwon, Unoh; Narayanan, Vijay; Murali, K. V. R. M.

    2013-07-01

We investigate the physical properties of a portion of the gate stack of an ultra-scaled complementary metal-oxide-semiconductor (CMOS) device. The effects of point defects, such as oxygen vacancies and oxygen and aluminum interstitials at the HfO2/TiN interface, on the effective work function of TiN are explored using density functional theory. We compute the diffusion barriers of such point defects in bulk TiN and across the HfO2/TiN interface. Diffusion of these point defects across the HfO2/TiN interface occurs during the device integration process. This results in variation of the effective work function and hence in threshold voltage variation in the devices. Further, we simulate the effects of varying the HfO2/TiN interface stoichiometry on the effective work function modulation in these extremely scaled CMOS devices. Our results show that an interface rich in nitrogen gives a higher effective work function, whereas an interface rich in titanium gives a lower effective work function, compared to a stoichiometric HfO2/TiN interface. This theoretical prediction is confirmed by experiment, demonstrating over 700 meV of modulation in the effective work function.

  14. Stability of large-scale systems.

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.

    1972-01-01

    The purpose of this paper is to present the results obtained in stability study of large-scale systems based upon the comparison principle and vector Liapunov functions. The exposition is essentially self-contained, with emphasis on recent innovations which utilize explicit information about the system structure. This provides a natural foundation for the stability theory of dynamic systems under structural perturbations.

  15. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

Given the complexity of real-time water conditions, real-time simulation of large-scale floods is very important for flood-prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Comparisons with results from MIKE21 show the strong performance of the proposed model.
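The ingredients named in this abstract, a Godunov-type finite volume update with a wet/dry depth threshold, can be illustrated with a minimal 1D sketch. This is my own simplification (Rusanov flux, first-order update, assumed `DRY` threshold and dam-break setup), not the authors' 2D model:

```python
import numpy as np

g, DRY = 9.81, 1e-8   # gravity; wet/dry depth threshold (assumed value)

def flux(h, hu):
    """Physical flux of the 1D shallow water equations."""
    u = hu / h if h > DRY else 0.0
    return np.array([hu, hu * u + 0.5 * g * h * h])

def rusanov(hL, huL, hR, huR):
    """Rusanov (local Lax-Friedrichs) numerical flux at a cell interface."""
    uL = huL / hL if hL > DRY else 0.0
    uR = huR / hR if hR > DRY else 0.0
    s = max(abs(uL) + np.sqrt(g * hL), abs(uR) + np.sqrt(g * hR))
    return 0.5 * (flux(hL, huL) + flux(hR, huR)) \
         - 0.5 * s * np.array([hR - hL, huR - huL])

def run_dam_break(n=200, L=10.0, t_end=0.3):
    """First-order Godunov-type update for a dam break on a wet bed."""
    dx = L / n
    x = (np.arange(n) + 0.5) * dx
    h = np.where(x < 0.5 * L, 2.0, 1.0)   # 2 m behind the dam, 1 m ahead
    hu = np.zeros(n)
    t = 0.0
    while t < t_end:
        c = np.abs(hu / np.maximum(h, DRY)) + np.sqrt(g * h)
        dt = min(0.4 * dx / c.max(), t_end - t)       # CFL-limited step
        he = np.concatenate(([h[0]], h, [h[-1]]))     # reflective ghost cells
        hue = np.concatenate(([-hu[0]], hu, [-hu[-1]]))
        F = np.array([rusanov(he[i], hue[i], he[i + 1], hue[i + 1])
                      for i in range(n + 1)])
        h = h - dt / dx * (F[1:, 0] - F[:-1, 0])
        hu = hu - dt / dx * (F[1:, 1] - F[:-1, 1])
        t += dt
    return h, hu, dx

h, hu, dx = run_dam_break()
mass = h.sum() * dx   # finite-volume update conserves water volume exactly
```

With reflective ghost cells the interface mass flux at the walls vanishes, so the total water volume (initially 15.0 m² here) is conserved to rounding error, the basic robustness property the abstract emphasizes.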

  16. Analysis on the Critical Rainfall Value For Predicting Large Scale Landslides Caused by Heavy Rainfall In Taiwan.

    NASA Astrophysics Data System (ADS)

    Tsai, Kuang-Jung; Chiang, Jie-Lun; Lee, Ming-Hsi; Chen, Yie-Ruey

    2017-04-01

The accumulated rainfall brought by Typhoon Morakot in August 2009 exceeded 2,900 mm within three consecutive days. Very serious landslides and sediment-related disasters were induced by this heavy rainfall event. The satellite image analysis project conducted by the Soil and Water Conservation Bureau after the Morakot event identified more than 10,904 landslide sites with a total sliding area of 18,113 ha. At the same time, all severe sediment-related disaster areas were characterized by disaster type, scale, topography, major bedrock formations and geologic structures during the period of extremely heavy rainfall events in southern Taiwan. Characteristics and mechanisms of large-scale landslides were collected on the basis of field investigation integrated with GPS/GIS/RS techniques. In order to decrease the risk of large-scale landslides on slope land, a strategy for slope-land conservation and a critical rainfall database should be set up and executed as soon as possible. Meanwhile, establishing a critical rainfall value for predicting large-scale landslides induced by heavy rainfall has become an important issue of serious concern to the government and the people of Taiwan. The mechanisms of large-scale landslides, rainfall frequency analysis, sediment budget estimation and river hydraulic analysis under extreme climate change during the past 10 years are seriously considered and recognized as required issues by this

  17. Integration and Evaluation of Microscope Adapter for the Ultra-Compact Imaging Spectrometer

    NASA Astrophysics Data System (ADS)

    Smith-Dryden, S. D.; Blaney, D. L.; Van Gorp, B.; Mouroulis, P.; Green, R. O.; Sellar, R. G.; Rodriguez, J.; Wilson, D.

    2012-12-01

Petrologic, diagenetic, impact and weathering processes often happen at scales that are not observable from orbit. On Earth, one of the most common things a scientist does when trying to understand detailed geologic history is to create a thin section of the rock and study its mineralogy and texture. Unfortunately, sample preparation and manipulation with advanced instrumentation may be a resource-intensive proposition (e.g. time, power, complexity) in-situ, so obtaining detailed mineralogy and textural information without sample preparation is highly desirable. Visible to short-wavelength microimaging spectroscopy has the potential to provide this information without sample preparation. Wavelengths between 500-2600 nm are sensitive to a wide range of minerals including mafic minerals, carbonates, clays, and sulfates. The Ultra-Compact Imaging Spectrometer (UCIS) has been developed as a low mass (<2.0 kg), low power (~5.2 W) Offner spectrometer, ideal for use on a Mars rover or other in-situ platforms. The UCIS instrument with its HgCdTe detector provides a spectral resolution of 10 nm over a range of 500-2600 nm, in addition to a 30 degree field of view and a 1.35 mrad instantaneous field of view (Van Gorp et al., 2011). To explore applications of this technology for microscale investigations, an f/10 microimaging adapter has been designed and integrated to allow imaging of samples. The spatial coverage of the instrument is 2.56 cm with sampling of 67.5 microns (380 spatial pixels). Because the adapter is slow relative to the UCIS detector, strong sample illumination is required. Light from the lamp box is routed through optical fiber bundles and directed onto the sample at a high angle of incidence to provide dark-field imaging. For data collection, a mineral sample is mounted on the microscope adapter and scanned by the detector as it is moved horizontally via actuator. Data from the instrument is stored as an xyz cube end product with one spectral and two spatial

  18. Large scale anomalies in the microwave background: causation and correlation.

    PubMed

    Aslanyan, Grigor; Easther, Richard

    2013-12-27

    Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.

  19. Effects of biasing on the galaxy power spectrum at large scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beltran Jimenez, Jose; Departamento de Fisica Teorica, Universidad Complutense de Madrid, 28040, Madrid; Durrer, Ruth

    2011-05-15

In this paper we study the effect of biasing on the power spectrum at large scales. We show that even though nonlinear biasing does introduce a white noise contribution on large scales, the P(k) ∝ k^n behavior of the matter power spectrum on large scales may still be visible and above the white noise for about one decade. We show that the Kaiser biasing scheme, which leads to linear bias of the correlation function on large scales, also generates a linear bias of the power spectrum on rather small scales. This is a consequence of the divergence on small scales of the pure Harrison-Zeldovich spectrum. However, biasing becomes k dependent if we damp the underlying power spectrum on small scales. We also discuss the effect of biasing on the baryon acoustic oscillations.
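The abstract's central point, that nonlinear biasing adds a constant white-noise term on top of the power-law spectrum, can be sketched numerically. All amplitudes below are illustrative assumptions of mine, not values from the paper; the point is only that a Harrison-Zeldovich-like signal P_m(k) = A·k rises above a constant noise floor N beyond the crossover scale k_eq = N/(b²A):

```python
import numpy as np

# Illustrative amplitudes (assumptions, not the paper's fitted values):
# matter spectrum P_m(k) = A*k (Harrison-Zeldovich, n = 1),
# linear bias b, constant white-noise term N from nonlinear biasing.
A, b, N = 1.0e4, 1.2, 1.0e2

k = np.logspace(-4, 0, 500)      # wavenumbers (arbitrary units)
P_m = A * k
P_g = b**2 * P_m + N             # galaxy spectrum: biased signal + noise

# The k^1 behaviour is visible where the biased signal exceeds the noise:
k_eq = N / (b**2 * A)            # crossover scale
visible = k[b**2 * P_m > N]      # wavenumbers where the power law dominates
```

Varying the ratio N/(b²A) moves the crossover scale and thus the width (in decades) of the window where the underlying power law remains observable.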

  20. Tradeoffs and synergies between biofuel production and large-scale solar infrastructure in deserts

    NASA Astrophysics Data System (ADS)

    Ravi, S.; Lobell, D. B.; Field, C. B.

    2012-12-01

Solar energy installations in deserts are on the rise, fueled by technological advances and policy changes. Deserts, with their combination of high solar radiation and large areas unusable for crop production, are ideal locations for large-scale solar installations. For efficient power generation, solar infrastructure requires large amounts of water for operation (mostly for cleaning panels and dust suppression), leading to significant moisture additions to desert soil. A pertinent question is how to use these moisture inputs for sustainable agriculture/biofuel production. We investigated the water requirements of large solar infrastructure in North American deserts and explored the possibilities for integrating biofuel production with solar infrastructure. In co-located systems, the possible decline in yields due to shading by solar panels may be offset by the benefits of periodic water addition to biofuel crops, simpler dust management and more efficient power generation in solar installations, and decreased impacts on natural habitats and scarce resources in deserts. In particular, we evaluated the potential to integrate solar infrastructure with biomass feedstocks that grow in arid and semi-arid lands (Agave spp.), which are found to produce high yields with minimal water inputs. To this end, we conducted a detailed life cycle analysis of these coupled agave biofuel - solar energy systems to explore the tradeoffs and synergies in the context of energy input-output, water use and carbon emissions.

  1. Improving the integration of recreation management with management of other natural resources by applying concepts of scale from ecology.

    PubMed

    Morse, Wayde C; Hall, Troy E; Kruger, Linda E

    2009-03-01

    In this article, we examine how issues of scale affect the integration of recreation management with the management of other natural resources on public lands. We present two theories used to address scale issues in ecology and explore how they can improve the two most widely applied recreation-planning frameworks. The theory of patch dynamics and hierarchy theory are applied to the recreation opportunity spectrum (ROS) and the limits of acceptable change (LAC) recreation-planning frameworks. These frameworks have been widely adopted internationally, and improving their ability to integrate with other aspects of natural resource management has significant social and conservation implications. We propose that incorporating ecologic criteria and scale concepts into these recreation-planning frameworks will improve the foundation for integrated land management by resolving issues of incongruent boundaries, mismatched scales, and multiple-scale analysis. Specifically, we argue that whereas the spatially explicit process of the ROS facilitates integrated decision making, its lack of ecologic criteria, broad extent, and large patch size decrease its usefulness for integration at finer scales. The LAC provides explicit considerations for weighing competing values, but measurement of recreation disturbances within an LAC analysis is often done at too fine a grain and at too narrow an extent for integration with other recreation and resource concerns. We suggest that planners should perform analysis at multiple scales when making management decisions that involve trade-offs among competing values. The United States Forest Service is used as an example to discuss how resource-management agencies can improve this integration.

  2. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Kil-Byoung; Bellan, Paul M.

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10{sup 6} frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  3. Ultra-low switching energy and scaling in electric-field-controlled nanoscale magnetic tunnel junctions with high resistance-area product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grezes, C.; Alzate, J. G.; Cai, X.

    2016-01-04

We report electric-field-induced switching with write energies down to 6 fJ/bit for switching times of 0.5 ns, in nanoscale perpendicular magnetic tunnel junctions (MTJs) with high resistance-area product and diameters down to 50 nm. The ultra-low switching energy is made possible by a thick MgO barrier that ensures negligible spin-transfer torque contributions, along with a reduction of the Ohmic dissipation. We find that the switching voltage and time are insensitive to the junction diameter for high-resistance MTJs, a result accounted for by a macrospin model of purely voltage-induced switching. The measured performance enables integration with same-size CMOS transistors in compact memory and logic integrated circuits.

  4. LASSIE: simulating large-scale models of biochemical systems on GPUs.

    PubMed

    Tangherloni, Andrea; Nobile, Marco S; Besozzi, Daniela; Mauri, Giancarlo; Cazzaniga, Paolo

    2017-05-10

Mathematical modeling and in silico analysis are widely acknowledged as complementary tools to biological laboratory methods for achieving a thorough understanding of emergent behaviors of cellular processes in both physiological and perturbed conditions. However, the simulation of large-scale models, consisting of hundreds or thousands of reactions and molecular species, can rapidly overtake the capabilities of Central Processing Units (CPUs). The purpose of this work is to exploit alternative high-performance computing solutions, such as Graphics Processing Units (GPUs), to allow the investigation of these models at reduced computational costs. LASSIE is a "black-box" GPU-accelerated deterministic simulator, specifically designed for large-scale models and not requiring any expertise in mathematical modeling, simulation algorithms or GPU programming. Given a reaction-based model of a cellular process, LASSIE automatically generates the corresponding system of Ordinary Differential Equations (ODEs), assuming mass-action kinetics. The numerical solution of the ODEs is obtained by automatically switching between the Runge-Kutta-Fehlberg method in the absence of stiffness and the Backward Differentiation Formulae of first order in the presence of stiffness. The computational performance of LASSIE is assessed using a set of randomly generated synthetic reaction-based models of increasing size, ranging from 64 to 8192 reactions and species, and compared to a CPU implementation of the LSODA numerical integration algorithm. LASSIE adopts a novel fine-grained parallelization strategy to distribute across the GPU cores all the calculations required to solve the system of ODEs. By virtue of this implementation, LASSIE achieves up to 92× speed-up with respect to LSODA, reducing the running time from approximately 1 month down to 8 h to simulate models consisting of, for instance, four thousand reactions and species. Notably, thanks to its smaller memory footprint, LASSIE
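The stiffness-aware integration strategy described above can be illustrated with the CPU baseline the authors compare against: SciPy's LSODA, which automatically switches between a non-stiff (Adams) and a stiff (BDF) method, analogous to LASSIE's Runge-Kutta-Fehlberg/BDF switching. The two-reaction mass-action network below is a minimal sketch of my own, not one of the paper's benchmark models:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical two-reaction mass-action network (not from the paper):
#   A -> B      with rate constant k1
#   B + B -> C  with rate constant k2
k1, k2 = 1.0, 0.5

def rhs(t, y):
    a, b, c = y
    v1 = k1 * a        # rate of A -> B
    v2 = k2 * b * b    # rate of B + B -> C
    return [-v1, v1 - 2.0 * v2, v2]

# LSODA switches automatically between Adams (non-stiff) and BDF (stiff).
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0],
                method="LSODA", rtol=1e-8, atol=1e-10)
a, b, c = sol.y[:, -1]
total = a + b + 2.0 * c   # conserved quantity: d(A + B + 2C)/dt = 0
```

The linear combination A + B + 2C is invariant under both reactions, which gives a quick accuracy check on the integration.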

  5. Large scale integration of CVD-graphene based NEMS with narrow distribution of resonance parameters

    NASA Astrophysics Data System (ADS)

Arjmandi-Tash, Hadi; Allain, Adrien; Han, Zheng (Vitto); Bouchiat, Vincent

    2017-06-01

We present a novel method for the fabrication of arrays of suspended micron-sized membranes based on monolayer pulsed-CVD graphene. Such devices enable efficient integration of graphene nano-electro-mechanical resonators, compatible with production at the wafer scale using standard photolithography and processing tools. As the graphene surface is continuously protected by the same polymer layer during the whole process, the suspended graphene membranes are clean and free of imperfections such as deposits, wrinkles and tears. Batch fabrication of 100 μm-long multi-connected suspended ribbons is presented. At room temperature, mechanical resonances of electrostatically-actuated devices show a narrow distribution of their characteristic parameters, with high quality factor and low effective mass and resonance frequencies, as expected for low-stress and adsorbate-free membranes. Upon cooling, a sharp increase of both resonant frequency and quality factor is observed, enabling extraction of the thermal expansion coefficient of CVD graphene. A comparison with state-of-the-art graphene NEMS is presented.

  6. Statistical Measures of Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Vogeley, Michael; Geller, Margaret; Huchra, John; Park, Changbom; Gott, J. Richard

    1993-12-01

To quantify clustering in the large-scale distribution of galaxies and to test theories for the formation of structure in the universe, we apply statistical measures to the CfA Redshift Survey. This survey is complete to m_B(0) = 15.5 over two contiguous regions which cover one-quarter of the sky and include ~11,000 galaxies. The salient features of these data are voids with diameter 30-50 h^-1 Mpc and coherent dense structures with a scale of ~100 h^-1 Mpc. Comparison with N-body simulations rules out the "standard" CDM model (Ω = 1, b = 1.5, σ_8 = 1) at the 99% confidence level because this model has insufficient power on scales λ > 30 h^-1 Mpc. An unbiased open-universe CDM model (Ωh = 0.2) and a biased CDM model with non-zero cosmological constant (Ωh = 0.24, λ_0 = 0.6) match the observed power spectrum. The amplitude of the power spectrum depends on the luminosity of galaxies in the sample; bright (L > L*) galaxies are more strongly clustered than faint galaxies. The paucity of bright galaxies in low-density regions may explain this dependence. To measure the topology of large-scale structure, we compute the genus of isodensity surfaces of the smoothed density field. On scales in the "non-linear" regime, ≤ 10 h^-1 Mpc, the high- and low-density regions are multiply-connected over a broad range of density threshold, as in a filamentary net. On smoothing scales > 10 h^-1 Mpc, the topology is consistent with statistics of a Gaussian random field. Simulations of CDM models fail to produce the observed coherence of structure on non-linear scales (>95% confidence level). The underdensity probability (the frequency of regions with density contrast δρ/ρ̄ = -0.8) depends strongly on the luminosity of galaxies; underdense regions are significantly more common (>2σ) in bright (L > L*) galaxy samples than in samples which include fainter galaxies.

  7. Cosmic strings and the large-scale structure

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Omega sub 0 = 1 and without strong environmental biasing.

  8. Coronal hole evolution by sudden large scale changes

    NASA Technical Reports Server (NTRS)

    Nolte, J. T.; Gerassimenko, M.; Krieger, A. S.; Solodyna, C. V.

    1978-01-01

    Sudden shifts in coronal-hole boundaries observed by the S-054 X-ray telescope on Skylab between May and November, 1973, within 1 day of CMP of the holes, at latitudes not exceeding 40 deg, are compared with the long-term evolution of coronal-hole area. It is found that large-scale shifts in boundary locations can account for most if not all of the evolution of coronal holes. The temporal and spatial scales of these large-scale changes imply that they are the results of a physical process occurring in the corona. It is concluded that coronal holes evolve by magnetic-field lines' opening when the holes are growing, and by fields' closing as the holes shrink.

  9. RoboPIV: how robotics enable PIV on a large industrial scale

    NASA Astrophysics Data System (ADS)

    Michaux, F.; Mattern, P.; Kallweit, S.

    2018-07-01

This work demonstrates how the interaction between particle image velocimetry (PIV) and robotics can massively increase measurement efficiency. The interdisciplinary approach is shown using the complex example of an automated, large scale, industrial environment: a typical automotive wind tunnel application. Both the high degree of flexibility in choosing the measurement region and the complete automation of stereo PIV measurements are presented. The setup consists of a combination of three robots, individually used as 6D traversing units for the laser illumination system as well as for each of the two cameras. Synchronised movements in the same reference frame are realised through a master-slave setup with a single interface to the user. By integrating the interface into the standard wind tunnel management system, a single measurement plane or a predefined sequence of several planes can be requested through a single trigger event, providing the resulting vector fields within minutes. In this paper, a brief overview of the demands of large scale industrial PIV and the existing solutions is given. Afterwards, the concept of RoboPIV is introduced as a new approach. In a first step, the usability of a selection of commercially available robot arms is analysed. The challenges of pose uncertainty and the importance of absolute accuracy are demonstrated through comparative measurements, explaining the individual pros and cons of the analysed systems. Subsequently, the advantage of integrating RoboPIV directly into the existing wind tunnel management system is shown on the basis of a typical measurement sequence. In a final step, a practical measurement procedure, including post-processing, is given using real data and results. Ultimately, the benefits of high automation are demonstrated, leading to a drastic reduction in necessary measurement time compared to non-automated systems, thus massively increasing the efficiency of PIV measurements.

  10. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 × 10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Ro_t = -0.0909 to Ro_t = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
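The static Smagorinsky closure used in this benchmark can be written down directly: the eddy viscosity is ν_t = (c_s Δ)² |S| with |S| = √(2 S_ij S_ij) built from the resolved strain rate. The 2D helper below is a minimal illustration of mine using the abstract's constant c_s = 0.1; the grid spacing and shear rate are assumed values:

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.1):
    """Static Smagorinsky eddy viscosity nu_t = (cs*Delta)^2 * |S| in 2D,
    with |S| = sqrt(2 S_ij S_ij) from the resolved strain-rate tensor."""
    S11, S22 = dudx, dvdy
    S12 = 0.5 * (dudy + dvdx)
    S_mag = np.sqrt(2.0 * (S11**2 + S22**2 + 2.0 * S12**2))
    return (cs * delta)**2 * S_mag

# Uniform shear u = (gamma*y, 0): |S| reduces to gamma (assumed test values)
gamma, delta = 10.0, 0.01
nu_t = smagorinsky_nu_t(0.0, gamma, 0.0, 0.0, delta)
```

An "over-damped" LES in the abstract's sense simply raises `cs` above 0.1, increasing ν_t everywhere and smoothing out all but the largest structures.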

  11. Large-scale microwave anisotropy from gravitating seeds

    NASA Technical Reports Server (NTRS)

    Veeraraghavan, Shoba; Stebbins, Albert

    1992-01-01

    Topological defects could have seeded primordial inhomogeneities in cosmological matter. We examine the horizon-scale matter and geometry perturbations generated by such seeds in an expanding homogeneous and isotropic universe. Evolving particle horizons generally lead to perturbations around motionless seeds, even when there are compensating initial underdensities in the matter. We describe the pattern of the resulting large angular scale microwave anisotropy.

  12. Computer Technology, Large-Scale Social Integration, and the Local Community.

    ERIC Educational Resources Information Center

    Calhoun, Craig

    1986-01-01

    A conceptual framework is proposed for studying variations in kind and extent of social integration and relatedness, such as those new communication technology may foster. Emphasis is on the contrast between direct and indirect social relationships. The framework is illustrated by consideration of potential social impacts of widespread…

  13. The Galactic Magnetic Field and Ultra-High Energy Cosmic Rays

    NASA Astrophysics Data System (ADS)

    Urban, Federico R.

The Galactic Magnetic Field is a peeving and importune screen between Ultra-High Energy Cosmic Rays and us cosmologists, engaged in the combat to unveil their properties and origin, as it deviates their paths towards the Earth in unpredictable ways. I will, in this order: briefly review the available field models on the market; explain a little trick which allows one to obtain cosmic-ray deflection variances without even knowing what the (random) GMF model is; and argue that there is a lack of anisotropy in the large-scale cosmic-ray signal, which the Galactic field can do nothing about.

  14. Preventing Large-Scale Controlled Substance Diversion From Within the Pharmacy

    PubMed Central

    Martin, Emory S.; Dzierba, Steven H.; Jones, David M.

    2013-01-01

Large-scale diversion of controlled substances (CS) from within a hospital or health system pharmacy is a rare but growing problem. It is the responsibility of pharmacy leadership to scrutinize control processes to expose weaknesses. This article reviews examples of large-scale diversion incidents and diversion techniques and provides practical strategies to stimulate enhanced CS security within the pharmacy staff. Large-scale diversion from within a pharmacy department can be averted by a pharmacist-in-charge who is informed and proactive in taking effective countermeasures. PMID:24421497

  15. Optically addressed ultra-wideband phased antenna array

    NASA Astrophysics Data System (ADS)

    Bai, Jian

Demands for high data rates and multifunctional apertures from both civilian and military users have motivated the development of ultra-wideband (UWB) electrically steered phased arrays. Meanwhile, the need for large contiguous frequency bands is pushing the operation of radio systems into the millimeter-wave (mm-wave) range. Therefore, modern radio systems require UWB performance from VHF to mm-wave. However, traditional electronic systems suffer many challenges that make achieving these requirements difficult. Several examples include: voltage-controlled oscillators (VCOs) cannot provide a tunable range of several octaves; distribution of wideband local oscillator signals undergoes high loss and dispersion through RF transmission lines; and antennas have very limited bandwidth or bulky sizes. Recently, RF photonics technology has drawn considerable attention because of its advantages over traditional systems, with the capability of offering extreme power efficiency, information capacity, frequency agility, and spatial beam diversity. A hybrid RF photonic communication system utilizing optical links and an RF transducer at the antenna potentially provides ultra-wideband data transmission, i.e., over 100 GHz. A successful implementation of such an optically addressed phased array requires addressing several key challenges. Photonic generation of an RF source with over a seven-octave bandwidth has been demonstrated in the last few years. However, one challenge which still remains is how to convey phased optical signals to downconversion modules and antennas. Therefore, a feed network with phase-sweeping capability and low excess phase noise needs to be developed. Another key challenge is to develop an ultra-wideband array antenna. Modern frontends require antennas to be compact, planar, and low-profile in addition to possessing broad bandwidth, conforming to stringent space, weight, cost, and power constraints. To address these issues, I will study broadband and miniaturization

  16. NIAC Phase I Study Final Report on Large Ultra-Lightweight Photonic Muscle Space Structures

    NASA Technical Reports Server (NTRS)

    Ritter, Joe

    2016-01-01

    way to make large inexpensive deployable mirrors where the cost is measured in millions, not billions like current efforts. For example, we seek an interim goal within 10 years of a Hubble-size (2.4 m) primary mirror weighing 1 pound at a cost of 10K in materials. Described here is a technology using thin ultra-lightweight materials whose shape can be controlled simply with a beam of light, allowing imaging with incredibly low-mass yet precisely shaped mirrors. These "Photonic Muscle" substrates will eventually make precision control of giant space apertures (mirrors) possible. OCCAM substrates make precision control of giant ultra-lightweight mirror apertures possible. This technology is poised to create a revolution in remote sensing by making large ultra-lightweight space telescopes a fiscal and material reality over the next decade.

  17. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  18. PGen: large-scale genomic variations analysis workflow and browser in SoyKB.

    PubMed

    Liu, Yang; Khan, Saad M; Wang, Juexin; Rynge, Mats; Zhang, Yuanxun; Zeng, Shuai; Chen, Shiyuan; Maldonado Dos Santos, Joao V; Valliyodan, Babu; Calyam, Prasad P; Merchant, Nirav; Nguyen, Henry T; Xu, Dong; Joshi, Trupti

    2016-10-06

    With the advances in next-generation sequencing (NGS) technology and significant reductions in sequencing costs, it is now possible to sequence large collections of germplasm in crops for detecting genome-scale genetic variations and to apply the knowledge towards improvements in traits. To efficiently facilitate large-scale NGS resequencing data analysis of genomic variations, we have developed "PGen", an integrated and optimized workflow using the Extreme Science and Engineering Discovery Environment (XSEDE) high-performance computing (HPC) virtual system, iPlant cloud data storage resources and Pegasus workflow management system (Pegasus-WMS). The workflow allows users to identify single nucleotide polymorphisms (SNPs) and insertion-deletions (indels), perform SNP annotations and conduct copy number variation analyses on multiple resequencing datasets in a user-friendly and seamless way. We have developed both a Linux version in GitHub ( https://github.com/pegasus-isi/PGen-GenomicVariations-Workflow ) and a web-based implementation of the PGen workflow integrated within the Soybean Knowledge Base (SoyKB), ( http://soykb.org/Pegasus/index.php ). Using PGen, we identified 10,218,140 single-nucleotide polymorphisms (SNPs) and 1,398,982 indels from analysis of 106 soybean lines sequenced at 15X coverage. 297,245 non-synonymous SNPs and 3330 copy number variation (CNV) regions were identified from this analysis. SNPs identified using PGen from additional soybean resequencing projects adding to 500+ soybean germplasm lines in total have been integrated. These SNPs are being utilized for trait improvement using genotype to phenotype prediction approaches developed in-house. In order to browse and access NGS data easily, we have also developed an NGS resequencing data browser ( http://soykb.org/NGS_Resequence/NGS_index.php ) within SoyKB to provide easy access to SNP and downstream analysis results for soybean researchers. 
PGen workflow has been optimized for the most
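The SNP/indel distinction that PGen reports can be illustrated with a small, hedged sketch: the code below classifies VCF records by comparing REF/ALT allele lengths, which is the standard basis for that distinction. It is not PGen code, and the VCF lines are hypothetical, not SoyKB data.

```python
# Illustrative only (not part of PGen): tally SNPs vs. indels in VCF lines
# by allele length. A variant is a SNP when both REF and ALT are single
# bases; anything longer on either side is treated as an indel here.

def classify_variant(ref, alt):
    """Return 'SNP' if both alleles are single bases, else 'indel'."""
    return "SNP" if len(ref) == 1 and len(alt) == 1 else "indel"

def tally(vcf_lines):
    counts = {"SNP": 0, "indel": 0}
    for line in vcf_lines:
        if line.startswith("#"):        # skip header lines
            continue
        fields = line.rstrip("\n").split("\t")
        ref, alts = fields[3], fields[4]
        for alt in alts.split(","):     # handle multi-allelic sites
            counts[classify_variant(ref, alt)] += 1
    return counts

demo = [                                # hypothetical VCF lines
    "##fileformat=VCFv4.2",
    "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO",
    "Gm01\t1001\t.\tA\tG\t50\tPASS\t.",
    "Gm01\t2002\t.\tAT\tA\t50\tPASS\t.",
    "Gm02\t3003\t.\tC\tT,G\t50\tPASS\t.",
]
print(tally(demo))  # {'SNP': 3, 'indel': 1}
```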

  19. Large-scale geographic variation in distribution and abundance of Australian deep-water kelp forests.

    PubMed

    Marzinelli, Ezequiel M; Williams, Stefan B; Babcock, Russell C; Barrett, Neville S; Johnson, Craig R; Jordan, Alan; Kendrick, Gary A; Pizarro, Oscar R; Smale, Dan A; Steinberg, Peter D

    2015-01-01

    Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution along the continent of these kelp forests, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia's Integrated Marine Observing System (IMOS) to survey 157,000 m2 of seabed, of which ca 13,000 m2 were used to quantify kelp covers at multiple spatial scales (10-100 m to 100-1,000 km) and depths (15-60 m) across several regions ca 2-6° latitude apart along the East and West coast of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. Maximum depth of kelp occurrence was 40-50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. These extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves.

  20. Large-Scale Geographic Variation in Distribution and Abundance of Australian Deep-Water Kelp Forests

    PubMed Central

    Marzinelli, Ezequiel M.; Williams, Stefan B.; Babcock, Russell C.; Barrett, Neville S.; Johnson, Craig R.; Jordan, Alan; Kendrick, Gary A.; Pizarro, Oscar R.; Smale, Dan A.; Steinberg, Peter D.

    2015-01-01

    Despite the significance of marine habitat-forming organisms, little is known about their large-scale distribution and abundance in deeper waters, where they are difficult to access. Such information is necessary to develop sound conservation and management strategies. Kelps are main habitat-formers in temperate reefs worldwide; however, these habitats are highly sensitive to environmental change. The kelp Ecklonia radiata is the major habitat-forming organism on subtidal reefs in temperate Australia. Here, we provide large-scale ecological data encompassing the latitudinal distribution along the continent of these kelp forests, which is a necessary first step towards quantitative inferences about the effects of climatic change and other stressors on these valuable habitats. We used the Autonomous Underwater Vehicle (AUV) facility of Australia’s Integrated Marine Observing System (IMOS) to survey 157,000 m2 of seabed, of which ca 13,000 m2 were used to quantify kelp covers at multiple spatial scales (10–100 m to 100–1,000 km) and depths (15–60 m) across several regions ca 2–6° latitude apart along the East and West coast of Australia. We investigated the large-scale geographic variation in distribution and abundance of deep-water kelp (>15 m depth) and their relationships with physical variables. Kelp cover generally increased with latitude despite great variability at smaller spatial scales. Maximum depth of kelp occurrence was 40–50 m. Kelp latitudinal distribution along the continent was most strongly related to water temperature and substratum availability. These extensive survey data, coupled with ongoing AUV missions, will allow for the detection of long-term shifts in the distribution and abundance of habitat-forming kelp and the organisms they support on a continental scale, and provide information necessary for successful implementation and management of conservation reserves. PMID:25693066

  1. Large Scale Underground Detectors in Europe

    NASA Astrophysics Data System (ADS)

    Katsanevas, S. K.

    2006-07-01

    The physics potential and the complementarity of the large-scale underground European detectors: Water Cherenkov (MEMPHYS), Liquid Argon TPC (GLACIER) and Liquid Scintillator (LENA) are presented, with emphasis on the major physics opportunities, namely proton decay, supernova detection and neutrino parameter determination using accelerator beams.

  2. Large-scale retrieval for medical image analytics: A comprehensive review.

    PubMed

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics was greatly facilitated by the explosion of digital imaging techniques, where huge amounts of medical images were produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges/opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Integration of Large-Scale Optimization and Game Theory for Sustainable Water Quality Management

    NASA Astrophysics Data System (ADS)

    Tsao, J.; Li, J.; Chou, C.; Tung, C.

    2009-12-01

    Sustainable water quality management requires total mass control of pollutant discharge, based on the principles of not exceeding the assimilative capacity of a river and of equity among generations. The stream assimilative capacity is the carrying capacity of a river for the maximum waste load without violating the water quality standard, and the spirit of total mass control is to optimize the waste load allocation among subregions. Toward the goal of sustainable watershed development, this study uses large-scale optimization theory to maximize profit, derives the marginal values of loadings as a reference for fair pricing, and then finds the equilibrium for the whole watershed through water quality trading. On the other hand, game theory plays an important role in maximizing both individual and overall profits. This study shows that a water quality trading market is viable in some situations and also leads all participants to a better outcome.

  4. Large-scale magnetic fields at high Reynolds numbers in magnetohydrodynamic simulations.

    PubMed

    Hotta, H; Rempel, M; Yokoyama, T

    2016-03-25

    The 11-year solar magnetic cycle shows a high degree of coherence in spite of the turbulent nature of the solar convection zone. It has been found in recent high-resolution magnetohydrodynamics simulations that the maintenance of a large-scale coherent magnetic field is difficult with small viscosity and magnetic diffusivity (≲10¹² square centimeters per second). We reproduced previous findings that indicate a reduction of the energy in the large-scale magnetic field for lower diffusivities and demonstrate the recovery of the global-scale magnetic field using unprecedentedly high resolution. We found an efficient small-scale dynamo that suppresses small-scale flows, which mimics the properties of large diffusivity. As a result, the global-scale magnetic field is maintained even in the regime of small diffusivities, that is, large Reynolds numbers. Copyright © 2016, American Association for the Advancement of Science.

  5. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.

  6. Quantitative Serum Nuclear Magnetic Resonance Metabolomics in Large-Scale Epidemiology: A Primer on -Omic Technologies

    PubMed Central

    Kangas, Antti J; Soininen, Pasi; Lawlor, Debbie A; Davey Smith, George; Ala-Korpela, Mika

    2017-01-01

    Abstract Detailed metabolic profiling in large-scale epidemiologic studies has uncovered novel biomarkers for cardiometabolic diseases and clarified the molecular associations of established risk factors. A quantitative metabolomics platform based on nuclear magnetic resonance spectroscopy has found widespread use, already profiling over 400,000 blood samples. Over 200 metabolic measures are quantified per sample; in addition to many biomarkers routinely used in epidemiology, the method simultaneously provides fine-grained lipoprotein subclass profiling and quantification of circulating fatty acids, amino acids, gluconeogenesis-related metabolites, and many other molecules from multiple metabolic pathways. Here we focus on applications of magnetic resonance metabolomics for quantifying circulating biomarkers in large-scale epidemiology. We highlight the molecular characterization of risk factors, use of Mendelian randomization, and the key issues of study design and analyses of metabolic profiling for epidemiology. We also detail how integration of metabolic profiling data with genetics can enhance drug development. We discuss why quantitative metabolic profiling is becoming widespread in epidemiology and biobanking. Although large-scale applications of metabolic profiling are still novel, it seems likely that comprehensive biomarker data will contribute to etiologic understanding of various diseases and abilities to predict disease risks, with the potential to translate into multiple clinical settings. PMID:29106475

  7. Effects of large scale integration of wind and solar energy in Japan

    NASA Astrophysics Data System (ADS)

    Esteban, Miguel; Zhang, Qi; Utama, Agya; Tezuka, Tetsuo; Ishihara, Keiichi

    2010-05-01

    results for the country as a whole are considered it is still substantial. The results are greatly dependent on the mix between the proposed renewables (solar and wind), and by comparing different distributions and mixes, the optimum composition for the target country can be established. The methodology proposed is able to obtain the optimum mix of solar and wind power for a given system, provided that adequate storage capacity exists to allow for excess capacity to be used at times of low electricity production (at the comparatively rare times when there is neither enough sun nor wind throughout the country). This highlights the challenges of large-scale integration of renewable technologies into the electricity grid, and the necessity to combine such a system with other renewables such as hydro or ocean energy to further even out the peaks and lows in the demand.

  8. VisIRR: A Visual Analytics System for Information Retrieval and Recommendation for Large-Scale Document Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choo, Jaegul; Kim, Hannah; Clarkson, Edward

    In this paper, we present an interactive visual information retrieval and recommendation system, called VisIRR, for large-scale document discovery. VisIRR effectively combines the paradigms of (1) a passive pull through query processes for retrieval and (2) an active push that recommends items of potential interest to users based on their preferences. Equipped with an efficient dynamic query interface against a large-scale corpus, VisIRR organizes the retrieved documents into high-level topics and visualizes them in a 2D space, representing the relationships among the topics along with their keyword summary. In addition, based on interactive personalized preference feedback with regard to documents, VisIRR provides document recommendations from the entire corpus, which are beyond the retrieved sets. Such recommended documents are visualized in the same space as the retrieved documents, so that users can seamlessly analyze both existing and newly recommended ones. This article presents novel computational methods, which make these integrated representations and fast interactions possible for a large-scale document corpus. We illustrate how the system works by providing detailed usage scenarios. Finally, we present preliminary user study results for evaluating the effectiveness of the system.

  9. VisIRR: A Visual Analytics System for Information Retrieval and Recommendation for Large-Scale Document Data

    DOE PAGES

    Choo, Jaegul; Kim, Hannah; Clarkson, Edward; ...

    2018-01-31

    In this paper, we present an interactive visual information retrieval and recommendation system, called VisIRR, for large-scale document discovery. VisIRR effectively combines the paradigms of (1) a passive pull through query processes for retrieval and (2) an active push that recommends items of potential interest to users based on their preferences. Equipped with an efficient dynamic query interface against a large-scale corpus, VisIRR organizes the retrieved documents into high-level topics and visualizes them in a 2D space, representing the relationships among the topics along with their keyword summary. In addition, based on interactive personalized preference feedback with regard to documents, VisIRR provides document recommendations from the entire corpus, which are beyond the retrieved sets. Such recommended documents are visualized in the same space as the retrieved documents, so that users can seamlessly analyze both existing and newly recommended ones. This article presents novel computational methods, which make these integrated representations and fast interactions possible for a large-scale document corpus. We illustrate how the system works by providing detailed usage scenarios. Finally, we present preliminary user study results for evaluating the effectiveness of the system.

  10. Large- to small-scale dynamo in domains of large aspect ratio: kinematic regime

    NASA Astrophysics Data System (ADS)

    Shumaylova, Valeria; Teed, Robert J.; Proctor, Michael R. E.

    2017-04-01

    The Sun's magnetic field exhibits coherence in space and time on much larger scales than the turbulent convection that ultimately powers the dynamo. In this work, we look for numerical evidence of a large-scale magnetic field as the magnetic Reynolds number, Rm, is increased. The investigation is based on the simulations of the induction equation in elongated periodic boxes. The imposed flows considered are the standard ABC flow (named after Arnold, Beltrami & Childress) with wavenumber ku = 1 (small-scale) and a modulated ABC flow with wavenumbers ku = m, 1, 1 ± m, where m is the wavenumber corresponding to the long-wavelength perturbation on the scale of the box. The critical magnetic Reynolds number R_m^{crit} decreases as the permitted scale separation in the system increases, such that R_m^{crit} ∝ [L_x/L_z]^{-1/2}. The results show that the α-effect derived from the mean-field theory ansatz is valid for a small range of Rm after which small-scale dynamo instability occurs and the mean-field approximation is no longer valid. The transition from large- to small-scale dynamo is smooth and takes place in two stages: a fast transition into a predominantly small-scale magnetic energy state and a slower transition into even smaller scales. In the range of Rm considered, the most energetic Fourier component corresponding to the structure in the long x-direction has twice the length-scale of the forcing scale. The long-wavelength perturbation imposed on the ABC flow in the modulated case is not preserved in the eigenmodes of the magnetic field.

  11. NeuroCa: integrated framework for systematic analysis of spatiotemporal neuronal activity patterns from large-scale optical recording data

    PubMed Central

    Jang, Min Jee; Nam, Yoonkey

    2015-01-01

    Abstract. Optical recording facilitates monitoring the activity of a large neural network at the cellular scale, but the analysis and interpretation of the collected data remain challenging. Here, we present a MATLAB-based toolbox, named NeuroCa, for the automated processing and quantitative analysis of large-scale calcium imaging data. Our tool includes several computational algorithms to extract the calcium spike trains of individual neurons from the calcium imaging data in an automatic fashion. Two algorithms were developed to decompose the imaging data into the activity of individual cells and subsequently detect calcium spikes from each neuronal signal. Applying our method to dense networks in dissociated cultures, we were able to obtain the calcium spike trains of ∼1000 neurons in a few minutes. Further analyses using these data permitted the quantification of neuronal responses to chemical stimuli as well as functional mapping of spatiotemporal patterns in neuronal firing within the spontaneous, synchronous activity of a large network. These results demonstrate that our method not only automates time-consuming, labor-intensive tasks in the analysis of neural data obtained using optical recording techniques but also provides a systematic way to visualize and quantify the collective dynamics of a network in terms of its cellular elements. PMID:26229973
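As a rough illustration of the per-cell processing described above (not NeuroCa's published algorithms), the sketch below extracts a spike train from a synthetic fluorescence trace by detecting rising-edge crossings of a baseline-derived threshold; the threshold rule and the 2-sigma default are assumptions for this demo.

```python
# Illustrative sketch, not NeuroCa's algorithm: detect calcium "spikes"
# as frames where a fluorescence trace first rises above
# baseline mean + n_sigma * std.

def detect_spikes(trace, n_sigma=2.0):
    """Return indices of rising-edge threshold crossings."""
    n = len(trace)
    mean = sum(trace) / n
    std = (sum((x - mean) ** 2 for x in trace) / n) ** 0.5
    thresh = mean + n_sigma * std
    spikes = []
    above = False
    for i, x in enumerate(trace):
        if x > thresh and not above:    # count rising edges only
            spikes.append(i)
        above = x > thresh
    return spikes

# Synthetic trace: flat baseline plus two calcium transients
# (fast rise, slow decay) starting at frames 5 and 14.
trace = [0.0] * 20
for start in (5, 14):
    for k, amp in enumerate((1.0, 0.8, 0.5, 0.2)):
        trace[start + k] += amp

print(detect_spikes(trace))  # [5, 14]
```

Real pipelines would first segment cells and compute ΔF/F; this sketch only shows the final spike-detection step on one trace.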

  12. Scale relativity theory and integrative systems biology: 1. Founding principles and scale laws.

    PubMed

    Auffray, Charles; Nottale, Laurent

    2008-05-01

    In these two companion papers, we provide an overview and a brief history of the multiple roots, current developments and recent advances of integrative systems biology and identify multiscale integration as its grand challenge. Then we introduce the fundamental principles and the successive steps that have been followed in the construction of the scale relativity theory, and discuss how scale laws of increasing complexity can be used to model and understand the behaviour of complex biological systems. In scale relativity theory, the geometry of space is considered to be continuous but non-differentiable, therefore fractal (i.e., explicitly scale-dependent). One writes the equations of motion in such a space as geodesics equations, under the constraint of the principle of relativity of all scales in nature. To this purpose, covariant derivatives are constructed that implement the various effects of the non-differentiable and fractal geometry. In this first review paper, the scale laws that describe the new dependence on resolutions of physical quantities are obtained as solutions of differential equations acting in the scale space. This leads to several possible levels of description for these laws, from the simplest scale invariant laws to generalized laws with variable fractal dimensions. Initial applications of these laws to the study of species evolution, embryogenesis and cell confinement are discussed.

  13. Measuring large-scale vertical motion in the atmosphere with dropsondes

    NASA Astrophysics Data System (ADS)

    Bony, Sandrine; Stevens, Bjorn

    2017-04-01

    Large-scale vertical velocity modulates important processes in the atmosphere, including the formation of clouds, and constitutes a key component of the large-scale forcing of Single-Column Model simulations and Large-Eddy Simulations. Its measurement has also been a long-standing challenge for observationalists. We will show that it is possible to measure the vertical profile of large-scale wind divergence and vertical velocity from aircraft by using dropsondes. This methodology was tested in August 2016 during the NARVAL2 campaign in the lower Atlantic trades. Results will be shown for several research flights, the robustness and uncertainty of the measurements will be assessed, and observational estimates will be compared with data from high-resolution numerical forecasts.
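The idea behind this kind of measurement can be sketched with synthetic winds rather than NARVAL2 data: sondes around a circle sample the horizontal wind, a line integral of the outward wind component gives the area-mean divergence, and vertically integrating mass continuity yields a vertical velocity. The incompressible approximation and the constant-divergence layer below are assumptions of the demo, not claims about the authors' exact procedure.

```python
# Sketch with synthetic winds: area-mean divergence from N equally
# spaced sondes on a circle via the divergence theorem,
# D = (1/A) * closed integral of (wind . outward normal) dl,
# then w from incompressible continuity, w(z) = -integral_0^z D dz'.
import math

def divergence(u, v, radius):
    """Area-mean divergence from winds (u, v) sampled on a circle."""
    n = len(u)
    total = 0.0
    for i in range(n):
        theta = 2 * math.pi * i / n
        total += u[i] * math.cos(theta) + v[i] * math.sin(theta)
    # (1 / (pi R^2)) * (2 pi R / N) * sum  ==  2 * sum / (R * N)
    return 2.0 * total / (radius * n)

# Synthetic divergent flow u = a*x, v = a*y has true divergence 2a.
a, R, N = 1e-5, 100e3, 12
u = [a * R * math.cos(2 * math.pi * i / N) for i in range(N)]
v = [a * R * math.sin(2 * math.pi * i / N) for i in range(N)]
D = divergence(u, v, R)
print(D)  # ~2e-5 per second

# With D constant over a 1 km layer, continuity gives subsidence:
w_1km = -D * 1000.0
print(w_1km)  # ~-0.02 m/s
```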

  14. Ultra-compact Marx-type high-voltage generator

    DOEpatents

    Goerz, David A.; Wilson, Michael J.

    2000-01-01

    An ultra-compact Marx-type high-voltage generator includes individual high-performance components that are closely coupled and integrated into an extremely compact assembly. In one embodiment, a repetitively-switched, ultra-compact Marx generator includes low-profile, annular-shaped, high-voltage, ceramic capacitors with contoured edges and coplanar extended electrodes used for primary energy storage; low-profile, low-inductance, high-voltage, pressurized gas switches with compact gas envelopes suitably designed to be integrated with the annular capacitors; feed-forward, high-voltage, ceramic capacitors attached across successive switch-capacitor-switch stages to couple the necessary energy forward to sufficiently overvoltage the spark gap of the next in-line switch; optimally shaped electrodes and insulator surfaces to reduce electric field stresses in the weakest regions where dissimilar materials meet, and to spread the fields more evenly throughout the dielectric materials, allowing them to operate closer to their intrinsic breakdown levels; and manufacturing and assembly methods that integrate the capacitors and switches into stages that can be arranged into a low-profile Marx generator.

  15. Bayesian hierarchical model for large-scale covariance matrix estimation.

    PubMed

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
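To see why regularization helps when the matrix is large relative to the sample size, here is a minimal sketch of a fixed-weight shrinkage toward a diagonal target. This is not the paper's hierarchical Bayes model; `delta` is a hypothetical tuning constant chosen for illustration.

```python
# Illustrative only: shrink a sample covariance matrix toward its
# diagonal. Blending the noisy sample estimate with a structured
# target trades variance for bias, curbing "overfitting".

def sample_cov(data):
    """Sample covariance; rows are observations, columns variables."""
    n, p = len(data), len(data[0])
    means = [sum(col) / n for col in zip(*data)]
    cov = [[0.0] * p for _ in range(p)]
    for row in data:
        d = [x - m for x, m in zip(row, means)]
        for i in range(p):
            for j in range(p):
                cov[i][j] += d[i] * d[j] / (n - 1)
    return cov

def shrink(cov, delta=0.5):
    """Return (1 - delta) * cov + delta * diag(cov)."""
    p = len(cov)
    return [[cov[i][j] if i == j else (1 - delta) * cov[i][j]
             for j in range(p)] for i in range(p)]

data = [[2.0, 1.0, 0.0],   # made-up observations
        [1.0, 0.0, 1.0],
        [0.0, 1.0, 2.0],
        [1.0, 2.0, 1.0]]
S = sample_cov(data)
S_shrunk = shrink(S, delta=0.5)
# Diagonal entries are preserved; off-diagonal entries are halved.
print(S[0][2], S_shrunk[0][2])
```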

  16. Quantifying streamflow change caused by forest disturbance at a large spatial scale: A single watershed study

    NASA Astrophysics Data System (ADS)

    Wei, Xiaohua; Zhang, Mingfang

    2010-12-01

    Climatic variability and forest disturbance are commonly recognized as two major drivers influencing streamflow change in large-scale forested watersheds. The greatest challenge in evaluating quantitative hydrological effects of forest disturbance is the removal of climatic effect on hydrology. In this paper, a method was designed to quantify respective contributions of large-scale forest disturbance and climatic variability on streamflow using the Willow River watershed (2860 km2) located in the central part of British Columbia, Canada. Long-term (>50 years) data on hydrology, climate, and timber harvesting history represented by equivalent clear-cutting area (ECA) were available to discern climatic and forestry influences on streamflow by three steps. First, effective precipitation, an integrated climatic index, was generated by subtracting evapotranspiration from precipitation. Second, modified double mass curves were developed by plotting accumulated annual streamflow against annual effective precipitation, which presented a much clearer picture of the cumulative effects of forest disturbance on streamflow following removal of climatic influence. The average annual streamflow changes that were attributed to forest disturbances and climatic variability were then estimated to be +58.7 and -72.4 mm, respectively. The positive (increasing) and negative (decreasing) values in streamflow change indicated opposite change directions, which suggest an offsetting effect between forest disturbance and climatic variability in the study watershed. Finally, a multivariate Autoregressive Integrated Moving Average (ARIMA) model was generated to establish quantitative relationships between accumulated annual streamflow deviation attributed to forest disturbances and annual ECA. The model was then used to project streamflow change under various timber harvesting scenarios. The methodology can be effectively applied to any large-scale single watershed where long-term data (>50
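The first two steps of the method above can be sketched with made-up numbers (the values below are hypothetical, not Willow River data): compute annual effective precipitation as precipitation minus evapotranspiration, then accumulate it alongside annual streamflow to form the modified double mass curve. A break in the slope of that curve after large-scale harvesting would indicate a streamflow change not explained by climate.

```python
# Sketch of steps 1-2 (hypothetical annual values in mm/yr):
# effective precipitation = precipitation - evapotranspiration,
# then cumulative sums for the modified double mass curve axes.
from itertools import accumulate

precip = [620.0, 580.0, 700.0, 640.0, 610.0]   # hypothetical
evap   = [380.0, 360.0, 410.0, 390.0, 370.0]   # hypothetical
flow   = [230.0, 215.0, 280.0, 255.0, 245.0]   # hypothetical

eff_precip = [p - e for p, e in zip(precip, evap)]   # step 1
cum_eff    = list(accumulate(eff_precip))            # step 2: x-axis
cum_flow   = list(accumulate(flow))                  #         y-axis

for x, y in zip(cum_eff, cum_flow):
    print(f"{x:7.1f}  {y:7.1f}")
```

In the paper's third step, deviations of accumulated streamflow from this curve are related to equivalent clear-cutting area with an ARIMA model; that part is omitted here.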

  17. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  18. Research on unit commitment with large-scale wind power connected power system

    NASA Astrophysics Data System (ADS)

    Jiao, Ran; Zhang, Baoqun; Chi, Zhongjun; Gong, Cheng; Ma, Longfei; Yang, Bing

    2017-01-01

    Large-scale integration of wind power generators into the power grid brings severe challenges to power system economic dispatch due to the stochastic volatility of wind. Unit commitment including wind farms is analyzed in terms of both modeling and solution methods. After classifying existing formulations according to their objective functions and constraints, their structures and characteristics are summarized. Finally, open issues and possible directions for future research and development are discussed in light of the requirements of the electricity market, energy-saving generation dispatch and the smart grid, providing a reference for researchers and practitioners in this field.

  19. Large Scale Survey Data in Career Development Research

    ERIC Educational Resources Information Center

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  20. Overview of ERA Integrated Technology Demonstration (ITD) 51A Ultra-High Bypass (UHB) Integration for Hybrid Wing Body (HWB)

    NASA Technical Reports Server (NTRS)

    Flamm, Jeffrey D.; James, Kevin D.; Bonet, John T.

    2016-01-01

    The NASA Environmentally Responsible Aviation (ERA) Project was a five-year project broken into two phases. In phase II, high N+2 Technology Readiness Level demonstrations were grouped into Integrated Technology Demonstrations (ITDs). This paper describes the work done on ITD-51A: the Vehicle Systems Integration, Engine Airframe Integration Demonstration. Refinement of a Hybrid Wing Body (HWB) aircraft from the possible candidates developed in ERA Phase I was continued. Scaled powered and unpowered wind-tunnel testing, with and without acoustics, in the NASA LaRC 14- by 22-Foot Subsonic Tunnel, the NASA ARC Unitary Plan Wind Tunnel, and the 40- by 80-foot test section of the National Full-Scale Aerodynamics Complex (NFAC), in conjunction with very closely coupled Computational Fluid Dynamics, was used to demonstrate the fuel burn and acoustic milestone targets of the ERA Project.

  1. Large-Scale Weather Disturbances in Mars’ Southern Extratropics

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2015-11-01

    Between late autumn and early spring, Mars' middle and high latitudes support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward-traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation, acting as agents in the transport of heat, momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water vapor, and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adopt Mars' full topography, compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes, indicate that Mars' transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre

  2. Large-scale Electrophysiology: Acquisition, Compression, Encryption, and Storage of Big Data

    PubMed Central

    Brinkmann, Benjamin H.; Bower, Mark R.; Stengel, Keith A.; Worrell, Gregory A.; Stead, Matt

    2009-01-01

    The use of large-scale electrophysiology to obtain high spatiotemporal resolution brain recordings (>100 channels) capable of probing the range of neural activity from local field potential oscillations to single-neuron action potentials presents new challenges for data acquisition, storage, and analysis. Our group is currently performing continuous, long-term electrophysiological recordings in human subjects undergoing evaluation for epilepsy surgery using hybrid intracranial electrodes composed of up to 320 micro- and clinical macroelectrode arrays. DC-capable amplifiers, sampling at 32 kHz per channel with 18 bits of A/D resolution, are capable of resolving extracellular voltages spanning single-neuron action potentials, high-frequency oscillations, and high-amplitude ultraslow activity, but this approach generates 3 terabytes of data per day (at 4 bytes per sample) using current data formats. Data compression can provide several practical benefits, but only if data can be compressed and appended to files in real time in a format that allows random access to data segments of varying size. Here we describe a state-of-the-art, scalable electrophysiology platform designed for acquisition, compression, encryption, and storage of large-scale data. Data are stored in a file format that incorporates lossless data compression using range-encoded differences, a 32-bit cyclic redundancy checksum to ensure data integrity, and 128-bit encryption for protection of patient information. PMID:19427545
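    The quoted data rate, and the storage ideas named in the abstract (difference coding before compression, plus an integrity checksum), can be checked with a short sketch. The sample values and the use of zlib's CRC-32 are illustrative stand-ins, not the platform's actual file format.

    ```python
    import zlib

    # Back-of-envelope for the abstract's data rate: 320 channels sampled at
    # 32 kHz, stored at 4 bytes per sample, for one day (86,400 s).
    bytes_per_day = 320 * 32_000 * 4 * 86_400
    print(bytes_per_day / 1e12)   # ~3.5, the order of the ~3 TB/day quoted

    # Toy delta (difference) encoding: store the first sample, then differences.
    samples = [1000, 1004, 1003, 1010, 1008, 1008, 1012]
    deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

    # Decoding is a running sum, so the scheme is lossless.
    restored, acc = [], 0
    for d in deltas:
        acc += d
        restored.append(acc)

    # A 32-bit checksum over the encoded payload for integrity checking.
    payload = bytes(str(deltas), "ascii")
    checksum = zlib.crc32(payload)
    ```

    Differences cluster near zero for slowly varying signals, which is what makes the subsequent entropy coding (range coding in the real format) effective.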

  3. Uncovering Nature’s 100 TeV Particle Accelerators in the Large-Scale Jets of Quasars

    NASA Astrophysics Data System (ADS)

    Georganopoulos, Markos; Meyer, Eileen; Sparks, William B.; Perlman, Eric S.; Van Der Marel, Roeland P.; Anderson, Jay; Sohn, S. Tony; Biretta, John A.; Norman, Colin Arthur; Chiaberge, Marco

    2016-04-01

    Since the first jet X-ray detections sixteen years ago, the adopted paradigm for the X-ray emission has been the IC/CMB model, which requires highly relativistic (Lorentz factors of 10-20), extremely powerful (sometimes super-Eddington) kpc-scale jets. I will discuss recently obtained strong evidence, from two different avenues, IR to optical polarimetry for PKS 1136-135 and gamma-ray observations for 3C 273 and PKS 0637-752, ruling out the IC/CMB model. Our work constrains the jet Lorentz factors to less than ~a few, and leaves as the only reasonable alternative synchrotron emission from ~100 TeV jet electrons, accelerated hundreds of kpc away from the central engine. This refutes over a decade of work on the jet X-ray emission mechanism and overall energetics and, if confirmed in more sources, will constitute a paradigm shift in our understanding of powerful large-scale jets and their role in the universe. Two important findings emerging from our work will also be discussed: (i) the solid angle-integrated luminosity of the large-scale jet is comparable to that of the jet core, contrary to the current belief that the core is the dominant jet radiative outlet, and (ii) the large-scale jets are the main source of TeV photons in the universe, something potentially important, as TeV photons have been suggested to heat up the intergalactic medium and reduce the number of dwarf galaxies formed.

  4. Preliminary Analysis of the 30-m UltraBoom Flight Test

    NASA Technical Reports Server (NTRS)

    Agnes, Gregory S.; Abelson, Robert D.; Miyake, Robert; Lin, John K. H.; Welsh, Joe; Watson, Judith J.

    2005-01-01

    Future NASA missions require long, ultra-lightweight booms to enable solar sails, large sunshields, and other gossamer-type spacecraft structures. The space experiment discussed in this paper will flight-validate the non-traditional ultra-lightweight rigidizable, inflatable, isogrid structure utilizing graphite shape memory polymer (GR/SMP) called UltraBoom (TM). The focus of this paper is the analysis of the 3-m ground test article. The primary objective of the mission is to show that a combination of ground testing and analysis can predict the on-orbit performance of an ultra-lightweight boom that is scalable, predictable, and thermomechanically stable.

  5. How Large Scales Flows May Influence Solar Activity

    NASA Technical Reports Server (NTRS)

    Hathaway, D. H.

    2004-01-01

    Large scale flows within the solar convection zone are the primary drivers of the Sun's magnetic activity cycle and play important roles in shaping the Sun's magnetic field. Differential rotation amplifies the magnetic field through its shearing action and converts poloidal field into toroidal field. Poleward meridional flow near the surface carries magnetic flux that reverses the magnetic poles at about the time of solar maximum. The deeper, equatorward meridional flow can carry magnetic flux back toward the lower latitudes where it erupts through the surface to form tilted active regions that convert toroidal fields into oppositely directed poloidal fields. These axisymmetric flows are themselves driven by large scale convective motions. The effects of the Sun's rotation on convection produce velocity correlations that can maintain both the differential rotation and the meridional circulation. These convective motions can also influence solar activity directly by shaping the magnetic field pattern. While considerable theoretical advances have been made toward understanding these large scale flows, outstanding problems in matching theory to observations still remain.

  6. Optical interconnect for large-scale systems

    NASA Astrophysics Data System (ADS)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.
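    As a rough illustration of the kind of scaling comparison the paper reviews, the following sketch tabulates link count and hop-count diameter for three textbook topologies at N endpoints. The formulas are standard results for these topologies, not figures from the paper.

    ```python
    import math

    # Link count and worst-case hop distance (diameter) for N endpoints.
    def ring(n):
        return {"links": n, "diameter": n // 2}

    def torus_2d(n):          # assumes n is a perfect square (side x side torus)
        side = math.isqrt(n)
        return {"links": 2 * n, "diameter": 2 * (side // 2)}

    def hypercube(n):         # assumes n is a power of two (dimension d = log2 n)
        d = n.bit_length() - 1
        return {"links": n * d // 2, "diameter": d}

    for n in (64, 1024, 65536):
        print(n, ring(n), torus_2d(n), hypercube(n))
    ```

    The contrast motivates the paper's point: a ring's diameter grows linearly with N, a 2-D torus's with the square root of N, and a hypercube's only logarithmically, at the cost of many more links per node.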

  7. Contribution of peculiar shear motions to large-scale structure

    NASA Technical Reports Server (NTRS)

    Mueler, Hans-Reinhard; Treumann, Rudolf A.

    1994-01-01

    Self-gravitating shear flow instability simulations in a cold dark matter-dominated expanding Einstein-de Sitter universe have been performed. When the shear flow speed exceeds a certain threshold, a self-gravitating Kelvin-Helmholtz instability occurs, forming density voids and excesses along the shear flow layer which serve as seeds for large-scale structure formation. A possible mechanism for generating shear peculiar motions is velocity fluctuations induced by the density perturbations of the postinflation era. In this scenario, short scales grow earlier than large scales. A model of this kind may contribute to the cellular structure of the luminous mass distribution in the universe.

  8. Laser Scanning Holographic Lithography for Flexible 3D Fabrication of Multi-Scale Integrated Nano-structures and Optical Biosensors

    PubMed Central

    Yuan, Liang (Leon); Herman, Peter R.

    2016-01-01

    Three-dimensional (3D) periodic nanostructures underpin a promising research direction on the frontiers of nanoscience and technology to generate advanced materials for exploiting novel photonic crystal (PC) and nanofluidic functionalities. However, the formation of uniform and defect-free 3D periodic structures over large areas that can further integrate into multifunctional devices has remained a major challenge. Here, we introduce a laser scanning holographic method for 3D exposure in thick photoresist that combines the unique advantages of large-area 3D holographic interference lithography (HIL) with the flexible patterning of laser direct writing to form both micro- and nano-structures in a single exposure step. Phase mask interference patterns accumulated over multiple overlapping scans are shown to stitch seamlessly and form uniform 3D nanostructures with a beam scaled down to only 200 μm in diameter. In this way, laser scanning is presented as a facile means to embed 3D PC structure within microfluidic channels for integration into an optofluidic lab-on-chip, demonstrating a new laser HIL writing approach for creating multi-scale integrated microsystems. PMID:26922872

  9. 3D X-ray ultra-microscopy of bone tissue.

    PubMed

    Langer, M; Peyrin, F

    2016-02-01

    We review the current X-ray techniques with 3D imaging capability at the nano-scale: transmission X-ray microscopy, ptychography and in-line phase nano-tomography. We further review the different ultra-structural features that have so far been resolved: the lacuno-canalicular network, collagen orientation, nano-scale mineralization and their use as basis for mechanical simulations. X-ray computed tomography at the micro-metric scale is increasingly considered as the reference technique in imaging of bone micro-structure. The trend has been to push towards increasingly higher resolution. Due to the difficulty of realizing optics in the hard X-ray regime, the magnification has mainly been due to the use of visible light optics and indirect detection of the X-rays, which limits the attainable resolution with respect to the wavelength of the visible light used in detection. Recent developments in X-ray optics and instrumentation have allowed to implement several types of methods that achieve imaging that is limited in resolution by the X-ray wavelength, thus enabling computed tomography at the nano-scale. We review here the X-ray techniques with 3D imaging capability at the nano-scale: transmission X-ray microscopy, ptychography and in-line phase nano-tomography. Further, we review the different ultra-structural features that have so far been resolved and the applications that have been reported: imaging of the lacuno-canalicular network, direct analysis of collagen orientation, analysis of mineralization on the nano-scale and use of 3D images at the nano-scale to drive mechanical simulations. Finally, we discuss the issue of going beyond qualitative description to quantification of ultra-structural features.

  10. A first large-scale flood inundation forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie

    2013-11-04

    At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170k km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast

  11. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.

    2013-12-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance over ~2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios.

  12. Sensitivity of the two-dimensional shearless mixing layer to the initial turbulent kinetic energy and integral length scale

    NASA Astrophysics Data System (ADS)

    Fathali, M.; Deshiri, M. Khoshnami

    2016-04-01

    The shearless mixing layer is generated from the interaction of two homogeneous isotropic turbulence (HIT) fields with different integral scales ℓ1 and ℓ2 and different turbulent kinetic energies E1 and E2. In this study, the sensitivity of temporal evolutions of two-dimensional, incompressible shearless mixing layers to the parametric variations of ℓ1/ℓ2 and E1/E2 is investigated. The sensitivity methodology is based on the nonintrusive approach, using direct numerical simulation and generalized polynomial chaos expansion. The analysis is carried out at Re_ℓ1 = 90 for the high-energy HIT region and different integral length scale ratios 1/4 ≤ ℓ1/ℓ2 ≤ 4 and turbulent kinetic energy ratios 1 ≤ E1/E2 ≤ 30. It is found that the most influential parameter on the variability of the mixing layer evolution is the turbulent kinetic energy, while variations of the integral length scale show a negligible influence on the flow field variability. A significant level of anisotropy and intermittency is observed in both large and small scales. In particular, it is found that large scales have higher levels of intermittency and sensitivity to the variations of ℓ1/ℓ2 and E1/E2 compared to the small scales. Reconstructed response surfaces of the flow field intermittency and the turbulent penetration depth show monotonic dependence on ℓ1/ℓ2 and E1/E2. The mixing layer growth rate and the mixing efficiency both show sensitive dependence on the initial condition parameters. However, the probability density functions of these quantities show relatively small solution variations in response to the variations of the initial condition parameters.

  13. [A large-scale accident in Alpine terrain].

    PubMed

    Wildner, M; Paal, P

    2015-02-01

    Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication to the control room, which is required to dispatch the necessary additional support. Only with a clear "concept", to which all have to adhere, can the subsequent chaos phase be limited. In this respect, the time factor, compounded by adverse weather conditions or darkness, creates enormous pressure. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, then treatment and procedure algorithms have proven successful. For evacuation of casualties, helicopter transport should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute the patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to optimization of rescue measures, regular training and exercises are rational, as is the analysis of previous large-scale Alpine accidents.

  14. Sub-mm Scale Fiber Guided Deep/Vacuum Ultra-Violet Optical Source for Trapped Mercury Ion Clocks

    NASA Technical Reports Server (NTRS)

    Yi, Lin; Burt, Eric A.; Huang, Shouhua; Tjoelker, Robert L.

    2013-01-01

    We demonstrate the functionality of a mercury capillary lamp with a diameter in the sub-mm range and deep-ultraviolet (DUV)/vacuum-ultraviolet (VUV) radiation delivery via an optical fiber integrated with the capillary. DUV spectrum control is observed by varying fabrication parameters such as buffer gas type and pressure, capillary diameter, electrical resonator design, and temperature. We also show spectroscopic data for the 199Hg+ hyperfine transition at 40.5 GHz obtained when applying the above fiber-optic design. We present efforts toward micro-plasma generation in hollow-core photonic crystal fiber with related optical design and theoretical estimations. This new approach towards a more practical DUV optical interface could benefit trapped-ion clock development for future ultra-stable frequency reference and timekeeping applications.

  15. Proportional and Integral Thermal Control System for Large Scale Heating Tests

    NASA Technical Reports Server (NTRS)

    Fleischer, Van Tran

    2015-01-01

    The National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California) Flight Loads Laboratory is a unique national laboratory that supports thermal, mechanical, thermal/mechanical, and structural dynamics research and testing. A Proportional-Integral (PI) thermal control system was designed and implemented to support thermal tests. A thermal control algorithm supporting a quartz lamp heater was developed based on the PI control concept and a linearized heating process. The thermal control equations were derived and expressed in terms of power levels, integral gain, proportional gain, and the differences between thermal setpoints and skin temperatures. Besides the derived equations, users' predefined thermal test information, generated in the form of thermal maps, was used to implement the thermal control system capabilities. Graphite heater closed-loop thermal control and graphite heater open-loop power level were added later to fulfill the demand for higher temperature tests. Verification and validation tests were performed to ensure that the thermal control system requirements were achieved. This thermal control system has successfully supported many milestone thermal and thermal/mechanical tests for almost a decade, with temperatures ranging from 50 F to 3000 F and temperature rise rates from -10 F/s to 70 F/s, for a variety of test articles having unique thermal profiles and test setups.
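    A minimal sketch of the PI law described above: lamp power is commanded from the error between the thermal setpoint and the measured skin temperature. The gains, the first-order heating plant, and the 500 F setpoint are invented for illustration; the laboratory's actual equations and parameters are not given in the abstract.

    ```python
    # Minimal PI temperature controller driving a toy first-order heating process.
    def simulate_pi(setpoint=500.0, kp=2.0, ki=0.1, dt=0.1, steps=2000):
        temp = ambient = 70.0   # start at ambient skin temperature, F
        integral = 0.0
        for _ in range(steps):
            error = setpoint - temp            # setpoint minus skin temperature
            integral += error * dt
            # commanded lamp power: proportional term plus integral term,
            # clamped because a lamp cannot deliver negative power
            power = max(0.0, kp * error + ki * integral)
            # linearized heating process: gain on power, loss to ambient
            temp += dt * (0.05 * power - 0.02 * (temp - ambient))
        return temp

    print(simulate_pi())  # settles close to the 500.0 setpoint
    ```

    The integral term is what removes the steady-state error: at equilibrium the proportional term is zero, and the accumulated integral alone supplies the power needed to balance heat loss to ambient.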

  16. Large-scale anisotropy in stably stratified rotating flows

    DOE PAGES

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; ...

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to 1024^3 grid points and Reynolds numbers of ≈1000. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with ~k⊥^(-5/3), including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  17. Physiology and Pathophysiology in Ultra-Marathon Running

    PubMed Central

    Knechtle, Beat; Nikolaidis, Pantelis T.

    2018-01-01

    -marathons, ~50–60% of the participants experience musculoskeletal problems. The most common injuries in ultra-marathoners involve the lower limb, such as the ankle and the knee. An ultra-marathon can lead to an increase in creatine-kinase to values of 100,000–200,000 U/l depending upon the fitness level of the athlete and the length of the race. Furthermore, an ultra-marathon can lead to changes in the heart as shown by changes in cardiac biomarkers, electro- and echocardiography. Ultra-marathoners often suffer from digestive problems and gastrointestinal bleeding after an ultra-marathon is not uncommon. Liver enzymes can also considerably increase during an ultra-marathon. An ultra-marathon often leads to a temporary reduction in renal function. Ultra-marathoners often suffer from upper respiratory infections after an ultra-marathon. Considering the increased number of participants in ultra-marathons, the findings of the present review would have practical applications for a large number of sports scientists and sports medicine practitioners working in this field. PMID:29910741

  19. Giga-pixel fluorescent imaging over an ultra-large field-of-view using a flatbed scanner.

    PubMed

    Göröcs, Zoltán; Ling, Yuye; Yu, Meng Dai; Karahalios, Dimitri; Mogharabi, Kian; Lu, Kenny; Wei, Qingshan; Ozcan, Aydogan

    2013-11-21

    We demonstrate a new fluorescent imaging technique that can screen for fluorescent micro-objects over an ultra-wide field-of-view (FOV) of ~532 cm2, i.e., 19 cm × 28 cm, reaching a space-bandwidth product of more than 2 billion. For achieving such a large FOV, we modified the hardware and software of a commercially available flatbed scanner and added a custom-designed absorbing fluorescent filter and a two-dimensional array of external light sources for computer-controlled, high-angle fluorescent excitation. We also re-programmed the driver of the scanner to take full control of the scanner hardware and achieve the highest possible exposure time, gain, and sensitivity for detection of fluorescent micro-objects through the gradient-index self-focusing lens array that is positioned in front of the scanner sensor chip. For example, this large FOV of our imaging platform allows us to screen more than 2.2 mL of undiluted whole blood for detection of fluorescent micro-objects within <5 minutes. This high-throughput fluorescent imaging platform could be useful for rare cell research and cytometry applications by enabling rapid screening of large volumes of optically dense media. Our results constitute the first time that a flatbed scanner has been converted to a fluorescent imaging system, achieving a record large FOV.
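    The quoted space-bandwidth product is consistent with simple pixel counting over the scan area at a typical flatbed optical resolution; the 4800 dpi figure below is an assumption for illustration, not a setting stated in the abstract.

    ```python
    # Pixel count over a 19 cm x 28 cm scan area at an assumed 4800 dpi.
    dpi = 4800
    width_px = round(19 / 2.54 * dpi)    # 19 cm converted to inches, then dots
    height_px = round(28 / 2.54 * dpi)   # 28 cm likewise
    pixels = width_px * height_px
    print(pixels)  # ~1.9e9, the order of the "more than 2 billion" quoted
    ```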

  20. Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme

    NASA Astrophysics Data System (ADS)

    Veljović, K.; Rajković, B.; Mesinger, F.

    2009-04-01

    limited in view of the integrations having been done only for 10-day forecasts. Even so, one should note that they are among the very few done using forecast as opposed to reanalysis or analysis global driving data. Our results suggest that (1) when running the Eta as an RCM, no significant loss of large-scale kinetic energy with time seems to be taking place; (2) no disadvantage from using the Eta LBC scheme compared to the relaxation scheme is seen, while enjoying the advantage of a scheme that is significantly less demanding than relaxation, given that it needs driver model fields at the outermost domain boundary only; and (3) the Eta RCM skill in forecasting large scales, with no large-scale nudging, seems to be just about the same as that of the driver model; or, in the terminology of Castro et al., the Eta RCM does not lose the "value of the large scale" which exists in the larger global analyses used for the initial condition and for verification.