Sample records for large scale demonstration

  1. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    A modular process that can efficiently solve large-scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics while retaining the efficiency of the individual disciplines. Computational domain independence of the individual disciplines is maintained using a meta-programming approach, and the disciplines are integrated without affecting the combined performance. Results are demonstrated for large-scale aerospace problems on several supercomputers, and the scalability and portability of the approach are demonstrated on several parallel computers.

  2. Development and Applications of a Modular Parallel Process for Large Scale Fluid/Structures Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Byun, Chansup; Kwak, Dochan (Technical Monitor)

    2001-01-01

    A modular process that can efficiently solve large-scale multidisciplinary problems using massively parallel supercomputers is presented. The process integrates disciplines with diverse physical characteristics while retaining the efficiency of the individual disciplines. Computational domain independence of the individual disciplines is maintained using a meta-programming approach, and the disciplines are integrated without affecting the combined performance. Results are demonstrated for large-scale aerospace problems on several supercomputers, and the scalability and portability of the approach are demonstrated on several parallel computers.

  3. Imaging spectroscopy links aspen genotype with below-ground processes at landscape scales

    PubMed Central

    Madritch, Michael D.; Kingdon, Clayton C.; Singh, Aditya; Mock, Karen E.; Lindroth, Richard L.; Townsend, Philip A.

    2014-01-01

    Fine-scale biodiversity is increasingly recognized as important to ecosystem-level processes. Remote sensing technologies have great potential to estimate both biodiversity and ecosystem function over large spatial scales. Here, we demonstrate the capacity of imaging spectroscopy to discriminate among genotypes of Populus tremuloides (trembling aspen), one of the most genetically diverse and widespread forest species in North America. We combine imaging spectroscopy (AVIRIS) data with genetic, phytochemical, microbial and biogeochemical data to determine how intraspecific plant genetic variation influences below-ground processes at landscape scales. We demonstrate that both canopy chemistry and below-ground processes vary over large spatial scales (continental) according to aspen genotype. Imaging spectrometer data distinguish aspen genotypes through variation in canopy spectral signature. In addition, foliar spectral variation correlates well with variation in canopy chemistry, especially condensed tannins. Variation in aspen canopy chemistry, in turn, is correlated with variation in below-ground processes. Variation in spectra also correlates well with variation in soil traits. These findings indicate that forest tree species can create spatial mosaics of ecosystem functioning across large spatial scales and that these patterns can be quantified via remote sensing techniques. Moreover, they demonstrate the utility of using optical properties as proxies for fine-scale measurements of biodiversity over large spatial scales. PMID:24733949

  4. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage and also benefits computational efficiency, since similarity can be measured efficiently by Hamming distance. In this paper, we propose a novel flexible scale-invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. In addition, we propose a new search strategy that finds target features by cross-indexing between the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets, and experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
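
As an illustration of why such binary codes make similarity search cheap, the sketch below (not the authors' FSB algorithm; the 256-bit code length and random data are assumptions) packs binary descriptors into bytes and ranks database entries by Hamming distance computed with XOR and a bit count.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical database of 100,000 256-bit binary codes, bit-packed into uint8.
n_db, n_bits = 100_000, 256
db_codes = rng.integers(0, 256, size=(n_db, n_bits // 8), dtype=np.uint8)
query = rng.integers(0, 256, size=(n_bits // 8,), dtype=np.uint8)

# Hamming distance = number of differing bits = popcount(XOR of the packed bytes).
xor = np.bitwise_xor(db_codes, query)               # broadcasts the query over the database
hamming = np.unpackbits(xor, axis=1).sum(axis=1)     # differing-bit count per code

nearest = np.argsort(hamming)[:10]                   # 10 closest codes in Hamming space
print(nearest, hamming[nearest])
```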

  5. On large-scale dynamo action at high magnetic Reynolds number

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cattaneo, F.; Tobias, S. M., E-mail: smt@maths.leeds.ac.uk

    2014-07-01

    We consider the generation of magnetic activity—dynamo waves—in the astrophysical limit of very large magnetic Reynolds number. We consider kinematic dynamo action for a system consisting of helical flow and large-scale shear. We demonstrate that large-scale dynamo waves persist at high Rm if the helical flow is characterized by a narrow band of spatial scales and the shear is large enough. However, for a wide band of scales the dynamo becomes small scale with a further increase of Rm, with dynamo waves re-emerging only if the shear is then increased. We show that at high Rm, the key effect of the shear is to suppress small-scale dynamo action, allowing large-scale dynamo action to be observed. We conjecture that this supports a general 'suppression principle'—large-scale dynamo action can only be observed if there is a mechanism that suppresses the small-scale fluctuations.
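
For orientation, kinematic dynamo studies of this kind evolve the magnetic field under a prescribed velocity field; a standard form of the induction equation and of the magnetic Reynolds number (with U and L a characteristic velocity and length scale, and η the magnetic diffusivity) is given below. This is the textbook formulation, not a statement of the authors' specific setup.

```latex
\frac{\partial \mathbf{B}}{\partial t}
  = \nabla \times \left( \mathbf{u} \times \mathbf{B} \right)
  + \eta \, \nabla^{2} \mathbf{B},
\qquad
\mathrm{Rm} = \frac{U L}{\eta}.
```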

  6. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. Many suitcase-sized portable test stands have been assembled to demonstrate hybrids, and they show audiences the safety of hybrid rockets. These small show motors and small laboratory-scale motors can give comparative burn-rate data for the development of different fuel/oxidizer combinations. However, the questions always asked when hybrids are proposed for large-scale applications are: how do they scale, and has it been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from small motors to application size has been documented in several places. Comparison of small-scale hybrid data with larger-scale data indicates that the fuel burn rate goes down with increasing port size, even at the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. The reason this occurs is not thoroughly understood at this time; potential causes include the fact that, since hybrid combustion is boundary-layer driven, larger port sizes reduce the interaction (radiation, mixing, and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability, and scaling concepts that went into the development of those large motors.

  7. Energy transfers in large-scale and small-scale dynamos

    NASA Astrophysics Data System (ADS)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) using numerical simulations of MHD turbulence for Pm = 20 (SSD) and Pm = 0.2 (LSD) on a 1024^3 grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic field at large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to SSD.

  8. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schanen, Michel; Marin, Oana; Zhang, Hong

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance, in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at large scale (50,000+ cores).
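
The storage/recomputation trade-off behind adjoint checkpointing can be illustrated with a toy time integrator. The sketch below is not the paper's asynchronous two-level scheme (it has no disk level and no binomial schedule); it only shows the core idea of keeping sparse checkpoints during the forward sweep and recomputing intermediate states during the reverse sweep, using an invented scalar ODE as a stand-in for the flow solver.

```python
def step(u, dt=0.01):
    """One explicit Euler step of a toy nonlinear ODE du/dt = -u**2 (stand-in for the PDE solver)."""
    return u - dt * u**2

def adjoint_step(u, lam, dt=0.01):
    """Adjoint (transposed Jacobian) of `step`, evaluated at the forward state u."""
    return lam * (1.0 - 2.0 * dt * u)

def adjoint_with_checkpoints(u0, n_steps, stride):
    """Gradient of J = 0.5*u_final**2 w.r.t. u0, storing only every `stride`-th forward state."""
    checkpoints = {0: u0}
    u = u0
    for i in range(n_steps):                 # forward sweep
        u = step(u)
        if (i + 1) % stride == 0:
            checkpoints[i + 1] = u
    lam = u                                   # dJ/du_final for J = 0.5*u_final**2
    for i in reversed(range(n_steps)):        # reverse sweep
        base = (i // stride) * stride
        v = checkpoints[base]
        for _ in range(i - base):             # recompute the forward state entering step i
            v = step(v)
        lam = adjoint_step(v, lam)
    return lam                                # dJ/du0

print(adjoint_with_checkpoints(1.0, n_steps=100, stride=10))
```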

  9. Bayesian hierarchical model for large-scale covariance matrix estimation.

    PubMed

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. Traditional approaches tend to give rise to high variance and low accuracy due to overfitting. We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework and introduce dependency between covariance parameters. We demonstrate the advantages of our approach over traditional approaches using simulations and omics data analysis.
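
A common baseline for regularized large-scale covariance estimation (not the authors' Bayesian hierarchical model) is shrinkage of the sample covariance toward a structured target. The sketch below uses scikit-learn's Ledoit-Wolf estimator on simulated data in the "many variables, few samples" regime typical of omics studies; the dimensions are invented.

```python
import numpy as np
from sklearn.covariance import LedoitWolf, empirical_covariance

rng = np.random.default_rng(0)
n_samples, n_genes = 40, 500           # large p, small n: sample covariance is rank-deficient
X = rng.standard_normal((n_samples, n_genes))

emp = empirical_covariance(X)          # sample covariance: overfit, singular
lw = LedoitWolf().fit(X)               # shrinkage toward a scaled identity target

print("shrinkage weight:", lw.shrinkage_)
print("smallest eigenvalue (empirical):", np.linalg.eigvalsh(emp).min())
print("smallest eigenvalue (shrunk):   ", np.linalg.eigvalsh(lw.covariance_).min())
```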

  10. Mean-field dynamo in a turbulence with shear and kinetic helicity fluctuations.

    PubMed

    Kleeorin, Nathan; Rogachevskii, Igor

    2008-03-01

    We study the effects of kinetic helicity fluctuations in a turbulence with large-scale shear using two different approaches: the spectral tau approximation and the second-order correlation approximation (or first-order smoothing approximation). These two approaches demonstrate that homogeneous kinetic helicity fluctuations alone with zero mean value in a sheared homogeneous turbulence cannot cause a large-scale dynamo. A mean-field dynamo is possible when the kinetic helicity fluctuations are inhomogeneous, which causes a nonzero mean alpha effect in a sheared turbulence. On the other hand, the shear-current effect can generate a large-scale magnetic field even in a homogeneous nonhelical turbulence with large-scale shear. This effect was investigated previously for large hydrodynamic and magnetic Reynolds numbers. In this study we examine the threshold required for the shear-current dynamo versus Reynolds number. We demonstrate that there is no need for a developed inertial range in order to maintain the shear-current dynamo (e.g., the threshold in the Reynolds number is of the order of 1).
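
The mean-field effects discussed here (an alpha effect and turbulent diffusion acting on the large-scale field in the presence of a mean flow with shear) enter through the standard mean-field induction equation; a conventional form, written for orientation rather than as the authors' specific closure, is:

```latex
\frac{\partial \overline{\mathbf{B}}}{\partial t}
  = \nabla \times \left( \overline{\mathbf{U}} \times \overline{\mathbf{B}}
  + \alpha \, \overline{\mathbf{B}}
  - \eta_{t} \, \nabla \times \overline{\mathbf{B}} \right)
  + \eta \, \nabla^{2} \overline{\mathbf{B}} .
```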

  11. Sodium-cutting: a new top-down approach to cut open nanostructures on nonplanar surfaces on a large scale.

    PubMed

    Chen, Wei; Deng, Da

    2014-11-11

    We report a new, low-cost, and simple top-down approach, "sodium-cutting", to cut and open nanostructures deposited on a nonplanar surface on a large scale. The feasibility of sodium-cutting was demonstrated by successfully cutting open ∼100% of carbon nanospheres into nanobowls on a large scale from Sn@C nanospheres for the first time.

  12. Fabrication of the HIAD Large-Scale Demonstration Assembly and Upcoming Mission Applications

    NASA Technical Reports Server (NTRS)

    Swanson, G. T.; Johnson, R. K.; Hughes, S. J.; DiNonno, J. M.; Cheatwood, F. M.

    2017-01-01

    Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology. This effort has included multiple ground test campaigns and flight tests culminating in the HIAD project's second-generation (Gen-2) deployable aeroshell system and associated analytical tools. NASA's HIAD project team has developed, fabricated, and tested inflatable structures (IS) integrated with a flexible thermal protection system (F-TPS), ranging in diameter from 3-6 m, with cone angles of 60 and 70 deg. In 2015, United Launch Alliance (ULA) announced that they will use a HIAD (10-12 m) as part of their Sensible, Modular, Autonomous Return Technology (SMART) for their upcoming Vulcan rocket. ULA expects SMART reusability, coupled with other advancements for Vulcan, will substantially reduce the cost of access to space. The first booster engine recovery via HIAD is scheduled for 2024. To meet this near-term need, as well as future NASA applications, the HIAD team is investigating taking the technology to the 10-15 m diameter scale. In the last year, many significant development and fabrication efforts have been accomplished, culminating in the construction of a large-scale inflatable structure demonstration assembly. This assembly incorporated the first three tori for a 12 m Mars Human-Scale Pathfinder HIAD conceptual design, constructed with the current state-of-the-art material set. Numerous design trades and torus fabrication demonstrations preceded this effort. In 2016, three large-scale tori (0.61 m cross-section) and six subscale tori (0.25 m cross-section) were manufactured to demonstrate fabrication techniques using the newest candidate material sets. These tori were tested to evaluate durability and load capacity. This work led to the selection of the inflatable structure's third-generation (Gen-3) structural liner. In late 2016, the three tori required for the large-scale demonstration assembly were fabricated and then integrated in early 2017. The design includes provisions to add the remaining four tori necessary to complete the assembly of the 12 m Human-Scale Pathfinder HIAD in the event future project funding becomes available. This presentation will discuss the HIAD large-scale demonstration assembly design and fabrication performed in the last year, including the precursor tori development and the partial-stack fabrication. Potential near-term and future 10-15 m HIAD applications will also be discussed.

  13. Fabrication of the HIAD Large-Scale Demonstration Assembly

    NASA Technical Reports Server (NTRS)

    Swanson, G. T.; Johnson, R. K.; Hughes, S. J.; DiNonno, J. M.; Cheatwood, F. M.

    2017-01-01

    Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology. This effort has included multiple ground test campaigns and flight tests culminating in the HIAD project's second-generation (Gen-2) deployable aeroshell system and associated analytical tools. NASA's HIAD project team has developed, fabricated, and tested inflatable structures (IS) integrated with a flexible thermal protection system (F-TPS), ranging in diameter from 3-6 m, with cone angles of 60 and 70 deg. In 2015, United Launch Alliance (ULA) announced that they will use a HIAD (10-12 m) as part of their Sensible, Modular, Autonomous Return Technology (SMART) for their upcoming Vulcan rocket. ULA expects SMART reusability, coupled with other advancements for Vulcan, will substantially reduce the cost of access to space. The first booster engine recovery via HIAD is scheduled for 2024. To meet this near-term need, as well as future NASA applications, the HIAD team is investigating taking the technology to the 10-15 m diameter scale. In the last year, many significant development and fabrication efforts have been accomplished, culminating in the construction of a large-scale inflatable structure demonstration assembly. This assembly incorporated the first three tori for a 12 m Mars Human-Scale Pathfinder HIAD conceptual design, constructed with the current state-of-the-art material set. Numerous design trades and torus fabrication demonstrations preceded this effort. In 2016, three large-scale tori (0.61 m cross-section) and six subscale tori (0.25 m cross-section) were manufactured to demonstrate fabrication techniques using the newest candidate material sets. These tori were tested to evaluate durability and load capacity. This work led to the selection of the inflatable structure's third-generation (Gen-3) structural liner. In late 2016, the three tori required for the large-scale demonstration assembly were fabricated and then integrated in early 2017. The design includes provisions to add the remaining four tori necessary to complete the assembly of the 12 m Human-Scale Pathfinder HIAD in the event future project funding becomes available. This presentation will discuss the HIAD large-scale demonstration assembly design and fabrication performed in the last year, including the precursor tori development and the partial-stack fabrication. Potential near-term and future 10-15 m HIAD applications will also be discussed.

  14. Theme II Joint Work Plan - 2017 Collaboration and Knowledge Sharing on Large-Scale Demonstration Projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoliang; Stauffer, Philip H.

    This effort is designed to expedite learnings from existing and planned large demonstration projects and their associated research through effective knowledge sharing among participants in the US and China.

  15. Demonstrating a new framework for the comparison of environmental impacts from small- and large-scale hydropower and wind power projects.

    PubMed

    Bakken, Tor Haakon; Aase, Anne Guri; Hagen, Dagmar; Sundt, Håkon; Barton, David N; Lujala, Päivi

    2014-07-01

    Climate change and the needed reductions in the use of fossil fuels call for the development of renewable energy sources. However, renewable energy production, such as hydropower (both small- and large-scale) and wind power, has adverse impacts on the local environment by causing reductions in biodiversity and loss of habitats and species. This paper compares the environmental impacts of many small-scale hydropower plants with a few large-scale hydropower projects and one wind power farm, based on the same set of environmental parameters: land occupation, reduction in wilderness areas (INON), visibility, and impacts on red-listed species. Our basis for comparison was similar energy volumes produced, without considering the quality of the energy services provided. The results show that small-scale hydropower performs less favourably on all parameters except land occupation. The land occupation of large hydropower and wind power is in the range of 45-50 m²/MWh, which is more than two times larger than that of small-scale hydropower, where the large land occupation of large hydropower is explained by the extent of the reservoirs. On all three other parameters, small-scale hydropower performs more than two times worse than both large hydropower and wind power. Wind power compares similarly to large-scale hydropower regarding land occupation, much better on the reduction in INON areas, and in the same range regarding red-listed species. Our results demonstrate that the selected four parameters provide a basis for further development of a fair and consistent comparison of impacts between the analysed renewable technologies. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
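
As a check on the units used in this comparison, land occupation per unit energy is simply occupied area divided by annual generation. The short sketch below works one hypothetical example; the plant area and output are invented, and only the 45-50 m²/MWh range comes from the abstract.

```python
# Hypothetical plant: 5 km^2 of occupied land producing 100 GWh per year.
area_m2 = 5.0e6          # 5 km^2 expressed in m^2
energy_mwh = 100_000.0   # 100 GWh expressed in MWh

land_occupation = area_m2 / energy_mwh
print(f"{land_occupation:.0f} m^2/MWh")   # 50 m^2/MWh, the upper end of the reported range
```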

  16. Large Composite Structures Processing Technologies for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Clinton, R. G., Jr.; Vickers, J. H.; McMahon, W. M.; Hulcher, A. B.; Johnston, N. J.; Cano, R. J.; Belvin, H. L.; McIver, K.; Franklin, W.; Sidwell, D.

    2001-01-01

    Significant efforts have been devoted to establishing the technology foundation needed to enable the progression to large-scale composite structures fabrication. We are not capable today of fabricating many of the composite structures envisioned for the second-generation reusable launch vehicle (RLV). Conventional 'aerospace' manufacturing and processing methodologies (fiber placement, autoclave, tooling) will require substantial investment and lead time to scale up. Out-of-autoclave process techniques will require aggressive efforts to mature the selected technologies and to scale up. Focused composite processing technology development and demonstration programs utilizing the building-block approach are required to enable the envisioned second-generation RLV large composite structures applications. Government/industry partnerships have demonstrated success in this area and represent the best combination of skills and capabilities to achieve this goal.

  17. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.

    2013-12-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance over ~2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios.

  18. DNA barcoding at riverscape scales: Assessing biodiversity among fishes of the genus Cottus (Teleostei) in northern Rocky Mountain streams

    Treesearch

    Michael K. Young; Kevin S. McKelvey; Kristine L. Pilgrim; Michael K. Schwartz

    2013-01-01

    There is growing interest in broad-scale biodiversity assessments that can serve as benchmarks for identifying ecological change. Genetic tools have been used for such assessments for decades, but spatial sampling considerations have largely been ignored. Here, we demonstrate how intensive sampling efforts across a large geographical scale can influence identification...

  19. Large-scale magnetic fields at high Reynolds numbers in magnetohydrodynamic simulations.

    PubMed

    Hotta, H; Rempel, M; Yokoyama, T

    2016-03-25

    The 11-year solar magnetic cycle shows a high degree of coherence in spite of the turbulent nature of the solar convection zone. It has been found in recent high-resolution magnetohydrodynamics simulations that the maintenance of a large-scale coherent magnetic field is difficult with small viscosity and magnetic diffusivity (≲10^12 square centimeters per second). We reproduced previous findings that indicate a reduction of the energy in the large-scale magnetic field for lower diffusivities, and we demonstrate the recovery of the global-scale magnetic field using unprecedentedly high resolution. We found an efficient small-scale dynamo that suppresses small-scale flows, which mimics the properties of large diffusivity. As a result, the global-scale magnetic field is maintained even in the regime of small diffusivities, that is, large Reynolds numbers. Copyright © 2016, American Association for the Advancement of Science.

  20. Large-scale Estimates of Leaf Area Index from Active Remote Sensing Laser Altimetry

    NASA Astrophysics Data System (ADS)

    Hopkinson, C.; Mahoney, C.

    2016-12-01

    Leaf area index (LAI) is a key parameter that describes the spatial distribution of foliage within forest canopies, which in turn controls numerous relationships between the ground, canopy, and atmosphere. LAI retrieval has been demonstrated successfully with in-situ digital hemispherical photography (DHP) and airborne laser scanning (ALS) data; however, field and ALS acquisitions are often spatially limited (hundreds of km²) and costly. Large-scale (thousands of km²) retrievals have been demonstrated with optical sensors, but their accuracies remain uncertain due to the sensors' inability to penetrate the canopy. The spaceborne Geoscience Laser Altimeter System (GLAS) provides a possible solution for retrieving large-scale derivations while still penetrating the canopy. LAI retrieved by multiple DHP at 6 Australian sites, representing a cross-section of Australian ecosystems, was employed to model ALS LAI, which in turn was used to infer LAI from GLAS data at 5 other sites. An optimally filtered GLAS dataset was then employed in conjunction with a host of supplementary data to build a Random Forest (RF) model to produce predictions (and uncertainties) of LAI at 250 m resolution across the forested regions of Australia. Predictions were validated against ALS-based LAI from 20 sites (R² = 0.64, RMSE = 1.1 m² m⁻²); MODIS-based LAI were also assessed against these sites (R² = 0.30, RMSE = 1.78 m² m⁻²) to demonstrate the strength of the GLAS-based predictions. The large-scale nature of the current predictions was also leveraged to demonstrate large-scale relationships of LAI with other environmental characteristics, such as canopy height, elevation, and slope. Such wide-scale quantification of LAI is key to the assessment and modification of forest management strategies across Australia, and this work also assists Australia's Terrestrial Ecosystem Research Network in fulfilling its government-issued mandates.
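
A minimal sketch of the kind of Random Forest regression and validation metrics described here, with synthetic data and invented feature columns standing in for the GLAS waveform metrics and supplementary layers; it is not the authors' trained model.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)

# Synthetic predictors standing in for GLAS-derived canopy height, elevation, slope, etc.
n = 2000
X = rng.standard_normal((n, 4))
lai = 2.0 + 1.5 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 0.5, n)   # synthetic LAI target

X_train, X_test, y_train, y_test = train_test_split(X, lai, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

pred = rf.predict(X_test)
print("R2:  ", r2_score(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```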

  1. Large Scale Triboelectric Nanogenerator and Self-Powered Pressure Sensor Array Using Low Cost Roll-to-Roll UV Embossing

    PubMed Central

    Dhakar, Lokesh; Gudla, Sudeep; Shan, Xuechuan; Wang, Zhiping; Tay, Francis Eng Hock; Heng, Chun-Huat; Lee, Chengkuo

    2016-01-01

    Triboelectric nanogenerators (TENGs) have emerged as a potential solution for mechanical energy harvesting over conventional mechanisms such as piezoelectric and electromagnetic generators, due to easy fabrication, high efficiency, and a wider choice of materials. Traditional fabrication techniques used to realize TENGs involve plasma etching, soft lithography, and nanoparticle deposition for higher performance, but the lack of truly scalable fabrication processes remains a critical challenge and bottleneck on the path to bringing TENGs to commercial production. In this paper, we demonstrate fabrication of a large scale triboelectric nanogenerator (LS-TENG) using roll-to-roll ultraviolet embossing to pattern polyethylene terephthalate sheets. These LS-TENGs can be used to harvest energy from human motion and vehicle motion via devices embedded in floors and roads, respectively. The LS-TENG generated a power density of 62.5 mW m−2. Using the roll-to-roll processing technique, we also demonstrate a large scale triboelectric pressure sensor array with a pressure detection sensitivity of 1.33 V kPa−1. The large scale pressure sensor array has applications in self-powered motion tracking, posture monitoring, and electronic skin. This work demonstrates scalable fabrication of TENGs and self-powered pressure sensor arrays, which will lead to extremely low cost and bring them closer to commercial production. PMID:26905285

  2. Leaky Integrate and Fire Neuron by Charge-Discharge Dynamics in Floating-Body MOSFET.

    PubMed

    Dutta, Sangya; Kumar, Vinay; Shukla, Aditya; Mohapatra, Nihar R; Ganguly, Udayan

    2017-08-15

    Neuro-biology inspired spiking neural networks (SNNs) enable efficient learning and recognition tasks. To achieve a large scale network akin to biology, a power- and area-efficient electronic neuron is essential. Earlier, we had demonstrated an LIF neuron based on a novel 4-terminal impact-ionization-based n+/p/n+ device with an extended gate (gated-INPN) by physics simulation, with excellent improvement in area and power compared to conventional analog circuit implementations. In this paper, we propose and experimentally demonstrate a compact conventional 3-terminal partially depleted (PD) SOI-MOSFET (100 nm gate length) to replace the 4-terminal gated-INPN device. The impact ionization (II) induced floating-body effect in the SOI-MOSFET is used to capture LIF neuron behavior and to demonstrate the dependence of spiking frequency on the input. MHz operation enables attractive hardware acceleration compared to biology. Overall, conventional PD-SOI-CMOS technology enables very-large-scale integration (VLSI), which is essential for biology-scale (~10^11 neuron) large neural networks.
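
For readers unfamiliar with the leaky integrate-and-fire abstraction the device emulates, the sketch below simulates a textbook LIF neuron (all parameters are arbitrary, and the model says nothing about the floating-body MOSFET physics): the membrane potential charges toward the input, leaks, and emits a spike when it crosses a threshold, so the spiking frequency grows with the input level.

```python
def lif_spike_rate(i_in, dt=1e-4, t_end=1.0, tau=0.02, v_th=1.0, v_reset=0.0):
    """Spike rate (Hz) of a leaky integrate-and-fire neuron: tau * dv/dt = -v + i_in."""
    v, spikes = 0.0, 0
    for _ in range(int(t_end / dt)):
        v += dt / tau * (-v + i_in)   # leaky charge-up toward the input
        if v >= v_th:                 # threshold crossing: emit a spike and reset
            spikes += 1
            v = v_reset
    return spikes / t_end

for i_in in [1.2, 1.5, 2.0, 3.0]:
    print(i_in, lif_spike_rate(i_in))   # spiking frequency increases with input
```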

  3. Transparency Film for Demonstration of Biaxial Optics.

    ERIC Educational Resources Information Center

    Camp, Paul R.

    1994-01-01

    Explains why transparency film demonstrates biaxial optical properties. Provides detailed descriptions of the procedure and equipment needed for large-scale optics demonstrations of the polarization interference pattern produced by biaxial crystals. (DDR)

  4. Robust decentralized hybrid adaptive output feedback fuzzy control for a class of large-scale MIMO nonlinear systems and its application to AHS.

    PubMed

    Huang, Yi-Shao; Liu, Wei-Ping; Wu, Min; Wang, Zheng-Wu

    2014-09-01

    This paper presents a novel observer-based decentralized hybrid adaptive fuzzy control scheme for a class of large-scale continuous-time multiple-input multiple-output (MIMO) uncertain nonlinear systems whose state variables are unmeasurable. The scheme integrates fuzzy logic systems, state observers, and strictly positive real conditions to deal with three issues in the control of a large-scale MIMO uncertain nonlinear system: algorithm design, controller singularity, and transient response. The design of the hybrid adaptive fuzzy controller is then extended to address a general large-scale uncertain nonlinear system. It is shown that the resultant closed-loop large-scale system remains asymptotically stable and the tracking error converges to zero. The advantages of our scheme are demonstrated by simulations. Copyright © 2014. Published by Elsevier Ltd.

  5. Packed Bed Bioreactor for the Isolation and Expansion of Placental-Derived Mesenchymal Stromal Cells

    PubMed Central

    Osiecki, Michael J.; Michl, Thomas D.; Kul Babur, Betul; Kabiri, Mahboubeh; Atkinson, Kerry; Lott, William B.; Griesser, Hans J.; Doran, Michael R.

    2015-01-01

    Large numbers of mesenchymal stem/stromal cells (MSCs) are required for clinically relevant doses to treat a number of diseases. To manufacture these MSCs economically, an automated bioreactor system will be required. Herein we describe the development of a scalable, closed-system, packed bed bioreactor suitable for large-scale MSC expansion. The packed bed was formed from fused polystyrene pellets that were air-plasma treated to endow them with a surface chemistry similar to traditional tissue culture plastic. The packed bed was encased within a gas-permeable shell to decouple the medium nutrient supply and gas exchange. This enabled a significant reduction in medium flow rates, thus reducing shear and even facilitating single-pass medium exchange. The system was optimised in a small-scale bioreactor format (160 cm2) with murine-derived green fluorescent protein-expressing MSCs, and then scaled up to a 2800 cm2 format. We demonstrated that placental-derived MSCs could be isolated directly within the bioreactor and subsequently expanded. Our results demonstrate that the closed-system, large-scale packed bed bioreactor is an effective and scalable tool for large-scale isolation and expansion of MSCs. PMID:26660475

  6. Packed Bed Bioreactor for the Isolation and Expansion of Placental-Derived Mesenchymal Stromal Cells.

    PubMed

    Osiecki, Michael J; Michl, Thomas D; Kul Babur, Betul; Kabiri, Mahboubeh; Atkinson, Kerry; Lott, William B; Griesser, Hans J; Doran, Michael R

    2015-01-01

    Large numbers of mesenchymal stem/stromal cells (MSCs) are required for clinically relevant doses to treat a number of diseases. To manufacture these MSCs economically, an automated bioreactor system will be required. Herein we describe the development of a scalable, closed-system, packed bed bioreactor suitable for large-scale MSC expansion. The packed bed was formed from fused polystyrene pellets that were air-plasma treated to endow them with a surface chemistry similar to traditional tissue culture plastic. The packed bed was encased within a gas-permeable shell to decouple the medium nutrient supply and gas exchange. This enabled a significant reduction in medium flow rates, thus reducing shear and even facilitating single-pass medium exchange. The system was optimised in a small-scale bioreactor format (160 cm2) with murine-derived green fluorescent protein-expressing MSCs, and then scaled up to a 2800 cm2 format. We demonstrated that placental-derived MSCs could be isolated directly within the bioreactor and subsequently expanded. Our results demonstrate that the closed-system, large-scale packed bed bioreactor is an effective and scalable tool for large-scale isolation and expansion of MSCs.

  7. Space transportation booster engine thrust chamber technology, large scale injector

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.

    1993-01-01

    The objective of the Large Scale Injector (LSI) program was to deliver a 21 inch diameter, 600,000 lbf thrust class injector to NASA/MSFC for hot fire testing. The hot fire test program would demonstrate the feasibility and integrity of the full scale injector, including combustion stability, chamber wall compatibility (thermal management), and injector performance. The 21 inch diameter injector was delivered in September of 1991.

  8. Measured acoustic characteristics of ducted supersonic jets at different model scales

    NASA Technical Reports Server (NTRS)

    Jones, R. R., III; Ahuja, K. K.; Tam, Christopher K. W.; Abdelwahab, M.

    1993-01-01

    A large-scale (about a 25x enlargement) model of the Georgia Tech Research Institute (GTRI) hardware was installed and tested in the Propulsion Systems Laboratory of the NASA Lewis Research Center. Acoustic measurements made in these two facilities are compared and the similarity in acoustic behavior over the scale range under consideration is highlighted. The study provides acoustic data over a relatively large scale range, which may be used to demonstrate the validity of the scaling methods employed in the investigation of this phenomenon.

  9. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
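
The geometric core of the method, intersecting two fitted planes to obtain a line, can be written in a few lines of standard linear algebra. The sketch below is not the paper's LSHP fitting; it only computes the direction and a particular point of the intersection line of two planes given in the form n·x = d.

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Intersection line of planes n1.x = d1 and n2.x = d2; returns (point, unit direction)."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)                  # the line lies in both planes
    if np.linalg.norm(direction) < 1e-12:
        raise ValueError("planes are parallel")
    # A particular point: solve the 3x3 system [n1; n2; direction] x = [d1, d2, 0].
    A = np.vstack([n1, n2, direction])
    point = np.linalg.solve(A, np.array([d1, d2, 0.0]))
    return point, direction / np.linalg.norm(direction)

point, direction = plane_intersection([0, 0, 1], 0.0, [1, 0, 0], 2.0)
print(point, direction)   # a point on the line x = 2, z = 0, with direction along y
```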

  10. NREL, California Independent System Operator, and First Solar Demonstrate Essential Reliability Services with Utility-Scale Solar

    Science.gov Websites

    NREL, the California Independent System Operator (CAISO), and First Solar conducted a demonstration project on a large utility-scale photovoltaic (PV) power plant in California to demonstrate that utility-scale solar can provide essential reliability services.

  11. Diagnostics of a large-scale irregularity in the electron density near the boundary of the radio transparency frequency range of the ionosphere

    NASA Astrophysics Data System (ADS)

    Afanasiev, N. T.; Markov, V. P.

    2011-08-01

    Approximate functional relationships for the calculation of a disturbed transionogram with a trace deformation caused by the influence of a large-scale irregularity in the electron density are obtained. Numerical and asymptotic modeling of disturbed transionograms at various positions of a spacecraft relative to a ground-based observation point is performed. The possibility of determining the intensity and dimensions of a single large-scale irregularity near the boundary of the radio transparency frequency range of the ionosphere is demonstrated.

  12. Towards large scale multi-target tracking

    NASA Astrophysics Data System (ADS)

    Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus

    2014-06-01

    Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.

  13. Solar Trees: First Large-Scale Demonstration of Fully Solution Coated, Semitransparent, Flexible Organic Photovoltaic Modules.

    PubMed

    Berny, Stephane; Blouin, Nicolas; Distler, Andreas; Egelhaaf, Hans-Joachim; Krompiec, Michal; Lohr, Andreas; Lozman, Owen R; Morse, Graham E; Nanson, Lana; Pron, Agnieszka; Sauermann, Tobias; Seidler, Nico; Tierney, Steve; Tiwana, Priti; Wagner, Michael; Wilson, Henry

    2016-05-01

    The technology behind a large-area array of flexible solar cells with a unique design and semitransparent blue appearance is presented. These modules are implemented in a solar tree installation at the German pavilion at EXPO 2015 in Milan, Italy. The modules show power conversion efficiencies of 4.5% and are produced exclusively using standard printing techniques for large-scale production.

  14. SQDFT: Spectral Quadrature method for large-scale parallel O(N) Kohn-Sham calculations at high temperature

    NASA Astrophysics Data System (ADS)

    Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj; Pask, John E.

    2018-03-01

    We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn-Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw-Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, that are then approximated by spatially localized Clenshaw-Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.
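
The Clenshaw-Curtis quadrature underlying the SQ method integrates a function by sampling it at Chebyshev points. The sketch below builds the classical nodes and weights on [-1, 1] (a textbook construction, not the SQDFT implementation) and checks them against a smooth integrand.

```python
import numpy as np

def clenshaw_curtis(n):
    """Classical Clenshaw-Curtis nodes and weights on [-1, 1] using n+1 Chebyshev points."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)
    w = np.zeros(n + 1)
    v = np.ones(n - 1)
    if n % 2 == 0:
        w[0] = w[n] = 1.0 / (n**2 - 1)
        for k in range(1, n // 2):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k**2 - 1)
        v -= np.cos(n * theta[1:n]) / (n**2 - 1)
    else:
        w[0] = w[n] = 1.0 / n**2
        for k in range(1, (n - 1) // 2 + 1):
            v -= 2.0 * np.cos(2 * k * theta[1:n]) / (4 * k**2 - 1)
    w[1:n] = 2.0 * v / n
    return x, w

x, w = clenshaw_curtis(32)
print(np.dot(w, np.exp(x)), np.exp(1) - np.exp(-1))   # quadrature vs. exact integral of e^x
```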

  15. Generalized Master Equation with Non-Markovian Multichromophoric Förster Resonance Energy Transfer for Modular Exciton Densities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jang, Seogjoo; Hoyer, Stephan; Fleming, Graham

    2014-10-31

    A generalized master equation (GME) governing quantum evolution of modular exciton density (MED) is derived for large scale light harvesting systems composed of weakly interacting modules of multiple chromophores. The GME-MED offers a practical framework to incorporate real time coherent quantum dynamics calculations of small length scales into dynamics over large length scales, and also provides a non-Markovian generalization and rigorous derivation of the Pauli master equation employing multichromophoric Förster resonance energy transfer rates. A test of the GME-MED for four sites of the Fenna-Matthews-Olson complex demonstrates how coherent dynamics of excitonic populations over coupled chromophores can be accurately described by transitions between subgroups (modules) of delocalized excitons. Application of the GME-MED to the exciton dynamics between a pair of light harvesting complexes in purple bacteria demonstrates its promise as a computationally efficient tool to investigate large scale exciton dynamics in complex environments.

  16. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth amplify the large-scale fields exponentially before turbulence and high-wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field, and we find that a feedback from horizontal low-wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.

  17. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE PAGES

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    2016-07-06

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth amplify the large-scale fields exponentially before turbulence and high-wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field, and we find that a feedback from horizontal low-wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.

  18. Progressive Mid-latitude Afforestation: Local and Remote Climate Impacts in the Framework of Two Coupled Earth System Models

    NASA Astrophysics Data System (ADS)

    Lague, Marysa

    Vegetation influences the atmosphere in complex and non-linear ways, such that large-scale changes in vegetation cover can drive changes in climate on both local and global scales. Large-scale land surface changes have been shown to introduce excess energy to one hemisphere, causing a shift in atmospheric circulation on a global scale. However, past work has not quantified how the climate response scales with the area of vegetation. Here, we systematically evaluate the response of climate to linearly increasing the area of forest cover over the northern mid-latitudes. We show that the magnitude of afforestation of the northern mid-latitudes determines the climate response in a non-linear fashion, and identify a threshold in vegetation-induced cloud feedbacks - a concept not previously addressed by large-scale vegetation manipulation experiments. Small increases in tree cover drive compensating cloud feedbacks, while latent heat fluxes reach a threshold after sufficiently large increases in tree cover, causing the troposphere to warm and dry, subsequently reducing cloud cover. Increased absorption of solar radiation at the surface is driven by both surface albedo changes and cloud feedbacks. We identify how vegetation-induced changes in cloud cover further feedback on changes in the global energy balance. We also show how atmospheric cross-equatorial energy transport changes as the area of afforestation is incrementally increased (a relationship which has not previously been demonstrated). This work demonstrates that while some climate effects (such as energy transport) of large scale mid-latitude afforestation scale roughly linearly across a wide range of afforestation areas, others (such as the local partitioning of the surface energy budget) are non-linear, and sensitive to the particular magnitude of mid-latitude forcing. Our results highlight the importance of considering both local and remote climate responses to large-scale vegetation change, and explore the scaling relationship between changes in vegetation cover and the resulting climate impacts.

  19. Modification in drag of turbulent boundary layers resulting from manipulation of large-scale structures

    NASA Technical Reports Server (NTRS)

    Corke, T. C.; Guezennec, Y.; Nagib, H. M.

    1981-01-01

    The effects of placing a parallel-plate turbulence manipulator in a boundary layer are documented through flow visualization and hot wire measurements. The boundary layer manipulator was designed to manage the large scale structures of turbulence leading to a reduction in surface drag. The differences in the turbulent structure of the boundary layer are summarized to demonstrate differences in various flow properties. The manipulator inhibited the intermittent large scale structure of the turbulent boundary layer for at least 70 boundary layer thicknesses downstream. With the removal of the large scale, the streamwise turbulence intensity levels near the wall were reduced. The downstream distribution of the skin friction was also altered by the introduction of the manipulator.

  20. Large-Scale, Three-Dimensional, Free-Standing, and Mesoporous Metal Oxide Networks for High-Performance Photocatalysis

    PubMed Central

    Bai, Hua; Li, Xinshi; Hu, Chao; Zhang, Xuan; Li, Junfang; Yan, Yan; Xi, Guangcheng

    2013-01-01

    Mesoporous nanostructures represent a unique class of photocatalysts with many applications, including splitting of water, degradation of organic contaminants, and reduction of carbon dioxide. In this work, we report a general Lewis acid catalytic template route for the high-yield production of single- and multi-component large-scale three-dimensional (3D) mesoporous metal oxide networks. The large-scale 3D mesoporous metal oxide networks possess a large macroscopic scale (millimeter-sized) and a mesoporous nanostructure with huge pore volume and large exposed surface area. This method can also be used for the synthesis of large-scale 3D macro/mesoporous hierarchical porous materials and noble-metal-nanoparticle-loaded 3D mesoporous networks. Photocatalytic degradation of azo dyes demonstrated that the large-scale 3D mesoporous metal oxide networks enable high photocatalytic activity. The present synthetic method can serve as a new design concept for functional 3D mesoporous nanomaterials. PMID:23857595

  1. Solutions of large-scale electromagnetics problems involving dielectric objects with the parallel multilevel fast multipole algorithm.

    PubMed

    Ergül, Özgür

    2011-11-01

    Fast and accurate solutions of large-scale electromagnetics problems involving homogeneous dielectric objects are considered. Problems are formulated with the electric and magnetic current combined-field integral equation and discretized with the Rao-Wilton-Glisson functions. Solutions are performed iteratively by using the multilevel fast multipole algorithm (MLFMA). For the solution of large-scale problems discretized with millions of unknowns, MLFMA is parallelized on distributed-memory architectures using a rigorous technique, namely, the hierarchical partitioning strategy. Efficiency and accuracy of the developed implementation are demonstrated on very large problems involving as many as 100 million unknowns.

  2. Development of 5123 Intron-Length Polymorphic Markers for Large-Scale Genotyping Applications in Foxtail Millet

    PubMed Central

    Muthamilarasan, Mehanathan; Venkata Suresh, B.; Pandey, Garima; Kumari, Kajal; Parida, Swarup Kumar; Prasad, Manoj

    2014-01-01

    Generating genomic resources in the form of molecular markers is imperative in molecular breeding for crop improvement. Although the development and application of microsatellite markers at large scale was reported in the model crop foxtail millet, no such large-scale study had been conducted for intron-length polymorphic (ILP) markers. Considering this, we developed 5123 ILP markers, of which 4049 were physically mapped onto the 9 chromosomes of foxtail millet. BLAST analysis of 5123 expressed sequence tags (ESTs) suggested functions for ∼71.5% of the ESTs and grouped them into 5 different functional categories. About 440 selected primer pairs, representing the foxtail millet genome and the different functional groups, showed a high level of cross-genera amplification, at an average of ∼85%, in eight millet and five non-millet species. The efficacy of the ILP markers for distinguishing foxtail millet is demonstrated by the observed heterozygosity (0.20) and Nei's average gene diversity (0.22). In silico comparative mapping of the physically mapped ILP markers demonstrated a substantial degree of sequence-based orthology and syntenic relationships between foxtail millet chromosomes and sorghum (∼50%), maize (∼46%), rice (∼21%), and Brachypodium (∼21%) chromosomes. Hence, for the first time, we developed large-scale ILP markers in foxtail millet and demonstrated their utility in germplasm characterization, transferability, phylogenetics, and comparative mapping studies in millets and bioenergy grass species. PMID:24086082
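
The diversity statistic quoted here can be computed directly from allele frequencies. The sketch below evaluates Nei's gene diversity, H = 1 - sum(p_i^2), averaged over loci, for a small made-up set of marker allele frequencies; the numbers are illustrative, not the paper's data.

```python
import numpy as np

def nei_gene_diversity(allele_freqs_per_locus):
    """Nei's gene diversity averaged over loci: H = mean_l (1 - sum_i p_{l,i}^2)."""
    h_per_locus = [1.0 - np.sum(np.square(p)) for p in allele_freqs_per_locus]
    return float(np.mean(h_per_locus))

# Illustrative allele frequencies at three ILP loci (each vector sums to 1).
loci = [
    np.array([0.9, 0.1]),
    np.array([0.7, 0.2, 0.1]),
    np.array([0.5, 0.5]),
]
print(nei_gene_diversity(loci))   # ~0.38 for these made-up frequencies
```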

  3. FIELD DEMONSTRATION OF EMERGING PIPE WALL INTEGRITY ASSESSMENT TECHNOLOGIES FOR LARGE CAST IRON WATER MAINS

    EPA Science Inventory

    The U.S. Environmental Protection Agency sponsored a large-scale field demonstration of innovative leak detection/location and condition assessment technologies on a 76-year old, 2,500-ft long, cement-lined, 24-in. cast iron water main in Louisville, KY from July through Septembe...

  4. Field Demonstration of Emerging Pipe Wall Integrity Assessment Technologies for Large Cast Iron Water Mains - Paper

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) sponsored a large-scale field demonstration of innovative leak detection/location and condition assessment technologies on a 76-year old, 2,000-ft long, cement-lined, 24-in. cast-iron water main in Louisville, KY from July through Se...

  5. HiQuant: Rapid Postquantification Analysis of Large-Scale MS-Generated Proteomics Data.

    PubMed

    Bryan, Kenneth; Jarboui, Mohamed-Ali; Raso, Cinzia; Bernal-Llinares, Manuel; McCann, Brendan; Rauch, Jens; Boldt, Karsten; Lynn, David J

    2016-06-03

    Recent advances in mass-spectrometry-based proteomics are now facilitating ambitious large-scale investigations of the spatial and temporal dynamics of the proteome; however, the increasing size and complexity of these data sets is overwhelming current downstream computational methods, specifically those that support the postquantification analysis pipeline. Here we present HiQuant, a novel application that enables the design and execution of a postquantification workflow, including common data-processing steps, such as assay normalization and grouping, and experimental replicate quality control and statistical analysis. HiQuant also enables the interpretation of results generated from large-scale data sets by supporting interactive heatmap analysis and also the direct export to Cytoscape and Gephi, two leading network analysis platforms. HiQuant may be run via a user-friendly graphical interface and also supports complete one-touch automation via a command-line mode. We evaluate HiQuant's performance by analyzing a large-scale, complex interactome mapping data set and demonstrate a 200-fold improvement in the execution time over current methods. We also demonstrate HiQuant's general utility by analyzing proteome-wide quantification data generated from both a large-scale public tyrosine kinase siRNA knock-down study and an in-house investigation into the temporal dynamics of the KSR1 and KSR2 interactomes. Download HiQuant, sample data sets, and supporting documentation at http://hiquant.primesdb.eu .
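
To make the kind of postquantification steps named here concrete (assay normalization, replicate grouping and quality control, and a per-protein statistical test), the sketch below runs them on a small synthetic intensity table with pandas and SciPy. The column layout, spike-in, and test choice are assumptions for illustration; this is not HiQuant's implementation.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic log-intensity matrix: 200 proteins x 3 control and 3 treatment replicates.
cols = ["ctrl_1", "ctrl_2", "ctrl_3", "treat_1", "treat_2", "treat_3"]
data = pd.DataFrame(rng.normal(20, 1, (200, 6)), columns=cols)
data.iloc[:20, 3:] += 1.5                      # spike in 20 truly changing proteins

# Assay normalization: align the median of every run to the grand median.
norm = data - data.median(axis=0) + data.median(axis=0).mean()

# Replicate QC: coefficient of variation within the control condition.
cv_ctrl = norm[cols[:3]].std(axis=1) / norm[cols[:3]].mean(axis=1)

# Per-protein statistics: Welch's t-test between conditions.
t, p = stats.ttest_ind(norm[cols[3:]], norm[cols[:3]], axis=1, equal_var=False)
results = pd.DataFrame({"mean_diff": norm[cols[3:]].mean(axis=1) - norm[cols[:3]].mean(axis=1),
                        "p_value": p, "cv_ctrl": cv_ctrl})
print(results.sort_values("p_value").head())
```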

  6. On the limitations of General Circulation Climate Models

    NASA Technical Reports Server (NTRS)

    Stone, Peter H.; Risbey, James S.

    1990-01-01

    General Circulation Models (GCMs) by definition calculate large-scale dynamical and thermodynamical processes and their associated feedbacks from first principles. This aspect of GCMs is widely believed to give them an advantage in simulating global scale climate changes as compared to simpler models which do not calculate the large-scale processes from first principles. However, it is pointed out that the meridional transports of heat simulated by GCMs used in climate change experiments differ from observational analyses and from other GCMs by as much as a factor of two. It is also demonstrated that GCM simulations of the large-scale transports of heat are sensitive to the (uncertain) subgrid-scale parameterizations. This leads to the question of whether current GCMs are in fact superior to simpler models for simulating temperature changes associated with global scale climate change.

  7. Evidence for Large Decadal Variability in the Tropical Mean Radiative Energy Budget

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Wong, Takmeng; Allan, Richard; Slingo, Anthony; Kiehl, Jeffrey T.; Soden, Brian J.; Gordon, C. T.; Miller, Alvin J.; Yang, Shi-Keng; Randall, David R.; hide

    2001-01-01

    It is widely assumed that variations in the radiative energy budget at large time and space scales are very small. We present new evidence from a compilation of over two decades of accurate satellite data that the top-of-atmosphere (TOA) tropical radiative energy budget is much more dynamic and variable than previously thought. We demonstrate that the radiation budget changes are caused by changes in tropical mean cloudiness. The results of several current climate model simulations fail to predict this large observed variation in the tropical energy budget. The missing variability in the models highlights the critical need to improve cloud modeling in the tropics to support improved prediction of tropical climate on inter-annual and decadal time scales. We believe that these data are the first rigorous demonstration of decadal time scale changes in the Earth's tropical cloudiness, and that they represent a new and necessary test of climate models.

  8. Carbon nanotube circuit integration up to sub-20 nm channel lengths.

    PubMed

    Shulaker, Max Marcel; Van Rethy, Jelle; Wu, Tony F; Liyanage, Luckshitha Suriyasena; Wei, Hai; Li, Zuanyi; Pop, Eric; Gielen, Georges; Wong, H-S Philip; Mitra, Subhasish

    2014-04-22

    Carbon nanotube (CNT) field-effect transistors (CNFETs) are a promising emerging technology projected to achieve over an order of magnitude improvement in energy-delay product, a metric of performance and energy efficiency, compared to silicon-based circuits. However, due to substantial imperfections inherent with CNTs, the promise of CNFETs has yet to be fully realized. Techniques to overcome these imperfections have yielded promising results, but thus far only at large technology nodes (1 μm device size). Here we demonstrate the first very large scale integration (VLSI)-compatible approach to realizing CNFET digital circuits at highly scaled technology nodes, with devices ranging from 90 nm to sub-20 nm channel lengths. We demonstrate inverters functioning at 1 MHz and a fully integrated CNFET infrared light sensor and interface circuit at 32 nm channel length. This demonstrates the feasibility of realizing more complex CNFET circuits at highly scaled technology nodes.

  9. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions.

    PubMed

    Pavlacky, David C; Lukacs, Paul M; Blakesley, Jennifer A; Skorkowsky, Robert C; Klute, David S; Hahn, Beth A; Dreitz, Victoria J; George, T Luke; Hanni, David J

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer's sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer's sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical design and analyses ensures reliable knowledge about bird populations that is relevant and integral to bird conservation at multiple scales.

  10. A statistically rigorous sampling design to integrate avian monitoring and management within Bird Conservation Regions

    PubMed Central

    Hahn, Beth A.; Dreitz, Victoria J.; George, T. Luke

    2017-01-01

    Monitoring is an essential component of wildlife management and conservation. However, the usefulness of monitoring data is often undermined by the lack of 1) coordination across organizations and regions, 2) meaningful management and conservation objectives, and 3) rigorous sampling designs. Although many improvements to avian monitoring have been discussed, the recommendations have been slow to emerge in large-scale programs. We introduce the Integrated Monitoring in Bird Conservation Regions (IMBCR) program designed to overcome the above limitations. Our objectives are to outline the development of a statistically defensible sampling design to increase the value of large-scale monitoring data and provide example applications to demonstrate the ability of the design to meet multiple conservation and management objectives. We outline the sampling process for the IMBCR program with a focus on the Badlands and Prairies Bird Conservation Region (BCR 17). We provide two examples for the Brewer’s sparrow (Spizella breweri) in BCR 17 demonstrating the ability of the design to 1) determine hierarchical population responses to landscape change and 2) estimate hierarchical habitat relationships to predict the response of the Brewer’s sparrow to conservation efforts at multiple spatial scales. The collaboration across organizations and regions provided economy of scale by leveraging a common data platform over large spatial scales to promote the efficient use of monitoring resources. We designed the IMBCR program to address the information needs and core conservation and management objectives of the participating partner organizations. Although it has been argued that probabilistic sampling designs are not practical for large-scale monitoring, the IMBCR program provides a precedent for implementing a statistically defensible sampling design from local to bioregional scales. We demonstrate that integrating conservation and management objectives with rigorous statistical design and analyses ensures reliable knowledge about bird populations that is relevant and integral to bird conservation at multiple scales. PMID:29065128

  11. Simulating the impact of the large-scale circulation on the 2-m temperature and precipitation climatology

    NASA Astrophysics Data System (ADS)

    Bowden, Jared H.; Nolte, Christopher G.; Otte, Tanya L.

    2013-04-01

    The impact of the simulated large-scale atmospheric circulation on the regional climate is examined using the Weather Research and Forecasting (WRF) model as a regional climate model. The purpose is to understand the potential need for interior grid nudging for dynamical downscaling of global climate model (GCM) output for air quality applications under a changing climate. In this study we downscale the NCEP-Department of Energy Atmospheric Model Intercomparison Project (AMIP-II) Reanalysis using three continuous 20-year WRF simulations: one simulation without interior grid nudging and two using different interior grid nudging methods. The biases in 2-m temperature and precipitation for the simulation without interior grid nudging are unreasonably large with respect to the North American Regional Reanalysis (NARR) over the eastern half of the contiguous United States (CONUS) during the summer when air quality concerns are most relevant. This study examines how these differences arise from errors in predicting the large-scale atmospheric circulation. It is demonstrated that the Bermuda high, which strongly influences the regional climate for much of the eastern half of the CONUS during the summer, is poorly simulated without interior grid nudging. In particular, two summers when the Bermuda high was west (1993) and east (2003) of its climatological position are chosen to illustrate problems in the large-scale atmospheric circulation anomalies. For both summers, WRF without interior grid nudging fails to simulate the placement of the upper-level anticyclonic (1993) and cyclonic (2003) circulation anomalies. The displacement of the large-scale circulation impacts the lower atmosphere moisture transport and precipitable water, affecting the convective environment and precipitation. Using interior grid nudging improves the large-scale circulation aloft and moisture transport/precipitable water anomalies, thereby improving the simulated 2-m temperature and precipitation. The results demonstrate that constraining the RCM to the large-scale features in the driving fields improves the overall accuracy of the simulated regional climate, and suggest that in the absence of such a constraint, the RCM will likely misrepresent important large-scale shifts in the atmospheric circulation under a future climate.

  12. Multilevel Latent Class Analysis for Large-Scale Educational Assessment Data: Exploring the Relation between the Curriculum and Students' Mathematical Strategies

    ERIC Educational Resources Information Center

    Fagginger Auer, Marije F.; Hickendorff, Marian; Van Putten, Cornelis M.; Béguin, Anton A.; Heiser, Willem J.

    2016-01-01

    A first application of multilevel latent class analysis (MLCA) to educational large-scale assessment data is demonstrated. This statistical technique addresses several of the challenges that assessment data offers. Importantly, MLCA allows modeling of the often ignored teacher effects and of the joint influence of teacher and student variables.…

  13. Large-scale DNA Barcode Library Generation for Biomolecule Identification in High-throughput Screens.

    PubMed

    Lyons, Eli; Sheridan, Paul; Tremmel, Georg; Miyano, Satoru; Sugano, Sumio

    2017-10-24

    High-throughput screens allow for the identification of specific biomolecules with characteristics of interest. In barcoded screens, DNA barcodes are linked to target biomolecules in a manner allowing for the target molecules making up a library to be identified by sequencing the DNA barcodes using Next Generation Sequencing. To be useful in experimental settings, the DNA barcodes in a library must satisfy certain constraints related to GC content, homopolymer length, Hamming distance, and blacklisted subsequences. Here we report a novel framework to quickly generate large-scale libraries of DNA barcodes for use in high-throughput screens. We show that our framework dramatically reduces the computation time required to generate large-scale DNA barcode libraries, compared with a naïve approach to DNA barcode library generation. As a proof of concept, we demonstrate that our framework is able to generate a library consisting of one million DNA barcodes for use in a fragment antibody phage display screening experiment. We also report generating a general-purpose one billion DNA barcode library, the largest such library yet reported in the literature. Our results demonstrate the value of our novel large-scale DNA barcode library generation framework for use in high-throughput screening applications.
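    For illustration of the constraints listed in the abstract (GC content, homopolymer length, Hamming distance, blacklisted subsequences), the sketch below filters randomly generated barcodes with a deliberately naive brute-force check; the published framework is designed precisely to avoid this slow approach, and all thresholds here are assumptions.

```python
# Naive illustration of the barcode constraints described above; the paper's
# framework is far faster. Barcode length, thresholds and the blacklist are
# illustrative choices only.
import itertools
import random

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def max_homopolymer(seq):
    return max(len(list(g)) for _, g in itertools.groupby(seq))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def valid(seq, accepted, blacklist, min_dist=3):
    if not 0.4 <= gc_content(seq) <= 0.6:
        return False
    if max_homopolymer(seq) > 2:
        return False
    if any(bad in seq for bad in blacklist):
        return False
    return all(hamming(seq, other) >= min_dist for other in accepted)

random.seed(0)
blacklist = ["GGATCC"]                 # e.g. a restriction site to exclude (illustrative)
accepted = []
while len(accepted) < 100:
    candidate = "".join(random.choice("ACGT") for _ in range(12))
    if valid(candidate, accepted, blacklist):
        accepted.append(candidate)
print(len(accepted), accepted[:3])
```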

  14. Large-Scale Demonstration of Liquid Hydrogen Storage with Zero Boiloff for In-Space Applications

    NASA Technical Reports Server (NTRS)

    Hastings, L. J.; Bryant, C. B.; Flachbart, R. H.; Holt, K. A.; Johnson, E.; Hedayat, A.; Hipp, B.; Plachta, D. W.

    2010-01-01

    Cryocooler and passive insulation technology advances have substantially improved prospects for zero-boiloff cryogenic storage. Therefore, a cooperative effort by NASA's Ames Research Center, Glenn Research Center, and Marshall Space Flight Center (MSFC) was implemented to develop zero-boiloff concepts for in-space cryogenic storage. Described herein is one program element: a large-scale, zero-boiloff demonstration using the MSFC multipurpose hydrogen test bed (MHTB). A commercial cryocooler was interfaced with an existing MHTB spray bar mixer and insulation system in a manner that enabled a balance between incoming and extracted thermal energy.

  15. Output Control Technologies for a Large-scale PV System Considering Impacts on a Power Grid

    NASA Astrophysics Data System (ADS)

    Kuwayama, Akira

    The mega-solar demonstration project named “Verification of Grid Stabilization with Large-scale PV Power Generation systems” was completed in March 2011 at Wakkanai, the northernmost city of Japan. The major objectives of this project were to evaluate adverse impacts of large-scale PV power generation systems connected to the power grid and to develop output control technologies with an integrated battery storage system. This paper describes the outline and results of this project. The results show the effectiveness of the battery storage system and of the proposed output control methods in enabling a large-scale PV system to ensure stable operation of power grids. NEDO, the New Energy and Industrial Technology Development Organization of Japan, conducted this project, and HEPCO, Hokkaido Electric Power Co., Inc., managed the overall project.

  16. Propulsion simulator for magnetically-suspended wind tunnel models

    NASA Technical Reports Server (NTRS)

    Joshi, Prakash B.; Goldey, C. L.; Sacco, G. P.; Lawing, Pierce L.

    1991-01-01

    The objective of phase two of a current investigation sponsored by NASA Langley Research Center is to demonstrate the measurement of aerodynamic forces/moments, including the effects of exhaust gases, in magnetic suspension and balance system (MSBS) wind tunnels. Two propulsion simulator models are being developed: a small-scale and a large-scale unit, both employing compressed, liquefied carbon dioxide as propellant. The small-scale unit was designed, fabricated, and statically tested at Physical Sciences Inc. (PSI). The large-scale simulator is currently in the preliminary design stage. The small-scale simulator design and development are presented, and the data from its static firing on a thrust stand are discussed. The analysis of these data provides important information for the design of the large-scale unit. A description of the preliminary design of the device is also presented.

  17. Selection and Manufacturing of Membrane Materials for Solar Sails

    NASA Technical Reports Server (NTRS)

    Bryant, Robert G.; Seaman, Shane T.; Wilkie, W. Keats; Miyaucchi, Masahiko; Working, Dennis C.

    2013-01-01

    Commercial metallized polyimide or polyester films and hand-assembly techniques are acceptable for small solar sail technology demonstrations, although scaling this approach to large sail areas is impractical. Opportunities now exist to use new polymeric materials specifically designed for solar sailing applications, and take advantage of integrated sail manufacturing to enable large-scale solar sail construction. This approach has, in part, been demonstrated on the JAXA IKAROS solar sail demonstrator, and NASA Langley Research Center is now developing capabilities to produce ultrathin membranes for solar sails by integrating resin synthesis with film forming and sail manufacturing processes. This paper will discuss the selection and development of polymer material systems for space, and these new processes for producing ultrathin high-performance solar sail membrane films.

  18. ORNL Demonstrates Large-Scale Technique to Produce Quantum Dots

    ScienceCinema

    Graham, David; Moon, Ji-Won

    2018-01-16

    A method to produce significant amounts of semiconducting nanoparticles for light-emitting displays, sensors, solar panels and biomedical applications has gained momentum with a demonstration by researchers at Oak Ridge National Laboratory.

  19. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we proposed the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
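    The core idea, using a small set of prototype vectors to approximate the full kernel matrix at low rank, can be illustrated with a generic Nyström-style sketch; this is not the PVM code itself, and the data, kernel width, and number of prototypes are arbitrary.

```python
# Generic Nystrom-style illustration of the prototype idea: a small set of
# prototype (landmark) points yields a low-rank approximation of the full
# kernel matrix. Data, kernel width and prototype count are illustrative.
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))                         # "large" data set
P = X[rng.choice(len(X), 100, replace=False)]          # 100 prototype vectors

K_np = rbf(X, P)                                       # n x m cross-kernel
K_pp = rbf(P, P)                                       # m x m prototype kernel
K_approx = K_np @ np.linalg.pinv(K_pp) @ K_np.T        # rank-m approximation of the n x n kernel

K_exact = rbf(X[:300], X[:300])                        # compare a sub-block with the exact kernel
err = np.abs(K_approx[:300, :300] - K_exact).max()
print(f"max abs error on the sub-block: {err:.3f}")
```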

  20. SQDFT: Spectral Quadrature method for large-scale parallel O(N) Kohn–Sham calculations at high temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj

    We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn–Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw–Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, which are then approximated by spatially localized Clenshaw–Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near-perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.
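    The notion of approximating bilinear forms of a matrix function by Clenshaw–Curtis (Chebyshev) quadrature can be sketched on a toy symmetric matrix: the Fermi–Dirac function is expanded in Chebyshev polynomials and applied through the three-term recurrence, avoiding diagonalization. The matrix, chemical potential, and smearing below are illustrative assumptions, not SQDFT's actual discretization.

```python
# Toy sketch of spectral quadrature: evaluate u^T f(H) u for the Fermi-Dirac
# function f without diagonalizing H, by expanding f in Chebyshev polynomials
# (Clenshaw-Curtis quadrature) and applying the three-term recurrence.
# H, mu, kT and the expansion order are illustrative assumptions.
import numpy as np

def chebyshev_coeffs(f, order):
    # Coefficients c_k of f(x) ~ sum_k c_k T_k(x), sampled at Chebyshev nodes on [-1, 1].
    n = order + 1
    k = np.arange(n)
    x = np.cos(np.pi * (k + 0.5) / n)
    fx = f(x)
    c = np.array([2.0 / n * np.sum(fx * np.cos(np.pi * j * (k + 0.5) / n)) for j in range(n)])
    c[0] *= 0.5
    return c

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 200))
H = (A + A.T) / (2 * np.sqrt(200))          # toy symmetric "Hamiltonian", spectrum roughly in [-1.4, 1.4]
lam = np.linalg.norm(H, 2) * 1.01           # crude spectral bound
Hs = H / lam                                # rescale spectrum into [-1, 1]

mu, kT = 0.1, 0.1                           # illustrative chemical potential and smearing
fermi = lambda x: 1.0 / (1.0 + np.exp((x * lam - mu) / kT))
c = chebyshev_coeffs(fermi, order=80)

u = rng.normal(size=200)
t_prev, t_curr = u, Hs @ u                  # T_0(Hs) u and T_1(Hs) u
acc = c[0] * t_prev + c[1] * t_curr
for ck in c[2:]:
    t_prev, t_curr = t_curr, 2 * Hs @ t_curr - t_prev
    acc += ck * t_curr                      # accumulate c_k T_k(Hs) u

w, V = np.linalg.eigh(H)                    # reference: explicit diagonalization
exact = u @ (V @ (fermi(w / lam) * (V.T @ u)))
print(u @ acc, exact)                       # the two bilinear forms should agree closely
```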

  1. SQDFT: Spectral Quadrature method for large-scale parallel O(N) Kohn–Sham calculations at high temperature

    DOE PAGES

    Suryanarayana, Phanish; Pratapa, Phanisri P.; Sharma, Abhiraj; ...

    2017-12-07

    We present SQDFT: a large-scale parallel implementation of the Spectral Quadrature (SQ) method for O(N) Kohn–Sham Density Functional Theory (DFT) calculations at high temperature. Specifically, we develop an efficient and scalable finite-difference implementation of the infinite-cell Clenshaw–Curtis SQ approach, in which results for the infinite crystal are obtained by expressing quantities of interest as bilinear forms or sums of bilinear forms, which are then approximated by spatially localized Clenshaw–Curtis quadrature rules. We demonstrate the accuracy of SQDFT by showing systematic convergence of energies and atomic forces with respect to SQ parameters to reference diagonalization results, and convergence with discretization to established planewave results, for both metallic and insulating systems. We further demonstrate that SQDFT achieves excellent strong and weak parallel scaling on computer systems consisting of tens of thousands of processors, with near-perfect O(N) scaling with system size and wall times as low as a few seconds per self-consistent field iteration. Finally, we verify the accuracy of SQDFT in large-scale quantum molecular dynamics simulations of aluminum at high temperature.

  2. Retaining large and adjustable elastic strains of kilogram-scale Nb nanowires [Better Superconductor by Elastic Strain Engineering: Kilogram-scale Free-Standing Niobium Metal Composite with Large Retained Elastic Strains]

    DOE PAGES

    Hao, Shijie; Cui, Lishan; Wang, Hua; ...

    2016-02-10

    Crystals held at ultrahigh elastic strains and stresses may exhibit exceptional physical and chemical properties. Individual metallic nanowires can sustain ultra-large elastic strains of 4-7%. However, retaining elastic strains of such magnitude in kilogram-scale nanowires is challenging. Here, we find that under active load, ~5.6% elastic strain can be achieved in Nb nanowires in a composite material. Moreover, large tensile (2.8%) and compressive (-2.4%) elastic strains can be retained in kilogram-scale Nb nanowires when the composite is unloaded to a free-standing condition. It is then demonstrated that the retained tensile elastic strains of Nb nanowires significantly increase their superconducting transition temperature and critical magnetic fields, corroborating ab initio calculations based on BCS theory. This free-standing nanocomposite design paradigm opens new avenues for retaining ultra-large elastic strains in great quantities of nanowires and elastic-strain-engineering at industrial scale.

  3. Propulsion simulator for magnetically-suspended wind tunnel models

    NASA Technical Reports Server (NTRS)

    Joshi, P. B.; Malonson, M. R.; Sacco, G. P.; Goldey, C. L.; Garbutt, Keith; Goodyer, M.

    1992-01-01

    In order to demonstrate the measurement of aerodynamic forces/moments, including the effects of exhaust jets in Magnetic Suspension and Balance System (MSBS) wind tunnels, two propulsion simulator models were developed at Physical Sciences Inc. (PSI). Both the small-scale model (1 in. diameter X 8 in. long) and the large-scale model (2.5 in. diameter X 15 in. long) employed compressed, liquefied carbon dioxide as a propellant. The small-scale simulator, made from a highly magnetizable iron alloy, was demonstrated in the 7 in. MSBS wind tunnel at the University of Southampton. It developed a maximum thrust of approximately 1.3 lbf with a 0.098 in. diameter nozzle and 0.7 lbf with a 0.295 in. diameter nozzle. The Southampton MSBS was able to control the simulator at angles of attack up to 20 deg. The large-scale simulator was demonstrated to operate in both a steady-state and a pulse mode via a miniaturized solenoid valve. It developed a stable and repeatable thrust of 2.75 lbf over a period of 4 s and a nozzle pressure ratio (NPR) of 5.

  4. Regional reanalysis without local data: Exploiting the downscaling paradigm

    NASA Astrophysics Data System (ADS)

    von Storch, Hans; Feser, Frauke; Geyer, Beate; Klehmet, Katharina; Li, Delei; Rockel, Burkhardt; Schubert-Frisius, Martina; Tim, Nele; Zorita, Eduardo

    2017-08-01

    This paper demonstrates two important aspects of regional dynamical downscaling of multidecadal atmospheric reanalysis. First, that in this way skillful regional descriptions of multidecadal climate variability may be constructed in regions with little or no local data. Second, that the concept of large-scale constraining allows global downscaling, so that global reanalyses may be completed by the addition of consistent detail in all regions of the world. Global reanalyses suffer from inhomogeneities. However, their large-scale components are mostly homogeneous; therefore, the concept of downscaling may be applied to homogeneously complement the large-scale state of the reanalyses with regional detail, wherever the condition of homogeneity of the description of large scales is fulfilled. Technically, this can be done by dynamical downscaling using a regional or global climate model whose large scales are constrained by spectral nudging. This approach has been developed and tested for the region of Europe, and a skillful representation of regional weather risks, in particular marine risks, was identified. We have run this system in regions with reduced or absent local data coverage, such as Central Siberia, the Bohai and Yellow Sea, Southwestern Africa, and the South Atlantic. Also, a global simulation was computed, which adds regional features to prescribed global dynamics. Our cases demonstrate that spatially detailed reconstructions of the climate state and its change in the recent three to six decades add useful supplementary information to existing observational data for midlatitude and subtropical regions of the world.
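    As a toy illustration of spectral nudging, the sketch below relaxes only the longest-wave Fourier components of a one-dimensional "regional model" state toward a "driving" field while leaving the small scales untouched; the cutoff wavenumber and relaxation strength are arbitrary choices, not the values used in the study.

```python
# Toy 1-D illustration of large-scale (spectral) nudging: only the longest-wave
# Fourier components are relaxed toward the driving field, small scales evolve
# freely. Cutoff and relaxation coefficient are illustrative assumptions.
import numpy as np

n, cutoff, alpha = 256, 4, 0.1                        # grid points, nudged wavenumbers, strength

def nudge_large_scales(model, driving):
    fm, fd = np.fft.rfft(model), np.fft.rfft(driving)
    fm[:cutoff + 1] += alpha * (fd[:cutoff + 1] - fm[:cutoff + 1])   # relax large scales only
    return np.fft.irfft(fm, n)

x = np.linspace(0, 2 * np.pi, n, endpoint=False)
driving = np.sin(x)                                   # "reanalysis" large-scale state
model = 0.6 * np.sin(x) + 0.3 * np.sin(12 * x)        # drifted large scale plus regional detail

for _ in range(50):                                   # repeated nudging steps
    model = nudge_large_scales(model, driving)

# Large-scale amplitude converges toward the driving field; wavenumber-12 detail survives.
print(np.abs(np.fft.rfft(model))[[1, 12]] / (n / 2))
```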

  5. Large-scale delamination of multi-layers transition metal carbides and carbonitrides “MXenes”

    DOE PAGES

    Naguib, Michael; Unocic, Raymond R.; Armstrong, Beth L.; ...

    2015-04-17

    Herein we report on a general approach to delaminate multi-layered MXenes using an organic base to induce swelling that in turn weakens the bonds between the MX layers. Simple agitation or mild sonication of the swollen MXene in water resulted in the large-scale delamination of the MXene layers. The delamination method is demonstrated for vanadium carbide and titanium carbonitride MXenes.

  6. Newly invented biobased materials from low-carbon, diverted waste fibers: research methods, testing, and full-scale application in a case study structure

    Treesearch

    Julee A Herdt; John Hunt; Kellen Schauermann

    2016-01-01

    This project demonstrates newly invented, biobased construction materials developed by applying low-carbon biomass waste sources through the authors' engineered fiber processes and technology. If manufactured and applied at large scale, the project's inventions can divert large volumes of cellulose waste into high-performance, low-embodied-energy, environmental construction...

  7. Large-Scale Production of Fuel and Feed from Marine Microalgae

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huntley, Mark

    2015-09-30

    In summary, this Consortium has demonstrated a fully integrated process for the production of biofuels and high-value nutritional bioproducts at pre-commercial scale. We have achieved unprecedented yields of algal oil, and converted the oil to viable fuels. We have demonstrated the potential value of the residual product as a viable feed ingredient for many important animals in the global food supply.

  8. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions

    PubMed Central

    Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.

    2017-01-01

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyse the performance of a large-scale digital simulator, and find that a fidelity of around 70% is realizable for π-pulse infidelities below 10⁻⁵ in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period. PMID:28401945

  9. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions.

    PubMed

    Taylor, Richard L; Bentley, Christopher D B; Pedernales, Julen S; Lamata, Lucas; Solano, Enrique; Carvalho, André R R; Hope, Joseph J

    2017-04-12

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyse the performance of a large-scale digital simulator, and find that a fidelity of around 70% is realizable for π-pulse infidelities below 10⁻⁵ in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period.

  10. Selecting habitat to survive: the impact of road density on survival in a large carnivore.

    PubMed

    Basille, Mathieu; Van Moorter, Bram; Herfindal, Ivar; Martin, Jodie; Linnell, John D C; Odden, John; Andersen, Reidar; Gaillard, Jean-Michel

    2013-01-01

    Habitat selection studies generally assume that animals select habitat and food resources at multiple scales to maximise their fitness. However, animals sometimes prefer habitats of apparently low quality, especially when considering the costs associated with spatially heterogeneous human disturbance. We used spatial variation in human disturbance, and its consequences on lynx survival, a direct fitness component, to test the Hierarchical Habitat Selection hypothesis from a population of Eurasian lynx Lynx lynx in southern Norway. Data from 46 lynx monitored with telemetry indicated that a high proportion of forest strongly reduced the risk of mortality from legal hunting at the home range scale, while increasing road density strongly increased such risk at the finer scale within the home range. We found hierarchical effects of the impact of human disturbance, with a higher road density at a large scale reinforcing its negative impact at a fine scale. Conversely, we demonstrated that lynx shifted their habitat selection to avoid areas with the highest road densities within their home ranges, thus supporting a compensatory mechanism at fine scale enabling lynx to mitigate the impact of large-scale disturbance. Human impact, positively associated with high road accessibility, was thus a stronger driver of lynx space use at a finer scale, with home range characteristics nevertheless constraining habitat selection. Our study demonstrates the truly hierarchical nature of habitat selection, which aims at maximising fitness by selecting against limiting factors at multiple spatial scales, and indicates that scale-specific heterogeneity of the environment is driving individual spatial behaviour, by means of trade-offs across spatial scales.

  11. Recovering the full velocity and density fields from large-scale redshift-distance samples

    NASA Technical Reports Server (NTRS)

    Bertschinger, Edmund; Dekel, Avishai

    1989-01-01

    A new method for extracting the large-scale three-dimensional velocity and mass density fields from measurements of the radial peculiar velocities is presented. Galaxies are assumed to trace the velocity field rather than the mass. The key assumption made is that the Lagrangian velocity field has negligible vorticity, as might be expected from perturbations that grew by gravitational instability. By applying the method to cosmological N-body simulations, it is demonstrated that it accurately reconstructs the velocity field. This technique promises a direct determination of the mass density field and the initial conditions for the formation of large-scale structure from galaxy peculiar velocity surveys.

  12. Comparison of WinSLAMM Modeled Results with Monitored Biofiltration Data

    EPA Science Inventory

    The US EPA’s Green Infrastructure Demonstration project in Kansas City incorporates small-scale monitoring of individual biofiltration devices along with large-scale watershed monitoring. The test watershed (100 acres) is saturated with green infrastructure components (includin...

  13. Quarter Scale RLV Multi-Lobe LH2 Tank Test Program

    NASA Technical Reports Server (NTRS)

    Blum, Celia; Puissegur, Dennis; Tidwell, Zeb; Webber, Carol

    1998-01-01

    Thirty cryogenic pressure cycles have been completed on the Lockheed Martin Michoud Space Systems quarter-scale RLV composite multi-lobe liquid hydrogen propellant tank assembly. These cycles complete the initial phases of testing and demonstrate technologies key to the success of large-scale composite cryogenic tankage for the X-33, RLV, and other future launch vehicles.

  14. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
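    A minimal sketch of the kind of robust estimator advocated above, Huber regression via iteratively reweighted least squares, is given below on synthetic data with a few gross outliers; the tuning constant and data are illustrative, and the study's actual pipeline (permutation control, RPBI) is not reproduced.

```python
# Huber robust regression via iteratively reweighted least squares (IRLS).
# Synthetic data; tuning constant k = 1.345 is the conventional choice but
# is an assumption here, not a value taken from the study.
import numpy as np

def huber_irls(X, y, k=1.345, n_iter=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]                    # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12   # robust scale (MAD)
        u = r / s
        w = np.where(np.abs(u) <= k, 1.0, k / np.abs(u))           # Huber weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 2.0 + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)
y[:15] += 10                                                       # a few gross outliers
print("OLS  :", np.linalg.lstsq(X, y, rcond=None)[0])
print("Huber:", huber_irls(X, y))
```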

  15. Trenton Free-Fare Demonstration Project

    DOT National Transportation Integrated Search

    1978-12-01

    The "Trenton Free-Fare Demonstration" is the first large-scale test of free transit in the U.S. The New Jersey Department of Transportation, in cooperation with UMTA, Mercer County, and Mercer County Improvement Authority, is administering an Off-Pea...

  16. Statistical analysis of mesoscale rainfall: Dependence of a random cascade generator on large-scale forcing

    NASA Technical Reports Server (NTRS)

    Over, Thomas M.; Gupta, Vijay K.

    1994-01-01

    Under the theory of independent and identically distributed random cascades, the probability distribution of the cascade generator determines the spatial and the ensemble properties of spatial rainfall. Three sets of radar-derived rainfall data in space and time are analyzed to estimate the probability distribution of the generator. A detailed comparison between instantaneous scans of spatial rainfall and simulated cascades using the scaling properties of the marginal moments is carried out. This comparison highlights important similarities and differences between the data and the random cascade theory. Differences are quantified and measured for the three datasets. Evidence is presented to show that the scaling properties of the rainfall can be captured to the first order by a random cascade with a single parameter. The dependence of this parameter on forcing by the large-scale meteorological conditions, as measured by the large-scale spatial average rain rate, is investigated for these three datasets. The data show that this dependence can be captured by a one-to-one function. Since the large-scale average rain rate can be diagnosed from the large-scale dynamics, this relationship demonstrates an important linkage between the large-scale atmospheric dynamics and the statistical cascade theory of mesoscale rainfall. Potential application of this research to parameterization of runoff from the land surface and regional flood frequency analysis is briefly discussed, and open problems for further research are presented.
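    A generic discrete multiplicative cascade with a one-parameter, unit-mean lognormal generator illustrates the construction discussed above; the paper's fitted generator and its dependence on large-scale forcing differ in detail, and the spread parameter here is arbitrary.

```python
# Generic 1-D random cascade: rain rate is built by repeatedly splitting each
# cell in two and multiplying by i.i.d. unit-mean lognormal weights. The single
# parameter sigma stands in for the one-parameter generator discussed above;
# the study's fitted generator differs in detail.
import numpy as np

def random_cascade(levels, sigma, mean_rate=1.0, rng=None):
    rng = rng or np.random.default_rng()
    field = np.array([mean_rate])
    for _ in range(levels):
        # Each cell splits into two; weights are scaled so that E[W] = 1.
        w = rng.lognormal(mean=-0.5 * sigma**2, sigma=sigma, size=2 * field.size)
        field = np.repeat(field, 2) * w
    return field

rng = np.random.default_rng(0)
field = random_cascade(levels=10, sigma=0.5, mean_rate=2.0, rng=rng)   # 1024 cells
print(field.mean(), field.max())   # ensemble mean ~2.0; heavy-tailed local peaks
```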

  17. An Overview of NASA Efforts on Zero Boiloff Storage of Cryogenic Propellants

    NASA Technical Reports Server (NTRS)

    Hastings, Leon J.; Plachta, D. W.; Salerno, L.; Kittel, P.; Haynes, Davy (Technical Monitor)

    2001-01-01

    Future mission planning within NASA has increasingly motivated consideration of cryogenic propellant storage durations on the order of years as opposed to a few weeks or months. Furthermore, the advancement of cryocooler and passive insulation technologies in recent years has substantially improved the prospects for zero-boiloff storage of cryogenics. Accordingly, a cooperative effort by NASA's Ames Research Center (ARC), Glenn Research Center (GRC), and Marshall Space Flight Center (MSFC) has been implemented to develop and demonstrate "zero boiloff" concepts for in-space storage of cryogenic propellants, particularly liquid hydrogen and oxygen. ARC is leading the development of flight-type cryocoolers, GRC the subsystem development and small-scale testing, and MSFC the large-scale and integrated system-level testing. Thermal and fluid modeling involves a combined effort by the three Centers. Recent accomplishments include: 1) development of "zero boiloff" analytical modeling techniques for sizing the storage tankage, passive insulation, cryocooler, power source mass, and radiators; 2) an early subscale demonstration with liquid hydrogen; 3) procurement of a flight-type 10 watt, 95 K pulse tube cryocooler for liquid oxygen storage; and 4) assembly of a large-scale test article for an early demonstration of the integrated operation of passive insulation, destratification/pressure control, and cryocooler (commercial unit) subsystems to achieve zero-boiloff storage of liquid hydrogen. Near-term plans include the large-scale integrated system demonstration testing this summer, subsystem testing of the flight-type pulse-tube cryocooler with liquid nitrogen (oxygen simulant), and continued development of a flight-type liquid hydrogen pulse tube cryocooler.

  18. Fabrication of a 3D micro/nano dual-scale carbon array and its demonstration as the microelectrodes for supercapacitors

    NASA Astrophysics Data System (ADS)

    Jiang, Shulan; Shi, Tielin; Gao, Yang; Long, Hu; Xi, Shuang; Tang, Zirong

    2014-04-01

    An easily accessible method is proposed for the fabrication of a 3D micro/nano dual-scale carbon array with a large surface area. The process mainly consists of three critical steps. Firstly, a hemispherical photoresist micro-array was obtained by a cost-effective nanoimprint lithography process. Then the micro-array was transformed into hierarchical structures with longitudinal nanowires on the microstructure surface by oxygen plasma etching. Finally, the micro/nano dual-scale carbon array was fabricated by carbonizing these hierarchical photoresist structures. It has also been demonstrated that the micro/nano dual-scale carbon array can be used as microelectrodes for supercapacitors by the electrodeposition of a manganese dioxide (MnO2) film onto the hierarchical carbon structures, with greatly enhanced electrochemical performance. The specific gravimetric capacitance of the deposited micro/nano dual-scale microelectrodes is estimated to be 337 F g⁻¹ at a scan rate of 5 mV s⁻¹. This proposed approach to fabricating a micro/nano dual-scale carbon array provides a facile route to large-scale microstructure manufacturing for a wide variety of applications, including sensors and on-chip energy storage devices.

  19. Generalized Chirp Scaling Combined with Baseband Azimuth Scaling Algorithm for Large Bandwidth Sliding Spotlight SAR Imaging

    PubMed Central

    Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-01-01

    This paper presents an efficient and precise imaging algorithm for large bandwidth sliding spotlight synthetic aperture radar (SAR). The existing sub-aperture processing method based on the baseband azimuth scaling (BAS) algorithm cannot cope with the high-order phase coupling along the range and azimuth dimensions. This coupling problem causes defocusing along the range and azimuth dimensions. This paper proposes a generalized chirp scaling (GCS)-BAS processing algorithm, which is based on the GCS algorithm. It successfully mitigates the defocusing along the range dimension of a sub-aperture of the large bandwidth sliding spotlight SAR, as well as the high-order phase coupling along the range and azimuth dimensions. Additionally, azimuth focusing can be achieved by this azimuth scaling method. Simulation results demonstrate the ability of the GCS-BAS algorithm to process large bandwidth sliding spotlight SAR data. It is shown that great improvements in focus depth and imaging accuracy are obtained with the GCS-BAS algorithm. PMID:28555057

  20. A Chinaman's Chance in Civil Rights Demonstration: A Case Study.

    ERIC Educational Resources Information Center

    Sim, Yawsoon

    A traffic incident in April of 1975 developed into an unprecedented civil rights demonstration by Chinese residents in New York City's Chinatown in May of that year. This paper attempts to trace the factors which led to this large scale demonstration and analyze the development of decision making in this case. The demonstration was the result of…

  1. Jumpstarting commercial-scale CO2 capture and storage with ethylene production and enhanced oil recovery in the US Gulf

    DOE PAGES

    Middleton, Richard S.; Levine, Jonathan S.; Bielicki, Jeffrey M.; ...

    2015-04-27

    CO2 capture, utilization, and storage (CCUS) technology has yet to be widely deployed at a commercial scale despite multiple high-profile demonstration projects. We suggest that developing a large-scale, visible, and financially viable CCUS network could potentially overcome many barriers to deployment and jumpstart commercial-scale CCUS. To date, substantial effort has focused on technology development to reduce the costs of CO2 capture from coal-fired power plants. Here, we propose that near-term investment could focus on implementing CO2 capture at facilities that produce high-value chemicals/products. These facilities can absorb the expected impact of the marginal increase in the cost of production on the price of their product, due to the addition of CO2 capture, more readily than coal-fired power plants. A financially viable demonstration of a large-scale CCUS network requires offsetting the costs of CO2 capture by using the CO2 as an input to the production of market-viable products. As a result, we demonstrate this alternative development path with the example of an integrated CCUS system where CO2 is captured from ethylene producers and used for enhanced oil recovery in the U.S. Gulf Coast region.

  2. Instrumentation Development for Large Scale Hypersonic Inflatable Aerodynamic Decelerator Characterization

    NASA Technical Reports Server (NTRS)

    Swanson, Gregory T.; Cassell, Alan M.

    2011-01-01

    Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles have been reached. The Inflatable Re-entry Vehicle Experiment (IRVE) has successfully demonstrated this technology as a viable candidate with a sub-orbital flight of a 3.0 m diameter vehicle. To further this technology, large-scale HIADs (6.0–8.5 m) must be developed and tested. To characterize the performance of large-scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.

  3. The structure of supersonic jet flow and its radiated sound

    NASA Technical Reports Server (NTRS)

    Mankbadi, Reda R.; Hayder, M. E.; Povinelli, Louis A.

    1993-01-01

    Large-eddy simulation of a supersonic jet is presented with emphasis on capturing the unsteady features of the flow pertinent to sound emission. A high-accuracy numerical scheme is used to solve the filtered, unsteady, compressible Navier-Stokes equations while modelling the subgrid-scale turbulence. For random inflow disturbance, the wave-like feature of the large-scale structure is demonstrated. The large-scale structure was then enhanced by imposing harmonic disturbances to the inflow. The limitation of using the full Navier-Stokes equation to calculate the far-field sound is discussed. Application of Lighthill's acoustic analogy is given with the objective of highlighting the difficulties that arise from the non-compactness of the source term.

  4. PedsQL™ Multidimensional Fatigue Scale in sickle cell disease: feasibility, reliability, and validity.

    PubMed

    Panepinto, Julie A; Torres, Sylvia; Bendo, Cristiane B; McCavit, Timothy L; Dinu, Bogdan; Sherman-Bien, Sandra; Bemrich-Stolz, Christy; Varni, James W

    2014-01-01

    Sickle cell disease (SCD) is an inherited blood disorder characterized by a chronic hemolytic anemia that can contribute to fatigue and global cognitive impairment in patients. The study objective was to report on the feasibility, reliability, and validity of the PedsQL™ Multidimensional Fatigue Scale in SCD for pediatric patient self-report ages 5-18 years and parent proxy-report for ages 2-18 years. This was a cross-sectional multi-site study whereby 240 pediatric patients with SCD and 303 parents completed the 18-item PedsQL™ Multidimensional Fatigue Scale. Participants also completed the PedsQL™ 4.0 Generic Core Scales. The PedsQL™ Multidimensional Fatigue Scale evidenced excellent feasibility, excellent reliability for the Total Scale Scores (patient self-report α = 0.90; parent proxy-report α = 0.95), and acceptable reliability for the three individual scales (patient self-report α = 0.77-0.84; parent proxy-report α = 0.90-0.97). Intercorrelations of the PedsQL™ Multidimensional Fatigue Scale with the PedsQL™ Generic Core Scales were predominantly in the large (≥0.50) range, supporting construct validity. PedsQL™ Multidimensional Fatigue Scale Scores were significantly worse with large effect sizes (≥0.80) for patients with SCD than for a comparison sample of healthy children, supporting known-groups discriminant validity. Confirmatory factor analysis demonstrated an acceptable to excellent model fit in SCD. The PedsQL™ Multidimensional Fatigue Scale demonstrated acceptable to excellent measurement properties in SCD. The results demonstrate the relative severity of fatigue symptoms in pediatric patients with SCD, indicating the potential clinical utility of multidimensional assessment of fatigue in patients with SCD in clinical research and practice. © 2013 Wiley Periodicals, Inc.

  5. PedsQL™ Multidimensional Fatigue Scale in Sickle Cell Disease: Feasibility, Reliability and Validity

    PubMed Central

    Panepinto, Julie A.; Torres, Sylvia; Bendo, Cristiane B.; McCavit, Timothy L.; Dinu, Bogdan; Sherman-Bien, Sandra; Bemrich-Stolz, Christy; Varni, James W.

    2013-01-01

    Background Sickle cell disease (SCD) is an inherited blood disorder characterized by a chronic hemolytic anemia that can contribute to fatigue and global cognitive impairment in patients. The study objective was to report on the feasibility, reliability, and validity of the PedsQL™ Multidimensional Fatigue Scale in SCD for pediatric patient self-report ages 5–18 years and parent proxy-report for ages 2–18 years. Procedure This was a cross-sectional multi-site study whereby 240 pediatric patients with SCD and 303 parents completed the 18-item PedsQL™ Multidimensional Fatigue Scale. Participants also completed the PedsQL™ 4.0 Generic Core Scales. Results The PedsQL™ Multidimensional Fatigue Scale evidenced excellent feasibility, excellent reliability for the Total Scale Scores (patient self-report α = 0.90; parent proxy-report α = 0.95), and acceptable reliability for the three individual scales (patient self-report α = 0.77–0.84; parent proxy-report α = 0.90–0.97). Intercorrelations of the PedsQL™ Multidimensional Fatigue Scale with the PedsQL™ Generic Core Scales were predominantly in the large (≥ 0.50) range, supporting construct validity. PedsQL™ Multidimensional Fatigue Scale Scores were significantly worse with large effect sizes (≥0.80) for patients with SCD than for a comparison sample of healthy children, supporting known-groups discriminant validity. Confirmatory factor analysis demonstrated an acceptable to excellent model fit in SCD. Conclusions The PedsQL™ Multidimensional Fatigue Scale demonstrated acceptable to excellent measurement properties in SCD. The results demonstrate the relative severity of fatigue symptoms in pediatric patients with SCD, indicating the potential clinical utility of multidimensional assessment of fatigue in patients with SCD in clinical research and practice. PMID:24038960
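    The reliability figures quoted above are Cronbach's alpha values; for reference, the sketch below computes alpha from a synthetic respondent-by-item matrix using the standard formula (the data are simulated, not the PedsQL sample).

```python
# Cronbach's alpha, the internal-consistency statistic reported above, computed
# from a simulated respondent-by-item score matrix. Only the formula matters;
# the data are synthetic stand-ins.
import numpy as np

def cronbach_alpha(items):
    # items: respondents x items matrix of scores
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(240, 1))                       # shared "fatigue" trait
items = latent + 0.7 * rng.normal(size=(240, 18))        # 18 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```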

  6. Large scale static tests of a tilt-nacelle V/STOL propulsion/attitude control system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The concept of a combined V/STOL propulsion and aircraft attitude control system was subjected to large scale engine tests. The tilt nacelle/attitude control vane package consisted of the T55 powered Hamilton Standard Q-Fan demonstrator. Vane forces, moments, thermal and acoustic characteristics as well as the effects on propulsion system performance were measured under conditions simulating hover in and out of ground effect.

  7. Distributed Storage Inverter and Legacy Generator Integration Plus Renewables Solution for Microgrids

    DTIC Science & Technology

    2015-07-01

    ...cloud covered) periods. The demonstration features a large (relative to the overall system power requirements) photovoltaic solar array, whose inverter... microgrid with less expensive power storage instead of large-scale energy storage, and that the renewable energy with small-scale power storage can...

  8. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the µsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
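    The rollback strategy described above, reverse computation plus a small amount of incremental state saving, can be illustrated with a toy epidemic event whose forward handler records just enough information to undo itself exactly; the event logic is invented for the example and is unrelated to the actual model.

```python
# Toy illustration of reverse computation with minimal incremental state saving:
# the forward event returns the data needed for an exact rollback. Purely
# illustrative; not the reaction-diffusion model described above.
import random

class Region:
    def __init__(self, susceptible, infected):
        self.susceptible = susceptible
        self.infected = infected

def infect_event(region, rng):
    new_cases = min(region.susceptible, rng.randint(0, 3))
    region.susceptible -= new_cases          # reversible arithmetic update
    region.infected += new_cases
    return new_cases                         # incremental state saved for rollback

def reverse_infect_event(region, new_cases):
    region.infected -= new_cases             # exact inverse of the forward event
    region.susceptible += new_cases

rng = random.Random(0)
region = Region(susceptible=1000, infected=5)
saved = infect_event(region, rng)
reverse_infect_event(region, saved)          # optimistic rollback
print(region.susceptible, region.infected)   # back to 1000, 5
```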

  9. Validating Bayesian truth serum in large-scale online human experiments.

    PubMed

    Frank, Morgan R; Cebrian, Manuel; Pickard, Galen; Rahwan, Iyad

    2017-01-01

    Bayesian truth serum (BTS) is an exciting new method for improving honesty and information quality in multiple-choice surveys, but, despite the method's mathematical reliance on large sample sizes, the existing literature about BTS focuses only on small experiments. Given the prevalence of online survey platforms, such as Amazon's Mechanical Turk, which facilitate surveys with hundreds or thousands of participants, BTS must be effective in large-scale experiments if it is to become a readily accepted tool in real-world applications. We demonstrate that BTS quantifiably improves honesty in large-scale online surveys where the "honest" distribution of answers is known in expectation on aggregate. Furthermore, we explore a marketing application where "honest" answers cannot be known, but find that BTS treatment impacts the resulting distributions of answers.
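    For readers unfamiliar with the scoring rule, the sketch below implements a commonly cited form of the BTS score (an information score for answers that are surprisingly common plus a prediction score for accurate forecasts of the answer distribution); exact weighting conventions vary across formulations, and the data and alpha here are synthetic assumptions.

```python
# Hedged sketch of a Bayesian-truth-serum-style score: information score for
# surprisingly common answers plus a prediction score for accurate forecasts.
# The weighting alpha and the survey data are illustrative assumptions.
import numpy as np

def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
    # answers: (n,) chosen option index; predictions: (n, k) predicted frequencies.
    n, k = predictions.shape
    x_bar = np.bincount(answers, minlength=k) / n             # empirical answer frequencies
    y_bar = np.exp(np.log(predictions + eps).mean(axis=0))    # geometric mean of predictions
    info = np.log((x_bar[answers] + eps) / (y_bar[answers] + eps))
    pred = alpha * np.sum(x_bar * np.log((predictions + eps) / (x_bar + eps)), axis=1)
    return info + pred

rng = np.random.default_rng(0)
n, k = 1000, 3
answers = rng.choice(k, size=n, p=[0.5, 0.3, 0.2])            # simulated respondents' answers
predictions = rng.dirichlet([5, 3, 2], size=n)                # their forecasts of the distribution
scores = bts_scores(answers, predictions)
print(scores.mean(), scores[:5])
```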

  10. Validating Bayesian truth serum in large-scale online human experiments

    PubMed Central

    Frank, Morgan R.; Cebrian, Manuel; Pickard, Galen; Rahwan, Iyad

    2017-01-01

    Bayesian truth serum (BTS) is an exciting new method for improving honesty and information quality in multiple-choice surveys, but, despite the method’s mathematical reliance on large sample sizes, the existing literature about BTS focuses only on small experiments. Given the prevalence of online survey platforms, such as Amazon’s Mechanical Turk, which facilitate surveys with hundreds or thousands of participants, BTS must be effective in large-scale experiments if it is to become a readily accepted tool in real-world applications. We demonstrate that BTS quantifiably improves honesty in large-scale online surveys where the “honest” distribution of answers is known in expectation on aggregate. Furthermore, we explore a marketing application where “honest” answers cannot be known, but find that BTS treatment impacts the resulting distributions of answers. PMID:28494000

  11. Advanced Connectivity Analysis (ACA): a Large Scale Functional Connectivity Data Mining Environment.

    PubMed

    Chen, Rong; Nixon, Erika; Herskovits, Edward

    2016-04-01

    Using resting-state functional magnetic resonance imaging (rs-fMRI) to study functional connectivity is of great importance to understand normal development and function as well as a host of neurological and psychiatric disorders. Seed-based analysis is one of the most widely used rs-fMRI analysis methods. Here we describe a freely available large scale functional connectivity data mining software package called Advanced Connectivity Analysis (ACA). ACA enables large-scale seed-based analysis and brain-behavior analysis. It can seamlessly examine a large number of seed regions with minimal user input. ACA has a brain-behavior analysis component to delineate associations among imaging biomarkers and one or more behavioral variables. We demonstrate applications of ACA to rs-fMRI data sets from a study of autism.
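    A seed-based analysis is at its core a correlation of a seed region's mean time series with every other voxel; the sketch below shows that step on synthetic arrays standing in for rs-fMRI data, and does not use ACA's actual interface.

```python
# Minimal seed-based connectivity sketch: correlate the mean time series of a
# seed region with every other voxel. Synthetic arrays stand in for rs-fMRI
# data; region definitions are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 200, 5000
data = rng.normal(size=(n_timepoints, n_voxels))
data[:, :100] += rng.normal(size=(n_timepoints, 1))      # voxels 0-99 share a common signal

seed_idx = np.arange(20)                                  # hypothetical seed region
seed_ts = data[:, seed_idx].mean(axis=1)

# Pearson correlation of the seed time series with every voxel.
z_data = (data - data.mean(0)) / data.std(0)
z_seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
conn_map = z_data.T @ z_seed / n_timepoints
print(conn_map[:100].mean(), conn_map[100:].mean())       # in-network vs background correlation
```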

  12. The steady-state mosaic of disturbance and succession across an old-growth Central Amazon forest landscape.

    PubMed

    Chambers, Jeffrey Q; Negron-Juarez, Robinson I; Marra, Daniel Magnabosco; Di Vittorio, Alan; Tews, Joerg; Roberts, Dar; Ribeiro, Gabriel H P M; Trumbore, Susan E; Higuchi, Niro

    2013-03-05

    Old-growth forest ecosystems comprise a mosaic of patches in different successional stages, with the fraction of the landscape in any particular state relatively constant over large temporal and spatial scales. The size distribution and return frequency of disturbance events, and subsequent recovery processes, determine to a large extent the spatial scale over which this old-growth steady state develops. Here, we characterize this mosaic for a Central Amazon forest by integrating field plot data, remote sensing disturbance probability distribution functions, and individual-based simulation modeling. Results demonstrate that a steady state of patches of varying successional age occurs over a relatively large spatial scale, with important implications for detecting temporal trends on plots that sample a small fraction of the landscape. Long, highly significant stochastic runs averaging 1.0 Mg biomass ha^-1 y^-1 were often punctuated by episodic disturbance events, resulting in a sawtooth time series of hectare-scale tree biomass. To maximize the detection of temporal trends for this Central Amazon site (e.g., driven by CO2 fertilization), plots larger than 10 ha would provide the greatest sensitivity. A model-based analysis of fractional mortality across all gap sizes demonstrated that 9.1-16.9% of tree mortality was missing from plot-based approaches, underscoring the need to combine plot and remote-sensing methods for estimating net landscape carbon balance. Old-growth tropical forests can exhibit complex large-scale structure driven by disturbance and recovery cycles, with ecosystem and community attributes of hectare-scale plots exhibiting continuous dynamic departures from a steady-state condition.

  13. Demonstration-Scale High-Cell-Density Fermentation of Pichia pastoris.

    PubMed

    Liu, Wan-Cang; Zhu, Ping

    2018-01-01

    Pichia pastoris has been one of the most successful heterologous overexpression systems for generating proteins for large-scale production through high-cell-density fermentation. However, optimizing the conditions of large-scale high-cell-density fermentation for biochemistry and industrialization is usually a laborious and time-consuming process. Furthermore, it is often difficult to produce authentic proteins in large quantities, which is a major obstacle to the analysis of functional and structural features and to industrial application. For these reasons, we have developed a protocol for efficient demonstration-scale high-cell-density fermentation of P. pastoris, which employs a new methanol-feeding strategy (the biomass-stat strategy) and increased air pressure in place of pure oxygen supplementation. The protocol includes the three typical stages of glycerol batch fermentation (initial culture phase), glycerol fed-batch fermentation (biomass accumulation phase), and methanol fed-batch fermentation (induction phase), and allows direct online monitoring of fermentation conditions, including broth pH, temperature, dissolved oxygen (DO), anti-foam generation, and feeding of glycerol and methanol. Using this protocol, production of the recombinant β-xylosidase of Lentinula edodes origin in 1000-L scale fermentation can reach ~900 mg/L or 9.4 mg/g cells (dry cell weight, intracellular expression), with a specific production rate of 0.1 mg/g/h and an average specific production of 0.081 mg/g/h. The methodology described in this protocol can easily be transferred to other systems and scaled up for a large number of proteins used for either scientific studies or commercial purposes.

  14. Quality of life in small-scaled homelike nursing homes: an 8-month controlled trial.

    PubMed

    Kok, Jeroen S; Nielen, Marjan M A; Scherder, Erik J A

    2018-02-27

    Quality of life is a clinically highly relevant outcome for residents with dementia. The question arises whether small-scale homelike facilities are associated with better quality of life than regular larger-scale nursing homes. A sample of 145 residents living in a large-scale care facility was followed over 8 months. Half of the sample (N = 77) subsequently moved to a small-scale facility. Quality of life aspects were measured with the QUALIDEM and GIP before and after relocation. We found a significant Group × Time interaction on measures of anxiety, meaning that residents who moved to small-scale units became less anxious than residents who stayed on the regular large-scale care units. No significant differences were found on other aspects of quality of life. This study demonstrates that residents who move from a large-scale facility to a small-scale environment can improve one aspect of quality of life by showing a reduction in anxiety. Current Controlled Trials ISRCTN11151241. Registration date: 21-06-2017. Retrospectively registered.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Middleton, Richard S.; Levine, Jonathan S.; Bielicki, Jeffrey M.

    CO2 capture, utilization, and storage (CCUS) technology has yet to be widely deployed at a commercial scale despite multiple high-profile demonstration projects. We suggest that developing a large-scale, visible, and financially viable CCUS network could potentially overcome many barriers to deployment and jumpstart commercial-scale CCUS. To date, substantial effort has focused on technology development to reduce the costs of CO2 capture from coal-fired power plants. Here, we propose that near-term investment could focus on implementing CO2 capture at facilities that produce high-value chemicals/products. These facilities can absorb the expected impact of the marginal increase in the cost of production, due to the addition of CO2 capture, on the price of their product better than coal-fired power plants can. A financially viable demonstration of a large-scale CCUS network requires offsetting the costs of CO2 capture by using the CO2 as an input to the production of market-viable products. As a result, we demonstrate this alternative development path with the example of an integrated CCUS system where CO2 is captured from ethylene producers and used for enhanced oil recovery in the U.S. Gulf Coast region.

  16. Biology-Inspired Distributed Consensus in Massively-Deployed Sensor Networks

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng

    2005-01-01

    Promises of ubiquitous control of the physical environment by large-scale wireless sensor networks open avenues for new applications that are expected to redefine the way we live and work. Most recent research has concentrated on developing techniques for performing relatively simple tasks in small-scale sensor networks assuming some form of centralized control. The main contribution of this work is to propose a new way of looking at large-scale sensor networks, motivated by lessons learned from the way biological ecosystems are organized. Indeed, we believe that techniques used in small-scale sensor networks are not likely to scale to large networks; that such large-scale networks must be viewed as an ecosystem in which the sensors/effectors are organisms whose autonomous actions, based on local information, combine in a communal way to produce global results. As an example of a useful function, we demonstrate that fully distributed consensus can be attained in a scalable fashion in massively deployed sensor networks where individual motes operate based on local information, making local decisions that are aggregated across the network to achieve globally-meaningful effects.
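
    A minimal sketch of the kind of fully local computation the authors advocate is shown below: each mote repeatedly averages its reading with its ring neighbours and the network drifts toward the global mean without any central coordinator. The topology, weights and iteration count are illustrative assumptions, not the paper's protocol.

        import numpy as np

        # Sketch of fully local consensus: each mote mixes its value with its two
        # ring neighbours; repeated local updates converge toward the global mean.
        def local_consensus(values, steps=5000, weight=0.5):
            v = np.asarray(values, dtype=float)
            for _ in range(steps):
                left, right = np.roll(v, 1), np.roll(v, -1)
                v = (1 - weight) * v + weight * 0.5 * (left + right)   # local-only update
            return v

        readings = np.random.default_rng(1).uniform(10, 30, size=64)   # e.g. temperature readings
        print(readings.mean(), local_consensus(readings)[:4])          # motes approach the mean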

  17. Large scale structure in universes dominated by cold dark matter

    NASA Technical Reports Server (NTRS)

    Bond, J. Richard

    1986-01-01

    The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.

  18. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  19. SUMMARY OF SOLIDIFICATION/STABILIZATION SITE DEMONSTRATIONS AT UNCONTROLLED HAZARDOUS WASTE SITES

    EPA Science Inventory

    Four large-scale solidification/stabilization demonstrations have occurred under EPA's SITE program. In general, physical testing results have been acceptable. Reduction in metal leachability, as determined by the TCLP test, has been observed. Reduction in organic leachability ha...

  20. SHEAR-DRIVEN DYNAMO WAVES IN THE FULLY NONLINEAR REGIME

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pongkitiwanichakul, P.; Nigro, G.; Cattaneo, F.

    2016-07-01

    Large-scale dynamo action is well understood when the magnetic Reynolds number (Rm) is small, but becomes problematic in the astrophysically relevant large Rm limit since the fluctuations may control the operation of the dynamo, obscuring the large-scale behavior. Recent works by Tobias and Cattaneo demonstrated numerically the existence of large-scale dynamo action in the form of dynamo waves driven by strongly helical turbulence and shear. Their calculations were carried out in the kinematic regime in which the back-reaction of the Lorentz force on the flow is neglected. Here, we have undertaken a systematic extension of their work to the fully nonlinear regime. Helical turbulence and large-scale shear are produced self-consistently by prescribing body forces that, in the kinematic regime, drive flows that resemble the original velocity used by Tobias and Cattaneo. We have found four different solution types in the nonlinear regime for various ratios of the fluctuating velocity to the shear and Reynolds numbers. Some of the solutions are in the form of propagating waves. Some solutions show large-scale helical magnetic structure. Both waves and structures are permanent only when the kinetic helicity is non-zero on average.

  1. Activity-Based Introductory Physics Reform *

    NASA Astrophysics Data System (ADS)

    Thornton, Ronald

    2004-05-01

    Physics education research has shown that learning environments that engage students and allow them to take an active part in their learning can lead to large conceptual gains compared to those of good traditional instruction. Examples of successful curricula and methods include Peer Instruction, Just in Time Teaching, RealTime Physics, Workshop Physics, Scale-Up, and Interactive Lecture Demonstrations (ILDs). RealTime Physics promotes interaction among students in a laboratory setting and makes use of powerful real-time data logging tools to teach concepts as well as quantitative relationships. An active learning environment is often difficult to achieve in large lecture sessions, and Workshop Physics and Scale-Up largely eliminate lectures in favor of collaborative student activities. Peer Instruction, Just in Time Teaching, and Interactive Lecture Demonstrations (ILDs) make lectures more interactive in complementary ways. This presentation will introduce these reforms and use Interactive Lecture Demonstrations (ILDs) with the audience to illustrate the types of curricula and tools used in the approaches above. ILDs make use of real experiments, real-time data logging tools and student interaction to create an active learning environment in large lecture classes. A short video of students involved in interactive lecture demonstrations will be shown. The results of research studies at various institutions to measure the effectiveness of these methods will be presented.

  2. Cardiac Light-Sheet Fluorescent Microscopy for Multi-Scale and Rapid Imaging of Architecture and Function

    NASA Astrophysics Data System (ADS)

    Fei, Peng; Lee, Juhyun; Packard, René R. Sevag; Sereti, Konstantina-Ioanna; Xu, Hao; Ma, Jianguo; Ding, Yichen; Kang, Hanul; Chen, Harrison; Sung, Kevin; Kulkarni, Rajan; Ardehali, Reza; Kuo, C.-C. Jay; Xu, Xiaolei; Ho, Chih-Ming; Hsiai, Tzung K.

    2016-03-01

    Light Sheet Fluorescence Microscopy (LSFM) enables multi-dimensional and multi-scale imaging via illuminating specimens with a separate thin sheet of laser. It allows rapid plane illumination for reduced photo-damage and superior axial resolution and contrast. We hereby demonstrate cardiac LSFM (c-LSFM) imaging to assess the functional architecture of zebrafish embryos with a retrospective cardiac synchronization algorithm for four-dimensional reconstruction (3-D space + time). By combining our approach with tissue clearing techniques, we reveal the entire cardiac structures and hypertrabeculation of adult zebrafish hearts in response to doxorubicin treatment. By integrating the resolution enhancement technique with c-LSFM to increase the resolving power under a large field-of-view, we demonstrate the use of low power objective to resolve the entire architecture of large-scale neonatal mouse hearts, revealing the helical orientation of individual myocardial fibers. Therefore, our c-LSFM imaging approach provides multi-scale visualization of architecture and function to drive cardiovascular research with translational implication in congenital heart diseases.

  3. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
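
    To make the oct-tree idea concrete, the sketch below performs a simple "top down" refinement in Python, splitting cells that lie near a body until a target spacing is reached; the geometry test, sizes and stopping rule are illustrative assumptions rather than the authors' algorithm, and the parallel partitioning is omitted entirely.

        from dataclasses import dataclass, field

        # Sketch of "top-down" oct-tree refinement: split cells that are close to a
        # body until a target spacing is reached. Geometry and sizes are invented.
        @dataclass
        class Cell:
            center: tuple      # (x, y, z)
            size: float
            children: list = field(default_factory=list)

        def near_body(cell, body_center=(0.0, 0.0, 0.0), body_radius=1.0):
            d = sum((c - b) ** 2 for c, b in zip(cell.center, body_center)) ** 0.5
            return abs(d - body_radius) < cell.size    # cell overlaps a shell around the body

        def refine(cell, min_size):
            if cell.size <= min_size or not near_body(cell):
                return
            h = cell.size / 4.0
            for dx in (-h, h):
                for dy in (-h, h):
                    for dz in (-h, h):
                        child = Cell((cell.center[0] + dx, cell.center[1] + dy,
                                      cell.center[2] + dz), cell.size / 2.0)
                        cell.children.append(child)
                        refine(child, min_size)       # recurse only where resolution is needed

        root = Cell((0.0, 0.0, 0.0), 8.0)
        refine(root, min_size=0.5)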

  4. Demonstration of nanoimprinted hyperlens array for high-throughput sub-diffraction imaging

    NASA Astrophysics Data System (ADS)

    Byun, Minsueop; Lee, Dasol; Kim, Minkyung; Kim, Yangdoo; Kim, Kwan; Ok, Jong G.; Rho, Junsuk; Lee, Heon

    2017-04-01

    Overcoming the resolution limit of conventional optics is regarded as the most important issue in optical imaging science and technology. Although hyperlenses, super-resolution imaging devices based on highly anisotropic dispersion relations that allow access to high-wavevector components, have recently achieved far-field sub-diffraction imaging in real time, the previously demonstrated devices have suffered from extreme difficulties in both the fabrication process and the placement of natural (non-artificial) objects. This has restricted the practical application of hyperlens devices. While implementing large-scale hyperlens arrays in conventional microscopy is desirable to solve such issues, it has not been feasible to fabricate such large-scale hyperlens arrays with previously used nanofabrication methods. Here, we suggest a scalable and reliable fabrication process for a large-scale hyperlens device based on direct pattern transfer techniques. We fabricate a 5 cm × 5 cm hyperlens array and experimentally demonstrate that it can resolve sub-diffraction features down to 160 nm under 410 nm wavelength visible light. The array-based hyperlens device will provide a simple solution for much more practical far-field and real-time super-resolution imaging, which can be widely used in optics, biology, medical science, nanotechnology and other closely related interdisciplinary fields.

  5. Non-linear scale interactions in a forced turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Duvvuri, Subrahmanyam; McKeon, Beverley

    2015-11-01

    A strong phase-organizing influence exerted by a single synthetic large-scale spatio-temporal mode on directly-coupled (through triadic interactions) small scales in a turbulent boundary layer forced by a spatially-impulsive dynamic wall-roughness patch was previously demonstrated by the authors (J. Fluid Mech. 2015, vol. 767, R4). The experimental set-up was later enhanced to allow for simultaneous forcing of multiple scales in the flow. Results and analysis are presented from a new set of novel experiments where two distinct large scales are forced in the flow by a dynamic wall-roughness patch. The internal non-linear forcing of two other scales with triadic consistency to the artificially forced large scales, corresponding to sum and difference in wavenumbers, is dominated by the latter. This allows for a forcing-response (input-output) type analysis of the two triadic scales, and naturally lends itself to a resolvent operator based model (e.g. McKeon & Sharma, J. Fluid Mech. 2010, vol. 658, pp. 336-382) of the governing Navier-Stokes equations. The support of AFOSR (grant #FA 9550-12-1-0469, program manager D. Smith) is gratefully acknowledged.
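
    For readers unfamiliar with triadic consistency, the sum- and difference-wavenumber modes mentioned above follow directly from the quadratic nonlinearity. In the sketch below the notation is ours, not the authors': two forced large-scale modes (k_x1, omega_1) and (k_x2, omega_2) generate the triadically consistent pair

        \[
          (k_{x3}, \omega_3) = (k_{x1} + k_{x2},\ \omega_1 + \omega_2),
          \qquad
          (k_{x4}, \omega_4) = (k_{x1} - k_{x2},\ \omega_1 - \omega_2),
        \]
        \[
          \cos(k_{x1}x - \omega_1 t)\,\cos(k_{x2}x - \omega_2 t)
          = \tfrac{1}{2}\cos(k_{x3}x - \omega_3 t) + \tfrac{1}{2}\cos(k_{x4}x - \omega_4 t),
        \]

    so products of the two forced modes arising from the quadratic nonlinearity of the Navier-Stokes equations deposit energy directly into the sum and difference scales, which is why their response can be analysed in a forcing-response (resolvent) framework.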

  6. Large-scale broadband absorber based on metallic tungsten nanocone structure

    NASA Astrophysics Data System (ADS)

    Wang, Jiaxing; Liang, Yuzhang; Huo, Pengcheng; Wang, Daopeng; Tan, Jun; Xu, Ting

    2017-12-01

    We report a broadband tungsten absorber based on a nanocone metallic resonant structure fabricated by self-assembly nanosphere lithography. In experimental demonstration, the fabricated absorber has more than 90% average absorption efficiency and shows superior angular tolerance in the entire visible and near-infrared spectral region. We envision that this large-scale nanostructured broadband optical absorber would find great potential in the applications of high performance optoelectronic platforms and solar-thermal energy harvesting systems.

  7. A real-time interferometer technique for compressible flow research

    NASA Technical Reports Server (NTRS)

    Bachalo, W. D.; Houser, M. J.

    1984-01-01

    Strengths and shortcomings in the application of interferometric techniques to transonic flow fields are examined and an improved method is elaborated. Such applications have demonstrated the value of interferometry in obtaining data for compressible flow research. With holographic techniques, interferometry may be applied in large-scale facilities without the use of expensive optics or elaborate vibration isolation equipment. Results obtained using holographic interferometry and other methods demonstrate that reliable qualitative and quantitative data can be acquired. Nevertheless, the conventional method can be difficult to set up and apply, and it cannot produce real-time data. A new interferometry technique is investigated that promises to be easier to apply and can provide real-time information. This single-beam technique has the necessary insensitivity to vibration for large-scale wind tunnel operations. Capabilities of the method and preliminary tests on some laboratory-scale flow fields are described.

  8. Role of large-scale velocity fluctuations in a two-vortex kinematic dynamo.

    PubMed

    Kaplan, E J; Brown, B P; Rahbarnia, K; Forest, C B

    2012-06-01

    This paper presents an analysis of the Dudley-James two-vortex flow, which inspired several laboratory-scale liquid-metal experiments, in order to better demonstrate its relation to astrophysical dynamos. A coordinate transformation splits the flow into components that are axisymmetric and nonaxisymmetric relative to the induced magnetic dipole moment. The reformulation gives the flow the same dynamo ingredients as are present in more complicated convection-driven dynamo simulations. These ingredients are currents driven by the mean flow and currents driven by correlations between fluctuations in the flow and fluctuations in the magnetic field. The simple model allows us to isolate the dynamics of the growing eigenvector and trace them back to individual three-wave couplings between the magnetic field and the flow. This simple model demonstrates the necessity of poloidal advection in sustaining the dynamo and points to the effect of large-scale flow fluctuations in exciting a dynamo magnetic field.

  9. Topographically Engineered Large Scale Nanostructures for Plasmonic Biosensing

    NASA Astrophysics Data System (ADS)

    Xiao, Bo; Pradhan, Sangram K.; Santiago, Kevin C.; Rutherford, Gugu N.; Pradhan, Aswini K.

    2016-04-01

    We demonstrate that a nanostructured metal thin film can achieve enhanced transmission efficiency and sharp resonances and use a large-scale and high-throughput nanofabrication technique for the plasmonic structures. The fabrication technique combines the features of nanoimprint and soft lithography to topographically construct metal thin films with nanoscale patterns. Metal nanogratings developed using this method show significantly enhanced optical transmission (up to a one-order-of-magnitude enhancement) and sharp resonances with full width at half maximum (FWHM) of ~15nm in the zero-order transmission using an incoherent white light source. These nanostructures are sensitive to the surrounding environment, and the resonance can shift as the refractive index changes. We derive an analytical method using a spatial Fourier transformation to understand the enhancement phenomenon and the sensing mechanism. The use of real-time monitoring of protein-protein interactions in microfluidic cells integrated with these nanostructures is demonstrated to be effective for biosensing. The perpendicular transmission configuration and large-scale structures provide a feasible platform without sophisticated optical instrumentation to realize label-free surface plasmon resonance (SPR) sensing.

  10. Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce.

    PubMed

    Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng

    2013-11-01

    The proliferation of GPS-enabled devices, and the rapid improvement of scientific instruments, have resulted in massive amounts of spatial data in the last decade. Support of high performance spatial queries on large volumes of data has become increasingly important in numerous fields, which requires a scalable and efficient spatial data warehousing solution, as existing approaches exhibit scalability limitations and efficiency bottlenecks for large scale spatial applications. In this demonstration, we present Hadoop-GIS - a scalable and high performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data- and space-based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries, and submitted through a command line/web interface for execution. In parallel to our system demonstration, we explain the system architecture and details on how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we will showcase how the system can be used to support two representative real-world use cases: large scale pathology analytical imaging, and geo-spatial data warehousing.

  11. Lessons from SMD experience with approaches to the evaluation of fare changes

    DOT National Transportation Integrated Search

    1980-01-01

    Over the past several years UMTA's Service and Methods Demonstration Program (SMD) has undertaken a large number of studies of the effects of fare changes, both increases and decreases. Some of these studies have been large scale efforts directed at ...

  12. A scanning tunneling microscope with a scanning range from hundreds of micrometers down to nanometer resolution.

    PubMed

    Kalkan, Fatih; Zaum, Christopher; Morgenstern, Karina

    2012-10-01

    A beetle-type stage and a flexure scanning stage are combined to form a two-stage scanning tunneling microscope (STM). It operates at room temperature in ultrahigh vacuum and is capable of scanning areas up to 300 μm × 450 μm down to resolution on the nanometer scale. This multi-scale STM has been designed and constructed in order to investigate prestructured metallic or semiconducting micro- and nano-structures in real space, from atomic-sized structures up to the large-scale environment. The principle of the instrument is demonstrated on two different systems. Gallium nitride based micropillars demonstrate scan areas up to hundreds of micrometers; a Au(111) surface demonstrates nanometer resolution.

  13. SITE DEMONSTRATION BULLETIN: SOIL RECYCLING TREATMENT TRAIN - THE TORONTO HARBOUR COMMISSIONERS

    EPA Science Inventory

    The Toronto Harbour Commissioners (THC) have developed a soil treatment train designed to treat inorganic and organic contaminants in soils. THC has conducted a large-scale demonstration of these technologies in an attempt to establish that contaminated soils at the Toronto Port...

  14. Perspectives on integrated modeling of transport processes in semiconductor crystal growth

    NASA Technical Reports Server (NTRS)

    Brown, Robert A.

    1992-01-01

    The wide range of length and time scales involved in industrial scale solidification processes is demonstrated here by considering the Czochralski process for the growth of large diameter silicon crystals that become the substrate material for modern microelectronic devices. The scales range in time from microseconds to thousands of seconds and in space from microns to meters. The physics and chemistry needed to model processes on these different length scales are reviewed.

  15. A mesostructured Y zeolite as a superior FCC catalyst--lab to refinery.

    PubMed

    García-Martínez, Javier; Li, Kunhao; Krishnaiah, Gautham

    2012-12-18

    A mesostructured Y zeolite was prepared by a surfactant-templated process at the commercial scale and tested in a refinery, showing superior hydrothermal stability and catalytic cracking selectivity, which demonstrates, for the first time, the promising future of mesoporous zeolites in large scale industrial applications.

  16. Scalability study of solid xenon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, J.; Cease, H.; Jaskierny, W. F.

    2015-04-01

    We report a demonstration of the scalability of optically transparent xenon in the solid phase for use as a particle detector above a kilogram scale. We employed a cryostat cooled by liquid nitrogen combined with a xenon purification and chiller system. A modified Bridgeman's technique reproduces a large-scale optically transparent solid xenon.

  17. Some aspects of wind tunnel magnetic suspension systems with special application at large physical scales

    NASA Technical Reports Server (NTRS)

    Britcher, C. P.

    1983-01-01

    Wind tunnel magnetic suspension and balance systems (MSBSs) have so far failed to find application at the large physical scales necessary for the majority of aerodynamic testing. Three areas of technology relevant to such application are investigated. Two variants of the Spanwise Magnet roll torque generation scheme are studied. Spanwise Permanent Magnets are shown to be practical and are experimentally demonstrated. Extensive computations of the performance of the Spanwise Iron Magnet scheme indicate powerful capability, limited principally by electromagnet technology. Aerodynamic testing at extreme attitudes is shown to be practical in relatively conventional MSBSs. Preliminary operation of the MSBS over a wide range of angles of attack is demonstrated. The impact of a requirement for highly reliable operation on the overall architecture of large MSBSs is studied and it is concluded that system cost and complexity need not be seriously increased.

  18. Laser-induced plasmonic colours on metals

    NASA Astrophysics Data System (ADS)

    Guay, Jean-Michel; Calà Lesina, Antonino; Côté, Guillaume; Charron, Martin; Poitras, Daniel; Ramunno, Lora; Berini, Pierre; Weck, Arnaud

    2017-07-01

    Plasmonic resonances in metallic nanoparticles have been used since antiquity to colour glasses. The use of metal nanostructures for surface colourization has attracted considerable interest following recent developments in plasmonics. However, current top-down colourization methods are not ideally suited to large-scale industrial applications. Here we use a bottom-up approach where picosecond laser pulses can produce a full palette of non-iridescent colours on silver, gold, copper and aluminium. We demonstrate the process on silver coins weighing up to 5 kg and bearing large topographic variations (~1.5 cm). We find that colours are related to a single parameter, the total accumulated fluence, making the process suitable for high-throughput industrial applications. Statistical image analyses of laser-irradiated surfaces reveal various nanoparticle size distributions. Large-scale finite-difference time-domain computations based on these nanoparticle distributions reproduce trends seen in reflectance measurements, and demonstrate the key role of plasmonic resonances in colour formation.

  19. Laser-induced plasmonic colours on metals

    PubMed Central

    Guay, Jean-Michel; Calà Lesina, Antonino; Côté, Guillaume; Charron, Martin; Poitras, Daniel; Ramunno, Lora; Berini, Pierre; Weck, Arnaud

    2017-01-01

    Plasmonic resonances in metallic nanoparticles have been used since antiquity to colour glasses. The use of metal nanostructures for surface colourization has attracted considerable interest following recent developments in plasmonics. However, current top-down colourization methods are not ideally suited to large-scale industrial applications. Here we use a bottom-up approach where picosecond laser pulses can produce a full palette of non-iridescent colours on silver, gold, copper and aluminium. We demonstrate the process on silver coins weighing up to 5 kg and bearing large topographic variations (∼1.5 cm). We find that colours are related to a single parameter, the total accumulated fluence, making the process suitable for high-throughput industrial applications. Statistical image analyses of laser-irradiated surfaces reveal various nanoparticle size distributions. Large-scale finite-difference time-domain computations based on these nanoparticle distributions reproduce trends seen in reflectance measurements, and demonstrate the key role of plasmonic resonances in colour formation. PMID:28719576

  20. Nonreciprocity in the dynamics of coupled oscillators with nonlinearity, asymmetry, and scale hierarchy

    NASA Astrophysics Data System (ADS)

    Moore, Keegan J.; Bunyan, Jonathan; Tawfick, Sameh; Gendelman, Oleg V.; Li, Shuangbao; Leamy, Michael; Vakakis, Alexander F.

    2018-01-01

    In linear time-invariant dynamical and acoustical systems, reciprocity holds by the Onsager-Casimir principle of microscopic reversibility, and this can be broken only by odd external biases, nonlinearities, or time-dependent properties. A concept is proposed in this work for breaking dynamic reciprocity based on irreversible nonlinear energy transfers from large to small scales in a system with nonlinear hierarchical internal structure, asymmetry, and intentional strong stiffness nonlinearity. The resulting nonreciprocal large-to-small scale energy transfers mimic analogous nonlinear energy transfer cascades that occur in nature (e.g., in turbulent flows), and are caused by the strong frequency-energy dependence of the essentially nonlinear small-scale components of the system considered. The theoretical part of this work is mainly based on action-angle transformations, followed by direct numerical simulations of the resulting system of nonlinear coupled oscillators. The experimental part considers a system with two scales—a linear large-scale oscillator coupled to a small scale by a nonlinear spring—and validates the theoretical findings demonstrating nonreciprocal large-to-small scale energy transfer. The proposed study promotes a paradigm for designing nonreciprocal acoustic materials harnessing strong nonlinearity, which in a future application will be implemented in designing lattices incorporating nonlinear hierarchical internal structures, asymmetry, and scale mixing.

  1. A small-scale, rolled-membrane microfluidic artificial lung designed towards future large area manufacturing.

    PubMed

    Thompson, A J; Marks, L H; Goudie, M J; Rojas-Pena, A; Handa, H; Potkay, J A

    2017-03-01

    Artificial lungs have been used in the clinic for multiple decades to supplement patient pulmonary function. Recently, small-scale microfluidic artificial lungs (μAL) have been demonstrated with large surface area to blood volume ratios, biomimetic blood flow paths, and pressure drops compatible with pumpless operation. Initial small-scale microfluidic devices with blood flow rates in the μl/min to ml/min range have exhibited excellent gas transfer efficiencies; however, current manufacturing techniques may not be suitable for scaling up to human applications. Here, we present a new manufacturing technology for a microfluidic artificial lung in which the structure is assembled via a continuous "rolling" and bonding procedure from a single, patterned layer of polydimethyl siloxane (PDMS). This method is demonstrated in a small-scale four-layer device, but is expected to easily scale to larger area devices. The presented devices have a biomimetic branching blood flow network, 10 μm tall artificial capillaries, and a 66 μm thick gas transfer membrane. Gas transfer efficiency in blood was evaluated over a range of blood flow rates (0.1-1.25 ml/min) for two different sweep gases (pure O2, atmospheric air). The achieved gas transfer data closely follow predicted theoretical values for oxygenation and CO2 removal, while pressure drop is marginally higher than predicted. This work is the first step in developing a scalable method for creating large area microfluidic artificial lungs. Although designed for microfluidic artificial lungs, the presented technique is expected to result in the first manufacturing method capable of simply and easily creating large area microfluidic devices from PDMS.

  2. From Large-scale to Protostellar Disk Fragmentation into Close Binary Stars

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; Cruz, Fidel; Gabbasov, Ruslan; Klapp, Jaime; Ramírez-Velasquez, José

    2018-04-01

    Recent observations of young stellar systems with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Karl G. Jansky Very Large Array are helping to cement the idea that close companion stars form via fragmentation of a gravitationally unstable disk around a protostar early in the star formation process. As the disk grows in mass, it eventually becomes gravitationally unstable and fragments, forming one or more new protostars in orbit with the first at mean separations of 100 au or even less. Here, we report direct numerical calculations down to scales as small as ∼0.1 au, using a consistent Smoothed Particle Hydrodynamics code, that show the large-scale fragmentation of a cloud core into two protostars accompanied by small-scale fragmentation of their circumstellar disks. Our results demonstrate the two dominant mechanisms of star formation, where the disk forming around a protostar (which in turn results from the large-scale fragmentation of the cloud core) undergoes eccentric (m = 1) fragmentation to produce a close binary. We generate two-dimensional emission maps and simulated ALMA 1.3 mm continuum images of the structure and fragmentation of the disks that can help explain the dynamical processes occurring within collapsing cloud cores.

  3. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    NASA Astrophysics Data System (ADS)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large-scale surface models with fine spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large-scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Beyond that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is presented. The case study proved that large-scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
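
    The "angle difference above the local horizon" can be illustrated with a one-dimensional profile: a cell is visible when its elevation angle from the observer exceeds the running maximum angle of the intervening cells. The Python sketch below uses invented profile values and observer height; it is not the workflow used in the case study.

        import numpy as np

        # Sketch of a 1-D line-of-sight check along a terrain profile: a target cell
        # is visible if its elevation angle exceeds the highest angle of any
        # intervening cell (the local horizon). Inputs are illustrative assumptions.
        def profile_visibility(heights, spacing, observer_h=1.7):
            z0 = heights[0] + observer_h
            dist = np.arange(1, len(heights)) * spacing
            angles = np.degrees(np.arctan2(heights[1:] - z0, dist))    # angle to each cell
            # running horizon: -90 deg for the first cell (nothing in the way yet)
            horizon = np.maximum.accumulate(np.concatenate(([-90.0], angles[:-1])))
            visible = angles >= horizon                                 # above the running horizon?
            angle_above_horizon = angles - horizon                      # the "extended viewshed" value
            return visible, angle_above_horizon

        profile = np.array([100., 101., 103., 102., 110., 108., 115.])
        vis, diff = profile_visibility(profile, spacing=10.0)
        print(vis, np.round(diff, 2))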

  4. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    PubMed

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for the management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. It covers the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo to store sequence similarity search results, and how to use various kinds of stored search results to address aspects of comparative genomic analysis.
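
    The flavour of the approach can be shown with a few lines of Python and SQLite: a query carves a taxon-restricted subset out of a sequence table and writes it as a FASTA library for the search tool. The schema and data here are hypothetical and deliberately tiny; they are not seqdb_demo's actual tables.

        import sqlite3

        # Sketch: use a relational database to build a taxon-restricted sequence
        # library for a similarity search. Schema and rows are invented.
        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT);
            INSERT INTO protein VALUES
                ('P1', 'Escherichia coli', 'MKT'),
                ('P2', 'Homo sapiens',     'MSS'),
                ('P3', 'Escherichia coli', 'MAL');
        """)

        # Subset library: only bacterial sequences, written out as FASTA for the search tool.
        rows = con.execute("SELECT acc, seq FROM protein WHERE taxon = 'Escherichia coli'")
        fasta = "".join(f">{acc}\n{seq}\n" for acc, seq in rows)
        print(fasta)

        # Search hits could later be stored back, e.g. in a hit(query, subject, evalue)
        # table, enabling the comparative queries described above.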

  5. DISRUPTION OF LARGE-SCALE NEURAL NETWORKS IN NON-FLUENT/AGRAMMATIC VARIANT PRIMARY PROGRESSIVE APHASIA ASSOCIATED WITH FRONTOTEMPORAL DEGENERATION PATHOLOGY

    PubMed Central

    Grossman, Murray; Powers, John; Ash, Sherry; McMillan, Corey; Burkholder, Lisa; Irwin, David; Trojanowski, John Q.

    2012-01-01

    Non-fluent/agrammatic primary progressive aphasia (naPPA) is a progressive neurodegenerative condition most prominently associated with slowed, effortful speech. A clinical imaging marker of naPPA is disease centered in the left inferior frontal lobe. We used multimodal imaging to assess large-scale neural networks underlying effortful expression in 15 patients with sporadic naPPA due to frontotemporal lobar degeneration (FTLD) spectrum pathology. Effortful speech in these patients is related in part to impaired grammatical processing, and to phonologic speech errors. Gray matter (GM) imaging shows frontal and anterior-superior temporal atrophy, most prominently in the left hemisphere. Diffusion tensor imaging reveals reduced fractional anisotropy in several white matter (WM) tracts mediating projections between left frontal and other GM regions. Regression analyses suggest disruption of three large-scale GM-WM neural networks in naPPA that support fluent, grammatical expression. These findings emphasize the role of large-scale neural networks in language, and demonstrate associated language deficits in naPPA. PMID:23218686

  6. Multi-level discriminative dictionary learning with application to large scale image classification.

    PubMed

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

    The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for a classification task) into dictionary learning is effective for improving accuracy. However, traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learned to capture information at different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.

  7. Replicating Impact of a Primary School HIV Prevention Programme: Primary School Action for Better Health, Kenya

    ERIC Educational Resources Information Center

    Maticka-Tyndale, E.; Mungwete, R.; Jayeoba, O.

    2014-01-01

    School-based programmes to combat the spread of HIV have been demonstrated to be effective over the short-term when delivered on a small scale. The question addressed here is whether results obtained with small-scale delivery are replicable in large-scale roll-out. Primary School Action for Better Health (PSABH), a programme to train teachers to…

  8. Construction of Penrose Diagrams for Dynamic Black Holes

    NASA Technical Reports Server (NTRS)

    Brown, Beth A.; Lindesay, James

    2008-01-01

    A set of Penrose diagrams is constructed in order to examine the large-scale causal structure of black holes with dynamic horizons. Coordinate dependencies of significant features, such as the event horizon and radial mass scale, are demonstrated on the diagrams. Unlike in static Schwarzschild geometries, the radial mass scale is clearly seen to differ from the horizon. Trajectories for photons near the horizon are briefly discussed.

  9. Quantitative Missense Variant Effect Prediction Using Large-Scale Mutagenesis Data.

    PubMed

    Gray, Vanessa E; Hause, Ronald J; Luebeck, Jens; Shendure, Jay; Fowler, Douglas M

    2018-01-24

    Large datasets describing the quantitative effects of mutations on protein function are becoming increasingly available. Here, we leverage these datasets to develop Envision, which predicts the magnitude of a missense variant's molecular effect. Envision combines 21,026 variant effect measurements from nine large-scale experimental mutagenesis datasets, a hitherto untapped training resource, with a supervised, stochastic gradient boosting learning algorithm. Envision outperforms other missense variant effect predictors both on large-scale mutagenesis data and on an independent test dataset comprising 2,312 TP53 variants whose effects were measured using a low-throughput approach. This dataset was never used for hyperparameter tuning or model training and thus serves as an independent validation set. Envision prediction accuracy is also more consistent across amino acids than other predictors. Finally, we demonstrate that Envision's performance improves as more large-scale mutagenesis data are incorporated. We precompute Envision predictions for every possible single amino acid variant in human, mouse, frog, zebrafish, fruit fly, worm, and yeast proteomes (https://envision.gs.washington.edu/). Copyright © 2017 Elsevier Inc. All rights reserved.
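
    The general recipe, supervised gradient boosting on per-variant features, can be sketched in a few lines of Python with scikit-learn; the features, targets and hyperparameters below are synthetic stand-ins and are not Envision's.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        # Sketch of the general approach: a gradient-boosted regressor trained on
        # large-scale mutagenesis measurements. Data are synthetic stand-ins.
        rng = np.random.default_rng(0)
        X = rng.standard_normal((2000, 12))     # stand-ins for per-variant features
        y = X[:, 0] * 0.8 - X[:, 3] * 0.5 + rng.standard_normal(2000) * 0.2   # mock effect scores

        model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
        model.fit(X[:1500], y[:1500])           # train on the "mutagenesis" portion
        print("held-out R^2:", round(model.score(X[1500:], y[1500:]), 3))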

  10. Large-scale Density Structures in Magneto-rotational Disk Turbulence

    NASA Astrophysics Data System (ADS)

    Youdin, Andrew; Johansen, A.; Klahr, H.

    2009-01-01

    Turbulence generated by the magneto-rotational instability (MRI) is a strong candidate to drive accretion flows in disks, including sufficiently ionized regions of protoplanetary disks. The MRI is often studied in local shearing boxes, which model a small section of the disk at high resolution. I will present simulations of large, stratified shearing boxes which extend up to 10 gas scale-heights across. These simulations are a useful bridge to fully global disk simulations. We find that MRI turbulence produces large-scale, axisymmetric density perturbations . These structures are part of a zonal flow --- analogous to the banded flow in Jupiter's atmosphere --- which survives in near geostrophic balance for tens of orbits. The launching mechanism is large-scale magnetic tension generated by an inverse cascade. We demonstrate the robustness of these results by careful study of various box sizes, grid resolutions, and microscopic diffusion parameterizations. These gas structures can trap solid material (in the form of large dust or ice particles) with important implications for planet formation. Resolved disk images at mm-wavelengths (e.g. from ALMA) will verify or constrain the existence of these structures.

  11. Inexpensive Device for Demonstrating Rock Slope Failure and Other Collapse Phenomena.

    ERIC Educational Resources Information Center

    Stimpson, B.

    1980-01-01

    Describes an inexpensive modeling technique for demonstrating large-scale displacement phenomena in rock masses, such as slope collapse and failure of underground openings. Excavation of the model material occurs through openings made in the polyurethane foam in the correct excavation sequence. (Author/SA)

  12. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

    One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity are the computational costs of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
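
    The surrogate idea itself is simple to sketch: fit a small network to an expensive response function once, then evaluate the network in place of the physics code. In the Python sketch below the "expensive" function is a toy relaxation curve, not a viscoelastic solver, and the network size is an arbitrary choice.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        # Sketch of the surrogate idea: learn an expensive response once, then reuse
        # the cheap network everywhere. The target function is a toy stand-in.
        def expensive_response(t, tau):          # pretend this costs minutes per call
            return 1.0 - np.exp(-t / tau)        # simple relaxation-like curve

        rng = np.random.default_rng(0)
        X = rng.uniform([0.1, 0.5], [10.0, 5.0], size=(5000, 2))   # (time, relaxation time)
        y = expensive_response(X[:, 0], X[:, 1])

        surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
        surrogate.fit(X, y)                      # train once, evaluate many times
        print(surrogate.predict([[2.0, 1.0]]), expensive_response(2.0, 1.0))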

  13. Porous microwells for geometry-selective, large-scale microparticle arrays

    NASA Astrophysics Data System (ADS)

    Kim, Jae Jung; Bong, Ki Wan; Reátegui, Eduardo; Irimia, Daniel; Doyle, Patrick S.

    2017-01-01

    Large-scale microparticle arrays (LSMAs) are key for material science and bioengineering applications. However, previous approaches suffer from trade-offs between scalability, precision, specificity and versatility. Here, we present a porous microwell-based approach to create large-scale microparticle arrays with complex motifs. Microparticles are guided to and pushed into microwells by fluid flow through small open pores at the bottom of the porous well arrays. A scaling theory allows for the rational design of LSMAs to sort and array particles on the basis of their size, shape, or modulus. Sequential particle assembly allows for proximal and nested particle arrangements, as well as particle recollection and pattern transfer. We demonstrate the capabilities of the approach by means of three applications: high-throughput single-cell arrays; microenvironment fabrication for neutrophil chemotaxis; and complex, covert tags by the transfer of an upconversion nanocrystal-laden LSMA.

  14. The Use of Weighted Graphs for Large-Scale Genome Analysis

    PubMed Central

    Zhou, Fang; Toivonen, Hannu; King, Ross D.

    2014-01-01

    There is an acute need for better tools to extract knowledge from the growing flood of sequence data. For example, thousands of complete genomes have been sequenced, and their metabolic networks inferred. Such data should enable a better understanding of evolution. However, most existing network analysis methods are based on pair-wise comparisons, and these do not scale to thousands of genomes. Here we propose the use of weighted graphs as a data structure to enable large-scale phylogenetic analysis of networks. We have developed three types of weighted graph for enzymes: taxonomic (these summarize phylogenetic importance), isoenzymatic (these summarize enzymatic variety/redundancy), and sequence-similarity (these summarize sequence conservation); and we applied these types of weighted graph to survey prokaryotic metabolism. To demonstrate the utility of this approach we have compared and contrasted the large-scale evolution of metabolism in Archaea and Eubacteria. Our results provide evidence for limits to the contingency of evolution. PMID:24619061
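
    A weighted graph of this kind is straightforward to represent; the Python sketch below uses networkx with invented counts to show how node and edge weights can summarise how many genomes contain each enzyme and each pair of enzymes.

        import networkx as nx

        # Sketch of a weighted enzyme graph: nodes are enzymes (EC numbers), node and
        # edge weights summarise genome counts. All numbers are invented.
        G = nx.Graph()
        G.add_node("EC:2.7.1.1", genomes=1423)
        G.add_node("EC:5.3.1.9", genomes=1610)
        G.add_node("EC:2.7.1.11", genomes=1388)
        G.add_edge("EC:2.7.1.1", "EC:5.3.1.9", weight=1390)    # genomes containing both steps
        G.add_edge("EC:5.3.1.9", "EC:2.7.1.11", weight=1350)

        # Phylogenetic summary questions become simple graph queries:
        universal = [n for n, d in G.nodes(data=True) if d["genomes"] > 1500]
        conserved_links = sorted(G.edges(data="weight"), key=lambda e: -e[2])
        print(universal, conserved_links[0])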

  15. Structural Similitude and Scaling Laws for Plates and Shells: A Review

    NASA Technical Reports Server (NTRS)

    Simitses, G. J.; Starnes, J. H., Jr.; Rezaeepazhand, J.

    2000-01-01

    This paper deals with the development and use of scaled-down models in order to predict the structural behavior of large prototypes. The concept is fully described and examples are presented which demonstrate its applicability to beam-plates, plates and cylindrical shells of laminated construction. The concept is based on the use of field equations, which govern the response behavior of both the small model as well as the large prototype. The conditions under which the experimental data of a small model can be used to predict the behavior of a large prototype are called scaling laws or similarity conditions and the term that best describes the process is structural similitude. Moreover, since the term scaling is used to describe the effect of size on strength characteristics of materials, a discussion is included which should clarify the difference between "scaling law" and "size effect". Finally, a historical review of all published work in the broad area of structural similitude is presented for completeness.

  16. Comparison of Conjugate Gradient Density Matrix Search and Chebyshev Expansion Methods for Avoiding Diagonalization in Large-Scale Electronic Structure Calculations

    NASA Technical Reports Server (NTRS)

    Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.

    1998-01-01

    We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems and its memory and timing requirements compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640 and the linear scaling memory and CPU requirements of the CEM demonstrated. We show that the CPU requisites of the CEM and CG-DMS are similar for calculations with comparable accuracy.
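
    The core of the CEM can be sketched compactly: scale the Hamiltonian into [-1, 1], expand the Fermi function in Chebyshev polynomials, and build the density matrix from the recursion T_{k+1} = 2 H T_k - T_{k-1} using only matrix-matrix products. The Python sketch below uses a dense toy Hamiltonian and an arbitrary electronic temperature; a production tight-binding code would use sparse matrices and a cheaper spectral bound.

        import numpy as np

        # Sketch of the Chebyshev expansion of the density matrix P = f(H),
        # with f the Fermi function. The Hamiltonian here is a toy stand-in.
        def chebyshev_density_matrix(H, mu=0.0, beta=20.0, order=60):
            bound = np.linalg.norm(H, ord=2)                 # crude spectral bound
            Hs = H / bound                                   # scaled into [-1, 1]
            f = lambda x: 1.0 / (1.0 + np.exp(beta * (x * bound - mu)))   # Fermi function
            j = np.arange(order) + 0.5
            x = np.cos(np.pi * j / order)                    # Chebyshev nodes
            c = 2.0 / order * np.array([np.sum(f(x) * np.cos(k * np.pi * j / order))
                                        for k in range(order)])
            T_prev, T_curr = np.eye(len(H)), Hs              # T_0, T_1
            P = 0.5 * c[0] * T_prev + c[1] * T_curr
            for k in range(2, order):
                T_prev, T_curr = T_curr, 2.0 * Hs @ T_curr - T_prev      # T_k recursion
                P += c[k] * T_curr
            return P

        H = np.diag([-2.0, -1.0, 0.5, 1.5])                  # toy Hamiltonian
        print(np.round(np.diag(chebyshev_density_matrix(H)), 3))   # ~1 below mu, ~0 above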

  17. Project BALLOTS: Bibliographic Automation of Large Library Operations Using a Time-Sharing System. Progress Report (3/27/69 - 6/26/69).

    ERIC Educational Resources Information Center

    Veaner, Allen B.

    Project BALLOTS is a large-scale library automation development project of the Stanford University Libraries which has demonstrated the feasibility of conducting on-line interactive searches of complex bibliographic files, with a large number of users working simultaneously in the same or different files. This report documents the continuing…

  18. Think global, act local: Preserving the global commons

    PubMed Central

    Hauser, Oliver P.; Hendriks, Achim; Rand, David G.; Nowak, Martin A.

    2016-01-01

    Preserving global public goods, such as the planet’s ecosystem, depends on large-scale cooperation, which is difficult to achieve because the standard reciprocity mechanisms weaken in large groups. Here we demonstrate a method by which reciprocity can maintain cooperation in a large-scale public goods game (PGG). In a first experiment, participants in groups of on average 39 people play one round of a Prisoner’s Dilemma (PD) with their two nearest neighbours on a cyclic network after each PGG round. We observe that people engage in “local-to-global” reciprocity, leveraging local interactions to enforce global cooperation: Participants reduce PD cooperation with neighbours who contribute little in the PGG. In response, low PGG contributors increase their contributions if both neighbours defect in the PD. In a control condition, participants do not know their neighbours’ PGG contribution and thus cannot link play in the PD to the PGG. In the control we observe a sharp decline of cooperation in the PGG, while in the treatment condition global cooperation is maintained. In a second experiment, we demonstrate the scalability of this effect: in a 1,000-person PGG, participants in the treatment condition successfully sustain public contributions. Our findings suggest that this simple “local-to-global” intervention facilitates large-scale cooperation. PMID:27808222

  19. Association of parameter, software, and hardware variation with large-scale behavior across 57,000 climate models

    PubMed Central

    Knight, Christopher G.; Knight, Sylvia H. E.; Massey, Neil; Aina, Tolu; Christensen, Carl; Frame, Dave J.; Kettleborough, Jamie A.; Martin, Andrew; Pascoe, Stephen; Sanderson, Ben; Stainforth, David A.; Allen, Myles R.

    2007-01-01

    In complex spatial models, as used to predict the climate response to greenhouse gas emissions, parameter variation within plausible bounds has major effects on model behavior of interest. Here, we present an unprecedentedly large ensemble of >57,000 climate model runs in which 10 parameters, initial conditions, hardware, and software used to run the model all have been varied. We relate information about the model runs to large-scale model behavior (equilibrium sensitivity of global mean temperature to a doubling of carbon dioxide). We demonstrate that effects of parameter, hardware, and software variation are detectable, complex, and interacting. However, we find most of the effects of parameter variation are caused by a small subset of parameters. Notably, the entrainment coefficient in clouds is associated with 30% of the variation seen in climate sensitivity, although both low and high values can give high climate sensitivity. We demonstrate that the effect of hardware and software is small relative to the effect of parameter variation and, over the wide range of systems tested, may be treated as equivalent to that caused by changes in initial conditions. We discuss the significance of these results in relation to the design and interpretation of climate modeling experiments and large-scale modeling more generally. PMID:17640921

  20. Space Technology 5 Multi-point Observations of Field-aligned Currents: Temporal Variability of Meso-Scale Structures

    NASA Technical Reports Server (NTRS)

    Le, Guan; Wang, Yongli; Slavin, James A.; Strangeway, Robert J.

    2007-01-01

    Space Technology 5 (ST5) is a three micro-satellite constellation deployed into a 300 x 4500 km, dawn-dusk, sun-synchronous polar orbit from March 22 to June 21, 2006, for technology validations. In this paper, we present a study of the temporal variability of field-aligned currents using multi-point magnetic field measurements from ST5. The data demonstrate that meso-scale current structures are commonly embedded within large-scale field-aligned current sheets. The meso-scale current structures are very dynamic, with highly variable current density and/or polarity on time scales of approximately 10 min. They exhibit large temporal variations on such time scales during both quiet and disturbed times. On the other hand, the data also show that the time scales for the currents to be relatively stable are approximately 1 min for meso-scale currents and approximately 10 min for large-scale current sheets. These temporal features are evidently associated with dynamic variations of their particle carriers (mainly electrons) as they respond to variations of the parallel electric field in the auroral acceleration region. The characteristic time scales for the temporal variability of meso-scale field-aligned currents are found to be consistent with those of the auroral parallel electric field.

  1. Event management for large scale event-driven digital hardware spiking neural networks.

    PubMed

    Caron, Louis-Charles; D'Haene, Michiel; Mailhot, Frédéric; Schrauwen, Benjamin; Rouat, Jean

    2013-09-01

    The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation and digital hardware neuromorphic systems get a lot of attention. Despite the popularity of event-driven SNNs in software, very few digital hardware architectures are found. This is because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large-scale digital hardware event-driven SNNs. The scaling is linear for memory, logarithmic for logic resources and constant for processing time. The use of the structured heap queue is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and an SNN of 65,536 neurons and 513,184 synapses. Events can be processed at the rate of 1 every 7 clock cycles, and a 406×158 pixel image is segmented in 200 ms. Copyright © 2013 Elsevier Ltd. All rights reserved.
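
    For readers more familiar with software, the sketch below shows the role such a queue plays in an event-driven SNN: spikes become future delivery events ordered by time, and the simulator always pops the earliest one. This is a plain binary min-heap analogue (O(log n) insert/pop) in Python; the structured heap queue of the paper realises the same ordering in pipelined FPGA logic, so the class and method names here are illustrative assumptions only.

    import heapq

    class EventQueue:
        """Software stand-in for a hardware event queue: events are
        (delivery_time, target_neuron, weight) tuples ordered by time."""
        def __init__(self):
            self._heap = []

        def schedule(self, delivery_time, target_neuron, weight):
            heapq.heappush(self._heap, (delivery_time, target_neuron, weight))

        def next_event(self):
            return heapq.heappop(self._heap) if self._heap else None

    # Minimal usage: a spike fired at t = 1.0 reaching two targets with different delays.
    q = EventQueue()
    q.schedule(1.0 + 0.7, target_neuron=42, weight=0.3)
    q.schedule(1.0 + 0.2, target_neuron=7, weight=-0.1)
    while (ev := q.next_event()) is not None:
        t, neuron, w = ev           # deliver the synaptic event at time t
        print(t, neuron, w)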

  2. Topology assisted self-organization of colloidal nanoparticles: application to 2D large-scale nanomastering.

    PubMed

    Kadiri, Hind; Kostcheev, Serguei; Turover, Daniel; Salas-Montiel, Rafael; Nomenyo, Komla; Gokarna, Anisha; Lerondel, Gilles

    2014-01-01

    Our aim was to develop a novel method for fully controllable large-scale nanopatterning. We investigated the influence of the surface topology, i.e., a pre-pattern of hydrogen silsesquioxane (HSQ) posts, on the self-organization of polystyrene (PS) beads dispersed over a large surface. Depending on the post size and spacing, long-range ordering of the self-organized polystyrene beads is observed where guide posts are used, leading to a single-crystal-like structure. Topology-assisted self-organization has proved to be one of the solutions to obtain large-scale ordering. Besides post size and spacing, the colloidal concentration and the nature of the solvent were found to have a significant effect on the self-organization of the PS beads. Scanning electron microscopy and associated Fourier transform analysis were used to characterize the morphology of the ordered surfaces. Finally, the production of silicon molds is demonstrated by using the beads as a template for dry etching.

  3. Random access in large-scale DNA data storage.

    PubMed

    Organick, Lee; Ang, Siena Dumas; Chen, Yuan-Jyue; Lopez, Randolph; Yekhanin, Sergey; Makarychev, Konstantin; Racz, Miklos Z; Kamath, Govinda; Gopalan, Parikshit; Nguyen, Bichlien; Takahashi, Christopher N; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Seelig, Georg; Ceze, Luis; Strauss, Karin

    2018-03-01

    Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data on a large scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data) in more than 13 million DNA oligonucleotides, and show that we can recover each file individually and with no errors using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.
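
    Conceptually, random access works because every file is tagged with its own primer pair, so only the matching oligos are amplified and decoded. The toy sketch below models that selection step with plain string matching; the file names, the primer sequences and the idea of filtering reads in software are invented for illustration and stand in for the real PCR-enrichment, sequencing and error-correction pipeline.

    # Toy model of primer-based random access (illustrative only).
    primer_index = {
        "photo_001.jpg": ("ACGTTGCA", "TTGCACGT"),    # hypothetical primer pair
        "video_clip.mp4": ("GGATCCAA", "AATCCGGA"),
    }

    def retrieve(pool_reads, filename):
        """Keep only reads flanked by the file's primer sites, mimicking
        PCR enrichment before decoding."""
        fwd, rev = primer_index[filename]
        return [r for r in pool_reads if r.startswith(fwd) and r.endswith(rev)]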

  4. Development and Examination of the Social Appearance Anxiety Scale

    ERIC Educational Resources Information Center

    Hart, Trevor A.; Flora, David B.; Palyo, Sarah A.; Fresco, David M.; Holle, Christian; Heimberg, Richard G.

    2008-01-01

    The Social Appearance Anxiety Scale (SAAS) was created to measure anxiety about being negatively evaluated by others because of one's overall appearance, including body shape. This study examined the psychometric properties of the SAAS in three large samples of undergraduate students (respective ns = 512, 853, and 541). The SAAS demonstrated a…

  5. Insufficiency of avoided crossings for witnessing large-scale quantum coherence in flux qubits

    NASA Astrophysics Data System (ADS)

    Fröwis, Florian; Yadin, Benjamin; Gisin, Nicolas

    2018-04-01

    Do experiments based on superconducting loops segmented with Josephson junctions (e.g., flux qubits) show macroscopic quantum behavior in the sense of Schrödinger's cat example? Various arguments based on microscopic and phenomenological models were recently adduced in this debate. We approach this problem by adapting (to flux qubits) the framework of large-scale quantum coherence, which was already successfully applied to spin ensembles and photonic systems. We show that contemporary experiments might show quantum coherence more than 100 times larger than experiments in the classical regime. However, we argue that the often-used demonstration of an avoided crossing in the energy spectrum is not sufficient to make a conclusion about the presence of large-scale quantum coherence. Alternative, rigorous witnesses are proposed.

  6. Electron drift in a large scale solid xenon

    DOE PAGES

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed in large scale solid phase xenon is a factor of two faster than that in the liquid phase.

  7. MEAN-FIELD MODELING OF AN α² DYNAMO COUPLED WITH DIRECT NUMERICAL SIMULATIONS OF RIGIDLY ROTATING CONVECTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masada, Youhei; Sano, Takayoshi, E-mail: ymasada@harbor.kobe-u.ac.jp, E-mail: sano@ile.osaka-u.ac.jp

    2014-10-10

    The mechanism of large-scale dynamos in rigidly rotating stratified convection is explored by direct numerical simulations (DNS) in Cartesian geometry. A mean-field dynamo model is also constructed using turbulent velocity profiles consistently extracted from the corresponding DNS results. By quantitative comparison between the DNS and our mean-field model, it is demonstrated that the oscillatory α² dynamo wave, excited and sustained in the convection zone, is responsible for large-scale magnetic activities such as cyclic polarity reversal and spatiotemporal migration. The results provide strong evidence that a nonuniformity of the α-effect, which is a natural outcome of rotating stratified convection, can be an important prerequisite for large-scale stellar dynamos, even without the Ω-effect.

  8. Multiscale recurrence quantification analysis of order recurrence plots

    NASA Astrophysics Data System (ADS)

    Xu, Mengjia; Shang, Pengjian; Lin, Aijing

    2017-03-01

    In this paper, we propose a new method of multiscale recurrence quantification analysis (MSRQA) to analyze the structure of order recurrence plots. The MSRQA is based on order patterns over a range of time scales. Compared with conventional recurrence quantification analysis (RQA), the MSRQA reveals richer and more recognizable information on the local characteristics of diverse systems and successfully describes their recurrence properties. Both synthetic series and stock market indexes exhibit recurrence properties at large time scales that differ markedly from those at a single time scale. Some systems present clearer recurrence patterns at large time scales. These results demonstrate that the new approach is effective for distinguishing three similar stock market systems and for revealing some of their inherent differences.
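
    A minimal numerical sketch of the multiscale order-recurrence idea is given below: the series is coarse-grained at each scale, every embedded vector is mapped to its ordinal (permutation) pattern, and recurrence is declared when two time points share the same pattern. Only the recurrence rate is computed here; the embedding dimension, the coarse-graining by window averaging and the synthetic test series are assumptions, and the paper's full set of RQA measures (determinism, laminarity, etc.) is omitted.

    import numpy as np
    from itertools import permutations

    def coarse_grain(x, scale):
        """Average non-overlapping windows of length `scale` (standard multiscale step)."""
        n = len(x) // scale
        return x[:n * scale].reshape(n, scale).mean(axis=1)

    def order_patterns(x, dim=3):
        """Map each embedded vector to the index of its ordinal (permutation) pattern."""
        lookup = {p: i for i, p in enumerate(permutations(range(dim)))}
        emb = np.lib.stride_tricks.sliding_window_view(x, dim)
        return np.array([lookup[tuple(np.argsort(v))] for v in emb])

    def order_recurrence_rate(x, dim=3, scale=1):
        """Recurrence rate of the order recurrence plot at one time scale:
        the fraction of pairs (i, j) sharing the same ordinal pattern."""
        pats = order_patterns(coarse_grain(np.asarray(x, float), scale), dim)
        return np.mean(pats[:, None] == pats[None, :])

    # Multiscale profile for a noisy sine wave (synthetic example).
    x = np.sin(np.linspace(0, 40 * np.pi, 4000)) + 0.2 * np.random.randn(4000)
    profile = {s: order_recurrence_rate(x, dim=3, scale=s) for s in (1, 2, 5, 10, 20)}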

  9. Coordinating the Commons: Diversity & Dynamics in Open Collaborations

    ERIC Educational Resources Information Center

    Morgan, Jonathan T.

    2013-01-01

    The success of Wikipedia demonstrates that open collaboration can be an effective model for organizing geographically-distributed volunteers to perform complex, sustained work at a massive scale. However, Wikipedia's history also demonstrates some of the challenges that large, long-term open collaborations face: the core community of Wikipedia…

  10. Astronomy Demonstrations and Models.

    ERIC Educational Resources Information Center

    Eckroth, Charles A.

    Demonstrations in astronomy classes seem to be more necessary than in physics classes for three reasons. First, many of the events are very large scale and impossibly remote from human senses. Secondly, while physics courses use discussions of one- and two-dimensional motion, three-dimensional motion is the normal situation in astronomy; thus,…

  11. Bio-inspired wooden actuators for large scale applications.

    PubMed

    Rüggeberg, Markus; Burgert, Ingo

    2015-01-01

    Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small scale applications down to micrometer size have been developed, but up-scaling remains challenging due to either limitations in mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and a good machinability to reach large scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for an autonomous and solar powered movement of a tracker for solar modules.
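
    The claim that actuation amplitude "can be predicted and controlled by adapting the geometry and the constitution of the bilayers" can be made concrete with the classical bimetal-strip relation. The sketch below uses Timoshenko's curvature formula as a reasonable first-order model; the moduli, thicknesses and swelling strain are invented numbers, and the paper may use a refined hygroexpansion model rather than this exact expression.

    def bilayer_curvature(delta_strain, t1, t2, E1, E2):
        """Curvature of a two-layer strip caused by a differential (swelling)
        strain, following Timoshenko's classical bimetal formula."""
        m = t1 / t2            # thickness ratio
        n = E1 / E2            # stiffness ratio
        h = t1 + t2            # total thickness
        return (6.0 * delta_strain * (1.0 + m) ** 2) / (
            h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
        )

    # Example: 0.5% differential hygroexpansion between two 5 mm wooden layers.
    kappa = bilayer_curvature(delta_strain=0.005, t1=0.005, t2=0.005, E1=11e9, E2=0.6e9)
    tip_deflection = kappa * 0.5 ** 2 / 2.0   # small-deflection estimate for a 0.5 m strip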

  12. Bio-Inspired Wooden Actuators for Large Scale Applications

    PubMed Central

    Rüggeberg, Markus; Burgert, Ingo

    2015-01-01

    Implementing programmable actuation into materials and structures is a major topic in the field of smart materials. In particular the bilayer principle has been employed to develop actuators that respond to various kinds of stimuli. A multitude of small scale applications down to micrometer size have been developed, but up-scaling remains challenging due to either limitations in mechanical stiffness of the material or in the manufacturing processes. Here, we demonstrate the actuation of wooden bilayers in response to changes in relative humidity, making use of the high material stiffness and a good machinability to reach large scale actuation and application. Amplitude and response time of the actuation were measured and can be predicted and controlled by adapting the geometry and the constitution of the bilayers. Field tests in full weathering conditions revealed long-term stability of the actuation. The potential of the concept is shown by a first demonstrator. With the sensor and actuator intrinsically incorporated in the wooden bilayers, the daily change in relative humidity is exploited for an autonomous and solar powered movement of a tracker for solar modules. PMID:25835386

  13. Demonstration of a Large-Scale Tank Assembly via Circumferential Friction Stir Welds

    NASA Technical Reports Server (NTRS)

    Jones, Clyde S.; Adams, Glynn; Colligan, Kevin

    2000-01-01

    A collaborative effort between NASA/Marshall Space Flight Center and the Michoud Unit of Lockheed Martin Space Systems Company was undertaken to demonstrate assembly of a large-scale aluminum tank using circumferential friction stir welds. The hardware used to complete this demonstration was fabricated as a study of near-net- shape technologies. The tooling used to complete this demonstration was originally designed for assembly of a tank using fusion weld processes. This presentation describes the modifications and additions that were made to the existing fusion welding tools required to accommodate circumferential friction stir welding, as well as the process used to assemble the tank. The tooling modifications include design, fabrication and installation of several components. The most significant components include a friction stir weld unit with adjustable pin length capabilities, a continuous internal anvil for 'open' circumferential welds, a continuous closeout anvil, clamping systems, an external reaction system and the control system required to conduct the friction stir welds and integrate the operation of the tool. The demonstration was intended as a development task. The experience gained during each circumferential weld was applied to improve subsequent welds. Both constant and tapered thickness 14-foot diameter circumferential welds were successfully demonstrated.

  14. Demonstration-scale evaluation of a novel high-solids anaerobic digestion process for converting organic wastes to fuel gas and compost.

    PubMed

    Rivard, C J; Duff, B W; Dickow, J H; Wiles, C C; Nagle, N J; Gaddy, J L; Clausen, E C

    1998-01-01

    Early evaluations of the bioconversion potential for combined wastes such as tuna sludge and sorted municipal solid waste (MSW) were conducted at laboratory scale and compared conventional low-solids, stirred-tank anaerobic systems with the novel, high-solids anaerobic digester (HSAD) design. Enhanced feedstock conversion rates and yields were determined for the HSAD system. In addition, the HSAD system demonstrated superior resiliency to process failure. Utilizing relatively dry feedstocks, the HSAD system is approximately one-tenth the size of conventional low-solids systems. In addition, the HSAD system is capable of organic loading rates (OLRs) on the order of 20-25 g volatile solids per liter digester volume per day (g VS/L/d), roughly 4-5 times those of conventional systems. Current efforts involve developing a demonstration-scale (pilot-scale) HSAD system. A two-ton/d plant has been constructed in Stanton, CA and is currently in the commissioning/startup phase. The purposes of the project are to verify laboratory- and intermediate-scale process performance; test the performance of large-scale prototype mechanical systems; demonstrate the long-term reliability of the process; and generate the process and economic data required for the design, financing, and construction of full-scale commercial systems. This study presents confirmatory fermentation data obtained at intermediate scale and a snapshot of the pilot-scale project.

  15. E-ELT M5 field stabilisation unit scale 1 demonstrator design and performances evaluation

    NASA Astrophysics Data System (ADS)

    Casalta, J. M.; Barriga, J.; Ariño, J.; Mercader, J.; San Andrés, M.; Serra, J.; Kjelberg, I.; Hubin, N.; Jochum, L.; Vernet, E.; Dimmler, M.; Müller, M.

    2010-07-01

    The M5 Field Stabilisation Unit (M5FU) for the European Extremely Large Telescope (E-ELT) is a fast correcting optical system that shall provide tip-tilt corrections for the telescope dynamic pointing errors and the effects of atmospheric tip-tilt and wind disturbances. An M5FU scale 1 demonstrator (M5FU1D) is being built to assess the feasibility of the key elements (actuators, sensors, mirror, mirror interfaces) and the real-time control algorithm. The strict constraints (e.g., tip-tilt control frequency range of 100 Hz, 3 m elliptical mirror size, mirror first eigenfrequency of 300 Hz, maximum tip-tilt range of +/- 30 arcsec, maximum tip-tilt error < 40 marcsec) posed a significant challenge for developing the M5FU conceptual design and its scale 1 demonstrator. The paper summarises the proposed design for the final unit and the demonstrator, and compares the measured performance against the applicable specifications.

  16. Architectural Optimization of Digital Libraries

    NASA Technical Reports Server (NTRS)

    Biser, Aileen O.

    1998-01-01

    This work investigates performance and scaling issues relevant to large-scale distributed digital libraries. Presently, performance and scaling studies focus on specific implementations of production or prototype digital libraries. Although such studies give designers and other researchers useful insight into performance and scaling issues, the broader issues relevant to very large-scale distributed libraries are not addressed. Specifically, no current studies look at the extreme or worst-case possibilities in digital library implementations. A survey of digital library research issues is presented. Scaling and performance issues are mentioned frequently in the digital library literature but are generally not the focus of much of the current research. In this thesis, a model for a Generic Distributed Digital Library (GDDL) and nine cases of typical user activities are defined. This model is used to facilitate some basic analysis of scaling issues, specifically the calculation of the Internet traffic generated for different configurations of the study parameters and an estimate of the future bandwidth needed for a large-scale distributed digital library implementation. This analysis demonstrates the potential impact a future distributed digital library implementation would have on the Internet traffic load and raises questions concerning the architecture decisions being made for future distributed digital library designs.

  17. Spiking neural network simulation: memory-optimal synaptic event scheduling.

    PubMed

    Stewart, Robert D; Gurney, Kevin N

    2011-06-01

    Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
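
    The memory argument can be illustrated with a per-neuron delay ring buffer: instead of storing one pending event per synapse, each neuron accumulates scheduled input in a circular buffer whose length is the maximum delay in time steps, so storage grows with the number of neurons rather than the number of synapses. This sketch only illustrates that scaling idea; the discrete- and continuous-delivery algorithms in the paper differ in detail, and all names here are assumptions.

    import numpy as np

    class DelayRingBuffer:
        """Per-neuron circular buffers for scheduled synaptic input.
        Memory is O(n_neurons * max_delay_steps), independent of how many
        synaptic events are currently pending."""
        def __init__(self, n_neurons, max_delay_steps):
            self.buf = np.zeros((n_neurons, max_delay_steps))
            self.t = 0

        def schedule(self, target, delay_steps, weight):
            # Accumulate the weight in the slot where it will be delivered.
            slot = (self.t + delay_steps) % self.buf.shape[1]
            self.buf[target, slot] += weight

        def advance(self):
            # Read out this step's input for every neuron, then clear the slot.
            slot = self.t % self.buf.shape[1]
            current = self.buf[:, slot].copy()
            self.buf[:, slot] = 0.0
            self.t += 1
            return current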

  18. Space Technology 5 Multi-Point Observations of Temporal Variability of Field-Aligned Currents

    NASA Technical Reports Server (NTRS)

    Le, Guan; Wang, Yongli; Slavin, James A.; Strangeway, Robert J.

    2008-01-01

    Space Technology 5 (ST5) is a three micro-satellite constellation deployed into a 300 x 4500 km, dawn-dusk, sun-synchronous polar orbit from March 22 to June 21, 2006, for technology validations. In this paper, we present a study of the temporal variability of field-aligned currents using multi-point magnetic field measurements from ST5. The data demonstrate that meso-scale current structures are commonly embedded within large-scale field-aligned current sheets. The meso-scale current structures are very dynamic, with highly variable current density and/or polarity on time scales of approximately 10 min. They exhibit large temporal variations on such time scales during both quiet and disturbed times. On the other hand, the data also show that the time scales for the currents to be relatively stable are approximately 1 min for meso-scale currents and approximately 10 min for large-scale current sheets. These temporal features are evidently associated with dynamic variations of their particle carriers (mainly electrons) as they respond to variations of the parallel electric field in the auroral acceleration region. The characteristic time scales for the temporal variability of meso-scale field-aligned currents are found to be consistent with those of the auroral parallel electric field.

  19. Modified dispersion relations, inflation, and scale invariance

    NASA Astrophysics Data System (ADS)

    Bianco, Stefano; Friedhoff, Victor Nicolai; Wilson-Ewing, Edward

    2018-02-01

    For a certain type of modified dispersion relations, the vacuum quantum state for very short wavelength cosmological perturbations is scale-invariant, and it has been suggested that this may be the source of the scale-invariance observed in the temperature anisotropies in the cosmic microwave background. We point out that for this scenario to be possible, it is necessary to redshift these short wavelength modes to cosmological scales in such a way that the scale-invariance is not lost. This requires nontrivial background dynamics before the onset of standard radiation-dominated cosmology; we demonstrate that one possible solution is inflation with a sufficiently large Hubble rate, and that for this, slow roll is not necessary. In addition, we also show that if the slow-roll condition is added to inflation with a large Hubble rate, then for any power-law modified dispersion relation, quantum vacuum fluctuations become nearly scale-invariant when they exit the Hubble radius.

  20. Cross-cultural validation of the Work Values Scale EVAT using multi-group confirmatory factor analysis and confirmatory multidimensional scaling.

    PubMed

    Arciniega, Luis M; González, Luis; Soares, Vítor; Ciulli, Stefania; Giannini, Marco

    2009-11-01

    The Work Values Scale EVAT (based on its initials in Spanish: Escala de Valores hacia el Trabajo) was created in 2000 to measure values in the work context. The instrument operationalizes the four higher-order-values of the Schwartz Theory (1992) through sixteen items focused on work scenarios. The questionnaire has been used among large samples of Mexican and Spanish individuals reporting adequate psychometric properties. The instrument has recently been translated into Portuguese and Italian, and subsequently used in a large-scale study with nurses in Portugal and in a sample of various occupations in Italy. The purpose of this research was to demonstrate the cross-cultural validity of the Work Values Scale EVAT in Spanish, Portuguese, and Italian. Our results suggest that the original Spanish version of the EVAT scale and the new Portuguese and Italian versions are equivalent.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiang, Nai-Yuan; Zavala, Victor M.

    We present a filter line-search algorithm that does not require inertia information of the linear system. This feature enables the use of a wide range of linear algebra strategies and libraries, which is essential to tackle large-scale problems on modern computing architectures. The proposed approach performs curvature tests along the search step to detect negative curvature and to trigger convexification. We prove that the approach is globally convergent, and we implement the approach within a parallel interior-point framework to solve large-scale and highly nonlinear problems. Our numerical tests demonstrate that the inertia-free approach is as efficient as inertia detection via symmetric indefinite factorizations. We also demonstrate that the inertia-free approach can lead to reductions in solution time because it reduces the amount of convexification needed.
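
    The curvature-test-plus-convexification idea can be sketched in a few lines: compute the step from the current (possibly indefinite) matrix, check the curvature along it, and if the test fails add a multiple of the identity and re-solve. The routine below is a dense, unconstrained caricature of that loop; the tolerance, the update rule for the regularization parameter and the function name are assumptions, not the paper's interior-point implementation.

    import numpy as np

    def convexified_step(W, grad, kappa=1e-8, delta0=1e-4, max_tries=30):
        """Inertia-free step computation (sketch): accept the step only if it
        has sufficient positive curvature, otherwise convexify W and retry."""
        delta = 0.0
        n = W.shape[0]
        for _ in range(max_tries):
            Wreg = W + delta * np.eye(n)
            d = np.linalg.solve(Wreg, -grad)
            if d @ (Wreg @ d) >= kappa * (d @ d):   # curvature test along the step
                return d, delta
            delta = delta0 if delta == 0.0 else 10.0 * delta
        raise RuntimeError("convexification failed to produce positive curvature")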

  2. Composites for Exploration Upper Stage

    NASA Technical Reports Server (NTRS)

    Fikes, J. C.; Jackson, J. R.; Richardson, S. W.; Thomas, A. D.; Mann, T. O.; Miller, S. G.

    2016-01-01

    The Composites for Exploration Upper Stage (CEUS) was a 3-year, level III project within the Technology Demonstration Missions program of the NASA Space Technology Mission Directorate. Studies have shown that composites provide important programmatic enhancements, including reduced weight to increase capability and accelerated expansion of exploration and science mission objectives. The CEUS project was focused on technologies that best advanced innovation, infusion, and broad applications for the inclusion of composites on future large human-rated launch vehicles and spacecraft. The benefits included near- and far-term opportunities for infusion (NASA, industry/commercial, Department of Defense), demonstrated critical technologies and technically implementable evolvable innovations, and sustained Agency experience. The initial scope of the project was to advance technologies for large composite structures applicable to the Space Launch System (SLS) Exploration Upper Stage (EUS) by focusing on the affordability and technical performance of the EUS forward and aft skirts. The project was tasked to develop and demonstrate critical composite technologies with a focus on full-scale materials, design, manufacturing, and test using NASA in-house capabilities. This would have demonstrated a major advancement in confidence and matured the large-scale composite technology to a Technology Readiness Level 6. This project would, therefore, have bridged the gap for providing composite application to SLS upgrades, enabling future exploration missions.

  3. Demonstration of Hadoop-GIS: A Spatial Data Warehousing System Over MapReduce

    PubMed Central

    Aji, Ablimit; Sun, Xiling; Vo, Hoang; Liu, Qioaling; Lee, Rubao; Zhang, Xiaodong; Saltz, Joel; Wang, Fusheng

    2016-01-01

    The proliferation of GPS-enabled devices and the rapid improvement of scientific instruments have resulted in massive amounts of spatial data in the last decade. Support of high performance spatial queries on large volumes of data has become increasingly important in numerous fields, which requires a scalable and efficient spatial data warehousing solution, as existing approaches exhibit scalability limitations and efficiency bottlenecks for large scale spatial applications. In this demonstration, we present Hadoop-GIS – a scalable and high performance spatial query system over MapReduce. Hadoop-GIS provides an efficient spatial query engine to process spatial queries, data- and space-based partitioning, and query pipelines that parallelize queries implicitly on MapReduce. Hadoop-GIS also provides an expressive, SQL-like spatial query language for workload specification. We will demonstrate how spatial queries are expressed in spatially extended SQL queries and submitted through a command line/web interface for execution. In parallel to our system demonstration, we explain the system architecture and details on how queries are translated to MapReduce operators, optimized, and executed on Hadoop. In addition, we will showcase how the system can be used to support two representative real-world use cases: large-scale pathology analytical imaging and geospatial data warehousing. PMID:27617325

  4. Experimental performance evaluation of software defined networking (SDN) based data communication networks for large scale flexi-grid optical networks.

    PubMed

    Zhao, Yongli; He, Ruiying; Chen, Haoran; Zhang, Jie; Ji, Yuefeng; Zheng, Haomian; Lin, Yi; Wang, Xinbo

    2014-04-21

    Software defined networking (SDN) has become a focus in the current information and communication technology area because of its flexibility and programmability. It has been introduced into various network scenarios, such as datacenter networks, carrier networks, and wireless networks. The optical transport network is also regarded as an important application scenario for SDN, which is adopted as the enabling technology of data communication networks (DCN) instead of generalized multi-protocol label switching (GMPLS). However, the practical performance of SDN-based DCN for large scale optical networks, which is very important for the technology selection in future optical network deployment, has not been evaluated up to now. In this paper we have built a large scale flexi-grid optical network testbed with 1000 virtual optical transport nodes to evaluate the performance of SDN-based DCN, including network scalability, DCN bandwidth limitation, and restoration time. A series of network performance parameters, including blocking probability, bandwidth utilization, average lightpath provisioning time, and failure restoration time, have been demonstrated under various network environments, such as different traffic loads and different DCN bandwidths. The demonstration in this work can serve as a reference for future network deployment.

  5. Effect of nacelle on wake meandering in a laboratory scale wind turbine using LES

    NASA Astrophysics Data System (ADS)

    Foti, Daniel; Yang, Xiaolei; Guala, Michele; Sotiropoulos, Fotis

    2015-11-01

    Wake meandering, the large-scale motion in wind turbine wakes, has considerable effects on the velocity deficit and turbulence intensity in the turbine wake, from laboratory-scale to utility-scale wind turbines. In the dynamic wake meandering model, the wake meandering is assumed to be caused by large-scale atmospheric turbulence. On the other hand, Kang et al. (J. Fluid Mech., 2014) demonstrated that the nacelle geometry has a significant effect on the wake meandering of a hydrokinetic turbine, through the interaction of the inner wake of the nacelle vortex with the outer wake of the tip vortices. In this work, the significance of the nacelle for the wake meandering of a miniature wind turbine previously used in experiments (Howard et al., Phys. Fluids, 2015) is demonstrated with large eddy simulations (LES) using an immersed boundary method with grids fine enough to resolve the turbine geometric characteristics. The three-dimensionality of the wake meandering is analyzed in detail through turbulent spectra and meander reconstruction. The computed flow fields exhibit wake dynamics similar to those observed in the wind tunnel experiments and are analyzed to shed new light on the role of the energetic nacelle vortex in wake meandering. This work was supported by the Department of Energy (DOE) (DE-EE0002980, DE-EE0005482 and DE-AC04-94AL85000) and Sandia National Laboratories. Computational resources were provided by Sandia National Laboratories and the University of Minnesota Supercomputing Institute.

  6. Quantum information processing with long-wavelength radiation

    NASA Astrophysics Data System (ADS)

    Murgia, David; Weidt, Sebastian; Randall, Joseph; Lekitsch, Bjoern; Webster, Simon; Navickas, Tomas; Grounds, Anton; Rodriguez, Andrea; Webb, Anna; Standing, Eamon; Pearce, Stuart; Sari, Ibrahim; Kiang, Kian; Rattanasonti, Hwanjit; Kraft, Michael; Hensinger, Winfried

    To this point, the entanglement of ions has predominantly been performed using lasers. Using long wavelength radiation with static magnetic field gradients provides an architecture to simplify construction of a large scale quantum computer. The use of microwave-dressed states protects against decoherence from fluctuating magnetic fields, with radio-frequency fields used for qubit manipulation. I will report the realisation of spin-motion entanglement using long-wavelength radiation, and a new method to efficiently prepare dressed-state qubits and qutrits, reducing experimental complexity of gate operations. I will also report demonstration of ground state cooling using long wavelength radiation, which may increase two-qubit entanglement fidelity. I will then report demonstration of a high-fidelity long-wavelength two-ion quantum gate using dressed states. Combining these results with microfabricated ion traps allows for scaling towards a large scale ion trap quantum computer, and provides a platform for quantum simulations of fundamental physics. I will report progress towards the operation of microchip ion traps with extremely high magnetic field gradients for multi-ion quantum gates.

  7. National land cover monitoring using large, permanent photo plots

    Treesearch

    Raymond L. Czaplewski; Glenn P. Catts; Paul W. Snook

    1987-01-01

    A study in the state of North Carolina, U.S.A. demonstrated that large, permanent photo plots (400 hectares) can be used to monitor large regions of land by using remote sensing techniques. Estimates of area in a variety of land cover categories were made by photointerpretation of medium-scale aerial photography from a single month using 111 photo plots. Many of these...

  8. A comparison of large-scale electron beam and bench-scale 60Co irradiations of simulated aqueous waste streams

    NASA Astrophysics Data System (ADS)

    Kurucz, Charles N.; Waite, Thomas D.; Otaño, Suzana E.; Cooper, William J.; Nickelsen, Michael G.

    2002-11-01

    The effectiveness of using high energy electron beam irradiation for the removal of toxic organic chemicals from water and wastewater has been demonstrated by commercial-scale experiments conducted at the Electron Beam Research Facility (EBRF) located in Miami, Florida and elsewhere. The EBRF treats various waste and water streams up to 450 l/min (120 gal/min) with doses up to 8 kilogray (kGy). Many experiments have been conducted by injecting toxic organic compounds into various plant feed streams and measuring the concentrations of the compound(s) before and after exposure to the electron beam at various doses. Extensive experimentation has also been performed by dissolving selected chemicals in 22,700 l (6000 gal) tank trucks of potable water to simulate contaminated groundwater, and pumping the resulting solutions through the electron beam. These large-scale experiments, although necessary to demonstrate the commercial viability of the process, require a great deal of time and effort. This paper compares the results of large-scale electron beam irradiations to those obtained from bench-scale irradiations using gamma rays generated by a 60Co source. Dose constants from exponential contaminant removal models are found to depend on the source of radiation and the initial contaminant concentration. Possible reasons for the observed differences, such as a dose rate effect, are discussed. Models for estimating electron beam dose constants from bench-scale gamma experiments are presented. Data used to compare the removal of organic compounds using gamma irradiation and electron beam irradiation are taken from the literature and from a series of experiments designed to examine the effects of pH, the presence of turbidity, and initial concentration on the removal of various organic compounds (benzene, toluene, phenol, PCE, TCE and chloroform) from simulated groundwater.
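
    The "dose constant" language refers to the usual single-exponential removal model C(D) = C0 exp(-k D), where D is the absorbed dose; comparing electron beam and 60Co results amounts to comparing fitted k values. The sketch below fits k by log-linear regression; the numerical values are invented for illustration and are not data from the study.

    import numpy as np

    def dose_constant(doses_kGy, concentrations):
        """Fit C(D) = C0*exp(-k*D) by linear regression on ln(C);
        returns the dose constant k (per kGy) and the fitted C0."""
        D = np.asarray(doses_kGy, float)
        lnC = np.log(np.asarray(concentrations, float))
        slope, intercept = np.polyfit(D, lnC, 1)
        return -slope, np.exp(intercept)

    # Hypothetical bench-scale data: concentration (ug/L) versus dose (kGy).
    k, C0 = dose_constant([0, 2, 4, 6, 8], [100.0, 46.0, 21.0, 9.8, 4.5])
    removal_at_8kGy = 1.0 - np.exp(-k * 8.0)   # predicted fractional removal at 8 kGy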

  9. Classical boson sampling algorithms with superior performance to near-term experiments

    NASA Astrophysics Data System (ADS)

    Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony

    2017-12-01

    It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
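
    Two ingredients of the classical approach can be illustrated compactly: the permanent (the "intractable matrix function" behind each output probability), computed exactly with Ryser's formula, and a Metropolised independence sampler over collision-free output patterns. The sketch below uses a uniform proposal for brevity, whereas the paper's sampler proposes from the distinguishable-particle distribution; all function names and the toy problem size are assumptions.

    import numpy as np
    from itertools import combinations

    def permanent(A):
        """Ryser's formula: exact permanent in O(2^n * n) operations."""
        n = A.shape[0]
        total = 0.0
        for r in range(1, n + 1):
            for cols in combinations(range(n), r):
                total += (-1) ** (n - r) * np.prod(A[:, cols].sum(axis=1))
        return total

    def mis_boson_sampler(U, inputs, n_samples, burn_in=200, seed=None):
        """Metropolised independence sampling over collision-free outputs
        (toy sketch; probabilities are proportional to |Perm(U_sub)|^2)."""
        rng = np.random.default_rng(seed)
        m, n = U.shape[0], len(inputs)
        weight = lambda out: abs(permanent(U[np.ix_(out, inputs)])) ** 2
        state = sorted(rng.choice(m, size=n, replace=False))
        w, samples = weight(state), []
        for step in range(burn_in + n_samples):
            prop = sorted(rng.choice(m, size=n, replace=False))   # uniform proposal
            wp = weight(prop)
            if w == 0.0 or rng.random() < min(1.0, wp / w):
                state, w = prop, wp
            if step >= burn_in:
                samples.append(tuple(state))
        return samples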

  10. A large-scale perspective on stress-induced alterations in resting-state networks

    NASA Astrophysics Data System (ADS)

    Maron-Katz, Adi; Vaisvaser, Sharon; Lin, Tamar; Hendler, Talma; Shamir, Ron

    2016-02-01

    Stress is known to induce large-scale neural modulations. However, its neural effect once the stressor is removed, and how it relates to subjective experience, are not fully understood. Here we used a statistically sound data-driven approach to investigate alterations in large-scale resting-state functional connectivity (rsFC) induced by acute social stress. We compared rsfMRI profiles of 57 healthy male subjects before and after stress induction. Using a parcellation-based univariate statistical analysis, we identified a large-scale rsFC change involving 490 parcel pairs. Aiming to characterize this change, we employed statistical enrichment analysis, identifying anatomic structures that were significantly interconnected by these pairs. This analysis revealed strengthening of thalamo-cortical connectivity and weakening of cross-hemispheral parieto-temporal connectivity. These alterations were further found to be associated with change in subjective stress reports. Integrating report-based information on stress sustainment 20 minutes post induction revealed a single significant rsFC change between the right amygdala and the precuneus, which inversely correlated with the level of subjective recovery. Our study demonstrates the value of enrichment analysis for exploring large-scale network reorganization patterns, and provides new insight into stress-induced neural modulations and their relation to subjective experience.

  11. A Coherent vorticity preserving eddy-viscosity correction for Large-Eddy Simulation

    NASA Astrophysics Data System (ADS)

    Chapelier, J.-B.; Wasistho, B.; Scalo, C.

    2018-04-01

    This paper introduces a new approach to Large-Eddy Simulation (LES) where subgrid-scale (SGS) dissipation is applied proportionally to the degree of local spectral broadening, hence mitigated or deactivated in regions dominated by large-scale and/or laminar vortical motion. The proposed coherent-vorticity preserving (CvP) LES methodology is based on the evaluation of the ratio of the test-filtered to resolved (or grid-filtered) enstrophy, σ. Values of σ close to 1 indicate low sub-test-filter turbulent activity, justifying local deactivation of the SGS dissipation. The intensity of the SGS dissipation is progressively increased for σ < 1, which corresponds to small-scale spectral broadening. The SGS dissipation is then fully activated in developed turbulence characterized by σ ≤ σeq, where the value σeq is derived assuming a Kolmogorov spectrum. The proposed approach can be applied to any eddy-viscosity model, is algorithmically simple and computationally inexpensive. LES of Taylor-Green vortex breakdown demonstrates that the CvP methodology improves the performance of traditional, non-dynamic dissipative SGS models, capturing the peak of total turbulent kinetic energy dissipation during transition. Similar accuracy is obtained by adopting Germano's dynamic procedure, albeit at more than twice the computational overhead. A CvP-LES of a pair of unstable periodic helical vortices is shown to predict accurately the experimentally observed growth rate using coarse resolutions. The ability of the CvP methodology to dynamically sort the coherent, large-scale motion from the smaller, broadband scales during transition is demonstrated via flow visualizations. LES of a compressible channel flow is carried out and shows a good match with a reference DNS.
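
    The sensor at the heart of the method is just the ratio of test-filtered to grid-filtered enstrophy, evaluated locally and mapped to a factor that scales the eddy viscosity. The sketch below computes such a sensor on a gridded vorticity field using a box test filter and a linear ramp between sigma = 1 (dissipation off) and sigma_eq (fully on); the filter choice, the ramp shape and the sigma_eq value are assumptions, since the paper derives its blending function from a Kolmogorov spectrum.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def cvp_sensor(vorticity, sigma_eq=0.5):
        """Coherent-vorticity-preserving sensor (sketch): returns sigma, the ratio
        of test-filtered to resolved enstrophy, and a [0, 1] factor by which the
        base SGS eddy viscosity would be multiplied."""
        enstrophy = np.sum(vorticity ** 2, axis=0)                    # resolved enstrophy
        w_test = np.stack([uniform_filter(w, size=2) for w in vorticity])
        enstrophy_test = np.sum(w_test ** 2, axis=0)                  # test-filtered enstrophy
        sigma = enstrophy_test / np.maximum(enstrophy, 1e-30)
        scale = np.clip((1.0 - sigma) / (1.0 - sigma_eq), 0.0, 1.0)   # off at sigma = 1, full at sigma_eq
        return sigma, scale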

  12. Soil organic carbon - a large scale paired catchment assessment

    NASA Astrophysics Data System (ADS)

    Kunkel, V.; Hancock, G. R.; Wells, T.

    2016-12-01

    Soil organic carbon (SOC) concentration can vary both spatially and temporally, driven by differences in soil properties, topography and climate. However, most studies have focused on point-scale data sets, with a paucity of studies examining larger-scale catchments. Here we examine the spatial and temporal distribution of SOC for two large catchments, the Krui (575 km2) and Merriwa River (675 km2) catchments (New South Wales, Australia), which have similar shape, soils, topography and orientation. We show that the SOC distribution is very similar for both catchments and that elevation (and the associated increase in soil moisture) is a major influence on SOC. We also show that there is little change in SOC from the initial assessment in 2006 to 2015, despite a major drought from 2003 to 2010 and extreme rainfall events in 2007 and 2010; SOC concentration therefore appears robust. However, we found significant relationships between erosion and deposition patterns (as quantified using 137Cs) and SOC for both catchments, again demonstrating a strong geomorphic relationship. Vegetation across the catchments was assessed using remote sensing (Landsat and MODIS). Vegetation patterns were temporally consistent, with above-ground biomass increasing with elevation. SOC could be predicted using both these low and high resolution remote sensing platforms. Results indicate that, although moderate resolution (250 m) allows for reasonable prediction of the spatial distribution of SOC, the higher resolution (30 m) improved the strength of the SOC-NDVI relationship. The relationship between SOC and 137Cs, as a surrogate for the erosion and deposition of SOC, suggests that sediment transport and deposition influence the distribution of SOC within the catchment. The findings demonstrate that at the large catchment scale and at the decadal time scale, SOC is relatively constant and can largely be predicted from topography.

  13. Topology of large-scale structure. IV - Topology in two dimensions

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Cohen, Alexander P.; Hamilton, Andrew J. S.; Gott, J. Richard, III; Weinberg, David H.

    1989-01-01

    In a recent series of papers, an algorithm was developed for quantitatively measuring the topology of the large-scale structure of the universe and this algorithm was applied to numerical models and to three-dimensional observational data sets. In this paper, it is shown that topological information can be derived from a two-dimensional cross section of a density field, and analytic expressions are given for a Gaussian random field. The application of a two-dimensional numerical algorithm for measuring topology to cross sections of three-dimensional models is demonstrated.

  14. Analysis of Discrete-Source Damage Progression in a Tensile Stiffened Composite Panel

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Lotts, Christine G.; Sleight, David W.

    1999-01-01

    This paper demonstrates the progressive failure analysis capability of NASA Langley's COMET-AR finite element analysis code on a large-scale built-up composite structure. A large-scale five-stringer composite panel with 7-in.-long discrete-source damage was analyzed from initial loading to final failure, including geometric and material nonlinearities. Predictions using different mesh sizes, different saw-cut modeling approaches, and different failure criteria were performed and assessed. All failure predictions show reasonably good correlation with the test result.

  15. Peculiarity of Seismicity in the Balakend-Zagatal Region, Azerbaijan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ismail-Zadeh, Tahir T.

    2006-03-23

    The study of seismicity in the Balakend-Zagatal region demonstrates a temporal correlation of small events in the region with the moderate events in the Caucasus for the time interval of 1980 to 1990. It is shown that the processes resulting in deformation and tectonic movements of the main structural elements of the Caucasus region are internal and are not related to large-scale tectonic processes. A weak dependence of the regional movements on the large-scale motion of the lithospheric plates and microplates is also apparent from other geological and geodetic data.

  16. Solving Large-Scale Inverse Magnetostatic Problems using the Adjoint Method

    PubMed Central

    Bruckner, Florian; Abert, Claas; Wautischer, Gregor; Huber, Christian; Vogler, Christoph; Hinze, Michael; Suess, Dieter

    2017-01-01

    An efficient algorithm for the reconstruction of the magnetization state within magnetic components is presented. The occurring inverse magnetostatic problem is solved by means of an adjoint approach, based on the Fredkin-Koehler method for the solution of the forward problem. Due to the use of hybrid FEM-BEM coupling combined with matrix compression techniques the resulting algorithm is well suited for large-scale problems. Furthermore the reconstruction of the magnetization state within a permanent magnet as well as an optimal design application are demonstrated. PMID:28098851

  17. Filter size definition in anisotropic subgrid models for large eddy simulation on irregular grids

    NASA Astrophysics Data System (ADS)

    Abbà, Antonella; Campaniello, Dario; Nini, Michele

    2017-06-01

    The definition of the characteristic filter size to be used for subgrid-scale models in large eddy simulation on irregular grids remains an open problem. We investigate several different approaches to the definition of the filter length for anisotropic subgrid-scale models, and we propose a tensorial formulation based on the inertia ellipsoid of the grid element. The results demonstrate an improvement in the prediction of several key features of the flow when the anisotropy of the grid is explicitly taken into account with the tensorial filter size.
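
    The inertia-ellipsoid idea can be sketched directly from the element's vertex coordinates: the second-moment tensor about the centroid supplies principal directions (eigenvectors) and anisotropic lengths (from the eigenvalues), which together define a tensorial filter size. The normalisation below recovers the edge lengths exactly for a hexahedron but is only an assumed convention; the paper's precise definition is not reproduced here.

    import numpy as np

    def element_filter_tensor(vertices):
        """Principal filter directions and anisotropic lengths from the
        second-moment (inertia-like) tensor of a grid element's vertices."""
        X = np.asarray(vertices, float)
        Xc = X - X.mean(axis=0)              # coordinates about the centroid
        M = Xc.T @ Xc / len(X)               # second-moment tensor
        evals, evecs = np.linalg.eigh(M)
        lengths = 2.0 * np.sqrt(evals)       # exact edge lengths for a hexahedral element
        return lengths, evecs

    # Example: a stretched hexahedral element, 4 x 1 x 1.
    verts = [(x, y, z) for x in (0, 4) for y in (0, 1) for z in (0, 1)]
    lengths, directions = element_filter_tensor(verts)   # lengths ~ [1, 1, 4]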

  18. Validation of the Short Form of the Academic Procrastination Scale.

    PubMed

    Yockey, Ronald D

    2016-02-01

    The factor structure, internal consistency reliability, and convergent validity of the five-item Academic Procrastination Scale-Short Form was investigated on an ethnically diverse sample of college students. The results provided support for the Academic Procrastination Scale-Short Form as a unidimensional measure of academic procrastination, which possessed good internal consistency reliability in this sample of 282 students. The scale also demonstrated good convergent validity, with moderate to large correlations with both the Procrastination Assessment Scale-Students and the Tuckman Procrastination Scale. Implications of the results are discussed and recommendations for future work provided.
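
    For readers who want the reliability computation spelled out, the sketch below implements the standard Cronbach's alpha formula for a k-item scale and applies it to simulated Likert responses; the simulated data, the item count of five and the sample size of 282 only mirror the study's setup and are not the study's data.

    import numpy as np

    def cronbach_alpha(items):
        """Internal-consistency reliability: rows are respondents, columns are items."""
        X = np.asarray(items, float)
        k = X.shape[1]
        item_vars = X.var(axis=0, ddof=1).sum()
        total_var = X.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_vars / total_var)

    # Simulated 5-item, 282-respondent example on a 1-5 Likert scale.
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(282, 1))
    responses = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(282, 5))), 1, 5)
    alpha = cronbach_alpha(responses)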

  19. In-situ device integration of large-area patterned organic nanowire arrays for high-performance optical sensors

    PubMed Central

    Wu, Yiming; Zhang, Xiujuan; Pan, Huanhuan; Deng, Wei; Zhang, Xiaohong; Zhang, Xiwei; Jie, Jiansheng

    2013-01-01

    Single-crystalline organic nanowires (NWs) are important building blocks for future low-cost and efficient nano-optoelectronic devices due to their extraordinary properties. However, it remains a critical challenge to achieve large-scale organic NW array assembly and device integration. Herein, we demonstrate a feasible one-step method for large-area patterned growth of cross-aligned single-crystalline organic NW arrays and their in-situ device integration for optical image sensors. The integrated image sensor circuitry contained a 10 × 10 pixel array in an area of 1.3 × 1.3 mm2, showing high spatial resolution, excellent stability and reproducibility. More importantly, 100% of the pixels successfully operated at a high response speed and relatively small pixel-to-pixel variation. The high yield and high spatial resolution of the operational pixels, along with the high integration level of the device, clearly demonstrate the great potential of the one-step organic NW array growth and device construction approach for large-scale optoelectronic device integration. PMID:24287887

  20. Desalination: Status and Federal Issues

    DTIC Science & Technology

    2009-12-30

    on one side and lets purified water through. Reverse osmosis plants have fewer problems with corrosion and usually have lower energy requirements...Texas) and cities are actively researching and investigating the feasibility of large-scale desalination plants for municipal water supplies...desalination research and development, and in construction and operational costs of desalination demonstration projects and full-scale plants

  1. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, P. T.; Shadid, J. N.; Hu, J. J.

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.

  2. Performance of fully-coupled algebraic multigrid preconditioners for large-scale VMS resistive MHD

    DOE PAGES

    Lin, P. T.; Shadid, J. N.; Hu, J. J.; ...

    2017-11-06

    Here, we explore the current performance and scaling of a fully-implicit stabilized unstructured finite element (FE) variational multiscale (VMS) capability for large-scale simulations of 3D incompressible resistive magnetohydrodynamics (MHD). The large-scale linear systems that are generated by a Newton nonlinear solver approach are iteratively solved by preconditioned Krylov subspace methods. The efficiency of this approach is critically dependent on the scalability and performance of the algebraic multigrid preconditioner. Our study considers the performance of the numerical methods as recently implemented in the second-generation Trilinos implementation that is 64-bit compliant and is not limited by the 32-bit global identifiers of the original Epetra-based Trilinos. The study presents representative results for a Poisson problem on 1.6 million cores of an IBM Blue Gene/Q platform to demonstrate very large-scale parallel execution. Additionally, results for a more challenging steady-state MHD generator and a transient solution of a benchmark MHD turbulence calculation for the full resistive MHD system are also presented. These results are obtained on up to 131,000 cores of a Cray XC40 and one million cores of a BG/Q system.

  3. Fluid-structure interaction simulation of floating structures interacting with complex, large-scale ocean waves and atmospheric turbulence with application to floating offshore wind turbines

    NASA Astrophysics Data System (ADS)

    Calderer, Antoni; Guo, Xin; Shen, Lian; Sotiropoulos, Fotis

    2018-02-01

    We develop a numerical method for simulating coupled interactions of complex floating structures with large-scale ocean waves and atmospheric turbulence. We employ an efficient large-scale model to develop offshore wind and wave environmental conditions, which are then incorporated into a high resolution two-phase flow solver with fluid-structure interaction (FSI). The large-scale wind-wave interaction model is based on a two-fluid dynamically-coupled approach that employs a high-order spectral method for simulating the water motion and a viscous solver with undulatory boundaries for the air motion. The two-phase flow FSI solver is based on the level set method and is capable of simulating the coupled dynamic interaction of arbitrarily complex bodies with airflow and waves. The large-scale wave field solver is coupled to the near-field FSI solver with a one-way coupling approach by feeding waves into the latter via a pressure-forcing method combined with the level set method. We validate the model for both simple wave trains and three-dimensional directional waves and compare the results with experimental and theoretical solutions. Finally, we demonstrate the capabilities of the new computational framework by carrying out large-eddy simulation of a floating offshore wind turbine interacting with realistic ocean wind and waves.

  4. FIELD DEMONSTRATION OF INNOVATIVE LEAK DETECTION/LOCATION TECHNOLOGIES COUPLED WITH WALL-THICKNESS SCREENING FOR WATER MAINS

    EPA Science Inventory

    The U.S. Environmental Protection Agency sponsored a large-scale field demonstration of innovative leak detection/location and condition assessment technologies on a 76-year old, 2,500-ft long, cement-lined, 24-in. cast iron water main in Louisville, KY from July through Septembe...

  5. Field Demonstration of Innovative Leak Detection/Location in Conjunction with Pipe Wall Thickness Testing for Water Mains

    EPA Science Inventory

    The U.S. Environmental Protection Agency (EPA) sponsored a large-scale field demonstration of innovative leak detection/location and condition assessment technologies on a 76-year old, 2,000-ft long, cement-lined, 24-in. cast iron water main in Louisville, KY from July through Se...

  6. Large-Scale Campus Computer Technology Implementation: Lessons from the First Year.

    ERIC Educational Resources Information Center

    Nichols, Todd; Frazer, Linda H.

    The purpose of the Elementary Technology Demonstration Schools (ETDS) Project, funded by IBM and Apple, Inc., was to demonstrate the effectiveness of technology in accelerating the learning of low achieving at-risk students and enhancing the education of high achieving students. The paper begins by giving background information on the district,…

  7. Space Technology 5 (ST-5) Observations of Field-Aligned Currents: Temporal Variability

    NASA Technical Reports Server (NTRS)

    Le, Guan

    2010-01-01

    Space Technology 5 (ST-5) is a three micro-satellite constellation deployed into a 300 x 4500 km, dawn-dusk, sun-synchronous polar orbit from March 22 to June 21, 2006, for technology validations. In this paper, we present a study of the temporal variability of field-aligned currents using multi-point magnetic field measurements from ST-5. The data demonstrate that mesoscale current structures are commonly embedded within large-scale field-aligned current sheets. The meso-scale current structures are very dynamic with highly variable current density and/or polarity in time scales of about 10 min. They exhibit large temporal variations during both quiet and disturbed times in such time scales. On the other hand, the data also show that the time scales for the currents to be relatively stable are about 1 min for meso-scale currents and about 10 min for large-scale current sheets. These temporal features are obviously associated with dynamic variations of their particle carriers (mainly electrons) as they respond to the variations of the parallel electric field in the auroral acceleration region. The characteristic time scales for the temporal variability of meso-scale field-aligned currents are found to be consistent with those of the auroral parallel electric field.

  8. Space Technology 5 (ST-5) Multipoint Observations of Temporal and Spatial Variability of Field-Aligned Currents

    NASA Technical Reports Server (NTRS)

    Le, Guan

    2010-01-01

    Space Technology 5 (ST-5) is a three micro-satellite constellation deployed into a 300 x 4500 km, dawn-dusk, sun-synchronous polar orbit from March 22 to June 21, 2006, for technology validations. In this paper, we present a study of the temporal variability of field-aligned currents using multi-point magnetic field measurements from ST5. The data demonstrate that mesoscale current structures are commonly embedded within large-scale field-aligned current sheets. The meso-scale current structures are very dynamic with highly variable current density and/or polarity in time scales of about 10 min. They exhibit large temporal variations during both quiet and disturbed times in such time scales. On the other hand, the data also show that the time scales for the currents to be relatively stable are about 1 min for meso-scale currents and about 10 min for large-scale current sheets. These temporal features are obviously associated with dynamic variations of their particle carriers (mainly electrons) as they respond to the variations of the parallel electric field in the auroral acceleration region. The characteristic time scales for the temporal variability of meso-scale field-aligned currents are found to be consistent with those of the auroral parallel electric field.

  9. Scale-free characteristics of random networks: the topology of the world-wide web

    NASA Astrophysics Data System (ADS)

    Barabási, Albert-László; Albert, Réka; Jeong, Hawoong

    2000-06-01

    The world-wide web forms a large directed graph, whose vertices are documents and edges are links pointing from one document to another. Here we demonstrate that despite its apparent random character, the topology of this graph has a number of universal scale-free characteristics. We introduce a model that leads to a scale-free network, capturing in a minimal fashion the self-organization processes governing the world-wide web.
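
    A minimal sketch of the growth mechanism such scale-free models rest on, written in plain Python: new nodes attach preferentially to already well-connected nodes. The network size and attachment count are arbitrary assumptions, not parameters from the paper.

    # Hedged sketch: minimal preferential-attachment growth, the mechanism behind
    # scale-free degree distributions. Parameters below are illustrative only.
    import random
    from collections import Counter

    def preferential_attachment(n_nodes=10_000, m=3, seed=1):
        """Grow a graph; each new node attaches to m existing nodes with
        probability proportional to their current degree."""
        random.seed(seed)
        # Start from a small clique of m + 1 nodes.
        edges = [(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)]
        # 'targets' repeats each endpoint once per incident edge, so sampling
        # uniformly from it samples nodes proportionally to degree.
        targets = [v for e in edges for v in e]
        for new in range(m + 1, n_nodes):
            chosen = set()
            while len(chosen) < m:
                chosen.add(random.choice(targets))
            for t in chosen:
                edges.append((new, t))
                targets.extend((new, t))
        return edges

    edges = preferential_attachment()
    degree = Counter(v for e in edges for v in e)
    hist = Counter(degree.values())
    for k in sorted(hist)[:10]:
        print(f"degree {k}: {hist[k]} nodes")   # heavy-tailed distribution for large k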

  10. Measures of Agreement Between Many Raters for Ordinal Classifications

    PubMed Central

    Nelson, Kerrie P.; Edwards, Don

    2015-01-01

    Screening and diagnostic procedures often require a physician's subjective interpretation of a patient's test result using an ordered categorical scale to define the patient's disease severity. Due to wide variability observed between physicians’ ratings, many large-scale studies have been conducted to quantify agreement between multiple experts’ ordinal classifications in common diagnostic procedures such as mammography. However, very few statistical approaches are available to assess agreement in these large-scale settings. Existing summary measures of agreement rely on extensions of Cohen's kappa [1 - 5]. These are prone to prevalence and marginal distribution issues, become increasingly complex for more than three experts or are not easily implemented. Here we propose a model-based approach to assess agreement in large-scale studies based upon a framework of ordinal generalized linear mixed models. A summary measure of agreement is proposed for multiple experts assessing the same sample of patients’ test results according to an ordered categorical scale. This measure avoids some of the key flaws associated with Cohen's kappa and its extensions. Simulation studies are conducted to demonstrate the validity of the approach with comparison to commonly used agreement measures. The proposed methods are easily implemented using the software package R and are applied to two large-scale cancer agreement studies. PMID:26095449
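
    For context, the baseline measure whose shortcomings motivate the model-based approach is Cohen's kappa. The sketch below computes the classical two-rater kappa on invented ordinal ratings; the paper's ordinal generalized linear mixed model is not reproduced here.

    # Hedged sketch: classical Cohen's kappa for two raters on an ordinal scale,
    # the baseline whose prevalence/marginal flaws motivate the model-based
    # approach in the abstract. The ratings below are invented.
    from collections import Counter

    rater_a = [1, 2, 2, 3, 3, 3, 1, 2, 3, 2, 1, 3]
    rater_b = [1, 2, 3, 3, 2, 3, 1, 2, 3, 2, 2, 3]
    n = len(rater_a)

    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n           # observed agreement
    marg_a, marg_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_exp = sum((marg_a[c] / n) * (marg_b[c] / n) for c in categories)  # chance agreement

    kappa = (p_obs - p_exp) / (1 - p_exp)
    print(f"observed={p_obs:.3f} expected={p_exp:.3f} kappa={kappa:.3f}")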

  11. Portable parallel stochastic optimization for the design of aeropropulsion components

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Rhodes, G. S.

    1994-01-01

    This report presents the results of Phase 1 research to develop a methodology for performing large-scale Multi-disciplinary Stochastic Optimization (MSO) for the design of aerospace systems ranging from aeropropulsion components to complete aircraft configurations. The current research recognizes that such design optimization problems are computationally expensive, and require the use of either massively parallel or multiple-processor computers. The methodology also recognizes that many operational and performance parameters are uncertain, and that uncertainty must be considered explicitly to achieve optimum performance and cost. The objective of this Phase 1 research was to initialize the development of an MSO methodology that is portable to a wide variety of hardware platforms, while achieving efficient, large-scale parallelism when multiple processors are available. The first effort in the project was a literature review of available computer hardware, as well as a review of portable, parallel programming environments. The second effort was to implement the MSO methodology for a problem using the portable parallel programming language, Parallel Virtual Machine (PVM). The third and final effort was to demonstrate the example on a variety of computers, including a distributed-memory multiprocessor, a distributed-memory network of workstations, and a single-processor workstation. Results indicate the MSO methodology can be applied effectively to large-scale aerospace design problems. Nearly perfect linear speedup was demonstrated for computation of optimization sensitivity coefficients on both a 128-node distributed-memory multiprocessor (the Intel iPSC/860) and a network of workstations (speedups of almost 19 times achieved for 20 workstations). Very high parallel efficiencies (75 percent for 31 processors and 60 percent for 50 processors) were also achieved for computation of aerodynamic influence coefficients on the Intel. Finally, the multi-level parallelization strategy that will be needed for large-scale MSO problems was demonstrated to be highly efficient. The same parallel code instructions were used on both platforms, demonstrating portability. There are many applications to which MSO can be applied, including NASA's High-Speed Civil Transport and advanced propulsion systems. The use of MSO will reduce design and development time and testing costs dramatically.
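
    The speedup and efficiency figures quoted above are related by the standard definition efficiency = speedup / number of processors; a minimal sketch of that arithmetic follows (the numbers are taken from the abstract, the helper function is ours).

    # Hedged sketch: relating the quoted speedup and parallel-efficiency figures.
    def efficiency(speedup, processors):
        return speedup / processors

    print(f"{efficiency(19, 20):.0%}")            # ~95% for the 20-workstation case
    print(f"{0.75 * 31:.1f}x, {0.60 * 50:.1f}x")  # implied speedups at 75%/31 and 60%/50 procs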

  12. Applications of large-scale density functional theory in biology

    NASA Astrophysics Data System (ADS)

    Cole, Daniel J.; Hine, Nicholas D. M.

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first-principles modelling of biological structure-function relationships is approaching reality.

  13. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs.

    PubMed

    Cormode, Graham; Dasgupta, Anirban; Goyal, Amit; Lee, Chi Hoon

    2018-01-01

    Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users' queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations that improve performance and are suitable for deployment in very large-scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with "vanilla" LSH, even when using the same amount of space.
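
    A minimal sketch of the "vanilla" LSH baseline mentioned above, using random-hyperplane hashing for cosine similarity with banded bucketing; the paper's multi-probe variants and Hadoop deployment are not reproduced, and the data and parameters are invented.

    # Hedged sketch: plain random-hyperplane LSH for cosine similarity with banded
    # bucketing -- the "vanilla" baseline the abstract's variants improve on.
    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(0)
    dim, n_items, n_bits, n_bands = 64, 5_000, 32, 4
    items = rng.standard_normal((n_items, dim))
    planes = rng.standard_normal((n_bits, dim))          # one hyperplane per hash bit

    signatures = (items @ planes.T) > 0                  # n_items x n_bits sign bits
    band_width = n_bits // n_bands

    buckets = defaultdict(set)
    for idx, sig in enumerate(signatures):
        for b in range(n_bands):
            key = (b, sig[b * band_width:(b + 1) * band_width].tobytes())
            buckets[key].add(idx)                        # items sharing a band collide

    def candidates(query_idx):
        """Items sharing at least one band with the query (near-neighbour candidates)."""
        sig = signatures[query_idx]
        out = set()
        for b in range(n_bands):
            key = (b, sig[b * band_width:(b + 1) * band_width].tobytes())
            out |= buckets[key]
        out.discard(query_idx)
        return out

    print(len(candidates(0)), "candidate neighbours for item 0")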

  14. A comprehensive study on urban true orthorectification

    USGS Publications Warehouse

    Zhou, G.; Chen, W.; Kelmelis, J.A.; Zhang, Dongxiao

    2005-01-01

    To provide some advanced technical bases (algorithms and procedures) and experience needed for national large-scale digital orthophoto generation and revision of the Standards for National Large-Scale City Digital Orthophoto in the National Digital Orthophoto Program (NDOP), this paper presents a comprehensive study on theories, algorithms, and methods of large-scale urban orthoimage generation. The procedures of orthorectification for digital terrain model (DTM)-based and digital building model (DBM)-based orthoimage generation and their mergence for true orthoimage generation are discussed in detail. A method of compensating for building occlusions using photogrammetric geometry is developed. The data structure needed to model urban buildings for accurately generating urban orthoimages is presented. Shadow detection and removal, the optimization of seamline for automatic mosaic, and the radiometric balance of neighbor images are discussed. Street visibility analysis, including the relationship between flight height, building height, street width, and relative location of the street to the imaging center, is analyzed for complete true orthoimage generation. The experimental results demonstrated that our method can effectively and correctly orthorectify the displacements caused by terrain and buildings in urban large-scale aerial images. © 2005 IEEE.

  15. A Combined Eulerian-Lagrangian Data Representation for Large-Scale Applications.

    PubMed

    Sauer, Franz; Xie, Jinrong; Ma, Kwan-Liu

    2017-10-01

    The Eulerian and Lagrangian reference frames each provide a unique perspective when studying and visualizing results from scientific systems. As a result, many large-scale simulations produce data in both formats, and analysis tasks that simultaneously utilize information from both representations are becoming increasingly popular. However, due to their fundamentally different nature, drawing correlations between these data formats is a computationally difficult task, especially in a large-scale setting. In this work, we present a new data representation which combines both reference frames into a joint Eulerian-Lagrangian format. By reorganizing Lagrangian information according to the Eulerian simulation grid into a "unit cell" based approach, we can provide an efficient out-of-core means of sampling, querying, and operating with both representations simultaneously. We also extend this design to generate multi-resolution subsets of the full data to suit the viewer's needs and provide a fast flow-aware trajectory construction scheme. We demonstrate the effectiveness of our method using three large-scale real world scientific datasets and provide insight into the types of performance gains that can be achieved.
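
    A minimal sketch of the core reorganisation idea described above: index Lagrangian particles by the Eulerian grid cell ("unit cell") they fall in, so field values and the particles inside a cell can be queried together. The data are synthetic and the paper's out-of-core and multi-resolution machinery is not reproduced.

    # Hedged sketch: bin Lagrangian particles by Eulerian grid cell so that both
    # representations can be queried jointly. Synthetic data, illustrative sizes.
    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(0)
    nx = ny = nz = 16                                  # Eulerian grid resolution (assumption)
    field = rng.standard_normal((nx, ny, nz))          # e.g. a scalar field on the grid
    particles = rng.random((100_000, 3))               # Lagrangian positions in [0, 1)^3

    cells = np.floor(particles * [nx, ny, nz]).astype(int)   # cell index of each particle
    cell_to_particles = defaultdict(list)
    for pid, (i, j, k) in enumerate(cells):
        cell_to_particles[(i, j, k)].append(pid)

    # Joint query: particles in a cell together with the Eulerian value stored there.
    cell = (3, 7, 2)
    print(f"cell {cell}: field={field[cell]:.3f}, "
          f"{len(cell_to_particles[cell])} particles")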

  16. Manufacturing Process Developments for Regeneratively-Cooled Channel Wall Rocket Nozzles

    NASA Technical Reports Server (NTRS)

    Gradl, Paul; Brandsmeier, Will

    2016-01-01

    Regeneratively cooled channel wall nozzles incorporate a series of integral coolant channels to contain the coolant to maintain adequate wall temperatures and expand hot gas providing engine thrust and specific impulse. NASA has been evaluating manufacturing techniques targeting large scale channel wall nozzles to support affordability of current and future liquid rocket engine nozzles and thrust chamber assemblies. The development of these large scale manufacturing techniques focuses on the liner formation, channel slotting with advanced abrasive water-jet milling techniques, and closeout of the coolant channels to replace or augment other cost reduction techniques being evaluated for nozzles. NASA is developing a series of channel closeout techniques including large scale additive manufacturing laser deposition and explosively bonded closeouts. A series of subscale nozzles was completed evaluating these processes. Fabrication of mechanical test and metallography samples, in addition to subscale hardware, has focused on Inconel 625, 300-series stainless steel, and aluminum alloys, as well as other candidate materials. Evaluations of these techniques are demonstrating potential for significant cost reductions for large scale nozzles and chambers. Hot fire testing is planned using these techniques in the future.

  17. Channel optimization of high-intensity laser beams in millimeter-scale plasmas.

    PubMed

    Ceurvorst, L; Savin, A; Ratan, N; Kasim, M F; Sadler, J; Norreys, P A; Habara, H; Tanaka, K A; Zhang, S; Wei, M S; Ivancic, S; Froula, D H; Theobald, W

    2018-04-01

    Channeling experiments were performed at the OMEGA EP facility using relativistic intensity (>10^{18}W/cm^{2}) kilojoule laser pulses through large density scale length (∼390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse's focal location and intensity as well as the plasma's temperature on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities, as expected. However, contrary to previous large-scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer-duration equivalents. This new observation has many implications for future laser-plasma research in the relativistic regime.

  18. Channel optimization of high-intensity laser beams in millimeter-scale plasmas

    NASA Astrophysics Data System (ADS)

    Ceurvorst, L.; Savin, A.; Ratan, N.; Kasim, M. F.; Sadler, J.; Norreys, P. A.; Habara, H.; Tanaka, K. A.; Zhang, S.; Wei, M. S.; Ivancic, S.; Froula, D. H.; Theobald, W.

    2018-04-01

    Channeling experiments were performed at the OMEGA EP facility using relativistic intensity (>10^{18} W/cm^{2}) kilojoule laser pulses through large density scale length (∼390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse's focal location and intensity as well as the plasma's temperature on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities, as expected. However, contrary to previous large-scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer-duration equivalents. This new observation has many implications for future laser-plasma research in the relativistic regime.

  19. Automated AFM for small-scale and large-scale surface profiling in CMP applications

    NASA Astrophysics Data System (ADS)

    Zandiatashbar, Ardavan; Kim, Byong; Yoo, Young-kook; Lee, Keibock; Jo, Ahjin; Lee, Ju Suk; Cho, Sang-Joon; Park, Sang-il

    2018-03-01

    As the feature size is shrinking in the foundries, the need for inline high resolution surface profiling with versatile capabilities is increasing. One of the important areas of this need is the chemical mechanical planarization (CMP) process. We introduce a new generation of atomic force profiler (AFP) using a decoupled-scanner design. The system is capable of providing small-scale profiling using an XY scanner and large-scale profiling using a sliding stage. The decoupled-scanner design enables enhanced vision, which helps minimize the positioning error for locations of interest in the case of highly polished dies. Non-contact mode imaging is another feature of interest in this system, which is used for surface roughness measurement, automatic defect review, and deep trench measurement. Examples of the measurements performed using the atomic force profiler are demonstrated.

  20. Torsional Oscillations in a Global Solar Dynamo

    NASA Astrophysics Data System (ADS)

    Beaudoin, P.; Charbonneau, P.; Racine, E.; Smolarkiewicz, P. K.

    2013-02-01

    We characterize and analyze rotational torsional oscillations developing in a large-eddy magnetohydrodynamical simulation of solar convection (Ghizaru, Charbonneau, and Smolarkiewicz, Astrophys. J. Lett. 715, L133, 2010; Racine et al., Astrophys. J. 735, 46, 2011) producing an axisymmetric, large-scale, magnetic field undergoing periodic polarity reversals. Motivated by the many solar-like features exhibited by these oscillations, we carry out an analysis of the large-scale zonal dynamics. We demonstrate that simulated torsional oscillations are not driven primarily by the periodically varying large-scale magnetic torque, as one might have expected, but rather via the magnetic modulation of angular-momentum transport by the large-scale meridional flow. This result is confirmed by a straightforward energy analysis. We also detect a fairly sharp transition in rotational dynamics taking place as one moves from the base of the convecting layers to the base of the thin tachocline-like shear layer formed in the stably stratified fluid layers immediately below. We conclude by discussing the implications of our analyses with regard to the mechanism of amplitude saturation in the global dynamo operating in the simulation, and speculate on the possible precursor value of torsional oscillations for the forecast of solar-cycle characteristics.

  1. Criterion Validity and Practical Utility of the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF) in Assessments of Police Officer Candidates.

    PubMed

    Tarescavage, Anthony M; Corey, David M; Gupton, Herbert M; Ben-Porath, Yossef S

    2015-01-01

    Minnesota Multiphasic Personality Inventory-2-Restructured Form scores for 145 male police officer candidates were compared with supervisor ratings of field performance and problem behaviors during their initial probationary period. Results indicated that the officers produced meaningfully lower and less variable substantive scale scores compared to the general population. After applying a statistical correction for range restriction, substantive scale scores from all domains assessed by the inventory demonstrated moderate to large correlations with performance criteria. The practical significance of these results was assessed with relative risk ratio analyses that examined the utility of specific cutoffs on scales demonstrating associations with performance criteria.
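
    The "statistical correction for range restriction" mentioned above is commonly applied via the Thorndike Case II formula; the study may use a different variant, and the numbers below are invented for illustration only.

    # Hedged sketch: Thorndike Case II correction for direct range restriction,
    # one common form of the correction the abstract mentions. Values invented.
    import math

    def correct_for_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
        """Return the correlation corrected to the unrestricted population."""
        u = sd_unrestricted / sd_restricted
        return (r_restricted * u) / math.sqrt(1 - r_restricted**2 + (r_restricted * u)**2)

    print(round(correct_for_range_restriction(0.20, 10.0, 6.0), 3))  # 0.20 -> ~0.32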

  2. Untangling Brain-Wide Dynamics in Consciousness by Cross-Embedding

    PubMed Central

    Tajima, Satohiro; Yanagawa, Toru; Fujii, Naotaka; Toyoizumi, Taro

    2015-01-01

    Brain-wide interactions generating complex neural dynamics are considered crucial for emergent cognitive functions. However, the irreducible nature of nonlinear and high-dimensional dynamical interactions challenges conventional reductionist approaches. We introduce a model-free method, based on embedding theorems in nonlinear state-space reconstruction, that permits a simultaneous characterization of complexity in local dynamics, directed interactions between brain areas, and how the complexity is produced by the interactions. We demonstrate this method in large-scale electrophysiological recordings from awake and anesthetized monkeys. The cross-embedding method captures structured interaction underlying cortex-wide dynamics that may be missed by conventional correlation-based analysis, demonstrating a critical role of time-series analysis in characterizing brain state. The method reveals a consciousness-related hierarchy of cortical areas, where dynamical complexity increases along with cross-area information flow. These findings demonstrate the advantages of the cross-embedding method in deciphering large-scale and heterogeneous neuronal systems, suggesting a crucial contribution by sensory-frontoparietal interactions to the emergence of complex brain dynamics during consciousness. PMID:26584045
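
    The method builds on delay-embedding (state-space reconstruction) theorems. As a rough, simplified illustration of cross-mapping between two series after delay embedding, the sketch below uses nearest-neighbour prediction on a pair of synthetic, unidirectionally coupled logistic maps; it is not the paper's estimator and the parameters are assumptions.

    # Hedged sketch: simplified nearest-neighbour cross-mapping after delay
    # embedding, in the spirit of the embedding-theorem methods the abstract
    # builds on (not the paper's exact estimator). Synthetic coupled maps.
    import numpy as np

    def coupled_logistic(n=2000, beta=0.1):
        """Two logistic maps where x drives y (x -> y) but not vice versa."""
        x, y = 0.4, 0.2
        xs, ys = [], []
        for _ in range(n + 100):                       # 100-step transient discarded below
            x, y = 3.8 * x * (1 - x), y * (3.5 - 3.5 * y - beta * x)
            xs.append(x); ys.append(y)
        return np.array(xs[100:]), np.array(ys[100:])

    def delay_embed(series, dim=3, tau=1):
        rows = len(series) - (dim - 1) * tau
        return np.column_stack([series[i * tau: i * tau + rows] for i in range(dim)])

    def cross_map_skill(source, target, dim=3, tau=1):
        """Predict 'target' from nearest neighbours in the embedding of 'source';
        return the correlation between predictions and truth."""
        emb = delay_embed(source, dim, tau)
        tgt = target[(dim - 1) * tau:]
        preds = np.empty_like(tgt)
        for i, point in enumerate(emb):
            d = np.linalg.norm(emb - point, axis=1)
            d[i] = np.inf                               # exclude the point itself
            nn = np.argsort(d)[:dim + 1]                # dim+1 nearest neighbours
            w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
            preds[i] = np.sum(w * tgt[nn]) / np.sum(w)
        return np.corrcoef(preds, tgt)[0, 1]

    x, y = coupled_logistic()
    # Because x drives y, y's history encodes x: mapping y -> x should score higher.
    print("skill y->x:", round(cross_map_skill(y, x), 3))
    print("skill x->y:", round(cross_map_skill(x, y), 3))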

  3. On the ecological relevance of landscape mapping and its application in the spatial planning of very large marine protected areas.

    PubMed

    Hogg, Oliver T; Huvenne, Veerle A I; Griffiths, Huw J; Linse, Katrin

    2018-06-01

    In recent years very large marine protected areas (VLMPAs) have become the dominant form of spatial protection in the marine environment. Whilst seen as a holistic and geopolitically achievable approach to conservation, there is currently a mismatch between the size of VLMPAs, and the data available to underpin their establishment and inform on their management. Habitat mapping has increasingly been adopted as a means of addressing paucity in biological data, through use of environmental proxies to estimate species and community distribution. Small-scale studies have demonstrated environmental-biological links in marine systems. Such links, however, are rarely demonstrated across larger spatial scales in the benthic environment. As such, the utility of habitat mapping as an effective approach to the ecosystem-based management of VLMPAs remains, thus far, largely undetermined. The aim of this study was to assess the ecological relevance of broadscale landscape mapping. Specifically we test the relationship between broad-scale marine landscapes and the structure of their benthic faunal communities. We focussed our work at the sub-Antarctic island of South Georgia, site of one of the largest MPAs in the world. We demonstrate a statistically significant relationship between environmentally derived landscape mapping clusters, and the composition of presence-only species data from the region. To demonstrate this relationship required specific re-sampling of historical species occurrence data to balance biological rarity, biological cosmopolitism, range-restricted sampling and fine-scale heterogeneity between sampling stations. The relationship reveals a distinct biological signature in the faunal composition of individual landscapes, attributing ecological relevance to South Georgia's environmentally derived marine landscape map. We argue therefore, that landscape mapping represents an effective framework for ensuring representative protection of habitats in management plans. Such scientific underpinning of marine spatial planning is critical in balancing the needs of multiple stakeholders whilst maximising conservation payoff. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  4. Large-scale structure of randomly jammed spheres

    NASA Astrophysics Data System (ADS)

    Ikeda, Atsushi; Berthier, Ludovic; Parisi, Giorgio

    2017-05-01

    We numerically analyze the density field of three-dimensional randomly jammed packings of monodisperse soft frictionless spherical particles, paying special attention to fluctuations occurring at large length scales. We study in detail the two-point static structure factor at low wave vectors in Fourier space. We also analyze the nature of the density field in real space by studying the large-distance behavior of the two-point pair correlation function, of density fluctuations in subsystems of increasing sizes, and of the direct correlation function. We show that such real space analysis can be greatly improved by introducing a coarse-grained density field to disentangle genuine large-scale correlations from purely local effects. Our results confirm that both Fourier and real space signatures of vanishing density fluctuations at large scale are absent, indicating that randomly jammed packings are not hyperuniform. In addition, we establish that the pair correlation function displays a surprisingly complex structure at large distances, which is however not compatible with the long-range negative correlation of hyperuniform systems but fully compatible with an analytic form for the structure factor. This implies that the direct correlation function is short ranged, as we also demonstrate directly. Our results reveal that density fluctuations in jammed packings do not follow the behavior expected for random hyperuniform materials, but display instead a more complex behavior.
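
    A minimal sketch of the low-wave-vector structure factor computation referred to above, for a periodic 3D configuration. The positions here are ideal-gas random points (so S(k) is close to 1); a jammed packing would be analysed the same way. Box size, particle count and k-range are assumptions.

    # Hedged sketch: two-point static structure factor S(k) at low wave vectors
    # for particles in a periodic box. Random (ideal-gas) positions, so S ~ 1.
    import numpy as np

    rng = np.random.default_rng(0)
    L, N = 1.0, 4000
    pos = rng.random((N, 3)) * L                       # positions in a periodic box

    # Wave vectors compatible with the box: k = 2*pi*n/L for small integer n.
    ns = np.array([(i, j, k) for i in range(-3, 4)
                              for j in range(-3, 4)
                              for k in range(-3, 4) if (i, j, k) != (0, 0, 0)])
    kvecs = 2 * np.pi * ns / L

    rho_k = np.exp(-1j * pos @ kvecs.T).sum(axis=0)    # collective density modes rho(k)
    S = (np.abs(rho_k) ** 2) / N                       # S(k) = |rho(k)|^2 / N

    kmag = np.linalg.norm(kvecs, axis=1)
    for k in sorted(set(np.round(kmag, 6)))[:4]:
        sel = np.isclose(kmag, k)
        print(f"|k|={k:6.2f}  S(k)={S[sel].mean():.3f}")  # ~1 for uncorrelated points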

  5. A robust quantitative near infrared modeling approach for blend monitoring.

    PubMed

    Mohan, Shikhar; Momose, Wataru; Katz, Jeffrey M; Hossain, Md Nayeem; Velez, Natasha; Drennen, James K; Anderson, Carl A

    2018-01-30

    This study demonstrates a material sparing Near-Infrared modeling approach for powder blend monitoring. In this new approach, gram scale powder mixtures are subjected to compression loads to simulate the effect of scale using an Instron universal testing system. Models prepared by the new method development approach (small-scale method) and by a traditional method development (blender-scale method) were compared by simultaneously monitoring a 1kg batch size blend run. Both models demonstrated similar model performance. The small-scale method strategy significantly reduces the total resources expended to develop Near-Infrared calibration models for on-line blend monitoring. Further, this development approach does not require the actual equipment (i.e., blender) to which the method will be applied, only a similar optical interface. Thus, a robust on-line blend monitoring method can be fully developed before any large-scale blending experiment is viable, allowing the blend method to be used during scale-up and blend development trials. Copyright © 2017. Published by Elsevier B.V.

  6. Development of Large-Eddy Interaction Model for inhomogeneous turbulent flows

    NASA Technical Reports Server (NTRS)

    Hong, S. K.; Payne, F. R.

    1987-01-01

    The objective of this paper is to demonstrate the applicability of a currently proposed model, with minimum empiricism, for calculation of the Reynolds stresses and other turbulence structural quantities in a channel. The current Large-Eddy Interaction Model not only yields Reynolds stresses but also presents an opportunity to illuminate typical characteristic motions of large-scale turbulence and the phenomenological aspects of engineering models for two Reynolds numbers.

  7. Experience in using commercial clouds in CMS

    NASA Astrophysics Data System (ADS)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.; Fuess, S.; Garzoglio, G.; Girone, M.; Gutsche, O.; Holzman, B.; Hufnagel, D.; Kim, H.; Kennedy, R.; Mason, D.; Spentzouris, P.; Timm, S.; Tiradani, A.; Vaandering, E.; CMS Collaboration

    2017-10-01

    Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most I/O-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.

  8. Experience in using commercial clouds in CMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauerdick, L.; Bockelman, B.; Dykstra, D.

    Historically high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be able to be flexibly deployed for a variety of computing tasks. There is a growing interest amongst the cloud providers to demonstrate the capability to perform large scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in organizing the most I/O-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. Also, we will discuss the economic issues and cost and operational efficiency comparison to our dedicated resources. At the end we will consider the changes in the working model of HEP computing in a domain with the availability of large scale resources scheduled at peak times.

  9. A 100,000 Scale Factor Radar Range.

    PubMed

    Blanche, Pierre-Alexandre; Neifeld, Mark; Peyghambarian, Nasser

    2017-12-19

    The radar cross section of an object is an important electromagnetic property that is often measured in anechoic chambers. However, for very large and complex structures such as ships or sea and land clutter, this common approach is not practical. The use of computer simulations is also not viable since it would take many years of computational time to model and predict the radar characteristics of such large objects. We have now devised a new scaling technique to overcome these difficulties, and make accurate measurements of the radar cross section of large items. In this article we demonstrate that by reducing the scale of the model by a factor of 100,000, and using near-infrared wavelengths, the radar cross section can be determined in a tabletop setup. The accuracy of the method is compared to simulations, and an example of measurement is provided on a 1 mm highly detailed model of a ship. The advantages of this scaling approach are its versatility, and the possibility to perform fast, convenient, and inexpensive measurements.
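
    The scaling that makes a tabletop measurement possible can be written down directly: lengths and wavelength shrink by the scale factor s, and for conducting targets the full-scale cross section is recovered by multiplying the model value by s^2. The radar band and the numerical values below are illustrative assumptions, not figures from the paper.

    # Hedged sketch: the electromagnetic scaling behind a 1:100,000 radar range.
    # Wavelength scales with the geometry; RCS (an area) scales by s^2.
    s = 100_000                     # geometric scale factor
    ship_length = 100.0             # full-scale ship, metres (assumption)
    radar_wavelength = 0.15         # full-scale radar wavelength, metres (~2 GHz, assumption)

    model_length = ship_length / s               # 1.0e-3 m  -> the ~1 mm model
    model_wavelength = radar_wavelength / s      # 1.5e-6 m  -> near-infrared light

    sigma_model = 1.0e-9                         # measured model RCS, m^2 (made-up value)
    sigma_full = sigma_model * s**2              # full-scale RCS recovered from the model

    print(f"model length  : {model_length*1e3:.1f} mm")
    print(f"model 'radar' : {model_wavelength*1e6:.2f} um wavelength")
    print(f"full-scale RCS: {sigma_full:.1f} m^2")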

  10. Hofmeister series salts enhance purification of plasmid DNA by non-ionic detergents

    PubMed Central

    Lezin, George; Kuehn, Michael R.; Brunelli, Luca

    2011-01-01

    Ion-exchange chromatography is the standard technique used for plasmid DNA purification, an essential molecular biology procedure. Non-ionic detergents (NIDs) have been used for plasmid DNA purification, but it is unclear whether Hofmeister series salts (HSS) change the solubility and phase separation properties of specific NIDs, enhancing plasmid DNA purification. After scaling-up NID-mediated plasmid DNA isolation, we established that NIDs in HSS solutions minimize plasmid DNA contamination with protein. In addition, large-scale NID/HSS solutions eliminated LPS contamination of plasmid DNA more effectively than Qiagen ion-exchange columns. Large-scale NID isolation/NID purification generated increased yields of high quality DNA compared to alkali isolation/column purification. This work characterizes how HSS enhance NID-mediated plasmid DNA purification, and demonstrates that NID phase transition is not necessary for LPS removal from plasmid DNA. Specific NIDs such as IGEPAL CA-520 can be utilized for rapid, inexpensive and efficient laboratory-based large-scale plasmid DNA purification, outperforming Qiagen-based column procedures. PMID:21351074

  11. Oblique Wing Flights

    NASA Image and Video Library

    2018-05-09

    Flown in the mid-1970s, this Oblique Wing was a large-scale R/C experimental aircraft built to demonstrate the ability to pivot its wing to an oblique angle, reducing the drag penalty at transonic speeds.

  12. Suppressing turbulence of self-propelling rods by strongly coupled passive particles.

    PubMed

    Su, Yen-Shuo; Wang, Hao-Chen; I, Lin

    2015-03-01

    The strong turbulence suppression, mainly for large-scale modes, of two-dimensional self-propelling rods, by increasing the long-range coupling strength Γ of low-concentration passive particles, is numerically demonstrated. It is found that large-scale collective rod motion in the form of swirls or jets is contributed mainly by well-aligned dense patches, which can push small poorly aligned rod patches and uncoupled passive particles. The more efficient momentum transfer and dissipation through increasing passive particle coupling leads to the formation of a more ordered and slowed-down network of passive particles, which competes with coherent dense active rod clusters. The frustration of active rod alignment ordering and coherent motion by the passive particle network, which interrupts the inverse cascading that forms large-scale swirls, is the key to suppressing collective rod motion with scales beyond the interpassive distance, even in the liquid phase of passive particles. The loosely packed active rods are weakly affected by increasing passive particle coupling due to the weak rod-particle interaction. They mainly contribute to the small-scale modes and high-speed motion.

  13. Building and measuring a high performance network architecture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kramer, William T.C.; Toole, Timothy; Fisher, Chuck

    2001-04-20

    Once a year, the SC conferences present a unique opportunity to create and build one of the most complex and highest performance networks in the world. At SC2000, large-scale and complex local and wide area networking connections were demonstrated, including large-scale distributed applications running on different architectures. This project was designed to use the unique opportunity presented at SC2000 to create a testbed network environment and then use that network to demonstrate and evaluate high performance computational and communication applications. This testbed was designed to incorporate many interoperable systems and services and was designed for measurement from the very beginning. The end results were key insights into how to use novel, high performance networking technologies and to accumulate measurements that will give insights into the networks of the future.

  14. Polycyclopentene-Crystal-Decorated Carbon Nanotubes by Convenient Large-Scale In Situ Polymerization and their Lotus-Leaf-Like Superhydrophobic Films.

    PubMed

    Xu, Lixin; Huang, Lingqi; Ye, Zhibin; Meng, Nan; Shu, Yang; Gu, Zhiyong

    2017-02-01

    In situ Pd-catalyzed cyclopentene polymerization in the presence of multi-walled carbon nanotubes (MWCNTs) is demonstrated to effectively render, on a large scale, polycyclopentene-crystal-decorated MWCNTs. Controlling the catalyst loading and/or time in the polymerization offers a convenient tuning of the polymer content and the morphology of the decorated MWCNTs. Appealingly, films made of the decorated carbon nanotubes through simple vacuum filtration show the characteristic lotus-leaf-like superhydrophobicity with high water contact angle (>150°), low contact angle hysteresis (<10°), and low water adhesion, while being electrically conductive. This is the first demonstration of the direct fabrication of lotus-leaf-like superhydrophobic films with solution-grown polymer-crystal-decorated carbon nanotubes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Correlates of the MMPI-2-RF in a college setting.

    PubMed

    Forbey, Johnathan D; Lee, Tayla T C; Handel, Richard W

    2010-12-01

    The current study examined empirical correlates of scores on Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; A. Tellegen & Y. S. Ben-Porath, 2008; Y. S. Ben-Porath & A. Tellegen, 2008) scales in a college setting. The MMPI-2-RF and six criterion measures (assessing anger, assertiveness, sex roles, cognitive failures, social avoidance, and social fear) were administered to 846 college students (men: n = 264, women: n = 582) to examine the convergent and discriminant validity of scores on the MMPI-2-RF Specific Problems and Interest scales. Results demonstrated evidence of generally good convergent score validity for the selected MMPI-2-RF scales, reflected in large effect size correlations with criterion measure scores. Further, MMPI-2-RF scale scores demonstrated adequate discriminant validity, reflected in relatively low comparative median correlations between scores on MMPI-2-RF substantive scale sets and criterion measures. Limitations and future directions are discussed.

  16. An effective and simple procedure to isolate abundant quantities of biologically active chemopreventive lunasin-protease inhibitor concentrate (LPIC) from soybean

    USDA-ARS?s Scientific Manuscript database

    Lunasin is a 5-kDa soybean bioactive peptide with demonstrated anti-cancer and anti-inflammatory properties. The use of lunasin as a chemopreventive agent in large-scale animal studies and human clinical trials is hampered by the paucity of large quantities of lunasin. Recently, purification methods...

  17. Automated Scheduling of Science Activities for Titan Encounters by Cassini

    NASA Technical Reports Server (NTRS)

    Ray, Trina L.; Knight, Russel L.; Mohr, Dave

    2014-01-01

    In an effort to demonstrate the efficacy of automated planning and scheduling techniques for large missions, we have adapted ASPEN (Activity Scheduling and Planning Environment) [1] and CLASP (Compressed Large-scale Activity Scheduling and Planning) [2] to the domain of scheduling high-level science goals into conflict-free operations plans for Titan encounters by the Cassini spacecraft.

  18. Use of Analogy in Learning Physics: The Role of Representations

    ERIC Educational Resources Information Center

    Podolefsky, Noah S.; Finkelstein, Noah D.

    2006-01-01

    Previous studies have demonstrated that analogies can promote student learning in physics and can be productively taught to students to support their learning, under certain conditions. We build on these studies to explore the use of analogy by students in a large introductory college physics course. In the first large-scale study of its kind, we…

  19. Measuring Kindness at School: Psychometric Properties of a School Kindness Scale for Children and Adolescents

    ERIC Educational Resources Information Center

    Binfet, John Tyler; Gadermann, Anne M.; Schonert-Reichl, Kimberly A.

    2016-01-01

    In this study, we sought to create and validate a brief measure to assess students' perceptions of kindness in school. Participants included 1,753 students in Grades 4 to 8 attending public schools in a large school district in southern British Columbia. The School Kindness Scale (SKS) demonstrated a unidimensional factor structure and adequate…

  20. The Classroom Sandbox: A Physical Model for Scientific Inquiry

    ERIC Educational Resources Information Center

    Feldman, Allan; Cooke, Michele L.; Ellsworth, Mary S.

    2010-01-01

    For scientists, the sandbox serves as an analog for faulting in Earth's crust. Here, the large, slow processes within the crust can be scaled to the size of a table, and time scales are directly observable. This makes it a useful tool for demonstrating the role of inquiry in science. For this reason, the sandbox is also helpful for learning…

  1. Large-scale synthesis of high-quality hexagonal boron nitride nanosheets for large-area graphene electronics.

    PubMed

    Lee, Kang Hyuck; Shin, Hyeon-Jin; Lee, Jinyeong; Lee, In-yeal; Kim, Gil-Ho; Choi, Jae-Young; Kim, Sang-Woo

    2012-02-08

    Hexagonal boron nitride (h-BN) has received a great deal of attention as a substrate material for high-performance graphene electronics because it has an atomically smooth surface, a lattice constant similar to that of graphene, large optical phonon modes, and a large electrical band gap. Herein, we report the large-scale synthesis of high-quality h-BN nanosheets in a chemical vapor deposition (CVD) process by controlling the surface morphologies of the copper (Cu) catalysts. It was found that morphology control of the Cu foil is critical both for the formation of pure h-BN nanosheets and for the improvement of their crystallinity. For the first time, we demonstrate the performance enhancement of CVD-based graphene devices with large-scale h-BN nanosheets. The mobility of the graphene device on the h-BN nanosheets was increased 3 times compared to that without the h-BN nanosheets. The on-off ratio of the drain current is 2 times higher than that of the graphene device without h-BN. This work suggests that high-quality h-BN nanosheets based on CVD are very promising for high-performance large-area graphene electronics. © 2012 American Chemical Society

  2. Classroom Activity Connections: Demonstrating Various Flame Tests Using Common Household Materials

    ERIC Educational Resources Information Center

    Baldwin, Bruce W.; Hasbrouck, Scott; Smith, Jordan; Kuntzleman, Thomas S.

    2010-01-01

    In "JCE" Activity #67, "Flame Tests: Which Ion Causes the Color?", Michael Sanger describes how to conduct flame tests with household items. We have used this activity in outreach settings, and have extended it in a variety of ways. For example, we have demonstrated large-scale strontium (red), copper (green), and carbon (blue) flames using only…

  3. Topology assisted self-organization of colloidal nanoparticles: application to 2D large-scale nanomastering

    PubMed Central

    Kostcheev, Serguei; Turover, Daniel; Salas-Montiel, Rafael; Nomenyo, Komla; Gokarna, Anisha; Lerondel, Gilles

    2014-01-01

    Our aim was to elaborate a novel method for fully controllable large-scale nanopatterning. We investigated the influence of the surface topology, i.e., a pre-pattern of hydrogen silsesquioxane (HSQ) posts, on the self-organization of polystyrene beads (PS) dispersed over a large surface. Depending on the post size and spacing, long-range ordering of the self-organized polystyrene beads is observed when guide posts are used, leading to a single-crystal-like structure. Topology-assisted self-organization has proved to be one of the solutions to obtain large-scale ordering. Besides post size and spacing, the colloidal concentration and the nature of the solvent were found to have a significant effect on the self-organization of the PS beads. Scanning electron microscopy and associated Fourier transform analysis were used to characterize the morphology of the ordered surfaces. Finally, the production of silicon molds is demonstrated by using the beads as a template for dry etching.

  4. Fast Crystallization of the Phase Change Compound GeTe by Large-Scale Molecular Dynamics Simulations.

    PubMed

    Sosso, Gabriele C; Miceli, Giacomo; Caravati, Sebastiano; Giberti, Federico; Behler, Jörg; Bernasconi, Marco

    2013-12-19

    Phase change materials are of great interest as active layers in rewritable optical disks and novel electronic nonvolatile memories. These applications rest on a fast and reversible transformation between the amorphous and crystalline phases upon heating, taking place on the nanosecond time scale. In this work, we investigate the microscopic origin of the fast crystallization process by means of large-scale molecular dynamics simulations of the phase change compound GeTe. To this end, we use an interatomic potential generated from a Neural Network fitting of a large database of ab initio energies. We demonstrate that in the temperature range of the programming protocols of the electronic memories (500-700 K), nucleation of the crystal in the supercooled liquid is not rate-limiting. In this temperature range, the growth of supercritical nuclei is very fast because of a large atomic mobility, which is, in turn, the consequence of the high fragility of the supercooled liquid and the associated breakdown of the Stokes-Einstein relation between viscosity and diffusivity.

  5. Osborne Reynolds pipe flow: Direct simulation from laminar through gradual transition to fully developed turbulence.

    PubMed

    Wu, Xiaohua; Moin, Parviz; Adrian, Ronald J; Baltzer, Jon R

    2015-06-30

    The precise dynamics of breakdown in pipe transition is a century-old unresolved problem in fluid mechanics. We demonstrate that the abruptness and mysteriousness attributed to the Osborne Reynolds pipe transition can be partially resolved with a spatially developing direct simulation that carries weakly but finitely perturbed laminar inflow through gradual rather than abrupt transition arriving at the fully developed turbulent state. Our results with this approach show during transition the energy norms of such inlet perturbations grow exponentially rather than algebraically with axial distance. When inlet disturbance is located in the core region, helical vortex filaments evolve into large-scale reverse hairpin vortices. The interaction of these reverse hairpins among themselves or with the near-wall flow when they descend to the surface from the core produces small-scale hairpin packets, which leads to breakdown. When inlet disturbance is near the wall, certain quasi-spanwise structure is stretched into a Lambda vortex, and develops into a large-scale hairpin vortex. Small-scale hairpin packets emerge near the tip region of the large-scale hairpin vortex, and subsequently grow into a turbulent spot, which is itself a local concentration of small-scale hairpin vortices. This vortex dynamics is broadly analogous to that in the boundary layer bypass transition and in the secondary instability and breakdown stage of natural transition, suggesting the possibility of a partial unification. Under parabolic base flow the friction factor overshoots Moody's correlation. Plug base flow requires stronger inlet disturbance for transition. Accuracy of the results is demonstrated by comparing with analytical solutions before breakdown, and with fully developed turbulence measurements after the completion of transition.
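
    The contrast drawn above between exponential and algebraic growth of the inlet-perturbation energy norm can be checked with a simple fit of log(E) against axial distance versus log(E) against log(x); the data in the sketch below are synthetic, not from the simulation.

    # Hedged sketch: distinguishing exponential from algebraic growth of a
    # perturbation energy norm along the axial direction. Synthetic "data".
    import numpy as np

    x = np.linspace(1.0, 30.0, 60)                      # axial distance (arbitrary units)
    rng = np.random.default_rng(0)
    E = 1e-6 * np.exp(0.35 * x) * np.exp(rng.normal(0, 0.05, x.size))  # noisy exponential

    # Exponential model: log E linear in x.  Algebraic model: log E linear in log x.
    exp_fit = np.polyfit(x, np.log(E), 1)
    alg_fit = np.polyfit(np.log(x), np.log(E), 1)
    exp_resid = np.std(np.log(E) - np.polyval(exp_fit, x))
    alg_resid = np.std(np.log(E) - np.polyval(alg_fit, np.log(x)))

    print(f"growth rate (exp. fit): {exp_fit[0]:.3f} per unit x")
    print(f"residual exp/alg fit  : {exp_resid:.3f} / {alg_resid:.3f}")  # exp fit is tighter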

  6. Osborne Reynolds pipe flow: Direct simulation from laminar through gradual transition to fully developed turbulence

    PubMed Central

    Wu, Xiaohua; Moin, Parviz; Adrian, Ronald J.; Baltzer, Jon R.

    2015-01-01

    The precise dynamics of breakdown in pipe transition is a century-old unresolved problem in fluid mechanics. We demonstrate that the abruptness and mysteriousness attributed to the Osborne Reynolds pipe transition can be partially resolved with a spatially developing direct simulation that carries weakly but finitely perturbed laminar inflow through gradual rather than abrupt transition arriving at the fully developed turbulent state. Our results with this approach show during transition the energy norms of such inlet perturbations grow exponentially rather than algebraically with axial distance. When inlet disturbance is located in the core region, helical vortex filaments evolve into large-scale reverse hairpin vortices. The interaction of these reverse hairpins among themselves or with the near-wall flow when they descend to the surface from the core produces small-scale hairpin packets, which leads to breakdown. When inlet disturbance is near the wall, certain quasi-spanwise structure is stretched into a Lambda vortex, and develops into a large-scale hairpin vortex. Small-scale hairpin packets emerge near the tip region of the large-scale hairpin vortex, and subsequently grow into a turbulent spot, which is itself a local concentration of small-scale hairpin vortices. This vortex dynamics is broadly analogous to that in the boundary layer bypass transition and in the secondary instability and breakdown stage of natural transition, suggesting the possibility of a partial unification. Under parabolic base flow the friction factor overshoots Moody’s correlation. Plug base flow requires stronger inlet disturbance for transition. Accuracy of the results is demonstrated by comparing with analytical solutions before breakdown, and with fully developed turbulence measurements after the completion of transition. PMID:26080447

  7. Osborne Reynolds pipe flow: Direct simulation from laminar through gradual transition to fully developed turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Xiaohua; Moin, Parviz; Adrian, Ronald J.

    We report that the precise dynamics of breakdown in pipe transition is a century-old unresolved problem in fluid mechanics. We demonstrate that the abruptness and mysteriousness attributed to the Osborne Reynolds pipe transition can be partially resolved with a spatially developing direct simulation that carries weakly but finitely perturbed laminar inflow through gradual rather than abrupt transition arriving at the fully developed turbulent state. Our results with this approach show during transition the energy norms of such inlet perturbations grow exponentially rather than algebraically with axial distance. When inlet disturbance is located in the core region, helical vortex filaments evolve into large-scale reverse hairpin vortices. The interaction of these reverse hairpins among themselves or with the near-wall flow when they descend to the surface from the core produces small-scale hairpin packets, which leads to breakdown. When inlet disturbance is near the wall, certain quasi-spanwise structure is stretched into a Lambda vortex, and develops into a large-scale hairpin vortex. Small-scale hairpin packets emerge near the tip region of the large-scale hairpin vortex, and subsequently grow into a turbulent spot, which is itself a local concentration of small-scale hairpin vortices. This vortex dynamics is broadly analogous to that in the boundary layer bypass transition and in the secondary instability and breakdown stage of natural transition, suggesting the possibility of a partial unification. Under parabolic base flow the friction factor overshoots Moody’s correlation. Plug base flow requires stronger inlet disturbance for transition. Finally, accuracy of the results is demonstrated by comparing with analytical solutions before breakdown, and with fully developed turbulence measurements after the completion of transition.

  8. Osborne Reynolds pipe flow: Direct simulation from laminar through gradual transition to fully developed turbulence

    DOE PAGES

    Wu, Xiaohua; Moin, Parviz; Adrian, Ronald J.; ...

    2015-06-15

    We report that the precise dynamics of breakdown in pipe transition is a century-old unresolved problem in fluid mechanics. We demonstrate that the abruptness and mysteriousness attributed to the Osborne Reynolds pipe transition can be partially resolved with a spatially developing direct simulation that carries weakly but finitely perturbed laminar inflow through gradual rather than abrupt transition arriving at the fully developed turbulent state. Our results with this approach show during transition the energy norms of such inlet perturbations grow exponentially rather than algebraically with axial distance. When inlet disturbance is located in the core region, helical vortex filaments evolve into large-scale reverse hairpin vortices. The interaction of these reverse hairpins among themselves or with the near-wall flow when they descend to the surface from the core produces small-scale hairpin packets, which leads to breakdown. When inlet disturbance is near the wall, certain quasi-spanwise structure is stretched into a Lambda vortex, and develops into a large-scale hairpin vortex. Small-scale hairpin packets emerge near the tip region of the large-scale hairpin vortex, and subsequently grow into a turbulent spot, which is itself a local concentration of small-scale hairpin vortices. This vortex dynamics is broadly analogous to that in the boundary layer bypass transition and in the secondary instability and breakdown stage of natural transition, suggesting the possibility of a partial unification. Under parabolic base flow the friction factor overshoots Moody’s correlation. Plug base flow requires stronger inlet disturbance for transition. Finally, accuracy of the results is demonstrated by comparing with analytical solutions before breakdown, and with fully developed turbulence measurements after the completion of transition.

  9. Dynamic Control of Facts Devices to Enable Large Scale Penetration of Renewable Energy Resources

    NASA Astrophysics Data System (ADS)

    Chavan, Govind Sahadeo

    This thesis focuses on some of the problems caused by large scale penetration of Renewable Energy Resources within EHV transmission networks, and investigates some approaches to resolving these problems. In chapter 4, a reduced-order model of the 500 kV WECC transmission system is developed by estimating its key parameters from phasor measurement unit (PMU) data. The model was then implemented in RTDS and was investigated for its accuracy with respect to the PMU data. Finally, it was tested for observing the effects of various contingencies like transmission line loss, generation loss and large scale penetration of wind farms on EHV transmission systems. Chapter 5 introduces Static Series Synchronous Compensators (SSSCs), which are series-connected converters that can control real power flow along a transmission line. A new application of SSSCs in mitigating the Ferranti effect on unloaded transmission lines was demonstrated on PSCAD. A new control scheme for SSSCs based on the Cascaded H-bridge (CHB) converter configuration was proposed and was demonstrated using PSCAD and RTDS. A new centralized controller was developed for the distributed SSSCs based on some of the concepts used in the CHB-based SSSC. The controller's efficacy was demonstrated using RTDS. Finally, chapter 6 introduces the problem of power oscillations induced by renewable sources in a transmission network. A power oscillation damping (POD) controller is designed using distributed SSSCs in NYPA's 345 kV three-bus AC system and its efficacy is demonstrated in PSCAD. A similar POD controller is then designed for the CHB-based SSSC in the IEEE 14 bus system in PSCAD. Both controllers were noted to have significantly damped power oscillations in the transmission networks.

  10. 'Getting to Know Me': The second phase roll-out of a staff training programme for supporting people with dementia in general hospitals.

    PubMed

    Elvish, Ruth; Burrow, Simon; Cawley, Rosanne; Harney, Kathryn; Pilling, Mark; Gregory, Julie; Keady, John

    2018-01-01

    Objectives The aims were to evaluate a second phase roll-out of a dementia care training programme for general hospital staff and to further develop two outcome scales: the Confidence in Dementia scale for measuring confidence in working with people with dementia and the Knowledge in Dementia scale for measuring knowledge in dementia. Method Following a 'training the trainers' phase, the study involved the delivery of the 'Getting to Know Me' training programme to a large number of staff (n = 517) across three National Health Service (NHS) Trusts situated in North-West England. The impact of the programme was evaluated using a pre-post design which explored: (i) changes in confidence in dementia, (ii) changes in knowledge in dementia, and (iii) changes in beliefs about behaviours that challenge. Results Statistically significant change was identified between pre-post training on all outcome measures (Confidence in Dementia: eight point increase, p < 0.001; Knowledge in Dementia: two point increase, p < 0.001; controllability beliefs scale: four point decrease, p < 0.001). Medium to large effect sizes were demonstrated on all outcome measures. The psychometric properties of the Confidence in Dementia and Knowledge in Dementia scales are reported. Conclusion Staff knowledge in dementia and confidence in working with people with dementia significantly increased following attendance at the training sessions. The findings are consistent with preliminary findings and strengthen current knowledge about the impact of dementia care training in general hospitals. The Confidence in Dementia and Knowledge in Dementia scales continue to demonstrate psychometrically sound properties and demonstrate utility in the field of dementia research.

  11. Observing large-scale temporal variability of ocean currents by satellite altimetry - With application to the Antarctic circumpolar current

    NASA Technical Reports Server (NTRS)

    Fu, L.-L.; Chelton, D. B.

    1985-01-01

    A new method is developed for studying large-scale temporal variability of ocean currents from satellite altimetric sea level measurements at intersections (crossovers) of ascending and descending orbit ground tracks. Using this method, sea level time series can be constructed from crossover sea level differences in small sample areas where altimetric crossovers are clustered. The method is applied to Seasat altimeter data to study the temporal evolution of the Antarctic Circumpolar Current (ACC) over the 3-month Seasat mission (July-October 1978). The results reveal a generally eastward acceleration of the ACC around the Southern Ocean with meridional disturbances which appear to be associated with bottom topographic features. This is the first direct observational evidence for large-scale coherence in the temporal variability of the ACC. It demonstrates the great potential of satellite altimetry for synoptic observation of temporal variability of the world ocean circulation.
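
    The crossover idea can be made concrete with a toy least-squares reconstruction: every crossover difference constrains the sea level anomaly at one pass time minus the anomaly at another, and the anomalies are recovered (up to an additive constant) by solving the resulting linear system. The sketch below uses synthetic times and differences, not Seasat data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "truth": sea level anomaly h(t) at 10 pass times (not Seasat data).
t = np.arange(10)
h_true = 0.1 * np.sin(2 * np.pi * t / 10.0)

# Crossover differences d_k = h(t_i) - h(t_j) for random ascending/descending pairs, plus noise.
pairs = [(i, j) for i in range(10) for j in range(10) if i != j]
rng.shuffle(pairs)
pairs = pairs[:25]
d = np.array([h_true[i] - h_true[j] for i, j in pairs]) + 0.005 * rng.standard_normal(len(pairs))

# Build a design matrix with +1/-1 entries, solve A h = d in the least-squares sense;
# the mean of h is unconstrained by differences, so remove it afterwards.
A = np.zeros((len(pairs), t.size))
for k, (i, j) in enumerate(pairs):
    A[k, i], A[k, j] = 1.0, -1.0
h_est, *_ = np.linalg.lstsq(A, d, rcond=None)
h_est -= h_est.mean()

print("rms reconstruction error:", np.sqrt(np.mean((h_est - (h_true - h_true.mean())) ** 2)))
```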

  12. Solving large-scale fixed cost integer linear programming models for grid-based location problems with heuristic techniques

    NASA Astrophysics Data System (ADS)

    Noor-E-Alam, Md.; Doucette, John

    2015-08-01

    Grid-based location problems (GBLPs) can be used to solve location problems in business, engineering, resource exploitation, and even in the field of medical sciences. To solve these decision problems, an integer linear programming (ILP) model is designed and developed to provide the optimal solution for GBLPs considering fixed cost criteria. Preliminary results show that the ILP model is efficient in solving small to moderate-sized problems. However, this ILP model becomes intractable in solving large-scale instances. Therefore, a decomposition heuristic is proposed to solve these large-scale GBLPs, which demonstrates significant reduction of solution runtimes. To benchmark the proposed heuristic, results are compared with the exact solution via ILP. The experimental results show that the proposed method significantly outperforms the exact method in runtime with minimal (and in most cases, no) loss of optimality.
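
    The flavor of a decomposition heuristic can be illustrated with a deliberately simplified fixed-cost covering problem on a grid: the grid is cut into independent blocks and each block is handled by a greedy routine. This is an illustrative sketch under those assumptions, not the authors' ILP formulation or their heuristic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative grid-based covering problem (not the authors' exact formulation):
# every demand cell must be covered by a facility within Chebyshev radius R; all
# candidate facilities carry the same fixed cost, so we minimize the facility count.
N, R, BLOCK = 60, 2, 15
demand = rng.random((N, N)) < 0.3            # True where a cell has demand

def greedy_cover(dem, radius):
    """Greedy heuristic: repeatedly open the facility that covers the most uncovered demand."""
    uncovered = {tuple(c) for c in np.argwhere(dem)}
    rows, cols = dem.shape
    facilities = []
    while uncovered:
        best, best_cov = None, set()
        for i in range(rows):
            for j in range(cols):
                cov = {(a, b) for (a, b) in uncovered
                       if abs(a - i) <= radius and abs(b - j) <= radius}
                if len(cov) > len(best_cov):
                    best, best_cov = (i, j), cov
        facilities.append(best)
        uncovered -= best_cov
    return facilities

# Decomposition: cut the grid into independent blocks and solve each block separately.
total = []
for bi in range(0, N, BLOCK):
    for bj in range(0, N, BLOCK):
        sub = demand[bi:bi + BLOCK, bj:bj + BLOCK]
        total += [(bi + i, bj + j) for (i, j) in greedy_cover(sub, R)]

print(f"facilities opened by block-decomposed greedy heuristic: {len(total)}")
```

    The block cuts are exactly where optimality can be lost (a facility near a block boundary cannot cover demand in the neighbouring block), which is the runtime-versus-optimality trade the abstract describes.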

  13. Comparisons of ionospheric electron density distributions reconstructed by GPS computerized tomography, backscatter ionograms, and vertical ionograms

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Lei, Yong; Li, Bofeng; An, Jiachun; Zhu, Peng; Jiang, Chunhua; Zhao, Zhengyu; Zhang, Yuannong; Ni, Binbin; Wang, Zemin; Zhou, Xuhua

    2015-12-01

    Global Positioning System (GPS) computerized ionosphere tomography (CIT) and ionospheric sky wave ground backscatter radar are both capable of measuring the large-scale, two-dimensional (2-D) distributions of ionospheric electron density (IED). Here we report the spatial and temporal electron density results obtained by GPS CIT and backscatter ionogram (BSI) inversion for three individual experiments. Both the GPS CIT and BSI inversion techniques demonstrate the capability and the consistency of reconstructing large-scale IED distributions. To validate the results, electron density profiles obtained from GPS CIT and BSI inversion are quantitatively compared to the vertical ionosonde data, which clearly shows that both methods output accurate ionospheric electron density information and thereby provide reliable approaches to ionospheric soundings. Our study can improve current understanding of the capabilities and limitations of these two methods for large-scale IED reconstruction.

  14. An evaluation of multi-probe locality sensitive hashing for computing similarities over web-scale query logs

    PubMed Central

    2018-01-01

    Many modern applications of AI such as web search, mobile browsing, image processing, and natural language processing rely on finding similar items from a large database of complex objects. Due to the very large scale of data involved (e.g., users’ queries from commercial search engines), computing such near or nearest neighbors is a non-trivial task, as the computational cost grows significantly with the number of items. To address this challenge, we adopt Locality Sensitive Hashing (a.k.a. LSH) methods and evaluate four variants in a distributed computing environment (specifically, Hadoop). We identify several optimizations which improve performance, suitable for deployment in very large scale settings. The experimental results demonstrate that our variants of LSH achieve robust performance with better recall compared with “vanilla” LSH, even when using the same amount of space. PMID:29346410
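
    A minimal sketch of the multi-probe idea for cosine (random-hyperplane) LSH is shown below: items are hashed to bit signatures, and a query probes its own bucket plus buckets whose signatures differ in the least-confident bits. The corpus, parameters, and probing schedule are illustrative assumptions, not the evaluated variants.

```python
import numpy as np
from collections import defaultdict
from itertools import combinations

rng = np.random.default_rng(3)

# Toy corpus of query-log-like vectors (synthetic, not real search-engine data).
dim, n = 50, 5000
corpus = rng.standard_normal((n, dim))

# Random-hyperplane (cosine) LSH: each item hashes to a bit signature.
n_bits = 12
planes = rng.standard_normal((dim, n_bits))

def signature(v):
    return tuple((v @ planes > 0).astype(int))

buckets = defaultdict(list)
for idx, v in enumerate(corpus):
    buckets[signature(v)].append(idx)

def query(v, n_probes=10):
    """Multi-probe lookup: probe the query's own bucket plus buckets whose
    signatures differ in the bits with the smallest projection margins."""
    proj = v @ planes
    base = tuple((proj > 0).astype(int))
    order = np.argsort(np.abs(proj))             # least confident bits first
    probes = [base]
    for r in (1, 2):                             # flip one, then two, low-margin bits
        for flip in combinations(order[:4], r):
            sig = list(base)
            for b in flip:
                sig[b] ^= 1
            probes.append(tuple(sig))
    candidates = set()
    for sig in probes[:n_probes]:
        candidates.update(buckets.get(sig, []))
    return candidates

q = corpus[0] + 0.1 * rng.standard_normal(dim)   # a near-duplicate of item 0
cands = query(q)
print("candidates examined:", len(cands), "| true neighbour found:", 0 in cands)
```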

  15. Quantitative nanoscopy: Tackling sampling limitations in (S)TEM imaging of polymers and composites.

    PubMed

    Gnanasekaran, Karthikeyan; Snel, Roderick; de With, Gijsbertus; Friedrich, Heiner

    2016-01-01

    Sampling limitations in electron microscopy raise the question of whether the analysis of a bulk material is representative, especially when analyzing hierarchical morphologies that extend over multiple length scales. We tackled this problem by automatically acquiring a large series of partially overlapping (S)TEM images with sufficient resolution, subsequently stitched together to generate a large-area map using an in-house developed acquisition toolbox (TU/e Acquisition ToolBox) and stitching module (TU/e Stitcher). In addition, we show that quantitative image analysis of the large scale maps provides representative information that can be related to the synthesis and process conditions of hierarchical materials, which moves electron microscopy analysis towards becoming a bulk characterization tool. We demonstrate the power of such an analysis by examining two different multi-phase materials that are structured over multiple length scales. Copyright © 2015 Elsevier B.V. All rights reserved.
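
    Stitching partially overlapping tiles hinges on estimating the relative offset between neighbouring images; one standard way to do this is phase correlation, sketched below on synthetic tiles. The acquisition and stitching tools named in the abstract may use a different registration method, so this is only an illustration of the general idea.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two synthetic tiles cut from a common scene with a known offset (not real STEM data).
scene = rng.random((256, 256))
tile_a = scene[:200, :200]
tile_b = scene[30:230, 20:220]          # tile_b's origin sits at (30, 20) within tile_a's frame

# Phase correlation: the whitened cross-power spectrum of two shifted images has a sharp
# peak at their relative offset, which is how overlapping tiles can be registered.
F_a, F_b = np.fft.fft2(tile_a), np.fft.fft2(tile_b)
cross_power = F_a * np.conj(F_b)
cross_power /= np.abs(cross_power) + 1e-12
corr = np.fft.ifft2(cross_power).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)

# Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
dy = dy - tile_a.shape[0] if dy > tile_a.shape[0] // 2 else dy
dx = dx - tile_a.shape[1] if dx > tile_a.shape[1] // 2 else dx
print("estimated offset of tile_b relative to tile_a:", (dy, dx))   # expect roughly (30, 20)
```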

  16. Gaussian processes for personalized e-health monitoring with wearable sensors.

    PubMed

    Clifton, Lei; Clifton, David A; Pimentel, Marco A F; Watkinson, Peter J; Tarassenko, Lionel

    2013-01-01

    Advances in wearable sensing and communications infrastructure have allowed the widespread development of prototype medical devices for patient monitoring. However, such devices have not penetrated into clinical practice, primarily due to a lack of research into "intelligent" analysis methods that are sufficiently robust to support large-scale deployment. Existing systems are typically plagued by large false-alarm rates, and an inability to cope with sensor artifact in a principled manner. This paper has two aims: 1) proposal of a novel, patient-personalized system for analysis and inference in the presence of data uncertainty, typically caused by sensor artifact and data incompleteness; 2) demonstration of the method using a large-scale clinical study in which 200 patients have been monitored using the proposed system. This latter provides much-needed evidence that personalized e-health monitoring is feasible within an actual clinical environment, at scale, and that the method is capable of improving patient outcomes via personalized healthcare.
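
    A minimal sketch of how a Gaussian process can flag suspect vital-sign samples is given below: each observation is compared with a leave-one-out predictive interval computed from the remaining data, so an artifact cannot explain itself away. The kernel, noise level, and the synthetic heart-rate trace are assumptions for illustration, not the study's patient-personalized model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic vital-sign series (e.g. heart rate) with one artifactual spike; not study data.
t = np.linspace(0.0, 10.0, 60)
y = 70.0 + 3.0 * np.sin(t) + rng.standard_normal(t.size)
y[45] += 25.0                                    # simulated sensor artifact

def rbf(a, b, length=1.5, amp=3.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

noise = 1.0
m = y.mean()
K = rbf(t, t) + noise**2 * np.eye(t.size)        # GP prior covariance plus observation noise
K_inv = np.linalg.inv(K)
alpha = K_inv @ (y - m)

# Leave-one-out predictive mean and variance (standard GP identities): each point is
# predicted from all the others.
loo_var = 1.0 / np.diag(K_inv)
loo_mean = y - alpha * loo_var
z = np.abs(y - loo_mean) / np.sqrt(loo_var)

print("points flagged as suspect (z > 3):", np.where(z > 3.0)[0])
```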

  17. Desalination

    EPA Science Inventory

    To cope with the rising demand for fresh water, desalination of brackish groundwater and seawater is increasingly being viewed as a pragmatic option for augmenting fresh water supplies. The large scale deployment of desalination is likely to demonstrably increase electricity use,...

  18. Large-scale synthesis of high-purity well-aligned carbon nanotubes using pyrolysis of iron(II) phthalocyanine and acetylene

    NASA Astrophysics Data System (ADS)

    Liu, B. C.; Lee, T. J.; Lee, S. H.; Park, C. Y.; Lee, C. J.

    2003-08-01

    Well-aligned carbon nanotubes (CNTs) with high purity have been produced by pyrolysis of iron(II) phthalocyanine and acetylene at 800 °C. The synthesized CNTs have a length of 75 μm and diameters ranging from 20 to 60 nm. The CNTs have a bamboo-like structure and exhibit good crystallinity of graphite sheets. The growth rate of the CNTs increased rapidly with the addition of C2H2. Our results demonstrate that the proposed growth method is suitable for large-scale synthesis of high-purity well-aligned CNTs on various substrates.

  19. Full-color large-scaled computer-generated holograms using RGB color filters.

    PubMed

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji

    2017-02-06

    A technique using RGB color filters is proposed for creating high-quality full-color computer-generated holograms (CGHs). The fringe of these CGHs is composed of more than a billion pixels. The CGHs reconstruct full-parallax three-dimensional color images with a deep sensation of depth caused by natural motion parallax. The simulation technique as well as the principle and challenges of high-quality full-color reconstruction are presented to address the design of filter properties suitable for large-scaled CGHs. Optical reconstructions of actual fabricated full-color CGHs are demonstrated in order to verify the proposed techniques.

  20. REVIEWS OF TOPICAL PROBLEMS: Large-scale star formation in galaxies

    NASA Astrophysics Data System (ADS)

    Efremov, Yurii N.; Chernin, Artur D.

    2003-01-01

    A brief review is given of the history of modern ideas on the ongoing star formation process in the gaseous disks of galaxies. Recent studies demonstrate the key role of the interplay between the gas self-gravitation and its turbulent motions. The large scale supersonic gas flows create structures of enhanced density which then give rise to the gravitational condensation of gas into stars and star clusters. Formation of star clusters, associations and complexes is considered, as well as the possibility of isolated star formation. Special emphasis is placed on star formation under the action of ram pressure.

  1. Statistical simulation of the magnetorotational dynamo.

    PubMed

    Squire, J; Bhattacharjee, A

    2015-02-27

    Turbulence and dynamo induced by the magnetorotational instability (MRI) are analyzed using quasilinear statistical simulation methods. It is found that homogeneous turbulence is unstable to a large-scale dynamo instability, which saturates to an inhomogeneous equilibrium with a strong dependence on the magnetic Prandtl number (Pm). Despite its enormously reduced nonlinearity, the dependence of the angular momentum transport on Pm in the quasilinear model is qualitatively similar to that of nonlinear MRI turbulence. This demonstrates the importance of the large-scale dynamo and suggests how dramatically simplified models may be used to gain insight into the astrophysically relevant regimes of very low or high Pm.

  2. ‘Natural experiment’ Demonstrates Top-Down Control of Spiders by Birds on a Landscape Level

    PubMed Central

    Rogers, Haldre; Hille Ris Lambers, Janneke; Miller, Ross; Tewksbury, Joshua J.

    2012-01-01

    The combination of small-scale manipulative experiments and large-scale natural experiments provides a powerful approach for demonstrating the importance of top-down trophic control on the ecosystem scale. The most compelling natural experiments have come from studies examining the landscape-scale loss of apex predators like sea otters, wolves, fish and land crabs. Birds are dominant apex predators in terrestrial systems around the world, yet all studies on their role as predators have come from small-scale experiments; the top-down impact of bird loss on their arthropod prey has yet to be examined at a landscape scale. Here, we use a unique natural experiment, the extirpation of insectivorous birds from nearly all forests on the island of Guam by the invasive brown tree snake, to produce the first assessment of the impacts of bird loss on their prey. We focused on spiders because experimental studies showed a consistent top-down effect of birds on spiders. We conducted spider web surveys in native forest on Guam and three nearby islands with healthy bird populations. Spider web densities on the island of Guam were 40 times greater than densities on islands with birds during the wet season, and 2.3 times greater during the dry season. These results confirm the general trend from manipulative experiments conducted in other systems; however, the effect size was much greater in this natural experiment than in most manipulative experiments. In addition, bird loss appears to have removed the seasonality of spider webs and led to larger webs in at least one spider species in the forests of Guam than on nearby islands with birds. We discuss several possible mechanisms for the observed changes. Overall, our results suggest that effect sizes from smaller-scale experimental studies may significantly underestimate the impact of bird loss on spider density as demonstrated by this large-scale natural experiment. PMID:22970126

  3. Effective Integration of Earth Observation Data and Flood Modeling for Rapid Disaster Response: The Texas 2015 Case

    NASA Astrophysics Data System (ADS)

    Schumann, G.

    2016-12-01

    Routinely obtaining real-time 2-D inundation patterns of a flood event at a meaningful spatial resolution and over large scales is at the moment only feasible with either operational aircraft flights or satellite imagery. Of course having model simulations of floodplain inundation available to complement the remote sensing data is highly desirable, for both event re-analysis and forecasting event inundation. Using the Texas 2015 flood disaster, we demonstrate the value of multi-scale EO data for large scale 2-D floodplain inundation modeling and forecasting. A dynamic re-analysis of the Texas 2015 flood disaster was run using a 2-D flood model developed for accurate large scale simulations. We simulated the major rivers entering the Gulf of Mexico and used flood maps produced from both optical and SAR satellite imagery to examine regional model sensitivities and assess associated performance. It was demonstrated that satellite flood maps can complement model simulations and add value, although this is largely dependent on a number of important factors, such as image availability, regional landscape topology, and model uncertainty. In the preferred case where model uncertainty is high, landscape topology is complex (i.e. urbanized coastal area) and satellite flood maps are available (in case of SAR for instance), satellite data can significantly reduce model uncertainty by identifying the "best possible" model parameter set. However, the more common situation is one where model uncertainty is low and spatially contiguous flooding can be mapped from satellites easily enough, such as in rural large inland river floodplains. Consequently, not much value from satellites can be added. Nevertheless, where a large number of flood maps are available, model credibility can be increased substantially. In the case presented here this was true for at least 60% of the many thousands of kilometers of river flow length simulated, where satellite flood maps existed. The next step of this project is to employ a technique termed the "targeted observation" approach, which is an assimilation based procedure that allows quantifying the impact observations have on model predictions at the local scale and also along the entire river system, when assimilated with the model at specific "overpass" locations.
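
    Assessing model performance against a satellite flood map typically reduces to comparing two binary rasters; one common fit statistic is the critical success index, sketched below on synthetic maps (not the Texas 2015 data).

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic binary flood-extent rasters (1 = wet), standing in for a satellite-derived map
# and a model run; illustrative only.
satellite = rng.random((200, 200)) < 0.25
model = satellite.copy()
flip = rng.random(satellite.shape) < 0.05        # perturb 5% of cells to mimic model error
model[flip] = ~model[flip]

hits = np.sum(model & satellite)
misses = np.sum(~model & satellite)
false_alarms = np.sum(model & ~satellite)

# Critical success index: hits / (hits + misses + false alarms).
csi = hits / (hits + misses + false_alarms)
print(f"critical success index between model and satellite map: {csi:.2f}")
```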

  4. Channel optimization of high-intensity laser beams in millimeter-scale plasmas

    DOE PAGES

    Ceurvorst, L.; Savin, A.; Ratan, N.; ...

    2018-04-20

    Channeling experiments were performed at the OMEGA EP facility using relativistic intensity (>10¹⁸ W/cm²) kilojoule laser pulses through large density scale length (~ 390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse’s focal location and intensity as well as the plasma’s temperature on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities as expected. However, contrary to previous large scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer duration equivalents. To conclude, this new observation has many implications for future laser-plasma research in the relativistic regime.

  5. Scaling up digital circuit computation with DNA strand displacement cascades.

    PubMed

    Qian, Lulu; Winfree, Erik

    2011-06-03

    To construct sophisticated biochemical circuits from scratch, one needs to understand how simple the building blocks can be and how robustly such circuits can scale up. Using a simple DNA reaction mechanism based on a reversible strand displacement process, we experimentally demonstrated several digital logic circuits, culminating in a four-bit square-root circuit that comprises 130 DNA strands. These multilayer circuits include thresholding and catalysis within every logical operation to perform digital signal restoration, which enables fast and reliable function in large circuits with roughly constant switching time and linear signal propagation delays. The design naturally incorporates other crucial elements for large-scale circuitry, such as general debugging tools, parallel circuit preparation, and an abstraction hierarchy supported by an automated circuit compiler.
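
    For reference, the function the four-bit circuit computes, the two-bit integer square root of a four-bit input, can be written as two small logic expressions; the sketch below checks them against arithmetic. It describes only the target truth table, not the strand-displacement (seesaw) implementation itself.

```python
from math import isqrt

# Two-level boolean logic for the 2-bit square root of a 4-bit input (the function the
# 130-strand DNA circuit computes); n3 is the most significant input bit.
def sqrt4_logic(n3, n2, n1, n0):
    y1 = n3 | n2                                            # result >= 2 exactly when n >= 4
    y0 = (~n3 & ~n2 & (n1 | n0)) | (n3 & (n2 | n1 | n0))    # result is odd (1 or 3)
    return y1 & 1, y0 & 1

for n in range(16):
    bits = [(n >> k) & 1 for k in (3, 2, 1, 0)]
    y1, y0 = sqrt4_logic(*bits)
    assert 2 * y1 + y0 == isqrt(n), f"mismatch at n={n}"

print("2-bit logic reproduces floor(sqrt(n)) for all 4-bit inputs")
```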

  6. Channel optimization of high-intensity laser beams in millimeter-scale plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ceurvorst, L.; Savin, A.; Ratan, N.

    Channeling experiments were performed at the OMEGA EP facility using relativistic intensity (>10¹⁸ W/cm²) kilojoule laser pulses through large density scale length (~ 390-570 μm) laser-produced plasmas, demonstrating the effects of the pulse’s focal location and intensity as well as the plasma’s temperature on the resulting channel formation. The results show deeper channeling when focused into hot plasmas and at lower densities as expected. However, contrary to previous large scale particle-in-cell studies, the results also indicate deeper penetration by short (10 ps), intense pulses compared to their longer duration equivalents. To conclude, this new observation has many implications for future laser-plasma research in the relativistic regime.

  7. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  8. High Temperature Electrolysis 4 kW Experiment Design, Operation, and Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.E. O'Brien; X. Zhang; K. DeWall

    2012-09-01

    This report provides results of long-term stack testing completed in the new high-temperature steam electrolysis multi-kW test facility recently developed at INL. The report includes detailed descriptions of the piping layout, steam generation and delivery system, test fixture, heat recuperation system, hot zone, instrumentation, and operating conditions. This facility has provided a demonstration of high-temperature steam electrolysis operation at the 4 kW scale with advanced cell and stack technology. This successful large-scale demonstration of high-temperature steam electrolysis will help to advance the technology toward near-term commercialization.

  9. Population cycles are highly correlated over long time series and large spatial scales in two unrelated species: Greater sage-grouse and cottontail rabbits

    USGS Publications Warehouse

    Fedy, B.C.; Doherty, K.E.

    2011-01-01

    Animal species across multiple taxa demonstrate multi-annual population cycles, which have long been of interest to ecologists. Correlated population cycles between species that do not share a predator-prey relationship are particularly intriguing and challenging to explain. We investigated annual population trends of greater sage-grouse (Centrocercus urophasianus) and cottontail rabbits (Sylvilagus sp.) across Wyoming to explore the possibility of correlations between unrelated species, over multiple cycles, very large spatial areas, and relatively southern latitudes in terms of cycling species. We analyzed sage-grouse lek counts and annual hunter harvest indices from 1982 to 2007. We show that greater sage-grouse, currently listed as warranted but precluded under the US Endangered Species Act, and cottontails have highly correlated cycles (r = 0.77). We explore possible mechanistic hypotheses to explain the synchronous population cycles. Our research highlights the importance of control populations in both adaptive management and impact studies. Furthermore, we demonstrate the functional value of these indices (lek counts and hunter harvest) for tracking broad-scale fluctuations in the species. This level of highly correlated long-term cycling has not previously been documented between two non-related species, over a long time-series, very large spatial scale, and within more southern latitudes. © 2010 US Government.
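
    The headline synchrony statistic is a Pearson correlation between two annual index series; the sketch below computes it for synthetic lek-count and harvest indices that share a common cycle (illustrative values, not the Wyoming data).

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic annual indices for two species sharing a common multi-annual cycle.
years = np.arange(1982, 2008)
cycle = np.sin(2 * np.pi * (years - 1982) / 9.0)          # a ~9-year cycle
lek_counts = 100 + 30 * cycle + 5 * rng.standard_normal(years.size)
harvest_idx = 50 + 20 * cycle + 5 * rng.standard_normal(years.size)

# Pearson correlation of the two annual series, as used to quantify synchrony.
r = np.corrcoef(lek_counts, harvest_idx)[0, 1]
print(f"correlation between lek counts and hunter-harvest index: r = {r:.2f}")
```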

  10. Infraslow Electroencephalographic and Dynamic Resting State Network Activity.

    PubMed

    Grooms, Joshua K; Thompson, Garth J; Pan, Wen-Ju; Billings, Jacob; Schumacher, Eric H; Epstein, Charles M; Keilholz, Shella D

    2017-06-01

    A number of studies have linked the blood oxygenation level dependent (BOLD) signal to electroencephalographic (EEG) signals in traditional frequency bands (δ, θ, α, β, and γ), but the relationship between BOLD and its direct frequency correlates in the infraslow band (<1 Hz) has been little studied. Previously, work in rodents showed that infraslow local field potentials play a role in functional connectivity, particularly in the dynamic organization of large-scale networks. To examine the relationship between infraslow activity and network dynamics in humans, direct current (DC) EEG and resting state magnetic resonance imaging data were acquired simultaneously. The DC EEG signals were correlated with the BOLD signal in patterns that resembled resting state networks. Subsequent dynamic analysis showed that the correlation between DC EEG and the BOLD signal varied substantially over time, even within individual subjects. The variation in DC EEG appears to reflect the time-varying contribution of different resting state networks. Furthermore, some of the patterns of DC EEG and BOLD correlation are consistent with previous work demonstrating quasiperiodic spatiotemporal patterns of large-scale network activity in resting state. These findings demonstrate that infraslow electrical activity is linked to BOLD fluctuations in humans and that it may provide a basis for large-scale organization comparable to that observed in animal studies.
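
    Dynamic (time-varying) coupling of this kind is often quantified with a sliding-window correlation between the two signals; the sketch below applies that idea to synthetic infraslow traces whose coupling drifts over the run (illustrative only, not the recorded EEG-fMRI data).

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic infraslow DC-EEG and BOLD-like series sharing a slow component whose
# coupling strength drifts over time.
fs, dur = 2.0, 600.0                              # 2 Hz sampling, 10-minute run
t = np.arange(0, dur, 1.0 / fs)
slow = np.sin(2 * np.pi * 0.05 * t)               # 0.05 Hz infraslow fluctuation
coupling = 0.5 * (1 + np.sin(2 * np.pi * t / dur))
dc_eeg = slow + 0.5 * rng.standard_normal(t.size)
bold = coupling * slow + 0.5 * rng.standard_normal(t.size)

# Sliding-window correlation: one common way to expose time-varying coupling.
win = int(60 * fs)                                # 60-second windows
step = int(10 * fs)
times, corrs = [], []
for start in range(0, t.size - win, step):
    corrs.append(np.corrcoef(dc_eeg[start:start + win], bold[start:start + win])[0, 1])
    times.append(t[start + win // 2])

print("window-centre times (s):", np.round(times[:5], 1), "...")
print("sliding-window correlations:", np.round(corrs[:5], 2), "...")
```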

  11. The Immersive Virtual Reality Lab: Possibilities for Remote Experimental Manipulations of Autonomic Activity on a Large Scale.

    PubMed

    Juvrud, Joshua; Gredebäck, Gustaf; Åhs, Fredrik; Lerin, Nils; Nyström, Pär; Kastrati, Granit; Rosén, Jörgen

    2018-01-01

    There is a need for large-scale remote data collection in a controlled environment, and the in-home availability of virtual reality (VR) and the commercial availability of eye tracking for VR present unique and exciting opportunities for researchers. We propose and provide a proof-of-concept assessment of a robust system for large-scale in-home testing using consumer products that combines psychophysiological measures and VR, here referred to as a Virtual Lab. For the first time, this method is validated by correlating autonomic responses, skin conductance response (SCR), and pupillary dilation, in response to a spider, a beetle, and a ball using commercially available VR. Participants demonstrated greater SCR and pupillary responses to the spider, and the effect was dependent on the proximity of the stimuli to the participant, with a stronger response when the spider was close to the virtual self. We replicated these effects across two experiments and in separate physical room contexts to mimic variability in home environment. Together, these findings demonstrate the utility of pupil dilation as a marker of autonomic arousal and the feasibility to assess this in commercially available VR hardware and support a robust Virtual Lab tool for massive remote testing.

  12. Infraslow Electroencephalographic and Dynamic Resting State Network Activity

    PubMed Central

    Grooms, Joshua K.; Thompson, Garth J.; Pan, Wen-Ju; Billings, Jacob; Schumacher, Eric H.; Epstein, Charles M.

    2017-01-01

    Abstract A number of studies have linked the blood oxygenation level dependent (BOLD) signal to electroencephalographic (EEG) signals in traditional frequency bands (δ, θ, α, β, and γ), but the relationship between BOLD and its direct frequency correlates in the infraslow band (<1 Hz) has been little studied. Previously, work in rodents showed that infraslow local field potentials play a role in functional connectivity, particularly in the dynamic organization of large-scale networks. To examine the relationship between infraslow activity and network dynamics in humans, direct current (DC) EEG and resting state magnetic resonance imaging data were acquired simultaneously. The DC EEG signals were correlated with the BOLD signal in patterns that resembled resting state networks. Subsequent dynamic analysis showed that the correlation between DC EEG and the BOLD signal varied substantially over time, even within individual subjects. The variation in DC EEG appears to reflect the time-varying contribution of different resting state networks. Furthermore, some of the patterns of DC EEG and BOLD correlation are consistent with previous work demonstrating quasiperiodic spatiotemporal patterns of large-scale network activity in resting state. These findings demonstrate that infraslow electrical activity is linked to BOLD fluctuations in humans and that it may provide a basis for large-scale organization comparable to that observed in animal studies. PMID:28462586

  13. The Immersive Virtual Reality Lab: Possibilities for Remote Experimental Manipulations of Autonomic Activity on a Large Scale

    PubMed Central

    Juvrud, Joshua; Gredebäck, Gustaf; Åhs, Fredrik; Lerin, Nils; Nyström, Pär; Kastrati, Granit; Rosén, Jörgen

    2018-01-01

    There is a need for large-scale remote data collection in a controlled environment, and the in-home availability of virtual reality (VR) and the commercial availability of eye tracking for VR present unique and exciting opportunities for researchers. We propose and provide a proof-of-concept assessment of a robust system for large-scale in-home testing using consumer products that combines psychophysiological measures and VR, here referred to as a Virtual Lab. For the first time, this method is validated by correlating autonomic responses, skin conductance response (SCR), and pupillary dilation, in response to a spider, a beetle, and a ball using commercially available VR. Participants demonstrated greater SCR and pupillary responses to the spider, and the effect was dependent on the proximity of the stimuli to the participant, with a stronger response when the spider was close to the virtual self. We replicated these effects across two experiments and in separate physical room contexts to mimic variability in home environment. Together, these findings demonstrate the utility of pupil dilation as a marker of autonomic arousal and the feasibility to assess this in commercially available VR hardware and support a robust Virtual Lab tool for massive remote testing. PMID:29867318

  14. The role of large eddy fluctuations in the magnetic dynamics of the Madison Dynamo Experiment

    NASA Astrophysics Data System (ADS)

    Kaplan, Elliot

    The Madison Dynamo Experiment (MDE), a liquid sodium magnetohydrodynamics experiment in a 1 m diameter sphere at the University of Wisconsin-Madison, had measured [in Spence et al., 2006] diamagnetic electrical currents in the experiment that violated an anti dynamo theorem for axisymmetric flow. The diamagnetic currents were instead attributed to nonaxisymmetric turbulent fluctuations. The experimental apparatus has been modified to reduce the strength of the large-scale turbulence driven by the shear layer in its flow. A 7.62 cm baffle was affixed to the equator of the machine to stabilize the shear layer. This reduction has correlated with a decrease in the magnetic fields, induced by the flow, which had been associated with the α and β effects of mean-field magnetohydrodynamics. The research presented herein presents the experimental evidence for reduced fluctuations and reduced mean field emfs, and provides a theoretical framework—based upon mean-field MHD—that connects the observations. The shapes of the large-scale velocity fluctuations are inferred by the spectra of induced magnetic fluctuations and measured in a kinematically similar water experiment. The Bullard and Gellman [1954] formalism demonstrates that the large-scale velocity fluctuations that are inhibited by the baffle can beat with the large-scale magnetic fluctuations that they produce to generate a mean-field emf of the sort measured in Spence et al. [2006]. This shows that the reduction of these large-scale eddies has brought the MDE closer to exciting a dynamo magnetic field. We also examine the mean-field like effects of large-scale (stable) eddies in the Dudley-James [1989] two-vortex dynamo (that the MDE was based upon). Rotating the axis of symmetry redefines the problem from one of an axisymmetric flow exciting a nonaxisymmetric field to one of a combination of axisymmetric and nonaxisymmetric flows exciting a predominantly axisymmetric magnetic eigenmode. As a result, specific interactions between large-scale velocity modes and large-scale magnetic modes are shown to correspond to the Ω effect and the mean-field α and β effects.

  15. Unsupervised Transfer Learning via Multi-Scale Convolutional Sparse Coding for Biomedical Applications

    PubMed Central

    Chang, Hang; Han, Ju; Zhong, Cheng; Snijders, Antoine M.; Mao, Jian-Hua

    2017-01-01

    The capabilities of (I) learning transferable knowledge across domains; and (II) fine-tuning the pre-learned base knowledge towards tasks with considerably smaller data scale are extremely important. Many of the existing transfer learning techniques are supervised approaches, among which deep learning has the demonstrated power of learning domain transferrable knowledge with large scale network trained on massive amounts of labeled data. However, in many biomedical tasks, both the data and the corresponding label can be very limited, where the unsupervised transfer learning capability is urgently needed. In this paper, we proposed a novel multi-scale convolutional sparse coding (MSCSC) method, that (I) automatically learns filter banks at different scales in a joint fashion with enforced scale-specificity of learned patterns; and (II) provides an unsupervised solution for learning transferable base knowledge and fine-tuning it towards target tasks. Extensive experimental evaluation of MSCSC demonstrates the effectiveness of the proposed MSCSC in both regular and transfer learning tasks in various biomedical domains. PMID:28129148

  16. Self-Assembled Epitaxial Au–Oxide Vertically Aligned Nanocomposites for Nanoscale Metamaterials

    DOE PAGES

    Li, Leigang; Sun, Liuyang; Gomez-Diaz, Juan Sebastian; ...

    2016-05-17

    Metamaterials made of nanoscale inclusions or artificial unit cells exhibit exotic optical properties that do not exist in natural materials. Promising applications, such as super-resolution imaging, cloaking, hyperbolic propagation, and ultrafast phase velocities have been demonstrated based on mostly micrometer-scale metamaterials and few nanoscale metamaterials. To date, most metamaterials are created using costly and tedious fabrication techniques with limited paths toward reliable large-scale fabrication. In this work, we demonstrate the one-step direct growth of self-assembled epitaxial metal–oxide nanocomposites as a drastically different approach to fabricating large-area nanostructured metamaterials. Using pulsed laser deposition, we fabricated nanocomposite films with vertically aligned gold (Au) nanopillars (~20 nm in diameter) embedded in various oxide matrices with high epitaxial quality. Strong, broad absorption features in the measured absorbance spectrum are clear signatures of plasmon resonances of Au nanopillars. By tuning their densities on selected substrates, anisotropic optical properties are demonstrated via angular dependent and polarization resolved reflectivity measurements and reproduced by full-wave simulations and effective medium theory. Our model predicts exotic properties, such as zero permittivity responses and topological transitions. In conclusion, our studies suggest that these self-assembled metal–oxide nanostructures provide an exciting new material platform to control and enhance optical response at nanometer scales.

  17. Bridging the Science/Policy Gap through Boundary Chain Partnerships and Communities of Practice

    NASA Astrophysics Data System (ADS)

    Kalafatis, S.

    2014-12-01

    Generating the capacity to facilitate the informed usage of climate change science by decision makers on a large scale is fast becoming an area of great concern. While research demonstrates that sustained interactions between producers of such information and potential users can overcome barriers to information usage, it also demonstrates the high resource demand of these efforts. Our social science work at Great Lakes Integrated Sciences and Assessments (GLISA) sheds light on scaling up the usability of climate science through two research areas. The first focuses on partnerships with other boundary organizations that GLISA has leveraged - the "boundary chains" approach. These partnerships reduce the transaction costs involved with outreach and have enhanced the scope of GLISA's climate service efforts to encompass new users such as First Nations groups in Wisconsin and Michigan and underserved neighborhoods in St. Paul, Minnesota. The second research area looks at the development of information usability across the regional scale of the eight Great Lakes states. It has identified the critical role that communities of practice are playing in making information usable to large groups of users who work in similar contexts and have similar information needs. Both these research areas demonstrate the emerging potential of flexible knowledge networks to enhance society's ability to prepare for the impacts of climate change.

  18. OpenMP parallelization of a gridded SWAT (SWATG)

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Hou, Jinliang; Cao, Yongpan; Gu, Juan; Huang, Chunlin

    2017-12-01

    Large-scale, long-term and high spatial resolution simulation is a common issue in environmental modeling. A Gridded Hydrologic Response Unit (HRU)-based Soil and Water Assessment Tool (SWATG) that integrates a grid modeling scheme with different spatial representations also presents such problems. The resulting computational cost limits applications of very high resolution, large-scale watershed modeling. The OpenMP (Open Multi-Processing) parallel application interface is integrated with SWATG (called SWATGP) to accelerate grid modeling based on the HRU level. Such parallel implementation takes better advantage of the computational power of a shared memory computer system. We conducted two experiments at multiple temporal and spatial scales of hydrological modeling using SWATG and SWATGP on a high-end server. At 500-m resolution, SWATGP was found to be up to nine times faster than SWATG in modeling a roughly 2000 km2 watershed on one CPU with a 15-thread configuration. The study results demonstrate that parallel models save considerable time relative to traditional sequential simulation runs. Parallel computations of environmental models are beneficial for model applications, especially at large spatial and temporal scales and at high resolutions. The proposed SWATGP model is thus a promising tool for large-scale and high-resolution water resources research and management in addition to offering data fusion and model coupling ability.

  19. A study on large-scale nudging effects in regional climate model simulation

    NASA Astrophysics Data System (ADS)

    Yhang, Yoo-Bin; Hong, Song-You

    2011-05-01

    The large-scale nudging effects on the East Asian summer monsoon (EASM) are examined using the National Centers for Environmental Prediction (NCEP) Regional Spectral Model (RSM). The NCEP/DOE reanalysis data is used to provide large-scale forcings for RSM simulations, configured with an approximately 50-km grid over East Asia, centered on the Korean peninsula. The RSM with a variant of spectral nudging, that is, the scale selective bias correction (SSBC), is forced by perfect boundary conditions during the summers (June-July-August) from 1979 to 2004. The two summers of 2000 and 2004 are investigated to demonstrate the impact of SSBC on precipitation in detail. It is found that the effect of SSBC on the simulated seasonal precipitation is in general neutral without a discernible advantage. Although errors in large-scale circulation for both 2000 and 2004 are reduced by using the SSBC method, the impact on simulated precipitation is found to be negative in 2000 and positive in 2004 summers. One possible reason for a different effect is that precipitation in the summer of 2004 is characterized by a strong baroclinicity, while precipitation in 2000 is caused by thermodynamic instability. The reduction of convective rainfall over the oceans by the application of the SSBC method seems to play an important role in modeled atmosphere.
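
    The essence of spectral nudging is that only the low-wavenumber part of the regional-model state is relaxed toward the driving large-scale field, leaving small scales free. The one-dimensional sketch below illustrates that idea; the cutoff wavenumber, relaxation factor, and fields are assumptions, not the RSM/SSBC implementation.

```python
import numpy as np

rng = np.random.default_rng(9)

# Schematic 1-D fields: a "reanalysis" driving state and a regional-model state that has
# extra small-scale detail plus a drifted large-scale error.
n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
driving = np.sin(x) + 0.5 * np.sin(2 * x)
model = driving + 0.3 * np.sin(15 * x) + 0.1 * rng.standard_normal(n)
model += 0.4 * np.sin(x + 0.5)                               # large-scale drift to be nudged out

def spectral_nudge(field, target, k_max=4, relax=0.5):
    """Relax wavenumbers |k| <= k_max of `field` toward `target` by a factor `relax`."""
    F_field, F_target = np.fft.fft(field), np.fft.fft(target)
    k = np.fft.fftfreq(field.size, d=1.0 / field.size)       # integer wavenumbers
    mask = np.abs(k) <= k_max
    F_field[mask] += relax * (F_target[mask] - F_field[mask])
    return np.fft.ifft(F_field).real

def large_scale(field, k_max=4):
    """Keep only the low-wavenumber part of a field."""
    F = np.fft.fft(field)
    k = np.fft.fftfreq(field.size, d=1.0 / field.size)
    F[np.abs(k) > k_max] = 0.0
    return np.fft.ifft(F).real

nudged = spectral_nudge(model, driving)
before = np.sqrt(np.mean((large_scale(model) - large_scale(driving)) ** 2))
after = np.sqrt(np.mean((large_scale(nudged) - large_scale(driving)) ** 2))
print(f"large-scale RMS error before nudging: {before:.3f}, after: {after:.3f}")
```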

  20. Size Reduction Techniques for Large Scale Permanent Magnet Generators in Wind Turbines

    NASA Astrophysics Data System (ADS)

    Khazdozian, Helena; Hadimani, Ravi; Jiles, David

    2015-03-01

    Increased wind penetration is necessary to reduce U.S. dependence on fossil fuels, combat climate change and increase national energy security. The U.S Department of Energy has recommended large scale and offshore wind turbines to achieve 20% wind electricity generation by 2030. Currently, geared doubly-fed induction generators (DFIGs) are typically employed in the drivetrain for conversion of mechanical to electrical energy. Yet, gearboxes account for the greatest downtime of wind turbines, decreasing reliability and contributing to loss of profit. Direct drive permanent magnet generators (PMGs) offer a reliable alternative to DFIGs by eliminating the gearbox. However, PMGs scale up in size and weight much more rapidly than DFIGs as rated power is increased, presenting significant challenges for large scale wind turbine application. Thus, size reduction techniques are needed for viability of PMGs in large scale wind turbines. Two size reduction techniques are presented. It is demonstrated that 25% size reduction of a 10MW PMG is possible with a high remanence theoretical permanent magnet. Additionally, the use of a Halbach cylinder in an outer rotor PMG is investigated to focus magnetic flux over the rotor surface in order to increase torque. This work was supported by the National Science Foundation under Grant No. 1069283 and a Barbara and James Palmer Endowment at Iowa State University.

  1. Programmability of nanowire networks

    NASA Astrophysics Data System (ADS)

    Bellew, A. T.; Bell, A. P.; McCarthy, E. K.; Fairfield, J. A.; Boland, J. J.

    2014-07-01

    Electrical connectivity in networks of nanoscale junctions must be better understood if nanowire devices are to be scaled up from single wires to functional material systems. We show that the natural connectivity behaviour found in random nanowire networks presents a new paradigm for creating multi-functional, programmable materials. In devices made from networks of Ni/NiO core-shell nanowires at different length scales, we discover the emergence of distinct behavioural regimes when networks are electrically stressed. We show that a small network, with few nanowire-nanowire junctions, acts as a unipolar resistive switch, demonstrating very high ON/OFF current ratios (>105). However, large networks of nanowires distribute an applied bias across a large number of junctions, and thus respond not by switching but instead by evolving connectivity. We demonstrate that these emergent properties lead to fault-tolerant materials whose resistance may be tuned, and which are capable of adaptively reconfiguring under stress. By combining these two behavioural regimes, we demonstrate that the same nanowire network may be programmed to act both as a metallic interconnect, and a resistive switch device with high ON/OFF ratio. These results enable the fabrication of programmable, multi-functional materials from random nanowire networks.

  2. Modelling high Reynolds number wall–turbulence interactions in laboratory experiments using large-scale free-stream turbulence

    PubMed Central

    Dogan, Eda; Hearst, R. Jason

    2017-01-01

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to ‘simulate’ high Reynolds number wall–turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167584

  3. Modelling high Reynolds number wall-turbulence interactions in laboratory experiments using large-scale free-stream turbulence.

    PubMed

    Dogan, Eda; Hearst, R Jason; Ganapathisubramani, Bharathram

    2017-03-13

    A turbulent boundary layer subjected to free-stream turbulence is investigated in order to ascertain the scale interactions that dominate the near-wall region. The results are discussed in relation to a canonical high Reynolds number turbulent boundary layer because previous studies have reported considerable similarities between these two flows. Measurements were acquired simultaneously from four hot wires mounted to a rake which was traversed through the boundary layer. Particular focus is given to two main features of both canonical high Reynolds number boundary layers and boundary layers subjected to free-stream turbulence: (i) the footprint of the large scales in the logarithmic region on the near-wall small scales, specifically the modulating interaction between these scales, and (ii) the phase difference in amplitude modulation. The potential for a turbulent boundary layer subjected to free-stream turbulence to 'simulate' high Reynolds number wall-turbulence interactions is discussed. The results of this study have encouraging implications for future investigations of the fundamental scale interactions that take place in high Reynolds number flows as it demonstrates that these can be achieved at typical laboratory scales. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  4. Towards Large-area Field-scale Operational Evapotranspiration for Water Use Mapping

    NASA Astrophysics Data System (ADS)

    Senay, G. B.; Friedrichs, M.; Morton, C.; Huntington, J. L.; Verdin, J.

    2017-12-01

    Field-scale evapotranspiration (ET) estimates are needed for improving surface and groundwater use and water budget studies. Ideally, field-scale ET estimates would be at regional to national levels and cover long time periods. As a result of large data storage and computational requirements associated with processing field-scale satellite imagery such as Landsat, numerous challenges remain to develop operational ET estimates over large areas for detailed water use and availability studies. However, the combination of new science, data availability, and cloud computing technology is enabling unprecedented capabilities for ET mapping. To demonstrate this capability, we used Google's Earth Engine cloud computing platform to create nationwide annual ET estimates with 30-meter resolution Landsat (~16,000 images) and gridded weather data using the Operational Simplified Surface Energy Balance (SSEBop) model in support of the National Water Census, a USGS research program designed to build decision support capacity for water management agencies and other natural resource managers. By leveraging Google's Earth Engine Application Programming Interface (API) and developing software in a collaborative, open-platform environment, we rapidly advance from research towards applications for large-area field-scale ET mapping. Cloud computing of the Landsat image archive combined with other satellite, climate, and weather data, is creating never imagined opportunities for assessing ET model behavior and uncertainty, and ultimately providing the ability for more robust operational monitoring and assessment of water use at field-scales.
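
    As a rough illustration of the SSEBop idea, which, as published, scales a reference ET by a fraction derived from land surface temperature and predefined hot and cold reference temperatures, the sketch below applies that calculation to a synthetic temperature raster. The reference temperatures, reference ET, and scaling factor are placeholder values, not the operational parameterization.

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic Landsat-like land-surface-temperature scene (Kelvin); not actual imagery.
lst = 300.0 + 12.0 * rng.random((100, 100))

# SSEBop-style ET fraction (simplified): a cold reference Tc (well-watered vegetation)
# and a hot reference Th = Tc + dT bound the scene, and the fraction scales a reference
# ET into an actual ET estimate.
t_cold, d_t = 301.0, 10.0                   # placeholder reference temperatures
t_hot = t_cold + d_t
et_fraction = np.clip((t_hot - lst) / d_t, 0.0, 1.05)

eto = 6.0                                   # placeholder grass reference ET, mm/day
k_factor = 1.0                              # placeholder scaling factor
eta = et_fraction * k_factor * eto          # actual ET estimate, mm/day

print(f"mean ET fraction: {et_fraction.mean():.2f}, mean ETa: {eta.mean():.2f} mm/day")
```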

  5. Psychometric properties of the feedback orientation scale among South African salespersons.

    PubMed

    Lilford, Neil; Caruana, Albert; Pitt, Leyland

    2014-02-01

    Feedback to employees is an important management tool, and the literature demonstrates that it has a positive effect on learning, motivation, and job performance. This study investigates in a non-U.S. context the psychometric properties of the Feedback Orientation Scale. Data were gathered from a sample of 202 salespersons from a large South African firm within the industrial fuels and lubricants sector. Confirmatory Factor Analysis provided evidence for the intended dimensionality, reliability, and convergent and discriminant validity of the scale.

  6. Linear Scaling Density Functional Calculations with Gaussian Orbitals

    NASA Technical Reports Server (NTRS)

    Scuseria, Gustavo E.

    1999-01-01

    Recent advances in linear scaling algorithms that circumvent the computational bottlenecks of large-scale electronic structure simulations make it possible to carry out density functional calculations with Gaussian orbitals on molecules containing more than 1000 atoms and 15000 basis functions using current workstations and personal computers. This paper discusses the recent theoretical developments that have led to these advances and demonstrates in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.

  7. Materials Integration and Doping of Carbon Nanotube-based Logic Circuits

    NASA Astrophysics Data System (ADS)

    Geier, Michael

    Over the last 20 years, extensive research into the structure and properties of the single-walled carbon nanotube (SWCNT) has elucidated many of the exceptional qualities possessed by SWCNTs, including record-setting tensile strength, excellent chemical stability, distinctive optoelectronic features, and outstanding electronic transport characteristics. In order to exploit these remarkable qualities, many application-specific hurdles must be overcome before the material can be implemented in commercial products. For electronic applications, recent advances in sorting SWCNTs by electronic type have enabled significant progress towards SWCNT-based integrated circuits. Despite these advances, demonstrations of SWCNT-based devices with suitable characteristics for large-scale integrated circuits have been limited. The processing methodologies, materials integration, and mechanistic understanding of electronic properties developed in this dissertation have enabled unprecedented scales of SWCNT-based transistor fabrication and integrated circuit demonstrations. Innovative materials selection and processing methods are at the core of this work and these advances have led to transistors with the necessary transport properties required for modern circuit integration. First, extensive collaborations with other research groups allowed for the exploration of SWCNT thin-film transistors (TFTs) using a wide variety of materials and processing methods such as new dielectric materials, hybrid semiconductor materials systems, and solution-based printing of SWCNT TFTs. These materials were integrated into circuit demonstrations such as NOR and NAND logic gates, voltage-controlled ring oscillators, and D-flip-flops using both rigid and flexible substrates. This dissertation explores strategies for implementing complementary SWCNT-based circuits, which were developed by using local metal gate structures that achieve enhancement-mode p-type and n-type SWCNT TFTs with widely separated and symmetric threshold voltages. Additionally, a novel n-type doping procedure for SWCNT TFTs was also developed utilizing a solution-processed organometallic small molecule to demonstrate the first network top-gated n-type SWCNT TFTs. Lastly, new doping and encapsulation layers were incorporated to stabilize both p-type and n-type SWCNT TFT electronic properties, which enabled the fabrication of large-scale memory circuits. Employing these materials and processing advances has addressed many application specific barriers to commercialization. For instance, the first thin-film SWCNT complementary metal-oxide-semiconductor (CMOS) logic devices are demonstrated with sub-nanowatt static power consumption and full rail-to-rail voltage transfer characteristics. With the introduction of a new n-type Rh-based molecular dopant, the first SWCNT TFTs are fabricated in top-gate geometries over large areas with high yield. Then by utilizing robust encapsulation methods, stable and uniform electronic performance of both p-type and n-type SWCNT TFTs has been achieved. Based on these complementary SWCNT TFTs, it is possible to simulate, design, and fabricate arrays of low-power static random access memory (SRAM) circuits, achieving large-scale integration for the first time based on solution-processed semiconductors. Together, this work provides a direct pathway for solution processable, large scale, power-efficient advanced integrated logic circuits and systems.

  8. Engineering-Scale Demonstration of DuraLith and Ceramicrete Waste Forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josephson, Gary B.; Westsik, Joseph H.; Pires, Richard P.

    2011-09-23

    To support the selection of a waste form for the liquid secondary wastes from the Hanford Waste Immobilization and Treatment Plant, Washington River Protection Solutions (WRPS) has initiated secondary waste form testing on four candidate waste forms. Two of the candidate waste forms have not been developed to the same scale as the more mature waste forms. This work describes engineering-scale demonstrations conducted on Ceramicrete and DuraLith candidate waste forms. Both candidate waste forms were successfully demonstrated at an engineering scale. A preliminary conceptual design could be prepared for full-scale production of the candidate waste forms. However, both waste forms are still too immature to support a detailed design. Formulations for each candidate waste form need to be developed so that the material has a longer working time after mixing the liquid and solid constituents together. Formulations optimized based on previous lab studies did not have sufficient working time to support large-scale testing. The engineering-scale testing was successfully completed using modified formulations. Further lab development and parametric studies are needed to optimize formulations with adequate working time and assess the effects of changes in raw materials and process parameters on the final product performance. Studies on effects of mixing intensity on the initial set time of the waste forms are also needed.

  9. An extended algebraic variational multiscale-multigrid-multifractal method (XAVM4) for large-eddy simulation of turbulent two-phase flow

    NASA Astrophysics Data System (ADS)

    Rasthofer, U.; Wall, W. A.; Gravemeier, V.

    2018-04-01

    A novel and comprehensive computational method, referred to as the eXtended Algebraic Variational Multiscale-Multigrid-Multifractal Method (XAVM4), is proposed for large-eddy simulation of the particularly challenging problem of turbulent two-phase flow. The XAVM4 involves multifractal subgrid-scale modeling as well as a Nitsche-type extended finite element method as an approach for two-phase flow. The application of an advanced structural subgrid-scale modeling approach in conjunction with a sharp representation of the discontinuities at the interface between two bulk fluids promises high-fidelity large-eddy simulation of turbulent two-phase flow. The high potential of the XAVM4 is demonstrated for large-eddy simulation of turbulent two-phase bubbly channel flow, that is, turbulent channel flow carrying a single large bubble of the size of the channel half-width in this particular application.

  10. Addressing Methodological Challenges in Large Communication Data Sets: Collecting and Coding Longitudinal Interactions in Home Hospice Cancer Care.

    PubMed

    Reblin, Maija; Clayton, Margaret F; John, Kevin K; Ellington, Lee

    2016-07-01

    In this article, we present strategies for collecting and coding a large longitudinal communication data set collected across multiple sites, consisting of more than 2000 hours of digital audio recordings from approximately 300 families. We describe our methods within the context of implementing a large-scale study of communication during cancer home hospice nurse visits, but this procedure could be adapted to communication data sets across a wide variety of settings. This research is the first study designed to capture home hospice nurse-caregiver communication, a highly understudied location and type of communication event. We present a detailed example protocol encompassing data collection in the home environment, large-scale, multisite secure data management, the development of theoretically-based communication coding, and strategies for preventing coder drift and ensuring reliability of analyses. Although each of these challenges has the potential to undermine the utility of the data, reliability between coders is often the only issue consistently reported and addressed in the literature. Overall, our approach demonstrates rigor and provides a "how-to" example for managing large, digitally recorded data sets from collection through analysis. These strategies can inform other large-scale health communication research.

  11. Addressing Methodological Challenges in Large Communication Datasets: Collecting and Coding Longitudinal Interactions in Home Hospice Cancer Care

    PubMed Central

    Reblin, Maija; Clayton, Margaret F; John, Kevin K; Ellington, Lee

    2015-01-01

    In this paper, we present strategies for collecting and coding a large longitudinal communication dataset collected across multiple sites, consisting of over 2000 hours of digital audio recordings from approximately 300 families. We describe our methods within the context of implementing a large-scale study of communication during cancer home hospice nurse visits, but this procedure could be adapted to communication datasets across a wide variety of settings. This research is the first study designed to capture home hospice nurse-caregiver communication, a highly understudied location and type of communication event. We present a detailed example protocol encompassing data collection in the home environment, large-scale, multi-site secure data management, the development of theoretically-based communication coding, and strategies for preventing coder drift and ensuring reliability of analyses. Although each of these challenges has the potential to undermine the utility of the data, reliability between coders is often the only issue consistently reported and addressed in the literature. Overall, our approach demonstrates rigor and provides a “how-to” example for managing large, digitally-recorded data sets from collection through analysis. These strategies can inform other large-scale health communication research. PMID:26580414

  12. Assessing the influence of rater and subject characteristics on measures of agreement for ordinal ratings.

    PubMed

    Nelson, Kerrie P; Mitani, Aya A; Edwards, Don

    2017-09-10

    Widespread inconsistencies are commonly observed between physicians' ordinal classifications in screening test results such as mammography. These discrepancies have motivated large-scale agreement studies where many raters contribute ratings. The primary goal of these studies is to identify factors related to physicians and patients' test results, which may lead to stronger consistency between raters' classifications. While ordered categorical scales are frequently used to classify screening test results, very few statistical approaches exist to model agreement between multiple raters. Here we develop a flexible and comprehensive approach to assess the influence of rater and subject characteristics on agreement between multiple raters' ordinal classifications in large-scale agreement studies. Our approach is based upon the class of generalized linear mixed models. Novel summary model-based measures are proposed to assess agreement between all, or a subgroup of raters, such as experienced physicians. Hypothesis tests are described to formally identify factors such as physicians' level of experience that play an important role in improving consistency of ratings between raters. We demonstrate how unique characteristics of individual raters can be assessed via conditional modes generated during the modeling process. Simulation studies are presented to demonstrate the performance of the proposed methods and summary measure of agreement. The methods are applied to a large-scale mammography agreement study to investigate the effects of rater and patient characteristics on the strength of agreement between radiologists. Copyright © 2017 John Wiley & Sons, Ltd.

  13. Planetesimal Formation through the Streaming Instability

    NASA Astrophysics Data System (ADS)

    Yang, Chao-Chin; Johansen, Anders; Schäfer, Urs

    2015-12-01

    The streaming instability is a promising mechanism to circumvent the barriers in direct dust growth and lead to the formation of planetesimals, as demonstrated by many previous studies. In order to resolve the thin layer of solids, however, most of these studies were focused on a local region of a protoplanetary disk with a limited simulation domain. It remains uncertain how the streaming instability is affected by the disk gas on large scales, and models that have sufficient dynamical range to capture both the thin particle layer and the large-scale disk dynamics are required. We hereby systematically push the limits of the computational domain up to more than the gas scale height, and study the particle-gas interaction on large scales in the saturated state of the streaming instability and the initial mass function of the resulting planetesimals. To overcome the numerical challenges posed by this kind of models, we have developed a new technique to simultaneously relieve the stringent time step constraints due to small-sized particles and strong local solid concentrations. Using these models, we demonstrate that the streaming instability can drive multiple radial, filamentary concentrations of solids, implying that planetesimals are born in well separated belt-like structures. We also find that the initial mass function of planetesimals via the streaming instability has a characteristic exponential form, which is robust against computational domain as well as resolution. These findings will help us further constrain the cosmochemical history of the Solar System as well as planet formation theory in general.

  14. The EMCC / DARPA Massively Parallel Electromagnetic Scattering Project

    NASA Technical Reports Server (NTRS)

    Woo, Alex C.; Hill, Kueichien C.

    1996-01-01

    The Electromagnetic Code Consortium (EMCC) was sponsored by the Advanced Research Projects Agency (ARPA) to demonstrate the effectiveness of massively parallel computing in large-scale radar signature predictions. The EMCC/ARPA project consisted of three parts.

  15. A rapid mechanism to remobilize and homogenize highly crystalline magma bodies.

    PubMed

    Burgisser, Alain; Bergantz, George W

    2011-03-10

    The largest products of magmatic activity on Earth, the great bodies of granite and their corresponding large eruptions, have a dual nature: homogeneity at the large scale and spatial and temporal heterogeneity at the small scale. This duality calls for a mechanism that selectively removes the large-scale heterogeneities associated with the incremental assembly of these magmatic systems and yet occurs rapidly despite crystal-rich, viscous conditions seemingly resistant to mixing. Here we show that a simple dynamic template can unify a wide range of apparently contradictory observations from both large plutonic bodies and volcanic systems by a mechanism of rapid remobilization (unzipping) of highly viscous crystal-rich mushes. We demonstrate that this remobilization can lead to rapid overturn and produce the observed juxtaposition of magmatic materials with very disparate ages and complex chemical zoning. What distinguishes our model is the recognition that the process has two stages. Initially, a stiff mushy magma is reheated from below, producing a reduction in crystallinity that leads to the growth of a subjacent buoyant mobile layer. When the thickening mobile layer becomes sufficiently buoyant, it penetrates the overlying viscous mushy magma. This second stage rapidly exports homogenized material from the lower mobile layer to the top of the system, and leads to partial overturn within the viscous mush itself as an additional mechanism of mixing. Model outputs illustrate that unzipping can rapidly produce large amounts of mobile magma available for eruption. The agreement between calculated and observed unzipping rates for historical eruptions at Pinatubo and at Montserrat demonstrates the general applicability of the model. This mechanism furthers our understanding of both the formation of periodically homogenized plutons (crust building) and of ignimbrites by large eruptions.

  16. A high-throughput assay for quantifying appetite and digestive dynamics.

    PubMed

    Jordi, Josua; Guggiana-Nilo, Drago; Soucy, Edward; Song, Erin Yue; Lei Wee, Caroline; Engert, Florian

    2015-08-15

    Food intake and digestion are vital functions, and their dysregulation is fundamental to many human diseases. Current methods do not support their dynamic quantification on large scales in unrestrained vertebrates. Here, we combine an infrared macroscope with fluorescently labeled food to quantify feeding behavior and intestinal nutrient metabolism with high temporal resolution, sensitivity, and throughput in naturally behaving zebrafish larvae. Using this method and rate-based modeling, we demonstrate that zebrafish larvae match nutrient intake to their bodily demand and that larvae adjust their digestion rate according to the ingested meal size. Such adaptive feedback mechanisms make this model system amenable to identifying potential chemical modulators. As proof of concept, we demonstrate that nicotine, l-lysine, ghrelin, and insulin have an impact on food intake analogous to that in mammals. Consequently, the method presented here will promote large-scale translational research of food intake and digestive function in a naturally behaving vertebrate. Copyright © 2015 the American Physiological Society.

  17. A high-throughput assay for quantifying appetite and digestive dynamics

    PubMed Central

    Guggiana-Nilo, Drago; Soucy, Edward; Song, Erin Yue; Lei Wee, Caroline; Engert, Florian

    2015-01-01

    Food intake and digestion are vital functions, and their dysregulation is fundamental to many human diseases. Current methods do not support their dynamic quantification on large scales in unrestrained vertebrates. Here, we combine an infrared macroscope with fluorescently labeled food to quantify feeding behavior and intestinal nutrient metabolism with high temporal resolution, sensitivity, and throughput in naturally behaving zebrafish larvae. Using this method and rate-based modeling, we demonstrate that zebrafish larvae match nutrient intake to their bodily demand and that larvae adjust their digestion rate according to the ingested meal size. Such adaptive feedback mechanisms make this model system amenable to identifying potential chemical modulators. As proof of concept, we demonstrate that nicotine, l-lysine, ghrelin, and insulin have an impact on food intake analogous to that in mammals. Consequently, the method presented here will promote large-scale translational research of food intake and digestive function in a naturally behaving vertebrate. PMID:26108871
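
    To make the rate-based modeling referred to above concrete, the sketch below integrates a minimal one-compartment model in which gut content rises at a constant ingestion rate during a feeding bout and empties by first-order digestion. The functional form and all parameter values are illustrative assumptions, not the model fitted in the study.

```python
# Minimal rate-based intake/digestion sketch (hypothetical parameters, not the
# study's fitted model): dG/dt = r_in(t) - k_dig * G
import numpy as np
from scipy.integrate import odeint

def gut_content(G, t, r_in, k_dig, t_meal_end):
    intake = r_in if t < t_meal_end else 0.0   # constant ingestion during the feeding bout
    return intake - k_dig * G                  # first-order digestion / gut clearance

t = np.linspace(0.0, 120.0, 600)                               # minutes
G = odeint(gut_content, 0.0, t, args=(1.0, 0.05, 30.0)).ravel()  # r_in, k_dig, meal end
print(f"peak gut content {G.max():.2f} a.u. at t = {t[G.argmax()]:.0f} min")
```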

  18. Scalable parallel distance field construction for large-scale applications

    DOE PAGES

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; ...

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
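
    For readers unfamiliar with distance fields, the single-node sketch below computes a signed Euclidean distance field on a voxel grid with SciPy's distance transform. The spherical object is a hypothetical stand-in for a surface of interest; the paper's contribution, the distributed parallel distance tree, is not reproduced here.

```python
# Single-node illustration of a 3D signed distance field on a voxel grid.
import numpy as np
from scipy.ndimage import distance_transform_edt

# Hypothetical "surface of interest": a sphere of radius 0.5 in a 64^3 grid.
n = 64
axis = np.linspace(-1.0, 1.0, n)
x, y, z = np.meshgrid(axis, axis, axis, indexing="ij")
inside = x**2 + y**2 + z**2 <= 0.5**2

# Distance (in voxel units) from every voxel to the object surface,
# positive outside the object and negative inside it.
signed = distance_transform_edt(~inside) - distance_transform_edt(inside)
print(signed.shape, float(signed.min()), float(signed.max()))
```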

  19. Multidimensional quantum entanglement with large-scale integrated optics.

    PubMed

    Wang, Jianwei; Paesani, Stefano; Ding, Yunhong; Santagati, Raffaele; Skrzypczyk, Paul; Salavrakos, Alexia; Tura, Jordi; Augusiak, Remigiusz; Mančinska, Laura; Bacco, Davide; Bonneau, Damien; Silverstone, Joshua W; Gong, Qihuang; Acín, Antonio; Rottwitt, Karsten; Oxenløwe, Leif K; O'Brien, Jeremy L; Laing, Anthony; Thompson, Mark G

    2018-04-20

    The ability to control multidimensional quantum systems is central to the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control, and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimensions up to 15 × 15 on a large-scale silicon photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality, and controllability of our multidimensional technology, and further exploit these abilities to demonstrate previously unexplored quantum applications, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development of multidimensional quantum technologies. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  20. Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors.

    PubMed

    Haghverdi, Laleh; Lun, Aaron T L; Morgan, Michael D; Marioni, John C

    2018-06-01

    Large-scale single-cell RNA sequencing (scRNA-seq) data sets that are produced in different laboratories and at different times contain batch effects that may compromise the integration and interpretation of the data. Existing scRNA-seq analysis methods incorrectly assume that the composition of cell populations is either known or identical across batches. We present a strategy for batch correction based on the detection of mutual nearest neighbors (MNNs) in the high-dimensional expression space. Our approach does not rely on predefined or equal population compositions across batches; instead, it requires only that a subset of the population be shared between batches. We demonstrate the superiority of our approach compared with existing methods by using both simulated and real scRNA-seq data sets. Using multiple droplet-based scRNA-seq data sets, we demonstrate that our MNN batch-effect-correction method can be scaled to large numbers of cells.
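
    The core step of the approach, detecting mutual nearest neighbors between two batches, can be sketched with scikit-learn as below. The published method additionally cosine-normalizes the expression values and smooths per-cell correction vectors with a Gaussian kernel; this minimal version uses synthetic data and a single global correction vector instead.

```python
# Sketch of mutual-nearest-neighbor (MNN) pair detection between two batches.
# Simplified relative to the published method: no cosine normalization and a
# single global correction vector instead of smoothed per-cell vectors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mnn_pairs(batch1, batch2, k=20):
    """Return (i, j) pairs where cell i in batch1 and cell j in batch2
    are among each other's k nearest neighbors."""
    _, idx12 = NearestNeighbors(n_neighbors=k).fit(batch2).kneighbors(batch1)
    _, idx21 = NearestNeighbors(n_neighbors=k).fit(batch1).kneighbors(batch2)
    set21 = {(j, int(i)) for j in range(len(batch2)) for i in idx21[j]}
    return [(i, int(j)) for i in range(len(batch1)) for j in idx12[i]
            if (int(j), i) in set21]

rng = np.random.default_rng(0)
b1 = rng.normal(size=(200, 50))
b2 = rng.normal(size=(180, 50)) + 0.5          # shifted batch, mimicking a batch effect
pairs = mnn_pairs(b1, b2)
shift = np.mean([b1[i] - b2[j] for i, j in pairs], axis=0)
b2_corrected = b2 + shift                      # apply the global correction vector
print(len(pairs), "MNN pairs; residual mean shift:",
      round(float(np.abs(b1.mean(0) - b2_corrected.mean(0)).mean()), 3))
```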

  1. Scalable Parallel Distance Field Construction for Large-Scale Applications.

    PubMed

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan-Liu; Kolla, Hemanth; Chen, Jacqueline H

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial locations. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. Our work greatly extends the usability of distance fields for demanding applications.

  2. Two-frequency (Δk) microwave scatterometer measurements of ocean wave spectra from an aircraft

    NASA Technical Reports Server (NTRS)

    Johnson, J. W.; Jones, W. L.; Weissman, D. E.

    1981-01-01

    A technique for remotely sensing the large-scale gravity wave spectrum on the ocean surface using a two-frequency (Δk) microwave scatterometer has been demonstrated from stationary platforms and proposed from moving platforms. This measurement takes advantage of Bragg-type resonance matching between the electromagnetic wavelength at the difference frequency and the length of the large-scale surface waves. A prominent resonance appears in the cross-product power spectral density (PSD) of the two backscattered signals. Ku-band aircraft scatterometer measurements were conducted by NASA in the North Sea during the 1979 Maritime Remote Sensing (MARSEN) experiment. Typical examples of cross-product PSDs computed from the MARSEN data are presented. They demonstrate strong resonances whose frequency and bandwidth agree with the surface characteristics and the theory. Directional modulation spectra of the surface reflectivity are compared to the gravity wave spectrum derived from surface truth measurements.

  3. A review of challenges to determining and demonstrating efficiency of large fire management

    Treesearch

    Matthew P. Thompson; Francisco Rodriguez y Silva; David E. Calkin; Michael S. Hand

    2017-01-01

    Characterising the impacts of wildland fire and fire suppression is critical information for fire management decision-making. Here, we focus on decisions related to the rare larger and longer-duration fire events, where the scope and scale of decision-making can be far broader than initial response efforts, and where determining and demonstrating efficiency of...

  4. Highly Efficient Large-Scale Lentiviral Vector Concentration by Tandem Tangential Flow Filtration

    PubMed Central

    Cooper, Aaron R.; Patel, Sanjeet; Senadheera, Shantha; Plath, Kathrin; Kohn, Donald B.; Hollis, Roger P.

    2014-01-01

    Large-scale lentiviral vector (LV) concentration can be inefficient and time consuming, often involving multiple rounds of filtration and centrifugation. This report describes a simpler method using two tangential flow filtration (TFF) steps to concentrate liter-scale volumes of LV supernatant, achieving in excess of 2000-fold concentration in less than 3 hours with very high recovery (>97%). Large volumes of LV supernatant can be produced easily through the use of multi-layer flasks, each having 1720 cm^2 surface area and producing ~560 mL of supernatant per flask. Combining the use of such flasks and TFF greatly simplifies large-scale production of LV. As a demonstration, the method is used to produce a very high titer LV (>10^10 TU/mL) and transduce primary human CD34+ hematopoietic stem/progenitor cells at high final vector concentrations with no overt toxicity. A complex LV (STEMCCA) for induced pluripotent stem cell generation is also concentrated from low initial titer and used to transduce and reprogram primary human fibroblasts with no overt toxicity. Additionally, a generalized and simple multiplexed real-time PCR assay is described for lentiviral vector titer and copy number determination. PMID:21784103

  5. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    PubMed

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.

  6. Large-scale frequency- and time-domain quantum entanglement over the optical frequency comb (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Pfister, Olivier

    2017-05-01

    When it comes to practical quantum computing, the two main challenges are circumventing decoherence (devastating quantum errors due to interactions with the environmental bath) and achieving scalability (as many qubits as needed for a real-life, game-changing computation). We show that using, in lieu of qubits, the "qumodes" represented by the resonant fields of the quantum optical frequency comb of an optical parametric oscillator allows one to create bona fide, large scale quantum computing processors, pre-entangled in a cluster state. We detail our recent demonstration of 60-qumode entanglement (out of an estimated 3000) and present an extension to combining this frequency-tagged with time-tagged entanglement, in order to generate an arbitrarily large, universal quantum computing processor.

  7. Large scale Full QM-MD investigation of small peptides and insulin adsorption on ideal and defective TiO2 (1 0 0) surfaces. Influence of peptide size on interfacial bonds

    NASA Astrophysics Data System (ADS)

    Dubot, Pierre; Boisseau, Nicolas; Cenedese, Pierre

    2018-05-01

    Large biomolecule interactions with oxide surfaces have attracted a lot of attention because they drive the behavior of implanted devices in the living body. To investigate the role of TiO2 surface structure on the adsorption of a large polypeptide (insulin), we use a homemade mixed Molecular Dynamics-full large-scale Quantum Mechanics code. A specifically re-parameterized (Ti) and globally convergent NDDO method, fitted to high-level ab initio methods (coupled cluster CCSD(T) and DFT), allows us to safely describe the electronic structure of the whole insulin-TiO2 surface system (up to 4000 atoms). Looking specifically at carboxylate residues, we demonstrate in this work that specific interfacial bonds are obtained for the insulin/TiO2 system that are not observed in the case of smaller peptides (tripeptides, insulin segment chains with different configurations). We also demonstrate that a large part of the adsorption energy is compensated by insulin conformational energy changes, and surface defects enhance this trend. Large slab dimensions allow us to take into account surface defects that are currently beyond ab initio capabilities owing to size effects. These results highlight the influence of the surface structure on the conformation, and therefore the possible inactivity, of adsorbed polypeptides.

  8. A survey on routing protocols for large-scale wireless sensor networks.

    PubMed

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

    With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs) based on the cooperation of a large number of nodes have become a hot topic. "Large-scale" mainly means a large coverage area or a high node density. Accordingly, routing protocols must scale well as the network scope extends and the node density increases. A sensor node is normally energy-limited and cannot be recharged, so its energy consumption has a significant effect on the scalability of the protocol. To the best of our knowledge, the mainstream methods currently used to solve the energy problem in large-scale WSNs are hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes at the high level are responsible for data aggregation and management work, and the low-level nodes for sensing their surroundings and collecting information. Hierarchical routing protocols have proved to be more energy-efficient than flat ones, in which all the nodes play the same role, especially in terms of data aggregation and the flooding of control packets. With a focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on criteria such as control overhead reduction, energy consumption mitigation, and energy balance. In order to give a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail, and analyze their advantages and disadvantages. Moreover, a comparison of the routing protocols is conducted to demonstrate the differences between them in terms of message complexity, memory requirements, localization, data aggregation, clustering manner, and other metrics. Finally, some open issues in routing protocol design for large-scale wireless sensor networks are discussed, and conclusions are drawn.
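
    As a concrete illustration of the hierarchical idea surveyed here, the sketch below implements the probabilistic cluster-head election used by LEACH, a classic hierarchical WSN protocol. The node count and cluster-head fraction are illustrative values, not parameters taken from the survey.

```python
# LEACH-style probabilistic cluster-head election (illustrative parameters only).
import random

N_NODES = 100
ROUND_PERIOD = 20                  # a node may serve as cluster head once per 20 rounds
P = 1.0 / ROUND_PERIOD             # desired fraction of cluster heads per round
last_ch_round = {n: -ROUND_PERIOD for n in range(N_NODES)}

def elect_cluster_heads(round_no):
    """Each node that has not served recently self-elects with threshold T(n)."""
    threshold = P / (1.0 - P * (round_no % ROUND_PERIOD))
    heads = []
    for n in range(N_NODES):
        if round_no - last_ch_round[n] >= ROUND_PERIOD and random.random() < threshold:
            heads.append(n)
            last_ch_round[n] = round_no
    return heads

random.seed(0)
for r in range(3):
    print(f"round {r}: {len(elect_cluster_heads(r))} cluster heads elected")
```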

  9. Large-scale metabolite analysis of standards and human serum by laser desorption ionization mass spectrometry from silicon nanopost arrays

    DOE PAGES

    Korte, Andrew R.; Stopka, Sylwia A.; Morris, Nicholas; ...

    2016-07-11

    The unique challenges presented by metabolomics have driven the development of new mass spectrometry (MS)-based techniques for small molecule analysis. We have previously demonstrated silicon nanopost arrays (NAPA) to be an effective substrate for laser desorption ionization (LDI) of small molecules for MS. However, the utility of NAPA-LDI-MS for a wide range of metabolite classes has not been investigated. Here we apply NAPA-LDI-MS to the large-scale acquisition of high-resolution mass spectra and tandem mass spectra from a collection of metabolite standards covering a range of compound classes including amino acids, nucleotides, carbohydrates, xenobiotics, lipids, and other classes. In untargeted analysis of metabolite standard mixtures, detection was achieved for 374 compounds and useful MS/MS spectra were obtained for 287 compounds, without individual optimization of ionization or fragmentation conditions. Metabolite detection was evaluated in the context of 31 metabolic pathways, and NAPA-LDI-MS was found to provide detection for 63% of investigated pathway metabolites. Individual, targeted analysis of the 20 common amino acids provided detection of 100% of the investigated compounds, demonstrating that improved coverage is possible through optimization and targeting of individual analytes or analyte classes. In direct analysis of aqueous and organic extracts from human serum samples, spectral features were assigned to a total of 108 small metabolites and lipids. Glucose and amino acids were quantitated within their physiological concentration ranges. Finally, the broad coverage demonstrated by this large-scale screening experiment opens the door for use of NAPA-LDI-MS in numerous metabolite analysis applications.

  10. Large-scale metabolite analysis of standards and human serum by laser desorption ionization mass spectrometry from silicon nanopost arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korte, Andrew R.; Stopka, Sylwia A.; Morris, Nicholas

    The unique challenges presented by metabolomics have driven the development of new mass spectrometry (MS)-based techniques for small molecule analysis. We have previously demonstrated silicon nanopost arrays (NAPA) to be an effective substrate for laser desorption ionization (LDI) of small molecules for MS. However, the utility of NAPA-LDI-MS for a wide range of metabolite classes has not been investigated. Here we apply NAPA-LDI-MS to the large-scale acquisition of high-resolution mass spectra and tandem mass spectra from a collection of metabolite standards covering a range of compound classes including amino acids, nucleotides, carbohydrates, xenobiotics, lipids, and other classes. In untargeted analysis of metabolite standard mixtures, detection was achieved for 374 compounds and useful MS/MS spectra were obtained for 287 compounds, without individual optimization of ionization or fragmentation conditions. Metabolite detection was evaluated in the context of 31 metabolic pathways, and NAPA-LDI-MS was found to provide detection for 63% of investigated pathway metabolites. Individual, targeted analysis of the 20 common amino acids provided detection of 100% of the investigated compounds, demonstrating that improved coverage is possible through optimization and targeting of individual analytes or analyte classes. In direct analysis of aqueous and organic extracts from human serum samples, spectral features were assigned to a total of 108 small metabolites and lipids. Glucose and amino acids were quantitated within their physiological concentration ranges. Finally, the broad coverage demonstrated by this large-scale screening experiment opens the door for use of NAPA-LDI-MS in numerous metabolite analysis applications.

  11. RNA sequencing demonstrates large-scale temporal dysregulation of gene expression in stimulated macrophages derived from MHC-defined chicken haplotypes.

    PubMed

    Irizarry, Kristopher J L; Downs, Eileen; Bryden, Randall; Clark, Jory; Griggs, Lisa; Kopulos, Renee; Boettger, Cynthia M; Carr, Thomas J; Keeler, Calvin L; Collisson, Ellen; Drechsler, Yvonne

    2017-01-01

    Discovering genetic biomarkers associated with disease resistance and enhanced immunity is critical to developing advanced strategies for controlling viral and bacterial infections in different species. Macrophages, important cells of innate immunity, are directly involved in cellular interactions with pathogens, the release of cytokines activating other immune cells, and antigen presentation to cells of the adaptive immune response. IFNγ is a potent activator of macrophages, and increased production has been associated with disease resistance in several species. This study characterizes the molecular basis for dramatically different nitric oxide production and immune function between the B2 and the B19 haplotype chicken macrophages. A large-scale RNA sequencing approach was employed to sequence the RNA of purified macrophages from each haplotype group (B2 vs. B19) during differentiation and after stimulation. Our results demonstrate that a large number of genes exhibit divergent expression between B2 and B19 haplotype cells both prior to and after stimulation. These differences in gene expression appear to be regulated by complex epigenetic mechanisms that need further investigation.

  12. Species composition and morphologic variation of Porites in the Gulf of California

    NASA Astrophysics Data System (ADS)

    López-Pérez, R. A.

    2013-09-01

    Morphometric analysis of corallite calices confirmed that from the late Miocene to the Recent, four species of Porites have inhabited the Gulf of California: the extinct Porites carrizensis, the locally extirpated Porites lobata and the extant Porites sverdrupi and Porites panamensis. Furthermore, large-scale spatial and temporal phenotypic plasticity was observed in the dominant species P. panamensis. Canonical discriminant analysis and ANOVA demonstrated that the calice structures of P. panamensis experienced size reduction between the late Pleistocene and Recent. Similarly, PERMANOVA, regression and correlation analyses demonstrated that across the 800 km from north to south in the gulf, P. panamensis populations displayed a similar reduction in calice structures. Based on correlation analysis with environmental data, these large spatial changes are likely related to changes in nutrient concentration and sea surface temperature. As such, the large-scale spatial and temporal phenotypic variation recorded in populations of P. panamensis in the Gulf of California is likely related to optimization of corallite performance (energy acquisition) within various environmental scenarios. These findings may have relevance to modern conservation efforts within this ecologically dominant genus.

  13. Evaluation of Large-scale Data to Detect Irregularity in Payment for Medical Services. An Extended Use of Benford's Law.

    PubMed

    Park, Junghyun A; Kim, Minki; Yoon, Seokjoon

    2016-05-17

    Sophisticated anti-fraud systems for the healthcare sector have been built based on several statistical methods. Although existing methods have been developed to detect fraud in the healthcare sector, these algorithms consume considerable time and cost, and lack a theoretical basis to handle large-scale data. Based on mathematical theory, this study proposes a new approach to using Benford's Law in which individual-level data are closely examined to identify specific fees for in-depth analysis. We extended the mathematical theory to demonstrate the manner in which large-scale data conform to Benford's Law. Then, we empirically tested its applicability using actual large-scale healthcare data from Korea's Health Insurance Review and Assessment (HIRA) National Patient Sample (NPS). For Benford's Law, we considered the mean absolute deviation (MAD) formula to test the large-scale data. We conducted our study on 32 diseases, comprising 25 representative diseases and 7 DRG-regulated diseases. We performed an empirical test on the 25 diseases, showing the applicability of Benford's Law to large-scale data in the healthcare industry. For the seven DRG-regulated diseases, we examined the individual-level data to identify specific fees for in-depth analysis. Among the eight categories of medical costs, we considered the strength of certain irregularities based on the details of each DRG-regulated disease. Using the degree of abnormality, we propose priority actions to be taken by government health departments and private insurance institutions to bring unnecessary medical expenses under control. However, when we detect deviations from Benford's Law, relatively high contamination ratios are required at conventional significance levels.
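
    For reference, the first-digit test summarized by the mean absolute deviation (MAD) compares observed leading-digit frequencies with Benford's expected proportions log10(1 + 1/d). The sketch below runs the test on synthetic fee amounts; it illustrates the statistic only and is not the authors' full screening pipeline on HIRA claims data.

```python
# Benford's-Law first-digit test with the mean absolute deviation (MAD) summary.
# Synthetic "fee" data for illustration only, not the HIRA claims data.
import numpy as np

def benford_mad(amounts):
    digits = np.array([int(str(abs(a)).lstrip("0.")[0]) for a in amounts if a != 0])
    observed = np.array([(digits == d).mean() for d in range(1, 10)])
    expected = np.log10(1 + 1 / np.arange(1, 10))      # Benford proportions
    return np.abs(observed - expected).mean(), observed, expected

rng = np.random.default_rng(1)
fees = rng.lognormal(mean=8, sigma=1.2, size=50_000)    # heavy-tailed, roughly Benford-like
mad, obs, exp = benford_mad(fees)
print(f"MAD = {mad:.4f}")   # smaller MAD means closer conformity to Benford's Law
```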

  14. Scalable Microfabrication Procedures for Adhesive-Integrated Flexible and Stretchable Electronic Sensors.

    PubMed

    Kang, Dae Y; Kim, Yun-Soung; Ornelas, Gladys; Sinha, Mridu; Naidu, Keerthiga; Coleman, Todd P

    2015-09-16

    New classes of ultrathin flexible and stretchable devices have changed the way modern electronics are designed to interact with their target systems. Though more and more novel technologies surface and steer the way we think about future electronics, there exists an unmet need with regard to optimizing the fabrication procedures for these devices so that large-scale industrial translation is realistic. This article presents an unconventional approach for facile microfabrication and processing of adhesive-peeled (AP) flexible sensors. By assembling AP sensors on a weakly-adhering substrate in an inverted fashion, we demonstrate a procedure with 50% reduced end-to-end processing time that achieves greater levels of fabrication yield. The methodology is used to demonstrate the fabrication of electrical and mechanical flexible and stretchable AP sensors that are peeled off their carrier substrates by consumer adhesives. In using this approach, we outline the manner by which adhesion is maintained and buckling is reduced for gold film processing on polydimethylsiloxane substrates. In addition, we demonstrate the compatibility of our methodology with large-scale post-processing using a roll-to-roll approach.

  15. Scalable Microfabrication Procedures for Adhesive-Integrated Flexible and Stretchable Electronic Sensors

    PubMed Central

    Kang, Dae Y.; Kim, Yun-Soung; Ornelas, Gladys; Sinha, Mridu; Naidu, Keerthiga; Coleman, Todd P.

    2015-01-01

    New classes of ultrathin flexible and stretchable devices have changed the way modern electronics are designed to interact with their target systems. Though more and more novel technologies surface and steer the way we think about future electronics, there exists an unmet need with regard to optimizing the fabrication procedures for these devices so that large-scale industrial translation is realistic. This article presents an unconventional approach for facile microfabrication and processing of adhesive-peeled (AP) flexible sensors. By assembling AP sensors on a weakly-adhering substrate in an inverted fashion, we demonstrate a procedure with 50% reduced end-to-end processing time that achieves greater levels of fabrication yield. The methodology is used to demonstrate the fabrication of electrical and mechanical flexible and stretchable AP sensors that are peeled off their carrier substrates by consumer adhesives. In using this approach, we outline the manner by which adhesion is maintained and buckling is reduced for gold film processing on polydimethylsiloxane substrates. In addition, we demonstrate the compatibility of our methodology with large-scale post-processing using a roll-to-roll approach. PMID:26389915

  16. A new large-scale manufacturing platform for complex biopharmaceuticals.

    PubMed

    Vogel, Jens H; Nguyen, Huong; Giovannini, Roberto; Ignowski, Jolene; Garger, Steve; Salgotra, Anil; Tom, Jennifer

    2012-12-01

    Complex biopharmaceuticals, such as recombinant blood coagulation factors, are addressing critical medical needs and represent a growing multibillion-dollar market. For commercial manufacturing of such molecules, which are sometimes inherently unstable, it is important to minimize product residence time in non-ideal milieu in order to obtain acceptable yields and consistently high product quality. Continuous perfusion cell culture allows minimization of residence time in the bioreactor, but also brings unique challenges in product recovery, which requires innovative solutions. In order to maximize yield, process efficiency, facility and equipment utilization, we have developed, scaled up and successfully implemented a new integrated manufacturing platform at commercial scale. This platform consists of a (semi-)continuous cell separation process based on a disposable flow path and integrated with the upstream perfusion operation, followed by membrane chromatography on large-scale adsorber capsules in rapid cycling mode. Implementation of the platform at commercial scale for a new product candidate led to a yield improvement of 40% compared to the conventional process technology, while product quality has been shown to be more consistently high. Over 1,000,000 L of cell culture harvest have been processed with 100% success rate to date, demonstrating the robustness of the new platform process in GMP manufacturing. While membrane chromatography is well established for polishing in flow-through mode, this is its first commercial-scale application for bind/elute chromatography in the biopharmaceutical industry and demonstrates its potential in particular for manufacturing of potent, low-dose biopharmaceuticals. Copyright © 2012 Wiley Periodicals, Inc.

  17. Approximate Computing Techniques for Iterative Graph Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Panyala, Ajay R.; Subasi, Omer; Halappanavar, Mahantesh

    Approximate computing enables processing of large-scale graphs by trading off quality for performance. Approximate computing techniques have become critical not only due to the emergence of parallel architectures but also due to the availability of large-scale datasets enabling data-driven discovery. Using two prototypical graph algorithms, PageRank and community detection, we present several approximate computing heuristics to scale the performance with minimal loss of accuracy. We present several heuristics including loop perforation, data caching, incomplete graph coloring and synchronization, and evaluate their efficiency. We demonstrate performance improvements of up to 83% for PageRank and up to 450x for community detection, with low impact on accuracy for both algorithms. We expect the proposed approximate techniques will enable scalable graph analytics on data of importance to several applications in science and their subsequent adoption to scale similar graph algorithms.
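
    Loop perforation, one of the heuristics evaluated, skips a fraction of the work an exact algorithm would perform. The toy sketch below applies it to power-iteration PageRank on a small hand-built graph; the skip fraction and graph are illustrative assumptions, and the reported speedups come from the authors' optimized parallel implementations rather than code like this.

```python
# Toy illustration of loop perforation applied to power-iteration PageRank:
# skip a fraction of the per-node edge updates each iteration, trading accuracy
# for speed. (Illustrative only; not the paper's parallel implementation.)
import random

def pagerank_perforated(adj, d=0.85, iters=50, skip_fraction=0.0, seed=0):
    random.seed(seed)
    n = len(adj)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - d) / n] * n
        for u, neighbors in enumerate(adj):
            if skip_fraction and random.random() < skip_fraction:
                continue                      # perforation: skip this node's updates
            if neighbors:
                share = d * rank[u] / len(neighbors)
                for v in neighbors:
                    new[v] += share
        rank = new
    return rank

adj = [[1, 2], [2], [0], [0, 2]]              # tiny directed graph (out-neighbor lists)
exact = pagerank_perforated(adj, skip_fraction=0.0)
approx = pagerank_perforated(adj, skip_fraction=0.3)
print(max(abs(e - a) for e, a in zip(exact, approx)))  # accuracy lost to perforation
```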

  18. Molecular Precision at Micrometer Length Scales: Hierarchical Assembly of DNA-Protein Nanostructures.

    PubMed

    Schiffels, Daniel; Szalai, Veronika A; Liddle, J Alexander

    2017-07-25

    Robust self-assembly across length scales is a ubiquitous feature of biological systems but remains challenging for synthetic structures. Taking a cue from biology, where disparate molecules work together to produce large, functional assemblies, we demonstrate how to engineer microscale structures with nanoscale features: our self-assembly approach begins by using DNA polymerase to controllably create double-stranded DNA (dsDNA) sections on a single-stranded template. The single-stranded DNA (ssDNA) sections are then folded into a mechanically flexible skeleton by the origami method. This process simultaneously shapes the structure at the nanoscale and directs the large-scale geometry. The DNA skeleton guides the assembly of RecA protein filaments, which provides rigidity at the micrometer scale. We use our modular design strategy to assemble tetrahedral, rectangular, and linear shapes of defined dimensions. This method enables the robust construction of complex assemblies, greatly extending the range of DNA-based self-assembly methods.

  19. Scale Up of Malonic Acid Fermentation Process: Cooperative Research and Development Final Report, CRADA Number CRD-16-612

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schell, Daniel J

    The goal of this work is to use the large fermentation vessels in the National Renewable Energy Laboratory's (NREL) Integrated Biorefinery Research Facility (IBRF) to scale up Lygos' biological-based process for producing malonic acid and to generate performance data. Initially, work at the 1 L scale validated successful transfer of Lygos' fermentation protocols to NREL using a glucose substrate. Outside of the scope of the CRADA with NREL, Lygos tested their process on lignocellulosic sugars produced by NREL at Lawrence Berkeley National Laboratory's (LBNL) Advanced Biofuels Process Development Unit (ABPDU). NREL produced these cellulosic sugar solutions from corn stover using a separate cellulose/hemicellulose process configuration. Finally, NREL performed fermentations using glucose in large fermentors (1,500- and 9,000-L vessels) to intermediate product and to demonstrate successful performance of Lygos' technology at larger scales.

  20. EVALUATION PLAN FOR TWO LARGE-SCALE LANDFILL BIOREACTOR TECHNOLOGIES

    EPA Science Inventory

    Abstract - Waste Management, Inc., is operating two long-term bioreactor studies at the Outer Loop Landfill in Louisville, KY, including facultative landfill bioreactor and staged aerobic-anaerobic landfill bioreactor demonstrations. A Quality Assurance Project Plan (QAPP) was p...

  1. Large Scale Evaluation of Nickel Aluminide Rolls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2005-09-01

    This completed project was a joint effort between Oak Ridge National Laboratory and Bethlehem Steel (now Mittal Steel) to demonstrate the effectiveness of using nickel aluminide intermetallic alloy rolls as part of an updated, energy-efficient, commercial annealing furnace system.

  2. Cosmology with CLASS

    NASA Astrophysics Data System (ADS)

    Watts, Duncan; CLASS Collaboration

    2018-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will use large-scale measurements of the polarized cosmic microwave background (CMB) to constrain the physics of inflation, reionization, and massive neutrinos. The experiment is designed to characterize the largest scales, which are inaccessible to most ground-based experiments, and remove Galactic foregrounds from the CMB maps. In this dissertation talk, I present simulations of CLASS data and demonstrate their ability to constrain the simplest single-field models of inflation and to reduce the uncertainty of the optical depth to reionization, τ, to near the cosmic variance limit, significantly improving on current constraints. These constraints will bring a qualitative shift in our understanding of standard ΛCDM cosmology. In particular, CLASS's measurement of τ breaks cosmological parameter degeneracies. Probes of large-scale structure (LSS) test the effect of neutrino free-streaming at small scales, which depends on the mass of the neutrinos. CLASS's τ measurement, when combined with next-generation LSS and BAO measurements, will enable a 4σ detection of neutrino mass, compared with 2σ without CLASS data. I will also briefly discuss the CLASS experiment's measurements of circular polarization of the CMB and the implications of the first such near-all-sky map.

  3. Transport Barriers in Bootstrap Driven Tokamaks

    NASA Astrophysics Data System (ADS)

    Staebler, Gary

    2017-10-01

    Maximizing the bootstrap current in a tokamak, so that it drives a high fraction of the total current, reduces the external power required to drive current by other means. Improved energy confinement, relative to empirical scaling laws, enables a reactor to more fully take advantage of the bootstrap driven tokamak. Experiments have demonstrated improved energy confinement due to the spontaneous formation of an internal transport barrier in high bootstrap fraction discharges. Gyrokinetic analysis and quasilinear predictive modeling demonstrate that the observed transport barrier is due to the suppression of turbulence, primarily by the large Shafranov shift. ExB velocity shear does not play a significant role in the transport barrier due to the high safety factor. It will be shown that the Shafranov shift can produce a bifurcation to improved confinement in regions of positive magnetic shear or a continuous reduction in transport for weak or negative magnetic shear. Operation at high safety factor lowers the pressure gradient threshold for the Shafranov shift driven barrier formation. The ion energy transport is reduced to neoclassical levels, and electron energy and particle transport are reduced, but still turbulent, within the barrier. Deeper into the plasma, very large levels of electron transport are observed. The observed electron temperature profile is shown to be close to the threshold for the electron temperature gradient (ETG) mode. Large ETG-driven energy transport is qualitatively consistent with recent multi-scale gyrokinetic simulations showing that reducing the ion-scale turbulence can lead to a large increase in the electron-scale transport. A new saturation model for the quasilinear TGLF transport code, which fits these multi-scale gyrokinetic simulations, can match the data if the impact of zonal flow mixing on the ETG modes is reduced at high safety factor. This work was supported by the U.S. Department of Energy under DE-FG02-95ER54309 and DE-FC02-04ER54698.

  4. Microarray Data Processing Techniques for Genome-Scale Network Inference from Large Public Repositories.

    PubMed

    Chockalingam, Sriram; Aluru, Maneesha; Aluru, Srinivas

    2016-09-19

    Pre-processing of microarray data is a well-studied problem. Furthermore, all popular platforms come with their own recommended best practices for differential analysis of genes. However, for genome-scale network inference using microarray data collected from large public repositories, these methods filter out a considerable number of genes. This is primarily due to the effects of aggregating a diverse array of experiments with different technical and biological scenarios. Here we introduce a pre-processing pipeline suitable for inferring genome-scale gene networks from large microarray datasets. We show that partitioning of the available microarray datasets according to biological relevance into tissue- and process-specific categories significantly extends the limits of downstream network construction. We demonstrate the effectiveness of our pre-processing pipeline by inferring genome-scale networks for the model plant Arabidopsis thaliana using two different construction methods and a collection of 11,760 Affymetrix ATH1 microarray chips. Our pre-processing pipeline and the datasets used in this paper are made available at http://alurulab.cc.gatech.edu/microarray-pp.
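
    Once a pipeline of this kind yields a clean, partition-specific expression matrix, a network can be inferred from it. The sketch below builds a generic correlation-threshold co-expression network from a synthetic genes-by-samples matrix; it is a minimal stand-in and does not reproduce the two construction methods used in the paper.

```python
# Generic correlation-threshold co-expression network from a pre-processed
# expression matrix (genes x samples). Minimal stand-in, not the paper's methods.
import numpy as np

def coexpression_edges(expr, gene_names, min_abs_corr=0.8):
    corr = np.corrcoef(expr)                      # gene-by-gene Pearson correlations
    edges = []
    for i in range(len(gene_names)):
        for j in range(i + 1, len(gene_names)):
            if abs(corr[i, j]) >= min_abs_corr:
                edges.append((gene_names[i], gene_names[j], round(float(corr[i, j]), 3)))
    return edges

rng = np.random.default_rng(2)
expr = rng.normal(size=(5, 40))                   # 5 genes x 40 arrays (synthetic)
expr[1] = expr[0] + 0.1 * rng.normal(size=40)     # make two genes strongly co-expressed
print(coexpression_edges(expr, [f"g{i}" for i in range(5)]))
```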

  5. Direct numerical simulation of turbulent channel flow with spanwise alternatively distributed strips control

    NASA Astrophysics Data System (ADS)

    Ni, Weidan; Lu, Lipeng; Fang, Jian; Moulinec, Charles; Yao, Yufeng

    2018-05-01

    The effect of spanwise alternatively distributed strips (SADS) control on turbulent flow in a plane channel has been studied by direct numerical simulations to investigate the characteristics of large-scale streamwise vortices (LSSVs) induced by small-scale active wall actuation, and their potential in suppressing flow separation. SADS control is realized by alternatively arranging out-of-phase control (OPC) and in-phase control (IPC) wall actuations on the lower channel wall surface, in the spanwise direction. It is found that the coherent structures are suppressed or enhanced alternatively by OPC or IPC, respectively, leading to the formation of a vertical shear layer, which is responsible for the LSSVs’ presence. Large-scale low-speed regions can also be observed above the OPC strips, which resemble large-scale low-speed streaks. LSSVs are found to be in a statistically converged steady state, and their cores are located between two neighboring OPC and IPC strips. Their motions contribute significantly to the momentum transport in the wall-normal and spanwise directions, demonstrating their potential ability to suppress flow separation.

  6. Fabrication and performance analysis of 4-sq cm indium tin oxide/InP photovoltaic solar cells

    NASA Technical Reports Server (NTRS)

    Gessert, T. A.; Li, X.; Phelps, P. W.; Coutts, T. J.; Tzafaras, N.

    1991-01-01

    Large-area photovoltaic solar cells based on direct current magnetron sputter deposition of indium tin oxide (ITO) onto single-crystal p-InP substrates demonstrated both the radiation hardness and high performance necessary for extraterrestrial applications. A small-scale production project was initiated in which approximately 50 ITO/InP cells are being produced. The procedures used in this small-scale production of 4-sq cm ITO/InP cells are presented and discussed. The discussion includes analyses of the performance range of all available production cells and device performance data for the best cells thus far produced. Additionally, processing experience gained from the production of these cells is discussed, indicating other issues that may be encountered when large-scale production is begun.

  7. Quantum chaos inside black holes

    NASA Astrophysics Data System (ADS)

    Addazi, Andrea

    2017-06-01

    We show how semiclassical black holes can be reinterpreted as an effective geometry, composed of a large ensemble of horizonless naked singularities (eventually smoothed at the Planck scale). We call these new items frizzy-balls, which can be rigorously defined by Euclidean path integral approach. This leads to interesting implications about information paradoxes. We demonstrate that infalling information will chaotically propagate inside this system before going to the full quantum gravity regime (Planck scale).

  8. A spatial picture of the synthetic large-scale motion from dynamic roughness

    NASA Astrophysics Data System (ADS)

    Huynh, David; McKeon, Beverley

    2017-11-01

    Jacobi and McKeon (2011) set up a dynamic roughness apparatus to excite a synthetic, travelling wave-like disturbance in a wind-tunnel boundary layer study. In the present work, this dynamic roughness has been adapted for a flat-plate, turbulent boundary layer experiment in a water tunnel. A key advantage of operating in water as opposed to air is the longer flow timescales. This makes accessible higher non-dimensional actuation frequencies and correspondingly shorter synthetic length scales, and is thus more amenable to particle image velocimetry. As a result, this experiment provides a novel spatial picture of the synthetic mode, the coupled small scales, and their streamwise development. It is demonstrated that varying the roughness actuation frequency allows for significant tuning of the streamwise wavelength of the synthetic mode, with a range of 3δ to 13δ being achieved. Employing a phase-locked decomposition, spatial snapshots are constructed of the synthetic large scale and used to analyze its streamwise behavior. Direct spatial filtering is used to separate the synthetic large scale and the related small scales, and the results are compared to those obtained by temporal filtering that invokes Taylor's hypothesis. The support of AFOSR (Grant # FA9550-16-1-0361) is gratefully acknowledged.
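
    The phase-locked decomposition mentioned above amounts to binning snapshots by the phase of the known actuation frequency and averaging within each bin. The sketch below does this for a synthetic signal; the sampling rate, actuation frequency, and data are assumptions for illustration, not the experiment's PIV processing chain.

```python
# Minimal phase-locked (phase-averaged) decomposition: bin snapshots by actuation
# phase and average per bin. Synthetic data; illustrative parameters only.
import numpy as np

def phase_average(snapshots, times, f_actuation, n_bins=8):
    phases = (2 * np.pi * f_actuation * times) % (2 * np.pi)
    bins = (phases / (2 * np.pi) * n_bins).astype(int)
    return np.array([snapshots[bins == b].mean(axis=0) for b in range(n_bins)])

rng = np.random.default_rng(3)
t = np.arange(0.0, 10.0, 0.01)                          # seconds (assumed sampling)
f = 2.0                                                 # assumed actuation frequency, Hz
snaps = np.sin(2 * np.pi * f * t)[:, None] + 0.3 * rng.normal(size=(len(t), 16))
phase_locked = phase_average(snaps, t, f)               # shape: (n_bins, n_points)
print(phase_locked.shape)
```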

  9. A Universal Model for Solar Eruptions

    NASA Astrophysics Data System (ADS)

    Wyper, Peter; Antiochos, Spiro K.; DeVore, C. Richard

    2017-08-01

    We present a universal model for solar eruptions that encompasses coronal mass ejections (CMEs) at one end of the scale and coronal jets at the other. The model is a natural extension of the Magnetic Breakout model for large-scale fast CMEs. Using high-resolution adaptive mesh MHD simulations conducted with the ARMS code, we show that so-called blowout or mini-filament coronal jets can be explained as one realisation of the breakout process. We also demonstrate the robustness of this “breakout-jet” model by studying three realisations in simulations with different ambient field inclinations. We conclude that magnetic breakout supports both large-scale fast CMEs and small-scale coronal jets, and by inference eruptions at scales in between. Thus, magnetic breakout provides a unified model for solar eruptions. P.F.W. was supported in this work by an award of a RAS Fellowship and an appointment to the NASA Postdoctoral Program. C.R.D. and S.K.A. were supported by NASA’s LWS TR&T and H-SR programs.

  10. Inviscid criterion for decomposing scales

    NASA Astrophysics Data System (ADS)

    Zhao, Dongxiao; Aluie, Hussein

    2018-05-01

    The proper scale decomposition in flows with significant density variations is not as straightforward as in incompressible flows, with many possible ways to define a "length scale." A choice can be made according to the so-called inviscid criterion [Aluie, Physica D 247, 54 (2013), 10.1016/j.physd.2012.12.009]. It is a kinematic requirement that a scale decomposition yield negligible viscous effects at large enough length scales. It has recently been proved [Aluie, Physica D 247, 54 (2013), 10.1016/j.physd.2012.12.009] that a Favre decomposition satisfies the inviscid criterion, which is necessary to unravel inertial-range dynamics and the cascade. Here we present numerical demonstrations of those results. We also show that two other commonly used decompositions can violate the inviscid criterion and, therefore, are not suitable for studying inertial-range dynamics in variable-density and compressible turbulence. Our results have a practical modeling implication in showing that viscous terms in Large Eddy Simulations do not need to be modeled and can be neglected.

  11. Anisotropic storage medium development in a full-scale, sodium alanate-based, hydrogen storage system

    DOE PAGES

    Jorgensen, Scott W.; Johnson, Terry A.; Payzant, E. Andrew; ...

    2016-06-11

    Deuterium desorption in an automotive-scale hydrogen storage tube was studied in situ using neutron diffraction. Gradients in the concentration of the various alanate phases were observed along the length of the tube, but no significant radial anisotropy was present. In addition, neutron radiography and computed tomography showed large-scale cracks and density fluctuations, confirming the presence of these structures in an undisturbed storage system. These results demonstrate that large-scale storage structures are not uniform even after many absorption/desorption cycles and that movement of gaseous hydrogen cannot be properly modeled by a simple porous-bed model. In addition, the evidence indicates that there is slow transformation of species at one end of the tube, indicating loss of catalyst functionality. These observations explain the unusually fast movement of hydrogen in a full-scale system and show that loss of capacity is not occurring uniformly in this type of hydrogen-storage system.

  12. Variations of trends of indicators describing complex systems: Change of scaling precursory to extreme events

    NASA Astrophysics Data System (ADS)

    Keilis-Borok, V. I.; Soloviev, A. A.

    2010-09-01

    Socioeconomic and natural complex systems persistently generate extreme events, also known as disasters, crises, or critical transitions. Here we analyze patterns of background activity preceding extreme events in four complex systems: economic recessions, surges in homicides in a megacity, magnetic storms, and strong earthquakes. We use as a starting point the indicators describing the system's behavior and identify changes in an indicator's trend. Those changes constitute our background events (BEs). We demonstrate a premonitory pattern common to all four systems considered: relatively large-magnitude BEs become more frequent before an extreme event. A premonitory change of scaling has been found in various models and observations. Here we demonstrate this change in the scaling of uniformly defined BEs in four real complex systems, their enormous differences notwithstanding.

  13. Chemically intuited, large-scale screening of MOFs by machine learning techniques

    NASA Astrophysics Data System (ADS)

    Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.

    2017-10-01

    A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising not only for gas storage in MOFs but also for many other materials-science applications.
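
    To make the screening workflow concrete, the sketch below trains a regressor on pre-computed structural descriptors to predict a storage property and evaluates it on held-out MOFs. The descriptors, the synthetic data, and the choice of a random forest are illustrative assumptions for this sketch, not the specific protocol of the study.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    # Hypothetical pre-computed MOF descriptors (e.g. pore diameter, surface area,
    # void fraction) and reference uptake values from ab initio/GCMC-style data.
    rng = np.random.default_rng(0)
    X = rng.random((500, 3))                                        # one row per MOF
    y = 10 * X[:, 0] + 5 * X[:, 1] ** 2 + rng.normal(0, 0.5, 500)   # synthetic target

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("held-out R^2:", round(r2_score(y_te, model.predict(X_te)), 3))

    As in the abstract, the prediction accuracy of such a sketch generally improves as the training sample grows, which can be checked by rerunning it with larger subsets.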

  14. An adaptive response surface method for crashworthiness optimization

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Yang, Ren-Jye; Zhu, Ping

    2013-11-01

    Response surface-based design optimization has been commonly used for optimizing large-scale design problems in the automotive industry. However, most response surface models are built from a limited number of design points without considering data uncertainty. In addition, the selection of a response surface in the literature is often arbitrary. This article uses a Bayesian metric to systematically select the best available response surface among several candidates in a library while considering data uncertainty. An adaptive, efficient response surface strategy, which minimizes the number of computationally intensive simulations, was developed for design optimization of large-scale complex problems. This methodology was demonstrated by a crashworthiness optimization example.
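
    A minimal sketch of the selection step is given below: several candidate response surfaces (here, polynomials of increasing degree) are fitted to noisy simulation responses and ranked with the Bayesian information criterion, used here only as a stand-in for the article's Bayesian metric. The one-dimensional design variable and the data are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, 30)                                  # design points (e.g. crash runs)
    y = 1.0 - 2.0 * x + 0.8 * x ** 3 + rng.normal(0, 0.1, 30)   # noisy responses

    def bic(y_obs, y_fit, n_params):
        """Bayesian information criterion for a Gaussian-noise fit."""
        n = len(y_obs)
        rss = np.sum((y_obs - y_fit) ** 2)
        return n * np.log(rss / n) + n_params * np.log(n)

    # Library of candidate response surfaces: polynomials of increasing degree.
    scores = {}
    for degree in range(1, 6):
        coeffs = np.polyfit(x, y, degree)
        scores[degree] = bic(y, np.polyval(coeffs, x), degree + 1)

    best = min(scores, key=scores.get)
    print("selected polynomial degree:", best)

    In an adaptive strategy, new simulation points would then be added where the selected surface is least trusted and the selection repeated.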

  15. Large Scale Flutter Data for Design of Rotating Blades Using Navier-Stokes Equations

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.

    2012-01-01

    A procedure to compute flutter boundaries of rotating blades is presented, based on (a) the Navier-Stokes equations and (b) a frequency-domain method compatible with industry practice. The procedure is initially validated against (a) unsteady loads from a flapping-wing experiment and (b) a flutter boundary from a fixed-wing experiment. A large-scale flutter computation is then demonstrated for a rotating blade: (a) a single job-submission script; (b) a flutter boundary obtained in 24 hours of wall-clock time on 100 cores; (c) linear scalability with the number of cores, tested with 1000 cores producing data for 10 flutter boundaries in 25 hours. Further wall-clock speed-up is possible by performing parallel computations within each case.

  16. Probabilistic double guarantee kidnapping detection in SLAM.

    PubMed

    Tian, Yang; Ma, Shugen

    2016-01-01

    For determining whether kidnapping has happened, and which type of kidnapping it is, while a robot performs autonomous tasks in an unknown environment, a double guarantee kidnapping detection (DGKD) method has been proposed. The good performance of DGKD in a relatively small environment has been shown. However, our recent work found a limitation of DGKD in a large-scale environment. In order to increase the adaptability of DGKD in a large-scale environment, an improved method called probabilistic double guarantee kidnapping detection is proposed in this paper, which combines the probability of features' positions and the robot's posture. Simulation results demonstrate the validity and accuracy of the proposed method.

  17. Criminological research in contemporary China: challenges and lessons learned from a large-scale criminal victimization survey.

    PubMed

    Zhang, Lening; Messner, Steven F; Lu, Jianhong

    2007-02-01

    This article discusses research experience gained from a large-scale survey of criminal victimization recently conducted in Tianjin, China. The authors review some of the more important challenges that arose in the research, their responses to these challenges, and lessons learned that might be beneficial to other scholars who are interested in conducting criminological research in China. Their experience underscores the importance of understanding the Chinese political, cultural, and academic context, and the utility of collaborating with experienced and knowledgeable colleagues "on site." Although there are some special difficulties and barriers, their project demonstrates the feasibility of original criminological data collection in China.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodford, William

    This document is the final technical report from 24M Technologies on the project titled: Low Cost, Structurally Advanced Novel Electrode and Cell Manufacturing. All of the program milestones and deliverables were completed during the performance of the award. Specific accomplishments are: 1) 24M demonstrated the processability and electrochemical performance of semi-solid electrodes with active volume contents increased by 10% relative to the program baseline; 2) electrode-level metrics, quality, and yield were demonstrated at an 80 cm^2 electrode footprint; 3) these electrodes were integrated into cells with consistent capacities and impedances, including cells delivered to Argonne National Laboratory for independent testing; 4) those processes were scaled to a large-format (>260 cm^2) electrode footprint and quality and yield were demonstrated; 5) a high-volume manufacturing approach for large-format electrode fabrication was demonstrated; and 6) large-format cells (>100 Ah capacity) were prototyped with consistent capacity and impedance, including cells which were delivered to Argonne National Laboratory for independent testing.

  19. Time domain topology optimization of 3D nanophotonic devices

    NASA Astrophysics Data System (ADS)

    Elesin, Y.; Lazarov, B. S.; Jensen, J. S.; Sigmund, O.

    2014-02-01

    We present an efficient parallel topology optimization framework for design of large scale 3D nanophotonic devices. The code shows excellent scalability and is demonstrated for optimization of broadband frequency splitter, waveguide intersection, photonic crystal-based waveguide and nanowire-based waveguide. The obtained results are compared to simplified 2D studies and we demonstrate that 3D topology optimization may lead to significant performance improvements.

  20. A dual-scale metal nanowire network transparent conductor for highly efficient and flexible organic light emitting diodes.

    PubMed

    Lee, Jinhwan; An, Kunsik; Won, Phillip; Ka, Yoonseok; Hwang, Hyejin; Moon, Hyunjin; Kwon, Yongwon; Hong, Sukjoon; Kim, Changsoon; Lee, Changhee; Ko, Seung Hwan

    2017-02-02

    Although solution processed metal nanowire (NW) percolation networks are a strong candidate to replace commercial indium tin oxide, their performance is limited in thin film device applications due to reduced effective electrical areas arising from the dimple structure and percolative voids that single size metal NW percolation networks inevitably possess. Here, we present a transparent electrode based on a dual-scale silver nanowire (AgNW) percolation network embedded in a flexible substrate to demonstrate a significant enhancement in the effective electrical area by filling the large percolative voids present in a long/thick AgNW network with short/thin AgNWs. As a proof of concept, the performance enhancement of a flexible phosphorescent OLED is demonstrated with the dual-scale AgNW percolation network compared to the previous mono-scale AgNWs. Moreover, we report that mechanical and oxidative robustness, which are critical for flexible OLEDs, are greatly increased by embedding the dual-scale AgNW network in a resin layer.

  1. k-neighborhood Decentralization: A Comprehensive Solution to Index the UMLS for Large Scale Knowledge Discovery

    PubMed Central

    Xiang, Yang; Lu, Kewei; James, Stephen L.; Borlawsky, Tara B.; Huang, Kun; Payne, Philip R.O.

    2011-01-01

    The Unified Medical Language System (UMLS) is the largest thesaurus in the biomedical informatics domain. Previous works have shown that knowledge constructs comprised of transitively-associated UMLS concepts are effective for discovering potentially novel biomedical hypotheses. However, the extremely large size of the UMLS becomes a major challenge for these applications. To address this problem, we designed a k-neighborhood Decentralization Labeling Scheme (kDLS) for the UMLS, and the corresponding method to effectively evaluate the kDLS indexing results. kDLS provides a comprehensive solution for indexing the UMLS for very efficient large scale knowledge discovery. We demonstrated that it is highly effective to use kDLS paths to prioritize disease-gene relations across the whole genome, with extremely high fold-enrichment values. To our knowledge, this is the first indexing scheme capable of supporting efficient large scale knowledge discovery on the UMLS as a whole. Our expectation is that kDLS will become a vital engine for retrieving information and generating hypotheses from the UMLS for future medical informatics applications. PMID:22154838
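
    The flavour of a neighborhood-based labeling scheme can be illustrated with the short sketch below: each concept stores the concepts reachable within k hops in either direction, and two concepts are linked by a path of length at most 2k whenever their labels intersect. The toy graph, the node names and the exact labeling rule are illustrative assumptions that simplify the kDLS construction described in the paper.

    import networkx as nx

    def k_neighborhood_labels(G, k):
        """For each concept, store the concepts reachable within k hops (out-labels)
        and the concepts that can reach it within k hops (in-labels)."""
        rev = G.reverse(copy=False)
        out_labels = {v: set(nx.single_source_shortest_path_length(G, v, cutoff=k)) for v in G}
        in_labels = {v: set(nx.single_source_shortest_path_length(rev, v, cutoff=k)) for v in G}
        return out_labels, in_labels

    def linked_within_2k(u, v, out_labels, in_labels):
        """True if some intermediate concept lies in both u's forward and v's backward
        k-neighborhood, i.e. a directed path of length <= 2k exists from u to v."""
        return bool(out_labels[u] & in_labels[v])

    # Toy concept graph standing in for transitive UMLS associations.
    G = nx.DiGraph([("diseaseX", "proteinA"), ("proteinA", "geneB"), ("geneB", "pathwayC")])
    out_l, in_l = k_neighborhood_labels(G, k=2)
    print(linked_within_2k("diseaseX", "pathwayC", out_l, in_l))   # True: path of length 3 <= 2*2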

  2. Hierarchical Learning of Tree Classifiers for Large-Scale Plant Species Identification.

    PubMed

    Fan, Jianping; Zhou, Ning; Peng, Jinye; Gao, Ling

    2015-11-01

    In this paper, a hierarchical multi-task structural learning algorithm is developed to support large-scale plant species identification, where a visual tree is constructed for organizing large numbers of plant species in a coarse-to-fine fashion and determining the inter-related learning tasks automatically. For a given parent node on the visual tree, it contains a set of sibling coarse-grained categories of plant species or sibling fine-grained plant species, and a multi-task structural learning algorithm is developed to train their inter-related classifiers jointly for enhancing their discrimination power. The inter-level relationship constraint, e.g., a plant image must first be assigned to a parent node (high-level non-leaf node) correctly if it can further be assigned to the most relevant child node (low-level non-leaf node or leaf node) on the visual tree, is formally defined and leveraged to learn more discriminative tree classifiers over the visual tree. Our experimental results have demonstrated the effectiveness of our hierarchical multi-task structural learning algorithm on training more discriminative tree classifiers for large-scale plant species identification.
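
    The coarse-to-fine idea can be sketched as a two-level classifier: an image is first assigned to a parent node and then to one of that parent's children. The sketch below uses synthetic data and plain logistic regressions; it omits the visual-tree construction and the joint multi-task structural training described in the paper.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in: 6 fine-grained "species" grouped under 2 coarse parent nodes.
    X, fine = make_classification(n_samples=600, n_features=20, n_informative=10,
                                  n_classes=6, n_clusters_per_class=1, random_state=0)
    coarse = fine // 3                      # species 0-2 -> parent 0, species 3-5 -> parent 1

    parent_clf = LogisticRegression(max_iter=1000).fit(X, coarse)
    child_clf = {p: LogisticRegression(max_iter=1000).fit(X[coarse == p], fine[coarse == p])
                 for p in np.unique(coarse)}

    def predict_species(x):
        p = parent_clf.predict(x.reshape(1, -1))[0]          # coarse-grained node first
        return child_clf[p].predict(x.reshape(1, -1))[0]     # then the fine-grained species

    print("predicted:", predict_species(X[0]), "true:", fine[0])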

  3. k-Neighborhood decentralization: a comprehensive solution to index the UMLS for large scale knowledge discovery.

    PubMed

    Xiang, Yang; Lu, Kewei; James, Stephen L; Borlawsky, Tara B; Huang, Kun; Payne, Philip R O

    2012-04-01

    The Unified Medical Language System (UMLS) is the largest thesaurus in the biomedical informatics domain. Previous works have shown that knowledge constructs comprised of transitively-associated UMLS concepts are effective for discovering potentially novel biomedical hypotheses. However, the extremely large size of the UMLS becomes a major challenge for these applications. To address this problem, we designed a k-neighborhood Decentralization Labeling Scheme (kDLS) for the UMLS, and the corresponding method to effectively evaluate the kDLS indexing results. kDLS provides a comprehensive solution for indexing the UMLS for very efficient large scale knowledge discovery. We demonstrated that it is highly effective to use kDLS paths to prioritize disease-gene relations across the whole genome, with extremely high fold-enrichment values. To our knowledge, this is the first indexing scheme capable of supporting efficient large scale knowledge discovery on the UMLS as a whole. Our expectation is that kDLS will become a vital engine for retrieving information and generating hypotheses from the UMLS for future medical informatics applications. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. Transitioning a home telehealth project into a sustainable, large-scale service: a qualitative study.

    PubMed

    Wade, Victoria A; Taylor, Alan D; Kidd, Michael R; Carati, Colin

    2016-05-16

    This study was a component of the Flinders Telehealth in the Home project, which tested adding home telehealth to existing rehabilitation, palliative care and geriatric outreach services. Due to the known difficulty of transitioning telehealth projects into routine services, a qualitative study was conducted to produce a preferred implementation approach for sustainable and large-scale operations, and a process model that offers practical advice for achieving this goal. Initially, semi-structured interviews were conducted with senior clinicians, health service managers and policy makers, and a thematic analysis of the interview transcripts was undertaken to identify the range of options for ongoing operations, plus the factors affecting sustainability. Subsequently, the interviewees and other decision makers attended a deliberative forum in which participants were asked to select a preferred model for future implementation. Finally, all data from the study were synthesised by the researchers to produce a process model. Nineteen interviews with senior clinicians, managers, and service development staff were conducted, finding strong support for home telehealth but a wide diversity of views on governance, models of clinical care, technical infrastructure operations, and data management. The deliberative forum worked through these options and recommended a collaborative consortium approach for large-scale implementation. The process model proposes that the key factor for large-scale implementation is leadership support, which is enabled by 1) showing solutions to the problems of service demand, budgetary pressure and the relationship between hospital and primary care, 2) demonstrating how home telehealth aligns with health service policies, and 3) achieving clinician acceptance through providing evidence of benefit and developing new models of clinical care. Two key actions to enable change were marketing telehealth to patients, clinicians and policy-makers, and building a community of practice. The implementation of home telehealth services is still at an early stage. Change agents and a community of practice can contribute by marketing telehealth, demonstrating policy alignment and providing potential solutions for difficult health services problems. This should assist health leaders to move from trials to large-scale services.

  5. On the role of minicomputers in structural design

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.

    1977-01-01

    Results are presented of exploratory studies on the use of a minicomputer in conjunction with large-scale computers to perform structural design tasks, including data and program management, use of interactive graphics, and computations for structural analysis and design. An assessment is made of minicomputer use for the structural model definition and checking and for interpreting results. Included are results of computational experiments demonstrating the advantages of using both a minicomputer and a large computer to solve a large aircraft structural design problem.

  6. Engineering of Baeyer-Villiger monooxygenase-based Escherichia coli biocatalyst for large scale biotransformation of ricinoleic acid into (Z)-11-(heptanoyloxy)undec-9-enoic acid

    PubMed Central

    Seo, Joo-Hyun; Kim, Hwan-Hee; Jeon, Eun-Yeong; Song, Young-Ha; Shin, Chul-Soo; Park, Jin-Byung

    2016-01-01

    Baeyer-Villiger monooxygenases (BVMOs) are able to catalyze regiospecific Baeyer-Villiger oxygenation of a variety of cyclic and linear ketones to generate the corresponding lactones and esters, respectively. However, the enzymes are usually difficult to express in a functional form in microbial cells and are rather unstable under process conditions hindering their large-scale applications. Thereby, we investigated engineering of the BVMO from Pseudomonas putida KT2440 and the gene expression system to improve its activity and stability for large-scale biotransformation of ricinoleic acid (1) into the ester (i.e., (Z)-11-(heptanoyloxy)undec-9-enoic acid) (3), which can be hydrolyzed into 11-hydroxyundec-9-enoic acid (5) (i.e., a precursor of polyamide-11) and n-heptanoic acid (4). The polyionic tag-based fusion engineering of the BVMO and the use of a synthetic promoter for constitutive enzyme expression allowed the recombinant Escherichia coli expressing the BVMO and the secondary alcohol dehydrogenase of Micrococcus luteus to produce the ester (3) to 85 mM (26.6 g/L) within 5 h. The 5 L scale biotransformation process was then successfully scaled up to a 70 L bioreactor; 3 was produced to over 70 mM (21.9 g/L) in the culture medium 6 h after biotransformation. This study demonstrated that the BVMO-based whole-cell reactions can be applied for large-scale biotransformations. PMID:27311560

  7. What Determines Upscale Growth of Oceanic Convection into MCSs?

    NASA Astrophysics Data System (ADS)

    Zipser, E. J.

    2017-12-01

    Over tropical oceans, widely scattered convection of various depths may or may not grow upscale into mesoscale convective systems (MCSs). But what distinguishes the large-scale environment that favors such upscale growth from that favoring "unorganized", scattered convection? Is it some combination of large-scale low-level convergence and ascending motion, combined with sufficient instability? We recently put this to a test with ERA-I reanalysis data, with disappointing results. The "usual suspects" of total column water vapor, large-scale ascent, and CAPE may all be required to some extent, but their differences between large MCSs and scattered convection are small. The main positive results from this work (already published) demonstrate that the strength of convection is well correlated with the size and perhaps "organization" of convective features over tropical oceans, in contrast to tropical land, where strong convection is common for large or small convective features. So, important questions remain: Over tropical oceans, how should we define "organized" convection? By size of the precipitation area? And what environmental conditions lead to larger and better organized MCSs? Some recent attempts to answer these questions will be described, but good answers may require more data, and more insights.

  8. Large-scale synthesis of arrays of high-aspect-ratio rigid vertically aligned carbon nanofibres

    NASA Astrophysics Data System (ADS)

    Melechko, A. V.; McKnight, T. E.; Hensley, D. K.; Guillorn, M. A.; Borisevich, A. Y.; Merkulov, V. I.; Lowndes, D. H.; Simpson, M. L.

    2003-09-01

    We report on techniques for catalytic synthesis of rigid, high-aspect-ratio, vertically aligned carbon nanofibres by dc plasma enhanced chemical vapour deposition that are tailored for applications that require arrays of individual fibres that feature long fibre lengths (up to 20 µm) such as scanning probe microscopy, penetrant cell and tissue probing arrays and mechanical insertion approaches for gene delivery to cell cultures. We demonstrate that the definition of catalyst nanoparticles is the critical step that enables growth of individual, long-length fibres and discuss methods for catalyst particle preparation that allow the growth of individual isolated nanofibres from catalyst dots with diameters as large as 500 nm. This development enables photolithographic definition of catalyst and therefore the inexpensive, large-scale production of such arrays.

  9. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.

    PubMed

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-11-11

    Although the GW approximation is recognized as one of the most accurate theories for predicting materials' excited-state properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to GW calculations for 2D materials.

  10. A comparison of selected MMPI-2 and MMPI-2-RF validity scales in assessing effort on cognitive tests in a military sample.

    PubMed

    Jones, Alvin; Ingram, M Victoria

    2011-10-01

    Using a relatively new statistical paradigm, Optimal Data Analysis (ODA; Yarnold & Soltysik, 2005), this research demonstrated that newly developed scales for the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and MMPI-2 Restructured Form (MMPI-2-RF), specifically designed to assess over-reporting of cognitive and/or somatic symptoms, were more effective than the MMPI-2 F-family of scales in predicting effort status on tests of cognitive functioning in a sample of 288 military members. ODA demonstrated that when all scales were performing at their theoretical maximum possible level of classification accuracy, the Henry Heilbronner Index (HHI), Response Bias Scale (RBS), Fake Bad Scale (FBS), and the Symptom Validity Scale (FBS-r) outperformed the F-family of scales on a variety of ODA indexes of classification accuracy, including an omnibus measure (effect strength total, EST) of the descriptive and prognostic utility of the ODA models developed for each scale. Based on the guidelines suggested by Yarnold and Soltysik for evaluating effect strengths for ODA models, the newly developed scales had effect strengths that were moderate in size (37.66 to 45.68), whereas the F-family scales had effect strengths that ranged from weak to moderate (15.42 to 32.80). In addition, traditional analysis demonstrated that HHI, RBS, FBS, and FBS-r had large effect sizes (0.98 to 1.16), based on Cohen's (1988) suggested categorization of effect size, when comparing mean scores for adequate versus inadequate effort groups, whereas the F-family of scales had small to medium effect sizes (0.25 to 0.76). The MMPI-2-RF Infrequent Somatic Responses Scale (F(S)) tended to perform in a fashion similar to F, the best-performing F-family scale.
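
    For reference, the Cohen's d used in the comparison above is the standardized mean difference computed with a pooled standard deviation. The short sketch below evaluates it for two hypothetical score distributions standing in for the adequate- and inadequate-effort groups; the means, spreads and group sizes are invented for illustration.

    import numpy as np

    def cohens_d(a, b):
        """Standardized mean difference using the pooled standard deviation."""
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
        return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

    rng = np.random.default_rng(2)
    inadequate = rng.normal(67.0, 10.0, 88)    # hypothetical scale scores, inadequate effort
    adequate = rng.normal(55.0, 10.0, 200)     # hypothetical scale scores, adequate effort
    print(round(cohens_d(inadequate, adequate), 2))   # ~1.2: "large" by Cohen's guidelines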

  11. Demonstration of a Large-Scale Tank Assembly Via Circumferential Friction Stir Welds

    NASA Technical Reports Server (NTRS)

    Jones, Chip; Adams, Glynn; Colligan, Kevin; McCool, A. (Technical Monitor)

    2000-01-01

    Five 14-foot-diameter circumferential friction stir welds (FSWs) were conducted on the modified CWT: two pathfinder welds and three assembly welds. Tapered circumferential welds were successfully demonstrated, and the use of a closeout anvil was successfully demonstrated during one of the pathfinder welds. Considerable difficulty was encountered in maintaining joint fit-up during the weld process, attributable to anvil deflections, hardware dimensional tolerances, inadequate clamping, and variations in the heat-sink characteristics of the circumferential anvil compared to the test-panel anvil.

  12. Medical image classification based on multi-scale non-negative sparse coding.

    PubMed

    Zhang, Ruijie; Shen, Jian; Wei, Fushan; Li, Xiong; Sangaiah, Arun Kumar

    2017-11-01

    With the rapid development of modern medical imaging technology, medical image classification has become more and more important in medical diagnosis and clinical practice. Conventional medical image classification algorithms usually neglect the semantic gap between low-level features and high-level image semantics, which largely degrades classification performance. To solve this problem, we propose a multi-scale non-negative sparse coding based medical image classification algorithm. Firstly, medical images are decomposed into multiple scale layers, so that diverse visual details can be extracted from different scale layers. Secondly, for each scale layer, a non-negative sparse coding model with Fisher discriminative analysis is constructed to obtain a discriminative sparse representation of the medical images. Then, the obtained multi-scale non-negative sparse coding features are combined to form a multi-scale feature histogram as the final representation for a medical image. Finally, an SVM classifier is used to conduct medical image classification. The experimental results demonstrate that our proposed algorithm can effectively utilize the multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree and improve medical image classification performance. Copyright © 2017 Elsevier B.V. All rights reserved.
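
    The core coding step can be sketched with a small projected-ISTA solver for non-negative sparse coding, shown below. In the full algorithm such codes would be computed for patches of every scale layer and pooled into a multi-scale histogram before SVM classification; the toy dictionary and patch here are illustrative assumptions, not the paper's Fisher-discriminative formulation.

    import numpy as np

    def nn_sparse_code(D, x, lam=0.1, n_iter=200):
        """Non-negative sparse code of signal x over dictionary D (columns = atoms):
        minimize 0.5*||x - D a||^2 + lam*||a||_1 subject to a >= 0, via projected ISTA."""
        a = np.zeros(D.shape[1])
        step = 1.0 / np.linalg.norm(D, 2) ** 2            # 1 / Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = D.T @ (D @ a - x)
            a = np.maximum(a - step * (grad + lam), 0.0)  # soft-threshold plus non-negativity
        return a

    rng = np.random.default_rng(3)
    D = np.abs(rng.random((64, 32)))                      # toy non-negative dictionary (8x8 patches)
    patch = 2.0 * D[:, 5] + 0.5 * D[:, 17]                # patch built from two atoms
    code = nn_sparse_code(D, patch)
    print("active atoms:", np.flatnonzero(code > 1e-3))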

  13. GAT: a graph-theoretical analysis toolbox for analyzing between-group differences in large-scale structural and functional brain networks.

    PubMed

    Hosseini, S M Hadi; Hoeft, Fumiko; Kesler, Shelli R

    2012-01-01

    In recent years, graph theoretical analyses of neuroimaging data have increased our understanding of the organization of large-scale structural and functional brain networks. However, tools for pipeline application of graph theory to analyzing the topology of brain networks are still lacking. In this report, we describe the development of a graph-analysis toolbox (GAT) that facilitates analysis and comparison of structural and functional brain networks. GAT provides a graphical user interface (GUI) that facilitates construction and analysis of brain networks, comparison of regional and global topological properties between networks, analysis of network hubs and modules, and analysis of the resilience of the networks to random failure and targeted attacks. Area under a curve (AUC) and functional data analysis (FDA), in conjunction with permutation testing, are employed for testing differences in network topologies; these analyses are less sensitive to the thresholding process. We demonstrated the capabilities of GAT by investigating the differences in the organization of regional gray-matter correlation networks in survivors of acute lymphoblastic leukemia (ALL) and healthy matched controls (CON). The results revealed an alteration in the small-world characteristics of the brain networks in the ALL survivors, an observation that confirms our hypothesis of widespread neurobiological injury in ALL survivors. Along with demonstrating the capabilities of GAT, this is the first report of altered large-scale structural brain networks in ALL survivors.
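
    The kind of group comparison GAT automates can be sketched with standard graph metrics, as below: threshold each group's regional correlation matrix to a fixed edge density, then compare clustering and global efficiency. The random data are placeholders, and the sketch omits the AUC/FDA and permutation-testing machinery of the toolbox itself.

    import numpy as np
    import networkx as nx

    def binarize(corr, density=0.2):
        """Keep the strongest `density` fraction of off-diagonal correlations as edges."""
        n = corr.shape[0]
        iu = np.triu_indices(n, k=1)
        cutoff = np.quantile(corr[iu], 1.0 - density)
        adj = (corr >= cutoff) & ~np.eye(n, dtype=bool)
        return nx.from_numpy_array(adj.astype(int))

    rng = np.random.default_rng(4)
    corr_a = np.corrcoef(rng.normal(size=(40, 90)))   # toy regional correlations, group A
    corr_b = np.corrcoef(rng.normal(size=(40, 90)))   # toy regional correlations, group B

    for name, corr in (("A", corr_a), ("B", corr_b)):
        g = binarize(corr)
        print(name, "clustering:", round(nx.average_clustering(g), 3),
              "global efficiency:", round(nx.global_efficiency(g), 3))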

  14. Large scale, highly conductive and patterned transparent films of silver nanowires on arbitrary substrates and their application in touch screens

    NASA Astrophysics Data System (ADS)

    Madaria, Anuj R.; Kumar, Akshay; Zhou, Chongwu

    2011-06-01

    The application of silver nanowire films as transparent conductive electrodes has shown promising results recently. In this paper, we demonstrate the application of a simple spray coating technique to obtain large-scale, highly uniform and conductive silver nanowire films on arbitrary substrates. We also integrated a polydimethylsiloxane (PDMS)-assisted contact transfer technique with spray coating, which allowed us to obtain large-scale, high-quality patterned films of silver nanowires. The transparency and conductivity of the films were controlled by the volume of the dispersion used in spraying and the substrate area. We note that the optoelectrical property, σDC/σOp, for the various films fabricated was in the range 75-350, which is extremely high for a transparent thin film compared to other candidate alternatives to doped metal oxide films. Using this method, we obtain silver nanowire films on a flexible polyethylene terephthalate (PET) substrate with a transparency of 85% and sheet resistance of 33 Ω/sq, which is comparable to that of tin-doped indium oxide (ITO) on flexible substrates. In-depth analysis of the film shows high performance using another commonly used figure of merit, ΦTE. Also, the Ag nanowire film on PET shows good mechanical flexibility, and the application of such a conductive silver nanowire film as an electrode in a touch panel has been demonstrated.
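
    For context, the σDC/σOp figure of merit is commonly estimated in the transparent-conductor literature from the film transmittance T and sheet resistance Rs via T = (1 + Z0 σOp/(2 Rs σDC))^-2, with Z0 ≈ 377 Ω the impedance of free space. The sketch below applies that relation to the PET-film values quoted in the abstract; the resulting number is only illustrative and is not itself reported there.

    def sigma_ratio(T, Rs, Z0=376.73):
        """sigma_DC/sigma_Op from the standard thin-film relation
        T = (1 + Z0/(2*Rs) * sigma_Op/sigma_DC)**-2."""
        return Z0 / (2.0 * Rs * (T ** -0.5 - 1.0))

    # Transparency and sheet resistance quoted for the AgNW film on PET.
    print(round(sigma_ratio(T=0.85, Rs=33.0), 1))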

  15. Engineering a Large Scale Indium Nanodot Array for Refractive Index Sensing.

    PubMed

    Xu, Xiaoqing; Hu, Xiaolin; Chen, Xiaoshu; Kang, Yangsen; Zhang, Zhiping; B Parizi, Kokab; Wong, H-S Philip

    2016-11-23

    In this work, we developed a simple method to fabricate 12 × 4 mm^2 large-scale nanostructure arrays and investigated the feasibility of indium nanodot (ND) arrays with different diameters and periods for refractive index sensing. Absorption resonances at multiple wavelengths from the visible to the near-infrared range were observed for various incident angles in a variety of media. By engineering the ND array with a centered square lattice, we successfully enhanced the sensitivity by 60% and improved the figure of merit (FOM) by 190%. The evolution of the resonance dips in the reflection spectra of the square lattice and centered square lattice, from air to water, matches well with the results of Lumerical FDTD simulation. The improvement in sensitivity is due to the enhancement of the local electromagnetic field (E-field) near the NDs with the centered square lattice, as revealed by E-field simulation at resonance wavelengths. The E-field is enhanced due to coupling between the two square ND arrays with [Formula: see text]x period at phase matching. This work illustrates an effective way to engineer and fabricate a refractive index sensor at a large scale. This is the first experimental demonstration of a poor-metal (indium) nanostructure array for refractive index sensing. It also demonstrates a centered square lattice for higher sensitivity and as a better basic platform for more complex sensor designs.

  16. GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith

    2014-08-25

    Large scale graph processing is a major research area for Big Data exploration. Vertex-centric programming models like Pregel are gaining traction due to their simple abstraction that naturally allows for scalable execution on distributed systems. However, there are limitations to this approach which cause vertex-centric algorithms to under-perform due to a poor compute-to-communication overhead ratio and slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph-centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph-centric programming abstraction that combines the scalability of a vertex-centric approach with the flexibility of shared-memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex-centric implementation.

  17. Talking About The Smokes: a large-scale, community-based participatory research project.

    PubMed

    Couzos, Sophia; Nicholson, Anna K; Hunt, Jennifer M; Davey, Maureen E; May, Josephine K; Bennet, Pele T; Westphal, Darren W; Thomas, David P

    2015-06-01

    To describe the Talking About The Smokes (TATS) project according to the World Health Organization guiding principles for conducting community-based participatory research (PR) involving indigenous peoples, to assist others planning large-scale PR projects. The TATS project was initiated in Australia in 2010 as part of the International Tobacco Control Policy Evaluation Project, and surveyed a representative sample of 2522 Aboriginal and Torres Strait Islander adults to assess the impact of tobacco control policies. The PR process of the TATS project, which aimed to build partnerships to create equitable conditions for knowledge production, was mapped and summarised onto a framework adapted from the WHO principles. The processes described cover consultation and approval, partnerships and research agreements, communication, funding, ethics and consent, and the data and benefits of the research. The TATS project involved baseline and follow-up surveys conducted in 34 Aboriginal community-controlled health services and one Torres Strait community. Consistent with the WHO PR principles, the TATS project built on community priorities and strengths through strategic partnerships from project inception, and demonstrated the value of research agreements and trusting relationships to foster shared decision making, capacity building and a commitment to Indigenous data ownership. Community-based PR methodology, by definition, needs adaptation to local settings and priorities. The TATS project demonstrates that large-scale research can be participatory, with strong Indigenous community engagement and benefits.

  18. The role of Natural Flood Management in managing floods in large scale basins during extreme events

    NASA Astrophysics Data System (ADS)

    Quinn, Paul; Owen, Gareth; ODonnell, Greg; Nicholson, Alex; Hetherington, David

    2016-04-01

    There is a strong evidence base showing the negative impacts of land use intensification and soil degradation in NW European river basins on hydrological response and on flood impact downstream. However, the ability to target zones of high runoff production, and the extent to which we can manage flood risk using nature-based flood management solutions, are less well known. A move to planting more trees and having less intensively farmed landscapes is part of natural flood management (NFM) solutions, and these methods suggest that flood risk can be managed in alternative and more holistic ways. So what local NFM methods should be used, where in a large-scale basin should they be deployed, and how does the flow propagate to any point downstream? More generally, how much intervention is needed, and will it compromise food production systems? If we are observing record levels of rainfall and flow, for example during Storm Desmond in December 2015 in the North West of England, what other flood management options are really needed to complement our traditional defences in large basins for the future? In this paper we will show examples of NFM interventions in the UK that have had an impact at local-scale sites. We will demonstrate the impact of interventions at the local scale, at the sub-catchment (meso) scale and finally at the large scale. The tools used include observations, process-based models and more generalised flood impact models. Issues of synchronisation and the design level of protection will be debated. By reworking observed rainfall and discharge (runoff) for observed extreme events in the River Eden and River Tyne during Storm Desmond, we will show how much flood protection is needed in large-scale basins. The research will thus pose a number of key questions as to how floods may have to be managed in large-scale basins in the future. We will support a method of catchment systems engineering that holds water back across the whole landscape as a major opportunity to manage water in large-scale basins in the future. The broader benefits of engineering landscapes to hold water for pollution control, sediment loss and drought minimisation will also be shown.

  19. WeaVR: a self-contained and wearable immersive virtual environment simulation system.

    PubMed

    Hodgson, Eric; Bachmann, Eric R; Vincent, David; Zmuda, Michael; Waller, David; Calusdian, James

    2015-03-01

    We describe WeaVR, a computer simulation system that takes virtual reality technology beyond specialized laboratories and research sites and makes it available in any open space, such as a gymnasium or a public park. Novel hardware and software systems enable HMD-based immersive virtual reality simulations to be conducted in any arbitrary location, with no external infrastructure and little-to-no setup or site preparation. The ability of the WeaVR system to provide realistic motion-tracked navigation for users, to improve the study of large-scale navigation, and to generate usable behavioral data is shown in three demonstrations. First, participants navigated through a full-scale virtual grocery store while physically situated in an open grass field. Trajectory data are presented for both normal tracking and for tracking during the use of redirected walking that constrained users to a predefined area. Second, users followed a straight path within a virtual world for distances of up to 2 km while walking naturally and being redirected to stay within the field, demonstrating the ability of the system to study large-scale navigation by simulating virtual worlds that are potentially unlimited in extent. Finally, the portability and pedagogical implications of this system were demonstrated by taking it to a regional high school for live use by a computer science class on their own school campus.

  20. A low-frequency chip-scale optomechanical oscillator with 58 kHz mechanical stiffening and more than 100th-order stable harmonics.

    PubMed

    Huang, Yongjun; Flores, Jaime Gonzalo Flor; Cai, Ziqiang; Yu, Mingbin; Kwong, Dim-Lee; Wen, Guangjun; Churchill, Layne; Wong, Chee Wei

    2017-06-29

    For sensitive high-resolution force- and field-sensing applications, large-mass microelectromechanical systems (MEMS) and optomechanical cavities have been proposed to realize sub-aN/Hz^(1/2) resolution levels. For optomechanical cavity-based force and field sensors, the optomechanical coupling is the key parameter for achieving high sensitivity and resolution. Here we demonstrate a chip-scale optomechanical cavity with large mass which operates at an ≈77.7 kHz fundamental mode and intrinsically exhibits a large optomechanical coupling of 44 GHz/nm or more for both optical resonance modes. A mechanical stiffening range of ≈58 kHz and more than 100th-order harmonics are obtained, with which the free-running frequency instability is lower than 10^-6 at 100 ms integration time. Such results can be applied to further improve the sensing performance of optomechanically inspired chip-scale sensors.

  1. How Do Microphysical Processes Influence Large-Scale Precipitation Variability and Extremes?

    DOE PAGES

    Hagos, Samson; Ruby Leung, L.; Zhao, Chun; ...

    2018-02-10

    Convection permitting simulations using the Model for Prediction Across Scales-Atmosphere (MPAS-A) are used to examine how microphysical processes affect large-scale precipitation variability and extremes. An episode of the Madden-Julian Oscillation is simulated using MPAS-A with a refined region at 4-km grid spacing over the Indian Ocean. It is shown that cloud microphysical processes regulate the precipitable water (PW) statistics. Because of the non-linear relationship between precipitation and PW, PW exceeding a certain critical value (PWcr) contributes disproportionately to precipitation variability. However, the frequency of PW exceeding PWcr decreases rapidly with PW, so changes in microphysical processes that shift the column PW statistics relative to PWcr even slightly have large impacts on precipitation variability. Furthermore, precipitation variance and extreme precipitation frequency are approximately linearly related to the difference between the mean and critical PW values. Thus observed precipitation statistics could be used to directly constrain model microphysical parameters, as this study demonstrates using radar observations from the DYNAMO field campaign.
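
    The sensitivity argument can be illustrated numerically: with a toy non-linear precipitation pickup above PWcr and a toy PW distribution, a small shift of the column PW statistics produces a disproportionately large change in exceedance frequency and precipitation variance. All functional forms and numbers below are illustrative assumptions, not values from the MPAS-A simulations.

    import numpy as np

    rng = np.random.default_rng(5)
    pw_cr = 60.0                                    # illustrative critical PW (mm)

    def precip(pw):
        """Toy non-linear precipitation pickup above the critical PW (illustrative)."""
        return np.maximum(pw - pw_cr, 0.0) ** 1.5

    for shift in (0.0, 1.0):                        # shift the PW distribution mean by 1 mm
        pw = rng.normal(50.0 + shift, 5.0, 1_000_000)
        p = precip(pw)
        print(f"mean shift {shift:.1f} mm: P(PW > PWcr) = {np.mean(pw > pw_cr):.4f}, "
              f"precip variance = {p.var():.3f}")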

  2. The Parallel System for Integrating Impact Models and Sectors (pSIMS)

    NASA Technical Reports Server (NTRS)

    Elliott, Joshua; Kelly, David; Chryssanthacopoulos, James; Glotter, Michael; Jhunjhnuwala, Kanika; Best, Neil; Wilde, Michael; Foster, Ian

    2014-01-01

    We present a framework for massively parallel climate impact simulations: the parallel System for Integrating Impact Models and Sectors (pSIMS). This framework comprises a) tools for ingesting and converting large amounts of data to a versatile datatype based on a common geospatial grid; b) tools for translating this datatype into custom formats for site-based models; c) a scalable parallel framework for performing large ensemble simulations, using any one of a number of different impacts models, on clusters, supercomputers, distributed grids, or clouds; d) tools and data standards for reformatting outputs to common datatypes for analysis and visualization; and e) methodologies for aggregating these datatypes to arbitrary spatial scales such as administrative and environmental demarcations. By automating many time-consuming and error-prone aspects of large-scale climate impacts studies, pSIMS accelerates computational research, encourages model intercomparison, and enhances reproducibility of simulation results. We present the pSIMS design and use example assessments to demonstrate its multi-model, multi-scale, and multi-sector versatility.

  3. How Do Microphysical Processes Influence Large-Scale Precipitation Variability and Extremes?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson; Ruby Leung, L.; Zhao, Chun

    Convection permitting simulations using the Model for Prediction Across Scales-Atmosphere (MPAS-A) are used to examine how microphysical processes affect large-scale precipitation variability and extremes. An episode of the Madden-Julian Oscillation is simulated using MPAS-A with a refined region at 4-km grid spacing over the Indian Ocean. It is shown that cloud microphysical processes regulate the precipitable water (PW) statistics. Because of the non-linear relationship between precipitation and PW, PW exceeding a certain critical value (PWcr) contributes disproportionately to precipitation variability. However, the frequency of PW exceeding PWcr decreases rapidly with PW, so changes in microphysical processes that shift the column PW statistics relative to PWcr even slightly have large impacts on precipitation variability. Furthermore, precipitation variance and extreme precipitation frequency are approximately linearly related to the difference between the mean and critical PW values. Thus observed precipitation statistics could be used to directly constrain model microphysical parameters, as this study demonstrates using radar observations from the DYNAMO field campaign.

  4. Modeling relief demands in an emergency supply chain system under large-scale disasters based on a queuing network.

    PubMed

    He, Xinhua; Hu, Wenfa

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in the large-scale affected area of a disaster. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered over several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources along different vehicle routes; and emergency rescue services queue at multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on a genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model.

  5. Large-Scale Advanced Prop-Fan (LAP)

    NASA Technical Reports Server (NTRS)

    Degeorge, C. L.

    1988-01-01

    In recent years, considerable attention has been directed toward improving aircraft fuel efficiency. Analytical studies and research with wind tunnel models have demonstrated that the high inherent efficiency of low-speed turboprop propulsion systems may now be extended to the Mach 0.8 flight regime of today's commercial airliners. This can be accomplished with a propeller employing a large number of thin, highly swept blades. The term Prop-Fan has been coined to describe such a propulsion system. In 1983 the NASA Lewis Research Center contracted with Hamilton Standard to design, build and test a near-full-scale Prop-Fan, designated the Large Scale Advanced Prop-Fan (LAP). This report provides a detailed description of the LAP program. The assumptions and analytical procedures used in the design of Prop-Fan system components are discussed in detail. The manufacturing techniques used in the fabrication of the Prop-Fan are presented. Each of the tests run during the course of the program is also discussed, and the major conclusions derived from them are stated.

  6. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often require just a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high-performance hardware with low-latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a system biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  7. Limitations and tradeoffs in synchronization of large-scale networks with uncertain links

    PubMed Central

    Diwadkar, Amit; Vaidya, Umesh

    2016-01-01

    The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994
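
    The network quantities entering such a condition can be computed directly. The sketch below evaluates the graph-Laplacian eigenvalues of nearest-neighbour ring networks with an increasing number of neighbours; how these eigenvalues combine with the nonlinearity and the uncertainty statistics into a synchronization margin follows the paper's condition and is not reproduced here, so the printout is only the network-side ingredient.

    import numpy as np
    import networkx as nx

    def laplacian_eigs(n_nodes, n_neighbors):
        """Eigenvalues of the graph Laplacian for a ring where each node connects to
        its n_neighbors nearest neighbours on each side (a circulant network)."""
        g = nx.circulant_graph(n_nodes, offsets=list(range(1, n_neighbors + 1)))
        lap = nx.laplacian_matrix(g).toarray().astype(float)
        return np.sort(np.linalg.eigvalsh(lap))

    for k in (1, 3, 5, 10):
        eigs = laplacian_eigs(50, k)
        print(f"{k:2d} neighbours per side: lambda_2 = {eigs[1]:.3f}, "
              f"lambda_max = {eigs[-1]:.3f}")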

  8. Parallel Index and Query for Large Scale Data Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50 TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.

  9. The co-evolution of social institutions, demography, and large-scale human cooperation.

    PubMed

    Powers, Simon T; Lehmann, Laurent

    2013-11-01

    Human cooperation is typically coordinated by institutions, which determine the outcome structure of the social interactions individuals engage in. Explaining the Neolithic transition from small- to large-scale societies involves understanding how these institutions co-evolve with demography. We study this using a demographically explicit model of institution formation in a patch-structured population. Each patch supports both social and asocial niches. Social individuals create an institution, at a cost to themselves, by negotiating how much of the costly public good provided by cooperators is invested into sanctioning defectors. The remainder of their public good is invested in technology that increases carrying capacity, such as irrigation systems. We show that social individuals can invade a population of asocials, and form institutions that support high levels of cooperation. We then demonstrate conditions where the co-evolution of cooperation, institutions, and demographic carrying capacity creates a transition from small- to large-scale social groups. © 2013 John Wiley & Sons Ltd/CNRS.

  10. A simulation study demonstrating the importance of large-scale trailing vortices in wake steering

    DOE PAGES

    Fleming, Paul; Annoni, Jennifer; Churchfield, Matthew; ...

    2018-05-14

    In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies, based on new control-oriented models with new physics, that target total flow control over wake redirection may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.

  11. Multimode resource-constrained multiple project scheduling problem under fuzzy random environment and its application to a large scale hydropower construction project.

    PubMed

    Xu, Jiuping; Feng, Cuiying

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.

  12. A simulation study demonstrating the importance of large-scale trailing vortices in wake steering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fleming, Paul; Annoni, Jennifer; Churchfield, Matthew

    In this article, we investigate the role of flow structures generated in wind farm control through yaw misalignment. A pair of counter-rotating vortices are shown to be important in deforming the shape of the wake and in explaining the asymmetry of wake steering in oppositely signed yaw angles. We motivate the development of new physics for control-oriented engineering models of wind farm control, which include the effects of these large-scale flow structures. Such a new model would improve the predictability of control-oriented models. Results presented in this paper indicate that wind farm control strategies, based on new control-oriented models with new physics, that target total flow control over wake redirection may be different, and perhaps more effective, than current approaches. We propose that wind farm control and wake steering should be thought of as the generation of large-scale flow structures, which will aid in the improved performance of wind farms.

  13. A low-cost iron-cadmium redox flow battery for large-scale energy storage

    NASA Astrophysics Data System (ADS)

    Zeng, Y. K.; Zhao, T. S.; Zhou, X. L.; Wei, L.; Jiang, H. R.

    2016-10-01

    The redox flow battery (RFB) is one of the most promising large-scale energy storage technologies that offer a potential solution to the intermittency of renewable sources such as wind and solar. The prerequisite for widespread utilization of RFBs is low capital cost. In this work, an iron-cadmium redox flow battery (Fe/Cd RFB) with a premixed iron and cadmium solution is developed and tested. It is demonstrated that the coulombic efficiency and energy efficiency of the Fe/Cd RFB reach 98.7% and 80.2% at 120 mA cm⁻², respectively. The Fe/Cd RFB exhibits stable efficiencies with capacity retention of 99.87% per cycle during the cycle test. Moreover, the Fe/Cd RFB is estimated to have a low capital cost of 108 kWh⁻¹ for 8-h energy storage. Intrinsically low-cost active materials, high cell performance and excellent capacity retention equip the Fe/Cd RFB to be a promising solution for large-scale energy storage systems.
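
    The reported cycle metrics are related through the standard identity energy efficiency = coulombic efficiency × voltage efficiency; as a quick illustration in Python, using only the figures quoted above, the implied voltage efficiency is about 81%:

        # Relationship between coulombic (CE), voltage (VE), and energy (EE)
        # efficiencies for one charge/discharge cycle. The numbers below are
        # the values reported for the Fe/Cd RFB at 120 mA cm^-2.
        coulombic_efficiency = 0.987   # discharge capacity / charge capacity
        energy_efficiency = 0.802      # discharge energy / charge energy

        # Voltage efficiency follows from EE = CE * VE.
        voltage_efficiency = energy_efficiency / coulombic_efficiency
        print(f"implied voltage efficiency: {voltage_efficiency:.1%}")  # ~81.3%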

  14. Modeling Relief Demands in an Emergency Supply Chain System under Large-Scale Disasters Based on a Queuing Network

    PubMed Central

    He, Xinhua

    2014-01-01

    This paper presents a multiple-rescue model for an emergency supply chain system under uncertainties in large-scale affected area of disasters. The proposed methodology takes into consideration that the rescue demands caused by a large-scale disaster are scattered in several locations; the servers are arranged in multiple echelons (resource depots, distribution centers, and rescue center sites) located in different places but are coordinated within one emergency supply chain system; depending on the types of rescue demands, one or more distinct servers dispatch emergency resources in different vehicle routes, and emergency rescue services queue in multiple rescue-demand locations. This emergency system is modeled as a minimal queuing response time model of location and allocation. A solution to this complex mathematical problem is developed based on genetic algorithm. Finally, a case study of an emergency supply chain system operating in Shanghai is discussed. The results demonstrate the robustness and applicability of the proposed model. PMID:24688367
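
    As a hedged illustration of the kind of objective such a location-allocation model minimizes (not the authors' actual formulation, queuing discipline, or data), the Python sketch below scores a candidate assignment of rescue-demand locations to depots using M/M/1 mean response times:

        # Hedged sketch: score an allocation of rescue-demand locations to depots
        # using the M/M/1 mean response time W = 1/(mu - lambda). Toy data only,
        # not the paper's model; a genetic algorithm would search over allocations.
        def mean_response_time(service_rate, arrival_rate):
            if arrival_rate >= service_rate:
                return float("inf")          # unstable queue
            return 1.0 / (service_rate - arrival_rate)

        def total_response_time(allocation, demand_rates, service_rates):
            # allocation[i] = index of the depot serving demand location i
            load = {}
            for loc, depot in enumerate(allocation):
                load[depot] = load.get(depot, 0.0) + demand_rates[loc]
            return sum(mean_response_time(service_rates[d], lam) for d, lam in load.items())

        # toy example: 3 demand locations, 2 depots
        print(total_response_time([0, 1, 0], demand_rates=[2.0, 1.5, 1.0], service_rates=[5.0, 4.0]))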

  15. Multimode Resource-Constrained Multiple Project Scheduling Problem under Fuzzy Random Environment and Its Application to a Large Scale Hydropower Construction Project

    PubMed Central

    Xu, Jiuping

    2014-01-01

    This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method. PMID:24550708

  16. Periodic magnetorotational dynamo action as a prototype of nonlinear magnetic-field generation in shear flows.

    PubMed

    Herault, J; Rincon, F; Cossu, C; Lesur, G; Ogilvie, G I; Longaretti, P-Y

    2011-09-01

    The nature of dynamo action in shear flows prone to magnetohydrodynamic instabilities is investigated using the magnetorotational dynamo in Keplerian shear flow as a prototype problem. Using direct numerical simulations and Newton's method, we compute an exact time-periodic magnetorotational dynamo solution to three-dimensional dissipative incompressible magnetohydrodynamic equations with rotation and shear. We discuss the physical mechanism behind the cycle and show that it results from a combination of linear and nonlinear interactions between a large-scale axisymmetric toroidal magnetic field and nonaxisymmetric perturbations amplified by the magnetorotational instability. We demonstrate that this large-scale dynamo mechanism is overall intrinsically nonlinear and not reducible to the standard mean-field dynamo formalism. Our results therefore provide clear evidence for a generic nonlinear generation mechanism of time-dependent coherent large-scale magnetic fields in shear flows and call for new theoretical dynamo models. These findings may offer important clues to understanding the transitional and statistical properties of subcritical magnetorotational turbulence.

  17. Development of a gene synthesis platform for the efficient large scale production of small genes encoding animal toxins.

    PubMed

    Sequeira, Ana Filipa; Brás, Joana L A; Guerreiro, Catarina I P D; Vincentelli, Renaud; Fontes, Carlos M G A

    2016-12-01

    Gene synthesis is becoming an important tool in many fields of recombinant DNA technology, including recombinant protein production. De novo gene synthesis is quickly replacing the classical cloning and mutagenesis procedures and allows generating nucleic acids for which no template is available. In addition, when coupled with efficient gene design algorithms that optimize codon usage, it leads to high levels of recombinant protein expression. Here, we describe the development of an optimized gene synthesis platform that was applied to the large scale production of small genes encoding venom peptides. This improved gene synthesis method uses a PCR-based protocol to assemble synthetic DNA from pools of overlapping oligonucleotides and was developed to synthesise multiples genes simultaneously. This technology incorporates an accurate, automated and cost effective ligation independent cloning step to directly integrate the synthetic genes into an effective Escherichia coli expression vector. The robustness of this technology to generate large libraries of dozens to thousands of synthetic nucleic acids was demonstrated through the parallel and simultaneous synthesis of 96 genes encoding animal toxins. An automated platform was developed for the large-scale synthesis of small genes encoding eukaryotic toxins. Large scale recombinant expression of synthetic genes encoding eukaryotic toxins will allow exploring the extraordinary potency and pharmacological diversity of animal venoms, an increasingly valuable but unexplored source of lead molecules for drug discovery.
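
    As a hedged sketch of the oligo-design step in PCR-based gene assembly (illustrative oligo length and overlap, not the platform's actual design rules), the Python fragment below splits a target sequence into overlapping oligonucleotides:

        # Hedged sketch: split a target gene sequence into overlapping oligos for
        # PCR-based assembly. Oligo length and overlap are illustrative, not the
        # parameters used by the platform described above.
        def design_oligos(sequence, oligo_len=60, overlap=20):
            step = oligo_len - overlap
            oligos = []
            for start in range(0, len(sequence), step):
                oligos.append(sequence[start:start + oligo_len])
                if start + oligo_len >= len(sequence):
                    break
            return oligos

        gene = "ATG" + "GCT" * 60 + "TAA"   # toy 186-nt open reading frame
        for i, oligo in enumerate(design_oligos(gene)):
            print(i, len(oligo), oligo[:12] + "...")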

  18. Tunable multipole resonances in plasmonic crystals made by four-beam holographic lithography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Y.; Li, X.; Zhang, X.

    2016-02-01

    Plasmonic nanostructures confine light to sub-wavelength scales, resulting in drastically enhanced light-matter interactions. Recent interest has focused on controlled symmetry breaking to create higher-order multipole plasmonic modes that store electromagnetic energy more efficiently than dipole modes. Here we demonstrate that four-beam holographic lithography enables fabrication of large-area plasmonic crystals with near-field coupled plasmons as well as deliberately broken symmetry to sustain multipole modes and Fano-resonances. Compared with the spectrally broad dipole modes we demonstrate an order of magnitude improved Q-factors (Q = 21) when the quadrupole mode is activated. We further demonstrate continuous tuning of the Fano-resonances using the polarization state of the incident light beam. The demonstrated technique opens possibilities to extend the rich physics of multipole plasmonic modes to wafer-scale applications that demand low-cost and high-throughput.

  19. Demonstration of Essential Reliability Services by a 300-MW Solar Photovoltaic Power Plant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loutan, Clyde; Klauer, Peter; Chowdhury, Sirajul

    The California Independent System Operator (CAISO), First Solar, and the National Renewable Energy Laboratory (NREL) conducted a demonstration project on a large utility-scale photovoltaic (PV) power plant in California to test its ability to provide essential ancillary services to the electric grid. With increasing shares of solar- and wind-generated energy on the electric grid, traditional generation resources equipped with automatic governor control (AGC) and automatic voltage regulation controls -- specifically, fossil thermal -- are being displaced. The deployment of utility-scale, grid-friendly PV power plants that incorporate advanced capabilities to support grid stability and reliability is essential for the large-scale integration of PV generation into the electric power grid, among other technical requirements. A typical PV power plant consists of multiple power electronic inverters and can contribute to grid stability and reliability through sophisticated 'grid-friendly' controls. In this way, PV power plants can be used to mitigate the impact of variability on the grid, a role typically reserved for conventional generators. In August 2016, testing was completed on First Solar's 300-MW PV power plant, and a large amount of test data was produced and analyzed that demonstrates the ability of PV power plants to use grid-friendly controls to provide essential reliability services. These data showed how the development of advanced power controls can enable PV to become a provider of a wide range of grid services, including spinning reserves, load following, voltage support, ramping, frequency response, variability smoothing, and frequency regulation to power quality. Specifically, the tests conducted included various forms of active power control such as AGC and frequency regulation; droop response; and reactive power, voltage, and power factor controls. This project demonstrated that advanced power electronics and solar generation can be controlled to contribute to system-wide reliability. It was shown that the First Solar plant can provide essential reliability services related to different forms of active and reactive power controls, including plant participation in AGC, primary frequency control, ramp rate control, and voltage regulation. For AGC participation in particular, by comparing the PV plant testing results to the typical performance of individual conventional technologies, we showed that regulation accuracy by the PV plant is 24-30 points better than fast gas turbine technologies. The plant's ability to provide volt-ampere reactive control during periods of extremely low power generation was demonstrated as well. The project team developed a pioneering demonstration concept and test plan to show how various types of active and reactive power controls can leverage PV generation's value from being a simple variable energy resource to a resource that provides a wide range of ancillary services. With this project's approach to a holistic demonstration on an actual, large, utility-scale, operational PV power plant and dissemination of the obtained results, the team sought to close some gaps in perspectives that exist among various stakeholders in California and nationwide by providing real test data.
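
    One of the services tested was primary frequency (droop) response; the Python sketch below illustrates a textbook droop characteristic with assumed settings (5% droop, 300-MW rating, illustrative deadband), not First Solar's actual plant controller:

        # Hedged sketch of a standard 5% droop characteristic, illustrating the kind
        # of primary frequency response tested in the demonstration (not the actual
        # plant controller or its settings).
        def droop_power(freq_hz, p_setpoint_mw, p_rated_mw=300.0,
                        f_nominal_hz=60.0, droop=0.05, deadband_hz=0.017):
            df = freq_hz - f_nominal_hz
            if abs(df) <= deadband_hz:
                df = 0.0
            delta_p = -(df / f_nominal_hz) / droop * p_rated_mw
            # clamp to plant limits (cannot exceed rating or go below zero)
            return min(max(p_setpoint_mw + delta_p, 0.0), p_rated_mw)

        print(droop_power(59.90, p_setpoint_mw=255.0))  # under-frequency -> raise output
        print(droop_power(60.05, p_setpoint_mw=255.0))  # over-frequency  -> curtail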

  20. Dither Gyro Scale Factor Calibration: GOES-16 Flight Experience

    NASA Technical Reports Server (NTRS)

    Reth, Alan D.; Freesland, Douglas C.; Krimchansky, Alexander

    2018-01-01

    This poster is a sequel to a paper presented at the 34th Annual AAS Guidance and Control Conference in 2011, which first introduced dither-based calibration of gyro scale factors. The dither approach uses very small excitations, avoiding the need to take instruments offline during gyro scale factor calibration. In 2017, the dither calibration technique was successfully used to estimate gyro scale factors on the GOES-16 satellite. On-orbit dither calibration results were compared to more traditional methods using large angle spacecraft slews about each gyro axis, requiring interruption of science. The results demonstrate that the dither technique can estimate gyro scale factors to better than 2000 ppm during normal science observations.

  1. Multiresolution persistent homology for excessively large biomolecular datasets

    NASA Astrophysics Data System (ADS)

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-10-01

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize flexibility-rigidity index to access the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273 780 atoms is successfully analyzed which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to the protein domain classification, which is the first time that persistent homology is used for practical protein domain analysis, to our knowledge. The proposed multiresolution topological method has potential applications in arbitrary data sets, such as social networks, biological networks, and graphs.
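
    A hedged sketch of a rigidity-type density is given below: Gaussian kernels centred on atom positions are summed on a grid, and the kernel width eta plays the role of the resolution parameter discussed above (illustrative only, not the authors' exact flexibility-rigidity index):

        # Hedged sketch: evaluate a rigidity-type density on a 1D grid of points by
        # summing Gaussian kernels centred on atom positions. The kernel width eta
        # acts as the resolution parameter: large eta blurs atoms into one feature,
        # small eta resolves individual peaks.
        import math

        def rigidity_density(grid_points, atom_positions, eta):
            density = []
            for x in grid_points:
                mu = sum(math.exp(-((x - xj) / eta) ** 2) for xj in atom_positions)
                density.append(mu)
            return density

        atoms = [0.0, 1.0, 1.2, 5.0]
        grid = [i * 0.5 for i in range(12)]
        coarse = rigidity_density(grid, atoms, eta=2.0)   # low resolution: one broad feature
        fine = rigidity_density(grid, atoms, eta=0.2)     # high resolution: separate peaks
        print([round(v, 2) for v in coarse])
        print([round(v, 2) for v in fine])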

  2. Designing and developing portable large-scale JavaScript web applications within the Experiment Dashboard framework

    NASA Astrophysics Data System (ADS)

    Andreeva, J.; Dzhunov, I.; Karavakis, E.; Kokoszkiewicz, L.; Nowotka, M.; Saiz, P.; Tuckett, D.

    2012-12-01

    Improvements in web browser performance and web standards compliance, as well as the availability of comprehensive JavaScript libraries, provide an opportunity to develop functionally rich yet intuitive web applications that allow users to access, render and analyse data in novel ways. However, the development of such large-scale JavaScript web applications presents new challenges, in particular with regard to code sustainability and team-based work. We present an approach that meets the challenges of large-scale JavaScript web application design and development, including client-side model-view-controller architecture, design patterns, and JavaScript libraries. Furthermore, we show how the approach leads naturally to the encapsulation of the data source as a web API, allowing applications to be easily ported to new data sources. The Experiment Dashboard framework is used for the development of applications for monitoring the distributed computing activities of virtual organisations on the Worldwide LHC Computing Grid. We demonstrate the benefits of the approach for large-scale JavaScript web applications in this context by examining the design of several Experiment Dashboard applications for data processing, data transfer and site status monitoring, and by showing how they have been ported for different virtual organisations and technologies.

  3. A Ranking Approach on Large-Scale Graph With Multidimensional Heterogeneous Information.

    PubMed

    Wei, Wei; Gao, Bin; Liu, Tie-Yan; Wang, Taifeng; Li, Guohui; Li, Hang

    2016-04-01

    Graph-based ranking has been extensively studied and frequently applied in many applications, such as webpage ranking. It aims at mining potentially valuable information from the raw graph-structured data. Recently, with the proliferation of rich heterogeneous information (e.g., node/edge features and prior knowledge) available in many real-world graphs, how to effectively and efficiently leverage all information to improve the ranking performance becomes a new challenging problem. Previous methods only utilize part of such information and attempt to rank graph nodes according to link-based methods, of which the ranking performances are severely affected by several well-known issues, e.g., over-fitting or high computational complexity, especially when the scale of graph is very large. In this paper, we address the large-scale graph-based ranking problem and focus on how to effectively exploit rich heterogeneous information of the graph to improve the ranking performance. Specifically, we propose an innovative and effective semi-supervised PageRank (SSP) approach to parameterize the derived information within a unified semi-supervised learning framework (SSLF-GR), then simultaneously optimize the parameters and the ranking scores of graph nodes. Experiments on the real-world large-scale graphs demonstrate that our method significantly outperforms the algorithms that consider such graph information only partially.
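
    The propagation step underlying such graph-based ranking methods is PageRank-style; the Python sketch below shows plain power iteration on a toy graph (the paper's SSP method additionally parameterizes the transition probabilities with node and edge features, which is not reproduced here):

        # Hedged sketch: plain PageRank by power iteration on a small adjacency list.
        # The semi-supervised PageRank described above further learns parameters for
        # the transition probabilities; this sketch shows only the base iteration.
        def pagerank(adj, damping=0.85, iters=100):
            nodes = list(adj)
            n = len(nodes)
            rank = {v: 1.0 / n for v in nodes}
            for _ in range(iters):
                new = {v: (1.0 - damping) / n for v in nodes}
                for v, outs in adj.items():
                    if not outs:                       # dangling node: spread uniformly
                        for u in nodes:
                            new[u] += damping * rank[v] / n
                    else:
                        for u in outs:
                            new[u] += damping * rank[v] / len(outs)
                rank = new
            return rank

        graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
        print(sorted(pagerank(graph).items(), key=lambda kv: -kv[1]))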

  4. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proven to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually the code length shorter than 100 b) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR can generate one bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost-values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
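
    The structure of the approach, one bit per data dimension followed by ranking and selection of bits, can be illustrated with a much-simplified Python sketch (median thresholding and a balance-based ranking, not MCR's actual cost function):

        # Hedged sketch: one bit per dimension by median thresholding, then keep the
        # k bits whose 0/1 split is most balanced. This only illustrates the
        # "one bit per dimension + rank + select" structure, not MCR's cost function.
        import numpy as np

        def short_binary_codes(X, k):
            thresholds = np.median(X, axis=0)
            bits = (X > thresholds).astype(np.uint8)          # one bit per dimension
            balance = np.abs(bits.mean(axis=0) - 0.5)         # 0 = perfectly balanced
            keep = np.argsort(balance)[:k]                    # "top-ranked" dimensions
            return bits[:, keep], keep

        X = np.random.default_rng(0).normal(size=(1000, 64))
        codes, dims = short_binary_codes(X, k=16)
        print(codes.shape, dims[:5])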

  5. TREATMENT OF MUNICIPAL WASTEWATERS BY THE FLUIDIZED BED BIOREACTOR PROCESS

    EPA Science Inventory

    A 2-year, large-scale pilot investigation was conducted at the City of Newburgh Water Pollution Control Plant, Newburgh, NY, to demonstrate the application of the fluidized bed bioreactor process to the treatment of municipal wastewaters. The experimental effort investigated the ...

  6. Footprint of recycled water subsidies downwind of Lake Michigan

    USDA-ARS?s Scientific Manuscript database

    Continental evaporation is a significant and dynamic flux within the atmospheric water budget, but few methods provide robust observational constraints on the large-scale hydroclimatological and hydroecological impacts of this ‘recycled-water’ flux. We demonstrate a geospatial analysis that provides...

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Torcellini, P.; Pless, S.; Lobato, C.

    Ongoing work at the National Renewable Energy Laboratory indicates that net-zero energy building (NZEB) status is both achievable and repeatable today. This paper presents a definition framework for classifying NZEBs and a real-life example that demonstrates how a large-scale office building can cost-effectively achieve net-zero energy.

  8. Evaluating Green/Gray Infrastructure for CSO/Stormwater Control

    EPA Science Inventory

    The NRMRL is conducting this project to evaluate the water quality and quantity benefits of a large-scale application of green infrastructure (low-impact development/best management practices) retrofits in an entire subcatchment. It will document ORD's effort to demonstrate the e...

  9. Harvesting the Decay Energy of 26-Al to Drive Lightning Discharge and Chondrule Formation

    NASA Astrophysics Data System (ADS)

    Johansen, A.; Okuzumi, S.

    2017-02-01

    We demonstrate that positrons released in the decay of 26-Al cause large-scale charging of dense pebble regions. The charge separation is neutralized by lightning discharge and this can lead to the formation of chondrules.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaminski, Michael

    The Irreversible Wash Aid Additive process has been under development by the U.S. Environmental Protection Agency (EPA) and Argonne National Laboratory (Argonne). This process for radioactive cesium mitigation consists of a solution to wash down contaminated structures, roadways, and vehicles and a sequestering agent to bind the radionuclides from the wash water and render them environmentally immobile. The purpose of this process is to restore functionality to basic services and immediately reduce the consequences of a radiologically-contaminated urban environment. Research and development have resulted in a down-selection of technologies for integration and demonstration at the pilot-scale level as part of the Wide Area Recovery and Resiliency Program (WARRP) under the Department of Homeland Security and the Denver Urban Area Security Initiative. As part of developing the methods for performing a pilot-scale demonstration at the WARRP conference in Denver in 2012, Argonne conducted small-scale field experiments at Separmatic Systems. The main purpose of these experiments was to refine the wash water collection and separations systems and demonstrate key unit operations to help in planning for the large scale demonstration in Denver. Since the purpose of these tests was to demonstrate the operations of the system, we used no radioactive materials. After a brief set of experiments with the LAKOS unit to familiarize ourselves with its operation, two experiments were completed on two separate dates with the Separmatic systems.

  11. A Survey on Routing Protocols for Large-Scale Wireless Sensor Networks

    PubMed Central

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

    With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs) based on the cooperation among a significant number of nodes have become a hot topic. “Large-scale” means mainly large area or high density of a network. Accordingly, routing protocols must scale well as network scope extends and node density increases. A sensor node is normally energy-limited and cannot be recharged, and thus its energy consumption has a quite significant effect on the scalability of the protocol. To the best of our knowledge, currently the mainstream methods to solve the energy problem in large-scale WSNs are the hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes within the high level are responsible for data aggregation and management work, and the low level nodes for sensing their surroundings and collecting information. The hierarchical routing protocols are proved to be more energy-efficient than flat ones in which all the nodes play the same role, especially in terms of the data aggregation and the flooding of the control packets. With focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to the different objectives, the protocols are generally classified based on different criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to gain a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages. Moreover, a comparison of each routing protocol is conducted to demonstrate the differences between the protocols in terms of message complexity, memory requirements, localization, data aggregation, clustering manner and other metrics. Finally, some open issues in routing protocol design for large-scale wireless sensor networks are discussed and conclusions are drawn. PMID:22163808
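
    A classic member of the hierarchical family surveyed here is LEACH, in which nodes elect themselves cluster heads with a rotating probability; the Python sketch below implements that well-known election threshold as a representative example (one illustrative protocol only, not a summary of all surveyed protocols):

        # Hedged sketch: the classic LEACH cluster-head election threshold, one
        # representative of the hierarchical (clustered) routing protocols surveyed.
        import random

        def leach_threshold(p, round_no, was_cluster_head_recently):
            # p: desired fraction of cluster heads per round
            if was_cluster_head_recently:
                return 0.0
            return p / (1.0 - p * (round_no % round(1.0 / p)))

        def elect_cluster_heads(node_ids, p, round_no, recent_heads, rng=random):
            heads = []
            for n in node_ids:
                t = leach_threshold(p, round_no, n in recent_heads)
                if rng.random() < t:
                    heads.append(n)
            return heads

        print(elect_cluster_heads(range(100), p=0.05, round_no=7, recent_heads=set(range(5))))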

  12. Multi-Scale Effects in the Strength of Ceramics

    PubMed Central

    Cook, Robert F.

    2016-01-01

    Multiple length-scale effects are demonstrated in indentation-strength measurements of a range of ceramic materials under inert and reactive conditions. Meso-scale effects associated with flaw disruption by lateral cracking at large indentation loads are shown to increase strengths above the ideal indentation response. Micro-scale effects associated with toughening by microstructural restraints at small indentation loads are shown to decrease strengths below the ideal response. A combined meso-micro-scale analysis is developed that describes ceramic inert strength behaviors over the complete indentation flaw size range. Nano-scale effects associated with chemical equilibria and crack velocity thresholds are shown to lead to invariant minimum strengths at slow applied stressing rates under reactive conditions. A combined meso-micro-nano-scale analysis is developed that describes the full range of reactive and inert strength behaviors as a function of indentation load and applied stressing rate. Applications of the multi-scale analysis are demonstrated for materials design, materials selection, toughness determination, crack velocity determination, bond-rupture parameter determination, and prediction of reactive strengths. The measurements and analysis provide strong support for the existence of sharp crack tips in ceramics such that the nano-scale mechanisms of discrete bond rupture are separate from the larger scale crack driving force mechanics characterized by continuum-based stress-intensity factors. PMID:27563150

  13. Centimeter-Scale 2D van der Waals Vertical Heterostructures Integrated on Deformable Substrates Enabled by Gold Sacrificial Layer-Assisted Growth.

    PubMed

    Islam, Md Ashraful; Kim, Jung Han; Schropp, Anthony; Kalita, Hirokjyoti; Choudhary, Nitin; Weitzman, Dylan; Khondaker, Saiful I; Oh, Kyu Hwan; Roy, Tania; Chung, Hee-Suk; Jung, Yeonwoong

    2017-10-11

    Two-dimensional (2D) transition metal dichalcogenides (TMDs) such as molybdenum or tungsten disulfides (MoS2 or WS2) exhibit extremely large in-plane strain limits and unusual optical/electrical properties, offering unprecedented opportunities for flexible electronics/optoelectronics in new form factors. In order for them to be technologically viable building-blocks for such emerging technologies, it is critically demanded to grow/integrate them onto flexible or arbitrary-shaped substrates on a large wafer-scale compatible with the prevailing microelectronics processes. However, conventional approaches to assemble them on such unconventional substrates via mechanical exfoliations or coevaporation chemical growths have been limited to small-area transfers of 2D TMD layers with uncontrolled spatial homogeneity. Moreover, additional processes involving a prolonged exposure to strong chemical etchants have been required for the separation of as-grown 2D layers, which is detrimental to their material properties. Herein, we report a viable strategy to universally combine the centimeter-scale growth of various 2D TMD layers and their direct assemblies on mechanically deformable substrates. By exploring the water-assisted debonding of gold (Au) interfaced with silicon dioxide (SiO2), we demonstrate the direct growth, transfer, and integration of 2D TMD layers and heterostructures such as 2D MoS2 and 2D MoS2/WS2 vertical stacks on centimeter-scale plastic and metal foil substrates. We identify the dual function of the Au layer as a growth substrate as well as a sacrificial layer which facilitates 2D layer transfer. Furthermore, we demonstrate the versatility of this integration approach by fabricating centimeter-scale 2D MoS2/single walled carbon nanotube (SWNT) vertical heterojunctions which exhibit current rectification and photoresponse. This study opens a pathway to explore large-scale 2D TMD van der Waals layers as device building blocks for emerging mechanically deformable electronics/optoelectronics.

  14. Design and Analysis of a Hyperspectral Microwave Receiver Subsystem

    NASA Technical Reports Server (NTRS)

    Blackwell, W.; Galbraith, C.; Hancock, T.; Leslie, R.; Osaretin, I.; Shields, M.; Racette, P.; Hillard, L.

    2012-01-01

    Hyperspectral microwave (HM) sounding has been proposed to achieve unprecedented performance. HM operation is achieved using multiple banks of RF spectrometers with large aggregate bandwidth. A principal challenge is Size/Weight/Power scaling. Objectives of this work: 1) Demonstrate ultra-compact (100 cm³) 52-channel IF processor (enabler); 2) Demonstrate a hyperspectral microwave receiver subsystem; and 3) Deliver a flight-ready system to validate HM sounding.

  15. Wavelet Analysis for RADARSAT Exploitation: Demonstration of Algorithms for Maritime Surveillance

    DTIC Science & Technology

    2007-02-01

    this study, we demonstrate wavelet analysis for exploitation of RADARSAT ocean imagery, including wind direction estimation, oceanic and atmospheric ... of image striations that can arise as a texture pattern caused by turbulent coherent structures in the marine atmospheric boundary layer. The image ... associated change in the pattern texture (i.e., the nature of the turbulent atmospheric structures) across the front. Due to the large spatial scale of ...

  16. Inexpensive Dramatic Pneumatic Lift

    NASA Astrophysics Data System (ADS)

    Morse, Robert A.

    2017-09-01

    Various experiments and demonstrations relate air pressure and air pressure difference to force and area. Carpenter and Minnix describe a large-scale pneumatic lift in which a person sitting on a board atop a plastic garbage bag is easily lifted when the bag is connected to the exhaust port of a vacuum cleaner. This article describes the construction and use of an inexpensive hand-held pneumatic lift to demonstrate the same principle.

  17. Impacts of Social-Emotional Curricula on Three-Year-Olds: Exploratory Findings from the Head Start CARES Demonstration. Research Snapshot. OPRE Report 2014-78

    ERIC Educational Resources Information Center

    Hsueh, JoAnn; Lowenstein, Amy E.; Morris, Pamela; Mattera, Shira K.; Bangser, Michael

    2014-01-01

    This report presents exploratory impact findings for 3-year-olds from the Head Start CARES demonstration, a large-scale randomized controlled trial implemented in Head Start centers for one academic year across the country. The study was designed primarily to test the effects of the enhancements on 4-year-olds, but it also provides an opportunity…

  18. UXO Detection and Characterization in the Marine Environment

    DTIC Science & Technology

    2009-12-01

    Figure 8: A part of Ostrich Bay adjacent to the Naval Ammunition Depot Puget Sound during the period of its ... The large-scale demonstration focused on a marine geophysical magnetometry survey of Ostrich Bay adjacent to the Former Naval Ammunition Depot – Puget Sound ... The Puget Sound Demonstration was also supported by NAVFAC Northwest, the current manager of the site. The Navy Project Manager is Mr. Mark Murphy.

  19. Zebrafish Whole-Adult-Organism Chemogenomics for Large-Scale Predictive and Discovery Chemical Biology

    PubMed Central

    Lam, Siew Hong; Mathavan, Sinnakarupan; Tong, Yan; Li, Haixia; Karuturi, R. Krishna Murthy; Wu, Yilian; Vega, Vinsensius B.; Liu, Edison T.; Gong, Zhiyuan

    2008-01-01

    The ability to perform large-scale, expression-based chemogenomics on whole adult organisms, as in invertebrate models (worm and fly), is highly desirable for a vertebrate model but its feasibility and potential has not been demonstrated. We performed expression-based chemogenomics on the whole adult organism of a vertebrate model, the zebrafish, and demonstrated its potential for large-scale predictive and discovery chemical biology. Focusing on two classes of compounds with wide implications to human health, polycyclic (halogenated) aromatic hydrocarbons [P(H)AHs] and estrogenic compounds (ECs), we generated robust prediction models that can discriminate compounds of the same class from those of different classes in two large independent experiments. The robust expression signatures led to the identification of biomarkers for potent aryl hydrocarbon receptor (AHR) and estrogen receptor (ER) agonists, respectively, and were validated in multiple targeted tissues. Knowledge-based data mining of human homologs of zebrafish genes revealed highly conserved chemical-induced biological responses/effects, health risks, and novel biological insights associated with AHR and ER that could be inferred to humans. Thus, our study presents an effective, high-throughput strategy of capturing molecular snapshots of chemical-induced biological states of a whole adult vertebrate that provides information on biomarkers of effects, deregulated signaling pathways, and possible affected biological functions, perturbed physiological systems, and increased health risks. These findings place zebrafish in a strategic position to bridge the wide gap between cell-based and rodent models in chemogenomics research and applications, especially in preclinical drug discovery and toxicology. PMID:18618001

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunayama, Tomomi; Padmanabhan, Nikhil; Heitmann, Katrin

    Precision measurements of the large scale structure of the Universe require large numbers of high fidelity mock catalogs to accurately assess, and account for, the presence of systematic effects. We introduce and test a scheme for generating mock catalogs rapidly using suitably derated N-body simulations. Our aim is to reproduce the large scale structure and the gross properties of dark matter halos with high accuracy, while sacrificing the details of the halo's internal structure. By adjusting global and local time-steps in an N-body code, we demonstrate that we recover halo masses to better than 0.5% and the power spectrum to better than 1% both in real and redshift space for k = 1 h Mpc⁻¹, while requiring a factor of 4 less CPU time. We also calibrate the redshift spacing of outputs required to generate simulated light cones. We find that outputs separated by Δz = 0.05 allow us to interpolate particle positions and velocities to reproduce the real and redshift space power spectra to better than 1% (out to k = 1 h Mpc⁻¹). We apply these ideas to generate a suite of simulations spanning a range of cosmologies, motivated by the Baryon Oscillation Spectroscopic Survey (BOSS) but broadly applicable to future large scale structure surveys including eBOSS and DESI. As an initial demonstration of the utility of such simulations, we calibrate the shift in the baryonic acoustic oscillation peak position as a function of galaxy bias with higher precision than has been possible so far. This paper also serves to document the simulations, which we make publicly available.
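
    The light-cone construction relies on interpolating particle positions and velocities between outputs spaced by Δz = 0.05; a hedged Python sketch of such an interpolation step (simple linear interpolation in redshift, assuming matched particle IDs, illustrative only) follows:

        # Hedged sketch: linearly interpolate particle positions and velocities
        # between two simulation snapshots to an intermediate redshift, as needed
        # when building a light cone from outputs spaced by dz = 0.05.
        import numpy as np

        def interpolate_snapshot(z, snap_lo, snap_hi):
            # snap = (z, positions[N, 3], velocities[N, 3]); assumes matched particle IDs
            z0, x0, v0 = snap_lo
            z1, x1, v1 = snap_hi
            w = (z - z0) / (z1 - z0)
            return (1 - w) * x0 + w * x1, (1 - w) * v0 + w * v1

        rng = np.random.default_rng(1)
        x0, v0 = rng.uniform(0, 1000, (64, 3)), rng.normal(0, 300, (64, 3))
        x1, v1 = x0 + 0.5, v0 * 0.98        # toy "next snapshot"
        pos, vel = interpolate_snapshot(0.525, (0.50, x0, v0), (0.55, x1, v1))
        print(pos.shape, vel.shape)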

  1. A hierarchy of distress and invariant item ordering in the General Health Questionnaire-12.

    PubMed

    Doyle, F; Watson, R; Morgan, K; McBride, O

    2012-06-01

    Invariant item ordering (IIO) is defined as the extent to which items have the same ordering (in terms of item difficulty/severity - i.e. demonstrating whether items are difficult [rare] or less difficult [common]) for each respondent who completes a scale. IIO is therefore crucial for establishing a scale hierarchy that is replicable across samples, but no research has demonstrated IIO in scales of psychological distress. We aimed to determine if a hierarchy of distress with IIO exists in a large general population sample who completed a scale measuring distress. Data from 4107 participants who completed the 12-item General Health Questionnaire (GHQ-12) from the Northern Ireland Health and Social Wellbeing Survey 2005-6 were analysed. Mokken scaling was used to determine the dimensionality and hierarchy of the GHQ-12, and items were investigated for IIO. All items of the GHQ-12 formed a single, strong unidimensional scale (H=0.58). IIO was found for six of the 12 items (H-trans=0.55), and these symptoms reflected the following hierarchy: anhedonia, concentration, participation, coping, decision-making and worthlessness. The cross-sectional analysis needs replication. The GHQ-12 showed a hierarchy of distress, but IIO is only demonstrated for six of the items, and the scale could therefore be shortened. Adopting brief, hierarchical scales with IIO may be beneficial in both clinical and research contexts. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. RoboPIV: how robotics enable PIV on a large industrial scale

    NASA Astrophysics Data System (ADS)

    Michaux, F.; Mattern, P.; Kallweit, S.

    2018-07-01

    This work demonstrates how the interaction between particle image velocimetry (PIV) and robotics can massively increase measurement efficiency. The interdisciplinary approach is shown using the complex example of an automated, large scale, industrial environment: a typical automotive wind tunnel application. Both the high degree of flexibility in choosing the measurement region and the complete automation of stereo PIV measurements are presented. The setup consists of a combination of three robots, individually used as a 6D traversing unit for the laser illumination system as well as for each of the two cameras. Synchronised movements in the same reference frame are realised through a master-slave setup with a single interface to the user. By integrating the interface into the standard wind tunnel management system, a single measurement plane or a predefined sequence of several planes can be requested through a single trigger event, providing the resulting vector fields within minutes. In this paper, a brief overview on the demands of large scale industrial PIV and the existing solutions is given. Afterwards, the concept of RoboPIV is introduced as a new approach. In a first step, the usability of a selection of commercially available robot arms is analysed. The challenges of pose uncertainty and importance of absolute accuracy are demonstrated through comparative measurements, explaining the individual pros and cons of the analysed systems. Subsequently, the advantage of integrating RoboPIV directly into the existing wind tunnel management system is shown on basis of a typical measurement sequence. In a final step, a practical measurement procedure, including post-processing, is given by using real data and results. Ultimately, the benefits of high automation are demonstrated, leading to a drastic reduction in necessary measurement time compared to non-automated systems, thus massively increasing the efficiency of PIV measurements.

  3. Antitoxin Treatment of Inhalation Anthrax: A Systematic Review

    PubMed Central

    Huang, Eileen; Pillai, Satish K.; Bower, William A.; Hendricks, Katherine A.; Guarnizo, Julie T.; Hoyle, Jamechia D.; Gorman, Susan E.; Boyer, Anne E.; Quinn, Conrad P.; Meaney-Delman, Dana

    2016-01-01

    Concern about use of anthrax as a bioweapon prompted development of novel anthrax antitoxins for treatment. Clinical guidelines for the treatment of anthrax recommend antitoxin therapy in combination with intravenous antimicrobials; however, a large-scale or mass anthrax incident may exceed antitoxin availability and create a need for judicious antitoxin use. We conducted a systematic review of antitoxin treatment of inhalation anthrax in humans and experimental animals to inform antitoxin recommendations during a large-scale or mass anthrax incident. A comprehensive search of 11 databases and the FDA website was conducted to identify relevant animal studies and human reports: 28 animal studies and 3 human cases were identified. Antitoxin monotherapy at or shortly after symptom onset demonstrates increased survival compared to no treatment in animals. With early treatment, survival did not differ between antimicrobial monotherapy and antimicrobial-antitoxin therapy in nonhuman primates and rabbits. With delayed treatment, antitoxin-antimicrobial treatment increased rabbit survival. Among human cases, addition of antitoxin to combination antimicrobial treatment was associated with survival in 2 of the 3 cases treated. Despite the paucity of human data, limited animal data suggest that adjunctive antitoxin therapy may improve survival. Delayed treatment studies suggest improved survival with combined antitoxin-antimicrobial therapy, although a survival difference compared with antimicrobial therapy alone was not demonstrated statistically. In a mass anthrax incident with limited antitoxin supplies, antitoxin treatment of individuals who have not demonstrated a clinical benefit from antimicrobials, or those who present with more severe illness, may be warranted. Additional pathophysiology studies are needed, and a point-of-care assay correlating toxin levels with clinical status may provide important information to guide antitoxin use during a large-scale anthrax incident. PMID:26690378

  4. Antitoxin Treatment of Inhalation Anthrax: A Systematic Review.

    PubMed

    Huang, Eileen; Pillai, Satish K; Bower, William A; Hendricks, Katherine A; Guarnizo, Julie T; Hoyle, Jamechia D; Gorman, Susan E; Boyer, Anne E; Quinn, Conrad P; Meaney-Delman, Dana

    2015-01-01

    Concern about use of anthrax as a bioweapon prompted development of novel anthrax antitoxins for treatment. Clinical guidelines for the treatment of anthrax recommend antitoxin therapy in combination with intravenous antimicrobials; however, a large-scale or mass anthrax incident may exceed antitoxin availability and create a need for judicious antitoxin use. We conducted a systematic review of antitoxin treatment of inhalation anthrax in humans and experimental animals to inform antitoxin recommendations during a large-scale or mass anthrax incident. A comprehensive search of 11 databases and the FDA website was conducted to identify relevant animal studies and human reports: 28 animal studies and 3 human cases were identified. Antitoxin monotherapy at or shortly after symptom onset demonstrates increased survival compared to no treatment in animals. With early treatment, survival did not differ between antimicrobial monotherapy and antimicrobial-antitoxin therapy in nonhuman primates and rabbits. With delayed treatment, antitoxin-antimicrobial treatment increased rabbit survival. Among human cases, addition of antitoxin to combination antimicrobial treatment was associated with survival in 2 of the 3 cases treated. Despite the paucity of human data, limited animal data suggest that adjunctive antitoxin therapy may improve survival. Delayed treatment studies suggest improved survival with combined antitoxin-antimicrobial therapy, although a survival difference compared with antimicrobial therapy alone was not demonstrated statistically. In a mass anthrax incident with limited antitoxin supplies, antitoxin treatment of individuals who have not demonstrated a clinical benefit from antimicrobials, or those who present with more severe illness, may be warranted. Additional pathophysiology studies are needed, and a point-of-care assay correlating toxin levels with clinical status may provide important information to guide antitoxin use during a large-scale anthrax incident.

  5. Resonant soft X-ray scattering for polymer materials

    DOE PAGES

    Liu, Feng; Brady, Michael A.; Wang, Cheng

    2016-04-16

    Resonant Soft X-ray Scattering (RSoXS) was developed within the last few years, and the first dedicated resonant soft X-ray scattering beamline for soft materials was constructed at the Advanced Light Source, LBNL. RSoXS combines soft X-ray spectroscopy with X-ray scattering and thus offers statistical information for 3D chemical morphology over a large length scale range from nanometers to micrometers. Using RSoXS to characterize multi-length scale soft materials with heterogeneous chemical structures, we have demonstrated that soft X-ray scattering is a unique complementary technique to conventional hard X-ray and neutron scattering. Its unique chemical sensitivity, large accessible size scale, molecular bond orientation sensitivity with polarized X-rays, and high coherence have shown great potential for chemically specific structural characterization for many classes of materials.

  6. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    PubMed

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al2O3) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
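
    The gain comes from operating on sparse matrices rather than dense ones; as a generic illustration (standard scipy usage, not ONETEP's hierarchical blocked-sparsity scheme), the Python sketch below contrasts a sparse and a dense matrix product:

        # Hedged sketch: sparse vs dense matrix products, illustrating why sparse
        # matrix algebra is central to linear-scaling electronic-structure codes.
        # (Generic scipy usage, not ONETEP's hierarchical sparsity scheme.)
        import numpy as np
        from scipy import sparse

        n = 2000
        H = sparse.random(n, n, density=0.001, format="csr", random_state=0)
        K = sparse.random(n, n, density=0.001, format="csr", random_state=1)

        HK_sparse = H @ K                        # cost scales with the number of nonzeros
        HK_dense = H.toarray() @ K.toarray()     # dense cost ~ n^3, memory ~ n^2

        print(HK_sparse.nnz, np.allclose(HK_sparse.toarray(), HK_dense))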

  7. Efficient parallelization of analytic bond-order potentials for large-scale atomistic simulations

    NASA Astrophysics Data System (ADS)

    Teijeiro, C.; Hammerschmidt, T.; Drautz, R.; Sutmann, G.

    2016-07-01

    Analytic bond-order potentials (BOPs) provide a way to compute atomistic properties with controllable accuracy. For large-scale computations of heterogeneous compounds at the atomistic level, both the computational efficiency and memory demand of BOP implementations have to be optimized. Since the evaluation of BOPs is a local operation within a finite environment, the parallelization concepts known from short-range interacting particle simulations can be applied to improve the performance of these simulations. In this work, several efficient parallelization methods for BOPs that use three-dimensional domain decomposition schemes are described. The schemes are implemented into the bond-order potential code BOPfox, and their performance is measured in a series of benchmarks. Systems of up to several millions of atoms are simulated on a high performance computing system, and parallel scaling is demonstrated for up to thousands of processors.
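
    The schemes rest on a three-dimensional domain decomposition in which each process owns the atoms inside one spatial cell; the Python sketch below shows a minimal version of that assignment step (uniform grid of domains, periodic box, illustrative only):

        # Hedged sketch: assign atoms to MPI-rank-like domains on a uniform 3D grid,
        # the basic step of the domain-decomposition schemes described above.
        import numpy as np

        def assign_domains(positions, box_length, grid=(2, 2, 2)):
            grid = np.asarray(grid)
            cell = box_length / grid                              # domain size per axis
            idx = np.floor(positions / cell).astype(int) % grid   # periodic wrap
            # flatten the (ix, iy, iz) triple into a single domain/rank index
            return idx[:, 0] * grid[1] * grid[2] + idx[:, 1] * grid[2] + idx[:, 2]

        pos = np.random.default_rng(0).uniform(0.0, 10.0, size=(1_000, 3))
        ranks = assign_domains(pos, box_length=10.0)
        print(np.bincount(ranks))          # number of atoms owned by each of the 8 domains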

  8. A Scalable Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Aiken, Alexander

    2001-01-01

    The Scalable Analysis Toolkit (SAT) project aimed to demonstrate that it is feasible and useful to statically detect software bugs in very large systems. The technical focus of the project was on a relatively new class of constraint-based techniques for analysis software, where the desired facts about programs (e.g., the presence of a particular bug) are phrased as constraint problems to be solved. At the beginning of this project, the most successful forms of formal software analysis were limited forms of automatic theorem proving (as exemplified by the analyses used in language type systems and optimizing compilers), semi-automatic theorem proving for full verification, and model checking. With a few notable exceptions these approaches had not been demonstrated to scale to software systems of even 50,000 lines of code. Realistic approaches to large-scale software analysis cannot hope to make every conceivable formal method scale. Thus, the SAT approach is to mix different methods in one application by using coarse and fast but still adequate methods at the largest scales, and reserving the use of more precise but also more expensive methods at smaller scales for critical aspects (that is, aspects critical to the analysis problem under consideration) of a software system. The principled method proposed for combining a heterogeneous collection of formal systems with different scalability characteristics is mixed constraints. This idea had been used previously in small-scale applications with encouraging results: using mostly coarse methods and narrowly targeted precise methods, useful information (meaning the discovery of bugs in real programs) was obtained with excellent scalability.

  9. Cl-Assisted Large Scale Synthesis of Cm-Scale Buckypapers of Fe₃C-Filled Carbon Nanotubes with Pseudo-Capacitor Properties: The Key Role of SBA-16 Catalyst Support as Synthesis Promoter.

    PubMed

    Boi, Filippo S; He, Yi; Wen, Jiqiu; Wang, Shanling; Yan, Kai; Zhang, Jingdong; Medranda, Daniel; Borowiec, Joanna; Corrias, Anna

    2017-10-23

    We show a novel chemical vapour deposition (CVD) approach, in which the large-scale fabrication of ferromagnetically-filled cm-scale buckypapers is achieved through the deposition of a mesoporous supported catalyst (SBA-16) on a silicon substrate. We demonstrate that SBA-16 has the crucial role of promoting the growth of carbon nanotubes (CNTs) on a horizontal plane with random orientation rather than in a vertical direction, therefore allowing a facile fabrication of cm-scale CNTs buckypapers free from the onion-crust by-product observed on the buckypaper-surface in previous reports. The morphology and composition of the obtained CNTs-buckypapers are analyzed in detail by scanning electron microscopy (SEM), Energy Dispersive X-ray (EDX), transmission electron microscopy (TEM), high resolution TEM (HRTEM), and thermogravimetric analysis (TGA), while structural analysis is performed by Rietveld Refinement of XRD data. The room temperature magnetic properties of the produced buckypapers are also investigated and reveal the presence of a high coercivity of 650 Oe. Additionally, the electrochemical performances of these buckypapers are demonstrated and reveal a behavior that is compatible with that of a pseudo-capacitor (resistive-capacitor) with better performances than those presented in other previously studied layered-buckypapers of Fe-filled CNTs, obtained by pyrolysis of dichlorobenzene-ferrocene mixtures. These measurements indicate that these materials show promise for applications in energy storage systems as flexible electrodes.

  10. Active Learning in Physics: Technology and Research-based Techniques Emphasizing Interactive Lecture Demonstrations

    NASA Astrophysics Data System (ADS)

    Thornton, Ronald

    2010-10-01

    Physics education research has shown that learning environments that engage students and allow them to take an active part in their learning can lead to large conceptual gains compared to traditional instruction. Examples of successful curricula and methods include Peer Instruction, Just in Time Teaching, RealTime Physics, Workshop Physics, Scale-Up, and Interactive Lecture Demonstrations (ILDs). An active learning environment is often difficult to achieve in lecture sessions. This presentation will demonstrate the use of sequences of Interactive Lecture Demonstrations (ILDs) that use real experiments often involving real-time data collection and display combined with student interaction to create an active learning environment in large or small lecture classes. Interactive lecture demonstrations will be done in the area of mechanics using real-time motion probes and the Visualizer. A video tape of students involved in interactive lecture demonstrations will be shown. The results of a number of research studies at various institutions (including international) to measure the effectiveness of ILDs and guided inquiry conceptual laboratories will be presented.

  11. The up-scaling of ecosystem functions in a heterogeneous world

    NASA Astrophysics Data System (ADS)

    Lohrer, Andrew M.; Thrush, Simon F.; Hewitt, Judi E.; Kraan, Casper

    2015-05-01

    Earth is in the midst of a biodiversity crisis that is impacting the functioning of ecosystems and the delivery of valued goods and services. However, the implications of large scale species losses are often inferred from small scale ecosystem functioning experiments with little knowledge of how the dominant drivers of functioning shift across scales. Here, by integrating observational and manipulative experimental field data, we reveal scale-dependent influences on primary productivity in shallow marine habitats, thus demonstrating the scalability of complex ecological relationships contributing to coastal marine ecosystem functioning. Positive effects of key consumers (burrowing urchins, Echinocardium cordatum) on seafloor net primary productivity (NPP) elucidated by short-term, single-site experiments persisted across multiple sites and years. Additional experimentation illustrated how these effects amplified over time, resulting in greater primary producer biomass sediment chlorophyll a content (Chla) in the longer term, depending on climatic context and habitat factors affecting the strengths of mutually reinforcing feedbacks. The remarkable coherence of results from small and large scales is evidence of real-world ecosystem function scalability and ecological self-organisation. This discovery provides greater insights into the range of responses to broad-scale anthropogenic stressors in naturally heterogeneous environmental settings.

  12. The up-scaling of ecosystem functions in a heterogeneous world

    PubMed Central

    Lohrer, Andrew M.; Thrush, Simon F.; Hewitt, Judi E.; Kraan, Casper

    2015-01-01

    Earth is in the midst of a biodiversity crisis that is impacting the functioning of ecosystems and the delivery of valued goods and services. However, the implications of large scale species losses are often inferred from small scale ecosystem functioning experiments with little knowledge of how the dominant drivers of functioning shift across scales. Here, by integrating observational and manipulative experimental field data, we reveal scale-dependent influences on primary productivity in shallow marine habitats, thus demonstrating the scalability of complex ecological relationships contributing to coastal marine ecosystem functioning. Positive effects of key consumers (burrowing urchins, Echinocardium cordatum) on seafloor net primary productivity (NPP) elucidated by short-term, single-site experiments persisted across multiple sites and years. Additional experimentation illustrated how these effects amplified over time, resulting in greater primary producer biomass sediment chlorophyll a content (Chla) in the longer term, depending on climatic context and habitat factors affecting the strengths of mutually reinforcing feedbacks. The remarkable coherence of results from small and large scales is evidence of real-world ecosystem function scalability and ecological self-organisation. This discovery provides greater insights into the range of responses to broad-scale anthropogenic stressors in naturally heterogeneous environmental settings. PMID:25993477

  13. Prospective cohort studies of newly marketed medications: using covariate data to inform the design of large-scale studies.

    PubMed

    Franklin, Jessica M; Rassen, Jeremy A; Bartels, Dorothee B; Schneeweiss, Sebastian

    2014-01-01

    Nonrandomized safety and effectiveness studies are often initiated immediately after the approval of a new medication, but patients prescribed the new medication during this period may be substantially different from those receiving an existing comparator treatment. Restricting the study to comparable patients after data have been collected is inefficient in prospective studies with primary collection of outcomes. We discuss design and methods for evaluating covariate data to assess the comparability of treatment groups, identify patient subgroups that are not comparable, and decide when to transition to a large-scale comparative study. We demonstrate methods in an example study comparing Cox-2 inhibitors during their postmarketing period (1999-2005) with nonselective nonsteroidal anti-inflammatory drugs (NSAIDs). Graphical checks of propensity score distributions in each treatment group showed substantial problems with overlap in the initial cohorts. In the first half of 1999, >40% of patients were in the region of nonoverlap on the propensity score, and across the study period this fraction never dropped below 10% (the a priori decision threshold for transitioning to the large-scale study). After restricting to patients with no prior NSAID use, <1% of patients were in the region of nonoverlap, indicating that a large-scale study could be initiated in this subgroup and few patients would need to be trimmed from analysis. A sequential study design that uses pilot data to evaluate treatment selection can guide the efficient design of large-scale outcome studies with primary data collection by focusing on comparable patients.
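
    The overlap diagnostic described above can be sketched briefly. A minimal illustration (not the study's code), assuming a pandas DataFrame with a binary treatment indicator and baseline covariates (all column names below are hypothetical), fits a propensity score model and reports the fraction of patients outside the region of common support:

    ```python
    # Sketch of a propensity-score overlap check (hypothetical column names).
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    def fraction_nonoverlap(df: pd.DataFrame, covariates: list, treatment: str = "treatment") -> float:
        """Fit a propensity score model and return the fraction of patients whose
        score falls outside the region of common support of the two groups."""
        X, t = df[covariates].values, df[treatment].values
        ps = LogisticRegression(max_iter=1000).fit(X, t).predict_proba(X)[:, 1]
        lo = max(ps[t == 1].min(), ps[t == 0].min())   # common support lower bound
        hi = min(ps[t == 1].max(), ps[t == 0].max())   # common support upper bound
        return ((ps < lo) | (ps > hi)).mean()

    # Mirroring the a priori decision rule quoted in the abstract:
    # if fraction_nonoverlap(cohort, ["age", "sex", "prior_nsaid_use"]) < 0.10:
    #     transition to the large-scale study in this (sub)population
    ```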

  14. Cryogenic Tank Technology Program (CTTP)

    NASA Technical Reports Server (NTRS)

    Vaughn, T. P.

    2001-01-01

    The objectives of the Cryogenic Tank Technology Program were to: (1) determine the feasibility and cost effectiveness of near net shape hardware; (2) demonstrate near net shape processes by fabricating large-scale, flight-quality hardware; and (3) advance the state of current weld processing technologies for aluminum lithium alloys.

  15. Validating a Geographical Image Retrieval System.

    ERIC Educational Resources Information Center

    Zhu, Bin; Chen, Hsinchun

    2000-01-01

    Summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. Describes an experiment to validate the performance of this image retrieval system against that of human subjects by examining similarity analysis…

  16. Chem Ed Compacts.

    ERIC Educational Resources Information Center

    Wolf, Walter A., Ed.

    1978-01-01

    Reported here are brief descriptions of a common grading and scaling formula for large multi-section courses, an ion exchange amino acid separation and thin layer chromatography identification experiment, a conservation of energy demonstration, a catalyst for synthesizing esters from fatty acids, and an inexpensive method for preparing platinum…

  17. Educational Technology--The White Elephant.

    ERIC Educational Resources Information Center

    Molnar, Andrew R.

    A ten year experiment in educational technology sponsored under Title VII of the National Defense Education Act (NDEA) demonstrated the feasibility of large-scale educational systems which can extend education to all while permitting the individualization of instruction without significant increase in cost (through television, computer systems,…

  18. TORONTO HARBOUR COMMISSIONERS (THC) SOIL RECYCLE TREATMENT TRAIN - APPLICATIONS ANALYSIS REPORT

    EPA Science Inventory

    The Toronto Harbour Commissioners (THC) have developed a soil treatment train designed to treat inorganic and organic contaminants in soils. THC has conducted a large-scale demonstration of these technologies in an attempt to establish that contaminated soils at the Toronto Port ...

  19. Effect of small scale transport processes on phytoplankton distribution in coastal seas.

    PubMed

    Hernández-Carrasco, Ismael; Orfila, Alejandro; Rossi, Vincent; Garçon, Veronique

    2018-06-05

    Coastal ocean ecosystems are major contributors to global biogeochemical cycles and biological productivity. Physical factors induced by the turbulent flow play a crucial role in regulating marine ecosystems. However, while large-scale open-ocean dynamics is well described by geostrophy, the role of multiscale transport processes in coastal regions is still poorly understood due to the lack of continuous high-resolution observations. Here, the influence of small-scale dynamics (O(3.5-25) km, i.e. spanning upper submesoscale and mesoscale processes) on surface phytoplankton derived from satellite chlorophyll-a (Chl-a) is studied using Lagrangian metrics computed from High-Frequency Radar currents. The combination of complementary Lagrangian diagnostics, including the Lagrangian divergence along fluid trajectories, provides an improved description of the 3D flow geometry which facilitates the interpretation of two non-exclusive physical mechanisms affecting phytoplankton dynamics and patchiness. Attracting small-scale fronts, unveiled by backwards Lagrangian Coherent Structures, are associated with negative divergence where particles and Chl-a standing stocks cluster. Filaments of positive divergence, representing large accumulated upward vertical velocities and suggesting accrued injection of subsurface nutrients, match areas with large Chl-a concentrations. Our findings demonstrate that an accurate characterization of small-scale transport processes is necessary to comprehend bio-physical interactions in coastal seas.
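
    As a rough illustration of one diagnostic mentioned above, the sketch below (a generic illustration, not the authors' code) computes the horizontal divergence of a gridded 2D surface current field and accumulates it along simple forward trajectories; the velocity field is held steady for simplicity and all inputs are assumptions:

    ```python
    # Sketch: divergence of a 2D current field accumulated along forward trajectories.
    import numpy as np

    def divergence(u, v, dx, dy):
        """Horizontal divergence du/dx + dv/dy on a regular grid (u, v shaped [ny, nx])."""
        return np.gradient(u, dx, axis=1) + np.gradient(v, dy, axis=0)

    def accumulated_divergence(u, v, dx, dy, x0, y0, dt, n_steps):
        """Euler-advect particles from (x0, y0) and integrate divergence along each path."""
        div = divergence(u, v, dx, dy)
        ny, nx = u.shape
        x, y = np.array(x0, dtype=float), np.array(y0, dtype=float)
        acc = np.zeros_like(x)
        for _ in range(n_steps):
            i = np.clip((y / dy).astype(int), 0, ny - 1)   # nearest grid cell
            j = np.clip((x / dx).astype(int), 0, nx - 1)
            acc += div[i, j] * dt                          # Lagrangian accumulation
            x += u[i, j] * dt                              # forward Euler advection
            y += v[i, j] * dt
        return acc
    ```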

  20. A 14 × 14 μm2 footprint polarization-encoded quantum controlled-NOT gate based on hybrid waveguide

    PubMed Central

    Wang, S. M.; Cheng, Q. Q.; Gong, Y. X.; Xu, P.; Sun, C.; Li, L.; Li, T.; Zhu, S. N.

    2016-01-01

    Photonic quantum information processing systems have been widely used in communication, metrology and lithography. The recent emphasis on miniaturized photonic platforms is thus motivated by the urgent need for realizing large-scale information processing and computing. Although integrated quantum logic gates and quantum algorithms based on path encoding have been successfully demonstrated, the technology for handling the other commonly used encoding, polarization-encoded qubits, has yet to be fully developed. Here, we show the implementation of a polarization-dependent beam-splitter in a hybrid waveguide system. With a precisely engineered design, the polarization-encoded controlled-NOT gate can be implemented using only a single such polarization-dependent beam-splitter, reducing the overall device footprint to 14 × 14 μm2. The experimental demonstration of this highly integrated controlled-NOT gate sets the stage for developing large-scale quantum information processing systems. Our hybrid design also establishes new capabilities for controlling polarization modes in integrated photonic circuits. PMID:27142992

  1. Unsupervised Discovery of Demixed, Low-Dimensional Neural Dynamics across Multiple Timescales through Tensor Component Analysis.

    PubMed

    Williams, Alex H; Kim, Tony Hyun; Wang, Forea; Vyas, Saurabh; Ryu, Stephen I; Shenoy, Krishna V; Schnitzer, Mark; Kolda, Tamara G; Ganguli, Surya

    2018-06-27

    Perceptions, thoughts, and actions unfold over millisecond timescales, while learned behaviors can require many days to mature. While recent experimental advances enable large-scale and long-term neural recordings with high temporal fidelity, it remains a formidable challenge to extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning. We demonstrate that a simple tensor component analysis (TCA) can meet this challenge by extracting three interconnected, low-dimensional descriptions of neural data: neuron factors, reflecting cell assemblies; temporal factors, reflecting rapid circuit dynamics mediating perceptions, thoughts, and actions within each trial; and trial factors, describing both long-term learning and trial-to-trial changes in cognitive state. We demonstrate the broad applicability of TCA by revealing insights into diverse datasets derived from artificial neural networks, large-scale calcium imaging of rodent prefrontal cortex during maze navigation, and multielectrode recordings of macaque motor cortex during brain machine interface learning. Copyright © 2018 Elsevier Inc. All rights reserved.
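
    As a hedged sketch of the kind of decomposition described (TCA is a constrained CP/PARAFAC decomposition; this is not the authors' code, and the rank and data below are placeholders), a neurons × time × trials tensor can be factored with the tensorly package:

    ```python
    # Sketch: CP/PARAFAC decomposition of a neurons x time x trials data tensor.
    import numpy as np
    from tensorly.decomposition import non_negative_parafac

    data = np.random.rand(50, 200, 30)        # placeholder: 50 neurons, 200 time bins, 30 trials
    cp = non_negative_parafac(data, rank=5)   # low-rank, non-negative decomposition
    neuron_factors, temporal_factors, trial_factors = cp.factors

    # Each component r is described by three vectors:
    #   neuron_factors[:, r]   -> which cells participate (cell assembly)
    #   temporal_factors[:, r] -> within-trial dynamics
    #   trial_factors[:, r]    -> slow, across-trial (learning-related) changes
    ```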

  2. Helium ion microscopy of Lepidoptera scales.

    PubMed

    Boden, Stuart A; Asadollahbaik, Asa; Rutt, Harvey N; Bagnall, Darren M

    2012-01-01

    In this report, helium ion microscopy (HIM) is used to study the micro- and nanostructures responsible for structural color in the wings of two species of Lepidoptera from the Papilionidae family: Papilio ulysses (Blue Mountain Butterfly) and Parides sesostris (Emerald-patched Cattleheart). Electronic charging of uncoated scales from the wings of these butterflies, due to the incident ion beam, is successfully neutralized, leading to images displaying a large depth-of-field and a high level of surface detail, which would normally be obscured by the traditional coating methods used for scanning electron microscopy (SEM). The images are compared with those from variable pressure SEM, demonstrating the superiority of HIM at high magnifications. In addition, the large depth-of-field capabilities of HIM are exploited through the creation of stereo pairs that allow the exploration of the third dimension. Furthermore, the extraction of quantitative height information, which matches well with cross-sectional transmission electron microscopy measurements from the literature, is demonstrated. © Wiley Periodicals, Inc.

  3. On the impact of approximate computation in an analog DeSTIN architecture.

    PubMed

    Young, Steven; Lu, Junjie; Holleman, Jeremy; Arel, Itamar

    2014-05-01

    Deep machine learning (DML) holds the potential to revolutionize machine learning by automating rich feature extraction, which has become the primary bottleneck of human engineering in pattern recognition systems. However, the heavy computational burden renders DML systems implemented on conventional digital processors impractical for large-scale problems. The highly parallel computations required to implement large-scale deep learning systems are well suited to custom hardware. Analog computation has demonstrated power efficiency advantages of multiple orders of magnitude relative to digital systems while performing nonideal computations. In this paper, we investigate typical error sources introduced by analog computational elements and their impact on system-level performance in DeSTIN--a compositional deep learning architecture. These inaccuracies are evaluated on a pattern classification benchmark, clearly demonstrating the robustness of the underlying algorithm to the errors introduced by analog computational elements. A clear understanding of the impacts of nonideal computations is necessary to fully exploit the efficiency of analog circuits.

  4. Combination of Universal Mechanical Testing Machine with Atomic Force Microscope for Materials Research

    PubMed Central

    Zhong, Jian; He, Dannong

    2015-01-01

    Surface deformation and fracture processes of materials under external force are important for understanding and developing materials. Here, a combined horizontal universal mechanical testing machine (HUMTM)-atomic force microscope (AFM) system is developed by modifying a UMTM to combine with an AFM and designing a height-adjustable stabilizing apparatus. Then the combined HUMTM-AFM system is evaluated. Finally, as initial demonstrations, it is applied to analyze the relationship among macroscopic mechanical properties, surface nanomorphological changes under external force, and fracture processes of two kinds of representative large-scale thin-film materials: a polymer material with a high strain rate (Parafilm) and a metal material with a low strain rate (aluminum foil). All the results demonstrate that the combined HUMTM-AFM system overcomes several disadvantages of current AFM-combined tensile/compression devices, including small load force, inability to handle large-scale specimens, and unsuitability for materials with high strain rates. Therefore, the combined HUMTM-AFM system is a promising tool for materials research in the future. PMID:26265357

  5. Combination of Universal Mechanical Testing Machine with Atomic Force Microscope for Materials Research.

    PubMed

    Zhong, Jian; He, Dannong

    2015-08-12

    Surface deformation and fracture processes of materials under external force are important for understanding and developing materials. Here, a combined horizontal universal mechanical testing machine (HUMTM)-atomic force microscope (AFM) system is developed by modifying a UMTM to combine with an AFM and designing a height-adjustable stabilizing apparatus. Then the combined HUMTM-AFM system is evaluated. Finally, as initial demonstrations, it is applied to analyze the relationship among macroscopic mechanical properties, surface nanomorphological changes under external force, and fracture processes of two kinds of representative large-scale thin-film materials: a polymer material with a high strain rate (Parafilm) and a metal material with a low strain rate (aluminum foil). All the results demonstrate that the combined HUMTM-AFM system overcomes several disadvantages of current AFM-combined tensile/compression devices, including small load force, inability to handle large-scale specimens, and unsuitability for materials with high strain rates. Therefore, the combined HUMTM-AFM system is a promising tool for materials research in the future.

  6. The Segmented Aperture Interferometric Nulling Testbed (SAINT) I: Overview and Air-side System Description

    NASA Technical Reports Server (NTRS)

    Hicks, Brian A.; Lyon, Richard G.; Petrone, Peter, III; Bolcar, Matthew R.; Bolognese, Jeff; Clampin, Mark; Dogoda, Peter; Dworzanski, Daniel; Helmbrecht, Michael A.; Koca, Corina

    2016-01-01

    This work presents an overview of the Segmented Aperture Interferometric Nulling Testbed (SAINT), a project that will pair an actively-controlled macro-scale segmented mirror with the Visible Nulling Coronagraph (VNC). SAINT will incorporate the VNC's demonstrated wavefront sensing and control system to refine and quantify the end-to-end system performance for high-contrast starlight suppression. This pathfinder system will be used as a tool to study and refine approaches to mitigating instabilities and complex diffraction expected from future large segmented-aperture telescopes.

  7. A 14 × 14 μm(2) footprint polarization-encoded quantum controlled-NOT gate based on hybrid waveguide.

    PubMed

    Wang, S M; Cheng, Q Q; Gong, Y X; Xu, P; Sun, C; Li, L; Li, T; Zhu, S N

    2016-05-04

    Photonic quantum information processing systems have been widely used in communication, metrology and lithography. The recent emphasis on miniaturized photonic platforms is thus motivated by the urgent need for realizing large-scale information processing and computing. Although integrated quantum logic gates and quantum algorithms based on path encoding have been successfully demonstrated, the technology for handling the other commonly used encoding, polarization-encoded qubits, has yet to be fully developed. Here, we show the implementation of a polarization-dependent beam-splitter in a hybrid waveguide system. With a precisely engineered design, the polarization-encoded controlled-NOT gate can be implemented using only a single such polarization-dependent beam-splitter, reducing the overall device footprint to 14 × 14 μm(2). The experimental demonstration of this highly integrated controlled-NOT gate sets the stage for developing large-scale quantum information processing systems. Our hybrid design also establishes new capabilities for controlling polarization modes in integrated photonic circuits.

  8. Cross-axis synchronous flow-through coil planet centrifuge for large-scale preparative counter-current chromatography. III. Performance of large-bore coils in slow planetary motion.

    PubMed

    Ito, Y; Zhang, T Y

    1988-11-25

    A preparative capability of the present cross-axis synchronous flow-through coil planet centrifuge was demonstrated with 0.5 cm I.D. multilayer coils. Results of the model studies with short coils indicated that the optimal separations are obtained at low revolutional speeds of 100-200 rpm in both central and lateral coil positions. Preparative separations were successfully performed on 2.5-10 g quantities of test samples in a pair of multilayer coils connected in series with a total capacity of 2.5 l. The sample loading capacity will be scaled up in several folds by increasing the column width.

  9. A Zonal Approach for Prediction of Jet Noise

    NASA Technical Reports Server (NTRS)

    Shih, S. H.; Hixon, D. R.; Mankbadi, Reda R.

    1995-01-01

    A zonal approach for direct computation of sound generation and propagation from a supersonic jet is investigated. The present work splits the computational domain into a nonlinear, acoustic-source regime and a linear acoustic wave propagation regime. In the nonlinear regime, the unsteady flow is governed by the large-scale equations, which are the filtered compressible Navier-Stokes equations. In the linear acoustic regime, the sound wave propagation is described by the linearized Euler equations. Computational results are presented for a supersonic jet at M = 2.1. It is demonstrated that no spurious modes are generated in the matching region and that the computational expense is reduced substantially compared to a full large-scale simulation.
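
    For reference, a hedged sketch of the linear acoustic regime: assuming a uniform mean flow U_0 in the x direction with mean density rho_0 and pressure p_0 (a simplification; the actual computation linearizes about the computed jet mean flow), the linearized Euler equations for the perturbations read

    ```latex
    \left(\partial_t + U_0\,\partial_x\right)\rho' + \rho_0\,\nabla\!\cdot\mathbf{u}' = 0, \qquad
    \left(\partial_t + U_0\,\partial_x\right)\mathbf{u}' + \frac{1}{\rho_0}\,\nabla p' = 0, \qquad
    \left(\partial_t + U_0\,\partial_x\right)p' + \gamma p_0\,\nabla\!\cdot\mathbf{u}' = 0 .
    ```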

  10. Extracellular matrix motion and early morphogenesis

    PubMed Central

    Loganathan, Rajprasad; Rongish, Brenda J.; Smith, Christopher M.; Filla, Michael B.; Czirok, Andras; Bénazéraf, Bertrand

    2016-01-01

    For over a century, embryologists who studied cellular motion in early amniotes generally assumed that morphogenetic movement reflected migration relative to a static extracellular matrix (ECM) scaffold. However, as we discuss in this Review, recent investigations reveal that the ECM is also moving during morphogenesis. Time-lapse studies show how convective tissue displacement patterns, as visualized by ECM markers, contribute to morphogenesis and organogenesis. Computational image analysis distinguishes between cell-autonomous (active) displacements and convection caused by large-scale (composite) tissue movements. Modern quantification of large-scale ‘total’ cellular motion and the accompanying ECM motion in the embryo demonstrates that a dynamic ECM is required for generation of the emergent motion patterns that drive amniote morphogenesis. PMID:27302396

  11. Large-scale brain networks in affective and social neuroscience: Towards an integrative functional architecture of the brain

    PubMed Central

    Barrett, Lisa Feldman; Satpute, Ajay

    2013-01-01

    Understanding how a human brain creates a human mind ultimately depends on mapping psychological categories and concepts to physical measurements of neural response. Although it has long been assumed that emotional, social, and cognitive phenomena are realized in the operations of separate brain regions or brain networks, we demonstrate that it is possible to understand the body of neuroimaging evidence using a framework that relies on domain general, distributed structure-function mappings. We review current research in affective and social neuroscience and argue that the emerging science of large-scale intrinsic brain networks provides a coherent framework for a domain-general functional architecture of the human brain. PMID:23352202

  12. Large-scale evaluation of multimodal biometric authentication using state-of-the-art systems.

    PubMed

    Snelick, Robert; Uludag, Umut; Mink, Alan; Indovina, Michael; Jain, Anil

    2005-03-01

    We examine the performance of multimodal biometric authentication systems using state-of-the-art Commercial Off-the-Shelf (COTS) fingerprint and face biometric systems on a population approaching 1,000 individuals. The majority of prior studies of multimodal biometrics have been limited to relatively low accuracy non-COTS systems and populations of a few hundred users. Our work is the first to demonstrate that multimodal fingerprint and face biometric systems can achieve significant accuracy gains over either biometric alone, even when using highly accurate COTS systems on a relatively large-scale population. In addition to examining well-known multimodal methods, we introduce new methods of normalization and fusion that further improve the accuracy.
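
    A minimal sketch of score-level fusion of the kind evaluated above (min-max normalization followed by a simple sum rule); this is a generic illustration, not the implementation used in the study:

    ```python
    # Sketch: min-max normalization and sum-rule fusion of two matchers' scores.
    import numpy as np

    def min_max_normalize(scores):
        """Map raw matcher scores to [0, 1]."""
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / (s.max() - s.min())

    def fuse_sum_rule(face_scores, fingerprint_scores):
        """Fuse normalized scores from the two modalities by simple averaging."""
        return 0.5 * (min_max_normalize(face_scores) + min_max_normalize(fingerprint_scores))

    # A genuine/impostor decision is then made by thresholding the fused score:
    # accept = fuse_sum_rule(face, finger) >= threshold
    ```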

  13. Effect of ploidy on scale-cover pattern in linear ornamental (koi) common carp Cyprinus carpio.

    PubMed

    Gomelsky, B; Schneider, K J; Glennon, R P; Plouffe, D A

    2012-09-01

    The effect of ploidy on scale-cover pattern in linear ornamental (koi) common carp Cyprinus carpio was investigated. To obtain diploid and triploid linear fish, eggs taken from a leather C. carpio female (genotype ssNn) and sperm taken from a scaled C. carpio male (genotype SSnn) were used for the production of control (no shock) and heat-shocked progeny. In heat-shocked progeny, the 2 min heat shock (40° C) was applied 6 min after insemination. Diploid linear fish (genotype SsNn) demonstrated a scale-cover pattern typical for this category with one even row of scales along lateral line and few scales located near operculum and at bases of fins. The majority (97%) of triploid linear fish (genotype SssNnn) exhibited non-typical scale patterns which were characterized by the appearance of additional scales on the body. The extent of additional scales in triploid linear fish was variable; some fish had large scales, which covered almost the entire body. Apparently, the observed difference in scale-cover pattern between triploid and diploid linear fish was caused by different phenotypic expression of gene N/n. Due to incomplete dominance of allele N, triploids Nnn demonstrate less profound reduction of scale cover compared with diploids Nn. © 2012 The Authors. Journal of Fish Biology © 2012 The Fisheries Society of the British Isles.

  14. Status of EPA's (Environmental Protection Agency's) LIMB (Limestone Injection Multistage Burner) demonstration program at Ohio Edison's Edgewater Unit 4. Report for September-December 1986

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendriks, R.V.; Nolan, P.S.

    1987-01-01

    The paper describes and discusses the key design features of the retrofit of EPA's Limestone Injection Multistage Burner (LIMB) system to an operating, wall-fired utility boiler at Ohio Edison's Edgewater Station. It further describes results of the pertinent projects in EPA's LIMB program and shows how these results were used as the basis for the design of the system. The full-scale demonstration is expected to prove the effectiveness and cost of the LIMB concept for use on large-scale utility boilers. The equipment is now being installed at Edgewater, with system start-up scheduled for May 1987.

  15. A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems.

    PubMed

    Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping

    2013-01-01

    Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over the convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a big challenge. A commonly used approach is the Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
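
    A compact sketch of the iteration described above, using the ordinary l1 soft-thresholding operator as the closed-form proximal step (many of the non-convex penalties treated in the paper, e.g. capped-l1, admit similarly simple proximal solutions), a Barzilai-Borwein initial step size and a backtracking line search; the details below are an illustrative reading of the abstract, not the authors' reference code:

    ```python
    # Sketch of a GIST-style iteration with soft-thresholding as the proximal step.
    import numpy as np

    def soft_threshold(z, tau):
        return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

    def gist(f, grad_f, lam, x0, n_iter=200, eta=2.0, sigma=1e-4):
        """Minimize f(x) + lam * ||x||_1 (a stand-in penalty) by iterative shrinkage."""
        x, x_prev = x0.copy(), x0.copy()
        g, g_prev = grad_f(x), grad_f(x0)
        t = 1.0
        for _ in range(n_iter):
            dx, dg = x - x_prev, g - g_prev
            if dx.dot(dx) > 0 and dx.dot(dg) > 0:
                t = dx.dot(dg) / dx.dot(dx)        # Barzilai-Borwein step-size initialization
            while True:                            # backtracking line search on t
                x_new = soft_threshold(x - g / t, lam / t)
                decrease = 0.5 * sigma * t * np.sum((x_new - x) ** 2)
                if f(x_new) + lam * np.abs(x_new).sum() <= f(x) + lam * np.abs(x).sum() - decrease:
                    break
                t *= eta
            x_prev, g_prev = x, g
            x, g = x_new, grad_f(x_new)
        return x
    ```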

  16. Variability in large-scale wind power generation: Variability in large-scale wind power generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kiviluoma, Juha; Holttinen, Hannele; Weir, David

    2015-10-25

    The paper demonstrates the characteristics of wind power variability and net load variability in multiple power systems based on real data from multiple years. Demonstrated characteristics include probability distributions for different ramp durations, seasonal and diurnal variability and low net load events. The comparison shows regions with low variability (Sweden, Spain and Germany), medium variability (Portugal, Ireland, Finland and Denmark) and regions with higher variability (Quebec, Bonneville Power Administration and Electric Reliability Council of Texas in North America; Gansu, Jilin and Liaoning in China; and Norway and offshore wind power in Denmark). For regions with low variability, the maximum 1 h wind ramps are below 10% of nominal capacity, and for regions with high variability, they may be close to 30%. Wind power variability is mainly explained by the extent of geographical spread, but a higher capacity factor also causes higher variability. It was also shown how wind power ramps are autocorrelated and dependent on the operating output level. When wind power was concentrated in a smaller area, there were outliers with high changes in wind output, which were not present in large areas with well-dispersed wind power.
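
    A hedged sketch of the ramp statistic referenced above (changes over a given duration expressed as a fraction of nominal capacity, computed from an hourly generation time series; the variable names and capacity below are placeholders):

    ```python
    # Sketch: distribution of wind power ramps for a given ramp duration.
    import pandas as pd

    def ramp_distribution(generation: pd.Series, capacity_mw: float, hours: int = 1) -> pd.Series:
        """Return ramps over `hours` as a fraction of nominal capacity."""
        return generation.diff(periods=hours).dropna() / capacity_mw

    # Example: compare the largest 1 h ramp against the ~10% (low-variability)
    # and ~30% (high-variability) figures quoted in the abstract.
    # ramps = ramp_distribution(hourly_mw, capacity_mw=3000.0, hours=1)
    # print(ramps.abs().max(), ramps.quantile([0.01, 0.99]))
    ```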

  17. Experimental Demonstration of Technologies for Autonomous On-Orbit Robotic Assembly

    NASA Technical Reports Server (NTRS)

    LeMaster, Edward A.; Schaechter, David B.; Carrington, Connie K.

    2006-01-01

    The Modular Reconfigurable High Energy (MRHE) program aimed to develop technologies for the automated assembly and deployment of large-scale space structures and aggregate spacecraft. Part of the project involved creation of a terrestrial robotic testbed for validation and demonstration of these technologies and for the support of future development activities. This testbed was completed in 2005, and was thereafter used to demonstrate automated rendezvous, docking, and self-assembly tasks between a group of three modular robotic spacecraft emulators. This paper discusses the rationale for the MRHE project, describes the testbed capabilities, and presents the MRHE assembly demonstration sequence.

  18. An examination of three sets of MMPI-2 personality disorder scales.

    PubMed

    Jones, Alvin

    2005-08-01

    Three sets of personality disorder scales (PD scales) can be scored for the MMPI-2 (Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989). Two sets (Levitt & Gotts, 1995; Morey, Waugh, & Blashfield, 1985) are derived from the MMPI (Hathaway & McKinley, 1983), and a third set (Somwaru & Ben-Porath, 1995) is based on the MMPI-2. There is no validity research for the Levitt and Gotts scales, and limited validity research is available for the Somwaru and Ben-Porath scales. There is a large body of research suggesting that the Morey et al. scales have good to excellent convergent validity when compared to a variety of other measures of personality disorders. Since the Morey et al. scales have established validity, there is a question of whether additional sets of PD scales are needed. The primary purpose of this research was to determine whether the PD scales developed by Levitt and Gotts and those developed by Somwaru and Ben-Porath contribute incrementally to the scales developed by Morey et al. in predicting corresponding scales on the MCMI-II (Millon, 1987). In a sample of 494 individuals evaluated at an Army medical center, a hierarchical regression analysis demonstrated that the Somwaru and Ben-Porath Borderline, Antisocial, and Schizoid PD scales and the Levitt and Gotts Narcissistic and Histrionic scales contributed significantly and meaningfully to the Morey et al. scales in predicting the corresponding MCMI-II (Millon, 1987) scale. However, only the Somwaru and Ben-Porath scales demonstrated acceptable internal consistency and convergent validity.

  19. Large scale preparation of high mannose and paucimannose N-glycans from soybean proteins by oxidative release of natural glycans (ORNG).

    PubMed

    Zhu, Yuyang; Yan, Maomao; Lasanajak, Yi; Smith, David F; Song, Xuezheng

    2018-07-15

    Despite the important advances in chemical and chemoenzymatic synthesis of glycans, access to large quantities of complex natural glycans remains a major impediment to progress in Glycoscience. Here we report a large-scale preparation of N-glycans from a kilogram of commercial soy proteins using oxidative release of natural glycans (ORNG). The high mannose and paucimannose N-glycans were labeled with a fluorescent tag and purified by size exclusion and multidimensional preparative HPLC. Side products are identified and potential mechanisms for the oxidative release of natural N-glycans from glycoproteins are proposed. This study demonstrates the potential for using the ORNG approach as a complementary route to synthetic approaches for the preparation of multi-milligram quantities of biomedically relevant complex glycans. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions

    PubMed Central

    Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong

    2016-01-01

    Although the GW approximation is recognized as one of the most accurate theories for predicting the excited-state properties of materials, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting the results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to the GW calculations for 2D materials. PMID:27833140

  1. Questionnaire-based assessment of executive functioning: Psychometrics.

    PubMed

    Castellanos, Irina; Kronenberger, William G; Pisoni, David B

    2018-01-01

    The psychometric properties of the Learning, Executive, and Attention Functioning (LEAF) scale were investigated in an outpatient clinical pediatric sample. As a part of clinical testing, the LEAF scale, which broadly measures neuropsychological abilities related to executive functioning and learning, was administered to parents of 118 children and adolescents referred for psychological testing at a pediatric psychology clinic; 85 teachers also completed LEAF scales to assess reliability across different raters and settings. Scores on neuropsychological tests of executive functioning and academic achievement were abstracted from charts. Psychometric analyses of the LEAF scale demonstrated satisfactory internal consistency, parent-teacher inter-rater reliability in the small to large effect size range, and test-retest reliability in the large effect size range, similar to values for other executive functioning checklists. Correlations between corresponding subscales on the LEAF and other behavior checklists were large, while most correlations with neuropsychological tests of executive functioning and achievement were significant but in the small to medium range. Results support the utility of the LEAF as a reliable and valid questionnaire-based assessment of delays and disturbances in executive functioning and learning. Applications and advantages of the LEAF and other questionnaire measures of executive functioning in clinical neuropsychology settings are discussed.

  2. PLASMA TURBULENCE AND KINETIC INSTABILITIES AT ION SCALES IN THE EXPANDING SOLAR WIND

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hellinger, Petr; Trávnícek, Pavel M.; Matteini, Lorenzo

    The relationship between a decaying strong turbulence and kinetic instabilities in a slowly expanding plasma is investigated using two-dimensional (2D) hybrid expanding box simulations. We impose an initial ambient magnetic field perpendicular to the simulation box, and we start with a spectrum of large-scale, linearly polarized, random-phase Alfvénic fluctuations that have energy equipartition between kinetic and magnetic fluctuations and vanishing correlation between the two fields. A turbulent cascade rapidly develops; magnetic field fluctuations exhibit a power-law spectrum at large scales and a steeper spectrum at ion scales. The turbulent cascade leads to an overall anisotropic proton heating, protons are heated in the perpendicular direction, and, initially, also in the parallel direction. The imposed expansion leads to generation of a large parallel proton temperature anisotropy which is at later stages partly reduced by turbulence. The turbulent heating is not sufficient to overcome the expansion-driven perpendicular cooling and the system eventually drives the oblique firehose instability in a form of localized nonlinear wave packets which efficiently reduce the parallel temperature anisotropy. This work demonstrates that kinetic instabilities may coexist with strong plasma turbulence even in a constrained 2D regime.

  3. Discrete Event Modeling and Massively Parallel Execution of Epidemic Outbreak Phenomena

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2011-01-01

    In complex phenomena such as epidemiological outbreaks, the intensity of inherent feedback effects and the significant role of transients in the dynamics make simulation the only effective method for proactive, reactive or post-facto analysis. The spatial scale, runtime speed, and behavioral detail needed in detailed simulations of epidemic outbreaks make it necessary to use large-scale parallel processing. Here, an optimistic parallel execution of a new discrete event formulation of a reaction-diffusion simulation model of epidemic propagation is presented to facilitate dramatically increasing the fidelity and speed by which epidemiological simulations can be performed. Rollback support needed during optimistic parallel execution is achieved by combining reverse computation with a small amount of incremental state saving. Parallel speedup of over 5,500 and other runtime performance metrics of the system are observed with weak-scaling execution on a small (8,192-core) Blue Gene / P system, while scalability with a weak-scaling speedup of over 10,000 is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes exceeding several hundreds of millions of individuals in the largest cases are successfully exercised to verify model scalability.

  4. Forced Imbibition in Porous Media: A Fourfold Scenario

    NASA Astrophysics Data System (ADS)

    Odier, Céleste; Levaché, Bertrand; Santanach-Carreras, Enric; Bartolo, Denis

    2017-11-01

    We establish a comprehensive description of the patterns formed when a wetting liquid displaces a viscous fluid confined in a porous medium. Building on model microfluidic experiments, we evidence four imbibition scenarios all yielding different large-scale morphologies. Combining high-resolution imaging and confocal microscopy, we show that they originate from two liquid-entrainment transitions and a Rayleigh-Plateau instability at the pore scale. Finally, we demonstrate and explain the long-time coarsening of the resulting patterns.

  5. Identification and Characterization of Genomic Amplifications in Ovarian Serous Carcinoma

    DTIC Science & Technology

    2009-07-01

    oncogenes, Rsf1 and Notch3, which were up-regulated in both genomic DNA and transcript levels in ovarian cancer. In a large-scale FISH analysis, Rsf1...associated with worse disease outcome, suggesting that Rsf1 could be potentially used as a prognostic marker in the future (Appendix #1). For the...over-expressed in a recurrent carcinoma. Although the follow-up study in a larger-scale sample size did not demonstrate clear amplification in NAC1

  6. Demonstration of Regenerable, Large-Scale Ion Exchange System Using WBA Resin in Rialto, CA (Drinking Water Treatment - Pilot Scale)

    DTIC Science & Technology

    2008-08-01

    Indexed fragments of the report text: acronyms defined include NDBA (N-nitrosodi-n-butylamine), NDEA (N-nitrosodiethylamine), NDMA (N-nitrosodimethylamine) and NDPA (N-nitrosodi-n-propylamine). ... spectrometry (IC-MS/MS). Nitrosamines were analyzed using EPA Method 521. N-nitrosodimethylamine (NDMA) was 2.6 parts per trillion (ppt) with a detection ... and metals (Ca, Cu, Fe, Mg, Mn, K, Na, and Zn). Specific methods are listed in Table 5.

  7. Electrokinetic decontamination of concrete. Final report, August 3, 1993--September 15, 1996

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1998-12-31

    The ELECTROSORB® "C" process is an electrokinetic process for decontaminating concrete. ELECTROSORB® "C" uses a carpet-like extraction pad which is placed on the contaminated concrete surface. An electrolyte solution is circulated from a supporting module. This module keeps the electrolyte solution clean. The work is advancing through the engineering development stage with steady progress toward a full-scale demonstration unit which will be ready for incorporation in the DOE Large Scale Demonstration Program by Summer 1997. A demonstration was carried out at the Mound Facility in Miamisburg, Ohio, in June 1996. Third-party verification by EG&G confirmed the effectiveness of the process. Results of this work and the development work that preceded it are described herein.

  8. Coherent nonhelical shear dynamos driven by magnetic fluctuations at low Reynolds numbers

    DOE PAGES

    Squire, J.; Bhattacharjee, A.

    2015-10-28

    Nonhelical shear dynamos are studied with a particular focus on the possibility of coherent dynamo action. The primary results—serving as a follow up to the results of Squire & Bhattacharjee—pertain to the "magnetic shear-current effect" as a viable mechanism to drive large-scale magnetic field generation. This effect raises the interesting possibility that the saturated state of the small-scale dynamo could drive large-scale dynamo action, and is likely to be important in the unstratified regions of accretion disk turbulence. In this paper, the effect is studied at low Reynolds numbers, removing the complications of small-scale dynamo excitation and aiding analysis by enabling the use of quasi-linear statistical simulation methods. In addition to the magnetically driven dynamo, new results on the kinematic nonhelical shear dynamo are presented. Furthermore, these illustrate the relationship between coherent and incoherent driving in such dynamos, demonstrating the importance of rotation in determining the relative dominance of each mechanism.

  9. COHERENT NONHELICAL SHEAR DYNAMOS DRIVEN BY MAGNETIC FLUCTUATIONS AT LOW REYNOLDS NUMBERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Squire, J.; Bhattacharjee, A., E-mail: jsquire@caltech.edu

    2015-11-01

    Nonhelical shear dynamos are studied with a particular focus on the possibility of coherent dynamo action. The primary results—serving as a follow up to the results of Squire and Bhattacharjee—pertain to the “magnetic shear-current effect” as a viable mechanism to drive large-scale magnetic field generation. This effect raises the interesting possibility that the saturated state of the small-scale dynamo could drive large-scale dynamo action, and is likely to be important in the unstratified regions of accretion disk turbulence. In this paper, the effect is studied at low Reynolds numbers, removing the complications of small-scale dynamo excitation and aiding analysis by enabling the use of quasi-linear statistical simulation methods. In addition to the magnetically driven dynamo, new results on the kinematic nonhelical shear dynamo are presented. These illustrate the relationship between coherent and incoherent driving in such dynamos, demonstrating the importance of rotation in determining the relative dominance of each mechanism.

  10. Scale-space measures for graph topology link protein network architecture to function.

    PubMed

    Hulsman, Marc; Dimitrakopoulos, Christos; de Ridder, Jeroen

    2014-06-15

    The network architecture of physical protein interactions is an important determinant for the molecular functions that are carried out within each cell. To study this relation, the network architecture can be characterized by graph topological characteristics such as shortest paths and network hubs. These characteristics have an important shortcoming: they do not take into account that interactions occur across different scales. This is important because some cellular functions may involve a single direct protein interaction (small scale), whereas others require more and/or indirect interactions, such as protein complexes (medium scale) and interactions between large modules of proteins (large scale). In this work, we derive generalized scale-aware versions of known graph topological measures based on diffusion kernels. We apply these to characterize the topology of networks across all scales simultaneously, generating a so-called graph topological scale-space. The comprehensive physical interaction network in yeast is used to show that scale-space based measures consistently give superior performance when distinguishing protein functional categories and three major types of functional interactions: genetic interaction, co-expression and perturbation interactions. Moreover, we demonstrate that graph topological scale spaces capture biologically meaningful features that provide new insights into the link between function and protein network architecture. Matlab(TM) code to calculate the scale-aware topological measures (STMs) is available at http://bioinformatics.tudelft.nl/TSSA. © The Author 2014. Published by Oxford University Press.
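
    A rough sketch of the underlying idea, evaluating diffusion kernels on a protein interaction graph at several diffusion times so that each time gives one "slice" of the scale-space; this is a generic Python/SciPy illustration, not the authors' Matlab code, and the per-node summary used here is a simplistic stand-in for the STMs:

    ```python
    # Sketch: a graph-topological "scale-space" from diffusion kernels at several scales.
    import networkx as nx
    from scipy.linalg import expm

    def diffusion_scale_space(G: nx.Graph, betas=(0.5, 1.0, 2.0, 4.0)):
        """For each diffusion time beta, return a per-node connectivity measure
        derived from the heat kernel K = expm(-beta * L)."""
        L = nx.laplacian_matrix(G).toarray().astype(float)
        measures = {}
        for beta in betas:
            K = expm(-beta * L)              # heat/diffusion kernel at scale beta
            measures[beta] = K.sum(axis=1)   # how much "heat" each node exchanges at this scale
        return measures
    ```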

  11. Milankovitch-scale correlations between deeply buried microbial populations and biogenic ooze lithology

    USGS Publications Warehouse

    Aiello, I.W.; Bekins, B.A.

    2010-01-01

    The recent discoveries of large, active populations of microbes in the subseafloor of the world's oceans support the impact of the deep biosphere biota on global biogeochemical cycles and raise important questions concerning the functioning of these extreme environments for life. These investigations demonstrated that subseafloor microbes are unevenly distributed and that cell abundances and metabolic activities are often independent from sediment depths, with increased prokaryotic activity at geochemical and/or sedimentary interfaces. In this study we demonstrate that microbial populations vary at the scale of individual beds in the biogenic oozes of a drill site in the eastern equatorial Pacific (Ocean Drilling Program Leg 201, Site 1226). We relate bedding-scale changes in biogenic ooze sediment composition to organic carbon (OC) and microbial cell concentrations using high-resolution color reflectance data as proxy for lithology. Our analyses demonstrate that microbial concentrations are an order of magnitude higher in the more organic-rich diatom oozes than in the nannofossil oozes. The variations mimic small-scale variations in diatom abundance and OC, indicating that the modern distribution of microbial biomass is ultimately controlled by Milankovitch-frequency variations in past oceanographic conditions. © 2010 Geological Society of America.

  12. Integration of Robotics and 3D Visualization to Modernize the Expeditionary Warfare Demonstrator (EWD)

    DTIC Science & Technology

    2009-09-01

    Indexed fragments of the thesis text: ... observers began learning about maneuver warfare in a large-scale battle. The demonstration was recognized as a huge success after General von Muffling ...

  13. The Brief Negative Symptom Scale (BNSS): Independent validation in a large sample of Italian patients with schizophrenia.

    PubMed

    Mucci, A; Galderisi, S; Merlotti, E; Rossi, A; Rocca, P; Bucci, P; Piegari, G; Chieffi, M; Vignapiano, A; Maj, M

    2015-07-01

    The Brief Negative Symptom Scale (BNSS) was developed to address the main limitations of the existing scales for the assessment of negative symptoms of schizophrenia. The initial validation of the scale by the group involved in its development demonstrated good convergent and discriminant validity, and a factor structure confirming the two domains of negative symptoms (reduced emotional/verbal expression and anhedonia/asociality/avolition). However, only relatively small samples of patients with schizophrenia were investigated. Further independent validation in large clinical samples might be instrumental to the broad diffusion of the scale in clinical research. The present study aimed to examine the BNSS inter-rater reliability, convergent/discriminant validity and factor structure in a large Italian sample of outpatients with schizophrenia. Our results confirmed the excellent inter-rater reliability of the BNSS (the intraclass correlation coefficient ranged from 0.81 to 0.98 for individual items and was 0.98 for the total score). The convergent validity measures had r values from 0.62 to 0.77, while the divergent validity measures had r values from 0.20 to 0.28 in the main sample (n=912) and in a subsample without clinically significant levels of depression and extrapyramidal symptoms (n=496). The BNSS factor structure was supported in both groups. The study confirms that the BNSS is a promising measure for quantifying negative symptoms of schizophrenia in large multicenter clinical studies. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  14. Scale-invariance underlying the logistic equation and its social applications

    NASA Astrophysics Data System (ADS)

    Hernando, A.; Plastino, A.

    2013-01-01

    On the basis of dynamical principles we i) advance a derivation of the Logistic Equation (LE), widely employed (among multiple applications) in the simulation of population growth, and ii) demonstrate that scale-invariance and a mean-value constraint are sufficient and necessary conditions for obtaining it. We also generalize the LE to multi-component systems and show that the above dynamical mechanisms underlie a large number of scale-free processes. Examples are presented regarding city-populations, diffusion in complex networks, and popularity of technological products, all of them obeying the multi-component logistic equation in an either stochastic or deterministic way.
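
    For reference, a hedged sketch of the equations at issue: the standard logistic equation for a population N(t) with growth rate r and carrying capacity K, together with an illustrative multi-component generalization of the kind alluded to in the abstract (the coupled form shown here is a common choice, not necessarily the exact form derived in the paper):

    ```latex
    \frac{dN}{dt} = r\,N\left(1 - \frac{N}{K}\right),
    \qquad
    \frac{dN_i}{dt} = r_i\,N_i\left(1 - \frac{\sum_j N_j}{K}\right),\quad i = 1,\dots,n .
    ```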

  15. A 17 GHz molecular rectifier

    PubMed Central

    Trasobares, J.; Vuillaume, D.; Théron, D.; Clément, N.

    2016-01-01

    Molecular electronics originally proposed that small molecules sandwiched between electrodes would accomplish electronic functions and enable ultimate scaling to be reached. However, so far, functional molecular devices have only been demonstrated at low frequency. Here, we demonstrate molecular diodes operating up to 17.8 GHz. Direct current and radio frequency (RF) properties were simultaneously measured on a large array of molecular junctions composed of gold nanocrystal electrodes, ferrocenyl undecanethiol molecules and the tip of an interferometric scanning microwave microscope. The present nanometre-scale molecular diodes offer a current density increase of several orders of magnitude compared with that of micrometre-scale molecular diodes, allowing RF operation. The measured S11 parameters show a diode rectification ratio of 12 dB which is linked to the rectification behaviour of the direct current conductance. From the RF measurements, we extrapolate a cut-off frequency of 520 GHz. A comparison with the silicon RF Schottky diode architecture suggests that the RF molecular diodes are extremely attractive for scaling and high-frequency operation. PMID:27694833

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lenee-Bluhm, P.; Rhinefrank, Ken

    The overarching project objective is to demonstrate the feasibility of using an innovative Power Take-Off (PTO) Module in Columbia Power's utility-scale wave energy converter (WEC). The PTO Module uniquely combines a large-diameter, direct-drive, rotary permanent magnet generator; a patent-pending rail-bearing system; and a corrosion-resistant fiber-reinforced-plastic structure.

  17. Impact of Addressing Accountability Demands in the United States

    ERIC Educational Resources Information Center

    Banta, Trudy W.

    2010-01-01

    Since 1970, quality assurance, or outcomes assessment, has provided guidance for improving pedagogy, curricula and student support programmes in the US. But evidence that student learning has improved remains elusive. Large-scale long-term studies are needed to demonstrate the effects of outcomes assessment on learning.

  18. A Multi-Scale Settlement Matching Algorithm Based on ARG

    NASA Astrophysics Data System (ADS)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on Attributed Relational Graphs (ARGs) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets by merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  19. Creation of current filaments in the solar corona

    NASA Technical Reports Server (NTRS)

    Mikic, Z.; Schnack, D. D.; Van Hoven, G.

    1989-01-01

    It has been suggested that the solar corona is heated by the dissipation of electric currents. The low value of the resistivity requires the magnetic field to have structure at very small length scales if this mechanism is to work. In this paper it is demonstrated that the coronal magnetic field acquires small-scale structure through the braiding produced by smooth, randomly phased, photospheric flows. The current density develops a filamentary structure and grows exponentially in time. Nonlinear processes in the ideal magnetohydrodynamic equations produce a cascade effect, in which the structure introduced by the flow at large length scales is transferred to smaller scales. If this process continues down to the resistive dissipation length scale, it would provide an effective mechanism for coronal heating.

  20. Turbofan engine demonstration of sensor failure detection

    NASA Technical Reports Server (NTRS)

    Merrill, Walter C.; Delaat, John C.; Abdelwahab, Mahmood

    1991-01-01

    In this paper, the results of a full-scale engine demonstration of a sensor failure detection algorithm are presented. The algorithm detects, isolates, and accommodates sensor failures using analytical redundancy. The experimental hardware, including the F100 engine, is described. Demonstration results were obtained over a large portion of a typical flight envelope for the F100 engine. They include both subsonic and supersonic conditions at both medium and full (non-afterburning) power. Estimated accuracy, minimum detectable levels of sensor failures, and failure accommodation performance for an F100 turbofan engine control system are discussed.
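
    A generic sketch of analytical-redundancy-based detection and accommodation (an illustration of the idea, not the F100 algorithm itself): a model-based estimate of each sensed quantity yields a residual, a residual exceeding its threshold flags the sensor as failed, and the estimate is substituted for the failed measurement:

    ```python
    # Sketch: residual-based sensor failure detection, isolation and accommodation.
    import numpy as np

    def detect_and_accommodate(measured, estimated, thresholds):
        """Compare measurements against analytical (model-based) estimates.

        Returns the accommodated signal vector and a boolean failure flag per sensor."""
        residuals = np.abs(np.asarray(measured) - np.asarray(estimated))
        failed = residuals > np.asarray(thresholds)            # detection and isolation
        accommodated = np.where(failed, estimated, measured)   # substitute estimates for failed sensors
        return accommodated, failed
    ```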

  1. SOURCE EXPLORER: Towards Web Browser Based Tools for Astronomical Source Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Young, M. D.; Hayashi, S.; Gopu, A.

    2014-05-01

    As a new generation of large format, high-resolution imagers comes online (ODI, DECAM, LSST, etc.) we are faced with the daunting prospect of astronomical images containing upwards of hundreds of thousands of identifiable sources. Visualizing and interacting with such large datasets using traditional astronomical tools appears infeasible, and a new approach is required. We present here a method for the display and analysis of arbitrarily large source datasets using dynamically scaling levels of detail, enabling scientists to rapidly move from large-scale spatial overviews down to the level of individual sources and everything in between. Based on the recognized standards of HTML5+JavaScript, we enable observers and archival users to interact with their images and sources from any modern computer without having to install specialized software. We demonstrate the ability to produce large-scale source lists from the images themselves, as well as overlaying data from publicly available sources (2MASS, GALEX, SDSS, etc.) or user-provided source lists. A high-availability cluster of computational nodes allows us to produce these source maps on demand, customized based on user input. User-generated source lists and maps are persistent across sessions and are available for further plotting, analysis, refinement, and culling.
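
    A hedged sketch of the level-of-detail idea described above (not the project's implementation; the catalog schema is hypothetical): bin the full source catalog onto a zoom-dependent grid and keep only the brightest sources per cell, so the number of objects sent to the browser stays bounded at every zoom level:

    ```python
    # Sketch: per-zoom-level decimation of a large astronomical source catalog.
    import pandas as pd

    def decimate_catalog(sources: pd.DataFrame, zoom: int, max_per_cell: int = 5) -> pd.DataFrame:
        """Keep the brightest `max_per_cell` sources in each cell of a zoom-dependent grid.

        `sources` is assumed to have 'ra', 'dec' and 'mag' columns (hypothetical schema)."""
        cell = 1.0 / (2 ** zoom)                      # cell size shrinks as zoom increases
        keyed = sources.assign(
            ix=(sources["ra"] // cell).astype(int),
            iy=(sources["dec"] // cell).astype(int),
        )
        # Brighter objects have smaller magnitudes, so sort ascending before taking the head.
        return (keyed.sort_values("mag")
                     .groupby(["ix", "iy"], group_keys=False)
                     .head(max_per_cell)
                     .drop(columns=["ix", "iy"]))
    ```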

  2. Maestro: an orchestration framework for large-scale WSN simulations.

    PubMed

    Riliskis, Laurynas; Osipov, Evgeny

    2014-03-18

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation.

  3. Fault Tolerant Frequent Pattern Mining

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shohdy, Sameh; Vishnu, Abhinav; Agrawal, Gagan

    FP-Growth algorithm is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and MPI advanced features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access, and incurring no memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed 20x average speed-up in comparison to Spark, establishing that a well designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.

  4. A forward-advancing wave expansion method for numerical solution of large-scale sound propagation problems

    NASA Astrophysics Data System (ADS)

    Rolla, L. Barrera; Rice, H. J.

    2006-09-01

    In this paper a "forward-advancing" field discretization method suitable for solving the Helmholtz equation in large-scale problems is proposed. The forward wave expansion method (FWEM) is derived from a highly efficient discretization procedure based on interpolation of wave functions known as the wave expansion method (WEM). The FWEM computes the propagated sound field by means of an exclusively forward advancing solution, neglecting the backscattered field. It is thus analogous to methods such as the (one way) parabolic equation method (PEM) (usually discretized using standard finite difference or finite element methods). These techniques do not require the inversion of large system matrices and thus enable the solution of large-scale acoustic problems where backscatter is not of interest. Calculations using the FWEM are presented for two propagation problems and compared with data computed from analytical and theoretical solutions; the comparisons show this forward approximation to be highly accurate. Examples of sound propagation over a screen in upwind and downwind refracting atmospheric conditions at low nodal spacings (0.2 per wavelength in the propagation direction) are also included to demonstrate the flexibility and efficiency of the method.

  5. Maestro: An Orchestration Framework for Large-Scale WSN Simulations

    PubMed Central

    Riliskis, Laurynas; Osipov, Evgeny

    2014-01-01

    Contemporary wireless sensor networks (WSNs) have evolved into large and complex systems and are one of the main technologies used in cyber-physical systems and the Internet of Things. Extensive research on WSNs has led to the development of diverse solutions at all levels of software architecture, including protocol stacks for communications. This multitude of solutions is due to the limited computational power and restrictions on energy consumption that must be accounted for when designing typical WSN systems. It is therefore challenging to develop, test and validate even small WSN applications, and this process can easily consume significant resources. Simulations are inexpensive tools for testing, verifying and generally experimenting with new technologies in a repeatable fashion. Consequently, as the size of the systems to be tested increases, so does the need for large-scale simulations. This article describes a tool called Maestro for the automation of large-scale simulation and investigates the feasibility of using cloud computing facilities for such a task. Using tools that are built into Maestro, we demonstrate a feasible approach for benchmarking cloud infrastructure in order to identify cloud Virtual Machine (VM) instances that provide an optimal balance of performance and cost for a given simulation. PMID:24647123

  6. Diversity of herbaceous plants and bacterial communities regulates soil resistome across forest biomes.

    PubMed

    Hu, Hang-Wei; Wang, Jun-Tao; Singh, Brajesh K; Liu, Yu-Rong; Chen, Yong-Liang; Zhang, Yu-Jing; He, Ji-Zheng

    2018-04-24

    Antibiotic resistance is ancient and prevalent in natural ecosystems and evolved long before the utilization of synthetic antibiotics started, but factors influencing the large-scale distribution patterns of natural antibiotic resistance genes (ARGs) remain largely unknown. Here, a large-scale investigation over 4000 km was performed to profile soil ARGs, plant communities and bacterial communities from 300 quadrats across five forest biomes with minimal human impact. We detected diverse and abundant ARGs in forests, including over 160 genes conferring resistance to eight major categories of antibiotics. The diversity of ARGs was strongly and positively correlated with the diversity of bacteria, herbaceous plants and mobile genetic elements (MGEs). The ARG composition was strongly correlated with the taxonomic structure of bacteria and herbs. Consistent with this strong correlation, structural equation modelling demonstrated that the positive effects of bacterial and herb communities on ARG patterns were maintained even when simultaneously accounting for multiple drivers (climate, spatial predictors and edaphic factors). These findings suggest a paradigm that the interactions between aboveground and belowground communities shape the large-scale distribution of soil resistomes, providing new knowledge for tackling the emerging environmental antibiotic resistance. © 2018 Society for Applied Microbiology and John Wiley & Sons Ltd.

  7. Neighborhood scale quantification of ecosystem goods and ...

    EPA Pesticide Factsheets

    Ecosystem goods and services (EGS) are those ecological structures and functions that humans can directly relate to their state of well-being. Ecosystem goods and services include, but are not limited to, a sufficient fresh water supply, fertile lands to produce agricultural products, shading, air and water of sufficient quality for designated uses, flood water retention, and places to recreate. The US Environmental Protection Agency (USEPA) Office of Research and Development’s Tampa Bay Ecosystem Services Demonstration Project (TBESDP) modeling efforts organized existing literature values for biophysical attributes and processes related to EGS. The goal was to develop a database for informing map-based EGS assessments for current and future land cover/use scenarios at multiple scales. This report serves as a demonstration of applying an EGS assessment approach at the large neighborhood scale (~1,000 acres of residential parcels plus common areas). Here, we present mapped inventories of ecosystem goods and services production at a neighborhood scale within the Tampa Bay, FL region. Comparisons of the inventory between two alternative neighborhood designs are presented as an example of how one might apply EGS concepts at this scale.

  8. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  9. Neutrino footprint in large scale structure

    NASA Astrophysics Data System (ADS)

    Garay, Carlos Peña; Verde, Licia; Jimenez, Raul

    2017-03-01

    Recent constraints on the sum of neutrino masses inferred by analyzing cosmological data show that detecting a non-zero neutrino mass is within reach of forthcoming cosmological surveys. Such a measurement will imply a direct determination of the absolute neutrino mass scale. Physically, the measurement relies on constraining the shape of the matter power spectrum below the neutrino free streaming scale: massive neutrinos erase power at these scales. However, detection of a lack of small-scale power from cosmological data could also be due to a host of other effects. It is therefore of paramount importance to validate neutrinos as the source of power suppression at small scales. We show that, independent of hierarchy, neutrinos always show a footprint on large, linear scales; the exact location and properties are fully specified by the measured power suppression (an astrophysical measurement) and the atmospheric neutrino mass splitting (a neutrino oscillation experiment measurement). This feature cannot be easily mimicked by systematic uncertainties in the cosmological data analysis or modifications in the cosmological model. Therefore the measurement of such a feature, up to 1% relative change in the power spectrum for extreme differences in the mass eigenstates mass ratios, is a smoking gun for confirming the determination of the absolute neutrino mass scale from cosmological observations. It also demonstrates the synergy between astrophysics and particle physics experiments.

  10. Performance of ceramic superconductors in magnetic bearings

    NASA Technical Reports Server (NTRS)

    Kirtley, James L., Jr.; Downer, James R.

    1993-01-01

    Magnetic bearings are large-scale applications of magnet technology, quite similar in certain ways to synchronous machinery. They require substantial flux density over relatively large volumes of space. Large flux density is required to have satisfactory force density. Satisfactory dynamic response requires that magnetic circuit permeances not be too large, implying large air gaps. Superconductors, which offer large magnetomotive forces and high flux density in low permeance circuits, appear to be desirable in these situations. Flux densities substantially in excess of those possible with iron can be produced, and no ferromagnetic material is required. Thus the inductance of active coils can be made low, indicating good dynamic response of the bearing system. The principal difficulty in using superconductors is, of course, the deep cryogenic temperatures at which they must operate. Because of the difficulties in working with liquid helium, the possibility of superconductors which can be operated in liquid nitrogen is thought to extend the number and range of applications of superconductivity. Critical temperatures of about 98 degrees Kelvin were demonstrated in a class of materials which are, in fact, ceramics. Quite a bit of public attention was attracted to these new materials. There is a difficulty with the ceramic superconducting materials which were developed to date. Current densities sufficient for use in large-scale applications have not been demonstrated. In order to be useful, superconductors must be capable of carrying substantial currents in the presence of large magnetic fields. The possible use of ceramic superconductors in magnetic bearings is investigated and discussed and requirements that must be achieved by superconductors operating at liquid nitrogen temperatures to make their use comparable with niobium-titanium superconductors operating at liquid helium temperatures are identified.

  11. Large-scale production of lentiviral vector in a closed system hollow fiber bioreactor

    PubMed Central

    Sheu, Jonathan; Beltzer, Jim; Fury, Brian; Wilczek, Katarzyna; Tobin, Steve; Falconer, Danny; Nolta, Jan; Bauer, Gerhard

    2015-01-01

    Lentiviral vectors are widely used in the field of gene therapy as an effective method for permanent gene delivery. While current methods of producing small scale vector batches for research purposes depend largely on culture flasks, the emergence and popularity of lentiviral vectors in translational, preclinical and clinical research has demanded their production on a much larger scale, a task that can be difficult to manage with the numbers of producer cell culture flasks required for large volumes of vector. To generate a large scale, partially closed system method for the manufacturing of clinical grade lentiviral vector suitable for the generation of induced pluripotent stem cells (iPSCs), we developed a method employing a hollow fiber bioreactor traditionally used for cell expansion. We have demonstrated the growth, transfection, and vector-producing capability of 293T producer cells in this system. Vector particle RNA titers after subsequent vector concentration yielded values comparable to lentiviral iPSC induction vector batches produced using traditional culture methods in 225 cm2 flasks (T225s) and in 10-layer cell factories (CF10s), while yielding a volume nearly 145 times larger than the yield from a T225 flask and nearly three times larger than the yield from a CF10. Employing a closed system hollow fiber bioreactor for vector production offers the possibility of manufacturing large quantities of gene therapy vector while minimizing reagent usage, equipment footprint, and open system manipulation. PMID:26151065

  12. Large-scale model quality assessment for improving protein tertiary structure prediction.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2015-06-15

    Sampling structural models and ranking them are the two major challenges of protein structure prediction. Traditional protein structure prediction methods generally use one or a few quality assessment (QA) methods to select the best-predicted models, which cannot consistently select relatively better models and rank a large number of models well. Here, we develop a novel large-scale model QA method in conjunction with model clustering to rank and select protein structural models. It applied an unprecedented 14 model QA methods to generate consensus model rankings, followed by model refinement based on model combination (i.e., averaging). Our experiment demonstrates that the large-scale model QA approach is more consistent and robust in selecting models of better quality than any individual QA method. Our method was blindly tested during the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as the MULTICOM group. It was officially ranked third out of all 143 human and server predictors according to the total scores of the first models predicted for 78 CASP11 protein domains and second according to the total scores of the best of the five models predicted for these domains. MULTICOM's outstanding performance in the extremely competitive 2014 CASP11 experiment proves that our large-scale QA approach together with model clustering is a promising solution to one of the two major problems in protein structure modeling. The web server is available at: http://sysbio.rnet.missouri.edu/multicom_cluster/human/. © The Author 2015. Published by Oxford University Press.
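    The consensus step can be pictured with a small sketch: normalise the scores from each QA method so none dominates, average them, and rank the models by the averaged score. The scores below are fabricated, and the normalise-then-average rule is an assumption standing in for the paper's exact combination scheme.

```python
# Toy consensus ranking over several quality-assessment (QA) methods.
import numpy as np

# rows = candidate structural models, columns = individual QA methods (fabricated scores)
qa_scores = np.array([
    [0.61, 0.55, 0.70],
    [0.72, 0.60, 0.66],
    [0.58, 0.52, 0.69],
])

# Min-max normalise each QA method so no single method dominates the consensus.
lo, hi = qa_scores.min(axis=0), qa_scores.max(axis=0)
normalised = (qa_scores - lo) / (hi - lo)

consensus = normalised.mean(axis=1)
ranking = np.argsort(-consensus)            # indices of models, best first
print("consensus scores:", consensus.round(3))
print("model ranking   :", ranking)
```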

  13. Soil organic matter composition from correlated thermal analysis and nuclear magnetic resonance data in Australian national inventory of agricultural soils

    NASA Astrophysics Data System (ADS)

    Moore, T. S.; Sanderman, J.; Baldock, J.; Plante, A. F.

    2016-12-01

    National-scale inventories typically include soil organic carbon (SOC) content, but not chemical composition or biogeochemical stability. Australia's Soil Carbon Research Programme (SCaRP) represents a national inventory of SOC content and composition in agricultural systems. The program used physical fractionation followed by 13C nuclear magnetic resonance (NMR) spectroscopy. While these techniques are highly effective, they are typically too expensive and time consuming for use in large-scale SOC monitoring. We seek to understand if analytical thermal analysis is a viable alternative. Coupled differential scanning calorimetry (DSC) and evolved gas analysis (CO2- and H2O-EGA) yields valuable data on SOC composition and stability via ramped combustion. The technique requires little training to use, and does not require fractionation or other sample pre-treatment. We analyzed 300 agricultural samples collected by SCaRP, divided into four fractions: whole soil, coarse particulates (POM), untreated mineral associated (HUM), and hydrofluoric acid (HF)-treated HUM. All samples were analyzed by DSC-EGA, but only the POM and HF-HUM fractions were analyzed by NMR. Multivariate statistical analyses were used to explore natural clustering in SOC composition and stability based on DSC-EGA data. A partial least-squares regression (PLSR) model was used to explore correlations among the NMR and DSC-EGA data. Correlations demonstrated regions of combustion attributable to specific functional groups, which may relate to SOC stability. We are increasingly challenged with developing an efficient technique to assess SOC composition and stability at large spatial and temporal scales. Correlations between NMR and DSC-EGA may demonstrate the viability of using thermal analysis in lieu of more demanding methods in future large-scale surveys, and may provide data that goes beyond chemical composition to better approach quantification of biogeochemical stability.
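    A hedged sketch of the PLSR step on synthetic data is shown below, using scikit-learn's PLSRegression. In the study the predictor block would hold DSC-EGA features and the response block NMR-derived composition variables; here both are random stand-ins, so only the mechanics of the fit are illustrated.

```python
# Partial least-squares regression linking a block of thermal-analysis features (X)
# to a block of NMR composition variables (Y); all data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 40))                            # 300 samples x 40 thermal features
true_w = rng.normal(size=(40, 3))
Y = X @ true_w + rng.normal(scale=0.5, size=(300, 3))     # 3 NMR-derived responses

pls = PLSRegression(n_components=5)
pls.fit(X, Y)
print("R^2 of the PLSR fit:", round(pls.score(X, Y), 3))
```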

  14. Using memory-efficient algorithm for large-scale time-domain modeling of surface plasmon polaritons propagation in organic light emitting diodes

    NASA Astrophysics Data System (ADS)

    Zakirov, Andrey; Belousov, Sergei; Valuev, Ilya; Levchenko, Vadim; Perepelkina, Anastasia; Zempo, Yasunari

    2017-10-01

    We demonstrate an efficient approach to numerical modeling of optical properties of large-scale structures with typical dimensions much greater than the wavelength of light. For this purpose, we use the finite-difference time-domain (FDTD) method enhanced with a memory efficient Locally Recursive non-Locally Asynchronous (LRnLA) algorithm called DiamondTorre and implemented for General Purpose Graphical Processing Units (GPGPU) architecture. We apply our approach to simulation of optical properties of organic light emitting diodes (OLEDs), which is an essential step in the process of designing OLEDs with improved efficiency. Specifically, we consider a problem of excitation and propagation of surface plasmon polaritons (SPPs) in a typical OLED, which is a challenging task given that SPP decay length can be about two orders of magnitude greater than the wavelength of excitation. We show that with our approach it is possible to extend the simulated volume size sufficiently so that SPP decay dynamics is accounted for. We further consider an OLED with periodically corrugated metallic cathode and show how the SPP decay length can be greatly reduced due to scattering off the corrugation. Ultimately, we compare the performance of our algorithm to the conventional FDTD and demonstrate that our approach can efficiently be used for large-scale FDTD simulations with the use of only a single GPGPU-powered workstation, which is not practically feasible with the conventional FDTD.

  15. Large-Scale Network Analysis of Whole-Brain Resting-State Functional Connectivity in Spinal Cord Injury: A Comparative Study.

    PubMed

    Kaushal, Mayank; Oni-Orisan, Akinwunmi; Chen, Gang; Li, Wenjun; Leschke, Jack; Ward, Doug; Kalinosky, Benjamin; Budde, Matthew; Schmit, Brian; Li, Shi-Jiang; Muqeet, Vaishnavi; Kurpad, Shekar

    2017-09-01

    Network analysis based on graph theory depicts the brain as a complex network that allows inspection of overall brain connectivity pattern and calculation of quantifiable network metrics. To date, large-scale network analysis has not been applied to resting-state functional networks in complete spinal cord injury (SCI) patients. To characterize modular reorganization of the whole brain into constituent nodes and compare network metrics between SCI and control subjects, 15 subjects with chronic complete cervical SCI and 15 neurologically intact controls were scanned. The data were preprocessed followed by parcellation of the brain into 116 regions of interest (ROI). Correlation analysis was performed between every ROI pair to construct connectivity matrices and ROIs were categorized into distinct modules. Subsequently, local efficiency (LE) and global efficiency (GE) network metrics were calculated at incremental cost thresholds. The application of a modularity algorithm organized the whole-brain resting-state functional network of the SCI and the control subjects into nine and seven modules, respectively. The individual modules differed across groups in terms of the number and the composition of constituent nodes. LE demonstrated a statistically significant decrease at multiple cost levels in SCI subjects. GE did not differ significantly between the two groups. The demonstration of modular architecture in both groups highlights the applicability of large-scale network analysis in studying complex brain networks. Comparing modules across groups revealed differences in number and membership of constituent nodes, indicating modular reorganization due to neural plasticity.
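    For readers unfamiliar with these metrics, the sketch below shows the generic pipeline on a random correlation matrix: threshold at a chosen cost, build a graph, detect modules, and compute global and local efficiency with networkx. It illustrates the standard graph measures only, not the study's preprocessing or statistics.

```python
# Graph metrics on a thresholded functional connectivity matrix (random data for illustration).
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
n_roi = 116
corr = np.corrcoef(rng.normal(size=(n_roi, 200)))          # stand-in 116-ROI correlation matrix
np.fill_diagonal(corr, 0.0)

cost = 0.10                                                 # keep the strongest 10% of connections
threshold = np.quantile(np.abs(corr), 1 - cost)
adjacency = (np.abs(corr) >= threshold).astype(int)
G = nx.from_numpy_array(adjacency)

modules = greedy_modularity_communities(G)
print("number of modules :", len(modules))
print("global efficiency :", round(nx.global_efficiency(G), 3))
print("local efficiency  :", round(nx.local_efficiency(G), 3))
```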

  16. Demonstration of reduced-order urban scale building energy models

    DOE PAGES

    Heidarinejad, Mohammad; Mattise, Nicholas; Dahlhausen, Matthew; ...

    2017-09-08

    The aim of this study is to demonstrate a developed framework to rapidly create urban scale reduced-order building energy models using a systematic summary of the simplifications required for the representation of building exterior and thermal zones. These urban scale reduced-order models rely on the contribution of influential variables to the internal, external, and system thermal loads. OpenStudio Application Programming Interface (API) serves as a tool to automate the process of model creation and demonstrate the developed framework. The results of this study show that the accuracy of the developed reduced-order building energy models varies only up to 10% with the selection of different thermal zones. In addition, to assess complexity of the developed reduced-order building energy models, this study develops a novel framework to quantify complexity of the building energy models. Consequently, this study empowers the building energy modelers to quantify their building energy model systematically in order to report the model complexity alongside the building energy model accuracy. An exhaustive analysis on four university campuses suggests that the urban neighborhood buildings lend themselves to simplified typical shapes. Specifically, building energy modelers can utilize the developed typical shapes to represent more than 80% of the U.S. buildings documented in the CBECS database. One main benefit of this framework is the opportunity for different models, including airflow and solar radiation models, to share the same exterior representation, allowing a unified exchange of data. Altogether, the results of this study have implications for large-scale modeling of buildings in support of urban energy consumption analyses or assessment of a large number of alternative solutions in support of retrofit decision-making in the building industry.

  17. Demonstration of reduced-order urban scale building energy models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heidarinejad, Mohammad; Mattise, Nicholas; Dahlhausen, Matthew

    The aim of this study is to demonstrate a developed framework to rapidly create urban scale reduced-order building energy models using a systematic summary of the simplifications required for the representation of building exterior and thermal zones. These urban scale reduced-order models rely on the contribution of influential variables to the internal, external, and system thermal loads. OpenStudio Application Programming Interface (API) serves as a tool to automate the process of model creation and demonstrate the developed framework. The results of this study show that the accuracy of the developed reduced-order building energy models varies only up to 10% with the selection of different thermal zones. In addition, to assess complexity of the developed reduced-order building energy models, this study develops a novel framework to quantify complexity of the building energy models. Consequently, this study empowers the building energy modelers to quantify their building energy model systematically in order to report the model complexity alongside the building energy model accuracy. An exhaustive analysis on four university campuses suggests that the urban neighborhood buildings lend themselves to simplified typical shapes. Specifically, building energy modelers can utilize the developed typical shapes to represent more than 80% of the U.S. buildings documented in the CBECS database. One main benefit of this framework is the opportunity for different models, including airflow and solar radiation models, to share the same exterior representation, allowing a unified exchange of data. Altogether, the results of this study have implications for large-scale modeling of buildings in support of urban energy consumption analyses or assessment of a large number of alternative solutions in support of retrofit decision-making in the building industry.

  18. Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth

    2017-12-01

    The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GP modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
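    The covariance form the method exploits can be written down directly: a mixture of exponentially damped sinusoids in the time lag. The sketch below evaluates that kernel densely, which costs O(N^2); the paper's contribution is a solver whose cost grows linearly with N, which this illustration does not reproduce. Coefficient values are arbitrary, chosen only to keep the matrix positive definite.

```python
# Dense evaluation of a "mixture of damped sinusoids" covariance for unevenly spaced times.
import numpy as np

def damped_sinusoid_kernel(t1, t2, terms):
    """terms: list of (a, b, c, d); k(tau) = sum_j exp(-c*tau) * (a*cos(d*tau) + b*sin(d*tau))."""
    tau = np.abs(t1[:, None] - t2[None, :])
    k = np.zeros_like(tau)
    for a, b, c, d in terms:
        k += np.exp(-c * tau) * (a * np.cos(d * tau) + b * np.sin(d * tau))
    return k

t = np.sort(np.random.default_rng(2).uniform(0, 100, 500))        # unevenly spaced observation times
terms = [(1.0, 0.1, 0.5, 2.0)]                                     # one stochastically driven oscillator
K = damped_sinusoid_kernel(t, t, terms) + 0.1**2 * np.eye(t.size)  # add white measurement noise
L = np.linalg.cholesky(K)                                          # usable as a GP covariance
print("dense covariance built for", t.size, "points")
```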

  19. Biogeochemistry and ecology of terrestrial ecosystems of Amazonia

    NASA Astrophysics Data System (ADS)

    Malhi, Yadvinder; Davidson, Eric A.

    The last decade of research associated with the Large-Scale Biosphere-Atmosphere Experiment in Amazonia (LBA) has led to substantial advances in our understanding of the biogeochemistry and ecology of Amazonian forests and savannas, in particular in relation to the carbon cycle of Amazonia. In this chapter, we present a synthesis of results and ideas that are presented in more detail in subsequent chapters, drawing together evidence from studies of forest ecology, ecophysiology, trace gas fluxes and atmospheric flux towers, large-scale rainfall manipulation experiments and soil surveys, satellite remote sensing, and quantification of carbon and nutrient stocks and flows. The studies have demonstrated the variability of the functioning and biogeochemistry of Amazonian forests at a range of spatial and temporal scales, and they provide clues as to how Amazonia will respond to ongoing direct pressure and global atmospheric change. We conclude by highlighting key questions for the next decade of research to address.

  20. First-principles electron transport with phonon coupling: Large scale at low cost

    NASA Astrophysics Data System (ADS)

    Gunst, Tue; Markussen, Troels; Palsgaard, Mattias L. N.; Stokbro, Kurt; Brandbyge, Mads

    2017-10-01

    Phonon-assisted tunneling plays a crucial role for electronic device performance and even more so with future size down-scaling. We show how one can include this effect in large-scale first-principles calculations using a single "special thermal displacement" (STD) of the atomic coordinates at almost the same cost as elastic transport calculations, by extending the recent method of Zacharias et al. [Phys. Rev. B 94, 075125 (2016), 10.1103/PhysRevB.94.075125] to the important case of Landauer conductance. We apply the method to ultrascaled silicon devices and demonstrate the importance of phonon-assisted band-to-band and source-to-drain tunneling. In a diode the phonons lead to a rectification ratio suppression in good agreement with experiments, while in an ultrathin body transistor the phonons increase off currents by four orders of magnitude, and the subthreshold swing by a factor of 4, in agreement with perturbation theory.

  1. Generation of large-scale magnetic fields, non-Gaussianity, and primordial gravitational waves in inflationary cosmology

    NASA Astrophysics Data System (ADS)

    Bamba, Kazuharu

    2015-02-01

    The generation of large-scale magnetic fields in inflationary cosmology is explored, in particular, in a kind of moduli inflation motivated by racetrack inflation in the context of the type IIB string theory. In this model, the conformal invariance of the hypercharge electromagnetic fields is broken thanks to the coupling of both the scalar and pseudoscalar fields to the hypercharge electromagnetic fields. The following three cosmological observable quantities are first evaluated: the current magnetic field strength on the Hubble horizon scale, which is much smaller than the upper limit from the backreaction problem, local non-Gaussianity of the curvature perturbations due to the existence of the massive gauge fields, and the tensor-to-scalar ratio. It is explicitly demonstrated that the resultant values of local non-Gaussianity and the tensor-to-scalar ratio are consistent with the Planck data.

  2. Large scale Brownian dynamics of confined suspensions of rigid particles

    NASA Astrophysics Data System (ADS)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar

    2017-12-01

    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose height above the wall is set by a combination of thermal noise and active flows. We find the existence of two populations of active particles, slower ones closer to the bottom and faster ones above them, and demonstrate that our method provides quantitative accuracy even with relatively coarse resolutions of the particle geometry.
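    A one-dimensional toy version conveys the random-finite-difference idea without the rigid multiblob machinery: the stochastic drift kT dM/dx is estimated from two mobility evaluations at randomly displaced configurations, so no analytic derivative or resistance solve is needed. The mobility model, parameters, and force below are invented purely for illustration.

```python
# Toy 1-D Euler-Maruyama Brownian dynamics step with a random-finite-difference (RFD) drift.
import numpy as np

kT, dt, delta = 1.0, 1e-3, 1e-4
rng = np.random.default_rng(3)

def mobility(x):
    # Invented configuration-dependent mobility (e.g. reduced near a wall at x = 0).
    return 1.0 / (1.0 + np.exp(-5.0 * x))

def step(x, force):
    w = rng.normal()        # Brownian increment
    w_rfd = rng.normal()    # independent displacement used only for the RFD
    drift = kT * (mobility(x + delta * w_rfd) - mobility(x)) * w_rfd / delta  # ~ kT dM/dx
    return (x
            + mobility(x) * force * dt                     # deterministic motion
            + drift * dt                                   # stochastic drift term
            + np.sqrt(2.0 * kT * mobility(x) * dt) * w)    # Brownian displacement

x = 1.0
for _ in range(1000):
    x = step(x, force=-1.0)                                # constant force, e.g. gravity
print("position after 1000 steps:", round(x, 3))
```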

  3. Environmentally induced amplitude death and firing provocation in large-scale networks of neuronal systems

    NASA Astrophysics Data System (ADS)

    Pankratova, Evgeniya V.; Kalyakulina, Alena I.

    2016-12-01

    We study the dynamics of multielement neuronal systems taking into account both the direct interaction between the cells via linear coupling and nondiffusive cell-to-cell communication via common environment. For the cells exhibiting individual bursting behavior, we have revealed the dependence of the network activity on its scale. Particularly, we show that small-scale networks demonstrate the inability to maintain complicated oscillations: for a small number of elements in an ensemble, the phenomenon of amplitude death is observed. The existence of threshold network scales and mechanisms causing firing in artificial and real multielement neural networks, as well as their significance for biological applications, are discussed.

  4. Programming Self-Assembly of DNA Origami Honeycomb Two-Dimensional Lattices and Plasmonic Metamaterials.

    PubMed

    Wang, Pengfei; Gaitanaros, Stavros; Lee, Seungwoo; Bathe, Mark; Shih, William M; Ke, Yonggang

    2016-06-22

    Scaffolded DNA origami has proven to be a versatile method for generating functional nanostructures with prescribed sub-100 nm shapes. Programming DNA-origami tiles to form large-scale 2D lattices that span hundreds of nanometers to the micrometer scale could provide an enabling platform for diverse applications ranging from metamaterials to surface-based biophysical assays. Toward this end, here we design a family of hexagonal DNA-origami tiles using computer-aided design and demonstrate successful self-assembly of micrometer-scale 2D honeycomb lattices and tubes by controlling their geometric and mechanical properties including their interconnecting strands. Our results offer insight into programmed self-assembly of low-defect supra-molecular DNA-origami 2D lattices and tubes. In addition, we demonstrate that these DNA-origami hexagon tiles and honeycomb lattices are versatile platforms for assembling optical metamaterials via programmable spatial arrangement of gold nanoparticles (AuNPs) into cluster and superlattice geometries.

  5. Scalable ion-photon quantum interface based on integrated diffractive mirrors

    NASA Astrophysics Data System (ADS)

    Ghadimi, Moji; Blūms, Valdis; Norton, Benjamin G.; Fisher, Paul M.; Connell, Steven C.; Amini, Jason M.; Volin, Curtis; Hayden, Harley; Pai, Chien-Shing; Kielpinski, David; Lobino, Mirko; Streed, Erik W.

    2017-12-01

    Quantum networking links quantum processors through remote entanglement for distributed quantum information processing and secure long-range communication. Trapped ions are a leading quantum information processing platform, having demonstrated universal small-scale processors and roadmaps for large-scale implementation. Overall rates of ion-photon entanglement generation, essential for remote trapped ion entanglement, are limited by coupling efficiency into single mode fibers and scaling to many ions. Here, we show a microfabricated trap with integrated diffractive mirrors that couples 4.1(6)% of the fluorescence from a 174Yb+ ion into a single mode fiber, nearly triple the demonstrated bulk optics efficiency. The integrated optic collects 5.8(8)% of the π transition fluorescence, images the ion with sub-wavelength resolution, and couples 71(5)% of the collected light into the fiber. Our technology is suitable for entangling multiple ions in parallel and overcomes mode quality limitations of existing integrated optical interconnects.

  6. Large Pilot-Scale Carbon Dioxide (CO2) Capture Project Using Aminosilicone Solvent. Final Scientific/Technical Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hancu, Dan

    GE Global Research has developed, over the last 8 years, a platform of cost-effective CO2 capture technologies based on a non-aqueous aminosilicone solvent (GAP-1m). As demonstrated in previously funded DOE projects (DE-FE0007502 and DE-FE0013755), the GAP-1m solvent has a higher CO2 working capacity and lower volatility and corrosivity than the benchmark aqueous amine technology. Performance of the GAP-1m solvent was recently demonstrated in a 0.5 MWe pilot at the National Carbon Capture Center, AL, with real flue gas for over 500 hours of operation using a Steam Stripper Column (SSC). The pilot-scale PSTU engineering data were used to (i) update the techno-economic analysis and EH&S assessment, (ii) perform a technology gap analysis, and (iii) conduct the solvent manufacturability and scale-up study.

  7. A measure of association for ordered categorical data in population-based studies

    PubMed Central

    Nelson, Kerrie P; Edwards, Don

    2016-01-01

    Ordinal classification scales are commonly used to define a patient’s disease status in screening and diagnostic tests such as mammography. Challenges arise in agreement studies when evaluating the association between many raters’ classifications of patients’ disease or health status when an ordered categorical scale is used. In this paper, we describe a population-based approach and chance-corrected measure of association to evaluate the strength of relationship between multiple raters’ ordinal classifications where any number of raters can be accommodated. In contrast to Shrout and Fleiss’ intraclass correlation coefficient, the proposed measure of association is invariant with respect to changes in disease prevalence. We demonstrate how unique characteristics of individual raters can be explored using random effects. Simulation studies are conducted to demonstrate the properties of the proposed method under varying assumptions. The methods are applied to two large-scale agreement studies of breast cancer screening and prostate cancer severity. PMID:27184590
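    As a point of reference for chance correction with many raters, the sketch below computes Fleiss' kappa on a small fabricated table of ordinal ratings. This is only a familiar stand-in; it is not the paper's model-based, prevalence-invariant measure and it ignores the ordering of the categories.

```python
# Fleiss' kappa for multiple raters (stand-in illustration of chance-corrected agreement).
import numpy as np

# counts[i, j] = number of raters assigning subject i to category j (fabricated data)
counts = np.array([
    [5, 1, 0],
    [4, 2, 0],
    [0, 1, 5],
    [1, 4, 1],
])
n_subjects, m = counts.shape[0], int(counts.sum(axis=1)[0])   # 4 subjects, 6 raters each

p_j = counts.sum(axis=0) / (n_subjects * m)                   # overall category proportions
P_i = (np.sum(counts**2, axis=1) - m) / (m * (m - 1))         # per-subject observed agreement
kappa = (P_i.mean() - np.sum(p_j**2)) / (1 - np.sum(p_j**2))
print("Fleiss' kappa:", round(kappa, 3))
```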

  8. Spatial Modeling and Uncertainty Assessment of Fine Scale Surface Processes Based on Coarse Terrain Elevation Data

    NASA Astrophysics Data System (ADS)

    Rasera, L. G.; Mariethoz, G.; Lane, S. N.

    2017-12-01

    Frequent acquisition of high-resolution digital elevation models (HR-DEMs) over large areas is expensive and difficult. Satellite-derived low-resolution digital elevation models (LR-DEMs) provide extensive coverage of Earth's surface but at coarser spatial and temporal resolutions. Although useful for large scale problems, LR-DEMs are not suitable for modeling hydrologic and geomorphic processes at scales smaller than their spatial resolution. In this work, we present a multiple-point geostatistical approach for downscaling a target LR-DEM based on available high-resolution training data and recurrent high-resolution remote sensing images. The method aims at generating several equiprobable HR-DEMs conditioned to a given target LR-DEM by borrowing small scale topographic patterns from an analogue containing data at both coarse and fine scales. An application of the methodology is demonstrated by using an ensemble of simulated HR-DEMs as input to a flow-routing algorithm. The proposed framework enables a probabilistic assessment of the spatial structures generated by natural phenomena operating at scales finer than the available terrain elevation measurements. A case study in the Swiss Alps is provided to illustrate the methodology.

  9. High Fidelity Modeling of Turbulent Mixing and Chemical Kinetics Interactions in a Post-Detonation Flow Field

    NASA Astrophysics Data System (ADS)

    Sinha, Neeraj; Zambon, Andrea; Ott, James; Demagistris, Michael

    2015-06-01

    Driven by the continuing rapid advances in high-performance computing, multi-dimensional high-fidelity modeling is an increasingly reliable predictive tool capable of providing valuable physical insight into complex post-detonation reacting flow fields. Utilizing a series of test cases featuring blast waves interacting with combustible dispersed clouds in a small-scale test setup under well-controlled conditions, the predictive capabilities of a state-of-the-art code are demonstrated and validated. Leveraging physics-based, first principle models and solving large system of equations on highly-resolved grids, the combined effects of finite-rate/multi-phase chemical processes (including thermal ignition), turbulent mixing and shock interactions are captured across the spectrum of relevant time-scales and length scales. Since many scales of motion are generated in a post-detonation environment, even if the initial ambient conditions are quiescent, turbulent mixing plays a major role in the fireball afterburning as well as in dispersion, mixing, ignition and burn-out of combustible clouds in its vicinity. Validating these capabilities at the small scale is critical to establish a reliable predictive tool applicable to more complex and large-scale geometries of practical interest.

  10. Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES

    NASA Astrophysics Data System (ADS)

    Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu

    2016-11-01

    Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, continental slope, and large scale eddies. Capturing the wide range of length and time scales involved during the life-cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time stepping process.

  11. Large-scale atomistic simulations demonstrate dominant alloy disorder effects in GaBixAs1-x/GaAs multiple quantum wells

    NASA Astrophysics Data System (ADS)

    Usman, Muhammad

    2018-04-01

    Bismide semiconductor materials and heterostructures are considered a promising candidate for the design and implementation of photonic, thermoelectric, photovoltaic, and spintronic devices. This work presents a detailed theoretical study of the electronic and optical properties of strongly coupled GaBixAs1-x/GaAs multiple quantum well (MQW) structures. Based on a systematic set of large-scale atomistic tight-binding calculations, our results reveal that the impact of atomic-scale fluctuations in alloy composition is stronger than the interwell coupling effect, and plays an important role in the electronic and optical properties of the investigated MQW structures. Independent of QW geometry parameters, alloy disorder leads to a strong confinement of charge carriers, a large broadening of the hole energies, and a red-shift in the ground-state transition wavelength. Polarization-resolved optical transition strengths exhibit a striking effect of disorder, where the inhomogeneous broadening could exceed an order of magnitude for MQWs, in comparison to a factor of about 3 for single QWs. The strong influence of alloy disorder effects persists when small variations in the size and composition of MQWs typically expected in a realistic experimental environment are considered. The presented results highlight the limited scope of continuum methods and emphasize the need for large-scale atomistic approaches to design devices with tailored functionalities based on the novel properties of bismide materials.

  12. Large scale simulation of liquid water transport in a gas diffusion layer of polymer electrolyte membrane fuel cells using the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi

    2017-09-01

    A method for large-scale simulation of liquid water movement in a gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells with the lattice Boltzmann method (LBM) is proposed. The LBM is able to analyze two-phase flows in complex structures; however, the simulation domain is limited due to heavy computational loads. This study investigates a variety of means to reduce computational loads and increase the simulation area. One is applying an LBM that treats the two phases as having the same density, while keeping numerical stability with large time steps. The applicability of this approach is confirmed by comparing the results with rigorous simulations using the actual density. The second is establishing the maximum limit of the Capillary number that maintains flow patterns similar to the precise simulation; this is attempted because the computational load is inversely proportional to the Capillary number. The results show that the Capillary number can be increased to 3.0 × 10^-3, whereas actual operation corresponds to Ca = 10^-5 to 10^-8. The limit is also investigated experimentally using an enlarged-scale model satisfying similarity conditions for the flow. Finally, a demonstration is made of the effects of pore uniformity in the GDL as an example of a large-scale simulation covering a channel.
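    Since the computational load is taken to scale as 1/Ca, the gain from raising the Capillary number to the reported limit is easy to quantify. The property values below are generic order-of-magnitude figures for liquid water, not parameters from the study.

```python
# Capillary number Ca = mu * u / sigma and the implied reduction in computational load.
mu = 1.0e-3        # dynamic viscosity of water, Pa*s (illustrative)
sigma = 0.072      # water/air surface tension, N/m (illustrative)

def capillary_number(velocity):
    return mu * velocity / sigma

ca_operation = 1.0e-6          # a representative operating value within 10^-5 .. 10^-8
ca_simulation = 3.0e-3         # limit found to preserve the flow pattern
print("load reduction from raising Ca:", ca_simulation / ca_operation)   # ~3000x
```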

  13. Low-Cost and Scaled-Up Production of Fluorine-Free, Substrate-Independent, Large-Area Superhydrophobic Coatings Based on Hydroxyapatite Nanowire Bundles.

    PubMed

    Chen, Fei-Fei; Yang, Zi-Yue; Zhu, Ying-Jie; Xiong, Zhi-Chao; Dong, Li-Ying; Lu, Bing-Qiang; Wu, Jin; Yang, Ri-Long

    2018-01-09

    To date, the scaled-up production and large-area applications of superhydrophobic coatings are limited because of complicated procedures, environmentally harmful fluorinated compounds, restrictive substrates, expensive equipment, and raw materials usually involved in the fabrication process. Herein, the facile, low-cost, and green production of superhydrophobic coatings based on hydroxyapatite nanowire bundles (HNBs) is reported. Hydrophobic HNBs are synthesised by using a one-step solvothermal method with oleic acid as the structure-directing and hydrophobic agent. During the reaction process, highly hydrophobic C-H groups of oleic acid molecules can be attached in situ to the surface of HNBs through the chelate interaction between Ca2+ ions and carboxylic groups. This facile synthetic method allows the scaled-up production of HNBs up to about 8 L, which is the largest production scale of superhydrophobic paint based on HNBs ever reported. In addition, the design of the 100 L reaction system is also shown. The HNBs can be coated on any substrate with an arbitrary shape by the spray-coating technique. The self-cleaning ability in air and oil, high-temperature stability, and excellent mechanical durability of the as-prepared superhydrophobic coatings are demonstrated. More importantly, the HNBs are coated on large-sized practical objects to form large-area superhydrophobic coatings. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Mesoderm Lineage 3D Tissue Constructs Are Produced at Large-Scale in a 3D Stem Cell Bioprocess.

    PubMed

    Cha, Jae Min; Mantalaris, Athanasios; Jung, Sunyoung; Ji, Yurim; Bang, Oh Young; Bae, Hojae

    2017-09-01

    Various studies have presented different approaches to direct pluripotent stem cell differentiation, such as applying defined sets of exogenous biochemical signals and genetic/epigenetic modifications. Although differentiation to target lineages can be successfully regulated, such conventional methods are often complicated, laborious, and not cost-effective enough to be employed in the large-scale production of 3D stem cell-based tissue constructs. A 3D-culture platform that could realize the large-scale production of mesoderm lineage tissue constructs from embryonic stem cells (ESCs) is developed. ESCs are cultured using our previously established 3D-bioprocess platform which is amenable to mass-production of 3D ESC-based tissue constructs. Hepatocarcinoma cell line conditioned medium is introduced to the large-scale 3D culture to provide a specific biomolecular microenvironment to mimic the in vivo mesoderm formation process. After a 5-day spontaneous differentiation period, the resulting 3D tissue constructs are composed of multipotent mesodermal progenitor cells, verified by gene and molecular expression profiles. Subsequently, the optimal time points to trigger terminal differentiation towards cardiomyogenesis or osteogenesis from the mesodermal tissue constructs are found. A simple and affordable 3D ESC bioprocess that can achieve scalable production of mesoderm-origin tissues with significantly improved corresponding tissue properties is demonstrated. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. TREATMENT OF A PENTACHLOROPHENOL AND CREOSOTE-CONTAMINATED SOIL USING THE LIGNIN-DEGRADING FUNGUS PHANEROCHAETE SORDIDA: A FIELD DEMONSTRATION

    EPA Science Inventory

    The feasibility of large-scale fungal bioaugmentation was evaluated by assessing the ability of the lignin-degrading fungus Phanerochaete sordida to decrease the soil concentrations of pentachlorophenol (PCP) and 13 priority pollutant polynuclear aromatic (PNA) creosote component...

  16. Thailand national programme of the Earth Resources Technology Satellite

    NASA Technical Reports Server (NTRS)

    Sabhasri, S. (Principal Investigator)

    1976-01-01

    The author has identified the following significant results. The study on locating hill tribe villages from LANDSAT imagery was successful and exceeded the initial expectations. Results of the study on land use and forest mapping using Skylab data demonstrated the capability and feasibility of large scale mapping with high accuracy.

  17. Dust from southern Africa: rate of emission and biogeochemical properties

    USDA-ARS?s Scientific Manuscript database

    The stabilized linear dunefields in the southern Kalahari show signs of reactivation due to reduced vegetation cover owing to drought and/or overgrazing. It has been demonstrated with a laboratory dust generator that the southern Kalahari soils are good emitters of dust and that large-scale dune rea...

  18. State Test Results Are Predictable

    ERIC Educational Resources Information Center

    Tienken, Christopher H.

    2014-01-01

    Out-of-school, community demographic and family-level variables have an important influence on student achievement as measured by large-scale standardized tests. Studies described here demonstrated that about half of the test score is accounted for by variables outside the control of teachers and school administrators. The results from these…

  19. Analyzing hydrotreated renewable jet fuel (HRJ) feedstock availability using crop simulation modeling

    USDA-ARS?s Scientific Manuscript database

    While hydrotreated renewable jet fuel (HRJ) has been demonstrated for use in commercial and military aviation, a challenge to large-scale adoption is availability of cost competitive feedstocks. Brassica oilseed crops like Brassica napus, B. rapa, B. juncea, B. carinata, Sinapis alba, and Camelina s...

  20. Control of rabbit myxomatosis in Poland.

    PubMed

    Górski, J; Mizak, B; Chrobocińska, M

    1994-09-01

    The authors present an epizootiological analysis of myxomatosis in Poland. The biological, physical and chemical properties of virus strains used for the production and control of 'Myxovac M' vaccine are discussed. The long-term stability, safety and efficacy of the vaccine are demonstrated. The results of laboratory experiments were confirmed by large-scale field observations.

  1. ENVIRONMENTALLY STRATIFIED SAMPLING DESIGN FOR THE DEVELOPMENT OF THE GREAT LAKES ENVIRONMENTAL INDICATORS

    EPA Science Inventory

    Ecological indicators must be shown to be responsive to stress. For large-scale observational studies the best way to demonstrate responsiveness is by evaluating indicators along a gradient of stress, but such gradients are often unknown for a population of sites prior to site se...

  2. A genome-wide association study platform built on iPlant cyber-infrastructure

    USDA-ARS?s Scientific Manuscript database

    We demonstrated a flexible Genome-Wide Association (GWA) Study (GWAS) platform built upon the iPlant Collaborative Cyber-infrastructure. The platform supports big data management, sharing, and large scale study of both genotype and phenotype data on clusters. End users can add their own analysis too...

  3. Cloud Computing for Complex Performance Codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Appel, Gordon John; Hadgu, Teklu; Klein, Brandon Thorin

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 was to demonstrate that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  4. Flexible, High-Speed CdSe Nanocrystal Integrated Circuits.

    PubMed

    Stinner, F Scott; Lai, Yuming; Straus, Daniel B; Diroll, Benjamin T; Kim, David K; Murray, Christopher B; Kagan, Cherie R

    2015-10-14

    We report large-area, flexible, high-speed analog and digital colloidal CdSe nanocrystal integrated circuits operating at low voltages. Using photolithography and a newly developed process to fabricate vertical interconnect access holes, we scale down device dimensions, reducing parasitic capacitances and increasing the frequency of circuit operation, and scale up device fabrication over 4 in. flexible substrates. We demonstrate amplifiers with ∼7 kHz bandwidth, ring oscillators with <10 μs stage delays, and NAND and NOR logic gates.

  5. Synchrony in broadband fluctuation and the 2008 financial crisis.

    PubMed

    Lin, Der Chyan

    2013-01-01

    We propose phase-like characteristics in scale-free broadband processes and consider fluctuation synchrony based on the temporal signature of significant amplitude fluctuation. Using the wavelet transform, we demonstrate successful capture of similar fluctuation patterns between such broadband processes. Application to the financial data leading up to the 2008 financial crisis reveals a transition towards a qualitatively different dynamical regime, with many equity prices in fluctuation synchrony. Further analysis suggests an underlying scale-free "price fluctuation network" with a large clustering coefficient.
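
    The general idea of comparing the timing of large wavelet-amplitude fluctuations between two broadband series can be illustrated with a small sketch. This is not the paper's method: the Ricker wavelet, the 95th-percentile event threshold, the Jaccard overlap score and the synthetic random-walk data below are illustrative assumptions only.

      # Sketch: detect times of unusually large multi-scale wavelet amplitude in two
      # series and score how often those event times coincide (crude "synchrony").
      import numpy as np

      rng = np.random.default_rng(0)
      n = 2048
      common = np.cumsum(rng.normal(size=n))            # shared broadband component
      x = common + 0.5 * np.cumsum(rng.normal(size=n))
      y = common + 0.5 * np.cumsum(rng.normal(size=n))

      def ricker(width, length=None):
          """Mexican-hat wavelet sampled on a symmetric grid (assumed wavelet choice)."""
          length = length or 10 * width
          t = np.arange(length) - (length - 1) / 2.0
          a = t / width
          return (1 - a ** 2) * np.exp(-(a ** 2) / 2)

      def fluctuation_mask(series, widths=(8, 16, 32, 64), quantile=95):
          """Times at which the scale-averaged wavelet amplitude is unusually large."""
          amp = np.mean([np.abs(np.convolve(series, ricker(w), mode="same"))
                         for w in widths], axis=0)
          return amp > np.percentile(amp, quantile)

      mx, my = fluctuation_mask(x), fluctuation_mask(y)
      jaccard = (mx & my).sum() / max((mx | my).sum(), 1)
      print(f"fluctuation-synchrony (Jaccard overlap of event times): {jaccard:.2f}")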

  6. Fast-dynamo action in unsteady flows and maps in three dimensions

    NASA Technical Reports Server (NTRS)

    Bayly, B. J.; Childress, S.

    1987-01-01

    Unsteady fast-dynamo action is obtained in a family of stretch-fold-shear maps applied to a spatially periodic magnetic field in three dimensions. Exponential growth of a mean field in the limit of vanishing diffusivity is demonstrated by a numerical method which alternates instantaneous deformations with molecular diffusion over a finite time interval. Analysis indicates that the dynamo is a coherent feature of the large scales, essentially independent of the cascade of structure to small scales.

  7. Relay discovery and selection for large-scale P2P streaming

    PubMed Central

    Zhang, Chengwei; Wang, Angela Yunxian

    2017-01-01

    In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers’ network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used “best-out-of-K” selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs. PMID:28410384

  8. Relay discovery and selection for large-scale P2P streaming.

    PubMed

    Zhang, Chengwei; Wang, Angela Yunxian; Hei, Xiaojun

    2017-01-01

    In peer-to-peer networks, application relays have been commonly used to provide various networking services. The service performance often improves significantly if a relay is selected appropriately based on its network location. In this paper, we studied the location-aware relay discovery and selection problem for large-scale P2P streaming networks. In these large-scale and dynamic overlays, it incurs significant communication and computation cost to discover a sufficiently large relay candidate set and further to select one relay with good performance. The network location can be measured directly or indirectly with the tradeoffs between timeliness, overhead and accuracy. Based on a measurement study and the associated error analysis, we demonstrate that indirect measurements, such as King and Internet Coordinate Systems (ICS), can only achieve a coarse estimation of peers' network location and those methods based on pure indirect measurements cannot lead to a good relay selection. We also demonstrate that there exists significant error amplification of the commonly used "best-out-of-K" selection methodology using three RTT data sets publicly available. We propose a two-phase approach to achieve efficient relay discovery and accurate relay selection. Indirect measurements are used to narrow down a small number of high-quality relay candidates and the final relay selection is refined based on direct probing. This two-phase approach enjoys an efficient implementation using the Distributed-Hash-Table (DHT). When the DHT is constructed, the node keys carry the location information and they are generated scalably using indirect measurements, such as the ICS coordinates. The relay discovery is achieved efficiently utilizing the DHT-based search. We evaluated various aspects of this DHT-based approach, including the DHT indexing procedure, key generation under peer churn and message costs.
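
    The two-phase selection described in this record can be sketched in a few lines: cheap, coarse indirect estimates shortlist candidates, and only the shortlist is probed directly. Everything below (the simulated RTTs, the error level of the coordinate-style estimate, the probe() helper and the value of K) is a made-up assumption for illustration, not the paper's ICS/DHT implementation.

      # Sketch of "indirect shortlist, then direct probe" relay selection.
      import random

      random.seed(1)
      relays = {f"relay-{i}": random.uniform(10, 200) for i in range(500)}  # true RTT (ms)

      def estimated_rtt(true_rtt):
          """Indirect, coordinate-style estimate: cheap but with large relative error."""
          return true_rtt * random.uniform(0.5, 1.5)

      def probe(true_rtt):
          """Direct probe: accurate up to small measurement noise, but costly to run widely."""
          return true_rtt + random.uniform(-1, 1)

      # Phase 1: shortlist K candidates using the cheap indirect estimates.
      K = 10
      shortlist = sorted(relays, key=lambda r: estimated_rtt(relays[r]))[:K]

      # Phase 2: probe only the shortlist directly and pick the best relay.
      best = min(shortlist, key=lambda r: probe(relays[r]))
      print(best, f"{relays[best]:.1f} ms (optimum {min(relays.values()):.1f} ms)")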

  9. Using SQL Databases for Sequence Similarity Searching and Analysis.

    PubMed

    Pearson, William R; Mackey, Aaron J

    2017-09-13

    Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. © 2017 by John Wiley & Sons, Inc.
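
    The kind of relational layout the unit relies on can be illustrated with a toy in-memory example. The table and column names below are invented for this sketch; the actual seqdb_demo and search_demo schemas are defined in the protocols themselves.

      # Sketch: store similarity-search hits relationally and summarize best hits per taxon.
      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
      CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, length INTEGER);
      CREATE TABLE hit (query_acc TEXT, subject_acc TEXT, evalue REAL, bits REAL,
                        FOREIGN KEY(query_acc) REFERENCES protein(acc),
                        FOREIGN KEY(subject_acc) REFERENCES protein(acc));
      """)
      con.executemany("INSERT INTO protein VALUES (?,?,?)", [
          ("ECOLI_0001", "E. coli", 312), ("YEAST_0042", "S. cerevisiae", 330),
          ("HUMAN_0007", "H. sapiens", 345),
      ])
      con.executemany("INSERT INTO hit VALUES (?,?,?,?)", [
          ("ECOLI_0001", "YEAST_0042", 1e-42, 160.0),
          ("ECOLI_0001", "HUMAN_0007", 1e-30, 120.0),
      ])
      # Best hit per subject taxon for one query protein.
      for row in con.execute("""
          SELECT p.taxon, MIN(h.evalue), MAX(h.bits)
          FROM hit h JOIN protein p ON p.acc = h.subject_acc
          WHERE h.query_acc = 'ECOLI_0001'
          GROUP BY p.taxon ORDER BY MIN(h.evalue)"""):
          print(row)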

  10. Geospatial Augmented Reality for the interactive exploitation of large-scale walkable orthoimage maps in museums

    NASA Astrophysics Data System (ADS)

    Wüest, Robert; Nebiker, Stephan

    2018-05-01

    In this paper we present an app framework for augmenting large-scale walkable maps and orthoimages in museums or public spaces using standard smartphones and tablets. We first introduce a novel approach for using huge orthoimage mosaic floor prints covering several hundred square meters as natural Augmented Reality (AR) markers. We then present a new app architecture and subsequent tests in the Swissarena of the Swiss National Transport Museum in Lucerne demonstrating the capabilities of accurately tracking and augmenting different map topics, including dynamic 3d data such as live air traffic. The resulting prototype was tested with everyday visitors of the museum to get feedback on the usability of the AR app and to identify pitfalls when using AR in the context of a potentially crowded museum. The prototype is to be rolled out to the public after successful testing and optimization of the app. We were able to show that AR apps on standard smartphone devices can dramatically enhance the interactive use of large-scale maps for different purposes such as education or serious gaming in a museum context.

  11. A novel bonding method for large scale poly(methyl methacrylate) micro- and nanofluidic chip fabrication

    NASA Astrophysics Data System (ADS)

    Qu, Xingtian; Li, Jinlai; Yin, Zhifu

    2018-04-01

    Micro- and nanofluidic chips are becoming increasingly significant for biological and medical applications. Future advances in micro- and nanofluidics and their use in commercial applications depend on the development and fabrication of low-cost, high-fidelity, large-scale plastic micro- and nanofluidic chips. However, the majority of present fabrication methods suffer from a low bonding rate during the thermal bonding process, owing to air trapped between the substrate and the cover plate. In the present work, a novel bonding technique based on Ar plasma and water treatment was proposed to fully bond large-scale micro- and nanofluidic chips. The influence of Ar plasma parameters on the water contact angle and the effect of bonding conditions on the bonding rate and the bonding strength of the chip were studied. Fluorescence tests demonstrate that a 5 × 5 cm2 poly(methyl methacrylate) chip with 180 nm wide and 180 nm deep nanochannels can be fabricated without any blockage or leakage by our newly developed method.

  12. Large-area, lightweight and thick biomimetic composites with superior material properties via fast, economic, and green pathways.

    PubMed

    Walther, Andreas; Bjurhager, Ingela; Malho, Jani-Markus; Pere, Jaakko; Ruokolainen, Janne; Berglund, Lars A; Ikkala, Olli

    2010-08-11

    Although remarkable success has been achieved to mimic the mechanically excellent structure of nacre in laboratory-scale models, it remains difficult to foresee mainstream applications due to time-consuming sequential depositions or energy-intensive processes. Here, we introduce a surprisingly simple and rapid methodology for large-area, lightweight, and thick nacre-mimetic films and laminates with superior material properties. Nanoclay sheets with soft polymer coatings are used as ideal building blocks with intrinsic hard/soft character. They are forced to rapidly self-assemble into aligned nacre-mimetic films via paper-making, doctor-blading or simple painting, giving rise to strong and thick films with tensile modulus of 45 GPa and strength of 250 MPa, that is, partly exceeding nacre. The concepts are environmentally friendly, energy-efficient, and economic and are ready for scale-up via continuous roll-to-roll processes. Excellent gas barrier properties, optical translucency, and extraordinary shape-persistent fire-resistance are demonstrated. We foresee advanced large-scale biomimetic materials, relevant for lightweight sustainable construction and energy-efficient transportation.

  13. Mild clinical involvement in two males with a large FMR1 premutation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagerman, R.; O'Connor, R.; Staley, L.

    1994-09-01

    Both male and female individuals who carry the FMR1 premutation are considered to be clinically unaffected and have been reported to have normal transcription of their FMR1 gene and normal FMR1 protein (FMRP) production. We have evaluated two males who are mildly affected clinically with features of fragile X syndrome and demonstrate a large premutation on DNA studies. The first patient is a 2 year 8 month old boy who demonstrated the fragile X chromosome in 3% of his lymphocytes on cytogenetic testing. His physical features include mildly prominent ears and hyperextensible finger joints. He has language delays along with behavioral problems including tantrums and attention deficit. Developmental testing revealed a mental scale of 116 on the Bayley Scales of Infant Development, which is in the normal range. DNA testing demonstrated a premutation with 161 CGG repeats. This premutation was methylated in a small percentage of his cells (<2%). These findings were observed in both blood leukocytes and buccal cells. Protein studies of transformed lymphocytes from this boy showed approximately 50 to 70% of the normal level of FMRP. The second patient is a 14 year old male who was cytogenetically negative for fragile X expression. His physical exam demonstrates a long face, a high palate and macroorchidism (testicular volume of approximately 35 ml). His overall full-scale IQ on the WISC-III is 73. He has language deficits and visual spatial perceptual deficits which have caused significant learning problems in school. Behaviorally he has problems with shyness and social anxiety, although he does not have attention deficit hyperactivity disorder. DNA testing revealed an FMR1 mutation of approximately 210 CGG repeats that is methylated in 4.7% of his cells.

  14. Test of the CLAS12 RICH large-scale prototype in the direct proximity focusing configuration

    DOE PAGES

    Anefalos Pereira, S.; Baltzell, N.; Barion, L.; ...

    2016-02-11

    A large area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c up to 8 GeV/c for the CLAS12 experiments at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Laboratory. The adopted solution foresees a novel hybrid optics design based on an aerogel radiator, composite mirrors and highly packed, highly segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large angle tracks). We report here the results of the tests of a large scale prototype of the RICH detector performed with the hadron beam of the CERN T9 experimental hall for the direct detection configuration. The tests demonstrated that the proposed design provides the required pion-to-kaon rejection factor of 1:500 in the whole momentum range.

  15. Stochastic dynamics of genetic broadcasting networks

    NASA Astrophysics Data System (ADS)

    Potoyan, Davit A.; Wolynes, Peter G.

    2017-11-01

    The complex genetic programs of eukaryotic cells are often regulated by key transcription factors occupying or clearing out of a large number of genomic locations. Orchestrating the residence times of these factors is therefore important for the well organized functioning of a large network. The classic models of genetic switches sidestep this timing issue by assuming the binding of transcription factors to be governed entirely by thermodynamic protein-DNA affinities. Here we show that relying on passive thermodynamics and random release times can lead to a "time-scale crisis" for master genes that broadcast their signals to a large number of binding sites. We demonstrate that this time-scale crisis for clearance in a large broadcasting network can be resolved by actively regulating residence times through molecular stripping. We illustrate these ideas by studying a model of the stochastic dynamics of the genetic network of the central eukaryotic master regulator NFκB, which broadcasts its signals to many downstream genes that regulate immune response, apoptosis, etc.
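
    The clearance-time issue can be made concrete with a toy calculation: the time for a broadcasting factor to vacate all N sites is the slowest of N unbinding events, which grows with N under passive release, whereas an active stripping reaction adds a faster, parallel removal channel. The rates and the simple max-of-exponentials model below are illustrative assumptions, not fitted NFκB kinetics or the authors' stochastic network model.

      # Sketch: mean time to clear all N binding sites, passive release vs. with stripping.
      import numpy as np

      rng = np.random.default_rng(0)
      k_off, k_strip, trials = 0.1, 2.0, 2000      # per-minute rates (assumed values)

      def mean_clearance_time(n_sites, rate):
          # Each site clears after an exponential waiting time; the network clears at the max.
          times = rng.exponential(1.0 / rate, size=(trials, n_sites))
          return times.max(axis=1).mean()

      for n in (10, 100, 1000):
          passive = mean_clearance_time(n, k_off)
          stripped = mean_clearance_time(n, k_off + k_strip)   # stripping adds a removal channel
          print(f"N={n:5d}  passive ~{passive:6.1f} min   with stripping ~{stripped:5.2f} min")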

  16. Distance indicators based on the luminosity-profile shapes of early-type galaxies-a reply

    NASA Astrophysics Data System (ADS)

    Young, Christopher Ke-Shih; Currie, Malcolm J.

    1998-05-01

    In a recent paper, Binggeli & Jerjen (1998) question the value of the extragalactic distance indicators presented by Young & Currie (1994 & 1995) and state that they have refuted 'the claim that the Virgo dEs [dwarf-elliptical galaxies]...are distributed in a prolate structure stretching from 8 to 20 Mpc distance (Young & Currie 1995)', even though no such claim was ever made. In this paper, we examine Binggeli & Jerjen's claim that intrinsic scatter, rather than spatial depth, must be the main cause of the large scatters observed in the relevant scaling relationships for Virgo galaxies. We investigate the accuracy of Binggeli & Jerjen's photometric parameters and find that while their profile curvature and scalelength measurements are probably useful, their total magnitude and central surface-brightness measurements are not useful for the purpose of investigating scaling laws because they suffer from serious systematic and random errors. We also investigate Binggeli & Jerjen's criticisms of our (1995) analysis. We demonstrate that their test for strong mutual dependence between distance estimates based on the two different scaling laws is invalid because of its prior assumption of negligible cluster depth. We further demonstrate that the [relative] distance estimates on which their kinematical arguments are based cannot be meaningful, not only because of the seriousness of the photometric errors, but also because they are undermined by the prior assumption that depth effects can again be neglected. Interestingly, we also find that Binggeli & Jerjen's own dataset does itself contain evidence for large depth. Using the observed correlation between scale-length and profile-curvature (the only correlation that can be investigated meaningfully using their dataset), we find that the frequency distribution of residuals with respect to the best-fitting curve deviates significantly from that expected from a uni-modal Gaussian distribution. Clearly, if, as Binggeli & Jerjen claim, the very large scatter observed in this scaling relationship for Virgo galaxies (which is not observed for Fornax or Local Group galaxies) were intrinsic, one would expect a uni-modal Gaussian distribution.
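
    The statistical check invoked at the end of this abstract, testing whether fit residuals are consistent with a single Gaussian, can be sketched as follows. The synthetic data, the linear fit and the D'Agostino-Pearson normality test are illustrative assumptions for the sketch, not the authors' dataset or analysis.

      # Sketch: fit a scaling relation, then test the residuals for departure from one Gaussian.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      curvature = rng.uniform(-1, 1, size=200)
      # Two distance groups broaden the residual distribution into a bimodal mixture.
      offset = np.where(rng.random(200) < 0.5, -0.4, 0.4)
      scalelength = 1.5 * curvature + offset + rng.normal(scale=0.1, size=200)

      slope, intercept = np.polyfit(curvature, scalelength, 1)
      residuals = scalelength - (slope * curvature + intercept)
      stat, p = stats.normaltest(residuals)   # low p => inconsistent with a uni-modal Gaussian
      print(f"normality test: statistic={stat:.1f}, p={p:.3g}")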

  17. The plume head-continental lithosphere interaction using a tectonically realistic formulation for the lithosphere

    NASA Astrophysics Data System (ADS)

    Burov, E.; Guillou-Frottier, L.

    2005-05-01

    Current debates on the existence of mantle plumes largely originate from interpretations of supposed signatures of plume-induced surface topography that are compared with predictions of geodynamic models of plume-lithosphere interactions. These models often inaccurately predict surface evolution: in general, they assume a fixed upper surface and consider the lithosphere as a single viscous layer. In nature, the surface evolution is affected by the elastic-brittle-ductile deformation, by a free upper surface and by the layered structure of the lithosphere. We make a step towards reconciling mantle- and tectonic-scale studies by introducing a tectonically realistic continental plate model in large-scale plume-lithosphere interaction. This model includes (i) a natural free surface boundary condition, (ii) an explicit elastic-viscous(ductile)-plastic(brittle) rheology and (iii) a stratified structure of continental lithosphere. The numerical experiments demonstrate a number of important differences from predictions of conventional models. In particular, this relates to plate bending, mechanical decoupling of crustal and mantle layers and tension-compression instabilities, which produce transient topographic signatures such as uplift and subsidence at large (>500 km) and small scale (300-400, 200-300 and 50-100 km). The mantle plumes do not necessarily produce detectable large-scale topographic highs but often generate only alternating small-scale surface features that could otherwise be attributed to regional tectonics. A single large-wavelength deformation, predicted by conventional models, develops only for a very cold and thick lithosphere. Distinct topographic wavelengths or temporarily spaced events observed in the East African rift system, as well as over French Massif Central, can be explained by a single plume impinging at the base of the continental lithosphere, without evoking complex asthenospheric upwelling.

  18. Large-scale block adjustment without use of ground control points based on the compensation of geometric calibration for ZY-3 images

    NASA Astrophysics Data System (ADS)

    Yang, Bo; Wang, Mi; Xu, Wen; Li, Deren; Gong, Jianya; Pi, Yingdong

    2017-12-01

    The potential of large-scale block adjustment (BA) without ground control points (GCPs) has long been of interest to photogrammetric researchers, and it has important implications for global mapping. However, significant problems with the accuracy and efficiency of this method remain to be solved. In this study, we analyzed the effects of geometric errors on BA and then developed a step-wise BA method to conduct integrated processing of large-scale ZY-3 satellite images without GCPs. We first pre-processed the BA data by adopting a geometric calibration (GC) method based on the viewing-angle model to compensate for systematic errors, such that the BA input images were of good initial geometric quality. The second step was integrated BA without GCPs, in which a series of technical methods were used to solve bottleneck problems and ensure accuracy and efficiency. A BA model based on virtual control points (VCPs) was constructed to address the rank deficiency caused by the lack of absolute constraints. We then developed a parallel matching strategy to improve the efficiency of tie point (TP) matching, and adopted a three-array data structure based on sparsity to relieve the storage and calculation burden of the high-order modified equations. Finally, we used the conjugate gradient method to improve the speed of solving the high-order equations. To evaluate the feasibility of the presented large-scale BA method, we conducted three experiments on real data collected by the ZY-3 satellite. The experimental results indicate that the presented method can effectively improve the geometric accuracy of ZY-3 satellite images. This study demonstrates the feasibility of large-scale mapping without GCPs.
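
    The numerical core mentioned here, solving a large, sparse, symmetric positive-definite system with the conjugate gradient method, can be sketched on a toy system. The tridiagonal matrix and right-hand side below merely stand in for the real BA normal equations, whose structure and dimensions are different.

      # Sketch: conjugate-gradient solution of a large sparse SPD stand-in system.
      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import cg

      n = 100_000
      # Diagonally dominant tridiagonal SPD matrix, standing in for N = A^T A.
      N = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
      b = np.ones(n)                      # stand-in for the reduced right-hand side A^T l

      x, info = cg(N, b)                  # info == 0 means CG converged
      residual = np.linalg.norm(N @ x - b)
      print(info, residual)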

  19. Development of a metal-clad advanced composite shear web design concept

    NASA Technical Reports Server (NTRS)

    Laakso, J. H.

    1974-01-01

    An advanced composite web concept was developed for potential application to the Space Shuttle Orbiter main engine thrust structure. The program consisted of design synthesis, analysis, detail design, element testing, and large scale component testing. A concept was sought that offered significant weight saving by the use of Boron/Epoxy (B/E) reinforced titanium plate structure. The desired concept was one that was practical and that utilized metal to efficiently improve structural reliability. The resulting development of a unique titanium-clad B/E shear web design concept is described. Three large scale components were fabricated and tested to demonstrate the performance of the concept: a titanium-clad plus or minus 45 deg B/E web laminate stiffened with vertical B/E reinforced aluminum stiffeners.

  20. [Development and application of morphological analysis method in Aspergillus niger fermentation].

    PubMed

    Tang, Wenjun; Xia, Jianye; Chu, Ju; Zhuang, Yingping; Zhang, Siliang

    2015-02-01

    Filamentous fungi are widely used in industrial fermentation, and the particular fungal morphology is a critical index of a successful fermentation. To break the bottleneck of morphological analysis, we developed a reliable method for fungal morphological analysis. With this method, hundreds of pellet samples can be prepared simultaneously and quantitative morphological information obtained quickly at large scale, which greatly increases the accuracy and reliability of the morphological analysis results. On this basis, studies of Aspergillus niger morphology under different oxygen supply and shear rate conditions were carried out. The morphological response patterns of A. niger to these conditions were quantitatively demonstrated, laying a solid foundation for further scale-up.
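
    The kind of quantitative pellet measurement described here can be illustrated with a small image-analysis sketch on a synthetic binary image. The synthetic disks, the chosen shape descriptors and the scikit-image pipeline are example assumptions, not the authors' published protocol.

      # Sketch: label pellet-like regions and report size and roundness per pellet.
      import numpy as np
      from skimage.draw import disk
      from skimage.measure import label, regionprops

      img = np.zeros((512, 512), dtype=np.uint8)
      for center, radius in [((128, 128), 40), ((300, 350), 25), ((420, 120), 55)]:
          rr, cc = disk(center, radius, shape=img.shape)   # synthetic "pellets"
          img[rr, cc] = 1

      for region in regionprops(label(img)):
          d_eq = 2 * np.sqrt(region.area / np.pi)                        # area-equivalent diameter (px)
          circularity = 4 * np.pi * region.area / region.perimeter ** 2  # 1.0 = perfect circle
          print(f"pellet: area={region.area:6.0f} px  d_eq={d_eq:5.1f} px  circularity={circularity:4.2f}")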
