Sample records for require large scale

  1. Networks and landscapes: a framework for setting goals and evaluating performance at the large landscape scale

    Treesearch

    R Patrick Bixler; Shawn Johnson; Kirk Emerson; Tina Nabatchi; Melly Reuling; Charles Curtin; Michele Romolini; Morgan Grove

    2016-01-01

    The objective of large landscape conservation is to mitigate complex ecological problems through interventions at multiple and overlapping scales. Implementation requires coordination among a diverse network of individuals and organizations to integrate local-scale conservation activities with broad-scale goals. This requires an understanding of the governance options...

  2. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  3. pycola: N-body COLA method code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Eisenstein, Daniel J.; Wandelt, Benjamin D.; Zaldarriaga, Matias

    2015-09-01

    pycola is a multithreaded Python/Cython N-body code, implementing the Comoving Lagrangian Acceleration (COLA) method in the temporal and spatial domains, which trades accuracy at small-scales to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing. The COLA method achieves its speed by calculating the large-scale dynamics exactly using LPT while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos.
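
    For readers unfamiliar with the LPT step that COLA-type codes rely on, the sketch below illustrates first-order LPT (the Zel'dovich approximation) with NumPy. It is an illustration only, not the pycola interface; the grid size, box size, and function name are our own choices.

      # Hypothetical illustration of 1LPT (Zel'dovich) displacements: psi_k = i k delta_k / k^2.
      # This is the analytic large-scale solution that COLA-type codes use as a reference frame;
      # it is NOT the pycola API.
      import numpy as np

      def zeldovich_displacement(delta, box_size):
          """Return the 1LPT displacement field (3, n, n, n) for a periodic overdensity grid."""
          n = delta.shape[0]
          k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
          kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
          k2 = kx**2 + ky**2 + kz**2
          k2[0, 0, 0] = 1.0                        # avoid dividing by zero for the DC mode
          delta_k = np.fft.fftn(delta)
          psi = []
          for ki in (kx, ky, kz):
              psi_k = 1j * ki * delta_k / k2
              psi_k[0, 0, 0] = 0.0                 # the mean mode carries no displacement
              psi.append(np.real(np.fft.ifftn(psi_k)))
          return np.stack(psi)

      # Usage: displace a uniform particle grid by D(a) * psi, with D(a) the linear growth factor.
      rng = np.random.default_rng(0)
      delta = rng.normal(0.0, 0.1, size=(32, 32, 32))      # toy Gaussian overdensity field
      psi = zeldovich_displacement(delta, box_size=100.0)  # e.g. a 100 Mpc/h box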

  4. Gas-Centered Swirl Coaxial Liquid Injector Evaluations

    NASA Technical Reports Server (NTRS)

    Cohn, A. K.; Strakey, P. A.; Talley, D. G.

    2005-01-01

    Development of liquid rocket engines is expensive: extensive testing at large scale is usually required, verifying engine lifetime requires a large number of tests, and limited resources are available for development. Sub-scale cold-flow and hot-fire testing is extremely cost effective; it could be a necessary (but not sufficient) condition for long engine lifetime, and it reduces the overall costs and risk of large-scale testing. Goal: determine what knowledge can be gained from sub-scale cold-flow and hot-fire evaluations of LRE injectors, and determine relationships between cold-flow and hot-fire data.

  5. A Functional Model for Management of Large Scale Assessments.

    ERIC Educational Resources Information Center

    Banta, Trudy W.; And Others

    This functional model for managing large-scale program evaluations was developed and validated in connection with the assessment of Tennessee's Nutrition Education and Training Program. Management of such a large-scale assessment requires the development of a structure for the organization; distribution and recovery of large quantities of…

  6. Large-Scale Multiobjective Static Test Generation for Web-Based Testing with Integer Programming

    ERIC Educational Resources Information Center

    Nguyen, M. L.; Hui, Siu Cheung; Fong, A. C. M.

    2013-01-01

    Web-based testing has become a ubiquitous self-assessment method for online learning. One useful feature that is missing from today's web-based testing systems is the reliable capability to fulfill different assessment requirements of students based on a large-scale question data set. A promising approach for supporting large-scale web-based…

  7. Large Composite Structures Processing Technologies for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Clinton, R. G., Jr.; Vickers, J. H.; McMahon, W. M.; Hulcher, A. B.; Johnston, N. J.; Cano, R. J.; Belvin, H. L.; McIver, K.; Franklin, W.; Sidwell, D.

    2001-01-01

    Significant efforts have been devoted to establishing the technology foundation to enable the progression to large scale composite structures fabrication. We are not capable today of fabricating many of the composite structures envisioned for the second generation reusable launch vehicle (RLV). Conventional 'aerospace' manufacturing and processing methodologies (fiber placement, autoclave, tooling) will require substantial investment and lead time to scale up. Out-of-autoclave process techniques will require aggressive efforts to mature the selected technologies and to scale up. Focused composite processing technology development and demonstration programs utilizing the building block approach are required to enable envisioned second generation RLV large composite structures applications. Government/industry partnerships have demonstrated success in this area and represent the best combination of skills and capabilities to achieve this goal.

  8. On the resilience of helical magnetic fields to turbulent diffusion and the astrophysical implications

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.; Subramanian, Kandaswamy

    2013-02-01

    The extent to which large-scale magnetic fields are susceptible to turbulent diffusion is important for interpreting the need for in situ large-scale dynamos in astrophysics and for observationally inferring field strengths compared to kinetic energy. By solving coupled evolution equations for magnetic energy and magnetic helicity in a system initialized with isotropic turbulence and an arbitrarily helical large-scale field, we quantify the decay rate of the latter for a bounded or periodic system. The magnetic energy associated with the non-helical large-scale field decays at least as fast as the kinematically estimated turbulent diffusion rate, but the decay rate of the helical part depends on whether the ratio of its magnetic energy to the turbulent kinetic energy exceeds a critical value given by M1,c = (k1/k2)^2, where k1 and k2 are the wavenumbers of the large and forcing scales. Turbulently diffusing helical fields to small scales while conserving magnetic helicity requires a rapid increase in total magnetic energy. As such, only when the helical field is subcritical can it so diffuse. When supercritical, it decays slowly, at a rate determined by microphysical dissipation even in the presence of macroscopic turbulence. In effect, turbulent diffusion of such a large-scale helical field produces small-scale helicity whose amplification abates further turbulent diffusion. Two curious implications are that (1) standard arguments supporting the need for in situ large-scale dynamos based on the otherwise rapid turbulent diffusion of large-scale fields require re-thinking, since only the large-scale non-helical field is so diffused in a closed system. Boundary terms could however provide potential pathways for rapid change of the large-scale helical field. (2) Since M1,c ≪ 1 for k1 ≪ k2, the presence of long-lived ordered large-scale helical fields, as in extragalactic jets, does not guarantee that the magnetic field dominates the kinetic energy.
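
    The decay criterion stated in this abstract can be written compactly (the energy symbols below are our notation, not the authors'):

      % Critical ratio of helical large-scale magnetic energy to turbulent kinetic energy:
      \[
        M_{1,c} = \left(\frac{k_1}{k_2}\right)^{2},
      \]
      % with k_1 the large-scale and k_2 the forcing-scale wavenumber. Per the abstract:
      \[
        \frac{E_{M,\mathrm{hel}}}{E_K} > M_{1,c} \;\Rightarrow\; \text{slow, resistively limited decay},
        \qquad
        \frac{E_{M,\mathrm{hel}}}{E_K} < M_{1,c} \;\Rightarrow\; \text{decay at the turbulent diffusion rate.}
      \]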

  9. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions

    PubMed Central

    Taylor, Richard L.; Bentley, Christopher D. B.; Pedernales, Julen S.; Lamata, Lucas; Solano, Enrique; Carvalho, André R. R.; Hope, Joseph J.

    2017-01-01

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that fidelity of around 70% is realizable for π-pulse infidelities below 10^-5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period. PMID:28401945

  10. A Study on Fast Gates for Large-Scale Quantum Simulation with Trapped Ions.

    PubMed

    Taylor, Richard L; Bentley, Christopher D B; Pedernales, Julen S; Lamata, Lucas; Solano, Enrique; Carvalho, André R R; Hope, Joseph J

    2017-04-12

    Large-scale digital quantum simulations require thousands of fundamental entangling gates to construct the simulated dynamics. Despite success in a variety of small-scale simulations, quantum information processing platforms have hitherto failed to demonstrate the combination of precise control and scalability required to systematically outmatch classical simulators. We analyse how fast gates could enable trapped-ion quantum processors to achieve the requisite scalability to outperform classical computers without error correction. We analyze the performance of a large-scale digital simulator, and find that fidelity of around 70% is realizable for π-pulse infidelities below 10^-5 in traps subject to realistic rates of heating and dephasing. This scalability relies on fast gates: entangling gates faster than the trap period.

  11. Generation of large-scale magnetic fields by small-scale dynamo in shear flows

    DOE PAGES

    Squire, J.; Bhattacharjee, A.

    2015-10-20

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Furthermore, given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.
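
    The role of the off-diagonal turbulent resistivity can be made explicit with the standard mean-field expansion of the turbulent electromotive force (textbook notation, not taken from the paper):

      % Mean-field EMF acting on the large-scale field B-bar:
      \[
        \mathcal{E}_i \;=\; \alpha_{ij}\,\overline{B}_j \;-\; \eta_{ij}\,\big(\nabla \times \overline{B}\big)_j .
      \]
      % In shear-current-type dynamos no net helicity (alpha effect) is required; growth of the
      % mean field is instead driven by off-diagonal components of eta_ij acting together with
      % the large-scale velocity shear.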

  12. Generation of Large-Scale Magnetic Fields by Small-Scale Dynamo in Shear Flows.

    PubMed

    Squire, J; Bhattacharjee, A

    2015-10-23

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.

  13. Large-Scale 3D Printing: The Way Forward

    NASA Astrophysics Data System (ADS)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, where numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  14. A large scale GIS geodatabase of soil parameters supporting the modeling of conservation practice alternatives in the United States

    USDA-ARS?s Scientific Manuscript database

    Water quality modeling requires across-scale support of combined digital soil elements and simulation parameters. This paper presents the unprecedented development of a large spatial scale (1:250,000) ArcGIS geodatabase coverage designed as a functional repository of soil-parameters for modeling an...

  15. Large-Scale Traffic Microsimulation From An MPO Perspective

    DOT National Transportation Integrated Search

    1997-01-01

    One potential advancement of the four-step travel model process is the forecasting and simulation of individual activities and travel. A common concern with such an approach is that the data and computational requirements for a large-scale, regional ...

  16. Exploring Unidimensional Proficiency Classification Accuracy from Multidimensional Data in a Vertical Scaling Context

    ERIC Educational Resources Information Center

    Kroopnick, Marc Howard

    2010-01-01

    When Item Response Theory (IRT) is operationally applied for large scale assessments, unidimensionality is typically assumed. This assumption requires that the test measures a single latent trait. Furthermore, when tests are vertically scaled using IRT, the assumption of unidimensionality would require that the battery of tests across grades…

  17. Large-scale modeling of rain fields from a rain cell deterministic model

    NASA Astrophysics Data System (ADS)

    Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
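
    A minimal numerical sketch of the large-scale step described above is given below: an anisotropic Gaussian field is generated by spectral filtering and then thresholded into a binary rain/no-rain mask with a prescribed occupation rate. The construction, grid size, correlation lengths and occupation rate are our own illustrative choices (in the paper the occupation-rate distribution is derived from ARAMIS radar observations).

      # Hypothetical sketch: anisotropic Gaussian field -> binary large-scale rain mask.
      import numpy as np

      def binary_rain_mask(n=512, lx=80.0, ly=20.0, rain_fraction=0.3, seed=0):
          """Boolean n x n mask with roughly `rain_fraction` of the area flagged as raining."""
          rng = np.random.default_rng(seed)
          kx = np.fft.fftfreq(n)[:, None]
          ky = np.fft.fftfreq(n)[None, :]
          # Anisotropic spectral filter: correlation lengths lx != ly (in grid units).
          spectral_filter = np.exp(-0.5 * ((kx * lx) ** 2 + (ky * ly) ** 2))
          noise_k = np.fft.fft2(rng.normal(size=(n, n)))
          field = np.real(np.fft.ifft2(noise_k * spectral_filter))   # anisotropic Gaussian field
          # Threshold at the quantile that matches the target rain occupation rate.
          threshold = np.quantile(field, 1.0 - rain_fraction)
          return field > threshold

      mask = binary_rain_mask()
      print(mask.mean())   # close to 0.3: fraction of the domain flagged as raining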

  18. Transfer of movement sequences: bigger is better.

    PubMed

    Dean, Noah J; Kovacs, Attila J; Shea, Charles H

    2008-02-01

    Experiment 1 was conducted to determine if proportional transfer from "small to large" scale movements is as effective as transferring from "large to small." We hypothesize that the learning of larger scale movements will require the participant to learn to manage the generation, storage, and dissipation of forces better than when practicing smaller scale movements. Thus, we predict an advantage for transfer of larger scale movements to smaller scale movements relative to transfer from smaller to larger scale movements. Experiment 2 was conducted to determine if adding a load to a smaller scale movement would enhance later transfer to a larger scale movement sequence. It was hypothesized that the added load would require the participants to consider the dynamics of the movement to a greater extent than without the load. The results replicated earlier findings of effective transfer from large to small movements, but consistent with our hypothesis, transfer was less effective from small to large (Experiment 1). However, when a load was added during acquisition, transfer from small to large was enhanced even though the load was removed during the transfer test. These results are consistent with the notion that the transfer asymmetry noted in Experiment 1 was due to factors related to movement dynamics that were enhanced during practice of the larger scale movement sequence, but not during the practice of the smaller scale movement sequence. The findings that the movement structure is unaffected by transfer direction but the movement dynamics are influenced by transfer direction are consistent with hierarchical models of sequence production.

  19. Spiking neural network simulation: memory-optimal synaptic event scheduling.

    PubMed

    Stewart, Robert D; Gurney, Kevin N

    2011-06-01

    Spiking neural network simulations incorporating variable transmission delays require synaptic events to be scheduled prior to delivery. Conventional methods have memory requirements that scale with the total number of synapses in a network. We introduce novel scheduling algorithms for both discrete and continuous event delivery, where the memory requirement scales instead with the number of neurons. Superior algorithmic performance is demonstrated using large-scale, benchmarking network simulations.
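
    A generic sketch of delay-based event scheduling with a circular buffer is shown below; it illustrates the idea of indexing pending synaptic events by delivery time rather than by synapse, but it is not taken from the paper, and the class and parameter names are our own.

      # Hypothetical delay-ring scheduler: events are stored per future time slot.
      from collections import deque

      class DelayRing:
          def __init__(self, max_delay):
              self.max_delay = max_delay
              self.slots = [deque() for _ in range(max_delay)]   # one queue per time slot
              self.t = 0

          def schedule(self, neuron_id, delay):
              """Queue an event for delivery `delay` steps from now (1 <= delay < max_delay)."""
              self.slots[(self.t + delay) % self.max_delay].append(neuron_id)

          def advance(self):
              """Return and clear the events due at the current step, then move to the next step."""
              slot = self.slots[self.t % self.max_delay]
              due = list(slot)
              slot.clear()
              self.t += 1
              return due

      ring = DelayRing(max_delay=16)
      ring.schedule(neuron_id=3, delay=5)
      for _ in range(6):
          due = ring.advance()
      print(due)   # [3]: delivered when time step 5 is processed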

  20. Requirements for a mobile communications satellite system. Volume 3: Large space structures measurements study

    NASA Technical Reports Server (NTRS)

    Akle, W.

    1983-01-01

    This study report defines a set of tests and measurements required to characterize the performance of a Large Space System (LSS), and to scale this data to other LSS satellites. Requirements from the Mobile Communication Satellite (MSAT) configurations derived in the parent study were used. MSAT utilizes a large, mesh deployable antenna, and encompasses a significant range of LSS technology issues in the areas of structural/dynamics, control, and performance predictability. In this study, performance requirements were developed for the antenna. Special emphasis was placed on antenna surface accuracy and pointing stability. Instrumentation and measurement systems, applicable to LSS, were selected from existing or on-going technology developments. Laser ranging and angulation systems, presently in breadboard status, form the backbone of the measurements. Following this, a set of ground, STS, and GEO-operational tests was investigated. A third-scale (15 meter) antenna system was selected for ground characterization followed by STS flight technology development. This selection ensures analytical scaling from ground to orbit, as well as size scaling. Other benefits are cost and the ability to perform reasonable ground tests. Detailed costing of the various tests and measurement systems was derived and is included in the report.

  1. Solving large scale structure in ten easy steps with COLA

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_solar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_solar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  2. Task Effects on Linguistic Complexity and Accuracy: A Large-Scale Learner Corpus Analysis Employing Natural Language Processing Techniques

    ERIC Educational Resources Information Center

    Alexopoulou, Theodora; Michel, Marije; Murakami, Akira; Meurers, Detmar

    2017-01-01

    Large-scale learner corpora collected from online language learning platforms, such as the EF-Cambridge Open Language Database (EFCAMDAT), provide opportunities to analyze learner data at an unprecedented scale. However, interpreting the learner language in such corpora requires a precise understanding of tasks: How does the prompt and input of a…

  3. Ascertaining Validity in the Abstract Realm of PMESII Simulation Models: An Analysis of the Peace Support Operations Model (PSOM)

    DTIC Science & Technology

    2009-06-01

    simulation is the campaign-level Peace Support Operations Model (PSOM). This thesis provides a quantitative analysis of PSOM. The results are based on ... multiple potential outcomes; further development and analysis is required before the model is used for large-scale analysis.

  4. Methanol production from Eucalyptus wood chips. Working Document 2. Vegetative propagation of Eucalypts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fishkind, H.H.

    1982-04-01

    The feasibility of large-scale plantation establishment by various methods was examined, and the following conclusions were reached: seedling plantations are limited in potential yield due to genetic variation among the planting stock and often inadequate supplies of appropriate seed; vegetative propagation by rooted cuttings can provide good genetic uniformity of select hybrid planting stock; however, large-scale production requires establishment and maintenance of extensive cutting orchards. The collection of shoots and preparation of cuttings, although successfully implemented in the Congo and Brazil, would not be economically feasible in Florida for large-scale plantations; tissue culture propagation of select hybrid eucalypts offers the only opportunity to produce the very large number of trees required to establish the energy plantation. The cost of tissue culture propagation, although higher than seedling production, is more than off-set by the increased productivity of vegetative plantations established from select hybrid Eucalyptus.

  5. Random access in large-scale DNA data storage.

    PubMed

    Organick, Lee; Ang, Siena Dumas; Chen, Yuan-Jyue; Lopez, Randolph; Yekhanin, Sergey; Makarychev, Konstantin; Racz, Miklos Z; Kamath, Govinda; Gopalan, Parikshit; Nguyen, Bichlien; Takahashi, Christopher N; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Seelig, Georg; Ceze, Luis; Strauss, Karin

    2018-03-01

    Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data on a large scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data) in more than 13 million DNA oligonucleotides, and show that we can recover each file individually and with no errors, using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.

  6. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

    Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities' respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  7. Science Competencies That Go Unassessed

    ERIC Educational Resources Information Center

    Gilmer, Penny J.; Sherdan, Danielle M.; Oosterhof, Albert; Rohani, Faranak; Rouby, Aaron

    2011-01-01

    Present large-scale assessments require the use of item formats, such as multiple choice, that can be administered and scored efficiently. This limits competencies that can be measured by these assessments. An alternative approach to large-scale assessments is being investigated that would include the use of complex performance assessments. As…

  8. TRANSITION FROM KINETIC TO MHD BEHAVIOR IN A COLLISIONLESS PLASMA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parashar, Tulasi N.; Matthaeus, William H.; Shay, Michael A.

    The study of kinetic effects in heliospheric plasmas requires representation of dynamics at sub-proton scales, but in most cases the system is driven by magnetohydrodynamic (MHD) activity at larger scales. The latter requirement challenges available computational resources, which raises the question of how large such a system must be to exhibit MHD traits at large scales while kinetic behavior is accurately represented at small scales. Here we study this implied transition from kinetic to MHD-like behavior using particle-in-cell (PIC) simulations, initialized using an Orszag–Tang Vortex. The PIC code treats protons, as well as electrons, kinetically, and we address the question of interest by examining several different indicators of MHD-like behavior.

  9. Generation of large-scale magnetic fields by small-scale dynamo in shear flows

    NASA Astrophysics Data System (ADS)

    Squire, Jonathan; Bhattacharjee, Amitava

    2015-11-01

    A new mechanism for turbulent mean-field dynamo is proposed, in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the ``shear-current'' effect. The dynamo is studied using a variety of computational and analytic techniques, both when the magnetic fluctuations arise self-consistently through the small-scale dynamo and in lower Reynolds number regimes. Given the inevitable existence of non-helical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help to explain generation of large-scale magnetic fields across a wide range of astrophysical objects. This work was supported by a Procter Fellowship at Princeton University, and the US Department of Energy Grant DE-AC02-09-CH11466.

  10. Comparison of Conjugate Gradient Density Matrix Search and Chebyshev Expansion Methods for Avoiding Diagonalization in Large-Scale Electronic Structure Calculations

    NASA Technical Reports Server (NTRS)

    Bates, Kevin R.; Daniels, Andrew D.; Scuseria, Gustavo E.

    1998-01-01

    We report a comparison of two linear-scaling methods which avoid the diagonalization bottleneck of traditional electronic structure algorithms. The Chebyshev expansion method (CEM) is implemented for carbon tight-binding calculations of large systems and its memory and timing requirements compared to those of our previously implemented conjugate gradient density matrix search (CG-DMS). Benchmark calculations are carried out on icosahedral fullerenes from C60 to C8640 and the linear scaling memory and CPU requirements of the CEM demonstrated. We show that the CPU requisites of the CEM and CG-DMS are similar for calculations with comparable accuracy.
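
    As a concrete (if deliberately simplified) illustration of the CEM idea, the sketch below approximates a Fermi-Dirac density matrix f(H) by a Chebyshev polynomial series in the Hamiltonian. It is a dense toy version with our own choices of size, smearing and expansion order; the linear-scaling method in the paper additionally exploits matrix sparsity and avoids the diagonalization used here only to bracket the spectrum.

      # Hypothetical sketch of the Chebyshev expansion of a matrix function.
      import numpy as np

      def chebyshev_coefficients(f, order):
          """Chebyshev coefficients of f on [-1, 1] via Chebyshev-Gauss quadrature."""
          theta = np.pi * (np.arange(order) + 0.5) / order
          fvals = f(np.cos(theta))
          c = (2.0 / order) * np.array([np.sum(fvals * np.cos(k * theta)) for k in range(order)])
          c[0] *= 0.5
          return c

      def matrix_function_chebyshev(H, f, order=200):
          """Approximate f(H) for a symmetric matrix H by a truncated Chebyshev series."""
          evals = np.linalg.eigvalsh(H)      # toy spectral bounds; real codes use cheap estimates
          center = 0.5 * (evals[-1] + evals[0])
          half_width = 0.5 * (evals[-1] - evals[0]) * 1.01
          Hs = (H - center * np.eye(len(H))) / half_width     # spectrum rescaled into [-1, 1]
          c = chebyshev_coefficients(lambda x: f(half_width * x + center), order)
          T_prev, T_curr = np.eye(len(H)), Hs
          result = c[0] * T_prev + c[1] * T_curr
          for k in range(2, order):
              T_prev, T_curr = T_curr, 2.0 * Hs @ T_curr - T_prev   # Chebyshev recursion
              result += c[k] * T_curr
          return result

      rng = np.random.default_rng(0)
      A = rng.normal(size=(50, 50))
      H = 0.5 * (A + A.T)                    # toy symmetric "Hamiltonian"
      P = matrix_function_chebyshev(H, lambda e: 1.0 / (1.0 + np.exp(10.0 * e)))  # Fermi function, mu = 0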

  11. Solving large scale structure in ten easy steps with COLA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  12. On the Large-Scaling Issues of Cloud-based Applications for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Hua, H.

    2016-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as NASA's SWOT and NISAR, whose SAR data volumes and data throughput rates are an order of magnitude larger than those of present-day missions. Existing missions, such as OCO-2, may also require high turn-around time for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Experience has shown that embracing efficient cloud computing approaches for large-scale science data systems requires more than just moving existing code to cloud environments. At large cloud scales, we need to deal with scaling and cost issues. We present our experiences on deploying multiple instances of our hybrid-cloud computing science data system (HySDS) to support large-scale processing of Earth Science data products. We will explore optimization approaches to getting the best performance out of hybrid-cloud computing as well as common issues that will arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer 75%-90% cost savings but with an unpredictable computing environment based on market forces.

  13. The requirements for a new full scale subsonic wind tunnel

    NASA Technical Reports Server (NTRS)

    Kelly, M. W.; Mckinney, M. O.; Luidens, R. W.

    1972-01-01

    Justification and requirements are presented for a large subsonic wind tunnel capable of testing full scale aircraft, rotor systems, and advanced V/STOL propulsion systems. The design considerations and constraints for such a facility are reviewed, and the trades between facility test capability and costs are discussed.

  14. Side effects of problem-solving strategies in large-scale nutrition science: towards a diversification of health.

    PubMed

    Penders, Bart; Vos, Rein; Horstman, Klasien

    2009-11-01

    Solving complex problems in large-scale research programmes requires cooperation and division of labour. Simultaneously, large-scale problem solving also gives rise to unintended side effects. Based upon 5 years of researching two large-scale nutrigenomic research programmes, we argue that problems are fragmented in order to be solved. These sub-problems are given priority for practical reasons and in the process of solving them, various changes are introduced in each sub-problem. Combined with additional diversity as a result of interdisciplinarity, this makes reassembling the original and overall goal of the research programme less likely. In the case of nutrigenomics and health, this produces a diversification of health. As a result, the public health goal of contemporary nutrition science is not reached in the large-scale research programmes we studied. Large-scale research programmes are very successful in producing scientific publications and new knowledge; however, in reaching their political goals they often are less successful.

  15. Parallel Clustering Algorithm for Large-Scale Biological Data Sets

    PubMed Central

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    Backgrounds Recent explosion of biological data brings a great challenge for the traditional clustering algorithms. With increasing scale of data sets, much larger memory and longer runtime are required for the cluster identification problems. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, the time and space complexity become a great bottleneck when handling the large-scale data sets. Moreover, the similarity matrix, whose constructing procedure takes long runtime, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Methods Two types of parallel architectures are proposed in this paper to accelerate the similarity matrix constructing procedure and the affinity propagation algorithm. The memory-shared architecture is used to construct the similarity matrix, and the distributed system is taken for the affinity propagation algorithm, because of its large memory size and great computing capacity. An appropriate way of data partition and reduction is designed in our method, in order to minimize the global communication cost among processes. Result A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves a good performance when clustering large-scale gene data (microarray) and detecting families in large protein superfamilies. PMID:24705246
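
    For orientation, a small serial run of affinity propagation (here via scikit-learn) looks like the snippet below; the paper's contribution is the parallel similarity-matrix construction and message passing built on top of this basic algorithm, and the toy data and parameters are placeholders.

      # Hypothetical serial illustration; not the parallel implementation from the paper.
      import numpy as np
      from sklearn.cluster import AffinityPropagation

      rng = np.random.default_rng(0)
      # Toy "expression profiles": three groups of 100 samples in 20 dimensions.
      X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 20)) for c in (-2.0, 0.0, 2.0)])

      ap = AffinityPropagation(damping=0.9, random_state=0).fit(X)
      print(len(ap.cluster_centers_indices_))   # typically 3 exemplars for this well-separated data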

  16. Packed Bed Bioreactor for the Isolation and Expansion of Placental-Derived Mesenchymal Stromal Cells

    PubMed Central

    Osiecki, Michael J.; Michl, Thomas D.; Kul Babur, Betul; Kabiri, Mahboubeh; Atkinson, Kerry; Lott, William B.; Griesser, Hans J.; Doran, Michael R.

    2015-01-01

    Large numbers of Mesenchymal stem/stromal cells (MSCs) are required for clinically relevant doses to treat a number of diseases. To economically manufacture these MSCs, an automated bioreactor system will be required. Herein we describe the development of a scalable closed-system, packed bed bioreactor suitable for large-scale MSC expansion. The packed bed was formed from fused polystyrene pellets that were air plasma treated to endow them with a surface chemistry similar to traditional tissue culture plastic. The packed bed was encased within a gas permeable shell to decouple the medium nutrient supply and gas exchange. This enabled a significant reduction in medium flow rates, thus reducing shear and even facilitating single pass medium exchange. The system was optimised in a small-scale bioreactor format (160 cm2) with murine-derived green fluorescent protein-expressing MSCs, and then scaled-up to a 2800 cm2 format. We demonstrated that placental-derived MSCs could be isolated directly within the bioreactor and subsequently expanded. Our results demonstrate that the closed system large-scale packed bed bioreactor is an effective and scalable tool for large-scale isolation and expansion of MSCs. PMID:26660475

  17. Packed Bed Bioreactor for the Isolation and Expansion of Placental-Derived Mesenchymal Stromal Cells.

    PubMed

    Osiecki, Michael J; Michl, Thomas D; Kul Babur, Betul; Kabiri, Mahboubeh; Atkinson, Kerry; Lott, William B; Griesser, Hans J; Doran, Michael R

    2015-01-01

    Large numbers of Mesenchymal stem/stromal cells (MSCs) are required for clinically relevant doses to treat a number of diseases. To economically manufacture these MSCs, an automated bioreactor system will be required. Herein we describe the development of a scalable closed-system, packed bed bioreactor suitable for large-scale MSC expansion. The packed bed was formed from fused polystyrene pellets that were air plasma treated to endow them with a surface chemistry similar to traditional tissue culture plastic. The packed bed was encased within a gas permeable shell to decouple the medium nutrient supply and gas exchange. This enabled a significant reduction in medium flow rates, thus reducing shear and even facilitating single pass medium exchange. The system was optimised in a small-scale bioreactor format (160 cm2) with murine-derived green fluorescent protein-expressing MSCs, and then scaled-up to a 2800 cm2 format. We demonstrated that placental-derived MSCs could be isolated directly within the bioreactor and subsequently expanded. Our results demonstrate that the closed system large-scale packed bed bioreactor is an effective and scalable tool for large-scale isolation and expansion of MSCs.

  18. Why small-scale cannabis growers stay small: five mechanisms that prevent small-scale growers from going large scale.

    PubMed

    Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy

    2012-11-01

    Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large-scale and 35 on a small-scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high work-load and a high risk of detection, and thus demand a higher level of organizational skills than for small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production will imply having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task-ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trujillo, Angelina Michelle

    Strategy, planning, acquiring: very large scale computing platforms come and go, and planning for immensely scalable machines often precedes actual procurement by 3 years. Procurement can take another year or more. Integration: after acquisition, machines must be integrated into the computing environments at LANL and connected to scalable storage via large-scale storage networking, assuring correct and secure operations. Management and utilization: ongoing operations, maintenance, and troubleshooting of the hardware and systems software at massive scale are required.

  20. Formation of large-scale structure from cosmic-string loops and cold dark matter

    NASA Technical Reports Server (NTRS)

    Melott, Adrian L.; Scherrer, Robert J.

    1987-01-01

    Some results from a numerical simulation of the formation of large-scale structure from cosmic-string loops are presented. It is found that even though G x mu is required to be lower than 2 x 10^-6 (where mu is the mass per unit length of the string) to give a low enough autocorrelation amplitude, there is excessive power on smaller scales, so that galaxies would be more dense than observed. The large-scale structure does not include a filamentary or connected appearance and shares with more conventional models based on Gaussian perturbations the lack of cluster-cluster correlation at the mean cluster separation scale as well as excessively small bulk velocities at these scales.

  1. Anisotropies of the cosmic microwave background in nonstandard cold dark matter models

    NASA Technical Reports Server (NTRS)

    Vittorio, Nicola; Silk, Joseph

    1992-01-01

    Small angular scale cosmic microwave anisotropies in flat, vacuum-dominated, cold dark matter cosmological models which fit large-scale structure observations and are consistent with a high value for the Hubble constant are reexamined. New predictions for CDM models in which the large-scale power is boosted via a high baryon content and low H(0) are presented. Both classes of models are consistent with current limits: an improvement in sensitivity by a factor of about 3 for experiments which probe angular scales between 7 arcmin and 1 deg is required, in the absence of very early reionization, to test boosted CDM models for large-scale structure formation.

  2. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  3. Teachers' Perceptions of Teaching in Workplace Simulations in Vocational Education

    ERIC Educational Resources Information Center

    Jossberger, Helen; Brand-Gruwel, Saskia; van de Wiel, Margje W.; Boshuizen, Henny P.

    2015-01-01

    In a large-scale top-down innovation operation in the Netherlands, workplace simulations have been implemented in vocational schools, where students are required to work independently and self-direct their learning. However, research has shown that the success of such large-scale top-down innovations depends on how well their execution in schools…

  4. 75 FR 24742 - In the Matter of Certain Large Scale Integrated Circuit Semiconductor Chips and Products...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-05

    ... Integrated Circuit Semiconductor Chips and Products Containing Same; Notice of Investigation AGENCY: U.S... of certain large scale integrated circuit semiconductor chips and products containing same by reason... alleges that an industry in the United States exists as required by subsection (a)(2) of section 337. The...

  5. Sensemaking in a Value Based Context for Large Scale Complex Engineered Systems

    NASA Astrophysics Data System (ADS)

    Sikkandar Basha, Nazareen

    The design and development of Large-Scale Complex Engineered Systems (LSCES) requires the involvement of multiple teams and numerous levels of the organization, and interactions with large numbers of people and interdisciplinary departments. Traditionally, requirements-driven Systems Engineering (SE) is used in the design and development of these LSCES. The requirements are used to capture the preferences of the stakeholder for the LSCES. Due to the complexity of the system, multiple levels of interaction are required to elicit the requirements of the system within the organization. Since LSCES involve people and interactions between teams and interdisciplinary departments, they are socio-technical in nature. The elicitation of requirements in most large-scale system projects is subject to creep in time and cost due to the uncertainty and ambiguity of requirements during design and development. In an organizational structure, cost and time overruns can occur at any level and iterate back and forth, increasing cost and time. To avoid such creep, past research has shown that rigorous approaches such as value-based design can be used to control it. But before rigorous approaches can be used, the decision maker should have a proper understanding of requirements creep and of the state of the system when the creep occurs. Sensemaking is used to understand the state of the system when the creep occurs and to provide guidance to the decision maker. This research proposes the use of the Cynefin framework, a sensemaking framework, in the design and development of LSCES. It can aid in understanding the system and in decision making, minimizing the value gap due to requirements creep by eliminating ambiguity that arises during design and development. A sample hierarchical organization is used to demonstrate, in terms of cost and time, the state of the system at the occurrence of requirements creep using the Cynefin framework. These trials are repeated for different requirements and at different sub-system levels. The results show that the Cynefin framework can be used to improve the value of the system and for predictive analysis. Decision makers can use these findings, together with rigorous approaches, to improve the design of Large-Scale Complex Engineered Systems.

  6. Large-angle cosmic microwave background anisotropies in an open universe

    NASA Technical Reports Server (NTRS)

    Kamionkowski, Marc; Spergel, David N.

    1994-01-01

    If the universe is open, scales larger than the curvature scale may be probed by observation of large-angle fluctuations in the cosmic microwave background (CMB). We consider primordial adiabatic perturbations and discuss power spectra that are power laws in volume, wavelength, and eigenvalue of the Laplace operator. Such spectra may have arisen if, for example, the universe underwent a period of 'frustrated' inflation. The resulting large-angle anisotropies of the CMB are computed. The amplitude generally increases as Omega is decreased but decreases as h is increased. Interestingly enough, for all three Ansaetze, anisotropies on angular scales larger than the curvature scale are suppressed relative to the anisotropies on scales smaller than the curvature scale, but cosmic variance makes discrimination between various models difficult. Models with 0.2 approximately less than Omega h approximately less than 0.3 appear compatible with CMB fluctuations detected by the Cosmic Background Explorer Satellite (COBE) and the Tenerife experiment and with the amplitude and spectrum of fluctuations of galaxy counts in the APM, CfA, and 1.2 Jy IRAS surveys. COBE normalization for these models yields sigma_8 approximately = 0.5 - 0.7. Models with smaller values of Omega h, when normalized to COBE, require bias factors in excess of 2 to be compatible with the observed galaxy counts on the 8/h Mpc scale. Requiring that the age of the universe exceed 10 Gyr implies that Omega is approximately greater than 0.25. From the last-scattering term in the Sachs-Wolfe formula, large-angle anisotropies come primarily from the decay of potential fluctuations at z approximately less than 1/Omega. Thus, if the universe is open, COBE has been detecting temperature fluctuations produced at moderate redshift rather than at z approximately 1300.

  7. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth, amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.

  8. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE PAGES

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    2016-07-06

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth, amplify the large-scale fields exponentially before turbulence and high wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field and we find that a feedback from horizontal low wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.
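
    The two kinds of planar averaging discussed in this abstract reduce, in practice, to mean values over different pairs of grid axes; a minimal sketch (axis ordering and array shapes are our assumption) is:

      # Hypothetical sketch of horizontal (x-y) versus vertical (y-z) planar averages.
      import numpy as np

      def horizontal_average(B):
          """x-y planar average: a 1D mean-field profile along z."""
          return B.mean(axis=(0, 1))

      def vertical_average(B):
          """y-z planar average: a 1D mean-field profile along x."""
          return B.mean(axis=(1, 2))

      B_y = np.random.default_rng(0).normal(size=(64, 64, 64))   # e.g. the azimuthal field on an (nx, ny, nz) grid
      print(horizontal_average(B_y).shape, vertical_average(B_y).shape)   # (64,) (64,)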

  9. Cloud computing for genomic data analysis and collaboration.

    PubMed

    Langmead, Ben; Nellore, Abhinav

    2018-04-01

    Next-generation sequencing has made major strides in the past decade. Studies based on large sequencing data sets are growing in number, and public archives for raw sequencing data have been doubling in size every 18 months. Leveraging these data requires researchers to use large-scale computational resources. Cloud computing, a model whereby users rent computers and storage from large data centres, is a solution that is gaining traction in genomics research. Here, we describe how cloud computing is used in genomics for research and large-scale collaborations, and argue that its elasticity, reproducibility and privacy features make it ideally suited for the large-scale reanalysis of publicly available archived data, including privacy-protected data.

  10. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
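
    For reference, the adjoint sensitivity machinery referred to above can be summarized in its standard continuous-time form (a textbook formulation in our notation, not necessarily the exact discretization used by the authors):

      % State equation and objective:
      \[
        \dot{x}(t) = f\big(x(t),\theta\big), \qquad
        J(\theta) = \int_0^T g\big(x(t),\theta\big)\,\mathrm{d}t ,
      \]
      % Adjoint equation, integrated backwards in time:
      \[
        -\dot{p}(t) = \Big(\frac{\partial f}{\partial x}\Big)^{\!\top} p(t)
                      + \Big(\frac{\partial g}{\partial x}\Big)^{\!\top}, \qquad p(T) = 0 ,
      \]
      % Parameter gradient from one forward and one backward solve:
      \[
        \frac{\mathrm{d}J}{\mathrm{d}\theta}
        = \int_0^T \Big( \frac{\partial g}{\partial \theta}
          + p(t)^{\top}\,\frac{\partial f}{\partial \theta} \Big)\,\mathrm{d}t .
      \]

    One forward solve of the state equation plus one backward solve of the adjoint equation yields the full gradient, which is why the cost is effectively independent of the number of parameters; for time-discrete measurements the integral in J becomes a sum over measurement times.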

  11. NASA's Information Power Grid: Large Scale Distributed Computing and Data Management

    NASA Technical Reports Server (NTRS)

    Johnston, William E.; Vaziri, Arsi; Hinke, Tom; Tanner, Leigh Ann; Feiereisen, William J.; Thigpen, William; Tang, Harry (Technical Monitor)

    2001-01-01

    Large-scale science and engineering are done through the interaction of people, heterogeneous computing resources, information systems, and instruments, all of which are geographically and organizationally dispersed. The overall motivation for Grids is to facilitate the routine interactions of these resources in order to support large-scale science and engineering. Multi-disciplinary simulations provide a good example of a class of applications that are very likely to require aggregation of widely distributed computing, data, and intellectual resources. Such simulations - e.g. whole system aircraft simulation and whole system living cell simulation - require integrating applications and data that are developed by different teams of researchers, frequently in different locations. The research teams are the only ones that have the expertise to maintain and improve the simulation code and/or the body of experimental data that drives the simulations. This results in an inherently distributed computing and data management environment.

  12. Large-scale weakly supervised object localization via latent category learning.

    PubMed

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) in large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with a semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy by evaluating each category's discrimination. Finally, we propose the online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision that outperforms previous results by a large margin and can be competitive with the supervised deformable part model 5.0 baseline on both data sets.
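
    The latent-category step amounts to a latent semantic analysis over image features followed by a discrimination-based selection. The sketch below is a hedged analogue using scikit-learn's truncated SVD on synthetic bag-of-visual-words histograms; the feature construction, data and scoring rule are placeholders, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

# Synthetic stand-ins: each row is an image described by a visual-word histogram,
# and only image-level class labels are available (weak supervision).
rng = np.random.default_rng(1)
n_images, vocab_size, n_latent = 200, 500, 10
X = rng.poisson(1.0, size=(n_images, vocab_size)).astype(float)
labels = rng.integers(0, 2, size=n_images)

# Latent semantic analysis: truncated SVD yields latent "categories" that may
# correspond to objects, object parts, or backgrounds such as sky.
svd = TruncatedSVD(n_components=n_latent, random_state=0)
Z = svd.fit_transform(X)          # per-image weight of each latent category

# Category selection: score each latent category by how well it separates
# positive from negative images (a simple mean-difference statistic here).
scores = np.abs(Z[labels == 1].mean(axis=0) - Z[labels == 0].mean(axis=0))
object_category = int(np.argmax(scores))
print("selected latent category:", object_category)
```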

  13. COLAcode: COmoving Lagrangian Acceleration code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin V.

    2016-02-01

    COLAcode is a serial particle mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body code by trading accuracy at small-scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.

  14. A Comparison of Linking Methods for Estimating National Trends in International Comparative Large-Scale Assessments in the Presence of Cross-national DIF

    ERIC Educational Resources Information Center

    Sachse, Karoline A.; Roppelt, Alexander; Haag, Nicole

    2016-01-01

    Trend estimation in international comparative large-scale assessments relies on measurement invariance between countries. However, cross-national differential item functioning (DIF) has been repeatedly documented. We ran a simulation study using national item parameters, which required trends to be computed separately for each country, to compare…

  15. Evolving from bioinformatics in-the-small to bioinformatics in-the-large.

    PubMed

    Parker, D Stott; Gorlick, Michael M; Lee, Christopher J

    2003-01-01

    We argue the significance of a fundamental shift in bioinformatics, from in-the-small to in-the-large. Adopting a large-scale perspective is a way to manage the problems endemic to the world of the small: constellations of incompatible tools for which the effort required to assemble an integrated system exceeds the perceived benefit of the integration. Where bioinformatics in-the-small is about data and tools, bioinformatics in-the-large is about metadata and dependencies. Dependencies represent the complexities of large-scale integration, including the requirements and assumptions governing the composition of tools. The popular make utility is a very effective system for defining and maintaining simple dependencies, and it offers a number of insights about the essence of bioinformatics in-the-large. Keeping an in-the-large perspective has been very useful to us in large bioinformatics projects. We give two fairly different examples, and extract lessons from them showing how it has helped. These examples both suggest the benefit of explicitly defining and managing knowledge flows and knowledge maps (which represent metadata regarding types, flows, and dependencies), and also suggest approaches for developing bioinformatics database systems. Generally, we argue that large-scale engineering principles can be successfully adapted from disciplines such as software engineering and data management, and that having an in-the-large perspective will be a key advantage in the next phase of bioinformatics development.
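
    The "make" analogy is essentially a dependency graph in which targets are rebuilt only when their inputs change. Below is a toy sketch of that idea in Python; the file names and build actions are hypothetical and are not taken from the article.

```python
from pathlib import Path

# Toy make-like dependency resolver: the dependency graph (the metadata)
# drives the computation, and a target is rebuilt only when a dependency is newer.
RULES = {
    "genes.fasta":   ([],               lambda: Path("genes.fasta").write_text(">g1\nACGT\n")),
    "alignment.out": (["genes.fasta"],  lambda: Path("alignment.out").write_text("aligned\n")),
    "report.txt":    (["alignment.out"], lambda: Path("report.txt").write_text("summary\n")),
}

def build(target: str) -> None:
    deps, action = RULES[target]
    for dep in deps:                    # build dependencies first (depth-first over the DAG)
        build(dep)
    out = Path(target)
    newest_dep = max((Path(d).stat().st_mtime for d in deps), default=0.0)
    if not out.exists() or out.stat().st_mtime < newest_dep:
        print("rebuilding", target)
        action()
    else:
        print(target, "is up to date")

build("report.txt")
```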

  16. Cosmological explosions from cold dark matter perturbations

    NASA Technical Reports Server (NTRS)

    Scherrer, Robert J.

    1992-01-01

    The cosmological-explosion model is examined for a universe dominated by cold dark matter in which explosion seeds are produced from the growth of initial density perturbations of a given form. Fragmentation of the exploding shells is dominated by the dark-matter potential wells rather than the self-gravity of the shells, and particular conditions are required for the explosions to bootstrap up to very large scales. The final distribution of dark matter is strongly correlated with the baryons on small scales, but uncorrelated on large scales.

  17. Command and Control for Large-Scale Hybrid Warfare Systems

    DTIC Science & Technology

    2014-06-05

    ...in C2 architectures was proposed using Petri nets (PNs). Liao in [11] reported an architecture for...arises from the challenging and often-conflicting user requirements, scale, scope, inter-connectivity with different large-scale networked teams and...resources can be easily modelled and reconfigured by the notion of block matrix. At any time, the various missions of the networked team can be added

  18. Sound production due to large-scale coherent structures

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.

    1979-01-01

    The acoustic pressure fluctuations due to large-scale finite amplitude disturbances in a free turbulent shear flow are calculated. The flow is decomposed into three component scales: the mean motion, the large-scale wave-like disturbance, and the small-scale random turbulence. The effect of the large-scale structure on the flow is isolated by applying both a spatial and phase average on the governing differential equations and by initially taking the small-scale turbulence to be in energetic equilibrium with the mean flow. The subsequent temporal evolution of the flow is computed from global energetic rate equations for the different component scales. Lighthill's theory is then applied to the region with the flowfield as the source and an observer located outside the flowfield in a region of uniform velocity. Since the time history of all flow variables is known, a minimum of simplifying assumptions for the Lighthill stress tensor is required, including no far-field approximations. A phase average is used to isolate the pressure fluctuations due to the large-scale structure, and also to isolate the dynamic process responsible. Variation of mean square pressure with distance from the source is computed to determine the acoustic far-field location and decay rate, and, in addition, spectra at various acoustic field locations are computed and analyzed. Also included are the effects of varying the growth and decay of the large-scale disturbance on the sound produced.
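
    The phase average used here picks out the component locked to the large-scale wave. As a minimal illustration (my own synthetic example, not the paper's data), a signal containing a mean, a periodic wave, and random small-scale noise can be split into those three parts by averaging all samples that share the same phase of the period.

```python
import numpy as np

# Synthetic signal q(t) = mean + large-scale wave + small-scale noise.
rng = np.random.default_rng(2)
period, n_periods, n_per_period = 1.0, 200, 64
t = np.arange(n_periods * n_per_period) * (period / n_per_period)
q = 1.0 + 0.3 * np.sin(2 * np.pi * t / period) + 0.1 * rng.standard_normal(t.size)

mean_part = q.mean()
# Phase average: average all samples that fall at the same phase of the period.
phase_avg = q.reshape(n_periods, n_per_period).mean(axis=0)
wave_part = phase_avg - mean_part                  # large-scale coherent disturbance
turbulence = q - np.tile(phase_avg, n_periods)     # small-scale random residual
```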

  19. Large space telescope engineering scale model optical design

    NASA Technical Reports Server (NTRS)

    Facey, T. A.

    1973-01-01

    The objective is to develop the detailed design and tolerance data for the LST engineering scale model optical system. This will enable MSFC to move forward to the optical element procurement phase and also to evaluate tolerances, manufacturing requirements, assembly/checkout procedures, reliability, operational complexity, stability requirements of the structure and thermal system, and the flexibility to change and grow.

  20. Assessing estimation techniques for missing plot observations in the U.S. forest inventory

    Treesearch

    Grant M. Domke; Christopher W. Woodall; Ronald E. McRoberts; James E. Smith; Mark A. Hatfield

    2012-01-01

    The U.S. Forest Service, Forest Inventory and Analysis Program made a transition from state-by-state periodic forest inventories--with reporting standards largely tailored to regional requirements--to a nationally consistent, annual inventory tailored to large-scale strategic requirements. Lack of measurements on all forest land during the periodic inventory, along...

  1. Governance of extended lifecycle in large-scale eHealth initiatives: analyzing variability of enterprise architecture elements.

    PubMed

    Mykkänen, Juha; Virkanen, Hannu; Tuomainen, Mika

    2013-01-01

    The governance of large eHealth initiatives requires traceability of many requirements and design decisions. We provide a model which we use to conceptually analyze variability of several enterprise architecture (EA) elements throughout the extended lifecycle of development goals using interrelated projects related to the national ePrescription in Finland.

  2. Structural Similitude and Scaling Laws

    NASA Technical Reports Server (NTRS)

    Simitses, George J.

    1998-01-01

    Aircraft and spacecraft comprise the class of aerospace structures that require efficiency and wisdom in design, sophistication and accuracy in analysis and numerous and careful experimental evaluations of components and prototypes, in order to achieve the necessary system reliability, performance and safety. Preliminary and/or concept design entails the assemblage of system mission requirements, system expected performance and identification of components and their connections as well as of manufacturing and system assembly techniques. This is accomplished through experience based on previous similar designs, and through the possible use of models to simulate the entire system characteristics. Detail design is heavily dependent on information and concepts derived from the previous steps. This information identifies critical design areas which need sophisticated analyses, and design and redesign procedures to achieve the expected component performance. This step may require several independent analysis models, which, in many instances, require component testing. The last step in the design process, before going to production, is the verification of the design. This step necessitates the production of large components and prototypes in order to test component and system analytical predictions and verify strength and performance requirements under the worst loading conditions that the system is expected to encounter in service. Clearly then, full-scale testing is in many cases necessary and always very expensive. In the aircraft industry, in addition to full-scale tests, certification and safety necessitate large component static and dynamic testing. Such tests are extremely difficult, time consuming and absolutely necessary. Clearly, one should not expect that prototype testing will be totally eliminated in the aircraft industry. It is hoped, though, that we can reduce full-scale testing to a minimum. Full-scale large component testing is necessary in other industries as well. Shipbuilding, automobile and railway car construction all rely heavily on testing. Regardless of the application, a scaled-down (by a large factor) model (scale model) which closely represents the structural behavior of the full-scale system (prototype) can prove to be an extremely beneficial tool. This possible development must be based on the existence of certain structural parameters that control the behavior of a structural system when acted upon by static and/or dynamic loads. If such structural parameters exist, a scaled-down replica can be built, which will duplicate the response of the full-scale system. The two systems are then said to be structurally similar. The term, then, that best describes this similarity is structural similitude. Similarity of systems requires that the relevant system parameters be identical and these systems be governed by a unique set of characteristic equations. Thus, if a relation or equation of variables is written for a system, it is valid for all systems which are similar to it. Each variable in a model is proportional to the corresponding variable of the prototype. This ratio, which plays an essential role in predicting the relationship between the model and its prototype, is called the scale factor.

  3. A fast boosting-based screening method for large-scale association study in complex traits with genetic heterogeneity.

    PubMed

    Wang, Lu-Yong; Fasulo, D

    2006-01-01

    Genome-wide association studies for complex diseases will generate massive amounts of single nucleotide polymorphism (SNP) data. A univariate statistical test (i.e., the Fisher exact test) was used to single out non-associated SNPs. However, disease-susceptible SNPs may have little marginal effect in the population and are unlikely to be retained after the univariate tests. Also, model-based methods are impractical for large-scale datasets. Moreover, genetic heterogeneity makes it harder for traditional methods to identify the genetic causes of diseases. A more recent random forest method provides a more robust approach for screening SNPs at the scale of thousands. However, for larger-scale data, e.g., Affymetrix Human Mapping 100K GeneChip data, a faster screening method is required to screen SNPs in whole-genome large-scale association analysis with genetic heterogeneity. We propose a boosting-based method for rapid screening in large-scale analysis of complex traits in the presence of genetic heterogeneity. It provides a relatively fast and fairly good tool for screening and limiting the candidate SNPs for further, more complex computational modeling tasks.
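
    The screening idea can be sketched generically: fit a boosted ensemble to a case/control genotype matrix and keep the SNPs with the largest importances for downstream modeling. The example below uses scikit-learn on synthetic data and is only an analogue of the approach; it does not reproduce the authors' specific boosting algorithm or settings.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in data: genotypes coded 0/1/2 and case/control labels.
rng = np.random.default_rng(3)
n_samples, n_snps = 400, 5000
X = rng.integers(0, 3, size=(n_samples, n_snps))
y = rng.integers(0, 2, size=n_samples)

# Shallow boosted trees can capture joint (non-marginal) effects that a
# univariate test would miss, which is the motivation for boosting here.
clf = GradientBoostingClassifier(n_estimators=100, max_depth=2, random_state=0)
clf.fit(X, y)

top_k = 50
candidate_snps = np.argsort(clf.feature_importances_)[::-1][:top_k]
# candidate_snps would then be passed to more expensive model-based analyses.
```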

  4. Detonation failure characterization of non-ideal explosives

    NASA Astrophysics Data System (ADS)

    Janesheski, Robert S.; Groven, Lori J.; Son, Steven

    2012-03-01

    Non-ideal explosives are currently poorly characterized, which limits efforts to model them. Current characterization requires large-scale testing to obtain steady detonation wave data for analysis, owing to the relatively thick reaction zones. A microwave interferometer applied to small-scale confined transient experiments is being implemented to allow time-resolved characterization of a failing detonation. The microwave interferometer measures the position of a failing detonation wave in a tube that is initiated with a booster charge. Experiments have been performed with ammonium nitrate and various fuel compositions (diesel fuel and mineral oil). It was observed that the failure dynamics are influenced by factors such as chemical composition and confiner thickness. Future work is planned to calibrate models to these small-scale experiments and eventually validate the models with available large-scale experiments. This experiment is shown to be repeatable, shows dependence on reactive properties, and can be performed with little required material.

  5. Optimizing the scale of markets for water quality trading

    NASA Astrophysics Data System (ADS)

    Doyle, Martin W.; Patterson, Lauren A.; Chen, Yanyou; Schnier, Kurt E.; Yates, Andrew J.

    2014-09-01

    Applying market approaches to environmental regulations requires establishing a spatial scale for trading. Spatially large markets usually increase opportunities for abatement cost savings but increase the potential for pollution damages (hot spots), vice versa for spatially small markets. We develop a coupled hydrologic-economic modeling approach for application to point source emissions trading by a large number of sources and apply this approach to the wastewater treatment plants (WWTPs) within the watershed of the second largest estuary in the U.S. We consider two different administrative structures that govern the trade of emission permits: one-for-one trading (the number of permits required for each unit of emission is the same for every WWTP) and trading ratios (the number of permits required for each unit of emissions varies across WWTP). Results show that water quality regulators should allow trading to occur at the river basin scale as an appropriate first-step policy, as is being done in a limited number of cases via compliance associations. Larger spatial scales may be needed under conditions of increased abatement costs. The optimal scale of the market is generally the same regardless of whether one-for-one trading or trading ratios are employed.
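
    The contrast between one-for-one trading and trading ratios can be captured in a tiny optimization: choose abatement to minimize cost subject to a cap on the delivered load, where each source's emissions are weighted by its ratio. The sketch below uses scipy's linear programming solver with made-up costs, emissions, ratios and cap; it illustrates the mechanism, not the authors' coupled hydrologic-economic model.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical numbers for three WWTPs.
cost = np.array([10.0, 25.0, 40.0])        # $ per unit of emissions abated
emissions = np.array([100.0, 80.0, 60.0])  # baseline emissions
ratios = np.array([1.0, 1.5, 0.8])         # permits required per unit emitted
cap = 150.0                                # allowed delivered load
# Setting ratios = np.ones(3) reproduces one-for-one trading.

# Delivered-load constraint: sum_i r_i*(e_i - a_i) <= cap  =>  -r . a <= cap - r . e
res = linprog(c=cost,
              A_ub=[-ratios], b_ub=[cap - ratios @ emissions],
              bounds=list(zip(np.zeros(3), emissions)),
              method="highs")
print("abatement per plant:", res.x, "total cost:", res.fun)
```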

  6. Comparison of an algebraic multigrid algorithm to two iterative solvers used for modeling ground water flow and transport

    USGS Publications Warehouse

    Detwiler, R.L.; Mehl, S.; Rajaram, H.; Cheung, W.W.

    2002-01-01

    Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling.
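
    The comparison can be mimicked in miniature on a model Poisson-type matrix, which is the kind of sparse, symmetric system that arises from discretized ground water flow. The sketch below is only an illustration of AMG versus Krylov iteration, assuming the third-party pyamg package is available; it does not reproduce the MODFLOW 2000 integration or the PCG2 preconditioner.

```python
import numpy as np
import pyamg                                  # assumed available: pip install pyamg
from scipy.sparse.linalg import cg

# Model flow-like system: 2-D Poisson matrix with 40,000 unknowns.
A = pyamg.gallery.poisson((200, 200), format="csr")
b = np.ones(A.shape[0])

# Classical (Ruge-Stuben) algebraic multigrid hierarchy and solve.
ml = pyamg.ruge_stuben_solver(A)
x_amg = ml.solve(b, tol=1e-8)

# Plain conjugate gradients for comparison (no multigrid preconditioner).
x_cg, info = cg(A, b, atol=1e-8)

print("AMG residual:", np.linalg.norm(b - A @ x_amg))
print("CG residual: ", np.linalg.norm(b - A @ x_cg), "info:", info)
```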

  7. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.

  8. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near realtime. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resource to running jobs and automated management of remote services among a large set of grid facilities.

  9. Comparison of Multi-Scale Digital Elevation Models for Defining Waterways and Catchments Over Large Areas

    NASA Astrophysics Data System (ADS)

    Harris, B.; McDougall, K.; Barry, M.

    2012-07-01

    Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for analysis of larger data sets and also facilitate a consistent tool for the creation and analysis of waterways over extensive areas. However, rarely are they developed over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (Wivenhoe catchment, 543 km2 and a detailed 13 km2 within the Wivenhoe catchment) including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data was compared to high resolution Lidar based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
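
    At the core of DEM-based waterway delineation is a flow-direction step such as D8, in which each cell drains to the neighbour with the steepest downhill slope. The toy sketch below illustrates that step on a hypothetical 3x3 grid; the DEM values are made up, and the 20 m cell size merely echoes the resolution discussed above.

```python
import numpy as np

# Hypothetical DEM (metres) and cell size.
dem = np.array([[10.0, 9.5, 9.0],
                [ 9.8, 9.0, 8.2],
                [ 9.6, 8.5, 7.0]])
cell = 20.0  # metres

offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_direction(r, c):
    """Return the (dr, dc) offset of the steepest-descent neighbour, or None for a pit/flat cell."""
    best, best_slope = None, 0.0
    for dr, dc in offsets:
        rr, cc = r + dr, c + dc
        if 0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]:
            slope = (dem[r, c] - dem[rr, cc]) / (cell * np.hypot(dr, dc))
            if slope > best_slope:
                best, best_slope = (dr, dc), slope
    return best

print(d8_direction(1, 1))  # the centre cell drains towards the lowest corner
```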

  10. State of the Art in Large-Scale Soil Moisture Monitoring

    NASA Technical Reports Server (NTRS)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.; hide

    2013-01-01

    Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  11. Fermi rules out the IC/CMB model for the Large-Scale Jet X-ray emission of 3C 273

    NASA Astrophysics Data System (ADS)

    Georganopoulos, Markos; Meyer, E. T.

    2014-01-01

    The process responsible for the Chandra-detected X-ray emission from the large-scale jets of powerful quasars is not yet clear. The two main models are inverse Compton scattering off the cosmic microwave background (IC/CMB) photons and synchrotron emission from a population of electrons separate from those producing the radio-IR emission. These two models imply radically different conditions in the large scale jet in terms of jet speed and maximum energy of the particle acceleration mechanism, with important implications for the impact of the jet on the larger-scale environment. Georganopoulos et al. (2006) proposed a diagnostic based on a fundamental difference between these two models: the production of synchrotron X-rays requires multi-TeV electrons, while the EC/CMB model requires a cutoff in the electron energy distribution below TeV energies. This has significant implications for the gamma-ray emission predicted by these two models. Here we present new Fermi observations that put an upper limit on the gamma-ray flux from the large-scale jet of 3C 273 that clearly violates the flux expected from the IC/CMB X-ray interpretation found by extrapolation of the UV to X-ray spectrum of knot A, thus ruling out the IC/CMB interpretation entirely for this source. Further, the Fermi upper limit constrains the Doppler beaming factor to delta < 5.

  12. What Determines Upscale Growth of Oceanic Convection into MCSs?

    NASA Astrophysics Data System (ADS)

    Zipser, E. J.

    2017-12-01

    Over tropical oceans, widely scattered convection of various depths may or may not grow upscale into mesoscale convective systems (MCSs). But what distinguishes the large-scale environment that favors such upscale growth from that favoring "unorganized", scattered convection? Is it some combination of large-scale low-level convergence and ascending motion, combined with sufficient instability? We recently put this to a test with ERA-I reanalysis data, with disappointing results. The "usual suspects" of total column water vapor, large-scale ascent, and CAPE may all be required to some extent, but their differences between large MCSs and scattered convection are small. The main positive results from this work (already published) demonstrate that the strength of convection is well correlated with the size and perhaps "organization" of convective features over tropical oceans, in contrast to tropical land, where strong convection is common for large or small convective features. So, important questions remain: Over tropical oceans, how should we define "organized" convection? By size of the precipitation area? And what environmental conditions lead to larger and better organized MCSs? Some recent attempts to answer these questions will be described, but good answers may require more data, and more insights.

  13. A successful trap design for capturing large terrestrial snakes

    Treesearch

    Shirley J. Burgdorf; D. Craig Rudolph; Richard N. Conner; Daniel Saenz; Richard R. Schaefer

    2005-01-01

    Large scale trapping protocols for snakes can be expensive and require large investments of personnel and time. Typical methods, such as pitfall and small funnel traps, are not useful or suitable for capturing large snakes. A method was needed to survey multiple blocks of habitat for the Louisiana Pine Snake (Pituophis ruthveni), throughout its...

  14. Physics implications of the diphoton excess from the perspective of renormalization group flow

    DOE PAGES

    Gu, Jiayin; Liu, Zhen

    2016-04-06

    A very plausible explanation for the recently observed diphoton excess at the 13 TeV LHC is a (pseudo)scalar with mass around 750 GeV, which couples to a gluon pair and to a photon pair through loops involving vector-like quarks (VLQs). To accommodate the observed rate, the required Yukawa couplings tend to be large. A large Yukawa coupling would rapidly run up with the scale and quickly reach the perturbativity bound, indicating that new physics, possibly with a strong dynamics origin, is nearby. The case becomes stronger especially if the ATLAS observation of a large width persists. In this paper we study the implications of the 750 GeV diphoton excess for the scale of new physics using the method of renormalization group running with careful treatment of different contributions and the perturbativity criterion. Our results suggest that the scale of new physics is generically not much larger than the TeV scale, in particular if the width of the hinted (pseudo)scalar is large. Introducing multiple copies of VLQs, lowering the VLQ masses and enlarging VLQ electric charges help reduce the required Yukawa couplings and can push the cutoff scale to higher values. Nevertheless, if the width of the 750 GeV resonance turns out to be larger than about 1 GeV, it is very hard to increase the cutoff scale beyond a few TeVs. This is a strong hint that new particles in addition to the 750 GeV resonance and the vector-like quarks should be around the TeV scale.

  15. Bundle block adjustment of large-scale remote sensing data with Block-based Sparse Matrix Compression combined with Preconditioned Conjugate Gradient

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Zhang, Yongjun; Zhou, Shunping; Zhu, Junfeng; Xiong, Xiaodong

    2016-07-01

    In recent years, new platforms and sensors in the photogrammetry, remote sensing and computer vision areas have become available, such as Unmanned Aerial Vehicles (UAVs), oblique camera systems, common digital cameras and even mobile phone cameras. Images collected by all these kinds of sensors could be used as remote sensing data sources. These sensors can obtain large-scale remote sensing data which consist of a great number of images. Bundle block adjustment of large-scale data with the conventional algorithm is very time- and memory-consuming due to the very large normal matrix arising from large-scale data. In this paper, an efficient Block-based Sparse Matrix Compression (BSMC) method combined with the Preconditioned Conjugate Gradient (PCG) algorithm is chosen to develop a stable and efficient bundle block adjustment system in order to deal with the large-scale remote sensing data. The main contribution of this work is the BSMC-based PCG algorithm, which is more efficient in time and memory than the traditional algorithm without compromising the accuracy. A total of 8 real datasets are used to test our proposed method. Preliminary results have shown that the BSMC method can efficiently decrease the time and memory requirements for large-scale data.
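
    The PCG ingredient can be illustrated in isolation: solve a damped normal-equation system iteratively with a simple block-Jacobi preconditioner so that the full normal matrix never has to be factorised. The sketch below uses random sparse data and a hypothetical 3-parameter block size; it does not reproduce the BSMC storage scheme or the paper's adjustment model.

```python
import numpy as np
from scipy.sparse import random as sparse_random, identity
from scipy.sparse.linalg import cg, LinearOperator

rng = np.random.default_rng(4)
n_params, block = 300, 3                       # e.g. 100 cameras x 3 parameters (illustrative)
J = sparse_random(2000, n_params, density=0.02, random_state=0, format="csr")
N = (J.T @ J + 1e-3 * identity(n_params)).tocsr()   # damped normal matrix
rhs = J.T @ rng.standard_normal(2000)

# Block-Jacobi preconditioner: invert each small diagonal block independently.
blocks = [np.linalg.inv(N[i:i + block, i:i + block].toarray())
          for i in range(0, n_params, block)]

def apply_precond(v):
    out = np.empty_like(v)
    for k, Binv in enumerate(blocks):
        s = slice(k * block, (k + 1) * block)
        out[s] = Binv @ v[s]
    return out

M = LinearOperator((n_params, n_params), matvec=apply_precond)
x, info = cg(N, rhs, M=M, atol=1e-10)
print("converged" if info == 0 else "not converged", np.linalg.norm(N @ x - rhs))
```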

  16. A study on the required performance of a 2G HTS wire for HTS wind power generators

    NASA Astrophysics Data System (ADS)

    Sung, Hae-Jin; Park, Minwon; Go, Byeong-Soo; Yu, In-Keun

    2016-05-01

    YBCO or REBCO coated conductor (2G) materials are developed for their superior performance at high magnetic field and temperature. Power system applications based on high temperature superconducting (HTS) 2G wire technology are attracting attention, including large-scale wind power generators. In particular, to solve problems associated with the foundations and mechanical structure of offshore wind turbines, due to the large diameter and heavy weight of the generator, an HTS generator is suggested as one of the key technologies. Many researchers have tried to develop feasible large-scale HTS wind power generator technologies. In this paper, a study on the required performance of a 2G HTS wire for large-scale wind power generators is discussed. A 12 MW class large-scale wind turbine and an HTS generator are designed using 2G HTS wire. The total length of the 2G HTS wire for the 12 MW HTS generator is estimated, and the essential prerequisites of the 2G HTS wire based generator are described. The magnetic field distributions of a pole module are illustrated, and the mechanical stress and strain of the pole module are analysed. Finally, a reasonable price for 2G HTS wire for commercialization of the HTS generator is suggested, reflecting the results of electromagnetic and mechanical analyses of the generator.

  17. Windvan laser study

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The goal of defining a CO2 laser transmitter approach suited to Shuttle Coherent Atmospheric Lidar Experiment (SCALE) requirements is discussed. The adaptation of the existing WINDVAN system to the shuttle environment is addressed. The size, weight, reliability, and efficiency of the existing WINDVAN system are largely compatible with SCALE requirements. Repackaging is needed for compatibility with vacuum and thermal environments. Changes are required to ensure survival of the mechanical, vibration, and acoustic loads encountered during launch and landing. Existing WINDVAN thermal management approaches that depend on convection need to be upgraded for zero-gravity operations.

  18. Querying Large Biological Network Datasets

    ERIC Educational Resources Information Center

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in an increasing amount of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of available data requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…

  19. Setting Learning Analytics in Context: Overcoming the Barriers to Large-Scale Adoption

    ERIC Educational Resources Information Center

    Ferguson, Rebecca; Macfadyen, Leah P.; Clow, Doug; Tynan, Belinda; Alexander, Shirley; Dawson, Shane

    2014-01-01

    A core goal for most learning analytic projects is to move from small-scale research towards broader institutional implementation, but this introduces a new set of challenges because institutions are stable systems, resistant to change. To avoid failure and maximize success, implementation of learning analytics at scale requires explicit and…

  20. 48 CFR 852.236-71 - Specifications and drawings for construction.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... (b) Large scale drawings supersede small scale drawings. (c) Dimensions govern in all cases. Scaling of drawings may be done only for general location and general size of items. (d) Dimensions shown of existing work and all dimensions required for work that is to connect with existing work shall be verified...

  1. 48 CFR 852.236-71 - Specifications and drawings for construction.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... (b) Large scale drawings supersede small scale drawings. (c) Dimensions govern in all cases. Scaling of drawings may be done only for general location and general size of items. (d) Dimensions shown of existing work and all dimensions required for work that is to connect with existing work shall be verified...

  2. 48 CFR 852.236-71 - Specifications and drawings for construction.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... (b) Large scale drawings supersede small scale drawings. (c) Dimensions govern in all cases. Scaling of drawings may be done only for general location and general size of items. (d) Dimensions shown of existing work and all dimensions required for work that is to connect with existing work shall be verified...

  3. 48 CFR 852.236-71 - Specifications and drawings for construction.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    .... (b) Large scale drawings supersede small scale drawings. (c) Dimensions govern in all cases. Scaling of drawings may be done only for general location and general size of items. (d) Dimensions shown of existing work and all dimensions required for work that is to connect with existing work shall be verified...

  4. 48 CFR 852.236-71 - Specifications and drawings for construction.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    .... (b) Large scale drawings supersede small scale drawings. (c) Dimensions govern in all cases. Scaling of drawings may be done only for general location and general size of items. (d) Dimensions shown of existing work and all dimensions required for work that is to connect with existing work shall be verified...

  5. Application of Small-Scale Systems: Evaluation of Alternatives

    Treesearch

    John Wilhoit; Robert Rummer

    1999-01-01

    Large-scale mechanized systems are not well-suited for harvesting smaller tracts of privately owned forest land. New alternative small-scale harvesting systems are needed which utilize mechanized felling, have a low capital investment requirement, are small in physical size, and are based primarily on adaptations of current harvesting technology. This paper presents...

  6. Exploring Google Earth Engine platform for big data processing: classification of multi-temporal satellite imagery for crop mapping

    NASA Astrophysics Data System (ADS)

    Shelestov, Andrii; Lavreniuk, Mykola; Kussul, Nataliia; Novikov, Alexei; Skakun, Sergii

    2017-02-01

    Many applied problems arising in agricultural monitoring and food security require reliable crop maps at national or global scale. Large-scale crop mapping requires processing and management of a large amount of heterogeneous satellite imagery acquired by various sensors, which consequently leads to a “Big Data” problem. The main objective of this study is to explore the efficiency of using the Google Earth Engine (GEE) platform when classifying multi-temporal satellite imagery, with the potential to apply the platform at a larger scale (e.g. country level) and with multiple sensors (e.g. Landsat-8 and Sentinel-2). In particular, multiple state-of-the-art classifiers available in the GEE platform are compared to produce a high resolution (30 m) crop classification map for a large territory (~28,100 km2 and 1.0 M ha of cropland). Though this study does not involve large volumes of data, it does address the efficiency of the GEE platform in executing the complex satellite data processing workflows required by large-scale applications such as crop mapping. The study discusses strengths and weaknesses of classifiers, assesses accuracies that can be achieved with different classifiers for the Ukrainian landscape, and compares them to the benchmark classifier using a neural network approach that was developed in our previous studies. The study is carried out for the Joint Experiment of Crop Assessment and Monitoring (JECAM) test site in Ukraine covering the Kyiv region (North of Ukraine) in 2013. We found that Google Earth Engine (GEE) provides very good performance in terms of enabling access to the remote sensing products through the cloud platform and providing pre-processing; however, in terms of classification accuracy, the neural network based approach outperformed the support vector machine (SVM), decision tree and random forest classifiers available in GEE.
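
    The classifier comparison itself is platform-agnostic: each pixel is described by features stacked from several acquisition dates and different classifiers are trained on the same samples. The sketch below runs that comparison with scikit-learn on synthetic data rather than through the GEE API, so the data shapes, classes and classifier settings are all placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic multi-temporal stack: n_dates acquisitions x n_bands per pixel.
rng = np.random.default_rng(5)
n_pixels, n_dates, n_bands, n_classes = 5000, 6, 4, 5
X = rng.standard_normal((n_pixels, n_dates * n_bands))
y = rng.integers(0, n_classes, size=n_pixels)      # crop-type labels (random here)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("SVM (RBF)", SVC(kernel="rbf", C=10.0, gamma="scale"))]:
    clf.fit(X_tr, y_tr)
    # Accuracies are meaningless on random labels; with real samples this is
    # where the per-classifier overall accuracy would be compared.
    print(name, "overall accuracy:", clf.score(X_te, y_te))
```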

  7. Exploring Cloud Computing for Large-scale Scientific Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Guang; Han, Binh; Yin, Jian

    This paper explores cloud computing for large-scale data-intensive scientific applications. Cloud computing is attractive because it provides hardware and software resources on-demand, which relieves the burden of acquiring and maintaining a huge amount of resources that may be used only once by a scientific application. However, unlike typical commercial applications that often just require a moderate amount of ordinary resources, large-scale scientific applications often need to process enormous amounts of data in the terabyte or even petabyte range and require special high performance hardware with low latency connections to complete computation in a reasonable amount of time. To address these challenges, we build an infrastructure that can dynamically select high performance computing hardware across institutions and dynamically adapt the computation to the selected resources to achieve high performance. We have also demonstrated the effectiveness of our infrastructure by building a systems biology application and an uncertainty quantification application for carbon sequestration, which can efficiently utilize data and computation resources across several institutions.

  8. Report on phase 1 of the Microprocessor Seminar. [and associated large scale integration

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Proceedings of a seminar on microprocessors and associated large scale integrated (LSI) circuits are presented. The potential for commonality of device requirements, candidate processes and mechanisms for qualifying candidate LSI technologies for high reliability applications, and specifications for testing and testability were among the topics discussed. Various programs and tentative plans of the participating organizations in the development of high reliability LSI circuits are given.

  9. Breaking barriers through collaboration: the example of the Cell Migration Consortium.

    PubMed

    Horwitz, Alan Rick; Watson, Nikki; Parsons, J Thomas

    2002-10-15

    Understanding complex integrated biological processes, such as cell migration, requires interdisciplinary approaches. The Cell Migration Consortium, funded by a Large-Scale Collaborative Project Award from the National Institute of General Medical Science, develops and disseminates new technologies, data, reagents, and shared information to a wide audience. The development and operation of this Consortium may provide useful insights for those who plan similarly large-scale, interdisciplinary approaches.

  10. Experience with specifications applicable to certification. [of photovoltaic modules for large-scale application

    NASA Technical Reports Server (NTRS)

    Ross, R. G., Jr.

    1982-01-01

    The Jet Propulsion Laboratory has developed a number of photovoltaic test and measurement specifications to guide the development of modules toward the requirements of future large-scale applications. Experience with these specifications and the extensive module measurement and testing that has accompanied their use is examined. Conclusions are drawn relative to three aspects of product certification: performance measurement, endurance testing and safety evaluation.

  11. [A large-scale accident in Alpine terrain].

    PubMed

    Wildner, M; Paal, P

    2015-02-01

    Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication to the control room, which is required to dispatch the necessary additional support. Only with a clear "concept", to which all have to adhere, can the subsequent chaos phase be limited. In this respect, a time factor confounded by adverse weather conditions or darkness represents enormous pressure. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, then treatment and procedure algorithms have proven successful. For evacuation of casualties, the use of a helicopter should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute the patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to optimization of rescue measures, regular training and exercises are rational, as is the analysis of previous large-scale Alpine accidents.

  12. Using Computing and Data Grids for Large-Scale Science and Engineering

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2001-01-01

    We use the term "Grid" to refer to a software system that provides uniform and location independent access to geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. These emerging data and computing Grids promise to provide a highly capable and scalable environment for addressing large-scale science problems. We describe the requirements for science Grids, the resulting services and architecture of NASA's Information Power Grid (IPG) and DOE's Science Grid, and some of the scaling issues that have come up in their implementation.

  13. Collecting verbal autopsies: improving and streamlining data collection processes using electronic tablets.

    PubMed

    Flaxman, Abraham D; Stewart, Andrea; Joseph, Jonathan C; Alam, Nurul; Alam, Sayed Saidul; Chowdhury, Hafizur; Mooney, Meghan D; Rampatige, Rasika; Remolador, Hazel; Sanvictores, Diozele; Serina, Peter T; Streatfield, Peter Kim; Tallo, Veronica; Murray, Christopher J L; Hernandez, Bernardo; Lopez, Alan D; Riley, Ian Douglas

    2018-02-01

    There is increasing interest in using verbal autopsy to produce nationally representative population-level estimates of causes of death. However, the burden of processing a large quantity of surveys collected with paper and pencil has been a barrier to scaling up verbal autopsy surveillance. Direct electronic data capture has been used in other large-scale surveys and can be used in verbal autopsy as well, to reduce time and cost of going from collected data to actionable information. We collected verbal autopsy interviews using paper and pencil and using electronic tablets at two sites, and measured the cost and time required to process the surveys for analysis. From these cost and time data, we extrapolated costs associated with conducting large-scale surveillance with verbal autopsy. We found that the median time between data collection and data entry for surveys collected on paper and pencil was approximately 3 months. For surveys collected on electronic tablets, this was less than 2 days. For small-scale surveys, we found that the upfront costs of purchasing electronic tablets was the primary cost and resulted in a higher total cost. For large-scale surveys, the costs associated with data entry exceeded the cost of the tablets, so electronic data capture provides both a quicker and cheaper method of data collection. As countries increase verbal autopsy surveillance, it is important to consider the best way to design sustainable systems for data collection. Electronic data capture has the potential to greatly reduce the time and costs associated with data collection. For long-term, large-scale surveillance required by national vital statistical systems, electronic data capture reduces costs and allows data to be available sooner.
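
    The cost trade-off described above is a simple break-even calculation: the upfront tablet purchase is recovered once enough surveys have been collected at a lower per-survey cost. The numbers below are hypothetical placeholders, not figures from the study; the structure of the arithmetic is the point.

```python
# Hypothetical inputs (not the study's figures).
n_surveys = 20000

tablet_unit_cost, n_tablets = 150.0, 40   # upfront hardware purchase
tablet_per_survey = 0.10                  # electronic capture cost per survey
paper_per_survey = 1.50                   # printing plus manual data entry per survey

cost_tablet = tablet_unit_cost * n_tablets + tablet_per_survey * n_surveys
cost_paper = paper_per_survey * n_surveys

# Break-even survey count: upfront cost divided by the per-survey saving.
break_even = tablet_unit_cost * n_tablets / (paper_per_survey - tablet_per_survey)
print(cost_tablet, cost_paper, break_even)  # tablets win once n_surveys exceeds break_even
```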

  14. Performance of ceramic superconductors in magnetic bearings

    NASA Technical Reports Server (NTRS)

    Kirtley, James L., Jr.; Downer, James R.

    1993-01-01

    Magnetic bearings are large-scale applications of magnet technology, quite similar in certain ways to synchronous machinery. They require substantial flux density over relatively large volumes of space. Large flux density is required to have satisfactory force density. Satisfactory dynamic response requires that magnetic circuit permeances not be too large, implying large air gaps. Superconductors, which offer large magnetomotive forces and high flux density in low permeance circuits, appear to be desirable in these situations. Flux densities substantially in excess of those possible with iron can be produced, and no ferromagnetic material is required. Thus the inductance of active coils can be made low, indicating good dynamic response of the bearing system. The principal difficulty in using superconductors is, of course, the deep cryogenic temperatures at which they must operate. Because of the difficulties in working with liquid helium, the possibility of superconductors which can be operated in liquid nitrogen is thought to extend the number and range of applications of superconductivity. Critical temperatures of about 98 K were demonstrated in a class of materials which are, in fact, ceramics. Quite a bit of public attention was attracted to these new materials. There is a difficulty with the ceramic superconducting materials which have been developed to date. Current densities sufficient for use in large-scale applications have not been demonstrated. In order to be useful, superconductors must be capable of carrying substantial currents in the presence of large magnetic fields. The possible use of ceramic superconductors in magnetic bearings is investigated and discussed and requirements that must be achieved by superconductors operating at liquid nitrogen temperatures to make their use comparable with niobium-titanium superconductors operating at liquid helium temperatures are identified.

  15. Computational domain length and Reynolds number effects on large-scale coherent motions in turbulent pipe flow

    NASA Astrophysics Data System (ADS)

    Feldmann, Daniel; Bauer, Christian; Wagner, Claus

    2018-03-01

    We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures in dependency of Reτ and to assess a minimum ? required for relevant turbulent scales to be captured and a minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ from the near-wall region to the outer layer, where VLSM are believed to live. The root mean square velocity profiles exhibit domain length dependencies for pipes shorter than 14R and 7R depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase of the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ⪆1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectra does not yet indicate sufficient scale separation between the most energetic and the very long motions.
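
    The key diagnostic above is the pre-multiplied streamwise energy spectrum, whose low-wavenumber content reveals whether VLSM-relevant scales fit within the periodic domain. The sketch below computes k_x E_uu(k_x) for a synthetic periodic velocity signal; the domain length, sample count and threshold are arbitrary illustrative choices, not the simulation parameters.

```python
import numpy as np

# Synthetic periodic streamwise velocity fluctuation u'(x) on a domain of length L.
rng = np.random.default_rng(6)
L, n = 40.0, 4096                        # arbitrary domain length (in pipe radii) and samples
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(2 * np.pi * x / L * 3) + 0.2 * rng.standard_normal(n)

u_hat = np.fft.rfft(u - u.mean()) / n
E = 2.0 * np.abs(u_hat) ** 2             # one-sided energy per mode (DC/Nyquist factors ignored)
k = 2.0 * np.pi * np.fft.rfftfreq(n, d=L / n)
premultiplied = k * E                     # peaks indicate the most energetic wavelengths

# The energy in the longest resolved modes can then be compared against a
# threshold (e.g. 10% of the spectral maximum, as used in the study) to judge
# whether the box is long enough to contain VLSM-like motions.
long_modes_ok = premultiplied[1:4].max() >= 0.1 * premultiplied.max()
print(long_modes_ok)
```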

  16. Policy Driven Development: Flexible Policy Insertion for Large Scale Systems.

    PubMed

    Demchak, Barry; Krüger, Ingolf

    2012-07-01

    The success of a software system depends critically on how well it reflects and adapts to stakeholder requirements. Traditional development methods often frustrate stakeholders by creating long latencies between requirement articulation and system deployment, especially in large-scale systems. One source of latency is the maintenance of policy decisions encoded directly into system workflows at development time, including those involving access control and feature set selection. We created the Policy Driven Development (PDD) methodology to address these development latencies by enabling the flexible injection of decision points into existing workflows at runtime, thus enabling policy composition that integrates requirements furnished by multiple, oblivious stakeholder groups. Using PDD, we designed and implemented a production cyberinfrastructure that demonstrates policy and workflow injection that quickly implements stakeholder requirements, including features not contemplated in the original system design. PDD provides a path to quickly and cost-effectively evolve such applications over a long lifetime.
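
    The decision-point idea can be sketched generically: a workflow exposes named decision points, and policies contributed by different stakeholder groups are registered against them at runtime without editing the workflow code. This is a toy illustration of the concept, not the PDD framework's actual API; all names below are hypothetical.

```python
from typing import Callable, Dict, List

Policy = Callable[[dict], bool]
_policies: Dict[str, List[Policy]] = {}

def inject_policy(point: str, policy: Policy) -> None:
    """Register a policy against a named decision point at runtime."""
    _policies.setdefault(point, []).append(policy)

def decision_point(point: str, context: dict) -> bool:
    """Evaluate all policies injected at this point; all must pass."""
    return all(policy(context) for policy in _policies.get(point, []))

def run_workflow(request: dict) -> str:
    if not decision_point("access_control", request):
        return "denied"
    # ... the rest of the workflow is unchanged when new policies appear ...
    return "processed"

# Different stakeholder groups add policies without touching run_workflow:
inject_policy("access_control", lambda req: req.get("role") == "researcher")
inject_policy("access_control", lambda req: req.get("dataset") != "restricted")
print(run_workflow({"role": "researcher", "dataset": "public"}))
```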

  17. Generating large-scale estimates from sparse, in-situ networks: multi-scale soil moisture modeling at ARS watersheds for NASA’s soil moisture active passive (SMAP) calibration/validation mission

    USDA-ARS?s Scientific Manuscript database

    NASA’s SMAP satellite, launched in November of 2014, produces estimates of average volumetric soil moisture at 3, 9, and 36-kilometer scales. The calibration and validation process of these estimates requires the generation of an identically-scaled soil moisture product from existing in-situ networ...

  18. Mean-field dynamo in a turbulence with shear and kinetic helicity fluctuations.

    PubMed

    Kleeorin, Nathan; Rogachevskii, Igor

    2008-03-01

    We study the effects of kinetic helicity fluctuations in a turbulence with large-scale shear using two different approaches: the spectral tau approximation and the second-order correlation approximation (or first-order smoothing approximation). These two approaches demonstrate that homogeneous kinetic helicity fluctuations alone with zero mean value in a sheared homogeneous turbulence cannot cause a large-scale dynamo. A mean-field dynamo is possible when the kinetic helicity fluctuations are inhomogeneous, which causes a nonzero mean alpha effect in a sheared turbulence. On the other hand, the shear-current effect can generate a large-scale magnetic field even in a homogeneous nonhelical turbulence with large-scale shear. This effect was investigated previously for large hydrodynamic and magnetic Reynolds numbers. In this study we examine the threshold required for the shear-current dynamo versus Reynolds number. We demonstrate that there is no need for a developed inertial range in order to maintain the shear-current dynamo (e.g., the threshold in the Reynolds number is of the order of 1).

  19. Study of multi-functional precision optical measuring system for large scale equipment

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi

    2017-10-01

    The effective application of high-performance measurement technology can greatly improve large-scale equipment manufacturing capability. The measurement of geometric parameters such as size, attitude and position therefore requires a measurement system offering high precision, multiple functions, portability and other characteristics. However, existing measuring instruments, such as the laser tracker, total station and photogrammetry system, mostly provide a single function, require station moving and have other shortcomings. The laser tracker needs to work with a cooperative target and can hardly meet the requirements of measurement in extreme environments. The total station is mainly used for outdoor surveying and mapping, and it is hard for it to achieve the accuracy demanded in industrial measurement. A photogrammetry system can achieve a wide range of multi-point measurement, but its measuring range is limited and the station needs to be moved repeatedly. The paper presents a non-contact opto-electronic measuring instrument that can both measure by scanning along a measurement path and measure a cooperative target in tracking mode. The system is based on several key technologies, including absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of complex mechanical systems and multi-functional 3D visualization software. Among them, the absolute distance measurement module ensures measurement with high accuracy, and the two-dimensional angle measuring module provides precision angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can ensure the quality and performance of large-scale equipment throughout the manufacturing process and improve the manufacturing capability of large-scale and high-end equipment.

  20. Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resseguie, David R

    There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, two applications which illustrate our framework's ability to support general web-based robotic control, and offer experimental results that illustrate our framework's scalability, feasibility, and resource requirements.
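
    As an illustrative aside, web-enabling a robot's sensor observations and actuators can be sketched with the Python standard library alone; the endpoints, port and sensor names below are hypothetical and do not reflect the Sensorpedia or Robopedia APIs.

        # Minimal sketch: GET returns cached sensor observations, POST accepts actuator commands.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        import json

        SENSORS = {"battery": 0.87, "camera_pan_deg": 15.0}   # hypothetical observation cache

        class RobotHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(json.dumps(SENSORS).encode())

            def do_POST(self):
                length = int(self.headers.get("Content-Length", 0))
                command = json.loads(self.rfile.read(length) or b"{}")
                # A real bridge would forward `command` to the robot's actuators here.
                self.send_response(204)
                self.end_headers()

        if __name__ == "__main__":
            HTTPServer(("", 8080), RobotHandler).serve_forever()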

  1. Reversible Parallel Discrete-Event Execution of Large-scale Epidemic Outbreak Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S; Seal, Sudip K

    2010-01-01

    The spatial scale, runtime speed and behavioral detail of epidemic outbreak simulations together require the use of large-scale parallel processing. In this paper, an optimistic parallel discrete event execution of a reaction-diffusion simulation model of epidemic outbreaks is presented, with an implementation over the µsik simulator. Rollback support is achieved with the development of a novel reversible model that combines reverse computation with a small amount of incremental state saving. Parallel speedup and other runtime performance metrics of the simulation are tested on a small (8,192-core) Blue Gene/P system, while scalability is demonstrated on 65,536 cores of a large Cray XT5 system. Scenarios representing large population sizes (up to several hundred million individuals in the largest case) are exercised.
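
    As an illustrative aside, the combination of reverse computation with a small amount of incremental state saving mentioned above can be sketched as a forward/reverse event pair; the event, state layout and random draw below are hypothetical and far simpler than the reaction-diffusion model in the record.

        # Sketch of a reversible event: deterministic updates are undone by reverse
        # computation, while the one irreversible quantity (a random draw) is saved.
        import random

        state = {"A": 100, "B": 0}      # hypothetical counts of individuals at two locations
        saved_draws = []                # incremental state saving for the stochastic part

        def forward_diffuse():
            moved = random.randint(0, state["A"])
            saved_draws.append(moved)   # must be saved to make the event reversible
            state["A"] -= moved
            state["B"] += moved

        def reverse_diffuse():          # invoked on rollback by an optimistic scheduler
            moved = saved_draws.pop()
            state["B"] -= moved
            state["A"] += moved

        forward_diffuse()
        reverse_diffuse()
        assert state == {"A": 100, "B": 0}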

  2. Resolving Properties of Polymers and Nanoparticle Assembly through Coarse-Grained Computational Studies.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grest, Gary S.

    2017-09-01

    Coupled length and time scales determine the dynamic behavior of polymers and polymer nanocomposites and underlie their unique properties. To resolve the properties over large time and length scales it is imperative to develop coarse-grained models which retain the atomistic specificity. Here we probe the degree of coarse-graining required to simultaneously retain significant atomistic details and access large length and time scales. The degree of coarse-graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics with different numbers of methylene groups per coarse-grained bead. Using these models we simulate polyethylene melts for times over 500 ms to study the viscoelastic properties of well-entangled polymer melts and large nanoparticle assembly as the nanoparticles are driven close enough to form nanostructures.
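
    As an illustrative aside, grouping a fixed number of methylene units into one coarse-grained bead, the coarse-graining degree discussed above, can be sketched as a centre-of-mass mapping; the chain coordinates, masses and grouping size below are placeholders, not values from the study.

        # Map an atomistic (united-atom) chain onto coarse-grained beads by grouping
        # n_per_bead consecutive monomers and taking their centre of mass.
        import numpy as np

        def coarse_grain(positions, masses, n_per_bead):
            """positions: (N, 3) monomer coordinates; masses: (N,); returns bead coordinates."""
            n_beads = len(positions) // n_per_bead
            beads = []
            for b in range(n_beads):
                sl = slice(b * n_per_bead, (b + 1) * n_per_bead)
                m = masses[sl]
                beads.append((positions[sl] * m[:, None]).sum(axis=0) / m.sum())
            return np.array(beads)

        chain = np.cumsum(np.random.default_rng(0).normal(size=(120, 3)), axis=0)
        beads = coarse_grain(chain, np.full(120, 14.0), n_per_bead=4)   # 4 CH2 units per bead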

  3. Mesoscale Predictability and Error Growth in Short Range Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Gingrich, Mark

    Although it was originally suggested that small-scale, unresolved errors corrupt forecasts at all scales through an inverse error cascade, some authors have proposed that those mesoscale circulations resulting from stationary forcing on the larger scale may inherit the predictability of the large-scale motions. Further, the relative contributions of large- and small-scale uncertainties in producing error growth in the mesoscales remain largely unknown. Here, 100-member ensemble forecasts are initialized from an ensemble Kalman filter (EnKF) to simulate two winter storms impacting the East Coast of the United States in 2010. Four verification metrics are considered: the local snow water equivalent, total liquid water, and 850 hPa temperatures representing mesoscale features; and the sea level pressure field representing a synoptic feature. It is found that while the predictability of the mesoscale features can be tied to the synoptic forecast, significant uncertainty existed on the synoptic scale at lead times as short as 18 hours. Therefore, mesoscale details remained uncertain in both storms due to uncertainties at the large scale. Additionally, the ensemble perturbation kinetic energy did not show an appreciable upscale propagation of error for either case. Instead, the initial condition perturbations from the cycling EnKF were maximized at large scales and immediately amplified at all scales without requiring initial upscale propagation. This suggests that relatively small errors in the synoptic-scale initialization may have more importance in limiting predictability than errors in the unresolved, small-scale initial conditions.

  4. Distributed Storage Inverter and Legacy Generator Integration Plus Renewables Solution for Microgrids

    DTIC Science & Technology

    2015-07-01

    ...cloud covered) periods. The demonstration features a large (relative to the overall system power requirements) photovoltaic solar array, whose inverter ... microgrid with less expensive power storage instead of large-scale energy storage and that the renewable energy with small-scale power storage can ...

  5. Data Crosscutting Requirements Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleese van Dam, Kerstin; Shoshani, Arie; Plata, Charity

    2013-04-01

    In April 2013, a diverse group of researchers from the U.S. Department of Energy (DOE) scientific community assembled to assess data requirements associated with DOE-sponsored scientific facilities and large-scale experiments. Participants in the review included facilities staff, program managers, and scientific experts from the offices of Basic Energy Sciences, Biological and Environmental Research, High Energy Physics, and Advanced Scientific Computing Research. As part of the meeting, review participants discussed key issues associated with three distinct aspects of the data challenge: 1) processing, 2) management, and 3) analysis. These discussions identified commonalities and differences among the needs of varied scientific communities. They also helped to articulate gaps between current approaches and future needs, as well as the research advances that will be required to close these gaps. Moreover, the review provided a rare opportunity for experts from across the Office of Science to learn about their collective expertise, challenges, and opportunities. The "Data Crosscutting Requirements Review" generated specific findings and recommendations for addressing large-scale data crosscutting requirements.

  6. Development of a Two-Stage Microalgae Dewatering Process – A Life Cycle Assessment Approach

    PubMed Central

    Soomro, Rizwan R.; Zeng, Xianhai; Lu, Yinghua; Lin, Lu; Danquah, Michael K.

    2016-01-01

    Even though microalgal biomass is leading the third generation biofuel research, significant effort is required to establish an economically viable commercial-scale microalgal biofuel production system. Whilst a significant amount of work has been reported on large-scale cultivation of microalgae using photo-bioreactors and pond systems, research focus on establishing high performance downstream dewatering operations for large-scale processing under optimal economy is limited. The enormous amount of energy and associated cost required for dewatering large-volume microalgal cultures has been the primary hindrance to the development of the needed biomass quantity for industrial-scale microalgal biofuels production. The extremely dilute nature of large-volume microalgal suspension and the small size of microalgae cells in suspension create a significant processing cost during dewatering and this has raised major concerns towards the economic success of commercial-scale microalgal biofuel production as an alternative to conventional petroleum fuels. This article reports an effective framework to assess the performance of different dewatering technologies as the basis to establish an effective two-stage dewatering system. Bioflocculation coupled with tangential flow filtration (TFF) emerged as a promising technique with a total energy input of 0.041 kWh, 0.05 kg CO2 emissions and a cost of $0.0043 for producing 1 kg of microalgae biomass. A streamlined process for operational analysis of the two-stage microalgae dewatering technique, encompassing energy input, carbon dioxide emission, and process cost, is presented. PMID:26904075

  7. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
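
    As an illustrative aside, the top-down part of an oct-tree refinement can be sketched in a few lines; the refinement criterion and cell representation below are hypothetical and ignore the parallel decomposition and the bottom-up pass described in the record.

        # Recursively split a cubic cell into eight children until a criterion is met.
        import itertools

        def refine(cell, level, max_level, needs_refinement):
            """cell = (x, y, z, size) for the cell's lower corner; returns the leaf cells."""
            if level == max_level or not needs_refinement(cell):
                return [cell]
            x, y, z, s = cell
            h = s / 2.0
            children = [(x + dx * h, y + dy * h, z + dz * h, h)
                        for dx, dy, dz in itertools.product((0, 1), repeat=3)]
            return [leaf for child in children
                    for leaf in refine(child, level + 1, max_level, needs_refinement)]

        # Refine toward a hypothetical body near the origin: split while the corner is close.
        leaves = refine((0.0, 0.0, 0.0, 1.0), 0, 4,
                        lambda c: (c[0] ** 2 + c[1] ** 2 + c[2] ** 2) ** 0.5 < c[3])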

  8. Robust scalable stabilisability conditions for large-scale heterogeneous multi-agent systems with uncertain nonlinear interactions: towards a distributed computing architecture

    NASA Astrophysics Data System (ADS)

    Manfredi, Sabato

    2016-06-01

    Large-scale dynamic systems are becoming highly pervasive, with applications ranging from systems biology and environment monitoring to sensor networks and power systems. They are characterised by high dimensionality, complexity, and uncertainty in the node dynamics/interactions, and they require increasingly computationally demanding methods for analysis and control design as the network size and node system/interaction complexity increase. It is therefore a challenging problem to find scalable computational methods for the distributed control design of large-scale networks. In this paper, we investigate the robust distributed stabilisation problem of large-scale nonlinear multi-agent systems (briefly, MASs) composed of non-identical (heterogeneous) linear dynamical systems coupled by uncertain nonlinear time-varying interconnections. By employing Lyapunov stability theory and the linear matrix inequality (LMI) technique, new conditions are given for the distributed control design of large-scale MASs that can be easily solved with MATLAB toolboxes. Stabilisability of each node dynamic is a sufficient assumption for designing a globally stabilising distributed control. The proposed approach improves on some existing LMI-based results for MASs by both overcoming their computational limits and extending the application scenario to large-scale nonlinear heterogeneous MASs. Additionally, the proposed LMI conditions are further reduced in terms of computational requirements in the case of weakly heterogeneous MASs, a common scenario in real applications where the network nodes and links are affected by parameter uncertainties. One of the main advantages of the proposed approach is that it allows moving from a centralised towards a distributed computing architecture, so that the expensive computational workload spent on solving LMIs may be shared among processors located at the networked nodes, thus increasing the scalability of the approach with the network size. Finally, a numerical example shows the applicability of the proposed method and its advantage in terms of computational complexity when compared with existing approaches.
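
    As an illustrative aside, the flavour of an LMI-based stabilisability test can be shown with a standard Lyapunov feasibility problem for a single linear node; this is not the paper's distributed MAS condition, and it is posed here in Python with cvxpy (which needs an SDP-capable solver such as SCS) rather than the MATLAB toolboxes mentioned above. The node matrix A is invented for the example.

        # Feasibility of P > 0 with A'P + PA < 0 certifies stability of dx/dt = Ax.
        import numpy as np
        import cvxpy as cp

        A = np.array([[0.0, 1.0], [-2.0, -0.5]])               # hypothetical node dynamics
        n = A.shape[0]
        P = cp.Variable((n, n), symmetric=True)
        constraints = [P >> np.eye(n),                          # P positive definite
                       A.T @ P + P @ A << -1e-6 * np.eye(n)]    # Lyapunov inequality
        problem = cp.Problem(cp.Minimize(0), constraints)
        problem.solve()
        print(problem.status)                                   # 'optimal' means the LMI is feasible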

  9. Solving large scale traveling salesman problems by chaotic neurodynamics.

    PubMed

    Hasegawa, Mikio; Ikeguch, Tohru; Aihara, Kazuyuki

    2002-03-01

    We propose a novel approach for solving large scale traveling salesman problems (TSPs) by chaotic dynamics. First, we realize the tabu search on a neural network, by utilizing the refractory effects as the tabu effects. Then, we extend it to a chaotic neural network version. We propose two types of chaotic searching methods, which are based on two different tabu searches. While the first one requires neurons of the order of n^2 for an n-city TSP, the second one requires only n neurons. Moreover, an automatic parameter tuning method of our chaotic neural network is presented for easy application to various problems. Finally, we show that our method with n neurons is applicable to large TSPs such as an 85,900-city problem and exhibits better performance than the conventional stochastic searches and the tabu searches.
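
    As an illustrative aside, the refractory (tabu-like) effect exploited above can be pictured with a single generic chaotic neuron whose internal state is damped by its own recent firing; the parameter values are purely illustrative and the city-insertion moves of the actual TSP heuristic are omitted.

        # A chaotic neuron update: self-feedback, a decaying refractory penalty and a bias.
        import math

        def sigmoid(y, eps=0.02):
            return 1.0 / (1.0 + math.exp(-y / eps))

        def step(y, k=0.9, alpha=1.0, a=0.5):
            """One update of the internal state y; the refractory term -alpha*f(y) discourages
            immediately repeating a firing, playing the role of a tabu effect."""
            return k * y - alpha * sigmoid(y) + a

        y, outputs = 0.1, []
        for _ in range(100):
            y = step(y)
            outputs.append(sigmoid(y))   # a high output would trigger a solution-changing move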

  10. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins are typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small scale features influencing larger scale prediction are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  11. Semihierarchical quantum repeaters based on moderate lifetime quantum memories

    NASA Astrophysics Data System (ADS)

    Liu, Xiao; Zhou, Zong-Quan; Hua, Yi-Lin; Li, Chuan-Feng; Guo, Guang-Can

    2017-01-01

    The construction of large-scale quantum networks relies on the development of practical quantum repeaters. Many approaches have been proposed with the goal of outperforming the direct transmission of photons, but most of them are inefficient or difficult to implement with current technology. Here, we present a protocol that uses a semihierarchical structure to improve the entanglement distribution rate while reducing the requirement of memory time to a range of tens of milliseconds. This protocol can be implemented with a fixed distance of elementary links and fixed requirements on quantum memories, which are independent of the total distance. This configuration is especially suitable for scalable applications in large-scale quantum networks.

  12. Large-Scale Wind-Tunnel Tests of an Airplane Model with an Unswept, Aspect-Ratio-10 Wing, Two Propellers, and Blowing Flaps

    NASA Technical Reports Server (NTRS)

    Griffin, Roy N., Jr.; Holzhauser, Curt A.; Weiberg, James A.

    1958-01-01

    An investigation was made to determine the lifting effectiveness and flow requirements of blowing over the trailing-edge flaps and ailerons on a large-scale model of a twin-engine, propeller-driven airplane having a high-aspect-ratio, thick, straight wing. With sufficient blowing jet momentum to prevent flow separation on the flap, the lift increment increased for flap deflections up to 80 deg (the maximum tested). This lift increment also increased with increasing propeller thrust coefficient. The blowing jet momentum coefficient required for attached flow on the flaps was not significantly affected by thrust coefficient, angle of attack, or blowing nozzle height.

  13. Requirements and principles for the implementation and construction of large-scale geographic information systems

    NASA Technical Reports Server (NTRS)

    Smith, Terence R.; Menon, Sudhakar; Star, Jeffrey L.; Estes, John E.

    1987-01-01

    This paper provides a brief survey of the history, structure and functions of 'traditional' geographic information systems (GIS), and then suggests a set of requirements that large-scale GIS should satisfy, together with a set of principles for their satisfaction. These principles, which include the systematic application of techniques from several subfields of computer science to the design and implementation of GIS and the integration of techniques from computer vision and image processing into standard GIS technology, are discussed in some detail. In particular, the paper provides a detailed discussion of questions relating to appropriate data models, data structures and computational procedures for the efficient storage, retrieval and analysis of spatially-indexed data.

  14. Autonomous management of a recursive area hierarchy for large scale wireless sensor networks using multiple parents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cree, Johnathan Vee; Delgado-Frias, Jose

    Large scale wireless sensor networks have been proposed for applications ranging from anomaly detection in an environment to vehicle tracking. Many of these applications require the networks to be distributed across a large geographic area while supporting three to five year network lifetimes. In order to support these requirements, large scale wireless sensor networks of duty-cycled devices need a method of efficient and effective autonomous configuration/maintenance. This method should gracefully handle the synchronization tasks of duty-cycled networks. Further, an effective configuration solution needs to recognize that in-network data aggregation and analysis presents significant benefits to wireless sensor networks and should configure the network in a way such that these higher-level functions benefit from the logically imposed structure. NOA, the proposed configuration and maintenance protocol, provides a multi-parent hierarchical logical structure for the network that reduces the synchronization workload. It also provides higher-level functions with significant inherent benefits such as, but not limited to: removing network divisions that are created by single-parent hierarchies, guarantees for when data will be compared in the hierarchy, and redundancies for communication as well as in-network data aggregation/analysis/storage.

  15. A global probabilistic tsunami hazard assessment from earthquake sources

    USGS Publications Warehouse

    Davies, Gareth; Griffin, Jonathan; Lovholt, Finn; Glimsdal, Sylfest; Harbitz, Carl; Thio, Hong Kie; Lorito, Stefano; Basili, Roberto; Selva, Jacopo; Geist, Eric L.; Baptista, Maria Ana

    2017-01-01

    Large tsunamis occur infrequently but have the capacity to cause enormous numbers of casualties, damage to the built environment and critical infrastructure, and economic losses. A sound understanding of tsunami hazard is required to underpin management of these risks, and while tsunami hazard assessments are typically conducted at regional or local scales, globally consistent assessments are required to support international disaster risk reduction efforts, and can serve as a reference for local and regional studies. This study presents a global-scale probabilistic tsunami hazard assessment (PTHA), extending previous global-scale assessments based largely on scenario analysis. Only earthquake sources are considered, as they represent about 80% of the recorded damaging tsunami events. Globally extensive estimates of tsunami run-up height are derived at various exceedance rates, and the associated uncertainties are quantified. Epistemic uncertainties in the exceedance rates of large earthquakes often lead to large uncertainties in tsunami run-up. Deviations between modelled tsunami run-up and event observations are quantified, and found to be larger than suggested in previous studies. Accounting for these deviations in PTHA is important, as it leads to a pronounced increase in predicted tsunami run-up for a given exceedance rate.
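
    As an illustrative aside, the core aggregation step of an earthquake-source PTHA, turning scenario rates and modelled run-ups into an exceedance-rate curve, can be sketched as follows; the scenario rates and run-up heights are invented for illustration and are not results from this assessment.

        # Sum the annual rates of all scenarios whose modelled run-up exceeds a threshold.
        import numpy as np

        runup = np.array([0.2, 0.5, 1.0, 2.5, 6.0])         # modelled run-up heights (m)
        rate = np.array([1e-1, 5e-2, 1e-2, 2e-3, 1e-4])     # annual occurrence rates

        def exceedance_rate(threshold):
            return rate[runup > threshold].sum()

        thresholds = np.array([0.1, 0.5, 1.0, 2.0, 5.0])
        curve = np.array([exceedance_rate(h) for h in thresholds])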

  16. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption are greatly increased with the increase of the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected with neighbor nodes by virtual springs. The virtual springs force the particles to move to the original positions, the node positions correspondingly, from the randomly set positions. Therefore, a blind node position can be determined by the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale size. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity are almost constant despite the increase of the network scale size. The time consumption has also been proven to remain almost constant since the calculation steps are almost unrelated with the network scale size.
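
    As an illustrative aside, one relaxation step of a spring-model localization can be sketched as below; this is a simplified 2-D toy (no masses, patches or communication), and the gain, anchor layout and distances are invented rather than taken from LASM.

        # Each virtual spring has rest length equal to the measured distance to a neighbour;
        # the blind node moves a small step along the net spring force until equilibrium.
        import numpy as np

        def spring_step(pos, anchors, measured, gain=0.1):
            force = np.zeros(2)
            for a, d in zip(anchors, measured):
                vec = pos - a
                r = np.linalg.norm(vec)
                if r > 0:
                    force += (d - r) * (vec / r)    # push out if compressed, pull in if stretched
            return pos + gain * force

        anchors = [np.array([0.0, 1.0]), np.array([1.0, 0.0]), np.array([1.0, 1.0])]
        measured = [1.0, 1.0, np.sqrt(2.0)]         # consistent with a true position at (0, 0)
        pos = np.array([0.3, 0.4])                  # arbitrary initial guess for the blind node
        for _ in range(200):
            pos = spring_step(pos, anchors, measured)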

  17. Temporal transferability of soil moisture calibration equations

    USDA-ARS?s Scientific Manuscript database

    Several large-scale field campaigns have been conducted over the last 20 years that require accurate estimates of soil moisture conditions. These measurements are manually conducted using soil moisture probes which require calibration. The calibration process involves the collection of hundreds of...

  18. A novel representation of groundwater dynamics in large-scale land surface modelling

    NASA Astrophysics Data System (ADS)

    Rahman, Mostaquimur; Rosolem, Rafael; Kollet, Stefan

    2017-04-01

    Land surface processes are connected to groundwater dynamics via shallow soil moisture. For example, groundwater affects evapotranspiration (by influencing the variability of soil moisture) and runoff generation mechanisms. However, contemporary Land Surface Models (LSM) generally consider isolated soil columns and free drainage lower boundary condition for simulating hydrology. This is mainly due to the fact that incorporating detailed groundwater dynamics in LSMs usually requires considerable computing resources, especially for large-scale applications (e.g., continental to global). Yet, these simplifications undermine the potential effect of groundwater dynamics on land surface mass and energy fluxes. In this study, we present a novel approach of representing high-resolution groundwater dynamics in LSMs that is computationally efficient for large-scale applications. This new parameterization is incorporated in the Joint UK Land Environment Simulator (JULES) and tested at the continental-scale.

  19. Investigating the dependence of SCM simulated precipitation and clouds on the spatial scale of large-scale forcing at SGP [Investigating the scale dependence of SCM simulated precipitation and cloud by using gridded forcing data at SGP

    DOE PAGES

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    2017-08-05

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM simulated precipitation and clouds. A gridded large-scale forcing dataset from the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  20. Supporting large scale applications on networks of workstations

    NASA Technical Reports Server (NTRS)

    Cooper, Robert; Birman, Kenneth P.

    1989-01-01

    Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.

  1. Scaling the Pyramid Model across Complex Systems Providing Early Care for Preschoolers: Exploring How Models for Decision Making May Enhance Implementation Science

    ERIC Educational Resources Information Center

    Johnson, LeAnne D.

    2017-01-01

    Bringing effective practices to scale across large systems requires attending to how information and belief systems come together in decisions to adopt, implement, and sustain those practices. Statewide scaling of the Pyramid Model, a framework for positive behavior intervention and support, across different types of early childhood programs…

  2. Community-based native seed production for restoration in Brazil - the role of science and policy.

    PubMed

    Schmidt, I B; de Urzedo, D I; Piña-Rodrigues, F C M; Vieira, D L M; de Rezende, G M; Sampaio, A B; Junqueira, R G P

    2018-05-20

    Large-scale restoration programmes in the tropics require large volumes of high quality, genetically diverse and locally adapted seeds from a large number of species. However, scarcity of native seeds is a critical restriction to achieve restoration targets. In this paper, we analyse three successful community-based networks that supply native seeds and seedlings for Brazilian Amazon and Cerrado restoration projects. In addition, we propose directions to promote local participation, legal, technical and commercialisation issues for up-scaling the market of native seeds for restoration with high quality and social justice. We argue that effective community-based restoration arrangements should follow some principles: (i) seed production must be based on real market demand; (ii) non-governmental and governmental organisations have a key role in supporting local organisation, legal requirements and selling processes; (iii) local ecological knowledge and labour should be valued, enabling local communities to promote large-scale seed production; (iv) applied research can help develop appropriate techniques and solve technical issues. The case studies from Brazil and principles presented here can be useful for the up-scaling restoration ecology efforts in many other parts of the world and especially in tropical countries where improving rural community income is a strategy for biodiversity conservation and restoration. © 2018 German Society for Plant Sciences and The Royal Botanical Society of the Netherlands.

  3. Combining points and lines in rectifying satellite images

    NASA Astrophysics Data System (ADS)

    Elaksher, Ahmed F.

    2017-09-01

    The rapid advance in remote sensing technologies has established the potential to gather accurate and reliable information about the Earth's surface using high resolution satellite images. Remote sensing satellite images of less than one-meter pixel size are currently used in large-scale mapping. Rigorous photogrammetric equations are usually used to describe the relationship between the image coordinates and ground coordinates. These equations require knowledge of the exterior and interior orientation parameters of the image, which might not be available. On the other hand, the parallel projection transformation could be used to represent the mathematical relationship between the image-space and object-space coordinate systems and provides the required accuracy for large-scale mapping using fewer ground control features. This article investigates the differences between point-based and line-based parallel projection transformation models in rectifying satellite images with different resolutions. The point-based parallel projection transformation model and its extended form are presented and the corresponding line-based forms are developed. Results showed that the RMS errors computed using the point- and line-based transformation models are equivalent and satisfy the requirement for large-scale mapping. The differences between the transformation parameters computed using the point- and line-based transformation models are insignificant. The results showed a high correlation between the differences in the ground elevation and the RMS.
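
    As an illustrative aside, the point-based parallel projection model is often written as a linear eight-parameter mapping from object space to image space, x = a1*X + a2*Y + a3*Z + a4 and y = a5*X + a6*Y + a7*Z + a8, which can be fitted to ground control points by linear least squares; the sketch below assumes that form, and the control-point arrays are placeholders rather than data from the article.

        # Fit the 8 parameters from ground control points, then project new points.
        import numpy as np

        def fit_parallel_projection(ground, image):
            """ground: (n, 3) object-space points; image: (n, 2) image coordinates."""
            A = np.hstack([ground, np.ones((ground.shape[0], 1))])   # rows [X Y Z 1]
            ax, *_ = np.linalg.lstsq(A, image[:, 0], rcond=None)     # a1..a4
            ay, *_ = np.linalg.lstsq(A, image[:, 1], rcond=None)     # a5..a8
            return ax, ay

        def project(params, ground):
            ax, ay = params
            A = np.hstack([ground, np.ones((ground.shape[0], 1))])
            return np.column_stack([A @ ax, A @ ay])

        gcp_ground = np.array([[0.0, 0.0, 10.0], [100.0, 0.0, 12.0],
                               [0.0, 100.0, 9.0], [100.0, 100.0, 11.0], [50.0, 50.0, 10.5]])
        gcp_image = np.array([[10.0, 20.0], [110.0, 21.0], [11.0, 121.0],
                              [111.0, 122.0], [60.5, 71.0]])
        params = fit_parallel_projection(gcp_ground, gcp_image)
        residuals = project(params, gcp_ground) - gcp_image          # RMS check on the GCPs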

  4. Large-scale thermal events in the solar nebula: evidence from Fe,Ni metal grains in primitive meteorites

    PubMed

    Meibom; Desch; Krot; Cuzzi; Petaev; Wilson; Keil

    2000-05-05

    Chemical zoning patterns in some iron, nickel metal grains from CH carbonaceous chondrites imply formation at temperatures from 1370 to 1270 kelvin by condensation from a solar nebular gas cooling at a rate of approximately 0.2 kelvin per hour. This cooling rate requires a large-scale thermal event in the nebula, in contrast to the localized, transient heating events inferred for chondrule formation. In our model, mass accretion through the protoplanetary disk caused large-scale evaporation of precursor dust near its midplane inside of a few astronomical units. Gas convectively moved from the midplane to cooler regions above it, and the metal grains condensed in these parcels of rising gas.

  5. Does lower Omega allow a resolution of the large-scale structure problem?

    NASA Technical Reports Server (NTRS)

    Silk, Joseph; Vittorio, Nicola

    1987-01-01

    The intermediate angular scale anisotropy of the cosmic microwave background, peculiar velocities, density correlations, and mass fluctuations for both neutrino and baryon-dominated universes with Omega less than one are evaluated. The large coherence length associated with a low-Omega, hot dark matter-dominated universe provides substantial density fluctuations on scales up to 100 Mpc: there is a range of acceptable models that are capable of producing large voids and superclusters of galaxies and the clustering of galaxy clusters, with Omega roughly 0.3, without violating any observational constraint. Low-Omega, cold dark matter-dominated cosmologies are also examined. All of these models may be reconciled with the inflationary requirement of a flat universe by introducing a cosmological constant 1-Omega.

  6. Implementation of a large-scale hospital information infrastructure for multi-unit health-care services.

    PubMed

    Yoo, Sun K; Kim, Dong Keun; Kim, Jung C; Park, Youn Jung; Chang, Byung Chul

    2008-01-01

    With the increase in demand for high quality medical services, the need for an innovative hospital information system has become essential. An improved system has been implemented in all hospital units of the Yonsei University Health System. Interoperability between multi-units required appropriate hardware infrastructure and software architecture. This large-scale hospital information system encompassed PACS (Picture Archiving and Communications Systems), EMR (Electronic Medical Records) and ERP (Enterprise Resource Planning). It involved two tertiary hospitals and 50 community hospitals. The monthly data production rate by the integrated hospital information system is about 1.8 TByte and the total quantity of data produced so far is about 60 TByte. Large scale information exchange and sharing will be particularly useful for telemedicine applications.

  7. Fire extinguishing agents for oxygen-enriched atmospheres

    NASA Astrophysics Data System (ADS)

    Plugge, M. A.; Wilson, C. W.; Zallen, D. M.; Walker, J. L.

    1985-12-01

    Fire-suppression agent requirements for extinguishing fires in oxygen-enriched atmospheres were determined employing small-, medium-, large-, and full-scale test apparatuses. The small- and medium-scale tests showed that a doubling of the oxygen concentration required five times more HALON for extinguishment. For fires of similar size and intensity, the effect of oxygen enrichment of the diluent volume in the HC-131A was not as great as in the smaller compartments of the B-52, which presented a higher damage scenario. The full-scale tests showed that damage to the airframe was as important a factor in extinguishment as oxygen enrichment.

  8. Optimization of a Fluorescence-Based Assay for Large-Scale Drug Screening against Babesia and Theileria Parasites

    PubMed Central

    Terkawi, Mohamed Alaa; Youssef, Mohamed Ahmed; El Said, El Said El Shirbini; Elsayed, Gehad; El-Khodery, Sabry; El-Ashker, Maged; Elsify, Ahmed; Omar, Mosaab; Salama, Akram; Yokoyama, Naoaki; Igarashi, Ikuo

    2015-01-01

    A rapid and accurate assay for evaluating antibabesial drugs on a large scale is required for the discovery of novel chemotherapeutic agents against Babesia parasites. In the current study, we evaluated the usefulness of a fluorescence-based assay for determining the efficacies of antibabesial compounds against bovine and equine hemoparasites in in vitro cultures. Three different hematocrits (HCTs; 2.5%, 5%, and 10%) were used without daily replacement of the medium. The results of a high-throughput screening assay revealed that the best HCT was 2.5% for bovine Babesia parasites and 5% for equine Babesia and Theileria parasites. The IC50 values of diminazene aceturate obtained by fluorescence and microscopy did not differ significantly. Likewise, the IC50 values of luteolin, pyronaridine tetraphosphate, nimbolide, gedunin, and enoxacin did not differ between the two methods. In conclusion, our fluorescence-based assay uses low HCT and does not require daily replacement of culture medium, making it highly suitable for in vitro large-scale drug screening against Babesia and Theileria parasites that infect cattle and horses. PMID:25915529

  9. Optimization of a Fluorescence-Based Assay for Large-Scale Drug Screening against Babesia and Theileria Parasites.

    PubMed

    Rizk, Mohamed Abdo; El-Sayed, Shimaa Abd El-Salam; Terkawi, Mohamed Alaa; Youssef, Mohamed Ahmed; El Said, El Said El Shirbini; Elsayed, Gehad; El-Khodery, Sabry; El-Ashker, Maged; Elsify, Ahmed; Omar, Mosaab; Salama, Akram; Yokoyama, Naoaki; Igarashi, Ikuo

    2015-01-01

    A rapid and accurate assay for evaluating antibabesial drugs on a large scale is required for the discovery of novel chemotherapeutic agents against Babesia parasites. In the current study, we evaluated the usefulness of a fluorescence-based assay for determining the efficacies of antibabesial compounds against bovine and equine hemoparasites in in vitro cultures. Three different hematocrits (HCTs; 2.5%, 5%, and 10%) were used without daily replacement of the medium. The results of a high-throughput screening assay revealed that the best HCT was 2.5% for bovine Babesia parasites and 5% for equine Babesia and Theileria parasites. The IC50 values of diminazene aceturate obtained by fluorescence and microscopy did not differ significantly. Likewise, the IC50 values of luteolin, pyronaridine tetraphosphate, nimbolide, gedunin, and enoxacin did not differ between the two methods. In conclusion, our fluorescence-based assay uses low HCT and does not require daily replacement of culture medium, making it highly suitable for in vitro large-scale drug screening against Babesia and Theileria parasites that infect cattle and horses.
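
    As an illustrative aside, IC50 values such as those compared in the two records above are commonly obtained by fitting a four-parameter logistic dose-response curve to the normalized signal; the sketch below uses scipy for such a fit, with invented concentrations and readings, and is not necessarily the authors' exact procedure.

        # Fit a four-parameter logistic curve and read off the IC50 parameter.
        import numpy as np
        from scipy.optimize import curve_fit

        def four_pl(conc, bottom, top, ic50, hill):
            return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

        conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])        # hypothetical drug doses (uM)
        signal = np.array([0.98, 0.95, 0.85, 0.60, 0.30, 0.12, 0.05])  # normalized growth signal

        popt, _ = curve_fit(four_pl, conc, signal, p0=[0.0, 1.0, 0.3, 1.0])
        ic50 = popt[2]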

  10. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    NASA Astrophysics Data System (ADS)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their 14C derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site- to ESM-scale requires a spatial scaling approach that either explicitly resolves the relevant processes, or more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we contend that creating believable soil carbon predictions requires a robust, transparent, and community-available benchmarking framework. I will present an ILAMB evaluation of several of the above-mentioned approaches in ACME, and attempt to motivate community adoption of this evaluation approach.

  11. Role of substrate quality on IC performance and yields

    NASA Technical Reports Server (NTRS)

    Thomas, R. N.

    1981-01-01

    The development of silicon and gallium arsenide crystal growth for the production of large diameter substrates are discussed. Large area substrates of significantly improved compositional purity, dopant distribution and structural perfection on a microscopic as well as macroscopic scale are important requirements. The exploratory use of magnetic fields to suppress convection effects in Czochralski crystal growth is addressed. The growth of large crystals in space appears impractical at present however the efforts to improve substrate quality could benefit from the experiences gained in smaller scale growth experiments conducted in the zero gravity environment of space.

  12. Using stroboscopic flow imaging to validate large-scale computational fluid dynamics simulations

    NASA Astrophysics Data System (ADS)

    Laurence, Ted A.; Ly, Sonny; Fong, Erika; Shusteff, Maxim; Randles, Amanda; Gounley, John; Draeger, Erik

    2017-02-01

    The utility and accuracy of computational modeling often require direct validation against experimental measurements. The work presented here is motivated by taking a combined experimental and computational approach to determine the ability of large-scale computational fluid dynamics (CFD) simulations to understand and predict the dynamics of circulating tumor cells in clinically relevant environments. We use stroboscopic light sheet fluorescence imaging to track the paths and measure the velocities of fluorescent microspheres throughout a human aorta model. Performed over complex physiologically realistic 3D geometries, large data sets are acquired with microscopic resolution over macroscopic distances.

  13. Large Scale Composite Manufacturing for Heavy Lift Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Stavana, Jacob; Cohen, Leslie J.; Houseal, Keth; Pelham, Larry; Lort, Richard; Zimmerman, Thomas; Sutter, James; Western, Mike; Harper, Robert; Stuart, Michael

    2012-01-01

    Risk reduction for large-scale composite manufacturing is an important goal in producing lightweight components for heavy-lift launch vehicles. NASA and an industry team successfully employed a building block approach using low-cost Automated Tape Layup (ATL) of autoclave and Out-of-Autoclave (OoA) prepregs. Several large, curved sandwich panels were fabricated at HITCO Carbon Composites. The aluminum honeycomb core sandwich panels are segments of a 1/16th arc from a 10 meter cylindrical barrel. Lessons learned highlight the manufacturing challenges involved in producing lightweight composite structures such as fairings for heavy-lift launch vehicles.

  14. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.

    2013-12-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance of over ~2 PFlops are demonstrated, opening the way for large scale modelling of LWFA scenarios.

  15. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    NASA Technical Reports Server (NTRS)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), which have an inappropriate grid-scale resolution. With the assumption that atmospheric variables can be separated into large scale, mesoscale, and turbulent scale, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as E-tilde = 0.5 <u'_i^2>, where u'_i represents the three Cartesian components of a mesoscale circulation (the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value). A prognostic equation is developed for E-tilde, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of E-tilde. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes generated by such subgrid-scale landscape discontinuities in large-scale atmospheric models.
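
    As an illustrative aside, the mean mesoscale kinetic energy defined above can be computed from gridded velocity fields by removing the large-scale (grid-cell) mean from each component and averaging the squared perturbations; the array shapes and random test data below are illustrative only.

        # E = 0.5 * <u_i'^2>, with the perturbation taken about the grid-cell mean.
        import numpy as np

        def mesoscale_ke(u, v, w):
            """u, v, w: fine-mesh velocity components inside one large-scale grid cell."""
            e = 0.0
            for comp in (u, v, w):
                pert = comp - comp.mean()
                e += 0.5 * np.mean(pert ** 2)
            return e

        rng = np.random.default_rng(0)
        E = mesoscale_ke(rng.standard_normal((64, 64)),
                         rng.standard_normal((64, 64)),
                         rng.standard_normal((64, 64)))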

  16. BECCS capability of dedicated bioenergy crops under a future land-use scenario targeting net negative carbon emissions

    NASA Astrophysics Data System (ADS)

    Kato, E.; Yamagata, Y.

    2014-12-01

    Bioenergy with Carbon Capture and Storage (BECCS) is a key component of mitigation strategies in future socio-economic scenarios that aim to keep mean global temperature rise below 2°C above pre-industrial, which would require net negative carbon emissions in the end of the 21st century. Because of the additional need for land, developing sustainable low-carbon scenarios requires careful consideration of the land-use implications of deploying large-scale BECCS. We evaluated the feasibility of the large-scale BECCS in RCP2.6, which is a scenario with net negative emissions aiming to keep the 2°C temperature target, with a top-down analysis of required yields and a bottom-up evaluation of BECCS potential using a process-based global crop model. Land-use change carbon emissions related to the land expansion were examined using a global terrestrial biogeochemical cycle model. Our analysis reveals that first-generation bioenergy crops would not meet the required BECCS of the RCP2.6 scenario even with a high fertilizer and irrigation application. Using second-generation bioenergy crops can marginally fulfill the required BECCS only if a technology of full post-process combustion CO2 capture is deployed with a high fertilizer application in the crop production. If such an assumed technological improvement does not occur in the future, more than doubling the area for bioenergy production for BECCS around 2050 assumed in RCP2.6 would be required, however, such scenarios implicitly induce large-scale land-use changes that would cancel half of the assumed CO2 sequestration by BECCS. Otherwise a conflict of land-use with food production is inevitable.

  17. BECCS capability of dedicated bioenergy crops under a future land-use scenario targeting net negative carbon emissions

    NASA Astrophysics Data System (ADS)

    Kato, Etsushi; Yamagata, Yoshiki

    2014-09-01

    Bioenergy with Carbon Capture and Storage (BECCS) is a key component of mitigation strategies in future socioeconomic scenarios that aim to keep mean global temperature rise below 2°C above preindustrial, which would require net negative carbon emissions in the end of the 21st century. Because of the additional need for land, developing sustainable low-carbon scenarios requires careful consideration of the land-use implications of deploying large scale BECCS. We evaluated the feasibility of the large-scale BECCS in RCP2.6, which is a scenario with net negative emissions aiming to keep the 2°C temperature target, with a top-down analysis of required yields and a bottom-up evaluation of BECCS potential using a process-based global crop model. Land-use change carbon emissions related to the land expansion were examined using a global terrestrial biogeochemical cycle model. Our analysis reveals that first-generation bioenergy crops would not meet the required BECCS of the RCP2.6 scenario even with a high-fertilizer and irrigation application. Using second-generation bioenergy crops can marginally fulfill the required BECCS only if a technology of full postprocess combustion CO2 capture is deployed with a high-fertilizer application in the crop production. If such an assumed technological improvement does not occur in the future, more than doubling the area for bioenergy production for BECCS around 2050 assumed in RCP2.6 would be required; however, such scenarios implicitly induce large-scale land-use changes that would cancel half of the assumed CO2 sequestration by BECCS. Otherwise, a conflict of land use with food production is inevitable.

  18. Development of the Large-Scale Forcing Data to Support MC3E Cloud Modeling Studies

    NASA Astrophysics Data System (ADS)

    Xie, S.; Zhang, Y.

    2011-12-01

    The large-scale forcing fields (e.g., vertical velocity and advective tendencies) are required to run single-column and cloud-resolving models (SCMs/CRMs), which are the two key modeling frameworks widely used to link field data to climate model developments. In this study, we use an advanced objective analysis approach to derive the required forcing data from the soundings collected by the Midlatitude Continental Convective Cloud Experiment (MC3E) in support of its cloud modeling studies. MC3E is the latest major field campaign conducted during the period 22 April 2011 to 06 June 2011 in south-central Oklahoma through a joint effort between the DOE ARM program and the NASA Global Precipitation Measurement Program. One of its primary goals is to provide a comprehensive dataset that can be used to describe the large-scale environment of convective cloud systems and evaluate model cumulus parameterizations. The objective analysis used in this study is the constrained variational analysis method. A unique feature of this approach is the use of domain-averaged surface and top-of-the-atmosphere (TOA) observations (e.g., precipitation and radiative and turbulent fluxes) as constraints to adjust atmospheric state variables from soundings by the smallest possible amount to conserve column-integrated mass, moisture, and static energy so that the final analysis data is dynamically and thermodynamically consistent. To address potential uncertainties in the surface observations, an ensemble forcing dataset will be developed. Multi-scale forcing will also be created for simulating convective systems of various scales. At the meeting, we will provide more details about the forcing development and present some preliminary analysis of the characteristics of the large-scale forcing structures for several selected convective systems observed during MC3E.

  19. Improving crop condition monitoring at field scale by using optimal Landsat and MODIS images

    USDA-ARS?s Scientific Manuscript database

    Satellite remote sensing data at coarse resolution (kilometers) have been widely used in monitoring crop condition for decades. However, crop condition monitoring at field scale requires high resolution data in both time and space. Although a large number of remote sensing instruments with different...

  20. Comparison of different estimation techniques for biomass concentration in large scale yeast fermentation.

    PubMed

    Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U

    2011-04-01

    In this study, five previously developed state estimation methods are examined and compared for estimation of biomass concentrations in a production-scale fed-batch bioprocess. These methods are: i. estimation based on a kinetic model of overflow metabolism; ii. estimation based on a metabolic black-box model; iii. estimation based on an observer; iv. estimation based on an artificial neural network; v. estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, the number of primary measurements required and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although the number of measurements required is larger than for the other methods. However, the required extra measurements are based on commonly employed instruments in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
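
    As a rough illustration of how a balance-based (black-box-style) estimator can infer biomass from routine plant measurements, the sketch below integrates an assumed constant biomass-on-oxygen yield against a measured oxygen uptake rate. It is not the estimator from the paper; the yield coefficient, initial biomass, and OUR trace are illustrative placeholders, and real black-box models close full elemental and degree-of-reduction balances.

```python
import numpy as np

def estimate_biomass(our, yxo=1.2, x0=1.0, dt=0.1):
    """Integrate an estimated biomass concentration (g/L) from measured
    oxygen uptake rates OUR (g O2 / L / h), using a constant
    biomass-on-oxygen yield yxo (g biomass / g O2). Illustrative only."""
    x = np.empty(len(our) + 1)
    x[0] = x0
    for i, r in enumerate(our):
        x[i + 1] = x[i] + yxo * r * dt        # dX/dt ~ Y_X/O2 * OUR
    return x

# synthetic OUR trace over 10 h (placeholder values, not plant data)
t = np.arange(0.0, 10.0, 0.1)
our = 0.5 * np.exp(0.15 * t)                  # rising oxygen demand
x_est = estimate_biomass(our, yxo=1.2, x0=1.0, dt=0.1)
print(f"estimated final biomass: {x_est[-1]:.1f} g/L")
```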

  1. Moving contact lines on vibrating surfaces

    NASA Astrophysics Data System (ADS)

    Solomenko, Zlatko; Spelt, Peter; Scott, Julian

    2017-11-01

    Large-scale simulations of flows with moving contact lines for realistic conditions generally require a subgrid scale model (analyses based on matched asymptotics) to account for the unresolved part of the flow, given the large range of length scales involved near contact lines. Existing models for the interface shape in the contact-line region are primarily for steady flows on homogeneous substrates, with encouraging results in 3D simulations. Introduction of complexities would require further investigation of the contact-line region, however. Here we study flows with moving contact lines on planar substrates subject to vibrations, with applications in controlling wetting/dewetting. The challenge here is to determine the change in interface shape near contact lines due to vibrations. To develop further insight, 2D direct numerical simulations (wherein the flow is resolved down to an imposed slip length) have been performed to enable comparison with asymptotic theory, which is also developed further. Perspectives will also be presented on the final objective of the work, which is to develop a subgrid scale model that can be utilized in large-scale simulations. The authors gratefully acknowledge the ANR for financial support (ANR-15-CE08-0031) and the meso-centre FLMSN for use of computational resources. This work was granted access to the HPC resources of CINES under the allocation A0012B06893 made by GENCI.

  2. Read-In Integrated Circuits for Large-Format Multi-Chip Emitter Arrays

    DTIC Science & Technology

    2015-03-31

    chip has been designed and fabricated using ONSEMI C5N process to verify our approach. Keywords: Large scale arrays; Tiling; Mosaic; Abutment ...required. X and y addressing is not a sustainable and easily expanded addressing architecture nor will it work well with abutted RIICs. Abutment Method... Abutting RIICs into an array is challenging because of the precise positioning required to achieve a uniform image. This problem is a new design

  3. An Optimization Code for Nonlinear Transient Problems of a Large Scale Multidisciplinary Mathematical Model

    NASA Astrophysics Data System (ADS)

    Takasaki, Koichi

    This paper presents a program for the multidisciplinary optimization and identification problem of the nonlinear model of large aerospace vehicle structures. The program constructs the global matrix of the dynamic system in the time direction by the p-version finite element method (pFEM), and the basic matrix for each pFEM node in the time direction is described by a sparse matrix similarly to the static finite element problem. The algorithm used by the program does not require the Hessian matrix of the objective function and so has low memory requirements. It also has a relatively low computational cost, and is suited to parallel computation. The program was integrated as a solver module of the multidisciplinary analysis system CUMuLOUS (Computational Utility for Multidisciplinary Large scale Optimization of Undense System) which is under development by the Aerospace Research and Development Directorate (ARD) of the Japan Aerospace Exploration Agency (JAXA).

  4. Applications of Parallel Process HiMAP for Large Scale Multidisciplinary Problems

    NASA Technical Reports Server (NTRS)

    Guruswamy, Guru P.; Potsdam, Mark; Rodriguez, David; Kwak, Dochay (Technical Monitor)

    2000-01-01

    HiMAP is a three level parallel middleware that can be interfaced to a large scale global design environment for code independent, multidisciplinary analysis using high fidelity equations. Aerospace technology needs are rapidly changing. Computational tools compatible with the requirements of national programs such as space transportation are needed. Conventional computation tools are inadequate for modern aerospace design needs. Advanced, modular computational tools are needed, such as those that incorporate the technology of massively parallel processors (MPP).

  5. Modal Analysis of an Aircraft Fuselage Panel Using Experimental and Finite-Element Techniques

    NASA Technical Reports Server (NTRS)

    Fleming, Gary A.; Buehrle, Ralph D.; Storaasli, Olaf L.

    1998-01-01

    The application of Electro-Optic Holography (EOH) for measuring the center bay vibration modes of an aircraft fuselage panel under forced excitation is presented. The requirement of free-free panel boundary conditions made the acquisition of quantitative EOH data challenging, since large-scale rigid body motions corrupted measurements of the high-frequency vibrations of interest. Image processing routines designed to minimize the effects of large-scale motions were applied to successfully recover quantitative EOH vibrational amplitude measurements.

  6. Scale-Up: Improving Large Enrollment Physics Courses

    NASA Astrophysics Data System (ADS)

    Beichner, Robert

    1999-11-01

    The Student-Centered Activities for Large Enrollment University Physics (SCALE-UP) project is working to establish a learning environment that will promote increased conceptual understanding, improved problem-solving performance, and greater student satisfaction, while still maintaining class sizes of approximately 100. We are also addressing the new ABET engineering accreditation requirements for inquiry-based learning along with communication and team-oriented skills development. Results of studies of our latest classroom design, plans for future classroom space, and the current iteration of instructional materials will be discussed.

  7. The X-ray emission mechanism of large scale powerful quasar jets: Fermi rules out IC/CMB for 3C 273.

    NASA Astrophysics Data System (ADS)

    Georganopoulos, Markos; Meyer, Eileen T.

    2013-12-01

    The process responsible for the Chandra-detected X-ray emission from the large-scale jets of powerful quasars is not yet clear. The two main models are inverse Compton scattering off the cosmic microwave background photons (IC/CMB) and synchrotron emission from a population of electrons separate from those producing the radio-IR emission. These two models imply radically different conditions in the large-scale jet in terms of jet speed, kinetic power, and maximum energy of the particle acceleration mechanism, with important implications for the impact of the jet on the larger-scale environment. Georganopoulos et al. (2006) proposed a diagnostic based on a fundamental difference between these two models: the production of synchrotron X-rays requires multi-TeV electrons, while the IC/CMB model requires a cutoff in the electron energy distribution below TeV energies. This has significant implications for the γ-ray emission predicted by these two models. Here we present new Fermi observations that put an upper limit on the γ-ray flux from the large-scale jet of 3C 273 that clearly violates the flux expected from the IC/CMB X-ray interpretation found by extrapolation of the UV to X-ray spectrum of knot A, thus ruling out the IC/CMB interpretation entirely for this source. Further, the Fermi upper limit constrains the Doppler beaming factor to δ < 9, assuming equipartition fields, and possibly as low as δ < 5, assuming no major deceleration of the jet from knots A through D1.

  8. Soil organic carbon across scales.

    PubMed

    O'Rourke, Sharon M; Angers, Denis A; Holden, Nicholas M; McBratney, Alex B

    2015-10-01

    Mechanistic understanding of scale effects is important for interpreting the processes that control the global carbon cycle. Greater attention should be given to scale in soil organic carbon (SOC) science so that we can devise better policy to protect/enhance existing SOC stocks and ensure sustainable use of soils. Global issues such as climate change require consideration of SOC stock changes at the global and biosphere scale, but human interaction occurs at the landscape scale, with consequences at the pedon, aggregate and particle scales. This review evaluates our understanding of SOC across all these scales in the context of the processes involved in SOC cycling at each scale and with emphasis on stabilizing SOC. Current synergy between science and policy is explored at each scale to determine how well each is represented in the management of SOC. An outline of how SOC might be integrated into a framework of soil security is examined. We conclude that SOC processes at the biosphere to biome scales are not well understood. Instead, SOC has come to be viewed as a large-scale pool subject to carbon flux. Better understanding exists for SOC processes operating at the scales of the pedon, aggregate and particle. At the landscape scale, the influence of large- and small-scale processes has the greatest interaction and is exposed to the greatest modification through agricultural management. Policy implemented at regional or national scale tends to focus at the landscape scale without due consideration of the larger scale factors controlling SOC or the impacts of policy for SOC at the smaller SOC scales. What is required is a framework that can be integrated across a continuum of scales to optimize SOC management. © 2015 John Wiley & Sons Ltd.

  9. Get It Together

    ERIC Educational Resources Information Center

    Coffey, Dave

    2006-01-01

    The scale of the mechanical and plumbing systems required to support a large, multi-building academic health sciences/research center entails a lot of ductwork. Getting mechanical systems installed and running while carrying out activities from other building disciplines requires a great deal of coordinated effort. A university and its…

  10. The large-scale effect of environment on galactic conformity

    NASA Astrophysics Data System (ADS)

    Sun, Shuangpeng; Guo, Qi; Wang, Lan; Lacey, Cedric G.; Wang, Jie; Gao, Liang; Pan, Jun

    2018-07-01

    We use a volume-limited galaxy sample from the Sloan Digital Sky Survey Data Release 7 to explore the dependence of galactic conformity on the large-scale environment, measured on ˜4 Mpc scales. We find that the star formation activity of neighbour galaxies depends more strongly on the environment than on the activity of their primary galaxies. In underdense regions most neighbour galaxies tend to be active, while in overdense regions neighbour galaxies are mostly passive, regardless of the activity of their primary galaxies. At a given stellar mass, passive primary galaxies reside in higher density regions than active primary galaxies, leading to the apparently strong conformity signal. The dependence of the activity of neighbour galaxies on environment can be explained by the corresponding dependence of the fraction of satellite galaxies. Similar results are found for galaxies in a semi-analytical model, suggesting that no new physics is required to explain the observed large-scale conformity.

  11. Toward exascale production of recombinant adeno-associated virus for gene transfer applications.

    PubMed

    Cecchini, S; Negrete, A; Kotin, R M

    2008-06-01

    To gain acceptance as a medical treatment, adeno-associated virus (AAV) vectors require a scalable and economical production method. Recent developments indicate that recombinant AAV (rAAV) production in insect cells is compatible with current good manufacturing practice production on an industrial scale. This platform can fully support development of rAAV therapeutics from tissue culture to small animal models, to large animal models, to toxicology studies, to Phase I clinical trials and beyond. Efforts to characterize, optimize and develop insect cell-based rAAV production have culminated in successful bioreactor-scale production of rAAV, with total yields potentially capable of approaching the exa- (10^18) scale. These advances in large-scale AAV production will allow us to address specific catastrophic, intractable human diseases such as Duchenne muscular dystrophy, for which large amounts of recombinant vector are essential for successful outcome.

  12. A class of hybrid finite element methods for electromagnetics: A review

    NASA Technical Reports Server (NTRS)

    Volakis, J. L.; Chatterjee, A.; Gong, J.

    1993-01-01

    Integral equation methods have generally been the workhorse for antenna and scattering computations. In the case of antennas, they continue to be the prominent computational approach, but for scattering applications the requirement for large-scale computations has turned researchers' attention to near neighbor methods such as the finite element method, which has low O(N) storage requirements and is readily adaptable in modeling complex geometrical features and material inhomogeneities. In this paper, we review three hybrid finite element methods for simulating composite scatterers, conformal microstrip antennas, and finite periodic arrays. Specifically, we discuss the finite element method and its application to electromagnetic problems when combined with the boundary integral, absorbing boundary conditions, and artificial absorbers for terminating the mesh. Particular attention is given to large-scale simulations, methods, and solvers for achieving low memory requirements and code performance on parallel computing architectures.

  13. Control of fluxes in metabolic networks

    PubMed Central

    Basler, Georg; Nikoloski, Zoran; Larhlimi, Abdelhalim; Barabási, Albert-László; Liu, Yang-Yu

    2016-01-01

    Understanding the control of large-scale metabolic networks is central to biology and medicine. However, existing approaches either require specifying a cellular objective or can only be used for small networks. We introduce new coupling types describing the relations between reaction activities, and develop an efficient computational framework, which does not require any cellular objective for systematic studies of large-scale metabolism. We identify the driver reactions facilitating control of 23 metabolic networks from all kingdoms of life. We find that unicellular organisms require a smaller degree of control than multicellular organisms. Driver reactions are under complex cellular regulation in Escherichia coli, indicating their preeminent role in facilitating cellular control. In human cancer cells, driver reactions play pivotal roles in malignancy and represent potential therapeutic targets. The developed framework helps us gain insights into regulatory principles of diseases and facilitates design of engineering strategies at the interface of gene regulation, signaling, and metabolism. PMID:27197218

  14. Using a framework to implement large-scale innovation in medical education with the intent of achieving sustainability.

    PubMed

    Hudson, Judith N; Farmer, Elizabeth A; Weston, Kathryn M; Bushnell, John A

    2015-01-16

    Particularly when undertaken on a large scale, implementing innovation in higher education poses many challenges. Sustaining the innovation requires early adoption of a coherent implementation strategy. Using an example from clinical education, this article describes a process used to implement a large-scale innovation with the intent of achieving sustainability. Desire to improve the effectiveness of undergraduate medical education has led to growing support for a longitudinal integrated clerkship (LIC) model. This involves a move away from the traditional clerkship of 'block rotations' with frequent changes in disciplines, to a focus upon clerkships with longer duration and opportunity for students to build sustained relationships with supervisors, mentors, colleagues and patients. A growing number of medical schools have adopted the LIC model for a small percentage of their students. At a time when increasing medical school numbers and class sizes are leading to competition for clinical supervisors it is however a daunting challenge to provide a longitudinal clerkship for an entire medical school class. This challenge is presented to illustrate the strategy used to implement sustainable large scale innovation. A strategy to implement and build a sustainable longitudinal integrated community-based clerkship experience for all students was derived from a framework arising from Roberto and Levesque's research in business. The framework's four core processes: chartering, learning, mobilising and realigning, provided guidance in preparing and rolling out the 'whole of class' innovation. Roberto and Levesque's framework proved useful for identifying the foundations of the implementation strategy, with special emphasis on the relationship building required to implement such an ambitious initiative. Although this was innovation in a new School it required change within the school, wider university and health community. Challenges encountered included some resistance to moving away from traditional hospital-centred education, initial student concern, resource limitations, workforce shortage and potential burnout of the innovators. Large-scale innovations in medical education may productively draw upon research from other disciplines for guidance on how to lay the foundations for successfully achieving sustainability.

  15. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets

    DOE PAGES

    Bicer, Tekin; Gursoy, Doga; Andrade, Vincent De; ...

    2017-01-28

    Here, synchrotron light source and detector technologies enable scientists to perform advanced experiments. These scientific instruments and experiments produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used data acquisition techniques at light sources is computed tomography, which can generate tens of GB/s depending on x-ray range. A large-scale tomographic dataset, such as a mouse brain, may require hours of computation time with a medium-size workstation. In this paper, we present Trace, a data-intensive computing middleware we developed for implementation and parallelization of iterative tomographic reconstruction algorithms. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called a replicated reconstruction object to maximize application performance. We also present the optimizations we have done on the replicated reconstruction objects and evaluate them using a shale and a mouse brain sinogram. Our experimental evaluations show that the applied optimizations and parallelization techniques can provide 158x speedup (using 32 compute nodes) over a single-core configuration, which decreases the reconstruction time of a sinogram (with 4501 projections and 22400 detector resolution) from 12.5 hours to less than 5 minutes per iteration.

  17. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.
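
    The ensemble-averaged dynamic procedure replaces the usual spatial averaging in the least-squares (Lilly) formula for the dynamic model coefficient with an average over the simultaneously run LES realizations, which is why no homogeneous direction is needed. A minimal sketch of that averaging step is given below; the tensor fields are random stand-ins and the function is an assumption about how the averaging would be organized, not the authors' code.

```python
import numpy as np

def ensemble_dynamic_coefficient(L, M):
    """Local dynamic-model coefficient from the Germano identity, with the
    averaging in the least-squares (Lilly) formula taken over an ensemble
    of LES realizations instead of over homogeneous spatial directions.

    L, M : arrays of shape (n_real, nx, ny, nz, 3, 3) holding the resolved
           (Leonard) tensor and the model tensor in each realization.
    Returns a coefficient field of shape (nx, ny, nz)."""
    num = np.mean(np.einsum('...ij,...ij->...', L, M), axis=0)   # <L_ij M_ij>
    den = np.mean(np.einsum('...ij,...ij->...', M, M), axis=0)   # <M_ij M_ij>
    return num / np.maximum(den, 1e-30)      # guard against division by zero

# shape check with random stand-in tensors: 16 realizations on an 8^3 grid
rng = np.random.default_rng(0)
L = rng.normal(size=(16, 8, 8, 8, 3, 3))
M = rng.normal(size=(16, 8, 8, 8, 3, 3))
print(ensemble_dynamic_coefficient(L, M).shape)                  # (8, 8, 8)
```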

  18. Improving efficiency of polystyrene concrete production with composite binders

    NASA Astrophysics Data System (ADS)

    Lesovik, R. V.; Ageeva, M. S.; Lesovik, G. A.; Sopin, D. M.; Kazlitina, O. V.; Mitrokhina, A. A.

    2018-03-01

    According to leading marketing researchers, the construction market in Russia and the CIS will continue growing at a rapid rate; this applies not only to large-scale major construction, but also to the construction of single-family houses and small-scale industrial facilities. As a result, there are increased requirements for heat insulation of building enclosures and a significant demand for efficient walling materials with high thermal performance. All these developments have led to higher requirements imposed on the equipment that produces such materials.

  19. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
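
    The efficiency argument can be made concrete with a small discrete-adjoint example: for a steady "simulation" defined by a linear system whose matrix depends on many design parameters, one forward solve plus one adjoint solve yields the derivative of a scalar output with respect to every parameter. The sketch below is a generic illustration of that property (checked against finite differences), not the NASA Langley implementation; all matrices and vectors are random placeholders.

```python
import numpy as np

def adjoint_gradient(A0, Bs, b, c):
    """Adjoint sensitivity for a steady 'simulation' A(p) u = b with
    A(p) = A0 + sum_i p_i * B_i and scalar output J = c . u, at p = 0.
    One forward solve plus ONE adjoint solve gives dJ/dp_i for every
    parameter, instead of one extra solve per parameter."""
    u = np.linalg.solve(A0, b)              # forward (primal) solve
    lam = np.linalg.solve(A0.T, c)          # single adjoint solve
    # dJ/dp_i = -lam^T (dA/dp_i) u, with dA/dp_i = B_i
    return np.array([-lam @ (Bi @ u) for Bi in Bs])

# tiny check against finite differences (random, well-conditioned system)
rng = np.random.default_rng(1)
n, npar = 5, 8
A0 = rng.normal(size=(n, n)) + n * np.eye(n)
Bs = rng.normal(size=(npar, n, n))
b, c = rng.normal(size=n), rng.normal(size=n)

g_adj = adjoint_gradient(A0, Bs, b, c)
eps = 1e-6
g_fd = np.array([
    (c @ np.linalg.solve(A0 + eps * Bi, b) - c @ np.linalg.solve(A0, b)) / eps
    for Bi in Bs])
print(np.max(np.abs(g_adj - g_fd)))          # should be tiny (~1e-6 or less)
```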

  20. Visual attention mitigates information loss in small- and large-scale neural codes

    PubMed Central

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-01-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires processing sensory signals in a manner that protects information about relevant stimuli from degradation. Such selective processing – or selective attention – is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. PMID:25769502

  1. Parameter estimation in large-scale systems biology models: a parallel and self-adaptive cooperative strategy.

    PubMed

    Penas, David R; González, Patricia; Egea, Jose A; Doallo, Ramón; Banga, Julio R

    2017-01-21

    The development of large-scale kinetic models is one of the current key issues in computational systems biology and bioinformatics. Here we consider the problem of parameter estimation in nonlinear dynamic models. Global optimization methods can be used to solve this type of problem, but the associated computational cost is very large. Moreover, many of these methods need the tuning of a number of adjustable search parameters, requiring a number of initial exploratory runs and therefore further increasing the computation times. Here we present a novel parallel method, self-adaptive cooperative enhanced scatter search (saCeSS), to accelerate the solution of this class of problems. The method is based on the scatter search optimization metaheuristic and incorporates several key new mechanisms: (i) asynchronous cooperation between parallel processes, (ii) coarse and fine-grained parallelism, and (iii) self-tuning strategies. The performance and robustness of saCeSS is illustrated by solving a set of challenging parameter estimation problems, including medium and large-scale kinetic models of the bacterium E. coli, baker's yeast S. cerevisiae, the vinegar fly D. melanogaster, Chinese Hamster Ovary cells, and a generic signal transduction network. The results consistently show that saCeSS is a robust and efficient method, allowing very significant reduction of computation times with respect to several previous state-of-the-art methods (from days to minutes, in several cases) even when only a small number of processors is used. The new parallel cooperative method presented here allows the solution of medium and large-scale parameter estimation problems in reasonable computation times and with small hardware requirements. Further, the method includes self-tuning mechanisms which facilitate its use by non-experts. We believe that this new method can play a key role in the development of large-scale and even whole-cell dynamic models.
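
    The cooperation idea, stripped of the scatter-search machinery, can be sketched as a set of concurrent searches that occasionally exchange their best solutions, so a stalled search is restarted near the current global best. The toy below runs the "islands" serially and uses a crude random local search on the Rosenbrock function; it only illustrates the cooperation pattern, not saCeSS itself, and every numerical setting is a placeholder.

```python
import numpy as np

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

def cooperative_search(f, dim=10, n_islands=4, iters=200, seed=0):
    """Serial illustration of cooperative optimization: several searches
    improve independently and periodically share their best point, which
    restarts the worst island near the best one."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-2, 2, size=(n_islands, dim))
    best = np.array([f(x) for x in pop])
    step = 0.3
    for it in range(iters):
        for i in range(n_islands):                     # independent searches
            cand = pop[i] + step * rng.normal(size=dim)
            fc = f(cand)
            if fc < best[i]:
                pop[i], best[i] = cand, fc
        if it % 20 == 19:                              # cooperation step
            b, w = np.argmin(best), np.argmax(best)
            pop[w] = pop[b] + 0.05 * rng.normal(size=dim)
            best[w] = f(pop[w])
        step *= 0.995                                  # slow step annealing
    return pop[np.argmin(best)], best.min()

x_best, f_best = cooperative_search(rosenbrock)
print(f_best)
```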

  2. Cloud-based federation and fusion of distributed data sources for supporting hurricane response : requirements, challenges, and opportunities.

    DOT National Transportation Integrated Search

    2016-12-01

    Geospatial data have been playing an increasingly important role in disaster response and recovery. For large-scale natural disasters such as Hurricanes which often have the capacity to topple a large region within a span of a few days, disaster prep...

  3. Voices Carry

    ERIC Educational Resources Information Center

    Guth, Douglas J.

    2017-01-01

    A community college's success hinges in large part on the effectiveness of its teaching faculty, no more so than in times of major organizational change. However, any large-scale foundational shift requires institutional buy-in, with the onus on leadership to create an environment where everyone is working together toward the same endpoint.…

  4. Building to Scale: An Analysis of Web-Based Services in CIC (Big Ten) Libraries.

    ERIC Educational Resources Information Center

    Dewey, Barbara I.

    Advancing library services in large universities requires creative approaches for "building to scale." This is the case for CIC, Committee on Institutional Cooperation (Big Ten), libraries whose home institutions serve thousands of students, faculty, staff, and others. Developing virtual Web-based services is an increasingly viable…

  5. Relative Costs of Various Types of Assessments.

    ERIC Educational Resources Information Center

    Wheeler, Patricia H.

    Issues of the relative costs of multiple choice tests and alternative types of assessment are explored. Before alternative assessments in large-scale or small-scale programs are used, attention must be given to cost considerations and the resources required to develop and implement the assessment. Major categories of cost to be considered are…

  6. The watershed-scale optimized and rearranged landscape design (WORLD) model and local biomass processing depots for sustainable biofuel production: Integrated life cycle assessments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eranki, Pragnya L.; Manowitz, David H.; Bals, Bryan D.

    An array of feedstocks is being evaluated as potential raw material for cellulosic biofuel production. Thorough assessments are required in regional landscape settings before these feedstocks can be cultivated and sustainable management practices can be implemented. On the processing side, a potential solution to the logistical challenges of large biorefineries is provided by a network of distributed processing facilities called local biomass processing depots. A large-scale cellulosic ethanol industry is likely to emerge soon in the United States. We have the opportunity to influence the sustainability of this emerging industry. The watershed-scale optimized and rearranged landscape design (WORLD) model estimates land allocations for different cellulosic feedstocks at biorefinery scale without displacing current animal nutrition requirements. This model also incorporates a network of the aforementioned depots. An integrated life cycle assessment is then conducted over the unified system of optimized feedstock production, processing, and associated transport operations to evaluate net energy yields (NEYs) and environmental impacts.

  7. Environmental impact assessment and environmental audit in large-scale public infrastructure construction: the case of the Qinghai-Tibet Railway.

    PubMed

    He, Guizhen; Zhang, Lei; Lu, Yonglong

    2009-09-01

    Large-scale public infrastructure projects have featured in China's modernization course since the early 1980s. During the early stages of China's rapid economic development, public attention focused on the economic and social impact of high-profile construction projects. In recent years, however, we have seen a shift in public concern toward the environmental and ecological effects of such projects, and today governments are required to provide valid environmental impact assessments prior to allowing large-scale construction. The official requirement for the monitoring of environmental conditions has led to an increased number of debates in recent years regarding the effectiveness of Environmental Impact Assessments (EIAs) and Governmental Environmental Audits (GEAs) as environmental safeguards in instances of large-scale construction. Although EIA and GEA are conducted by different institutions and have different goals and enforcement potential, these two practices can be closely related in terms of methodology. This article cites the construction of the Qinghai-Tibet Railway as an instance in which EIA and GEA offer complementary approaches to environmental impact management. This study concludes that the GEA approach can serve as an effective follow-up to the EIA and establishes that the EIA lays a base for conducting future GEAs. The relationship that emerges through a study of the Railway's construction calls for more deliberate institutional arrangements and cooperation if the two practices are to be used in concert to optimal effect.

  8. Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.

    PubMed

    Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T

    2017-01-01

    Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
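
    The replication-then-reduce pattern behind the replicated reconstruction object can be sketched as follows: each worker back-projects its own subset of projection angles into a private copy of the reconstruction grid (so no locks are needed), and the copies are summed at the end. The toy "back-projection" below is deliberately fake (a roll-and-smear stand-in) and the code is not Trace; it only shows the data-structure idea.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def backproject_chunk(sinogram, angles, shape):
    """Toy back-projection of a subset of projection angles into a
    private replica of the reconstruction grid (no locks needed)."""
    replica = np.zeros(shape)
    for a in angles:
        # stand-in for a real back-projection kernel: smear the detector
        # row across the grid with an angle-dependent shift
        replica += np.roll(np.tile(sinogram[a], (shape[0], 1)), a, axis=0)
    return replica

def reconstruct(sinogram, n_workers=4):
    """Each worker fills its own replicated reconstruction object; the
    replicas are reduced (summed) at the end."""
    n_angles, n_det = sinogram.shape
    shape = (n_det, n_det)
    angle_chunks = np.array_split(np.arange(n_angles), n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        replicas = list(pool.map(
            lambda ch: backproject_chunk(sinogram, ch, shape), angle_chunks))
    return np.sum(replicas, axis=0)                  # reduction step

sino = np.random.default_rng(2).random((180, 64))    # synthetic sinogram
image = reconstruct(sino)
print(image.shape)                                   # (64, 64)
```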

  9. Water limited agriculture in Africa: Climate change sensitivity of large scale land investments

    NASA Astrophysics Data System (ADS)

    Rulli, M. C.; D'Odorico, P.; Chiarelli, D. D.; Davis, K. F.

    2015-12-01

    The past few decades have seen unprecedented changes in the global agricultural system with a dramatic increase in the rates of food production fueled by an escalating demand for food calories, as a result of demographic growth, dietary changes, and - more recently - new bioenergy policies. Food prices have become consistently higher and increasingly volatile, with dramatic spikes in 2007-08 and 2010-11. The confluence of these factors has heightened demand for land and brought a wave of land investment to the developing world: some of the more affluent countries are trying to secure land rights in areas suitable for agriculture. According to some estimates, roughly 38 million hectares have been acquired worldwide to date by large-scale investors, 16 million of which are in Africa. More than 85% of large-scale land acquisitions in Africa are by foreign investors. Many land deals are motivated not only by the need for fertile land but also by the water resources required for crop production. Despite some recent assessments of the water appropriation associated with large-scale land investments, their impact on the water resources of the target countries under present conditions and climate change scenarios remains poorly understood. Here we investigate the irrigation water requirements of the various crops planted on the acquired land, as an indicator of the pressure land investors are likely to place on the ("blue") water resources of target regions in Africa, and evaluate their sensitivity to climate change scenarios.
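
    A first-order estimate of a planted crop's "blue" water requirement is the crop evapotranspiration (a crop coefficient times reference evapotranspiration) minus effective rainfall, floored at zero and summed over the growing season. The sketch below uses that standard FAO-style bookkeeping with made-up monthly values; it is not the paper's model, and the numbers do not describe any real site or crop.

```python
def irrigation_requirement(et0, kc, p_eff):
    """Monthly irrigation ('blue') water requirement in mm:
    crop evapotranspiration ETc = Kc * ET0, minus effective rainfall,
    floored at zero.  All inputs below are illustrative placeholders."""
    return [max(0.0, k * e - p) for e, k, p in zip(et0, kc, p_eff)]

# toy 6-month season (mm/month); not data for any real site or crop
et0   = [120, 140, 160, 170, 150, 130]   # reference evapotranspiration
kc    = [0.4, 0.7, 1.1, 1.2, 1.0, 0.6]   # crop coefficient by growth stage
p_eff = [60, 40, 20, 10, 30, 50]         # effective precipitation

ir = irrigation_requirement(et0, kc, p_eff)
print(ir, f"season total: {sum(ir):.0f} mm")
```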

  10. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase-sized portable test stands assembled for demonstration of hybrids, showing the safety of hybrid rockets to audiences. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations; however, the questions that are always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has it been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from small motors to application size has been documented in several places. Comparison of small-scale hybrid data to that of larger-scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this occurs would make a great paper, study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary-layer driven, larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.

  11. Modified dispersion relations, inflation, and scale invariance

    NASA Astrophysics Data System (ADS)

    Bianco, Stefano; Friedhoff, Victor Nicolai; Wilson-Ewing, Edward

    2018-02-01

    For a certain type of modified dispersion relations, the vacuum quantum state for very short wavelength cosmological perturbations is scale-invariant, and it has been suggested that this may be the source of the scale-invariance observed in the temperature anisotropies in the cosmic microwave background. We point out that for this scenario to be possible, it is necessary to redshift these short wavelength modes to cosmological scales in such a way that the scale-invariance is not lost. This requires nontrivial background dynamics before the onset of standard radiation-dominated cosmology; we demonstrate that one possible solution is inflation with a sufficiently large Hubble rate, and for this, slow roll is not necessary. In addition, we also show that if the slow-roll condition is added to inflation with a large Hubble rate, then for any power-law modified dispersion relation, quantum vacuum fluctuations become nearly scale-invariant when they exit the Hubble radius.

  12. Large-scale oscillatory calcium waves in the immature cortex.

    PubMed

    Garaschuk, O; Linn, J; Eilers, J; Konnerth, A

    2000-05-01

    Two-photon imaging of large neuronal networks in cortical slices of newborn rats revealed synchronized oscillations in intracellular Ca2+ concentration. These spontaneous Ca2+ waves usually started in the posterior cortex and propagated slowly (2.1 mm per second) toward its anterior end. Ca2+ waves were associated with field-potential changes and required activation of AMPA and NMDA receptors. Although GABAA receptors were not involved in wave initiation, the developmental transition of GABAergic transmission from depolarizing to hyperpolarizing (around postnatal day 7) stopped the oscillatory activity. Thus we identified a type of large-scale Ca2+ wave that may regulate long-distance wiring in the immature cortex.

  13. Friction Stir Welding of Large Scale Cryogenic Tanks for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Russell, Carolyn; Ding, R. Jeffrey

    1998-01-01

    The Marshall Space Flight Center (MSFC) has established a facility for the joining of large-scale aluminum cryogenic propellant tanks using the friction stir welding process. Longitudinal welds, approximately five meters in length, have been made by retrofitting an existing vertical fusion weld system, designed to fabricate tank barrel sections ranging from two to ten meters in diameter. The structural design requirements of the tooling, clamping and travel system will be described in this presentation along with process controls and real-time data acquisition developed for this application. The approach to retrofitting other large welding tools at MSFC with the friction stir welding process will also be discussed.

  14. FDTD method for laser absorption in metals for large scale problems.

    PubMed

    Deng, Chun; Ki, Hyungson

    2013-10-21

    The FDTD method has been successfully used for many electromagnetic problems, but its application to laser material processing has been limited because even a several-millimeter domain requires a prohibitively large number of grid cells. In this article, we present a novel FDTD method for simulating large-scale laser beam absorption problems, especially for metals, by enlarging the laser wavelength while maintaining the material's reflection characteristics. For validation purposes, the proposed method has been tested with in-house FDTD codes to simulate p-, s-, and circularly polarized 1.06 μm irradiation on Fe and Sn targets, and the simulation results are in good agreement with theoretical predictions.

  15. Improvement of CFD Methods for Modeling Full Scale Circulating Fluidized Bed Combustion Systems

    NASA Astrophysics Data System (ADS)

    Shah, Srujal; Klajny, Marcin; Myöhänen, Kari; Hyppänen, Timo

    With the currently available methods of computational fluid dynamics (CFD), the task of simulating full-scale circulating fluidized bed combustors is very challenging. In order to simulate the complex fluidization process, the calculation cells should be small and the calculation should be transient with a small time step size. For full-scale systems, these requirements lead to very large meshes and very long calculation times, making such simulations difficult in practice. This study investigates the cell size and time step size required for accurate simulations, and the filtering effects caused by coarser meshes and longer time steps. A modeling study of a full-scale CFB furnace is presented and the model results are compared with experimental data.

  16. Satellite orbit and data sampling requirements

    NASA Technical Reports Server (NTRS)

    Rossow, William

    1993-01-01

    Climate forcings and feedbacks vary over a wide range of time and space scales. The operation of non-linear feedbacks can couple variations at widely separated time and space scales and cause climatological phenomena to be intermittent. Consequently, monitoring of global, decadal changes in climate requires global observations that cover the whole range of space-time scales and are continuous over several decades. The sampling of smaller space-time scales must have sufficient statistical accuracy to measure the small changes in the forcings and feedbacks anticipated in the next few decades, while continuity of measurements is crucial for unambiguous interpretation of climate change. Shorter records of monthly and regional (500-1000 km) measurements with similar accuracies can also provide valuable information about climate processes, when 'natural experiments' such as large volcanic eruptions or El Ninos occur. In this section existing satellite datasets and climate model simulations are used to test the satellite orbits and sampling required to achieve accurate measurements of changes in forcings and feedbacks at monthly frequency and 1000 km (regional) scale.

  17. Errors introduced by dose scaling for relative dosimetry

    PubMed Central

    Watanabe, Yoichi; Hayashi, Naoki

    2012-01-01

    Some dosimeters require a relationship between detector signal and delivered dose. The relationship (characteristic curve or calibration equation) usually depends on the environment under which the dosimeters are manufactured or stored. To compensate for the difference in radiation response among different batches of dosimeters, the measured dose can be scaled by normalizing the measured dose to a specific dose. Such a procedure, often called “relative dosimetry”, allows us to skip the time‐consuming production of a calibration curve for each irradiation. In this study, the magnitudes of errors due to the dose scaling procedure were evaluated by using the characteristic curves of BANG3 polymer gel dosimeter, radiographic EDR2 films, and GAFCHROMIC EBT2 films. Several sets of calibration data were obtained for each type of dosimeters, and a calibration equation of one set of data was used to estimate doses of the other dosimeters from different batches. The scaled doses were then compared with expected doses, which were obtained by using the true calibration equation specific to each batch. In general, the magnitude of errors increased with increasing deviation of the dose scaling factor from unity. Also, the errors strongly depended on the difference in the shape of the true and reference calibration curves. For example, for the BANG3 polymer gel, of which the characteristic curve can be approximated with a linear equation, the error for a batch requiring a dose scaling factor of 0.87 was larger than the errors for other batches requiring smaller magnitudes of dose scaling, or scaling factors of 0.93 or 1.02. The characteristic curves of EDR2 and EBT2 films required nonlinear equations. With those dosimeters, errors larger than 5% were commonly observed in the dose ranges of below 50% and above 150% of the normalization dose. In conclusion, the dose scaling for relative dosimetry introduces large errors in the measured doses when a large dose scaling is applied, and this procedure should be applied with special care. PACS numbers: 87.56.Da, 06.20.Dk, 06.20.fb PMID:22955658
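
    The effect described above can be reproduced with a toy calculation: read the detector signal through a reference batch's calibration curve, rescale so the reading at the normalization dose is exact, and compare against the dose actually delivered through a true curve of a different shape. The curves and doses below are invented for illustration, not BANG3/EDR2/EBT2 data; the point is only that the error vanishes at the normalization dose and grows away from it when the curve shapes differ.

```python
import numpy as np

# Illustrative characteristic curves (dose in Gy -> detector signal).
# The reference batch is the one with an existing calibration curve;
# the "true" batch is the one actually irradiated.  Shapes are made up.
def ref_signal(d):   return 1.0 - np.exp(-d / 120.0)
def ref_inverse(s):  return -120.0 * np.log(1.0 - s)   # inverse of ref_signal
def true_signal(d):  return d / (d + 110.0)

def relative_dose(signal, norm_dose=100.0):
    """Relative dosimetry: read dose through the reference calibration,
    then apply one scaling factor chosen so the reading at norm_dose is
    exact.  Errors appear wherever the two curves differ in shape."""
    scale = norm_dose / ref_inverse(true_signal(norm_dose))
    return scale * ref_inverse(signal)

doses = np.array([25.0, 50.0, 100.0, 150.0, 200.0])
measured = relative_dose(true_signal(doses))
for d, m in zip(doses, measured):
    print(f"delivered {d:6.1f} Gy -> reported {m:6.1f} Gy "
          f"({100.0 * (m - d) / d:+.1f}%)")
```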

  18. Single-trabecula building block for large-scale finite element models of cancellous bone.

    PubMed

    Dagan, D; Be'ery, M; Gefen, A

    2004-07-01

    Recent development of high-resolution imaging of cancellous bone allows finite element (FE) analysis of bone tissue stresses and strains in individual trabeculae. However, specimen-specific stress/strain analyses can include effects of anatomical variations and local damage that can bias the interpretation of the results from individual specimens with respect to large populations. This study developed a standard (generic) 'building-block' of a trabecula for large-scale FE models. Being parametric and based on statistics of dimensions of ovine trabeculae, this building block can be scaled for trabecular thickness and length and be used in commercial or custom-made FE codes to construct generic, large-scale FE models of bone, using less computer power than that currently required to reproduce the accurate micro-architecture of trabecular bone. Orthogonal lattices constructed with this building block, after it was scaled to trabeculae of the human proximal femur, provided apparent elastic moduli of approximately 150 MPa, in good agreement with experimental data for the stiffness of cancellous bone from this site. Likewise, lattices with thinner, osteoporotic-like trabeculae could predict a reduction of approximately 30% in the apparent elastic modulus, as reported in experimental studies of osteoporotic femora. Based on these comparisons, it is concluded that the single-trabecula element developed in the present study is well-suited for representing cancellous bone in large-scale generic FE simulations.

  19. Mechanisation of large-scale agricultural fields in developing countries - a review.

    PubMed

    Onwude, Daniel I; Abdulstter, Rafia; Gomes, Chandima; Hashim, Norhashila

    2016-09-01

    Mechanisation of large-scale agricultural fields often requires the application of modern technologies such as mechanical power, automation, control and robotics. These technologies are generally associated with relatively well developed economies. The application of these technologies in some developing countries in Africa and Asia is limited by factors such as technology compatibility with the environment, availability of resources to facilitate the technology adoption, cost of technology purchase, government policies, adequacy of technology and appropriateness in addressing the needs of the population. As a result, many of the available resources have been used inadequately by farmers, who continue to rely mostly on conventional means of agricultural production, using traditional tools and equipment in most cases. This has led to low productivity and high cost of production, among others. Therefore this paper attempts to evaluate the application of present-day technology and its limitations to the advancement of large-scale mechanisation in developing countries of Africa and Asia. Particular emphasis is given to a general understanding of the various levels of mechanisation, present-day technology, its management and application to large-scale agricultural fields. This review also emphasises a future outlook that will enable a gradual, evolutionary and sustainable technological change. The study concludes that large-scale agricultural farm mechanisation for sustainable food production in Africa and Asia must be anchored on a coherent strategy based on the actual needs and priorities of the large-scale farmers. © 2016 Society of Chemical Industry.

  20. Potential climatic impacts and reliability of large-scale offshore wind farms

    NASA Astrophysics Data System (ADS)

    Wang, Chien; Prinn, Ronald G.

    2011-04-01

    The vast availability of wind power has fueled substantial interest in this renewable energy source as a potential near-zero greenhouse gas emission technology for meeting future world energy needs while addressing the climate change issue. However, in order to provide even a fraction of the estimated future energy needs, a large-scale deployment of wind turbines (several million) is required. The consequent environmental impacts, and the inherent reliability of such a large-scale usage of intermittent wind power would have to be carefully assessed, in addition to the need to lower the high current unit wind power costs. Our previous study (Wang and Prinn 2010 Atmos. Chem. Phys. 10 2053) using a three-dimensional climate model suggested that a large deployment of wind turbines over land to meet about 10% of predicted world energy needs in 2100 could lead to a significant temperature increase in the lower atmosphere over the installed regions. A global-scale perturbation to the general circulation patterns as well as to the cloud and precipitation distribution was also predicted. In the later study reported here, we conducted a set of six additional model simulations using an improved climate model to further address the potential environmental and intermittency issues of large-scale deployment of offshore wind turbines for differing installation areas and spatial densities. In contrast to the previous land installation results, the offshore wind turbine installations are found to cause a surface cooling over the installed offshore regions. This cooling is due principally to the enhanced latent heat flux from the sea surface to lower atmosphere, driven by an increase in turbulent mixing caused by the wind turbines which was not entirely offset by the concurrent reduction of mean wind kinetic energy. We found that the perturbation of the large-scale deployment of offshore wind turbines to the global climate is relatively small compared to the case of land-based installations. However, the intermittency caused by the significant seasonal wind variations over several major offshore sites is substantial, and demands further options to ensure the reliability of large-scale offshore wind power. The method that we used to simulate the offshore wind turbine effect on the lower atmosphere involved simply increasing the ocean surface drag coefficient. While this method is consistent with several detailed fine-scale simulations of wind turbines, it still needs further study to ensure its validity. New field observations of actual wind turbine arrays are definitely required to provide ultimate validation of the model predictions presented here.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng

    Large-scale forcing data, such as vertical velocity and advective tendencies, are required to drive single-column models (SCMs), cloud-resolving models, and large-eddy simulations. Previous studies suggest that some errors of these model simulations could be attributed to the lack of spatial variability in the specified domain-mean large-scale forcing. This study investigates the spatial variability of the forcing and explores its impact on SCM-simulated precipitation and clouds. A gridded large-scale forcing data set during the March 2000 Cloud Intensive Operational Period at the Atmospheric Radiation Measurement program's Southern Great Plains site is used for analysis and to drive the single-column version of the Community Atmospheric Model Version 5 (SCAM5). When the gridded forcing data show large spatial variability, such as during a frontal passage, SCAM5 with the domain-mean forcing is not able to capture the convective systems that are partly located in the domain or that only occupy part of the domain. This problem has been largely reduced by using the gridded forcing data, which allows running SCAM5 in each subcolumn and then averaging the results within the domain. This is because the subcolumns have a better chance to capture the timing of the frontal propagation and the small-scale systems. As a result, other potential uses of the gridded forcing data, such as understanding and testing scale-aware parameterizations, are also discussed.

  2. Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities (Book)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    2013-03-01

    To accomplish Federal goals for renewable energy, sustainability, and energy security, large-scale renewable energy projects must be developed and constructed on Federal sites at a significant scale with significant private investment. The U.S. Department of Energy's Federal Energy Management Program (FEMP) helps Federal agencies meet these goals and assists agency personnel in navigating the complexities of developing such projects and attracting the necessary private capital to complete them. This guide is intended to provide a general resource that will begin to develop the Federal employee's awareness and understanding of the project developer's operating environment and the private sector's awareness and understanding of the Federal environment. Because the vast majority of the investment that is required to meet the goals for large-scale renewable energy projects will come from the private sector, this guide has been organized to match Federal processes with typical phases of commercial project development. The main purpose of this guide is to provide a project development framework to allow the Federal Government, private developers, and investors to work in a coordinated fashion on large-scale renewable energy projects. The framework includes key elements that describe a successful, financially attractive large-scale renewable energy project.

  3. Fast Generation of Ensembles of Cosmological N-Body Simulations via Mode-Resampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, M D; Cole, S; Frenk, C S

    2011-02-14

    We present an algorithm for quickly generating multiple realizations of N-body simulations to be used, for example, for cosmological parameter estimation from surveys of large-scale structure. Our algorithm uses a new method to resample the large-scale (Gaussian-distributed) Fourier modes in a periodic N-body simulation box in a manner that properly accounts for the nonlinear mode-coupling between large and small scales. We find that our method for adding new large-scale mode realizations recovers the nonlinear power spectrum to sub-percent accuracy on scales larger than about half the Nyquist frequency of the simulation box. Using 20 N-body simulations, we obtain a power spectrum covariance matrix estimate that matches the estimator from Takahashi et al. (from 5000 simulations) with < 20% errors in all matrix elements. Comparing the rates of convergence, we determine that our algorithm requires approximately 8 times fewer simulations to achieve a given error tolerance in estimates of the power spectrum covariance matrix. The degree of success of our algorithm indicates that we understand the main physical processes that give rise to the correlations in the matter power spectrum. Namely, the large-scale Fourier modes modulate both the degree of structure growth through the variation in the effective local matter density and also the spatial frequency of small-scale perturbations through large-scale displacements. We expect our algorithm to be useful for noise modeling when constraining cosmological parameters from weak lensing (cosmic shear) and galaxy surveys, rescaling summary statistics of N-body simulations for new cosmological parameter values, and any applications where the influence of Fourier modes larger than the simulation size must be accounted for.
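
    A minimal sketch of the core idea, under simplifying assumptions: the Fourier modes of a periodic density field below a cutoff wavenumber are given fresh random phases (drawn from a real white-noise field so the output stays real) while their measured amplitudes are kept. The paper's actual algorithm additionally redraws the Gaussian amplitudes and corrects for the nonlinear coupling to small scales; the array size, box size, and k_cut below are illustrative.

```python
import numpy as np

# Minimal sketch of large-scale mode resampling in a periodic box: give the
# Fourier modes with |k| < k_cut new random phases while keeping their measured
# amplitudes. A simplified stand-in for the paper's full Gaussian resampling
# with mode-coupling corrections.

def resample_large_scale_modes(delta, box_size, k_cut, rng):
    n = delta.shape[0]
    dk = np.fft.fftn(delta)
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    mask = (kmag < k_cut) & (kmag > 0)

    wk = np.fft.fftn(rng.normal(size=delta.shape))   # Hermitian-symmetric noise
    dk[mask] = wk[mask] / np.abs(wk[mask]) * np.abs(dk[mask])
    return np.fft.ifftn(dk).real

rng = np.random.default_rng(0)
delta = rng.normal(size=(64, 64, 64))                # stand-in density field
resampled = resample_large_scale_modes(delta, box_size=1000.0, k_cut=0.05, rng=rng)
```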

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sunayama, Tomomi; Padmanabhan, Nikhil; Heitmann, Katrin

    Precision measurements of the large scale structure of the Universe require large numbers of high fidelity mock catalogs to accurately assess, and account for, the presence of systematic effects. We introduce and test a scheme for generating mock catalogs rapidly using suitably derated N-body simulations. Our aim is to reproduce the large scale structure and the gross properties of dark matter halos with high accuracy, while sacrificing the details of the halo's internal structure. By adjusting global and local time-steps in an N-body code, we demonstrate that we recover halo masses to better than 0.5% and the power spectrum to better than 1% both in real and redshift space for k = 1 h Mpc^{-1}, while requiring a factor of 4 less CPU time. We also calibrate the redshift spacing of outputs required to generate simulated light cones. We find that outputs separated by Δz = 0.05 allow us to interpolate particle positions and velocities to reproduce the real and redshift space power spectra to better than 1% (out to k = 1 h Mpc^{-1}). We apply these ideas to generate a suite of simulations spanning a range of cosmologies, motivated by the Baryon Oscillation Spectroscopic Survey (BOSS) but broadly applicable to future large scale structure surveys including eBOSS and DESI. As an initial demonstration of the utility of such simulations, we calibrate the shift in the baryonic acoustic oscillation peak position as a function of galaxy bias with higher precision than has been possible so far. This paper also serves to document the simulations, which we make publicly available.
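
    A minimal sketch of interpolating particle positions between two adjacent outputs when building a light cone, assuming a cubic-Hermite interpolant and a periodic box; this is an illustrative stand-in rather than the authors' exact interpolation scheme, and the units and values in the example call are made up.

```python
import numpy as np

# Minimal sketch of interpolating particle positions between two simulation
# outputs (e.g. separated by dz = 0.05) for light-cone construction, using a
# cubic Hermite interpolant and unwrapping the periodic box.

def interpolate_positions(x0, v0, x1, v1, frac, dt, box_size):
    """Positions at fraction `frac` of the interval of length `dt`;
    x in comoving length units, v = dx/dt in matching units."""
    t = frac
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    dx = (x1 - x0 + 0.5*box_size) % box_size - 0.5*box_size   # unwrap the box
    x = h00*x0 + h10*dt*v0 + h01*(x0 + dx) + h11*dt*v1
    return x % box_size

x0 = np.array([[999.5, 10.0, 20.0]]); x1 = np.array([[0.5, 10.2, 20.1]])   # wraps the box
v0 = np.array([[0.02, 0.004, 0.002]]); v1 = np.array([[0.02, 0.004, 0.002]])
print(interpolate_positions(x0, v0, x1, v1, frac=0.5, dt=50.0, box_size=1000.0))
```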

  5. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; Werth, Charles J.; Valocchi, Albert J.

    2016-07-01

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydrogeophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with "big data" processing and numerous large-scale numerical simulations. To tackle such difficulties, the principal component geostatistical approach (PCGA) has been proposed as a "Jacobian-free" inversion method that requires far fewer forward simulation runs per iteration than the number of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multiphysics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation has little information on the K distribution, the data were compressed by the zeroth temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Only about 2000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method.
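
    A minimal sketch of the data-compression step described above: a breakthrough curve C(t) is reduced to its temporal moments, with the mean travel time obtained as the ratio of the first to the zeroth moment. The synthetic curve and time units are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of compressing a tracer breakthrough curve into temporal
# moments: m0 = integral of C(t) dt, and the mean arrival (travel) time m1/m0.
# Times and concentrations below are synthetic, not the paper's MRI data.

def temporal_moments(t, c):
    m0 = np.trapz(c, t)                 # zeroth temporal moment
    m1 = np.trapz(c * t, t)             # first temporal moment
    return m0, m1 / m0                  # (m0, mean travel time)

t = np.linspace(0.0, 100.0, 501)                   # minutes
c = np.exp(-0.5 * ((t - 40.0) / 8.0) ** 2)         # synthetic breakthrough curve
m0, t_mean = temporal_moments(t, c)
print(m0, t_mean)   # mean travel time ~40 for this symmetric pulse
```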

  6. Large-Scale Distributed Computational Fluid Dynamics on the Information Power Grid Using Globus

    NASA Technical Reports Server (NTRS)

    Barnard, Stephen; Biswas, Rupak; Saini, Subhash; VanderWijngaart, Robertus; Yarrow, Maurice; Zechtzer, Lou; Foster, Ian; Larsson, Olle

    1999-01-01

    This paper describes an experiment in which a large-scale scientific application developed for tightly-coupled parallel machines is adapted to the distributed execution environment of the Information Power Grid (IPG). A brief overview of the IPG and a description of the computational fluid dynamics (CFD) algorithm are given. The Globus metacomputing toolkit is used as the enabling device for the geographically-distributed computation. Modifications related to latency hiding and load balancing were required for an efficient implementation of the CFD application in the IPG environment. Performance results on a pair of SGI Origin 2000 machines indicate that real scientific applications can be effectively implemented on the IPG; however, a significant amount of continued effort is required to make such an environment useful and accessible to scientists and engineers.

  7. A Framework for Spatial Interaction Analysis Based on Large-Scale Mobile Phone Data

    PubMed Central

    Li, Weifeng; Cheng, Xiaoyun; Guo, Gaohua

    2014-01-01

    An overall understanding of spatial interaction and exact knowledge of its dynamic evolution are required in urban planning and transportation planning. This study aimed to analyze spatial interaction based on large-scale mobile phone data. This newly available mass dataset requires a methodology compatible with its particular characteristics. A three-stage framework was proposed in this paper, including data preprocessing, critical activity identification, and spatial interaction measurement. The proposed framework introduced frequent pattern mining and measured spatial interaction by the obtained associations. A case study of three communities in Shanghai was carried out as a verification of the proposed method and a demonstration of its practical application. The spatial interaction patterns and the representative features demonstrate the rationality of the proposed framework. PMID:25435865
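
    A minimal sketch of the frequent-pattern idea, under the assumption that each record is the set of zones a user visited in a day: the support of a zone pair across user-days serves as a simple measure of spatial interaction. The zone labels, toy data, and support threshold are all illustrative.

```python
from collections import Counter
from itertools import combinations

# Minimal sketch of measuring spatial interaction by frequent-pattern mining
# over users' daily sets of visited zones (derived, e.g., from cell-tower
# records). The interaction strength of a zone pair is taken as its support.

user_days = [
    {"A", "B", "C"},
    {"A", "B"},
    {"B", "C"},
    {"A", "C", "D"},
    {"A", "B", "D"},
]

pair_support = Counter()
for zones in user_days:
    for pair in combinations(sorted(zones), 2):
        pair_support[pair] += 1

min_support = 2
interaction = {p: s / len(user_days) for p, s in pair_support.items() if s >= min_support}
print(interaction)   # e.g. ('A', 'B') appears in 3 of 5 user-days
```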

  8. EvArnoldi: A New Algorithm for Large-Scale Eigenvalue Problems.

    PubMed

    Tal-Ezer, Hillel

    2016-05-19

    Eigenvalues and eigenvectors are an essential theme in numerical linear algebra. Their study is mainly motivated by their high importance in a wide range of applications. Knowledge of eigenvalues is essential in quantum molecular science. Solutions of the Schrödinger equation for the electrons composing the molecule are the basis of electronic structure theory. Electronic eigenvalues compose the potential energy surfaces for nuclear motion. The eigenvectors allow calculation of dipole transition matrix elements, the core of spectroscopy. The vibrational dynamics of the molecule also require knowledge of the eigenvalues of the vibrational Hamiltonian. Typically in these problems, the dimension of the Hilbert space is huge, yet only a small subset of eigenvalues is required in practice. In this paper, we present a highly efficient algorithm, named EvArnoldi, for solving the large-scale eigenvalue problem. The algorithm, in its basic formulation, is mathematically equivalent to ARPACK (Sorensen, D. C. Implicitly Restarted Arnoldi/Lanczos Methods for Large Scale Eigenvalue Calculations; Springer, 1997; Lehoucq, R. B.; Sorensen, D. C. SIAM Journal on Matrix Analysis and Applications 1996, 17, 789; Calvetti, D.; Reichel, L.; Sorensen, D. C. Electronic Transactions on Numerical Analysis 1994, 2, 21) (or Eigs of Matlab) but significantly simpler.

  9. Measures of Agreement Between Many Raters for Ordinal Classifications

    PubMed Central

    Nelson, Kerrie P.; Edwards, Don

    2015-01-01

    Screening and diagnostic procedures often require a physician's subjective interpretation of a patient's test result using an ordered categorical scale to define the patient's disease severity. Due to the wide variability observed between physicians' ratings, many large-scale studies have been conducted to quantify agreement between multiple experts' ordinal classifications in common diagnostic procedures such as mammography. However, very few statistical approaches are available to assess agreement in these large-scale settings. Existing summary measures of agreement rely on extensions of Cohen's kappa [1-5]. These are prone to prevalence and marginal distribution issues, become increasingly complex for more than three experts, or are not easily implemented. Here we propose a model-based approach to assess agreement in large-scale studies based upon a framework of ordinal generalized linear mixed models. A summary measure of agreement is proposed for multiple experts assessing the same sample of patients' test results according to an ordered categorical scale. This measure avoids some of the key flaws associated with Cohen's kappa and its extensions. Simulation studies are conducted to demonstrate the validity of the approach, with comparison to commonly used agreement measures. The proposed methods are easily implemented using the software package R and are applied to two large-scale cancer agreement studies. PMID:26095449

  10. Application and research of block caving in Pulang copper mine

    NASA Astrophysics Data System (ADS)

    Ge, Qifa; Fan, Wenlu; Zhu, Weigen; Chen, Xiaowei

    2018-01-01

    The application of block caving in mines shows significant advantages in scale, cost, and efficiency, so block caving is worth promoting in mines that meet the requirements for natural caving. Because of the large production scale and low ore grade of the Pulang copper mine in China, comprehensive analysis and research were conducted on rock mechanics, mining sequence, undercutting, and the stability of the bottom structure, with the aims of raising mine benefit and maximizing the recovery of mineral resources. The study concludes that block caving is well suited to the Pulang copper mine.

  11. Academic-industrial partnerships in drug discovery in the age of genomics.

    PubMed

    Harris, Tim; Papadopoulos, Stelios; Goldstein, David B

    2015-06-01

    Many US FDA-approved drugs have been developed through productive interactions between the biotechnology industry and academia. Technological breakthroughs in genomics, in particular large-scale sequencing of human genomes, are creating new opportunities to understand the biology of disease and to identify high-value targets relevant to a broad range of disorders. However, the scale of the work required to appropriately analyze large genomic and clinical data sets is challenging industry to develop a broader view of what areas of work constitute precompetitive research. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Inventory and analysis of natural vegetation and related resources from space and high altitude photography

    NASA Technical Reports Server (NTRS)

    Poulton, C. E.

    1972-01-01

    A multiple sampling technique was developed whereby spacecraft photographs supported by aircraft photographs could be used to quantify plant communities. Large-scale (1:600 to 1:2,400) color infrared aerial photographs were required to identify shrub and herbaceous species. These photos were used to successfully estimate herbaceous standing-crop biomass. Microdensitometry was used to discriminate among specific plant communities and individual plant species. Large-scale infrared photography was also used to estimate mule deer deaths and the population density of northern pocket gophers.

  13. Large increase in fracture resistance of stishovite with crack extension less than one micrometer

    PubMed Central

    Yoshida, Kimiko; Wakai, Fumihiro; Nishiyama, Norimasa; Sekine, Risako; Shinoda, Yutaka; Akatsu, Takashi; Nagoshi, Takashi; Sone, Masato

    2015-01-01

    The development of strong, tough, and damage-tolerant ceramics requires nano/microstructure design to utilize toughening mechanisms operating at different length scales. The toughening mechanisms known so far are effective at the micro-scale and therefore require crack extensions of more than a few micrometers to increase the fracture resistance. Here, we developed a micro-mechanical test method using micro-cantilever beam specimens to determine the very early part of the resistance curve of nanocrystalline SiO2 stishovite, which exhibited fracture-induced amorphization. We revealed that this novel toughening mechanism is effective even at the nanometer length scale, owing to the narrow transformation zone width of a few tens of nanometers and the large dilatational strain (from 60 to 95%) associated with the crystal-to-amorphous transition. This testing method will be a powerful tool in the search for toughening mechanisms that may operate at the nanoscale, for attaining both reliability and strength in structural materials. PMID:26051871

  14. Approaches for advancing scientific understanding of macrosystems

    USGS Publications Warehouse

    Levy, Ofir; Ball, Becky A.; Bond-Lamberty, Ben; Cheruvelil, Kendra S.; Finley, Andrew O.; Lottig, Noah R.; Surangi W. Punyasena,; Xiao, Jingfeng; Zhou, Jizhong; Buckley, Lauren B.; Filstrup, Christopher T.; Keitt, Tim H.; Kellner, James R.; Knapp, Alan K.; Richardson, Andrew D.; Tcheng, David; Toomey, Michael; Vargas, Rodrigo; Voordeckers, James W.; Wagner, Tyler; Williams, John W.

    2014-01-01

    The emergence of macrosystems ecology (MSE), which focuses on regional- to continental-scale ecological patterns and processes, builds upon a history of long-term and broad-scale studies in ecology. Scientists face the difficulty of integrating the many elements that make up macrosystems, which consist of hierarchical processes at interacting spatial and temporal scales. Researchers must also identify the most relevant scales and variables to be considered, the required data resources, and the appropriate study design to provide the proper inferences. The large volumes of multi-thematic data often associated with macrosystem studies typically require validation, standardization, and assimilation. Finally, analytical approaches need to describe how cross-scale and hierarchical dynamics and interactions relate to macroscale phenomena. Here, we elaborate on some key methodological challenges of MSE research and discuss existing and novel approaches to meet them.

  15. The magnetic shear-current effect: Generation of large-scale magnetic fields by the small-scale dynamo

    DOE PAGES

    Squire, J.; Bhattacharjee, A.

    2016-03-14

    A novel large-scale dynamo mechanism, the magnetic shear-current effect, is discussed and explored. The effect relies on the interaction of magnetic fluctuations with a mean shear flow, meaning the saturated state of the small-scale dynamo can drive a large-scale dynamo – in some sense the inverse of dynamo quenching. The dynamo is non-helical, with the mean-field α coefficient zero, and is caused by the interaction between an off-diagonal component of the turbulent resistivity and the stretching of the large-scale field by the shear flow. Following up on previous numerical and analytic work, this paper presents further details of the numerical evidence for the effect, as well as a heuristic description of how magnetic fluctuations can interact with shear flow to produce the required electromotive force. The pressure response of the fluid is fundamental to this mechanism, which helps explain why the magnetic effect is stronger than its kinematic cousin, and the basic idea is related to the well-known lack of turbulent resistivity quenching by magnetic fluctuations. As well as being interesting for its applications to general high Reynolds number astrophysical turbulence, where strong small-scale magnetic fluctuations are expected to be prevalent, the magnetic shear-current effect is a likely candidate for large-scale dynamo in the unstratified regions of ionized accretion disks. Evidence for this is discussed, as well as future research directions and the challenges involved with understanding details of the effect in astrophysically relevant regimes.
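
    A hedged sketch, in standard mean-field dynamo notation rather than the paper's own equations (sign conventions vary between papers), of how an off-diagonal turbulent resistivity acting together with a mean shear can close a dynamo loop:

```latex
% Hedged sketch in standard mean-field notation (not copied from the paper):
% the turbulent electromotive force is expanded as
\mathcal{E}_i = \alpha_{ij}\,\bar{B}_j - \eta_{ij}\,\left(\nabla\times\bar{B}\right)_j .
% For horizontally averaged fields \bar{B}(z,t) in a shear flow U = S x\,\hat{y},
% and with \alpha_{ij} = 0 as stated in the abstract, this closes into
\partial_t \bar{B}_x = -\eta_{yx}\,\partial_z^2 \bar{B}_y + \eta_t\,\partial_z^2 \bar{B}_x ,\qquad
\partial_t \bar{B}_y = S\,\bar{B}_x - \eta_{xy}\,\partial_z^2 \bar{B}_x + \eta_t\,\partial_z^2 \bar{B}_y ,
% so an off-diagonal turbulent resistivity \eta_{yx} of the appropriate sign,
% acting with the shear term S\bar{B}_x, can amplify the mean field.
```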

  16. Overview of Opportunities for Co-Location of Solar Energy Technologies and Vegetation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macknick, Jordan; Beatty, Brenda; Hill, Graham

    2013-12-01

    Large-scale solar facilities have the potential to contribute significantly to national electricity production. Many solar installations are large-scale or utility-scale, with a capacity over 1 MW and connected directly to the electric grid. Large-scale solar facilities offer an opportunity to achieve economies of scale in solar deployment, yet there have been concerns about the amount of land required for solar projects and the impact of solar projects on local habitat. During the site preparation phase for utility-scale solar facilities, developers often grade land and remove all vegetation to minimize installation and operational costs, prevent plants from shading panels, and minimize potential fire or wildlife risks. However, the common site preparation practice of removing vegetation can be avoided in certain circumstances, and there have been successful examples where solar facilities have been co-located with agricultural operations or have native vegetation growing beneath the panels. In this study we outline some of the impacts that large-scale solar facilities can have on the local environment, provide examples of installations where impacts have been minimized through co-location with vegetation, characterize the types of co-location, and give an overview of the potential benefits from co-location of solar energy projects and vegetation. The varieties of co-location can be replicated or modified for site-specific use at other solar energy installations around the world. We conclude with opportunities to improve upon our understanding of ways to reduce the environmental impacts of large-scale solar installations.

  17. Visual attention mitigates information loss in small- and large-scale neural codes.

    PubMed

    Sprague, Thomas C; Saproo, Sameer; Serences, John T

    2015-04-01

    The visual system transforms complex inputs into robust and parsimonious neural codes that efficiently guide behavior. Because neural communication is stochastic, the amount of encoded visual information necessarily decreases with each synapse. This constraint requires that sensory signals are processed in a manner that protects information about relevant stimuli from degradation. Such selective processing--or selective attention--is implemented via several mechanisms, including neural gain and changes in tuning properties. However, examining each of these effects in isolation obscures their joint impact on the fidelity of stimulus feature representations by large-scale population codes. Instead, large-scale activity patterns can be used to reconstruct representations of relevant and irrelevant stimuli, thereby providing a holistic understanding about how neuron-level modulations collectively impact stimulus encoding. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. REDUCING THE WASTE STREAM: BRINGING ENVIRONMENTAL, ECONOMICAL, AND EDUCATIONAL COMPOSTING TO A LIBERAL ARTS COLLEGE

    EPA Science Inventory

    The Northfield, Minnesota area contains three institutions that produce a large amount of compostable food waste. St. Olaf College uses a large-scale on-site composting machine that effectively transforms the food waste to compost, but the system requires an immense start-up c...

  19. Validating the BERMS in situ soil moisture network with a large scale temporary network

    USDA-ARS?s Scientific Manuscript database

    Calibration and validation of soil moisture satellite products requires data records of large spatial and temporal extent, but obtaining this data can be challenging. These challenges can include remote locations, and expense of equipment. One location with a long record of soil moisture data is th...

  20. Correlation between Academic and Skills-Based Tests in Computer Networks

    ERIC Educational Resources Information Center

    Buchanan, William

    2006-01-01

    Computing-related programmes and modules face many problems, especially related to large class sizes, large-scale plagiarism, module franchising, and an increased demand from students for hands-on, practical work. This paper presents a practical computer networks module which uses a mixture of online examinations and a…

  1. Integrating complexity into data-driven multi-hazard supply chain network strategies

    USGS Publications Warehouse

    Long, Suzanna K.; Shoberg, Thomas G.; Ramachandran, Varun; Corns, Steven M.; Carlo, Hector J.

    2013-01-01

    Major strategies in the wake of a large-scale disaster have focused on short-term emergency response solutions. Few consider medium-to-long-term restoration strategies that reconnect urban areas to the national supply chain networks (SCN) and their supporting infrastructure. To re-establish this connectivity, the relationships within the SCN must be defined and formulated as a model of a complex adaptive system (CAS). A CAS model is a representation of a system that consists of large numbers of inter-connections, demonstrates non-linear behaviors and emergent properties, and responds to stimulus from its environment. CAS modeling is an effective method of managing the complexities associated with SCN restoration after large-scale disasters. In order to populate the data space, large data sets are required. Currently, access to these data is hampered by proprietary restrictions. The aim of this paper is to identify the data required to build a SCN restoration model, examine the inherent problems associated with these data, and understand the complexity that arises from integrating them.

  2. Identification and measurement of shrub type vegetation on large scale aerial photography

    NASA Technical Reports Server (NTRS)

    Driscoll, R. S.

    1970-01-01

    Important range-shrub species were identified at acceptable levels of accuracy on large-scale 70 mm color and color infrared aerial photographs. Identification of individual shrubs was significantly higher, however, on color infrared. Photoscales smaller than 1:2400 had limited value except for mature individuals of relatively tall species, and then only if crown margins did not overlap and sharp contrast was evident between the species and background. Larger scale photos were required for low-growing species in dense stands. The crown cover for individual species was estimated from the aerial photos either with a measuring magnifier or a projected-scale micrometer. These crown cover measurements provide techniques for earth-resource analyses when used in conjunction with space and high-altitude remotely procured photos.

  3. The Soldier Fitness Tracker: Global Delivery of Comprehensive Soldier Fitness

    ERIC Educational Resources Information Center

    Fravell, Mike; Nasser, Katherine; Cornum, Rhonda

    2011-01-01

    Carefully implemented technology strategies are vital to the success of large-scale initiatives such as the U.S. Army's Comprehensive Soldier Fitness (CSF) program. Achieving the U.S. Army's vision for CSF required a robust information technology platform that was scaled to millions of users and that leveraged the Internet to enable global reach.…

  4. Desalination: Status and Federal Issues

    DTIC Science & Technology

    2009-12-30

    on one side and lets purified water through. Reverse osmosis plants have fewer problems with corrosion and usually have lower energy requirements...Texas) and cities are actively researching and investigating the feasibility of large-scale desalination plants for municipal water supplies...desalination research and development, and in construction and operational costs of desalination demonstration projects and full-scale plants

  5. Using nocturnal cold air drainage flow to monitor ecosystem processes in complex terrain

    Treesearch

    Thomas G. Pypker; Michael H. Unsworth; Alan C. Mix; William Rugh; Troy Ocheltree; Karrin Alstad; Barbara J. Bond

    2007-01-01

    This paper presents initial investigations of a new approach to monitor ecosystem processes in complex terrain on large scales. Metabolic processes in mountainous ecosystems are poorly represented in current ecosystem monitoring campaigns because the methods used for monitoring metabolism at the ecosystem scale (e.g., eddy covariance) require flat study sites. Our goal...

  6. LARGE-SCALE HYDROGEN PRODUCTION FROM NUCLEAR ENERGY USING HIGH TEMPERATURE ELECTROLYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    James E. O'Brien

    2010-08-01

    Hydrogen can be produced from water splitting with relatively high efficiency using high-temperature electrolysis. This technology makes use of solid-oxide cells, running in the electrolysis mode to produce hydrogen from steam, while consuming electricity and high-temperature process heat. When coupled to an advanced high temperature nuclear reactor, the overall thermal-to-hydrogen efficiency for high-temperature electrolysis can be as high as 50%, which is about double the overall efficiency of conventional low-temperature electrolysis. Current large-scale hydrogen production is based almost exclusively on steam reforming of methane, a method that consumes a precious fossil fuel while emitting carbon dioxide to the atmosphere. Demand for hydrogen is increasing rapidly for refining of increasingly low-grade petroleum resources, such as the Athabasca oil sands, and for ammonia-based fertilizer production. Large quantities of hydrogen are also required for carbon-efficient conversion of biomass to liquid fuels. With supplemental nuclear hydrogen, almost all of the carbon in the biomass can be converted to liquid fuels in a nearly carbon-neutral fashion. Ultimately, hydrogen may be employed as a direct transportation fuel in a “hydrogen economy.” The large quantity of hydrogen that would be required for this concept should be produced without consuming fossil fuels or emitting greenhouse gases. An overview of the high-temperature electrolysis technology will be presented, including basic theory, modeling, and experimental activities. Modeling activities include both computational fluid dynamics and large-scale systems analysis. We have also demonstrated high-temperature electrolysis in our laboratory at the 15 kW scale, achieving a hydrogen production rate in excess of 5500 L/hr.

  7. Crater size estimates for large-body terrestrial impact

    NASA Technical Reports Server (NTRS)

    Schmidt, Robert M.; Housen, Kevin R.

    1988-01-01

    Calculating the effects of impacts leading to global catastrophes requires knowledge of the impact process at very large size scales. This information cannot be obtained directly but must be inferred from subscale physical simulations, numerical simulations, and scaling laws. Schmidt and Holsapple presented scaling laws based upon laboratory-scale impact experiments performed on a centrifuge (Schmidt, 1980; Schmidt and Holsapple, 1980). These experiments were used to develop scaling laws which were among the first to include the gravity dependence associated with increasing event size. At that time, using the results of experiments in dry sand and in water to provide bounds on crater size, they recognized that more precise bounds on large-body impact crater formation could be obtained with additional centrifuge experiments conducted in other geological media. In that previous work, simple power-law formulae were developed to relate final crater diameter to impactor size and velocity. In addition, Schmidt (1980) and Holsapple and Schmidt (1982) recognized that the energy scaling exponent is not a universal constant but depends upon the target media. More recently, Holsapple and Schmidt (1987) included results for non-porous materials and provided a basis for estimating crater formation kinematics and final crater size. A revised set of scaling relationships for all crater parameters of interest is presented. These include results for various target media and the kinematics of formation. Particular attention is given to possible limits brought about by very large impactors.
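
    For orientation, the gravity-regime point-source (pi-group) form commonly associated with this scaling framework can be written as below; the constant and coupling exponent are material dependent, and the expression is quoted from the general scaling literature rather than extracted from this paper:

```latex
% Hedged illustration in standard pi-group notation (gravity regime); K_1 and
% the coupling exponent \mu are material dependent and not taken from the paper:
\pi_V \equiv \frac{\rho\,V}{m} = K_1\,\pi_2^{-3\mu/(2+\mu)},
\qquad
\pi_2 \equiv \frac{g\,a}{U^2},
% where V is the crater volume, m, a, U the impactor mass, radius, and speed,
% \rho the target density, and g the surface gravity; larger impactors (larger
% \pi_2) yield proportionally smaller cratering efficiency \pi_V.
```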

  8. Flame Synthesis Of Single-Walled Carbon Nanotubes And Nanofibers

    NASA Technical Reports Server (NTRS)

    Wal, Randy L. Vander; Berger, Gordon M.; Ticich, Thomas M.

    2003-01-01

    Carbon nanotubes are widely sought for a variety of applications including gas storage, intercalation media, catalyst support and composite reinforcing material [1]. Each of these applications will require large-scale quantities of CNTs. A second consideration is that some of these applications may require redispersal of the collected CNTs and attachment to a support structure. If the CNTs could be synthesized directly upon the support to be used in the end application, a tremendous savings in post-synthesis processing could be realized. We have therefore pursued both aerosol and supported-catalyst synthesis of CNTs. Given space limitations, only the aerosol portion of the work is outlined here, though results from both thrusts will be presented during the talk. Aerosol methods of SWNT, MWNT or nanofiber synthesis hold promise of large-scale production to supply the tonnage quantities these applications will require. Aerosol methods may potentially permit control of the catalyst particle size, offer continuous processing, provide the highest product purity and, most importantly, are scaleable. Only via economy of scale will the cost of CNTs be low enough to realize the large-scale structural and power applications both on earth and in space. Present aerosol methods for SWNT synthesis include laser ablation of composite metal-graphite targets or thermal decomposition/pyrolysis of a sublimed or vaporized organometallic [2]. Both approaches, conducted within a high temperature furnace, have produced single-walled nanotubes (SWNTs). The former method requires sophisticated hardware and is inherently limited by the energy deposition that can be realized using pulsed laser light. The latter method, using expensive organometallics, is difficult to control for SWNT synthesis given a range of gas-particle mixing conditions along variable temperature gradients; multi-walled nanotubes (MWNTs) are a far more likely end product. Both approaches require large energy expenditures and produce CNTs at prohibitive costs, around $500 per gram. Moreover, these approaches do not possess demonstrated scalability. In contrast to these approaches, flame synthesis can be a very energy efficient, low-cost process [3]; a portion of the fuel serves as the heating source while the remainder serves as reactant. Moreover, flame systems are geometrically versatile, as illustrated by innumerable boiler and furnace designs. Addressing scalability, flame systems are commercially used for producing megatonnage quantities of carbon black [4]. Although it presents a complex chemically reacting flow, a flame also offers many variables for control, e.g. temperature, chemical environment and residence times [5]. Despite these advantages, there are challenges to scaling flame synthesis as well.

  9. Scaling of an information system in a public healthcare market--infrastructuring from the vendor's perspective.

    PubMed

    Johannessen, Liv Karen; Obstfelder, Aud; Lotherington, Ann Therese

    2013-05-01

    The purpose of this paper is to explore the making and scaling of information infrastructures, as well as how the conditions for scaling a component may change for the vendor. The first research question is how the making and scaling of a healthcare information infrastructure can be done and by whom. The second question is what scope for manoeuvre there might be for vendors aiming to expand their market. This case study is based on an interpretive approach, whereby data is gathered through participant observation and semi-structured interviews. A case study of the making and scaling of an electronic system for general practitioners ordering laboratory services from hospitals is described as comprising two distinct phases. The first may be characterized as an evolving phase, when development, integration and implementation were achieved in small steps, and the vendor, together with end users, had considerable freedom to create the solution according to the users' needs. The second phase was characterized by a large-scale procurement process over which regional healthcare authorities exercised much more control and the needs of groups other than the end users influenced the design. The making and scaling of healthcare information infrastructures is not simply a process of evolution, in which the end users use and change the technology. It also consists of large steps, during which different actors, including vendors and healthcare authorities, may make substantial contributions. This process requires work, negotiation and strategies. The conditions for the vendor may change dramatically, from considerable freedom and close relationships with users and customers in the small-scale development, to losing control of the product and being required to engage in more formal relations with customers in the wider public healthcare market. Onerous procurement processes may be one of the reasons why large-scale implementation of information projects in healthcare is difficult and slow. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. Quantifying aggregated uncertainty in Plasmodium falciparum malaria prevalence and populations at risk via efficient space-time geostatistical joint simulation.

    PubMed

    Gething, Peter W; Patil, Anand P; Hay, Simon I

    2010-04-01

    Risk maps estimating the spatial distribution of infectious diseases are required to guide public health policy from local to global scales. The advent of model-based geostatistics (MBG) has allowed these maps to be generated in a formal statistical framework, providing robust metrics of map uncertainty that enhance their utility for decision-makers. In many settings, decision-makers require spatially aggregated measures over large regions, such as the mean prevalence within a country or administrative region, or national populations living under different levels of risk. Existing MBG mapping approaches provide suitable metrics of local uncertainty--the fidelity of predictions at each mapped pixel--but have not been adapted for measuring uncertainty over large areas, due largely to a series of fundamental computational constraints. Here the authors present a new, efficient approximating algorithm that can generate, for the first time, the necessary joint simulation of prevalence values across the very large prediction spaces needed for global-scale mapping. This new approach is implemented in conjunction with an established model for P. falciparum, allowing robust estimates of mean prevalence at any specified level of spatial aggregation. The model is used to provide estimates of national populations at risk under three policy-relevant prevalence thresholds, along with accompanying model-based measures of uncertainty. By overcoming previously unchallenged computational barriers, this study illustrates how MBG approaches, already at the forefront of infectious disease mapping, can be extended to provide large-scale aggregate measures appropriate for decision-makers.
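
    A minimal sketch of why joint (rather than pixel-by-pixel) simulation enables aggregate uncertainty: each joint realization over all pixels yields one value of the population-weighted mean prevalence and one value of the population at risk, and the spread across realizations gives credible intervals. The prevalence draws, populations, and the 40% threshold below are synthetic illustrations, not the paper's data.

```python
import numpy as np

# Minimal sketch of aggregating joint simulations of pixel-level prevalence:
# each row of `sims` is one joint realization over all pixels, so aggregate
# quantities are computed per realization and then summarized.

rng = np.random.default_rng(1)
n_sims, n_pixels = 1000, 500
sims = rng.beta(2, 8, size=(n_sims, n_pixels))        # prevalence realizations
pop = rng.integers(100, 5000, size=n_pixels)           # population per pixel

mean_prev = (sims * pop).sum(axis=1) / pop.sum()       # national mean, per realization
pop_at_risk = (pop * (sims > 0.4)).sum(axis=1)         # people above a 40% threshold

print(np.percentile(mean_prev, [2.5, 50, 97.5]))       # credible interval for the mean
print(np.percentile(pop_at_risk, [2.5, 50, 97.5]))     # credible interval for PAR
```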

  11. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency and decrease the cost of developing the large-scale, heterogeneous circuits needed to enable large-scale applications in high-energy physics and other important areas.
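
    A minimal sketch of the entropy-based scoring idea behind information-aware partitioning, under the assumption that candidate cuts are compared by the empirical Shannon entropy of the signals crossing the chip boundary; the signal names, contents, and cuts are synthetic stand-ins rather than the CMS design.

```python
import numpy as np

# Minimal sketch: score each candidate cut by the empirical Shannon entropy
# (bits/sample) of the signals that would cross the chip boundary, and prefer
# cuts whose crossing data compress well (low entropy).

def empirical_entropy_bits(samples):
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
signals = {
    "sparse_hits": (rng.random(10000) < 0.02).astype(int),   # rarely set -> low entropy
    "trigger_bits": rng.integers(0, 2, 10000),                # ~1 bit/sample
    "raw_adc": rng.integers(0, 256, 10000),                   # ~8 bits/sample
}
cuts = {"cut_A": ["sparse_hits"], "cut_B": ["trigger_bits", "raw_adc"]}

cost = {name: sum(empirical_entropy_bits(signals[s]) for s in sigs)
        for name, sigs in cuts.items()}
print(cost, "choose", min(cost, key=cost.get))   # lower entropy -> cheaper inter-chip link
```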

  12. Innovative cold tolerance test for conifer seedlings

    Treesearch

    Peter A. Balk; Peter Bronnum; Mike Perks; Eva Stattin; Lonneke H. M. van der Geest; Monique F. van Wordragen

    2007-01-01

    Forest tree nurseries rely on tight scheduling of operations to deliver vital seedlings to the planting site. Cold storage is required to: (1) prevent winter damage, especially in container seedlings; (2) maintain planting stock in an inactive condition; and (3) ensure plant supply for geographically distinct planting sites, a definite requirement for large-scale...

  13. Development of immature tiger-fly Coenosia attenuata (Stein) reared on larvae of the fungus gnat Bradysia impatiens (Johannsen) in coir substrate

    USDA-ARS?s Scientific Manuscript database

    The ability to rear a beneficial predatory insect is often required for its use in inoculative releases for classical biological control applications. However, affordable mass production is required before a beneficial predatory insect will be commercialized for large scale repetitive releases. The...

  14. High-Threshold Fault-Tolerant Quantum Computation with Analog Quantum Error Correction

    NASA Astrophysics Data System (ADS)

    Fukui, Kosuke; Tomita, Akihisa; Okamoto, Atsushi; Fujii, Keisuke

    2018-04-01

    To implement fault-tolerant quantum computation with continuous variables, the Gottesman-Kitaev-Preskill (GKP) qubit has been recognized as an important technological element. However, it is still challenging to experimentally generate GKP qubits at the squeezing level of 14.8 dB required by existing fault-tolerant quantum computation schemes. To reduce this requirement, we propose a high-threshold fault-tolerant quantum computation with GKP qubits using topologically protected measurement-based quantum computation with the surface code. By harnessing analog information contained in the GKP qubits, we apply analog quantum error correction to the surface code. Furthermore, we develop a method to prevent the squeezing level from decreasing during the construction of the large-scale cluster states for the topologically protected measurement-based quantum computation. We numerically show that the required squeezing level can be relaxed to less than 10 dB, which is within the reach of current experimental technology. Hence, this work considerably alleviates the experimental requirement and takes a step closer to the realization of large-scale quantum computation.

  15. CImbinator: a web-based tool for drug synergy analysis in small- and large-scale datasets.

    PubMed

    Flobak, Åsmund; Vazquez, Miguel; Lægreid, Astrid; Valencia, Alfonso

    2017-08-01

    Drug synergies are sought to identify combinations of drugs that are particularly beneficial. User-friendly software solutions that can assist analysis of large-scale datasets are required. CImbinator is a web-service that can aid in batch-wise and in-depth analyses of data from small-scale and large-scale drug combination screens. CImbinator can quantify drug combination effects using both the commonly employed median-effect equation and more advanced mathematical models describing dose-response relationships. CImbinator is written in Ruby and R. It uses the R package drc for advanced drug response modeling. CImbinator is available at http://cimbinator.bioinfo.cnio.es , the source code is open and available at https://github.com/Rbbt-Workflows/combination_index . A Docker image is also available at https://hub.docker.com/r/mikisvaz/rbbt-ci_mbinator/ . asmund.flobak@ntnu.no or miguel.vazquez@cnio.es. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
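
    A minimal sketch of the median-effect / combination-index calculation that such tools automate (Chou-Talalay style); the median-effect dose Dm and slope m would normally be fitted to each drug's own dose-response data, and the numbers here are illustrative assumptions rather than CImbinator's implementation.

```python
# Minimal sketch of the median-effect equation and the resulting combination
# index. Dm (median-effect dose) and m (slope) are assumed, illustrative values.

def dose_for_effect(fa, Dm, m):
    """Dose of a single drug producing fractional effect fa (median-effect eq.)."""
    return Dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(d1, d2, fa, Dm1, m1, Dm2, m2):
    """CI < 1 synergy, CI = 1 additivity, CI > 1 antagonism at effect level fa."""
    return d1 / dose_for_effect(fa, Dm1, m1) + d2 / dose_for_effect(fa, Dm2, m2)

ci = combination_index(d1=0.5, d2=1.0, fa=0.6, Dm1=2.0, m1=1.2, Dm2=4.0, m2=0.9)
print(ci)   # ~0.34 for these toy parameters, i.e. nominal synergy
```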

  16. Preferential pathways in complex fracture systems and their influence on large scale transport

    NASA Astrophysics Data System (ADS)

    Willmann, M.; Mañé, R.; Tyukhova, A.

    2017-12-01

    Many subsurface applications in complex fracture systems require large-scale predictions. Precise predictions are difficult because of the existence of preferential pathways at different scales. The intrinsic complexity of fracture systems increases in fractured sedimentary formations, because the coupling of fractures and matrix must also be taken into account. This interplay between the fracture system and the sedimentary matrix is strongly controlled by the actual aperture of an individual fracture, and an effective aperture cannot easily be determined because of the preferential pathways along the fracture plane. We investigate the influence of these preferential pathways on large-scale solute transport and upscale the aperture. By explicitly modeling flow and particle tracking in individual fractures, we develop a new effective transport aperture weighted by the aperture along the preferential paths: a Lagrangian aperture. We show that this new aperture is consistently larger than existing definitions of effective flow and transport apertures. Finally, we apply our results to a fractured sedimentary formation in Northern Switzerland.

  17. Large-scale functional models of visual cortex for remote sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, which requires ~1 petaflop of computation. In a year, the retina delivers ~1 petapixel to the brain, leading to massively large opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

  18. Annual Research Briefs

    NASA Technical Reports Server (NTRS)

    Spinks, Debra (Compiler)

    1997-01-01

    This report contains the 1997 annual progress reports of the research fellows and students supported by the Center for Turbulence Research (CTR). Titles include: Invariant modeling in large-eddy simulation of turbulence; Validation of large-eddy simulation in a plain asymmetric diffuser; Progress in large-eddy simulation of trailing-edge turbulence and aeronautics; Resolution requirements in large-eddy simulations of shear flows; A general theory of discrete filtering for LES in complex geometry; On the use of discrete filters for large eddy simulation; Wall models in large eddy simulation of separated flow; Perspectives for ensemble average LES; Anisotropic grid-based formulas for subgrid-scale models; Some modeling requirements for wall models in large eddy simulation; Numerical simulation of 3D turbulent boundary layers using the V2F model; Accurate modeling of impinging jet heat transfer; Application of turbulence models to high-lift airfoils; Advances in structure-based turbulence modeling; Incorporating realistic chemistry into direct numerical simulations of turbulent non-premixed combustion; Effects of small-scale structure on turbulent mixing; Turbulent premixed combustion in the laminar flamelet and the thin reaction zone regime; Large eddy simulation of combustion instabilities in turbulent premixed burners; On the generation of vorticity at a free-surface; Active control of turbulent channel flow; A generalized framework for robust control in fluid mechanics; Combined immersed-boundary/B-spline methods for simulations of flow in complex geometries; and DNS of shock boundary-layer interaction - preliminary results for compression ramp flow.

  19. Large-scale automated histology in the pursuit of connectomes.

    PubMed

    Kleinfeld, David; Bharioke, Arjun; Blinder, Pablo; Bock, Davi D; Briggman, Kevin L; Chklovskii, Dmitri B; Denk, Winfried; Helmstaedter, Moritz; Kaufhold, John P; Lee, Wei-Chung Allen; Meyer, Hanno S; Micheva, Kristina D; Oberlaender, Marcel; Prohaska, Steffen; Reid, R Clay; Smith, Stephen J; Takemura, Shinya; Tsai, Philbert S; Sakmann, Bert

    2011-11-09

    How does the brain compute? Answering this question necessitates neuronal connectomes, annotated graphs of all synaptic connections within defined brain areas. Further, understanding the energetics of the brain's computations requires vascular graphs. The assembly of a connectome requires sensitive hardware tools to measure neuronal and neurovascular features in all three dimensions, as well as software and machine learning for data analysis and visualization. We present the state of the art on the reconstruction of circuits and vasculature that link brain anatomy and function. Analysis at the scale of tens of nanometers yields connections between identified neurons, while analysis at the micrometer scale yields probabilistic rules of connection between neurons and exact vascular connectivity.

  20. Large-Scale Automated Histology in the Pursuit of Connectomes

    PubMed Central

    Bharioke, Arjun; Blinder, Pablo; Bock, Davi D.; Briggman, Kevin L.; Chklovskii, Dmitri B.; Denk, Winfried; Helmstaedter, Moritz; Kaufhold, John P.; Lee, Wei-Chung Allen; Meyer, Hanno S.; Micheva, Kristina D.; Oberlaender, Marcel; Prohaska, Steffen; Reid, R. Clay; Smith, Stephen J.; Takemura, Shinya; Tsai, Philbert S.; Sakmann, Bert

    2011-01-01

    How does the brain compute? Answering this question necessitates neuronal connectomes, annotated graphs of all synaptic connections within defined brain areas. Further, understanding the energetics of the brain's computations requires vascular graphs. The assembly of a connectome requires sensitive hardware tools to measure neuronal and neurovascular features in all three dimensions, as well as software and machine learning for data analysis and visualization. We present the state of the art on the reconstruction of circuits and vasculature that link brain anatomy and function. Analysis at the scale of tens of nanometers yields connections between identified neurons, while analysis at the micrometer scale yields probabilistic rules of connection between neurons and exact vascular connectivity. PMID:22072665

  1. Combining states without scale hierarchies with ordered parton showers

    DOE PAGES

    Fischer, Nadine; Prestel, Stefan

    2017-09-12

    Here, we present a parameter-free scheme to combine fixed-order multi-jet results with parton-shower evolution. The scheme produces jet cross sections with leading-order accuracy in the complete phase space of multiple emissions, resumming large logarithms when appropriate, while not arbitrarily enforcing ordering on momentum configurations beyond the reach of the parton-shower evolution equation. This then requires the development of a matrix-element correction scheme for complex phase-spaces including ordering conditions as well as a systematic scale-setting procedure for unordered phase-space points. Our algorithm does not require a merging-scale parameter. We implement the new method in the Vincia framework and compare to LHC data.

  2. Pyrotechnic hazards classification and evaluation program. Run-up reaction testing in pyrotechnic dust suspensions

    NASA Technical Reports Server (NTRS)

    1971-01-01

    A preliminary investigation of the parameters included in run-up dust reactions is presented. Two types of tests were conducted: (1) ignition criteria of large bulk pyrotechnic dusts, and (2) optimal run-up conditions of large bulk pyrotechnic dusts. These tests were used to evaluate the order of magnitude and gross scale requirements needed to induce run-up reactions in pyrotechnic dusts and to simulate at reduced scale an accident that occurred in a manufacturing installation. Test results showed that propagation of pyrotechnic dust clouds resulted in a fireball of relatively long duration and large size. In addition, a plane wave front was observed to travel down the length of the gallery.

  3. A new way to protect privacy in large-scale genome-wide association studies.

    PubMed

    Kamm, Liina; Bogdanov, Dan; Laur, Sven; Vilo, Jaak

    2013-04-01

    Increased availability of various genotyping techniques has initiated a race for finding genetic markers that can be used in diagnostics and personalized medicine. Although many genetic risk factors are known, key causes of common diseases with complex inheritance patterns are still unknown. Identification of such complex traits requires a targeted study over a large collection of data. Ideally, such studies bring together data from many biobanks. However, data aggregation on such a large scale raises many privacy issues. We show how to conduct such studies without violating the privacy of individual donors and without leaking the data to third parties. The presented solution has provable security guarantees. Supplementary data are available at Bioinformatics online.
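
    A toy sketch of the secret-sharing principle underlying such privacy-preserving analyses: each biobank splits a local allele count into additive shares modulo a large prime so that no single computing party learns the count, yet the shares still sum to the correct aggregate. This illustrates the general idea, not the cited system's actual protocol.

```python
import secrets

# Toy additive secret sharing: each biobank splits its local count into shares
# modulo a large prime, one per compute party; parties add shares locally and
# only the final sum is revealed. Counts and party numbers are synthetic.

PRIME = 2**61 - 1

def share(value, n_parties):
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

biobank_counts = [132, 88, 261]            # local minor-allele counts (synthetic)
n_parties = 3
per_party = [0] * n_parties
for count in biobank_counts:
    for i, s in enumerate(share(count, n_parties)):
        per_party[i] = (per_party[i] + s) % PRIME

total = sum(per_party) % PRIME
print(total == sum(biobank_counts))        # True: aggregate recovered, inputs hidden
```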

  4. Oceanic Transport

    NASA Technical Reports Server (NTRS)

    Chase, R.; Mcgoldrick, L.

    1984-01-01

    The importance of large-scale ocean movements to the moderation of global temperature is discussed, along with the observational requirements of physical oceanography. Satellite-based oceanographic observing systems are seen as central to oceanography in the 1990s.

  5. Remote visualization and scale analysis of large turbulence datasets

    NASA Astrophysics Data System (ADS)

    Livescu, D.; Pulido, J.; Burns, R.; Canada, C.; Ahrens, J.; Hamann, B.

    2015-12-01

    Accurate simulations of turbulent flows require solving all the dynamically relevant scales of motion. This technique, called Direct Numerical Simulation, has been successfully applied to a variety of simple flows; however, the large-scale flows encountered in Geophysical Fluid Dynamics (GFD) would require meshes outside the range of the most powerful supercomputers for the foreseeable future. Nevertheless, the current generation of petascale computers has enabled unprecedented simulations of many types of turbulent flows which focus on various GFD aspects, from the idealized configurations extensively studied in the past to more complex flows closer to the practical applications. The pace at which such simulations are performed only continues to increase; however, the simulations themselves are restricted to a small number of groups with access to large computational platforms. Yet the petabytes of turbulence data offer almost limitless information on many different aspects of the flow, from the hierarchy of turbulence moments, spectra and correlations, to structure functions, geometrical properties, etc. The ability to share such datasets with other groups can significantly reduce the time to analyze the data, help the creative process and increase the pace of discovery. Using the largest DOE supercomputing platforms, we have performed some of the biggest turbulence simulations to date, in various configurations, addressing specific aspects of turbulence production and mixing mechanisms. Until recently, the visualization and analysis of such datasets was restricted by access to large supercomputers. The public Johns Hopkins Turbulence Database simplifies access to multi-Terabyte turbulence datasets and facilitates turbulence analysis through the use of commodity hardware. First, one of our datasets, which is part of the database, will be described, and then a framework that adds high-speed visualization and wavelet support for multi-resolution analysis of turbulence will be highlighted. The addition of wavelet support reduces the latency and bandwidth requirements for visualization, allowing for many concurrent users, and enables new types of analyses, including scale decomposition and coherent feature extraction.
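
    A minimal sketch of the wavelet-based scale decomposition mentioned above, assuming the PyWavelets package is available: a multilevel DWT splits a (here synthetic) field into a small coarse approximation plus detail bands, so coarse previews can be streamed cheaply and energy can be attributed to scales.

```python
import numpy as np
import pywt   # PyWavelets, assumed available; any DWT library would do

# Minimal sketch of wavelet scale decomposition of a 3-D field: the multilevel
# DWT yields a coarse approximation (cheap to transmit as a preview) plus
# detail bands whose energies characterize the fluctuations at each scale.

rng = np.random.default_rng(0)
field = rng.normal(size=(128, 128, 128)).astype(np.float32)   # synthetic field

coeffs = pywt.wavedecn(field, wavelet="haar", level=3)
approx = coeffs[0]                                   # 16^3 coarse preview
detail_energy = [
    sum(float((band**2).sum()) for band in level.values()) for level in coeffs[1:]
]
print(approx.shape, detail_energy)                   # energy per detail level
```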

  6. The AgMIP GRIDded Crop Modeling Initiative (AgGRID) and the Global Gridded Crop Model Intercomparison (GGCMI)

    NASA Technical Reports Server (NTRS)

    Elliott, Joshua; Muller, Christoff

    2015-01-01

    Climate change is a significant risk for agricultural production. Even under optimistic scenarios for climate mitigation action, present-day agricultural areas are likely to face significant increases in temperatures in the coming decades, in addition to changes in precipitation, cloud cover, and the frequency and duration of extreme heat, drought, and flood events (IPCC, 2013). These factors will affect the agricultural system at the global scale by impacting cultivation regimes, prices, trade, and food security (Nelson et al., 2014a). Global-scale evaluation of crop productivity is a major challenge for climate impact and adaptation assessment. Rigorous global assessments that are able to inform planning and policy will benefit from consistent use of models, input data, and assumptions across regions and time that use mutually agreed protocols designed by the modeling community. To ensure this consistency, large-scale assessments are typically performed on uniform spatial grids, with spatial resolution of typically 10 to 50 km, over specified time-periods. Many distinct crop models and model types have been applied on the global scale to assess productivity and climate impacts, often with very different results (Rosenzweig et al., 2014). These models are based to a large extent on field-scale crop process or ecosystems models and they typically require resolved data on weather, environmental, and farm management conditions that are lacking in many regions (Bondeau et al., 2007; Drewniak et al., 2013; Elliott et al., 2014b; Gueneau et al., 2012; Jones et al., 2003; Liu et al., 2007; Müller and Robertson, 2014; Van den Hoof et al., 2011; Waha et al., 2012; Xiong et al., 2014). Due to data limitations, the requirements of consistency, and the computational and practical limitations of running models on a large scale, a variety of simplifying assumptions must generally be made regarding prevailing management strategies on the grid scale in both the baseline and future periods. Implementation differences in these and other modeling choices contribute to significant variation among global-scale crop model assessments in addition to differences in crop model implementations that also cause large differences in site-specific crop modeling (Asseng et al., 2013; Bassu et al., 2014).

  7. Experimental design for estimating unknown groundwater pumping using genetic algorithm and reduced order model

    NASA Astrophysics Data System (ADS)

    Ushijima, Timothy T.; Yeh, William W.-G.

    2013-10-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provide maximum information about unknown groundwater pumping in a confined, anisotropic aquifer. The design uses a maximal information criterion that chooses, among competing designs, the design that maximizes the sum of squared sensitivities while conforming to specified design constraints. The formulated optimization problem is non-convex and contains integer variables necessitating a combinatorial search. Given a realistic large-scale model, the size of the combinatorial search required can make the problem difficult, if not impossible, to solve using traditional mathematical programming techniques. Genetic algorithms (GAs) can be used to perform the global search; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem still may be infeasible to solve. As a result, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimensionality. Then, the information matrix in the full model space can be searched without solving the full model. Results from a small-scale test case show identical optimal solutions among the GA, integer programming, and exhaustive search methods. This demonstrates the GA's ability to determine the optimal solution. In addition, the results show that a GA with POD model reduction is several orders of magnitude faster in finding the optimal solution than a GA using the full model. The proposed experimental design algorithm is applied to a realistic, two-dimensional, large-scale groundwater problem. The GA converged to a solution for this large-scale problem.
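
    A minimal sketch of the information criterion described above, under assumed shapes: given a sensitivity (Jacobian) matrix with one row per candidate observation well and one column per unknown pumping parameter, a design's score is the sum of squared sensitivities over the selected rows. The exhaustive search shown is feasible only at toy scale, which is why the paper turns to a GA with a POD-reduced model.

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(0)
      n_candidates, n_params, n_wells = 20, 8, 5            # hypothetical sizes
      J = rng.normal(size=(n_candidates, n_params))          # stand-in sensitivity matrix

      def design_score(rows):
          """Sum of squared sensitivities for a candidate set of well locations."""
          return float(np.sum(J[list(rows)] ** 2))

      # Exhaustive combinatorial search (feasible only for this toy problem)
      best = max(combinations(range(n_candidates), n_wells), key=design_score)
      print("best design:", best, "score:", round(design_score(best), 3))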

  8. [Privacy and public benefit in using large scale health databases].

    PubMed

    Yamamoto, Ryuichi

    2014-01-01

    In Japan, large-scale health databases have been constructed within a few years, such as the National Claim insurance and health checkup database (NDB) and the Japanese Sentinel project. However, there are legal issues in striking an adequate balance between privacy and public benefit when using such databases. The NDB operates under the act for elderly persons' health care, but this act says nothing about using the database for the general public benefit. Researchers who use the database are therefore forced to devote much attention to anonymization and information security, which may hinder the research itself. The Japanese Sentinel project is a national project for detecting adverse drug reactions using large-scale distributed clinical databases of large hospitals. Although patients give consent for such future use for the public good, the use of insufficiently anonymized data is still under discussion. Generally speaking, researchers conducting studies for the public benefit will not infringe patients' privacy, but vague and complex legislative requirements for personal data protection may hinder such research. Medical science cannot progress without using clinical information, so legislation that is simple and clear for both researchers and patients is strongly required. In Japan, a specific act for balancing privacy and public benefit is now under discussion. The author recommends that researchers, including those in the field of pharmacology, pay attention to, participate in the discussion of, and make suggestions for such acts and regulations.

  9. Automation of large scale transient protein expression in mammalian cells

    PubMed Central

    Zhao, Yuguang; Bishop, Benjamin; Clay, Jordan E.; Lu, Weixian; Jones, Margaret; Daenke, Susan; Siebold, Christian; Stuart, David I.; Yvonne Jones, E.; Radu Aricescu, A.

    2011-01-01

    Traditional mammalian expression systems rely on the time-consuming generation of stable cell lines; this is difficult to accommodate within a modern structural biology pipeline. Transient transfections are a fast, cost-effective solution, but require skilled cell culture scientists, making man-power a limiting factor in a setting where numerous samples are processed in parallel. Here we report a strategy employing a customised CompacT SelecT cell culture robot allowing the large-scale expression of multiple protein constructs in a transient format. Successful protocols have been designed for automated transient transfection of human embryonic kidney (HEK) 293T and 293S GnTI− cells in various flask formats. Protein yields obtained by this method were similar to those produced manually, with the added benefit of reproducibility, regardless of user. Automation of cell maintenance and transient transfection allows the expression of high quality recombinant protein in a completely sterile environment with limited support from a cell culture scientist. The reduction in human input has the added benefit of enabling continuous cell maintenance and protein production, features of particular importance to structural biology laboratories, which typically use large quantities of pure recombinant proteins, and often require rapid characterisation of a series of modified constructs. This automated method for large scale transient transfection is now offered as a Europe-wide service via the P-cube initiative. PMID:21571074

  10. Developing eThread pipeline using SAGA-pilot abstraction for large-scale structural bioinformatics.

    PubMed

    Ragothaman, Anjani; Boddu, Sairam Chowdary; Kim, Nayong; Feinstein, Wei; Brylinski, Michal; Jha, Shantenu; Kim, Joohyun

    2014-01-01

    While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because the predicted structural information can uncover the underlying function. However, threading tools are generally compute-intensive, and the number of protein sequences from even small genomes such as prokaryotes is large, typically many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage their utility, we have developed a pipeline for eThread, a meta-threading protein structure modeling tool, that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data- and task-level parallelism and manages large variations in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present a runtime analysis to characterize the computational complexity of eThread and the EC2 infrastructure. Based on the results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly amenable for small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure.
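
    The task-level parallelism described above can be pictured with a generic sketch using Python's standard library rather than the SAGA/RADICAL-Pilot API; the sequence names, the stand-in scoring function, and the worker count are placeholders, and a real deployment would dispatch pilot jobs on EC2 as the abstract describes.

      import hashlib
      from concurrent.futures import ProcessPoolExecutor

      # Placeholder stand-ins for protein sequences; a real run would read FASTA files.
      sequences = {f"protein_{i:03d}": "MKT" * (50 + i) for i in range(8)}

      def threading_task(item):
          """Stand-in for one compute-intensive threading job on a single sequence."""
          name, seq = item
          score = int(hashlib.sha256(seq.encode()).hexdigest(), 16) % 1000  # fake model score
          return name, score

      if __name__ == "__main__":
          # Tasks are independent, so they map onto however many workers are available,
          # mirroring the task-level parallelism of the pilot-based pipeline.
          with ProcessPoolExecutor(max_workers=4) as pool:
              for name, score in pool.map(threading_task, sequences.items()):
                  print(name, score)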

  11. Developing eThread Pipeline Using SAGA-Pilot Abstraction for Large-Scale Structural Bioinformatics

    PubMed Central

    Ragothaman, Anjani; Feinstein, Wei; Jha, Shantenu; Kim, Joohyun

    2014-01-01

    While most computational annotation approaches are sequence-based, threading methods are becoming increasingly attractive because the predicted structural information can uncover the underlying function. However, threading tools are generally compute-intensive, and the number of protein sequences from even small genomes such as prokaryotes is large, typically many thousands, prohibiting their application as a genome-wide structural systems biology tool. To leverage their utility, we have developed a pipeline for eThread, a meta-threading protein structure modeling tool, that can use computational resources efficiently and effectively. We employ a pilot-based approach that supports seamless data- and task-level parallelism and manages large variations in workload and computational requirements. Our scalable pipeline is deployed on Amazon EC2 and can efficiently select resources based upon task requirements. We present a runtime analysis to characterize the computational complexity of eThread and the EC2 infrastructure. Based on the results, we suggest a pathway to an optimized solution with respect to metrics such as time-to-solution or cost-to-solution. Our eThread pipeline can scale to support a large number of sequences and is expected to be a viable solution for genome-scale structural bioinformatics and structure-based annotation, particularly amenable for small genomes such as prokaryotes. The developed pipeline is easily extensible to other types of distributed cyberinfrastructure. PMID:24995285

  12. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    PubMed

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low-rank approximation approaches with significant memory savings for large-scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low-rank approximation of large MRF dictionaries. We further relax this requirement by exploiting the structure of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high-resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000 times for the MRF fast imaging with steady-state precession sequence and more than 15 times for the MRF balanced steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary-fitting methods are memory-efficient low-rank approximation methods that can benefit the use of MRF in clinical settings. They also have great potential in large-scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
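
    The memory-saving step can be pictured with the short sketch below, which compresses a synthetic stand-in dictionary to rank k with scikit-learn's randomized SVD and matches a noisy fingerprint in the compressed space. The sizes, the low-rank synthetic dictionary, and the simple inner-product matching are assumptions, and the polynomial fitting described in the abstract is omitted.

      import numpy as np
      from sklearn.utils.extmath import randomized_svd

      rng = np.random.default_rng(1)
      n_entries, n_timepoints, k = 10000, 500, 25
      # Synthetic low-rank stand-in for an MRF dictionary (real dictionaries are highly correlated)
      D = rng.normal(size=(n_entries, 40)) @ rng.normal(size=(40, n_timepoints))

      U, s, Vt = randomized_svd(D, n_components=k, random_state=0)
      D_compressed = U * s                          # entries x k instead of entries x timepoints
      print("memory ratio:", D.size / (D_compressed.size + Vt.size))

      # Match a noisy fingerprint against the compressed dictionary
      signal = D[1234] + 0.05 * rng.normal(size=n_timepoints)
      proj = Vt @ signal                            # project the signal into the rank-k space
      scores = (D_compressed @ proj) / (np.linalg.norm(D_compressed, axis=1) + 1e-12)
      print("best-matching entry (should be 1234):", int(np.argmax(scores)))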

  13. Rucio, the next-generation Data Management system in ATLAS

    NASA Astrophysics Data System (ADS)

    Serfon, C.; Barisits, M.; Beermann, T.; Garonne, V.; Goossens, L.; Lassnig, M.; Nairz, A.; Vigne, R.; ATLAS Collaboration

    2016-04-01

    Rucio is the next-generation Distributed Data Management (DDM) system of ATLAS, benefiting from recent advances in cloud and "Big Data" computing to address the experiment's scaling requirements. Rucio is an evolution of the ATLAS DDM system Don Quixote 2 (DQ2), which has demonstrated very large scale data management capabilities, with more than 160 petabytes spread worldwide across 130 sites and accesses from 1,000 active users. However, DQ2 is reaching its limits in terms of scalability, requiring a large number of support staff to operate and being hard to extend with new technologies. Rucio addresses these issues by relying on new technologies to ensure system scalability, covering new user requirements, and employing a new automation framework to reduce operational overheads. This paper presents the key concepts of Rucio, details its design and the technology it employs, describes the tests conducted to validate it, and finally outlines the migration steps taken to move from DQ2 to Rucio.

  14. Control of fluxes in metabolic networks.

    PubMed

    Basler, Georg; Nikoloski, Zoran; Larhlimi, Abdelhalim; Barabási, Albert-László; Liu, Yang-Yu

    2016-07-01

    Understanding the control of large-scale metabolic networks is central to biology and medicine. However, existing approaches either require specifying a cellular objective or can only be used for small networks. We introduce new coupling types describing the relations between reaction activities, and develop an efficient computational framework, which does not require any cellular objective for systematic studies of large-scale metabolism. We identify the driver reactions facilitating control of 23 metabolic networks from all kingdoms of life. We find that unicellular organisms require a smaller degree of control than multicellular organisms. Driver reactions are under complex cellular regulation in Escherichia coli, indicating their preeminent role in facilitating cellular control. In human cancer cells, driver reactions play pivotal roles in malignancy and represent potential therapeutic targets. The developed framework helps us gain insights into regulatory principles of diseases and facilitates design of engineering strategies at the interface of gene regulation, signaling, and metabolism. © 2016 Basler et al.; Published by Cold Spring Harbor Laboratory Press.

  15. 2:1 for naturalness at the LHC?

    NASA Astrophysics Data System (ADS)

    Arkani-Hamed, Nima; Blum, Kfir; D'Agnolo, Raffaele Tito; Fan, JiJi

    2013-01-01

    A large enhancement of a factor of 1.5 - 2 in Higgs production and decay in the diphoton channel, with little deviation in the ZZ channel, can only plausibly arise from a loop of new charged particles with large couplings to the Higgs. We show that, allowing only new fermions with marginal interactions at the weak scale, the required Yukawa couplings for a factor of 2 enhancement are so large that the Higgs quartic coupling is pushed to large negative values in the UV, triggering an unacceptable vacuum instability far beneath the 10 TeV scale. An enhancement by a factor of 1.5 can be accommodated if the charged particles are lighter than 150 GeV, within reach of discovery in almost all cases in the 8 TeV run at the LHC, and in even the most difficult cases at 14 TeV. Thus if the diphoton enhancement survives further scrutiny, and no charged particles beneath 150 GeV are found, there must be new bosons far beneath the 10 TeV scale. This would unambiguously rule out a large class of fine-tuned theories for physics beyond the Standard Model, including split SUSY and many of its variants, and provide strong circumstantial evidence for a natural theory of electroweak symmetry breaking at the TeV scale. Alternately, theories with only a single fine-tuned Higgs and new fermions at the weak scale, with no additional scalars or gauge bosons up to a cutoff much larger than the 10 TeV scale, unambiguously predict that the hints for a large diphoton enhancement in the current data will disappear.

  16. Constraints and consequences of reducing small scale structure via large dark matter-neutrino interactions

    DOE PAGES

    Bertoni, Bridget; Ipek, Seyda; McKeen, David; ...

    2015-04-30

    Here, cold dark matter explains a wide range of data on cosmological scales. However, there has been a steady accumulation of evidence for discrepancies between simulations and observations at scales smaller than galaxy clusters. One promising way to affect structure formation on small scales is a relatively strong coupling of dark matter to neutrinos. We construct an experimentally viable, simple, renormalizable model with new interactions between neutrinos and dark matter and provide the first discussion of how these new dark matter-neutrino interactions affect neutrino phenomenology. We show that addressing the small scale structure problems requires asymmetric dark matter with a mass that is tens of MeV. Generating a sufficiently large dark matter-neutrino coupling requires a new heavy neutrino with a mass around 100 MeV. The heavy neutrino is mostly sterile but has a substantial τ neutrino component, while the three nearly massless neutrinos are partly sterile. This model can be tested by future astrophysical, particle physics, and neutrino oscillation data. Promising signatures of this model include alterations to the neutrino energy spectrum and flavor content observed from a future nearby supernova, anomalous matter effects in neutrino oscillations, and a component of the τ neutrino with mass around 100 MeV.

  17. Improving data workflow systems with cloud services and use of open data for bioinformatics research.

    PubMed

    Karim, Md Rezaul; Michel, Audrey; Zappa, Achille; Baranov, Pavel; Sahay, Ratnesh; Rebholz-Schuhmann, Dietrich

    2017-04-16

    Data workflow systems (DWFSs) enable bioinformatics researchers to combine components for data access and data analytics, and to share the final data analytics approach with their collaborators. Increasingly, such systems have to cope with large-scale data, such as full genomes (about 200 GB each), public fact repositories (about 100 TB of data) and 3D imaging data at even larger scales. As moving the data becomes cumbersome, the DWFS needs to embed its processes into a cloud infrastructure, where the data are already hosted. As the standardized public data play an increasingly important role, the DWFS needs to comply with Semantic Web technologies. This advancement to DWFS would reduce overhead costs and accelerate the progress in bioinformatics research based on large-scale data and public resources, as researchers would require less specialized IT knowledge for the implementation. Furthermore, the high data growth rates in bioinformatics research drive the demand for parallel and distributed computing, which then imposes a need for scalability and high-throughput capabilities onto the DWFS. As a result, requirements for data sharing and access to public knowledge bases suggest that compliance of the DWFS with Semantic Web standards is necessary. In this article, we will analyze the existing DWFS with regard to their capabilities toward public open data use as well as large-scale computational and human interface requirements. We untangle the parameters for selecting a preferable solution for bioinformatics research with particular consideration to using cloud services and Semantic Web technologies. Our analysis leads to research guidelines and recommendations toward the development of future DWFS for the bioinformatics research community. © The Author 2017. Published by Oxford University Press.

  18. Space Weather Research at the National Science Foundation

    NASA Astrophysics Data System (ADS)

    Moretto, T.

    2015-12-01

    There is growing recognition that the space environment can have substantial, deleterious impacts on society. Consequently, research enabling specification and forecasting of hazardous space effects has become of great importance and urgency. This research requires studying the entire Sun-Earth system to understand the coupling of regions all the way from the source of disturbances in the solar atmosphere to the Earth's upper atmosphere. The traditional, region-based structure of research programs in Solar and Space physics is ill-suited to fully support the change in research directions that the problem of space weather dictates. On the observational side, dense, distributed networks of observations are required to capture the full large-scale dynamics of the space environment. However, the cost of implementing these is typically prohibitive, especially for measurements in space. Thus, by necessity, the implementation of such new capabilities needs to build on creative and unconventional solutions. A particularly powerful idea is the utilization of new developments in data engineering and informatics research (big data). These new technologies make it possible to build systems that can collect and process huge amounts of noisy and inaccurate data and extract from them useful information. The shift in emphasis towards system-level science for geospace also necessitates the development of large-scale and multi-scale models. The development of large-scale models capable of capturing the global dynamics of the Earth's space environment requires investment in research team efforts that go beyond what can typically be funded under the traditional grants programs. This calls for effective interdisciplinary collaboration and efficient leveraging of resources both nationally and internationally. This presentation will provide an overview of current and planned initiatives, programs, and activities at the National Science Foundation pertaining to space weather research.

  19. Theory of wavelet-based coarse-graining hierarchies for molecular dynamics.

    PubMed

    Rinderspacher, Berend Christopher; Bardhan, Jaydeep P; Ismail, Ahmed E

    2017-07-01

    We present a multiresolution approach to compressing the degrees of freedom and potentials associated with molecular dynamics, such as the bond potentials. The approach suggests a systematic way to accelerate large-scale molecular simulations with more than two levels of coarse graining, particularly for applications to polymeric materials. In particular, we derive explicit models for (arbitrarily large) linear (homo)polymers and iterative methods to compute large-scale wavelet decompositions from fragment solutions. This approach does not require explicit preparation of atomistic-to-coarse-grained mappings, but instead uses the theory of diffusion wavelets for graph Laplacians to develop system-specific mappings. Our methodology leads to a hierarchy of system-specific coarse-grained degrees of freedom that provides a conceptually clear and mathematically rigorous framework for modeling chemical systems at relevant model scales. The approach is capable of automatically generating as many coarse-grained model scales as necessary, that is, of going beyond the two scales in conventional coarse-grained strategies; furthermore, the wavelet-based coarse-grained models explicitly link time and length scales. Finally, a straightforward method for the reintroduction of omitted degrees of freedom is presented, which plays a major role in maintaining model fidelity in long-time simulations and in capturing emergent behaviors.
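
    A hedged toy sketch of the system-derived mapping idea, assuming a linear homopolymer of N beads: the path-graph Laplacian's smoothest eigenmodes give coarse collective coordinates without hand-built bead groupings. This is ordinary spectral coarse-graining, not the paper's diffusion-wavelet construction.

      import numpy as np

      N = 64                                   # beads in a hypothetical linear homopolymer
      L = np.zeros((N, N))
      for i in range(N - 1):                   # one bond between neighbouring beads
          L[i, i] += 1.0
          L[i + 1, i + 1] += 1.0
          L[i, i + 1] -= 1.0
          L[i + 1, i] -= 1.0

      eigvals, eigvecs = np.linalg.eigh(L)     # eigenmodes ordered from smooth to rough
      modes = eigvecs[:, 1:9]                  # 8 smoothest non-trivial modes

      positions = np.cumsum(np.random.default_rng(4).normal(size=(N, 3)), axis=0)
      coarse_coords = modes.T @ positions      # 8 coarse collective coordinates (x, y, z)
      print(coarse_coords.shape)               # (8, 3)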

  20. Parallel Simulation of Unsteady Turbulent Flames

    NASA Technical Reports Server (NTRS)

    Menon, Suresh

    1996-01-01

    Time-accurate simulation of turbulent flames in high Reynolds number flows is a challenging task since both fluid dynamics and combustion must be modeled accurately. To numerically simulate this phenomenon, very large computer resources (both time and memory) are required. Although current vector supercomputers are capable of providing adequate resources for simulations of this nature, their high cost and limited availability make practical use of such machines less than satisfactory. At the same time, the explicit time integration algorithms used in unsteady flow simulations often possess a very high degree of parallelism, making them very amenable to efficient implementation on large-scale parallel computers. Under these circumstances, distributed memory parallel computers offer an excellent near-term solution for greatly increased computational speed and memory, at a cost that may render the unsteady simulations of the type discussed above more feasible and affordable. This paper discusses the study of unsteady turbulent flames using a simulation algorithm that is capable of retaining high parallel efficiency on distributed memory parallel architectures. Numerical studies are carried out using large-eddy simulation (LES). In LES, the scales larger than the grid are computed using a time- and space-accurate scheme, while the unresolved small scales are modeled using eddy viscosity based subgrid models. This is acceptable for the moment/energy closure since the small scales primarily provide a dissipative mechanism for the energy transferred from the large scales. However, for combustion to occur, the species must first undergo mixing at the small scales and then come into molecular contact. Therefore, global models cannot be used. Recently, a new model for turbulent combustion was developed, in which the combustion is modeled within the subgrid (small scales) using a methodology that simulates the mixing, the molecular transport, and the chemical kinetics within each LES grid cell. Finite-rate kinetics can be included without any closure, and this approach actually provides a means to predict the turbulent rates and the turbulent flame speed. The subgrid combustion model requires resolution of the local time scales associated with small-scale mixing, molecular diffusion and chemical kinetics and, therefore, within each grid cell, a significant amount of computation must be carried out before the large-scale (LES resolved) effects are incorporated. Therefore, this approach is uniquely suited for parallel processing and has been implemented on various systems such as the Intel Paragon, IBM SP-2, Cray T3D and SGI Power Challenge (PC) using the system-independent Message Passing Interface (MPI) library. In this paper, timing data on these machines is reported along with some characteristic results.

  1. Instantaneous variance scaling of AIRS thermodynamic profiles using a circular area Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Dorrestijn, Jesse; Kahn, Brian H.; Teixeira, João; Irion, Fredrick W.

    2018-05-01

    Satellite observations are used to obtain vertical profiles of variance scaling of temperature (T) and specific humidity (q) in the atmosphere. A higher spatial resolution nadir retrieval at 13.5 km complements previous Atmospheric Infrared Sounder (AIRS) investigations with 45 km resolution retrievals and enables the derivation of power law scaling exponents to length scales as small as 55 km. We introduce a variable-sized circular-area Monte Carlo methodology to compute exponents instantaneously within the swath of AIRS that yields additional insight into scaling behavior. While this method is approximate and some biases are likely to exist within non-Gaussian portions of the satellite observational swaths of T and q, this method enables the estimation of scale-dependent behavior within instantaneous swaths for individual tropical and extratropical systems of interest. Scaling exponents are shown to fluctuate between β = -1 and -3 at scales ≥ 500 km, while at scales ≤ 500 km they are typically near β ≈ -2, with q slightly lower than T at the smallest scales observed. In the extratropics, the large-scale β is near -3. Within the tropics, however, the large-scale β for T is closer to -1 as small-scale moist convective processes dominate. In the tropics, q exhibits large-scale β between -2 and -3. The values of β are generally consistent with previous works of either time-averaged spatial variance estimates, or aircraft observations that require averaging over numerous flight observational segments. The instantaneous variance scaling methodology is relevant for cloud parameterization development and the assessment of time variability of scaling exponents.
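
    The meaning of the exponent β can be illustrated with a short, hedged sketch: a synthetic 1-D series is built with a prescribed power-law spectrum and β is recovered from a log-log fit. This is generic spectral slope estimation, not the paper's circular-area Monte Carlo method, and all sizes are assumptions.

      import numpy as np

      rng = np.random.default_rng(2)
      n, beta_true = 4096, -2.0
      k = np.fft.rfftfreq(n)[1:]                              # positive wavenumbers
      amplitude = k ** (beta_true / 2.0)                      # so that power ~ k**beta
      phases = np.exp(2j * np.pi * rng.random(k.size))
      spectrum = np.concatenate(([0.0 + 0.0j], amplitude * phases))
      series = np.fft.irfft(spectrum, n=n)                    # synthetic T or q transect

      power = np.abs(np.fft.rfft(series))[1:] ** 2
      slope, _ = np.polyfit(np.log(k[:-1]), np.log(power[:-1]), 1)   # drop the Nyquist bin
      print(f"recovered beta ~ {slope:.2f} (true {beta_true})")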

  2. Analysis of the ability of large-scale reanalysis data to define Siberian fire danger in preparation for future fire prediction

    NASA Astrophysics Data System (ADS)

    Soja, Amber; Westberg, David; Stackhouse, Paul, Jr.; McRae, Douglas; Jin, Ji-Zhong; Sukhinin, Anatoly

    2010-05-01

    Fire is the dominant disturbance that precipitates ecosystem change in boreal regions, and fire is largely under the control of weather and climate. Fire frequency, fire severity, area burned and fire season length are predicted to increase in boreal regions under current climate change scenarios. Therefore, changes in fire regimes have the potential to compel ecological change, moving ecosystems more quickly towards equilibrium with a new climate. The ultimate goal of this research is to assess the viability of large-scale (1°) data for defining fire weather danger and fire regimes, so that large-scale fire weather data, like that available from current Intergovernmental Panel on Climate Change (IPCC) climate change scenarios, can be confidently used to predict future fire regimes. In this talk, we intend to: (1) evaluate Fire Weather Indices (FWI) derived using reanalysis and interpolated station data; (2) discuss the advantages and disadvantages of using these distinct data sources; and (3) highlight established relationships between large-scale fire weather data, area burned, active fires and ecosystems burned. Specifically, the Canadian Forestry Service (CFS) Fire Weather Index (FWI) will be derived using: (1) NASA Goddard Earth Observing System version 4 (GEOS-4) large-scale reanalysis and NASA Global Precipitation Climatology Project (GPCP) data; and (2) National Climatic Data Center (NCDC) surface station-interpolated data. Requirements of the FWI are local noon surface-level air temperature, relative humidity, wind speed, and daily (noon-to-noon) rainfall. GEOS-4 reanalysis and NCDC station-interpolated fire weather indices are generally consistent spatially, temporally and quantitatively. Additionally, increased fire activity coincides with increased FWI ratings in both data products. Relationships have been established between the large-scale FWI and area burned, fire frequency, and ecosystem types, and these can be used to estimate historic and future fire regimes.

  3. Model-independent test for scale-dependent non-Gaussianities in the cosmic microwave background.

    PubMed

    Räth, C; Morfill, G E; Rossmanith, G; Banday, A J; Górski, K M

    2009-04-03

    We present a model-independent method to test for scale-dependent non-Gaussianities in combination with scaling indices as test statistics. To this end, surrogate data sets are generated in which the power spectrum of the original data is preserved, while the higher order correlations are partly randomized by applying a scale-dependent shuffling procedure to the Fourier phases. We apply this method to the Wilkinson Microwave Anisotropy Probe data of the cosmic microwave background and find signatures of non-Gaussianities on large scales. Further tests are required to elucidate the origin of the detected anomalies.
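
    The surrogate construction can be sketched in a few lines, with the caveat that the paper shuffles phases only within selected scale bands and works on sky maps; the 1-D version below randomizes all non-zero phases while keeping the power spectrum fixed, which is the core of the technique. The toy data series is an assumption.

      import numpy as np

      def phase_randomized_surrogate(field, rng):
          """Surrogate with the same power spectrum but randomized Fourier phases."""
          f = np.fft.rfft(field)
          random_phases = np.exp(2j * np.pi * rng.random(f.shape))
          random_phases[0] = 1.0          # keep the mean (k = 0) untouched
          random_phases[-1] = 1.0         # keep the Nyquist bin real for even-length input
          return np.fft.irfft(np.abs(f) * random_phases, n=field.size)

      rng = np.random.default_rng(3)
      data = np.cumsum(rng.normal(size=2048))          # toy 1-D stand-in for a map
      surrogate = phase_randomized_surrogate(data, rng)
      # Power spectra of data and surrogate agree; any non-Gaussian signature would
      # show up as a difference under a higher-order statistic (e.g., scaling indices).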

  4. Analysis on the Critical Rainfall Value For Predicting Large Scale Landslides Caused by Heavy Rainfall In Taiwan.

    NASA Astrophysics Data System (ADS)

    Tsai, Kuang-Jung; Chiang, Jie-Lun; Lee, Ming-Hsi; Chen, Yie-Ruey

    2017-04-01

    An accumulated rainfall of more than 2,900 mm was recorded within three consecutive days during typhoon Morakot in August 2009. Very serious landslides and sediment-related disasters were induced by this heavy rainfall event. The satellite image analysis project conducted by the Soil and Water Conservation Bureau after the Morakot event identified more than 10,904 landslide sites with a total sliding area of 18,113 ha. At the same time, all severe sediment-related disaster areas were characterized by disaster type, scale, topography, major bedrock formations and geologic structures during the period of extremely heavy rainfall events in southern Taiwan. Characteristics and mechanisms of large-scale landslides are collected on the basis of field investigation technology integrated with GPS/GIS/RS techniques. In order to decrease the risk of large-scale landslides on slope land, a strategy for slope-land conservation and a critical rainfall database should be established and implemented as soon as possible. Meanwhile, establishing a critical rainfall value for predicting large-scale landslides induced by heavy rainfall has become an important issue of serious concern to the government and the people of Taiwan. The mechanism of large-scale landslides, rainfall frequency analysis, sediment budget estimation and river hydraulic analysis under the condition of extreme climate change during the past 10 years are addressed by this research. Hopefully, the results developed from this research can be used as a warning system for predicting large-scale landslides in southern Taiwan. Keywords: heavy rainfall, large-scale landslides, critical rainfall value.

  5. Evaluation of LANDSAT multispectral scanner images for mapping altered rocks in the east Tintic Mountains, Utah

    NASA Technical Reports Server (NTRS)

    Rowan, L. C.; Abrams, M. J. (Principal Investigator)

    1979-01-01

    The author has identified the following significant results. Positive findings of earlier evaluations of the color-ratio compositing (CRC) technique for mapping limonitic altered rocks in south-central Nevada are confirmed, but important limitations in the approach used are pointed out. These limitations arise from environmental, geologic, and image processing factors. The greater vegetation density in the East Tintic Mountains required several modifications in procedures to improve the overall mapping accuracy of the CRC approach. Large-format ratio images provide better internal registration of the diazo films and avoid the problems associated with the magnifications required in the original procedure. Use of the Linoscan 204 color recognition scanner permits accurate, consistent extraction of the green pixels representing limonitic bedrock, yielding maps that can be used for mapping at large scales as well as for small-scale reconnaissance.

  6. The importance of landscape diversity for carbon fluxes at the landscape level: small-scale heterogeneity matters

    Treesearch

    Katrin Premke; Katrin Attermeyer; Jurgen Augustin; Alvaro Cabezas; Peter Casper; Detlef Deumlich; Jorg Gelbrecht; Horst H. Gerke; Arthur Gessler; Hans-Peter Grossart; Sabine Hilt; Michael Hupfer; Thomas Kalettka; Zachary Kayler; Gunnar Lischeid; Michael Sommer; Dominik Zak

    2016-01-01

    Landscapes can be viewed as spatially heterogeneous areas encompassing terrestrial and aquatic domains. To date, most landscape carbon (C) fluxes have been estimated by accounting for terrestrial ecosystems, while aquatic ecosystems have been largely neglected. However, a robust assessment of C fluxes on the landscape scale requires the estimation of fluxes within and...

  7. Gene Expression Analysis: Teaching Students to Do 30,000 Experiments at Once with Microarray

    ERIC Educational Resources Information Center

    Carvalho, Felicia I.; Johns, Christopher; Gillespie, Marc E.

    2012-01-01

    Genome scale experiments routinely produce large data sets that require computational analysis, yet there are few student-based labs that illustrate the design and execution of these experiments. In order for students to understand and participate in the genomic world, teaching labs must be available where students generate and analyze large data…

  8. Multiresource inventories incorporating GIS, GPS, and database management systems

    Treesearch

    Loukas G. Arvanitis; Balaji Ramachandran; Daniel P. Brackett; Hesham Abd-El Rasol; Xuesong Du

    2000-01-01

    Large-scale natural resource inventories generate enormous data sets. Their effective handling requires a sophisticated database management system. Such a system must be robust enough to efficiently store large amounts of data and flexible enough to allow users to manipulate a wide variety of information. In a pilot project, related to a multiresource inventory of the...

  9. Sensitivity of tree ring growth to local and large-scale climate variability in a region of Southeastern Brazil

    NASA Astrophysics Data System (ADS)

    Venegas-González, Alejandro; Chagas, Matheus Peres; Anholetto Júnior, Claudio Roberto; Alvares, Clayton Alcarde; Roig, Fidel Alejandro; Tomazello Filho, Mario

    2016-01-01

    We explored the relationship between tree growth in two tropical species and local and large-scale climate variability in Southeastern Brazil. Tree ring width chronologies of Tectona grandis (teak) and Pinus caribaea (Caribbean pine) trees were compared with local (Water Requirement Satisfaction Index—WRSI, Standardized Precipitation Index—SPI, and Palmer Drought Severity Index—PDSI) and large-scale climate indices that analyze the equatorial pacific sea surface temperature (Trans-Niño Index-TNI and Niño-3.4-N3.4) and atmospheric circulation variations in the Southern Hemisphere (Antarctic Oscillation-AAO). Teak trees showed positive correlation with three indices in the current summer and fall. A significant correlation between WRSI index and Caribbean pine was observed in the dry season preceding tree ring formation. The influence of large-scale climate patterns was observed only for TNI and AAO, where there was a radial growth reduction in months preceding the growing season with positive values of the TNI in teak trees and radial growth increase (decrease) during December (March) to February (May) of the previous (current) growing season with positive phase of the AAO in teak (Caribbean pine) trees. The development of a new dendroclimatological study in Southeastern Brazil sheds light to local and large-scale climate influence on tree growth in recent decades, contributing in future climate change studies.

  10. Large-scale impacts of herbivores on the structural diversity of African savannas

    PubMed Central

    Asner, Gregory P.; Levick, Shaun R.; Kennedy-Bowdoin, Ty; Knapp, David E.; Emerson, Ruth; Jacobson, James; Colgan, Matthew S.; Martin, Roberta E.

    2009-01-01

    African savannas are undergoing management intensification, and decision makers are increasingly challenged to balance the needs of large herbivore populations with the maintenance of vegetation and ecosystem diversity. Ensuring the sustainability of Africa's natural protected areas requires information on the efficacy of management decisions at large spatial scales, but often neither experimental treatments nor large-scale responses are available for analysis. Using a new airborne remote sensing system, we mapped the three-dimensional (3-D) structure of vegetation at a spatial resolution of 56 cm throughout 1640 ha of savanna after 6-, 22-, 35-, and 41-year exclusions of herbivores, as well as in unprotected areas, across Kruger National Park in South Africa. Areas in which herbivores were excluded over the short term (6 years) contained 38%–80% less bare ground compared with those that were exposed to mammalian herbivory. In the longer-term (> 22 years), the 3-D structure of woody vegetation differed significantly between protected and accessible landscapes, with up to 11-fold greater woody canopy cover in the areas without herbivores. Our maps revealed 2 scales of ecosystem response to herbivore consumption, one broadly mediated by geologic substrate and the other mediated by hillslope-scale variation in soil nutrient availability and moisture conditions. Our results are the first to quantitatively illustrate the extent to which herbivores can affect the 3-D structural diversity of vegetation across large savanna landscapes. PMID:19258457

  11. Raccoon spatial requirements and multi-scale habitat selection within an intensively managed central Appalachian forest

    USGS Publications Warehouse

    Owen, Sheldon F.; Berl, Jacob L.; Edwards, John W.; Ford, W. Mark; Wood, Petra Bohall

    2015-01-01

    We studied a raccoon (Procyon lotor) population within a managed central Appalachian hardwood forest in West Virginia to investigate the effects of intensive forest management on raccoon spatial requirements and habitat selection. Raccoon home-range (95% utilization distribution) and core-area (50% utilization distribution) size differed between sexes with males maintaining larger (2×) home ranges and core areas than females. Home-range and core-area size did not differ between seasons for either sex. We used compositional analysis to quantify raccoon selection of six different habitat types at multiple spatial scales. Raccoons selected riparian corridors (riparian management zones [RMZ]) and intact forests (> 70 y old) at the core-area spatial scale. RMZs likely were used by raccoons because they provided abundant denning resources (i.e., large-diameter trees) as well as access to water. Habitat composition associated with raccoon foraging locations indicated selection for intact forests, riparian areas, and regenerating harvest (stands <10 y old). Although raccoons were able to utilize multiple habitat types for foraging resources, a selection of intact forest and RMZs at multiple spatial scales indicates the need of mature forest (with large-diameter trees) for this species in managed forests in the central Appalachians.

  12. BlazeDEM3D-GPU A Large Scale DEM simulation code for GPUs

    NASA Astrophysics Data System (ADS)

    Govender, Nicolin; Wilke, Daniel; Pizette, Patrick; Khinast, Johannes

    2017-06-01

    Accurately predicting the dynamics of particulate materials is of importance to numerous scientific and industrial areas, with applications ranging across particle scales from powder flow to ore crushing. Computational discrete element simulations are a viable option to aid in the understanding of particulate dynamics and the design of devices such as mixers, silos and ball mills, as laboratory-scale tests come at a significant cost. However, the computational time required to simulate an industrial-scale system consisting of tens of millions of particles can be months on large CPU clusters, making the Discrete Element Method (DEM) unfeasible for industrial applications. Simulations are therefore typically restricted to tens of thousands of particles with highly detailed particle shapes, or a few million particles with often oversimplified particle shapes. However, a number of applications require an accurate representation of the particle shape to capture the macroscopic behaviour of the particulate system. In this paper we give an overview of the recent extensions to the open source GPU-based DEM code, BlazeDEM3D-GPU, which can simulate millions of polyhedra and tens of millions of spheres on a desktop computer with a single or multiple GPUs.

  13. Multi-agent based control of large-scale complex systems employing distributed dynamic inference engine

    NASA Astrophysics Data System (ADS)

    Zhang, Daili

    Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent to agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs) as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decomposes a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, it balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. 
However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system design of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environment, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.

  14. Detectability of large-scale power suppression in the galaxy distribution

    NASA Astrophysics Data System (ADS)

    Gibelyou, Cameron; Huterer, Dragan; Fang, Wenjuan

    2010-12-01

    Suppression in primordial power on the Universe's largest observable scales has been invoked as a possible explanation for large-angle observations in the cosmic microwave background, and is allowed or predicted by some inflationary models. Here we investigate the extent to which such a suppression could be confirmed by the upcoming large-volume redshift surveys. For definiteness, we study a simple parametric model of suppression that improves the fit of the vanilla ΛCDM model to the angular correlation function measured by WMAP in cut-sky maps, and at the same time improves the fit to the angular power spectrum inferred from the maximum likelihood analysis presented by the WMAP team. We find that the missing power at large scales, favored by WMAP observations within the context of this model, will be difficult but not impossible to rule out with a large-volume (~100 Gpc³) galaxy redshift survey. A key requirement for success in ruling out power suppression will be having redshifts of most galaxies detected in the imaging survey.

  15. Music in the moment? Revisiting the effect of large scale structures.

    PubMed

    Lalitte, P; Bigand, E

    2006-12-01

    The psychological relevance of large-scale musical structures has been a matter of debate in the music community. This issue was investigated with a method that allows assessing listeners' detection of musical incoherencies in normal and scrambled versions of popular and contemporary music pieces. Musical excerpts were segmented into 28 or 29 chunks. In the scrambled version, the temporal order of these chunks was altered with the constraint that the transitions between two chunks never created local acoustical and musical disruptions. Participants were required (1) to detect on-line incoherent linking of chunks, (2) to rate aesthetic quality of pieces, and (3) to evaluate their overall coherence. The findings indicate a moderate sensitivity to large-scale musical structures for popular and contemporary music in both musically trained and untrained listeners. These data are discussed in light of current models of music cognition.

  16. Successful scaling-up of self-sustained pyrolysis of oil palm biomass under pool-type reactor.

    PubMed

    Idris, Juferi; Shirai, Yoshihito; Andou, Yoshito; Mohd Ali, Ahmad Amiruddin; Othman, Mohd Ridzuan; Ibrahim, Izzudin; Yamamoto, Akio; Yasuda, Nobuhiko; Hassan, Mohd Ali

    2016-02-01

    An appropriate technology for waste utilisation, especially for the large amount of abundant pressed-shredded oil palm empty fruit bunch (OFEFB), is important for the oil palm industry. Self-sustained pyrolysis, whereby the oil palm biomass is combusted by itself to provide the heat for pyrolysis without an electrical heater, is preferable owing to its simplicity, ease of operation and low energy requirement. In this study, biochar production under self-sustained pyrolysis of oil palm biomass in the form of oil palm empty fruit bunch was tested in a 3-t large-scale pool-type reactor. During the pyrolysis process, the biomass was loaded layer by layer when smoke appeared on the top, to minimise the entrance of oxygen. This method significantly increased the yield of biochar. In our previous report, we tested a 30-kg pilot-scale capacity under self-sustained pyrolysis and found that the higher heating value (HHV) obtained was 22.6-24.7 MJ kg(-1) with a 23.5%-25.0% yield. In this scaled-up study, the 3-t large-scale procedure produced an HHV of 22.0-24.3 MJ kg(-1) with a 30%-34% yield on a wet-weight basis. The maximum self-sustained pyrolysis temperature for the large-scale procedure can reach between 600 °C and 700 °C. We conclude that large-scale biochar production under self-sustained pyrolysis was successful, as the biochar produced was comparable with that of the medium-scale procedure and of other studies using an electrical heating element, making it an appropriate technology for waste utilisation, particularly for the oil palm industry. © The Author(s) 2015.

  17. Scalable subsurface inverse modeling of huge data sets with an application to tracer concentration breakthrough data from magnetic resonance imaging

    DOE PAGES

    Lee, Jonghyun; Yoon, Hongkyu; Kitanidis, Peter K.; ...

    2016-06-09

    Characterizing subsurface properties is crucial for reliable and cost-effective groundwater supply management and contaminant remediation. With recent advances in sensor technology, large volumes of hydro-geophysical and geochemical data can be obtained to achieve high-resolution images of subsurface properties. However, characterization with such a large amount of information requires prohibitive computational costs associated with “big data” processing and numerous large-scale numerical simulations. To tackle such difficulties, the Principal Component Geostatistical Approach (PCGA) has been proposed as a “Jacobian-free” inversion method that requires far fewer forward simulation runs for each iteration than the number of unknown parameters and measurements needed in traditional inversion methods. PCGA can be conveniently linked to any multi-physics simulation software with independent parallel executions. In this paper, we extend PCGA to handle a large number of measurements (e.g., 10^6 or more) by constructing a fast preconditioner whose computational cost scales linearly with the data size. For illustration, we characterize the heterogeneous hydraulic conductivity (K) distribution in a laboratory-scale 3-D sand box using about 6 million transient tracer concentration measurements obtained using magnetic resonance imaging. Since each individual observation carries little information on the K distribution, the data were compressed by the zero-th temporal moment of breakthrough curves, which is equivalent to the mean travel time under the experimental setting. Moreover, only about 2,000 forward simulations in total were required to obtain the best estimate with corresponding estimation uncertainty, and the estimated K field captured key patterns of the original packing design, showing the efficiency and effectiveness of the proposed method. This article is protected by copyright. All rights reserved.
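
    A minimal sketch of the data-compression step described above: each voxel's tracer breakthrough curve is collapsed into temporal moments with a simple quadrature, so millions of time series become one number each. The array shapes and the synthetic Gaussian curves are assumptions for illustration.

      import numpy as np

      t = np.linspace(0.0, 100.0, 201)                       # time axis (arbitrary units)
      dt = t[1] - t[0]
      # Synthetic breakthrough curves for three voxels, peaking at different travel times
      centers = np.array([[20.0], [35.0], [60.0]])
      conc = np.exp(-0.5 * ((t[None, :] - centers) / 5.0) ** 2)

      m0 = conc.sum(axis=1) * dt                             # zero-th temporal moment
      m1 = (conc * t[None, :]).sum(axis=1) * dt              # first temporal moment
      mean_travel_time = m1 / m0                             # one number per voxel
      print(mean_travel_time)                                # approximately [20, 35, 60]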

  19. Fast Updating National Geo-Spatial Databases with High Resolution Imagery: China's Methodology and Experience

    NASA Astrophysics Data System (ADS)

    Chen, J.; Wang, D.; Zhao, R. L.; Zhang, H.; Liao, A.; Jiu, J.

    2014-04-01

    Geospatial databases are an irreplaceable national treasure of immense importance. Their up-to-dateness, that is, their consistency with respect to the real world, plays a critical role in their value and applications. The continuous updating of map databases at 1:50,000 scale is a massive and difficult task for large countries covering more than several million square kilometres. This paper presents the research and technological development supporting national map updating at 1:50,000 scale in China, including the development of updating models and methods, production tools and systems for large-scale and rapid updating, as well as the design and implementation of the continuous updating workflow. The use of many data sources, and the integration of these data to form a high-accuracy, quality-checked product, was required. This in turn required up-to-date techniques of image matching, semantic integration, generalization, database management and conflict resolution. Specific software tools and packages were designed and developed to support large-scale updating production with high-resolution imagery and large-scale data generalization, such as map generalization, GIS-supported change interpretation from imagery, DEM interpolation, image-matching-based orthophoto generation, and data control at different levels. A national 1:50,000 database updating strategy and its production workflow were designed, including a full-coverage updating pattern characterized by all-element topographic data modeling, change detection in all related areas, and whole-process data quality control, a series of technical production specifications, and a network of updating production units in different geographic locations across the country.

  20. The Saskatchewan River Basin - a large scale observatory for water security research (Invited)

    NASA Astrophysics Data System (ADS)

    Wheater, H. S.

    2013-12-01

    The 336,000 km2 Saskatchewan River Basin (SaskRB) in Western Canada illustrates many of the issues of Water Security faced world-wide. It poses globally-important science challenges due to the diversity in its hydro-climate and ecological zones. With one of the world's more extreme climates, it embodies environments of global significance, including the Rocky Mountains (source of the major rivers in Western Canada), the Boreal Forest (representing 30% of Canada's land area) and the Prairies (home to 80% of Canada's agriculture). Management concerns include: provision of water resources to more than three million inhabitants, including indigenous communities; balancing competing needs for water between different uses, such as urban centres, industry, agriculture, hydropower and environmental flows; issues of water allocation between upstream and downstream users in the three prairie provinces; managing the risks of flood and droughts; and assessing water quality impacts of discharges from major cities and intensive agricultural production. Superimposed on these issues is the need to understand and manage uncertain water futures, including effects of economic growth and environmental change, in a highly fragmented water governance environment. Key science questions focus on understanding and predicting the effects of land and water management and environmental change on water quantity and quality. To address the science challenges, observational data are necessary across multiple scales. This requires focussed research at intensively monitored sites and small watersheds to improve process understanding and fine-scale models. To understand large-scale effects on river flows and quality, land-atmosphere feedbacks, and regional climate, integrated monitoring, modelling and analysis is needed at large basin scale. And to support water management, new tools are needed for operational management and scenario-based planning that can be implemented across multiple scales and multiple jurisdictions. The SaskRB has therefore been developed as a large scale observatory, now a Regional Hydroclimate Project of the World Climate Research Programme's GEWEX project, and is available to contribute to the emerging North American Water Program. State-of-the-art hydro-ecological experimental sites have been developed for the key biomes, and a river and lake biogeochemical research facility, focussed on impacts of nutrients and exotic chemicals. Data are integrated at SaskRB scale to support the development of improved large scale climate and hydrological modelling products, the development of DSS systems for local, provincial and basin-scale management, and the development of related social science research, engaging stakeholders in the research and exploring their values and priorities for water security. The observatory provides multiple scales of observation and modelling required to develop: a) new climate, hydrological and ecological science and modelling tools to address environmental change in key environments, and their integrated effects and feedbacks at large catchment scale, b) new tools needed to support river basin management under uncertainty, including anthropogenic controls on land and water management and c) the place-based focus for the development of new transdisciplinary science.

  1. Radiometer requirements for Earth-observation systems using large space antennas

    NASA Technical Reports Server (NTRS)

    Keafer, L. S., Jr.; Harrington, R. F.

    1983-01-01

    Requirements are defined for Earth observation microwave radiometry for the decade of the 1990's by using large space antenna (LSA) systems with apertures in the range from 50 to 200 m. General Earth observation needs, specific measurement requirements, orbit mission guidelines and constraints, and general radiometer requirements are defined. General Earth observation needs are derived from NASA's basic space science program. Specific measurands include soil moisture, sea surface temperature, salinity, water roughness, ice boundaries, and water pollutants. Measurements are required with spatial resolution from 10 to 1 km and with temporal resolution from 3 days to 1 day. The primary orbit altitude and inclination ranges are 450 to 2200 km and 60 to 98 deg, respectively. Contiguous large-scale coverage of several land and ocean areas over the globe dictates large (several hundred kilometers) swaths. Radiometer measurements are made in the frequency range from 1 to 37 GHz, preferably with dual-polarization radiometers with a minimum of 90 percent beam efficiency. Reflector surface root-mean-square deviation tolerances range from 1/30 to 1/100 of a wavelength.

  2. Resolving Dynamic Properties of Polymers through Coarse-Grained Computational Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Salerno, K. Michael; Agrawal, Anupriya; Perahia, Dvora

    2016-02-05

    Coupled length and time scales determine the dynamic behavior of polymers and underlie their unique viscoelastic properties. To resolve the long-time dynamics it is imperative to determine which time and length scales must be correctly modeled. In this paper, we probe the degree of coarse graining required to simultaneously retain significant atomistic details and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using linear polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics. Iterative Boltzmann inversion is used to derive coarse-grained potentials with 2–6 methylene groups per coarse-grained bead from a fully atomistic melt simulation. We show that atomistic detail is critical to capturing large-scale dynamics. Finally, using these models we simulate polyethylene melts for times over 500 μs to study the viscoelastic properties of well-entangled polymer melts.
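
    For readers unfamiliar with the coarse-graining procedure named above, the core of iterative Boltzmann inversion is a simple potential-update rule; the sketch below shows that rule in reduced units with placeholder radial distribution functions (none of the numbers come from the paper).

```python
import numpy as np

KBT = 1.0  # thermal energy in reduced units (illustrative)

def ibi_update(U, g_current, g_target, alpha=0.5, eps=1e-12):
    """One iterative Boltzmann inversion step: adjust the coarse-grained pair
    potential so the simulated RDF g_current moves toward the target RDF."""
    return U + alpha * KBT * np.log((g_current + eps) / (g_target + eps))

# Placeholder radial grid and RDFs (stand-ins for atomistic and CG simulation output)
r = np.linspace(0.3, 1.5, 120)
g_target = 1.0 + 0.30 * np.exp(-((r - 0.50) / 0.08) ** 2)
g_current = 1.0 + 0.20 * np.exp(-((r - 0.55) / 0.10) ** 2)

U = -KBT * np.log(g_target)   # common initial guess: the potential of mean force
U = ibi_update(U, g_current, g_target)
print("updated potential at smallest separation:", U[0])
```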

  3. High-Resiliency and Auto-Scaling of Large-Scale Cloud Computing for OCO-2 L2 Full Physics Processing

    NASA Astrophysics Data System (ADS)

    Hua, H.; Manipon, G.; Starch, M.; Dang, L. B.; Southam, P.; Wilson, B. D.; Avis, C.; Chang, A.; Cheng, C.; Smyth, M.; McDuffie, J. L.; Ramirez, P.

    2015-12-01

    Next generation science data systems are needed to address the incoming flood of data from new missions such as SWOT and NISAR, where data volumes and data throughput rates are an order of magnitude larger than in present-day missions. Additionally, traditional means of procuring hardware on-premise are already limited due to facilities capacity constraints for these new missions. Existing missions, such as OCO-2, may also require fast turn-around for processing different science scenarios, where on-premise and even traditional HPC computing environments may not meet the high processing needs. We present our experiences deploying a hybrid-cloud computing science data system (HySDS) for the OCO-2 Science Computing Facility to support large-scale processing of their Level-2 full physics data products. We explore optimization approaches for getting the best performance out of hybrid-cloud computing as well as common issues that arise when dealing with large-scale computing. Novel approaches were utilized to do processing on Amazon's spot market, which can potentially offer ~10X cost savings but with an unpredictable computing environment driven by market forces. We will present how we enabled highly fault-tolerant computing in order to achieve large-scale computing as well as operational cost savings.

  4. Towards Large-area Field-scale Operational Evapotranspiration for Water Use Mapping

    NASA Astrophysics Data System (ADS)

    Senay, G. B.; Friedrichs, M.; Morton, C.; Huntington, J. L.; Verdin, J.

    2017-12-01

    Field-scale evapotranspiration (ET) estimates are needed for improving surface and groundwater use and water budget studies. Ideally, field-scale ET estimates would be available at regional to national levels and cover long time periods. As a result of the large data storage and computational requirements associated with processing field-scale satellite imagery such as Landsat, numerous challenges remain to develop operational ET estimates over large areas for detailed water use and availability studies. However, the combination of new science, data availability, and cloud computing technology is enabling unprecedented capabilities for ET mapping. To demonstrate this capability, we used Google's Earth Engine cloud computing platform to create nationwide annual ET estimates with 30-meter resolution Landsat imagery (~16,000 images) and gridded weather data using the Operational Simplified Surface Energy Balance (SSEBop) model in support of the National Water Census, a USGS research program designed to build decision support capacity for water management agencies and other natural resource managers. By leveraging Google's Earth Engine Application Programming Interface (API) and developing software in a collaborative, open-platform environment, we rapidly advance from research towards applications for large-area field-scale ET mapping. Cloud computing of the Landsat image archive, combined with other satellite, climate, and weather data, is creating unprecedented opportunities for assessing ET model behavior and uncertainty, and ultimately provides the ability for more robust operational monitoring and assessment of water use at field scales.

  5. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  6. Large-scale deformation associated with ridge subduction

    USGS Publications Warehouse

    Geist, E.L.; Fisher, M.A.; Scholl, D. W.

    1993-01-01

    Continuum models are used to investigate the large-scale deformation associated with the subduction of aseismic ridges. Formulated in the horizontal plane using thin viscous sheet theory, these models measure the horizontal transmission of stress through the arc lithosphere accompanying ridge subduction. Modelling was used to compare the Tonga arc and Louisville ridge collision with the New Hebrides arc and d'Entrecasteaux ridge collision, which have disparate arc-ridge intersection speeds but otherwise similar characteristics. Models of both systems indicate that diffuse deformation (low values of the effective stress-strain exponent n) is required to explain the observed deformation. -from Authors

  7. Microwave evidence for large-scale changes associated with a filament eruption

    NASA Technical Reports Server (NTRS)

    Kundu, M. R.; Schmahl, E. J.; Fu, Q.-J.

    1989-01-01

    VLA observations at 6 and 20 cm wavelengths taken on August 3, 1985 are presented, showing an eruptive filament event in which microwave emission originated in two widely separated regions during the disintegration of the filament. The amount of heat required for the enhancement is estimated. Near-simultaneous changes in intensity and polarization were observed in the western components of the northern and southern regions. It is suggested that large-scale magnetic interconnections permitted the two regions to respond similarly to an external energy or mass source involved in the disruption of the filament.

  8. Combined heat and power supply using Carnot engines

    NASA Astrophysics Data System (ADS)

    Horlock, J. H.

    The Marshall Report on the thermodynamic and economic feasibility of introducing large scale combined heat and electrical power generation (CHP) into the United Kingdom is summarized. Combinations of reversible power plant (Carnot engines) to meet a given demand of power and heat production are analyzed. The Marshall Report states that fairly large scale CHP plants are an attractive energy saving option for areas of high heat load densities. Analysis shows that for given requirements, the total heat supply and utilization factor are functions of heat output, reservoir supply temperature, temperature of heat rejected to the reservoir, and an intermediate temperature for district heating.

  9. Friction-Stir Welding of Large Scale Cryogenic Fuel Tanks for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Jones, Clyde S., III; Venable, Richard A.

    1998-01-01

    The Marshall Space Flight Center has established a facility for the joining of large-scale aluminum-lithium alloy 2195 cryogenic fuel tanks using the friction-stir welding process. Longitudinal welds, approximately five meters in length, were made possible by retrofitting an existing vertical fusion weld system, designed to fabricate tank barrel sections ranging from two to ten meters in diameter. The structural design requirements of the tooling, clamping and the spindle travel system will be described in this paper. Process controls and real-time data acquisition will also be described, and were critical elements contributing to successful weld operation.

  10. Beyond Solar-B: MTRAP, the Magnetic Transition Region Probe

    NASA Technical Reports Server (NTRS)

    Davis, John M.; Moore, Ronald L.; Hathaway, David H.

    2003-01-01

    The next generation of solar missions will reveal and measure fine-scale solar magnetic fields and their effects in the solar atmosphere at heights, small scales, sensitivities, and fields of view well beyond the reach of Solar-B. The necessity for, and potential of, such observations for understanding solar magnetic fields, their generation in and below the photosphere, and their control of the solar atmosphere and heliosphere, were the focus of a science definition workshop, 'High-Resolution Solar Magnetography from Space: Beyond Solar-B,' held in Huntsville Alabama in April 2001. Forty internationally prominent scientists active in solar research involving fine-scale solar magnetism participated in this Workshop and reached consensus that the key science objective to be pursued beyond Solar-B is a physical understanding of the fine-scale magnetic structure and activity in the magnetic transition region, defined as the region between the photosphere and corona where neither the plasma nor the magnetic field strongly dominates the other. The observational objective requires high cadence (less than 10s) vector magnetic field maps, and spatially resolved spectra from the IR, visible, vacuum UV, to the EUV at high resolution (less than 50km) over a large FOV (approximately 140,000 km). A polarimetric resolution of one part in ten thousand is required to measure transverse magnetic fields of less than 30G. The latest SEC Roadmap includes a mission identified as MTRAP to meet these requirements. Enabling technology development requirements include large, lightweight, reflecting optics, large format sensors (16K x 16K pixels) with high QE at 150 nm, and extendable spacecraft structures. The Science Organizing Committee of the Beyond Solar-B Workshop recommends that: (1) Science and Technology Definition Teams should be established in FY04 to finalize the science requirements and to define technology development efforts needed to ensure the practicality of MTRAP's observational goals; (2) The necessary technology development funding should be included in Code S budgets for FY06 and beyond to prepare MTRAP for a new start no later than the nominal end of the Solar-B mission, around 2010.

  11. Beyond Solar-B: MTRAP, the Magnetic TRAnsition Region Probe

    NASA Astrophysics Data System (ADS)

    Davis, J. M.; Moore, R. L.; Hathaway, D. H.; Science Definition CommitteeHigh-Resolution Solar Magnetography Beyond Solar-B Team

    2003-05-01

    The next generation of solar missions will reveal and measure fine-scale solar magnetic fields and their effects in the solar atmosphere at heights, small scales, sensitivities, and fields of view well beyond the reach of Solar-B. The necessity for, and potential of, such observations for understanding solar magnetic fields, their generation in and below the photosphere, and their control of the solar atmosphere and heliosphere, were the focus of a science definition workshop, "High-Resolution Solar Magnetography from Space: Beyond Solar-B," held in Huntsville Alabama in April 2001. Forty internationally prominent scientists active in solar research involving fine-scale solar magnetism participated in this Workshop and reached consensus that the key science objective to be pursued beyond Solar-B is a physical understanding of the fine-scale magnetic structure and activity in the magnetic transition region, defined as the region between the photosphere and corona where neither the plasma nor the magnetic field strongly dominates the other. The observational objective requires high cadence (< 10s) vector magnetic field maps, and spatially resolved spectra from the IR, visible, vacuum UV, to the EUV at high resolution (< 50km) over a large FOV (~140,000 km). A polarimetric resolution of one part in ten thousand is required to measure transverse magnetic fields of < 30G. The latest SEC Roadmap includes a mission identified as MTRAP to meet these requirements. Enabling technology development requirements include large, lightweight, reflecting optics, large format sensors (16K x 16K pixels) with high QE at 150 nm, and extendable spacecraft structures. The Science Organizing Committee of the Beyond Solar-B Workshop recommends that: 1. Science and Technology Definition Teams should be established in FY04 to finalize the science requirements and to define technology development efforts needed to ensure the practicality of MTRAP's observational goals. 2. The necessary technology development funding should be included in Code S budgets for FY06 and beyond to prepare MTRAP for a new start no later than the nominal end of the Solar-B mission, around 2010.

  12. Written Parental Consent and the Use of Incentives in a Youth Smoking Prevention Trial: A Case Study from Project SPLASH

    ERIC Educational Resources Information Center

    Leakey, Tricia; Lunde, Kevin B.; Koga, Karin; Glanz, Karen

    2004-01-01

    More Institutional Review Boards (IRBs) are requiring written parental consent in school health intervention trials. Because this requirement presents a formidable challenge in conducting large-scale research, it is vital for investigators to share effective strategies learned from completed trials. Investigators for the recently completed Project…

  13. Solar Power in Space?

    DTIC Science & Technology

    2012-01-01

    ...orbit stupendously large orbital power plants—kilometers across—which collect the sun’s raw energy and beam it down to where it is needed on the earth... 24-hour, large-scale power to the urban centers where the majority of humanity lives. A network of thousands of solar-power satellites (SPS) could... provide all the power required for an Earth-based population as large as 10 billion people, even for a fully developed “first world” lifestyle...

  14. Management applications of discontinuity theory

    EPA Science Inventory

    1. Human impacts on the environment are multifaceted and can occur across distinct spatiotemporal scales. Ecological responses to environmental change are therefore difficult to predict, and entail large degrees of uncertainty. Such uncertainty requires robust tools for management...

  15. Coincident scales of forest feedback on climate and conservation in a diversity hot spot

    PubMed Central

    Webb, Thomas J; Gaston, Kevin J; Hannah, Lee; Ian Woodward, F

    2005-01-01

    The dynamic relationship between vegetation and climate is now widely acknowledged. Climate influences the distribution of vegetation; and through a number of feedback mechanisms vegetation affects climate. This implies that land-use changes such as deforestation will have climatic consequences. However, the spatial scales at which such feedbacks occur remain largely unknown. Here, we use a large database of precipitation and tree cover records for an area of the biodiversity-rich Atlantic forest region in south eastern Brazil to investigate the forest–rainfall feedback at a range of spatial scales from ca 10¹–10⁴ km². We show that the strength of the feedback increases up to scales of at least 10³ km², with the climate at a particular locality influenced by the pattern of landcover extending over a large area. Thus, smaller forest fragments, even if well protected, may suffer degradation due to the climate responding to land-use change in the surrounding area. Atlantic forest vertebrate taxa also require large areas of forest to support viable populations. Areas of forest of ca 10³ km² would be large enough to support such populations at the same time as minimizing the risk of climatic feedbacks resulting from deforestation. PMID:16608697

  16. How institutions shaped the last major evolutionary transition to large-scale human societies

    PubMed Central

    Powers, Simon T.; van Schaik, Carel P.; Lehmann, Laurent

    2016-01-01

    What drove the transition from small-scale human societies centred on kinship and personal exchange, to large-scale societies comprising cooperation and division of labour among untold numbers of unrelated individuals? We propose that the unique human capacity to negotiate institutional rules that coordinate social actions was a key driver of this transition. By creating institutions, humans have been able to move from the default ‘Hobbesian’ rules of the ‘game of life’, determined by physical/environmental constraints, into self-created rules of social organization where cooperation can be individually advantageous even in large groups of unrelated individuals. Examples include rules of food sharing in hunter–gatherers, rules for the usage of irrigation systems in agriculturalists, property rights and systems for sharing reputation between mediaeval traders. Successful institutions create rules of interaction that are self-enforcing, providing direct benefits both to individuals that follow them, and to individuals that sanction rule breakers. Forming institutions requires shared intentionality, language and other cognitive abilities largely absent in other primates. We explain how cooperative breeding likely selected for these abilities early in the Homo lineage. This allowed anatomically modern humans to create institutions that transformed the self-reliance of our primate ancestors into the division of labour of large-scale human social organization. PMID:26729937

  17. Coincident scales of forest feedback on climate and conservation in a diversity hot spot.

    PubMed

    Webb, Thomas J; Gaston, Kevin J; Hannah, Lee; Ian Woodward, F

    2006-03-22

    The dynamic relationship between vegetation and climate is now widely acknowledged. Climate influences the distribution of vegetation; and through a number of feedback mechanisms vegetation affects climate. This implies that land-use changes such as deforestation will have climatic consequences. However, the spatial scales at which such feedbacks occur remain largely unknown. Here, we use a large database of precipitation and tree cover records for an area of the biodiversity-rich Atlantic forest region in south eastern Brazil to investigate the forest-rainfall feedback at a range of spatial scales from ca 10(1)-10(4) km2. We show that the strength of the feedback increases up to scales of at least 10(3) km2, with the climate at a particular locality influenced by the pattern of landcover extending over a large area. Thus, smaller forest fragments, even if well protected, may suffer degradation due to the climate responding to land-use change in the surrounding area. Atlantic forest vertebrate taxa also require large areas of forest to support viable populations. Areas of forest of ca 10(3) km2 would be large enough to support such populations at the same time as minimizing the risk of climatic feedbacks resulting from deforestation.

  18. Generating multi-photon W-like states for perfect quantum teleportation and superdense coding

    NASA Astrophysics Data System (ADS)

    Li, Ke; Kong, Fan-Zhen; Yang, Ming; Ozaydin, Fatih; Yang, Qing; Cao, Zhuo-Liang

    2016-08-01

    An interesting aspect of multipartite entanglement is that for perfect teleportation and superdense coding, not the maximally entangled W states but a special class of non-maximally entangled W-like states are required. Therefore, efficient preparation of such W-like states is of great importance in quantum communications, which has not been studied as much as the preparation of W states. In this paper, we propose a simple optical scheme for efficient preparation of large-scale polarization-based entangled W-like states by fusing two W-like states or expanding a W-like state with an ancilla photon. Our scheme can also generate large-scale W states by fusing or expanding W or even W-like states. The cost analysis shows that in generating large-scale W states, the fusion mechanism achieves a higher efficiency with non-maximally entangled W-like states than maximally entangled W states. Our scheme can also start fusion or expansion with Bell states, and it is composed of a polarization-dependent beam splitter, two polarizing beam splitters and photon detectors. Requiring no ancilla photon or controlled gate to operate, our scheme can be realized with current photonics technology, and we believe it will enable advances in quantum teleportation and superdense coding in multipartite settings.

  19. Large-scale academic achievement testing of deaf and hard-of-hearing students: past, present, and future.

    PubMed

    Qi, Sen; Mitchell, Ross E

    2012-01-01

    The first large-scale, nationwide academic achievement testing program using the Stanford Achievement Test (Stanford) for deaf and hard-of-hearing children in the United States started in 1969. Over the past three decades, the Stanford has served as a benchmark in the field of deaf education for assessing student academic achievement. However, the validity and reliability of using the Stanford for this special student population still require extensive scrutiny. Recent shifts in the educational policy environment, which require that schools enable all children to achieve proficiency through accountability testing, warrant a close examination of the adequacy and relevance of the current large-scale testing of deaf and hard-of-hearing students. This study has three objectives: (a) it will summarize the historical data over the last three decades to indicate trends in academic achievement for this special population, (b) it will analyze the current federal laws and regulations related to educational testing and special education, thereby identifying gaps between policy and practice in the field, especially identifying the limitations of current testing programs in assessing what deaf and hard-of-hearing students know, and (c) it will offer some insights and suggestions for future testing programs for deaf and hard-of-hearing students.

  20. Institutionalizing telemedicine applications: the challenge of legitimizing decision-making.

    PubMed

    Zanaboni, Paolo; Lettieri, Emanuele

    2011-09-28

    During the last decades a variety of telemedicine applications have been trialed worldwide. However, telemedicine is still an example of major potential benefits that have not been fully attained. Health care regulators are still debating why institutionalizing telemedicine applications on a large scale has been so difficult and why health care professionals are often averse or indifferent to telemedicine applications, thus preventing them from becoming part of everyday clinical routines. We believe that the lack of consolidated procedures for supporting decision making by health care regulators is a major weakness. We aim to further the current debate on how to legitimize decision making about the institutionalization of telemedicine applications on a large scale. We discuss (1) three main requirements--rationality, fairness, and efficiency--that should underpin decision making so that the relevant stakeholders perceive them as being legitimate, and (2) the domains and criteria for comparing and assessing telemedicine applications--benefits and sustainability. According to these requirements and criteria, we illustrate a possible reference process for legitimate decision making about which telemedicine applications to implement on a large scale. This process adopts the health care regulators' perspective and is made up of 2 subsequent stages, in which a preliminary proposal and then a full proposal are reviewed.

  1. CERN data services for LHC computing

    NASA Astrophysics Data System (ADS)

    Espinal, X.; Bocchi, E.; Chan, B.; Fiorot, A.; Iven, J.; Lo Presti, G.; Lopez, J.; Gonzalez, H.; Lamanna, M.; Mascetti, L.; Moscicki, J.; Pace, A.; Peters, A.; Ponce, S.; Rousseau, H.; van der Ster, D.

    2017-10-01

    Dependability, resilience, adaptability and efficiency: growing requirements call for tailored storage services and novel solutions. Unprecedented volumes of data coming from the broad range of experiments at CERN need to be quickly available in a highly scalable way for large-scale processing and data distribution, while in parallel they are routed to tape for long-term archival. These activities are critical for the success of HEP experiments. Nowadays we operate at high incoming throughput (14 GB/s during the 2015 LHC Pb-Pb run and 11 PB in July 2016) and with concurrent complex production workloads. In parallel our systems provide the platform for continuous user- and experiment-driven workloads for large-scale data analysis, including end-user access and sharing. The storage services at CERN cover the needs of our community: EOS and CASTOR as large-scale storage; CERNBox for end-user access and sharing; Ceph as data back-end for the CERN OpenStack infrastructure, NFS services and S3 functionality; AFS for legacy distributed-file-system services. In this paper we summarise our experience in supporting LHC experiments and the transition of our infrastructure from static monolithic systems to flexible components providing a more coherent environment with pluggable protocols, tuneable QoS, sharing capabilities and fine-grained ACL management, while continuing to guarantee dependable and robust services.

  2. A fast time-difference inverse solver for 3D EIT with application to lung imaging.

    PubMed

    Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut

    2016-08-01

    A class of sparse optimization techniques that require solely matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention over the past decade for dealing with large-scale inverse problems. This study tailors the application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom of the problem greatly increases storage requirements and reconstruction time. This study shows the great potential of GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
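
    The appeal of matrix-free solvers mentioned above can be illustrated with a small sketch. This uses plain ISTA (iterative soft thresholding) rather than the GPSR algorithm itself, but it shows how an l1-regularized reconstruction can be driven entirely by matrix-vector products with the forward operator and its transpose; all names and numbers are illustrative.

```python
import numpy as np

def power_norm(A_mv, At_mv, n, iters=30, seed=0):
    """Estimate the largest eigenvalue of A^T A using only matrix-vector products."""
    v = np.random.default_rng(seed).standard_normal(n)
    for _ in range(iters):
        w = At_mv(A_mv(v))
        v = w / np.linalg.norm(w)
    return float(v @ At_mv(A_mv(v)))

def ista_l1(A_mv, At_mv, y, n, tau=0.05, iters=300):
    """Minimize 0.5*||A x - y||^2 + tau*||x||_1 using only the products
    A_mv(x) and At_mv(r), i.e. without ever forming A explicitly."""
    step = 1.0 / power_norm(A_mv, At_mv, n)
    x = np.zeros(n)
    for _ in range(iters):
        z = x - step * At_mv(A_mv(x) - y)                         # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * tau, 0.0)  # soft threshold
    return x

# Tiny dense example just to exercise the matrix-free interface
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200)
x_true[[5, 50, 120]] = [1.0, -2.0, 0.5]
y = A @ x_true
x_hat = ista_l1(lambda v: A @ v, lambda r: A.T @ r, y, n=200)
print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```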

  3. Large-scale DNA Barcode Library Generation for Biomolecule Identification in High-throughput Screens.

    PubMed

    Lyons, Eli; Sheridan, Paul; Tremmel, Georg; Miyano, Satoru; Sugano, Sumio

    2017-10-24

    High-throughput screens allow for the identification of specific biomolecules with characteristics of interest. In barcoded screens, DNA barcodes are linked to target biomolecules in a manner allowing for the target molecules making up a library to be identified by sequencing the DNA barcodes using Next Generation Sequencing. To be useful in experimental settings, the DNA barcodes in a library must satisfy certain constraints related to GC content, homopolymer length, Hamming distance, and blacklisted subsequences. Here we report a novel framework to quickly generate large-scale libraries of DNA barcodes for use in high-throughput screens. We show that our framework dramatically reduces the computation time required to generate large-scale DNA barcode libraries, compared with a naïve approach to DNA barcode library generation. As a proof of concept, we demonstrate that our framework is able to generate a library consisting of one million DNA barcodes for use in a fragment antibody phage display screening experiment. We also report generating a general purpose one billion DNA barcode library, the largest such library yet reported in literature. Our results demonstrate the value of our novel large-scale DNA barcode library generation framework for use in high-throughput screening applications.
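
    The constraints listed above (GC content, homopolymer length, Hamming distance, blacklisted subsequences) can be expressed as a few short predicates. The sketch below uses naive rejection sampling with made-up thresholds just to show the checks; the paper's framework is precisely about avoiding this brute-force approach at the scale of millions of barcodes.

```python
import itertools
import random

def gc_fraction(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def max_homopolymer(seq):
    return max(len(list(run)) for _, run in itertools.groupby(seq))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def passes_filters(seq, accepted, gc_lo=0.4, gc_hi=0.6,
                   max_homo=2, min_dist=3, blacklist=("GGGG",)):
    """Accept a candidate barcode only if it meets GC-content, homopolymer,
    blacklist and minimum-Hamming-distance constraints (illustrative values)."""
    if not (gc_lo <= gc_fraction(seq) <= gc_hi):
        return False
    if max_homopolymer(seq) > max_homo:
        return False
    if any(b in seq for b in blacklist):
        return False
    return all(hamming(seq, other) >= min_dist for other in accepted)

# Brute-force sampling of a small toy library
rng = random.Random(0)
library = []
while len(library) < 20:
    cand = "".join(rng.choice("ACGT") for _ in range(12))
    if passes_filters(cand, library):
        library.append(cand)
print(library[:5])
```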

  4. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses

    PubMed Central

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-01-01

    Due to the upcoming deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analysis tools, and efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements; therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on the Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via the HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. PMID:24462600

  5. A Matter of Time: Faster Percolator Analysis via Efficient SVM Learning for Large-Scale Proteomics.

    PubMed

    Halloran, John T; Rocke, David M

    2018-05-04

    Percolator is an important tool for greatly improving the results of a database search and subsequent downstream analysis. Using support vector machines (SVMs), Percolator recalibrates peptide-spectrum matches based on the learned decision boundary between targets and decoys. To improve analysis time for large-scale data sets, we update Percolator's SVM learning engine through software and algorithmic optimizations rather than heuristic approaches that necessitate the careful study of their impact on learned parameters across different search settings and data sets. We show that by optimizing Percolator's original learning algorithm, l2-SVM-MFN, large-scale SVM learning requires nearly only a third of the original runtime. Furthermore, we show that by employing the widely used Trust Region Newton (TRON) algorithm instead of l2-SVM-MFN, large-scale Percolator SVM learning is reduced to nearly only a fifth of the original runtime. Importantly, these speedups only affect the speed at which Percolator converges to a global solution and do not alter recalibration performance. The upgraded versions of both l2-SVM-MFN and TRON are optimized within the Percolator codebase for multithreaded and single-thread use and are available under Apache license at bitbucket.org/jthalloran/percolator_upgrade.
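
    As a point of reference for the squared-hinge ("l2-SVM") objective that both solvers optimize, here is a minimal sketch with plain gradient descent standing in for l2-SVM-MFN and TRON; it is not Percolator's implementation, and the data and hyperparameters are made up.

```python
import numpy as np

def l2svm_objective_and_grad(w, X, y, C=1.0):
    """Primal l2-SVM: 0.5*||w||^2 + C * sum_i max(0, 1 - y_i * x_i.w)^2.
    The squared hinge is differentiable, which is what lets Newton-type
    solvers such as l2-SVM-MFN and TRON be applied."""
    margins = 1.0 - y * (X @ w)
    active = margins > 0.0
    obj = 0.5 * (w @ w) + C * np.sum(margins[active] ** 2)
    grad = w - 2.0 * C * (X[active].T @ (y[active] * margins[active]))
    return obj, grad

# Toy data and plain gradient descent as a stand-in for the specialized solvers
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))
y = np.sign(X[:, 0] + 0.3 * rng.standard_normal(500))
w, C, lr = np.zeros(20), 1.0 / X.shape[0], 0.2
for _ in range(300):
    _, g = l2svm_objective_and_grad(w, X, y, C)
    w -= lr * g
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```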

  6. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances

    PubMed Central

    Parker, V. Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process to a plant fire adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host. PMID:26151560

  7. An Illustrative Guide to the Minerva Framework

    NASA Astrophysics Data System (ADS)

    Flom, Erik; Leonard, Patrick; Hoeffel, Udo; Kwak, Sehyun; Pavone, Andrea; Svensson, Jakob; Krychowiak, Maciej; Wendelstein 7-X Team Collaboration

    2017-10-01

    Modern physics experiments require tracking and modelling data and their associated uncertainties on a large scale, as well as the combined implementation of multiple independent data streams for sophisticated modelling and analysis. The Minerva Framework offers a centralized, user-friendly method of large-scale physics modelling and scientific inference. Currently used by teams at multiple large-scale fusion experiments including the Joint European Torus (JET) and Wendelstein 7-X (W7-X), the Minerva framework provides a forward-model friendly architecture for developing and implementing models for large-scale experiments. One aspect of the framework involves so-called data sources, which are nodes in the graphical model. These nodes are supplied with engineering and physics parameters. When end-user level code calls a node, it is checked network-wide against its dependent nodes for changes since its last implementation and returns version-specific data. Here, a filterscope data node is used as an illustrative example of the Minerva Framework's data management structure and its further application to Bayesian modelling of complex systems. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under Grant Agreement No. 633053.

  8. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses.

    PubMed

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-06-01

    Due to the upcoming deluge of genome data, the need for storing and processing large-scale genome data, easy access to biomedical analysis tools, and efficient data sharing and retrieval has presented significant challenges. The variability in data volume results in variable computing and storage requirements; therefore biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on the Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via the HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as a performance evaluation are presented to validate the feasibility of the proposed approach. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Comparison of Large eddy dynamo simulation using dynamic sub-grid scale (SGS) model with a fully resolved direct simulation in a rotating spherical shell

    NASA Astrophysics Data System (ADS)

    Matsui, H.; Buffett, B. A.

    2017-12-01

    The flow in the Earth's outer core is expected to span a vast range of length scales, from the geometry of the outer core down to the thickness of the boundary layers. Because of the limited spatial resolution of numerical simulations, sub-grid scale (SGS) modeling is required to represent the effects of the unresolved fields on the large-scale fields. We model the effects of the sub-grid scale flow and magnetic field using a dynamic scale similarity model. Four terms are introduced for the momentum flux, heat flux, Lorentz force and magnetic induction. The model was previously used in convection-driven dynamos in a rotating plane layer and spherical shell using finite element methods. In the present study, we perform large eddy simulations (LES) using the dynamic scale similarity model. The scale similarity model is implemented in Calypso, a numerical dynamo model based on spherical harmonics expansion. To obtain the SGS terms, the spatial filtering in the horizontal directions is done by taking the convolution of a Gaussian filter expressed in terms of a spherical harmonic expansion, following Jekeli (1981). A Gaussian filter is also applied in the radial direction. To verify the present model, we perform a fully resolved direct numerical simulation (DNS) with a spherical harmonic truncation of L = 255 as a reference, and we perform unresolved DNS and LES with the SGS model at coarser resolutions (L = 127, 84, and 63) using the same control parameters as the resolved DNS. We will discuss the verification results from comparisons among these simulations, and the role of the SGS terms in the LES in conveying the effects of the small-scale fields on the large-scale fields.
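
    To make the scale-similarity idea concrete, the sketch below estimates an SGS momentum flux on a 1-D periodic field with a Fourier-space Gaussian test filter; this is only a schematic stand-in for the spherical-harmonic filtering described above, and the field and filter width are arbitrary.

```python
import numpy as np

def gaussian_filter_1d(f, sigma, dx):
    """Periodic Gaussian filter applied in Fourier space (1-D illustration of
    the horizontal filtering step done with spherical harmonics in the paper)."""
    k = np.fft.fftfreq(f.size, d=dx) * 2.0 * np.pi
    return np.real(np.fft.ifft(np.fft.fft(f) * np.exp(-0.5 * (k * sigma) ** 2)))

def scale_similarity_flux(u, v, sigma, dx):
    """Scale-similarity estimate of the SGS flux tau = <u v> - <u><v>,
    where <.> denotes the test filter."""
    return gaussian_filter_1d(u * v, sigma, dx) - (
        gaussian_filter_1d(u, sigma, dx) * gaussian_filter_1d(v, sigma, dx))

# Synthetic field with large- and small-scale components (illustrative)
n, L = 512, 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
u = np.sin(x) + 0.2 * np.sin(25.0 * x)
tau_uu = scale_similarity_flux(u, u, sigma=0.1, dx=L / n)
print("rms SGS momentum flux estimate:", np.sqrt(np.mean(tau_uu ** 2)))
```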

  10. Large-scale parallel genome assembler over cloud computing environment.

    PubMed

    Das, Arghya Kusum; Koppa, Praveen Kumar; Goswami, Sayan; Platania, Richard; Park, Seung-Jong

    2017-06-01

    The size of high throughput DNA sequencing data has already reached the terabyte scale. To manage this huge volume of data, many downstream sequencing applications have started using locality-based computing over different cloud infrastructures to take advantage of elastic (pay as you go) resources at a lower cost. However, the locality-based programming model (e.g. MapReduce) is relatively new. Consequently, developing scalable data-intensive bioinformatics applications using this model and understanding the hardware environment that these applications require for good performance both require further research. In this paper, we present a de Bruijn graph oriented Parallel Giraph-based Genome Assembler (GiGA), as well as the hardware platform required for its optimal performance. GiGA uses the power of Hadoop (MapReduce) and Giraph (large-scale graph analysis) to achieve high scalability over hundreds of compute nodes by collocating the computation and data. GiGA achieves significantly higher scalability with competitive assembly quality compared to contemporary parallel assemblers (e.g. ABySS and Contrail) over a traditional HPC cluster. Moreover, we show that the performance of GiGA is significantly improved by using an SSD-based private cloud infrastructure instead of a traditional HPC cluster. We observe that the performance of GiGA on 256 cores of this SSD-based cloud infrastructure closely matches that of 512 cores of the traditional HPC cluster.
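
    As background for the graph-oriented assembly mentioned above, the following minimal sketch builds the underlying de Bruijn graph from k-mers; it is a toy serial version, not GiGA's distributed Giraph implementation, and the reads are invented.

```python
from collections import defaultdict

def de_bruijn_graph(reads, k):
    """Build a de Bruijn graph: nodes are (k-1)-mers, edges are k-mers
    observed in the reads (the core data structure behind assemblers of
    the kind described above)."""
    graph = defaultdict(list)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])   # prefix -> suffix edge
    return graph

# Toy example: two overlapping reads, k = 4
reads = ["ACGTTGCA", "GTTGCATT"]
g = de_bruijn_graph(reads, k=4)
for node, succs in sorted(g.items()):
    print(node, "->", ", ".join(succs))
```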

  11. Contrasting styles of large-scale displacement of unconsolidated sand: examples from the early Jurassic Navajo Sandstone on the Colorado Plateau, USA

    NASA Astrophysics Data System (ADS)

    Bryant, Gerald

    2015-04-01

    Large-scale soft-sediment deformation features in the Navajo Sandstone have been a topic of interest for nearly 40 years, ever since they were first explored as a criterion for discriminating between marine and continental processes in the depositional environment. For much of this time, evidence for large-scale sediment displacements was commonly attributed to processes of mass wasting. That is, gravity-driven movements of surficial sand. These slope failures were attributed to the inherent susceptibility of dune sand responding to environmental triggers such as earthquakes, floods, impacts, and the differential loading associated with dune topography. During the last decade, a new wave of research is focusing on the event significance of deformation features in more detail, revealing a broad diversity of large-scale deformation morphologies. This research has led to a better appreciation of subsurface dynamics in the early Jurassic deformation events recorded in the Navajo Sandstone, including the important role of intrastratal sediment flow. This report documents two illustrative examples of large-scale sediment displacements represented in extensive outcrops of the Navajo Sandstone along the Utah/Arizona border. Architectural relationships in these outcrops provide definitive constraints that enable the recognition of a large-scale sediment outflow, at one location, and an equally large-scale subsurface flow at the other. At both sites, evidence for associated processes of liquefaction appear at depths of at least 40 m below the original depositional surface, which is nearly an order of magnitude greater than has commonly been reported from modern settings. The surficial, mass flow feature displays attributes that are consistent with much smaller-scale sediment eruptions (sand volcanoes) that are often documented from modern earthquake zones, including the development of hydraulic pressure from localized, subsurface liquefaction and the subsequent escape of fluidized sand toward the unconfined conditions of the surface. The origin of the forces that produced the lateral, subsurface movement of a large body of sand at the other site is not readily apparent. The various constraints on modeling the generation of the lateral force required to produce the observed displacement are considered here, along with photodocumentation of key outcrop relationships.

  12. A survey on routing protocols for large-scale wireless sensor networks.

    PubMed

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

    With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs), based on the cooperation among large numbers of nodes, have become a hot topic. "Large-scale" mainly means a large area or a high node density in a network. Accordingly, the routing protocols must scale well as the network extent and node density increase. A sensor node is normally energy-limited and cannot be recharged, so its energy consumption has a quite significant effect on the scalability of the protocol. To the best of our knowledge, the mainstream methods currently used to solve the energy problem in large-scale WSNs are hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes within the high level are responsible for data aggregation and management work, and the low-level nodes for sensing their surroundings and collecting information. Hierarchical routing protocols are proved to be more energy-efficient than flat ones, in which all the nodes play the same role, especially in terms of data aggregation and the flooding of control packets. With a focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on different criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to gain a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages. Moreover, a comparison of the routing protocols is conducted to demonstrate the differences between them in terms of message complexity, memory requirements, localization, data aggregation, clustering manner and other metrics. Finally, some open issues in routing protocol design for large-scale wireless sensor networks and conclusions are presented.
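
    As one concrete example of the cluster-head election that hierarchical protocols rely on, here is a LEACH-style threshold rule in a short sketch; LEACH is a classic protocol of the kind surveyed, and the node count and election probability p are arbitrary.

```python
import random

def leach_cluster_heads(node_ids, p, round_idx, was_head_recently, rng):
    """One round of LEACH-style cluster-head election: each node that has not
    served as a head in the current epoch self-elects with threshold
    T = p / (1 - p * (r mod 1/p))."""
    threshold = p / (1.0 - p * (round_idx % int(1.0 / p)))
    return [n for n in node_ids
            if not was_head_recently[n] and rng.random() < threshold]

rng = random.Random(42)
nodes = list(range(100))
recent = {n: False for n in nodes}          # no node has been a head yet
heads = leach_cluster_heads(nodes, p=0.05, round_idx=0,
                            was_head_recently=recent, rng=rng)
print(f"{len(heads)} cluster heads elected this round")
```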

  13. Multi-scale comparison of source parameter estimation using empirical Green's function approach

    NASA Astrophysics Data System (ADS)

    Chen, X.; Cheng, Y.

    2015-12-01

    Analysis of earthquake source parameters requires correction for path effects, site responses, and instrument responses. The empirical Green's function (EGF) method is one of the most effective methods for removing path effects and station responses by taking the spectral ratio between a larger and a smaller event. The traditional EGF method requires identifying suitable event pairs and analyzing each event individually. This allows high-quality estimations for strictly selected events; however, the quantity of resolvable source parameters is limited, which challenges the interpretation of spatial-temporal coherency. On the other hand, methods that exploit the redundancy of event-station pairs have been proposed, which utilize stacking techniques to obtain systematic source parameter estimations for a large quantity of events at the same time. This allows us to examine a large quantity of events systematically, facilitating analysis of spatial-temporal patterns and scaling relationships. However, it is unclear how much resolution is sacrificed during this process. In addition to the empirical Green's function calculation, the choice of model parameters and fitting methods also leads to biases. Here, using two regional focused arrays, the OBS array in the Mendocino region and the borehole array in the Salton Sea geothermal field, we compare the results from large-scale stacking analysis, small-scale cluster analysis, and single event-pair analysis with different fitting methods within completely different tectonic environments, in order to quantify the consistency and inconsistency in source parameter estimations and the associated problems.
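
    One common way such spectral-ratio fits are set up (one of several fitting choices a study like this might compare) is a Brune-model ratio between the target event and its EGF; the sketch below fits synthetic data in log space, and all parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_brune_ratio(f, log_moment_ratio, fc_large, fc_small):
    """Log of the spectral ratio between a target event and its EGF under the
    Brune omega-square source model; path and site terms cancel in the ratio."""
    return (log_moment_ratio
            + np.log(1.0 + (f / fc_small) ** 2)
            - np.log(1.0 + (f / fc_large) ** 2))

# Synthetic example with made-up source parameters (not real data)
f = np.logspace(-1, 2, 200)                                   # frequency, Hz
truth = log_brune_ratio(f, np.log(100.0), 1.0, 10.0)
observed = truth + 0.05 * np.random.default_rng(0).standard_normal(f.size)
popt, _ = curve_fit(log_brune_ratio, f, observed, p0=[np.log(50.0), 2.0, 20.0])
print("moment ratio:", np.exp(popt[0]), " corner frequencies (Hz):", popt[1], popt[2])
```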

  14. Enterprise tools to promote interoperability: MonitoringResources.org supports design and documentation of large-scale, long-term monitoringprograms

    NASA Astrophysics Data System (ADS)

    Weltzin, J. F.; Scully, R. A.; Bayer, J.

    2016-12-01

    Individual natural resource monitoring programs have evolved in response to different organizational mandates, jurisdictional needs, issues and questions. We are establishing a collaborative forum for large-scale, long-term monitoring programs to identify opportunities where collaboration could yield efficiency in monitoring design, implementation, analyses, and data sharing. We anticipate these monitoring programs will have similar requirements - e.g. survey design, standardization of protocols and methods, information management and delivery - that could be met by enterprise tools to promote sustainability, efficiency and interoperability of information across geopolitical boundaries or organizational cultures. MonitoringResources.org, a project of the Pacific Northwest Aquatic Monitoring Partnership, provides an on-line suite of enterprise tools focused on aquatic systems in the Pacific Northwest Region of the United States. We will leverage and expand this existing capacity to support continental-scale monitoring of both aquatic and terrestrial systems. The current stakeholder group is focused on programs led by bureaus within the Department of the Interior, but the tools will be readily and freely available to a broad variety of other stakeholders. Here, we report the results of two initial stakeholder workshops focused on (1) establishing a collaborative forum of large-scale monitoring programs, (2) identifying and prioritizing shared needs, (3) evaluating existing enterprise resources, (4) defining priorities for development of enhanced capacity for MonitoringResources.org, and (5) identifying a small number of pilot projects that can be used to define and test development requirements for specific monitoring programs.

  15. Origin of the Two Scales of Wind Ripples on Mars

    NASA Technical Reports Server (NTRS)

    Lapotre, Mathieu G. A.; Ewing, Ryan C.; Lamb, Michael P.; Fischer, Woodward W.; Grotzinger, John P.; Rubin, David M.; Lewis, Kevin W.; Day, Mackenzie; Gupta, Sanjeev; Banham, Steeve G.; hide

    2016-01-01

    Earth's sandy deserts host two main types of bedforms - decimeter-scale ripples and larger dunes. Years of orbital observations on Mars also confirmed the existence of two modes of active eolian bedforms - meter-scale ripples, and dunes. By analogy to terrestrial ripples, which are thought to form from a grain mechanism, it was hypothesized that large martian ripples also formed from grain impacts, but spaced further apart due to elongated saltation trajectories from the lower martian gravity and different atmospheric properties. However, the Curiosity rover recently documented the coexistence of three scales of bedforms in Gale crater. Because a grain impact mechanism cannot readily explain two distinct and coeval ripple modes in similar sand sizes, a new mechanism seems to be required to explain one of the scales of ripples. Small ripples are most similar to Earth's impact ripples, with straight crests and subdued profiles. In contrast, large martian ripples are sinuous and asymmetric, with lee slopes dominated by grain flows and grainfall deposits. Thus, large martian ripples resemble current ripples formed underwater on Earth, suggesting that they may form from a fluid-drag mechanism. To test this hypothesis, we develop a scaling relation to predict the spacing of fluid-drag ripples from an extensive flume data compilation. The size of large martian ripples is predicted by our scaling relation when adjusted for martian atmospheric properties. Specifically, we propose that the wavelength of martian wind-drag ripples arises from the high kinematic viscosity of the low-density atmosphere. Because fluid density controls drag-ripple size, our scaling relation can help constrain paleoatmospheric density from wind-drag ripple stratification.

  16. Origin of the two scales of wind ripples on Mars

    NASA Astrophysics Data System (ADS)

    Lapotre, M. G. A.; Ewing, R. C.; Lamb, M. P.; Fischer, W. W.; Grotzinger, J. P.; Rubin, D. M.; Lewis, K. W.; Ballard, M.; Day, M. D.; Gupta, S.; Banham, S.; Bridges, N.; Des Marais, D. J.; Fraeman, A. A.; Grant, J. A., III; Ming, D. W.; Mischna, M.; Rice, M. S.; Sumner, D. Y.; Vasavada, A. R.; Yingst, R. A.

    2016-12-01

    Earth's sandy deserts host two main types of bedforms - decimeter-scale ripples and larger dunes. Years of orbital observations on Mars also confirmed the existence of two modes of active eolian bedforms - meter-scale ripples, and dunes. By analogy to terrestrial ripples, which are thought to form from a grain mechanism, it was hypothesized that large martian ripples also formed from grain impacts, but spaced further apart due to elongated saltation trajectories from the lower martian gravity and different atmospheric properties. However, the Curiosity rover recently documented the coexistence of three scales of bedforms in Gale crater. Because a grain impact mechanism cannot readily explain two distinct and coeval ripple modes in similar sand sizes, a new mechanism seems to be required to explain one of the scales of ripples. Small ripples are most similar to Earth's impact ripples, with straight crests and subdued profiles. In contrast, large martian ripples are sinuous and asymmetric, with lee slopes dominated by grain flows and grainfall deposits. Thus, large martian ripples resemble current ripples formed underwater on Earth, suggesting that they may form from a fluid-drag mechanism. To test this hypothesis, we develop a scaling relation to predict the spacing of fluid-drag ripples from an extensive flume data compilation. The size of large martian ripples is predicted by our scaling relation when adjusted for martian atmospheric properties. Specifically, we propose that the wavelength of martian wind-drag ripples arises from the high kinematic viscosity of the low-density atmosphere. Because fluid density controls drag-ripple size, our scaling relation can help constrain paleoatmospheric density from wind-drag ripple stratification.

  17. Assessing the effect of snow/water obstructions on the measurement of tree seedlings in a large-scale temperate forest inventory

    Treesearch

    C.W. Woodall; J.A. Westfall; K. Zhu; D.J. Johnson

    2013-01-01

    National-scale forest inventories have endeavoured to include holistic measurements of forest health inclusive of attributes such as downed dead wood and tree regeneration that occur in the forest understory. Inventories may require year-round measurement of inventory plots with some of these measurements being affected by seasonal obstructions (e.g. snowpacks and...

  18. Collaboratively Architecting a Scalable and Adaptable Petascale Infrastructure to Support Transdisciplinary Scientific Research for the Australian Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Wyborn, L. A.; Evans, B. J. K.; Pugh, T.; Lescinsky, D. T.; Foster, C.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) at the Australian National University (ANU) is a partnership between CSIRO, ANU, Bureau of Meteorology (BoM) and Geoscience Australia. Recent investments in a 1.2 PFlop Supercomputer (Raijin), ~ 20 PB data storage using Lustre filesystems and a 3000 core high performance cloud have created a hybrid platform for higher performance computing and data-intensive science to enable large scale earth and climate systems modelling and analysis. There are > 3000 users actively logging in and > 600 projects on the NCI system. Efficiently scaling and adapting data and software systems to petascale infrastructures requires the collaborative development of an architecture that is designed, programmed and operated to enable users to interactively invoke different forms of in-situ computation over complex and large scale data collections. NCI makes available major and long tail data collections from both the government and research sectors based on six themes: 1) weather, climate and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology and 6) astronomy, bio and social. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. Collections are the operational form for data management and access. Similar data types from individual custodians are managed cohesively. Use of international standards for discovery and interoperability allow complex interactions within and between the collections. This design facilitates a transdisciplinary approach to research and enables a shift from small scale, 'stove-piped' science efforts to large scale, collaborative systems science. This new and complex infrastructure requires a move to shared, globally trusted software frameworks that can be maintained and updated. Workflow engines become essential and need to integrate provenance, versioning, traceability, repeatability and publication. There are also human resource challenges as highly skilled HPC/HPD specialists, specialist programmers, and data scientists are required whose skills can support scaling to the new paradigm of effective and efficient data-intensive earth science analytics on petascale, and soon to be exascale systems.

  19. Regional modeling of groundwater flow and arsenic transport in the Bengal Basin: challenges of scale and complexity (Invited)

    NASA Astrophysics Data System (ADS)

    Michael, H. A.; Voss, C. I.

    2009-12-01

    Widespread arsenic poisoning is occurring in large areas of Bangladesh and West Bengal, India due to high arsenic levels in shallow groundwater, which is the primary source of irrigation and drinking water in the region. The high-arsenic groundwater exists in aquifers of the Bengal Basin, a huge sedimentary system approximately 500km x 500km wide and greater than 15km deep in places. Deeper groundwater (>150m) is nearly universally low in arsenic and a potential source of safe drinking water, but evaluation of its sustainability requires understanding of the entire, interconnected regional aquifer system. Numerical modeling of flow and arsenic transport in the basin introduces problems of scale: challenges in representing the system in enough detail to produce meaningful simulations and answer relevant questions while maintaining enough simplicity to understand controls on processes and operating within computational constraints. A regional groundwater flow and transport model of the Bengal Basin was constructed to assess the large-scale functioning of the deep groundwater flow system, the vulnerability of deep groundwater to pumping-induced migration from above, and the effect of chemical properties of sediments (sorption) on sustainability. The primary challenges include the very large spatial scale of the system, dynamic monsoonal hydrology (small temporal scale fluctuations), complex sedimentary architecture (small spatial scale heterogeneity), and a lack of reliable hydrologic and geologic data. The approach was simple. Detailed inputs were reduced to only those that affect the functioning of the deep flow system. Available data were used to estimate upscaled parameter values. Nested small-scale simulations were performed to determine the effects of the simplifications, which include treatment of the top boundary condition and transience, effects of small-scale heterogeneity, and effects of individual pumping wells. Simulation of arsenic transport at the large scale adds another element of complexity. Minimization of numerical oscillation and mass balance errors required experimentation with solvers and discretization. In the face of relatively few data in a very large-scale model, sensitivity analyses were essential. The scale of the system limits evaluation of localized behavior, but results clearly identified the primary controls on the system and effects of various pumping scenarios and sorptive properties. It was shown that limiting deep pumping to domestic supply may result in sustainable arsenic-safe water for 90% of the arsenic-affected region over a 1000 year timescale, and that sorption of arsenic onto deep, oxidized Pleistocene sediments may increase the breakthrough time in unsustainable zones by more than an order of magnitude. Thus, both hydraulic and chemical defenses indicate the potential for sustainable, managed use of deep, safe groundwater resources in the Bengal Basin.

  20. Revision of the Rawls et al. (1982) pedotransfer functions for their applicability to US croplands

    USDA-ARS?s Scientific Manuscript database

    Large scale environmental impact studies typically involve the use of simulation models and require a variety of inputs, some of which may need to be estimated in absence of adequate measured data. As an example, soil water retention needs to be estimated for a large number of soils that are to be u...

  1. A Scalable Multimedia Streaming Scheme with CBR-Transmission of VBR-Encoded Videos over the Internet

    ERIC Educational Resources Information Center

    Kabir, Md. H.; Shoja, Gholamali C.; Manning, Eric G.

    2006-01-01

    Streaming audio/video contents over the Internet requires large network bandwidth and timely delivery of media data. A streaming session is generally long and also needs a large I/O bandwidth at the streaming server. A streaming server, however, has limited network and I/O bandwidth. For this reason, a streaming server alone cannot scale a…

  2. Relativistic jets without large-scale magnetic fields

    NASA Astrophysics Data System (ADS)

    Parfrey, K.; Giannios, D.; Beloborodov, A.

    2014-07-01

    The canonical model of relativistic jets from black holes requires a large-scale ordered magnetic field to provide a significant magnetic flux through the ergosphere--in the Blandford-Znajek process, the jet power scales with the square of the magnetic flux. In many jet systems the presence of the required flux in the environment of the central engine is questionable. I will describe an alternative scenario, in which jets are produced by the continuous sequential accretion of small magnetic loops. The magnetic energy stored in these coronal flux systems is amplified by the differential rotation of the accretion disc and by the rotating spacetime of the black hole, leading to runaway field line inflation, magnetic reconnection in thin current layers, and the ejection of discrete bubbles of Poynting-flux-dominated plasma. For illustration I will show the results of general-relativistic force-free electrodynamic simulations of rotating black hole coronae, performed using a new resistivity model. The dissipation of magnetic energy by coronal reconnection events, as demonstrated in these simulations, is a potential source of the observed high-energy emission from accreting compact objects.

  3. An improved filter elution and cell culture assay procedure for evaluating public groundwater systems for culturable enteroviruses.

    PubMed

    Dahling, Daniel R

    2002-01-01

    Large-scale virus studies of groundwater systems require practical and sensitive procedures for both sample processing and viral assay. Filter adsorption-elution procedures have traditionally been used to process large-volume water samples for viruses. In this study, five filter elution procedures using cartridge filters were evaluated for their effectiveness in processing samples. Of the five procedures tested, the third method, which incorporated two separate beef extract elutions (one being an overnight filter immersion in beef extract), recovered 95% of seeded poliovirus compared with recoveries of 36 to 70% for the other methods. For viral enumeration, an expanded roller bottle quantal assay was evaluated using seeded poliovirus. This cytopathic-based method was considerably more sensitive than the standard plaque assay method. The roller bottle system was more economical than the plaque assay for the evaluation of comparable samples. Using roller bottles required less time and manipulation than the plaque procedure and greatly facilitated the examination of large numbers of samples. The combination of the improved filter elution procedure and the roller bottle assay for viral analysis makes large-scale virus studies of groundwater systems practical. This procedure was subsequently field tested during a groundwater study in which large-volume samples (exceeding 800 L) were processed through the filters.

  4. Evaluation of Large-scale Data to Detect Irregularity in Payment for Medical Services. An Extended Use of Benford's Law.

    PubMed

    Park, Junghyun A; Kim, Minki; Yoon, Seokjoon

    2016-05-17

    Sophisticated anti-fraud systems for the healthcare sector have been built based on several statistical methods. Although existing methods have been developed to detect fraud in the healthcare sector, these algorithms consume considerable time and cost, and lack a theoretical basis to handle large-scale data. Based on mathematical theory, this study proposes a new approach to using Benford's Law in that we closely examined the individual-level data to identify specific fees for in-depth analysis. We extended the mathematical theory to demonstrate the manner in which large-scale data conform to Benford's Law. Then, we empirically tested its applicability using actual large-scale healthcare data from Korea's Health Insurance Review and Assessment (HIRA) National Patient Sample (NPS). For Benford's Law, we considered the mean absolute deviation (MAD) formula to test the large-scale data. We conducted our study on 32 diseases, comprising 25 representative diseases and 7 DRG-regulated diseases. We performed an empirical test on 25 diseases, showing the applicability of Benford's Law to large-scale data in the healthcare industry. For the seven DRG-regulated diseases, we examined the individual-level data to identify specific fees to carry out an in-depth analysis. Among the eight categories of medical costs, we considered the strength of certain irregularities based on the details of each DRG-regulated disease. Using the degree of abnormality, we propose priority action to be taken by government health departments and private insurance institutions to bring unnecessary medical expenses under control. However, when we detect deviations from Benford's Law, relatively high contamination ratios are required at conventional significance levels.
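
    The first-digit screening above is straightforward to reproduce. A minimal sketch (not the authors' implementation; the synthetic claim amounts and the plain-NumPy coding are assumptions for illustration) of the mean absolute deviation (MAD) statistic against the Benford expectation P(d) = log10(1 + 1/d):

      import numpy as np

      def first_digit(x):
          """Leading (most significant) digit of each positive value."""
          x = np.asarray(x, dtype=float)
          exponent = np.floor(np.log10(x))
          return (x / 10.0 ** exponent).astype(int)

      def benford_mad(values):
          """Mean absolute deviation between observed first-digit frequencies
          and the Benford's Law expectation P(d) = log10(1 + 1/d)."""
          values = np.asarray(values, dtype=float)
          values = values[values > 0]          # Benford's Law applies to positive amounts
          digits = first_digit(values)
          observed = np.array([(digits == d).mean() for d in range(1, 10)])
          expected = np.log10(1.0 + 1.0 / np.arange(1, 10))
          return np.abs(observed - expected).mean()

      # Toy usage: log-uniform synthetic "claim amounts" follow Benford's Law closely,
      # so their MAD is small; manipulated fee distributions tend to give larger MADs.
      rng = np.random.default_rng(0)
      claims = 10 ** rng.uniform(1, 5, size=100_000)
      print(f"MAD = {benford_mad(claims):.5f}")

    In practice the MAD threshold would have to be calibrated against the contamination ratios and significance levels discussed above before a fee category is flagged for in-depth review.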

  5. Unique Testing Capabilities of the NASA Langley Transonic Dynamics Tunnel, an Exercise in Aeroelastic Scaling

    NASA Technical Reports Server (NTRS)

    Ivanco, Thomas G.

    2013-01-01

    NASA Langley Research Center's Transonic Dynamics Tunnel (TDT) is the world's most capable aeroelastic test facility. Its large size, transonic speed range, variable pressure capability, and use of either air or R-134a heavy gas as a test medium enable unparalleled manipulation of flow-dependent scaling quantities. Matching these scaling quantities enables dynamic similitude of a full-scale vehicle with a sub-scale model, a requirement for proper characterization of any dynamic phenomenon, and many static elastic phenomena. Select scaling parameters are presented in order to quantify the scaling advantages of TDT and the consequence of testing in other facilities. In addition to dynamic testing, the TDT is uniquely well-suited for high risk testing or for those tests that require unusual model mount or support systems. Examples of recently conducted dynamic tests requiring unusual model support are presented. In addition to its unique dynamic test capabilities, the TDT is also evaluated in its capability to conduct aerodynamic performance tests as a result of its flow quality. Results of flow quality studies and a comparison with many other transonic facilities are presented. Finally, the ability of the TDT to support future NASA research thrusts and likely vehicle designs is discussed.

  6. Geometry of a large-scale, low-angle, midcrustal thrust (Woodroffe Thrust, central Australia)

    NASA Astrophysics Data System (ADS)

    Wex, S.; Mancktelow, N. S.; Hawemann, F.; Camacho, A.; Pennacchioni, G.

    2017-11-01

    The Musgrave Block in central Australia exposes numerous large-scale mylonitic shear zones developed during the intracontinental Petermann Orogeny around 560-520 Ma. The most prominent structure is the crustal-scale, over 600 km long, E-W trending Woodroffe Thrust, which is broadly undulate but generally dips shallowly to moderately to the south and shows an approximately top-to-north sense of movement. The estimated metamorphic conditions of mylonitization indicate a regional variation from predominantly midcrustal (circa 520-620°C and 0.8-1.1 GPa) to lower crustal (circa 650°C and 1.0-1.3 GPa) levels in the direction of thrusting, which is also reflected in the distribution of preserved deformation microstructures. This variation in metamorphic conditions is consistent with a south dipping thrust plane but is only small, implying that a ≥60 km long N-S segment of the Woodroffe Thrust was originally shallowly dipping at an average estimated angle of ≤6°. The reconstructed geometry suggests that basement-cored, thick-skinned, midcrustal thrusts can be very shallowly dipping on a scale of many tens of kilometers in the direction of movement. Such a geometry would require the rocks along the thrust to be weak, but field observations (e.g., large volumes of syntectonic pseudotachylyte) argue for a strong behavior, at least transiently. Localization on a low-angle, near-planar structure that crosscuts lithological layers requires a weak precursor, such as a seismic rupture in the middle to lower crust. If this was a single event, the intracontinental earthquake must have been large, with the rupture extending laterally over hundreds of kilometers.

  7. PLUM: Parallel Load Balancing for Unstructured Adaptive Meshes. Degree awarded by Colorado Univ.

    NASA Technical Reports Server (NTRS)

    Oliker, Leonid

    1998-01-01

    Dynamic mesh adaption on unstructured grids is a powerful tool for computing large-scale problems that require grid modifications to efficiently resolve solution features. By locally refining and coarsening the mesh to capture physical phenomena of interest, such procedures make standard computational methods more cost effective. Unfortunately, an efficient parallel implementation of these adaptive methods is rather difficult to achieve, primarily due to the load imbalance created by the dynamically-changing nonuniform grid. This requires significant communication at runtime, leading to idle processors and adversely affecting the total execution time. Nonetheless, it is generally thought that unstructured adaptive-grid techniques will constitute a significant fraction of future high-performance supercomputing. Various dynamic load balancing methods have been reported to date; however, most of them either lack a global view of loads across processors or do not apply their techniques to realistic large-scale applications.

  8. Advances in multi-scale modeling of solidification and casting processes

    NASA Astrophysics Data System (ADS)

    Liu, Baicheng; Xu, Qingyan; Jing, Tao; Shen, Houfa; Han, Zhiqiang

    2011-04-01

    The development of the aviation, energy and automobile industries requires advanced integrated product/process R&D systems that can optimize both the product and the process design. Integrated computational materials engineering (ICME) is a promising approach to fulfill this requirement and make product and process development efficient, economic, and environmentally friendly. Advances in multi-scale modeling of solidification and casting processes, including mathematical models as well as engineering applications, are presented in the paper. Dendrite morphology of magnesium and aluminum alloys during solidification, simulated using phase field and cellular automaton methods, mathematical models of segregation in large steel ingots, and microstructure models of unidirectionally solidified turbine blade castings are studied and discussed. In addition, some engineering case studies, including microstructure simulation of aluminum castings for the automobile industry, segregation of large steel ingots for the energy industry, and microstructure simulation of unidirectionally solidified turbine blade castings for the aviation industry, are discussed.

  9. Warm natural inflation

    NASA Astrophysics Data System (ADS)

    Mishra, Hiranmaya; Mohanty, Subhendra; Nautiyal, Akhilesh

    2012-04-01

    In warm inflation models there is the requirement of generating large dissipative couplings of the inflaton with radiation, while at the same time, not de-stabilising the flatness of the inflaton potential due to radiative corrections. One way to achieve this without fine tuning unrelated couplings is by supersymmetry. In this Letter we show that if the inflaton and other light fields are pseudo-Nambu-Goldstone bosons then the radiative corrections to the potential are suppressed and the thermal corrections are small as long as the temperature is below the symmetry breaking scale. In such models it is possible to fulfil the contrary requirements of an inflaton potential which is stable under radiative corrections and the generation of a large dissipative coupling of the inflaton field with other light fields. We construct a warm inflation model which gives the observed CMB-anisotropy amplitude and spectral index where the symmetry breaking is at the GUT scale.

  10. Computational Issues in Damping Identification for Large Scale Problems

    NASA Technical Reports Server (NTRS)

    Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.

    1997-01-01

    Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithm and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithm. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees-of-freedom.

  11. Multiplexed analysis of protein-ligand interactions by fluorescence anisotropy in a microfluidic platform.

    PubMed

    Cheow, Lih Feng; Viswanathan, Ramya; Chin, Chee-Sing; Jennifer, Nancy; Jones, Robert C; Guccione, Ernesto; Quake, Stephen R; Burkholder, William F

    2014-10-07

    Homogeneous assay platforms for measuring protein-ligand interactions are highly valued due to their potential for high-throughput screening. However, the implementation of these multiplexed assays in conventional microplate formats is considerably expensive due to the large amounts of reagents required and the need for automation. We implemented a homogeneous fluorescence anisotropy-based binding assay in an automated microfluidic chip to simultaneously interrogate >2300 pairwise interactions. We demonstrated the utility of this platform in determining the binding affinities between chromatin-regulatory proteins and different post-translationally modified histone peptides. The microfluidic chip assay produces comparable results to conventional microtiter plate assays, yet requires 2 orders of magnitude less sample and an order of magnitude fewer pipetting steps. This approach enables one to use small samples for medium-scale screening and could ease the bottleneck of large-scale protein purification.

  12. Results of Long Term Life Tests of Large Scale Lithium-Ion Cells

    NASA Astrophysics Data System (ADS)

    Inoue, Takefumi; Imamura, Nobutaka; Miyanaga, Naozumi; Yoshida, Hiroaki; Komada, Kanemi

    2008-09-01

    High energy density Li-ion cells have been introduced into recent satellites and other space applications. We started development of large scale Li-ion cells for space applications in 1997, and the chemical design was fixed in 1999. Confirming life performance is essential for satellite applications because they require long mission lives, such as 15 years for GEO and 5 to 7 years for LEO. We therefore started life tests under various conditions, and these tests have now reached 8 to 9 years in actual calendar time. Semi-accelerated GEO tests, which impose both calendar and cycle loss, have reached 42 seasons, corresponding to 21 years in orbit. The specific energy range is 120 - 130 Wh/kg at EOL. According to the test results, we have confirmed that our Li-ion cell meets general requirements for space applications such as GEO and LEO, with quite high specific energy.

  13. Ice Shape Scaling for Aircraft in SLD Conditions

    NASA Technical Reports Server (NTRS)

    Anderson, David N.; Tsao, Jen-Ching

    2008-01-01

    This paper has summarized recent NASA research into scaling of SLD conditions with data from both SLD and Appendix C tests. Scaling results obtained by applying existing scaling methods for size and test-condition scaling will be reviewed. Large feather growth issues, including scaling approaches, will be discussed briefly. The material included applies only to unprotected, unswept geometries. Within the limits of the conditions tested to date, the results show that the similarity parameters needed for Appendix C scaling also can be used for SLD scaling, and no additional parameters are required. These results were based on visual comparisons of reference and scale ice shapes. Nearly all of the experimental results presented have been obtained in sea-level tunnels. The currently recommended methods to scale model size, icing limit and test conditions are described.

  14. Analysis of BJ493 diesel engine lubrication system properties

    NASA Astrophysics Data System (ADS)

    Liu, F.

    2017-12-01

    The BJ493ZLQ4A diesel engine design is based on the primary BJ493ZLQ3 model, whose exhaust level is upgraded to the National GB5 standard through improved design of the combustion and injection systems. Given these changes, the improved properties of the diesel lubrication system are analyzed in this paper. According to the structures, technical parameters and indices of the lubrication system, the lubrication system model of the BJ493ZLQ4A diesel engine was constructed using the Flowmaster flow simulation software. The properties of the diesel engine lubrication system, such as the oil flow rate and pressure at different rotational speeds, were analyzed for schemes involving large- and small-scale oil filters. The calculated values of the main oil channel pressure are in good agreement with the experimental results, which verifies the feasibility of the proposed model. The calculation results show that the main oil channel pressure and maximum oil flow rate for the large-scale oil filter scheme satisfy the design requirements, whereas the small-scale scheme yields too low a main oil channel pressure. Application of small-scale oil filters is therefore hazardous, and the large-scale scheme is recommended.

  15. Large-scale modular biofiltration system for effective odor removal in a composting facility.

    PubMed

    Lin, Yueh-Hsien; Chen, Yu-Pei; Ho, Kuo-Ling; Lee, Tsung-Yih; Tseng, Ching-Ping

    2013-01-01

    Several different foul odors, such as nitrogen-containing groups, sulfur-containing groups, and short-chain fatty acids, are commonly emitted from composting facilities. In this study, an experimental laboratory-scale bioreactor was scaled up to build a large-scale modular biofiltration system that can process 34 m^3/min of waste gases. This modular reactor system was proven effective in eliminating odors, with a 97% removal efficiency for 96 ppm ammonia, a 98% removal efficiency for 220 ppm amines, and a 100% removal efficiency for other odorous substances. The results of operational parameters indicate that this modular biofiltration system offers long-term operational stability. Specifically, a low pressure drop (<45 mm H2O per meter) was observed, indicating that the packing carrier in the bioreactor units does not require frequent replacement. Thus, this modular biofiltration system can be used in field applications to eliminate various odors within a compact working volume.

  16. Access control and privacy in large distributed systems

    NASA Technical Reports Server (NTRS)

    Leiner, B. M.; Bishop, M.

    1986-01-01

    Large scale distributed systems consist of workstations, mainframe computers, supercomputers and other types of servers, all connected by a computer network. These systems are being used in a variety of applications including the support of collaborative scientific research. In such an environment, issues of access control and privacy arise. Access control is required for several reasons, including the protection of sensitive resources and cost control. Privacy is also required for similar reasons, including the protection of a researcher's proprietary results. A possible architecture for integrating available computer and communications security technologies into a system that meets these requirements is described. This architecture is meant as a starting point for discussion, rather than the final answer.

  17. Managing Large Scale Project Analysis Teams through a Web Accessible Database

    NASA Technical Reports Server (NTRS)

    O'Neil, Daniel A.

    2008-01-01

    Large scale space programs analyze thousands of requirements while mitigating safety, performance, schedule, and cost risks. These efforts involve a variety of roles with interdependent use cases and goals. For example, study managers and facilitators identify ground-rules and assumptions for a collection of studies required for a program or project milestone. Task leaders derive product requirements from the ground rules and assumptions and describe activities to produce needed analytical products. Discipline specialists produce the specified products and load results into a file management system. Organizational and project managers provide the personnel and funds to conduct the tasks. Each role has responsibilities to establish information linkages and provide status reports to management. Projects conduct design and analysis cycles to refine designs to meet the requirements and implement risk mitigation plans. At the program level, integrated design and analysis cycle studies are conducted to eliminate every 'to-be-determined' and develop plans to mitigate every risk. At the agency level, strategic studies analyze different approaches to exploration architectures and campaigns. This paper describes a web-accessible database developed by NASA to coordinate and manage tasks at three organizational levels. Other topics in this paper cover integration technologies and techniques for process modeling and enterprise architectures.

  18. Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Massie, Michael J.; Morris, A. Terry

    2010-01-01

    Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.

  19. Scale up of large ALON® and spinel windows

    NASA Astrophysics Data System (ADS)

    Goldman, Lee M.; Kashalikar, Uday; Ramisetty, Mohan; Jha, Santosh; Sastri, Suri

    2017-05-01

    Aluminum Oxynitride (ALON® Transparent Ceramic) and Magnesia Aluminate Spinel (Spinel) combine broadband transparency with excellent mechanical properties. Their cubic structure means that they are transparent in their polycrystalline form, allowing them to be manufactured by conventional powder processing techniques. Surmet has scaled up its ALON® production capability to produce and deliver windows as large as 4.4 sq ft. We have also produced our first 6 sq ft window. We are in the process of producing 7 sq ft ALON® window blanks for armor applications, and scale-up to even larger, high optical quality blanks for Recce window applications is underway. Surmet also produces spinel for customers that require superior transmission at the longer wavelengths in the mid wave infra-red (MWIR). Spinel windows have been limited to smaller sizes than have been achieved with ALON. To date the largest spinel window produced is 11x18-in, and windows of 14x20-in size are currently in process. Surmet is now scaling up its spinel processing capability to produce high quality window blanks as large as 19x27-in for sensor applications.

  20. Human Factors in the Large: Experiences from Denmark, Finland and Canada in Moving Towards Regional and National Evaluations of Health Information System Usability. Contribution of the IMIA Human Factors Working Group.

    PubMed

    Kushniruk, A; Kaipio, J; Nieminen, M; Hyppönen, H; Lääveri, T; Nohr, C; Kanstrup, A M; Berg Christiansen, M; Kuo, M-H; Borycki, E

    2014-08-15

    The objective of this paper is to explore approaches to understanding the usability of health information systems at regional and national levels. Several different methods are discussed in case studies from Denmark, Finland and Canada. They range from small scale qualitative studies involving usability testing of systems to larger scale national level questionnaire studies aimed at assessing the use and usability of health information systems by entire groups of health professionals. It was found that regional and national usability studies can complement smaller scale usability studies, and that they are needed in order to understand larger trends regarding system usability. Despite adoption of EHRs, many health professionals rate the usability of the systems as low. A range of usability issues have been noted when data is collected on a large scale through use of widely distributed questionnaires and websites designed to monitor user perceptions of usability. As health information systems are deployed on a widespread basis, studies that examine systems used regionally or nationally are required. In addition, collection of large scale data on the usability of specific IT products is needed in order to complement smaller scale studies of specific systems.

  1. Human Factors in the Large: Experiences from Denmark, Finland and Canada in Moving Towards Regional and National Evaluations of Health Information System Usability

    PubMed Central

    Kaipio, J.; Nieminen, M.; Hyppönen, H.; Lääveri, T.; Nohr, C.; Kanstrup, A. M.; Berg Christiansen, M.; Kuo, M.-H.; Borycki, E.

    2014-01-01

    Objectives: The objective of this paper is to explore approaches to understanding the usability of health information systems at regional and national levels. Methods: Several different methods are discussed in case studies from Denmark, Finland and Canada. They range from small scale qualitative studies involving usability testing of systems to larger scale national level questionnaire studies aimed at assessing the use and usability of health information systems by entire groups of health professionals. Results: It was found that regional and national usability studies can complement smaller scale usability studies, and that they are needed in order to understand larger trends regarding system usability. Despite adoption of EHRs, many health professionals rate the usability of the systems as low. A range of usability issues have been noted when data is collected on a large scale through use of widely distributed questionnaires and websites designed to monitor user perceptions of usability. Conclusion: As health information systems are deployed on a widespread basis, studies that examine systems used regionally or nationally are required. In addition, collection of large scale data on the usability of specific IT products is needed in order to complement smaller scale studies of specific systems. PMID:25123725

  2. High-yield production of graphene by liquid-phase exfoliation of graphite.

    PubMed

    Hernandez, Yenny; Nicolosi, Valeria; Lotya, Mustafa; Blighe, Fiona M; Sun, Zhenyu; De, Sukanta; McGovern, I T; Holland, Brendan; Byrne, Michele; Gun'Ko, Yurii K; Boland, John J; Niraj, Peter; Duesberg, Georg; Krishnamurthy, Satheesh; Goodhue, Robbie; Hutchison, John; Scardaci, Vittorio; Ferrari, Andrea C; Coleman, Jonathan N

    2008-09-01

    Fully exploiting the properties of graphene will require a method for the mass production of this remarkable material. Two main routes are possible: large-scale growth or large-scale exfoliation. Here, we demonstrate graphene dispersions with concentrations up to approximately 0.01 mg/ml, produced by dispersion and exfoliation of graphite in organic solvents such as N-methyl-pyrrolidone. This is possible because the energy required to exfoliate graphene is balanced by the solvent-graphene interaction for solvents whose surface energies match that of graphene. We confirm the presence of individual graphene sheets by Raman spectroscopy, transmission electron microscopy and electron diffraction. Our method results in a monolayer yield of approximately 1 wt%, which could potentially be improved to 7-12 wt% with further processing. The absence of defects or oxides is confirmed by X-ray photoelectron, infrared and Raman spectroscopies. We are able to produce semi-transparent conducting films and conducting composites. Solution processing of graphene opens up a range of potential large-area applications, from device and sensor fabrication to liquid-phase chemistry.

  3. Discriminative Hierarchical K-Means Tree for Large-Scale Image Classification.

    PubMed

    Chen, Shizhi; Yang, Xiaodong; Tian, Yingli

    2015-09-01

    A key challenge in large-scale image classification is how to achieve efficiency in terms of both computation and memory without compromising classification accuracy. The learning-based classifiers achieve the state-of-the-art accuracies, but have been criticized for the computational complexity that grows linearly with the number of classes. The nonparametric nearest neighbor (NN)-based classifiers naturally handle large numbers of categories, but incur prohibitively expensive computation and memory costs. In this brief, we present a novel classification scheme, i.e., discriminative hierarchical K-means tree (D-HKTree), which combines the advantages of both learning-based and NN-based classifiers. The complexity of the D-HKTree only grows sublinearly with the number of categories, which is much better than the recent hierarchical support vector machines-based methods. The memory requirement is an order of magnitude less than the recent Naïve Bayesian NN-based approaches. The proposed D-HKTree classification scheme is evaluated on several challenging benchmark databases and achieves the state-of-the-art accuracies, while requiring significantly lower computation cost and memory.
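
    The sublinear lookup cost comes from descending a tree of cluster centroids rather than scanning all categories. The sketch below is a plain (non-discriminative) hierarchical k-means tree, not the authors' D-HKTree; the branching factor, leaf size and toy data are invented for illustration:

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs

      class HKTree:
          """Hierarchical k-means tree: each internal node splits its samples into
          `branching` clusters, so a query descends one branch per level and lookup
          cost grows with tree depth rather than with the number of categories."""

          def __init__(self, branching=4, min_leaf=20):
              self.branching = branching
              self.min_leaf = min_leaf

          def fit(self, X, y):
              self.root = self._build(np.asarray(X, float), np.asarray(y))
              return self

          def _build(self, X, y):
              if len(X) <= self.min_leaf or len(np.unique(y)) == 1:
                  vals, counts = np.unique(y, return_counts=True)
                  return {"label": vals[np.argmax(counts)]}      # leaf: majority label
              km = KMeans(n_clusters=self.branching, n_init=4, random_state=0).fit(X)
              masks = [km.labels_ == k for k in range(self.branching)]
              masks = [m for m in masks if m.any()]              # guard against empty clusters
              return {"centroids": np.array([X[m].mean(axis=0) for m in masks]),
                      "children": [self._build(X[m], y[m]) for m in masks]}

          def predict_one(self, x):
              node = self.root
              while "label" not in node:
                  k = np.argmin(np.linalg.norm(node["centroids"] - x, axis=1))
                  node = node["children"][k]
              return node["label"]

      # Toy usage on synthetic clusters standing in for image feature vectors.
      X, y = make_blobs(n_samples=2000, centers=10, random_state=0)
      tree = HKTree().fit(X[:1500], y[:1500])
      accuracy = np.mean([tree.predict_one(x) == t for x, t in zip(X[1500:], y[1500:])])
      print(f"toy accuracy: {accuracy:.2f}")

    The discriminative training and re-ranking that give D-HKTree its accuracy are omitted here; the point of the sketch is only the sublinear descent through centroids.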

  4. Microcarrier-based platforms for in vitro expansion and differentiation of human pluripotent stem cells in bioreactor culture systems.

    PubMed

    Badenes, Sara M; Fernandes, Tiago G; Rodrigues, Carlos A V; Diogo, Maria Margarida; Cabral, Joaquim M S

    2016-09-20

    Human pluripotent stem cells (hPSC) have attracted great attention as an unlimited source of cells for cell therapies and other in vitro biomedical applications such as drug screening, toxicology assays and disease modeling. The implementation of scalable culture platforms for the large-scale production of hPSC and their derivatives is mandatory to fulfill the requirement of obtaining large numbers of cells for these applications. Microcarrier technology has been emerging as an effective approach for the large scale ex vivo hPSC expansion and differentiation. This review presents recent achievements in hPSC microcarrier-based culture systems and discusses the crucial aspects that influence the performance of these culture platforms. Recent progress includes addressing chemically-defined culture conditions for manufacturing of hPSC and their derivatives, with the development of xeno-free media and microcarrier coatings to meet good manufacturing practice (GMP) quality requirements. Finally, examples of integrated platforms including hPSC expansion and directed differentiation to specific lineages are also presented in this review.

  5. Avoiding and tolerating latency in large-scale next-generation shared-memory multiprocessors

    NASA Technical Reports Server (NTRS)

    Probst, David K.

    1993-01-01

    A scalable solution to the memory-latency problem is necessary to prevent the large latencies of synchronization and memory operations inherent in large-scale shared-memory multiprocessors from reducing high performance. We distinguish latency avoidance and latency tolerance. Latency is avoided when data is brought to nearby locales for future reference. Latency is tolerated when references are overlapped with other computation. Latency-avoiding locales include: processor registers, data caches used temporally, and nearby memory modules. Tolerating communication latency requires parallelism, allowing the overlap of communication and computation. Latency-tolerating techniques include: vector pipelining, data caches used spatially, prefetching in various forms, and multithreading in various forms. Relaxing the consistency model permits increased use of avoidance and tolerance techniques. Each model is a mapping from the program text to sets of partial orders on program operations; it is a convention about which temporal precedences among program operations are necessary. Information about temporal locality and parallelism constrains the use of avoidance and tolerance techniques. Suitable architectural primitives and compiler technology are required to exploit the increased freedom to reorder and overlap operations in relaxed models.
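
    The tolerance techniques listed above all amount to keeping useful work in flight while a high-latency operation completes. A minimal sketch of that idea as a software prefetch / double-buffering loop (the sleep-based "fetch" and the timings are stand-ins, not a model of any particular machine):

      import time
      from concurrent.futures import ThreadPoolExecutor

      def fetch(block_id):
          """Stand-in for a high-latency remote-memory or network fetch."""
          time.sleep(0.05)                      # the latency we would like to hide
          return [block_id] * 1000

      def compute(block):
          """Stand-in for local computation of roughly comparable cost."""
          return sum(j * j for j in range(500_000)) + block[0]

      def run_blocking(n_blocks):
          # Latency fully exposed: every fetch stalls the computation.
          return sum(compute(fetch(i)) for i in range(n_blocks))

      def run_prefetching(n_blocks):
          # Latency tolerated: block i+1 is fetched while block i is processed.
          total = 0
          with ThreadPoolExecutor(max_workers=1) as pool:
              pending = pool.submit(fetch, 0)
              for i in range(n_blocks):
                  block = pending.result()
                  if i + 1 < n_blocks:
                      pending = pool.submit(fetch, i + 1)
                  total += compute(block)
          return total

      for fn in (run_blocking, run_prefetching):
          t0 = time.perf_counter()
          fn(20)
          print(f"{fn.__name__}: {time.perf_counter() - t0:.2f} s")

    Overlap only pays off when there is enough independent work to cover the latency, which is exactly the parallelism requirement noted above.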

  6. Metal stack optimization for low-power and high-density for N7-N5

    NASA Astrophysics Data System (ADS)

    Raghavan, P.; Firouzi, F.; Matti, L.; Debacker, P.; Baert, R.; Sherazi, S. M. Y.; Trivkovic, D.; Gerousis, V.; Dusa, M.; Ryckaert, J.; Tokei, Z.; Verkest, D.; McIntyre, G.; Ronse, K.

    2016-03-01

    One of the key challenges in scaling logic down to N7 and N5 is the requirement of self-aligned multiple patterning for the metal stack. This comes with a large backend cost, and therefore careful stack optimization is required. Various layers in the stack serve different purposes, and their choice of pitch and number of layers is therefore critical. Furthermore, at the ultra-scaled dimensions of N7 or N5, the number of patterning options is also much larger, ranging from multiple LE and EUV to SADP/SAQP, and the right choice among these is also needed. Patterning techniques that use a full grating of wires, like SADP/SAQP, introduce a high level of metal dummies into the design. This implies a large capacitance penalty and therefore large performance and power penalties, which are often mitigated with extra masking strategies. This paper discusses a holistic view of metal stack optimization, from standard cell level all the way to routing, and the corresponding trade-offs that exist in this space.

  7. Large-Scale Cubic-Scaling Random Phase Approximation Correlation Energy Calculations Using a Gaussian Basis.

    PubMed

    Wilhelm, Jan; Seewald, Patrick; Del Ben, Mauro; Hutter, Jürg

    2016-12-13

    We present an algorithm for computing the correlation energy in the random phase approximation (RPA) in a Gaussian basis requiring [Formula: see text] operations and [Formula: see text] memory. The method is based on the resolution of the identity (RI) with the overlap metric, a reformulation of RI-RPA in the Gaussian basis, imaginary time, and imaginary frequency integration techniques, and the use of sparse linear algebra. Additional memory reduction without extra computations can be achieved by an iterative scheme that overcomes the memory bottleneck of canonical RPA implementations. We report a massively parallel implementation that is the key for the application to large systems. Finally, cubic-scaling RPA is applied to a thousand water molecules using a correlation-consistent triple-ζ quality basis.
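
    The RI reformulation, imaginary-time techniques and sparse linear algebra are what make the method cubic-scaling and massively parallel; they are beyond a short example. As a toy illustration of the imaginary-frequency integration the method builds on, the sketch below evaluates the standard dielectric-matrix expression E_c = (1/2*pi) * Integral dw Tr[ln(1 - chi0(iw) v) + chi0(iw) v] for an invented model response matrix (the matrix size, the Lorentzian frequency decay and the quadrature are all assumptions):

      import numpy as np

      def rpa_correlation_energy(chi0, v, omegas, weights):
          """RPA correlation energy in the dielectric-matrix formulation,
          integrated numerically over imaginary frequency."""
          n = v.shape[0]
          e_c = 0.0
          for om, wt in zip(omegas, weights):
              m = chi0(om) @ v
              sign, logdet = np.linalg.slogdet(np.eye(n) - m)   # Tr ln(1 - M) = ln det(1 - M)
              e_c += wt * (sign * logdet + np.trace(m))
          return e_c / (2.0 * np.pi)

      # Invented model: a symmetric, negative-definite static response with a
      # Lorentzian decay in imaginary frequency, and a Coulomb-like matrix v.
      rng = np.random.default_rng(1)
      a = rng.standard_normal((30, 30))
      chi0_static = -(a @ a.T) / 30.0
      v = np.eye(30) + 0.1 * np.abs(rng.standard_normal((30, 30)))
      v = 0.5 * (v + v.T)

      def chi0(omega, omega0=1.0):
          return chi0_static / (1.0 + (omega / omega0) ** 2)

      # Gauss-Legendre nodes mapped from (-1, 1) onto (0, infinity).
      x, w = np.polynomial.legendre.leggauss(40)
      omegas = (1.0 + x) / (1.0 - x)
      weights = 2.0 * w / (1.0 - x) ** 2
      print(f"model E_c(RPA) = {rpa_correlation_energy(chi0, v, omegas, weights):.6f}")

    The production algorithm replaces the dense matrices used here with RI quantities in a Gaussian basis and exploits sparsity, which is what brings the cost down to cubic scaling for systems such as the thousand-water benchmark.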

  8. Scale and modeling issues in water resources planning

    USGS Publications Warehouse

    Lins, H.F.; Wolock, D.M.; McCabe, G.J.

    1997-01-01

    Resource planners and managers interested in utilizing climate model output as part of their operational activities immediately confront the dilemma of scale discordance. Their functional responsibilities cover relatively small geographical areas and necessarily require data of relatively high spatial resolution. Climate models cover a large geographical, i.e. global, domain and produce data at comparatively low spatial resolution. Although the scale differences between model output and planning input are large, several techniques have been developed for disaggregating climate model output to a scale appropriate for use in water resource planning and management applications. With techniques in hand to reduce the limitations imposed by scale discordance, water resource professionals must now confront a more fundamental constraint on the use of climate models-the inability to produce accurate representations and forecasts of regional climate. Given the current capabilities of climate models, and the likelihood that the uncertainty associated with long-term climate model forecasts will remain high for some years to come, the water resources planning community may find it impractical to utilize such forecasts operationally.
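
    One of the simpler disaggregation techniques alluded to above is the "delta" (change-factor) method, in which the coarse climate-model change signal is applied to a finer-resolution observed climatology. A minimal sketch, with invented grids and temperatures and nearest-neighbour disaggregation of the coarse cells assumed:

      import numpy as np

      def delta_method_downscale(obs_fine, model_hist_coarse, model_future_coarse, factor):
          """Apply the coarse-grid projected change (future minus historical baseline)
          to a fine-resolution observed climatology. `factor` is the ratio of fine to
          coarse grid spacing; np.kron repeats each coarse cell over its footprint."""
          delta_coarse = model_future_coarse - model_hist_coarse
          delta_fine = np.kron(delta_coarse, np.ones((factor, factor)))
          return obs_fine + delta_fine

      # Invented example: a 4x4 climate-model grid disaggregated onto a 32x32 basin grid.
      rng = np.random.default_rng(0)
      obs_fine = 12.0 + rng.normal(0.0, 1.5, size=(32, 32))     # observed mean temperature, degC
      model_hist = 11.0 + rng.normal(0.0, 1.0, size=(4, 4))
      model_future = model_hist + 2.0 + rng.normal(0.0, 0.3, size=(4, 4))
      scenario = delta_method_downscale(obs_fine, model_hist, model_future, factor=8)
      print(f"basin-mean change: {scenario.mean() - obs_fine.mean():+.2f} degC")

    Such disaggregation addresses the scale discordance but not the underlying forecast uncertainty, which is the more fundamental constraint the abstract identifies.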

  9. Inviscid criterion for decomposing scales

    NASA Astrophysics Data System (ADS)

    Zhao, Dongxiao; Aluie, Hussein

    2018-05-01

    The proper scale decomposition in flows with significant density variations is not as straightforward as in incompressible flows, with many possible ways to define a "length scale." A choice can be made according to the so-called inviscid criterion [Aluie, Physica D 24, 54 (2013), 10.1016/j.physd.2012.12.009]. It is a kinematic requirement that a scale decomposition yield negligible viscous effects at large enough length scales. It has been proved [Aluie, Physica D 24, 54 (2013), 10.1016/j.physd.2012.12.009] recently that a Favre decomposition satisfies the inviscid criterion, which is necessary to unravel inertial-range dynamics and the cascade. Here we present numerical demonstrations of those results. We also show that two other commonly used decompositions can violate the inviscid criterion and, therefore, are not suitable to study inertial-range dynamics in variable-density and compressible turbulence. Our results have practical modeling implications in showing that viscous terms in Large Eddy Simulations do not need to be modeled and can be neglected.
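
    For a concrete picture of the decomposition being tested, the sketch below contrasts a conventional (unweighted) top-hat filter with the Favre (density-weighted) filter, tilde(u) = bar(rho*u)/bar(rho), on a synthetic one-dimensional variable-density field (the field, filter width and use of SciPy's uniform filter are assumptions for illustration):

      import numpy as np
      from scipy.ndimage import uniform_filter1d

      def bar(f, width=64):
          """Conventional (unweighted) top-hat filter at scale `width` grid points."""
          return uniform_filter1d(f, size=width, mode="wrap")

      def favre(rho, f, width=64):
          """Favre (density-weighted) filter: tilde(f) = bar(rho*f) / bar(rho)."""
          return bar(rho * f, width) / bar(rho, width)

      # Synthetic 1-D variable-density field: a large-scale velocity plus small-scale
      # fluctuations that are correlated with the density variations.
      n = 4096
      x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
      rng = np.random.default_rng(2)
      rho = 1.0 + 0.8 * np.sin(3.0 * x)                       # strong density variation
      u = np.sin(x) + 0.2 * np.sin(40.0 * x) * rho + 0.05 * rng.standard_normal(n)

      u_bar = bar(u)             # unweighted large-scale velocity
      u_tilde = favre(rho, u)    # Favre-filtered large-scale velocity
      print(f"max |u_tilde - u_bar| = {np.max(np.abs(u_tilde - u_bar)):.3f}")

    When the density is constant the two filtered velocities coincide; the difference printed above is entirely due to the density variations, which is why the choice of decomposition matters for inertial-range budgets.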

  10. Correlation Lengths for Estimating the Large-Scale Carbon and Heat Content of the Southern Ocean

    NASA Astrophysics Data System (ADS)

    Mazloff, M. R.; Cornuelle, B. D.; Gille, S. T.; Verdy, A.

    2018-02-01

    The spatial correlation scales of oceanic dissolved inorganic carbon, heat content, and carbon and heat exchanges with the atmosphere are estimated from a realistic numerical simulation of the Southern Ocean. Biases in the model are assessed by comparing the simulated sea surface height and temperature scales to those derived from optimally interpolated satellite measurements. While these products do not resolve all ocean scales, they are representative of the climate scale variability we aim to estimate. Results show that constraining the carbon and heat inventory between 35°S and 70°S on time-scales longer than 90 days requires approximately 100 optimally spaced measurement platforms: approximately one platform every 20° longitude by 6° latitude. Carbon flux has slightly longer zonal scales, and requires a coverage of approximately 30° by 6°. Heat flux has much longer scales, and thus a platform distribution of approximately 90° by 10° would be sufficient. Fluxes, however, have significant subseasonal variability. For all fields, and especially fluxes, sustained measurements in time are required to prevent aliasing of the eddy signals into the longer climate scale signals. Our results imply a minimum of 100 biogeochemical-Argo floats are required to monitor the Southern Ocean carbon and heat content and air-sea exchanges on time-scales longer than 90 days. However, an estimate of formal mapping error using the current Argo array implies that in practice even an array of 600 floats (a nominal float density of about 1 every 7° longitude by 3° latitude) will result in nonnegligible uncertainty in estimating climate signals.
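
    The platform counts quoted above follow from simple coverage arithmetic over the 35°S-70°S band; the sketch below reproduces them for the three spacings given (a uniform platform grid is assumed, and rounding of the correlation scales accounts for the small difference from the quoted ~100):

      import math

      def platforms_needed(dlon_deg, dlat_deg, lat_south=-70.0, lat_north=-35.0):
          """Number of uniformly spaced platforms needed to sample the band between
          lat_south and lat_north at one platform per (dlon_deg x dlat_deg) cell."""
          n_lon = math.ceil(360.0 / dlon_deg)
          n_lat = math.ceil((lat_north - lat_south) / dlat_deg)
          return n_lon * n_lat

      # Spacings quoted in the abstract (degrees longitude x degrees latitude).
      for name, dlon, dlat in [("carbon/heat inventory", 20, 6),
                               ("carbon flux",           30, 6),
                               ("heat flux",             90, 10)]:
          print(f"{name:22s}: ~{platforms_needed(dlon, dlat)} platforms")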

  11. A Comprehensive Analysis of Multiscale Field-Aligned Currents: Characteristics, Controlling Parameters, and Relationships

    NASA Astrophysics Data System (ADS)

    McGranaghan, Ryan M.; Mannucci, Anthony J.; Forsyth, Colin

    2017-12-01

    We explore the characteristics, controlling parameters, and relationships of multiscale field-aligned currents (FACs) using a rigorous, comprehensive, and cross-platform analysis. Our unique approach combines FAC data from the Swarm satellites and the Advanced Magnetosphere and Planetary Electrodynamics Response Experiment (AMPERE) to create a database of small-scale (˜10-150 km, <1° latitudinal width), mesoscale (˜150-250 km, 1-2° latitudinal width), and large-scale (>250 km) FACs. We examine these data for the repeatable behavior of FACs across scales (i.e., the characteristics), the dependence on the interplanetary magnetic field orientation, and the degree to which each scale "departs" from nominal large-scale specification. We retrieve new information by utilizing magnetic latitude and local time dependence, correlation analyses, and quantification of the departure of smaller from larger scales. We find that (1) FACs characteristics and dependence on controlling parameters do not map between scales in a straight forward manner, (2) relationships between FAC scales exhibit local time dependence, and (3) the dayside high-latitude region is characterized by remarkably distinct FAC behavior when analyzed at different scales, and the locations of distinction correspond to "anomalous" ionosphere-thermosphere behavior. Comparing with nominal large-scale FACs, we find that differences are characterized by a horseshoe shape, maximizing across dayside local times, and that difference magnitudes increase when smaller-scale observed FACs are considered. We suggest that both new physics and increased resolution of models are required to address the multiscale complexities. We include a summary table of our findings to provide a quick reference for differences between multiscale FACs.
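
    A small sketch of the scale separation used to build the database described above, binning FAC structures into the three bands by their latitudinal width (the example widths are invented; the band edges are those quoted in the abstract):

      import numpy as np

      # Scale bands used above (latitudinal width of the current structure).
      BANDS = [("small-scale", 0.0, 1.0),     # < 1 deg latitude (~10-150 km)
               ("mesoscale",   1.0, 2.0),     # 1-2 deg latitude (~150-250 km)
               ("large-scale", 2.0, np.inf)]  # > 2 deg latitude (> 250 km)

      def classify_fac(width_deg):
          """Assign a FAC structure to a scale band by its latitudinal width."""
          for name, lo, hi in BANDS:
              if lo <= width_deg < hi:
                  return name
          raise ValueError("width must be non-negative")

      # Invented widths (degrees latitude) standing in for Swarm/AMPERE-derived structures.
      for w in [0.2, 0.7, 1.4, 1.9, 2.5, 4.0]:
          print(f"{w:4.1f} deg -> {classify_fac(w)}")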

  12. Towards Understanding Methane Emissions from Abandoned Wells

    EPA Science Inventory

    Reconciliation of large-scale top-down methane measurements and bottom-up inventories requires complete accounting of source types. Methane emissions from abandoned oil and gas wells is an area of uncertainty. This presentation reviews progress to characterize the potential inv...

  13. Describing Ecosystem Complexity through Integrated Catchment Modeling

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Tenhunen, J. D.; Peiffer, S.

    2011-12-01

    Land use and climate change have been implicated in reduced ecosystem services (i.e., high quality water yield, biodiversity, and agricultural yield). The prediction of ecosystem services expected under future land use decisions and changing climate conditions has become increasingly important. Complex policy and management decisions require the integration of physical, economic, and social data over several scales to assess effects on water resources and ecology. Field-based meteorology, hydrology, soil physics, plant production, solute and sediment transport, economic, and social behavior data were measured in a South Korean catchment. A variety of models are being used to simulate plot and field scale experiments within the catchment. Results from each of the local-scale models provide identification of sensitive, local-scale parameters which are then used as inputs into a large-scale watershed model. We used the spatially distributed SWAT model to synthesize the experimental field data throughout the catchment. The approach of our study was that the range in local-scale model parameter results can be used to define the sensitivity and uncertainty in the large-scale watershed model. Further, this example shows how research can be structured for scientific results describing complex ecosystems and landscapes where cross-disciplinary linkages benefit the end result. The field-based and modeling framework described is being used to develop scenarios to examine spatial and temporal changes in land use practices and climatic effects on water quantity, water quality, and sediment transport. Development of accurate modeling scenarios requires understanding the social relationship between individual and policy driven land management practices and the value of sustainable resources to all shareholders.

  14. Improved technique that allows the performance of large-scale SNP genotyping on DNA immobilized by FTA technology.

    PubMed

    He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe

    2007-01-01

    FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card is often, however, insufficient for the testing of the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique to perform large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card-bound DNA produces sufficient material for the determination of thousands of SNP genotypes.

  15. Polarization of the prompt gamma-ray emission from the gamma-ray burst of 6 December 2002.

    PubMed

    Coburn, Wayne; Boggs, Steven E

    2003-05-22

    Observations of the afterglows of gamma-ray bursts (GRBs) have revealed that they lie at cosmological distances, and so correspond to the release of an enormous amount of energy. The nature of the central engine that powers these events and the prompt gamma-ray emission mechanism itself remain enigmatic because, once a relativistic fireball is created, the physics of the afterglow is insensitive to the nature of the progenitor. Here we report the discovery of linear polarization in the prompt gamma-ray emission from GRB021206, which indicates that it is synchrotron emission from relativistic electrons in a strong magnetic field. The polarization is at the theoretical maximum, which requires a uniform, large-scale magnetic field over the gamma-ray emission region. A large-scale magnetic field constrains possible progenitors to those either having or producing organized fields. We suggest that the large magnetic energy densities in the progenitor environment (comparable to the kinetic energy densities of the fireball), combined with the large-scale structure of the field, indicate that magnetic fields drive the GRB explosion.

  16. Panoptes: web-based exploration of large scale genome variation data.

    PubMed

    Vauterin, Paul; Jeffery, Ben; Miles, Alistair; Amato, Roberto; Hart, Lee; Wright, Ian; Kwiatkowski, Dominic

    2017-10-15

    The size and complexity of modern large-scale genome variation studies demand novel approaches for exploring and sharing the data. In order to unlock the potential of these data for a broad audience of scientists with various areas of expertise, a unified exploration framework is required that is accessible, coherent and user-friendly. Panoptes is an open-source software framework for collaborative visual exploration of large-scale genome variation data and associated metadata in a web browser. It relies on technology choices that allow it to operate in near real-time on very large datasets. It can be used to browse rich, hybrid content in a coherent way, and offers interactive visual analytics approaches to assist the exploration. We illustrate its application using genome variation data of Anopheles gambiae, Plasmodium falciparum and Plasmodium vivax. Freely available at https://github.com/cggh/panoptes, under the GNU Affero General Public License. paul.vauterin@gmail.com. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  17. Parallel and serial computing tools for testing single-locus and epistatic SNP effects of quantitative traits in genome-wide association studies

    PubMed Central

    Ma, Li; Runesha, H Birali; Dvorkin, Daniel; Garbe, John R; Da, Yang

    2008-01-01

    Background Genome-wide association studies (GWAS) using single nucleotide polymorphism (SNP) markers provide opportunities to detect epistatic SNPs associated with quantitative traits and to detect the exact mode of an epistasis effect. Computational difficulty is the main bottleneck for epistasis testing in large scale GWAS. Results The EPISNPmpi and EPISNP computer programs were developed for testing single-locus and epistatic SNP effects on quantitative traits in GWAS, including tests of three single-locus effects for each SNP (SNP genotypic effect, additive and dominance effects) and five epistasis effects for each pair of SNPs (two-locus interaction, additive × additive, additive × dominance, dominance × additive, and dominance × dominance) based on the extended Kempthorne model. EPISNPmpi is the parallel computing program for epistasis testing in large scale GWAS and achieved excellent scalability for large scale analysis and portability for various parallel computing platforms. EPISNP is the serial computing program based on the EPISNPmpi code for epistasis testing in small scale GWAS using commonly available operating systems and computer hardware. Three serial computing utility programs were developed for graphical viewing of test results and epistasis networks, and for estimating CPU time and disk space requirements. Conclusion The EPISNPmpi parallel computing program provides an effective computing tool for epistasis testing in large scale GWAS, and the epiSNP serial computing programs are convenient tools for epistasis analysis in small scale GWAS using commonly available computer hardware. PMID:18644146
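
    The kind of pairwise scan that makes epistasis testing the computational bottleneck can be sketched as follows. This is an illustrative regression-based interaction test in Python; it is not the EPISNPmpi code and does not reproduce the extended Kempthorne model, and all variable names are invented for the example.

      import numpy as np
      from itertools import combinations

      def interaction_f_test(y, g1, g2):
          """F-test comparing an additive two-SNP model against one with an interaction term."""
          n = len(y)
          X_red = np.column_stack([np.ones(n), g1, g2])       # intercept + additive SNP effects
          X_full = np.column_stack([X_red, g1 * g2])          # adds the epistatic (product) term
          rss_red = np.sum((y - X_red @ np.linalg.lstsq(X_red, y, rcond=None)[0]) ** 2)
          rss_full = np.sum((y - X_full @ np.linalg.lstsq(X_full, y, rcond=None)[0]) ** 2)
          df1 = X_full.shape[1] - X_red.shape[1]
          df2 = n - X_full.shape[1]
          return ((rss_red - rss_full) / df1) / (rss_full / df2)   # F statistic

      rng = np.random.default_rng(0)
      genotypes = rng.integers(0, 3, size=(500, 20))          # 500 individuals, 20 SNPs coded 0/1/2
      trait = rng.normal(size=500)                            # quantitative phenotype
      f_stats = {(i, j): interaction_f_test(trait, genotypes[:, i], genotypes[:, j])
                 for i, j in combinations(range(genotypes.shape[1]), 2)}

    The number of SNP pairs grows quadratically with the number of markers, which is what motivates the MPI parallelization described in the paper.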

  18. Delivering better power: the role of simulation in reducing the environmental impact of aircraft engines.

    PubMed

    Menzies, Kevin

    2014-08-13

    The growth in simulation capability over the past 20 years has led to remarkable changes in the design process for gas turbines. The availability of relatively cheap computational power coupled to improvements in numerical methods and physical modelling in simulation codes have enabled the development of aircraft propulsion systems that are more powerful and yet more efficient than ever before. However, the design challenges are correspondingly greater, especially to reduce environmental impact. The simulation requirements to achieve a reduced environmental impact are described along with the implications of continued growth in available computational power. It is concluded that achieving the environmental goals will demand large-scale multi-disciplinary simulations requiring significantly increased computational power, to enable optimization of the airframe and propulsion system over the entire operational envelope. However even with massive parallelization, the limits imposed by communications latency will constrain the time required to achieve a solution, and therefore the position of such large-scale calculations in the industrial design process. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  19. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists that are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple run time environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that previously took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and describe the potential deployment of this information technology with other NASA applications.

  20. Fabrication and testing of the first 8.4-m off-axis segment for the Giant Magellan Telescope

    NASA Astrophysics Data System (ADS)

    Martin, H. M.; Allen, R. G.; Burge, J. H.; Kim, D. W.; Kingsley, J. S.; Tuell, M. T.; West, S. C.; Zhao, C.; Zobrist, T.

    2010-07-01

    The primary mirror of the Giant Magellan Telescope consists of seven 8.4 m segments which are borosilicate honeycomb sandwich mirrors. Fabrication and testing of the off-axis segments is challenging and has led to a number of innovations in manufacturing technology. The polishing system includes an actively stressed lap that follows the shape of the aspheric surface, used for large-scale figuring and smoothing, and a passive "rigid conformal lap" for small-scale figuring and smoothing. Four independent measurement systems support all stages of fabrication and provide redundant measurements of all critical parameters including mirror figure, radius of curvature, off-axis distance and clocking. The first measurement uses a laser tracker to scan the surface, with external references to compensate for rigid body displacements and refractive index variations. The main optical test is a full-aperture interferometric measurement, but it requires an asymmetric null corrector with three elements, including a 3.75 m mirror and a computer-generated hologram, to compensate for the surface's 14 mm departure from the best-fit sphere. Two additional optical tests measure large-scale and small-scale structure, with some overlap. Together these measurements provide high confidence that the segments meet all requirements.

  1. Modelling turbulent boundary layer flow over fractal-like multiscale terrain using large-eddy simulations and analytical tools.

    PubMed

    Yang, X I A; Meneveau, C

    2017-04-13

    In recent years, there has been growing interest in large-eddy simulation (LES) modelling of atmospheric boundary layers interacting with arrays of wind turbines on complex terrain. However, such terrain typically contains geometric features and roughness elements reaching down to small scales that typically cannot be resolved numerically. Thus subgrid-scale models for the unresolved features of the bottom roughness are needed for LES. Such knowledge is also required to model the effects of the ground surface 'underneath' a wind farm. Here we adapt a dynamic approach to determine subgrid-scale roughness parametrizations and apply it for the case of rough surfaces composed of cuboidal elements with broad size distributions, containing many scales. We first investigate the flow response to ground roughness of a few scales. LES with the dynamic roughness model which accounts for the drag of unresolved roughness is shown to provide resolution-independent results for the mean velocity distribution. Moreover, we develop an analytical roughness model that accounts for the sheltering effects of large-scale on small-scale roughness elements. Taking into account the sheltering effect, constraints from fundamental conservation laws, and assumptions of geometric self-similarity, the analytical roughness model is shown to provide analytical predictions that agree well with roughness parameters determined from LES. This article is part of the themed issue 'Wind energy in complex terrains'. © 2017 The Author(s).
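
    For context, the effective roughness parameter that such subgrid-scale models must supply enters through the standard rough-wall logarithmic law (a textbook relation, not a result derived in this paper):

      \bar{u}(z) = \frac{u_*}{\kappa} \ln\!\left( \frac{z - d}{z_0} \right),

    where u_* is the friction velocity, κ ≈ 0.4 the von Kármán constant, d the displacement height, and z_0 the effective roughness length that the dynamic and analytical models above aim to determine.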

  2. Parameterizing a Large-scale Water Balance Model in Regions with Sparse Data: The Tigris-Euphrates River Basins as an Example

    NASA Astrophysics Data System (ADS)

    Flint, A. L.; Flint, L. E.

    2010-12-01

    The characterization of hydrologic response to current and future climates is of increasing importance to many countries around the world that rely heavily on changing and uncertain water supplies. Large-scale models that can calculate a spatially distributed water balance and elucidate groundwater recharge and surface water flows for large river basins provide a basis for estimating changes due to future climate projections. Unfortunately many regions in the world have very sparse data for parameterization or calibration of hydrologic models. For this study, the Tigris and Euphrates River basins were used for the development of a regional water balance model at 180-m spatial scale, using the Basin Characterization Model, to estimate historical changes in groundwater recharge and surface water flows in the countries of Turkey, Syria, Iraq, Iran, and Saudi Arabia. Necessary input parameters include precipitation, air temperature, potential evapotranspiration (PET), soil properties and thickness, and estimates of bulk permeability from geologic units. Data necessary for calibration include snow cover, reservoir volumes (from satellite data and historic, pre-reservoir elevation data) and streamflow measurements. Global datasets for precipitation, air temperature, and PET were available at very large spatial scales (50 km) through world-scale databases and at finer scales through WorldClim climate data, and required downscaling to fine scales for model input. Soils data were available through world-scale soil maps but required parameterization on the basis of textural data to estimate soil hydrologic properties. Soil depth was estimated from geomorphologic interpretation and maps of quaternary deposits, and geologic materials were categorized from generalized geologic maps of each country. Estimates of bedrock permeability were made on the basis of literature and data from drillers' logs and adjusted during calibration of the model to streamflow measurements where available. Results of historical water balance calculations throughout the Tigris and Euphrates River basins will be shown along with details of processing input data to provide spatial continuity and downscaling. Basic water availability analysis for recharge and runoff is readily available from a deterministic solar radiation energy balance model, a global potential evapotranspiration model, and global estimates of precipitation and air temperature. Future climate estimates can be readily applied to the same water and energy balance models to evaluate future water availability for countries around the globe.
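
    As a rough illustration of the type of calculation such a model performs (not the Basin Characterization Model itself, which runs on 180-m grids with snow accumulation, energy-balance PET, and calibrated bedrock permeability), a single-cell monthly bucket water balance might look like this; all parameter names and values are invented for the sketch.

      def water_balance(precip, pet, soil_max=150.0, k_bedrock=0.3):
          """Monthly bucket model: returns (recharge, runoff) series in mm.

          precip, pet : monthly precipitation and potential evapotranspiration (mm)
          soil_max    : plant-available soil water storage capacity (mm)
          k_bedrock   : fraction of excess water that percolates to bedrock
          """
          storage, recharge, runoff = 0.0, [], []
          for p, e in zip(precip, pet):
              storage += p
              aet = min(e, storage)                  # actual ET limited by available water
              storage -= aet
              excess = max(0.0, storage - soil_max)  # water the soil cannot hold
              storage = min(storage, soil_max)
              recharge.append(k_bedrock * excess)
              runoff.append((1.0 - k_bedrock) * excess)
          return recharge, runoff

      rech, runf = water_balance([120, 90, 40, 5, 0, 2], [20, 40, 90, 140, 180, 170])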

  3. Development of geopolitically relevant ranking criteria for geoengineering methods

    NASA Astrophysics Data System (ADS)

    Boyd, Philip W.

    2016-11-01

    A decade has passed since Paul Crutzen published his editorial essay on the potential for stratospheric geoengineering to cool the climate in the Anthropocene. He synthesized the effects of the 1991 Pinatubo eruption on the planet's radiative budget and used this large-scale event to broaden and deepen the debate on the challenges and opportunities of large-scale geoengineering. Pinatubo had pronounced effects, both in the short and longer term (months to years), on the ocean, land, and the atmosphere. This rich set of data on how a large-scale natural event influences many regional and global facets of the Earth System provides a comprehensive viewpoint to assess the wider ramifications of geoengineering. Here, I use the Pinatubo archives to develop a range of geopolitically relevant ranking criteria for a suite of different geoengineering approaches. The criteria focus on the spatial scales needed for geoengineering and whether large-scale dispersal is a necessary requirement for a technique to deliver significant cooling or carbon dioxide reductions. These categories in turn inform whether geoengineering approaches are amenable to participation (the "democracy of geoengineering") and whether they will lead to transboundary issues that could precipitate geopolitical conflicts. The criteria provide the requisite detail to demarcate different geoengineering approaches in the context of geopolitics. Hence, they offer another tool that can be used in the development of a more holistic approach to the debate on geoengineering.

  4. Big questions, big science: meeting the challenges of global ecology.

    PubMed

    Schimel, David; Keller, Michael

    2015-04-01

    Ecologists are increasingly tackling questions that require significant infrastructure, large experiments, networks of observations, and complex data and computation. Key hypotheses in ecology increasingly require more investment, and larger data sets to be tested, than can be collected by a single investigator's or a single group of investigators' labs, sustained for longer than a typical grant. Large-scale projects are expensive, so their scientific return on the investment has to justify the opportunity cost: the science foregone because resources were expended on a large project rather than supporting a number of individual projects. In addition, their management must be accountable and efficient in the use of significant resources, requiring the use of formal systems engineering and project management to mitigate risk of failure. Mapping the scientific method into formal project management requires both scientists able to work in that context and a project implementation team sensitive to the unique requirements of ecology. Sponsoring agencies, under pressure from external and internal forces, experience many pressures that push them towards counterproductive project management, but a scientific community aware of and experienced in large-project science can mitigate these tendencies. For big ecology to result in great science, ecologists must become informed, aware and engaged in the advocacy and governance of large ecological projects.

  5. A large-scale, long-term study of scale drift: The micro view and the macro view

    NASA Astrophysics Data System (ADS)

    He, W.; Li, S.; Kingsbury, G. G.

    2016-11-01

    The development of measurement scales for use across years and grades in educational settings provides unique challenges, as instructional approaches, instructional materials, and content standards all change periodically. This study examined the measurement stability of a set of Rasch measurement scales that have been in place for almost 40 years. In order to investigate the stability of these scales, item responses were collected from a large set of students who took operational adaptive tests using items calibrated to the measurement scales. For the four scales that were examined, item samples ranged from 2183 to 7923 items. Each item was administered to at least 500 students in each grade level, resulting in approximately 3000 responses per item. Stability was examined at the micro level by analysing changes in item parameter estimates that have occurred since the items were first calibrated. It was also examined at the macro level, involving groups of items and overall test scores for students. Results indicated that individual items had changes in their parameter estimates, which require further analysis and possible recalibration. At the same time, the results at the total score level indicate substantial stability in the measurement scales over the span of their use.
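
    For reference, the dichotomous Rasch model underlying these scales gives the probability of a correct response as a function of person ability θ and item difficulty b (a standard formulation, not specific to this study):

      P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)},

    so scale drift appears as systematic change in the estimated item difficulties b_i between the original calibration and the recalibrations examined above.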

  6. The design of dapog rice seeder model for laboratory scale

    NASA Astrophysics Data System (ADS)

    Purba, UI; Rizaldi, T.; Sumono; Sigalingging, R.

    2018-02-01

    The dapog system is a method of seeding rice using a special nursery tray. Rice seedlings raised with the dapog system can be produced in the form of higher-quality, uniform seedling rolls. This study aims to reduce the cost of building the large-scale apparatus by designing a small-scale model that can be used for learning in the laboratory. Parameters observed were the uniformity of soil, seed, and fertilizer distribution; losses of soil, seed, and fertilizer; the effective capacity of the apparatus; and the power requirement. The results showed high uniformity of soil, seed and fertilizer at 92.8%, 1-3 seeds/cm2 and 82%, respectively. The scattered material losses for soil, seed and fertilizer were 6.23%, 2.7% and 2.23%, respectively. The effective capacity of the apparatus was 360 boxes/hour with a stated power requirement of 237.5 kWh.

  7. Limited accessibility to designs and results of Japanese large-scale clinical trials for cardiovascular diseases.

    PubMed

    Sawata, Hiroshi; Ueshima, Kenji; Tsutani, Kiichiro

    2011-04-14

    Clinical evidence is important for improving the treatment of patients by health care providers. In the study of cardiovascular diseases, large-scale clinical trials involving thousands of participants are required to evaluate the risks of cardiac events and/or death. The problems encountered in conducting the Japanese Acute Myocardial Infarction Prospective (JAMP) study highlighted the difficulties involved in obtaining the financial and infrastructural resources necessary for conducting large-scale clinical trials. The objectives of the current study were: 1) to clarify the current funding and infrastructural environment surrounding large-scale clinical trials in cardiovascular and metabolic diseases in Japan, and 2) to find ways to improve the environment surrounding clinical trials in Japan more generally. We examined clinical trials examining cardiovascular diseases that evaluated true endpoints and involved 300 or more participants using Pub-Med, Ichushi (by the Japan Medical Abstracts Society, a non-profit organization), websites of related medical societies, the University Hospital Medical Information Network (UMIN) Clinical Trials Registry, and clinicaltrials.gov at three points in time: 30 November, 2004, 25 February, 2007 and 25 July, 2009. We found a total of 152 trials that met our criteria for 'large-scale clinical trials' examining cardiovascular diseases in Japan. Of these, 72.4% were randomized controlled trials (RCTs). Of 152 trials, 9.2% of the trials examined more than 10,000 participants, and 42.8% examined between 1,000 and 10,000 participants. The number of large-scale clinical trials markedly increased from 2001 to 2004, but suddenly decreased in 2007, then began to increase again. Ischemic heart disease (39.5%) was the most common target disease. Most of the larger-scale trials were funded by private organizations such as pharmaceutical companies. The designs and results of 13 trials were not disclosed. To improve the quality of clinical trials, all sponsors should register trials and disclose the funding sources before the enrolment of participants, and publish their results after the completion of each study.

  8. Large-scale synthesis of arrays of high-aspect-ratio rigid vertically aligned carbon nanofibres

    NASA Astrophysics Data System (ADS)

    Melechko, A. V.; McKnight, T. E.; Hensley, D. K.; Guillorn, M. A.; Borisevich, A. Y.; Merkulov, V. I.; Lowndes, D. H.; Simpson, M. L.

    2003-09-01

    We report on techniques for catalytic synthesis of rigid, high-aspect-ratio, vertically aligned carbon nanofibres by dc plasma enhanced chemical vapour deposition that are tailored for applications that require arrays of individual fibres that feature long fibre lengths (up to 20 µm) such as scanning probe microscopy, penetrant cell and tissue probing arrays and mechanical insertion approaches for gene delivery to cell cultures. We demonstrate that the definition of catalyst nanoparticles is the critical step that enables growth of individual, long-length fibres and discuss methods for catalyst particle preparation that allow the growth of individual isolated nanofibres from catalyst dots with diameters as large as 500 nm. This development enables photolithographic definition of catalyst and therefore the inexpensive, large-scale production of such arrays.

  9. Characterization of Sound Radiation by Unresolved Scales of Motion in Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Rubinstein, Robert; Zhou, Ye

    1999-01-01

    Evaluation of the sound sources in a high Reynolds number turbulent flow requires time-accurate resolution of an extremely large number of scales of motion. Direct numerical simulations will therefore remain infeasible for the foreseeable future: although current large eddy simulation methods can resolve the largest scales of motion accurately, they must leave some scales of motion unresolved. A priori studies show that acoustic power can be underestimated significantly if the contribution of these unresolved scales is simply neglected. In this paper, the problem of evaluating the sound radiation properties of the unresolved, subgrid-scale motions is approached in the spirit of the simplest subgrid stress models: the unresolved velocity field is treated as isotropic turbulence with statistical descriptors evaluated from the resolved field. The theory of isotropic turbulence is applied to derive formulas for the total power and the power spectral density of the sound radiated by a filtered velocity field. These quantities are compared with the corresponding quantities for the unfiltered field for a range of filter widths and Reynolds numbers.

  10. Bottom-up production of meta-atoms for optical magnetism in visible and NIR light

    NASA Astrophysics Data System (ADS)

    Barois, Philippe; Ponsinet, Virginie; Baron, Alexandre; Richetti, Philippe

    2018-02-01

    Many unusual optical properties of metamaterials arise from the magnetic response of engineered structures of sub-wavelength size (meta-atoms) exposed to light. The top-down approach, whereby engineered nanostructures of well-defined morphology are engraved on a surface, has proved successful for the generation of strong optical magnetism. It faces, however, the limitations of high cost and small active area in visible light, where nanometre resolution is needed. The bottom-up approach, whereby the fabrication of large-volume or large-area metamaterials results from the combination of nanochemistry and self-assembly techniques, may constitute a cost-effective alternative. This approach nevertheless requires the large-scale production of functional building blocks (meta-atoms) bearing a strong magnetic optical response. We propose in this paper a few routes that lead to the large-scale synthesis of magnetic metamaterials operating in visible or near-IR light.

  11. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

    This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep large numbers of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that distributed-memory architecture is preferable to shared memory for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.

  12. Template Interfaces for Agile Parallel Data-Intensive Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramakrishnan, Lavanya; Gunter, Daniel; Pastorello, Gilberto Z.

    Tigres provides a programming library to compose and execute large-scale data-intensive scientific workflows from desktops to supercomputers. DOE User Facilities and large science collaborations are increasingly generating large enough data sets that it is no longer practical to download them to a desktop to operate on them. They are instead stored at centralized compute and storage resources such as high performance computing (HPC) centers. Analysis of this data requires an ability to run on these facilities, but with current technologies, scaling an analysis to an HPC center and to a large data set is difficult even for experts. Tigres is addressing the challenge of enabling collaborative analysis of DOE Science data through a new concept of reusable "templates" that enable scientists to easily compose, run and manage collaborative computational tasks. These templates define common computation patterns used in analyzing a data set.
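
    The template idea can be sketched generically as follows. This is an illustration of the pattern only, written in plain Python; it is not the Tigres API, and every function name is invented for the example.

      from concurrent.futures import ThreadPoolExecutor

      def sequence(data, tasks):
          """Sequence template: run tasks one after another, piping each result forward."""
          for task in tasks:
              data = task(data)
          return data

      def parallel_map(items, task, workers=4):
          """Parallel template: apply the same task to many inputs concurrently."""
          with ThreadPoolExecutor(max_workers=workers) as pool:
              return list(pool.map(task, items))

      def clean(x):          # stand-in for a user-supplied analysis step
          return x * 2

      def total(xs):         # stand-in for a user-supplied reduction step
          return sum(xs)

      result = sequence(parallel_map([1, 2, 3, 4], clean), [total])   # -> 20

    The appeal of such templates is that the same composition can be dispatched to a laptop thread pool or to an HPC batch system without the analysis code changing.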

  13. Communication architecture for large geostationary platforms

    NASA Technical Reports Server (NTRS)

    Bond, F. E.

    1979-01-01

    Large platforms have been proposed for supporting multipurpose communication payloads to exploit economy of scale, reduce congestion in the geostationary orbit, provide interconnectivity between diverse earth stations, and obtain significant frequency reuse with large multibeam antennas. This paper addresses a specific system design, starting with traffic projections in the next two decades and discussing tradeoffs and design approaches for major components including antennas, transponders, and switches. Other issues explored are selection of frequency bands, modulation, multiple access, switching methods, and techniques for servicing areas with nonuniform traffic demands. Three major services are considered: a high-volume trunking system, a direct-to-user system, and a broadcast system for video distribution and similar functions. Estimates of payload weight and d.c. power requirements are presented. Other subjects treated are: considerations of equipment layout for servicing by an orbit transfer vehicle, mechanical stability requirements for the large antennas, and reliability aspects of the large number of transponders employed.

  14. Direct Transformation of Amorphous Silicon Carbide into Graphene under Low Temperature and Ambient Pressure

    PubMed Central

    Peng, Tao; Lv, Haifeng; He, Daping; Pan, Mu; Mu, Shichun

    2013-01-01

    Large-scale availability of graphene is critical to the successful application of graphene-based electronic devices. The growth of epitaxial graphene (EG) on insulating silicon carbide (SiC) surfaces has opened a new promising route for large-scale high-quality graphene production. However, two key obstacles to epitaxial growth are the extremely high requirements for almost perfectly ordered crystalline SiC and the harsh process conditions. Here, we report that an amorphous SiC (a-Si1−xCx) nano-shell (nano-film) can be directly transformed into graphene by a chlorination method under very mild reaction conditions of relatively low temperature (800°C) and ambient pressure in a chlorine (Cl2) atmosphere. Therefore, our finding, the direct transformation of a-Si1−xCx into graphene under much milder conditions, will open the door to applying this new method to the large-scale production of graphene at low cost. PMID:23359349

  15. Methods, caveats and the future of large-scale microelectrode recordings in the non-human primate

    PubMed Central

    Dotson, Nicholas M.; Goodell, Baldwin; Salazar, Rodrigo F.; Hoffman, Steven J.; Gray, Charles M.

    2015-01-01

    Cognitive processes play out on massive brain-wide networks, which produce widely distributed patterns of activity. Capturing these activity patterns requires tools that are able to simultaneously measure activity from many distributed sites with high spatiotemporal resolution. Unfortunately, current techniques with adequate coverage do not provide the requisite spatiotemporal resolution. Large-scale microelectrode recording devices, with dozens to hundreds of microelectrodes capable of simultaneously recording from nearly as many cortical and subcortical areas, provide a potential way to minimize these tradeoffs. However, placing hundreds of microelectrodes into a behaving animal is a highly risky and technically challenging endeavor that has only been pursued by a few groups. Recording activity from multiple electrodes simultaneously also introduces several statistical and conceptual dilemmas, such as the multiple comparisons problem and the uncontrolled stimulus response problem. In this perspective article, we discuss some of the techniques that we, and others, have developed for collecting and analyzing large-scale data sets, and address the future of this emerging field. PMID:26578906

  16. Expansion of Human Mesenchymal Stem Cells in a Microcarrier Bioreactor.

    PubMed

    Tsai, Ang-Chen; Ma, Teng

    2016-01-01

    Human mesenchymal stem cells (hMSCs) are considered a primary candidate in cell therapy owing to their self-renewability, high differentiation capabilities, and secretion of trophic factors. In clinical application, a large quantity of therapeutically competent hMSCs is required, which cannot be produced in conventional petri dish culture. Bioreactors are scalable and have the capacity to meet the production demand. Microcarrier suspension culture in stirred-tank bioreactors is the most widely used method to expand anchorage-dependent cells at large scale. Stirred-tank bioreactors have the potential to scale up, and microcarriers provide a high surface-to-volume ratio. As a result, a spinner flask bioreactor with microcarriers has been commonly used in large-scale expansion of adherent cells. This chapter describes a detailed culture protocol for hMSC expansion in a 125 mL spinner flask using Cytodex I microcarriers, and procedures for cell seeding, expansion, metabolic sampling, and quantification and visualization using the microculture tetrazolium (MTT) reagent.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robinson, Alan E.

    Here, proposed dark matter detectors with eV-scale sensitivities will detect a large background of atomic (nuclear) recoils from coherent photon scattering of MeV-scale photons. This background climbs steeply below ~10 eV, far exceeding the declining rate of low-energy Compton recoils. The upcoming generation of dark matter detectors will not be limited by this background, but further development of eV-scale and sub-eV detectors will require strategies, including the use of low nuclear mass target materials, to maximize dark matter sensitivity while minimizing the coherent photon scattering background.

  18. A Survey on Routing Protocols for Large-Scale Wireless Sensor Networks

    PubMed Central

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

    With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs) based on the cooperation among a significant number of nodes have become a hot topic. “Large-scale” mainly means a large area or a high node density in a network. Accordingly, routing protocols must scale well as the network scope extends and node density increases. A sensor node is normally energy-limited and cannot be recharged, so its energy consumption has a quite significant effect on the scalability of the protocol. To the best of our knowledge, currently the mainstream methods to solve the energy problem in large-scale WSNs are hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes within the high level are responsible for data aggregation and management work, and the low-level nodes for sensing their surroundings and collecting information. Hierarchical routing protocols are proved to be more energy-efficient than flat ones, in which all the nodes play the same role, especially in terms of data aggregation and the flooding of control packets. With focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on different criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to gain a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages. Moreover a comparison of the routing protocols is conducted to demonstrate the differences between them in terms of message complexity, memory requirements, localization, data aggregation, clustering manner and other metrics. Finally some open issues in routing protocol design for large-scale wireless sensor networks and conclusions are proposed. PMID:22163808
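
    As a concrete example of the hierarchical idea, the widely cited LEACH protocol rotates the energy-hungry cluster-head role among nodes using a randomized election threshold; a minimal sketch follows (our own illustration, with invented parameter names, not code from any protocol surveyed in the paper).

      import random

      P = 0.05                        # desired fraction of cluster heads per round
      NODES = list(range(100))
      eligible = set(NODES)           # nodes that have not yet served as head this epoch

      def leach_round(r):
          """Return the nodes that elect themselves cluster head in round r."""
          if not eligible:                                   # start a new epoch of 1/P rounds
              eligible.update(NODES)
          threshold = P / (1.0 - P * (r % int(1.0 / P)))     # LEACH election threshold T(n)
          heads = {n for n in eligible if random.random() < threshold}
          eligible.difference_update(heads)                  # heads rest until the epoch ends
          return heads

      for r in range(3):
          print(f"round {r}: cluster heads {sorted(leach_round(r))}")

    Non-head nodes then join the nearest head, which aggregates their readings before forwarding them; that aggregation and the avoidance of network-wide flooding are where the energy savings over flat protocols arise.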

  19. Process, pattern and scale: hydrogeomorphology and plant diversity in forested wetlands across multiple spatial scales

    NASA Astrophysics Data System (ADS)

    Alexander, L.; Hupp, C. R.; Forman, R. T.

    2002-12-01

    Many geodisturbances occur across large spatial scales, spanning entire landscapes and creating ecological phenomena in their wake. Ecological study at large scales poses special problems: (1) large-scale studies require large-scale resources, and (2) sampling is not always feasible at the appropriate scale, so researchers rely on data collected at smaller scales to interpret patterns across broad regions. A criticism of landscape ecology is that findings at small spatial scales are "scaled up" and applied indiscriminately across larger spatial scales. In this research, landscape scaling is addressed through process-pattern relationships between hydrogeomorphic processes and patterns of plant diversity in forested wetlands. The research addresses: (1) whether patterns and relationships between hydrogeomorphic, vegetation, and spatial variables can transcend scale; and (2) whether data collected at small spatial scales can be used to describe patterns and relationships across larger spatial scales. Field measurements of hydrologic, geomorphic, spatial, and vegetation data were collected or calculated for fifteen 1-ha sites on forested floodplains of six (6) Chesapeake Bay Coastal Plain streams over a total area of about 20,000 km2. Hydroperiod (day/yr), floodplain surface elevation range (m), discharge (m3/s), stream power (kg-m/s2), sediment deposition (mm/yr), relative position downstream and other variables were used in multivariate analyses to explain differences in species richness, tree diversity (Shannon-Wiener Diversity Index H'), and plant community composition at four spatial scales. Data collected at the plot (400-m2) and site (c. 1-ha) scales are applied to and tested at the river watershed and regional spatial scales. Results indicate that plant species richness and tree diversity (Shannon-Wiener diversity index H') can be described by hydrogeomorphic conditions at all scales, but are best described at the site scale. Data collected at plot and site scales are tested for spatial heterogeneity across the Chesapeake Bay Coastal Plain using a geostatistical variogram, and multiple regression analysis is used to relate plant diversity, spatial, and hydrogeomorphic variables across Coastal Plain regions and hydrologic regimes. Results indicate that relationships between hydrogeomorphic processes and patterns of plant diversity at finer scales can proxy relationships at coarser scales in some, but not all, cases. Findings also suggest that data collected at small scales can be used to describe trends across broader scales under limited conditions.

  20. ARIES: Acquisition of Requirements and Incremental Evolution of Specifications

    NASA Technical Reports Server (NTRS)

    Roberts, Nancy A.

    1993-01-01

    This paper describes a requirements/specification environment specifically designed for large-scale software systems. This environment is called ARIES (Acquisition of Requirements and Incremental Evolution of Specifications). ARIES provides assistance to requirements analysts for developing operational specifications of systems. This development begins with the acquisition of informal system requirements. The requirements are then formalized and gradually elaborated (transformed) into formal and complete specifications. ARIES provides guidance to the user in validating formal requirements by translating them into natural language representations and graphical diagrams. ARIES also provides ways of analyzing the specification to ensure that it is correct, e.g., testing the specification against a running simulation of the system to be built. Another important ARIES feature, especially when developing large systems, is the sharing and reuse of requirements knowledge. This leads to much less duplication of effort. ARIES combines all of its features in a single environment that makes the process of capturing a formal specification quicker and easier.

  1. Path changing methods applied to the 4-D guidance of STOL aircraft.

    DOT National Transportation Integrated Search

    1971-11-01

    Prior to the advent of large-scale commercial STOL service, some challenging navigation and guidance problems must be solved. Proposed terminal area operations may require that these aircraft be capable of accurately flying complex flight paths, and ...

  2. The Sun in a Drawer

    ERIC Educational Resources Information Center

    Anderson, Bruce

    1975-01-01

    Widespread use of solar energy has been hindered by the large-scale integration required and by the conservatism of responsible institutions. Rising fuel costs and government promotion may initiate expansion in commercial, environmental, and government establishments, homes, and schools. Difficulties encountered include financing, tax incentives, design,…

  3. Forming Mandrels for X-Ray Mirror Substrates

    NASA Technical Reports Server (NTRS)

    Blake, Peter N.; Saha, Timo; Zhang, Will; O'Dell, Stephen; Kester, Thomas; Jones, William

    2011-01-01

    Precision forming mandrels are one element in X-ray mirror development at NASA. The current mandrel fabrication process is capable of meeting the allocated precision requirements for a 5 arcsec telescope. A manufacturing plan is outlined for a large IXO-scale program.

  4. What Fermenter?

    ERIC Educational Resources Information Center

    Terry, John

    1987-01-01

    Discusses the feasibility of using fermenters in secondary school laboratories. Includes discussions of equipment, safety, and computer interfacing. Describes how a simple fermenter could be used to simulate large-scale processes. Concludes that, although teachers and technicians will require additional training, the prospects for biotechnology in…

  5. Large-Scale Assessment of Language Proficiency: Theoretical and Pedagogical Reflections on the Use of Multiple-Choice Tests

    ERIC Educational Resources Information Center

    Argüelles Álvarez, Irina

    2013-01-01

    The new requirement placed on students in tertiary settings in Spain to demonstrate a B1 or a B2 proficiency level of English, in accordance with the Common European Framework of Reference for Languages (CEFRL), has led most Spanish universities to develop a program of certification or accreditation of the required level. The first part of this…

  6. Multiple Independent File Parallel I/O with HDF5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, M. C.

    2016-07-13

    The HDF5 library has supported the I/O requirements of HPC codes at Lawrence Livermore National Labs (LLNL) since the late 90's. In particular, HDF5 used in the Multiple Independent File (MIF) parallel I/O paradigm has supported LLNL codes' scalable I/O requirements and has recently been gainfully used at scales as large as O(10^6) parallel tasks.
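
    A minimal sketch of the MIF pattern using h5py and mpi4py (our own illustration under those assumptions, not the LLNL implementation): ranks are divided into a fixed number of groups, each group shares one HDF5 file, and ranks within a group write one after another by passing a baton.

      from mpi4py import MPI
      import h5py
      import numpy as np

      NFILES = 4                                      # number of independent files
      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      group = rank % NFILES                           # which file this rank writes to
      fcomm = comm.Split(color=group, key=rank)       # communicator of ranks sharing a file
      fname = f"dump_{group:03d}.h5"
      payload = np.full(1000, rank, dtype="f8")       # this rank's data

      if fcomm.Get_rank() > 0:                        # wait for the previous writer's baton
          fcomm.recv(source=fcomm.Get_rank() - 1, tag=7)
      with h5py.File(fname, "w" if fcomm.Get_rank() == 0 else "a") as f:
          f.create_dataset(f"rank_{rank:05d}", data=payload)   # plain serial HDF5 calls
      if fcomm.Get_rank() < fcomm.Get_size() - 1:     # hand the baton to the next writer
          fcomm.send(None, dest=fcomm.Get_rank() + 1, tag=7)

    Because each file is only ever open on one task at a time, no parallel HDF5 build or collective I/O is needed, and the number of files, rather than the number of tasks, sets the load on the filesystem.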

  7. Drainage networks after wildfire

    USGS Publications Warehouse

    Kinner, D.A.; Moody, J.A.

    2005-01-01

    Predicting runoff and erosion from watersheds burned by wildfires requires an understanding of the three-dimensional structure of both hillslope and channel drainage networks. We investigate the small- and large-scale structures of drainage networks using field studies and computer analysis of a 30-m digital elevation model (DEM). Topologic variables were derived from a composite 30-m DEM, which included 14 order 6 watersheds within the Pikes Peak batholith. Both topologic and hydraulic variables were measured in the field in two smaller burned watersheds (3.7 and 7.0 hectares) located within one of the order 6 watersheds burned by the 1996 Buffalo Creek Fire in Central Colorado. Horton ratios of topologic variables (stream number, drainage area, stream length, and stream slope) for small-scale and large-scale watersheds are shown to scale geometrically with stream order (i.e., to be scale invariant). However, the ratios derived for the large-scale drainage networks could not be used to predict the rill and gully drainage network structure. Hydraulic variables (width, depth, cross-sectional area, and bed roughness) for small-scale drainage networks were found to be scale invariant across 3 to 4 stream orders. The relation between hydraulic radius and cross-sectional area is similar for rills and gullies, suggesting that their geometry can be treated similarly in hydraulic modeling. Additionally, the rills and gullies have relatively small width-to-depth ratios, implying sidewall friction may be important to the erosion and evolutionary process relative to main stem channels.
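
    For reference, the Horton ratios referred to above follow the standard laws of drainage network composition (textbook relations, not results unique to this study):

      N_\omega = R_B^{\,\Omega - \omega}, \qquad
      \bar{L}_\omega = \bar{L}_1\, R_L^{\,\omega - 1}, \qquad
      \bar{A}_\omega = \bar{A}_1\, R_A^{\,\omega - 1},

    where ω is stream order, Ω the order of the whole network, and R_B, R_L, and R_A the bifurcation, length, and area ratios; approximately constant ratios across orders are what the abstract describes as geometric scaling, or scale invariance.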

  8. Cells as advanced therapeutics: State-of-the-art, challenges, and opportunities in large scale biomanufacturing of high-quality cells for adoptive immunotherapies.

    PubMed

    Dwarshuis, Nate J; Parratt, Kirsten; Santiago-Miranda, Adriana; Roy, Krishnendu

    2017-05-15

    Therapeutic cells hold tremendous promise in treating currently incurable, chronic diseases since they perform multiple, integrated, complex functions in vivo compared to traditional small-molecule drugs or biologics. However, they also pose significant challenges as therapeutic products because (a) their complex mechanisms of actions are difficult to understand and (b) low-cost bioprocesses for large-scale, reproducible manufacturing of cells have yet to be developed. Immunotherapies using T cells and dendritic cells (DCs) have already shown great promise in treating several types of cancers, and human mesenchymal stromal cells (hMSCs) are now extensively being evaluated in clinical trials as immune-modulatory cells. Despite these exciting developments, the full potential of cell-based therapeutics cannot be realized unless new engineering technologies enable cost-effective, consistent manufacturing of high-quality therapeutic cells at large-scale. Here we review cell-based immunotherapy concepts focused on the state-of-the-art in manufacturing processes including cell sourcing, isolation, expansion, modification, quality control (QC), and culture media requirements. We also offer insights into how current technologies could be significantly improved and augmented by new technologies, and how disciplines must converge to meet the long-term needs for large-scale production of cell-based immunotherapies. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually code lengths shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of the data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
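
    The general recipe of per-dimension binarization followed by bit ranking can be sketched as follows; this is an illustration only, and the cost function below is a simple stand-in rather than the min-cost ranking objective of the paper.

      import numpy as np

      def short_binary_codes(X, n_bits=32):
          """Binarize each dimension, rank the bits by a simple cost, keep the best n_bits."""
          Xc = X - X.mean(axis=0)                      # centre each dimension
          bits = Xc > 0                                # one candidate bit per dimension
          balance = np.abs(bits.mean(axis=0) - 0.5)    # prefer bits that split the data ~50/50
          spread = Xc.var(axis=0)                      # prefer high-variance dimensions
          cost = balance - 0.01 * spread               # lower cost = more useful bit (stand-in)
          keep = np.argsort(cost)[:n_bits]             # top-ranked dimensions form the code
          return bits[:, keep].astype(np.uint8), keep

      codes, selected_dims = short_binary_codes(np.random.randn(1000, 512), n_bits=32)

    Retrieval then reduces to Hamming-distance comparisons on the short codes, which is what makes such approaches attractive on wearable or mobile devices.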

  10. The Cosmology Large Angular Scale Surveyor (CLASS)

    NASA Astrophysics Data System (ADS)

    Cleary, Joseph

    2018-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is an array of four telescopes designed to measure the polarization of the Cosmic Microwave Background. CLASS aims to detect the B-mode polarization from primordial gravitational waves predicted by cosmic inflation theory, as well as the imprint left by reionization upon the CMB E-mode polarization. This will be achieved through a combination of observing strategy and state-of-the-art instrumentation. CLASS is observing 70% of the sky to characterize the CMB at large angular scales, which will measure the entire CMB power spectrum from the reionization peak to the recombination peak. The four telescopes operate at frequencies of 38, 93, 145, and 217 GHz, in order to estimate Galactic synchrotron and dust foregrounds while avoiding atmospheric absorption. CLASS employs rapid polarization modulation to overcome atmospheric and instrumental noise. Polarization-sensitive cryogenic detectors with low noise levels provide CLASS the sensitivity required to constrain the tensor-to-scalar ratio down to levels of r ~ 0.01 while also measuring the optical depth to reionization to sample-variance levels. These improved constraints on the optical depth to reionization are required to pin down the mass of neutrinos from complementary cosmological data. CLASS has completed a year of observations at 38 GHz and is in the process of deploying the rest of the telescope array. This poster provides an overview and update on the CLASS science, hardware and survey operations.

  11. Antimicrobial residues in animal waste and water resources proximal to large-scale swine and poultry feeding operations

    USGS Publications Warehouse

    Campagnolo, E.R.; Johnson, K.R.; Karpati, A.; Rubin, C.S.; Kolpin, D.W.; Meyer, M.T.; Esteban, J. Emilio; Currier, R.W.; Smith, K.; Thu, K.M.; McGeehin, M.

    2002-01-01

    Expansion and intensification of large-scale animal feeding operations (AFOs) in the United States has resulted in concern about environmental contamination and its potential public health impacts. The objective of this investigation was to obtain background data on a broad profile of antimicrobial residues in animal wastes and surface water and groundwater proximal to large-scale swine and poultry operations. The samples were measured for antimicrobial compounds using both radioimmunoassay and liquid chromatography/electrospray ionization-mass spectrometry (LC/ESI-MS) techniques. Multiple classes of antimicrobial compounds (commonly at concentrations of >100 μg/l) were detected in swine waste storage lagoons. In addition, multiple classes of antimicrobial compounds were detected in surface and groundwater samples collected proximal to the swine and poultry farms. This information indicates that animal waste used as fertilizer for crops may serve as a source of antimicrobial residues for the environment. Further research is required to determine if the levels of antimicrobials detected in this study are of consequence to human and/or environmental ecosystems. A comparison of the radioimmunoassay and LC/ESI-MS analytical methods documented that radioimmunoassay techniques were only appropriate for measuring residues in animal waste samples likely to contain high levels of antimicrobials. More sensitive LC/ESI-MS techniques are required in environmental samples, where low levels of antimicrobial residues are more likely.

  12. Ray-optics cloaking devices for large objects in incoherent natural light

    NASA Astrophysics Data System (ADS)

    Chen, Hongsheng; Zheng, Bin; Shen, Lian; Wang, Huaping; Zhang, Xianmin; Zheludev, Nikolay I.; Zhang, Baile

    2013-10-01

    A cloak that can hide living creatures from sight is a common feature of mythology but still remains unrealized as a practical device. To preserve the wave phase, the previous cloaking solution proposed by Pendry and colleagues required transformation of the electromagnetic space around the hidden object in such a way that the rays bending around the object inside the cloak region have to travel faster than those passing it by. This difficult phase preservation requirement is the main obstacle for building a broadband polarization-insensitive cloak for large objects. Here we propose a simplified version of Pendry’s cloak by abolishing the requirement for phase preservation, as it is irrelevant for observation using incoherent natural light with human eyes, which are phase and polarization insensitive. This allows for a cloak design on large scales using commonly available materials. We successfully demonstrate the cloaking of living creatures, a cat and a fish, from the eye.

  13. Ray-optics cloaking devices for large objects in incoherent natural light.

    PubMed

    Chen, Hongsheng; Zheng, Bin; Shen, Lian; Wang, Huaping; Zhang, Xianmin; Zheludev, Nikolay I; Zhang, Baile

    2013-01-01

    A cloak that can hide living creatures from sight is a common feature of mythology but still remains unrealized as a practical device. To preserve the wave phase, the previous cloaking solution proposed by Pendry and colleagues required transformation of the electromagnetic space around the hidden object in such a way that the rays bending around the object inside the cloak region have to travel faster than those passing it by. This difficult phase preservation requirement is the main obstacle for building a broadband polarization-insensitive cloak for large objects. Here we propose a simplified version of Pendry's cloak by abolishing the requirement for phase preservation, as it is irrelevant for observation using incoherent natural light with human eyes, which are phase and polarization insensitive. This allows for a cloak design on large scales using commonly available materials. We successfully demonstrate the cloaking of living creatures, a cat and a fish, from the eye.

  14. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    PubMed

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations ( Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016 , 144 , 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017 , 13 , 1647 - 1655 ) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
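
    For orientation, the quantity being evaluated is the standard RPA correlation energy from the adiabatic-connection fluctuation-dissipation theorem (the general expression, not the paper's atomic-orbital reformulation):

      E_c^{\mathrm{RPA}} = \frac{1}{2\pi} \int_0^{\infty} \mathrm{d}\omega \,
          \mathrm{Tr}\left[ \ln\!\left( 1 - \chi_0(\mathrm{i}\omega)\, v \right) + \chi_0(\mathrm{i}\omega)\, v \right],

    where χ_0(iω) is the non-interacting response function on the imaginary frequency axis and v the Coulomb interaction; the minimax quadrature and attenuated local RI metric mentioned above are approximations to this frequency integral and to the matrix representations of χ_0 and v.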

  15. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively long simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present trial molecular dynamics simulations of biologically relevant solutes at low concentration, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions, without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution function method.
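
    The two routes to Kirkwood-Buff integrals compared in the paper can be sketched as follows, using the standard definitions rather than the authors' own code; the function names are ours.

      import numpy as np

      def kb_running_integral(r, g):
          """Running Kirkwood-Buff integral G(R) = 4*pi * int_0^R [g(r) - 1] r^2 dr."""
          r, g = np.asarray(r, float), np.asarray(g, float)
          integrand = 4.0 * np.pi * (g - 1.0) * r ** 2
          steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)   # trapezoid rule
          return np.concatenate(([0.0], np.cumsum(steps)))

      def kb_from_fluctuations(Ni, Nj, volume, same_species=False):
          """G_ij from particle-number fluctuations sampled in an open sub-volume."""
          Ni, Nj = np.asarray(Ni, float), np.asarray(Nj, float)
          cov = np.mean(Ni * Nj) - Ni.mean() * Nj.mean()
          G = volume * cov / (Ni.mean() * Nj.mean())
          if same_species:
              G -= volume / Ni.mean()                   # remove the self-correlation term
          return G

    The first route needs g(r) converged out to large distances, and hence large simulation boxes; the second needs only particle counts in small embedded sub-volumes, which is what the finite-size-scaling approach exploits.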

  16. Developing Renewable Energy Projects Larger Than 10 MWs at Federal Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2013-03-01

    To accomplish Federal goals for renewable energy, sustainability, and energy security, large-scale renewable energy projects must be developed and constructed on Federal sites at a significant scale with significant private investment. For the purposes of this Guide, large-scale Federal renewable energy projects are defined as renewable energy facilities larger than 10 megawatts (MW) that are sited on Federal property and lands and typically financed and owned by third parties.1 The U.S. Department of Energy’s Federal Energy Management Program (FEMP) helps Federal agencies meet these goals and assists agency personnel in navigating the complexities of developing such projects and attracting the necessary private capital to complete them. This Guide is intended to provide a general resource that will begin to develop the Federal employee’s awareness and understanding of the project developer’s operating environment and the private sector’s awareness and understanding of the Federal environment. Because the vast majority of the investment that is required to meet the goals for large-scale renewable energy projects will come from the private sector, this Guide has been organized to match Federal processes with typical phases of commercial project development. FEMP collaborated with the National Renewable Energy Laboratory (NREL) and professional project developers on this Guide to ensure that Federal projects have key elements recognizable to private sector developers and investors. The main purpose of this Guide is to provide a project development framework to allow the Federal Government, private developers, and investors to work in a coordinated fashion on large-scale renewable energy projects. The framework includes key elements that describe a successful, financially attractive large-scale renewable energy project. This framework begins the translation between the Federal and private sector operating environments. When viewing the overall

  17. The impact of new forms of large-scale general practice provider collaborations on England's NHS: a systematic review.

    PubMed

    Pettigrew, Luisa M; Kumpunen, Stephanie; Mays, Nicholas; Rosen, Rebecca; Posaner, Rachel

    2018-03-01

    Over the past decade, collaboration between general practices in England to form new provider networks and large-scale organisations has been driven largely by grassroots action among GPs. However, it is now being increasingly advocated for by national policymakers. Expectations of what scaling up general practice in England will achieve are significant. This study set out to review the evidence of the impact of new forms of large-scale general practice provider collaborations in England, using a systematic review design. Embase, MEDLINE, Health Management Information Consortium, and Social Sciences Citation Index were searched for studies reporting the impact on clinical processes and outcomes, patient experience, workforce satisfaction, or costs of new forms of provider collaborations between general practices in England. A total of 1782 publications were screened. Five studies met the inclusion criteria and four examined the same general practice networks, limiting generalisability. Substantial financial investment was required to establish the networks and the associated interventions that were targeted at four clinical areas. Quality improvements were achieved through standardised processes, incentives at network level, information technology-enabled performance dashboards, and local network management. The fifth study of a large-scale multisite general practice organisation showed that it may be better placed to implement safety and quality processes than conventional practices. However, unintended consequences may arise, such as perceptions of disenfranchisement among staff and reductions in continuity of care. Good-quality evidence of the impacts of scaling up general practice provider organisations in England is scarce. As more general practice collaborations emerge, evaluation of their impacts will be important to understand which work, in which settings, how, and why. © British Journal of General Practice 2018.

  18. Dither Gyro Scale Factor Calibration: GOES-16 Flight Experience

    NASA Technical Reports Server (NTRS)

    Reth, Alan D.; Freesland, Douglas C.; Krimchansky, Alexander

    2018-01-01

    This poster is a sequel to a paper presented at the 34th Annual AAS Guidance and Control Conference in 2011, which first introduced dither-based calibration of gyro scale factors. The dither approach uses very small excitations, avoiding the need to take instruments offline during gyro scale factor calibration. In 2017, the dither calibration technique was successfully used to estimate gyro scale factors on the GOES-16 satellite. On-orbit dither calibration results were compared to more traditional methods using large angle spacecraft slews about each gyro axis, requiring interruption of science. The results demonstrate that the dither technique can estimate gyro scale factors to better than 2000 ppm during normal science observations.

  19. Effects of large deep-seated landslides on hillslope morphology, western Southern Alps, New Zealand

    NASA Astrophysics Data System (ADS)

    Korup, Oliver

    2006-03-01

    Morphometric analysis and air photo interpretation highlight geomorphic imprints of large landslides (i.e., affecting ≥1 km²) on hillslopes in the western Southern Alps (WSA), New Zealand. Large landslides attain kilometer-scale runout, affect >50% of total basin relief, and in 70% of cases are slope clearing, and thus relief limiting. Landslide terrain shows lower mean local relief, relief variability, slope angles, steepness, and concavity than surrounding terrain. Measuring mean slope angle smooths out local landslide morphology, masking any relationship between large landslides and possible threshold hillslopes. Large failures also occurred on low-gradient slopes, indicating persistent low-frequency/high-magnitude hillslope adjustment independent of fluvial bedrock incision. At the basin and hillslope scale, slope-area plots partly constrain the effects of landslides on geomorphic process regimes. Landslide imprints gradually blend with relief characteristics at orogen scale (10² km), while being sensitive to length scales of slope failure, topography, sampling, and digital elevation model resolution. This limits means of automated detection, and underlines the importance of local morphologic contrasts for detecting large landslides in the WSA. Landslide controls on low-order drainage include divide lowering and shifting, formation of headwater basins and hanging valleys, and stream piracy. Volumes typically mobilized, yet still stored in numerous deposits despite high denudation rates, are >10⁷ m³, and theoretically equal to 10² years of basin-wide debris production from historic shallow landslides; lack of absolute ages precludes further estimates. Deposit size and mature forest cover indicate residence times of 10¹-10⁴ years. On these timescales, large landslides require further attention in landscape evolution models of tectonically active orogens.

  20. Parallel workflow manager for non-parallel bioinformatic applications to solve large-scale biological problems on a supercomputer.

    PubMed

    Suplatov, Dmitry; Popova, Nina; Zhumatiy, Sergey; Voevodin, Vladimir; Švedas, Vytas

    2016-04-01

    Rapid expansion of online resources providing access to genomic, structural, and functional information associated with biological macromolecules opens an opportunity to gain a deeper understanding of the mechanisms of biological processes through systematic analysis of large datasets. This, however, requires novel strategies to optimally utilize computer processing power. Some methods in bioinformatics and molecular modeling require extensive computational resources. Other algorithms have fast implementations which take at most several hours to analyze a common input on a modern desktop workstation; however, because they must be invoked many times for a large number of subtasks, the full task still requires significant computing power. Therefore, an efficient computational solution to large-scale biological problems requires both a wise parallel implementation of resource-hungry methods and a smart workflow to manage multiple invocations of relatively fast algorithms. In this work, new computer software, mpiWrapper, has been developed to accommodate non-parallel implementations of scientific algorithms within the parallel supercomputing environment. The Message Passing Interface has been implemented to exchange information between nodes. Two specialized threads - one for task management and communication, and another for subtask execution - are invoked on each processing unit to avoid deadlock while using blocking calls to MPI. The mpiWrapper can be used to launch all conventional Linux applications without the need to modify their original source code and supports resubmission of subtasks on node failure. We show that this approach can be used to process huge amounts of biological data efficiently by running non-parallel programs in parallel mode on a supercomputer. The C++ source code and documentation are available from http://biokinet.belozersky.msu.ru/mpiWrapper .
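
    mpiWrapper itself is a C++ tool; the master/worker pattern it describes can be sketched in a few lines of Python with mpi4py (an assumption of this illustration), where one rank hands out serial command lines and the remaining ranks execute them:

```python
# Illustrative master/worker task farm in the spirit of the approach described
# above; this is not mpiWrapper itself. Assumes mpi4py is installed. Run with,
# e.g.:  mpiexec -n 4 python task_farm.py
import subprocess
from mpi4py import MPI

TAG_WORK, TAG_DONE, TAG_STOP = 1, 2, 3

def main():
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    if rank == 0:
        # Master: hand out one non-parallel command line per subtask.
        tasks = [["echo", f"subtask {i}"] for i in range(10)]  # hypothetical commands
        status = MPI.Status()
        active = size - 1
        for worker in range(1, size):                          # seed every worker
            cmd = tasks.pop() if tasks else None
            comm.send(cmd, dest=worker, tag=TAG_WORK if cmd else TAG_STOP)
            active -= 0 if cmd else 1
        while active > 0:
            code = comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE, status=status)
            worker = status.Get_source()
            print(f"worker {worker} finished a subtask with exit code {code}")
            cmd = tasks.pop() if tasks else None
            comm.send(cmd, dest=worker, tag=TAG_WORK if cmd else TAG_STOP)
            active -= 0 if cmd else 1
    else:
        # Worker: run each received command as an ordinary serial program.
        status = MPI.Status()
        while True:
            cmd = comm.recv(source=0, status=status)
            if status.Get_tag() == TAG_STOP:
                break
            comm.send(subprocess.call(cmd), dest=0, tag=TAG_DONE)

if __name__ == "__main__":
    main()
```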

  1. Partition-of-unity finite-element method for large scale quantum molecular dynamics on massively parallel computational platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pask, J E; Sukumar, N; Guney, M

    2011-02-28

    Over the course of the past two decades, quantum mechanical calculations have emerged as a key component of modern materials research. However, the solution of the required quantum mechanical equations is a formidable task and this has severely limited the range of materials systems which can be investigated by such accurate, quantum mechanical means. The current state of the art for large-scale quantum simulations is the planewave (PW) method, as implemented in the now-ubiquitous VASP, ABINIT, and QBox codes, among many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, and in which every basis function overlaps every other at every point, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires substantial nonlocal communications in parallel implementations, placing critical limits on scalability. In recent years, real-space methods such as finite-differences (FD) and finite-elements (FE) have been developed to address these deficiencies by reformulating the required quantum mechanical equations in a strictly local representation. However, while addressing both resolution and parallel-communications problems, such local real-space approaches have been plagued by one key disadvantage relative to planewaves: excessive degrees of freedom (grid points, basis functions) needed to achieve the required accuracies. And so, despite critical limitations, the PW method remains the standard today. In this work, we show for the first time that this key remaining disadvantage of real-space methods can in fact be overcome: by building known atomic physics into the solution process using modern partition-of-unity (PU) techniques in finite element analysis. Indeed, our results show order-of-magnitude reductions in basis size relative to state-of-the-art planewave-based methods. The method developed here is completely general, applicable to any crystal symmetry and to both metals and insulators alike. We have developed and implemented a full self-consistent Kohn-Sham method, including both total energies and forces for molecular dynamics, and developed a full MPI parallel implementation for large-scale calculations. We have applied the method to the gamut of physical systems, from simple insulating systems with light atoms to complex d- and f-electron systems, requiring large numbers of atomic-orbital enrichments. In every case, the new PU FE method attained the required accuracies with substantially fewer degrees of freedom, typically by an order of magnitude or more, than the current state-of-the-art PW method. Finally, our initial MPI implementation has shown excellent parallel scaling of the most time-critical parts of the code up to 1728 processors, with clear indications of what will be required to achieve comparable scaling for the rest. Having shown that the key remaining disadvantage of real-space methods can in fact be overcome, the work has attracted significant attention, with sixteen invited talks, both domestic and international, so far; two papers published and another in preparation; and three new university and/or national laboratory collaborations, securing external funding to pursue a number of related research directions.
Having demonstrated the proof of principle, work now centers on the necessary extensions and optimizations required to bring the prototype method and code delivered here to production applications.
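
    The partition-of-unity idea can be illustrated with a schematic one-dimensional sketch (this is not the delivered code): standard finite-element hat functions, which sum to one everywhere, are multiplied by atomic-like functions so that known local physics enters the basis. The Gaussian "atomic" shape and the mesh below are illustrative assumptions.

```python
import numpy as np

# Schematic 1D illustration of partition-of-unity (PU) enrichment: piecewise-
# linear FE shape functions (a partition of unity) times atomic-like functions.
nodes = np.linspace(0.0, 10.0, 11)          # uniform FE mesh
x = np.linspace(0.0, 10.0, 1001)            # evaluation points

def hat(i, x):
    """Piecewise-linear FE shape function attached to node i."""
    xi = nodes[i]
    y = np.zeros_like(x)
    if i > 0:
        left = nodes[i - 1]
        m = (x >= left) & (x <= xi)
        y[m] = (x[m] - left) / (xi - left)
    if i < len(nodes) - 1:
        right = nodes[i + 1]
        m = (x >= xi) & (x <= right)
        y[m] = (right - x[m]) / (right - xi)
    return y

def atomic(i, x, alpha=2.0):
    """Hypothetical atomic-like enrichment centred on node i."""
    return np.exp(-alpha * (x - nodes[i]) ** 2)

# The hats sum to one everywhere, so the enriched functions hat_i(x)*atomic_i(x)
# can reproduce the local atomic behaviour exactly wherever it is needed.
pu_sum = sum(hat(i, x) for i in range(len(nodes)))
assert np.allclose(pu_sum, 1.0)

enriched = [hat(i, x) * atomic(i, x) for i in range(len(nodes))]
print("number of enriched basis functions:", len(enriched))
```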

  2. A tilted cold dark matter cosmological scenario

    NASA Technical Reports Server (NTRS)

    Cen, Renyue; Gnedin, Nickolay Y.; Kofman, Lev A.; Ostriker, Jeremiah P.

    1992-01-01

    A new cosmological scenario based on CDM but with a power spectrum index of about 0.7-0.8 is suggested. This model is predicted by various inflationary models with no fine tuning. This tilted CDM model, if normalized to COBE, alleviates many problems of the standard CDM model related to both small-scale and large-scale power. A physical bias of galaxies over dark matter of about two is required to fit spatial observations.

  3. Genome-Scale Metabolic Reconstructions and Theoretical Investigation of Methane Conversion in Methylomicrobium buryatense Strain 5G(B1)

    DOE PAGES

    de la Torre, Andrea; Metivier, Aisha; Chu, Frances; ...

    2015-11-25

    Methane-utilizing bacteria (methanotrophs) are capable of growth on methane and are attractive systems for bio-catalysis. However, the application of natural methanotrophic strains to large-scale production of value-added chemicals/biofuels requires a number of physiological and genetic alterations. An accurate metabolic model coupled with flux balance analysis can provide a solid interpretative framework for experimental data analyses and integration.
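
    As a minimal illustration of the flux balance analysis step mentioned above, the linear program "maximize an objective flux subject to steady state and bounds" can be solved with scipy for a made-up three-reaction network (this toy model is not the M. buryatense reconstruction):

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis (FBA) sketch. Reactions and bounds are made up:
#   R1: -> A        (uptake, limited to 10)
#   R2: A -> B      (conversion)
#   R3: B ->        (biomass/export, the objective)
# Metabolites A and B must be at steady state: S @ v = 0.
S = np.array([[ 1.0, -1.0,  0.0],   # metabolite A
              [ 0.0,  1.0, -1.0]])  # metabolite B

bounds = [(0.0, 10.0), (0.0, None), (0.0, None)]
c = np.array([0.0, 0.0, -1.0])      # linprog minimizes, so maximize v3 via -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes:", res.x)     # expected: [10, 10, 10]
print("max objective flux:", -res.fun)
```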

  4. Overview of Accelerator Applications in Energy

    NASA Astrophysics Data System (ADS)

    Garnett, Robert W.; Sheffield, Richard L.

    An overview of the application of accelerators and accelerator technology in energy is presented. Applications span a broad range of cost, size, and complexity and include large-scale systems requiring high-power or high-energy accelerators to drive subcritical reactors for energy production or waste transmutation, as well as small-scale industrial systems used to improve oil and gas exploration and production. The enabling accelerator technologies will also be reviewed and future directions discussed.

  5. Tradeoffs and synergies between biofuel production and large-scale solar infrastructure in deserts

    NASA Astrophysics Data System (ADS)

    Ravi, S.; Lobell, D. B.; Field, C. B.

    2012-12-01

    Solar energy installations in deserts are on the rise, fueled by technological advances and policy changes. Deserts, with a combination of high solar radiation and availability of large areas unusable for crop production, are ideal locations for large-scale solar installations. For efficient power generation, solar infrastructures require large amounts of water for operation (mostly for cleaning panels and dust suppression), leading to significant moisture additions to desert soil. A pertinent question is how to use the moisture inputs for sustainable agriculture/biofuel production. We investigated the water requirements for large solar infrastructures in North American deserts and explored the possibilities for integrating biofuel production with solar infrastructure. In co-located systems, the possible decline in yields due to shading by solar panels may be offset by the benefits of periodic water addition to biofuel crops, simpler dust management and more efficient power generation in solar installations, and decreased impacts on natural habitats and scarce resources in deserts. In particular, we evaluated the potential to integrate solar infrastructure with biomass feedstocks that grow in arid and semi-arid lands (Agave spp.), which are found to produce high yields with minimal water inputs. To this end, we conducted detailed life cycle analysis for these coupled agave biofuel-solar energy systems to explore the tradeoffs and synergies, in the context of energy input-output, water use and carbon emissions.

  6. The Computing and Data Grid Approach: Infrastructure for Distributed Science Applications

    NASA Technical Reports Server (NTRS)

    Johnston, William E.

    2002-01-01

    With the advent of Grids - infrastructure for using and managing widely distributed computing and data resources in the science environment - there is now an opportunity to provide a standard, large-scale, computing, data, instrument, and collaboration environment for science that spans many different projects and provides the required infrastructure and services in a relatively uniform and supportable way. Grid technology has evolved over the past several years to provide the services and infrastructure needed for building 'virtual' systems and organizations. We argue that Grid technology provides an excellent basis for the creation of the integrated environments that can combine the resources needed to support the large-scale science projects located at multiple laboratories and universities. We present some science case studies that indicate that a paradigm shift in the process of science will come about as a result of Grids providing transparent and secure access to advanced and integrated information and technologies infrastructure: powerful computing systems, large-scale data archives, scientific instruments, and collaboration tools. These changes will be in the form of services that can be integrated with the user's work environment, and that enable uniform and highly capable access to these computers, data, and instruments, regardless of the location or exact nature of these resources. These services will integrate transient-use resources like computing systems, scientific instruments, and data caches (e.g., as they are needed to perform a simulation or analyze data from a single experiment); persistent-use resources, such as databases, data catalogues, and archives; and collaborators, whose involvement will continue for the lifetime of a project or longer. While we largely address large-scale science in this paper, Grids, particularly when combined with Web Services, will address a broad spectrum of science scenarios, both large and small scale.

  7. Quantitative measurement of the growth rate of the PHA-producing photosynthetic bacterium Rhodocyclus gelatinosus CBS-2 [PolyHydroxyAlkanoate]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wolfrum, E.J.; Weaver, P.F.

    Researchers at the National Renewable Energy Laboratory (NREL) have been investigating the use of model photosynthetic microorganisms that use sunlight and two-carbon organic substrates (e.g., ethanol, acetate) to produce biodegradable polyhydroxyalkanoate (PHA) copolymers as carbon storage compounds. Use of these biological PHAs in single-use plastics applications, followed by their post-consumer composting or anaerobic digestion, could impact petroleum consumption as well as the overloading of landfills. The large-scale production of PHA polymers by photosynthetic bacteria will require large-scale reactor systems utilizing either sunlight or artificial illumination. The first step in the scale-up process is to quantify the microbial growth rates and the PHA production rates as a function of reaction conditions such as nutrient concentration, temperature, and light quality and intensity.

  8. The Explorer of Diffuse Galactic Emission (EDGE): Determining the Large-Scale Structure Evolution in the Universe

    NASA Technical Reports Server (NTRS)

    Silverberg, R. F.; Cheng, E. S.; Cottingham, D. A.; Fixsen, D. J.; Meyer, S. S.; Knox, L.; Timbie, P.; Wilson, G.

    2003-01-01

    Measurements of the large-scale anisotropy of the Cosmic Infrared Background (CIB) can be used to determine the characteristics of the distribution of galaxies at the largest spatial scales. With this information, important tests of galaxy evolution models and primordial structure growth are possible. In this paper, we describe the scientific goals, instrumentation, and operation of EDGE, a mission using an Antarctic Long Duration Balloon (LDB) platform. EDGE will observe the anisotropy in the CIB in 8 spectral bands from 270 GHz to 1.5 THz with 6 arcminute angular resolution over a region of approximately 400 square degrees. EDGE uses a one-meter class off-axis telescope and an array of Frequency Selective Bolometers (FSBs) to provide the compact and efficient multi-color, high-sensitivity radiometer required to achieve its scientific objectives.

  9. A study of the viability of exploiting memory content similarity to improve resilience to memory errors

    DOE PAGES

    Levy, Scott; Ferreira, Kurt B.; Bridges, Patrick G.; ...

    2014-12-09

    Building the next generation of extreme-scale distributed systems will require overcoming several challenges related to system resilience. As the number of processors in these systems grows, the failure rate increases proportionally. One of the most common sources of failure in large-scale systems is memory. In this paper, we propose a novel runtime for transparently exploiting memory content similarity to improve system resilience by reducing the rate at which memory errors lead to node failure. We evaluate the viability of this approach by examining memory snapshots collected from eight high-performance computing (HPC) applications and two important HPC operating systems. Based on the characteristics of the similarity uncovered, we conclude that our proposed approach shows promise for addressing system resilience in large-scale systems.
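
    The runtime itself is not reproduced here, but the underlying notion of memory content similarity can be sketched by hashing fixed-size pages of a memory image and counting duplicates; the 4 KiB page size and the synthetic data are illustrative assumptions.

```python
import hashlib
import os
from collections import Counter

PAGE_SIZE = 4096  # assumed page granularity for the similarity scan

def duplicate_page_fraction(memory: bytes) -> float:
    """Fraction of pages whose content also appears in at least one other page."""
    hashes = []
    usable = len(memory) - len(memory) % PAGE_SIZE
    for offset in range(0, usable, PAGE_SIZE):
        page = memory[offset:offset + PAGE_SIZE]
        hashes.append(hashlib.sha256(page).hexdigest())
    counts = Counter(hashes)
    duplicated = sum(n for n in counts.values() if n > 1)
    return duplicated / len(hashes) if hashes else 0.0

# Synthetic example: many zero pages (common in HPC snapshots) plus random noise.
memory = b"\x00" * (64 * PAGE_SIZE) + os.urandom(16 * PAGE_SIZE)
print(f"duplicate page fraction: {duplicate_page_fraction(memory):.2f}")
```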

  10. Materials for stem cell factories of the future

    NASA Astrophysics Data System (ADS)

    Celiz, Adam D.; Smith, James G. W.; Langer, Robert; Anderson, Daniel G.; Winkler, David A.; Barrett, David A.; Davies, Martyn C.; Young, Lorraine E.; Denning, Chris; Alexander, Morgan R.

    2014-06-01

    Polymeric substrates are being identified that could permit translation of human pluripotent stem cells from laboratory-based research to industrial-scale biomedicine. Well-defined materials are required to allow cell banking and to provide the raw material for reproducible differentiation into lineages for large-scale drug-screening programs and clinical use. Yet more than 1 billion cells for each patient are needed to replace losses during heart attack, multiple sclerosis and diabetes. Producing this number of cells is challenging, and a rethink of the current predominant cell-derived substrates is needed to provide technology that can be scaled to meet the needs of millions of patients a year. In this Review, we consider the role of materials discovery, an emerging area of materials chemistry that is in large part driven by the challenges posed by biologists to materials scientists.

  11. Linear-scaling density-functional simulations of charged point defects in Al2O3 using hierarchical sparse matrix algebra.

    PubMed

    Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C

    2010-09-21

    We present calculations of formation energies of defects in an ionic solid (Al2O3) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
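
    A minimal scipy.sparse sketch of the core operation whose cost the paper addresses (this is not ONETEP's hierarchical scheme): multiply two sparse matrices and truncate numerically negligible entries so that repeated products stay sparse. Sizes, density, and threshold are illustrative.

```python
import numpy as np
import scipy.sparse as sp

n, density, threshold = 5000, 0.002, 1e-6

rng = np.random.default_rng(2)
A = sp.random(n, n, density=density, format="csr", random_state=rng)
B = sp.random(n, n, density=density, format="csr", random_state=rng)

C = A @ B                                   # sparse-sparse product, stays in CSR

# Drop numerically negligible entries so repeated products keep their sparsity.
C.data[np.abs(C.data) < threshold] = 0.0
C.eliminate_zeros()

for name, M in (("A", A), ("B", B), ("C = A@B", C)):
    print(f"{name}: {M.nnz} nonzeros ({100 * M.nnz / (n * n):.3f} % filled)")
```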

  12. Microworld Simulations: A New Dimension in Training Army Logistics Management Skills

    DTIC Science & Technology

    2004-01-01

    Providing effective training to Army personnel is always challenging, but the Army faces some new challenges in training its logistics staff managers in ... soldiers are stationed and where materiel and services are readily available. The design and management of the Army’s Combat Service Support (CSS) large-scale logistics systems are increasingly important. The skills that are required to manage these systems are difficult to train. Large deployments

  13. Design solutions for dome and main structure (mount) of giant telescopes

    NASA Astrophysics Data System (ADS)

    Murga, Gaizka; Bilbao, Armando; de Bilbao, Lander; Lorentz, Thomas E.

    2016-07-01

    In recent years, designs for several giant telescopes ranging from 20 to 40m in diameter have been under development: the European Extremely Large Telescope (E-ELT), the Giant Magellan Telescope (GMT), and the Thirty Meter Telescope (TMT). It is evident that simple direct up-scaling of solutions that were more or less successful in the 8 to 10m class telescopes cannot lead to viable designs for the future giant telescopes. New solutions are required to provide adequate load sharing, to cope with the large-scale derived deflections and to provide the required compliance, or to respond to structure-mechanism control interaction issues, among others. From IDOM's experience in the development of the Dome and Main Structure of the European Extremely Large Telescope and our participation in some other giant telescopes, this paper reviews several design approaches for the main mechanisms and key structural parts of enclosures and mounts/main structures for giant telescopes, analyzing pros and cons of the different alternatives and outlining the preferred design schemes. The assessment is carried out mainly from a technical and performance-based angle but it also considers specific logistical issues for the assembly of these large telescopes in remote and space-limited areas, together with cost- and schedule-related issues.

  14. Viscous-enstrophy scaling law for Navier-Stokes reconnection

    NASA Astrophysics Data System (ADS)

    Kerr, Robert M.

    2017-11-01

    Simulations of perturbed, helical trefoil vortex knots and anti-parallel vortices find ν-independent collapse of the temporally scaled quantity (√ν Z)^(-1/2), where Z is the enstrophy, between when the loops first touch at tΓ and when reconnection ends at tx, for the viscosity ν varying by a factor of 256. Due to mathematical bounds upon higher-order norms, this collapse requires that the domain increase as ν decreases, possibly to allow large-scale negative helicity to grow as compensation for small-scale positive helicity and enstrophy growth. This mechanism could be a step towards explaining how smooth solutions of the Navier-Stokes equations can generate finite-energy dissipation in a finite time as ν → 0.

  15. Bridging the Gap Between the iLEAPS and GEWEX Land-Surface Modeling Communities

    NASA Technical Reports Server (NTRS)

    Bonan, Gordon; Santanello, Joseph A., Jr.

    2013-01-01

    Models of Earth's weather and climate require fluxes of momentum, energy, and moisture across the land-atmosphere interface to solve the equations of atmospheric physics and dynamics. Just as atmospheric models can, and do, differ between weather and climate applications, mostly related to issues of scale, resolved or parameterised physics, and computational requirements, so too can the land models that provide the required surface fluxes differ between weather and climate models. Here, however, the issue is less one of scale-dependent parameterisations. Computational demands can influence other minor land model differences, especially with respect to initialisation, data assimilation, and forecast skill. However, the distinction among land models (and their development and application) is largely driven by the different science and research needs of the weather and climate communities.

  16. An Eulerian time filtering technique to study large-scale transient flow phenomena

    NASA Astrophysics Data System (ADS)

    Vanierschot, Maarten; Persoons, Tim; van den Bulck, Eric

    2009-10-01

    Unsteady fluctuating velocity fields can contain large-scale periodic motions with frequencies well separated from those of turbulence. Examples are the wake behind a cylinder or the precessing vortex core in a swirling jet. These turbulent flow fields contain large-scale, low-frequency oscillations, which are obscured by turbulence, making it impossible to identify them. In this paper, we present an Eulerian time filtering (ETF) technique to extract the large-scale motions from unsteady, statistically non-stationary velocity fields or flow fields with multiple phenomena that have sufficiently separated spectral content. The ETF method is based on non-causal time filtering of the velocity records at each point of the flow field. It is shown that the ETF technique gives good results, similar to the ones obtained by the phase-averaging method. In this paper, not only is the influence of the temporal filter checked, but parameters such as the cut-off frequency and the sampling frequency of the data are also investigated. The technique is validated on a selected set of time-resolved stereoscopic particle image velocimetry measurements such as the initial region of an annular jet and the transition between flow patterns in an annular jet. The major advantage of the ETF method in the extraction of large scales is that it is computationally less expensive and requires less measurement time compared to other extraction methods. Therefore, the technique is suitable in the startup phase of an experiment or in a measurement campaign where several experiments are needed such as parametric studies.
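
    A minimal sketch of the filtering step, assuming a zero-phase Butterworth low-pass applied to the velocity record at a single point (sampling rate, cut-off frequency, and the synthetic signal are illustrative assumptions):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0          # sampling frequency of the velocity records [Hz]
f_cut = 20.0         # cut-off between large-scale motion and turbulence [Hz]
t = np.arange(0, 2.0, 1.0 / fs)

rng = np.random.default_rng(3)
# One velocity record: a 5 Hz large-scale oscillation buried in "turbulence".
u = np.sin(2 * np.pi * 5.0 * t) + 0.8 * rng.standard_normal(t.size)

# Zero-phase Butterworth low-pass: filtfilt runs forward and backward, which is
# the non-causal step that extracts the large scales without phase distortion.
b, a = butter(N=4, Wn=f_cut / (fs / 2.0), btype="low")
u_large_scale = filtfilt(b, a, u)

print("rms of raw record:     ", np.std(u))
print("rms of filtered record:", np.std(u_large_scale))
```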

  17. User requirements for project-oriented remote sensing

    NASA Technical Reports Server (NTRS)

    Hitchcock, H. C.; Baxter, F. P.; Cox, T. L.

    1975-01-01

    Registration of remotely sensed data to geodetic coordinates provides for overlay analysis of land use data. For aerial photographs of a large area, differences in scales, dates, and film types are reconciled, and multispectral scanner data are machine registered at the time of acquisition.

  18. Estimating ecosystem service changes as a precursor to modeling

    EPA Science Inventory

    EPA's Future Midwestern Landscapes Study will project changes in ecosystem services (ES) for alternative future policy scenarios in the Midwestern U.S. Doing so for detailed landscapes over large spatial scales will require serial application of economic and ecological models. W...

  19. Assessing sufficiency of thermal riverscapes for resilient ...

    EPA Pesticide Factsheets

    Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific locations and features within stream networks hypothesized to provide disproportionately high-value functional resilience to salmon populations. These include relatively small-scale features such as thermal refuges, and larger-scale features such as entire watersheds or aquifers that support thermal regimes buffered from local climatic conditions. Quantifying the value of both small and large scale thermal features to salmon populations has been challenged by both the difficulty of mapping thermal regimes at sufficient spatial and temporal resolutions, and integrating thermal regimes into population models. We attempt to address these challenges by using newly-available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We will describe an individual-based modeling approach for assessing sufficiency of thermal refuges for migrating salmon and steelhead in large rivers, as well as a population modeling approach for assessing large-scale climate refugia for salmon in the Pacific Northwest. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec

  20. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    NASA Astrophysics Data System (ADS)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
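
    The scaling step can be sketched as follows, assuming a stand-in zero-absorption curve and a constant fraction of path length per layer (all optical properties and the curve shape below are illustrative, not simulation output):

```python
import numpy as np

c = 3e10          # speed of light in vacuum [cm/s]
n_tissue = 1.4    # refractive index (assumed equal in both layers)
v = c / n_tissue  # photon speed in tissue [cm/s]

t = np.linspace(0.05e-9, 3e-9, 300)           # time axis [s]
R0 = t ** -1.5 * np.exp(-0.5e-9 / t)          # stand-in zero-absorption curve

# Fraction of the path spent in each layer (in the method this comes from the
# analytic average photon path); a constant 70/30 split is assumed here.
f_layer = np.array([0.7, 0.3])
mu_a = np.array([0.05, 0.2])                  # absorption coefficients [1/cm]

# Weighted Beer-Lambert scaling: R(t) = R0(t) * exp(-sum_i mu_a_i * f_i * v * t)
attenuation = np.exp(-np.sum(mu_a * f_layer) * v * t)
R = R0 * attenuation

print("attenuation factor at the zero-absorption peak:",
      attenuation[np.argmax(R0)])
```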

  1. Classical boson sampling algorithms with superior performance to near-term experiments

    NASA Astrophysics Data System (ADS)

    Neville, Alex; Sparrow, Chris; Clifford, Raphaël; Johnston, Eric; Birchall, Patrick M.; Montanaro, Ashley; Laing, Anthony

    2017-12-01

    It is predicted that quantum computers will dramatically outperform their conventional counterparts. However, large-scale universal quantum computers are yet to be built. Boson sampling is a rudimentary quantum algorithm tailored to the platform of linear optics, which has sparked interest as a rapid way to demonstrate such quantum supremacy. Photon statistics are governed by intractable matrix functions, which suggests that sampling from the distribution obtained by injecting photons into a linear optical network could be solved more quickly by a photonic experiment than by a classical computer. The apparently low resource requirements for large boson sampling experiments have raised expectations of a near-term demonstration of quantum supremacy by boson sampling. Here we present classical boson sampling algorithms and theoretical analyses of prospects for scaling boson sampling experiments, showing that near-term quantum supremacy via boson sampling is unlikely. Our classical algorithm, based on Metropolised independence sampling, allowed the boson sampling problem to be solved for 30 photons with standard computing hardware. Compared to current experiments, a demonstration of quantum supremacy over a successful implementation of these classical methods on a supercomputer would require the number of photons and experimental components to increase by orders of magnitude, while tackling exponentially scaling photon loss.
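
    In the actual algorithm the target distribution involves matrix permanents of submatrices of the interferometer unitary; the generic Metropolised independence sampling step it relies on can be sketched on a toy target (the densities and proposal below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)

def target_pdf(x):
    """Toy target: unnormalized standard normal density."""
    return np.exp(-0.5 * x ** 2)

def proposal_sample():
    """Independence proposal: uniform on [-5, 5]."""
    return rng.uniform(-5.0, 5.0)

def proposal_pdf(x):
    return 0.1 if -5.0 <= x <= 5.0 else 0.0

def metropolised_independence_sampler(n_samples):
    x = proposal_sample()
    samples = []
    for _ in range(n_samples):
        y = proposal_sample()
        # Acceptance ratio for an independence proposal:
        # min(1, [p(y) q(x)] / [p(x) q(y)])
        ratio = (target_pdf(y) * proposal_pdf(x)) / (target_pdf(x) * proposal_pdf(y))
        if rng.uniform() < min(1.0, ratio):
            x = y
        samples.append(x)
    return np.array(samples)

s = metropolised_independence_sampler(50000)
print("sample mean/std:", s.mean(), s.std())   # should approach 0 and 1
```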

  2. Physical consistency of subgrid-scale models for large-eddy simulation of incompressible turbulent flows

    NASA Astrophysics Data System (ADS)

    Silvis, Maurits H.; Remmerswaal, Ronald A.; Verstappen, Roel

    2017-01-01

    We study the construction of subgrid-scale models for large-eddy simulation of incompressible turbulent flows. In particular, we aim to consolidate a systematic approach of constructing subgrid-scale models, based on the idea that it is desirable that subgrid-scale models are consistent with the mathematical and physical properties of the Navier-Stokes equations and the turbulent stresses. To that end, we first discuss in detail the symmetries of the Navier-Stokes equations, and the near-wall scaling behavior, realizability and dissipation properties of the turbulent stresses. We furthermore summarize the requirements that subgrid-scale models have to satisfy in order to preserve these important mathematical and physical properties. In this fashion, a framework of model constraints arises that we apply to analyze the behavior of a number of existing subgrid-scale models that are based on the local velocity gradient. We show that these subgrid-scale models do not satisfy all the desired properties, after which we explain that this is partly due to incompatibilities between model constraints and limitations of velocity-gradient-based subgrid-scale models. However, we also reason that the current framework shows that there is room for improvement in the properties and, hence, the behavior of existing subgrid-scale models. We furthermore show how compatible model constraints can be combined to construct new subgrid-scale models that have desirable properties built into them. We provide a few examples of such new models, of which a new model of eddy viscosity type, that is based on the vortex stretching magnitude, is successfully tested in large-eddy simulations of decaying homogeneous isotropic turbulence and turbulent plane-channel flow.

  3. Collective dynamics of cell migration and cell rearrangements

    NASA Astrophysics Data System (ADS)

    Kabla, Alexandre

    Understanding multicellular processes such as embryo development or cancer metastasis requires deciphering the contributions of local cell-autonomous behaviours and long-range interactions with the tissue environment. A key question in this context concerns the emergence of large-scale coordination in cell behaviours, a requirement for collective cell migration or convergent extension. I will present a few examples where physical and mechanical aspects play a significant role in driving tissue-scale dynamics.

  4. A Computational framework for telemedicine.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foster, I.; von Laszewski, G.; Thiruvathukal, G. K.

    1998-07-01

    Emerging telemedicine applications require the ability to exploit diverse and geographically distributed resources. High-speed networks are used to integrate advanced visualization devices, sophisticated instruments, large databases, archival storage devices, PCs, workstations, and supercomputers. This form of telemedical environment is similar to networked virtual supercomputers, also known as metacomputers. Metacomputers are already being used in many scientific application areas. In this article, we analyze requirements necessary for a telemedical computing infrastructure and compare them with requirements found in a typical metacomputing environment. We will show that metacomputing environments can be used to enable a more powerful and unified computational infrastructure for telemedicine. The Globus metacomputing toolkit can provide the necessary low-level mechanisms to enable a large-scale telemedical infrastructure. The Globus toolkit components are designed in a modular fashion and can be extended to support the specific requirements for telemedicine.

  5. Scale-dependent coupling of hysteretic capillary pressure, trapping, and fluid mobilities

    NASA Astrophysics Data System (ADS)

    Doster, F.; Celia, M. A.; Nordbotten, J. M.

    2012-12-01

    Many applications of multiphase flow in porous media, including CO2-storage and enhanced oil recovery, require mathematical models that span a large range of length scales. In the context of numerical simulations, practical grid sizes are often on the order of tens of meters, thereby de facto defining a coarse model scale. Under particular conditions, it is possible to approximate the sub-grid-scale distribution of the fluid saturation within a grid cell; that reconstructed saturation can then be used to compute effective properties at the coarse scale. If both the density difference between the fluids and the vertical extent of the grid cell are large, and buoyant segregation within the cell occurs on a sufficiently short time scale, then the phase pressure distributions are essentially hydrostatic and the saturation profile can be reconstructed from the inferred capillary pressures. However, the saturation reconstruction may not be unique because the parameters and parameter functions of classical formulations of two-phase flow in porous media - the relative permeability functions, the capillary pressure-saturation relationship, and the residual saturations - show path dependence, i.e. their values depend not only on the state variables but also on their drainage and imbibition histories. In this study we focus on capillary pressure hysteresis and trapping and show that the contribution of hysteresis to effective quantities is dependent on the vertical length scale. By studying the transition between the two extreme cases - the homogeneous saturation distribution for small vertical extents and the completely segregated distribution for large extents - we identify how hysteretic capillary pressure at the local scale induces hysteresis in all coarse-scale quantities for medium vertical extents and finally vanishes for large vertical extents. Our results allow for more accurate vertically integrated modeling while improving our understanding of the coupling of capillary pressure and relative permeabilities over larger length scales.
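
    The saturation reconstruction under the hydrostatic assumption can be sketched as follows, using a Brooks-Corey capillary pressure curve as an illustrative choice (all parameter values are assumptions, not taken from the study):

```python
import numpy as np

g = 9.81                 # gravity [m/s^2]
drho = 300.0             # density difference between the phases [kg/m^3]
H = 30.0                 # vertical extent of the coarse grid cell [m]
p_entry = 5.0e3          # Brooks-Corey entry pressure [Pa]
lam = 2.0                # Brooks-Corey exponent
s_wr = 0.1               # residual wetting saturation

def brooks_corey_inverse(pc):
    """Effective wetting saturation from capillary pressure (Brooks-Corey)."""
    pc = np.maximum(pc, p_entry)              # below entry pressure: fully wet
    return (pc / p_entry) ** (-lam)

z = np.linspace(0.0, H, 200)                  # height above the phase contact
pc = p_entry + drho * g * z                   # hydrostatic capillary pressure
s_eff = brooks_corey_inverse(pc)
s_w = s_wr + (1.0 - s_wr) * s_eff             # physical wetting saturation

# Coarse-scale (vertically averaged) saturation for the grid cell (uniform z grid).
print("cell-averaged wetting saturation:", s_w.mean())
```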

  6. Glutamate 338 is an electrostatic facilitator of C–Co bond breakage in a dynamic/electrostatic model of catalysis by ornithine aminomutase

    PubMed Central

    Menon, Binuraj R K; Menon, Navya; Fisher, Karl; Rigby, Stephen E J; Leys, David; Scrutton, Nigel S

    2015-01-01

    How cobalamin-dependent enzymes promote C–Co homolysis to initiate radical catalysis has been debated extensively. For the pyridoxal 5′-phosphate and cobalamin-dependent enzymes lysine 5,6-aminomutase and ornithine 4,5-aminomutase (OAM), large-scale re-orientation of the cobalamin-binding domain linked to C–Co bond breakage has been proposed. In these models, substrate binding triggers dynamic sampling of the B12-binding Rossmann domain to achieve a catalytically competent ‘closed’ conformational state. In ‘closed’ conformations of OAM, Glu338 is thought to facilitate C–Co bond breakage by close association with the cobalamin adenosyl group. We investigated this using stopped-flow continuous-wave photolysis, viscosity-dependence kinetic measurements, and electron paramagnetic resonance spectroscopy of a series of Glu338 variants. We found that substrate-induced C–Co bond homolysis is compromised in Glu338 variant forms of OAM, although photolysis of the C–Co bond is not affected by the identity of residue 338. Electrostatic interactions of Glu338 with the 5′-deoxyadenosyl group of B12 potentiate C–Co bond homolysis in ‘closed’ conformations only; these conformations are unlocked by substrate binding. Our studies extend earlier models that identified a requirement for large-scale motion of the cobalamin domain. Our findings indicate that large-scale motion is required to pre-organize the active site by enabling transient formation of ‘closed’ conformations of OAM. In ‘closed’ conformations, Glu338 interacts with the 5′-deoxyadenosyl group of cobalamin. This interaction is required to potentiate C–Co homolysis, and is a crucial component of the approximately 10¹²-fold rate enhancement achieved by cobalamin-dependent enzymes for C–Co bond homolysis. PMID:25627283

  7. ELT-scale Adaptive Optics real-time control with the Intel Xeon Phi Many Integrated Core Architecture

    NASA Astrophysics Data System (ADS)

    Jenkins, David R.; Basden, Alastair; Myers, Richard M.

    2018-05-01

    We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter and so the next generation ELTs require orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low power x86 CPU cores and high bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT scale single conjugate AO real-time control computation at over 1.0kHz with less than 20μs RMS jitter. We have also shown that with a wavefront sensor camera attached the KNL can process the real-time control loop at up to 966Hz, the maximum frame-rate of the camera, with jitter remaining below 20μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real time control.
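
    The dominant real-time computation is a matrix-vector multiply mapping wavefront-sensor slopes to actuator commands; a plain numpy timing sketch with illustrative ELT-like dimensions (not the optimized Xeon Phi implementation) shows where quoted kilohertz frame rates and microsecond jitter figures come from:

```python
import time
import numpy as np

n_slopes = 2 * 80 * 80       # hypothetical number of WFS slope measurements
n_actuators = 5000           # hypothetical number of DM actuators

rng = np.random.default_rng(5)
control_matrix = rng.standard_normal((n_actuators, n_slopes)).astype(np.float32)

latencies = []
for frame in range(200):
    slopes = rng.standard_normal(n_slopes).astype(np.float32)
    t0 = time.perf_counter()
    commands = control_matrix @ slopes          # per-frame reconstruction step
    latencies.append(time.perf_counter() - t0)

lat = np.array(latencies[10:]) * 1e6            # microseconds, skip warm-up
print(f"mean latency: {lat.mean():.1f} us, RMS jitter: {lat.std():.1f} us, "
      f"max frame rate ~ {1e6 / lat.mean():.0f} Hz")
```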

  8. Large-scale Rectangular Ruler Automated Verification Device

    NASA Astrophysics Data System (ADS)

    Chen, Hao; Chang, Luping; Xing, Minjian; Xie, Xie

    2018-03-01

    This paper introduces a large-scale rectangular ruler automated verification device, which consists of a photoelectric autocollimator, a self-designed mechanical drive car, and an automatic data acquisition system. The mechanical design of the device covers the optical axis, the drive, the fixture, and the wheels. The control system design covers both hardware and software: the hardware is mainly based on a single-chip microcontroller system, and the software manages the photoelectric autocollimator and the automatic data acquisition process. The device can acquire verticality measurement data automatically. The reliability of the device is verified by experimental comparison, and the results meet the requirements of the right-angle test procedure.

  9. Evaluating waste printed circuit boards recycling: Opportunities and challenges, a mini review.

    PubMed

    Awasthi, Abhishek Kumar; Zlamparet, Gabriel Ionut; Zeng, Xianlai; Li, Jinhui

    2017-04-01

    Rapid generation of waste printed circuit boards has become a very serious issue worldwide. Numerous techniques have been developed in the last decade to resolve the pollution from waste printed circuit boards, and also to recover valuable metals from the waste printed circuit board stream on a large scale. However, these techniques have their own specific drawbacks that need to be rectified properly. In this review article, these recycling technologies are evaluated based on a strengths, weaknesses, opportunities and threats analysis. Furthermore, substantial research is still required to improve current technologies for waste printed circuit board recycling with a view to large-scale applications.

  10. A conceptual design of shock-eliminating clover combustor for large scale scramjet engine

    NASA Astrophysics Data System (ADS)

    Sun, Ming-bo; Zhao, Yu-xin; Zhao, Guo-yan; Liu, Yuan

    2017-01-01

    A new concept of a shock-eliminating clover combustor is proposed for large-scale scramjet engines to fulfill the requirements of fuel penetration, total pressure recovery, and cooling. To generate the circular-to-clover transition shape of the combustor, the streamline tracing technique is used based on an axisymmetric expansion parent flowfield calculated using the method of characteristics. The combustor is examined using inviscid and viscous numerical simulations, and a pure circular shape is calculated for comparison. The results show that the combustor avoids shock wave generation and produces low total pressure losses over a wide range of flight conditions at various Mach numbers. The flameholding device for this combustor is briefly discussed.

  11. Extracellular matrix motion and early morphogenesis

    PubMed Central

    Loganathan, Rajprasad; Rongish, Brenda J.; Smith, Christopher M.; Filla, Michael B.; Czirok, Andras; Bénazéraf, Bertrand

    2016-01-01

    For over a century, embryologists who studied cellular motion in early amniotes generally assumed that morphogenetic movement reflected migration relative to a static extracellular matrix (ECM) scaffold. However, as we discuss in this Review, recent investigations reveal that the ECM is also moving during morphogenesis. Time-lapse studies show how convective tissue displacement patterns, as visualized by ECM markers, contribute to morphogenesis and organogenesis. Computational image analysis distinguishes between cell-autonomous (active) displacements and convection caused by large-scale (composite) tissue movements. Modern quantification of large-scale ‘total’ cellular motion and the accompanying ECM motion in the embryo demonstrates that a dynamic ECM is required for generation of the emergent motion patterns that drive amniote morphogenesis. PMID:27302396

  12. Research on unit commitment with large-scale wind power connected power system

    NASA Astrophysics Data System (ADS)

    Jiao, Ran; Zhang, Baoqun; Chi, Zhongjun; Gong, Cheng; Ma, Longfei; Yang, Bing

    2017-01-01

    Large-scale integration of wind power generators into the power grid brings severe challenges to power system economic dispatch due to the stochastic volatility of wind. Unit commitment including wind farms is analyzed in terms of both modeling and solution methods. The structures and characteristics of the models are summarized after classifying them according to their objective functions and constraints. Finally, open issues and possible directions of future research and development are discussed, addressing the requirements of the electricity market, energy-saving generation dispatch and the smart grid, and providing a reference for researchers and practitioners in this field.
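
    As a toy illustration of the problem class, a single-period unit commitment with a deterministic wind forecast can be solved by exhaustive enumeration (real formulations are multi-period mixed-integer programs with ramping, reserve, and uncertainty constraints; all generator data and loads below are made up):

```python
from itertools import product

units = [  # (name, p_min [MW], p_max [MW], marginal cost [$/MWh], startup cost [$])
    ("coal", 150.0, 400.0, 25.0, 5000.0),
    ("ccgt",  80.0, 300.0, 40.0, 2000.0),
    ("ocgt",  20.0, 100.0, 90.0,  500.0),
]

demand, wind_forecast = 520.0, 120.0
residual = demand - wind_forecast           # load left for the thermal units

best_cost, best_schedule = None, None
for on in product([0, 1], repeat=len(units)):
    committed = [u for u, s in zip(units, on) if s]
    if not committed:
        continue
    if not (sum(u[1] for u in committed) <= residual <= sum(u[2] for u in committed)):
        continue                            # committed units cannot meet the load
    cost = sum(u[4] for u in committed)     # startup costs
    dispatch = {u[0]: u[1] for u in committed}           # every unit at its minimum
    remaining = residual - sum(dispatch.values())
    for name, p_min, p_max, marginal, _ in sorted(committed, key=lambda u: u[3]):
        extra = min(p_max - p_min, remaining)             # fill cheapest units first
        dispatch[name] += extra
        remaining -= extra
    cost += sum(marginal * dispatch[name] for name, _, _, marginal, _ in committed)
    if best_cost is None or cost < best_cost:
        best_cost, best_schedule = cost, dispatch

print("cheapest dispatch:", best_schedule, "cost:", best_cost)
```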

  13. Selected Papers on Low-Energy Antiprotons and Possible Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Noble, Robert

    1998-09-19

    The only realistic means by which to create a facility at Fermilab to produce large amounts of low energy antiprotons is to use resources which already exist. There is simply too little money and manpower at this point in time to generate new accelerators on a time scale before the turn of the century. Therefore, innovation is required to modify existing equipment to provide the services required by experimenters.

  14. Age ratios as estimators of productivity: testing assumptions on a threatened seabird, the marbled murrelet (Brachyramphus marmoratus)

    Treesearch

    M. Zachariah Peery; Benjamin H. Becker; Steven R. Beissinger

    2007-01-01

    The ratio of hatch-year (HY) to after-hatch-year (AHY) individuals (HY:AHY ratio) can be a valuable metric for estimating avian productivity because it does not require monitoring individual breeding sites and can often be estimated across large geographic and temporal scales. However, rigorous estimation of age ratios requires that both young and adult age classes are...

  15. Design considerations for implementation of large scale automatic meter reading systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mak, S.; Radford, D.

    1995-01-01

    This paper discusses the requirements imposed on the design of an AMR system expected to serve a large (> 1 million) customer base spread over a large geographical area. Issues such as system throughput, response time, and multi-application expandability are addressed, all of which are intimately dependent on the underlying communication system infrastructure, the local geography, the customer base, and the regulatory environment. A methodology for analysis, assessment, and design of large systems is presented. For illustration, two communication systems -- a low power RF/PLC system and a power frequency carrier system -- are analyzed and discussed.

  16. Large-scale high-throughput computer-aided discovery of advanced materials using cloud computing

    NASA Astrophysics Data System (ADS)

    Bazhirov, Timur; Mohammadi, Mohammad; Ding, Kevin; Barabash, Sergey

    Recent advances in cloud computing have made it possible to access large-scale computational resources completely on-demand in a rapid and efficient manner. When combined with high-fidelity simulations, they serve as an alternative pathway to enable computational discovery and design of new materials through large-scale high-throughput screening. Here, we present a case study for a cloud platform implemented at Exabyte Inc. We perform calculations to screen lightweight ternary alloys for thermodynamic stability. Due to the lack of experimental data for most such systems, we rely on theoretical approaches based on first-principles pseudopotential density functional theory. We calculate the formation energies for a set of ternary compounds approximated by special quasirandom structures. During an example run we were able to scale to 10,656 CPUs within 7 minutes from the start, and obtain results for 296 compounds within 38 hours. The results indicate that the ultimate formation enthalpy of ternary systems can be negative for some lightweight alloys, including Li and Mg compounds. We conclude that, compared to the traditional capital-intensive approach that requires investment in on-premises hardware resources, cloud computing is agile, cost-effective, and scalable, and delivers similar performance.
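
    The stability-screening arithmetic reduces to a formation energy per atom computed from the total energy of a compound and its elemental references; a small sketch with made-up numbers (not calculated values):

```python
def formation_energy_per_atom(total_energy, composition, reference_energies):
    """E_f = (E_total - sum_i n_i * E_ref_i) / N_atoms  [eV/atom]."""
    n_atoms = sum(composition.values())
    e_ref = sum(n * reference_energies[el] for el, n in composition.items())
    return (total_energy - e_ref) / n_atoms

# Hypothetical elemental reference energies [eV/atom].
references = {"Li": -1.90, "Mg": -1.50, "Al": -3.75}

# Hypothetical ternary special-quasirandom-structure cell and its total energy [eV].
composition = {"Li": 4, "Mg": 4, "Al": 8}
e_total = -48.9

e_f = formation_energy_per_atom(e_total, composition, references)
print(f"formation energy: {e_f:+.3f} eV/atom "
      f"({'stable w.r.t. elements' if e_f < 0 else 'unstable'})")
```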

  17. Experimental study of detonation of large-scale powder-droplet-vapor mixtures

    NASA Astrophysics Data System (ADS)

    Bai, C.-H.; Wang, Y.; Xue, K.; Wang, L.-F.

    2018-05-01

    Large-scale experiments were carried out to investigate the detonation performance of a 1600-m³ ternary cloud consisting of aluminum powder, fuel droplets, and vapor, which were dispersed by a central explosive in a cylindrically stratified configuration. High-frame-rate video cameras and pressure gauges were used to analyze the large-scale explosive dispersal of the mixture and the ensuing blast wave generated by the detonation of the cloud. Special attention was focused on the effect of the descending motion of the charge on the detonation performance of the dispersed ternary cloud. The charge was parachuted by an ensemble of apparatus from the designated height in order to achieve the required terminal velocity when the central explosive was detonated. A descending charge with a terminal velocity of 32 m/s produced a cloud with discernibly increased concentration compared with that dispersed from a stationary charge, the detonation of which hence generates a significantly enhanced blast wave beyond the scaled distance of 6 m/kg^{1/3}. The results also show the influence of the descending motion of the charge on the jetting phenomenon and the distorted shock front.

  18. Micro-Macro Coupling in Plasma Self-Organization Processes during Island Coalescence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wan Weigang; Lapenta, Giovanni; Centrum voor Plasma-Astrofysica, Departement Wiskunde, Katholieke Universiteit Leuven, Celestijnenlaan 200B, 3001 Leuven

    The collisionless island coalescence process is studied with particle-in-cell simulations, as an internally driven magnetic self-organization scenario. The macroscopic relaxation time, corresponding to the total time required for the coalescence to complete, is found to depend crucially on the scale of the system. For small-scale systems, where the macroscopic scales and the dissipation scales are more tightly coupled, the relaxation time is independent of the strength of the internal driving force: the small-scale processes of magnetic reconnection adjust to the amount of the initial magnetic flux to be reconnected, indicating that at the microscopic scales reconnection is enslaved by the macroscopic drive. However, for large-scale systems, where the micro-macro scale separation is larger, the relaxation time becomes dependent on the driving force.

  19. Helicopter rotor and engine sizing for preliminary performance estimation

    NASA Technical Reports Server (NTRS)

    Talbot, P. D.; Bowles, J. V.; Lee, H. C.

    1986-01-01

    Methods are presented for estimating some of the more fundamental design variables of single-rotor helicopters (tip speed, blade area, disk loading, and installed power) based on design requirements (speed, weight, fuselage drag, and design hover ceiling). The well-known constraints of advancing-blade compressibility and retreating-blade stall are incorporated into the estimation process, based on an empirical interpretation of rotor performance data from large-scale wind-tunnel tests. Engine performance data are presented and correlated with a simple model usable for preliminary design. When approximate results are required quickly, these methods may be more convenient to use and provide more insight than large digital computer programs.
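
    As background on how such preliminary estimates are typically assembled (a generic textbook momentum-theory sketch, not the specific correlations or engine model of this report; the numbers are illustrative), the ideal hover power follows directly from the disk loading:

      # Generic momentum-theory hover power estimate (illustrative only).
      import math

      def hover_power(weight_n, rotor_radius_m, rho=1.225, figure_of_merit=0.75):
          """weight_n: thrust required (N); rho: air density (kg/m^3);
          figure_of_merit: assumed rotor hover efficiency."""
          disk_area = math.pi * rotor_radius_m ** 2
          induced_velocity = math.sqrt(weight_n / (2.0 * rho * disk_area))
          p_ideal = weight_n * induced_velocity       # ideal induced power (W)
          return p_ideal / figure_of_merit            # rough required power (W)

      # Made-up example: 4000 kg gross weight, 7 m rotor radius, sea level.
      print(f"{hover_power(4000 * 9.81, 7.0) / 1000:.0f} kW")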

  20. Efficient and sustainable deployment of bioenergy with carbon capture and storage in mitigation pathways

    NASA Astrophysics Data System (ADS)

    Kato, E.; Moriyama, R.; Kurosawa, A.

    2016-12-01

    Bioenergy with Carbon Capture and Storage (BECCS) is a key component of mitigation strategies in future socio-economic scenarios that aim to keep the mean global temperature rise well below 2°C above pre-industrial levels, which would require net negative carbon emissions at the end of the 21st century. The Paris Agreement from COP21 likewise calls for "a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases in the second half of this century", which could require large-scale deployment of negative emissions technologies later in this century. Because of the additional requirement for land, developing sustainable low-carbon scenarios requires careful consideration of the land-use implications of large-scale BECCS. In this study, we present possible development strategies for low-carbon scenarios that consider the interaction of economically efficient deployment of bioenergy and/or BECCS technologies, the biophysical limit of bioenergy productivity, and food production. In the evaluations, detailed bioenergy representations, including bioenergy feedstocks and conversion technologies with and without CCS, are implemented in the integrated assessment model GRAPE. Also, to overcome a general discrepancy in yield development between 'top-down' integrated assessment models and 'bottom-up' estimates, we applied yield changes of food and bioenergy crops consistent with process-based biophysical models: PRYSBI-2 (Process-Based Regional-Scale Yield Simulator with Bayesian Inference) for food crops, and SWAT (Soil and Water Assessment Tool) for bioenergy crops under changing climate conditions. Using this framework, economically viable strategies for implementing sustainable BECCS are evaluated.

  1. Scale-space measures for graph topology link protein network architecture to function.

    PubMed

    Hulsman, Marc; Dimitrakopoulos, Christos; de Ridder, Jeroen

    2014-06-15

    The network architecture of physical protein interactions is an important determinant for the molecular functions that are carried out within each cell. To study this relation, the network architecture can be characterized by graph topological characteristics such as shortest paths and network hubs. These characteristics have an important shortcoming: they do not take into account that interactions occur across different scales. This is important because some cellular functions may involve a single direct protein interaction (small scale), whereas others require more and/or indirect interactions, such as protein complexes (medium scale) and interactions between large modules of proteins (large scale). In this work, we derive generalized scale-aware versions of known graph topological measures based on diffusion kernels. We apply these to characterize the topology of networks across all scales simultaneously, generating a so-called graph topological scale-space. The comprehensive physical interaction network in yeast is used to show that scale-space based measures consistently give superior performance when distinguishing protein functional categories and three major types of functional interactions: genetic interaction, co-expression and perturbation interactions. Moreover, we demonstrate that graph topological scale spaces capture biologically meaningful features that provide new insights into the link between function and protein network architecture. Matlab(TM) code to calculate the scale-aware topological measures (STMs) is available at http://bioinformatics.tudelft.nl/TSSA. © The Author 2014. Published by Oxford University Press.
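
    To make the notion of a scale parameter concrete, the minimal sketch below (not the authors' Matlab STM code) builds a graph diffusion kernel K(beta) = expm(-beta*L) from the Laplacian L of a toy interaction network; small beta probes direct interactions while large beta probes module-level structure, and a simple kernel-based centrality is evaluated across scales:

      # Minimal illustration of a diffusion-kernel "scale-space" on a toy graph.
      import numpy as np
      from scipy.linalg import expm

      def diffusion_kernel(adjacency, beta):
          laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
          return expm(-beta * laplacian)

      # Toy 4-protein network: a triangle (0-1-2) with a pendant node 3.
      A = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)

      for beta in (0.1, 1.0, 10.0):           # sweep across scales
          K = diffusion_kernel(A, beta)
          centrality = K.sum(axis=0)          # a simple scale-aware centrality
          print(f"beta={beta:5.1f}  centrality={np.round(centrality, 3)}")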

  2. The global palm oil sector must change to save biodiversity and improve food security in the tropics.

    PubMed

    Azhar, Badrul; Saadun, Norzanalia; Prideaux, Margi; Lindenmayer, David B

    2017-12-01

    Most palm oil currently available in global markets is sourced from certified large-scale plantations. Comparatively little is sourced from (typically uncertified) smallholders. We argue that sourcing sustainable palm oil should not be determined by commercial certification alone and that the certification process should be revisited. There are so-far unrecognized benefits of sourcing palm oil from smallholders that should be considered if genuine biodiversity conservation is to be a foundation of 'environmentally sustainable' palm oil production. Despite a lack of certification, smallholder production is often more biodiversity-friendly than certified production from large-scale plantations. Sourcing palm oil from smallholders also alleviates poverty among rural farmers, promoting better conservation outcomes. Yet, certification schemes - the current measure of 'sustainability' - are financially accessible only for large-scale plantations that operate as profit-driven monocultures. Industrial palm oil is expanding rapidly in regions with weak environmental laws and enforcement. This warrants the development of an alternative certification scheme for smallholders. Greater attention should be directed to deforestation-free palm oil production in smallholdings, where production is less likely to cause large scale biodiversity loss. These small-scale farmlands in which palm oil is mixed with other crops should be considered by retailers and consumers who are interested in promoting sustainable palm oil production. Simultaneously, plantation companies should be required to make their existing production landscapes more compatible with enhanced biodiversity conservation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Uncovering Nature’s 100 TeV Particle Accelerators in the Large-Scale Jets of Quasars

    NASA Astrophysics Data System (ADS)

    Georganopoulos, Markos; Meyer, Eileen; Sparks, William B.; Perlman, Eric S.; Van Der Marel, Roeland P.; Anderson, Jay; Sohn, S. Tony; Biretta, John A.; Norman, Colin Arthur; Chiaberge, Marco

    2016-04-01

    Since the first jet X-ray detections sixteen years ago, the adopted paradigm for the X-ray emission has been the IC/CMB model, which requires highly relativistic (Lorentz factors of 10-20), extremely powerful (sometimes super-Eddington) kpc-scale jets. I will discuss recently obtained strong evidence, from two different avenues, IR to optical polarimetry for PKS 1136-135 and gamma-ray observations for 3C 273 and PKS 0637-752, ruling out the IC/CMB model. Our work constrains the jet Lorentz factors to less than ~few, and leaves as the only reasonable alternative synchrotron emission from ~100 TeV jet electrons, accelerated hundreds of kpc away from the central engine. This refutes over a decade of work on the jet X-ray emission mechanism and overall energetics and, if confirmed in more sources, it will constitute a paradigm shift in our understanding of powerful large-scale jets and their role in the universe. Two important findings emerging from our work will also be discussed: (i) the solid angle-integrated luminosity of the large-scale jet is comparable to that of the jet core, contrary to the current belief that the core is the dominant jet radiative outlet, and (ii) the large-scale jets are the main source of TeV photons in the universe, something potentially important, as TeV photons have been suggested to heat up the intergalactic medium and reduce the number of dwarf galaxies formed.

  4. Precision agriculture in large-scale mechanized farming

    USDA-ARS?s Scientific Manuscript database

    Precision agriculture involves a great deal of technologies and requires additional investments of money and time, but it can be practiced at different levels depending on the specific field and crop conditions and the resources and technology services available to the farmer. If practiced properly,...

  5. A synthesis and comparative evaluation of drainage water management

    USDA-ARS?s Scientific Manuscript database

    Viable large-scale crop production in the United States requires artificial drainage in humid and poorly drained agricultural regions. Excess water removal is generally achieved by installing tile drains that export water to open ditches that eventually flow into streams. Drainage water management...

  6. Engineering Education for Agricultural and Rural Development in Africa

    ERIC Educational Resources Information Center

    Adewumi, B. A.

    2008-01-01

    Agricultural Engineering has transformed agricultural practices from subsistence level to medium and large-scale production via mechanisation in the developed nations. This has reduced the labour force requirements in agriculture; increased production levels and efficiency, product shelf life and product quality; and resulted in…

  7. The Numerical Propulsion System Simulation: An Overview

    NASA Technical Reports Server (NTRS)

    Lytle, John K.

    2000-01-01

    Advances in computational technology and in physics-based modeling are making large-scale, detailed simulations of complex systems possible within the design environment. For example, the integration of computing, communications, and aerodynamics has reduced the time required to analyze major propulsion system components from days and weeks to minutes and hours. This breakthrough has enabled the detailed simulation of major propulsion system components to become a routine part of designing systems, providing the designer with critical information about the components early in the design process. This paper describes the development of the numerical propulsion system simulation (NPSS), a modular and extensible framework for the integration of multicomponent and multidisciplinary analysis tools using geographically distributed resources such as computing platforms, data bases, and people. The analysis is currently focused on large-scale modeling of complete aircraft engines. This will provide the product developer with a "virtual wind tunnel" that will reduce the number of hardware builds and tests required during the development of advanced aerospace propulsion systems.

  8. A new statistical model for subgrid dispersion in large eddy simulations of particle-laden flows

    NASA Astrophysics Data System (ADS)

    Muela, Jordi; Lehmkuhl, Oriol; Pérez-Segarra, Carles David; Oliva, Asensi

    2016-09-01

    Dispersed multiphase turbulent flows are present in many industrial and commercial applications such as internal combustion engines, turbofans, dispersion of contaminants, and steam turbines. Therefore, there is a clear interest in the development of models and numerical tools capable of performing detailed and reliable simulations of these kinds of flows. Large Eddy Simulations offer good accuracy and reliable results together with reasonable computational requirements, making them an attractive approach for developing numerical tools for particle-laden turbulent flows. Nonetheless, in multiphase dispersed flows additional difficulties arise in LES, since the effect of the unresolved scales of the continuous phase on the dispersed phase is lost due to the filtering procedure. In order to solve this issue, a model able to reconstruct the subgrid velocity seen by the particles is required. In this work a new model for the reconstruction of the subgrid-scale effects on the dispersed phase is presented and assessed. The methodology is based on the reconstruction of statistics via Probability Density Functions (PDFs).
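
    As a rough illustration of the general idea of PDF-based subgrid reconstruction (a generic sketch, not the specific model proposed in this work), the unresolved velocity fluctuation seen by a particle can be sampled from a Gaussian PDF whose variance is set by the local subgrid-scale kinetic energy, assuming isotropy of the unresolved scales:

      # Generic PDF-based sampling of the subgrid velocity seen by a particle.
      import numpy as np

      rng = np.random.default_rng(0)

      def velocity_seen_by_particle(u_filtered, k_sgs):
          """u_filtered: resolved (filtered) velocity at the particle, shape (3,).
          k_sgs: subgrid-scale kinetic energy at the particle (m^2/s^2)."""
          sigma = np.sqrt(2.0 * k_sgs / 3.0)        # per-component rms fluctuation
          u_prime = rng.normal(0.0, sigma, size=3)  # sampled subgrid fluctuation
          return u_filtered + u_prime

      print(velocity_seen_by_particle(np.array([1.0, 0.2, 0.0]), k_sgs=0.05))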

  9. Simulation of FRET dyes allows quantitative comparison against experimental data

    NASA Astrophysics Data System (ADS)

    Reinartz, Ines; Sinner, Claude; Nettels, Daniel; Stucki-Buchli, Brigitte; Stockmar, Florian; Panek, Pawel T.; Jacob, Christoph R.; Nienhaus, Gerd Ulrich; Schuler, Benjamin; Schug, Alexander

    2018-03-01

    Fully understanding biomolecular function requires detailed insight into the systems' structural dynamics. Powerful experimental techniques such as single molecule Förster Resonance Energy Transfer (FRET) provide access to such dynamic information yet have to be carefully interpreted. Molecular simulations can complement these experiments but typically face limits in accessing slow time scales and large or unstructured systems. Here, we introduce a coarse-grained simulation technique that tackles these challenges. While requiring only a few parameters, we maintain full protein flexibility and include all heavy atoms of proteins, linkers, and dyes. We are able to sufficiently reduce computational demands to simulate large or heterogeneous structural dynamics and ensembles on slow time scales found in, e.g., protein folding. The simulations allow for calculating FRET efficiencies that quantitatively agree with experimentally determined values. By providing atomically resolved trajectories, this work supports the planning and microscopic interpretation of experiments. Overall, these results highlight how simulations and experiments can complement each other, leading to new insights into biomolecular dynamics and function.
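
    For reference, the transfer efficiency that such simulations and experiments compare is commonly obtained from the donor-acceptor distance via the standard Förster relation; the sketch below is generic (not the paper's coarse-grained simulation code) and the dye parameters are illustrative:

      # Standard Förster relation: efficiency vs. donor-acceptor distance.
      def fret_efficiency(r_nm, r0_nm):
          """r_nm: donor-acceptor distance; r0_nm: Foerster radius (50% point)."""
          return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

      for r in (3.0, 5.4, 8.0):               # illustrative distances, R0 = 5.4 nm
          print(f"r = {r:.1f} nm -> E = {fret_efficiency(r, 5.4):.2f}")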

  10. On the decentralized control of large-scale systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Chong, C.

    1973-01-01

    The decentralized control of stochastic large-scale systems was considered. Particular emphasis was given to control strategies which utilize decentralized information and can be computed in a decentralized manner. The deterministic constrained optimization problem is generalized to the stochastic case when each decision variable depends on different information and the constraint is only required to be satisfied on the average. For problems with a particular structure, a hierarchical decomposition is obtained. For the stochastic control of dynamic systems with different information sets, a new kind of optimality is proposed which exploits the coupled nature of the dynamic system. The subsystems are assumed to be uncoupled and then certain constraints are required to be satisfied, either in an off-line or on-line fashion. For off-line coordination, a hierarchical approach to solving the problem is obtained. The lower level problems are all uncoupled. For on-line coordination, distinction is made between open loop feedback optimal coordination and closed loop optimal coordination.

  11. Small-scale dynamic confinement gap test

    NASA Astrophysics Data System (ADS)

    Cook, Malcolm

    2011-06-01

    Gap tests are routinely used to ascertain the shock sensitiveness of new explosive formulations. The tests are popular since they are easy and relatively cheap to perform. However, with modern insensitive formulations that have large critical diameters, large test samples are required. This can make testing and screening of new formulations expensive since large quantities of test material are required. Thus a new test that uses significantly smaller sample quantities would be very beneficial. In this paper we describe a new small-scale test that has been designed using our CHARM ignition and growth routine in the DYNA2D hydrocode. The new test is a modified gap test and uses detonating nitromethane to provide dynamic confinement (instead of a thick metal case) whilst exposing the sample to a long duration shock wave. The long duration shock wave gives less reactive materials that are below their critical diameter more time to react. We present details on the modelling of the test together with some preliminary experiments to demonstrate the potential of the new test method.

  12. A scalable and operationally simple radical trifluoromethylation

    PubMed Central

    Beatty, Joel W.; Douglas, James J.; Cole, Kevin P.; Stephenson, Corey R. J.

    2015-01-01

    The large number of reagents that have been developed for the synthesis of trifluoromethylated compounds is a testament to the importance of the CF3 group as well as the associated synthetic challenge. Current state-of-the-art reagents for appending the CF3 functionality directly are highly effective; however, their use on preparative scale has minimal precedent because they require multistep synthesis for their preparation, and/or are prohibitively expensive for large-scale application. For a scalable trifluoromethylation methodology, trifluoroacetic acid and its anhydride represent an attractive solution in terms of cost and availability; however, because of the exceedingly high oxidation potential of trifluoroacetate, previous endeavours to use this material as a CF3 source have required the use of highly forcing conditions. Here we report a strategy for the use of trifluoroacetic anhydride for a scalable and operationally simple trifluoromethylation reaction using pyridine N-oxide and photoredox catalysis to effect a facile decarboxylation to the CF3 radical. PMID:26258541

  13. Random-field-induced disordering mechanism in a disordered ferromagnet: Between the Imry-Ma and the standard disordering mechanism

    NASA Astrophysics Data System (ADS)

    Andresen, Juan Carlos; Katzgraber, Helmut G.; Schechter, Moshe

    2017-12-01

    Random fields disorder Ising ferromagnets by aligning single spins in the direction of the random field in three space dimensions, or by flipping large ferromagnetic domains at dimensions two and below. While the former requires random fields of typical magnitude similar to the interaction strength, the latter Imry-Ma mechanism only requires infinitesimal random fields. Recently, it has been shown that for dilute anisotropic dipolar systems a third mechanism exists, where the ferromagnetic phase is disordered by finite-size glassy domains at a random field of finite magnitude that is considerably smaller than the typical interaction strength. Using large-scale Monte Carlo simulations and zero-temperature numerical approaches, we show that this mechanism applies to disordered ferromagnets with competing short-range ferromagnetic and antiferromagnetic interactions, suggesting its generality in ferromagnetic systems with competing interactions and an underlying spin-glass phase. A finite-size-scaling analysis of the magnetization distribution suggests that the transition might be first order.

  14. Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach

    PubMed Central

    Bigdely-Shamlo, Nima; Makeig, Scott; Robbins, Kay A.

    2016-01-01

    Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain–computer interface models. However, the absence of standardized vocabularies for annotating events in a machine understandable manner, the welter of collection-specific data organizations, the difficulty in moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a "containerized" approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data "Levels," each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. ESS schema and tools are freely available at www.eegstudy.org and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org). PMID:27014048

  15. Bioinspired large-scale aligned porous materials assembled with dual temperature gradients

    PubMed Central

    Bai, Hao; Chen, Yuan; Delattre, Benjamin; Tomsia, Antoni P.; Ritchie, Robert O.

    2015-01-01

    Natural materials, such as bone, teeth, shells, and wood, exhibit outstanding properties despite being porous and made of weak constituents. Frequently, they represent a source of inspiration to design strong, tough, and lightweight materials. Although many techniques have been introduced to create such structures, a long-range order of the porosity as well as a precise control of the final architecture remain difficult to achieve. These limitations severely hinder the scale-up fabrication of layered structures aimed for larger applications. We report on a bidirectional freezing technique to successfully assemble ceramic particles into scaffolds with large-scale aligned, lamellar, porous, nacre-like structure and long-range order at the centimeter scale. This is achieved by modifying the cold finger with a polydimethylsiloxane (PDMS) wedge to control the nucleation and growth of ice crystals under dual temperature gradients. Our approach could provide an effective way of manufacturing novel bioinspired structural materials, in particular advanced materials such as composites, where a higher level of control over the structure is required. PMID:26824062

  16. Large-Scale Biomonitoring of Remote and Threatened Ecosystems via High-Throughput Sequencing

    PubMed Central

    Gibson, Joel F.; Shokralla, Shadi; Curry, Colin; Baird, Donald J.; Monk, Wendy A.; King, Ian; Hajibabaei, Mehrdad

    2015-01-01

    Biodiversity metrics are critical for assessment and monitoring of ecosystems threatened by anthropogenic stressors. Existing sorting and identification methods are too expensive and labour-intensive to be scaled up to meet management needs. Alternately, a high-throughput DNA sequencing approach could be used to determine biodiversity metrics from bulk environmental samples collected as part of a large-scale biomonitoring program. Here we show that both morphological and DNA sequence-based analyses are suitable for recovery of individual taxonomic richness, estimation of proportional abundance, and calculation of biodiversity metrics using a set of 24 benthic samples collected in the Peace-Athabasca Delta region of Canada. The high-throughput sequencing approach was able to recover all metrics with a higher degree of taxonomic resolution than morphological analysis. The reduced cost and increased capacity of DNA sequence-based approaches will finally allow environmental monitoring programs to operate at the geographical and temporal scale required by industrial and regulatory end-users. PMID:26488407

  17. On the Computation of Sound by Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Piomelli, Ugo; Streett, Craig L.; Sarkar, Sutanu

    1997-01-01

    The effect of the small scales on the source term in Lighthill's acoustic analogy is investigated, with the objective of determining the accuracy of large-eddy simulations when applied to studies of flow-generated sound. The distribution of the turbulent quadrupole is predicted accurately, if models that take into account the trace of the SGS stresses are used. Its spatial distribution is also correct, indicating that the low-wave-number (or frequency) part of the sound spectrum can be predicted well by LES. Filtering, however, removes the small-scale fluctuations that contribute significantly to the higher derivatives in space and time of Lighthill's stress tensor T(sub ij). The rms fluctuations of the filtered derivatives are substantially lower than those of the unfiltered quantities. The small scales, however, are not strongly correlated, and are not expected to contribute significantly to the far-field sound; separate modeling of the subgrid-scale density fluctuations might, however, be required in some configurations.
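
    For context, the source term referred to here is built from Lighthill's stress tensor, which in its standard form (with tau_ij the viscous stress, c_0 the ambient sound speed, and the subscript 0 denoting ambient values) reads

      T_{ij} = \rho u_i u_j + \left[(p - p_0) - c_0^2(\rho - \rho_0)\right]\delta_{ij} - \tau_{ij},
      \qquad
      \frac{\partial^2 \rho'}{\partial t^2} - c_0^2 \nabla^2 \rho' = \frac{\partial^2 T_{ij}}{\partial x_i \partial x_j},

    so the higher space and time derivatives of T(sub ij) discussed above control the high-frequency content of the radiated sound.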

  18. AirSTAR: A UAV Platform for Flight Dynamics and Control System Testing

    NASA Technical Reports Server (NTRS)

    Jordan, Thomas L.; Foster, John V.; Bailey, Roger M.; Belcastro, Christine M.

    2006-01-01

    As part of the NASA Aviation Safety Program at Langley Research Center, a dynamically scaled unmanned aerial vehicle (UAV) and associated ground based control system are being developed to investigate dynamics modeling and control of large transport vehicles in upset conditions. The UAV is a 5.5% (seven foot wingspan), twin turbine, generic transport aircraft with a sophisticated instrumentation and telemetry package. A ground based, real-time control system is located inside an operations vehicle for the research pilot and associated support personnel. The telemetry system supports over 70 channels of data plus video for the downlink and 30 channels for the control uplink. Data rates are in excess of 200 Hz. Dynamic scaling of the UAV, which includes dimensional, weight, inertial, actuation, and control system scaling, is required so that the sub-scale vehicle will realistically simulate the flight characteristics of the full-scale aircraft. This testbed will be utilized to validate modeling methods, flight dynamics characteristics, and control system designs for large transport aircraft, with the end goal being the development of technologies to reduce the fatal accident rate due to loss-of-control.
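
    The abstract does not spell out the scaling laws; as background, dynamic (Froude) scaling with geometric scale factor n (model dimensions = full-scale dimensions / n) commonly uses the relations sketched below, where the full-scale numbers are made up for illustration and are not the actual AirSTAR values:

      # Generic Froude-scaling relations for a dynamically scaled model.
      import math

      def froude_scale(full, n):
          s = math.sqrt(n)
          return {
              "length":       full["length"] / n,
              "mass":         full["mass"] / n ** 3,     # same air density assumed
              "inertia":      full["inertia"] / n ** 5,
              "velocity":     full["velocity"] / s,
              "time":         full["time"] / s,          # model motions are faster
              "angular_rate": full["angular_rate"] * s,
          }

      # Illustrative full-scale transport values and a 5.5% model (n ~ 18.2).
      full_scale = {"length": 63.0, "mass": 80000.0, "inertia": 5.0e6,
                    "velocity": 80.0, "time": 1.0, "angular_rate": 0.2}
      print(froude_scale(full_scale, n=1 / 0.055))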

  19. SOURCE EXPLORER: Towards Web Browser Based Tools for Astronomical Source Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Young, M. D.; Hayashi, S.; Gopu, A.

    2014-05-01

    As a new generation of large-format, high-resolution imagers (ODI, DECAM, LSST, etc.) comes online, we are faced with the daunting prospect of astronomical images containing upwards of hundreds of thousands of identifiable sources. Visualizing and interacting with such large datasets using traditional astronomical tools appears to be unfeasible, and a new approach is required. We present here a method for the display and analysis of arbitrarily large source datasets using dynamically scaling levels of detail, enabling scientists to rapidly move from large-scale spatial overviews down to the level of individual sources and everything in-between. Based on the recognized standards of HTML5+JavaScript, we enable observers and archival users to interact with their images and sources from any modern computer without having to install specialized software. We demonstrate the ability to produce large-scale source lists from the images themselves, as well as overlaying data from publicly available source catalogs (2MASS, GALEX, SDSS, etc.) or user-provided source lists. A high-availability cluster of computational nodes allows us to produce these source maps on demand, customized based on user input. User-generated source lists and maps are persistent across sessions and are available for further plotting, analysis, refinement, and culling.

  20. BioPlex Display: An Interactive Suite for Large-Scale AP-MS Protein-Protein Interaction Data.

    PubMed

    Schweppe, Devin K; Huttlin, Edward L; Harper, J Wade; Gygi, Steven P

    2018-01-05

    The development of large-scale data sets requires a new means to display and disseminate research studies to large audiences. Knowledge of protein-protein interaction (PPI) networks has become a principal interest of many groups within the field of proteomics. At the confluence of technologies such as cross-linking mass spectrometry, yeast two-hybrid, protein cofractionation, and affinity purification mass spectrometry (AP-MS), detection of PPIs can uncover novel biological inferences at high throughput. Thus new platforms to provide community access to large data sets are necessary. To this end, we have developed a web application that enables exploration and dissemination of the growing BioPlex interaction network. BioPlex is a large-scale interactome data set based on AP-MS of baits from the human ORFeome. The latest BioPlex data set release (BioPlex 2.0) contains 56,553 interactions from 5,891 AP-MS experiments. To improve community access to this vast compendium of interactions, we developed BioPlex Display, which integrates individual protein querying, access to empirical data, and on-the-fly annotation of networks within an easy-to-use and mobile web application. BioPlex Display enables rapid acquisition of data from BioPlex and development of hypotheses based on protein interactions.

  1. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    PubMed

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique to provide a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time- and memory-efficient method for solving a large-scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. A 3D EIT system with a large number of measurements produces a large Jacobian matrix; this can cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while at the same time retaining the image quality. Firstly, a sparse matrix reduction technique is proposed that uses thresholding to set very small values of the Jacobian matrix to zero. By converting the Jacobian matrix into a sparse format, the zero elements are eliminated, which reduces the memory requirement. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with block-wise CG enables the large-scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction on the reconstruction results.
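
    A minimal sketch of the two ingredients described above, thresholding the Jacobian into a sparse matrix and solving the regularized normal equations with conjugate gradients, is given below; it is a generic serial illustration, not the authors' block-wise parallel implementation, and the regularization and threshold values are arbitrary:

      # Generic sparse-Jacobian CG reconstruction sketch (illustrative values).
      import numpy as np
      import scipy.sparse as sp
      from scipy.sparse.linalg import cg

      def sparse_cg_reconstruction(J, dv, threshold=1e-6, lam=1e-3):
          """J: dense Jacobian (n_meas x n_elem); dv: boundary voltage change."""
          J_sparse = sp.csr_matrix(np.where(np.abs(J) >= threshold, J, 0.0))
          lhs = (J_sparse.T @ J_sparse + lam * sp.identity(J.shape[1])).tocsr()
          rhs = J_sparse.T @ dv
          dsigma, _info = cg(lhs, rhs, maxiter=500)
          return dsigma                       # conductivity update

      # Toy random data, purely to show the call pattern.
      rng = np.random.default_rng(1)
      J = rng.normal(size=(200, 1000)) * (rng.random((200, 1000)) < 0.05)
      dv = rng.normal(size=200)
      print(sparse_cg_reconstruction(J, dv)[:5])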

  2. Extending large-scale forest inventories to assess urban forests.

    PubMed

    Corona, Piermaria; Agrimi, Mariagrazia; Baffetta, Federica; Barbati, Anna; Chiriacò, Maria Vincenza; Fattorini, Lorenzo; Pompei, Enrico; Valentini, Riccardo; Mattioli, Walter

    2012-03-01

    Urban areas are continuously expanding today, extending their influence on an increasingly large proportion of woods and trees located in or near urban and urbanizing areas, the so-called urban forests. Although these forests have the potential for significantly improving the quality of the urban environment and the well-being of the urban population, data to quantify the extent and characteristics of urban forests are still lacking or fragmentary on a large scale. In this regard, an expansion of the domain of multipurpose forest inventories like National Forest Inventories (NFIs) towards urban forests would be required. To this end, it would be convenient to exploit the same sampling scheme applied in NFIs to assess the basic features of urban forests. This paper considers approximately unbiased estimators of abundance and coverage of urban forests, together with estimators of the corresponding variances, which can be achieved from the first phase of most large-scale forest inventories. A simulation study is carried out in order to check the performance of the considered estimators under various situations involving the spatial distribution of the urban forests over the study area. An application is worked out on the data from the Italian NFI.

  3. Data management in large-scale collaborative toxicity studies: how to file experimental data for automated statistical analysis.

    PubMed

    Stanzel, Sven; Weimer, Marc; Kopp-Schneider, Annette

    2013-06-01

    High-throughput screening approaches are carried out for the toxicity assessment of a large number of chemical compounds. In such large-scale in vitro toxicity studies several hundred or thousand concentration-response experiments are conducted. The automated evaluation of concentration-response data using statistical analysis scripts saves time and yields more consistent results in comparison to data analysis performed by the use of menu-driven statistical software. Automated statistical analysis requires that concentration-response data are available in a standardised data format across all compounds. To obtain consistent data formats, a standardised data management workflow must be established, including guidelines for data storage, data handling and data extraction. In this paper two procedures for data management within large-scale toxicological projects are proposed. Both procedures are based on Microsoft Excel files as the researcher's primary data format and use a computer programme to automate the handling of data files. The first procedure assumes that data collection has not yet started whereas the second procedure can be used when data files already exist. Successful implementation of the two approaches into the European project ACuteTox is illustrated. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. A forward-advancing wave expansion method for numerical solution of large-scale sound propagation problems

    NASA Astrophysics Data System (ADS)

    Rolla, L. Barrera; Rice, H. J.

    2006-09-01

    In this paper a "forward-advancing" field discretization method suitable for solving the Helmholtz equation in large-scale problems is proposed. The forward wave expansion method (FWEM) is derived from a highly efficient discretization procedure based on interpolation of wave functions known as the wave expansion method (WEM). The FWEM computes the propagated sound field by means of an exclusively forward advancing solution, neglecting the backscattered field. It is thus analogous to methods such as the (one way) parabolic equation method (PEM) (usually discretized using standard finite difference or finite element methods). These techniques do not require the inversion of large system matrices and thus enable the solution of large-scale acoustic problems where backscatter is not of interest. Calculations using the FWEM are presented for two propagation problems, and comparisons with analytical and theoretical solutions show this forward approximation to be highly accurate. Examples of sound propagation over a screen in upwind and downwind refracting atmospheric conditions at low nodal spacings (0.2 per wavelength in the propagation direction) are also included to demonstrate the flexibility and efficiency of the method.

  5. Large-scale quantum photonic circuits in silicon

    NASA Astrophysics Data System (ADS)

    Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk

    2016-08-01

    Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from ~30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards large-scale source integration. Finally, we review monolithic integration strategies for single-photon detectors and their essential role in on-chip feed forward operations.

  6. Large-scale production of lentiviral vector in a closed system hollow fiber bioreactor

    PubMed Central

    Sheu, Jonathan; Beltzer, Jim; Fury, Brian; Wilczek, Katarzyna; Tobin, Steve; Falconer, Danny; Nolta, Jan; Bauer, Gerhard

    2015-01-01

    Lentiviral vectors are widely used in the field of gene therapy as an effective method for permanent gene delivery. While current methods of producing small scale vector batches for research purposes depend largely on culture flasks, the emergence and popularity of lentiviral vectors in translational, preclinical and clinical research has demanded their production on a much larger scale, a task that can be difficult to manage with the numbers of producer cell culture flasks required for large volumes of vector. To generate a large scale, partially closed system method for the manufacturing of clinical grade lentiviral vector suitable for the generation of induced pluripotent stem cells (iPSCs), we developed a method employing a hollow fiber bioreactor traditionally used for cell expansion. We have demonstrated the growth, transfection, and vector-producing capability of 293T producer cells in this system. Vector particle RNA titers after subsequent vector concentration yielded values comparable to lentiviral iPSC induction vector batches produced using traditional culture methods in 225 cm2 flasks (T225s) and in 10-layer cell factories (CF10s), while yielding a volume nearly 145 times larger than the yield from a T225 flask and nearly three times larger than the yield from a CF10. Employing a closed system hollow fiber bioreactor for vector production offers the possibility of manufacturing large quantities of gene therapy vector while minimizing reagent usage, equipment footprint, and open system manipulation. PMID:26151065

  7. Collection of quantitative chemical release field data.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demirgian, J.; Macha, S.; Loyola Univ.

    1999-01-01

    Detection and quantitation of chemicals in the environment requires Fourier-transform infrared (FTIR) instruments that are properly calibrated and tested. This calibration and testing requires field testing using matrices that are representative of actual instrument use conditions. Three methods commonly used for developing calibration files and training sets in the field are a closed optical cell or chamber, a large-scale chemical release, and a small-scale chemical release. There is no best method. The advantages and limitations of each method should be considered in evaluating field results. Proper calibration characterizes the sensitivity of an instrument, its ability to detect a component in different matrices, and the quantitative accuracy and precision of the results.

  8. Scalable DB+IR Technology: Processing Probabilistic Datalog with HySpirit.

    PubMed

    Frommholz, Ingo; Roelleke, Thomas

    2016-01-01

    Probabilistic Datalog (PDatalog, proposed in 1995) is a probabilistic variant of Datalog and a nice conceptual idea to model Information Retrieval in a logical, rule-based programming paradigm. Making PDatalog work in real-world applications requires more than probabilistic facts and rules, and the semantics associated with the evaluation of the programs. We report in this paper some of the key features of the HySpirit system required to scale the execution of PDatalog programs. Firstly, there is the requirement to express probability estimation in PDatalog. Secondly, fuzzy-like predicates are required to model vague predicates (e.g. vague match of attributes such as age or price). Thirdly, to handle large data sets there are scalability issues to be addressed, and therefore, HySpirit provides probabilistic relational indexes and parallel and distributed processing. The main contribution of this paper is a consolidated view on the methods of the HySpirit system to make PDatalog applicable in real-scale applications that involve a wide range of requirements typical for data (information) management and analysis.

  9. Coherent photon scattering background in sub-GeV/c^2 direct dark matter searches

    DOE PAGES

    Robinson, Alan E.

    2017-01-18

    Here, proposed dark matter detectors with eV-scale sensitivities will detect a large background of atomic (nuclear) recoils from coherent photon scattering of MeV-scale photons. This background climbs steeply below ~10 eV, far exceeding the declining rate of low-energy Compton recoils. The upcoming generation of dark matter detectors will not be limited by this background, but further development of eV-scale and sub-eV detectors will require strategies, including the use of low nuclear mass target materials, to maximize dark matter sensitivity while minimizing the coherent photon scattering background.

  10. Ocean Wave Energy Regimes of the Circumpolar Coastal Zones

    NASA Astrophysics Data System (ADS)

    Atkinson, D. E.

    2004-12-01

    Ocean wave activity is a major environmental forcing agent of the ice-rich sediments that comprise large sections of the arctic coastal margins. While it is instructive to possess information about the wind regimes in these regions, direct application to geomorphological and engineering needs requires knowledge of the resultant wave-energy regimes. Wave energy information has been calculated at the regional scale using adjusted reanalysis model windfield data. Calculations at this scale are not designed to account for local-scale coastline/bathymetric irregularities and variability. Results will be presented for the circumpolar zones specified by the Arctic Coastal Dynamics Project.

  11. Beyond Scale-Free Small-World Networks: Cortical Columns for Quick Brains

    NASA Astrophysics Data System (ADS)

    Stoop, Ralph; Saase, Victor; Wagner, Clemens; Stoop, Britta; Stoop, Ruedi

    2013-03-01

    We study to what extent cortical columns with their particular wiring boost neural computation. Upon a vast survey of columnar networks performing various real-world cognitive tasks, we detect no signs of enhancement. It is on a mesoscopic—intercolumnar—scale that the existence of columns, largely irrespective of their inner organization, enhances the speed of information transfer and minimizes the total wiring length required to bind distributed columnar computations towards spatiotemporally coherent results. We suggest that brain efficiency may be related to a doubly fractal connectivity law, resulting in networks with efficiency properties beyond those by scale-free networks.

  12. Inversion of very large matrices encountered in large scale problems of photogrammetry and photographic astrometry

    NASA Technical Reports Server (NTRS)

    Brown, D. C.

    1971-01-01

    The simultaneous adjustment of very large nets of overlapping plates covering the celestial sphere becomes computationally feasible by virtue of a twofold process that generates a system of normal equations having a bordered-banded coefficient matrix, and solves such a system in a highly efficient manner. Numerical results suggest that when a well constructed spherical net is subjected to a rigorous, simultaneous adjustment, the use of independently established control points is required neither for determinacy nor for the production of accurate results.

  13. Decoupling processes and scales of shoreline morphodynamics

    USGS Publications Warehouse

    Hapke, Cheryl J.; Plant, Nathaniel G.; Henderson, Rachel E.; Schwab, William C.; Nelson, Timothy R.

    2016-01-01

    Behavior of coastal systems on time scales ranging from single storm events to years and decades is controlled by both small-scale sediment transport processes and large-scale geologic, oceanographic, and morphologic processes. Improved understanding of coastal behavior at multiple time scales is required for refining models that predict potential erosion hazards and for coastal management planning and decision-making. Here we investigate the primary controls on shoreline response along a geologically variable barrier island on time scales resolving extreme storms and decadal variations over a period of nearly one century. An empirical orthogonal function analysis is applied to a time series of shoreline positions at Fire Island, NY to identify patterns of shoreline variance along the length of the island. We establish that there are separable patterns of shoreline behavior that represent response to oceanographic forcing as well as patterns that are not explained by this forcing. The dominant shoreline behavior occurs over large length scales in the form of alternating episodes of shoreline retreat and advance, presumably in response to storm cycles. Two secondary responses are identified: one is a long-term response correlated with known geologic variations of the island, and the other reflects geomorphic patterns at medium length scales. Our study also includes the response to Hurricane Sandy and a period of post-storm recovery. It was expected that the impacts from Hurricane Sandy would disrupt long-term trends and spatial patterns. We found, however, that the response to Sandy at Fire Island is not notable or distinguishable from several other large storms of the prior decade.
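
    For readers less familiar with the technique, an empirical orthogonal function analysis of shoreline positions is essentially a principal component decomposition of the space-time anomaly matrix; the generic sketch below (not the study's code, using synthetic data) separates alongshore spatial patterns from their temporal amplitudes and reports the variance each mode explains:

      # Generic EOF decomposition of a shoreline-position matrix
      # X with shape (n_surveys, n_alongshore_transects).
      import numpy as np

      def eof_analysis(X, n_modes=3):
          anomalies = X - X.mean(axis=0)            # remove the time-mean shoreline
          U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
          variance_fraction = s ** 2 / np.sum(s ** 2)
          spatial_modes = Vt[:n_modes]              # alongshore patterns (EOFs)
          temporal_amps = U[:, :n_modes] * s[:n_modes]  # their time histories (PCs)
          return spatial_modes, temporal_amps, variance_fraction[:n_modes]

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 50)).cumsum(axis=0) * 0.1   # synthetic positions (m)
      modes, amps, var = eof_analysis(X)
      print("variance explained by first 3 modes:", np.round(var, 3))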

  14. Implementing Parquet equations using HPX

    NASA Astrophysics Data System (ADS)

    Kellar, Samuel; Wagle, Bibek; Yang, Shuxiang; Tam, Ka-Ming; Kaiser, Hartmut; Moreno, Juana; Jarrell, Mark

    A new C++ runtime system (HPX) enables simulations of complex systems to run more efficiently on parallel and heterogeneous systems. This increased efficiency allows for the solution of larger simulations of the parquet approximation for a system with impurities. The relevance of the parquet equations depends upon the ability to solve systems which require long runs and large amounts of memory. These limitations, in addition to numerical complications arising from the stability of the solutions, necessitate running on large distributed systems. As computational resources trend towards the exascale and the limitations arising from computational resources vanish, the efficiency of large-scale simulations becomes a focus. HPX facilitates efficient simulations through intelligent overlapping of computation and communication. Simulations such as the parquet equations, which require the transfer of large amounts of data, should benefit from HPX implementations. Supported by the NSF EPSCoR Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents.

  15. Creation of current filaments in the solar corona

    NASA Technical Reports Server (NTRS)

    Mikic, Z.; Schnack, D. D.; Van Hoven, G.

    1989-01-01

    It has been suggested that the solar corona is heated by the dissipation of electric currents. The low value of the resistivity requires the magnetic field to have structure at very small length scales if this mechanism is to work. In this paper it is demonstrated that the coronal magnetic field acquires small-scale structure through the braiding produced by smooth, randomly phased, photospheric flows. The current density develops a filamentary structure and grows exponentially in time. Nonlinear processes in the ideal magnetohydrodynamic equations produce a cascade effect, in which the structure introduced by the flow at large length scales is transferred to smaller scales. If this process continues down to the resistive dissipation length scale, it would provide an effective mechanism for coronal heating.

  16. Intelligent systems engineering methodology

    NASA Technical Reports Server (NTRS)

    Fouse, Scott

    1990-01-01

    An added challenge for the designers of large scale systems such as Space Station Freedom is the appropriate incorporation of intelligent system technology (artificial intelligence, expert systems, knowledge-based systems, etc.) into their requirements and design. This presentation will describe a view of systems engineering which successfully addresses several aspects of this complex problem: design of large scale systems, design with requirements that are so complex they only completely unfold during the development of a baseline system and even then continue to evolve throughout the system's life cycle, design that involves the incorporation of new technologies, and design and development that takes place with many players in a distributed manner yet can be easily integrated to meet a single view of the requirements. The first generation of this methodology was developed and evolved jointly by ISX and the Lockheed Aeronautical Systems Company over the past five years on the Defense Advanced Research Projects Agency/Air Force Pilot's Associate Program, one of the largest, most complex, and most successful intelligent systems constructed to date. As the methodology has evolved it has also been applied successfully to a number of other projects. Some of the lessons learned from this experience may be applicable to Freedom.

  17. Analysis of Radar and Optical Space Borne Data for Large Scale Topographical Mapping

    NASA Astrophysics Data System (ADS)

    Tampubolon, W.; Reinhardt, W.

    2015-03-01

    Normally, in order to provide high-resolution 3-dimensional (3D) geospatial data, large-scale topographical mapping needs input from conventional airborne campaigns, which in Indonesia are bureaucratically complicated, especially during legal administration procedures, i.e. security clearance from the military/defense ministry. This often causes additional time delays besides technical constraints such as weather and limited aircraft availability for airborne campaigns. Of course the geospatial data quality is an important issue for many applications. The increasing demand for geospatial data nowadays consequently requires high-resolution datasets as well as a sufficient level of accuracy. Therefore an integration of different technologies is required in many cases to obtain the expected result, especially in the context of disaster preparedness and emergency response. Another important issue in this context is the fast delivery of relevant data, which is expressed by the term "Rapid Mapping". In this paper we present first results of an on-going research effort to integrate different data sources such as space-borne radar and optical platforms. Initially the orthorectification of Very High Resolution Satellite (VHRS) imagery, i.e. SPOT-6, has been done as a continuous process with the DEM generation using TerraSAR-X/TanDEM-X data. The role of Ground Control Points (GCPs) from GNSS surveys is mandatory in order to fulfil geometrical accuracy. In addition, this research aims at providing a suitable processing algorithm for space-borne data for large-scale topographical mapping, as described in section 3.2. Recently, radar space-borne data have been used for medium-scale topographical mapping, e.g. for the 1:50,000 map scale in Indonesian territories. The goal of this on-going research is to increase the accuracy of remote sensing data by different activities, e.g. the integration of different data sources (optical and radar) or the usage of GCPs in both the optical and the radar satellite data processing. Finally these results will be used in the future as a reference for further geospatial data acquisitions to support topographical mapping at even larger scales, up to the 1:10,000 map scale.

  18. Facilitating large-scale clinical trials: in Asia.

    PubMed

    Choi, Han Yong; Ko, Jae-Wook

    2010-01-01

    The number of clinical trials conducted in Asian countries has started to increase as a result of expansion of the pharmaceutical market in this area. There is a growing opportunity for large-scale clinical trials because of the large number of patients, significant market potential, good quality of data, and the cost effective and qualified medical infrastructure. However, for carrying out large-scale clinical trials in Asia, there are several major challenges, including the quality control of data, budget control, laboratory validation, monitoring capacity, authorship, staff training, and nonstandard treatment that need to be considered. There are also several difficulties in collaborating on international trials in Asia because Asia is an extremely diverse continent. The major challenges are language differences, diversity of patterns of disease, and current treatments, a large gap in the experience with performing multinational trials, and regulatory differences among the Asian countries. In addition, there are also differences in the understanding of global clinical trials, medical facilities, indemnity assurance, and culture, including food and religion. To make regional and local data provide evidence for efficacy through the standardization of these differences, unlimited effort is required. At this time, there are no large clinical trials led by urologists in Asia, but it is anticipated that the role of urologists in clinical trials will continue to increase. Copyright © 2010 Elsevier Inc. All rights reserved.

  19. The Convergence of High Performance Computing and Large Scale Data Analytics

    NASA Astrophysics Data System (ADS)

    Duffy, D.; Bowen, M. K.; Thompson, J. H.; Yang, C. P.; Hu, F.; Wills, B.

    2015-12-01

    As the combinations of remote sensing observations and model outputs have grown, scientists are increasingly burdened with both the necessity and complexity of large-scale data analysis. Scientists are increasingly applying traditional high performance computing (HPC) solutions to solve their "Big Data" problems. While this approach has the benefit of limiting data movement, the HPC system is not optimized to run analytics, which can create problems that permeate throughout the HPC environment. To solve these issues and to alleviate some of the strain on the HPC environment, the NASA Center for Climate Simulation (NCCS) has created the Advanced Data Analytics Platform (ADAPT), which combines both HPC and cloud technologies to create an agile system designed for analytics. Large, commonly used data sets, such as Landsat, MODIS, MERRA, and NGA, are stored in this system in a write-once/read-many file system. High performance virtual machines are deployed and scaled according to the individual scientist's requirements specifically for data analysis. On the software side, the NCCS and GMU are working with emerging commercial technologies and applying them to structured, binary scientific data in order to expose the data in new ways. Native NetCDF data is being stored within a Hadoop Distributed File System (HDFS) enabling storage-proximal processing through MapReduce while continuing to provide accessibility of the data to traditional applications. Once the data is stored within HDFS, an additional indexing scheme is built on top of the data and placed into a relational database. This spatiotemporal index enables extremely fast mappings of queries to data locations to dramatically speed up analytics. These are some of the first steps toward a single unified platform that optimizes for both HPC and large-scale data analysis, and this presentation will elucidate the exascale architectures required for future systems.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas

    Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
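
    The abstract's observation that gossip-based consensus converges in a logarithmically growing number of cycles can be illustrated with a toy push-gossip simulation. The sketch below is not one of the paper's three algorithms; it simply spreads a fixed failed-process list from a single detector and counts the cycles until every alive process has received it.

        # Toy push-gossip dissemination of a failure list; counts cycles until all
        # alive processes share the same list (a stand-in for global consensus).
        import math
        import random

        def gossip_cycles(n_procs, failed, seed=0):
            rng = random.Random(seed)
            alive = [p for p in range(n_procs) if p not in failed]
            informed = {alive[0]}          # the process that first detected the failures
            cycles = 0
            while len(informed) < len(alive):
                for p in list(informed):
                    informed.add(rng.choice(alive))   # push the list to one random peer
                cycles += 1
            return cycles

        for n in (64, 1024, 16384):
            c = gossip_cycles(n, failed={1, 2, 3})
            print(f"N={n:6d}  cycles={c:3d}  log2(N)={math.log2(n):.1f}")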

  1. Autonomous smart sensor network for full-scale structural health monitoring

    NASA Astrophysics Data System (ADS)

    Rice, Jennifer A.; Mechitov, Kirill A.; Spencer, B. F., Jr.; Agha, Gul A.

    2010-04-01

    The demands of aging infrastructure require effective methods for structural monitoring and maintenance. Wireless smart sensor networks offer the ability to enhance structural health monitoring (SHM) practices through the utilization of onboard computation to achieve distributed data management. Such an approach is scalable to the large number of sensor nodes required for high-fidelity modal analysis and damage detection. While smart sensor technology is not new, the number of full-scale SHM applications has been limited. This slow progress is due, in part, to the complex network management issues that arise when moving from a laboratory setting to a full-scale monitoring implementation. This paper presents flexible network management software that enables continuous and autonomous operation of wireless smart sensor networks for full-scale SHM applications. The software components combine sleep/wake cycling for enhanced power management with threshold detection for triggering network wide tasks, such as synchronized sensing or decentralized modal analysis, during periods of critical structural response.
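
    The duty-cycling pattern described above (sleep/wake cycling plus threshold-triggered, network-wide tasks) can be pictured as a simple node loop. Everything below is a schematic placeholder: the sensing, radio, interval, and threshold names are illustrative assumptions and do not correspond to the software components developed in the paper.

        # Schematic node loop: sleep, briefly wake and sample, and trigger a
        # network-wide task (e.g., synchronized sensing) if a threshold is exceeded.
        import random
        import time

        WAKE_INTERVAL_S = 1.0      # illustrative; deployed nodes sleep far longer
        ACCEL_THRESHOLD = 0.5      # illustrative trigger level, in g

        def read_peak_acceleration():
            """Placeholder for a short accelerometer burst measurement."""
            return abs(random.gauss(0.0, 0.3))

        def trigger_network_task(task):
            """Placeholder for broadcasting a trigger message to the network."""
            print(f"trigger broadcast: {task}")

        def node_loop(n_cycles=10):
            for _ in range(n_cycles):
                time.sleep(WAKE_INTERVAL_S)        # low-power sleep phase
                peak = read_peak_acceleration()    # brief wake-up and sample
                if peak > ACCEL_THRESHOLD:         # threshold detection
                    trigger_network_task("synchronized_sensing")

        node_loop()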

  2. Preliminary Assessment of Microwave Readout Multiplexing Factor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Croce, Mark Philip; Koehler, Katrina Elizabeth; Rabin, Michael W.

    2017-01-23

    Ultra-high resolution microcalorimeter gamma spectroscopy is a new non-destructive assay technology for measurement of plutonium isotopic composition, with the potential to reduce total measurement uncertainty to a level competitive with destructive analysis methods [1-4]. Achieving this level of performance in practical applications requires not only the energy resolution now routinely achieved with transition-edge sensor microcalorimeter arrays (an order of magnitude better than for germanium detectors) but also high throughput. Microcalorimeter gamma spectrometers have not yet achieved detection efficiency and count rate capability that are comparable to germanium detectors, largely because of limits from existing readout technology. Microcalorimeter detectors must be operated at low temperature to achieve their exceptional energy resolution. Although the typical 100 mK operating temperatures can be achieved with reliable, cryogen-free systems, the cryogenic complexity and heat load from individual readout channels for large sensor arrays are prohibitive. Multiplexing is required for practical systems. The most mature multiplexing technology at present is time-division multiplexing (TDM) [3, 5-6]. In TDM, the sensor outputs are switched by applying bias current to one SQUID amplifier at a time. Transition-edge sensor (TES) microcalorimeter arrays as large as 256 pixels have been developed for X-ray and gamma-ray spectroscopy using TDM technology. Due to bandwidth limits and noise scaling, TDM is limited to a maximum multiplexing factor of approximately 32-40 sensors on one readout line [8]. Increasing the size of microcalorimeter arrays above the kilopixel scale, required to match the throughput of germanium detectors, requires the development of a new readout technology with a much higher multiplexing factor.
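
    The readout-line bookkeeping behind this argument is simple to state directly: at a TDM multiplexing factor of roughly 32-40, kilopixel-scale arrays need dozens of cold readout lines, which is what motivates a higher-factor technology. In the sketch below the TDM factor comes from the abstract, while the microwave-multiplexing factor of 500 is only an assumed illustrative value.

        # Readout lines needed for a given array size and multiplexing factor.
        # The TDM factor (40) is quoted in the abstract; the microwave factor (500)
        # is an assumed illustrative value, not a measured capability.
        import math

        def readout_lines(n_pixels, mux_factor):
            return math.ceil(n_pixels / mux_factor)

        for n_pixels in (256, 1024, 4096):
            tdm = readout_lines(n_pixels, 40)
            uwave = readout_lines(n_pixels, 500)
            print(f"{n_pixels:5d} pixels: {tdm:4d} TDM lines vs {uwave:2d} microwave lines")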

  3. Single-field consistency relations of large scale structure part III: test of the equivalence principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Creminelli, Paolo; Gleyzes, Jérôme; Vernizzi, Filippo

    2014-06-01

    The recently derived consistency relations for Large Scale Structure do not hold if the Equivalence Principle (EP) is violated. We show it explicitly in a toy model with two fluids, one of which is coupled to a fifth force. We explore the constraints that galaxy surveys can set on EP violation looking at the squeezed limit of the 3-point function involving two populations of objects. We find that one can explore EP violations of order 10⁻³ to 10⁻⁴ on cosmological scales. Chameleon models are already very constrained by the requirement of screening within the Solar System and only a very tiny region of the parameter space can be explored with this method. We show that no violation of the consistency relations is expected in Galileon models.

  4. Energy Dissipation and Phase-Space Dynamics in Eulerian Vlasov-Maxwell Turbulence

    NASA Astrophysics Data System (ADS)

    Tenbarge, Jason; Juno, James; Hakim, Ammar

    2017-10-01

    Turbulence in a magnetized plasma is a primary mechanism responsible for transforming energy at large injection scales into small-scale motions, which are ultimately dissipated as heat in systems such as the solar corona, wind, and other astrophysical objects. At large scales, the turbulence is well described by fluid models of the plasma; however, understanding the processes responsible for heating a weakly collisional plasma such as the solar wind requires a kinetic description. We present a fully kinetic Eulerian Vlasov-Maxwell study of turbulence using the Gkeyll simulation framework, including studies of the cascade of energy in phase space and formation and dissipation of coherent structures. We also leverage the recently developed field-particle correlations to diagnose the dominant sources of dissipation and compare the results of the field-particle correlation to other dissipation measures. NSF SHINE AGS-1622306 and DOE DE-AC02-09CH11466.

  5. The Full-Scale Prototype for the Fluorescence Detector Array of Single-Pixel Telescopes

    NASA Astrophysics Data System (ADS)

    Fujii, T.; Malacari, M.; Bellido, J. A.; Farmer, J.; Galimova, A.; Horvath, P.; Hrabovsky, M.; Mandat, D.; Matalon, A.; Matthews, J. N.; Merolle, M.; Ni, X.; Nozka, L.; Palatka, M.; Pech, M.; Privitera, P.; Schovanek, P.; Thomas, S. B.; Travnicek, P.

    The Fluorescence detector Array of Single-pixel Telescopes (FAST) is a design concept for the next generation of ultrahigh-energy cosmic ray (UHECR) observatories, addressing the requirements for a large-area, low-cost detector suitable for measuring the properties of the highest energy cosmic rays. In the FAST design, a large field of view is covered by a few pixels at the focal plane of a mirror or Fresnel lens. Motivated by the successful detection of UHECRs using a prototype comprising a single 200 mm photomultiplier tube and a 1 m² Fresnel lens system, we have developed a new "full-scale" prototype consisting of four 200 mm photomultiplier tubes at the focus of a segmented mirror of 1.6 m in diameter. We report on the status of the full-scale prototype, including test measurements made during first light operation at the Telescope Array site in central Utah, U.S.A.

  6. The origin of density fluctuations in the 'new inflationary universe'

    NASA Technical Reports Server (NTRS)

    Turner, M. S.

    1983-01-01

    Cosmological mysteries which are not explained by the Big Bang hypothesis but may be approached by a revamped inflationary universe model are discussed. Attention is focused on the isotropy, the large-scale homogeneity, small-scale inhomogeneity, the oldness/flatness of the universe, and the baryon asymmetry. The universe is assumed to start in the lowest energy state, be initially dominated by false vacuum energy, enter a de Sitter phase, and then cross a barrier which is followed by the formation of fluctuation regions that lead to structure. The scalar fields (perturbation regions) experience quantum fluctuations which produce spontaneous symmetry breaking on a large scale. The scalar field value would need to be much greater than the expansion rate during the de Sitter epoch. A supersymmetric (flat) potential which satisfies the requirement, yields fluctuations of the right magnitude, and allows inflation to occur is described.

  7. Computer code for the prediction of nozzle admittance

    NASA Technical Reports Server (NTRS)

    Nguyen, Thong V.

    1988-01-01

    A procedure which can accurately characterize injector designs for large thrust (0.5 to 1.5 million pounds), high pressure (500 to 3000 psia) LOX/hydrocarbon engines is currently under development. In this procedure, a rectangular cross-sectional combustion chamber is to be used to simulate the lower transverse frequency modes of the large scale chamber. The chamber will be sized so that the first width mode of the rectangular chamber corresponds to the first tangential mode of the full-scale chamber. Test data to be obtained from the rectangular chamber will be used to assess full-scale engine stability. This requires the development of combustion stability models for rectangular chambers. As part of the combustion stability model development, a computer code, NOAD, was developed based on existing theory to calculate the nozzle admittances for both rectangular and axisymmetric nozzles. This code is described in detail.
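
    The mode-matching step described above can be written down directly if one assumes the standard hard-wall acoustic relations: the first width mode of a rectangular chamber falls at f = c/(2W), and the first tangential mode of a cylinder at f = 1.8412 c/(pi D). The sound speed and full-scale diameter below are placeholders, not values taken from the report.

        # Size the rectangular chamber so its first width mode matches the first
        # tangential mode of the full-scale cylindrical chamber. Assumes standard
        # hard-wall relations: f_1W = c/(2W), f_1T = 1.8412*c/(pi*D); c cancels in W.
        import math

        def rect_width_for_first_tangential(diameter_m):
            return math.pi * diameter_m / (2.0 * 1.8412)

        c = 1200.0   # placeholder combustion-gas sound speed, m/s
        D = 0.90     # placeholder full-scale chamber diameter, m
        W = rect_width_for_first_tangential(D)
        f_1t = 1.8412 * c / (math.pi * D)
        print(f"required width W = {W:.3f} m, matched frequency = {f_1t:.0f} Hz")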

  8. The effects of magnetic fields on the growth of thermal instabilities in cooling flows

    NASA Technical Reports Server (NTRS)

    David, Laurence P.; Bregman, Joel N.

    1989-01-01

    The effects of heat conduction and magnetic fields on the growth of thermal instabilities in cooling flows are examined using a time-dependent hydrodynamics code. It is found that, for magnetic field strengths of roughly 1 micro-Gauss, magnetic pressure forces can completely prevent shocks from forming in thermally unstable entropy perturbations with initial length scales as large as 20 kpc, even for initial amplitudes as great as 60 percent. Perturbations with initial amplitudes of 50 percent and initial magnetic field strengths of 1 micro-Gauss cool to 10,000 K on a time scale that is only 22 percent of the initial instantaneous cooling time. Nonlinear perturbations can thus condense out of cooling flows on a time scale substantially less than the time required for linear perturbations and produce significant mass deposition of cold gas while the accreting intracluster gas is still at large radii.

  9. Evidence and AIDS activism: HIV scale-up and the contemporary politics of knowledge in global public health

    PubMed Central

    Colvin, Christopher J.

    2014-01-01

    The HIV epidemic is widely recognised as having prompted one of the most remarkable intersections ever of illness, science and activism. The production, circulation, use and evaluation of empirical scientific ‘evidence’ played a central part in activists’ engagement with AIDS science. Previous activist engagement with evidence focused on the social and biomedical responses to HIV in the global North as well as challenges around ensuring antiretroviral treatment (ART) was available in the global South. More recently, however, with the roll-out and scale-up of large public-sector ART programmes and new multi-dimensional prevention efforts, the relationships between evidence and activism have been changing. Scale-up of these large-scale treatment and prevention programmes represents an exciting new opportunity while bringing with it a host of new challenges. This paper examines what new forms of evidence and activism will be required to address the challenges of the scaling-up era of HIV treatment and prevention. It reviews some recent controversies around evidence and HIV scale-up and describes the different forms of evidence and activist strategies that will be necessary for a robust response to these new challenges. PMID:24498918

  10. Large-scale hydrological modeling for calculating water stress indices: implications of improved spatiotemporal resolution, surface-groundwater differentiation, and uncertainty characterization.

    PubMed

    Scherer, Laura; Venkatesh, Aranya; Karuppiah, Ramkumar; Pfister, Stephan

    2015-04-21

    Physical water scarcities can be described by water stress indices. These are often determined at an annual scale and a watershed level; however, such scales mask seasonal fluctuations and spatial heterogeneity within a watershed. In order to account for this level of detail, first and foremost, water availability estimates must be improved and refined. State-of-the-art global hydrological models such as WaterGAP and UNH/GRDC have previously been unable to reliably reflect water availability at the subbasin scale. In this study, the Soil and Water Assessment Tool (SWAT) was tested as an alternative to global models, using the case study of the Mississippi watershed. While SWAT clearly outperformed the global models at the scale of a large watershed, it was judged to be unsuitable for global-scale simulations because of the high calibration effort required. The results obtained in this study show that global assessments miss out on key aspects related to upstream/downstream relations and monthly fluctuations, which are important both for the characterization of water scarcity in the Mississippi watershed and for water footprints. Especially in arid regions, where scarcity is high, these models provide unsatisfactory results.

  11. Diffuse pollution of soil and water: Long term trends at large scales?

    NASA Astrophysics Data System (ADS)

    Grathwohl, P.

    2012-04-01

    Industrialization and urbanization have increased pressure on the environment for more than a century, degrading soil and water quality, and this trend is still ongoing. The number of potential environmental contaminants detected in surface water and groundwater is continuously increasing, from classical industrial and agricultural chemicals to flame retardants, pharmaceuticals, and personal care products. While point sources of pollution can be managed in principle, diffuse pollution is reversible only at very long time scales, if at all. Compounds which were phased out many decades ago, such as PCBs or DDT, are still abundant in soils, sediments and biota. How diffuse pollution is processed at large scales in space (e.g. catchments) and time (centuries) is unknown. The field-scale relevance of processes that are well investigated at the laboratory scale (e.g. sorption/desorption and (bio)degradation kinetics) is not clear. Transport of compounds is often coupled to the water cycle, and in order to assess trends in diffuse pollution, detailed knowledge about the hydrology and the solute fluxes at the catchment scale is required (e.g. input/output fluxes, transformation rates at the field scale). This is also a prerequisite for assessing management options for reversal of adverse trends.

  12. CHURCHILL COUNTY, NEVADA ARSENIC STUDY: WATER CONSUMPTION AND EXPOSURE BIOMARKERS

    EPA Science Inventory

    The US Environmental Protection Agency is required to reevaluate the Maximum Contaminant Level (MCL) for arsenic in 2006. To provide data for reducing uncertainties in assessing health risks associated with exposure to low levels (<200 µg/l) of arsenic, a large scale biomarker st...

  13. Ecological Effects of Weather Modification: A Problem Analysis.

    ERIC Educational Resources Information Center

    Cooper, Charles F.; Jolly, William C.

    This publication reviews the potential hazards to the environment of weather modification techniques as they eventually become capable of producing large scale weather pattern modifications. Such weather modifications could result in ecological changes which would generally require several years to be fully evident, including the alteration of…

  14. INVENTORY AND CLASSIFICATION OF GREAT LAKES COASTAL WETLANDS FOR MONITORING AND ASSESSMENT AT LARGE SPATIAL SCALES

    EPA Science Inventory

    Monitoring aquatic resources for regional assessments requires an accurate and comprehensive inventory of the resource and useful classification of ecosystem similarities. Our research effort to create an electronic database and work with various ways to classify coastal wetlands...

  15. The Systems Revolution

    ERIC Educational Resources Information Center

    Ackoff, Russell L.

    1974-01-01

    The major organizational and social problems of our time do not lend themselves to the reductionism of traditional analytical and disciplinary approaches. They must be attacked holistically, with a comprehensive systems approach. The effective study of large-scale social systems requires the synthesis of science with the professions that use it.…

  16. EVALUATION OF ANALYTICAL METHODS FOR DETERMINING PESTICIDES IN BABY FOOD AND ADULT DUPLICATE-DIET SAMPLES

    EPA Science Inventory

    Determinations of pesticides in food are often complicated by the presence of fats and require multiple cleanup steps before analysis. Cost-effective analytical methods are needed for conducting large-scale exposure studies. We examined two extraction methods, supercritical flu...

  17. Strategic Planning Tools for Large-Scale Technology-Based Assessments

    ERIC Educational Resources Information Center

    Koomen, Marten; Zoanetti, Nathan

    2018-01-01

    Education systems are increasingly being called upon to implement new technology-based assessment systems that generate efficiencies, better meet changing stakeholder expectations, or fulfil new assessment purposes. These assessment systems require coordinated organisational effort to implement and can be expensive in time, skill and other…

  18. Residential solar-heating system

    NASA Technical Reports Server (NTRS)

    1978-01-01

    Complete residential solar-heating and hot-water system, when installed in highly-insulated energy-saver home, can supply large percentage of total energy demand for space heating and domestic hot water. System which uses water-heating energy storage can be scaled to meet requirements of building in which it is installed.

  19. Soil organic matter composition from correlated thermal analysis and nuclear magnetic resonance data in Australian national inventory of agricultural soils

    NASA Astrophysics Data System (ADS)

    Moore, T. S.; Sanderman, J.; Baldock, J.; Plante, A. F.

    2016-12-01

    National-scale inventories typically include soil organic carbon (SOC) content, but not chemical composition or biogeochemical stability. Australia's Soil Carbon Research Programme (SCaRP) represents a national inventory of SOC content and composition in agricultural systems. The program used physical fractionation followed by 13C nuclear magnetic resonance (NMR) spectroscopy. While these techniques are highly effective, they are typically too expensive and time-consuming for use in large-scale SOC monitoring. We seek to understand whether thermal analysis is a viable alternative. Coupled differential scanning calorimetry (DSC) and evolved gas analysis (CO2- and H2O-EGA) yield valuable data on SOC composition and stability via ramped combustion. The technique requires little training to use, and does not require fractionation or other sample pre-treatment. We analyzed 300 agricultural samples collected by SCaRP, divided into four fractions: whole soil, coarse particulates (POM), untreated mineral associated (HUM), and hydrofluoric acid (HF)-treated HUM. All samples were analyzed by DSC-EGA, but only the POM and HF-HUM fractions were analyzed by NMR. Multivariate statistical analyses were used to explore natural clustering in SOC composition and stability based on DSC-EGA data. A partial least-squares regression (PLSR) model was used to explore correlations among the NMR and DSC-EGA data. Correlations demonstrated regions of combustion attributable to specific functional groups, which may relate to SOC stability. We are increasingly challenged with developing an efficient technique to assess SOC composition and stability at large spatial and temporal scales. Correlations between NMR and DSC-EGA may demonstrate the viability of using thermal analysis in lieu of more demanding methods in future large-scale surveys, and may provide data that go beyond chemical composition to better approach quantification of biogeochemical stability.
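
    The PLSR step mentioned above is the kind of model that can be sketched in a few lines with scikit-learn. The arrays below are synthetic stand-ins for DSC-EGA thermograms (predictors) and NMR-derived functional-group fractions (responses); they are not the SCaRP data, and the number of components is arbitrary.

        # PLSR sketch relating thermal-analysis curves (X) to NMR composition (Y),
        # using synthetic data in place of the DSC-EGA and NMR measurements.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_samples, n_temps, n_groups = 300, 120, 4

        X = rng.normal(size=(n_samples, n_temps))        # e.g., EGA signal per temperature bin
        loadings = rng.normal(size=(n_temps, n_groups))
        Y = X @ loadings + 0.1 * rng.normal(size=(n_samples, n_groups))

        X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)
        pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
        print("held-out R^2:", pls.score(X_te, Y_te))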

  20. Nutrient Regulation by Continuous Feeding Removes Limitations on Cell Yield in the Large-Scale Expansion of Mammalian Cell Spheroids

    PubMed Central

    Weegman, Bradley P.; Nash, Peter; Carlson, Alexandra L.; Voltzke, Kristin J.; Geng, Zhaohui; Jahani, Marjan; Becker, Benjamin B.; Papas, Klearchos K.; Firpo, Meri T.

    2013-01-01

    Cellular therapies are emerging as a standard approach for the treatment of several diseases. However, realizing the promise of cellular therapies across the full range of treatable disorders will require large-scale, controlled, reproducible culture methods. Bioreactor systems offer the scale-up and monitoring needed, but standard stirred bioreactor cultures do not allow for the real-time regulation of key nutrients in the medium. In this study, β-TC6 insulinoma cells were aggregated and cultured for 3 weeks as a model of manufacturing a mammalian cell product. Cell expansion rates and medium nutrient levels were compared in static, stirred suspension bioreactor (SSB), and continuously fed (CF) SSB cultures. While SSB cultures facilitated increased culture volumes, no increase in cell yield was observed, partly due to limitations in key nutrients such as glucose, which were consumed by the cultures between feedings. Even when glucose levels were increased to prevent depletion between feedings, dramatic fluctuations in glucose levels were observed. Continuous feeding eliminated fluctuations and improved cell expansion when compared with both static and SSB culture methods. Further improvements in growth rates were observed after adjusting the feed rate based on calculated nutrient depletion, which maintained physiological glucose levels for the duration of the expansion. Adjusting the feed rate in a continuous medium replacement system can maintain the consistent nutrient levels required for the large-scale application of many cell products. Continuously fed bioreactor systems combined with nutrient regulation can be used to improve the yield and reproducibility of mammalian cells for biological products and cellular therapies and will facilitate the translation of cell culture from the research lab to clinical applications. PMID:24204645
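
    The feed-rate logic described above ("adjusting the feed rate based on calculated nutrient depletion") reduces to a simple mass balance at steady state: the feed must deliver glucose as fast as the culture consumes it. The sketch below uses illustrative numbers only; the uptake rate, medium concentration, and setpoint are assumptions, not the paper's measured values.

        # Steady-state mass balance: pick a continuous feed rate that replaces the
        # glucose the culture consumes, holding the vessel near a setpoint.
        # All constants are illustrative assumptions.

        def feed_rate_ml_per_h(uptake_mmol_per_h, feed_glucose_mM, setpoint_mM):
            """Feed rate at which glucose delivered equals glucose consumed.

            With equal-volume medium removal, each mL of feed supplies
            (feed - setpoint) / 1000 mmol of net glucose.
            """
            net_mmol_per_ml = (feed_glucose_mM - setpoint_mM) / 1000.0
            return uptake_mmol_per_h / net_mmol_per_ml

        rate = feed_rate_ml_per_h(uptake_mmol_per_h=0.8,
                                  feed_glucose_mM=25.0,
                                  setpoint_mM=5.5)
        print(f"continuous feed rate of roughly {rate:.1f} mL/h")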

  1. ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wieselquist, William A.; Thompson, Adam B.; Bowman, Stephen M.

    2016-04-01

    Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly's life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.
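
    The scripting burden that the Automator replaces can be pictured with a small batch-generation loop: read per-assembly histories from a table and write one input file per assembly. The template below is a deliberately fake placeholder, not ORIGAMI or SCALE input syntax, and the column names are assumptions made for illustration.

        # Sketch of per-assembly input generation from a CSV of operating histories.
        # The template is a placeholder and NOT real ORIGAMI/SCALE syntax; the
        # expected CSV columns (assembly_id, enrichment_wt_pct, burnup_GWd_per_MTU,
        # pool_residence_years) are assumptions for illustration.
        import csv
        from pathlib import Path

        TEMPLATE = (
            "' hypothetical input for assembly {assembly_id}\n"
            "enrichment = {enrichment_wt_pct}\n"
            "burnup     = {burnup_GWd_per_MTU}\n"
            "cooling    = {pool_residence_years}\n"
        )

        def write_inputs(history_csv, out_dir):
            out = Path(out_dir)
            out.mkdir(exist_ok=True)
            with open(history_csv, newline="") as f:
                for row in csv.DictReader(f):
                    (out / f"{row['assembly_id']}.inp").write_text(TEMPLATE.format(**row))

        # Example (hypothetical file names):
        # write_inputs("assembly_histories.csv", "generated_inputs")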

  2. Challenges to Progress in Studies of Climate-Tectonic-Erosion Interactions

    NASA Astrophysics Data System (ADS)

    Burbank, D. W.

    2016-12-01

    Attempts to unravel the relative importance of climate and tectonics in modulating topography and erosion should compare relevant data sets at comparable temporal and spatial scales. Given that such data are uncommonly available, how can we compare diverse data sets in a robust fashion? Many erosion-rate studies rely on detrital cosmogenic nuclides. What time scales can such data address, and what landscape conditions do they require to provide accurate representations of long-term erosion rates? To what extent do large-scale, but infrequent erosional events impact long-term rates? Commonly, long-term erosion rates are deduced from thermochronologic data. What types of data are needed to test for consistency of rates across a given interval or change in rates through time? Similarly, spatial and temporal variability in precipitation or tectonics requires averaging across appropriate scales. How are such data obtained in deforming mountain belts, and how do we assess their reliability? This study describes the character and temporal duration of key variables that are needed to examine climate-tectonic-erosion interactions, explores the strengths and weaknesses of several study areas, and suggests the types of data requirements that will underpin enlightening "tests" of hypotheses related to the mutual impacts of climate, tectonics, and erosion.

  3. The numerics of hydrostatic structured-grid coastal ocean models: State of the art and future perspectives

    NASA Astrophysics Data System (ADS)

    Klingbeil, Knut; Lemarié, Florian; Debreu, Laurent; Burchard, Hans

    2018-05-01

    The state of the art of the numerics of hydrostatic structured-grid coastal ocean models is reviewed here. First, some fundamental differences in the hydrodynamics of the coastal ocean, such as the large surface elevation variation compared to the mean water depth, are contrasted against large scale ocean dynamics. Then the hydrodynamic equations as they are used in coastal ocean models as well as in large scale ocean models are presented, including parameterisations for turbulent transports. As steps towards discretisation, coordinate transformations and spatial discretisations based on a finite-volume approach are discussed with focus on the specific requirements for coastal ocean models. As in large scale ocean models, splitting of internal and external modes is essential also for coastal ocean models, but specific care is needed when drying & flooding of intertidal flats is included. As one obvious characteristic of coastal ocean models, open boundaries occur and need to be treated in a way that correct model forcing from outside is transmitted to the model domain without reflecting waves from the inside. Here, also new developments in two-way nesting are presented. Single processes such as internal inertia-gravity waves, advection and turbulence closure models are discussed with focus on the coastal scales. Some overview on existing hydrostatic structured-grid coastal ocean models is given, including their extensions towards non-hydrostatic models. Finally, an outlook on future perspectives is made.

  4. Towards Development of Clustering Applications for Large-Scale Comparative Genotyping and Kinship Analysis Using Y-Short Tandem Repeats.

    PubMed

    Seman, Ali; Sapawi, Azizian Mohd; Salleh, Mohd Zaki

    2015-06-01

    Y-chromosome short tandem repeats (Y-STRs) are genetic markers with practical applications in human identification. However, where mass identification is required (e.g., in the aftermath of disasters with significant fatalities), the efficiency of the process could be improved with new statistical approaches. Clustering applications are relatively new tools for large-scale comparative genotyping, and the k-Approximate Modal Haplotype (k-AMH), an efficient algorithm for clustering large-scale Y-STR data, represents a promising method for developing these tools. In this study we improved the k-AMH and produced three new algorithms: the Nk-AMH I (including a new initial cluster center selection), the Nk-AMH II (including a new dominant weighting value), and the Nk-AMH III (combining I and II). The Nk-AMH III was the superior algorithm, with mean clustering accuracy that increased in four out of six datasets and remained at 100% in the other two. Additionally, the Nk-AMH III achieved a 2% higher overall mean clustering accuracy score than the k-AMH, as well as optimal accuracy for all datasets (0.84-1.00). With inclusion of the two new methods, the Nk-AMH III produced an optimal solution for clustering Y-STR data; thus, the algorithm has potential for further development towards fully automatic clustering of any large-scale genotypic data.

  5. Cosmological structure formation

    NASA Technical Reports Server (NTRS)

    Schramm, David N.

    1991-01-01

    A summary of the current forefront problem of physical cosmology, the formation of structures (galaxies, clusters, great walls, etc.) in the universe is presented. Solutions require two key ingredients: (1) matter; and (2) seeds. Regarding the matter, it now seems clear that both baryonic and non-baryonic matter are required. Whether the non-baryonic matter is hot or cold depends on the choice of seeds. Regarding the seeds, both density fluctuations and topological defects are discussed. The combination of isotropy of the microwave background and the recent observations indicating more power on large scales have severely constrained, if not eliminated, Gaussian fluctuations with equal power on all scales, regardless of the eventual resolution of both the matter and seed questions. It is important to note that all current structure formation ideas require new physics beyond SU(3) x SU(2) x U(1).

  6. Implementation factors affecting the large-scale deployment of digital health and well-being technologies: A qualitative study of the initial phases of the 'Living-It-Up' programme.

    PubMed

    Agbakoba, Ruth; McGee-Lennon, Marilyn; Bouamrane, Matt-Mouley; Watson, Nicholas; Mair, Frances S

    2016-12-01

    Little is known about the factors which facilitate or impede the large-scale deployment of health and well-being consumer technologies. The Living-It-Up project is a large-scale digital intervention led by NHS 24, aiming to transform health and well-being services delivery throughout Scotland. We conducted a qualitative study of the factors affecting the implementation and deployment of the Living-It-Up services. We collected a range of data during the initial phase of deployment, including semi-structured interviews (N = 6); participant observation sessions (N = 5) and meetings with key stakeholders (N = 3). We used the Normalisation Process Theory as an explanatory framework to interpret the social processes at play during the initial phases of deployment. Initial findings illustrate that it is clear - and perhaps not surprising - that the size and diversity of the Living-It-Up consortium made implementation processes more complex within a 'multi-stakeholder' environment. To overcome these barriers, there is a need to clearly define roles, tasks and responsibilities among the consortium partners. Furthermore, varying levels of expectations and requirements, as well as diverse cultures and ways of working, must be effectively managed. Factors which facilitated implementation included extensive stakeholder engagement, such as co-design activities, which can contribute to an increased 'buy-in' from users in the long term. An important lesson from the Living-It-Up initiative is that attempting to co-design innovative digital services, but at the same time, recruiting large numbers of users is likely to generate conflicting implementation priorities which hinder - or at least substantially slow down - the effective rollout of services at scale. The deployment of Living-It-Up services is ongoing, but our results to date suggest that - in order to be successful - the roll-out of digital health and well-being technologies at scale requires a delicate and pragmatic trade-off between co-design activities, the development of innovative services and the efforts allocated to widespread marketing and recruitment initiatives. © The Author(s) 2015.

  7. Large scale Brownian dynamics of confined suspensions of rigid particles

    NASA Astrophysics Data System (ADS)

    Sprinkle, Brennan; Balboa Usabiaga, Florencio; Patankar, Neelesh A.; Donev, Aleksandar

    2017-12-01

    We introduce methods for large-scale Brownian Dynamics (BD) simulation of many rigid particles of arbitrary shape suspended in a fluctuating fluid. Our method adds Brownian motion to the rigid multiblob method [F. Balboa Usabiaga et al., Commun. Appl. Math. Comput. Sci. 11(2), 217-296 (2016)] at a cost comparable to the cost of deterministic simulations. We demonstrate that we can efficiently generate deterministic and random displacements for many particles using preconditioned Krylov iterative methods, if kernel methods to efficiently compute the action of the Rotne-Prager-Yamakawa (RPY) mobility matrix and its "square" root are available for the given boundary conditions. These kernel operations can be computed with near linear scaling for periodic domains using the positively split Ewald method. Here we study particles partially confined by gravity above a no-slip bottom wall using a graphical processing unit implementation of the mobility matrix-vector product, combined with a preconditioned Lanczos iteration for generating Brownian displacements. We address a major challenge in large-scale BD simulations, capturing the stochastic drift term that arises because of the configuration-dependent mobility. Unlike the widely used Fixman midpoint scheme, our methods utilize random finite differences and do not require the solution of resistance problems or the computation of the action of the inverse square root of the RPY mobility matrix. We construct two temporal schemes which are viable for large-scale simulations, an Euler-Maruyama traction scheme and a trapezoidal slip scheme, which minimize the number of mobility problems to be solved per time step while capturing the required stochastic drift terms. We validate and compare these schemes numerically by modeling suspensions of boomerang-shaped particles sedimented near a bottom wall. Using the trapezoidal scheme, we investigate the steady-state active motion in dense suspensions of confined microrollers, whose height above the wall is set by a combination of thermal noise and active flows. We find the existence of two populations of active particles, slower ones closer to the bottom and faster ones above them, and demonstrate that our method provides quantitative accuracy even with relatively coarse resolutions of the particle geometry.
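
    The stochastic drift term discussed above (the kT-times-divergence-of-mobility contribution) is the piece that random finite differences estimate without analytic derivatives of the mobility. The one-dimensional toy below illustrates only that estimator; it uses an arbitrary logistic "mobility" and is not the paper's Euler-Maruyama traction or trapezoidal slip scheme.

        # 1-D toy of the random-finite-difference (RFD) estimator: its mean equals
        # kT * dM/dx without differentiating the mobility M(x) analytically.
        import numpy as np

        kT = 1.0
        delta = 1e-4                 # small RFD displacement parameter

        def mobility(x):
            """Arbitrary configuration-dependent mobility (logistic toy model)."""
            return 1.0 / (1.0 + np.exp(-x))

        def rfd_drift(x, rng):
            w = rng.standard_normal()
            return kT * (mobility(x + 0.5 * delta * w)
                         - mobility(x - 0.5 * delta * w)) * w / delta

        rng = np.random.default_rng(1)
        x = 0.3
        samples = [rfd_drift(x, rng) for _ in range(100_000)]
        analytic = kT * mobility(x) * (1.0 - mobility(x))   # derivative of the logistic toy
        print(f"RFD mean = {np.mean(samples):.4f}, analytic kT*dM/dx = {analytic:.4f}")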

  8. Global views of energetic particle precipitation and their sources: Combining large-scale models with observations during the 21-22 January 2005 magnetic storm (Invited)

    NASA Astrophysics Data System (ADS)

    Kozyra, J. U.; Brandt, P. C.; Cattell, C. A.; Clilverd, M.; de Zeeuw, D.; Evans, D. S.; Fang, X.; Frey, H. U.; Kavanagh, A. J.; Liemohn, M. W.; Lu, G.; Mende, S. B.; Paxton, L. J.; Ridley, A. J.; Rodger, C. J.; Soraas, F.

    2010-12-01

    Energetic ions and electrons that precipitate into the upper atmosphere from sources throughout geospace carry the influences of space weather disturbances deeper into the atmosphere, possibly contributing to climate variability. The three-dimensional atmospheric effects of these precipitating particles are a function of the energy and species of the particles, lifetimes of reactive species generated during collisions in the atmosphere, the nature of the driving space weather disturbance, and the large-scale transport properties (meteorology) of the atmosphere in the region of impact. Unraveling the features of system-level coupling between solar magnetic variability, space weather and stratospheric dynamics requires a global view of the precipitation, along with its temporal and spatial variation. However, observations of particle precipitation at the system level are sparse and incomplete requiring they be combined with other observations and with large-scale models to provide the global context that is needed to accelerate progress. We compare satellite and ground-based observations of geospace conditions and energetic precipitation (at ring current, radiation belt and auroral energies) to a simulation of the geospace environment during 21-22 January 2005 by the BATS-R-US MHD model coupled with a self-consistent ring current solution. The aim is to explore the extent to which regions of particle precipitation track global magnetic field distortions and ways in which global models enhance our understanding of linkages between solar wind drivers and evolution of energetic particle precipitation.

  9. Extending SME to Handle Large-Scale Cognitive Modeling.

    PubMed

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into the SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log(n)); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.

  10. Tools for Large-Scale Mobile Malware Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bierma, Michael

    Analyzing mobile applications for malicious behavior is an important area of research, and is made difficult, in part, by the increasingly large number of applications available for the major operating systems. There are currently over 1.2 million apps available in both the Google Play and Apple App stores (the respective official marketplaces for the Android and iOS operating systems) [1, 2]. Our research provides two large-scale analysis tools to aid in the detection and analysis of mobile malware. The first tool we present, Andlantis, is a scalable dynamic analysis system capable of processing over 3000 Android applications per hour. Traditionally, Android dynamic analysis techniques have been relatively limited in scale due to the computational resources required to emulate the full Android system to achieve accurate execution. Andlantis is the most scalable Android dynamic analysis framework to date, and is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples through the system, and provide examples of malware analysis performed with the resulting data. While techniques exist to perform static analysis on a large number of applications, large-scale analysis of iOS applications has been relatively small scale due to the closed nature of the iOS ecosystem, and the difficulty of acquiring applications for analysis. The second tool we present, iClone, addresses the challenges associated with iOS research in order to detect application clones within a dataset of over 20,000 iOS applications.

  11. Current challenges in quantifying preferential flow through the vadose zone

    NASA Astrophysics Data System (ADS)

    Koestel, John; Larsbo, Mats; Jarvis, Nick

    2017-04-01

    In this presentation, we give an overview of current challenges in quantifying preferential flow through the vadose zone. A review of the literature suggests that current generation models do not fully reflect the present state of process understanding and empirical knowledge of preferential flow. We believe that the development of improved models will be stimulated by the increasingly widespread application of novel imaging technologies as well as future advances in computational power and numerical techniques. One of the main challenges in this respect is to bridge the large gap between the scales at which preferential flow occurs (pore to Darcy scales) and the scale of interest for management (fields, catchments, regions). Studies at the pore scale are being supported by the development of 3-D non-invasive imaging and numerical simulation techniques. These studies are leading to a better understanding of how macropore network topology and initial/boundary conditions control key state variables like matric potential and thus the strength of preferential flow. Extrapolation of this knowledge to larger scales would require support from theoretical frameworks such as key concepts from percolation and network theory, since we lack measurement technologies to quantify macropore networks at these large scales. Linked hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data enable investigation of the larger-scale heterogeneities that can generate preferential flow patterns at pedon, hillslope and field scales. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help in parameterizing models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).

  12. Dark matter and cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schramm, D.N.

    1992-03-01

    The cosmological dark matter problem is reviewed. The Big Bang Nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the Ω = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between "cold" and "hot" non-baryonic candidates is shown to depend on the assumed "seeds" that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.

  14. SiGN: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .

  15. Dark matter and cosmology

    NASA Astrophysics Data System (ADS)

    Schramm, David N.

    1992-07-01

    The cosmological dark matter problem is reviewed. The Big Bang Nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the Ω = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between "cold" and "hot" non-baryonic candidates is shown to depend on the assumed "seeds" that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.

  16. Dark matter and cosmology

    NASA Astrophysics Data System (ADS)

    Schramm, D. N.

    1992-03-01

    The cosmological dark matter problem is reviewed. The Big Bang nucleosynthesis constraints on the baryon density are compared with the densities implied by visible matter, dark halos, dynamics of clusters, gravitational lenses, large-scale velocity flows, and the omega = 1 flatness/inflation argument. It is shown that (1) the majority of baryons are dark; and (2) non-baryonic dark matter is probably required on large scales. It is also noted that halo dark matter could be either baryonic or non-baryonic. Discrimination between 'cold' and 'hot' non-baryonic candidates is shown to depend on the assumed 'seeds' that stimulate structure formation. Gaussian density fluctuations, such as those induced by quantum fluctuations, favor cold dark matter, whereas topological defects such as strings, textures or domain walls may work equally or better with hot dark matter. A possible connection between cold dark matter, globular cluster ages, and the Hubble constant is mentioned. Recent large-scale structure measurements, coupled with microwave anisotropy limits, are shown to raise some questions for the previously favored density fluctuation picture. Accelerator and underground limits on dark matter candidates are also reviewed.

  17. Upscaling of U (VI) desorption and transport from decimeter‐scale heterogeneity to plume‐scale modeling

    USGS Publications Warehouse

    Curtis, Gary P.; Kohler, Matthias; Kannappan, Ramakrishnan; Briggs, Martin A.; Day-Lewis, Frederick D.

    2015-01-01

    Scientifically defensible predictions of field-scale U(VI) transport in groundwater require an understanding of key processes at multiple scales. These scales range from smaller than the sediment grain scale (less than 10 μm) to as large as the field scale, which can extend over several kilometers. The key processes that need to be considered include both geochemical reactions in solution and at sediment surfaces as well as physical transport processes including advection, dispersion, and pore-scale diffusion. The research summarized in this report includes both experimental and modeling results in batch, column and tracer tests. The objectives of this research were to: (1) quantify the rates of U(VI) desorption from sediments acquired from a uranium-contaminated aquifer in batch experiments; (2) quantify rates of U(VI) desorption in column experiments with variable chemical conditions; and (3) quantify nonreactive tracer and U(VI) transport in field tests.

  18. Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series

    NASA Astrophysics Data System (ADS)

    Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth

    2017-12-01

    The growing field of large-scale time domain astronomy requires methods for probabilistic data analysis that are computationally tractable, even with large data sets. Gaussian processes (GPs) are a popular class of models used for this purpose, but since the computational cost scales, in general, as the cube of the number of data points, their application has been limited to small data sets. In this paper, we present a novel method for GP modeling in one dimension where the computational requirements scale linearly with the size of the data set. We demonstrate the method by applying it to simulated and real astronomical time series data sets. These demonstrations are examples of probabilistic inference of stellar rotation periods, asteroseismic oscillation spectra, and transiting planet parameters. The method exploits structure in the problem when the covariance function is expressed as a mixture of complex exponentials, without requiring evenly spaced observations or uniform noise. This form of covariance arises naturally when the process is a mixture of stochastically driven damped harmonic oscillators—providing a physical motivation for and interpretation of this choice—but we also demonstrate that it can be a useful effective model in some other cases. We present a mathematical description of the method and compare it to existing scalable GP methods. The method is fast and interpretable, with a range of potential applications within astronomical data analysis and beyond. We provide well-tested and documented open-source implementations of this method in C++, Python, and Julia.
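
    The covariance form the abstract refers to (a mixture of exponentially damped sinusoids) is easy to write down; what the paper contributes is evaluating its likelihood in O(N). The sketch below shows the model structure with a single damped-cosine term and a brute-force O(N^3) Cholesky likelihood on synthetic, unevenly sampled data. Parameter values are arbitrary, and the open-source implementations mentioned in the abstract should be preferred for real analyses.

        # Single-term "complex exponential" covariance and a direct GP likelihood:
        #     k(tau) = exp(-c*|tau|) * (a*cos(d*|tau|) + b*sin(d*|tau|))
        # Brute-force O(N^3) Cholesky is used here only to show the model structure.
        import numpy as np

        def kernel(tau, a=1.0, b=0.1, c=0.5, d=2.0):
            tau = np.abs(tau)
            return np.exp(-c * tau) * (a * np.cos(d * tau) + b * np.sin(d * tau))

        def log_likelihood(t, y, yerr):
            K = kernel(t[:, None] - t[None, :]) + np.diag(yerr ** 2)
            L = np.linalg.cholesky(K)          # the fast method avoids this O(N^3) step
            alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
            return (-0.5 * (y @ alpha) - np.sum(np.log(np.diag(L)))
                    - 0.5 * len(y) * np.log(2.0 * np.pi))

        rng = np.random.default_rng(42)
        t = np.sort(rng.uniform(0.0, 20.0, 200))           # unevenly spaced times
        yerr = np.full(t.size, 0.3)
        y = np.sin(2.0 * t) + yerr * rng.standard_normal(t.size)
        print("log likelihood:", log_likelihood(t, y, yerr))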

  19. Test of the CLAS12 RICH large-scale prototype in the direct proximity focusing configuration

    DOE PAGES

    Anefalos Pereira, S.; Baltzell, N.; Barion, L.; ...

    2016-02-11

    A large area ring-imaging Cherenkov detector has been designed to provide clean hadron identification capability in the momentum range from 3 GeV/c up to 8 GeV/c for the CLAS12 experiments at the upgraded 12 GeV continuous electron beam accelerator facility of Jefferson Laboratory. The adopted solution foresees a novel hybrid optics design based on aerogel radiator, composite mirrors and highly packed and highly segmented photon detectors. Cherenkov light will either be imaged directly (forward tracks) or after two mirror reflections (large angle tracks). We report here the results of the tests of a large-scale prototype of the RICH detector performed with the hadron beam of the CERN T9 experimental hall for the direct detection configuration. The tests demonstrated that the proposed design provides the required pion-to-kaon rejection factor of 1:500 in the whole momentum range.

  20. Atmospheric considerations regarding the impact of heat dissipation from a nuclear energy center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rotty, R.M.; Bauman, H.; Bennett, L.L.

    1976-05-01

    Potential changes in climate resulting from a large nuclear energy center are discussed. On a global scale, no noticeable changes are likely, but on both a regional and a local scale, changes can be expected. Depending on the cooling system employed, the amount of fog may increase, the amount and distribution of precipitation will change, and the frequency or location of severe storms may change. Very large heat releases over small surface areas can result in greater atmospheric instability; a large number of closely spaced natural-draft cooling towers have this disadvantage. On the other hand, employment of natural-draft towers makes an increase in the occurrence of ground fog unlikely. The analysis suggests that the cooling towers for a large nuclear energy center should be located in clusters of four with at least 2.5-mile spacing between the clusters. This is equivalent to the requirement of one acre of land surface for every two megawatts of heat being rejected.
