Science.gov

Sample records for large scale harmonization

  1. Scaling of the generation of high-order harmonics in large gas media with focal length

    SciTech Connect

    Boutu, W.; Auguste, T.; Caumes, J. P.; Carre, B.; Merdji, H.

    2011-11-15

    We present theoretical and experimental results on high-order harmonic generation in a low-density, few-centimeter-long gas medium (L_med ≤ 10 cm). We study the dependence of harmonic efficiency on focal length. Theoretically, we consider in detail the generation of the 25th harmonic of a short-pulse Ti:sapphire laser in argon. Within the strong-field approximation for the atomic dipole, and with a complete account of the macroscopic propagation, we compute the number of photons produced as a function of the medium parameters and the focusing conditions. The simulations show that, at constant intensity, the emission of the 25th harmonic scales with the focal length as ~f⁴ at low pressure (P = 2 Torr) and as ~f⁶ at higher pressure (P = 5 Torr). At constant laser energy, we find that the harmonic signal scales approximately as f² at low pressure and as f⁴ at higher pressure. These numerical results are compared with experimental data.
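
    As a back-of-the-envelope illustration of the scaling laws quoted in this abstract (only the exponents come from the record; the prefactor and example focal lengths are hypothetical):

```python
# Illustrative sketch of the constant-intensity scaling laws above:
# the 25th-harmonic yield scales as ~f^4 at P = 2 Torr and ~f^6 at P = 5 Torr.
# The prefactor C is a placeholder, not a physical value.

def harmonic_yield(f, pressure_torr, C=1.0):
    """Relative 25th-harmonic photon number vs. focal length f (arbitrary units)."""
    alpha = 4 if pressure_torr <= 2 else 6
    return C * f ** alpha

# Doubling the focal length multiplies the yield by 2^4 = 16 at low pressure
# and by 2^6 = 64 at the higher pressure.
print(harmonic_yield(2.0, 2) / harmonic_yield(1.0, 2))  # 16.0
print(harmonic_yield(2.0, 5) / harmonic_yield(1.0, 5))  # 64.0
```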

  2. Large scale synthesis, second-harmonic generation, and piezoelectric properties of a noncentrosymmetric vanadium phosphate, Li2VPO6

    NASA Astrophysics Data System (ADS)

    Lee, Eun Pyo; Lee, Dong Woo; Cho, Yoon-Ho; Thao Tran, T.; Shiv Halasyamani, P.; Ok, Kang Min

    2013-06-01

    The phase-pure large-scale synthesis, second-harmonic generation, and piezoelectric properties of a vanadium phosphate, Li2VPO6, are reported. The material has been synthesized by a standard solid-state reaction using Li2CO3, V2O5, and NH4H2PO4 as reagents. The phase purity and crystal structure of the reported material have been confirmed by powder X-ray and neutron diffraction. The material crystallizes in the orthorhombic space group Pna21 (no. 33) with a = 10.32581(7) Å, b = 4.63728(3) Å, c = 8.56606(5) Å, and Z = 4. The noncentrosymmetric layered structure is composed of distorted VO6 octahedra and PO4 tetrahedra. The V5+ cations distort along either the approximate [1 0 1] or [-1 0 1] direction, attributable to the alignment of the distorted VO6 octahedra, resulting in a polar structure. Powder second-harmonic generation (SHG) measurements on Li2VPO6 using 1064 nm radiation indicate that the material has an SHG efficiency of approximately 10 times that of α-SiO2 and is not phase-matchable (type 1). Converse piezoelectric measurements for the material reveal a piezoelectric coefficient, d33, of 12 pm V-1.

  3. Large-Scale Ionospheric Effects Related to Electron-Gyro Harmonics: What We Have Learned from HAARP.

    NASA Astrophysics Data System (ADS)

    Watkins, B. J.; Fallen, C. T.; Secan, J. A.

    2014-12-01

    The HAARP ionospheric modification facility has unique capabilities that enable a wide range of HF frequencies with transmit powers ranging from very low to very high values. We will review a range of experimental results that illustrate large-scale ionospheric effects when the HF frequencies used are close to electron gyro-harmonics, focusing mainly on the 3rd and 4th harmonics. The data are primarily from the UHF diagnostic radar and total electron content (TEC) observations through the heated topside ionosphere. Radar data for HF frequencies just above and just below gyro-harmonics show significant differences in radar scatter cross-section that suggest differing plasma processes; this effect is HF-power dependent, with some effects observable only at full HF power. For the production of artificial ionization in the E-region when the HF frequency is near gyro-harmonics, the results differ significantly for relatively small (50 kHz) variations in the HF frequency. We show how slow FM scans in conjunction with gyro-harmonic effects are effective in producing artificial ionization in the lower ionosphere. In the topside ionosphere, enhanced density and upward fluxes have been observed, and these may act as effective ducts for the propagation of VLF waves upward into the magnetosphere. Experimental techniques have been developed that may be used to continuously maintain these effects in the topside ionosphere.

  4. Large scale synthesis, second-harmonic generation, and piezoelectric properties of a noncentrosymmetric vanadium phosphate, Li₂VPO₆

    SciTech Connect

    Lee, Eun Pyo; Lee, Dong Woo; Cho, Yoon-Ho; Thao Tran, T.; Shiv Halasyamani, P.; Ok, Kang Min

    2013-06-01

    The phase-pure large-scale synthesis, second-harmonic generation, and piezoelectric properties of a vanadium phosphate, Li₂VPO₆, are reported. The material has been synthesized by a standard solid-state reaction using Li₂CO₃, V₂O₅, and NH₄H₂PO₄ as reagents. The phase purity and crystal structure of the reported material have been confirmed by powder X-ray and neutron diffraction. The material crystallizes in the orthorhombic space group Pna2₁ (no. 33) with a = 10.32581(7) Å, b = 4.63728(3) Å, c = 8.56606(5) Å, and Z = 4. The noncentrosymmetric layered structure is composed of distorted VO₆ octahedra and PO₄ tetrahedra. The V⁵⁺ cations distort along either the approximate [1 0 1] or [-1 0 1] direction, attributable to the alignment of the distorted VO₆ octahedra, resulting in a polar structure. Powder second-harmonic generation (SHG) measurements on Li₂VPO₆ using 1064 nm radiation indicate that the material has an SHG efficiency of approximately 10 times that of α-SiO₂ and is not phase-matchable (type 1). Converse piezoelectric measurements for the material reveal a piezoelectric coefficient, d₃₃, of 12 pm V⁻¹. - Graphical abstract: A net moment arising from the alignment of the distorted VO₆ octahedra is responsible for the SHG and piezoelectricity of the noncentrosymmetric vanadium phosphate Li₂VPO₆. Highlights: • Large-scale synthesis of an NCS vanadium phosphate, Li₂VPO₆, is reported. • Li₂VPO₆ exhibits an SHG efficiency of 10× that of α-SiO₂. • Li₂VPO₆ reveals a piezoelectric coefficient, d₃₃, of 12 pm V⁻¹. • A net moment arises from the alignment of the VO₆ octahedra in Li₂VPO₆.

  5. Low variance energy estimators for systems of quantum Drude oscillators: treating harmonic path integrals with large separations of time scales.

    PubMed

    Whitfield, Troy W; Martyna, Glenn J

    2007-02-21

    In the effort to develop atomistic models capable of accurately describing nanoscale systems with complex interfaces, it has become clear that simple treatments with rigid charge distributions and dispersion coefficients selected to generate bulk properties are insufficient to predict important physical properties. The quantum Drude oscillator model, a system of one-electron pseudoatoms whose "pseudoelectrons" are harmonically bound to their respective "pseudonuclei," is capable of treating many-body polarization and dispersion interactions in molecular systems on an equal footing due to the ability of the pseudoatoms to mimic the long-range interactions that characterize real materials. Using imaginary time path integration, the Drude oscillator model can, in principle, be solved in computer operation counts that scale linearly with the number of atoms in the system. In practice, however, standard expressions for the energy and pressure, including the commonly used virial estimator, have extremely large variances that require untenably long simulation times to generate converged averages. In this paper, low-variance estimators for the internal energy are derived, in which the large zero-point energy of the oscillators does not contribute to the variance. The new estimators are applicable to any system of harmonic oscillators coupled to one another (or to the environment) via an arbitrary set of anharmonic interactions. The variance of the new estimators is found to be much smaller than standard estimators in three example problems, a one-dimensional anharmonic oscillator and quantum Drude models of the xenon dimer and solid (fcc) xenon, respectively, yielding 2-3 orders of magnitude improvement in computational efficiency.

  6. Results and harmonization guidelines from two large-scale international Elispot proficiency panels conducted by the Cancer Vaccine Consortium (CVC/SVI)

    PubMed Central

    Panageas, Katherine S.; Ben-Porat, Leah; Boyer, Jean; Britten, Cedrik M.; Clay, Timothy M.; Kalos, Michael; Maecker, Holden T.; Romero, Pedro; Yuan, Jianda; Martin Kast, W.; Hoos, Axel

    2007-01-01

    The Cancer Vaccine Consortium of the Sabin Vaccine Institute (CVC/SVI) is conducting an ongoing large-scale immune monitoring harmonization program through its members and affiliated associations. This effort was brought to life as an external validation program by conducting an international Elispot proficiency panel with 36 laboratories in 2005, followed by a second panel with 29 participating laboratories in 2006, allowing lessons from the first panel to be applied. Critical protocol choices, as well as standardization and validation practices among laboratories, were assessed through detailed surveys. Although panel participants had to follow general guidelines in order to allow comparison of results, each laboratory was able to use its own protocols, materials and reagents. The second panel recorded an overall significantly improved performance, as measured by the ability to detect all predefined responses correctly. Protocol choices and laboratory practices, which can have a dramatic effect on the overall assay outcome, were identified and led to the following recommendations: (A) establish a laboratory SOP for Elispot testing procedures, including (A1) a counting method for apoptotic cells for determining adequate cell dilution for plating, and (A2) overnight rest of cells prior to plating and incubation; (B) use only pre-tested serum optimized for a low-background, high-signal ratio; (C) establish a laboratory SOP for plate reading, including (C1) human auditing during the reading process and (C2) adequate adjustments for technical artifacts; and (D) only allow trained personnel who are certified per laboratory SOPs to conduct assays. The recommendations described under (A) were found to make a statistically significant difference in assay performance, while the remaining recommendations are based on practical experiences confirmed by the panel results, which could not be statistically tested. These results provide initial harmonization guidelines

  7. Results and harmonization guidelines from two large-scale international Elispot proficiency panels conducted by the Cancer Vaccine Consortium (CVC/SVI).

    PubMed

    Janetzki, Sylvia; Panageas, Katherine S; Ben-Porat, Leah; Boyer, Jean; Britten, Cedrik M; Clay, Timothy M; Kalos, Michael; Maecker, Holden T; Romero, Pedro; Yuan, Jianda; Kast, W Martin; Hoos, Axel

    2008-03-01

    The Cancer Vaccine Consortium of the Sabin Vaccine Institute (CVC/SVI) is conducting an ongoing large-scale immune monitoring harmonization program through its members and affiliated associations. This effort was brought to life as an external validation program by conducting an international Elispot proficiency panel with 36 laboratories in 2005, followed by a second panel with 29 participating laboratories in 2006, allowing lessons from the first panel to be applied. Critical protocol choices, as well as standardization and validation practices among laboratories, were assessed through detailed surveys. Although panel participants had to follow general guidelines in order to allow comparison of results, each laboratory was able to use its own protocols, materials and reagents. The second panel recorded an overall significantly improved performance, as measured by the ability to detect all predefined responses correctly. Protocol choices and laboratory practices, which can have a dramatic effect on the overall assay outcome, were identified and led to the following recommendations: (A) establish a laboratory SOP for Elispot testing procedures, including (A1) a counting method for apoptotic cells for determining adequate cell dilution for plating, and (A2) overnight rest of cells prior to plating and incubation; (B) use only pre-tested serum optimized for a low-background, high-signal ratio; (C) establish a laboratory SOP for plate reading, including (C1) human auditing during the reading process and (C2) adequate adjustments for technical artifacts; and (D) only allow trained personnel who are certified per laboratory SOPs to conduct assays. The recommendations described under (A) were found to make a statistically significant difference in assay performance, while the remaining recommendations are based on practical experiences confirmed by the panel results, which could not be statistically tested. These results provide initial harmonization guidelines

  8. MinActionPath: maximum likelihood trajectory for large-scale structural transitions in a coarse-grained locally harmonic energy landscape

    PubMed Central

    Franklin, Joel; Koehl, Patrice; Doniach, Sebastian; Delarue, Marc

    2007-01-01

    The non-linear problem of simulating the structural transition between two known forms of a macromolecule remains a challenge in structural biology. The problem is usually addressed in an approximate way using ‘morphing’ techniques, which are linear interpolations of either the Cartesian or the internal coordinates between the initial and end states, followed by energy minimization. Here we describe a web tool that implements a new method to calculate the most probable trajectory, a method that is exact for harmonic potentials; as an illustration, the classical Cα-based Elastic Network Model (ENM) is used for both the initial and the final states, but other variants of the ENM are also possible. The Langevin equation under this potential is solved analytically using the Onsager and Machlup action minimization formalism on each side of the transition, thus replacing the original non-linear problem by a pair of linear differential equations joined by a non-linear boundary matching condition. The crossover between the two multidimensional energy curves around each state is found numerically using an iterative approach, producing the most probable trajectory and fully characterizing the transition state and its energy. Jobs calculating such trajectories can be submitted on-line at: http://lorentz.dynstr.pasteur.fr/joel/index.php. PMID:17545201

  9. MinActionPath: maximum likelihood trajectory for large-scale structural transitions in a coarse-grained locally harmonic energy landscape.

    PubMed

    Franklin, Joel; Koehl, Patrice; Doniach, Sebastian; Delarue, Marc

    2007-07-01

    The non-linear problem of simulating the structural transition between two known forms of a macromolecule remains a challenge in structural biology. The problem is usually addressed in an approximate way using 'morphing' techniques, which are linear interpolations of either the Cartesian or the internal coordinates between the initial and end states, followed by energy minimization. Here we describe a web tool that implements a new method to calculate the most probable trajectory, a method that is exact for harmonic potentials; as an illustration, the classical Cα-based Elastic Network Model (ENM) is used for both the initial and the final states, but other variants of the ENM are also possible. The Langevin equation under this potential is solved analytically using the Onsager and Machlup action minimization formalism on each side of the transition, thus replacing the original non-linear problem by a pair of linear differential equations joined by a non-linear boundary matching condition. The crossover between the two multidimensional energy curves around each state is found numerically using an iterative approach, producing the most probable trajectory and fully characterizing the transition state and its energy. Jobs calculating such trajectories can be submitted on-line at: http://lorentz.dynstr.pasteur.fr/joel/index.php.

  10. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  11. Large scale scientific computing

    SciTech Connect

    Deuflhard, P.; Engquist, B.

    1987-01-01

    This book presents papers on large scale scientific computing. It includes: Initial value problems of ODE's and parabolic PDE's; Boundary value problems of ODE's and elliptic PDE's; Hyperbolic PDE's; Inverse problems; Optimization and optimal control problems; and Algorithm adaptation on supercomputers.

  12. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  13. Large Scale Nonlinear Programming.

    DTIC Science & Technology

    1978-06-15

    KEY WORDS: LARGE SCALE OPTIMIZATION, APPLICATIONS OF NONLINEAR ... NONLINEAR PROGRAMMING by Garth P. McCormick. 1. Introduction. The general mathematical programming (optimization) problem can be stated in the following form ... because the difficulty in solving a general nonlinear optimization problem has as much to do with the nature of the functions involved as it does with the

  14. Comparing differential tissue harmonic imaging with tissue harmonic and fundamental gray scale imaging of the liver.

    PubMed

    Chiou, See-Ying; Forsberg, Flemming; Fox, Traci B; Needleman, Laurence

    2007-11-01

    The purpose of this study was to compare fundamental gray scale sonography, tissue harmonic imaging (THI), and differential tissue harmonic imaging (DTHI) for depicting normal and abnormal livers. The in vitro lateral resolution of DTHI, THI, and sonography was assessed in a phantom. Sagittal and transverse images of right and left hepatic lobes of 5 volunteers and 20 patients and images of 27 liver lesions were also acquired. Three independent blinded readers scored all randomized images for noise, detail resolution, image quality, and margin (for lesions) on a 7-point scale. Next, images from the same location obtained with all 3 modes were compared blindly side by side and rated by all readers. Contrast-to-noise ratios were calculated for the lesions, and the depth of penetration (centimeters) was determined for all images. In vitro, the lateral resolution of DTHI was significantly better than fundamental sonography (P = .009) and showed a trend toward significance versus THI (P = .06). In the far field, DTHI performed better than both modes (P < .04). In vivo, 450 images were scored, and for all parameters, DTHI and THI did better than sonography (P < .002). Differential tissue harmonic imaging scored significantly higher than THI with regard to detail resolution and image quality (P < .001). The average increase in penetration with THI and DTHI was around 0.6 cm relative to sonography (P < .0001). The contrast-to-noise ratio for DTHI showed a trend toward significance versus THI (P = .06). Side-by-side comparisons rated DTHI better than THI and sonography in 54% of the cases (P < .007). Tissue harmonic imaging and DTHI do better than fundamental sonography for hepatic imaging, with DTHI being rated the best of the 3 techniques.

  15. Large deviations in the alternating mass harmonic chain

    NASA Astrophysics Data System (ADS)

    Fogedby, Hans C.

    2014-08-01

    We extend the work of Kannan et al. and derive the cumulant generating function (CGF) for the alternating-mass harmonic chain consisting of N particles and driven by heat reservoirs. The main result is a closed expression for the CGF in the thermodynamic large-N limit. This expression is independent of N but depends on whether the chain consists of an even or odd number of particles, in accordance with the results obtained by Kannan et al. for the heat current. This result is consistent with the absence of local thermodynamic equilibrium in a linear system.

  16. Large scale tracking algorithms

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination but, unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low-resolution sensors, "blob" tracking is the norm. For higher-resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  17. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L. |; Rickert, M. |

    1997-04-01

    Large-scale microscopic (i.e., vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed well above 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable single-bit implementations on parallel supercomputers for statistical physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.

  18. Large deviations of heat flow in harmonic chains

    NASA Astrophysics Data System (ADS)

    Kundu, Anupam; Sabhapandit, Sanjib; Dhar, Abhishek

    2011-03-01

    We consider heat transport across a harmonic chain connected at its two ends to white-noise Langevin reservoirs at different temperatures. In the steady state of this system, the heat Q flowing from one reservoir into the system in a finite time τ has a distribution P(Q, τ). We study the large-time form of the corresponding moment generating function ⟨e^{−λQ}⟩ ~ g(λ)e^{τμ(λ)}. Exact formal expressions, in terms of phonon Green's functions, are obtained both for μ(λ) and for the lowest-order correction g(λ). We point out that, in general, knowledge of both μ(λ) and g(λ) is required for finding the large deviation function associated with P(Q, τ). The function μ(λ) is known to be the largest eigenvalue of an appropriate Fokker-Planck-type operator, and our method also gives the corresponding eigenvector exactly.
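
    To make the role of the scaled CGF μ(λ) concrete, here is a minimal numerical sketch (a toy Gaussian model of our own, not the paper's phonon Green's-function calculation; all parameter values are arbitrary):

```python
import numpy as np

# Toy scaled cumulant generating function: take the heat Q transferred over
# time tau to be Gaussian with mean m*tau and variance s2*tau, so that
#   mu(lam) = (1/tau) * ln <exp(-lam*Q)> = -lam*m + 0.5 * lam**2 * s2.
rng = np.random.default_rng(0)
m, s2, tau, lam = 1.0, 2.0, 50.0, 0.1

Q = rng.normal(m * tau, np.sqrt(s2 * tau), size=200_000)
mu_est = np.log(np.mean(np.exp(-lam * Q))) / tau   # sample estimate
mu_exact = -lam * m + 0.5 * lam**2 * s2            # closed form: -0.09

print(abs(mu_est - mu_exact) < 1e-3)  # True
```

    The sample estimate converges to the closed form as the number of samples grows, which is exactly the sense in which μ(λ) controls the large-time statistics of Q.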

  19. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
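
    The abstract names an external (exterior) penalty-function method as the basis of BIGDOT. The following sketch illustrates that classic idea only, on a toy one-variable problem of our own devising, not Vanderplaats' actual algorithm: constraint violations are penalized with an increasing weight r, and each stage warm-starts from the previous solution.

```python
# Exterior penalty sketch: minimize f(x) = (x - 3)^2 subject to x <= 1.
# The unconstrained minimizer of the penalized function approaches the
# constrained optimum x* = 1 from outside as the penalty weight r grows.

def minimize_penalized(r, x, iters=500):
    lr = 0.5 / (2.0 + 2.0 * r)          # step size matched to the curvature
    for _ in range(iters):
        g = max(0.0, x - 1.0)           # constraint violation, if any
        grad = 2.0 * (x - 3.0) + 2.0 * r * g
        x -= lr * grad                  # plain gradient descent
    return x

x = 0.0
for r in (1.0, 10.0, 100.0, 1000.0):    # increasing penalty weights
    x = minimize_penalized(r, x)        # warm start from the previous stage
print(round(x, 2))  # 1.0 (slightly infeasible, as exterior penalties are)
```

    The stage-r minimizer is (3 + r)/(1 + r), so the iterates approach x* = 1 from the infeasible side; this is the usual trade-off of exterior penalty methods, bought in exchange for very low memory use.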

  20. Scaling of mode shapes from operational modal analysis using harmonic forces

    NASA Astrophysics Data System (ADS)

    Brandt, A.; Berardengo, M.; Manzoni, S.; Cigada, A.

    2017-10-01

    This paper presents a new method for scaling mode shapes obtained by means of operational modal analysis. The method is capable of scaling mode shapes on any structure, including structures with closely coupled modes, and it can be used in the presence of ambient vibration from traffic or wind loads, etc. Harmonic excitation can be accomplished relatively easily using general-purpose actuators, even at the force levels necessary for driving large structures such as bridges and high-rise buildings. The signal processing necessary for mode shape scaling by the proposed method is simple, and the method can easily be implemented in most measurement systems capable of generating a sine wave output. The tests necessary to scale the modes are short compared to a typical operational modal analysis test. The proposed method is thus easy to apply and inexpensive relative to some other mode shape scaling methods available in the literature. Although it is not necessary per se, we propose to excite the structure at, or close to, the eigenfrequencies of the modes to be scaled, since this provides a better signal-to-noise ratio in the response sensors, thus permitting the use of smaller actuators. An extensive experimental activity on a real structure was carried out, and the results reported demonstrate the feasibility and accuracy of the proposed method. Since the method utilizes harmonic excitation for the mode shape scaling, we propose to call the method OMAH.

  1. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  2. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  3. Large-scale circuit simulation

    NASA Astrophysics Data System (ADS)

    Wei, Y. P.

    1982-12-01

    The simulation of VLSI (Very Large Scale Integration) circuits falls beyond the capabilities of conventional circuit simulators like SPICE. On the other hand, conventional logic simulators can only give the results of logic levels 1 and 0, with the attendant loss of detail in the waveforms. The aim of developing large-scale circuit simulation is to bridge the gap between conventional circuit simulation and logic simulation. This research investigates new approaches for fast and relatively accurate time-domain simulation of MOS (Metal Oxide Semiconductor), LSI (Large Scale Integration) and VLSI circuits. New techniques and new algorithms are studied in the following areas: (1) analysis sequencing, (2) nonlinear iteration, (3) the modified Gauss-Seidel method, and (4) latency criteria and a timestep control scheme. The developed methods have been implemented in a simulation program, PREMOS, which could be used as a design verification tool for MOS circuits.

  4. Large Scale Dynamos in Stars

    NASA Astrophysics Data System (ADS)

    Vishniac, Ethan T.

    2015-01-01

    We show that a differentially rotating conducting fluid automatically creates a magnetic helicity flux with components along the rotation axis and in the direction of the local vorticity. This drives a rapid growth in the local density of current helicity, which in turn drives a large scale dynamo. The dynamo growth rate derived from this process is not constant, but depends inversely on the large scale magnetic field strength. This dynamo saturates when buoyant losses of magnetic flux compete with the large scale dynamo, providing a simple prediction for magnetic field strength as a function of Rossby number in stars. Increasing anisotropy in the turbulence produces a decreasing magnetic helicity flux, which explains the flattening of the B/Rossby number relation at low Rossby numbers. We also show that the kinetic helicity is always a subdominant effect. There is no kinematic dynamo in real stars.

  5. Galaxy clustering on large scales.

    PubMed Central

    Efstathiou, G

    1993-01-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h⁻¹ Mpc, where the Hubble constant H₀ = 100h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400

  6. Natural Minor Scale is More Natural to the Brain than Harmonic Minor Scale as Revealed by Magnetoencephalography

    NASA Astrophysics Data System (ADS)

    Ando, Hiromitsu; Nemoto, Iku; Oda, Shoichiro

    Minor mode is known to elicit stronger emotional responses than major mode in the brain. The present work focused on minor scales: the natural and harmonic minor scales were compared through automatic brain responses in an oddball paradigm. The standard stimulus was either the natural or the harmonic minor scale, and a deviant stimulus lacked one scale tone of the corresponding complete minor scale. The brain responded to the omission of every tone, but omission of the tone B flat in the natural minor experiment elicited a larger response than that of the other tones. In particular, the response was significantly larger than that to the omission of B in the harmonic minor experiment. This result suggests that the brain perceives the natural minor scale as more natural than the harmonic minor scale.

  7. Large scale biomimetic membrane arrays.

    PubMed

    Hansen, Jesper S; Perry, Mark; Vogel, Jörg; Groth, Jesper S; Vissing, Thomas; Larsen, Marianne S; Geschke, Oliver; Emneús, Jenny; Bohr, Henrik; Nielsen, Claus H

    2009-10-01

    To establish planar biomimetic membranes across large scale partition aperture arrays, we created a disposable single-use horizontal chamber design that supports combined optical-electrical measurements. Functional lipid bilayers could easily and efficiently be established across CO₂ laser micro-structured 8 × 8 aperture partition arrays with average aperture diameters of 301 ± 5 µm. We addressed the electro-physical properties of the lipid bilayers established across the micro-structured scaffold arrays by controllable reconstitution of biotechnologically and physiologically relevant membrane peptides and proteins. Next, we tested the scalability of the biomimetic membrane design by establishing lipid bilayers in rectangular 24 × 24 and hexagonal 24 × 27 aperture arrays, respectively. The results presented show that the design is suitable for further development of sensitive biosensor assays, and furthermore demonstrate that the design can conveniently be scaled up to support planar lipid bilayers in large square-centimeter partition arrays.

  8. Highly efficient second harmonic generation in hyperbolic metamaterial slot waveguides with large phase matching tolerance.

    PubMed

    Sun, Yu; Zheng, Zheng; Cheng, Jiangtao; Sun, Guodong; Qiao, Guofu

    2015-03-09

    Highly efficient second harmonic generation (SHG) bridging the mid-infrared (IR) and near-IR wavelengths in a coupled hyperbolic metamaterial waveguide with a nonlinear-polymer-filled nanoscale slot is theoretically investigated. By engineering the geometrical parameters, the collinear phase matching condition is satisfied between the even hybrid modes at the fundamental frequency (3,100 nm) and the second harmonic (1,550 nm). The two modes exhibit strong field overlap and significant field enhancement in the nonlinear interaction area (i.e., the slot), which leads to an extremely large nonlinear coupling coefficient. For a low pumping power of 100 mW, the device length is as short as 2.19 µm and the normalized conversion efficiency reaches more than 6.37 × 10⁵ W⁻¹cm⁻², which outperforms that of plasmonic-based structures. Moreover, efficient SHG can be achieved with great phase matching tolerance, i.e., a small theoretical fabrication-error sensitivity to the filling ratio and a broad pump bandwidth, in a compact device length of 2.19 µm using a 100 mW pump. The proposed scheme links mature near-IR devices to the mid-IR regime and has great potential for integrated chip-scale all-optical signal processing.
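
    As a sanity check on the quoted figures, the arithmetic below assumes the common definition of normalized efficiency, η_norm = P_SH / (P_pump² L²); this definition is an assumption, not stated in the record.

```python
# Back-of-envelope check of the quoted SHG numbers, assuming the
# normalized-efficiency convention eta_norm = P_SH / (P_pump**2 * L**2).

eta_norm = 6.37e5      # W^-1 cm^-2, from the abstract
P_pump   = 100e-3      # W (100 mW pump)
L        = 2.19e-4     # cm (2.19 um device length)

P_SH = eta_norm * P_pump**2 * L**2   # second-harmonic output power, W
conversion = P_SH / P_pump           # fractional conversion efficiency
```

Under this convention the quoted numbers imply roughly 0.3 mW of second harmonic, i.e., a conversion efficiency of about 0.3% over a 2.19 µm device.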

  9. The Consortium on Health and Ageing: Network of Cohorts in Europe and the United States (CHANCES) project--design, population and data harmonization of a large-scale, international study.

    PubMed

    Boffetta, Paolo; Bobak, Martin; Borsch-Supan, Axel; Brenner, Hermann; Eriksson, Sture; Grodstein, Fran; Jansen, Eugene; Jenab, Mazda; Juerges, Hendrik; Kampman, Ellen; Kee, Frank; Kuulasmaa, Kari; Park, Yikyung; Tjonneland, Anne; van Duijn, Cornelia; Wilsgaard, Tom; Wolk, Alicja; Trichopoulos, Dimitrios; Bamia, Christina; Trichopoulou, Antonia

    2014-12-01

    There is a public health demand to prevent health conditions which lead to increased morbidity and mortality among the rapidly-increasing elderly population. Data for the incidence of such conditions exist in cohort studies worldwide, which, however, differ in various aspects. The Consortium on Health and Ageing: Network of Cohorts in Europe and the United States (CHANCES) project aims at harmonizing data from existing major longitudinal studies for the elderly whilst focussing on cardiovascular diseases, diabetes mellitus, cancer, fractures and cognitive impairment in order to estimate their prevalence, incidence and cause-specific mortality, and identify lifestyle, socioeconomic, and genetic determinants and biomarkers for the incidence of and mortality from these conditions. A survey instrument assessing ageing-related conditions of the elderly will also be developed. Fourteen cohort studies participate in CHANCES, with 683,228 elderly participants (and 150,210 deaths) from 23 European and three non-European countries. So far, 287 variables on health conditions and a variety of exposures, including biomarkers and genetic data, have been harmonized. Different research hypotheses are investigated with meta-analyses. The results produced can help international organizations, governments and policy-makers to better understand the broader implications and consequences of ageing and thus make informed decisions.

  10. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous, the growing impact of large scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvements in hardware. Equally impressive is the performance gain due to new algorithms, as I will illustrate using some recently developed examples. At the same time, modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond; a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large scale computing machines; tools to analyze the large volume of data obtained from such simulations; and, as an emerging field, provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures.
Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational physics.

  11. A practical approach to solving large drive harmonic problems at the design stage

    SciTech Connect

    Grainger, L.G. )

    1990-11-01

    This paper reports on large variable-frequency drives, which are ideally suited to many petrochemical applications. The major concern often expressed during the early stages of a project that could utilize such a drive is that harmonic problems could arise. Numerous papers have been written concerning the sources of these harmonics and how to locate them in an existing system. This paper attempts to show how to eliminate harmonic problems early in the design phase of a project and to reassure the uninitiated that the task is not insurmountable.

  12. Large Scale Coordination of Small Scale Structures

    NASA Astrophysics Data System (ADS)

    Kobelski, Adam; Tarr, Lucas A.; Jaeggli, Sarah A.; Savage, Sabrina

    2017-08-01

    Transient brightenings are ubiquitous features of the solar atmosphere across many length and energy scales, the most energetic of which manifest as large-class solar flares. Often, transient brightenings originate in regions of strong magnetic activity and create strong observable enhancements across wavelengths from X-ray to radio, with notable dynamics on timescales of seconds to hours. The coronal aspects of these brightenings have often been studied by way of EUV and X-ray imaging and spectra. These events are likely driven by photospheric activity (such as flux emergence), with the coronal brightenings originating largely from chromospheric ablation (evaporation). Until recently, chromospheric and transition region observations of these events have been limited. However, new observational capabilities have become available which significantly enhance our ability to understand the bi-directional flow of energy through the chromosphere between the photosphere and the corona. We have recently obtained a unique data set with which to study this flow of energy through the chromosphere via the Interface Region Imaging Spectrograph (IRIS), Hinode EUV Imaging Spectrometer (EIS), Hinode X-Ray Telescope (XRT), Hinode Solar Optical Telescope (SOT), Solar Dynamics Observatory (SDO) Atmospheric Imaging Assembly (AIA), SDO Helioseismic and Magnetic Imager (HMI), Nuclear Spectroscopic Telescope Array (NuSTAR), Atacama Large Millimeter Array (ALMA), and Interferometric BIdimensional Spectropolarimeter (IBIS) at the Dunn Solar Telescope (DST). This data set targets a small active area near disk center which was tracked simultaneously for approximately four hours. Within this region, many transient brightenings were detected through multiple layers of the solar atmosphere.
In this study, we combine the imaging data and use the spectra from EIS and IRIS to track flows from the photosphere (HMI, SOT) through the chromosphere and transition region (AIA, IBIS, IRIS, ALMA) into the corona.

  13. Large scale cluster computing workshop

    SciTech Connect

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors, used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes), and by implication to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; and (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  14. Scaling solid density high-harmonic generation into the mid-infrared range

    NASA Astrophysics Data System (ADS)

    Nguyen, Tam; Dollar, Franklin

    2016-10-01

    High-order harmonic generation (HOHG) has been observed to be a bright source of coherent x-rays in laser-solid interactions using high intensity, femtosecond laser pulses. The majority of high-intensity experiments investigating HOHG have used 800 nm wavelength lasers. It is known that the density profile of the solid affects absorption of the initial pulse and the intensity of the re-radiated harmonics. We investigate how conversion efficiency and maximum harmonic order scale into the mid-infrared range. Results and particle-in-cell simulations over a range of pre-plasma scale lengths will be shown.

  15. Large molecule specific assay operation: recommendation for best practices and harmonization from the global bioanalysis consortium harmonization team.

    PubMed

    Stevenson, Lauren; Kelley, Marian; Gorovits, Boris; Kingsley, Clare; Myler, Heather; Osterlund, Karolina; Muruganandam, Arumugam; Minamide, Yoshiyuki; Dominguez, Mario

    2014-01-01

    The L2 Global Harmonization Team on large molecule specific assay operation for protein bioanalysis in support of pharmacokinetics focused on the following topics: setting up a balanced validation design, specificity testing, selectivity testing, dilutional linearity, hook effect, parallelism, and testing of robustness and ruggedness. The team additionally considered the impact of lipemia, hemolysis, and the presence of endogenous analyte on selectivity assessments as well as the occurrence of hook effect in study samples when no hook effect had been observed during pre-study validation.

  16. Cosmology with Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Ho, Shirley; Cuesta, A.; Ross, A.; Seo, H.; DePutter, R.; Padmanabhan, N.; White, M.; Myers, A.; Bovy, J.; Blanton, M.; Hernandez, C.; Mena, O.; Percival, W.; Prada, F.; Ross, N. P.; Saito, S.; Schneider, D.; Skibba, R.; Smith, K.; Slosar, A.; Strauss, M.; Verde, L.; Weinberg, D.; Bachall, N.; Brinkmann, J.; da Costa, L. A.

    2012-01-01

    The Sloan Digital Sky Survey I-III surveyed 14,000 square degrees and delivered over a trillion pixels of imaging data. I present cosmological results from this unprecedented data set, which contains over a million galaxies distributed between redshifts of 0.45 and 0.70. With such a large data set, high-precision cosmological constraints can be obtained given careful control and understanding of observational systematics. I present a novel treatment of observational systematics and its application to the clustering signals from the data set. I will present cosmological constraints on the dark components of the Universe and the tightest constraints to date on the non-Gaussianity of the early Universe, utilizing large-scale structure.
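
    The clustering signal behind constraints of this kind is commonly measured with a two-point correlation function ξ(r). The sketch below illustrates the idea on an invented 1D periodic toy "catalogue" with the simple natural estimator DD/RR − 1; it is not the survey's actual pipeline, which uses more robust estimators and 3D data.

```python
# Two-point correlation on a unit 1D periodic ring. For uniform points on
# the ring, pair separations are uniform on [0, 0.5], so the expected pair
# fraction per bin is analytic and the natural estimator is DD/RR - 1.

def xi_natural(points, nbins=50):
    """Return xi in nbins equal separation bins covering (0, 0.5]."""
    n = len(points)
    dr = 0.5 / nbins
    dd = [0] * nbins
    npairs = 0
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(points[i] - points[j])
            d = min(d, 1.0 - d)                 # periodic separation
            dd[min(int(d / dr), nbins - 1)] += 1
            npairs += 1
    expected = dr / 0.5                          # uniform pair fraction/bin
    return [dd[k] / npairs / expected - 1.0 for k in range(nbins)]

# Toy catalogue: 10 tight clusters of 10 points each, giving a strong
# pair excess (xi > 0) at the smallest separations.
points = [c / 10.0 + k * 0.0005 for c in range(10) for k in range(10)]
xi = xi_natural(points)
```

For this clustered toy set the first bin shows a clear excess over the uniform expectation, which is the qualitative signature the survey measures at far larger scales.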

  17. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described, starting with a 3 × 3 device using conventional metal foils and epoxy, to a 10-across all-metal device with nanolaminate mirror surfaces.

  18. Methane emissions on large scales

    NASA Astrophysics Data System (ADS)

    Beswick, K. M.; Simpson, T. W.; Fowler, D.; Choularton, T. W.; Gallagher, M. W.; Hargreaves, K. J.; Sutton, M. A.; Kaye, A.

    Two separate studies have been undertaken to improve estimates of methane emissions on a landscape scale. The first study took place over a palsa mire in northern Finland in August 1995. A tethered balloon and a tunable diode laser were used to measure profiles of methane in the nocturnal boundary layer. Using a simple box method or the flux-gradient technique, fluxes ranging from 18.5 to 658 μmol m⁻² h⁻¹ were calculated. The large fluxes may be caused by advection of methane pockets across the measurement site, reflecting the heterogeneous nature of methane source strengths in the surrounding area. Under suitable conditions, comparison with nearby ground-based eddy-correlation results suggested that the balloon techniques could successfully measure fluxes on field scales. The second study was carried out by the NERC Scientific Services Atmospheric Research Airborne Support Facility using the Hercules C130 operated by the United Kingdom Meteorological Research Flight. A flight path around the northern coastline of Britain under steady west-east wind conditions enabled the measurement of methane concentrations up- and down-wind of northern Britain. Using a simple one-dimensional, constant-source diffusion model, the difference between the upwind and downwind concentrations was accounted for by methane emission from the surface. The contribution to methane emissions from livestock was also modelled. Modelled non-agricultural methane emissions decreased with increasing latitude, with fluxes in northern England being a factor of 4 greater than those in northern Scotland. Since the only major methane source in northern Scotland was peat bogs, these results indicated that emissions over northern England were dominated by anthropogenic sources. Emissions from livestock accounted for 12% of the total flux over northern England, decreasing to 4% in southern Scotland and becoming negligible in northern Scotland. The total methane flux over northern Scotland was consistent

  19. Measurement of the effects of large scale anisotropy on the small scales of turbulence

    NASA Astrophysics Data System (ADS)

    Wijesinghe, Susantha W. A.

    This thesis reports measurements of anisotropy in a laboratory turbulent flow generated by two oscillating grids. It has recently been identified that SO(3) decomposition of Eulerian structure functions provides a powerful tool for analyzing anisotropy in turbulence. From 3D particle tracks obtained with stereoscopic high-speed video, we measure the longitudinal Eulerian structure functions, for which SO(3) decomposition becomes a spherical harmonic decomposition. This method allows us to measure the anisotropy in different sectors, specified by j and m of the spherical harmonics Y_jm(θ, φ). In order to acquire the huge data sets required for the full 3D measurement of anisotropy as a function of scale, we upgraded the optical tracking system to four high-speed cameras with a new real-time image compression system. We achieved compression ratios of 154-614 depending on the number of particles appearing in an image. Anisotropy measurements are performed at three different detection volumes in the tank for two grid frequencies, where Reynolds numbers vary from Re_λ = 132 to Re_λ = 277. Increasing j sectors show faster decay of anisotropy as scale decreases, consistent with the idea that the small scales should become isotropic at very high Reynolds number. Measured anisotropic scaling exponents are also consistent with previous studies performed with numerical simulations and hot-wire anemometry. By conditioning the different j sectors on the instantaneous large-scale velocity, we are able to quantify the dependence of the anisotropy on the state of the large scales. The isotropic sector shows a strong dependence on the state of the large scales. For the isotropic sector, this strong dependence is the same at all length scales, showing that the small scales do not become independent of the large scales, which confirms previous work by Blum et al. However, for a given state of the large scales, the anisotropic sector diminishes toward smaller length scales.
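
    The spherical harmonic decomposition referred to above amounts to projecting an angular function onto individual (j, m) sectors. A minimal sketch under toy assumptions: only real m = 0 harmonics are used, and the test function is an invented pure j = 2 shape, not measurement data.

```python
# Project an angular function onto (j, m) sectors by quadrature over the
# sphere. f = 3cos^2(theta) - 1 is chosen so that all of its "signal" sits
# in the j=2, m=0 sector and none in the isotropic j=0 sector.

import math

def project(f, Y, n=200):
    """Integral of f*Y over the sphere via midpoint quadrature."""
    total = 0.0
    dth = math.pi / n
    dph = 2.0 * math.pi / n
    for i in range(n):
        th = (i + 0.5) * dth
        w = math.sin(th) * dth * dph          # spherical area element
        for k in range(n):
            ph = (k + 0.5) * dph
            total += f(th, ph) * Y(th, ph) * w
    return total

Y00 = lambda th, ph: 0.5 / math.sqrt(math.pi)
Y20 = lambda th, ph: math.sqrt(5.0 / (16.0 * math.pi)) * (3.0 * math.cos(th) ** 2 - 1.0)
f   = lambda th, ph: 3.0 * math.cos(th) ** 2 - 1.0

c00 = project(f, Y00)   # isotropic sector: vanishes for this f
c20 = project(f, Y20)   # j=2 sector: carries all the anisotropy
```

In the thesis's setting the same projection is applied to measured structure functions, and the decay of the higher-j coefficients with scale quantifies the return to isotropy.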

  20. Large-Scale Sequence Comparison.

    PubMed

    Lal, Devi; Verma, Mansi

    2017-01-01

    There are millions of sequences deposited in genomic databases, and it is an important task to categorize them according to their structural and functional roles. Sequence comparison is a prerequisite for proper categorization of both DNA and protein sequences, and helps in assigning a putative or hypothetical structure and function to a given sequence. There are various methods available for comparing sequences; alignment is the first and foremost, both for sequences with a small number of base pairs and for large-scale genome comparison. Various tools are available for performing pairwise large sequence comparison. The best known tools either perform global alignment or generate local alignments between the two sequences. In this chapter we first provide basic information regarding sequence comparison. This is followed by a description of the PAM and BLOSUM matrices that form the basis of sequence comparison. We also give a practical overview of currently available methods such as BLAST and FASTA, followed by a description and overview of tools available for genome comparison, including LAGAN, MUMmer, BLASTZ, and AVID.

  1. Scale-invariant pattern recognition using a combined Mellin radial harmonic function and the bidimensional empirical mode decomposition.

    PubMed

    Yin, Qingbo; Shen, Liran; Kim, Jong-Nam; Jeong, Yong-Jae

    2009-09-14

    A novel scale and shift invariant pattern recognition method is proposed to improve the discrimination capability and noise robustness by combining the bidimensional empirical mode decomposition with the Mellin radial harmonic decomposition. The flatness of its peak intensity response versus scale change is improved. This property is important, since we can detect a large range of scaled patterns (from 0.2 to 1) using a global threshold. Within this range, the correlation peak intensity is relatively uniform with a variance below 20%. This proposed filter has been tested experimentally to confirm the result from numerical simulation for cases both with and without input white noise.

  2. Modelling soil erosion at European scale: towards harmonization and reproducibility

    NASA Astrophysics Data System (ADS)

    Bosco, C.; de Rigo, D.; Dewitte, O.; Poesen, J.; Panagos, P.

    2014-04-01

    Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to a decline in organic matter and nutrient contents, breakdown of soil structure and reduction of the water-holding capacity. Measuring soil loss across the whole landscape is impractical, and thus research is needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite these efforts, the predictive value of existing models is still limited, especially at regional and continental scale. A new approach for modelling soil erosion at large spatial scale is proposed here. It is based on the joint use of models with low data demands and innovative techniques for better estimating model inputs. The proposed modelling architecture has at its basis the semantic array programming paradigm and a strong effort towards computational reproducibility. An extended version of the Revised Universal Soil Loss Equation (RUSLE) has been implemented, merging different empirical rainfall-erosivity equations within a climatic ensemble model and adding a new factor for better consideration of soil stoniness within the model. Pan-European soil erosion rates by water have been estimated through the use of publicly available datasets and locally reliable empirical relationships. The accuracy of the results is corroborated by a visual plausibility check (63% of a random sample of grid cells are accurate, 83% at least moderately accurate, bootstrap p ≤ 0.05). A comparison with country-level statistics of pre-existing European maps of soil erosion by water is also provided.
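
    The RUSLE model the abstract extends is, at its core, a product of five factors, A = R · K · LS · C · P. A minimal sketch with invented placeholder values for a single grid cell (not the paper's calibrated inputs, which come from the ensemble rainfall-erosivity model and pan-European datasets):

```python
# Multiplicative RUSLE structure: mean annual soil loss from five factors.
# All numeric values below are hypothetical for one grid cell.

def rusle(R, K, LS, C, P):
    """Mean annual soil loss A (t ha^-1 yr^-1).

    R  : rainfall-runoff erosivity (MJ mm ha^-1 h^-1 yr^-1)
    K  : soil erodibility (t ha h ha^-1 MJ^-1 mm^-1)
    LS : slope length and steepness factor (dimensionless)
    C  : cover-management factor (dimensionless, 0-1)
    P  : support practice factor (dimensionless, 0-1)
    """
    return R * K * LS * C * P

# Example: moderate erosivity, sparse cover, no support practice.
A = rusle(R=700.0, K=0.03, LS=1.2, C=0.25, P=1.0)
```

In the paper's architecture, each factor would itself be an array over the pan-European grid, with the erosivity R taken from the ensemble of empirical rainfall-erosivity equations.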

  3. Large-scale PACS implementation.

    PubMed

    Carrino, J A; Unkel, P J; Miller, I D; Bowser, C L; Freckleton, M W; Johnson, T G

    1998-08-01

    The transition to filmless radiology is a much more formidable task than making the request for proposal to purchase a Picture Archiving and Communications System (PACS). The Department of Defense and the Veterans Administration have been pioneers in the transformation of medical diagnostic imaging to the electronic environment. Many civilian sites are expected to implement large-scale PACS in the next five to ten years. This presentation will relate the empirical insights gleaned at our institution from a large-scale PACS implementation. Our PACS integration was introduced into a fully operational department (not a new hospital) in which work flow had to continue with minimal impact. Impediments to user acceptance will be addressed. The critical components of this enormous task will be discussed. The topics covered during this session will include issues such as phased implementation, DICOM (digital imaging and communications in medicine) standard-based interaction of devices, hospital information system (HIS)/radiology information system (RIS) interfaces, user approval, networking, workstation deployment and backup procedures. The presentation will make specific suggestions regarding the implementation team, operating instructions, quality control (QC), training and education. The concept of identifying key functional areas is relevant to transitioning the facility to be entirely on line. Special attention must be paid to specific functional areas such as the operating rooms and trauma rooms, where the clinical requirements may not match the PACS capabilities. The printing of films may be necessary in certain circumstances. The integration of teleradiology and remote clinics into a PACS is a salient topic with respect to the overall role of the radiologists providing rapid consultation. A Web-based server allows a clinician to review images and reports on a desktop (personal) computer and thus reduces the number of dedicated PACS review workstations.
This session

  4. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  5. Large Scale Homing in Honeybees

    PubMed Central

    Pahl, Mario; Zhu, Hong; Tautz, Jürgen; Zhang, Shaowu

    2011-01-01

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama. PMID:21602920

  6. Large Scale Magnetostrictive Valve Actuator

    NASA Technical Reports Server (NTRS)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control, and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware and testing is complete. This paper will discuss the potential applications of the technology; give an overview of the as-built actuator design; describe problems that were uncovered during development testing; review test data and evaluate weaknesses of the design; and discuss areas for improvement for future work. This actuator holds promise as a low-power, high-load, proportionally controlled actuator for valves requiring loads of 440 to 1500 newtons.

  7. Spatial position scaling on harmonic generation from He atom in bowtie-shaped nanostructure

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Feng, Liqiang; Li, Wenliang; Li, Yi

    2017-09-01

    Spatial position scaling on the harmonic generation from He in the bowtie-shaped nanostructure has been theoretically investigated. It shows that (i) due to the surface plasmon polaritons in the nanostructure, the laser intensity can be enhanced and becomes nonhomogeneous in space. As a result, the extension of the harmonic cutoff can be achieved when He is away from the gap center of the nanostructure. However, due to the limit of the gap size, there is a maximum harmonic cutoff extension for a given nanostructure. (ii) Due to the asymmetric enhancement of the laser intensity in space, the extended harmonics are mainly from E(t) > 0 a.u. and E(t) < 0 a.u. when He is injected into the positive and the negative positions, respectively. Moreover, the intensities and the cutoffs of the extended harmonics can be controlled by changing the pulse duration or by adding the second controlling pulse. Finally, by properly superposing the harmonics from the two-color field, four single attosecond pulses with durations of 30 as can be produced.

  8. Large-Scale Pattern Discovery in Music

    NASA Astrophysics Data System (ADS)

    Bertin-Mahieux, Thierry

    This work focuses on extracting patterns in musical data from very large collections. The problem is split in two parts. First, we build such a large collection, the Million Song Dataset, to provide researchers access to commercial-size datasets. Second, we use this collection to study cover song recognition which involves finding harmonic patterns from audio features. Regarding the Million Song Dataset, we detail how we built the original collection from an online API, and how we encouraged other organizations to participate in the project. The result is the largest research dataset with heterogeneous sources of data available to music technology researchers. We demonstrate some of its potential and discuss the impact it already has on the field. On cover song recognition, we must revisit the existing literature since there are no publicly available results on a dataset of more than a few thousand entries. We present two solutions to tackle the problem, one using a hashing method, and one using a higher-level feature computed from the chromagram (dubbed the 2DFTM). We further investigate the 2DFTM since it has potential to be a relevant representation for any task involving audio harmonic content. Finally, we discuss the future of the dataset and the hope of seeing more work making use of the different sources of data that are linked in the Million Song Dataset. Regarding cover songs, we explain how this might be a first step towards defining a harmonic manifold of music, a space where harmonic similarities between songs would be more apparent.
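
    At its core, the 2DFTM feature described above is the magnitude of a 2D Fourier transform of a chroma patch. A minimal NumPy sketch follows; the 12x75 patch size and the random stand-in data are illustrative assumptions, not the thesis' exact pipeline.

```python
import numpy as np

def two_dftm(chroma_patch):
    """Magnitude of the 2D Fourier transform of a chroma patch.

    Discarding the phase makes the feature invariant to circular
    shifts along both axes: shifts in time (alignment between
    versions) and in pitch (transposition), which is what makes it
    attractive for cover song recognition.
    """
    return np.abs(np.fft.fft2(chroma_patch))

# Illustrative 12-semitone x 75-frame chroma patch (random stand-in).
rng = np.random.default_rng(0)
patch = rng.random((12, 75))

feat = two_dftm(patch)
# Transposing by 3 semitones is a circular shift along the pitch axis;
# the 2DFTM is unchanged.
transposed = np.roll(patch, 3, axis=0)
assert np.allclose(two_dftm(transposed), feat)
```

    The shift-invariance asserted at the end is exactly the property that lets the feature compare a cover version against the original without first aligning or transposing the two songs.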

  9. Studies of χ(2)/χ(3) Tensors in Submicron-Scaled Bio-Tissues by Polarization Harmonics Optical Microscopy

    PubMed Central

    Chu, Shi-Wei; Chen, Szu-Yu; Chern, Gia-Wei; Tsai, Tsung-Han; Chen, Yung-Chih; Lin, Bai-Ling; Sun, Chi-Kuang

    2004-01-01

    Optical second- and third-harmonic generations have attracted a lot of attention in the biomedical imaging research field recently due to their intrinsic sectioning ability and noninvasiveness. Combined with near-infrared excitation sources, their deep-penetration ability makes these imaging modalities suitable for tissue characterization. In this article, we demonstrate a polarization harmonics optical microscopy, or P-HOM, to study the nonlinear optical anisotropy of the nanometer-scaled myosin and actin filaments inside myofibrils. By using tight focusing we can avoid the phase-matching condition due to micron-scaled, high-order structures in skeletal muscle fibers, and obtain the submicron-scaled polarization dependencies of second/third-harmonic generation intensities on the inclination angle between the long axes of the filaments and the polarization direction of the linear polarized fundamental excitation laser light. From these dependencies, detailed information on the tensor elements of the second/third-order nonlinear susceptibilities contributed from the myosin/actin filaments inside myofibrils can thus be analyzed and obtained, reflecting the detailed arrangements and structures of the constructing biomolecules. By acquiring a whole, nonlinearly sectioned image with a submicron spatial resolution, we can also compare the polarization dependency and calculate the nonlinear susceptibilities over a large area of the tissue at the same time—which not only provides statistical information but will be especially useful with complex specimen geometry. PMID:15189888

  10. Studies of chi(2)/chi(3) tensors in submicron-scaled bio-tissues by polarization harmonics optical microscopy.

    PubMed

    Chu, Shi-Wei; Chen, Szu-Yu; Chern, Gia-Wei; Tsai, Tsung-Han; Chen, Yung-Chih; Lin, Bai-Ling; Sun, Chi-Kuang

    2004-06-01

    Optical second- and third-harmonic generations have attracted a lot of attention in the biomedical imaging research field recently due to their intrinsic sectioning ability and noninvasiveness. Combined with near-infrared excitation sources, their deep-penetration ability makes these imaging modalities suitable for tissue characterization. In this article, we demonstrate a polarization harmonics optical microscopy, or P-HOM, to study the nonlinear optical anisotropy of the nanometer-scaled myosin and actin filaments inside myofibrils. By using tight focusing we can avoid the phase-matching condition due to micron-scaled, high-order structures in skeletal muscle fibers, and obtain the submicron-scaled polarization dependencies of second/third-harmonic generation intensities on the inclination angle between the long axes of the filaments and the polarization direction of the linear polarized fundamental excitation laser light. From these dependencies, detailed information on the tensor elements of the second/third-order nonlinear susceptibilities contributed from the myosin/actin filaments inside myofibrils can thus be analyzed and obtained, reflecting the detailed arrangements and structures of the constructing biomolecules. By acquiring a whole, nonlinearly sectioned image with a submicron spatial resolution, we can also compare the polarization dependency and calculate the nonlinear susceptibilities over a large area of the tissue at the same time-which not only provides statistical information but will be especially useful with complex specimen geometry.

  11. Life Cycle Greenhouse Gas Emissions of Utility-Scale Wind Power: Systematic Review and Harmonization

    SciTech Connect

    Dolan, S. L.; Heath, G. A.

    2012-04-01

    A systematic review and harmonization of life cycle assessment (LCA) literature of utility-scale wind power systems was performed to determine the causes of and, where possible, reduce variability in estimates of life cycle greenhouse gas (GHG) emissions. Screening of approximately 240 LCAs of onshore and offshore systems yielded 72 references meeting minimum thresholds for quality, transparency, and relevance. Of those, 49 references provided 126 estimates of life cycle GHG emissions. Published estimates ranged from 1.7 to 81 grams CO{sub 2}-equivalent per kilowatt-hour (g CO{sub 2}-eq/kWh), with median and interquartile range (IQR) both at 12 g CO{sub 2}-eq/kWh. After adjusting the published estimates to use consistent gross system boundaries and values for several important system parameters, the total range was reduced by 47% to 3.0 to 45 g CO{sub 2}-eq/kWh and the IQR was reduced by 14% to 10 g CO{sub 2}-eq/kWh, while the median remained relatively constant (11 g CO{sub 2}-eq/kWh). Harmonization of capacity factor resulted in the largest reduction in variability in life cycle GHG emission estimates. This study concludes that the large number of previously published life cycle GHG emission estimates of wind power systems and their tight distribution suggest that new process-based LCAs of similar wind turbine technologies are unlikely to differ greatly. However, additional consequential LCAs would enhance the understanding of true life cycle GHG emissions of wind power (e.g., changes to other generators operations when wind electricity is added to the grid), although even those are unlikely to fundamentally change the comparison of wind to other electricity generation sources.
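
    The capacity-factor harmonization step can be illustrated with a toy calculation. The numbers below are invented, and the simple inverse-proportionality adjustment is a simplification of the study's actual procedure: because most wind life cycle emissions are fixed per turbine (manufacturing, installation), per-kWh emissions scale roughly inversely with lifetime generation and hence with capacity factor.

```python
import numpy as np

# Hypothetical published estimates (g CO2-eq/kWh) and the capacity
# factor (CF) each study assumed -- illustrative numbers only.
published = np.array([8.0, 12.0, 20.0, 45.0, 30.0])
cf_study  = np.array([0.40, 0.35, 0.25, 0.15, 0.20])

CF_HARMONIZED = 0.35  # common reference capacity factor

# Per-kWh emissions ~ 1/CF, so rescale each estimate to the reference CF.
harmonized = published * cf_study / CF_HARMONIZED

print("median:", np.median(published), "->", np.median(harmonized))
print("range: ", np.ptp(published), "->", np.ptp(harmonized))
```

    As in the review, the adjustment leaves studies that already assumed the reference capacity factor untouched and pulls the outliers inward, shrinking the total range of estimates.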

  12. Large-Scale periodic solar velocities: An observational study

    NASA Technical Reports Server (NTRS)

    Dittmer, P. H.

    1977-01-01

    Observations of large-scale solar velocities were made using the mean field telescope and Babcock magnetograph of the Stanford Solar Observatory. Observations were made in the magnetically insensitive ion line at 5124 A, with light from the center (limb) of the disk right (left) circularly polarized, so that the magnetograph measures the difference in wavelength between center and limb. Computer calculations are made of the wavelength difference produced by global pulsations for spherical harmonics up to second order and of the signal produced by displacing the solar image relative to polarizing optics or diffraction grating.

  13. Method of Harmonic Balance in Full-Scale-Model Tests of Electrical Devices

    NASA Astrophysics Data System (ADS)

    Gorbatenko, N. I.; Lankin, A. M.; Lankin, M. V.

    2017-01-01

    Methods for determining the weber-ampere characteristics of electrical devices are suggested: one based on the solution of the direct problem of harmonic balance, the other on the solution of the inverse problem of harmonic balance by the method of full-scale-model tests. The mathematical model of the device is constructed using the describing-function and simplex optimization methods. The presented results of experimental applications of the method show its efficiency. An advantage of the method is that it can be applied to nondestructive inspection of electrical devices during their production and operation.

  14. Multiple harmonic frequencies resonant cavity design and half-scale prototype measurements for a fast kicker

    NASA Astrophysics Data System (ADS)

    Huang, Yulu; Wang, Haipeng; Rimmer, Robert A.; Wang, Shaoheng; Guo, Jiquan

    2016-12-01

    Quarter wavelength resonator (QWR) based deflecting cavities with the capability of supporting multiple odd-harmonic modes have been developed for an ultrafast periodic kicker system in the proposed Jefferson Lab Electron Ion Collider (JLEIC, formerly MEIC). Previous work on the kicking pulse synthesis and the transverse beam dynamics tracking simulations show that a flat-top kicking pulse can be generated with minimal emittance growth during injection and circulation of the cooling electron bunches. This flat-top kicking pulse can be obtained when a DC component and 10 harmonic modes with appropriate amplitude and phase are combined together. To support 10 such harmonic modes, four QWR cavities are used with 5, 3, 1, and 1 modes, respectively. In the multiple-mode cavities, several slightly tapered segments of the inner conductor are introduced to tune the higher order deflecting modes to be harmonic, and stub tuners are used to fine tune each frequency to compensate for potential errors. In this paper, we summarize the electromagnetic design of the five-mode cavity, including the geometry optimization to get high transverse shunt impedance, the frequency tuning and sensitivity analysis, and the single loop coupler design for coupling to all of the harmonic modes. In particular we report on the design and fabrication of a half-scale copper prototype of this proof-of-principle five-odd-mode cavity, as well as the rf bench measurements. Finally, we demonstrate mode superposition in this cavity experimentally, which illustrates the kicking pulse generation concept.
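
    The pulse-synthesis concept, a DC component plus 10 harmonic modes combining into a kick for one bunch out of many, can be illustrated with an equal-amplitude Fourier sum (a Dirichlet kernel). The equal weights and the single fundamental frequency below are a deliberate simplification, not the optimized amplitudes and phases of the actual kicker design.

```python
import numpy as np

N_HARMONICS = 10          # harmonic modes on top of the DC component
M = 2 * N_HARMONICS + 1   # DC + 10 cosines selects 1 bunch out of 21

def kick(t, f0=1.0):
    """Equal-amplitude sum of a DC term and N_HARMONICS cosine
    harmonics of f0 (a Dirichlet kernel), normalized so the kick is
    1 at t = n/f0 and 0 at the other M - 1 bunch arrival times."""
    k = np.arange(1, N_HARMONICS + 1)
    return (1.0 + 2.0 * np.sum(np.cos(2 * np.pi * f0 * np.outer(t, k)),
                               axis=1)) / M

# Bunch arrival times within one period of the fundamental:
t = np.arange(M) / M
pulse = kick(t)
print(np.round(pulse, 9))  # only bunch 0 receives a kick
```

    With equal amplitudes the sum vanishes exactly at the 20 intermediate bunch times; the cavity design adjusts the mode amplitudes and phases further to flatten the top of the kicking pulse against timing jitter.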

  15. Modelling soil erosion at European scale: towards harmonization and reproducibility

    NASA Astrophysics Data System (ADS)

    Bosco, C.; de Rigo, D.; Dewitte, O.; Poesen, J.; Panagos, P.

    2015-02-01

    Soil erosion by water is one of the most widespread forms of soil degradation. The loss of soil as a result of erosion can lead to decline in organic matter and nutrient contents, breakdown of soil structure and reduction of the water-holding capacity. Measuring soil loss across the whole landscape is impractical and thus research is needed to improve methods of estimating soil erosion with computational modelling, upon which integrated assessment and mitigation strategies may be based. Despite these efforts, the predictive value of existing models is still limited, especially at regional and continental scale, because systematic knowledge of local climatological and soil parameters is often unavailable. A new approach for modelling soil erosion at regional scale is proposed here. It is based on the joint use of low-data-demanding models and innovative techniques for better estimating model inputs. The proposed modelling architecture has at its basis the semantic array programming paradigm and a strong effort towards computational reproducibility. An extended version of the Revised Universal Soil Loss Equation (RUSLE) has been implemented merging different empirical rainfall-erosivity equations within a climatic ensemble model and adding a new factor for a better consideration of soil stoniness within the model. Pan-European soil erosion rates by water have been estimated through the use of publicly available data sets and locally reliable empirical relationships. The accuracy of the results is corroborated by a visual plausibility check (63% of a random sample of grid cells are accurate, 83% at least moderately accurate, bootstrap p ≤ 0.05). A comparison with country-level statistics of pre-existing European soil erosion maps is also provided.
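
    The RUSLE core on which the extended model is built is a simple multiplicative equation. The sketch below implements the standard form only (not the paper's ensemble of erosivity equations or its stoniness factor), with illustrative factor values.

```python
def rusle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss A (t ha^-1 yr^-1) from the standard
    Revised Universal Soil Loss Equation:

        A = R * K * LS * C * P

    R  rainfall-runoff erosivity (MJ mm ha^-1 h^-1 yr^-1)
    K  soil erodibility          (t h MJ^-1 mm^-1)
    LS slope length and steepness factor (dimensionless)
    C  cover-management factor           (dimensionless)
    P  support practice factor           (dimensionless)
    """
    return R * K * LS * C * P

# Illustrative mid-range values for a cultivated hillslope cell.
A = rusle_soil_loss(R=700.0, K=0.03, LS=1.2, C=0.2, P=1.0)
print(round(A, 2), "t/ha/yr")
```

    Because the equation is purely multiplicative, the uncertainty in any one factor (here, especially the rainfall erosivity R) propagates directly into the erosion estimate, which is why the paper wraps R in a climatic ensemble.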

  16. Large-scale Digitoxin Intoxication

    PubMed Central

    Lely, A. H.; Van Enter, C. H. J.

    1970-01-01

    Because of an error in the manufacture of digoxin tablets a large number of patients took tablets that contained 0·20 mg. of digitoxin and 0·05 mg. of digoxin instead of the prescribed 0·25 mg. of digoxin. The symptoms are described of 179 patients who took these tablets and suffered from digitalis intoxication. Of these patients, 125 had taken the faultily composed tablets for more than three weeks. In 48 patients 105 separate disturbances in rhythm or in atrioventricular conduction were observed on the electrocardiogram. Extreme fatigue and serious eye conditions were observed in 95% of the patients. Twelve patients had a transient psychosis. Extensive ophthalmological observations indicated that the visual complaints were most probably caused by a transient retrobulbar neuritis. PMID:5273245

  17. Resonant plankton patchiness induced by large-scale turbulent flow

    NASA Astrophysics Data System (ADS)

    McKiver, William J.; Neufeld, Zoltán

    2011-01-01

    Here we study how large-scale variability of oceanic plankton is affected by mesoscale turbulence in a spatially heterogeneous environment. We consider a phytoplankton-zooplankton (PZ) ecosystem model, with different types of zooplankton grazing functions, coupled to a turbulent flow described by the two-dimensional Navier-Stokes equations, representing large-scale horizontal transport in the ocean. We characterize the system using a dimensionless parameter, γ=TB/TF, which is the ratio of the ecosystem biological time scale TB and the flow time scale TF. Through numerical simulations, we examine how the PZ system depends on the time-scale ratio γ and find that the variance of both species changes significantly, with maximum phytoplankton variability at intermediate mixing rates. Through an analysis of the linearized population dynamics, we find an analytical solution based on the forced harmonic oscillator, which explains the behavior of the ecosystem, where there is resonance between the advection and the ecosystem predator-prey dynamics when the forcing time scales match the ecosystem time scales. We also examine the dependence of the power spectra on γ and find that the resonance behavior leads to different spectral slopes for phytoplankton and zooplankton, in agreement with observations.

  18. Resonant plankton patchiness induced by large-scale turbulent flow.

    PubMed

    McKiver, William J; Neufeld, Zoltán

    2011-01-01

    Here we study how large-scale variability of oceanic plankton is affected by mesoscale turbulence in a spatially heterogeneous environment. We consider a phytoplankton-zooplankton (PZ) ecosystem model, with different types of zooplankton grazing functions, coupled to a turbulent flow described by the two-dimensional Navier-Stokes equations, representing large-scale horizontal transport in the ocean. We characterize the system using a dimensionless parameter, γ=T(B)/T(F), which is the ratio of the ecosystem biological time scale T(B) and the flow time scale T(F). Through numerical simulations, we examine how the PZ system depends on the time-scale ratio γ and find that the variance of both species changes significantly, with maximum phytoplankton variability at intermediate mixing rates. Through an analysis of the linearized population dynamics, we find an analytical solution based on the forced harmonic oscillator, which explains the behavior of the ecosystem, where there is resonance between the advection and the ecosystem predator-prey dynamics when the forcing time scales match the ecosystem time scales. We also examine the dependence of the power spectra on γ and find that the resonance behavior leads to different spectral slopes for phytoplankton and zooplankton, in agreement with observations.
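
    The forced-harmonic-oscillator analogy invoked above predicts maximum response when the forcing time scale matches the ecosystem time scale. A minimal sketch of that resonance, using the textbook steady-state amplitude of a damped driven oscillator with illustrative parameter values (not the PZ model itself):

```python
import numpy as np

def amplitude(omega, omega0=1.0, gamma=0.2, F0=1.0):
    """Steady-state amplitude of the damped driven harmonic oscillator
    x'' + gamma*x' + omega0^2 * x = F0 * cos(omega*t)."""
    return F0 / np.sqrt((omega0**2 - omega**2)**2 + (gamma * omega)**2)

omega = np.linspace(0.1, 3.0, 2901)   # forcing (advection) frequencies
A = amplitude(omega)
omega_peak = omega[np.argmax(A)]
print(omega_peak)  # close to omega0: resonance when forcing ~ natural frequency
```

    The peak near omega0 is the analogue of the maximum phytoplankton variability found at intermediate mixing rates, where the advective forcing resonates with the predator-prey dynamics.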

  19. Scaling of the dynamics of flexible Lennard-Jones chains: Effects of harmonic bonds.

    PubMed

    Veldhorst, Arno A; Dyre, Jeppe C; Schrøder, Thomas B

    2015-11-21

    The previous paper [A. A. Veldhorst et al., J. Chem. Phys. 141, 054904 (2014)] demonstrated that the isomorph theory explains the scaling properties of a liquid of flexible chains consisting of ten Lennard-Jones particles connected by rigid bonds. We here investigate the same model with harmonic bonds. The introduction of harmonic bonds almost completely destroys the correlations in the equilibrium fluctuations of the potential energy and the virial. According to the isomorph theory, if these correlations are strong a system has isomorphs, curves in the phase diagram along which structure, dynamics, and the excess entropy are invariant. The Lennard-Jones chain liquid with harmonic bonds does have curves in the phase diagram along which the structure and dynamics are invariant. The excess entropy is not invariant on these curves, which we refer to as "pseudoisomorphs." In particular, this means that Rosenfeld's excess-entropy scaling (the dynamics being a function of excess entropy only) does not apply for the Lennard-Jones chain with harmonic bonds.
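
    The virial/potential-energy correlation on which the isomorph theory rests is quantified by a simple Pearson coefficient of the equilibrium fluctuations. The synthetic time series below stand in for actual molecular-dynamics data and are purely illustrative; the 0.9 threshold is the conventional rule of thumb for "strongly correlating" liquids.

```python
import numpy as np

def wu_correlation(W, U):
    """Pearson correlation R of virial (W) and potential-energy (U)
    fluctuations; R >= ~0.9 is the usual criterion for a liquid to
    have good isomorphs."""
    dW, dU = W - W.mean(), U - U.mean()
    return np.mean(dW * dU) / np.sqrt(np.mean(dW**2) * np.mean(dU**2))

rng = np.random.default_rng(1)
U = rng.normal(size=10_000)

# Rigid-bond-like system: W tracks U closely -> strong correlation.
W_rigid = 6.0 * U + 0.1 * rng.normal(size=U.size)
# Harmonic-bond-like system: bond vibrations decorrelate W from U.
W_harm = 6.0 * U + 20.0 * rng.normal(size=U.size)

print(wu_correlation(W_rigid, U))  # near 1
print(wu_correlation(W_harm, U))   # much smaller
```

    The second case mimics the paper's finding: adding harmonic bonds injects large, uncorrelated contributions into W and U, destroying the correlation even though structure and dynamics can still be invariant along "pseudoisomorphs".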

  20. Large-Scale Reform Comes of Age

    ERIC Educational Resources Information Center

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole-system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that while the pressure for reform was mounting, there were very few examples of deliberate or…

  1. Automating large-scale reactor systems

    SciTech Connect

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig.

  2. Large Scale Metal Additive Techniques Review

    SciTech Connect

    Nycz, Andrzej; Adediran, Adeola I; Noakes, Mark W; Love, Lonnie J

    2016-01-01

    In recent years additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with its polymer counterpart. This paper is a review study of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post-processing, as well as potential applications. This paper surveys the current state of the art of large-scale metal additive technology, with a focus on expanding the geometric limits.

  3. The Large-scale Distribution of Galaxies

    NASA Astrophysics Data System (ADS)

    Flin, Piotr

    A review of the large-scale structure of the Universe is given. A connection is made with the titanic work by Johannes Kepler in many areas of astronomy and cosmology. Special attention is given to the spatial distribution of galaxies, voids and walls (the cellular structure of the Universe). Finally, the author concludes that the large-scale structure of the Universe can be observed on a much greater scale than was thought twenty years ago.

  4. What is a large-scale dynamo?

    NASA Astrophysics Data System (ADS)

    Nigro, G.; Pongkitiwanichakul, P.; Cattaneo, F.; Tobias, S. M.

    2017-01-01

    We consider kinematic dynamo action in a sheared helical flow at moderate to high values of the magnetic Reynolds number (Rm). We find exponentially growing solutions which, for large enough shear, take the form of a coherent part embedded in incoherent fluctuations. We argue that at large Rm large-scale dynamo action should be identified by the presence of structures coherent in time, rather than those at large spatial scales. We further argue that although the growth rate is determined by small-scale processes, the period of the coherent structures is set by mean-field considerations.

  5. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    Problems inherent to large-scale systems, such as power networks, communication networks, and economic or ecological systems, were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large scale systems. A very similar classification is used, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is brought to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  6. Large-scale instabilities of helical flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2016-10-01

    Large-scale hydrodynamic instabilities of periodic helical flows of a given wave number K are investigated using three-dimensional Floquet numerical computations. In the Floquet formalism the unstable field is expanded in modes of different spatial periodicity. This allows us (i) to clearly distinguish large from small scale instabilities and (ii) to study modes of wave number q of arbitrarily large scale separation q ≪ K. Different flows are examined, including flows that exhibit small-scale turbulence. The growth rate σ of the most unstable mode is measured as a function of the scale separation q/K ≪ 1 and the Reynolds number Re. It is shown that the growth rate follows the scaling σ ∝ q if an AKA effect [Frisch et al., Physica D: Nonlinear Phenomena 28, 382 (1987), 10.1016/0167-2789(87)90026-1] is present, or a negative-eddy-viscosity scaling σ ∝ q² in its absence. This holds both for the Re ≪ 1 regime, where previously derived asymptotic results are verified, and for Re = O(1), which is beyond their range of validity. Furthermore, for values of Re above a critical value Re_Sc beyond which small-scale instabilities are present, the growth rate becomes independent of q and the energy of the perturbation at large scales decreases with scale separation. The nonlinear behavior of these large-scale instabilities is also examined in the nonlinear regime, where the largest scales of the system are found to be the most dominant energetically. These results are interpreted by low-order models.
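
    The two scalings σ ∝ q and σ ∝ q² can be told apart by fitting the slope of log σ against log q. The synthetic growth-rate data below stand in for Floquet results and are purely illustrative of the diagnostic.

```python
import numpy as np

def scaling_exponent(q, sigma):
    """Least-squares slope of log(sigma) vs log(q): a slope near 1
    signals an AKA (alpha) effect, near 2 a negative-eddy-viscosity
    instability."""
    slope, _ = np.polyfit(np.log(q), np.log(sigma), 1)
    return slope

q = np.array([0.01, 0.02, 0.04, 0.08, 0.16])   # large-scale wave numbers
sigma_eddy = 0.5 * q**2                        # negative-eddy-viscosity case
print(scaling_exponent(q, sigma_eddy))         # ~2
```

    In practice one would measure σ(q) from the Floquet computation at several scale separations and check which exponent the fit returns.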

  7. Liquid crystal spatial light modulator with very large phase modulation operating in high harmonic orders.

    PubMed

    Calero, Venancio; García-Martínez, Pascuala; Albero, Jorge; Sánchez-López, María M; Moreno, Ignacio

    2013-11-15

    Unusually large phase modulation in a commercial liquid crystal spatial light modulator (LCSLM) is reported. Such a situation is obtained by illuminating with visible light a device designed to operate in the infrared range. The phase modulation range reaches 6π radians in the red region of the visible spectrum and 10π radians in the blue region. Excellent diffraction efficiency in high harmonic orders is demonstrated despite a concomitant and non-negligible Fabry-Perot interference effect. This type of SLM opens the possibility to implement diffractive elements with reduced chromatic dispersion or chromatic control.
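
    Operating "in high harmonic orders" means addressing a blazed phase grating whose sawtooth spans several multiples of 2π per period. The FFT sketch below, which assumes an idealized lossless modulator with no Fabry-Perot effect, shows that a 6π phase ramp sends essentially all the light into the 3rd diffraction order.

```python
import numpy as np

ORDER = 3        # a 0..6*pi phase ramp per period works in the 3rd order
SAMPLES = 256    # samples per grating period

x = np.arange(SAMPLES) / SAMPLES
field = np.exp(1j * 2 * np.pi * ORDER * x)  # ideal 6*pi sawtooth phase

spectrum = np.fft.fft(field) / SAMPLES
efficiency = np.abs(spectrum)**2
print(efficiency[ORDER])   # ~1.0: all power in the 3rd diffraction order
```

    The same sketch with a quantized or truncated phase ramp would show the efficiency leaking into neighboring orders, which is the effect the Fabry-Perot interference superimposes on the real device.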

  8. Large-scale dynamics of magnetic helicity

    NASA Astrophysics Data System (ADS)

    Linkmann, Moritz; Dallas, Vassilios

    2016-11-01

    In this paper we investigate the dynamics of magnetic helicity in magnetohydrodynamic (MHD) turbulent flows, focusing on scales larger than the forcing scale. Our results show a nonlocal inverse cascade of magnetic helicity, which occurs directly from the forcing scale into the largest scales of the magnetic field. We also observe that no magnetic helicity and no energy are transferred to an intermediate range of scales sufficiently smaller than the container size and larger than the forcing scale. Thus, the statistical properties of this range of scales, which increases with scale separation, are shown to be described to a large extent by the zero-flux solutions of the absolute statistical equilibrium theory exhibited by the truncated ideal MHD equations.

  9. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  10. Multiple harmonic frequencies resonant cavity design and half-scale prototype measurements for a fast kicker

    DOE PAGES

    Huang, Yulu; Wang, Haipeng; Wang, Shaoheng; ...

    2016-12-09

    Quarter wavelength resonator (QWR) based deflecting cavities with the capability of supporting multiple odd-harmonic modes have been developed for an ultrafast periodic kicker system in the proposed Jefferson Lab Electron Ion Collider (JLEIC, formerly MEIC). Previous work on the kicking pulse synthesis and the transverse beam dynamics tracking simulations show that a flat-top kicking pulse can be generated with minimal emittance growth during injection and circulation of the cooling electron bunches. This flat-top kicking pulse can be obtained when a DC component and 10 harmonic modes with appropriate amplitude and phase are combined together. To support 10 such harmonic modes, four QWR cavities are used with 5, 3, 1, and 1 modes, respectively. In the multiple-mode cavities, several slightly tapered segments of the inner conductor are introduced to tune the higher order deflecting modes to be harmonic, and stub tuners are used to fine tune each frequency to compensate for potential errors. In this paper, we summarize the electromagnetic design of the five-mode cavity, including the geometry optimization to get high transverse shunt impedance, the frequency tuning and sensitivity analysis, and the single loop coupler design for coupling to all of the harmonic modes. In particular we report on the design and fabrication of a half-scale copper prototype of this proof-of-principle five-odd-mode cavity, as well as the rf bench measurements. Lastly, we demonstrate mode superposition in this cavity experimentally, which illustrates the kicking pulse generation concept.

  11. Shift- and scale-invariant recognition of contour objects with logarithmic radial harmonic filters.

    PubMed

    Moya, A; Esteve-Taboada, J J; García, J; Ferreira, C

    2000-10-10

    The phase-only logarithmic radial harmonic (LRH) filter has been shown to be suitable for scale-invariant block object recognition. However, an important set of objects is the collection of contour functions that results from a digital edge extraction of the original block objects. These contour functions have a constant width that is independent of the scale of the original object. Therefore, since the energy of the contour objects decreases more slowly with the scale factor than does the energy of the block objects, the phase-only LRH filter has difficulties in the recognition tasks when these contour objects are used. We propose a modified LRH filter that permits the realization of a shift- and scale-invariant optical recognition of contour objects. The modified LRH filter is a complex filter that compensates the energy variation resulting from the scaling of contour objects. Optical results validate the theory and show the utility of the newly proposed method.

  12. Wavelength Scaling of High-Harmonic Yield: Threshold Phenomena and Bound State Symmetry Dependence

    NASA Astrophysics Data System (ADS)

    Starace, Anthony F.; Frolov, M. V.; Manakov, N. L.

    2008-05-01

    Using our recent description of harmonic generation (HG) in terms of a system's complex quasienergy [1], we analyze the harmonic power PδE(λ) (over a fixed interval, δE, of harmonic energies) as a function of wavelength and show that it reproduces the wavelength scaling predicted recently by two groups of authors [2, 3] based on solutions of the time-dependent Schrödinger equation: PδE(λ) ∼ λ^(-x), where x ≈ 5-6. Furthermore, the oscillations of PδE(λ) on a fine λ scale found in Ref. [3] are then shown to have a quantum origin, involving threshold phenomena within a system of interacting ionization and HG channels. Our results are also shown to be sensitive to the bound state wave function's symmetry. [1] M.V. Frolov, A.V. Flegel, N.L. Manakov, and A.F. Starace, Phys. Rev. A 75, 063407 (2007). [2] J. Tate, T. Auguste, H.G. Muller, P. Salières, P. Agostini, and L.F. DiMauro, Phys. Rev. Lett. 98, 013901 (2007). [3] K. Schiessl, K.L. Ishikawa, E. Persson, and J. Burgdörfer, Phys. Rev. Lett. 99, 253903 (2007).

  13. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  15. Large-scale cortical networks and cognition.

    PubMed

    Bressler, S L

    1995-03-01

    The well-known parcellation of the mammalian cerebral cortex into a large number of functionally distinct cytoarchitectonic areas presents a problem for understanding the complex cortical integrative functions that underlie cognition. How do cortical areas having unique individual functional properties cooperate to accomplish these complex operations? Do neurons distributed throughout the cerebral cortex act together in large-scale functional assemblages? This review examines the substantial body of evidence supporting the view that complex integrative functions are carried out by large-scale networks of cortical areas. Pathway tracing studies in non-human primates have revealed widely distributed networks of interconnected cortical areas, providing an anatomical substrate for large-scale parallel processing of information in the cerebral cortex. Functional coactivation of multiple cortical areas has been demonstrated by neurophysiological studies in non-human primates and several different cognitive functions have been shown to depend on multiple distributed areas by human neuropsychological studies. Electrophysiological studies on interareal synchronization have provided evidence that active neurons in different cortical areas may become not only coactive, but also functionally interdependent. The computational advantages of synchronization between cortical areas in large-scale networks have been elucidated by studies using artificial neural network models. Recent observations of time-varying multi-areal cortical synchronization suggest that the functional topology of a large-scale cortical network is dynamically reorganized during visuomotor behavior.

  16. Large-scale nanophotonic phased array.

    PubMed

    Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R

    2013-01-10

    Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.

  17. Introduction to macroscopic power scaling principles for high-order harmonic generation

    NASA Astrophysics Data System (ADS)

    Heyl, C. M.; Arnold, C. L.; Couairon, A.; L'Huillier, A.

    2017-01-01

    This tutorial presents an introduction to power scaling concepts for high-order harmonic generation (HHG) and attosecond pulse production. We present an overview of state-of-the-art HHG-based extreme ultraviolet (XUV) sources, followed by a brief introduction to the basic principles underlying HHG and a detailed discussion of macroscopic effects and scaling principles. Particular emphasis is put on a general scaling model that allows invariant scaling of the HHG process both to μJ-level driving laser pulses, and thus to multi-MHz repetition rates, and to 100 mJ- or even Joule-level laser pulses, opening up new intensity regimes for attosecond XUV pulses.

  18. Scaling high-order harmonic generation from laser-solid interactions to ultrahigh intensity.

    PubMed

    Dollar, F; Cummings, P; Chvykov, V; Willingale, L; Vargas, M; Yanovsky, V; Zulick, C; Maksimchuk, A; Thomas, A G R; Krushelnick, K

    2013-04-26

    Coherent x-ray beams with a subfemtosecond (<10⁻¹⁵ s) pulse duration will enable measurements of fundamental atomic processes in a completely new regime. High-order harmonic generation (HOHG) using short pulse (<100 fs) infrared lasers focused to intensities surpassing 10¹⁸ W cm⁻² onto a solid density plasma is a promising means of generating such short pulses. Critical to the relativistic oscillating mirror mechanism is the steepness of the plasma density gradient at the reflection point, characterized by a scale length, which can strongly influence the harmonic generation mechanism. It is shown that for intensities in excess of 10²¹ W cm⁻² an optimum density ramp scale length exists that balances an increase in efficiency with a growth of parametric plasma wave instabilities. We show that for these higher intensities the optimal scale length is c/ω₀, for which a variety of HOHG properties are optimized, including total conversion efficiency, HOHG divergence, and their power law scaling. Particle-in-cell simulations show striking evidence of the HOHG loss mechanism through parametric instabilities and relativistic self-phase modulation, which affect the produced spectra and conversion efficiency.

  19. Large-angular-scale anisotropy in the cosmic background radiation

    NASA Astrophysics Data System (ADS)

    Gorenstein, M. V.; Smoot, G. F.

    1981-03-01

    Results of an extended series of airborne measurements of large-angular-scale anisotropy in the 3-K cosmic background radiation are reported. A dual-antenna microwave radiometer operating at 33 GHz, flown aboard a U-2 aircraft to 20-km altitude on 11 flights between December 1976 and May 1978, measured differential intensity between pairs of directions distributed over most of the Northern Hemisphere. Measurements show clear evidence of anisotropy that is readily interpreted as due to the solar motion relative to the sources of the radiation. The anisotropy is well fitted by a first-order spherical harmonic of amplitude 3.6 ± 0.5 mK, corresponding to a velocity of 360 ± 50 km/s toward the direction 11.2 ± 0.5 hours of right ascension and 19° ± 8° declination.
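
    The quoted numbers are mutually consistent under the first-order Doppler relation ΔT/T = v/c; a minimal check, taking the amplitude and the 3-K background temperature from the abstract:

```python
# Check that the fitted dipole amplitude implies the quoted solar velocity,
# using the nonrelativistic Doppler relation delta_T / T = v / c.
C_KM_S = 299_792.458  # speed of light in km/s
T_CMB_K = 3.0         # "3-K cosmic background radiation" temperature
DELTA_T_MK = 3.6      # fitted dipole amplitude in mK

v_km_s = C_KM_S * (DELTA_T_MK * 1e-3) / T_CMB_K
print(f"inferred velocity: {v_km_s:.0f} km/s")  # ~360 km/s, as reported
```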

  20. Macromolecular Origins of Harmonics Higher than the Third in Large-Amplitude Oscillatory Shear Flow

    NASA Astrophysics Data System (ADS)

    Giacomin, Alan; Jbara, Layal; Gilbert, Peter; Chemical Engineering Department Team

    2016-11-01

    In 1935, Andrew Gemant conceived of the complex viscosity, a rheological material function measured by "jiggling" an elastic liquid in oscillatory shear. This test reveals information about both the viscous and elastic properties of the liquid, and about how these properties depend on frequency. The test gained popularity with chemists when John Ferry perfected instruments for measuring both the real and imaginary parts of the complex viscosity. In 1958, Cox and Merz discovered that the steady shear viscosity curve was easily deduced from the magnitude of the complex viscosity, and today oscillatory shear is the single most popular rheological property measurement. With oscillatory shear, we can control two things: the frequency (Deborah number) and the shear rate amplitude (Weissenberg number). When the Weissenberg number is large, the elastic liquids respond with a shear stress over a series of odd multiples of the test frequency. In this lecture we will explore recent attempts to deepen our understanding of the physics of these higher harmonics, including especially harmonics higher than the third. The authors acknowledge the Canada Research Chairs program of the Government of Canada for the Natural Sciences and Engineering Research Council of Canada (NSERC) Tier 1 Canada Research Chair in Rheology.

  1. "Cosmological Parameters from Large Scale Structure"

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2005-01-01

    This grant has provided primary support for graduate student Mark Neyrinck, and some support for the PI and for colleague Nick Gnedin, who helped co-supervise Neyrinck. This award had two major goals: first, to continue to develop and apply methods for measuring galaxy power spectra on large, linear scales, with a view to constraining cosmological parameters; and second, to begin to understand galaxy clustering at smaller, nonlinear scales well enough to constrain cosmology from those scales also. Under this grant, the PI and collaborators, notably Max Tegmark, continued to improve their technology for measuring power spectra from galaxy surveys at large, linear scales, and to apply the technology to surveys as the data became available. We believe that our methods are the best in the world. These measurements become the foundation from which we and other groups measure cosmological parameters.

  2. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.

  3. Large-scale multimedia modeling applications

    SciTech Connect

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications.

  4. Large-scale synthesis of peptides.

    PubMed

    Andersson, L; Blomberg, L; Flegel, M; Lepsa, L; Nilsson, B; Verlander, M

    2000-01-01

    Recent advances in the areas of formulation and delivery have rekindled the interest of the pharmaceutical community in peptides as drug candidates, which, in turn, has provided a challenge to the peptide industry to develop efficient methods for the manufacture of relatively complex peptides on scales of up to metric tons per year. This article focuses on chemical synthesis approaches for peptides, and presents an overview of the methods available and in use currently, together with a discussion of scale-up strategies. Examples of the different methods are discussed, together with solutions to some specific problems encountered during scale-up development. Finally, an overview is presented of issues common to all manufacturing methods, i.e., methods used for the large-scale purification and isolation of final bulk products and regulatory considerations to be addressed during scale-up of processes to commercial levels. Copyright 2000 John Wiley & Sons, Inc. Biopolymers (Pept Sci) 55: 227-250, 2000

  5. Large optical second harmonic generation in a low-bandgap polymer

    NASA Astrophysics Data System (ADS)

    Vanbel, Maarten K.; Vandendriessche, Stefaan; Willot, Pieter; Koeckelberghs, Guy; Verbiest, Thierry

    2014-10-01

    Recent research has focused on developing low-bandgap polymers for harvesting solar energy, fine-tuning desirable properties including power conversion efficiency, carrier mobilities and broad light absorption. However, little attention has been paid to their nonlinear optical properties. We characterized the optical second harmonic generation of corona poled films of poly(cyclopenta[2,1-b;3,4-b']dithiophen-4-ylidenedioctylmalonate). Despite being amorphous and lacking a typical donor-acceptor dye, these films display large nonlinear optical susceptibilities. Coupled with their stability and low absorption in the relevant wavelength region, these polymer films compare favorably to other materials. Our results show the promise of low-bandgap polymers for nonlinear optical applications.

  6. Molecular origins of higher harmonics in large-amplitude oscillatory shear flow: Shear stress response

    NASA Astrophysics Data System (ADS)

    Gilbert, P. H.; Giacomin, A. J.

    2016-10-01

    Recent work has focused on deepening our understanding of the molecular origins of the higher harmonics that arise in the shear stress response of polymeric liquids in large-amplitude oscillatory shear flow. For instance, these higher harmonics have been explained by just considering the orientation distribution of rigid dumbbells suspended in a Newtonian solvent. These dumbbells, when in dilute suspension, form the simplest relevant molecular model of polymer viscoelasticity, and this model specifically neglects interactions between the polymer molecules [R. B. Bird et al., "Dilute rigid dumbbell suspensions in large-amplitude oscillatory shear flow: Shear stress response," J. Chem. Phys. 140, 074904 (2014)]. In this paper, we explore these interactions by examining the Curtiss-Bird model, a kinetic molecular theory designed specifically to account for the restricted motions that arise when polymer chains are concentrated, thus interacting and, specifically, entangled. We begin our comparison using a heretofore ignored explicit analytical solution [X.-J. Fan and R. B. Bird, "A kinetic theory for polymer melts. VI. Calculation of additional material functions," J. Non-Newtonian Fluid Mech. 15, 341 (1984)]. For concentrated systems, the chain motion transverse to the chain axis is more restricted than along the axis. This anisotropy is described by the link tension coefficient, ɛ, for which several special cases arise: ɛ = 0 corresponds to reptation, ɛ > 1/8 to rod-climbing, 1/5 ≤ ɛ ≤ 3/4 to reasonable predictions for shear-thinning in steady simple shear flow, and ɛ = 1 to the dilute solution without hydrodynamic interaction. In this paper, we examine the shapes of the shear stress versus shear rate loops for the special cases ɛ = 0, 1/8, 3/8, and 1, and we compare these with those of rigid dumbbell and reptation model predictions.

  7. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  8. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared during a decade marked by a rapid expansion of funds and manpower in its first half and an almost as rapid contraction in its second; and (2) how NASA combined central planning and control with decentralized project execution.

  9. Macroscopic scaling of high-order harmonics generated by two-color optimized waveforms in a hollow waveguide

    NASA Astrophysics Data System (ADS)

    Jin, Cheng; Hong, Kyung-Han; Lin, C. D.

    2017-07-01

    We present the macroscopic scaling of high harmonics generated by two-color laser pulses interacting with Ne gas in a hollow waveguide. We demonstrate that the divergence of harmonics is inversely proportional to the waveguide radius and harmonic yields are proportional to the square of the waveguide radius when the gas pressure and waveguide length are chosen to meet the phase-matching condition. We also show that harmonic yields are inversely proportional to the ionization level of the optimized two-color waveform with proper gas pressure if waveguide radius and length are fixed. These scaling relations would help experimentalists find phase-matching conditions to efficiently generate tabletop high-flux coherent soft x rays for applications.
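
    The stated scaling relations (divergence ∝ 1/a, phase-matched yield ∝ a²) can be sketched as follows; the reference radius and normalizations are assumptions for illustration only:

```python
# Sketch of the waveguide-radius scaling relations quoted above:
# harmonic divergence ~ 1/a and harmonic yield ~ a**2 at phase matching.
# The reference radius and unit normalizations are illustrative assumptions.

def waveguide_scaling(a_um, ref_a_um=100.0, ref_div_mrad=1.0, ref_yield=1.0):
    """Return (divergence, yield) for radius a_um relative to a reference
    waveguide, assuming phase-matched conditions at both radii."""
    r = a_um / ref_a_um
    return ref_div_mrad / r, ref_yield * r ** 2

# Doubling the radius halves the divergence and quadruples the yield.
divergence, harmonic_yield = waveguide_scaling(200.0)
print(divergence, harmonic_yield)
```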

  10. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  11. Large-scale Advanced Propfan (LAP) program

    NASA Technical Reports Server (NTRS)

    Sagerser, D. A.; Ludemann, S. G.

    1985-01-01

    The propfan is an advanced propeller concept which maintains the high efficiencies traditionally associated with conventional propellers at the higher aircraft cruise speeds associated with jet transports. The large-scale advanced propfan (LAP) program extends the research done on 2 ft diameter propfan models to a 9 ft diameter article. The program includes design, fabrication, and testing of both an eight bladed, 9 ft diameter propfan, designated SR-7L, and a 2 ft diameter aeroelastically scaled model, SR-7A. The LAP program is complemented by the propfan test assessment (PTA) program, which takes the large-scale propfan and mates it with a gas generator and gearbox to form a propfan propulsion system and then flight tests this system on the wing of a Gulfstream 2 testbed aircraft.

  12. 10-MHz, Yb-fiber chirped-pulse amplifier system with large-scale transmission gratings.

    PubMed

    Kobayashi, Yohei; Hirayama, Nozomi; Ozawa, Akira; Sukegawa, Takashi; Seki, Takashi; Kuramoto, Yoshiyuki; Watanabe, Shuntaro

    2013-05-20

    Large-scale transmission gratings were produced for the stretcher and the compressor in a Yb-fiber chirped-pulse amplification system. A 23-W, 200-fs laser system with a 10-MHz repetition rate was demonstrated. A focused intensity as high as 10¹⁴ W/cm² was achieved, which is high enough for multi-photon processes such as high-order harmonic generation and multi-photon ionization of neutral atoms. High-order harmonics up to the 7th order were observed using Xe gas as a nonlinear medium.

  13. Neutrino footprint in large scale structure

    NASA Astrophysics Data System (ADS)

    Garay, Carlos Peña; Verde, Licia; Jimenez, Raul

    2017-03-01

    Recent constraints on the sum of neutrino masses, inferred by analyzing cosmological data, show that detecting a non-zero neutrino mass is within reach of forthcoming cosmological surveys. Such a measurement will imply a direct determination of the absolute neutrino mass scale. Physically, the measurement relies on constraining the shape of the matter power spectrum below the neutrino free-streaming scale: massive neutrinos erase power at these scales. However, detection of a lack of small-scale power in cosmological data could also be due to a host of other effects. It is therefore of paramount importance to validate neutrinos as the source of power suppression at small scales. We show that, independently of the hierarchy, neutrinos always leave a footprint on large, linear scales; the exact location and properties are fully specified by the measured power suppression (an astrophysical measurement) and the atmospheric neutrino mass splitting (a neutrino oscillation experiment measurement). This feature cannot be easily mimicked by systematic uncertainties in the cosmological data analysis or by modifications of the cosmological model. Therefore the measurement of such a feature, up to a 1% relative change in the power spectrum for extreme differences in the mass eigenstate mass ratios, is a smoking gun for confirming the determination of the absolute neutrino mass scale from cosmological observations. It also demonstrates the synergy between astrophysics and particle physics experiments.

  14. Harmonic statistics

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2017-05-01

    The exponential, the normal, and the Poisson statistical laws are of major importance due to their universality. Harmonic statistics are as universal as the three aforementioned laws, but yet they fall short in their 'public relations' for the following reason: the full scope of harmonic statistics cannot be described in terms of a statistical law. In this paper we describe harmonic statistics, in their full scope, via an object termed harmonic Poisson process: a Poisson process, over the positive half-line, with a harmonic intensity. The paper reviews the harmonic Poisson process, investigates its properties, and presents the connections of this object to an assortment of topics: uniform statistics, scale invariance, random multiplicative perturbations, Pareto and inverse-Pareto statistics, exponential growth and exponential decay, power-law renormalization, convergence and domains of attraction, the Langevin equation, diffusions, Benford's law, and 1/f noise.
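
    The harmonic Poisson process described above admits a compact simulation: with intensity λ(x) = c/x, the point count on [a, b] is Poisson with mean c·ln(b/a), and the points are log-uniformly distributed. A minimal sketch (the choice c = 1 and the sampling interval are assumptions for illustration):

```python
import math
import random

def harmonic_poisson_points(a, b, c=1.0, rng=random):
    """Sample one realization of a Poisson process with intensity c/x on [a, b].

    The point count is Poisson with mean c * ln(b/a); given the count, points
    are i.i.d. with density proportional to 1/x, sampled by exponentiating a
    uniform draw on [ln a, ln b]."""
    mean = c * math.log(b / a)
    # Poisson sampling by CDF inversion (adequate for small means).
    n, p = 0, math.exp(-mean)
    cumulative, u = p, rng.random()
    while cumulative < u:
        n += 1
        p *= mean / n
        cumulative += p
    return sorted(math.exp(rng.uniform(math.log(a), math.log(b))) for _ in range(n))

random.seed(0)
pts = harmonic_poisson_points(1.0, math.e ** 2)  # expected count: exactly 2
```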

  15. Wavelength Scaling of High Harmonic Generation Close to the Multiphoton Ionization Regime

    NASA Astrophysics Data System (ADS)

    Lai, Chien-Jen; Cirmi, Giovanni; Hong, Kyung-Han; Moses, Jeffrey; Huang, Shu-Wei; Granados, Eduardo; Keathley, Phillip; Bhardwaj, Siddharth; Kärtner, Franz X.

    2013-08-01

    We study the wavelength scaling of high harmonic generation efficiency with visible driver wavelengths in the transition between the tunneling and the multiphoton ionization regimes where the Keldysh parameter is around unity. Our experiment shows a less dramatic wavelength scaling of efficiency than the conventional case for near- and mid-IR driver wavelengths, and it is well explained by a generalized three-step model for increased Keldysh parameters that employs complex ionization times in addition to the nonadiabatic ionization. The complex ionization time is critical to avoid the divergence when replacing the quasistatic ionization model by the more general nonadiabatic ionization model. Together, the two modifications present a consistent description of the influence of the atomic potential on the rescattering process in the intermediate Keldysh regime.

  16. Scale up of large ALON windows

    NASA Astrophysics Data System (ADS)

    Goldman, Lee M.; Balasubramanian, Sreeram; Kashalikar, Uday; Foti, Robyn; Sastri, Suri

    2013-06-01

    Aluminum Oxynitride (ALON® Optical Ceramic) combines broadband transparency with excellent mechanical properties. ALON's cubic structure means that it is transparent in its polycrystalline form, allowing it to be manufactured by conventional powder processing techniques. Surmet has established a robust manufacturing process, beginning with synthesis of ALON® powder, continuing through forming/heat treatment of blanks, and ending with optical fabrication of ALON® windows. Surmet has made significant progress in its production capability in recent years. Additional scale up of Surmet's manufacturing capability, for larger sizes and higher quantities, is currently underway. ALON® transparent armor represents the state of the art in protection against armor piercing threats, offering a factor of two in weight and thickness savings over conventional glass laminates. Tiled and monolithic windows have been successfully produced and tested against a range of threats. Large ALON® windows are also of interest for a range of visible to Mid-Wave Infra-Red (MWIR) sensor applications. These applications often have stressing imaging requirements which in turn require that these large windows have optical characteristics including excellent homogeneity of index of refraction and very low stress birefringence. Surmet is currently scaling up its production facility to be able to make and deliver ALON® monolithic windows as large as ~19x36-in. Additionally, Surmet has plans to scale up to windows ~3ftx3ft in size in the coming years. Recent results with scale up and characterization of the resulting blanks will be presented.

  17. Large-Scale PV Integration Study

    SciTech Connect

    Lu, Shuai; Etingov, Pavel V.; Diao, Ruisheng; Ma, Jian; Samaan, Nader A.; Makarov, Yuri V.; Guo, Xinxin; Hafen, Ryan P.; Jin, Chunlian; Kirkham, Harold; Shlatz, Eugene; Frantzis, Lisa; McClive, Timothy; Karlson, Gregory; Acharya, Dhruv; Ellis, Abraham; Stein, Joshua; Hansen, Clifford; Chadliev, Vladimir; Smart, Michael; Salgo, Richard; Sorensen, Rahn; Allen, Barbara; Idelchik, Boris

    2011-07-29

    This research effort evaluates the impact of large-scale photovoltaic (PV) and distributed generation (DG) output on NV Energy’s electric grid system in southern Nevada. It analyzes the ability of NV Energy’s generation to accommodate increasing amounts of utility-scale PV and DG, and the resulting cost of integrating variable renewable resources. The study was jointly funded by the United States Department of Energy and NV Energy, and conducted by a project team comprised of industry experts and research scientists from Navigant Consulting Inc., Sandia National Laboratories, Pacific Northwest National Laboratory and NV Energy.

  18. Is rigorous retrospective harmonization possible? Application of the DataSHaPER approach across 53 large studies

    PubMed Central

    Fortier, Isabel; Doiron, Dany; Little, Julian; Ferretti, Vincent; L’Heureux, François; Stolk, Ronald P; Knoppers, Bartha M; Hudson, Thomas J; Burton, Paul R

    2011-01-01

    Background Proper understanding of the roles of, and interactions between genetic, lifestyle, environmental and psycho-social factors in determining the risk of development and/or progression of chronic diseases requires access to very large high-quality databases. Because of the financial, technical and time burdens related to developing and maintaining very large studies, the scientific community is increasingly synthesizing data from multiple studies to construct large databases. However, the data items collected by individual studies must be inferentially equivalent to be meaningfully synthesized. The DataSchema and Harmonization Platform for Epidemiological Research (DataSHaPER; http://www.datashaper.org) was developed to enable the rigorous assessment of the inferential equivalence, i.e. the potential for harmonization, of selected information from individual studies. Methods This article examines the value of using the DataSHaPER for retrospective harmonization of established studies. Using the DataSHaPER approach, the potential to generate 148 harmonized variables from the questionnaires and physical measures collected in 53 large population-based studies (6.9 million participants) was assessed. Variable and study characteristics that might influence the potential for data synthesis were also explored. Results Out of all assessment items evaluated (148 variables for each of the 53 studies), 38% could be harmonized. Certain characteristics of variables (i.e. relative importance, individual targeted, reference period) and of studies (i.e. observational units, data collection start date and mode of questionnaire administration) were associated with the potential for harmonization. For example, for variables deemed to be essential, 62% of assessment items paired could be harmonized. Conclusion The current article shows that the DataSHaPER provides an effective and flexible approach for the retrospective harmonization of information across studies. 

  19. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  20. Large-scale fibre-array multiplexing

    SciTech Connect

    Cheremiskin, I V; Chekhlova, T K

    2001-05-31

    The possibility of creating a fibre multiplexer/demultiplexer with large-scale multiplexing without any basic restrictions on the number of channels and the spectral spacing between them is shown. The operating capacity of a fibre multiplexer based on a four-fibre array ensuring a spectral spacing of 0.7 pm (~10 GHz) between channels is demonstrated. (laser applications and other topics in quantum electronics)

  1. Modeling Human Behavior at a Large Scale

    DTIC Science & Technology

    2012-01-01

    Discerning intentions in dynamic human action. Trends in Cognitive Sciences, 5(4):171-178, 2001. Shirli Bar-David, Israel Bar-David, Paul C. Cross, Sadie... Limits of predictability in human mobility. Science, 327(5968):1018, 2010. S.A. Stouffer. Intervening opportunities: a theory relating mobility and... Modeling Human Behavior at a Large Scale by Adam Sadilek. Submitted in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy

  2. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2008-09-30

    aerosol species up to six days in advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas... impact cloud processes globally. With increasing dust storms due to climate change and land use changes in desert regions, the impact of the... bacteria in large-scale dust storms is expected to significantly impact warm ice cloud formation, human health, and ecosystems globally. In Niemi et al

  3. Economically viable large-scale hydrogen liquefaction

    NASA Astrophysics Data System (ADS)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.
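One of the liquefaction-specific issues named above, ortho-para hydrogen conversion, is driven by the temperature dependence of the equilibrium para fraction. A sketch of that quantity using the standard rigid-rotor partition functions (not taken from the paper; the rotational temperature of H2, about 87.6 K, and the cutoff J are the only inputs):

```python
# Sketch (not from the paper): equilibrium para-H2 fraction vs temperature,
# the quantity that sets the ortho-para conversion duty of a liquefier.
# Standard rigid-rotor statistics: para = even J (spin degeneracy 1),
# ortho = odd J (spin degeneracy 3); theta_rot ~ 87.6 K for H2.
import math

THETA_ROT = 87.6  # rotational temperature of H2, kelvin

def para_fraction(T, j_max=40):
    """Equilibrium fraction of para-hydrogen at temperature T (kelvin)."""
    z_para = sum((2*j + 1) * math.exp(-j*(j + 1)*THETA_ROT/T)
                 for j in range(0, j_max, 2))
    z_ortho = 3 * sum((2*j + 1) * math.exp(-j*(j + 1)*THETA_ROT/T)
                      for j in range(1, j_max, 2))
    return z_para / (z_para + z_ortho)

# Near room temperature the equilibrium mixture is ~25% para ("normal"
# hydrogen); at 20 K it is almost pure para, so the exothermic conversion
# heat must be removed during liquefaction to avoid boil-off in storage.
print(para_fraction(300.0), para_fraction(20.0))
```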

  4. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high-performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods, and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high-performance visualization research challenges and opportunities.

  5. Large landslides in the Pyrenees: preliminary tasks carried out for a harmonized cross-border risk analysis

    NASA Astrophysics Data System (ADS)

    Moya, José; Grandjean, Gilles; Copons, Ramon; Vaunat, Jean; Buxó, Pere; Colas, Bastien; Darrozes, José; Gasc, Muriel; Guinau, Marta; Gutiérrez, Francisco; García, Juan Carlos; Virely, Didier; Crosetto, Michele; Mas, Raül

    2017-04-01

    Large landslides are recognised as one of the main erosional agents in mountain ranges, having a significant influence on landscape evolution. However, few efforts have been made to assess their geomorphological impact from a regional perspective. Regional-scale investigations are also necessary for the reliable evaluation of the associated risks (i.e. for land-use planning). Large landslides are common in the Pyrenees but: 1) their geographic distribution on a regional scale is not well known; 2) their geological and geomorphological controlling factors have been studied only preliminarily; and 3) their state of activity and stability conditions are unknown in most cases. Regional analyses of large landslides, such as those carried out by Crosta et al. (2013) in the Alps, are rare worldwide. Jarman et al. (2014) conducted a very preliminary analysis in a sector of the Pyrenees. The construction of a cartographic inventory forms the basis for this type of study, which is typically hindered by the lack of cross-border landslide databases and methodologies. The aim of this contribution is to present the preliminary work carried out to construct a harmonized inventory of large landslides in the Pyrenees, involving for the first time both sides of the cordillera and the main groups working on landslide risk in France, Spain and Andorra. Methods used for landslide hazard and risk analysis have been compiled and compared, showing significant divergence, even in terminology. A preliminary cross-border inventory sheet on the risk of large landslides has been prepared. It includes specific fields for the assessment of landslide activity (using complementary methods such as morpho-stratigraphy, morphometric analysis and remote sensing techniques) and indirect potential costs (which typically exceed direct costs), both of which are usually neglected in existing databases. Crosta, G.B., Frattini, P. and Agliardi, F., 2013. Deep seated gravitational

  6. Large scale preparation of pure phycobiliproteins.

    PubMed

    Padgett, M P; Krogmann, D W

    1987-01-01

    This paper describes simple procedures for the purification of large amounts of phycocyanin and allophycocyanin from the cyanobacterium Microcystis aeruginosa. A homogeneous natural bloom of this organism provided hundreds of kilograms of cells. Large samples of cells were broken by freezing and thawing. Repeated extraction of the broken cells with distilled water released phycocyanin first, then allophycocyanin, and provided supporting evidence for the current models of phycobilisome structure. The very low ionic strength of the aqueous extracts allowed allophycocyanin release in a particulate form so that this protein could be easily concentrated by centrifugation. Other proteins in the extract were enriched and concentrated by large-scale membrane filtration. The biliproteins were purified to homogeneity by chromatography on DEAE cellulose. Purity was established by HPLC and by N-terminal amino acid sequence analysis. The proteins were examined for stability at various pH values and under exposure to visible light.

  7. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high-energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data; and (4) in some cases, performance is a moot issue, in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  8. The Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Dahal, Sumit; Denis, Kevin; Dünner, Rolando; Eimer, Joseph; Essinger-Hileman, Thomas; Fluxa, Pedro; Halpern, Mark; Hilton, Gene; Hinshaw, Gary F.; Hubmayr, Johannes; Iuliano, Jeffrey; Karakla, John; Marriage, Tobias; McMahon, Jeff; Miller, Nathan; Moseley, Samuel H.; Palma, Gonzalo; Parker, Lucas; Petroff, Matthew; Pradenas, Bastián; Rostem, Karwan; Sagliocca, Marco; Valle, Deniz; Watts, Duncan; Wollack, Edward; Xu, Zhilei; Zeng, Lingzhen

    2017-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a ground-based telescope array designed to measure the large-angular-scale polarization signal of the Cosmic Microwave Background (CMB). The large-angular-scale CMB polarization measurement is essential for a precise determination of the optical depth to reionization (from the E-mode polarization) and a characterization of inflation from the predicted polarization pattern imprinted on the CMB by gravitational waves in the early universe (from the B-mode polarization). CLASS will characterize the primordial tensor-to-scalar ratio, r, to 0.01 (95% CL). CLASS is uniquely designed to be sensitive to the primordial B-mode signal across the entire range of angular scales where it could possibly dominate over the lensing signal that converts E-modes to B-modes, while also making multi-frequency observations both above and below the frequency where the CMB-to-foreground signal ratio is at its maximum. The design enables CLASS to make a definitive cosmic-variance-limited measurement of the optical depth to scattering from reionization. CLASS is an array of four telescopes operating at approximately 40, 90, 150, and 220 GHz. CLASS is located high in the Andes mountains in the Atacama Desert of northern Chile. The location of the CLASS site at high altitude near the equator minimizes atmospheric emission while allowing for daily mapping of ~70% of the sky. A rapid front-end Variable-delay Polarization Modulator (VPM) and low-noise Transition Edge Sensor (TES) detectors allow for high-sensitivity, low-systematic-error mapping of the CMB polarization at large angular scales. The VPM, detectors, and their coupling structures were all uniquely designed and built for CLASS. We present here an overview of the CLASS scientific strategy, instrument design, and current progress. Particular attention is given to the development and status of the Q-band receiver currently surveying the sky from the Atacama Desert and the development of

  9. The Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Dahal, Sumit; Denis, Kevin; Dünner, Rolando; Eimer, Joseph; Essinger-Hileman, Thomas; Fluxa, Pedro; Halpern, Mark; Hilton, Gene; Hinshaw, Gary F.; Hubmayr, Johannes; Iuliano, Jeffrey; Karakla, John; McMahon, Jeff; Miller, Nathan T.; Moseley, Samuel H.; Palma, Gonzalo; Parker, Lucas; Petroff, Matthew; Pradenas, Bastián.; Rostem, Karwan; Sagliocca, Marco; Valle, Deniz; Watts, Duncan; Wollack, Edward; Xu, Zhilei; Zeng, Lingzhen

    2016-07-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  10. The Cosmology Large Angular Scale Surveyor (CLASS)

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; et al.

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  11. The Cosmology Large Angular Scale Surveyor (CLASS)

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Denis, Kevin; Moseley, Samuel H.; Rostem, Karwan; Wollack, Edward

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  12. The Cosmology Large Angular Scale Surveyor

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John; Bennett, Charles; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; et al.

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  13. Statistical Measures of Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Vogeley, Michael; Geller, Margaret; Huchra, John; Park, Changbom; Gott, J. Richard

    1993-12-01

    To quantify clustering in the large-scale distribution of galaxies and to test theories for the formation of structure in the universe, we apply statistical measures to the CfA Redshift Survey. This survey is complete to m_B(0) = 15.5 over two contiguous regions which cover one-quarter of the sky and include ~11,000 galaxies. The salient features of these data are voids with diameter 30-50 h^-1 Mpc and coherent dense structures with a scale of ~100 h^-1 Mpc. Comparison with N-body simulations rules out the "standard" CDM model (Omega = 1, b = 1.5, sigma_8 = 1) at the 99% confidence level because this model has insufficient power on scales lambda > 30 h^-1 Mpc. An unbiased open-universe CDM model (Omega h = 0.2) and a biased CDM model with non-zero cosmological constant (Omega h = 0.24, lambda_0 = 0.6) match the observed power spectrum. The amplitude of the power spectrum depends on the luminosity of the galaxies in the sample; bright (L > L*) galaxies are more strongly clustered than faint galaxies. The paucity of bright galaxies in low-density regions may explain this dependence. To measure the topology of large-scale structure, we compute the genus of isodensity surfaces of the smoothed density field. On scales in the "non-linear" regime, <= 10 h^-1 Mpc, the high- and low-density regions are multiply connected over a broad range of density thresholds, as in a filamentary net. On smoothing scales > 10 h^-1 Mpc, the topology is consistent with the statistics of a Gaussian random field. Simulations of CDM models fail to produce the observed coherence of structure on non-linear scales (> 95% confidence level). The underdensity probability (the frequency of regions with density contrast delta rho / rho_bar = -0.8) depends strongly on the luminosity of galaxies; underdense regions are significantly more common (> 2 sigma) in bright (L > L*) galaxy samples than in samples which include fainter galaxies.
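As a point of reference for the Gaussian comparison invoked above, the genus per unit volume of isodensity surfaces of a Gaussian random field has the well-known analytic form (Gott, Melott & Dickinson), against which the survey topology is tested:

\[
g(\nu) \;=\; A\,\bigl(1-\nu^{2}\bigr)\,e^{-\nu^{2}/2},
\qquad
A \;=\; \frac{1}{4\pi^{2}}\left(\frac{\langle k^{2}\rangle}{3}\right)^{3/2},
\]

where \(\nu\) is the density threshold in units of the standard deviation of the smoothed field, and the amplitude \(A\) depends only on the second moment \(\langle k^{2}\rangle\) of the power spectrum of the smoothed field. A measured genus curve that deviates from this shape (e.g. the excess coherence reported above) signals non-Gaussian structure.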

  14. Large-Angular-Scale Anisotropy in the Cosmic Background Radiation

    DOE R&D Accomplishments Database

    Gorenstein, M. V.; Smoot, G. F.

    1980-05-01

    We report the results of an extended series of airborne measurements of large-angular-scale anisotropy in the 3 K cosmic background radiation. Observations were carried out with a dual-antenna microwave radiometer operating at 33 GHz (0.89 cm wavelength) flown on board a U-2 aircraft to 20 km altitude. In eleven flights between December 1976 and May 1978, the radiometer measured the differential intensity between pairs of directions distributed over most of the northern hemisphere with an rms sensitivity of 47 mK Hz^-1/2. The measurements show clear evidence of anisotropy that is readily interpreted as due to the solar motion relative to the sources of the radiation. The anisotropy is well fit by a first-order spherical harmonic of amplitude 360 ± 50 km s^-1 toward the direction 11.2 ± 0.5 hours of right ascension and 19 ± 8 degrees declination. A simultaneous fit to a combined hypothesis of dipole and quadrupole angular distributions places a 1 mK limit on the amplitude of most components of quadrupole anisotropy with 90% confidence. Additional analysis places a 0.5 mK limit on uncorrelated fluctuations (sky roughness) in the 3 K background on the angular scale of the antenna beam width, about 7 degrees.
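For orientation, the first-order spherical harmonic fit above is the standard Doppler dipole produced by the observer's motion through the radiation field; a quick consistency check with the round numbers of the abstract:

\[
T(\theta) \;\approx\; T_0\left(1 + \frac{v}{c}\cos\theta\right)
\quad\Rightarrow\quad
\Delta T \;=\; T_0\,\frac{v}{c}
\;\approx\; 3\,\mathrm{K}\times\frac{360\ \mathrm{km\,s^{-1}}}{3\times10^{5}\ \mathrm{km\,s^{-1}}}
\;\approx\; 3.6\,\mathrm{mK},
\]

i.e. a few-millikelvin dipole, well above the 1 mK quadrupole limit quoted for the same data.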

  15. Large-scale quasi-geostrophic magnetohydrodynamics

    SciTech Connect

    Balk, Alexander M.

    2014-12-01

    We consider the ideal magnetohydrodynamics (MHD) of a shallow fluid layer on a rapidly rotating planet or star. The presence of a background toroidal magnetic field is assumed, and the 'shallow water' beta-plane approximation is used. We derive a single equation for the slow large length scale dynamics. The range of validity of this equation fits the MHD of the lighter fluid at the top of Earth's outer core. The form of this equation is similar to the quasi-geostrophic (Q-G) equation (for usual ocean or atmosphere), but the parameters are essentially different. Our equation also implies the inverse cascade; but contrary to the usual Q-G situation, the energy cascades to smaller length scales, while the enstrophy cascades to the larger scales. We find the Kolmogorov-type spectrum for the inverse cascade. The spectrum indicates the energy accumulation in larger scales. In addition to the energy and enstrophy, the obtained equation possesses an extra (adiabatic-type) invariant. Its presence implies energy accumulation in the 30° sector around zonal direction. With some special energy input, the extra invariant can lead to the accumulation of energy in zonal magnetic field; this happens if the input of the extra invariant is small, while the energy input is considerable.

  16. Large-scale optimization of neuron arbors

    NASA Astrophysics Data System (ADS)

    Cherniak, Christopher; Changizi, Mark; Won Kang, Du

    1999-05-01

    At the global as well as local scales, some of the geometry of types of neuron arbors (both dendrites and axons) appears to be self-organizing: their morphogenesis behaves like flowing water, that is, fluid dynamically; waterflow in branching networks in turn acts like a tree composed of cords under tension, that is, vector mechanically. Branch diameters and angles and junction sites conform significantly to this model. The result is that such neuron tree samples globally minimize their total volume, rather than, for example, surface area or branch length. In addition, the arbors perform well at generating the cheapest topology interconnecting their terminals: their large-scale layouts are among the best of all such possible connecting patterns, approaching 5% of optimum. This model also applies comparably to arterial and river networks.

  17. Voids in the Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    El-Ad, Hagai; Piran, Tsvi

    1997-12-01

    Voids are the most prominent feature of the large-scale structure of the universe. Still, their incorporation into quantitative analysis of it has been relatively recent, owing essentially to the lack of an objective tool to identify and quantify the voids. To overcome this, we present here the VOID FINDER algorithm, a novel tool for objectively quantifying voids in the galaxy distribution. The algorithm first classifies galaxies as either wall galaxies or field galaxies. Then, it identifies voids in the wall-galaxy distribution. Voids are defined as continuous volumes that do not contain any wall galaxies. The voids must be thicker than an adjustable limit, which is refined in successive iterations. In this way, we identify the same regions that would be recognized as voids by eye. Small breaches in the walls are ignored, avoiding artificial connections between neighboring voids. We test the algorithm using Voronoi tessellations. By appropriate scaling of the parameters with the selection function, we apply it to two redshift surveys, the dense SSRS2 and the full-sky IRAS 1.2 Jy. Both surveys show similar properties: ~50% of the volume is filled by voids. The voids have a scale of at least 40 h^-1 Mpc and an average underdensity of -0.9. Faint galaxies do not fill the voids, but they do populate them more than bright ones. These results suggest that both optically and IRAS-selected galaxies delineate the same large-scale structure. Comparison with the recovered mass distribution further suggests that the observed voids in the galaxy distribution correspond well to underdense regions in the mass distribution. This confirms the gravitational origin of the voids.
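The two-stage logic described above (wall/field classification, then empty-region detection) can be sketched in a highly simplified 2-D form. This is an illustration of the idea only, not the published VOID FINDER: the box, point counts, and radii are arbitrary, and the iterative refinement and selection-function scaling are omitted:

```python
# Simplified 2-D illustration of the wall/field + void-detection idea.
# All parameters are illustrative, not the survey values.
import numpy as np

rng = np.random.default_rng(1)
galaxies = rng.random((500, 2))  # positions in a unit box

# 1) Wall/field split: galaxies with fewer than k_min neighbours within
#    r_link are "field" galaxies and are excluded from void definition.
r_link, k_min = 0.06, 3
d = np.linalg.norm(galaxies[:, None, :] - galaxies[None, :, :], axis=-1)
neighbours = (d < r_link).sum(axis=1) - 1  # exclude self
wall = galaxies[neighbours >= k_min]

# 2) Void cells: grid points farther than r_void from every wall galaxy,
#    i.e. regions "thicker than an adjustable limit" with no wall galaxies.
r_void = 0.08
xs = np.linspace(0, 1, 64)
grid = np.stack(np.meshgrid(xs, xs, indexing="ij"), axis=-1).reshape(-1, 2)
dist_to_wall = np.min(
    np.linalg.norm(grid[:, None, :] - wall[None, :, :], axis=-1), axis=1)
void_fraction = (dist_to_wall > r_void).mean()

print(f"void-filling fraction: {void_fraction:.0%}")
```

The real algorithm works in 3-D redshift space, grows maximal empty spheres iteratively, and scales r_link and r_void with the survey selection function.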

  18. Molecular Origins of Higher Harmonics in Large-Amplitude Oscillatory Shear Flow: Shear Stress Response

    NASA Astrophysics Data System (ADS)

    Gilbert, Peter; Giacomin, A. Jeffrey; Schmalzer, Andrew; Bird, R. B.

    Recent work has focused on understanding the molecular origins of the higher harmonics that arise in the shear stress response of polymeric liquids in large-amplitude oscillatory shear flow. These higher harmonics have been explained using only the orientation distribution of a dilute suspension of rigid dumbbells in a Newtonian fluid, which neglects molecular interactions and is the simplest relevant molecular model of polymer viscoelasticity [R.B. Bird et al., J Chem Phys, 140, 074904 (2014)]. We explore these molecular interactions by examining the Curtiss-Bird model, a kinetic molecular theory that accounts for the restricted polymer motions arising when chains are concentrated [Fan and Bird, JNNFM, 15, 341 (1984)]. For concentrated systems, chain motion transverse to the chain axis is more restricted than motion along the axis. This anisotropy is described by the link tension coefficient, ɛ, for which several special cases arise: ɛ = 0 corresponds to reptation, ɛ > 1/8 to rod-climbing, 1/2 <= ɛ <= 3/4 to reasonable shear-thinning predictions in steady simple shear flow, and ɛ = 1 to a dilute solution of chains. We examine the shapes of the shear stress versus shear rate loops for the special cases ɛ = 0, 1/8, 3/8, 1 of the Curtiss-Bird model, and we compare these with those
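The harmonic content discussed above is extracted by Fourier-decomposing the periodic shear stress. A toy sketch of that extraction (not the Curtiss-Bird calculation; a simple cubic shear-thinning law stands in for the constitutive model, and all parameter values are arbitrary):

```python
# Illustration: extracting odd higher harmonics of the shear stress in
# large-amplitude oscillatory shear (LAOS). The cubic stress law below is
# a stand-in for a real constitutive model, chosen only to generate a
# third harmonic analytically: sigma = 1.4 cos(wt) - 0.2 cos(3wt).
import numpy as np

omega, gamma0 = 1.0, 2.0                 # frequency, strain amplitude
t = np.linspace(0, 2*np.pi/omega, 4096, endpoint=False)
rate = gamma0*omega*np.cos(omega*t)      # shear rate for gamma = gamma0 sin(wt)

# Toy shear-thinning stress: linear term plus a cubic correction.
sigma = rate - 0.1*rate**3

# Fourier decomposition over one period. Symmetry of the shear stress in
# LAOS allows only odd harmonics; the even bins should vanish.
c = np.fft.rfft(sigma)/len(t)
amps = 2*np.abs(c)
print("harmonics 1, 2, 3:", amps[1], amps[2], amps[3])
```

With cos^3 = (3 cos + cos 3)/4, the chosen coefficients give a fundamental of 1.4, a vanishing second harmonic, and a third harmonic of 0.2, which the FFT reproduces.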

  19. Large-scale brightenings associated with flares

    NASA Technical Reports Server (NTRS)

    Mandrini, Cristina H.; Machado, Marcos E.

    1992-01-01

    It is shown that large-scale brightenings (LSBs) associated with solar flares, similar to the 'giant arches' discovered by Svestka et al. (1982) in images obtained by the SMM HXIS hours after the onset of two-ribbon flares, can also occur in association with confined flares in complex active regions. For these events, a clear link between the LSB and the underlying flare is evident from the active-region magnetic field topology. The implications of these findings are discussed within the framework of the interacting loops of flares and the giant arch phenomenology.

  20. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large-scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used in our calculations to obtain the band structures of the proposed metamaterials.
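The band-gap mechanism invoked above can be illustrated with the simplest periodic system that exhibits one. This is a textbook 1-D diatomic mass-spring chain, not the paper's FDTD calculation; the masses, spring constant, and lattice spacing are arbitrary units:

```python
# Toy analogue of a phononic band gap: dispersion of a 1-D diatomic chain,
#   omega^2 = k*( s +/- sqrt(s^2 - 4 sin^2(qa/2)/(m1*m2)) ),  s = 1/m1 + 1/m2.
# Frequencies between the acoustic-branch top and optical-branch bottom
# cannot propagate -- the analogue of the seismic band gaps discussed above.
import numpy as np

k_spring, m1, m2, a = 1.0, 1.0, 3.0, 1.0
q = np.linspace(1e-6, np.pi/a, 400)        # half of the first Brillouin zone
s = 1.0/m1 + 1.0/m2
root = np.sqrt(s**2 - 4.0*np.sin(q*a/2)**2/(m1*m2))
w_acoustic = np.sqrt(k_spring*(s - root))
w_optical = np.sqrt(k_spring*(s + root))

# Gap between the acoustic maximum and optical minimum (both at the zone edge).
gap = w_optical.min() - w_acoustic.max()
print(f"band gap width: {gap:.3f} (units of sqrt(k/m))")
```

Unequal masses are what open the gap: with m1 = m2 the two branches touch at the zone edge and the gap closes, just as a homogeneous medium has no band gap.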

  1. Large-scale planar lightwave circuits

    NASA Astrophysics Data System (ADS)

    Bidnyk, Serge; Zhang, Hua; Pearson, Matt; Balakrishnan, Ashok

    2011-01-01

    By leveraging advanced wafer processing and flip-chip bonding techniques, we have succeeded in hybrid integrating a myriad of active optical components, including photodetectors and laser diodes, with our planar lightwave circuit (PLC) platform. We have combined hybrid integration of active components with monolithic integration of other critical functions, such as diffraction gratings, on-chip mirrors, mode-converters, and thermo-optic elements. Further process development has led to the integration of polarization controlling functionality. Most recently, all these technological advancements have been combined to create large-scale planar lightwave circuits that comprise hundreds of optical elements integrated on chips less than a square inch in size.

  2. Colloquium: Large scale simulations on GPU clusters

    NASA Astrophysics Data System (ADS)

    Bernaschi, Massimo; Bisson, Mauro; Fatica, Massimiliano

    2015-06-01

Graphics processing units (GPUs) are currently used as a cost-effective platform for computer simulations and big-data processing. Large scale applications require that multiple GPUs work together, but the efficiency obtained with clusters of GPUs is, at times, sub-optimal because the GPU features are not exploited at their best. We describe how it is possible to achieve excellent efficiency for applications in statistical mechanics, particle dynamics and network analysis by using suitable memory access patterns and mechanisms such as CUDA streams and profiling tools. Similar concepts and techniques may also be applied to other problems, such as the solution of partial differential equations.

  3. Neutrinos and large-scale structure

    SciTech Connect

    Eisenstein, Daniel J.

    2015-07-15

I review the use of cosmological large-scale structure to measure properties of neutrinos and other relic populations of light relativistic particles. With experiments to measure the anisotropies of the cosmic microwave background and the clustering of matter at low redshift, we have now securely measured a relativistic background with density appropriate to the cosmic neutrino background. Our limits on the mass of the neutrino continue to shrink. Experiments coming in the next decade will greatly improve the available precision on searches for the energy density of novel relativistic backgrounds and the mass of neutrinos.

  4. Large-scale Heterogeneous Network Data Analysis

    DTIC Science & Technology

    2012-07-31

"Data for Multi-Player Influence Maximization on Social Networks." KDD 2012 (Demo). Po-Tzu Chang, Yen-Chieh Huang, Cheng-Lun Yang, Shou-De Lin, Pu-Jen Cheng. "Learning-Based Time-Sensitive Re-Ranking for Web Search." SIGIR 2012 (poster). Hung-Che Lai, Cheng-Te Li, Yi-Chen Lo, and Shou-De Lin. "Exploiting and Evaluating MapReduce for Large-Scale Graph Mining." ASONAM 2012 (Full, 16% acceptance ratio). Hsun-Ping Hsieh, Cheng-Te Li, and Shou-De Lin.

  5. Thermal activation of dislocations in large scale obstacle bypass

    NASA Astrophysics Data System (ADS)

    Sobie, Cameron; Capolungo, Laurent; McDowell, David L.; Martinez, Enrique

    2017-08-01

Dislocation dynamics simulations have been used extensively to predict hardening caused by dislocation-obstacle interactions, including irradiation defect hardening in the athermal case. Incorporating the role of thermal energy on these interactions is possible with a framework provided by harmonic transition state theory (HTST), enabling direct access to thermally activated reaction rates using the Arrhenius equation, including rates of dislocation-obstacle bypass processes. Moving beyond unit dislocation-defect reactions to a representative environment containing a large number of defects requires coarse-graining the activation energy barriers of a population of obstacles into an effective energy barrier that accurately represents the large scale collective process. The work presented here investigates the relationship between unit dislocation-defect bypass processes and the distribution of activation energy barriers calculated for ensemble bypass processes. A significant difference between these cases is observed, which is attributed to the inherent cooperative nature of dislocation bypass processes. In addition to the dislocation-defect interaction, the morphology of the dislocation segments pinned to the defects plays an important role in the activation energies for bypass. A phenomenological model for the stress dependence of the activation energy is shown to describe the effect of a distribution of activation energies well, and a probabilistic activation energy model incorporating the stress distribution in a material is presented.
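
As a minimal sketch of the coarse-graining step described above, the following computes Arrhenius rates for a hypothetical set of unit barriers and inverts the mean rate back into an effective barrier. It assumes independent events and a common attempt frequency, both simplifying assumptions; the paper's point is that cooperative bypass breaks this independent-event picture:

```python
import numpy as np

K_B = 8.617e-5          # Boltzmann constant, eV/K
NU0 = 1.0e13            # assumed common attempt frequency, 1/s
T = 300.0               # temperature, K

# Hypothetical activation-energy barriers (eV) for individual
# dislocation-obstacle bypass events along a pinned line.
barriers = np.array([0.5, 0.7, 0.8, 1.0, 1.2])

rates = NU0 * np.exp(-barriers / (K_B * T))   # Arrhenius rate per obstacle

# If the events were independent, the ensemble rate would be the mean rate;
# inverting the Arrhenius equation gives the corresponding effective barrier.
rate_eff = rates.mean()
barrier_eff = -K_B * T * np.log(rate_eff / NU0)
```

Because the lowest barriers dominate the mean rate, the effective barrier sits close to the smallest unit barrier rather than the average one.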

  6. Primer design for large scale sequencing.

    PubMed Central

    Haas, S; Vingron, M; Poustka, A; Wiemann, S

    1998-01-01

We have developed PRIDE, a primer design program that automatically designs primers in single contigs or whole sequencing projects to extend the already known sequence and to double strand single-stranded regions. The program is fully integrated into the Staden package (GAP4) and accessible with a graphical user interface. PRIDE uses a fuzzy logic-based system to calculate primer qualities. The computational performance of PRIDE is enhanced by using suffix trees to store the huge amount of data being produced. A test set of 110 sequencing primers and 11 PCR primer pairs has been designed on genomic templates, cDNAs and sequences containing repetitive elements to analyze PRIDE's success rate. The high performance of PRIDE, combined with its minimal requirement of user interaction and its fast algorithm, makes this program useful for the large scale design of primers, especially in large sequencing projects. PMID:9611248
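
A fuzzy-logic primer quality score of the general kind PRIDE computes can be sketched as follows. The membership ranges, the Wallace-rule melting temperature, and the choice of only two criteria are illustrative assumptions, not PRIDE's actual rules:

```python
def trapezoid(x, a, b, c, d):
    """Fuzzy membership: 0 outside [a, d], 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def primer_quality(seq):
    """Combine fuzzy criteria for melting temperature and GC content."""
    seq = seq.upper()
    at = seq.count('A') + seq.count('T')
    gc = seq.count('G') + seq.count('C')
    tm = 2 * at + 4 * gc                     # Wallace rule, deg C
    gc_frac = 100.0 * gc / len(seq)
    # Fuzzy AND via minimum, as is typical in fuzzy-logic scoring.
    return min(trapezoid(tm, 50, 55, 65, 70),
               trapezoid(gc_frac, 30, 40, 60, 70))

good = primer_quality("ATGCATGCATGCATGCATGC")   # 50% GC, Tm = 60
poor = primer_quality("AAAAAAAAAAAAAAAAAAAA")   # 0% GC, Tm = 40
```

Taking the minimum over criteria means a primer is only as good as its worst property, which matches the conservative behavior one wants from automated primer selection.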

  7. Large-Scale Organization of Glycosylation Networks

    NASA Astrophysics Data System (ADS)

    Kim, Pan-Jun; Lee, Dong-Yup; Jeong, Hawoong

    2009-03-01

    Glycosylation is a highly complex process to produce a diverse repertoire of cellular glycans that are frequently attached to proteins and lipids. Glycans participate in fundamental biological processes including molecular trafficking and clearance, cell proliferation and apoptosis, developmental biology, immune response, and pathogenesis. N-linked glycans found on proteins are formed by sequential attachments of monosaccharides with the help of a relatively small number of enzymes. Many of these enzymes can accept multiple N-linked glycans as substrates, thus generating a large number of glycan intermediates and their intermingled pathways. Motivated by the quantitative methods developed in complex network research, we investigate the large-scale organization of such N-glycosylation pathways in a mammalian cell. The uncovered results give the experimentally-testable predictions for glycosylation process, and can be applied to the engineering of therapeutic glycoproteins.

  8. Primer design for large scale sequencing.

    PubMed

    Haas, S; Vingron, M; Poustka, A; Wiemann, S

    1998-06-15

We have developed PRIDE, a primer design program that automatically designs primers in single contigs or whole sequencing projects to extend the already known sequence and to double strand single-stranded regions. The program is fully integrated into the Staden package (GAP4) and accessible with a graphical user interface. PRIDE uses a fuzzy logic-based system to calculate primer qualities. The computational performance of PRIDE is enhanced by using suffix trees to store the huge amount of data being produced. A test set of 110 sequencing primers and 11 PCR primer pairs has been designed on genomic templates, cDNAs and sequences containing repetitive elements to analyze PRIDE's success rate. The high performance of PRIDE, combined with its minimal requirement of user interaction and its fast algorithm, makes this program useful for the large scale design of primers, especially in large sequencing projects.

  9. Large-scale ATLAS production on EGEE

    NASA Astrophysics Data System (ADS)

    Espinal, X.; Campana, S.; Walker, R.

    2008-07-01

In preparation for first data at the LHC, a series of Data Challenges of increasing scale and complexity has been performed. Large quantities of simulated data have been produced on three different Grids, integrated into the ATLAS production system. During 2006, the emphasis moved towards providing stable continuous production, as is required in the immediate run-up to first data, and thereafter. Here, we discuss the experience of the production done on EGEE resources, using submission based on the gLite WMS, CondorG and a system using Condor Glide-ins. The overall wall time efficiency of around 90% is largely independent of the submission method, and the dominant source of wasted CPU time comes from data handling issues. The efficiency of grid job submission is significantly worse than this, and the glide-in method benefits greatly from factorising this out.

  10. Large scale study of tooth enamel

    SciTech Connect

    Bodart, F.; Deconninck, G.; Martin, M.Th.

    1981-04-01

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analysed using PIXE, backscattering and nuclear reaction techniques. The results were analysed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population.

  11. Scaling and Criticality in Large-Scale Neuronal Activity

    NASA Astrophysics Data System (ADS)

    Linkenkaer-Hansen, K.

    The human brain during wakeful rest spontaneously generates large-scale neuronal network oscillations at around 10 and 20 Hz that can be measured non-invasively using magnetoencephalography (MEG) or electroencephalography (EEG). In this chapter, spontaneous oscillations are viewed as the outcome of a self-organizing stochastic process. The aim is to introduce the general prerequisites for stochastic systems to evolve to the critical state and to explain their neurophysiological equivalents. I review the recent evidence that the theory of self-organized criticality (SOC) may provide a unifying explanation for the large variability in amplitude, duration, and recurrence of spontaneous network oscillations, as well as the high susceptibility to perturbations and the long-range power-law temporal correlations in their amplitude envelope.
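
The long-range power-law temporal correlations mentioned above are commonly quantified with detrended fluctuation analysis (DFA). The following is a minimal sketch of DFA, applied here to synthetic uncorrelated noise (for which the scaling exponent is near 0.5; long-range-correlated oscillation envelopes yield exponents above 0.5):

```python
import numpy as np

def dfa_exponent(signal, scales):
    """Detrended fluctuation analysis: slope of log F(n) vs log n.
    ~0.5 for uncorrelated noise, >0.5 for long-range correlations."""
    profile = np.cumsum(signal - signal.mean())   # integrated signal
    flucts = []
    for n in scales:
        n_boxes = len(profile) // n
        f2 = 0.0
        for i in range(n_boxes):
            seg = profile[i * n:(i + 1) * n]
            t = np.arange(n)
            coeffs = np.polyfit(t, seg, 1)        # linear detrend per box
            f2 += np.mean((seg - np.polyval(coeffs, t)) ** 2)
        flucts.append(np.sqrt(f2 / n_boxes))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.standard_normal(4096)                 # no temporal correlations
scales = [16, 32, 64, 128, 256, 512]
alpha = dfa_exponent(white, scales)
```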

  12. Internationalization Measures in Large Scale Research Projects

    NASA Astrophysics Data System (ADS)

    Soeding, Emanuel; Smith, Nancy

    2017-04-01

Large scale research projects (LSRP) often serve as flagships used by universities or research institutions to demonstrate their performance and capability to stakeholders and other interested parties. As global competition among universities for recruiting the brightest minds has increased, effective internationalization measures have become hot topics for universities and LSRP alike. Nevertheless, most projects and universities have little experience in conducting these measures and making internationalization a cost-efficient and useful activity. Furthermore, such undertakings must continually be justified to the project PIs as important, valuable tools for improving the capacity of the project and the research location. There is a variety of measures suited to supporting universities in international recruitment, including institutional partnerships, research marketing, a welcome culture, support for scientific mobility and an effective alumni strategy. These activities, although often conducted by different university entities, are interlocked and can be very powerful if interfaced effectively. On this poster we display a number of internationalization measures for various target groups and identify interfaces through which project management, university administration, researchers and international partners can work together, exchange information and improve processes in order to recruit, support and retain the brightest minds for a project.

  13. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.
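
The misalignment angle used as a discriminant above is simply the angle between two 3-vectors, the predicted Local Group peculiar velocity and the CMB dipole apex direction. A small sketch with hypothetical direction vectors (not the measured values):

```python
import numpy as np

def misalignment_angle_deg(v, apex):
    """Angle between a peculiar-velocity vector and the CMB dipole apex."""
    cosang = np.dot(v, apex) / (np.linalg.norm(v) * np.linalg.norm(apex))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical Cartesian unit vectors, for illustration only:
v_lg = np.array([0.8, 0.5, 0.3])    # predicted Local Group velocity direction
apex = np.array([1.0, 0.0, 0.0])    # CMB dipole apex direction

angle = misalignment_angle_deg(v_lg, apex)
```

The `np.clip` guards against floating-point round-off pushing the cosine marginally outside [-1, 1].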

  14. Large-scale Intelligent Transportation Systems simulation

    SciTech Connect

    Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.

    1995-06-01

A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and of Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (displaying the position and attributes of instrumented vehicles) and can provide two-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large scale problems. A novel feature of our design is that vehicles are represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like a real vehicle. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.

  16. Large-scale Globally Propagating Coronal Waves.

    PubMed

    Warmuth, Alexander

Large-scale, globally propagating wave-like disturbances have been observed in the solar chromosphere and by inference in the corona since the 1960s. However, detailed analysis of these phenomena has only been conducted since the late 1990s. This was prompted by the availability of high-cadence coronal imaging data from numerous space-based instruments, which routinely show spectacular globally propagating bright fronts. Coronal waves, as these perturbations are usually referred to, have now been observed in a wide range of spectral channels, yielding a wealth of information. Many findings have supported the "classical" interpretation of the disturbances: fast-mode MHD waves or shocks that are propagating in the solar corona. However, observations that seemed inconsistent with this picture have stimulated the development of alternative models in which "pseudo waves" are generated by magnetic reconfiguration in the framework of an expanding coronal mass ejection. This has resulted in a vigorous debate on the physical nature of these disturbances. This review focuses on demonstrating how the numerous observational findings of the last one and a half decades can be used to constrain our models of large-scale coronal waves, and how a coherent physical understanding of these disturbances is finally emerging.

  17. Territorial Polymers and Large Scale Genome Organization

    NASA Astrophysics Data System (ADS)

    Grosberg, Alexander

    2012-02-01

Chromatin fiber in the interphase nucleus represents, effectively, a very long polymer packed in a restricted volume. Although polymer models of chromatin organization have been considered, most of them disregard the fact that DNA has to stay not too entangled in order to function properly. One polymer model with no entanglements is the melt of unknotted, unconcatenated rings. Extensive simulations indicate that rings in the melt at large length (monomer number) N approach the compact state, with gyration radius scaling as N^1/3, suggesting that every ring is compact and segregated from the surrounding rings. The segregation is consistent with the known phenomenon of chromosome territories. The surface exponent β (describing the number of contacts between neighboring rings, scaling as N^β) appears only slightly below unity, β ≈ 0.95. This suggests that the loop factor (the probability for two monomers a linear distance s apart to meet) should decay as s^-γ, where γ = 2 - β is slightly above one. The latter result is consistent with HiC data on real human interphase chromosomes, and does not contradict the older FISH data. The dynamics of rings in the melt indicates that the motion of one ring remains subdiffusive on time scales well above the stress relaxation time.

  18. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  19. Affine scaling transformation algorithms for harmonic retrieval in a compressive sampling framework

    NASA Astrophysics Data System (ADS)

    Cabrera, Sergio D.; Rosiles, Jose Gerardo; Brito, Alejandro E.

    2007-09-01

In this paper we investigate the use of the Affine Scaling Transformation (AST) family of algorithms in solving the sparse signal recovery problem of harmonic retrieval for the DFT-grid frequencies case. We present the problem in the more general Compressive Sampling/Sensing (CS) framework, where any set of incomplete, linearly independent measurements can be used to recover or approximate a sparse signal. The compressive sampling problem has been approached mostly as a problem of l1-norm minimization, which can be solved via an associated linear programming problem. More recently, attention has shifted to the random linear projection measurements case. For the harmonic retrieval problem, we focus on linear measurements in the form of: consecutively located time samples, randomly located time samples, and (Gaussian) random linear projections. We use the AST family of algorithms, which is applicable to the more general problem of minimizing the lp norm-like diversity measure that includes the numerosity (p=0) and the l1 norm (p=1). Of particular interest in this paper is to experimentally find a relationship between the minimum number M of measurements needed for perfect recovery and the number of components K of the sparse signal, which is N samples long. Of further interest is the number of AST iterations required to converge to a solution for various values of the parameter p. In addition, we quantify the reconstruction error to assess the closeness of the AST solution to the original signal. Results show that the AST for p=1 requires 3-5 times more iterations to converge to its solution than the AST for p=0. The minimum number of data measurements needed for perfect recovery is approximately the same on average for all values of p; however, there is an increasing spread as p is reduced from p=1 to p=0. Finally, we briefly contrast the AST results with those obtained using another l1 minimization algorithm solver.
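
The AST family is closely related to iteratively reweighted minimum-norm (FOCUSS-type) solvers for the lp diversity measure. The following is a minimal sketch of that idea under the random-linear-projection measurement case, with assumed problem sizes; it is not the authors' exact algorithm:

```python
import numpy as np

def ast_focuss(A, y, p=1.0, n_iter=100, eps=1e-9):
    """Iteratively reweighted minimum-norm solver (FOCUSS family):
    approximately minimizes the l_p diversity measure subject to A @ x = y."""
    m, n = A.shape
    x = np.ones(n)                               # neutral starting point
    for _ in range(n_iter):
        w = np.abs(x) ** (1.0 - p / 2.0) + eps   # affine-scaling-style weights
        Aw = A * w                               # column scaling A @ diag(w)
        x = w * (np.linalg.pinv(Aw) @ y)         # minimum-norm step, rescaled
    return x

rng = np.random.default_rng(0)
n, m, k = 50, 25, 3                  # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))      # Gaussian random linear projections
x_hat = ast_focuss(A, A @ x_true, p=1.0)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The reweighting progressively concentrates energy on the few active components, which is the mechanism behind the sparsity-promoting behavior described in the abstract.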

  20. Improving Recent Large-Scale Pulsar Surveys

    NASA Astrophysics Data System (ADS)

    Cardoso, Rogerio Fernando; Ransom, S.

    2011-01-01

Pulsars are unique in that they act as celestial laboratories for precise tests of gravity and other extreme physics (Kramer 2004). There are approximately 2000 known pulsars today, which is less than ten percent of the pulsars in the Milky Way according to theoretical models (Lorimer 2004). Out of these 2000 known pulsars, approximately ten percent are known millisecond pulsars, objects used for their period stability for detailed physics tests and searches for gravitational radiation (Lorimer 2008). As the field and instrumentation progress, pulsar astronomers attempt to overcome observational biases and detect new pulsars, consequently discovering new millisecond pulsars. We attempt to improve large scale pulsar surveys by examining three recent pulsar surveys. The first, the Green Bank Telescope 350MHz Drift Scan, a low frequency isotropic survey of the northern sky, has yielded a large number of candidates that were visually inspected and identified, resulting in over 34,000 candidates viewed, dozens of detections of known pulsars, and the discovery of a new low-flux pulsar, PSR J1911+22. The second, the PALFA survey, is a high frequency survey of the galactic plane with the Arecibo telescope. We created a processing pipeline for the PALFA survey at the National Radio Astronomy Observatory in Charlottesville, VA, in addition to making needed modifications upon advice from the PALFA consortium. The third survey examined is a new GBT 820MHz survey devoted to finding new millisecond pulsars by observing the target-rich environment of unidentified sources in the Fermi LAT catalogue. By approaching these three pulsar surveys at different stages, we seek to improve the success rates of large scale surveys, and hence the possibility for ground-breaking work in both basic physics and astrophysics.

  1. Multifrequency time-harmonic elastography for the measurement of liver viscoelasticity in large tissue windows.

    PubMed

    Tzschätzsch, Heiko; Ipek-Ugay, Selcan; Trong, Manh Nguyen; Guo, Jing; Eggers, Jonathan; Gentz, Enno; Fischer, Thomas; Schultz, Michael; Braun, Jürgen; Sack, Ingolf

    2015-03-01

    Elastography of the liver for the non-invasive diagnosis of hepatic fibrosis is an established method. However, investigations of obese patients or patients with ascites are often limited by small and superficial elastographic windows. Therefore, multifrequency time-harmonic elastography (THE) based on time-resolved A-line ultrasound has recently been developed for measuring liver viscoelasticity in wide soft tissue windows and at greater depths. In this study, THE was integrated into a clinical B-mode scanner connected to a dedicated actuator bed driven by superimposed vibrations of 30- to 60-Hz frequencies. The resulting shear waves in the liver were captured along multiple profiles 7 to 14 cm in width and automatically processed for reconstruction of mean efficient shear wave speed and shear wave dispersion slope. This new modality was tested in healthy volunteers and 22 patients with clinically proven cirrhosis. Patients could be separated from controls by higher shear wave speeds (3.11 ± 0.64 m/s, 2.14-4.81 m/s, vs. 1.74 ± 0.10 m/s, 1.60-1.91 m/s) without significant degradation of data by high body mass index or ascites. Furthermore, the wave speed dispersion slope was significantly (p = 0.0025) lower in controls (5.2 ± 1.8 m/s/kHz) than in patients (39.1 ± 32.2 m/s/kHz). In conclusion, THE is useful for the diagnosis of cirrhosis in large tissue windows.
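
The core reconstruction step, recovering shear wave speed from time-harmonic data, can be illustrated by extracting the phase of a synthetic plane wave at two A-line positions. This sketch assumes an idealized, noise-free wave and is not the authors' processing chain:

```python
import numpy as np

# Synthetic plane shear wave u(x, t) = sin(2*pi*f*t - k*x).
f, c_true = 40.0, 1.7          # vibration frequency (Hz), shear speed (m/s)
k = 2 * np.pi * f / c_true     # wavenumber (rad/m)
fs, n = 1000.0, 1000           # sampling rate (Hz), samples (integer cycles)
t = np.arange(n) / fs
x1, x2 = 0.00, 0.01            # A-line positions (m), spacing < half wavelength

u1 = np.sin(2 * np.pi * f * t - k * x1)
u2 = np.sin(2 * np.pi * f * t - k * x2)

# Phase at the drive frequency from the corresponding FFT bin; the spatial
# phase gradient gives the wavenumber, and hence the speed c = omega / k.
bin_f = int(round(f * n / fs))
phi1 = np.angle(np.fft.rfft(u1)[bin_f])
phi2 = np.angle(np.fft.rfft(u2)[bin_f])
k_est = (phi1 - phi2) / (x2 - x1)
c_est = 2 * np.pi * f / k_est
```

The A-line spacing must be smaller than half a wavelength so the phase difference does not wrap; in the multifrequency method, this estimate would be repeated at each drive frequency to obtain the dispersion slope.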

  2. Efficient, large scale separation of coal macerals

    SciTech Connect

    Dyrkacz, G.R.; Bloomquist, C.A.A.

    1988-01-01

The authors believe that the separation of macerals by continuous flow centrifugation offers a simple technique for the large scale separation of macerals. With relatively little cost (approximately $10K), it provides an opportunity for obtaining quite pure maceral fractions. Although they have not completely worked out all the nuances of this separation system, they believe that the problems they have indicated can be minimized to pose only minor inconvenience. It cannot be said that this system completely bypasses the disagreeable tedium or time involved in separating macerals, nor will it by itself overcome the mental inertia required to make maceral separation an accepted necessary fact in fundamental coal science. However, they find their particular brand of continuous flow centrifugation is considerably faster than sink/float separation, can provide a good quality product with even one separation cycle, and permits the handling of more material than a conventional sink/float centrifuge separation.

  3. Grid sensitivity capability for large scale structures

    NASA Technical Reports Server (NTRS)

    Nagendra, Gopal K.; Wallerstein, David V.

    1989-01-01

    The considerations and the resultant approach used to implement design sensitivity capability for grids into a large scale, general purpose finite element system (MSC/NASTRAN) are presented. The design variables are grid perturbations with a rather general linking capability. Moreover, shape and sizing variables may be linked together. The design is general enough to facilitate geometric modeling techniques for generating design variable linking schemes in an easy and straightforward manner. Test cases have been run and validated by comparison with the overall finite difference method. The linking of a design sensitivity capability for shape variables in MSC/NASTRAN with an optimizer would give designers a powerful, automated tool to carry out practical optimization design of real life, complicated structures.

  4. Engineering management of large scale systems

    NASA Technical Reports Server (NTRS)

    Sanders, Serita; Gill, Tepper L.; Paul, Arthur S.

    1989-01-01

The organization of high-technology and engineering problem solving has given rise to an emerging concept. Reasoning principles for integrating traditional engineering problem solving with system theory, management sciences, behavioral decision theory, and planning and design approaches can be incorporated into a methodological approach to solving problems with a long range perspective. Long range planning has great potential to improve productivity by using a systematic and organized approach. Thus, efficiency and cost effectiveness are the driving forces in promoting the organization of engineering problems. Aspects of systems engineering that provide an understanding of the management of large scale systems are broadly covered here. Due to the focus and application of the research, other significant factors (e.g., human behavior, decision making, etc.) are not emphasized but are considered.

  5. Large scale cryogenic fluid systems testing

    NASA Technical Reports Server (NTRS)

    1992-01-01

    NASA Lewis Research Center's Cryogenic Fluid Systems Branch (CFSB) within the Space Propulsion Technology Division (SPTD) has the ultimate goal of enabling the long term storage and in-space fueling/resupply operations for spacecraft and reusable vehicles in support of space exploration. Using analytical modeling, ground based testing, and on-orbit experimentation, the CFSB is studying three primary categories of fluid technology: storage, supply, and transfer. The CFSB is also investigating fluid handling, advanced instrumentation, and tank structures and materials. Ground based testing of large-scale systems is done using liquid hydrogen as a test fluid at the Cryogenic Propellant Tank Facility (K-site) at Lewis' Plum Brook Station in Sandusky, Ohio. A general overview of tests involving liquid transfer, thermal control, pressure control, and pressurization is given.

  6. Large Scale Quantum Simulations of Nuclear Pasta

    NASA Astrophysics Data System (ADS)

    Fattoyev, Farrukh J.; Horowitz, Charles J.; Schuetrumpf, Bastian

    2016-03-01

Complex and exotic nuclear geometries collectively referred to as "nuclear pasta" are expected to naturally exist in the crust of neutron stars and in supernovae matter. Using a set of self-consistent microscopic nuclear energy density functionals, we present the first results of large scale quantum simulations of pasta phases at baryon densities 0.03 < ρ < 0.10 fm^-3, proton fractions 0.05

  7. Jovian large-scale stratospheric circulation

    NASA Technical Reports Server (NTRS)

    West, R. A.; Friedson, A. J.; Appleby, J. F.

    1992-01-01

An attempt is made to diagnose the annual-average mean meridional residual Jovian large-scale stratospheric circulation from observations of the temperature and reflected sunlight that reveal the morphology of the aerosol heating. The annual mean solar heating, total radiative flux divergence, mass stream function, and Eliassen-Palm flux divergence are shown. The stratospheric radiative flux divergence is dominated at high latitudes by aerosol absorption. Between the 270 and 100 mbar pressure levels, where there is no aerosol heating in the model, the structure of the circulation at low- to midlatitudes is governed by the meridional variation of infrared cooling in association with the variation of zonal mean temperatures observed by IRIS. The principal features of the vertical velocity profile found by Gierasch et al. (1986) are recovered in the present calculation.

  8. Large-scale parametric survival analysis.

    PubMed

    Mittal, Sushil; Madigan, David; Cheng, Jerry Q; Burd, Randall S

    2013-10-15

Survival analysis has been a topic of active statistical research in the past few decades, with applications spread across several areas. Traditional applications usually consider data with only small numbers of predictors and a few hundred or a few thousand observations. Recent advances in data acquisition techniques and computation power have led to considerable interest in analyzing very high-dimensional data where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that application of regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models.
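The paper's tool applies cyclic coordinate descent to regularized parametric survival likelihoods. The update pattern can be illustrated on the simpler ridge-penalized least-squares objective, where each one-dimensional subproblem has a closed form. A minimal sketch (not the authors' implementation; all names are illustrative):

```python
import numpy as np

def cyclic_coordinate_descent(X, y, lam=1.0, n_sweeps=100):
    """Fit ridge-regularized least squares by cyclically updating one
    coefficient at a time while holding the others fixed."""
    n, p = X.shape
    beta = np.zeros(p)
    residual = y - X @ beta          # equals y initially
    col_sq = (X ** 2).sum(axis=0)    # per-column squared norms
    for _ in range(n_sweeps):
        for j in range(p):
            # gradient information with feature j's contribution added back
            rho = X[:, j] @ residual + col_sq[j] * beta[j]
            new_bj = rho / (col_sq[j] + lam)   # closed-form 1-D ridge update
            residual += X[:, j] * (beta[j] - new_bj)
            beta[j] = new_bj
    return beta

# tiny synthetic usage example
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_beta = np.array([2.0, -1.0, 0.0, 0.5, 0.0])
y = X @ true_beta + 0.01 * rng.normal(size=200)
beta_hat = cyclic_coordinate_descent(X, y, lam=0.1)
```

The same sweep structure carries over to survival likelihoods; only the one-dimensional update changes (there, a Newton step on the penalized partial likelihood rather than a closed form).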

  9. Large-Scale Parametric Survival Analysis†

    PubMed Central

    Mittal, Sushil; Madigan, David; Cheng, Jerry; Burd, Randall S.

    2013-01-01

Survival analysis has been a topic of active statistical research in the past few decades, with applications spread across several areas. Traditional applications usually consider data with only small numbers of predictors and a few hundred or a few thousand observations. Recent advances in data acquisition techniques and computation power have led to considerable interest in analyzing very high-dimensional data where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that application of regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models. PMID:23625862

  10. The challenge of large-scale structure

    NASA Astrophysics Data System (ADS)

    Gregory, S. A.

    1996-03-01

    The tasks that I have assumed for myself in this presentation include three separate parts. The first, appropriate to the particular setting of this meeting, is to review the basic work of the founding of this field; the appropriateness comes from the fact that W. G. Tifft made immense contributions that are not often realized by the astronomical community. The second task is to outline the general tone of the observational evidence for large scale structures. (Here, in particular, I cannot claim to be complete. I beg forgiveness from any workers who are left out by my oversight for lack of space and time.) The third task is to point out some of the major aspects of the field that may represent the clues by which some brilliant sleuth will ultimately figure out how galaxies formed.

  11. Modeling the Internet's large-scale topology

    PubMed Central

    Yook, Soon-Hyung; Jeong, Hawoong; Barabási, Albert-László

    2002-01-01

Network generators that capture the Internet's large-scale topology are crucial for the development of efficient routing protocols and modeling Internet traffic. Our ability to design realistic generators is limited by the incomplete understanding of the fundamental driving forces that affect the Internet's evolution. By combining several independent databases capturing the time evolution, topology, and physical layout of the Internet, we identify the universal mechanisms that shape the Internet's router and autonomous system level topology. We find that the physical layout of nodes forms a fractal set, determined by population density patterns around the globe. The placement of links is driven by competition between preferential attachment and linear distance dependence, a marked departure from the currently used exponential laws. The universal parameters that we extract significantly restrict the class of potentially correct Internet models and indicate that the networks created by all available topology generators are fundamentally different from the current Internet. PMID:12368484

  12. Large-scale sequential quadratic programming algorithms

    SciTech Connect

    Eldersveld, S.K.

    1992-09-01

    The problem addressed is the general nonlinear programming problem: finding a local minimizer for a nonlinear function subject to a mixture of nonlinear equality and inequality constraints. The methods studied are in the class of sequential quadratic programming (SQP) algorithms, which have previously proved successful for problems of moderate size. Our goal is to devise an SQP algorithm that is applicable to large-scale optimization problems, using sparse data structures and storing less curvature information but maintaining the property of superlinear convergence. The main features are: 1. The use of a quasi-Newton approximation to the reduced Hessian of the Lagrangian function. Only an estimate of the reduced Hessian matrix is required by our algorithm. The impact of not having available the full Hessian approximation is studied and alternative estimates are constructed. 2. The use of a transformation matrix Q. This allows the QP gradient to be computed easily when only the reduced Hessian approximation is maintained. 3. The use of a reduced-gradient form of the basis for the null space of the working set. This choice of basis is more practical than an orthogonal null-space basis for large-scale problems. The continuity condition for this choice is proven. 4. The use of incomplete solutions of quadratic programming subproblems. Certain iterates generated by an active-set method for the QP subproblem are used in place of the QP minimizer to define the search direction for the nonlinear problem. An implementation of the new algorithm has been obtained by modifying the code MINOS. Results and comparisons with MINOS and NPSOL are given for the new algorithm on a set of 92 test problems.
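Feature 1 above maintains a quasi-Newton approximation to the reduced Hessian of the Lagrangian. A minimal sketch of the kind of (Powell-damped) BFGS update such methods rely on, assuming the step s and the gradient change are available; this is illustrative, not the MINOS-based implementation described in the abstract:

```python
import numpy as np

def bfgs_update(H, s, g_new, g_old, damping=0.2):
    """One damped BFGS update of a quasi-Newton Hessian approximation H.
    s is the step taken; y = g_new - g_old is the gradient change.
    Powell's damping blends y toward H s when s.y is too small or negative,
    which keeps H positive definite."""
    y = g_new - g_old
    Hs = H @ s
    sHs = s @ Hs
    sy = s @ y
    if sy < damping * sHs:
        theta = (1 - damping) * sHs / (sHs - sy)
        y = theta * y + (1 - theta) * Hs   # damped gradient-change vector
        sy = s @ y
    return H - np.outer(Hs, Hs) / sHs + np.outer(y, y) / sy
```

By construction the updated matrix satisfies the secant condition H_new s = y, so on a quadratic with exact gradient differences the approximation reproduces the true curvature along each step.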

  13. Supporting large-scale computational science

    SciTech Connect

    Musick, R., LLNL

    1998-02-19

Business needs have driven the development of commercial database systems since their inception. As a result, there has been a strong focus on supporting many users, minimizing the potential corruption or loss of data, and maximizing performance metrics like transactions per second, or TPC-C and TPC-D results. It turns out that these optimizations have little to do with the needs of the scientific community, and in particular have little impact on improving the management and use of large-scale high-dimensional data. At the same time, there is an unanswered need in the scientific community for many of the benefits offered by a robust DBMS. For example, tying an ad-hoc query language such as SQL together with a visualization toolkit would be a powerful enhancement to current capabilities. Unfortunately, there has been little emphasis or discussion in the VLDB community on this mismatch over the last decade. The goal of the paper is to identify the specific issues that need to be resolved before large-scale scientific applications can make use of DBMS products. This topic is addressed in the context of an evaluation of commercial DBMS technology applied to the exploration of data generated by the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The paper describes the data being generated for ASCI as well as current capabilities for interacting with and exploring this data. The attraction of applying standard DBMS technology to this domain is discussed, as well as the technical and business issues that currently make this an infeasible solution.

  14. Large scale mechanical metamaterials as seismic shields

    NASA Astrophysics Data System (ADS)

    Miniaci, Marco; Krushynska, Anastasiia; Bosia, Federico; Pugno, Nicola M.

    2016-08-01

Earthquakes represent one of the most catastrophic natural events affecting mankind. At present, a universally accepted risk mitigation strategy for seismic events remains to be proposed. Most approaches are based on vibration isolation of structures rather than on the remote shielding of incoming waves. In this work, we propose a novel approach to the problem and discuss the feasibility of a passive isolation strategy for seismic waves based on large-scale mechanical metamaterials, including, for the first time, numerical analysis of both surface and guided waves, soil dissipation effects, and full 3D simulations. The study focuses on realistic structures that can be effective in frequency ranges of interest for seismic waves, and optimal design criteria are provided, exploring different metamaterial configurations, combining phononic crystals and locally resonant structures and different ranges of mechanical properties. Dispersion analysis and full-scale 3D transient wave transmission simulations are carried out on finite size systems to assess the seismic wave amplitude attenuation in realistic conditions. Results reveal that both surface and bulk seismic waves can be considerably attenuated, making this strategy viable for the protection of civil structures against seismic risk. The proposed remote shielding approach could open up new perspectives in the field of seismology and in related areas of low-frequency vibration damping or blast protection.

  15. Large Scale EOF Analysis of Climate Data

    NASA Astrophysics Data System (ADS)

    Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.

    2016-12-01

We present a distributed approach towards extracting EOFs from 3D climate data. We implement the method in Apache Spark, and process multi-TB sized datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CFSR, a 2.2 terabyte-sized data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean, at 6 hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare to the EOFs computed simply on the surface temperature field. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation including the ENSO and PDO that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that exist below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to analysis of 3D atmospheric climatic data sets, including multiple variables. Because the atmosphere changes on a quicker time-scale than the ocean, we expect that the results will demonstrate an even greater advantage to computing 3D EOFs in lieu of 2D EOFs.
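The core computation, extracting EOFs from a latitude-weighted anomaly matrix, reduces to an SVD (distributed across Spark executors in the paper's case). A minimal single-node numpy sketch of the same operation, with sqrt(cos(latitude)) area weighting; function and variable names are illustrative:

```python
import numpy as np

def eofs(field, lats, n_modes=3):
    """EOFs of a (time, lat, lon) field via SVD of the weighted anomaly
    matrix. sqrt(cos(latitude)) weighting makes variance area-equitable."""
    nt, nlat, nlon = field.shape
    anom = field - field.mean(axis=0)            # remove the time mean
    w = np.sqrt(np.cos(np.deg2rad(lats)))        # shape (nlat,)
    X = (anom * w[None, :, None]).reshape(nt, nlat * nlon)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    pcs = U[:, :n_modes] * s[:n_modes]           # principal component series
    eof_maps = Vt[:n_modes].reshape(n_modes, nlat, nlon)
    var_frac = s[:n_modes] ** 2 / (s ** 2).sum() # explained variance fraction
    return eof_maps, pcs, var_frac
```

The 3D (depth-resolved) case in the abstract simply flattens the extra level dimension into the spatial axis; the SVD step is unchanged, which is what makes the distributed formulation attractive.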

  16. Spatial solitons in photonic lattices with large-scale defects

    NASA Astrophysics Data System (ADS)

    Yang, Xiao-Yu; Zheng, Jiang-Bo; Dong, Liang-Wei

    2011-03-01

We address the existence, stability and propagation dynamics of solitons supported by large-scale defects surrounded by the harmonic photonic lattices imprinted in the defocusing saturable nonlinear medium. Several families of soliton solutions, including flat-topped, dipole-like, and multipole-like solitons, can be supported by the defected lattices with different heights of defects. The width of the existence domain of solitons is determined solely by the saturable parameter. The existence domains of various types of solitons can be shifted by variations of the defect size, lattice depth and soliton order. Solitons in the model are stable in a wide parameter window, provided that the propagation constant exceeds a critical value, which is in sharp contrast to the case where soliton trains are supported by periodic lattices imprinted in the defocusing saturable nonlinear medium. We also find stable solitons in the semi-infinite gap, which rarely occur in defocusing media. Project supported by the National Natural Science Foundation of China (Grant No. 10704067) and the Natural Science Foundation of Zhejiang Province, China (Grant No. Y6100381).

  17. CLASS: The Cosmology Large Angular Scale Surveyor

    NASA Technical Reports Server (NTRS)

    Essinger-Hileman, Thomas; Ali, Aamir; Amiri, Mandana; Appel, John W.; Araujo, Derek; Bennett, Charles L.; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Chuss, David T.; Colazo, Felipe; Crowe, Erik; Denis, Kevin; Dunner, Rolando; Eimer, Joseph; Gothe, Dominik; Halpern, Mark; Kogut, Alan J.; Miller, Nathan; Moseley, Samuel; Rostem, Karwan; Stevenson, Thomas; Towner, Deborah; U-Yen, Kongpop; Wollack, Edward

    2014-01-01

The Cosmology Large Angular Scale Surveyor (CLASS) is an experiment to measure the signature of a gravitational wave background from inflation in the polarization of the cosmic microwave background (CMB). CLASS is a multi-frequency array of four telescopes operating from a high-altitude site in the Atacama Desert in Chile. CLASS will survey 70% of the sky in four frequency bands centered at 38, 93, 148, and 217 GHz, which are chosen to straddle the Galactic-foreground minimum while avoiding strong atmospheric emission lines. This broad frequency coverage ensures that CLASS can distinguish Galactic emission from the CMB. The sky fraction of the CLASS survey will allow the full shape of the primordial B-mode power spectrum to be characterized, including the signal from reionization at low ℓ. Its unique combination of large sky coverage, control of systematic errors, and high sensitivity will allow CLASS to measure or place upper limits on the tensor-to-scalar ratio at a level of r = 0.01 and make a cosmic-variance-limited measurement of the optical depth to the surface of last scattering, τ. (c) 2014 Society of Photo-Optical Instrumentation Engineers (SPIE).

  18. Large-scale wind turbine structures

    NASA Technical Reports Server (NTRS)

    Spera, David A.

    1988-01-01

The purpose of this presentation is to show how structural technology was applied in the design of modern wind turbines, which were recently brought to an advanced stage of development as sources of renewable power. Wind turbine structures present many difficult problems because they are relatively slender and flexible; subject to vibration and aeroelastic instabilities; acted upon by loads which are often nondeterministic; operated continuously with little maintenance in all weather; and dominated by life-cycle cost considerations. Progress in horizontal-axis wind turbine (HAWT) development was paced by progress in the understanding of structural loads, modeling of structural dynamic response, and designing of innovative structural configurations. During the past 15 years a series of large HAWTs was developed. This has culminated in the recent completion of the world's largest operating wind turbine, the 3.2 MW Mod-5B power plant installed on the island of Oahu, Hawaii. Some of the applications of structures technology to wind turbines will be illustrated by referring to the Mod-5B design. First, a video overview will be presented to provide familiarization with the Mod-5B project and the important components of the wind turbine system. Next, the structural requirements for large-scale wind turbines will be discussed, emphasizing the difficult fatigue-life requirements. Finally, the procedures used to design the structure will be presented, including the use of the fracture mechanics approach for determining allowable fatigue stresses.

  19. Large-Scale Spacecraft Fire Safety Tests

    NASA Technical Reports Server (NTRS)

Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier

    2014-01-01

An international collaborative program is underway to address open issues in spacecraft fire safety. Because of limited access to long-term low-gravity conditions and the small volume generally allotted for these experiments, there have been relatively few experiments that directly study spacecraft fire safety under low-gravity conditions. Furthermore, none of these experiments have studied sample sizes and environment conditions typical of those expected in a spacecraft fire. The major constraint has been the size of the sample, with prior experiments limited to samples of the order of 10 cm in length and width or smaller. This lack of experimental data forces spacecraft designers to base their designs and safety precautions on 1-g understanding of flame spread, fire detection, and suppression. However, low-gravity combustion research has demonstrated substantial differences in flame behavior in low-gravity. This, combined with the differences caused by the confined spacecraft environment, necessitates practical scale spacecraft fire safety research to mitigate risks for future space missions. To address this issue, a large-scale spacecraft fire experiment is under development by NASA and an international team of investigators. This poster presents the objectives, status, and concept of this collaborative international project (Saffire). The project plan is to conduct fire safety experiments on three sequential flights of an unmanned ISS re-supply spacecraft (the Orbital Cygnus vehicle) after they have completed their delivery of cargo to the ISS and have begun their return journeys to Earth. On two flights (Saffire-1 and Saffire-3), the experiment will consist of a flame spread test involving a meter-scale sample ignited in the pressurized volume of the spacecraft and allowed to burn to completion while measurements are made. On one of the flights (Saffire-2), 9 smaller (5 x 30 cm) samples will be tested to evaluate NASA's material flammability screening tests.

  20. Scaling of Harmonic Oscillator Eigenfunctions and Their Nodal Sets Around the Caustic

    NASA Astrophysics Data System (ADS)

    Hanin, Boris; Zelditch, Steve; Zhou, Peng

    2017-03-01

We study the scaling asymptotics of the eigenspace projection kernels Π_{ħ,E}(x, y) of the isotropic harmonic oscillator Ĥ_ħ = -ħ²Δ + |x|² of eigenvalue E = ħ(N + d/2) in the semi-classical limit ħ → 0. The principal result is an explicit formula for the scaling asymptotics of Π_{ħ,E}(x, y) for x, y in a ħ^{2/3} neighborhood of the caustic C_E as ħ → 0. The scaling asymptotics are applied to the distribution of nodal sets of Gaussian random eigenfunctions around the caustic as ħ → 0. In previous work we proved that the density of zeros of Gaussian random eigenfunctions of Ĥ_ħ has different orders in the Planck constant ħ in the allowed and forbidden regions: in the allowed region the density is of order ħ^{-1}, while it is ħ^{-1/2} in the forbidden region. Our main result on nodal sets is that the density of zeros is of order ħ^{-2/3} in a ħ^{2/3}-tube around the caustic. This tube radius is the "critical radius". For annuli of larger inner and outer radii ħ^α with 0 < α < 2/3, we obtain density results that interpolate between this critical-radius result and our prior ones in the allowed and forbidden regions. We also show that the Hausdorff (d-2)-dimensional measure of the intersection of the nodal set with the caustic is of order ħ^{-2/3}.

  1. Demagnetization harmonic effects on the magnetization of granular systems on a macroscopic scale: the superconducting case

    NASA Astrophysics Data System (ADS)

    Mancusi, D.; Galluzzi, A.; Pace, S.; Polichetti, M.

    2017-10-01

A model has been developed to determine the effective ac magnetic response of magnetic systems, taking into account the demagnetization effects arising from the sample geometry, which determine the out-of-phase component at the applied fundamental frequency as well as the higher harmonic components. Indeed, demagnetization fields and their intermodulation can significantly affect the ac magnetic response. This approach provides a system of self-consistent linear equations relating the magnetic response to the external magnetic field by means of the nonlinear magnetic susceptibility. The model is extended to the magnetic response of granular systems in terms of the contributions of the individual grains and of the whole sample, in the presence of demagnetization effects of the whole sample and of the grains on a macroscopic scale. In particular, our model is applied to a granular superconducting system. The comparison between the performed numerical simulations and the experimental data shows that the demagnetization fields of the single grains and of the whole sample, and their intermodulation, are relevant if magnetic measurements are used to extract detailed information about the analyzed material.
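The fundamental and higher-harmonic components of the ac response that the model works with can be obtained by Fourier projection of a periodic magnetization signal sampled over an integer number of drive periods. A minimal sketch of that projection step (illustrative only, not the authors' code; all names are assumptions):

```python
import numpy as np

def harmonic_components(m_t, n_periods, n_harmonics=5):
    """Project a periodic magnetization signal m(t), sampled uniformly over
    an integer number of drive periods, onto its harmonics. Returns complex
    amplitudes c_n such that m(t) ≈ Re(sum_n c_n exp(i n w t)); |c_n| is the
    amplitude and angle(c_n) the phase of the n-th harmonic.
    Requires n_harmonics * n_periods <= len(m_t) // 2."""
    N = len(m_t)
    spectrum = np.fft.rfft(m_t) / N
    # the n-th harmonic of the drive sits in FFT bin n * n_periods
    return np.array([2 * spectrum[n * n_periods]
                     for n in range(1, n_harmonics + 1)])
```

The in-phase and out-of-phase parts discussed in the abstract are the real and imaginary parts of each c_n relative to the drive phase.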

  2. Large scale simulations of Brownian suspensions

    NASA Astrophysics Data System (ADS)

    Viera, Marc Nathaniel

Particle suspensions occur in a wide variety of natural and engineering materials. Some examples are colloids, polymers, paints, and slurries. These materials exhibit complex behavior owing to the forces which act among the particles and are transmitted through the fluid medium. Depending on the application, particle sizes range from large macroscopic molecules of 100 μm to smaller colloidal particles in the range of 10 nm to 1 μm. Particles of this size interact through interparticle forces such as electrostatic and van der Waals forces, as well as hydrodynamic forces transmitted through the fluid medium. Additionally, the particles are subjected to random thermal fluctuations in the fluid, giving rise to Brownian motion. The central objective of our research is to develop efficient numerical algorithms for the large scale dynamic simulation of particle suspensions. While previous methods have incurred a computational cost of O(N^3), where N is the number of particles, we have developed a novel algorithm capable of solving this problem in O(N ln N) operations. This has allowed us to perform dynamic simulations with up to 64,000 particles and Monte Carlo realizations of up to 1 million particles. Our algorithm follows a Stokesian dynamics formulation by evaluating many-body hydrodynamic interactions using a far-field multipole expansion combined with a near-field lubrication correction. The breakthrough O(N ln N) scaling is obtained by employing a Particle-Mesh-Ewald (PME) approach whereby near-field interactions are evaluated directly and far-field interactions are evaluated using a grid-based velocity computed with FFTs. This approach is readily extended to include the effects of Brownian motion. For interacting particles, the fluctuation-dissipation theorem requires that the individual Brownian forces satisfy a correlation based on the N-body resistance tensor R. The accurate modeling of these forces requires the computation of a matrix square root R^{1/2} for matrices up
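The fluctuation-dissipation constraint mentioned above, Brownian forces correlated through the resistance tensor R, can be sketched with a dense Cholesky factor standing in for R^{1/2}. This is illustrative only: dense Cholesky is itself O(N^3), which is precisely the step large-scale codes replace with iterative approximations of the matrix square root. All names here are assumptions:

```python
import numpy as np

def brownian_forces(R, kT=1.0, dt=1e-3, rng=None):
    """Sample a Brownian force vector F with covariance <F F^T> = 2 kT R / dt,
    using a Cholesky factor L (L L^T = R) as the matrix 'square root'.
    Any factor B with B B^T = R satisfies the fluctuation-dissipation theorem;
    scalable schemes approximate the action of R^{1/2} iteratively instead."""
    rng = np.random.default_rng() if rng is None else rng
    L = np.linalg.cholesky(R)               # requires R symmetric positive definite
    z = rng.standard_normal(R.shape[0])     # independent unit normals
    return np.sqrt(2 * kT / dt) * (L @ z)
```

Since F = a L z with a = sqrt(2 kT / dt), the covariance is a² L L^T = (2 kT / dt) R, as the theorem requires.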

  3. Population generation for large-scale simulation

    NASA Astrophysics Data System (ADS)

    Hannon, Andrew C.; King, Gary; Morrison, Clayton; Galstyan, Aram; Cohen, Paul

    2005-05-01

Computer simulation is used to research phenomena ranging from the structure of the space-time continuum to population genetics and future combat [1-3]. Multi-agent simulations in particular are now commonplace in many fields [4, 5]. By modeling populations whose complex behavior emerges from individual interactions, these simulations help to answer questions about effects where closed form solutions are difficult to solve or impossible to derive [6]. To be useful, simulations must accurately model the relevant aspects of the underlying domain. In multi-agent simulation, this means that the modeling must include both the agents and their relationships. Typically, each agent can be modeled as a set of attributes drawn from various distributions (e.g., height, morale, intelligence and so forth). Though these can interact - for example, agent height is related to agent weight - they are usually independent. Modeling relations between agents, on the other hand, adds a new layer of complexity, and tools from graph theory and social network analysis are finding increasing application [7, 8]. Recognizing the role and proper use of these techniques, however, remains the subject of ongoing research. We recently encountered these complexities while building large scale social simulations [9-11]. One of these, the Hats Simulator, is designed to be a lightweight proxy for intelligence analysis problems. Hats models a "society in a box" consisting of many simple agents, called hats. Hats gets its name from the classic spaghetti western, in which the heroes and villains are known by the color of the hats they wear. The Hats society also has its heroes and villains, but the challenge is to identify which color hat they should be wearing based on how they behave. There are three types of hats: benign hats, known terrorists, and covert terrorists. Covert terrorists look just like benign hats but act like terrorists. Population structure can make covert hat identification significantly more

  4. Management of large-scale multimedia conferencing

    NASA Astrophysics Data System (ADS)

    Cidon, Israel; Nachum, Youval

    1998-12-01

The goal of this work is to explore management strategies and algorithms for large-scale multimedia conferencing over a communication network. Since the use of multimedia conferencing is still limited, the management of such systems has not yet been studied in depth. A well-organized and human-friendly multimedia conference management system should utilize its limited resources efficiently and fairly, as well as take into account the requirements of the conference participants. The ability of the management to enforce fair policies and to quickly take into account the participants' preferences may even lead to a conference environment that is more pleasant and more effective than a similar face to face meeting. We suggest several principles for defining and solving resource sharing problems in this context. The conference resources which are addressed in this paper are the bandwidth (conference network capacity), time (participants' scheduling) and limitations of audio and visual equipment. The participants' requirements for these resources are defined and translated in terms of Quality of Service requirements and fairness criteria.

  5. Large-scale tides in general relativity

    NASA Astrophysics Data System (ADS)

    Ip, Hiu Yan; Schmidt, Fabian

    2017-02-01

Density perturbations in cosmology, i.e. spherically symmetric adiabatic perturbations of a Friedmann-Lemaître-Robertson-Walker (FLRW) spacetime, are locally exactly equivalent to a different FLRW solution, as long as their wavelength is much larger than the sound horizon of all fluid components. This fact is known as the "separate universe" paradigm. However, no such relation is known for anisotropic adiabatic perturbations, which correspond to an FLRW spacetime with large-scale tidal fields. Here, we provide a closed, fully relativistic set of evolutionary equations for the nonlinear evolution of such modes, based on the conformal Fermi (CFC) frame. We show explicitly that the tidal effects are encoded by the Weyl tensor, and are hence entirely different from an anisotropic Bianchi I spacetime, where the anisotropy is sourced by the Ricci tensor. In order to close the system, certain higher derivative terms have to be dropped. We show that this approximation is equivalent to the local tidal approximation of Hui and Bertschinger [1]. We also show that this very simple set of equations matches the exact evolution of the density field at second order, but fails at third and higher order. This provides a useful, easy-to-use framework for computing the fully relativistic growth of structure at second order.

  6. Food appropriation through large scale land acquisitions

    NASA Astrophysics Data System (ADS)

    Rulli, Maria Cristina; D'Odorico, Paolo

    2014-05-01

The increasing demand for agricultural products and the uncertainty of international food markets has recently drawn the attention of governments and agribusiness firms toward investments in productive agricultural land, mostly in the developing world. The targeted countries are typically located in regions that have remained only marginally utilized because of lack of modern technology. It is expected that in the long run large scale land acquisitions (LSLAs) for commercial farming will bring the technology required to close the existing crop yield gaps. While the extent of the acquired land and the associated appropriation of freshwater resources have been investigated in detail, the amount of food this land can produce and the number of people it could feed still need to be quantified. Here we use a unique dataset of land deals to provide a global quantitative assessment of the rates of crop and food appropriation potentially associated with LSLAs. We show how up to 300-550 million people could be fed by crops grown in the acquired land, should these investments in agriculture improve crop production and close the yield gap. In contrast, about 190-370 million people could be supported by this land without closing the yield gap. These numbers raise some concern because the food produced in the acquired land is typically exported to other regions, while the target countries exhibit high levels of malnourishment. Conversely, if used for domestic consumption, the crops harvested in the acquired land could ensure food security to the local populations.

  7. Large scale structure of the sun's corona

    NASA Astrophysics Data System (ADS)

    Kundu, Mukul R.

    Results concerning the large-scale structure of the solar corona obtained by observations at meter-decameter wavelengths are reviewed. Coronal holes observed on the disk at multiple frequencies show the radial and azimuthal geometry of the hole. At the base of the hole there is good correspondence to the chromospheric signature in He I 10,830 A, but at greater heights the hole may show departures from symmetry. Two-dimensional imaging of weak-type III bursts simultaneously with the HAO SMM coronagraph/polarimeter measurements indicate that these bursts occur along elongated features emanating from the quiet sun, corresponding in position angle to the bright coronal streamers. It is shown that the densest regions of streamers and the regions of maximum intensity of type II bursts coincide closely. Non-flare-associated type II/type IV bursts associated with coronal streamer disruption events are studied along with correlated type II burst emissions originating from distant centers on the sun.

  8. Large-scale carbon fiber tests

    NASA Technical Reports Server (NTRS)

    Pride, R. A.

    1980-01-01

    A realistic release of carbon fibers was established by burning a minimum of 45 kg of carbon fiber composite aircraft structural components in each of five large scale, outdoor aviation jet fuel fire tests. This release was quantified by several independent assessments with various instruments developed specifically for these tests. The most likely values for the mass of single carbon fibers released ranged from 0.2 percent of the initial mass of carbon fiber for the source tests (zero wind velocity) to a maximum of 0.6 percent of the initial carbon fiber mass for dissemination tests (5 to 6 m/s wind velocity). Mean fiber lengths for fibers greater than 1 mm in length ranged from 2.5 to 3.5 mm. Mean diameters ranged from 3.6 to 5.3 micrometers which was indicative of significant oxidation. Footprints of downwind dissemination of the fire released fibers were measured to 19.1 km from the fire.

  9. Large-scale clustering of cosmic voids

    NASA Astrophysics Data System (ADS)

    Chan, Kwan Chuen; Hamaus, Nico; Desjacques, Vincent

    2014-11-01

    We study the clustering of voids using N-body simulations and simple theoretical models. The excursion-set formalism describes fairly well the abundance of voids identified with the watershed algorithm, although the void formation threshold required is quite different from the spherical collapse value. The void cross bias bc is measured and its large-scale value is found to be consistent with the peak-background split results. A simple fitting formula for bc is found. We model the void auto-power spectrum taking into account the void biasing and exclusion effect. A good fit to the simulation data is obtained for voids with radii ≳30 Mpc h^-1, especially when the void biasing model is extended to 1-loop order. However, the best-fit bias parameters do not agree well with the peak-background results. Being able to fit the void auto-power spectrum is particularly important not only because it is the direct observable in galaxy surveys, but also because our method enables us to treat the bias parameters as nuisance parameters, which are sensitive to the techniques used to identify voids.

  10. Large-scale autostereoscopic outdoor display

    NASA Astrophysics Data System (ADS)

    Reitterer, Jörg; Fidler, Franz; Saint Julien-Wallsee, Ferdinand; Schmid, Gerhard; Gartner, Wolfgang; Leeb, Walter; Schmid, Ulrich

    2013-03-01

    State-of-the-art autostereoscopic displays are often limited in size, effective brightness, number of 3D viewing zones, and maximum 3D viewing distances, all of which are mandatory requirements for large-scale outdoor displays. Conventional autostereoscopic indoor concepts like lenticular lenses or parallax barriers cannot simply be adapted for these screens due to the inherent loss of effective resolution and brightness, which would reduce both image quality and sunlight readability. We have developed a modular autostereoscopic multi-view laser display concept with sunlight readable effective brightness, theoretically up to several thousand 3D viewing zones, and maximum 3D viewing distances of up to 60 meters. For proof-of-concept purposes a prototype display with two pixels was realized. Due to various manufacturing tolerances each individual pixel has slightly different optical properties, and hence the 3D image quality of the display has to be calculated stochastically. In this paper we present the corresponding stochastic model, we evaluate the simulation and measurement results of the prototype display, and we calculate the achievable autostereoscopic image quality to be expected for our concept.

  11. Numerical Modeling for Large Scale Hydrothermal System

    NASA Astrophysics Data System (ADS)

    Sohrabi, Reza; Jansen, Gunnar; Malvoisin, Benjamin; Mazzini, Adriano; Miller, Stephen A.

    2017-04-01

    Moderate-to-high enthalpy systems are driven by multiphase and multicomponent processes, fluid and rock mechanics, and heat transport processes, all of which present challenges in developing realistic numerical models of the underlying physics. The objective of this work is to present an approach, and some initial results, for modeling and understanding the dynamics of the birth of large scale hydrothermal systems. Numerical modeling of such complex systems must take into account a variety of coupled thermal, hydraulic, mechanical and chemical processes, which is numerically challenging. To provide first estimates of the behavior of these deep, complex systems, geological structures must be constrained, and the fluid dynamics, mechanics and heat transport need to be investigated in three dimensions. Modeling these processes numerically at adequate resolution and reasonable computation times requires a suite of tools that we are developing and/or utilizing to investigate such systems. Our long-term goal is to develop 3D numerical models, based on geological models, that couple mechanics with the hydraulic and thermal processes driving hydrothermal systems. Our first results from the Lusi hydrothermal system in East Java, Indonesia provide a basis for more sophisticated studies, eventually in 3D, and we introduce a workflow necessary to achieve these objectives. Future work focuses on parallelization suitable for High Performance Computing (HPC). Such developments are necessary to achieve high-resolution simulations to more fully understand the complex dynamics of hydrothermal systems.

  12. Large scale digital atlases in neuroscience

    NASA Astrophysics Data System (ADS)

    Hawrylycz, M.; Feng, D.; Lau, C.; Kuan, C.; Miller, J.; Dang, C.; Ng, L.

    2014-03-01

    Imaging in neuroscience has revolutionized our current understanding of brain structure, architecture and increasingly its function. Many characteristics of morphology, cell type, and neuronal circuitry have been elucidated through methods of neuroimaging. Combining this data in a meaningful, standardized, and accessible manner is the scope and goal of the digital brain atlas. Digital brain atlases are used today in neuroscience to characterize the spatial organization of neuronal structures, for planning and guidance during neurosurgery, and as a reference for interpreting other data modalities such as gene expression and connectivity data. The field of digital atlases is extensive and in addition to atlases of the human brain includes high quality brain atlases of the mouse, rat, rhesus macaque, and other model organisms. Using techniques based on histology, structural and functional magnetic resonance imaging as well as gene expression data, modern digital atlases use probabilistic and multimodal techniques, as well as sophisticated visualization software to form an integrated product. Toward this goal, brain atlases form a common coordinate framework for summarizing, accessing, and organizing this knowledge and will undoubtedly remain a key technology in neuroscience in the future. Since the development of its flagship project of a genome-wide image-based atlas of the mouse brain, the Allen Institute for Brain Science has used imaging as a primary data modality for many of its large scale atlas projects. We present an overview of Allen Institute digital atlases in neuroscience, with a focus on the challenges and opportunities for image processing and computation.

  13. Large-scale assembly of colloidal particles

    NASA Astrophysics Data System (ADS)

    Yang, Hongta

    This study reports a simple, roll-to-roll compatible coating technology for producing three-dimensional highly ordered colloidal crystal-polymer composites, colloidal crystals, and macroporous polymer membranes. A vertically beveled doctor blade is utilized to shear align silica microsphere-monomer suspensions to form large-area composites in a single step. The polymer matrix and the silica microspheres can be selectively removed to create colloidal crystals and self-standing macroporous polymer membranes. The thickness of the shear-aligned crystal is correlated with the viscosity of the colloidal suspension and the coating speed, and the correlations can be qualitatively explained by adapting the mechanisms developed for conventional doctor blade coating. Five important research topics related to the application of large-scale three-dimensional highly ordered macroporous films by doctor blade coating are covered in this study. The first topic describes an invention in large-area, low-cost color reflective displays. This invention is inspired by heat pipe technology. The self-standing macroporous polymer films exhibit brilliant colors, which originate from the Bragg diffraction of visible light from the three-dimensional highly ordered air cavities. The colors can be easily changed by tuning the size of the air cavities to cover the whole visible spectrum. When the air cavities are filled with a solvent which has the same refractive index as that of the polymer, the macroporous polymer films become completely transparent due to the index matching. When the solvent trapped in the cavities is evaporated by in-situ heating, the sample color changes back to the brilliant color. This process is highly reversible and reproducible for thousands of cycles. The second topic reports the achievement of rapid and reversible vapor detection by using 3-D macroporous photonic crystals. 
Capillary condensation of a condensable vapor in the interconnected macropores leads to the

  14. Sensitivity technologies for large scale simulation.

    SciTech Connect

    Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard

    2005-01-01

    Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing code; equally important, the work focused on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint-based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. 
The hybrid automatic differentiation method was applied to a first
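    As an illustration of the derivative machinery such sensitivity analyses rest on, here is a minimal forward-mode automatic differentiation sketch using dual numbers. This is a toy stand-in, not the hybrid AD tools described in the report; the `Dual` class and `derivative` helper are hypothetical names introduced for illustration.

```python
# Toy forward-mode AD: a dual number carries a value and a derivative,
# and arithmetic propagates both through the chain rule.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate df/dx at x by seeding the derivative slot with 1."""
    return f(Dual(x, 1.0)).der

# d/dx (x*x + 3x) at x = 2 is 2x + 3 = 7
slope = derivative(lambda x: x * x + 3 * x, 2.0)
```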

  15. Large-scale Fractal Motion of Clouds

    NASA Image and Video Library

    2017-09-27

    waters surrounding the island.) The “swallowed” gulps of clear island air get carried along within the vortices, but these are soon mixed into the surrounding clouds. Landsat is unique in its ability to image the small-scale eddies that mix clear and cloudy air, down to its 30 meter pixel size, while also having a wide enough field-of-view, 180 km, to reveal the connection of the turbulence to large-scale flows such as the subtropical oceanic gyres. Landsat 7, with its new onboard digital recorder, has extended this capability away from the few Landsat ground stations to remote areas such as Alejandro Island, and thus is gradually providing a global dynamic picture of evolving human-scale phenomena. For more details on von Karman vortices, refer to climate.gsfc.nasa.gov/~cahalan. Image and caption courtesy Bob Cahalan, NASA GSFC Instrument: Landsat 7 - ETM+ Credit: NASA/GSFC/Landsat

  16. Large Scale, High Resolution, Mantle Dynamics Modeling

    NASA Astrophysics Data System (ADS)

    Geenen, T.; Berg, A. V.; Spakman, W.

    2007-12-01

    To model the geodynamic evolution of plate convergence, subduction and collision and to allow for a connection to various types of observational data, geophysical, geodetical and geological, we developed a 4D (space-time) numerical mantle convection code. The model is based on a spherical 3D Eulerian fem model, with quadratic elements, on top of which we constructed a 3D Lagrangian particle-in-cell (PIC) method. We use the PIC method to transport material properties and to incorporate a viscoelastic rheology. Since capturing small-scale processes associated with localization phenomena requires a high resolution, we spent a considerable effort on implementing solvers suitable to solve for models with over 100 million degrees of freedom. We implemented additive Schwarz type ILU-based methods in combination with a Krylov solver, GMRES. However, we found that for problems with over 500 thousand degrees of freedom the convergence of the solver degraded severely. This observation is known from the literature [Saad, 2003] and results from the local character of the ILU preconditioner, which yields a poor approximation of the inverse of A for large A. The size of A for which ILU is no longer usable depends on the condition of A and on the amount of fill-in allowed for the ILU preconditioner. We found that for our problems with over 5×10^5 degrees of freedom convergence became too slow to solve the system within an acceptable amount of walltime, one minute, even when allowing for a considerable amount of fill-in. We also implemented MUMPS and found good scaling results for problems up to 10^7 degrees of freedom on up to 32 CPUs. For problems with over 100 million degrees of freedom we implemented Algebraic Multigrid type methods (AMG) from the ML library [Sala, 2006]. Since multigrid methods are most effective for single-parameter problems, we rebuilt our model to use the SIMPLE method in the Stokes solver [Patankar, 1980]. We present scaling results from these solvers for 3D
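    The ILU-plus-GMRES combination described above can be sketched in a few lines with SciPy. This is a minimal stand-in, assuming a small 2-D Poisson-type matrix rather than the abstract's far larger FEM operator; the grid size and `fill_factor` value are illustrative only.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, gmres, spilu

# Small 2-D Poisson matrix on an n x n grid as a stand-in for a
# FEM stiffness matrix (sizes here are illustrative, not realistic).
n = 50
N = n * n
main = 4.0 * np.ones(N)
off = -np.ones(N - 1)
off[n - 1::n] = 0.0  # no coupling across grid-row boundaries
A = sp.diags([main, off, off, -np.ones(N - n), -np.ones(N - n)],
             [0, -1, 1, -n, n], format="csc")
b = np.ones(N)

# Incomplete-LU factorization wrapped as a preconditioner for GMRES;
# larger fill_factor gives a better (but costlier) approximation of A^-1.
ilu = spilu(A, fill_factor=10.0)
M = LinearOperator(A.shape, ilu.solve)

x, info = gmres(A, b, M=M)  # info == 0 signals convergence
residual = np.linalg.norm(A @ x - b)
```

As the abstract notes, this local preconditioner degrades as the system grows; the sketch only shows the mechanics of wiring an ILU factorization into a Krylov iteration.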

  17. Large Scale Flame Spread Environmental Characterization Testing

    NASA Technical Reports Server (NTRS)

    Clayman, Lauren K.; Olson, Sandra L.; Gokoghi, Suleyman A.; Brooker, John E.; Ferkul, Paul V.; Kacher, Henry F.

    2013-01-01

    Under the Advanced Exploration Systems (AES) Spacecraft Fire Safety Demonstration Project (SFSDP), as a risk mitigation activity in support of the development of a large-scale fire demonstration experiment in microgravity, flame-spread tests were conducted in normal gravity on thin, cellulose-based fuels in a sealed chamber. The primary objective of the tests was to measure pressure rise in a chamber as sample material, burning direction (upward/downward), total heat release, heat release rate, and heat loss mechanisms were varied between tests. A Design of Experiments (DOE) method was imposed to produce an array of tests from a fixed set of constraints and a coupled response model was developed. Supplementary tests were run without experimental design to additionally vary select parameters such as initial chamber pressure. The starting chamber pressure for each test was set below atmospheric to prevent chamber overpressure. Bottom ignition, or upward propagating burns, produced rapid acceleratory turbulent flame spread. Pressure rise in the chamber increases as the amount of fuel burned increases mainly because of the larger amount of heat generation and, to a much smaller extent, due to the increase in gaseous number of moles. Top ignition, or downward propagating burns, produced a steady flame spread with a very small flat flame across the burning edge. Steady-state pressure is achieved during downward flame spread as the pressure rises and plateaus. This indicates that the heat generation by the flame matches the heat loss to surroundings during the longer, slower downward burns. One heat loss mechanism included mounting a heat exchanger directly above the burning sample in the path of the plume to act as a heat sink and more efficiently dissipate the heat due to the combustion event. This proved an effective means for chamber overpressure mitigation for those tests producing the most total heat release and thus was determined to be a feasible mitigation

  18. Synchronization of coupled large-scale Boolean networks

    SciTech Connect

    Li, Fangfei

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
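    Complete synchronization of a coupled Boolean network pair can be sketched with a tiny drive-response example. The 3-node update rules and the coupling below are hypothetical, chosen only so the check over all initial states is exhaustive and fast; the paper's aggregation algorithm is not reproduced here.

```python
from itertools import product

# Hypothetical 3-node drive network; a state is a tuple of bits.
def drive_step(x):
    a, b, c = x
    return (b & c, a | c, a ^ b)

# Response network coupled to the drive: here it evolves the drive's
# current state with the same rule, a coupling strong enough to force
# complete synchronization after a single step.
def response_step(y, x):
    return drive_step(x)

def synchronizes(steps=5):
    """Exhaustively check that every initial (drive, response) pair
    reaches identical states within the given number of steps."""
    for x0, y0 in product(product((0, 1), repeat=3), repeat=2):
        x, y = x0, y0
        for _ in range(steps):
            # Simultaneous update: both sides read the old drive state.
            x, y = drive_step(x), response_step(y, x)
        if x != y:
            return False
    return True
```

With 3-bit states the check covers all 64 initial pairs; for genuinely large-scale networks this brute force is exactly what aggregation methods are designed to avoid.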

  19. The School Principal's Role in Large-Scale Assessment

    ERIC Educational Resources Information Center

    Newton, Paul; Tunison, Scott; Viczko, Melody

    2010-01-01

    This paper reports on an interpretive study in which 25 elementary principals were asked about their assessment knowledge, the use of large-scale assessments in their schools, and principals' perceptions on their roles with respect to large-scale assessments. Principals in this study suggested that the current context of large-scale assessment and…

  20. Synchronization of coupled large-scale Boolean networks

    NASA Astrophysics Data System (ADS)

    Li, Fangfei

    2014-03-01

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm for large-scale Boolean networks is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.

  1. Multitree Algorithms for Large-Scale Astrostatistics

    NASA Astrophysics Data System (ADS)

    March, William B.; Ozakin, Arkadas; Lee, Dongryeol; Riegel, Ryan; Gray, Alexander G.

    2012-03-01

    Common astrostatistical operations. A number of common "subroutines" occur over and over again in the statistical analysis of astronomical data. Some of the most powerful, and computationally expensive, of these additionally share the common trait that they involve distance comparisons between all pairs of data points—or in some cases, all triplets or worse. These include: * All Nearest Neighbors (AllNN): For each query point in a dataset, find the k-nearest neighbors among the points in another dataset—naively O(N^2) to compute, for O(N) data points. * n-Point Correlation Functions: The main spatial statistic used for comparing two datasets in various ways—naively O(N^2) for the 2-point correlation, O(N^3) for the 3-point correlation, etc. * Euclidean Minimum Spanning Tree (EMST): The basis for "single-linkage hierarchical clustering," the main procedure for generating a hierarchical grouping of the data points at all scales, aka "friends-of-friends"—naively O(N^2). * Kernel Density Estimation (KDE): The main method for estimating the probability density function of the data, nonparametrically (i.e., with virtually no assumptions on the functional form of the pdf)—naively O(N^2). * Kernel Regression: A powerful nonparametric method for regression, or predicting a continuous target value—naively O(N^2). * Kernel Discriminant Analysis (KDA): A powerful nonparametric method for classification, or predicting a discrete class label—naively O(N^2). (Note that the "two datasets" may in fact be the same dataset, as in two-point autocorrelations, or the so-called monochromatic AllNN problem, or the leave-one-out cross-validation needed in kernel estimation.) The need for fast algorithms for such analysis subroutines is particularly acute in the modern age of exploding dataset sizes in astronomy. The Sloan Digital Sky Survey yielded hundreds of millions of objects, and the next generation of instruments such as the Large Synoptic Survey Telescope will yield roughly
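    The AllNN subroutine, for instance, can be demonstrated naive-versus-tree in a few lines. This sketch uses SciPy's cKDTree on small random catalogues as an assumption-laden stand-in; the paper's multitree algorithms generalize this single-tree idea to pairs and triplets of trees.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
data = rng.random((500, 3))     # "reference" catalogue
queries = rng.random((200, 3))  # "query" catalogue

# Naive AllNN: O(N*M) pairwise distance comparisons.
d2 = ((queries[:, None, :] - data[None, :, :]) ** 2).sum(axis=2)
naive_idx = d2.argmin(axis=1)

# Tree-based AllNN: build once, then each query costs roughly O(log N).
tree = cKDTree(data)
_, tree_idx = tree.query(queries, k=1)
```

Both approaches return the same neighbors; only the cost differs, which is the entire point at survey scale.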

  2. The Calculation of Accurate Harmonic Frequencies of Large Molecules: The Polycyclic Aromatic Hydrocarbons, a Case Study

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Arnold, James O. (Technical Monitor)

    1996-01-01

    The vibrational frequencies and infrared intensities of naphthalene neutral and cation are studied at the self-consistent-field (SCF), second-order Moller-Plesset (MP2), and density functional theory (DFT) levels using a variety of one-particle basis sets. Very accurate frequencies can be obtained at the DFT level in conjunction with large basis sets if they are scaled with two factors, one for the C-H stretches and a second for all other modes. We also find remarkably good agreement at the B3LYP/4-31G level using only one scale factor. Unlike the neutral PAHs where all methods do reasonably well for the intensities, only the DFT results are accurate for the PAH cations. The failure of the SCF and MP2 methods is caused by symmetry breaking and an inability to describe charge delocalization. We present several interesting cases of symmetry breaking in this study. An assessment is made as to whether an ensemble of PAH neutrals or cations could account for the unidentified infrared bands observed in many astronomical sources.
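    The two-factor scaling scheme described above can be sketched as follows; the mode frequencies and scale factors are illustrative placeholders, not the fitted values from the paper.

```python
# Hypothetical raw DFT harmonic frequencies (cm^-1) for a PAH, each
# flagged as a C-H stretch or not; one empirical scale factor is
# applied per mode class (both factors here are made-up examples).
modes = [
    (3185.0, True),   # C-H stretch
    (3170.0, True),   # C-H stretch
    (1630.0, False),  # ring C-C stretch
    (780.0,  False),  # C-H out-of-plane bend
]
F_CH, F_OTHER = 0.958, 0.981

scaled = [freq * (F_CH if is_ch_stretch else F_OTHER)
          for freq, is_ch_stretch in modes]
```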

  4. Large scale dynamics of protoplanetary discs

    NASA Astrophysics Data System (ADS)

    Béthune, William

    2017-08-01

    Planets form in the gaseous and dusty disks orbiting young stars. These protoplanetary disks are dispersed in a few million years, being accreted onto the central star or evaporated into the interstellar medium. To explain the observed accretion rates, it is commonly assumed that matter is transported through the disk by turbulence, although the mechanism sustaining turbulence is uncertain. On the other hand, irradiation by the central star could heat up the disk surface and trigger a photoevaporative wind, but thermal effects cannot account for the observed acceleration and collimation of the wind into a narrow jet perpendicular to the disk plane. Both issues can be solved if the disk is sensitive to magnetic fields. Weak fields lead to the magnetorotational instability, whose outcome is a state of sustained turbulence. Strong fields can slow down the disk, causing it to accrete while launching a collimated wind. However, the coupling between the disk and the magnetic field is mediated by electric charges, each of which is outnumbered by several billion neutral molecules. The imperfect coupling between the magnetic field and the neutral gas is described in terms of "non-ideal" effects, introducing new dynamical behaviors. This thesis is devoted to the transport processes happening inside weakly ionized and weakly magnetized accretion disks; the role of microphysical effects on the large-scale dynamics of the disk is of primary importance. As a first step, I exclude the wind and examine the impact of non-ideal effects on the turbulent properties near the disk midplane. I show that the flow can spontaneously organize itself if the ionization fraction is low enough; in this case, accretion is halted and the disk exhibits axisymmetric structures, with possible consequences on planetary formation. As a second step, I study the launching of disk winds via a global model of stratified disk embedded in a warm atmosphere. 
This model is the first to compute non-ideal effects from

  5. Validating Large Scale Networks Using Temporary Local Scale Networks

    USDA-ARS?s Scientific Manuscript database

    The USDA NRCS Soil Climate Analysis Network and NOAA Climate Reference Networks are nationwide meteorological and land surface data networks with soil moisture measurements in the top layers of soil. There is considerable interest in scaling these point measurements to larger scales for validating ...

  6. Large-Scale Processing of Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Finn, John; Sridhar, K. R.; Meyyappan, M.; Arnold, James O. (Technical Monitor)

    1998-01-01

    Scale-up difficulties and high energy costs are two of the more important factors that limit the availability of various types of nanotube carbon. While several approaches are known for producing nanotube carbon, the high-powered reactors typically produce nanotubes at rates measured in only grams per hour and operate at temperatures in excess of 1000 C. These scale-up and energy challenges must be overcome before nanotube carbon can become practical for high-consumption structural and mechanical applications. This presentation examines the issues associated with using various nanotube production methods at larger scales, and discusses research being performed at NASA Ames Research Center on carbon nanotube reactor technology.

  7. Large scale structure from viscous dark matter

    SciTech Connect

    Blas, Diego; Floerchinger, Stefan; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-11-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale k{sub m} for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale k{sub m}, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with N-body simulations up to scales k=0.2 h/Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to variations of the matching scale.

  8. Large scale structure from viscous dark matter

    NASA Astrophysics Data System (ADS)

    Blas, Diego; Floerchinger, Stefan; Garny, Mathias; Tetradis, Nikolaos; Wiedemann, Urs Achim

    2015-11-01

    Cosmological perturbations of sufficiently long wavelength admit a fluid dynamic description. We consider modes with wavevectors below a scale km for which the dynamics is only mildly non-linear. The leading effect of modes above that scale can be accounted for by effective non-equilibrium viscosity and pressure terms. For mildly non-linear scales, these mainly arise from momentum transport within the ideal and cold but inhomogeneous fluid, while momentum transport due to more microscopic degrees of freedom is suppressed. As a consequence, concrete expressions with no free parameters, except the matching scale km, can be derived from matching evolution equations to standard cosmological perturbation theory. Two-loop calculations of the matter power spectrum in the viscous theory lead to excellent agreement with N-body simulations up to scales k=0.2 h/Mpc. The convergence properties in the ultraviolet are better than for standard perturbation theory and the results are robust with respect to variations of the matching scale.

  9. On the scaling of small-scale jet noise to large scale

    NASA Astrophysics Data System (ADS)

    Soderman, Paul T.; Allen, Christopher S.

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or PNL noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10 exp 6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  10. On the scaling of small-scale jet noise to large scale

    NASA Astrophysics Data System (ADS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-05-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  11. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall or perceived noise level (PNL) noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10(exp 6) based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  15. Addressable, large-field second harmonic generation microscopy based on 2D acousto-optical deflector and spatial light modulator.

    PubMed

    Shao, Yonghong; Liu, Honghai; Qin, Wan; Qu, Junle; Peng, Xiang; Niu, Hanben; Gao, Bruce Z

    2012-09-01

    We present an addressable, large-field second harmonic generation microscope that combines a 2D acousto-optical deflector with a spatial light modulator (SLM). The SLM shapes an incoming mode-locked, near-infrared Ti:Sapphire laser beam into a multifocus array, which can be rapidly scanned by changing the incident angle of the laser beam using the 2D acousto-optical deflector. Compared to the single-beam-scan technique, the multifocus array scan can increase the scanning rate and the field-of-view size while providing multi-region imaging capability.

  16. Addressable, large-field second harmonic generation microscopy based on 2D acousto-optical deflector and spatial light modulator

    PubMed Central

    Shao, Yonghong; Liu, Honghai; Qin, Wan; Qu, Junle; Peng, Xiang; Niu, Hanben

    2013-01-01

    We present an addressable, large-field second harmonic generation microscope that combines a 2D acousto-optical deflector with a spatial light modulator (SLM). The SLM shapes an incoming mode-locked, near-infrared Ti:Sapphire laser beam into a multifocus array, which can be rapidly scanned by changing the incident angle of the laser beam using the 2D acousto-optical deflector. Compared to the single-beam-scan technique, the multifocus array scan can increase the scanning rate and the field-of-view size while providing multi-region imaging capability. PMID:24307756

  17. Planck scale corrections to the harmonic oscillator, coherent, and squeezed states

    NASA Astrophysics Data System (ADS)

    Bosso, Pasquale; Das, Saurya; Mann, Robert B.

    2017-09-01

    The generalized uncertainty principle (GUP) is a modification of Heisenberg's Principle predicted by several theories of quantum gravity. It consists of a modified commutator between the position and momentum. In this work, we compute potentially observable effects that GUP implies for the harmonic oscillator, coherent, and squeezed states in quantum mechanics. In particular, we rigorously analyze the GUP-perturbed harmonic oscillator Hamiltonian, defining new operators that act as ladder operators on the perturbed states. We use these operators to define the new coherent and squeezed states. We comment on potential applications.
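
For reference, a widely used quadratic form of the GUP is sketched below. This is a standard illustrative convention in the literature; the exact parametrization analyzed by the authors may include additional terms (e.g. linear in momentum):

```latex
% Quadratic GUP: modified commutator and uncertainty relation, with
% \beta \sim 1/(M_{\mathrm{Pl}} c)^2 set by the Planck scale.
[\hat{x}, \hat{p}] = i\hbar\left(1 + \beta \hat{p}^{2}\right),
\qquad
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\left[1 + \beta\,(\Delta p)^{2}\right],
% implying a minimal position uncertainty \Delta x_{\min} = \hbar\sqrt{\beta}.
```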

  18. Real or virtual large-scale structure?

    PubMed Central

    Evrard, August E.

    1999-01-01

    Modeling the development of structure in the universe on galactic and larger scales is the challenge that drives the field of computational cosmology. Here, photorealism is used as a simple, yet expert, means of assessing the degree to which virtual worlds succeed in replicating our own. PMID:10200243

  19. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  20. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Emphasis is placed on Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left- and right-singular vectors) for sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography. The target architectures for implementations are the CRAY-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications which require approximate pseudo-inverses of large sparse Jacobian matrices.
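
As a present-day analogue of the same computation, the sketch below finds the k largest singular triplets of a sparse matrix with SciPy's ARPACK-based, Lanczos-type `svds`. The random matrix is a small stand-in for a term-document matrix; this is a minimal illustration, not the paper's CRAY/Alliant implementation:

```python
# Largest singular triplets of a sparse matrix via a Lanczos-type
# (ARPACK) iteration, in the spirit of sparse information-retrieval SVDs.
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Small random sparse stand-in for a term-document matrix.
A = sparse_random(200, 100, density=0.05, random_state=0, format="csr")

# k largest singular values and corresponding left/right singular vectors.
u, s, vt = svds(A, k=5)
s = np.sort(s)[::-1]  # svds returns singular values in ascending order
```

For matrices too large to densify, this avoids ever forming A explicitly as a dense array; only matrix-vector products are required.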

  1. Timing signatures of large scale solar eruptions

    NASA Astrophysics Data System (ADS)

    Balasubramaniam, K. S.; Hock-Mysliwiec, Rachel; Henry, Timothy; Kirk, Michael S.

    2016-05-01

    We examine the timing signatures of large solar eruptions resulting in flares, CMEs, and Solar Energetic Particle events. We probe solar active regions from the chromosphere through the corona, using data from space- and ground-based observations, including ISOON, SDO, GONG, and GOES. Our studies include a number of flares and CMEs, mostly of the M- and X-strengths as categorized by GOES. We find that the chromospheric signatures of these large eruptions occur 5-30 minutes in advance of coronal high-temperature signatures. These timing measurements are then used as inputs to models to reconstruct the eruptive nature of these systems and to explore their utility in forecasts.

  2. Modified gravity and large scale flows, a review

    NASA Astrophysics Data System (ADS)

    Mould, Jeremy

    2017-02-01

    Large scale flows have been a challenging feature of cosmography ever since galaxy scaling relations came on the scene 40 years ago. The next generation of surveys will offer a serious test of the standard cosmology.

  3. Linking Large-Scale Reading Assessments: Comment

    ERIC Educational Resources Information Center

    Hanushek, Eric A.

    2016-01-01

    E. A. Hanushek points out in this commentary that applied researchers in education have only recently begun to appreciate the value of international assessments, even though there are now 50 years of experience with these. Until recently, these assessments have been stand-alone surveys that have not been linked, and analysis has largely focused on…

  4. Large scale scientific computing - future directions

    NASA Astrophysics Data System (ADS)

    Patterson, G. S.

    1982-06-01

    Every new generation of scientific computers has opened up new areas of science for exploration through the use of more realistic numerical models or the ability to process ever larger amounts of data. Concomitantly, scientists, because of the success of past models and the wide range of physical phenomena left unexplored, have pressed computer designers to strive for the maximum performance that current technology will permit. This encompasses not only increased processor speed, but also substantial improvements in processor memory, I/O bandwidth, secondary storage, and facilities to augment the scientist's ability both to program and to understand the results of a computation. Over the past decade, performance improvements for scientific calculations have come from algorithm development and a major change in the underlying architecture of the hardware, not from significantly faster circuitry. It appears that this trend will continue for another decade. A future architectural change for improved performance will most likely be multiple processors coupled together in some fashion. Because the demand for a significantly more powerful computer system comes from users with single large applications, it is essential that an application be efficiently partitionable over a set of processors; otherwise, a multiprocessor system will not be effective. This paper explores some of the constraints on multiple processor architecture posed by these large applications. In particular, the trade-offs between large numbers of slow processors and small numbers of fast processors are examined. Strategies for partitioning range from partitioning at the language statement level (in-the-small) to partitioning at the program module level (in-the-large). Some examples of partitioning in-the-large are given, and a strategy for efficiently executing a partitioned program is explored.
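
The slow-versus-fast processor trade-off discussed above is commonly quantified with Amdahl's law. The sketch below is a generic illustration of that standard result; the serial fractions and processor counts are hypothetical, not taken from the paper:

```python
# Amdahl's law: speedup of a program with serial fraction f_s on n
# processors. It illustrates why a large application must be efficiently
# partitionable for a multiprocessor system to pay off.

def amdahl_speedup(serial_fraction, n_processors):
    """Ideal speedup when only the parallel fraction scales with n."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# A hypothetical 5% serial fraction caps the benefit of adding
# processors, however many slow ones are coupled together:
s_16 = amdahl_speedup(0.05, 16)   # about 9.1x on 16 processors
s_inf = 1.0 / 0.05                # asymptotic limit of 20x
```

The asymptote is why partitioning in-the-large must keep the unpartitionable (serial) portion of the application small.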

  5. Large-scale preparation of plasmid DNA.

    PubMed

    Heilig, J S; Elbing, K L; Brent, R

    2001-05-01

    Although the need for large quantities of plasmid DNA has diminished as techniques for manipulating small quantities of DNA have improved, occasionally large amounts of high-quality plasmid DNA are desired. This unit describes the preparation of milligram quantities of highly purified plasmid DNA. The first part of the unit describes three methods for preparing crude lysates enriched in plasmid DNA from bacterial cells grown in liquid culture: alkaline lysis, boiling, and Triton lysis. The second part describes four methods for purifying plasmid DNA in such lysates away from contaminating RNA and protein: CsCl/ethidium bromide density gradient centrifugation, polyethylene glycol (PEG) precipitation, anion-exchange chromatography, and size-exclusion chromatography.

  6. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2009-09-30

    9 October and lasting until 1500 UTC, 11 October. (bottom) COAMPS forecast of visibility for the same period showing a dust storm with a similar...starting time and an ending time of 0900 UTC 12 October. (Walker et al., 2009.) Figure 3. Comparison of COAMPS dust storm forecast...forecasts of dust storms in areas downwind of the large deserts of the world: Arabian Gulf, Sea of Japan, China Sea, Mediterranean Sea, and the Tropical

  7. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2010-09-30

    advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas downwind of the large deserts of the world... dust source regions in NAAPS. The DSD has been crucial for high-resolution dust forecasting in SW Asia using COAMPS (Walker et al., 2009). Dust ... Figure 2. Four-panel product used to compare multiple model forecasts of visibility in SW Asia dust storms. On the web the product is

  8. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2007-09-30

    to six days in advance anywhere on the globe. NAAPS and COAMPS are particularly useful for forecasts of dust storms in areas downwind of the large...in FY08. NAAPS forecasts of CONUS dust storms and long-range dust transport to CONUS were further evaluated in collaboration with CSU. These...visibility. The regional model ( COAMPS /Aerosol) became operational during OIF. The global model Navy Aerosol Analysis and Prediction System (NAAPS

  9. Large scale calculations for hadron spectroscopy

    SciTech Connect

    Rebbi, C.

    1985-01-01

    The talk reviews some recent Monte Carlo calculations for Quantum Chromodynamics, performed on Euclidean lattices of rather large extent. The purpose of the calculations is to provide accurate determinations of quantities, such as interquark potentials or mass eigenvalues, which are relevant for hadronic spectroscopy. Results obtained in quenched QCD on 16/sup 3/ x 32 lattices are illustrated, and a discussion of computational resources and techniques required for the calculations is presented. 18 refs., 3 figs., 2 tabs.

  10. Vibration analysis of harmonically excited non-linear system using the method of multiple scales

    NASA Astrophysics Data System (ADS)

    Moon, Byung-Young; Kang, Beom-Soo

    2003-05-01

    An analytical method is presented for evaluation of the steady state periodic behavior of non-linear systems. This method is based on the substructure synthesis formulation and a multiple scales procedure, which is applied to the analysis of non-linear responses. A complex non-linear system is divided into substructures, whose equations are approximately transformed to modal co-ordinates, including the non-linear terms, under a reasonable procedure. The equations are then synthesized into the overall system, and the solution of the non-linear system can be obtained. Based on the method of multiple scales, the proposed procedure reduces the size of the large-degree-of-freedom problem in solving the non-linear equations. Feasibility and advantages of the proposed method are illustrated by application of the analytic procedure to a non-linear rotating machine system as a large mechanical structure system. The method is reported to be an efficient approach to non-linear response prediction when compared with other conventional methods.
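
As a pointer to the underlying technique, the standard multiple-scales expansion for a weakly nonlinear oscillator proceeds as sketched below. This is a textbook Duffing example, illustrative of the procedure only; the paper's substructure-synthesis formulation is more general:

```latex
% Method of multiple scales for the weakly nonlinear Duffing oscillator
% \ddot{x} + x + \epsilon x^{3} = 0, with \epsilon \ll 1.
% Introduce fast and slow time scales and expand:
T_0 = t, \qquad T_1 = \epsilon t,
\qquad x = x_0(T_0, T_1) + \epsilon\, x_1(T_0, T_1) + O(\epsilon^{2}).
% O(1):  \partial_{T_0}^{2} x_0 + x_0 = 0
%        \Rightarrow x_0 = A(T_1)\, e^{i T_0} + \mathrm{c.c.}
% O(\epsilon): eliminating secular (resonant) terms requires
2 i \frac{dA}{dT_1} + 3 A^{2} \bar{A} = 0,
% so with A = (a/2) e^{i\varphi} the amplitude a is constant and the
% phase drifts, giving the frequency correction
\omega \approx 1 + \tfrac{3}{8}\,\epsilon\, a^{2}.
```

The solvability condition at each order is what removes the secularly growing terms and yields the slow-time amplitude equations.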

  11. Responses in large-scale structure

    NASA Astrophysics Data System (ADS)

    Barreira, Alexandre; Schmidt, Fabian

    2017-06-01

    We introduce a rigorous definition of general power-spectrum responses as resummed vertices with two hard and n soft momenta in cosmological perturbation theory. These responses measure the impact of long-wavelength perturbations on the local small-scale power spectrum. The kinematic structure of the responses (i.e., their angular dependence) can be decomposed unambiguously through a ``bias'' expansion of the local power spectrum, with a fixed number of physical response coefficients, which are only a function of the hard wavenumber k. Further, the responses up to n-th order completely describe the (n+2)-point function in the squeezed limit, i.e. with two hard and n soft modes, which one can use to derive the response coefficients. This generalizes previous results, which relate the angle-averaged squeezed limit to isotropic response coefficients. We derive the complete expression of first- and second-order responses at leading order in perturbation theory, and present extrapolations to nonlinear scales based on simulation measurements of the isotropic response coefficients. As an application, we use these results to predict the non-Gaussian part of the angle-averaged matter power spectrum covariance Cov^{NG}_{l=0}(k1, k2), in the limit where one of the modes, say k2, is much smaller than the other. Without any free parameters, our model results are in very good agreement with simulations for k2 ≲ 0.06 h Mpc^{-1}, and for any k1 ≳ 2 k2. The well-defined kinematic structure of the power spectrum response also permits a quick evaluation of the angular dependence of the covariance matrix. While we focus on the matter density field, the formalism presented here can be generalized to generic tracers such as galaxies.
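
Schematically, the first-order response statement underlying these results can be written as follows (conventions simplified and angular structure suppressed; the precise definitions follow the paper):

```latex
% Local small-scale power spectrum in the presence of a long-wavelength
% density perturbation \delta_L:
P(k \,|\, \delta_L) = P(k)\left[1 + R_1(k)\,\delta_L + O(\delta_L^{2})\right],
% which fixes the angle-averaged squeezed-limit bispectrum
\lim_{q \to 0} B(k, q) = R_1(k)\, P_L(q)\, P(k).
```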

  12. Global scale deposition of radioactivity from a large scale exchange

    SciTech Connect

    Knox, J.B.

    1983-10-01

    The global impact of radioactivity pertains to the continental-scale and planetary-scale deposition of the radioactivity in a delayed mode; it affects all peoples. Global deposition is distinct and separate from close-in fallout. Close-in fallout is delivered in a matter of a few days or less and is much studied in the civilian defense literature; much less studied is the matter of global deposition. The global deposition of radioactivity from the reference strategic exchange (5300 MT) leads to an estimated average whole-body, total integrated dose of 20 rem for the latitudes of 30 to 50/sup 0/ in the Northern Hemisphere. Hotspots of deposited radioactivity can occur with doses of about 70 rem (winter) to 40 to 110 rem (summer) in regions like Europe, western Asia, the western North Pacific, the southeastern US, the northeastern US, and Canada. The neighboring countries within a few hundred kilometers of areas under strategic nuclear attack can be impacted by the normal (termed close-in) fallout due to gravitational sedimentation, with lethal radiation doses to unsheltered populations. In regard to the strategic scenario, about 40% of the megatonnage is assumed to be in a surface burst mode and the rest in the free air burst mode.

  13. Different time scales in plasmonically enhanced high-order-harmonic generation

    NASA Astrophysics Data System (ADS)

    Zagoya, C.; Bonner, M.; Chomet, H.; Slade, E.; Figueira de Morisson Faria, C.

    2016-05-01

    We investigate high-order-harmonic generation in inhomogeneous media for reduced-dimensionality models. We perform a phase-space analysis, in which we identify specific features caused by the field inhomogeneity. We compute high-order-harmonic spectra using the numerical solution of the time-dependent Schrödinger equation, and provide an interpretation in terms of classical electron trajectories. We show that the dynamics of the system can be described by the interplay of high-frequency and low-frequency oscillations, which are given by Mathieu's equations. The latter oscillations lead to an increase in the cutoff energy and, for small values of the inhomogeneity parameter, take place over many driving-field cycles. In this case, the two processes can be decoupled and the oscillations can be described analytically.
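
For reference, the canonical form of Mathieu's equation invoked above is given below (standard convention; the mapping of the parameters (a, q) to the laser and inhomogeneity parameters follows the paper and is not reproduced here):

```latex
% Canonical Mathieu equation: stability and the fast/slow interplay are
% controlled by the parameters (a, q).
\frac{d^{2}y}{dz^{2}} + \left[a - 2q\cos(2z)\right] y = 0.
```

For small q, solutions exhibit slow amplitude and phase modulation over many periods of the fast oscillation, which is the regime described in the abstract.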

  14. Large-scale GW software development

    NASA Astrophysics Data System (ADS)

    Kim, Minjung; Mandal, Subhasish; Mikida, Eric; Jindal, Prateek; Bohm, Eric; Jain, Nikhil; Kale, Laxmikant; Martyna, Glenn; Ismail-Beigi, Sohrab

    Electronic excitations are important in understanding and designing many functional materials. In terms of ab initio methods, the GW and Bethe-Salpeter equation (GW-BSE) beyond-DFT methods have proved successful in describing excited states in many materials. However, the heavy computational loads and large memory requirements have hindered their routine applicability by the materials physics community. We summarize some of our collaborative efforts to develop a new software framework designed for GW calculations on massively parallel supercomputers. Our GW code is interfaced with the plane-wave pseudopotential ab initio molecular dynamics software ``OpenAtom'' which is based on the Charm++ parallel library. The computation of the electronic polarizability is one of the most expensive parts of any GW calculation. We describe our strategy that uses a real-space representation to avoid the large number of fast Fourier transforms (FFTs) common to most GW methods. We also describe an eigendecomposition of the plasmon modes from the resulting dielectric matrix that enhances efficiency. This work is supported by NSF through Grant ACI-1339804.

  15. Full sky harmonic analysis hints at large ultra-high energy cosmic ray deflections

    SciTech Connect

    Tinyakov, P. G. Urban, F. R.

    2015-03-15

    The full-sky multipole coefficients of the ultra-high energy cosmic ray (UHECR) flux have been measured for the first time by the Pierre Auger and Telescope Array collaborations using a joint data set with E > 10 EeV. We calculate these harmonic coefficients in the model where UHECR are protons and sources trace the local matter distribution, and compare our results with observations. We find that the expected power for low multipoles (dipole and quadrupole, in particular) is systematically higher than in the data: the observed flux is too isotropic. We then investigate to which degree our predictions are influenced by UHECR deflections in the regular Galactic magnetic field. It turns out that the UHECR power spectrum coefficients C{sub l} are quite insensitive to the effects of the Galactic magnetic field, so it is unlikely that the discordance can be reconciled by tuning the Galactic magnetic field model. On the contrary, a sizeable fraction of uniformly distributed flux (representing for instance an admixture of heavy nuclei with considerably larger deflections) can bring simulations and observations to an accord.
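
The multipole coefficients C_l discussed above follow the standard angular power spectrum definition, C_l = (1/(2l+1)) sum_m |a_lm|^2. The pure-NumPy sketch below applies that definition to synthetic coefficients; the values are illustrative only, not the Auger/Telescope Array data:

```python
# Angular power spectrum from spherical-harmonic coefficients a_lm:
#   C_l = (1 / (2l + 1)) * sum over m of |a_lm|^2.
# The a_lm values below are synthetic, for illustration only.
import numpy as np

def angular_power(alm_by_l):
    """alm_by_l: dict mapping l -> complex array of its 2l+1 a_lm values."""
    return {l: np.sum(np.abs(alm) ** 2) / (2 * l + 1)
            for l, alm in alm_by_l.items()}

# A pure-dipole (l = 1) sky: only the m = 0 coefficient is nonzero.
alm = {0: np.array([0.0 + 0j]),
       1: np.array([0.0 + 0j, 2.0 + 0j, 0.0 + 0j])}
cl = angular_power(alm)
# cl[1] = |2|^2 / (2*1 + 1) = 4/3
```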

  16. Progress in Large Period Multilayer Coatings for High Harmonic and Solar Applications

    SciTech Connect

    Jones, Juanita; Aquila, Andrew; Salmassi, Farhad; Gullikson, Eric

    2008-01-07

    Multilayer coatings for normal-incidence optics designed for the long-wavelength region (25 nm < {lambda} < 50 nm) are particularly challenging due to the small number of layers that can be utilized in the reflection. Recently, Mg/SiC multilayers have been fabricated with normal-incidence reflectivity in the vicinity of 40% for wavelengths near the He-II line at 30.4 nm. Motivated by this success, we have investigated the use of a tri-band multilayer to increase the bandwidth while maintaining the reflectivity. The multilayers were deposited by conventional magnetron sputtering. Using Mg/SiC bilayers, a reflectivity of 45% was achieved at 27 to 32 nm at an angle of 5 deg from normal. Mg/Sc/SiC multilayer systems have also been investigated; these obtained a near-normal-incidence reflectivity of 35% while increasing the bandwidth by a factor of 2. These results are very encouraging for the possibility of more widespread applications of normal-incidence optics in high harmonic applications.

  17. Large Scale Experiments on Spacecraft Fire Safety

    NASA Technical Reports Server (NTRS)

    Urban, David L.; Ruff, Gary A.; Minster, Olivier; Toth, Balazs; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; Rouvreau, Sebastien; Jomaas, Grunde

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant know-how about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal-gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus, after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft.

  19. Goethite Bench-scale and Large-scale Preparation Tests

    SciTech Connect

    Josephson, Gary B.; Westsik, Joseph H.

    2011-10-23

    The Hanford Waste Treatment and Immobilization Plant (WTP) is the keystone for cleanup of high-level radioactive waste from our nation's nuclear defense program. The WTP will process high-level waste from the Hanford tanks and produce immobilized high-level waste glass for disposal at a national repository, low activity waste (LAW) glass, and liquid effluent from the vitrification off-gas scrubbers. The liquid effluent will be stabilized into a secondary waste form (e.g. grout-like material) and disposed on the Hanford site in the Integrated Disposal Facility (IDF) along with the low-activity waste glass. The major long-term environmental impact at Hanford results from technetium that volatilizes from the WTP melters and finally resides in the secondary waste. Laboratory studies have indicated that pertechnetate ({sup 99}TcO{sub 4}{sup -}) can be reduced and captured into a solid solution of {alpha}-FeOOH, goethite (Um 2010). Goethite is a stable mineral and can significantly retard the release of technetium to the environment from the IDF. The laboratory studies were conducted using reaction times of many days, which is typical of environmental subsurface reactions that were the genesis of this new process. This study was the first step in considering adaptation of the slow laboratory steps to a larger-scale and faster process that could be conducted either within the WTP or within the effluent treatment facility (ETF). Two levels of scale-up tests were conducted (25x and 400x). The largest scale-up produced slurries of Fe-rich precipitates that contained rhenium as a nonradioactive surrogate for {sup 99}Tc. The slurries were used in melter tests at Vitreous State Laboratory (VSL) to determine whether captured rhenium was less volatile in the vitrification process than rhenium in an unmodified feed. 
A critical step in the technetium immobilization process is to chemically reduce Tc(VII) in the pertechnetate (TcO{sub 4}{sup -}) to Tc(IV) by reaction with the ferrous…

  20. Toward Increasing Fairness in Score Scale Calibrations Employed in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; von Davier, Matthias

    2014-01-01

    In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…

  2. Python for large-scale electrophysiology.

    PubMed

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation ("dimstim"); one for electrophysiological waveform visualization and spike sorting ("spyke"); and one for spike train and stimulus analysis ("neuropy"). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience.
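A short, hedged sketch of the kind of spike-train analysis these tools support; the `psth` function below is hypothetical, for illustration only, and is not part of the actual dimstim/spyke/neuropy APIs:

```python
# Peri-stimulus time histogram (PSTH): count spikes in fixed-width bins
# following each stimulus onset (all times in seconds).
def psth(spike_times, stim_onsets, window=0.5, bin_width=0.1):
    n_bins = int(window / bin_width)
    counts = [0] * n_bins
    for onset in stim_onsets:
        for t in spike_times:
            dt = t - onset
            if 0.0 <= dt < window:
                counts[int(dt / bin_width)] += 1
    return counts

spikes = [0.05, 0.12, 0.31, 1.06, 1.07, 1.44]
onsets = [0.0, 1.0]
print(psth(spikes, onsets))  # [3, 1, 0, 1, 1]
```

In practice such analyses are vectorized (e.g. with NumPy) over millions of spikes; the explicit loop above is only meant to make the binning step clear.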

  3. Large Scale CW ECRH Systems: Some considerations

    NASA Astrophysics Data System (ADS)

    Erckmann, V.; Kasparek, W.; Plaum, B.; Lechte, C.; Petelin, M. I.; Braune, H.; Gantenbein, G.; Laqua, H. P.; Lubiako, L.; Marushchenko, N. B.; Michel, G.; Turkin, Y.; Weissgerber, M.

    2012-09-01

    Electron Cyclotron Resonance Heating (ECRH) is a key component in the heating arsenal for next-step fusion devices like W7-X and ITER. These devices are equipped with superconducting coils and are designed to operate in steady state. ECRH must thus operate in CW mode with large flexibility to comply with various physics demands such as plasma start-up, heating and current drive, as well as configuration and MHD control. The request for many different sophisticated applications results in a growing complexity, which is in conflict with the request for high availability, reliability, and maintainability. 'Advanced' ECRH systems must, therefore, comply with both the complex physics demands and operational robustness and reliability. The W7-X ECRH system is the first CW facility of an ITER-relevant size and is used as a test bed for advanced components. Proposals for future developments are presented together with improvements of gyrotrons, transmission components and launchers.

  4. Python for Large-Scale Electrophysiology

    PubMed Central

    Spacek, Martin; Blanche, Tim; Swindale, Nicholas

    2008-01-01

    Electrophysiology is increasingly moving towards highly parallel recording techniques which generate large data sets. We record extracellularly in vivo in cat and rat visual cortex with 54-channel silicon polytrodes, under time-locked visual stimulation, from localized neuronal populations within a cortical column. To help deal with the complexity of generating and analysing these data, we used the Python programming language to develop three software projects: one for temporally precise visual stimulus generation (“dimstim”); one for electrophysiological waveform visualization and spike sorting (“spyke”); and one for spike train and stimulus analysis (“neuropy”). All three are open source and available for download (http://swindale.ecc.ubc.ca/code). The requirements and solutions for these projects differed greatly, yet we found Python to be well suited for all three. Here we present our software as a showcase of the extensive capabilities of Python in neuroscience. PMID:19198646

  5. Large-Scale Structures of Planetary Systems

    NASA Astrophysics Data System (ADS)

    Murray-Clay, Ruth; Rogers, Leslie A.

    2015-12-01

    A class of solar system analogs has yet to be identified among the large crop of planetary systems now observed. However, since most observed worlds are more easily detectable than direct analogs of the Sun's planets, the frequency of systems with structures similar to our own remains unknown. Identifying the range of possible planetary system architectures is complicated by the large number of physical processes that affect the formation and dynamical evolution of planets. I will present two ways of organizing planetary system structures. First, I will suggest that relatively few physical parameters are likely to differentiate the qualitative architectures of different systems. Solid mass in a protoplanetary disk is perhaps the most obvious possible controlling parameter, and I will give predictions for correlations between planetary system properties that we would expect to be present if this is the case. In particular, I will suggest that the solar system's structure is representative of low-metallicity systems that nevertheless host giant planets. Second, the disk structures produced as young stars are fed by their host clouds may play a crucial role. Using the observed distribution of RV giant planets as a function of stellar mass, I will demonstrate that invoking ice lines to determine where gas giants can form requires fine tuning. I will suggest that instead, disk structures built during early accretion have lasting impacts on giant planet distributions, and disk clean-up differentially affects the orbital distributions of giant and lower-mass planets. These two organizational hypotheses have different implications for the solar system's context, and I will suggest observational tests that may allow them to be validated or falsified.

  6. Protein-ligand recognition using spherical harmonic molecular surfaces: towards a fast and efficient filter for large virtual throughput screening.

    PubMed

    Cai, Wensheng; Shao, Xueguang; Maigret, Bernard

    2002-01-01

    Molecular surfaces are important because surface-shape complementarity is often a necessary condition in protein-ligand interactions and docking studies. We have previously described a fast and efficient method to obtain triangulated surface-meshes by topologically mapping ellipsoids on molecular surfaces. In this paper, we present an extension of our work to spherical harmonic surfaces in order to approximate molecular surfaces of both ligands and receptor-cavities and to easily check the surface-shape complementarity. The method consists of (1) finding lobes and holes on both ligand and cavity surfaces using contour maps of radius functions with spherical harmonic expansions, and (2) superposing the surfaces around a given binding site by minimizing the distance between their respective expansion coefficients. The capabilities of this docking procedure were demonstrated by application to 35 protein-ligand complexes of known crystal structures. The method can also be easily and efficiently used as a filter to detect, in a large conformational sampling, the conformations presenting good complementarity with the receptor site, which are therefore good candidates for further, more elaborate docking studies. This "virtual screening" was demonstrated on the platelet thrombin receptor.
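The radius-function expansion underlying this approach can be written in standard spherical-harmonic form (the truncation order N and the coefficient notation are generic, not taken from the paper):

```latex
% Star-shaped molecular surface as a truncated spherical harmonic series
r(\theta, \varphi) \approx \sum_{l=0}^{N} \sum_{m=-l}^{l} c_{lm}\, Y_{lm}(\theta, \varphi)
```

Superposing ligand and cavity surfaces then reduces to minimizing a distance between the two coefficient sets, e.g. \(\sum_{l,m} | c_{lm}^{\mathrm{lig}} - c_{lm}^{\mathrm{cav}} |^2\), which is far cheaper than comparing triangulated meshes directly.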

  7. Symmetry-guided large-scale shell-model theory

    NASA Astrophysics Data System (ADS)

    Launey, Kristina D.; Dytrych, Tomas; Draayer, Jerry P.

    2016-07-01

    In this review, we present a symmetry-guided strategy that utilizes exact as well as partial symmetries for enabling a deeper understanding of and advancing ab initio studies for determining the microscopic structure of atomic nuclei. These symmetries expose physically relevant degrees of freedom that, for large-scale calculations with QCD-inspired interactions, allow the model space size to be reduced through a very structured selection of the basis states to physically relevant subspaces. This can guide explorations of simple patterns in nuclei and how they emerge from first principles, as well as extensions of the theory beyond current limitations toward heavier nuclei and larger model spaces. This is illustrated for the ab initio symmetry-adapted no-core shell model (SA-NCSM) and two significant underlying symmetries, the symplectic Sp(3, R) group and its deformation-related SU(3) subgroup. We review the broad scope of nuclei where these symmetries have been found to play a key role, from the light p-shell systems, such as 6Li, 8B, 8Be, 12C, and 16O, and sd-shell nuclei exemplified by 20Ne, based on first-principle explorations; through the Hoyle state in 12C and enhanced collectivity in intermediate-mass nuclei, within a no-core shell-model perspective; up to strongly deformed species of the rare-earth and actinide regions, as investigated in earlier studies. A complementary picture, driven by symmetries dual to Sp(3, R), is also discussed. We briefly review symmetry-guided techniques that prove useful in various nuclear-theory models, such as the Elliott model, ab initio SA-NCSM, symplectic model, pseudo-SU(3) and pseudo-symplectic models, ab initio hyperspherical harmonics method, ab initio lattice effective field theory, exact pairing-plus-shell model approaches, and cluster models, including the resonating-group method. Important implications of these approaches that have deepened our understanding of emergent phenomena in nuclei, such as enhanced…

  8. Large scale static and dynamic friction experiments

    SciTech Connect

    Bakhtar, K.; Barton, N.

    1984-12-31

    A series of nineteen shear tests were performed on fractures 1 m{sup 2} in area, generated in blocks of sandstone, granite, tuff, hydrostone and concrete. The tests were conducted under quasi-static and dynamic loading conditions. A vertical stress assisted fracturing technique was developed to create the fractures through the large test blocks. Prior to testing, the fractured surface of each block was characterized using the Barton JRC-JCS concept. The results of characterization were used to generate the peak strength envelope for each fractured surface. Attempts were made to model the stress path based on the classical transformation equations, which assume a theoretical plane, elastic isotropic properties, and therefore no slip. However, this approach gave rise to a stress path passing above the strength envelope, which is clearly unacceptable. The results of the experimental investigations indicated that the actual stress path is affected by the dilatancy due to fracture roughness, as well as by the side friction imposed by the boundary conditions. By introducing the corrections due to dilation and boundary conditions into the stress transformation equation, the fully corrected stress paths for predicting the strength of fractured blocks were obtained.
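The "classical transformation equations" referred to above are presumably the standard two-dimensional stress transformation relations for a plane inclined at angle θ to the major principal stress; a sketch (not the authors' corrected form):

```latex
\sigma_n = \tfrac{1}{2}(\sigma_1 + \sigma_3) + \tfrac{1}{2}(\sigma_1 - \sigma_3)\cos 2\theta,
\qquad
\tau = \tfrac{1}{2}(\sigma_1 - \sigma_3)\sin 2\theta
```

The corrections described in the abstract would, in effect, modify θ by a dilation angle and add a boundary friction term; their exact form is not reproduced here.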

  9. Impact Cratering Physics at Large Planetary Scales

    NASA Astrophysics Data System (ADS)

    Ahrens, Thomas J.

    2007-06-01

    Present understanding of the physics controlling formation of ˜10^3 km diameter, multi-ringed impact structures on planets was derived from the ideas of Scripps oceanographer W. Van Dorn, the University of London's W. Murray, and Caltech's D. O'Keefe, who modeled the vertical oscillations (gravity and elasticity restoring forces) of shock-induced melt and damaged rock within the transient crater immediately after the downward propagating hemispheric shock has processed rock (both lining, and substantially below, the transient cavity crater). The resulting very large surface wave displacements produce the characteristic concentric, multi-ringed basins, as stored energy is radiated away and also dissipated upon inducing further cracking. Initial calculational description of the above oscillation scenario has focused on properly predicting the resulting density of cracks and their orientations. A new numerical version of the Ashby-Sammis crack damage model is coupled to an existing shock hydrodynamics code to predict impact induced damage distributions in a series of 15-70 cm rock targets from high speed impact experiments for a range of impactor type and velocity. These are compared to results of crack damage distributions induced in crustal rocks with small arms impactors and mapped ultrasonically in recent Caltech experiments (Ai and Ahrens, 2006).

  10. Superconducting materials for large scale applications

    SciTech Connect

    Scanlan, Ronald M.; Malozemoff, Alexis P.; Larbalestier, David C.

    2004-05-06

    Significant improvements in the properties of superconducting materials have occurred recently. These improvements are being incorporated into the latest generation of wires, cables, and tapes that are being used in a broad range of prototype devices. These devices include new, high field accelerator and NMR magnets, magnets for fusion power experiments, motors, generators, and power transmission lines. These prototype magnets are joining a wide array of existing applications that utilize the unique capabilities of superconducting magnets: accelerators such as the Large Hadron Collider, fusion experiments such as ITER, 930 MHz NMR, and 4 Tesla MRI. In addition, promising new materials such as MgB2 have been discovered and are being studied in order to assess their potential for new applications. In this paper, we will review the key developments that are leading to these new applications for superconducting materials. In some cases, the key factor is improved understanding or development of materials with significantly improved properties. An example of the former is the development of Nb3Sn for use in high field magnets for accelerators. In other cases, the development is being driven by the application. The aggressive effort to develop HTS tapes is being driven primarily by the need for materials that can operate at temperatures of 50 K and higher. The implications of these two drivers for further developments will be discussed. Finally, we will discuss the areas where further improvements are needed in order for new applications to be realized.

  11. Large-Scale Magnetic Connectivity in CMEs

    NASA Astrophysics Data System (ADS)

    Zhang, Yuzong; Wang, Jingxiu; Attrill, Gemma; Harra, Louise K.

    Five flare/CME events were selected in this study. One is on May 12, 1997, for which there are only two active regions on the visible solar disc, and the magnetic configuration is rather simple. For the other cases, many active regions were visible. They are the flare/CME events that occurred on Bastille Day of 2000, Oct. 28, 2003, Nov. 7, 2004 and Jan. 20, 2005. By tracing the spread of EUV dimming, which was obtained from SOHO/EIT 195 Å fixed-difference images, we studied the CME initiation and development on the solar disc. At the same time we reconstructed the 3D structure of coronal magnetic fields, extrapolated from the observed photospheric magnetograms by SOHO/MDI. In scrutinizing the EUV brightening and dimming propagation from CME initiation sites to large areas with different magnetic connectivities, we determine the overall coupling and interaction of multiple flux systems in the CME processes. Several typical patterns of magnetic connectivity are described and discussed in view of the CME initiation mechanism or mechanisms.

  12. Large-scale structural monitoring systems

    NASA Astrophysics Data System (ADS)

    Solomon, Ian; Cunnane, James; Stevenson, Paul

    2000-06-01

    Extensive structural health instrumentation systems have been installed on three long-span cable-supported bridges in Hong Kong. The quantities measured include environment and applied loads (such as wind, temperature, seismic and traffic loads) and the bridge response to these loadings (accelerations, displacements, and strains). Measurements from over 1000 individual sensors are transmitted to central computing facilities via local data acquisition stations and a fault-tolerant fiber-optic network, and are acquired and processed continuously. The data from the systems is used to provide information on structural load and response characteristics, comparison with design, optimization of inspection, and assurance of continued bridge health. Automated data processing and analysis provides information on important structural and operational parameters. Abnormal events are noted and logged automatically. Information of interest is automatically archived for post-processing. Novel aspects of the instrumentation system include a fluid-based high-accuracy long-span Level Sensing System to measure bridge deck profile and tower settlement. This paper provides an outline of the design and implementation of the instrumentation system. A description of the design and implementation of the data acquisition and processing procedures is also given. Examples of the use of similar systems in monitoring other large structures are discussed.

  13. A Large Scale Virtual Gas Sensor Array

    NASA Astrophysics Data System (ADS)

    Ziyatdinov, Andrey; Fernández-Diaz, Eduard; Chaudry, A.; Marco, Santiago; Persaud, Krishna; Perera, Alexandre

    2011-09-01

    This paper depicts a virtual sensor array that allows the user to generate synthetic gas sensor data while controlling a wide variety of the characteristics of the sensor array response: arbitrary number of sensors, support for multi-component gas mixtures and full control of the noise in the system, such as sensor drift or sensor aging. The artificial sensor array response is inspired by the response of 17 polymeric sensors to three analytes over 7 months. The main trends in the synthetic gas sensor array, such as sensitivity, diversity, drift and sensor noise, are user controlled. Sensor sensitivity is modeled by an optionally linear or nonlinear (spline based) method. The data generation toolbox is implemented in the open source R language for statistical computing and can be freely accessed as an educational resource or benchmarking reference. The software package permits the design of scenarios with a very large number of sensors (over 10000 sensels), which are employed in the test and benchmarking of neuromorphic models in the Bio-ICT European project NEUROCHEM.
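A minimal sketch of the kind of synthetic response such a generator produces; the function and parameter names below are assumptions for illustration, not the toolbox's actual R API (the real package adds spline-based nonlinearity and richer noise models):

```python
import random

def sensor_response(conc, sensitivity, drift_rate, t, noise_sd=0.0, rng=None):
    """Linear sensitivity term + time-proportional drift + Gaussian noise."""
    rng = rng or random.Random(0)
    return sensitivity * conc + drift_rate * t + rng.gauss(0.0, noise_sd)

# Three hypothetical sensors probed at concentration 2.0, time t=2, no noise:
sensitivities = [0.5, 1.0, 2.0]
responses = [sensor_response(2.0, s, drift_rate=0.5, t=2.0) for s in sensitivities]
print(responses)  # [2.0, 3.0, 5.0]
```

Scaling this to an array of thousands of "sensels" is then a matter of drawing per-sensor sensitivities and drift rates from chosen distributions.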

  14. Nonlinear large-scale optimization with WORHP

    NASA Astrophysics Data System (ADS)

    Nikolayzik, Tim; Büskens, Christof; Gerdts, Matthias

    Nonlinear optimization has grown into a key technology in many areas of the aerospace industry, e.g. satellite control, shape optimization, aerodynamics, trajectory planning, reentry problems and interplanetary flights. One of the most extensive areas is the optimization of trajectories for aerospace applications. These problems typically are discretized optimal control problems, which leads to large sparse nonlinear optimization problems. In the end all these different problems from different areas can be described in the general formulation of a nonlinear optimization problem. WORHP is designed to solve nonlinear optimization problems with more than one million variables and one million constraints. WORHP uses many different advanced techniques, e.g. reverse communication, to organize the optimization process as efficiently and as controllably by the user as possible. The solver has nine different interfaces, e.g. to MATLAB/SIMULINK and AMPL. Tests of WORHP have shown that it is a very robust and promising solver. Several examples from space applications will be presented.

  15. Using Web-Based Testing for Large-Scale Assessment.

    ERIC Educational Resources Information Center

    Hamilton, Laura S.; Klein, Stephen P.; Lorie, William

    This paper describes an approach to large-scale assessment that uses tests that are delivered to students over the Internet and that are tailored (adapted) to each student's own level of proficiency. A brief background on large-scale assessment is followed by a description of this new technology and an example. Issues that need to be investigated…

  16. Large-scale societal changes and intentionality - an uneasy marriage.

    PubMed

    Bodor, Péter; Fokas, Nikos

    2014-08-01

    Our commentary focuses on juxtaposing the proposed science of intentional change with facts and concepts pertaining to the level of large populations or changes on a worldwide scale. Although we find a unified evolutionary theory promising, we think that long-term and large-scale, scientifically guided - that is, intentional - social change is not only impossible, but also undesirable.

  17. Large Scale Interconnections Using Dynamic Gratings

    NASA Astrophysics Data System (ADS)

    Pauliat, Gilles; Roosen, Gerald

    1987-01-01

    Optics is attractive for interconnects because multiple light beams can cross without interacting. A crossbar network can be achieved using holographic elements, which permit all inputs and all outputs to be connected independently. The incorporation of dynamic holographic materials is enticing, as it renders the interconnection reconfigurable. However, it is necessary to find first a passive method of achieving beam deflection, and secondly a photosensitive material of high optical quality requiring low power levels to optically induce the refractive index changes. We first describe an optical method for producing very large deflections of light beams, enabling any spot on a plane to be addressed randomly. Such a technique appears applicable both to interconnections of VLSI chips and to random access of optical memories. Our scheme for realizing dynamic optical interconnects is based on Bragg diffraction of the beam to be steered by a dynamic phase grating whose spacing and orientation are changeable in real time. This is achieved in a passive way by acting on the optical frequency of the control beams used to record the dynamic grating. Deflection angles of 15° have been experimentally demonstrated for a 27 nm shift in the control wavelength. For a larger wavelength scan (50 nm), 28° deflections are anticipated while keeping the Bragg condition satisfied. We then discuss some issues related to photosensitive materials able to dynamically record the optically induced refractive index change. The specific example of Bi12SiO20 or Bi12GeO20 photorefractive crystals is presented. Indeed, these materials are very attractive as they require low driving energy and exhibit a memory effect. This latter property permits numerous iterations between computing cells before reconfiguration of the interconnect network.
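The Bragg geometry behind these deflection figures can be sketched numerically; the grating period in the example is a made-up illustrative value, not a number from the paper:

```python
import math

def bragg_deflection_deg(wavelength_nm, grating_period_nm):
    # Bragg condition: sin(theta_B) = lambda / (2 * Lambda); the diffracted
    # beam exits at 2 * theta_B from the incident beam direction.
    theta_b = math.asin(wavelength_nm / (2.0 * grating_period_nm))
    return math.degrees(2.0 * theta_b)

# A ~2 micrometre period grating read out at 514 nm:
print(round(bragg_deflection_deg(514.0, 2000.0), 1))  # 14.8 (degrees)
```

Rewriting the grating with control beams of a shifted wavelength changes the grating period, which is what steers the deflection angle in the scheme above.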

  18. Amplification of large-scale magnetic field in nonhelical magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Kumar, Rohit; Verma, Mahendra K.

    2017-09-01

    It is typically assumed that the kinetic and magnetic helicities play a crucial role in the growth of a large-scale dynamo. In this paper, we demonstrate that helicity is not essential for the amplification of the large-scale magnetic field. For this purpose, we perform a nonhelical magnetohydrodynamic (MHD) simulation, and show that the large-scale magnetic field can grow in nonhelical MHD when random external forcing is employed at a scale of 1/10 the box size. The energy fluxes and shell-to-shell transfer rates computed using the numerical data show that the large-scale magnetic energy grows due to energy transfers from the velocity field at the forcing scales.

  19. Large Scale Turbulent Structures in Supersonic Jets

    NASA Technical Reports Server (NTRS)

    Rao, Ram Mohan; Lundgren, Thomas S.

    1997-01-01

    Jet noise is a major concern in the design of commercial aircraft. Studies by various researchers suggest that aerodynamic noise is a major contributor to jet noise. Some of these studies indicate that most of the aerodynamic jet noise due to turbulent mixing occurs when there is a rapid variation in turbulent structure, i.e. rapidly growing or decaying vortices. The objective of this research was to simulate a compressible round jet to study the non-linear evolution of vortices and the resulting acoustic radiations, in particular, to understand the effect of turbulence structure on the noise. An ideal technique to study this problem is Direct Numerical Simulations (DNS), because it provides precise control on the initial and boundary conditions that lead to the turbulent structures studied. It also provides complete 3-dimensional time dependent data. Since the dynamics of a temporally evolving jet are not greatly different from those of a spatially evolving jet, a temporal jet problem was solved, using periodicity in the direction of the jet axis. This enables the application of Fourier spectral methods in the streamwise direction. Physically this means that turbulent structures in the jet are repeated in successive downstream cells instead of being gradually modified downstream into a jet plume. The DNS jet simulation helps us understand the various turbulent scales and mechanisms of turbulence generation in the evolution of a compressible round jet. These accurate flow solutions will be used in future research to estimate near-field acoustic radiation by computing the total outward flux across a surface and determine how it is related to the evolution of the turbulent solutions. Furthermore, these simulations allow us to investigate the sensitivity of acoustic radiations to inlet/boundary conditions, with possible application to active noise suppression. 
In addition, the data generated can be used to compute various turbulence quantities such as mean velocities…

  1. Harmonizing the RR Lyrae and Clump Distance Scales-Stretching the Short Distance Scale to Intermediate Ranges?

    SciTech Connect

    Popowski, P.

    2000-01-31

    I explore the consequences of making the RR Lyrae and clump giant distance scales consistent in the solar neighborhood, Galactic bulge and Large Magellanic Cloud (LMC). I employ two major assumptions: (1) that the absolute magnitude-metallicity, M{sub V}(RR)-[Fe/H], relation for RR Lyrae stars is universal, and (2) that absolute I-magnitudes of clump giants, M{sub I}(RC), in Baade's Window can be inferred from the local Hipparcos calibration of clump giants' magnitudes. A comparison between the solar neighborhood and Baade's Window sets M{sub V}(RR) at [Fe/H] = -1.6 in the range (0.59 {+-} 0.05, 0.70 {+-} 0.05), somewhat brighter than the statistical parallax solution. A comparison between Baade's Window and the LMC sets the M{sub I}{sup LMC}(RC) in the range (-0.33 {+-} 0.09, -0.53 {+-} 0.09). The distance modulus to the LMC is {mu}{sup LMC} {element_of} (18.24 {+-} 0.08, 18.44 {+-} 0.07). I argue that the currently available information slightly favors the short distance scale but is insufficient to select the correct solutions with high confidence.
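The comparisons in this abstract rest on the distance modulus together with a linear metallicity relation; schematically (the coefficients a and b are left unspecified, since the abstract quotes only ranges):

```latex
\mu = V_0(\mathrm{RR}) - M_V(\mathrm{RR}),
\qquad
M_V(\mathrm{RR}) = a + b\left([\mathrm{Fe}/\mathrm{H}] + 1.6\right)
```

Anchoring M{sub V}(RR) at [Fe/H] = -1.6 in one environment then propagates directly into the LMC modulus quoted above.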

  2. A New Deep-Ultraviolet Transparent Orthophosphate LiCs2PO4 with Large Second Harmonic Generation Response.

    PubMed

    Li, Lin; Wang, Ying; Lei, Bing-Hua; Han, Shujuan; Yang, Zhihua; Poeppelmeier, Kenneth R; Pan, Shilie

    2016-07-27

    LiCs2PO4, a new deep-ultraviolet (UV) transparent material, was synthesized by the flux method. The material contains unusual edge-sharing LiO4-PO4 tetrahedra. It exhibits a very short absorption edge of λ = 174 nm and generates the largest powder second harmonic generation (SHG) response for deep-UV phosphates that do not contain additional anionic groups, i.e., 2.6 times that of KH2PO4 (KDP). First-principles electronic structure analyses confirm the experimental results and suggest that the strong SHG response may originate from the aligned nonbonding O-2p orbitals. The discovery and characterization of LiCs2PO4 provide a new insight into the structure-property relationships of phosphate-based nonlinear optical materials with large SHG responses and short absorption edges.

  3. Distribution probability of large-scale landslides in central Nepal

    NASA Astrophysics Data System (ADS)

    Timilsina, Manita; Bhandary, Netra P.; Dahal, Ranjan Kumar; Yatabe, Ryuichi

    2014-12-01

    Large-scale landslides in the Himalaya are defined as huge, deep-seated landslide masses that occurred in the geological past. They are widely distributed in the Nepal Himalaya. The steep topography and high local relief provide high potential for such failures, whereas the dynamic geology and adverse climatic conditions play a key role in the occurrence and reactivation of such landslides. The major geoscientific problems related to such large-scale landslides are 1) difficulties in their identification and delineation, 2) sources of small-scale failures, and 3) reactivation. Only a few scientific publications have been published concerning large-scale landslides in Nepal. In this context, the identification and quantification of large-scale landslides and their potential distribution are crucial. Therefore, this study explores the distribution of large-scale landslides in the Lesser Himalaya. It provides simple guidelines to identify large-scale landslides based on their typical characteristics and using a 3D schematic diagram. Based on the spatial distribution of landslides, geomorphological/geological parameters and logistic regression, an equation for large-scale landslide distribution is also derived. The equation is validated by applying it to another area, for which the area under the receiver operating characteristic curve of the landslide distribution probability is 0.699, and the distribution probability value could explain > 65% of existing landslides. Therefore, the regression equation can be applied to areas of the Lesser Himalaya of central Nepal with similar geological and geomorphological conditions.
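The distribution-probability equation and its ROC validation can be sketched generically; the coefficients below are placeholders for illustration, not the paper's fitted values:

```python
import math

def landslide_probability(coeffs, intercept, features):
    # Logistic regression: P = 1 / (1 + exp(-(b0 + b1*x1 + ... + bn*xn)))
    z = intercept + sum(c * x for c, x in zip(coeffs, features))
    return 1.0 / (1.0 + math.exp(-z))

def auc(scores, labels):
    # Area under the ROC curve via the Mann-Whitney pairwise formulation.
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(landslide_probability([0.8, -0.3], -1.0, [2.0, 1.0]))  # a value in (0, 1)
print(auc([0.9, 0.7, 0.4, 0.2], [1, 0, 1, 0]))  # 0.75
```

An AUC of about 0.7, as reported above, means a randomly chosen landslide cell outranks a randomly chosen non-landslide cell about 70% of the time.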

  4. A study of MLFMA for large-scale scattering problems

    NASA Astrophysics Data System (ADS)

    Hastriter, Michael Larkin

    This research is centered in computational electromagnetics with a focus on solving large-scale problems accurately and in a timely fashion using first-principles physics. Error control of the translation operator in 3-D is shown. A parallel implementation of the multilevel fast multipole algorithm (MLFMA) was studied with respect to parallel efficiency and scaling. The large-scale scattering program (LSSP), based on the ScaleME library, was used to solve ultra-large-scale problems, including a 200λ sphere with 20 million unknowns. As these large-scale problems were solved, techniques were developed to accurately estimate the memory requirements; careful memory management is needed in order to solve these massive problems. The study of MLFMA in large-scale problems revealed significant errors that stemmed from inconsistencies in constants used by different parts of the algorithm. These were fixed to produce the most accurate data possible for large-scale surface scattering problems. Data were calculated on a missile-like target using both high-frequency methods and MLFMA, then compared and analyzed to determine possible strategies to increase data acquisition speed and accuracy through hybridization of multiple computation methods.

  5. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  6. Needs, opportunities, and options for large scale systems research

    SciTech Connect

    Thompson, G.L.

    1984-10-01

    The Office of Energy Research was recently asked to perform a study of Large Scale Systems in order to facilitate the development of a true large systems theory. It was decided to ask experts in the fields of electrical engineering, chemical engineering and manufacturing/operations research for their ideas concerning large scale systems research. The author was asked to distribute a questionnaire among these experts to find out their opinions concerning recent accomplishments and future research directions in large scale systems research. He was also requested to convene a conference which included three experts in each area as panel members to discuss the general area of large scale systems research. The conference was held on March 26-27, 1984 in Pittsburgh with nine panel members and 15 other attendees. The present report is a summary of the ideas presented and the recommendations proposed by the attendees.

  7. Large-scale velocity structures in turbulent thermal convection.

    PubMed

    Qiu, X L; Tong, P

    2001-09-01

    A systematic study of large-scale velocity structures in turbulent thermal convection is carried out in three different aspect-ratio cells filled with water. Laser Doppler velocimetry is used to measure the velocity profiles and statistics over varying Rayleigh numbers Ra and at various spatial positions across the whole convection cell. Large velocity fluctuations are found both in the central region and near the cell boundary. Despite the large velocity fluctuations, the flow field still maintains a large-scale quasi-two-dimensional structure, which rotates in a coherent manner. This coherent single-roll structure scales with Ra and can be divided into three regions in the rotation plane: (1) a thin viscous boundary layer, (2) a fully mixed central core region with a constant mean velocity gradient, and (3) an intermediate plume-dominated buffer region. The experiment reveals a unique driving mechanism for the large-scale coherent rotation in turbulent convection.

  8. Organised convection embedded in a large-scale flow

    NASA Astrophysics Data System (ADS)

    Naumann, Ann Kristin; Stevens, Bjorn; Hohenegger, Cathy

    2017-04-01

    In idealised simulations of radiative convective equilibrium, convection aggregates spontaneously from randomly distributed convective cells into organized mesoscale convection despite homogeneous boundary conditions. Although these simulations apply very idealised setups, the process of self-aggregation is thought to be relevant for the development of tropical convective systems. One feature that idealised simulations usually neglect is the occurrence of a large-scale background flow. In the tropics, organised convection is embedded in a large-scale circulation system, which advects convection in the along-wind direction and alters near-surface convergence in the convective areas. A large-scale flow also modifies the surface fluxes, which are expected to be enhanced upwind of the convective area if a large-scale flow is applied. Convective clusters that are embedded in a large-scale flow therefore experience an asymmetric component of the surface fluxes, which influences the development and the pathway of a convective cluster. In this study, we use numerical simulations with explicit convection and add a large-scale flow to the established setup of radiative convective equilibrium. We then analyse how aggregated convection evolves when exposed to wind forcing. The simulations suggest that convective line structures are more prevalent if a large-scale flow is present and that convective clusters move considerably slower than advection by the large-scale flow would suggest. We also study the asymmetric component of convective aggregation due to enhanced surface fluxes, and discuss the pathway and speed of convective clusters as a function of the large-scale wind speed.

  9. Large scale purification of RNA nanoparticles by preparative ultracentrifugation.

    PubMed

    Jasinski, Daniel L; Schwartz, Chad T; Haque, Farzin; Guo, Peixuan

    2015-01-01

    Purification of large quantities of supramolecular RNA complexes is of paramount importance due to the large quantities of RNA needed and the purity requirements for in vitro and in vivo assays. Purification is generally carried out by high-performance liquid chromatography (HPLC), polyacrylamide gel electrophoresis (PAGE), or agarose gel electrophoresis (AGE). Here, we describe an efficient method for the large-scale purification of RNA prepared by in vitro transcription using T7 RNA polymerase by cesium chloride (CsCl) equilibrium density gradient ultracentrifugation and the large-scale purification of RNA nanoparticles by sucrose gradient rate-zonal ultracentrifugation or cushioned sucrose gradient rate-zonal ultracentrifugation.

  10. Nonlinear equatorial spread F: spatially large bubbles resulting from large horizontal scale initial perturbations. Memorandum report

    SciTech Connect

    Zalesak, S.T.; Ossakow, S.L.

    1980-02-06

    Motivated by the observations of large horizontal scale length equatorial spread F 'bubbles', we have performed numerical simulations of the nonlinear evolution of the collisional Rayleigh-Taylor instability in the nighttime equatorial ionosphere, using large horizontal scale length initial perturbations. The calculations were performed using a new, improved numerical code which utilizes the recently developed, fully multidimensional flux-corrected transport (FCT) techniques. We find that large horizontal scale initial perturbations evolve nonlinearly into equally large horizontal scale spread F bubbles, on a time scale as fast as that of the corresponding small horizontal scale length perturbations previously used. Further, we find the level of plasma depletion inside the large scale bubbles to be appreciably higher than that of the smaller scale bubbles, approaching 100%, in substantial agreement with the observations. This level of depletion is due to the fact that the plasma comprising the large scale bubbles has its origin at much lower altitudes than that comprising the smaller scale bubbles. Analysis of the polarization electric fields produced by the vertically aligned ionospheric irregularities shows this effect to be due to fringe fields similar in structure to those produced at the edge of a parallel-plate capacitor.

  11. Nonlinear generation of shear flows and large-scale magnetic fields by small-scale turbulence

    NASA Astrophysics Data System (ADS)

    Aburjania, G.

    2009-04-01

    EGU2009-233: Nonlinear generation of shear flows and large-scale magnetic fields by small-scale turbulence in the ionosphere, by G. Aburjania.

  12. Learning networks for sustainable, large-scale improvement.

    PubMed

    McCannon, C Joseph; Perla, Rocco J

    2009-05-01

    Large-scale improvement efforts known as improvement networks offer structured opportunities for exchange of information and insights into the adaptation of clinical protocols to a variety of settings.

  13. Cut-off scaling of high-harmonic generation driven by a femtosecond visible optical parametric amplifier

    NASA Astrophysics Data System (ADS)

    Cirmi, Giovanni; Lai, Chien-Jen; Granados, Eduardo; Huang, Shu-Wei; Sell, Alexander; Hong, Kyung-Han; Moses, Jeffrey; Keathley, Phillip; Kärtner, Franz X.

    2012-10-01

    We studied high-harmonic generation (HHG) in Ar, Ne and He gas jets using a broadly tunable, high-energy optical parametric amplifier (OPA) in the visible wavelength range. We optimized the noncollinear OPA to deliver tunable, femtosecond pulses with 200-500 µJ energy at a 1 kHz repetition rate with excellent spatiotemporal properties, suitable for HHG experiments. By tuning the central wavelength of the OPA while keeping other parameters (energy, duration and beam size) constant, we experimentally studied the scaling law of the cut-off energy with the driver wavelength in helium. Our measurements show a λ^(1.7±0.2) dependence of the HHG cut-off photon energy over the full visible range, in agreement with previous experiments at near- and mid-IR wavelengths. By tuning the central wavelength of the driver source, the high-order harmonic spectra in the extreme ultraviolet cover the full range of photon energies between ˜25 and ˜100 eV. Due to the high coherence intrinsic to HHG, as well as the broad and continuous tunability in the extreme UV range, a high-energy, high-repetition-rate version of this source might be an ideal seed for free-electron lasers.
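
    The measured exponent can be compared with the semiclassical three-step prediction E_cutoff = I_p + 3.17 U_p, where the ponderomotive energy U_p scales as Iλ². A minimal sketch follows; the intensity value is an illustrative assumption, not taken from the paper.

```python
# Hedged sketch of the semiclassical cut-off law the measurement probes:
# E_cutoff = I_p + 3.17 * U_p, with ponderomotive energy U_p ~ I * lambda^2.
IP_HELIUM_EV = 24.59  # ionization potential of helium, in eV

def ponderomotive_energy_ev(intensity_w_cm2, wavelength_um):
    """U_p [eV] ~= 9.33e-14 * I [W/cm^2] * (lambda [um])^2."""
    return 9.33e-14 * intensity_w_cm2 * wavelength_um ** 2

def cutoff_energy_ev(intensity_w_cm2, wavelength_um, ip_ev=IP_HELIUM_EV):
    """Semiclassical HHG cut-off photon energy."""
    return ip_ev + 3.17 * ponderomotive_energy_ev(intensity_w_cm2, wavelength_um)

# Sweep illustrative visible driver wavelengths at a fixed (assumed) intensity:
cutoffs = {lam: cutoff_energy_ev(4e14, lam) for lam in (0.5, 0.6, 0.7)}
```

    At this assumed intensity the predicted cut-offs fall within the ~25-100 eV extreme-UV range reported; the pure U_p term gives the λ² limit that the fitted λ^(1.7±0.2) approaches when the I_p offset becomes negligible at longer wavelengths.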

  14. A unified large/small-scale dynamo in helical turbulence

    NASA Astrophysics Data System (ADS)

    Bhat, Pallavi; Subramanian, Kandaswamy; Brandenburg, Axel

    2016-09-01

    We use high resolution direct numerical simulations (DNS) to show that helical turbulence can generate significant large-scale fields even in the presence of strong small-scale dynamo action. During the kinematic stage, the unified large/small-scale dynamo grows fields with a shape-invariant eigenfunction, with most power peaked at small scales or large k, as in Subramanian & Brandenburg. Nevertheless, the large-scale field can be clearly detected as an excess power at small k in the negatively polarized component of the energy spectrum for a forcing with positively polarized waves. Its strength B̄, relative to the total rms field B_rms, decreases with increasing magnetic Reynolds number Re_M. However, as the Lorentz force becomes important, the field generated by the unified dynamo orders itself by saturating on successively larger scales. The magnetic integral scale for the positively polarized waves, characterizing the small-scale field, increases significantly from the kinematic stage to saturation. This implies that the small-scale field becomes as coherent as possible for a given forcing scale, which averts the Re_M-dependent quenching of B̄/B_rms. These results are obtained from 1024^3 DNS with magnetic Prandtl numbers of Pr_M = 0.1 and 10. For Pr_M = 0.1, B̄/B_rms grows from about 0.04 to about 0.4 at saturation, aided in the final stages by helicity dissipation. For Pr_M = 10, B̄/B_rms grows from much less than 0.01 to values of the order of 0.2. Our results confirm that there is a unified large/small-scale dynamo in helical turbulence.

  15. Turbulent amplification of large-scale magnetic fields

    NASA Technical Reports Server (NTRS)

    Montgomery, D.; Chen, H.

    1984-01-01

    Previously-introduced methods for analytically estimating the effects of small-scale turbulent fluctuations on large-scale dynamics are extended to fully three-dimensional magnetohydrodynamics. The problem becomes algebraically tractable in the presence of sufficiently large spectral gaps. The calculation generalizes 'alpha dynamo' calculations, except that the velocity fluctuations and magnetic fluctuations are treated on an independent and equal footing. Earlier expressions for the 'alpha coefficients' of turbulent magnetic field amplification are recovered as a special case.

  16. An Adaptive Multiscale Finite Element Method for Large Scale Simulations

    DTIC Science & Technology

    2015-09-28

    AFRL-AFOSR-VA-TR-2015-0305: An Adaptive Multiscale Generalized Finite Element Method for Large Scale Simulations. Carlos Duarte, University of Illinois, Champaign; report dated 14-07-2015.

  17. Large Scale Density Estimation of Blue and Fin Whales (LSD)

    DTIC Science & Technology

    2014-09-30

    The project develops a density estimation methodology for quantifying blue and fin whale abundance from passive acoustic data recorded on sparse hydrophone arrays; the approach is designed to be effective over large spatial scales and to cope with spatial variation in animal density. (Approved for public release; distribution is unlimited.)

  18. A Cloud Computing Platform for Large-Scale Forensic Computing

    NASA Astrophysics Data System (ADS)

    Roussev, Vassil; Wang, Liqiang; Richard, Golden; Marziale, Lodovico

    The timely processing of massive digital forensic collections demands the use of large-scale distributed computing resources and the flexibility to customize the processing performed on the collections. This paper describes MPI MapReduce (MMR), an open implementation of the MapReduce processing model that outperforms traditional forensic computing techniques. MMR provides linear scaling for CPU-intensive processing and super-linear scaling for indexing-related workloads.
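
    The MapReduce model that MMR implements can be illustrated with a minimal serial sketch. This is plain Python showing the map/shuffle/reduce phases on a word-counting task (the kind of indexing-related workload the abstract mentions), not the MMR (MPI-based) API itself.

```python
from collections import defaultdict

def map_phase(documents):
    """Emit (key, value) pairs: one (word, 1) per word occurrence."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Group intermediate values by key, as the framework does between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Aggregate each key's values; for word counting, a simple sum."""
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["disk image one", "disk image two"])))
```

    In a distributed implementation such as MMR, the map and reduce phases run in parallel across MPI ranks and the shuffle becomes inter-process communication, which is where the linear and super-linear scaling arises.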

  19. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  1. Large-scale microwave anisotropy from gravitating seeds

    NASA Technical Reports Server (NTRS)

    Veeraraghavan, Shoba; Stebbins, Albert

    1992-01-01

    Topological defects could have seeded primordial inhomogeneities in cosmological matter. We examine the horizon-scale matter and geometry perturbations generated by such seeds in an expanding homogeneous and isotropic universe. Evolving particle horizons generally lead to perturbations around motionless seeds, even when there are compensating initial underdensities in the matter. We describe the pattern of the resulting large angular scale microwave anisotropy.

  3. Large-scale studies of marked birds in North America

    USGS Publications Warehouse

    Tautin, J.; Metras, L.; Smith, G.

    1999-01-01

    The first large-scale, co-operative, studies of marked birds in North America were attempted in the 1950s. Operation Recovery, which linked numerous ringing stations along the east coast in a study of autumn migration of passerines, and the Preseason Duck Ringing Programme in prairie states and provinces, conclusively demonstrated the feasibility of large-scale projects. The subsequent development of powerful analytical models and computing capabilities expanded the quantitative potential for further large-scale projects. Monitoring Avian Productivity and Survivorship, and Adaptive Harvest Management are current examples of truly large-scale programmes. Their exemplary success and the availability of versatile analytical tools are driving changes in the North American bird ringing programme. Both the US and Canadian ringing offices are modifying operations to collect more and better data to facilitate large-scale studies and promote a more project-oriented ringing programme. New large-scale programmes such as the Cornell Nest Box Network are on the horizon.

  4. Recursive architecture for large-scale adaptive system

    NASA Astrophysics Data System (ADS)

    Hanahara, Kazuyuki; Sugiyama, Yoshihiko

    1994-09-01

    'Large scale' is one of the major trends in recent engineering research and development, especially in the field of aerospace structural systems. The term expresses the large physical scale of an artifact in general, but it usually also implies the large number of components that make up the artifact. A large scale system used in remote space or the deep sea should be adaptive as well as robust by itself, because control and maintenance by human operators are not easy due to the remoteness. One approach to realizing such a large scale, adaptive and robust system is to build it as an assemblage of components that are each adaptive by themselves. In this case, the robustness of the system can be achieved by using a large number of such components together with suitable adaptation and maintenance strategies. Such systems have attracted wide research interest, and studies on decentralized motion control, configuration algorithms and the characteristics of structural elements have been reported. In this article, a recursive architecture concept is developed and discussed for the realization of a large scale system consisting of a number of uniform adaptive components. We propose an adaptation strategy based on the architecture and its implementation by means of hierarchically connected processing units. The robustness of the architecture and its restoration from degeneration of a processing unit are also discussed. Two- and three-dimensional adaptive truss structures are conceptually designed based on the recursive architecture.

  5. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. Many suitcase-sized portable test stands have been assembled for demonstrations of hybrids, showing audiences the safety of hybrid rockets. These small show motors and small laboratory-scale motors can give comparative burn rate data for the development of different fuel/oxidizer combinations. However, the questions always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has this been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from small motors at application size has been documented in several places. Comparison of small-scale hybrid data to larger-scale data indicates that the fuel burn rate goes down with increasing port size, even at the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this occurs would make a great paper, study or thesis, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary-layer driven, larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.

  6. Magnetic Helicity and Large Scale Magnetic Fields: A Primer

    NASA Astrophysics Data System (ADS)

    Blackman, Eric G.

    2015-05-01

    Magnetic fields of laboratory, planetary, stellar, and galactic plasmas commonly exhibit significant order on large temporal or spatial scales compared to the otherwise random motions within the hosting system. Such ordered fields can be measured in the case of planets, stars, and galaxies, or inferred indirectly by the action of their dynamical influence, such as jets. Whether large scale fields are amplified in situ or a remnant from previous stages of an object's history is often debated for objects without a definitive magnetic activity cycle. Magnetic helicity, a measure of twist and linkage of magnetic field lines, is a unifying tool for understanding large scale field evolution for both mechanisms of origin. Its importance stems from its two basic properties: (1) magnetic helicity is typically better conserved than magnetic energy; and (2) the magnetic energy associated with a fixed amount of magnetic helicity is minimized when the system relaxes this helical structure to the largest scale available. Here I discuss how magnetic helicity has come to help us understand the saturation of and sustenance of large scale dynamos, the need for either local or global helicity fluxes to avoid dynamo quenching, and the associated observational consequences. I also discuss how magnetic helicity acts as a hindrance to turbulent diffusion of large scale fields, and thus a helper for fossil remnant large scale field origin models in some contexts. I briefly discuss the connection between large scale fields and accretion disk theory as well. The goal here is to provide a conceptual primer to help the reader efficiently penetrate the literature.

  7. Clearing and Labeling Techniques for Large-Scale Biological Tissues

    PubMed Central

    Seo, Jinyoung; Choe, Minjin; Kim, Sung-Yon

    2016-01-01

    Clearing and labeling techniques for large-scale biological tissues enable simultaneous extraction of molecular and structural information with minimal disassembly of the sample, facilitating the integration of molecular, cellular and systems biology across different scales. Recent years have witnessed an explosive increase in the number of such methods and their applications, reflecting heightened interest in organ-wide clearing and labeling across many fields of biology and medicine. In this review, we provide an overview and comparison of existing clearing and labeling techniques and discuss challenges and opportunities in the investigations of large-scale biological systems. PMID:27239813

  8. Generation of large-scale magnetic fields by small-scale dynamo in shear flows

    DOE PAGES

    Squire, J.; Bhattacharjee, A.

    2015-10-20

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Furthermore, given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.

  10. Large Scale Survey Data in Career Development Research

    ERIC Educational Resources Information Center

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  11. Lessons from Large-Scale Renewable Energy Integration Studies: Preprint

    SciTech Connect

    Bird, L.; Milligan, M.

    2012-06-01

    In general, large-scale integration studies in Europe and the United States find that high penetrations of renewable generation are technically feasible with operational changes and increased access to transmission. This paper describes other key findings such as the need for fast markets, large balancing areas, system flexibility, and the use of advanced forecasting.

  13. Cosmic strings and the large-scale structure

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common, and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Ω₀ = 1 and without strong environmental biasing.

  14. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  15. Large-Scale Graph Processing Analysis using Supercomputer Cluster

    NASA Astrophysics Data System (ADS)

    Vildario, Alfrido; Fitriyani; Nugraha Nurkahfi, Galih

    2017-01-01

    Graph processing is widely used in various sectors such as automotive, traffic and image processing. These applications produce large-scale graphs, so processing them requires long computation times and high-specification resources. This research addresses the analysis of large-scale graph processing on a supercomputer cluster. We implemented graph processing using the Breadth-First Search (BFS) algorithm for the single-destination shortest path problem. The parallel BFS implementation with the Message Passing Interface (MPI) used the supercomputer cluster at the High Performance Computing Laboratory, Computational Science, Telkom University, and the Stanford Large Network Dataset Collection. The results showed that the implementation gives an average speed-up of more than 30 times and an efficiency of almost 90%.
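
    The BFS kernel that the study parallelizes can be sketched serially. On an unweighted graph, breadth-first search from the source yields the minimum hop count to a single destination; the graph and node labels below are illustrative, not from the dataset used in the paper.

```python
from collections import deque

def bfs_shortest_path(adjacency, source, destination):
    """Return the hop count from source to destination, or None if unreachable."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node == destination:  # single-destination variant: stop early
            return dist[node]
        for neighbor in adjacency.get(node, ()):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return None

graph = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
```

    An MPI parallelization typically partitions the vertex set across ranks and expands each BFS frontier level collectively, exchanging newly discovered boundary vertices between ranks at every level.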

  16. Potential for geophysical experiments in large scale tests

    SciTech Connect

    Dieterich, J.H.

    1981-07-01

    Potential research applications for large-specimen geophysical experiments include measurements of the scale dependence of physical parameters and examination of interactions with heterogeneities, especially flaws such as cracks. In addition, increased specimen size provides opportunities for improved recording resolution and greater control of experimental variables. Large-scale experiments using a special-purpose low-stress (<40 MPa) biaxial apparatus demonstrate that a minimum fault length is required to generate confined shear instabilities along pre-existing faults. Experimental analysis of source interactions for simulated earthquakes consisting of confined shear instabilities on a fault with gouge appears to require large specimens (approx. 1 m) and high confining pressures (>100 MPa).

  17. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large, global-scale natural phenomena needs to be improved, and new observing platforms are needed. We have studied the concept of the Moon as an Earth observation platform in recent years. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, and offers the following advantages: a large observation range, variable view angles, long-term continuous observation, and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability, and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmospheric change, large-scale ocean change, large-scale land-surface dynamic change, and solid-Earth dynamic change. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; sensor parameter optimization and methods for Moon-based Earth observation; site selection and the environment of Moon-based Earth observation; the Moon-based Earth observation platform itself; and a fundamental scientific framework for Moon-based Earth observation.

  18. Toward Improved Support for Loosely Coupled Large Scale Simulation Workflows

    SciTech Connect

    Boehm, Swen; Elwasif, Wael R; Naughton, III, Thomas J; Vallee, Geoffroy R

    2014-01-01

    High-performance computing (HPC) workloads are increasingly leveraging loosely coupled large-scale simulations. Unfortunately, most large-scale HPC platforms, including Cray/ALPS environments, are designed for the execution of long-running jobs based on coarse-grained launch capabilities (e.g., one MPI rank per core on all allocated compute nodes). This assumption limits capability-class workload campaigns that require large numbers of discrete or loosely coupled simulations, and where time-to-solution is an untenable pacing issue. This paper describes the challenges related to the support of fine-grained launch capabilities that are necessary for the execution of loosely coupled large-scale simulations on Cray/ALPS platforms. More precisely, we present the details of an enhanced runtime system to support this use case, and report on initial results from early testing on systems at Oak Ridge National Laboratory.

  19. Acoustic Studies of the Large Scale Ocean Circulation

    NASA Technical Reports Server (NTRS)

    Menemenlis, Dimitris

    1999-01-01

    Detailed knowledge of ocean circulation and its transport properties is prerequisite to an understanding of the earth's climate and of important biological and chemical cycles. Results from two recent experiments, THETIS-2 in the Western Mediterranean and ATOC in the North Pacific, illustrate the use of ocean acoustic tomography for studies of the large scale circulation. The attraction of acoustic tomography is its ability to sample and average the large-scale oceanic thermal structure, synoptically, along several sections, and at regular intervals. In both studies, the acoustic data are compared to, and then combined with, general circulation models, meteorological analyses, satellite altimetry, and direct measurements from ships. Both studies provide complete regional descriptions of the time-evolving, three-dimensional, large scale circulation, albeit with large uncertainties. The studies raise serious issues about existing ocean observing capability and provide guidelines for future efforts.

  20. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    SciTech Connect

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

    Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
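
    The low-rank kernel approximation via prototype points can be illustrated with a Nyström-style sketch. This is a generic NumPy illustration under assumed choices (RBF kernel, random data, the first ten points used as prototypes), not the PVM implementation itself:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared distances -> RBF (Gaussian) kernel matrix
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approx(X, prototypes, gamma=1.0):
    """Low-rank approximation K ≈ C W⁺ Cᵀ built from m prototype points."""
    C = rbf_kernel(X, prototypes, gamma)           # n x m cross-kernel
    W = rbf_kernel(prototypes, prototypes, gamma)  # m x m prototype kernel
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
protos = X[:10]                 # illustrative prototype choice
K = rbf_kernel(X, X)            # full 50 x 50 kernel
K_hat = nystrom_approx(X, protos)
err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
```

    The approximation is exact on the prototype block and replaces an n × n kernel with n × m and m × m factors, which is the source of the scaling benefit the abstract describes.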

  1. An investigation of rotor harmonic noise by the use of small scale wind tunnel models

    NASA Technical Reports Server (NTRS)

    Sternfeld, H., Jr.; Schaffer, E. G.

    1982-01-01

    Noise measurements of small scale helicopter rotor models were compared with noise measurements of full scale helicopters to determine what information about the full scale helicopters could be derived from noise measurements of small scale helicopter models. Comparisons were made of the discrete frequency (rotational) noise for 4 pairs of tests. Areas covered were tip speed effects, isolated rotor, tandem rotor, and main rotor/tail rotor interaction. Results show good comparison of noise trends with configuration and test condition changes, and good comparison of absolute noise measurements with the corrections used except for the isolated rotor case. Noise measurements of the isolated rotor show a great deal of scatter reflecting the fact that the rotor in hover is basically unstable.

  2. Large Scale Processes and Extreme Floods in Brazil

    NASA Astrophysics Data System (ADS)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g., atmospheric rivers/moisture transport) over local processes (e.g., local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in these regions is explored by means of observed streamflow, reanalysis data, and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced-dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convective activity. For individual sites, we investigate the exceedance probability at which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g., local vs. large scale).
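
    The dimensionality-reduction step can be illustrated with a plain (unsupervised) kernel PCA sketch in NumPy. The supervised variant used in the study also incorporates label information, which is omitted here; all names, kernels, and parameters below are illustrative assumptions:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.5):
    """Project data onto the leading components of a centered RBF kernel."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    # Double-center the kernel matrix (feature-space mean removal)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    # eigh returns eigenvalues in ascending order; keep the largest
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale eigenvectors to obtain the projected coordinates
    return vecs * np.sqrt(np.maximum(vals, 0))

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))   # stand-in for gridded moisture-flux fields
Z = kernel_pca(X)              # low-dimensional coordinates for clustering
```

    In the study's workflow, clustering would then be performed on coordinates like `Z` to separate regional-recycling from teleconnected moisture patterns.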

  3. A model of plasma heating by large-scale flow

    NASA Astrophysics Data System (ADS)

    Pongkitiwanichakul, P.; Cattaneo, F.; Boldyrev, S.; Mason, J.; Perez, J. C.

    2015-12-01

    In this work, we study the process of energy dissipation triggered by a slow large-scale motion of a magnetized conducting fluid. Our consideration is motivated by the problem of heating the solar corona, which is believed to be governed by fast reconnection events set off by the slow motion of magnetic field lines anchored in the photospheric plasma. To elucidate the physics governing the disruption of the imposed laminar motion and the energy transfer to small scales, we propose a simplified model where the large-scale motion of magnetic field lines is prescribed not at the footpoints but rather imposed volumetrically. As a result, the problem can be treated numerically with an efficient, highly accurate spectral method, allowing us to use a resolution and statistical ensemble exceeding those of the previous work. We find that, even though the large-scale deformations are slow, they eventually lead to reconnection events that drive a turbulent state at smaller scales. The small-scale turbulence displays many of the universal features of field-guided magnetohydrodynamic turbulence like a well-developed inertial range spectrum. Based on these observations, we construct a phenomenological model that gives the scalings of the amplitude of the fluctuations and the energy-dissipation rate as functions of the input parameters. We find good agreement between the numerical results and the predictions of the model.

  4. EINSTEIN'S SIGNATURE IN COSMOLOGICAL LARGE-SCALE STRUCTURE

    SciTech Connect

    Bruni, Marco; Hidalgo, Juan Carlos; Wands, David

    2014-10-10

    We show how the nonlinearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large-scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modeling large-scale structure in ΛCDM cosmology; a relativistic approach is essential to determine initial conditions, which can then be used in Newtonian simulations studying the nonlinear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, ζ. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, R, that drives structure formation at large scales. We show how the nonlinear relation between the spatial curvature, R, and the metric perturbation, ζ, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian ζ. Our analysis shows the nonlinear signature of Einstein's gravity in large-scale structure.

  5. The Influence of Large-scale Environments on Galaxy Properties

    NASA Astrophysics Data System (ADS)

    Wei, Yu-qing; Wang, Lei; Dai, Cai-ping

    2017-07-01

    The star formation properties of galaxies and their dependence on environment play an important role in understanding the formation and evolution of galaxies. Using the galaxy sample of the Sloan Digital Sky Survey (SDSS), different research groups have studied the physical properties of galaxies and their large-scale environments. Here, using the filament catalog from Tempel et al. and the galaxy catalog of large-scale structure classification from Wang et al., and taking into consideration galaxy morphology, high/low local-density environment, and central (satellite) galaxy status, we have found that the properties of galaxies are correlated with their residential large-scale environments: the SSFR (specific star formation rate) and SFR (star formation rate) strongly depend on the large-scale environment for spiral galaxies and satellite galaxies, but this dependence is very weak for elliptical galaxies and central galaxies, and galaxies in low-density regions are more sensitive to their large-scale environments than those in high-density regions. These conclusions remain valid even for galaxies of the same mass. In addition, the SSFR distributions derived from the catalogs of Tempel et al. and Wang et al. are not entirely consistent.

  6. Large-scale silicon optical switches for optical interconnection

    NASA Astrophysics Data System (ADS)

    Qiao, Lei; Tang, Weijie; Chu, Tao

    2016-11-01

    Large-scale optical switches are in great demand for building optical interconnections in data centers and high-performance computers (HPCs). Silicon optical switches have the advantages of being compact and CMOS-process compatible, and can easily be monolithically integrated. However, it is difficult to construct silicon optical switches with large port counts. One difficulty is the non-uniformity of the switch units in large-scale silicon optical switches, which arises from fabrication error and causes confusion in finding each unit's optimum operating point. In this paper, we propose a method to detect the optimum operating point in a large-scale switch with a limited number of built-in power monitors. We also propose methods for improving the unbalanced crosstalk of the cross/bar states in silicon electro-optical MZI switches and for reducing insertion losses. Our recent progress in large-scale silicon optical switches, including 64 × 64 thermo-optical and 32 × 32 electro-optical switches, will be introduced. To the best of our knowledge, both are the largest-scale silicon optical switches in their respective categories. The switches were fabricated on 340-nm SOI substrates with CMOS 180-nm processes. The crosstalk of the 32 × 32 electro-optic switch was -19.2 dB to -25.1 dB, while that of the 64 × 64 thermo-optic switch was -30 dB to -48.3 dB.
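
    The cross/bar behavior and crosstalk of a single MZI switch unit can be sketched from the transfer function of an idealized device (two perfect 3 dB couplers, a hypothetical phase error); this toy model illustrates why finding each unit's optimum phase matters, and is not the fabricated devices' measured response:

```python
import numpy as np

def mzi_outputs(delta_phi):
    """Ideal MZI with two 3 dB couplers: (cross, bar) output powers.
    Convention: delta_phi = 0 routes all power to the cross port."""
    cross = np.cos(delta_phi / 2) ** 2
    bar = np.sin(delta_phi / 2) ** 2
    return cross, bar

def crosstalk_db(p_unwanted, p_wanted):
    # Crosstalk as the leaked-to-intended power ratio in dB
    return 10 * np.log10(p_unwanted / p_wanted)

# A small phase error away from the cross state leaves residual power
# in the bar port; 10 degrees is an illustrative (hypothetical) error.
cross, bar = mzi_outputs(np.deg2rad(10.0))
xt = crosstalk_db(bar, cross)   # roughly -21 dB for this error
```

    Fabrication-induced non-uniformity shifts each unit's zero-phase point, which is why the paper's monitor-based search for the optimum operating point is needed.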

  7. The Phoenix series large scale LNG pool fire experiments.

    SciTech Connect

    Simpson, Richard B.; Jensen, Richard Pearson; Demosthenous, Byron; Luketa, Anay Josephine; Ricks, Allen Joseph; Hightower, Marion Michael; Blanchat, Thomas K.; Helmick, Paul H.; Tieszen, Sheldon Robert; Deola, Regina Anne; Mercier, Jeffrey Alan; Suo-Anttila, Jill Marie; Miller, Timothy J.

    2010-12-01

    The increasing demand for natural gas could increase the number and frequency of Liquefied Natural Gas (LNG) tanker deliveries to ports across the United States. Because of the increasing number of shipments and the number of possible new facilities, concerns about the potential safety of the public and property from accidental, and even more importantly intentional, spills have increased. While improvements have been made over the past decade in assessing hazards from LNG spills, the existing experimental data are much smaller in size and scale than many postulated large accidental and intentional spills. Since the physics and hazards of a fire change with fire size, there are concerns about the adequacy of current hazard prediction techniques for large LNG spills and fires. To address these concerns, Congress funded the Department of Energy (DOE) in 2008 to conduct a series of laboratory and large-scale LNG pool fire experiments at Sandia National Laboratories (Sandia) in Albuquerque, New Mexico. This report presents the test data and results of both sets of fire experiments. A series of five reduced-scale (gas burner) tests (yielding 27 sets of data) were conducted in 2007 and 2008 at Sandia's Thermal Test Complex (TTC) to assess flame height to fire diameter ratios as a function of nondimensional heat release rates, for extrapolation to large-scale LNG fires. The large-scale LNG pool fire experiments were conducted in a 120 m diameter pond specially designed and constructed in Sandia's Area III large-scale test complex. Two fire tests of LNG spills 21 and 81 m in diameter were conducted in 2009 to improve the understanding of flame height, smoke production, and burn rate, and therefore the physics and hazards of large LNG spills and fires.

  8. Large Scale Modelling of Glow Discharges or Non-Equilibrium Plasmas

    NASA Astrophysics Data System (ADS)

    Shankar, Sadasivan

    The Electron Velocity Distribution Function (EVDF) in the cathode fall of a DC helium glow discharge was evaluated from a numerical solution of the Boltzmann Transport Equation (BTE). The numerical technique was based on a Petrov-Galerkin technique and a unique combination of streamline upwinding with self-consistent feedback-based shock-capturing. The EVDF for the cathode fall was solved at 1 Torr, as a function of position x, axial velocity v_x, radial velocity v_r, and time t. The electron-neutral collisions consisted of elastic, excitation, and ionization processes. The algorithm was optimized and vectorized to speed execution by more than a factor of 10 on a CRAY X-MP. Efficient storage schemes were used to reduce the memory allocation required by the algorithm. The analysis of the solution of the BTE was done in terms of the 8 moments that were evaluated. Higher moments were found necessary to study the momentum and energy fluxes. The time and length scales were estimated and used as a basis for the characterization of DC glow discharges. Based on an exhaustive study of Knudsen numbers, it was observed that the electrons in the cathode fall were in the transition or Boltzmann regime. The shortest relaxation time was the momentum relaxation time, and the longest were the ionization and energy relaxation times. The other time scales in the processes were those for plasma reaction, diffusion, convection, transit, entropy relaxation, and the mean free flight time between collisions. Different models were classified, based on the moments, time scales, and length scales, according to their applicability to glow discharges. These consisted of the BTE with different numbers of phase and configuration dimensions, the Bhatnagar-Gross-Krook equation, moment equations (e.g., Drift-Diffusion, Drift-Diffusion-Inertia), and spherical harmonic expansions.

  9. Feasibility of large-scale aquatic microcosms. Final report

    SciTech Connect

    Pease, T.; Wyman, R.L.; Logan, D.T.; Logan, C.M.; Lispi, D.R.

    1982-02-01

    Microcosms have been used to study a number of fundamental ecological principles and, more recently, to investigate the effects of man-made perturbations on ecosystems. In this report the feasibility of using large-scale microcosms to assess aquatic impacts of power generating facilities is evaluated. Aquatic problems of concern to utilities are outlined, and various research approaches, including large and small microcosms, bioassays, and other laboratory experiments, are discussed. An extensive critical review and synthesis of the literature on recent microcosm research, which includes a comparison of the factors influencing physical, chemical, and biological processes in small vs. large microcosms and in microcosms vs. nature, led the authors to conclude that large-scale microcosms offer several advantages over other study techniques for particular types of problems. A hypothetical large-scale facility simulating a lake ecosystem is presented to illustrate the size, cost, and complexity of such facilities. The rationale for designing a lake-simulating large-scale microcosm is presented.

  10. Comparison of CMB lensing efficiency of gravitational waves and large scale structure

    NASA Astrophysics Data System (ADS)

    Padmanabhan, Hamsa; Rotti, Aditya; Souradeep, Tarun

    2013-09-01

    We provide a detailed treatment and comparison of the weak lensing effects due to large scale structure (LSS), or scalar density perturbations and those due to gravitational waves (GWs) or tensor perturbations, on the temperature and polarization power spectra of the cosmic microwave background (CMB). We carry out the analysis both in real space by using the correlation function method, as well as in the spherical harmonic space. We find an intriguing similarity between the lensing kernels associated with LSS lensing and GW lensing. It is found that the lensing kernels only differ in relative negative signs and their form is very reminiscent of even and odd parity bipolar spherical harmonic coefficients. Through a numerical study of these lensing kernels, we establish that lensing due to GW is more efficient at distorting the CMB spectra as compared to LSS lensing, particularly for the polarization power spectra. Finally we argue that the CMB B-mode power spectra measurements can be used to place interesting constraints on GW energy densities.

  11. Large-Scale Shell-Model Analysis of the Neutrinoless ββ Decay of 48Ca

    NASA Astrophysics Data System (ADS)

    Iwata, Y.; Shimizu, N.; Otsuka, T.; Utsuno, Y.; Menéndez, J.; Honma, M.; Abe, T.

    2016-03-01

    We present the nuclear matrix element for the neutrinoless double-beta decay of 48Ca based on large-scale shell-model calculations including two harmonic oscillator shells (sd and pf shells). The excitation spectra of 48Ca and 48Ti, and the two-neutrino double-beta decay of 48Ca, are reproduced in good agreement with the experimental data. We find that the neutrinoless double-beta decay nuclear matrix element is enhanced by about 30% compared to pf-shell calculations. This reduces the decay lifetime by almost a factor of 2. The matrix-element increase is mostly due to pairing correlations associated with cross-shell sd-pf excitations. We also investigate possible implications for heavier neutrinoless double-beta decay candidates.

  12. On determining the large-scale ocean circulation from satellite altimetry

    NASA Technical Reports Server (NTRS)

    Tai, C.-K.

    1983-01-01

    It is contended that a spherical harmonic expansion of the difference between the altimeter-derived mean sea surface and the geoid estimate should reveal the large-scale circulation of the ocean surface layer when the low-degree terms are examined. Methods based on this principle are proposed and partially demonstrated over the Pacific Ocean with the aid of the mean sea surface derived from the Seasat altimeter and the Goddard Earth Model 9 earth gravity model. The preliminary results reveal a well-defined clockwise gyre in the North Pacific and a much less well defined counterclockwise gyre in the South Pacific. When the dynamic topography thus obtained is compared with Wyrtki's (1975) dynamic topography derived from hydrographic data, the agreement is found to be within the limit of geoid uncertainties and satellite orbital errors.
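
    The idea that low-degree terms of a harmonic expansion isolate the large-scale circulation can be illustrated in one dimension with a Legendre fit; the synthetic "dynamic topography" profile below is entirely hypothetical, standing in for the altimeter-minus-geoid surface:

```python
import numpy as np
from numpy.polynomial import legendre

# Synthetic profile vs. latitude: a smooth large-scale gyre-like signal
# built from degree-1 and degree-2 Legendre polynomials, plus noise.
lat = np.linspace(-80, 80, 161)
x = np.sin(np.deg2rad(lat))                    # Legendre argument on [-1, 1]
rng = np.random.default_rng(2)
signal = 0.5 * x - 0.8 * (1.5 * x**2 - 0.5)    # 0.5*P1(x) - 0.8*P2(x)
noisy = signal + 0.02 * rng.normal(size=x.size)

# Fit Legendre coefficients; the low-degree terms carry the large scales
coeffs = legendre.legfit(x, noisy, deg=4)
recovered = legendre.legval(x, coeffs[:3])     # keep only degrees 0-2
```

    On the sphere the analogous step uses spherical harmonics in both latitude and longitude, but the principle is the same: truncating at low degree filters out small-scale noise (here, geoid uncertainty) while retaining the basin-scale gyre signal.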

  13. Large scale meteorological influence during the Geysers 1979 field experiment

    SciTech Connect

    Barr, S.

    1980-01-01

    A series of meteorological field measurements conducted during July 1979 near Cobb Mountain in Northern California reveals evidence of several scales of atmospheric circulation consistent with the climatic pattern of the area. The scales of influence are reflected in the structure of wind and temperature in vertically stratified layers at a given observation site. Large scale synoptic gradient flow dominates the wind field above about twice the height of the topographic ridge. Below that there is a mixture of effects with evidence of a diurnal sea breeze influence and a sublayer of katabatic winds. The July observations demonstrate that weak migratory circulations in the large scale synoptic meteorological pattern have a significant influence on the day-to-day gradient winds and must be accounted for in planning meteorological programs including tracer experiments.

  14. Seismic safety in conducting large-scale blasts

    NASA Astrophysics Data System (ADS)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling-and-blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effects of large-scale blasts increase. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out expert assessments for coal mines and iron ore enterprises. Determination of the magnitude of surface seismic vibrations caused by mass explosions was performed using seismic receivers and an analog-digital converter recording to a laptop. The registration results of surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The safety evaluation of the seismic effect was carried out according to the permissible value of vibration velocity. For cases exceeding the permissible values, recommendations were developed to reduce the level of seismic impact.

  15. PKI security in large-scale healthcare networks.

    PubMed

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years, many PKI (Public Key Infrastructure) architectures have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKI infrastructures face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI infrastructure facilitates the trust issues that arise in a large-scale healthcare network, including multi-domain PKI infrastructures.

  16. Large-scale simulations of complex physical systems

    NASA Astrophysics Data System (ADS)

    Belić, A.

    2007-04-01

    Scientific computing has become a tool as vital as experimentation and theory for dealing with scientific challenges of the twenty-first century. Large scale simulations and modelling serve as heuristic tools in a broad problem-solving process. High-performance computing facilities make possible the first step in this process - a view of new and previously inaccessible domains in science and the building up of intuition regarding the new phenomenology. The final goal of this process is to translate this newly found intuition into better algorithms and new analytical results. In this presentation we give an outline of the research themes pursued at the Scientific Computing Laboratory of the Institute of Physics in Belgrade regarding large-scale simulations of complex classical and quantum physical systems, and present recent results obtained in the large-scale simulations of granular materials and path integrals.

  17. Large-scale simulations of complex physical systems

    SciTech Connect

    Belic, A.

    2007-04-23

    Scientific computing has become a tool as vital as experimentation and theory for dealing with scientific challenges of the twenty-first century. Large scale simulations and modelling serve as heuristic tools in a broad problem-solving process. High-performance computing facilities make possible the first step in this process - a view of new and previously inaccessible domains in science and the building up of intuition regarding the new phenomenology. The final goal of this process is to translate this newly found intuition into better algorithms and new analytical results. In this presentation we give an outline of the research themes pursued at the Scientific Computing Laboratory of the Institute of Physics in Belgrade regarding large-scale simulations of complex classical and quantum physical systems, and present recent results obtained in the large-scale simulations of granular materials and path integrals.

  18. Space transportation booster engine thrust chamber technology, large scale injector

    NASA Technical Reports Server (NTRS)

    Schneider, J. A.

    1993-01-01

    The objective of the Large Scale Injector (LSI) program was to deliver a 21 inch diameter, 600,000 lbf thrust class injector to NASA/MSFC for hot fire testing. The hot fire test program would demonstrate the feasibility and integrity of the full scale injector, including combustion stability, chamber wall compatibility (thermal management), and injector performance. The 21 inch diameter injector was delivered in September of 1991.

  19. Scalable WIM: effective exploration in large-scale astrophysical environments.

    PubMed

    Li, Yinggang; Fu, Chi-Wing; Hanson, Andrew J

    2006-01-01

    Navigating through large-scale virtual environments such as simulations of the astrophysical Universe is difficult. The huge spatial range of astronomical models and the dominance of empty space make it hard for users to travel across cosmological scales effectively, and the problem of wayfinding further impedes the user's ability to acquire reliable spatial knowledge of astronomical contexts. We introduce a new technique called the scalable world-in-miniature (WIM) map as a unifying interface to facilitate travel and wayfinding in a virtual environment spanning gigantic spatial scales: Power-law spatial scaling enables rapid and accurate transitions among widely separated regions; logarithmically mapped miniature spaces offer a global overview mode when the full context is too large; 3D landmarks represented in the WIM are enhanced by scale, positional, and directional cues to augment spatial context awareness; a series of navigation models are incorporated into the scalable WIM to improve the performance of travel tasks posed by the unique characteristics of virtual cosmic exploration. The scalable WIM user interface supports an improved physical navigation experience and assists pragmatic cognitive understanding of a visualization context that incorporates the features of large-scale astronomy.
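
    The "logarithmically mapped miniature space" can be sketched as a radial compression function; this toy mapping (names and constants are assumptions for illustration, not taken from the paper) shows how cosmologically disparate distances collapse into a small, navigable range:

```python
import math

def log_miniature_radius(r, r0=1.0, scale=0.1):
    """Map a world-space distance r to a compact miniature radius.
    Distances below r0 stay linear; beyond r0 they grow logarithmically,
    so each factor of e in distance adds the same miniature increment."""
    if r <= r0:
        return scale * r
    return scale * (r0 + math.log(r / r0))

# Widely separated scales land within a few miniature units of each other
for r in (0.5, 1.0, 1e3, 1e6, 1e9):
    print(f"{r:>10g} -> {log_miniature_radius(r):.3f}")
```

    A power-law variant (raising `r / r0` to a small exponent instead of taking its log) gives the faster, still-controllable transitions the abstract describes for travel between widely separated regions.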

  20. [Issues of large scale tissue culture of medicinal plant].

    PubMed

    Lv, Dong-Mei; Yuan, Yuan; Zhan, Zhi-Lai

    2014-09-01

    In order to increase the yield and quality of medicinal plants and enhance the competitiveness of the medicinal plant industry in our country, this paper analyzes the status, problems, and countermeasures of large-scale tissue culture of medicinal plants. Although biotechnology is one of the most efficient and promising means of medicinal plant production, it still has problems, such as the stability of the material, the safety of transgenic medicinal plants, and the optimization of culture conditions. Establishing a perfect evaluation system according to the characteristics of the medicinal plant is the key measure to ensure the sustainable development of large-scale tissue culture of medicinal plants.

  1. The CLASSgal code for relativistic cosmological large scale structure

    NASA Astrophysics Data System (ADS)

    Di Dio, Enea; Montanari, Francesco; Lesgourgues, Julien; Durrer, Ruth

    2013-11-01

    We present accurate and efficient computations of large scale structure observables, obtained with a modified version of the CLASS code which is made publicly available. This code includes all relativistic corrections and computes both the power spectrum Cl(z1,z2) and the corresponding correlation function ξ(θ,z1,z2) of the matter density and the galaxy number fluctuations in linear perturbation theory. For Gaussian initial perturbations, these quantities contain the full information encoded in the large scale matter distribution at the level of linear perturbation theory. We illustrate the usefulness of our code for cosmological parameter estimation through a few simple examples.
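
    The relation between the angular power spectrum and the correlation function that such codes compute is the standard Legendre sum ξ(θ) = Σ_l (2l+1)/(4π) C_l P_l(cos θ). A minimal NumPy sketch with a toy spectrum (hypothetical values, not CLASSgal output) illustrates the transform:

```python
import numpy as np
from numpy.polynomial import legendre

def correlation_from_cl(cl, theta):
    """xi(theta) = sum_l (2l+1)/(4pi) * C_l * P_l(cos theta)."""
    ell = np.arange(len(cl))
    coeffs = (2 * ell + 1) / (4 * np.pi) * cl
    # legval evaluates a Legendre series with the given coefficients
    return legendre.legval(np.cos(theta), coeffs)

# Toy spectrum: C_l ~ 1/l^2 for l >= 2, monopole and dipole zeroed
cl = np.concatenate([np.zeros(2), 1.0 / (np.arange(2, 50) ** 2)])
theta = np.linspace(0.01, np.pi, 100)
xi = correlation_from_cl(cl, theta)
```

    In the code described by the abstract the same sum links C_l(z1, z2) to ξ(θ, z1, z2) for each redshift pair, with the relativistic corrections entering through the C_l themselves.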

  2. Corridors Increase Plant Species Richness at Large Scales

    SciTech Connect

    Damschen, Ellen I.; Haddad, Nick M.; Orrock, John L.; Tewksbury, Joshua J.; Levey, Douglas J.

    2006-09-01

    Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.

  3. Survey of decentralized control methods. [for large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1975-01-01

    An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.

  4. The Evolution of Baryons in Cosmic Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Snedden, Ali; Arielle Phillips, Lara; Mathews, Grant James; Coughlin, Jared; Suh, In-Saeng; Bhattacharya, Aparna

    2015-01-01

    The environments of galaxies play a critical role in their formation and evolution. We study these environments using cosmological simulations with star formation and supernova feedback included. From these simulations, we parse the large scale structure into clusters, filaments and voids using a segmentation algorithm adapted from medical imaging. We trace the star formation history, gas phase and metal evolution of the baryons in the intergalactic medium as function of structure. We find that our algorithm reproduces the baryon fraction in the intracluster medium and that the majority of star formation occurs in cold, dense filaments. We present the consequences this large scale environment has for galactic halos and galaxy evolution.

  5. Large-scale structure from wiggly cosmic strings

    NASA Astrophysics Data System (ADS)

    Vachaspati, Tanmay; Vilenkin, Alexander

    1991-08-01

    Recent simulations of the evolution of cosmic strings indicate the presence of small-scale structure on the strings. It is shown that wakes produced by such 'wiggly' cosmic strings can result in the efficient formation of large-scale structure and large streaming velocities in the universe without significantly affecting the microwave-background isotropy. It is also argued that the motion of strings will lead to the generation of a primordial magnetic field. The most promising version of this scenario appears to be the one in which the universe is dominated by light neutrinos.

  6. Large-scale anisotropy in stably stratified rotating flows

    DOE PAGES

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; ...

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  7. Large-scale anisotropy in stably stratified rotating flows

    SciTech Connect

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; Pouquet, A.

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to $1024^3$ grid points and Reynolds numbers of $\approx 1000$. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with $\sim k_\perp^{-5/3}$, including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  8. New probes of Cosmic Microwave Background large-scale anomalies

    NASA Astrophysics Data System (ADS)

    Aiola, Simone

    Fifty years of Cosmic Microwave Background (CMB) data played a crucial role in constraining the parameters of the ΛCDM model, where Dark Energy, Dark Matter, and Inflation are the three most important pillars not yet understood. Inflation prescribes an isotropic universe on large scales, and it generates spatially-correlated density fluctuations over the whole Hubble volume. CMB temperature fluctuations on scales bigger than a degree in the sky, affected by modes on super-horizon scale at the time of recombination, are a clean snapshot of the universe after inflation. In addition, the accelerated expansion of the universe, driven by Dark Energy, leaves a hardly detectable imprint in the large-scale temperature sky at late times. Such fundamental predictions have been tested with current CMB data and found to be in tension with what we expect from our simple ΛCDM model. Is this tension just a random fluke or a fundamental issue with the present model? In this thesis, we present a new framework to probe the lack of large-scale correlations in the temperature sky using CMB polarization data. Our analysis shows that if a suppression in the CMB polarization correlations is detected, it will provide compelling evidence for new physics on super-horizon scale. To further analyze the statistical properties of the CMB temperature sky, we constrain the degree of statistical anisotropy of the CMB in the context of the observed large-scale dipole power asymmetry. We find evidence for a scale-dependent dipolar modulation at 2.5σ. To isolate late-time signals from the primordial ones, we test the anomalously high Integrated Sachs-Wolfe effect signal generated by superstructures in the universe. We find that the detected signal is in tension with the expectations from ΛCDM at the 2.5σ level, which is somewhat smaller than what has been previously argued. To conclude, we describe the current status of CMB observations on small scales, highlighting the

  9. Local and Regional Impacts of Large Scale Wind Energy Deployment

    NASA Astrophysics Data System (ADS)

    Michalakes, J.; Hammond, S.; Lundquist, J. K.; Moriarty, P.; Robinson, M.

    2010-12-01

    The U.S. is currently on a path to produce 20% of its electricity from wind energy by 2030, almost a 10-fold increase over present levels of electricity generated from wind. Such high-penetration wind energy deployment will entail extracting elevated energy levels from the planetary boundary layer and preliminary studies indicate that this will have significant but uncertain impacts on the local and regional environment. State and federal regulators have raised serious concerns regarding potential agricultural impacts from large farms deployed throughout the Midwest where agriculture is the basis of the local economy. The effects of large wind farms have been proposed to be both beneficial (drying crops to reduce occurrences of fungal diseases, avoiding late spring freezes, enhancing pollen viability, reducing dew duration) and detrimental (accelerating moisture loss during drought) with no conclusive investigations thus far. As both wind and solar technologies are deployed at scales required to replace conventional technologies, there must be reasonable certainty that the potential environmental impacts at the micro, macro, regional and global scale do not exceed those anticipated from carbon emissions. Largely because of computational limits, the role of large wind farms in affecting regional-scale weather patterns has only been investigated in coarse simulations, and modeling tools do not yet exist that are capable of assessing the downwind effects large wind farms may have on microclimatology. In this presentation, we will outline the vision for and discuss technical and scientific challenges in developing a multi-model high-performance simulation capability covering the range of mesoscale to sub-millimeter scales appropriate for assessing local, regional, and ultimately global environmental impacts and quantifying uncertainties of large scale wind energy deployment scenarios. Such a system will allow continuous downscaling of atmospheric processes on wind

  10. Optimization and large scale computation of an entropy-based moment closure

    SciTech Connect

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  11. Optimization and large scale computation of an entropy-based moment closure

    DOE PAGES

    Hauck, Cory D.; Hill, Judith C.; Garrett, C. Kristopher

    2015-09-10

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. Lastly, these results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.

  12. Optimization and large scale computation of an entropy-based moment closure

    NASA Astrophysics Data System (ADS)

    Kristopher Garrett, C.; Hauck, Cory; Hill, Judith

    2015-12-01

    We present computational advances and results in the implementation of an entropy-based moment closure, MN, in the context of linear kinetic equations, with an emphasis on heterogeneous and large-scale computing platforms. Entropy-based closures are known in several cases to yield more accurate results than closures based on standard spectral approximations, such as PN, but the computational cost is generally much higher and often prohibitive. Several optimizations are introduced to improve the performance of entropy-based algorithms over previous implementations. These optimizations include the use of GPU acceleration and the exploitation of the mathematical properties of spherical harmonics, which are used as test functions in the moment formulation. To test the emerging high-performance computing paradigm of communication bound simulations, we present timing results at the largest computational scales currently available. These results show, in particular, load balancing issues in scaling the MN algorithm that do not appear for the PN algorithm. We also observe that in weak scaling tests, the ratio in time to solution of MN to PN decreases.
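
The core of the MN closure is a strictly convex dual optimization solved for every moment vector: minimize over α the quantity ∫ exp(α·m) dΩ − α·u, where m collects the test functions (spherical harmonics in the paper). A minimal one-dimensional slab-geometry sketch, with test functions 1 and μ standing in for the spherical harmonics (all numerical choices here are illustrative, not the paper's implementation):

```python
import numpy as np

# Gauss-Legendre quadrature nodes/weights on mu in [-1, 1]
mu, w = np.polynomial.legendre.leggauss(40)
m = np.vstack([np.ones_like(mu), mu])   # test functions 1 and mu

def closure(u, iters=50):
    """Solve the dual problem min_alpha <exp(alpha.m)> - alpha.u by Newton's
    method; the minimizer gives the entropy ansatz f = exp(alpha.m)."""
    alpha = np.zeros(2)
    for _ in range(iters):
        f = np.exp(alpha @ m)
        grad = m @ (w * f) - u          # recovered moments minus data
        hess = (m * (w * f)) @ m.T      # positive definite Hessian
        alpha -= np.linalg.solve(hess, grad)
        if np.max(np.abs(grad)) < 1e-12:
            break
    f = np.exp(alpha @ m)
    u_rec = m @ (w * f)                 # should reproduce the input moments
    p = np.sum(w * mu**2 * f)           # closing moment <mu^2 f>
    return u_rec, p

u = np.array([1.0, 0.0])                # isotropic moment vector
u_rec, p = closure(u)                   # p approaches 1/3, the diffusion limit
```

For an isotropic moment vector the recovered closing moment is the diffusive value 1/3, a quick consistency check; the cost of repeating this solve in every cell is what the GPU and load-balancing optimizations above target.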

  13. Strong New Evidence for Oscillation of the Cosmological Scale Factor Observed in the Large Scale Structure

    NASA Astrophysics Data System (ADS)

    Ringermacher, Harry I.; Mead, Lawrence R.

    2017-01-01

    We have analyzed SDSSIII-BOSS DR9 galaxy number count data using two independent approaches, a relativistic expanding-space model based on work by Ostriker and a direct Fourier analysis, and found incontrovertible evidence for a scale factor oscillation at 7 Hubble-Hertz (HHz) in both methods, where we define 1 HHz as 1 cycle over 1 Hubble-time. The number count of galaxies on these scales should be relatively smooth. However, a DR9 plot of galaxy number count per 0.01 redshift bin as a function of redshift shows significant bumps to redshift 0.5. We take the SDSSIII data (about ¼ of the sky) to be a fair representation of the entire sky when using number count. Our model fits essentially all bumps at a 99.8% R-squared goodness level if and only if the 7 HHz oscillation (plus 2nd and 3rd harmonics at 14 HHz and 21 HHz) is included. These are the same frequencies observed by us in AJ 149, 137 (2015) using SNe data. Since the SDSSIII data set only goes to redshift 0.8, only one cycle of oscillation is included compared to 2-3 in our earlier work. Thus a Fourier analysis performed on the SDSS redshift data converted to equal-time binning leaves a broadened spectrum over the range where harmonics would normally reside but nevertheless peaked at 7 HHz. A scalar field model presented in the AJ paper describes the oscillation and enters the Friedmann equations by replacing the LCDM dark matter density parameter with the scalar field density. Thus, LCDM dark matter is the median of the wave, which appears to act like a fluid with a changing equation of state. The oscillation may be a longitudinal gravitational wave originating with the Big Bang and requiring a massive graviton. 7 HHz is consistent with a graviton mass of 10⁻³² eV.
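
The Fourier step described above can be illustrated on synthetic, equal-time-binned counts (all numbers are made up for illustration and are not SDSS data): a 7-cycle signal plus a weaker second harmonic is recovered as the dominant spectral peak.

```python
import numpy as np

# one "Hubble-time" of equal-time bins, in arbitrary units
t = np.linspace(0, 1, 256, endpoint=False)
# synthetic binned counts: smooth baseline + 7-cycle oscillation + 2nd harmonic
counts = 100 + 5 * np.cos(2 * np.pi * 7 * t) + 2 * np.cos(2 * np.pi * 14 * t)

spec = np.abs(np.fft.rfft(counts - counts.mean()))   # remove DC, take spectrum
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])       # cycles per unit interval
peak = freqs[np.argmax(spec)]                        # dominant frequency
```

With this binning the dominant peak lands exactly at 7 cycles per interval, mirroring the 7 HHz detection; with real, redshift-binned data the conversion to equal-time bins broadens the peak, as the abstract notes.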

  14. Large-scale smart passive system for civil engineering applications

    NASA Astrophysics Data System (ADS)

    Jung, Hyung-Jo; Jang, Dong-Doo; Lee, Heon-Jae; Cho, Sang-Won

    2008-03-01

    The smart passive system consisting of a magnetorheological (MR) damper and an electromagnetic induction (EMI) part has recently been proposed. An EMI part can generate the input current for an MR damper from the vibration of a structure according to Faraday's law of electromagnetic induction. The control performance of the smart passive system has been demonstrated mainly by numerical simulations. It was verified from the numerical results that the system can effectively reduce the structural responses of civil engineering structures such as buildings and bridges. On the other hand, the system has not yet been sufficiently validated experimentally. In this paper, the feasibility of the smart passive system for real-scale structures is investigated. To do this, a large-scale smart passive system is designed, manufactured, and tested. The system consists of a large-capacity MR damper, which has a maximum force level of approximately ±10,000 N, a maximum stroke level of ±35 mm, and a maximum current level of 3 A, and a large-scale EMI part, which is designed to generate sufficient induced current for the damper. The applicability of the smart passive system to large real-scale structures is examined through a series of shaking table tests. The magnitudes of the induced current of the EMI part under various sinusoidal excitation inputs are measured. According to the test results, the large-scale EMI part shows that it could generate sufficient current or power to change the damping characteristics of the large-capacity MR damper.
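
The EMI principle is the motional form of Faraday's law: a coil of N turns moving at velocity v through a field B over an effective conductor length l induces emf = N·B·l·v. A sketch for a sinusoidal excitation, with purely illustrative parameters (none are taken from the tested device):

```python
import numpy as np

# illustrative coil/magnet parameters: turns, field [T], effective length [m]
N, B, l = 200, 0.4, 0.05
X, f = 0.035, 2.0                   # stroke amplitude [m], excitation freq [Hz]

omega = 2 * np.pi * f
t = np.linspace(0, 1 / f, 1000)     # one excitation period
v = X * omega * np.cos(omega * t)   # relative velocity of coil and magnet
emf = N * B * l * v                 # induced emf, Faraday's law (motional form)
peak = N * B * l * X * omega        # analytic peak emf
```

The peak emf scales linearly with both stroke amplitude and excitation frequency, which is why stronger shaking yields more input current for the MR damper.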

  15. The effective field theory of cosmological large scale structures

    SciTech Connect

    Carrasco, John Joseph M.; Hertzberg, Mark P.; Senatore, Leonardo

    2012-09-20

    Large scale structure surveys will likely become the next leading cosmological probe. In our universe, matter perturbations are large on short distances and small at long scales, i.e. strongly coupled in the UV and weakly coupled in the IR. To make precise analytical predictions on large scales, we develop an effective field theory formulated in terms of an IR effective fluid characterized by several parameters, such as speed of sound and viscosity. These parameters, determined by the UV physics described by the Boltzmann equation, are measured from N-body simulations. We find that the speed of sound of the effective fluid is c_s² ≈ 10⁻⁶c² and that the viscosity contributions are of the same order. The fluid describes all the relevant physics at long scales k and permits a manifestly convergent perturbative expansion in the size of the matter perturbations δ(k) for all the observables. As an example, we calculate the correction to the power spectrum at order δ(k)⁴. As a result, the predictions of the effective field theory are found to be in much better agreement with observation than standard cosmological perturbation theory, already reaching percent precision at this order up to a relatively short scale k ≃ 0.24h Mpc⁻¹.
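
At leading order the fluid parameters enter the power spectrum as a counterterm proportional to the speed of sound, schematically P_EFT(k) ≈ P_lin(k) − 2 c_s² k² P_lin(k). A toy sketch of that correction (the spectrum shape and the c_s² value are illustrative, not the paper's measured values):

```python
import numpy as np

def p_lin(k, A=2.0e4, k0=0.05, n=0.96):
    # toy linear matter power spectrum: power-law rise, smooth turnover
    # (illustrative shape only, not a Boltzmann-code output)
    return A * (k / k0)**n / (1.0 + (k / k0)**2)**2

def p_eft(k, cs2=1.0):
    # leading-order speed-of-sound counterterm, in units where the
    # correction is ~2 * cs2 * k^2 relative to the linear spectrum
    return p_lin(k) - 2.0 * cs2 * k**2 * p_lin(k)
```

The correction is negligible at small k and grows as k², so the EFT departs from linear theory exactly where standard perturbation theory starts to fail.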

  16. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
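
For ecological diffusion, u_t = Δ(μ(x)u), the homogenized large-scale motility takes the form of a harmonic mean of the small-scale motility over habitat types, in contrast with the averages arising for Fickian diffusion. A minimal sketch of that average, assuming this standard form of the result:

```python
import numpy as np

def homogenized_motility(mu, weights=None):
    """Harmonic mean of habitat-type motilities, weighted by the landscape
    fraction occupied by each habitat type (equal fractions by default)."""
    mu = np.asarray(mu, dtype=float)
    if weights is None:
        weights = np.full(mu.shape, 1.0 / mu.size)
    return 1.0 / np.sum(np.asarray(weights) / mu)

# two habitat types covering equal area: slow (1) and fast (4) movement
mu_bar = homogenized_motility([1.0, 4.0])   # 1 / (0.5/1 + 0.5/4) = 1.6
```

The harmonic mean is dominated by the slow habitat, capturing the intuition that animals accumulate where motility is low, which is what the large-scale asymptotic equations inherit from the small-scale variation.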

  17. Large scale structure in universes dominated by cold dark matter

    NASA Technical Reports Server (NTRS)

    Bond, J. Richard

    1986-01-01

    The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.

  18. Large eddy simulation of the atmosphere on various scales.

    PubMed

    Cullen, M J P; Brown, A R

    2009-07-28

    Numerical simulations of the atmosphere are routinely carried out on various scales for purposes ranging from weather forecasts for local areas a few hours ahead to forecasts of climate change over periods of hundreds of years. Almost without exception, these forecasts are made with space/time-averaged versions of the governing Navier-Stokes equations and laws of thermodynamics, together with additional terms representing internal and boundary forcing. The calculations are a form of large eddy modelling, because the subgrid-scale processes have to be modelled. In the global atmospheric models used for long-term predictions, the primary method is implicit large eddy modelling, using discretization to perform the averaging, supplemented by specialized subgrid models, where there is organized small-scale activity, such as in the lower boundary layer and near active convection. Smaller scale models used for local or short-range forecasts can use a much smaller averaging scale. This allows some of the specialized subgrid models to be dropped in favour of direct simulations. In research mode, the same models can be run as a conventional large eddy simulation only a few orders of magnitude away from a direct simulation. These simulations can then be used in the development of the subgrid models for coarser resolution models.
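
The decomposition underlying all large eddy modelling, u = ū + u′ with ū a grid-scale average and u′ the subgrid fluctuation to be modelled, can be sketched with a simple top-hat (box) filter; the field and filter width below are illustrative.

```python
import numpy as np

def box_filter(u, width):
    # top-hat (box) filter: the kind of grid-scale average that implicit
    # large eddy modelling obtains from its discretization
    kernel = np.ones(width) / width
    return np.convolve(u, kernel, mode="same")

x = np.linspace(0, 2 * np.pi, 512)
u = np.sin(x) + 0.3 * np.sin(40 * x)   # resolved motion + small-scale motion
u_bar = box_filter(u, 32)              # filtered (resolved) field
u_prime = u - u_bar                    # subgrid fluctuation to be modelled
```

The filter retains the large-scale wave while stripping the fast oscillation; everything in u_prime is what a subgrid model, or the implicit dissipation of the discretization, must account for.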

  19. The Large-Scale Structure of Scientific Method

    ERIC Educational Resources Information Center

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  20. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  1. Firebrands and spotting ignition in large-scale fires

    Treesearch

    Eunmo Koo; Patrick J. Pagni; David R. Weise; John P. Woycheese

    2010-01-01

    Spotting ignition by lofted firebrands is a significant mechanism of fire spread, as observed in many large-scale fires. The role of firebrands in fire propagation and the important parameters involved in spot fire development are studied. Historical large-scale fires, including wind-driven urban and wildland conflagrations and post-earthquake fires are given as...

  2. Measurement, Sampling, and Equating Errors in Large-Scale Assessments

    ERIC Educational Resources Information Center

    Wu, Margaret

    2010-01-01

    In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…

  3. US National Large-scale City Orthoimage Standard Initiative

    USGS Publications Warehouse

    Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.

    2003-01-01

    The early procedures and algorithms for national digital orthophoto generation in the National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (derived in the early 1920's), quarter-quadrangle-centered coverage (3.75 minutes of longitude and latitude in geographic extent), 1:40,000 aerial photographs, and 2.5D digital elevation models. However, large-scale city orthophotos produced with these early procedures have disclosed many shortcomings, e.g., ghost images, occlusion, and shadow. Thus, providing the technical base (algorithms, procedures) and experience needed for large-scale city digital orthophoto creation is essential for the near-future national large-scale digital orthophoto deployment and the revision of the Standards for National Large-scale City Digital Orthophoto in the NDOP. This paper reports our initial research results as follows: (1) high-precision 3D city DSM generation through LIDAR data processing; (2) spatial object/feature extraction through surface material information and high-accuracy 3D DSM data; (3) 3D city model development; (4) algorithm development for generation of DTM-based and DBM-based orthophotos; (5) true orthophoto generation by merging DBM-based and DTM-based orthophotos; and (6) automatic mosaicking by optimizing and combining imagery from many perspectives.

  4. DESIGN OF LARGE-SCALE AIR MONITORING NETWORKS

    EPA Science Inventory

    The potential effects of air pollution on human health have received much attention in recent years. In the U.S. and other countries, there are extensive large-scale monitoring networks designed to collect data to inform the public of exposure risks to air pollution. A major crit...

  5. Large-Scale Environmental Influences on Aquatic Animal Health

    EPA Science Inventory

    In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...

  7. Developing and Understanding Methods for Large-Scale Nonlinear Optimization

    DTIC Science & Technology

    2006-07-24

    algorithms for large-scale unconstrained and constrained optimization problems, including limited-memory methods for problems with many thousands... Published in peer-reviewed journals: E. Eskow, B. Bader, R. Byrd, S. Crivelli, T. Head-Gordon, V. Lamberti and R. Schnabel, "An optimization approach to the

  8. Probabilistic Cuing in Large-Scale Environmental Search

    ERIC Educational Resources Information Center

    Smith, Alastair D.; Hood, Bruce M.; Gilchrist, Iain D.

    2010-01-01

    Finding an object in our environment is an important human ability that also represents a critical component of human foraging behavior. One type of information that aids efficient large-scale search is the likelihood of the object being in one location over another. In this study we investigated the conditions under which individuals respond to…

  9. The Cosmology Large Angular Scale Surveyor (CLASS) Telescope Architecture

    NASA Technical Reports Server (NTRS)

    Chuss, David T.; Ali, Aamir; Amiri, Mandana; Appel, John W.; Araujo, Derek; Bennett, Charles L.; Boone, Fletcher; Chan, Manwei; Cho, Hsiao-Mei; Colazo, Felipe; et al.

    2014-01-01

    We describe the instrument architecture of the Johns Hopkins University-led CLASS instrument, a groundbased cosmic microwave background (CMB) polarimeter that will measure the large-scale polarization of the CMB in several frequency bands to search for evidence of inflation.

  10. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

    Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  11. Improving the Utility of Large-Scale Assessments in Canada

    ERIC Educational Resources Information Center

    Rogers, W. Todd

    2014-01-01

    Principals and teachers do not use large-scale assessment results because the lack of distinct and reliable subtests prevents identifying strengths and weaknesses of students and instruction, the results arrive too late to be used, and principals and teachers need assistance to use the results to improve instruction so as to improve student…

  12. Sparse approximation through boosting for learning large scale kernel machines.

    PubMed

    Sun, Ping; Yao, Xin

    2010-06-01

    Recently, sparse approximation has become a preferred method for learning large scale kernel machines. This technique attempts to represent the solution with only a subset of original data points also known as basis vectors, which are usually chosen one by one with a forward selection procedure based on some selection criteria. The computational complexity of several resultant algorithms scales as O(NM²) in time and O(NM) in memory, where N is the number of training points and M is the number of basis vectors as well as the steps of forward selection. For some large scale data sets, to obtain a better solution, we are sometimes required to include more basis vectors, which means that M is not trivial in this situation. However, the limited computational resource (e.g., memory) prevents us from including too many vectors. To handle this dilemma, we propose to add an ensemble of basis vectors instead of only one at each forward step. The proposed method, closely related to gradient boosting, could decrease the required number M of forward steps significantly and thus a large fraction of computational cost is saved. Numerical experiments on three large scale regression tasks and a classification problem demonstrate the effectiveness of the proposed approach.
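
The batch-per-step idea can be sketched with a greedy forward selection for kernel ridge regression: each boosting-style step adds an ensemble of basis vectors (here, simply the points with the largest current residuals) instead of a single one. This is a simplified sketch, not the authors' selection criterion:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between two point sets
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sparse_kernel_fit(X, y, batch=5, steps=4, lam=1e-6):
    """Greedy forward selection adding `batch` basis vectors per step,
    refitting ridge-regularized least squares on the selected columns."""
    basis, resid = [], y.copy()
    for _ in range(steps):
        order = np.argsort(-np.abs(resid))           # largest residuals first
        new = [i for i in order if i not in basis][:batch]
        basis.extend(new)
        K = rbf(X, X[basis])                         # N x M kernel columns
        coef, *_ = np.linalg.lstsq(
            K.T @ K + lam * np.eye(len(basis)), K.T @ y, rcond=None)
        resid = y - K @ coef
    return np.array(basis), coef

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (80, 1))
y = np.sin(3 * X[:, 0])
basis, coef = sparse_kernel_fit(X, y)                # 4 steps x 5 vectors
err = np.mean((y - rbf(X, X[basis]) @ coef) ** 2)    # training MSE
```

Adding five vectors per step reaches M = 20 in four refits rather than twenty, which is the cost saving the abstract describes; the paper's criterion for choosing the ensemble is more principled than raw residual magnitude.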

  13. Research directions in large scale systems and decentralized control

    NASA Technical Reports Server (NTRS)

    Tenney, R. R.

    1980-01-01

    Control theory provides a well established framework for dealing with automatic decision problems and a set of techniques for automatic decision making which exploit special structure, but it does not deal well with complexity. The potential exists for combining control theoretic and knowledge based concepts into a unified approach. The elements of control theory are diagrammed, including modern control and large scale systems.

  14. Ecosystem resilience despite large-scale altered hydro climatic conditions

    USDA-ARS?s Scientific Manuscript database

    Climate change is predicted to increase both drought frequency and duration, and when coupled with substantial warming, will establish a new hydroclimatological paradigm for many regions. Large-scale, warm droughts have recently impacted North America, Africa, Europe, Amazonia, and Australia result...

  15. The Large-Scale Structure of Scientific Method

    ERIC Educational Resources Information Center

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  16. Large-Scale Assessments and Educational Policies in Italy

    ERIC Educational Resources Information Center

    Damiani, Valeria

    2016-01-01

    Despite Italy's extensive participation in most large-scale assessments, their actual influence on Italian educational policies is less easy to identify. The present contribution aims at highlighting and explaining reasons for the weak and often inconsistent relationship between international surveys and policy-making processes in Italy.…

  17. Large-Scale Innovation and Change in UK Higher Education

    ERIC Educational Resources Information Center

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  19. Large scale fire whirls: Can their formation be predicted?

    Treesearch

    J. Forthofer; Bret Butler

    2010-01-01

    Large scale fire whirls have not traditionally been recognized as a frequent phenomenon on wildland fires. However, there are anecdotal data suggesting that they can and do occur with some regularity. This paper presents a brief summary of this information and an analysis of the causal factors leading to their formation.

  20. Large-Scale Environmental Influences on Aquatic Animal Health

    EPA Science Inventory

    In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...

  1. International Large-Scale Assessments: What Uses, What Consequences?

    ERIC Educational Resources Information Center

    Johansson, Stefan

    2016-01-01

    Background: International large-scale assessments (ILSAs) are a much-debated phenomenon in education. Increasingly, their outcomes attract considerable media attention and influence educational policies in many jurisdictions worldwide. The relevance, uses and consequences of these assessments are often the focus of research scrutiny. Whilst some…

  2. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  3. Large-Scale Innovation and Change in UK Higher Education

    ERIC Educational Resources Information Center

    Brown, Stephen

    2013-01-01

    This paper reflects on challenges universities face as they respond to change. It reviews current theories and models of change management, discusses why universities are particularly difficult environments in which to achieve large scale, lasting change and reports on a recent attempt by the UK JISC to enable a range of UK universities to employ…

  4. Mixing Metaphors: Building Infrastructure for Large Scale School Turnaround

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Neumerski, Christine M.

    2015-01-01

    The purpose of this analysis is to increase understanding of the possibilities and challenges of building educational infrastructure--the basic, foundational structures, systems, and resources--to support large-scale school turnaround. Building educational infrastructure often exceeds the capacity of schools, districts, and state education…

  5. Individual Skill Differences and Large-Scale Environmental Learning

    ERIC Educational Resources Information Center

    Fields, Alexa W.; Shelton, Amy L.

    2006-01-01

    Spatial skills are known to vary widely among normal individuals. This project was designed to address whether these individual differences are differentially related to large-scale environmental learning from route (ground-level) and survey (aerial) perspectives. Participants learned two virtual environments (route and survey) with limited…

  6. Newton Methods for Large Scale Problems in Machine Learning

    ERIC Educational Resources Information Center

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  7. Large-Scale Machine Learning for Classification and Search

    ERIC Educational Resources Information Center

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  8. Global smoothing and continuation for large-scale molecular optimization

    SciTech Connect

    More, J.J.; Wu, Zhijun

    1995-10-01

    We discuss the formulation of optimization problems that arise in the study of distance geometry, ionic systems, and molecular clusters. We show that continuation techniques based on global smoothing are applicable to these molecular optimization problems, and we outline the issues that must be resolved in the solution of large-scale molecular optimization problems.

  9. Large-scale Eucalyptus energy farms and power cogeneration

    Treesearch

    Robert C. Noroña

    1983-01-01

    A thorough evaluation of all factors possibly affecting a large-scale planting of eucalyptus is foremost in determining the cost effectiveness of the planned operation. Seven basic areas of concern must be analyzed: 1. Species Selection, 2. Site Preparation, 3. Planting, 4. Weed Control, 5....

  10. Probabilistic Cuing in Large-Scale Environmental Search

    ERIC Educational Resources Information Center

    Smith, Alastair D.; Hood, Bruce M.; Gilchrist, Iain D.

    2010-01-01

    Finding an object in our environment is an important human ability that also represents a critical component of human foraging behavior. One type of information that aids efficient large-scale search is the likelihood of the object being in one location over another. In this study we investigated the conditions under which individuals respond to…

  11. The large scale microwave background anisotropy in decaying particle cosmology

    SciTech Connect

    Panek, M.

    1987-06-01

    We investigate the large-scale anisotropy of the microwave background radiation in cosmological models with decaying particles. The observed value of the quadrupole moment combined with other constraints gives an upper limit on the redshift of the decay z_d < 3-5. 12 refs., 2 figs.

  12. Large-scale search for dark-matter axions

    SciTech Connect

    Kinion, D; van Bibber, K

    2000-08-30

    We review the status of two ongoing large-scale searches for axions which may constitute the dark matter of our Milky Way halo. The experiments are based on the microwave cavity technique proposed by Sikivie, and mark a ''second generation'' relative to the original experiments performed by the Rochester-Brookhaven-Fermilab collaboration and the University of Florida group.

  13. Resilience of Florida Keys coral communities following large scale disturbances

    EPA Science Inventory

    The decline of coral reefs in the Caribbean over the last 40 years has been attributed to multiple chronic stressors and episodic large-scale disturbances. This study assessed the resilience of coral communities in two different regions of the Florida Keys reef system between 199...

  14. The Role of Plausible Values in Large-Scale Surveys

    ERIC Educational Resources Information Center

    Wu, Margaret

    2005-01-01

    In large-scale assessment programs such as NAEP, TIMSS and PISA, students' achievement data sets provided for secondary analysts contain so-called "plausible values." Plausible values are multiple imputations of the unobservable latent achievement for each student. In this article it has been shown how plausible values are used to: (1)…
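
    Plausible values are combined like any other multiple imputations: compute the statistic once per plausible value, then pool the results with Rubin's rules, so that the between-imputation spread inflates the final standard error. A minimal generic sketch (not tied to the NAEP/TIMSS/PISA tooling):

```python
import numpy as np

def combine_plausible_values(estimates, variances):
    """Pool a statistic computed once per plausible value (PV) using
    Rubin's rules for multiple imputation.

    estimates : the statistic (e.g., a mean score) per PV
    variances : its sampling variance per PV
    Returns (pooled point estimate, pooled standard error)."""
    estimates = np.asarray(estimates, float)
    variances = np.asarray(variances, float)
    M = len(estimates)
    point = estimates.mean()                # final point estimate
    within = variances.mean()               # average sampling variance
    between = estimates.var(ddof=1)         # variance across the PVs
    total = within + (1 + 1 / M) * between  # total error variance
    return point, np.sqrt(total)
```

    For example, five PV-based mean estimates of 500, 505, 495, 502 and 498, each with sampling variance 4, pool to a mean of 500 with a standard error of sqrt(4 + 1.2 * 14.5), noticeably larger than any single-PV standard error.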

  15. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

    Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities' respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  16. Computational Complexity, Efficiency and Accountability in Large Scale Teleprocessing Systems.

    DTIC Science & Technology

    1980-12-01

    COMPLEXITY, EFFICIENCY AND ACCOUNTABILITY IN LARGE SCALE TELEPROCESSING SYSTEMS, DAAG29-78-C-0036, STANFORD UNIVERSITY, JOHN T. GILL, MARTIN E. HELLMAN...solve but easy to check. We have also suggested how such random tapes can be simulated by deterministically generating "pseudorandom" numbers by a

  17. Large-Scale Assessment and English Language Learners with Disabilities

    ERIC Educational Resources Information Center

    Liu, Kristin K.; Ward, Jenna M.; Thurlow, Martha L.; Christensen, Laurene L.

    2017-01-01

    This article highlights a set of principles and guidelines, developed by a diverse group of specialists in the field, for appropriately including English language learners (ELLs) with disabilities in large-scale assessments. ELLs with disabilities make up roughly 9% of the rapidly increasing ELL population nationwide. In spite of the small overall…

  18. Large-scale silviculture experiments of western Oregon and Washington.

    Treesearch

    Nathan J. Poage; Paul D. Anderson

    2007-01-01

    We review 12 large-scale silviculture experiments (LSSEs) in western Washington and Oregon with which the Pacific Northwest Research Station of the USDA Forest Service is substantially involved. We compiled and arrayed information about the LSSEs as a series of matrices in a relational database, which is included on the compact disc published with this report and...

  1. Large-scale screening by the automated Wassermann reaction

    PubMed Central

    Wagstaff, W.; Firth, R.; Booth, J. R.; Bowley, C. C.

    1969-01-01

    In view of the drawbacks in the use of the Kahn test for large-scale screening of blood donors, mainly those of human error through work overload and fatiguability, an attempt was made to adapt an existing automated complement-fixation technique for this purpose. This paper reports the successful results of that adaptation. PMID:5776559

  2. International Large-Scale Assessments: What Uses, What Consequences?

    ERIC Educational Resources Information Center

    Johansson, Stefan

    2016-01-01

    Background: International large-scale assessments (ILSAs) are a much-debated phenomenon in education. Increasingly, their outcomes attract considerable media attention and influence educational policies in many jurisdictions worldwide. The relevance, uses and consequences of these assessments are often the focus of research scrutiny. Whilst some…

  3. Extracting Useful Semantic Information from Large Scale Corpora of Text

    ERIC Educational Resources Information Center

    Mendoza, Ray Padilla, Jr.

    2012-01-01

    Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…

  4. Large scale structure of the sun's radio corona

    NASA Technical Reports Server (NTRS)

    Kundu, M. R.

    1986-01-01

    Results of studies of large scale structures of the corona at long radio wavelengths are presented, using data obtained with the multifrequency radioheliograph of the Clark Lake Radio Observatory. It is shown that features corresponding to coronal streamers and coronal holes are readily apparent in the Clark Lake maps.

  5. Resilience of Florida Keys coral communities following large scale disturbances

    EPA Science Inventory

    The decline of coral reefs in the Caribbean over the last 40 years has been attributed to multiple chronic stressors and episodic large-scale disturbances. This study assessed the resilience of coral communities in two different regions of the Florida Keys reef system between 199...

  6. Energy transfers in large-scale and small-scale dynamos

    NASA Astrophysics Data System (ADS)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) using numerical simulations of MHD turbulence for Pm = 20 (SSD) and Pm = 0.2 (LSD) on a 1024³ grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic field at large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to SSD.

  7. Honeycomb: Visual Analysis of Large Scale Social Networks

    NASA Astrophysics Data System (ADS)

    van Ham, Frank; Schulz, Hans-Jörg; Dimicco, Joan M.

    The rise in the use of social network sites allows us to collect large amounts of user-reported data on social structures, and analysis of this data could provide useful insights for many of the social sciences. This analysis is typically the domain of Social Network Analysis, and visualization of these structures often proves invaluable in understanding them. However, currently available visual analysis tools are not very well suited to handle the massive scale of this network data, and often resort to displaying small ego networks or heavily abstracted networks. In this paper, we present Honeycomb, a visualization tool that is able to deal with much larger scale data (with millions of connections), which we illustrate by using a large scale corporate social networking site as an example. Additionally, we introduce a new probability based network metric to guide users to potentially interesting or anomalous patterns and discuss lessons learned during design and implementation.

  8. Performance Health Monitoring of Large-Scale Systems

    SciTech Connect

    Rajamony, Ram

    2014-11-20

    This report details the progress made on the ASCR-funded project Performance Health Monitoring for Large Scale Systems. A large-scale application may not achieve its full performance potential due to degraded performance of even a single subsystem. Detecting performance faults, isolating them, and taking remedial action is critical for the scale of systems on the horizon. PHM aims to develop techniques and tools that can be used to identify and mitigate such performance problems. We accomplish this through two main aspects. The PHM framework encompasses diagnostics, system monitoring, fault isolation, and performance evaluation capabilities that indicate when a performance fault has been detected, either due to an anomaly present in the system itself or due to contention for shared resources between concurrently executing jobs. Software components called the PHM Control system then build upon the capabilities provided by the PHM framework to mitigate degradation caused by performance problems.

  9. Long gradient mode and large-scale structure observables

    NASA Astrophysics Data System (ADS)

    Allahyari, Alireza; Firouzjaee, Javad T.

    2017-03-01

    We extend the study of long-mode perturbations to other large-scale observables such as cosmic rulers, galaxy-number counts, and halo bias. The long mode is a pure gradient mode that is still outside an observer's horizon. We insist that gradient-mode effects on observables vanish. It is also crucial that the expressions for observables are relativistic. This allows us to show that the effects of a gradient mode on the large-scale observables vanish identically in a relativistic framework. To study the potential modulation effect of the gradient mode on halo bias, we derive a consistency condition to the first order in gradient expansion. We find that the matter variance at a fixed physical scale is not modulated by the long gradient mode perturbations when the consistency condition holds. This shows that the contribution of long gradient modes to bias vanishes in this framework.

  10. LARGE-SCALE MOTIONS IN THE PERSEUS GALAXY CLUSTER

    SciTech Connect

    Simionescu, A.; Werner, N.; Urban, O.; Allen, S. W.; Fabian, A. C.; Sanders, J. S.; Mantz, A.; Nulsen, P. E. J.; Takei, Y.

    2012-10-01

    By combining large-scale mosaics of ROSAT PSPC, XMM-Newton, and Suzaku X-ray observations, we present evidence for large-scale motions in the intracluster medium of the nearby, X-ray bright Perseus Cluster. These motions are suggested by several alternating and interleaved X-ray bright, low-temperature, low-entropy arcs located along the east-west axis, at radii ranging from ≈10 kpc to over a Mpc. Thermodynamic features qualitatively similar to these have previously been observed in the centers of cool-core clusters, and were successfully modeled as a consequence of the gas sloshing/swirling motions induced by minor mergers. Our observations indicate that such sloshing/swirling can extend out to larger radii than previously thought, on scales approaching the virial radius.

  11. Large-scale structure in f(T) gravity

    SciTech Connect

    Li Baojiu; Sotiriou, Thomas P.; Barrow, John D.

    2011-05-15

    In this work we study the cosmology of the general f(T) gravity theory. We express the modified Einstein equations using covariant quantities, and derive the gauge-invariant perturbation equations in covariant form. We consider a specific choice of f(T), designed to explain the observed late-time accelerating cosmic expansion without including an exotic dark energy component. Our numerical solution shows that the extra degree of freedom of such f(T) gravity models generally decays as one goes to smaller scales, and consequently its effects on scales such as galaxies and galaxy clusters are small. But on large scales, this degree of freedom can produce large deviations from the standard ΛCDM scenario, leading to severe constraints on the f(T) gravity models as an explanation to the cosmic acceleration.

  12. Ecohydrological modeling for large-scale environmental impact assessment.

    PubMed

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale, where large-scale models are useful for determining broad impacts of human activities on biological conditions, and regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach level accuracy similar to regional-scale models thereby allowing for impacts assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in development of adaptive neuro-fuzzy inference systems (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model.

  13. Large-scale data mining pilot project in human genome

    SciTech Connect

    Musick, R.; Fidelis, R.; Slezak, T.

    1997-05-01

    This whitepaper briefly describes a new, aggressive effort in large-scale data mining at Lawrence Livermore National Laboratory. The implications of `large-scale` will be clarified in a later section. In the short term, this effort will focus on several mission-critical questions of the Human Genome Project. We will adapt current data mining techniques to the Genome domain, quantify the accuracy of inference results, and lay the groundwork for a more extensive effort in large-scale data mining. A major aspect of the approach is that we will build on a fully-staffed data warehousing effort in the human Genome area. The long term goal is a strong applications-oriented research program in large-scale data mining. The tools and skill set gained will be directly applicable to a wide spectrum of tasks involving large spatial and multidimensional data. This includes applications in ensuring non-proliferation, stockpile stewardship, enabling Global Ecology (Materials Database Industrial Ecology), advancing the Biosciences (Human Genome Project), and supporting data for others (Battlefield Management, Health Care).

  14. Large-scale structure of randomly jammed spheres

    NASA Astrophysics Data System (ADS)

    Ikeda, Atsushi; Berthier, Ludovic; Parisi, Giorgio

    2017-05-01

    We numerically analyze the density field of three-dimensional randomly jammed packings of monodisperse soft frictionless spherical particles, paying special attention to fluctuations occurring at large length scales. We study in detail the two-point static structure factor at low wave vectors in Fourier space. We also analyze the nature of the density field in real space by studying the large-distance behavior of the two-point pair correlation function, of density fluctuations in subsystems of increasing sizes, and of the direct correlation function. We show that such real space analysis can be greatly improved by introducing a coarse-grained density field to disentangle genuine large-scale correlations from purely local effects. Our results confirm that both Fourier and real space signatures of vanishing density fluctuations at large scale are absent, indicating that randomly jammed packings are not hyperuniform. In addition, we establish that the pair correlation function displays a surprisingly complex structure at large distances, which is however not compatible with the long-range negative correlation of hyperuniform systems but fully compatible with an analytic form for the structure factor. This implies that the direct correlation function is short ranged, as we also demonstrate directly. Our results reveal that density fluctuations in jammed packings do not follow the behavior expected for random hyperuniform materials, but display instead a more complex behavior.
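
    The low-wave-vector diagnostic described above can be sketched for a toy configuration. The snippet below (a simplified stand-in for the paper's analysis, with an invented `structure_factor` helper) estimates the two-point structure factor S(k) = |ρ(k)|²/N on the discrete wave vectors of a cubic periodic box; an ideal-gas configuration gives S(k) ≈ 1 at all k, whereas a hyperuniform system would show S(k) → 0 as k → 0:

```python
import numpy as np

def structure_factor(positions, box, kmax_mult=6):
    """S(k) = |sum_j exp(-i k.r_j)|^2 / N, averaged over wave vectors of
    equal magnitude, on the grid k = (2*pi/L) * (nx, ny, nz) compatible
    with a cubic periodic box of side L."""
    N = len(positions)
    ns = np.arange(-kmax_mult, kmax_mult + 1)
    bins = {}
    for nx in ns:
        for ny in ns:
            for nz in ns:
                if nx == ny == nz == 0:
                    continue  # skip k = 0 (trivially S = N)
                k = 2 * np.pi * np.array([nx, ny, nz]) / box
                rho_k = np.exp(-1j * positions @ k).sum()
                kmag = round(np.linalg.norm(k), 8)  # group equal |k|
                bins.setdefault(kmag, []).append(abs(rho_k) ** 2 / N)
    ks = np.array(sorted(bins))
    Sk = np.array([np.mean(bins[k]) for k in ks])
    return ks, Sk
```

    Plotting Sk against ks for a jammed packing versus a Poisson configuration makes the paper's non-hyperuniformity claim concrete: the jammed curve would flatten to a nonzero plateau at small k instead of vanishing.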

  15. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  16. Variability in large-scale wind power generation

    SciTech Connect

    Kiviluoma, Juha; Holttinen, Hannele; Weir, David; Scharff, Richard; Söder, Lennart; Menemenlis, Nickie; Cutululis, Nicolaos A.; Danti Lopez, Irene; Lannoye, Eamonn; Estanqueiro, Ana; Gomez-Lazaro, Emilio; Bai, Jianhua; Wan, Yih-Huei; Milligan, Michael

    2015-10-25

    The paper demonstrates the characteristics of wind power variability and net load variability in multiple power systems based on real data from multiple years. Demonstrated characteristics include the probability distribution for different ramp durations, seasonal and diurnal variability, and low net load events. The comparison shows regions with low variability (Sweden, Spain and Germany), medium variability (Portugal, Ireland, Finland and Denmark) and regions with higher variability (Quebec, Bonneville Power Administration and Electric Reliability Council of Texas in North America; Gansu, Jilin and Liaoning in China; and Norway and offshore wind power in Denmark). For regions with low variability, the maximum 1 h wind ramps are below 10% of nominal capacity, and for regions with high variability, they may be close to 30%. Wind power variability is mainly explained by the extent of geographical spread, but a higher capacity factor also causes higher variability. It was also shown how wind power ramps are autocorrelated and dependent on the operating output level. When wind power was concentrated in a smaller area, there were outliers with large changes in wind output that were not present in large areas with well-dispersed wind power.
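
    The ramp statistics discussed above can be computed from a generation time series in a few lines. This is a generic sketch (the `ramp_stats` helper and its output fields are invented for illustration), not the paper's methodology:

```python
import numpy as np

def ramp_stats(power, capacity, duration=1):
    """Ramps as the change in output over `duration` time steps,
    expressed as a fraction of nominal capacity, plus the lag-1
    autocorrelation of the ramp series."""
    power = np.asarray(power, float)
    ramps = (power[duration:] - power[:-duration]) / capacity
    return {
        "max_up": ramps.max(),              # largest upward ramp
        "max_down": ramps.min(),            # largest downward ramp
        "p99": np.quantile(np.abs(ramps), 0.99),
        "autocorr_lag1": np.corrcoef(ramps[:-1], ramps[1:])[0, 1],
    }
```

    With hourly data, `ramp_stats(power, capacity, duration=1)["max_up"]` corresponds to the maximum 1 h ramp the abstract quotes (below 0.10 of capacity for low-variability regions, near 0.30 for high-variability ones), and `autocorr_lag1` quantifies the ramp autocorrelation it mentions.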

  17. Large scale anisotropic bias from primordial non-Gaussianity

    SciTech Connect

    Baghram, Shant; Firouzjahi, Hassan; Namjoo, Mohammad Hossein E-mail: mh.namjoo@ipm.ir

    2013-08-01

    In this work we study the large-scale structure bias in models of anisotropic inflation. We use the Peak Background Splitting method in Excursion Set Theory to find the scale-dependent bias. We show that the amplitude of the bias is modified by a direction-dependent factor. In the specific anisotropic inflation model which we study, the scale-dependent bias vanishes at leading order when the long-wavelength mode in the squeezed limit is aligned with the anisotropic direction in the sky. We also extend the scale-dependent bias formulation to general situations with primordial anisotropy. We find some selection rules indicating that specific parts of a generic anisotropic bispectrum are picked up by the bias parameter. We argue that the anisotropic bias is mainly sourced by the angle between the anisotropic direction and the long-wavelength mode in the squeezed limit.

  18. Large Scale Airflow Perturbations and Resultant Dune Dynamics

    NASA Astrophysics Data System (ADS)

    Smith, Alexander B.; Jackson, Derek W. T.; Cooper, J. Andrew G.; Beyers, Meiring

    2017-04-01

    Large-scale atmospheric turbulence can have a large impact on the regional wind regime affecting dune environments. Depending on the incident angle of mesoscale airflow, local topographic steering can also alter wind conditions and subsequent aeolian dynamics. This research analyses the influence of large-scale airflow perturbations occurring at the Maspalomas dunefield located on the southern coast of Gran Canaria, Spain. These perturbations in turn significantly influence the morphometry and migration rates of barchan dunes, monitored at the study site through time. The main meteorological station on Gran Canaria records highly uni-modal NNE wind conditions; however, simultaneously measured winds are highly variable around the island, showing a high degree of steering. Large Eddy Simulations (LES) were used to identify large-scale airflow perturbations around the island of Gran Canaria during NNE, N, and NNW incident flow directions. Results indicate that approaching surface airflow bifurcates around the island's coastline before converging at the lee coast. Winds in areas located around the island's lateral coasts are controlled by these diverging flow patterns, whereas lee-side areas are influenced primarily by the island's upwind canyon topography, leading to highly turbulent flow. Characteristic turbulent eddies show a complex wind environment at Maspalomas, with winds diverging-converging up to 180° between the eastern and western sections of the dunefield. Multi-directional flow conditions lead to highly altered dune dynamics, including the production of temporary slip faces on the stoss slopes, rapid reduction in crest height and slope length, and development of bi-crested dunes. This indicates a distinct bi-modality of airflow conditions that control the geomorphic evolution of the dunefield. Variability in wind conditions is not evident in the long-term meteorological records on the island, indicating the significance of large scale atmospheric steering on

  19. Formation of large-scale structures in ablative Kelvin-Helmholtz instability

    NASA Astrophysics Data System (ADS)

    Wang, L. F.; Ye, W. H.; Don, Wai-Sun; Sheng, Z. M.; Li, Y. J.; He, X. T.

    2010-12-01

    In this research, we numerically studied the nonlinear evolution of the Kelvin-Helmholtz instability (KHI) with and without thermal conduction, respectively termed the ablative KHI (AKHI) and the classical KHI (CKHI). A second-order thermal conduction term with a variable thermal conductivity coefficient is added to the energy equation of the Euler equations in the AKHI to investigate the effect of thermal conduction on the evolution of large- and small-scale structures within the shear layer which separates the fluids with different velocities. The inviscid hyperbolic flux of the Euler equations is computed via the classical fifth-order weighted essentially nonoscillatory finite difference scheme, and the temperature is solved by an implicit fourth-order finite difference scheme with variable coefficients in the second-order parabolic term to avoid the severe time step restriction imposed by the stability of an explicit scheme. As opposed to the CKHI, fine-scale structures such as vortical structures are suppressed from forming in the AKHI due to the dissipative nature of the second-order thermal conduction term. With a single-mode sinusoidal interface perturbation, the simulations show that the growth of higher harmonics is effectively suppressed and the flow is stabilized by the thermal conduction. With a two-mode sinusoidal interface perturbation, vortex pairing is strengthened by the thermal conduction, which allows the formation of large-scale structures and enhances the mixing of materials. In summary, our numerical studies show that thermal conduction can have a strong influence on the nonlinear evolution of the KHI. Thus, it should be included in applications where it plays an important role, such as the formation of large-scale structures in high-energy-density physics and astrophysics.
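
    The abstract's motivation for treating the conduction term implicitly can be illustrated with a minimal sketch: a backward-Euler step for 1D diffusion T_t = kappa * T_xx stays stable at time steps far beyond the explicit limit dt <= dx^2/(2*kappa). This is only a first-order, constant-coefficient toy with invented function names, not the paper's fourth-order variable-coefficient scheme.

```python
import numpy as np

def implicit_diffusion_step(T, kappa, dx, dt):
    """One backward-Euler step of T_t = kappa * T_xx on a periodic grid.
    Solving (I - dt*kappa*L) T_new = T_old removes the explicit parabolic
    time step restriction dt <= dx**2 / (2*kappa)."""
    n = T.size
    r = kappa * dt / dx**2
    A = np.eye(n) * (1.0 + 2.0 * r)
    idx = np.arange(n)
    A[idx, (idx + 1) % n] = -r   # periodic second-difference stencil
    A[idx, (idx - 1) % n] = -r
    return np.linalg.solve(A, T)

x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
dx = x[1] - x[0]
T0 = np.sin(x)
dt_explicit = dx**2 / 2.0          # explicit stability limit for kappa = 1
T_new = implicit_diffusion_step(T0, kappa=1.0, dx=dx, dt=100.0 * dt_explicit)
print(float(np.max(np.abs(T_new))))
```

    Even at 100 times the explicit limit the solution simply decays, which is the behaviour one wants from the dissipative conduction term.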

  20. Formation of large-scale structures in ablative Kelvin-Helmholtz instability

    SciTech Connect

    Wang, L. F.; Ye, W. H.; He, X. T.; Don, Wai-Sun; Sheng, Z. M.; Li, Y. J.

    2010-12-15

    In this research, we numerically studied the nonlinear evolution of the Kelvin-Helmholtz instability (KHI) with and without thermal conduction, respectively termed the ablative KHI (AKHI) and the classical KHI (CKHI). A second-order thermal conduction term with a variable thermal conductivity coefficient is added to the energy equation of the Euler equations in the AKHI to investigate the effect of thermal conduction on the evolution of large- and small-scale structures within the shear layer which separates the fluids with different velocities. The inviscid hyperbolic flux of the Euler equations is computed via the classical fifth-order weighted essentially nonoscillatory finite difference scheme, and the temperature is solved by an implicit fourth-order finite difference scheme with variable coefficients in the second-order parabolic term to avoid the severe time step restriction imposed by the stability of an explicit scheme. As opposed to the CKHI, fine-scale structures such as vortical structures are suppressed from forming in the AKHI due to the dissipative nature of the second-order thermal conduction term. With a single-mode sinusoidal interface perturbation, the simulations show that the growth of higher harmonics is effectively suppressed and the flow is stabilized by the thermal conduction. With a two-mode sinusoidal interface perturbation, vortex pairing is strengthened by the thermal conduction, which allows the formation of large-scale structures and enhances the mixing of materials. In summary, our numerical studies show that thermal conduction can have a strong influence on the nonlinear evolution of the KHI. Thus, it should be included in applications where it plays an important role, such as the formation of large-scale structures in high-energy-density physics and astrophysics.

  1. Geospatial Optimization of Siting Large-Scale Solar Projects

    SciTech Connect

    Macknick, J.; Quinby, T.; Caulfield, E.; Gerritsen, M.; Diffendorfer, J.; Haines, S.

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
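
    The multi-criteria evaluation the report describes can be made concrete with a toy weighted-overlay score. The layer names, weights, and exclusion mask below are invented for illustration and are not taken from the report's tool.

```python
import numpy as np

def suitability(layers, weights, exclusions):
    """Weighted-overlay suitability score per grid cell.
    layers: dict name -> criterion raster normalized to [0, 1]
    weights: dict name -> stakeholder-chosen weight (should sum to 1)
    exclusions: boolean raster of off-limits cells (score forced to 0)."""
    score = sum(weights[name] * layers[name] for name in layers)
    return np.where(exclusions, 0.0, score)

# Invented 2x2 toy region with three criteria
layers = {
    "solar":     np.array([[0.9, 0.8], [0.6, 0.4]]),  # resource quality
    "grid_dist": np.array([[0.2, 0.9], [0.7, 0.5]]),  # proximity to grid
    "flatness":  np.array([[1.0, 0.6], [0.8, 0.9]]),  # slope criterion
}
weights = {"solar": 0.5, "grid_dist": 0.3, "flatness": 0.2}
protected = np.array([[False, False], [True, False]])  # habitat exclusion
scores = suitability(layers, weights, protected)
print(scores)
```

    Letting different stakeholders supply their own weight dictionaries is one simple way to make such a tool user-driven and transparent.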

  2. Channel capacity of next generation large scale MIMO systems

    NASA Astrophysics Data System (ADS)

    Alshammari, A.; Albdran, S.; Matin, M.

    2016-09-01

    The information rate that can be transferred over a given bandwidth is limited by information theory. Capacity depends on many factors, such as the signal-to-noise ratio (SNR), channel state information (CSI), and the spatial correlation in the propagation environment. It is very important to increase spectral efficiency in order to meet the growing demand for wireless services. Thus, multiple-input multiple-output (MIMO) technology has been developed and applied in most wireless standards, and it has been very successful in increasing capacity and reliability. As the demand is still increasing, attention is now shifting towards large-scale MIMO, which has the potential of bringing orders-of-magnitude improvements in spectral and energy efficiency. It has been shown that users' channels decorrelate as the number of antennas increases. As a result, inter-user interference can be avoided, since energy can be focused in precise directions. This paper investigates the limits of channel capacity for large-scale MIMO. We study the relation between spectral efficiency and the number of antennas N. We use a time-division duplex (TDD) system in order to obtain CSI using training sequences in the uplink. The same CSI is used for the downlink because the channel is reciprocal. Spectral efficiency is measured for a channel model that accounts for small-scale fading while ignoring the effect of large-scale fading. It is shown that the spectral efficiency can be improved significantly compared to single-antenna systems under ideal circumstances.
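
    A back-of-the-envelope Monte-Carlo estimate of MIMO capacity under the standard assumptions implied by the abstract (i.i.d. Rayleigh small-scale fading, perfect CSI, equal power allocation) can be sketched as follows; the function name and antenna counts are illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def mimo_capacity(n_tx, n_rx, snr_db, trials=200):
    """Monte-Carlo ergodic capacity (bits/s/Hz) of an i.i.d. Rayleigh MIMO
    channel with equal power allocation and perfect receiver CSI:
    C = E[ log2 det(I + (SNR / n_tx) * H H^H) ]."""
    snr = 10.0 ** (snr_db / 10.0)
    caps = []
    for _ in range(trials):
        # complex Gaussian channel matrix with unit average entry power
        H = (rng.standard_normal((n_rx, n_tx))
             + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2.0)
        M = np.eye(n_rx) + (snr / n_tx) * (H @ H.conj().T)
        sign, logdet = np.linalg.slogdet(M)   # log-det avoids overflow
        caps.append(logdet / np.log(2.0))
    return float(np.mean(caps))

c_siso = mimo_capacity(1, 1, snr_db=10.0)
c_large = mimo_capacity(64, 8, snr_db=10.0)   # large array serving 8 streams
print(c_siso, c_large)
```

    At the same SNR, the 64-antenna array supporting 8 spatial streams yields many times the single-antenna capacity, which is the scaling argument behind large-scale MIMO.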

  3. Impact of Large-scale Geological Architectures On Recharge

    NASA Astrophysics Data System (ADS)

    Troldborg, L.; Refsgaard, J. C.; Engesgaard, P.; Jensen, K. H.

    Geological and hydrogeological data constitute the basis for assessment of groundwater flow patterns and recharge zones. The accessibility and applicability of hard geological data is often a major obstacle in deriving plausible conceptual models. Nevertheless, focus is often on parameter uncertainty caused by the effect of geological heterogeneity due to lack of hard geological data, thus neglecting the possibility of alternative conceptualizations of the large-scale geological architecture. For a catchment in the eastern part of Denmark we have constructed different geological models based on different conceptualizations of the major geological trends and facies architecture. The geological models are equally plausible in a conceptual sense, and they are all calibrated to well-head and river-flow measurements. Comparison of differences in recharge zones, and subsequently in well protection zones, emphasizes the importance of assessing large-scale geological architecture in hydrological modeling on the regional scale in a non-deterministic way. Geostatistical modeling carried out in a transitional probability framework shows the possibility of assessing multiple realizations of large-scale geological architecture from a combination of soft and hard geological information.

  4. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparisons between gridded precipitation products, along with ground observations, provide another avenue for investigating how precipitation uncertainty would affect the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
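
    Verification of a gridded product against gauges, as described above, typically starts from simple criteria such as mean bias, RMSE, and correlation. A minimal sketch with invented station values (not data from the study):

```python
import numpy as np

def verify(gridded, gauges):
    """Point-scale verification of a gridded product against gauges:
    mean bias, root-mean-square error, and Pearson correlation."""
    g = np.asarray(gridded, dtype=float)
    o = np.asarray(gauges, dtype=float)
    bias = float(np.mean(g - o))
    rmse = float(np.sqrt(np.mean((g - o) ** 2)))
    corr = float(np.corrcoef(g, o)[0, 1])
    return bias, rmse, corr

# Hypothetical monthly totals (mm) at five gauge locations
obs = [42.0, 55.0, 30.0, 71.0, 48.0]
product = [40.0, 60.0, 33.0, 65.0, 50.0]
bias, rmse, corr = verify(product, obs)
print(bias, rmse, corr)
```

    Computing these criteria over several temporal aggregations (daily, monthly, annual) is one simple way to compare products across scales.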

  5. Solving large scale structure in ten easy steps with COLA

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
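
    The LPT trajectories that COLA uses as its comoving frame can be sketched at first order (the Zel'dovich approximation), where each particle moves as x = q + D(a) * psi(q). The following 1D NumPy sketch is purely illustrative (hypothetical function name, single-mode initial conditions) and is not the COLA code itself.

```python
import numpy as np

def zeldovich_positions(delta_ini, boxsize, growth):
    """First-order LPT (Zel'dovich) positions in 1D: x = q + D * psi(q),
    where the displacement obeys delta = -d(psi)/dq, i.e. in Fourier
    space psi_k = 1j * k * delta_k / k**2."""
    n = delta_ini.size
    q = np.arange(n) * boxsize / n                    # unperturbed positions
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    delta_k = np.fft.fft(delta_ini)
    psi_k = np.zeros_like(delta_k)
    nz = k != 0
    psi_k[nz] = 1j * k[nz] * delta_k[nz] / k[nz] ** 2
    psi = np.fft.ifft(psi_k).real
    return (q + growth * psi) % boxsize

n, L = 64, 100.0
delta0 = 0.01 * np.sin(2.0 * np.pi * np.arange(n) / n)   # single-mode ICs
x = zeldovich_positions(delta0, L, growth=20.0)
print(x[:4])
```

    COLA then evolves only the residual displacement about such LPT trajectories with the N-body solver, which is why a handful of timesteps suffices at large scales.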

  6. Large-Scale Weather Disturbances in Mars’ Southern Extratropics

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2015-11-01

    Between late autumn and early spring, Mars’ middle and high latitudes within its atmosphere support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation. Such wave-like disturbances act as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adopt Mars’ full topography, compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes, indicate that Mars’ transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre

  7. Solving large scale structure in ten easy steps with COLA

    SciTech Connect

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J. E-mail: matiasz@ias.edu

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small-scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.

  8. BVI impulsive noise reduction by higher harmonic pitch control - Results of a scaled model rotor experiment in the DNW

    NASA Technical Reports Server (NTRS)

    Splettstoesser, Wolf R.; Schultz, Klaus-J.; Kube, Roland; Brooks, Thomas F.; Booth, Earl R., Jr.; Niesl, Georg; Streby, Olivier

    1991-01-01

    Results are presented of a model rotor acoustics test performed to examine the benefit of higher harmonic control (HHC) of blade pitch to reduce blade-vortex interaction (BVI) impulsive noise. A dynamically scaled, four-bladed, rigid rotor model, a 40-percent replica of the BO-105 main rotor, was tested in the German-Dutch Wind Tunnel (DNW). Noise characteristics and noise directivity patterns as well as vibratory loads were measured and used to demonstrate the changes when different HHC schedules were applied. Dramatic changes of the acoustic signatures and the noise radiation directivity with the HHC phase variations are found. Compared to the baseline conditions (without HHC), significant mid-frequency noise reductions of locally 6 dB are obtained for low-speed descent conditions where BVI is most intense. For other rotor operating conditions with less intense BVI there is less or no benefit from the use of HHC. Low-frequency noise and vibratory loads, especially at optimum noise reduction control settings, are found to increase.

  9. Large-scale biodiversity patterns in freshwater phytoplankton.

    PubMed

    Stomp, Maayke; Huisman, Jef; Mittelbach, Gary G; Litchman, Elena; Klausmeier, Christopher A

    2011-11-01

    Our planet shows striking gradients in the species richness of plants and animals, from high biodiversity in the tropics to low biodiversity in polar and high-mountain regions. Recently, similar patterns have been described for some groups of microorganisms, but the large-scale biogeographical distribution of freshwater phytoplankton diversity is still largely unknown. We examined the species diversity of freshwater phytoplankton sampled from 540 lakes and reservoirs distributed across the continental United States and found strong latitudinal, longitudinal, and altitudinal gradients in phytoplankton biodiversity, demonstrating that microorganisms can show substantial geographic variation in biodiversity. Detailed analysis using structural equation models indicated that these large-scale biodiversity gradients in freshwater phytoplankton diversity were mainly driven by local environmental factors, although there were residual direct effects of latitude, longitude, and altitude as well. Specifically, we found that phytoplankton species richness was an increasing saturating function of lake chlorophyll a concentration, increased with lake surface area and possibly increased with water temperature, resembling effects of productivity, habitat area, and temperature on diversity patterns commonly observed for macroorganisms. In turn, these local environmental factors varied along latitudinal, longitudinal, and altitudinal gradients. These results imply that changes in land use or climate that affect these local environmental factors are likely to have major impacts on large-scale biodiversity patterns of freshwater phytoplankton.

  10. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
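
    As a rough illustration of why robust regression helps with the heavy-tailed artifacts discussed above, the following sketch fits a Huber M-estimator by iteratively reweighted least squares (IRLS) and compares it with OLS on data containing gross outliers. This is a generic textbook estimator, not the authors' pipeline; the constants (e.g. the 1.345 tuning parameter and the MAD scale) are standard choices.

```python
import numpy as np

def huber_irls(X, y, delta=1.345, n_iter=50):
    """Huber M-estimator for linear regression via iteratively reweighted
    least squares (IRLS): residuals beyond delta robust-scale units are
    downweighted instead of dominating the fit."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS starting point
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # MAD scale estimate
        if s <= 0:
            s = 1.0
        u = np.abs(r) / s
        w = np.where(u <= delta, 1.0, delta / np.maximum(u, 1e-12))
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 60)
y = 2.0 * x + 1.0 + 0.3 * rng.standard_normal(60)
y[::10] += 25.0                     # gross outliers in 10% of the samples
X = np.column_stack([np.ones_like(x), x])
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
b_rob = huber_irls(X, y)
print(b_ols, b_rob)
```

    The robust fit stays near the true line (intercept 1, slope 2) while OLS is pulled toward the contaminated points.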

  11. Alteration of Large-Scale Chromatin Structure by Estrogen Receptor

    PubMed Central

    Nye, Anne C.; Rajendran, Ramji R.; Stenoien, David L.; Mancini, Michael A.; Katzenellenbogen, Benita S.; Belmont, Andrew S.

    2002-01-01

    The estrogen receptor (ER), a member of the nuclear hormone receptor superfamily important in human physiology and disease, recruits coactivators which modify local chromatin structure. Here we describe effects of ER on large-scale chromatin structure as visualized in live cells. We targeted ER to gene-amplified chromosome arms containing large numbers of lac operator sites either directly, through a lac repressor-ER fusion protein (lac rep-ER), or indirectly, by fusing lac repressor with the ER interaction domain of the coactivator steroid receptor coactivator 1. Significant decondensation of large-scale chromatin structure, comparable to that produced by the ∼150-fold-stronger viral protein 16 (VP16) transcriptional activator, was produced by ER in the absence of estradiol using both approaches. Addition of estradiol induced a partial reversal of this unfolding by green fluorescent protein-lac rep-ER but not by wild-type ER recruited by a lac repressor-SRC570-780 fusion protein. The chromatin decondensation activity did not require transcriptional activation by ER nor did it require ligand-induced coactivator interactions, and unfolding did not correlate with histone hyperacetylation. Ligand-induced coactivator interactions with helix 12 of ER were necessary for the partial refolding of chromatin in response to estradiol using the lac rep-ER tethering system. This work demonstrates that when tethered or recruited to DNA, ER possesses a novel large-scale chromatin unfolding activity. PMID:11971975

  12. Intensive agriculture erodes β-diversity at large scales.

    PubMed

    Karp, Daniel S; Rominger, Andrew J; Zook, Jim; Ranganathan, Jai; Ehrlich, Paul R; Daily, Gretchen C

    2012-09-01

    Biodiversity is declining from unprecedented land conversions that replace diverse, low-intensity agriculture with vast expanses under homogeneous, intensive production. Despite documented losses of species richness, consequences for β-diversity, changes in community composition between sites, are largely unknown, especially in the tropics. Using a 10-year data set on Costa Rican birds, we find that low-intensity agriculture sustained β-diversity across large scales on a par with forest. In high-intensity agriculture, low local (α) diversity inflated β-diversity as a statistical artefact. Therefore, at small spatial scales, intensive agriculture appeared to retain β-diversity. Unlike in forest or low-intensity systems, however, high-intensity agriculture also homogenised vegetation structure over large distances, thereby decoupling the fundamental ecological pattern of bird communities changing with geographical distance. This ~40% decline in species turnover indicates a significant decline in β-diversity at large spatial scales. These findings point the way towards multi-functional agricultural systems that maintain agricultural productivity while simultaneously conserving biodiversity.
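
    The β-diversity notion used above can be made concrete with Whittaker's multiplicative definition, β = γ / mean(α): regional richness divided by mean local richness. Homogenisation lowers β even when local (α) richness is unchanged. A minimal sketch with invented species sets:

```python
def beta_diversity(site_species):
    """Whittaker's multiplicative beta-diversity: regional (gamma)
    richness divided by mean local (alpha) richness."""
    gamma = len(set().union(*site_species))
    alpha_bar = sum(len(s) for s in site_species) / len(site_species)
    return gamma / alpha_bar

# Same mean alpha-diversity, very different species turnover between sites
homogenized = [{"a", "b", "c"}, {"a", "b", "c"}, {"a", "b", "c"}]
turnover = [{"a", "b", "c"}, {"d", "e", "f"}, {"g", "h", "i"}]
print(beta_diversity(homogenized), beta_diversity(turnover))
```

    Both communities have α = 3 at every site, but the homogenized one has β = 1.0 while complete turnover gives β = 3.0, mirroring the decline in species turnover reported for intensive agriculture.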

  13. The importance of niche differentiation for coexistence on large scales.

    PubMed

    Tang, Junfeng; Zhou, Shurong

    2011-03-21

    It is widely accepted that niche differentiation plays a key role in coexistence on relatively small scales. With regard to a large community scale, the recently propounded neutral theory suggests that species abundances are more influenced by history and chance than they are by interspecies competition. This inference is mainly based on the probability that competitive exclusion is largely slowed by recruitment limitation, which may be common in species rich communities. In this respect, a theoretical study conducted by Hurtt and Pacala (1995) for a niche differentiated community has been frequently cited to support neutral coexistence. In this paper, we focused on the effect of symmetric recruitment limitation on delaying species competitive exclusion caused by both symmetric and asymmetric competition in a large homogeneous habitat. By removing niche differentiation in space, we found that recruitment limitation could delay competitive exclusion to some extent, but the effect was rather limited compared to that predicted by Hurtt and Pacala's model for a niche differentiated community. Our results imply that niche differentiation may be important for species coexistence even on large scales and this has already been confirmed in some species rich communities. Copyright © 2010 Elsevier Ltd. All rights reserved.

  14. Large-Scale Patterns of Filament Channels and Filaments

    NASA Astrophysics Data System (ADS)

    Mackay, Duncan

    2016-07-01

    In this review the properties and large-scale patterns of filament channels and filaments will be considered. Initially, the global formation locations of filament channels and filaments are discussed, along with their hemispheric pattern. Next, observations of the formation of filament channels and filaments are described where two opposing views are considered. Finally, the wide range of models that have been constructed to consider the formation of filament channels and filaments over long time-scales are described, along with the origin of the hemispheric pattern of filaments.

  15. Clusters as cornerstones of large-scale structure.

    NASA Astrophysics Data System (ADS)

    Gottlöber, S.; Retzlaff, J.; Turchaninov, V.

    1997-04-01

    Galaxy clusters are one of the best tracers of large-scale structure in the Universe on scales well above 100 Mpc. The authors investigate here the clustering properties of a redshift sample of Abell/ACO clusters and compare the observational sample with mock samples constructed from N-body simulations on the basis of four different cosmological models. The authors discuss the power spectrum, the Minkowski functionals and the void statistics of these samples and conclude that the SCDM and TCDM models are ruled out, whereas the ΛCDM and BSI models are in agreement with the observational data.

  16. Large-Scale Optimization for Bayesian Inference in Complex Systems

    SciTech Connect

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT-Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas-Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their

  17. From Systematic Errors to Cosmology Using Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Huterer, Dragan

    We propose to carry out a two-pronged program to significantly improve links between galaxy surveys and constraints on primordial cosmology and fundamental physics. We will first develop the methodology to self-calibrate the survey, that is, determine the large-angle calibration systematics internally from the survey. We will use this information to correct biases that propagate from the largest to smaller angular scales. Our approach for tackling the systematics is very complementary to existing ones, in particular in the sense that it does not assume knowledge of specific systematic maps or templates. It is timely to undertake these analyses, since none of the currently known methods addresses the multiplicative effects of large-angle calibration errors that contaminate the small-scale signal and present one of the most significant sources of error in the large-scale structure. The second part of the proposal is to precisely quantify the statistical and systematic errors in the reconstruction of the Integrated Sachs-Wolfe (ISW) contribution to the cosmic microwave background (CMB) sky map using information from galaxy surveys. Unlike the ISW contributions to CMB power, the ISW map reconstruction has not been studied in detail to date. We will create a nimble plug-and-play pipeline to ascertain how reliably a map from an arbitrary LSS survey can be used to separate the late-time and early-time contributions to CMB anisotropy at large angular scales. We will pay particular attention to partial sky coverage, incomplete redshift information, finite redshift range, and imperfect knowledge of the selection function for the galaxy survey. Our work should serve as the departure point for a variety of implications in cosmology, including the physical origin of the large-angle CMB "anomalies".

  18. Large- to small-scale dynamo in domains of large aspect ratio: kinematic regime

    NASA Astrophysics Data System (ADS)

    Shumaylova, Valeria; Teed, Robert J.; Proctor, Michael R. E.

    2017-04-01

    The Sun's magnetic field exhibits coherence in space and time on much larger scales than the turbulent convection that ultimately powers the dynamo. In this work, we look for numerical evidence of a large-scale magnetic field as the magnetic Reynolds number, Rm, is increased. The investigation is based on the simulations of the induction equation in elongated periodic boxes. The imposed flows considered are the standard ABC flow (named after Arnold, Beltrami & Childress) with wavenumber ku = 1 (small-scale) and a modulated ABC flow with wavenumbers ku = m, 1, 1 ± m, where m is the wavenumber corresponding to the long-wavelength perturbation on the scale of the box. The critical magnetic Reynolds number R_m^{crit} decreases as the permitted scale separation in the system increases, such that R_m^{crit} ∝ [L_x/L_z]^{-1/2}. The results show that the α-effect derived from the mean-field theory ansatz is valid for a small range of Rm after which small scale dynamo instability occurs and the mean-field approximation is no longer valid. The transition from large- to small-scale dynamo is smooth and takes place in two stages: a fast transition into a predominantly small-scale magnetic energy state and a slower transition into even smaller scales. In the range of Rm considered, the most energetic Fourier component corresponding to the structure in the long x-direction has twice the length-scale of the forcing scale. The long-wavelength perturbation imposed on the ABC flow in the modulated case is not preserved in the eigenmodes of the magnetic field.

  19. Modelling large-scale halo bias using the bispectrum

    NASA Astrophysics Data System (ADS)

    Pollack, Jennifer E.; Smith, Robert E.; Porciani, Cristiano

    2012-03-01

    We study the relation between the density distribution of tracers for large-scale structure and the underlying matter distribution - commonly termed bias - in the Λ cold dark matter framework. In particular, we examine the validity of the local model of biasing at quadratic order in the matter density. This model is characterized by parameters b1 and b2. Using an ensemble of N-body simulations, we apply several statistical methods to estimate the parameters. We measure halo and matter fluctuations smoothed on various scales. We find that, whilst the fits are reasonably good, the parameters vary with smoothing scale. We argue that, for real-space measurements, owing to the mixing of wavemodes, no smoothing scale can be found for which the parameters are independent of smoothing. However, this is not the case in Fourier space. We measure halo and halo-mass power spectra and from these construct estimates of the effective large-scale bias as a guide for b1. We measure the configuration dependence of the halo bispectra B_hhh and reduced bispectra Q_hhh for very large-scale k-space triangles. From these data, we constrain b1 and b2, taking into account the full bispectrum covariance matrix. Using the lowest order perturbation theory, we find that for B_hhh the best-fitting parameters are in reasonable agreement with one another as the triangle scale is varied, although the fits become poor as smaller scales are included. The same is true for Q_hhh. The best-fitting values were found to depend on the discreteness correction. This led us to consider halo-mass cross-bispectra. The results from these statistics supported our earlier findings. We then developed a test to explore whether the inconsistency in the recovered bias parameters could be attributed to missing higher order corrections in the models. We prove that low-order expansions are not sufficiently accurate to model the data, even on scales k1 ~ 0.04 h Mpc^-1. If robust inferences concerning bias are to be drawn
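    The local quadratic bias model examined above can be illustrated with a toy Gaussian field: writing δ_h = b1 δ + (b2/2)(δ² − ⟨δ²⟩), the cross-correlation ⟨δ_h δ⟩/⟨δ²⟩ recovers b1, because odd moments of a Gaussian field vanish. A minimal sketch (the b1, b2 values are hypothetical, and a Gaussian white field stands in for the smoothed matter density):

```python
import numpy as np

# Toy illustration of the local quadratic bias model:
#   delta_h = b1*delta + (b2/2)*(delta^2 - <delta^2>).
rng = np.random.default_rng(0)
b1, b2 = 1.5, 0.8                      # hypothetical bias parameters

delta = rng.standard_normal(200_000)   # stand-in for the smoothed matter field
delta_h = b1 * delta + 0.5 * b2 * (delta**2 - np.mean(delta**2))

# Cross-correlation estimator for b1; the b2 term drops out for Gaussian delta.
b1_est = np.mean(delta_h * delta) / np.mean(delta**2)
print(round(b1_est, 2))                # close to the input b1 = 1.5
```

    In the paper the estimators act on simulated halo and matter fields, with the bispectrum constraining b2 as well; the sketch only shows why the two-point statistic pins down b1.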

  20. Large scale rigidity-based flexibility analysis of biomolecules

    PubMed Central

    Streinu, Ileana

    2016-01-01

    KINematics And RIgidity (KINARI) is an on-going project for in silico flexibility analysis of proteins. The new version of the software, Kinari-2, extends the functionality of our free web server KinariWeb, incorporates advanced web technologies, emphasizes the reproducibility of its experiments, and makes substantially improved tools available to the user. It is designed specifically for large scale experiments, in particular for (a) very large molecules, including bioassemblies with a high degree of symmetry such as viruses and crystals, (b) large collections of related biomolecules, such as those obtained through simulated dilutions, mutations, or conformational changes from various types of dynamics simulations, and (c) the large, idiosyncratic, publicly available repository of biomolecules, the Protein Data Bank, on which it is intended to work as seamlessly as possible. We describe the system design, along with the main data processing, computational, mathematical, and validation challenges underlying this phase of the KINARI project. PMID:26958583

  1. Scale-Similar Models for Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Sarghini, F.

    1999-01-01

    Scale-similar models employ multiple filtering operations to identify the smallest resolved scales, which have been shown to be the most active in the interaction with the unresolved subgrid scales. They do not assume that the principal axes of the strain-rate tensor are aligned with those of the subgrid-scale stress (SGS) tensor, and allow the explicit calculation of the SGS energy. They can provide backscatter in a numerically stable and physically realistic manner, and predict SGS stresses in regions that are well correlated with the locations where large Reynolds stress occurs. In this paper, eddy viscosity and mixed models, which include an eddy-viscosity part as well as a scale-similar contribution, are applied to the simulation of two flows, a high Reynolds number plane channel flow, and a three-dimensional, nonequilibrium flow. The results show that simulations without models or with the Smagorinsky model are unable to predict nonequilibrium effects. Dynamic models provide an improvement of the results: the adjustment of the coefficient results in more accurate prediction of the perturbation from equilibrium. The Lagrangian-ensemble approach [Meneveau et al., J. Fluid Mech. 319, 353 (1996)] is found to be very beneficial. Models that included a scale-similar term and a dissipative one, as well as the Lagrangian ensemble averaging, gave results in the best agreement with the direct simulation and experimental data.

  2. Response function of the large-scale structure of the universe to the small scale inhomogeneities

    NASA Astrophysics Data System (ADS)

    Nishimichi, Takahiro; Bernardeau, Francis; Taruya, Atsushi

    2016-11-01

    In order to infer the impact of small-scale physics on the large-scale properties of the universe, we use a series of cosmological N-body simulations of self-gravitating matter inhomogeneities to measure, for the first time, the response function of such a system, defined as the functional derivative of the nonlinear power spectrum with respect to its linear counterpart. Its measured shape and amplitude are found to be in good agreement with perturbation theory predictions, except for the coupling from small- to large-scale perturbations. The latter is found to be significantly damped, following a Lorentzian form. These results shed light on the validity regime of perturbation theory calculations, giving a useful guideline for the regularization of small-scale effects in analytical modeling. Most importantly, our results indicate that the statistical properties of the large-scale structure of the universe are remarkably insensitive to the details of the small-scale physics, astrophysical or gravitational, paving the way for the derivation of robust estimates of theoretical uncertainties on the determination of cosmological parameters from large-scale survey observations.
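    The central object above, the response function T(k,q) = δP_nl(k)/δP_lin(q), can be illustrated on a toy mode-coupling model (not the paper's N-body pipeline): perturb the linear spectrum up and down in a single bin q and difference the nonlinear output. All numbers and the "nonlinear" mapping below are illustrative assumptions:

```python
import numpy as np

# Toy setup: a power-law "linear" spectrum and a crude mode-coupling map.
k = np.linspace(0.05, 1.0, 20)
dk = k[1] - k[0]
P_lin = k**-1.5
lam = 0.1                               # toy coupling strength

def P_nl(P):
    """Toy 'nonlinear' spectrum: linear part plus a mode-coupling integral."""
    return P + lam * P * np.sum(P) * dk

def response(P, eps=1e-4):
    """Estimate T(k,q) by central finite differences, one q-bin at a time."""
    T = np.zeros((len(P), len(P)))
    for j in range(len(P)):
        dP = np.zeros_like(P)
        dP[j] = eps
        T[:, j] = (P_nl(P + dP) - P_nl(P - dP)) / (2 * eps)
    return T

T = response(P_lin)
# Analytic derivative of the toy map, for comparison:
T_exact = (np.eye(len(k)) * (1 + lam * np.sum(P_lin) * dk)
           + lam * dk * np.outer(P_lin, np.ones_like(k)))
print(np.allclose(T, T_exact, atol=1e-4))  # True
```

    In the actual measurement each "evaluation of P_nl" is a full N-body run with a perturbed initial spectrum, which is what makes the result nontrivial.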

  3. Generation of large-scale magnetic fields by small-scale dynamo in shear flows

    NASA Astrophysics Data System (ADS)

    Squire, Jonathan

    2016-10-01

    A novel large-scale dynamo mechanism, the magnetic shear-current effect, is discussed and explored. The effect relies on the interaction of magnetic fluctuations with a mean shear flow, meaning the saturated state of the small-scale dynamo can drive a large-scale dynamo - in some sense the inverse of dynamo quenching. The dynamo is nonhelical, with the mean-field alpha coefficient zero, and is caused by the interaction between an off-diagonal component of the turbulent resistivity and stretching of the large-scale field by shear flow. In this talk, a variety of computational and analytic studies of this mechanism are discussed, which have been carried out both in regimes where magnetic fluctuations arise self-consistently through the small-scale dynamo and at lower Reynolds numbers. In addition, a heuristic description of the effect is presented, which illustrates the fundamental role played by the pressure response of the fluid and helps explain why the magnetic effect is stronger than its kinematic cousin. As well as being interesting for its applications to general high Reynolds number astrophysical turbulence, where strong small-scale magnetic fluctuations are expected to be prevalent, the magnetic shear-current effect is a likely candidate for large-scale dynamo in the unstratified regions of ionized accretion disks. Supported by the Sherman Fairchild Foundation and DOE (DE-AC02-09-CH11466).

  4. Resolving the paradox of oceanic large-scale balance and small-scale mixing.

    PubMed

    Marino, R; Pouquet, A; Rosenberg, D

    2015-03-20

    A puzzle of oceanic dynamics is the contrast between the observed geostrophic balance, involving gravity, pressure gradient, and Coriolis forces, and the necessary turbulent transport: in the former case, energy flows to large scales, leading to spectral condensation, whereas in the latter, it is transferred to small scales, where dissipation prevails. The known bidirectional constant-flux energy cascade maintaining both geostrophic balance and mixing tends towards flux equilibration as turbulence strengthens, contradicting models and recent observations which find a dominant large-scale flux. Analyzing a large ensemble of high-resolution direct numerical simulations of the Boussinesq equations in the presence of rotation and no salinity, we show that the ratio of the dual energy flux to large and to small scales agrees with observations, and we predict that it scales with the inverse of the Froude and Rossby numbers when stratification is (realistically) stronger than rotation. Furthermore, we show that the kinetic and potential energies separately undergo a bidirectional transfer to larger and smaller scales. Altogether, this allows for small-scale mixing which drives the global oceanic circulation and will thus potentially lead to more accurate modeling of climate dynamics.

  5. On decentralized control of large-scale systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.

    1978-01-01

    A scheme is presented for decentralized control of large-scale linear systems which are composed of a number of interconnected subsystems. By ignoring the interconnections, local feedback controls are chosen to optimize each decoupled subsystem. Conditions are provided to establish compatibility of the individual local controllers and achieve stability of the overall system. Besides computational simplifications, the scheme is attractive because of its structural features and the fact that it produces a robust decentralized regulator for large dynamic systems, which can tolerate a wide range of nonlinearities and perturbations among the subsystems.
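    The scheme described above can be sketched in a few lines for the simplest case: two identical unstable scalar subsystems, each stabilized by a local LQR gain designed while ignoring the interconnections, followed by a stability check of the coupled overall system. All plant numbers are hypothetical:

```python
import numpy as np

# Two identical scalar subsystems dx_i/dt = a*x_i + b*u_i (hypothetical numbers),
# each given a LOCAL optimal controller designed with interconnections ignored.
a, b, q, r = 1.0, 1.0, 1.0, 1.0

# Scalar algebraic Riccati equation (b^2/r)*P^2 - 2*a*P - q = 0, positive root:
P = r * (a + np.sqrt(a**2 + b**2 * q / r)) / b**2
k = b * P / r                              # local gain, u_i = -k * x_i

eps = 0.5                                  # weak subsystem interconnection
A_cl = np.array([[a - b * k, eps],
                 [eps,       a - b * k]])  # overall closed-loop matrix
stable = bool(np.all(np.linalg.eigvals(A_cl).real < 0))
print(stable)  # True: the local designs tolerate this interconnection
```

    The point of the paper's conditions is to guarantee this outcome a priori for a class of interconnections, rather than checking eigenvalues after the fact as done here.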

  6. Frequency domain multiplexing for large-scale bolometer arrays

    SciTech Connect

    Spieler, Helmuth

    2002-05-31

    The development of planar fabrication techniques for superconducting transition-edge sensors has brought large-scale arrays of 1000 pixels or more to the realm of practicality. This raises the problem of reading out a large number of sensors with a tractable number of connections. A possible solution is frequency-domain multiplexing. I summarize basic principles, present various circuit topologies, and discuss design trade-offs, noise performance, cross-talk and dynamic range. The design of a practical device and its readout system is described with a discussion of fabrication issues, practical limits and future prospects.
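    The basic principle of frequency-domain multiplexing can be sketched numerically: each sensor amplitude-modulates its own carrier, the channels are summed onto one line, and lock-in style demodulation (multiply by the carrier, then average) separates them again. Carrier frequencies and amplitudes below are illustrative, not values from an actual bolometer readout:

```python
import numpy as np

# Illustrative parameters (NOT from a real TES readout design).
fs = 100_000.0                          # sample rate, Hz
t = np.arange(0, 0.1, 1 / fs)           # 0.1 s record (integer carrier cycles)
carriers = [1000.0, 3000.0, 7000.0]     # one carrier per sensor, Hz
amps = [0.5, 1.2, 0.8]                  # sensor signals (constant here)

# All channels share one line: sum of amplitude-modulated carriers.
line = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, carriers))

# Lock-in demodulation: multiply by each carrier and average; orthogonality
# over integer numbers of cycles zeroes out the other channels.
recovered = [2 * np.mean(line * np.sin(2 * np.pi * f * t)) for f in carriers]
print([round(a, 3) for a in recovered])  # [0.5, 1.2, 0.8]
```

    Real readouts demodulate slowly varying sensor signals and must manage cross-talk between finite-bandwidth channels, which is exactly the design trade-off the summary discusses.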

  7. Large Scale Composite Manufacturing for Heavy Lift Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Stavana, Jacob; Cohen, Leslie J.; Houseal, Keth; Pelham, Larry; Lort, Richard; Zimmerman, Thomas; Sutter, James; Western, Mike; Harper, Robert; Stuart, Michael

    2012-01-01

    Risk reduction for large-scale composite manufacturing is an important goal in producing lightweight components for heavy lift launch vehicles. NASA and an industry team successfully employed a building block approach using low-cost Automated Tape Layup (ATL) of autoclave and Out-of-Autoclave (OoA) prepregs. Several large, curved sandwich panels were fabricated at HITCO Carbon Composites. The aluminum honeycomb core sandwich panels are segments of a 1/16th arc from a 10 meter cylindrical barrel. Lessons learned highlight the manufacturing challenges of producing lightweight composite structures such as fairings for heavy lift launch vehicles.

  8. Artificial intelligence and large scale computation: A physics perspective

    NASA Astrophysics Data System (ADS)

    Hogg, Tad; Huberman, B. A.

    1987-12-01

    We study the macroscopic behavior of computation and examine both emergent collective phenomena and dynamical aspects with an emphasis on software issues, which are at the core of large scale distributed computation and artificial intelligence systems. By considering large systems, we exhibit novel phenomena which cannot be foreseen from examination of their smaller counterparts. We review both the symbolic and connectionist views of artificial intelligence, provide a number of examples which display these phenomena, and resort to statistical mechanics, dynamical systems theory and the theory of random graphs to elicit the range of possible behaviors.

  9. Large-scale genotoxicity assessments in the marine environment.

    PubMed

    Hose, J E

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile end point, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill.

  10. Decentrally stabilizable linear and bilinear large-scale systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.; Vukcevic, M. B.

    1977-01-01

    Two classes of large-scale systems are identified, which can always be stabilized by decentralized feedback control. For the class of systems composed of interconnected linear subsystems, we can choose local controllers for the subsystems to achieve stability of the overall system. The same linear feedback scheme can be used to stabilize a class of linear systems with bilinear interconnections. In this case, however, the scheme is used to establish a finite region of stability for the overall system. The stabilization algorithm is applied to the design of a control system for the Large-Space Telescope.

  11. Large-scale genotoxicity assessments in the marine environment.

    PubMed Central

    Hose, J E

    1994-01-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile end point, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill. PMID:7713029

  12. Large-scale genotoxicity assessments in the marine environment

    SciTech Connect

    Hose, J.E.

    1994-12-01

    There are a number of techniques for detecting genotoxicity in the marine environment, and many are applicable to large-scale field assessments. Certain tests can be used to evaluate responses in target organisms in situ while others utilize surrogate organisms exposed to field samples in short-term laboratory bioassays. Genotoxicity endpoints appear distinct from traditional toxicity endpoints, but some have chemical or ecotoxicologic correlates. One versatile end point, the frequency of anaphase aberrations, has been used in several large marine assessments to evaluate genotoxicity in the New York Bight, in sediment from San Francisco Bay, and following the Exxon Valdez oil spill. 31 refs., 2 tabs.

  13. Nearly incompressible fluids: hydrodynamics and large scale inhomogeneity.

    PubMed

    Hunana, P; Zank, G P; Shaikh, D

    2006-08-01

    A system of hydrodynamic equations in the presence of large-scale inhomogeneities for a high plasma beta solar wind is derived. The theory is derived under the assumption of low turbulent Mach number and is developed for the flows where the usual incompressible description is not satisfactory and a full compressible treatment is too complex for any analytical studies. When the effects of compressibility are incorporated only weakly, a new description, referred to as "nearly incompressible hydrodynamics," is obtained. The nearly incompressible theory was originally applied to homogeneous flows. However, large-scale gradients in density, pressure, temperature, etc., are typical in the solar wind and it was unclear how inhomogeneities would affect the usual incompressible and nearly incompressible descriptions. In the homogeneous case, the lowest order expansion of the fully compressible equations leads to the usual incompressible equations, followed at higher orders by the nearly incompressible equations, as introduced by Zank and Matthaeus. With this work we show that the inclusion of large-scale inhomogeneities (in this case a time-independent and radially symmetric background solar wind) modifies the leading-order incompressible description of solar wind flow. We find, for example, that the divergence of velocity fluctuations is nonsolenoidal and that density fluctuations can be described to leading order as a passive scalar. Locally (for small length scales), this system of equations converges to the usual incompressible equations and we therefore use the term "locally incompressible" to describe the equations. This term should be distinguished from the term "nearly incompressible," which is reserved for higher-order corrections. Furthermore, we find that density fluctuations scale linearly with Mach number, in contrast to the original homogeneous nearly incompressible theory, in which density fluctuations scale with the square of Mach number.

  14. Electron drift in a large scale solid xenon

    SciTech Connect

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed in large scale solid xenon is a factor of two faster than that in the liquid phase.
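    A quick arithmetic check of the quoted numbers (central values only, uncertainties ignored):

```python
# Central values quoted in the abstract.
v_liquid = 0.193    # cm/us, liquid xenon at 163 K
v_solid = 0.397     # cm/us, solid xenon at 157 K
drift_length = 8.0  # cm of uniform 900 V/cm field

print(round(v_solid / v_liquid, 2))      # 2.06: roughly a factor of two faster
print(round(drift_length / v_solid, 1))  # 20.2 us transit time in the solid
```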

  15. Electron drift in a large scale solid xenon

    DOE PAGES

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

    A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, it is demonstrated that the electron drift speed in large scale solid xenon is a factor of two faster than that in the liquid phase.

  16. Applications of large-scale density functional theory in biology

    NASA Astrophysics Data System (ADS)

    Cole, Daniel J.; Hine, Nicholas D. M.

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first principles modelling of biological structure-function relationships is approaching reality.

  17. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    SciTech Connect

    Nusser, Adi; Branchini, Enzo; Davis, Marc E-mail: branchin@fis.uniroma3.it

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.

  18. The Large Scale Synthesis of Aligned Plate Nanostructures

    PubMed Central

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-01-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential. PMID:27439672

  19. Lagrangian space consistency relation for large scale structure

    SciTech Connect

    Horn, Bart; Hui, Lam; Xiao, Xiao E-mail: lh399@columbia.edu

    2015-09-01

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  20. The workshop on iterative methods for large scale nonlinear problems

    SciTech Connect

    Walker, H.F.; Pernice, M.

    1995-12-01

    The aim of the workshop was to bring together researchers working on large scale applications with numerical specialists of various kinds. Applications that were addressed included reactive flows (combustion and other chemically reacting flows, tokamak modeling), porous media flows, cardiac modeling, chemical vapor deposition, image restoration, macromolecular modeling, and population dynamics. Numerical areas included Newton iterative (truncated Newton) methods, Krylov subspace methods, domain decomposition and other preconditioning methods, large scale optimization and optimal control, and parallel implementations and software. This report offers a brief summary of workshop activities and information about the participants. Interested readers are encouraged to look into an online proceedings available at http://www.usi.utah.edu/logan.proceedings. There, the material offered here is augmented with hypertext abstracts that include links to locations such as speakers' home pages, PostScript copies of talks and papers, cross-references to related talks, and other information about topics addressed at the workshop.

  1. Unstable `black branes' from scaled membranes at large D

    NASA Astrophysics Data System (ADS)

    Dandekar, Yogesh; Mazumdar, Subhajit; Minwalla, Shiraz; Saha, Arunabha

    2016-12-01

    It has recently been demonstrated that the dynamics of black holes at large D can be recast as a set of non-gravitational membrane equations. These membrane equations admit a simple static solution with shape S^{D-p-2} × R^{p,1}. In this note we study the equations for small fluctuations about this solution in a limit in which the amplitude and length scale of the fluctuations are simultaneously scaled to zero as D is taken to infinity. We demonstrate that the resultant nonlinear equations, which capture the Gregory-Laflamme instability and its end point, exactly agree with the effective dynamical `black brane' equations of Emparan, Suzuki and Tanabe. Our results thus identify the `black brane' equations as a special limit of the membrane equations and so unify these approaches to large D black hole dynamics.

  2. Large-scale linear nonparallel support vector machine solver.

    PubMed

    Tian, Yingjie; Ping, Yuan

    2014-02-01

    Twin support vector machines (TWSVMs), as the representative nonparallel hyperplane classifiers, have shown their effectiveness over standard SVMs in some respects. However, they still have serious defects restricting their further study and real applications: (1) they must compute and store inverse matrices before training, which is intractable for many applications where the data have a huge number of instances as well as features; (2) TWSVMs lose sparseness by using a quadratic loss function that pulls the proximal hyperplane close to the class itself. This paper proposes a sparse linear nonparallel support vector machine, termed L1-NPSVM, to deal with large-scale data, based on an efficient solver: the dual coordinate descent (DCD) method. Both theoretical analysis and experiments indicate that our method is not only suitable for large scale problems, but also performs as well as TWSVMs and SVMs.
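    The DCD solver the method builds on can be sketched for a plain L1-loss linear SVM without a bias term (the nonparallel-hyperplane modifications of L1-NPSVM are omitted here). The key idiom is maintaining w = Σ_i α_i y_i x_i so each coordinate update costs O(d):

```python
import numpy as np

def dcd_linear_svm(X, y, C=1.0, epochs=50):
    """Dual coordinate descent for a bias-free L1-loss linear SVM (sketch)."""
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)                      # maintained as w = sum_i alpha_i y_i x_i
    Qii = np.einsum("ij,ij->i", X, X)    # diagonal of the Gram matrix
    for _ in range(epochs):
        for i in range(n):
            G = y[i] * (w @ X[i]) - 1.0          # partial gradient at coordinate i
            new = min(max(alpha[i] - G / Qii[i], 0.0), C)  # project onto [0, C]
            w += (new - alpha[i]) * y[i] * X[i]  # keep w consistent with alpha
            alpha[i] = new
    return w

# Tiny separable example (illustrative data, not from the paper).
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = dcd_linear_svm(X, y)
print(bool(np.all(np.sign(X @ w) == y)))  # True: training set separated
```

    L1-NPSVM solves two such subproblems, one per nonparallel hyperplane, with an L1 regularizer restoring sparseness; the coordinate-wise update above carries over.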

  3. Instrumentation Development for Large Scale Hypersonic Inflatable Aerodynamic Decelerator Characterization

    NASA Technical Reports Server (NTRS)

    Swanson, Gregory T.; Cassell, Alan M.

    2011-01-01

    Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles have been reached. The Inflatable Re-entry Vehicle Experiment (IRVE) has successfully demonstrated this technology as a viable candidate with a sub-orbital flight of a 3.0 m diameter vehicle. To further this technology, large scale HIADs (6.0-8.5 m) must be developed and tested. To characterize the performance of large scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.

  4. Large-scale quantum networks based on graphs

    NASA Astrophysics Data System (ADS)

    Epping, Michael; Kampermann, Hermann; Bruß, Dagmar

    2016-05-01

    Society relies and depends increasingly on information exchange and communication. In the quantum world, security and privacy is a built-in feature for information processing. The essential ingredient for exploiting these quantum advantages is the resource of entanglement, which can be shared between two or more parties. The distribution of entanglement over large distances constitutes a key challenge for current research and development. Due to losses of the transmitted quantum particles, which typically scale exponentially with the distance, intermediate quantum repeater stations are needed. Here we show how to generalise the quantum repeater concept to the multipartite case, by describing large-scale quantum networks, i.e. network nodes and their long-distance links, consistently in the language of graphs and graph states. This unifying approach comprises both the distribution of multipartite entanglement across the network, and the protection against errors via encoding. The correspondence to graph states also provides a tool for optimising the architecture of quantum networks.
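    The graph-state language used above is concrete: a graph state on vertices V with edges E is the joint +1 eigenstate of the stabilizers K_a = X_a ∏_{b∈N(a)} Z_b. A minimal sketch that writes out those Pauli strings from an edge list (no quantum library needed; the example graph is an illustrative three-node repeater chain):

```python
def stabilizers(n, edges):
    """Pauli strings K_a = X_a * prod_{b in N(a)} Z_b for an n-vertex graph state."""
    neighbors = {a: set() for a in range(n)}
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    gens = []
    for a in range(n):
        s = ["I"] * n
        s[a] = "X"                 # X on the vertex itself
        for b in neighbors[a]:
            s[b] = "Z"             # Z on each neighbour
        gens.append("".join(s))
    return gens

# Three-node path graph 0-1-2, a toy repeater chain:
print(stabilizers(3, [(0, 1), (1, 2)]))  # ['XZI', 'ZXZ', 'IZX']
```

    In the paper's framework, network nodes and long-distance links are encoded in such graphs, and error protection corresponds to replacing each vertex by an encoded logical qubit.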

  5. LARGE SCALE PURIFICATION OF PROTEINASES FROM CLOSTRIDIUM HISTOLYTICUM FILTRATES

    PubMed Central

    Conklin, David A.; Webster, Marion E.; Altieri, Patricia L.; Berman, Sanford; Lowenthal, Joseph P.; Gochenour, Raymond B.

    1961-01-01

    Conklin, David A. (Walter Reed Army Institute of Research, Washington, D. C.), Marion E. Webster, Patricia L. Altieri, Sanford Berman, Joseph P. Lowenthal, and Raymond B. Gochenour. Large scale purification of proteinases from Clostridium histolyticum filtrates. J. Bacteriol. 82:589–594. 1961.—A method for the large scale preparation and partial purification of Clostridium histolyticum proteinases by fractional precipitation with ammonium sulfate is described. Conditions for adequate separation and purification of the δ-proteinase and the gelatinase were obtained. Collagenase, on the other hand, was found distributed in four to five fractions and little increase in purity was achieved as compared to the crude ammonium sulfate precipitates. PMID:13880849

  6. The Large Scale Synthesis of Aligned Plate Nanostructures

    NASA Astrophysics Data System (ADS)

    Zhou, Yang; Nash, Philip; Liu, Tian; Zhao, Naiqin; Zhu, Shengli

    2016-07-01

    We propose a novel technique for the large-scale synthesis of aligned-plate nanostructures that are self-assembled and self-supporting. The synthesis technique involves developing nanoscale two-phase microstructures through discontinuous precipitation followed by selective etching to remove one of the phases. The method may be applied to any alloy system in which the discontinuous precipitation transformation goes to completion. The resulting structure may have many applications in catalysis, filtering and thermal management depending on the phase selection and added functionality through chemical reaction with the retained phase. The synthesis technique is demonstrated using the discontinuous precipitation of a γ′ phase, (Ni, Co)3Al, followed by selective dissolution of the γ matrix phase. The production of the nanostructure requires heat treatments on the order of minutes and can be performed on a large scale making this synthesis technique of great economic potential.

  7. Comparative study of large-scale nonlinear optimization methods

    SciTech Connect

    Alemzadeh, S.A.

    1987-01-01

Solving large-scale nonlinear optimization problems has been one of the active research areas for the last twenty years. Several heuristic algorithms with codes have been developed and implemented since 1966. This study explores the motivation and basic mathematical ideas leading to the development of the MINOS-1.0, GRG-2, and MINOS-5.0 algorithms and their codes. The reliability, accuracy, and complexity of the algorithms and software depend upon their use of the gradient, the Jacobian, and the Hessian. MINOS-1.0 and GRG-2 incorporate all of the input and output features, but MINOS-1.0 is not able to handle nonlinearly constrained problems and GRG-2 is not able to handle large-scale problems. MINOS-5.0 is a robust and efficient software package that incorporates all input and output features.

  8. The CLASSgal code for relativistic cosmological large scale structure

    SciTech Connect

    Dio, Enea Di; Montanari, Francesco; Durrer, Ruth; Lesgourgues, Julien E-mail: Francesco.Montanari@unige.ch E-mail: Ruth.Durrer@unige.ch

    2013-11-01

    We present accurate and efficient computations of large scale structure observables, obtained with a modified version of the CLASS code which is made publicly available. This code includes all relativistic corrections and computes both the power spectrum C{sub ℓ}(z{sub 1},z{sub 2}) and the corresponding correlation function ξ(θ,z{sub 1},z{sub 2}) of the matter density and the galaxy number fluctuations in linear perturbation theory. For Gaussian initial perturbations, these quantities contain the full information encoded in the large scale matter distribution at the level of linear perturbation theory. We illustrate the usefulness of our code for cosmological parameter estimation through a few simple examples.
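
    The two observables the abstract pairs are related by the standard Legendre expansion, ξ(θ) = Σ_ℓ (2ℓ+1)/(4π) C_ℓ P_ℓ(cos θ). A minimal numerical sketch of that sum follows; it is illustrative only and does not use the CLASS code or its relativistic corrections.

```python
import numpy as np

def xi_from_cl(cl, theta):
    """Correlation function xi(theta) from an angular power spectrum C_l
    via the Legendre series  xi = sum_l (2l+1)/(4 pi) C_l P_l(cos theta)."""
    ell = np.arange(len(cl))
    coeffs = (2 * ell + 1) / (4 * np.pi) * np.asarray(cl)
    # legval expects the coefficient of P_l at index l.
    return np.polynomial.legendre.legval(np.cos(theta), coeffs)

# Toy spectrum: only the quadrupole (l = 2) is nonzero.
cl = np.zeros(10)
cl[2] = 1.0
theta = np.linspace(0.0, np.pi, 5)
print(xi_from_cl(cl, theta))  # proportional to P_2(cos theta)
```

    With only C_2 nonzero, ξ(0) = 5/(4π) ≈ 0.398, as P_2(1) = 1.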

  9. A first large-scale flood inundation forecasting model

    SciTech Connect

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04

At present, continental- to global-scale flood forecasting focusses on predicting discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet inundation is actually the variable of interest, and all flood impacts are inherently local in nature. This paper proposes a first large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170,000 km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model, which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) of the observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode

  10. Low frequency steady-state brain responses modulate large scale functional networks in a frequency-specific means.

    PubMed

    Wang, Yi-Feng; Long, Zhiliang; Cui, Qian; Liu, Feng; Jing, Xiu-Juan; Chen, Heng; Guo, Xiao-Nan; Yan, Jin H; Chen, Hua-Fu

    2016-01-01

Neural oscillations are essential for brain functions. Research has suggested that the frequency of neural oscillations is lower for more integrative and remote communications. In this vein, some resting-state studies have suggested that large scale networks function in the very low frequency range (<1 Hz). However, it is difficult to determine the frequency characteristics of brain networks because both resting-state studies and conventional frequency tagging approaches cannot simultaneously capture multiple large scale networks in controllable cognitive activities. In this preliminary study, we aimed to examine whether large scale networks can be modulated by task-induced low frequency steady-state brain responses (lfSSBRs) in a frequency-specific pattern. In a revised attention network test, the lfSSBRs were evoked in the triple network system and the sensory-motor system, indicating that large scale networks can be modulated via frequency tagging. Furthermore, the inter- and intranetwork synchronizations as well as coherence were increased at the fundamental frequency and the first harmonic rather than at other frequency bands, indicating a frequency-specific modulation of information communication. However, there was no difference among attention conditions, indicating that lfSSBRs modulate the general attention state much more strongly than they distinguish attention conditions. This study provides insights into the advantage and mechanism of lfSSBRs. More importantly, it opens a new way to investigate frequency-specific large scale brain activities.
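
    The frequency-tagging logic described above (signal power concentrating at the tag frequency and its first harmonic) can be illustrated on synthetic data. All parameters here are invented for illustration; this is not the study's analysis pipeline.

```python
import numpy as np

# Hypothetical parameters: a 0.5 Hz task rhythm tagged in a 100 s recording.
fs, T, f_tag = 250.0, 100.0, 0.5
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(0)

# Steady-state response at the fundamental and its first harmonic, plus noise.
sig = 1.0 * np.sin(2 * np.pi * f_tag * t) + 0.5 * np.sin(2 * np.pi * 2 * f_tag * t)
sig += rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(sig)) / t.size   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    """Amplitude in the spectral bin nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

print(amp_at(f_tag), amp_at(2 * f_tag))  # both peaks stand out above the noise floor
```

    The tagged frequency and its harmonic rise far above the broadband background, which is how steady-state responses are identified in practice.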

  11. Anisotropy and nonuniversality in scaling laws of the large-scale energy spectrum in rotating turbulence.

    PubMed

    Sen, Amrik; Mininni, Pablo D; Rosenberg, Duane; Pouquet, Annick

    2012-09-01

Rapidly rotating turbulent flow is characterized by the emergence of columnar structures that are representative of quasi-two-dimensional behavior of the flow. It is known that when energy is injected into the fluid at an intermediate scale L_f, it cascades towards smaller as well as larger scales. In this paper we analyze the flow in the inverse cascade range at a small but fixed Rossby number, Ro_f ≈ 0.05. Several numerical simulations with helical and nonhelical forcing functions are considered in periodic boxes with unit aspect ratio. In order to resolve the inverse cascade range with reasonably large Reynolds number, the analysis is based on large eddy simulations which include the effect of helicity on eddy viscosity and eddy noise. Thus, we model the small scales and resolve explicitly the large scales. We show that the large-scale energy spectrum has at least two solutions: one that is consistent with Kolmogorov-Kraichnan-Batchelor-Leith phenomenology for the inverse cascade of energy in two-dimensional (2D) turbulence with a ∼k⊥^(-5/3) scaling, and the other that corresponds to a steeper ∼k⊥^(-3) spectrum in which the three-dimensional (3D) modes release a substantial fraction of their energy per unit time to the 2D modes. The spectrum that emerges depends on the anisotropy of the forcing function, the former solution prevailing for forcings in which more energy is injected into the 2D modes while the latter prevails for isotropic forcing. In the case of anisotropic forcing, whence the energy goes from the 2D to the 3D modes at low wave numbers, large-scale shear is created, resulting in a time scale τ_sh, associated with shear, thereby producing a ∼k^(-1) spectrum for the total energy with the horizontal energy of the 2D modes still following a ∼k⊥^(-5/3) scaling.

  12. Stability of large scale chromomagnetic fields in the early universe

    NASA Astrophysics Data System (ADS)

    Elmfors, Per; Persson, David

    1999-01-01

It is well known that Yang-Mills theory in vacuum has a perturbative instability to spontaneously form a large scale magnetic field (the Savvidy mechanism) and that a constant field is unstable, so that a possible ground state has to be inhomogeneous over the non-perturbative scale Λ (the Copenhagen vacuum). We argue that this spontaneous instability does not occur at high temperature when the induced field strength gB ~ Λ^2 is much weaker than the magnetic mass squared (g^2 T)^2. At high temperature, oscillations of gauge fields acquire a thermal mass M ~ gT, and we show that this mass stabilizes a magnetic field which is constant over length scales shorter than the magnetic screening length (g^2 T)^(-1). We therefore conclude that there is no indication of any spontaneous generation of weak non-abelian magnetic fields in the early universe.

  13. Large-scale velocity fields. [of solar rotation

    NASA Technical Reports Server (NTRS)

    Howard, Robert F.; Kichatinov, L. L.; Bogart, Richard S.; Ribes, Elizabeth

    1991-01-01

    The present evaluation of recent observational results bearing on the nature and characteristics of solar rotation gives attention to the status of current understanding on such large-scale velocity-field-associated phenomena as solar supergranulation, mesogranulation, and giant-scale convection. Also noted are theoretical suggestions reconciling theory and observations of giant-scale solar convection. The photosphere's global meridional circulation is suggested by solar rotation models requiring pole-to-equator flows of a few m/sec, as well as by the observed migration of magnetic activity over the solar cycle. The solar rotation exhibits a latitude and cycle dependence which can be understood in terms of a time-dependent convective toroidal roll pattern.

  14. Large Scale Parallel Structured AMR Calculations using the SAMRAI Framework

    SciTech Connect

    Wissink, A M; Hornung, R D; Kohn, S R; Smith, S S; Elliott, N

    2001-08-01

    This paper discusses the design and performance of the parallel data communication infrastructure in SAMRAI, a software framework for structured adaptive mesh refinement (SAMR) multi-physics applications. We describe requirements of such applications and how SAMRAI abstractions manage complex data communication operations found in them. Parallel performance is characterized for two adaptive problems solving hyperbolic conservation laws on up to 512 processors of the IBM ASCI Blue Pacific system. Results reveal good scaling for numerical and data communication operations but poorer scaling in adaptive meshing and communication schedule construction phases of the calculations. We analyze the costs of these different operations, addressing key concerns for scaling SAMR computations to large numbers of processors, and discuss potential changes to improve our current implementation.

  15. Efficient multiobjective optimization scheme for large scale structures

    NASA Astrophysics Data System (ADS)

    Grandhi, Ramana V.; Bharatram, Geetha; Venkayya, V. B.

    1992-09-01

This paper presents a multiobjective optimization algorithm for the efficient design of large scale structures. The algorithm is based on generalized compound scaling techniques to reach the intersection of multiple functions. Multiple objective functions are treated similarly to behavior constraints; thus, any number of objectives can be handled in the formulation. Pseudo targets on the objectives are generated at each iteration in computing the scale factors. The algorithm develops a partial Pareto set. The method is computationally efficient because it does not solve many single-objective optimization problems in reaching the Pareto set. Its computational efficiency is compared with that of other multiobjective optimization methods, such as the weighting method and the global criterion method. Truss, plate, and wing structure design cases with stress and frequency considerations are presented to demonstrate the effectiveness of the method.
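
    For contrast, the weighting method mentioned in the abstract traces the Pareto set by solving one single-objective problem per weight, which is exactly the cost the compound-scaling approach avoids. A toy sketch of that baseline follows; the objectives and the brute-force grid are invented for illustration.

```python
import numpy as np

# Toy biobjective problem: the objectives pull toward x = 1 and x = -1.
def f1(x):
    return (x - 1.0) ** 2

def f2(x):
    return (x + 1.0) ** 2

xs = np.linspace(-2.0, 2.0, 4001)  # brute-force grid, for illustration only
pareto = []
for w in np.linspace(0.0, 1.0, 11):
    # Weighting method: one full single-objective solve per weight.
    x_star = xs[np.argmin(w * f1(xs) + (1.0 - w) * f2(xs))]
    pareto.append((x_star, f1(x_star), f2(x_star)))

for x_star, a, b in pareto:
    print(f"x*={x_star:+.3f}  f1={a:.3f}  f2={b:.3f}")
```

    Eleven weights means eleven optimizations; for a large structural model each of those would be a full design cycle, which is why methods that build the Pareto set in a single pass are attractive.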

  16. An economy of scale system's mensuration of large spacecraft

    NASA Technical Reports Server (NTRS)

    Deryder, L. J.

    1981-01-01

The systems technology and cost particulars of using multipurpose platforms versus several sizes of bus-type free-flyer spacecraft to accomplish the same space experiment missions are compared. Computer models of these spacecraft bus designs were created to obtain data on size, weight, power, performance, and cost. To answer the question of whether large scale does produce economy, the dominant cost factors were determined and the programmatic effect on individual experiment costs was evaluated.

  17. Tools for Large-Scale Mobile Malware Analysis

    SciTech Connect

    Bierma, Michael

    2014-01-01

Analyzing mobile applications for malicious behavior is an important area of research, and is made difficult, in part, by the increasingly large number of applications available for the major operating systems. There are currently over 1.2 million apps available in both the Google Play and Apple App stores (the respective official marketplaces for the Android and iOS operating systems)[1, 2]. Our research provides two large-scale analysis tools to aid in the detection and analysis of mobile malware. The first tool we present, Andlantis, is a scalable dynamic analysis system capable of processing over 3000 Android applications per hour. Traditionally, Android dynamic analysis techniques have been relatively limited in scale due to the computational resources required to emulate the full Android system to achieve accurate execution. Andlantis is the most scalable Android dynamic analysis framework to date, and is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples through the system, and provide examples of malware analysis performed with the resulting data. While techniques exist to perform static analysis on a large number of applications, large-scale analysis of iOS applications has been relatively small scale due to the closed nature of the iOS ecosystem, and the difficulty of acquiring applications for analysis. The second tool we present, iClone, addresses the challenges associated with iOS research in order to detect application clones within a dataset of over 20,000 iOS applications.

  18. Concurrent Programming Using Actors: Exploiting Large-Scale Parallelism,

    DTIC Science & Technology

    1985-10-07

G. Agha et al., MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA.

  19. Energy Constraints for Building Large-Scale Systems

    DTIC Science & Technology

    2016-03-17

Neurobiological systems use a similar approach in the fact that over 90% of neurons in cortex project locally to nearby neurons (i.e., the nearest 1000 pyramidal cells). The typical use of Address-Event Representation (AER) communication schemes comes at a huge cost that makes scaling to large systems impractical.

  20. Host Immunity via Mutable Virtualized Large-Scale Network Containers

    DTIC Science & Technology

    2016-07-25

migrate to different IP addresses multiple times. We implement a virtual machine based system prototype and evaluate it using state-of-the-art scanning ... the entire IPv4 address space within 45 minutes from a single machine. Second, when ... the attacker will be trapped into one decoy instead of the real server. We implement a virtual machine (VM)-based prototype that integrates

  1. Developing and Understanding Methods for Large Scale Nonlinear Optimization

    DTIC Science & Technology

    2001-12-01

development of new algorithms for large-scale unconstrained and constrained optimization problems, including limited-memory methods for problems with ... "analysis of tensor and SQP methods for singular constrained optimization", to appear in SIAM Journal on Optimization. Published in peer-reviewed ... Mathematica, Vol. III, Journal der Deutschen Mathematiker-Vereinigung, 1998. S. Crivelli, B. Bader, R. Byrd, E. Eskow, V. Lamberti, R. Schnabel and T

  3. Wiggly cosmic strings, neutrinos and large-scale structure

    NASA Astrophysics Data System (ADS)

    Vachaspati, Tanmay

    1993-04-01

    We discuss the cosmic string scenario of large-scale structure formation in light of the result that the strings are not smooth but instead have a lot of sub-structure or wiggles on them. It appears from the results of Albrecht and Stebbins that the scenario works best if the universe is dominated by massive neutrinos or some other form of hot dark matter. Some unique features of the scenario, such as the generation of primordial magnetic fields, are also described.

  4. Analysis plan for 1985 large-scale tests. Technical report

    SciTech Connect

    McMullan, F.W.

    1983-01-01

    The purpose of this effort is to assist DNA in planning for large-scale (upwards of 5000 tons) detonations of conventional explosives in the 1985 and beyond time frame. Primary research objectives were to investigate potential means to increase blast duration and peak pressures. This report identifies and analyzes several candidate explosives. It examines several charge designs and identifies advantages and disadvantages of each. Other factors including terrain and multiburst techniques are addressed as are test site considerations.

  5. Electrochemical cells for medium- and large-scale energy storage

    SciTech Connect

    Wang, Wei; Wei, Xiaoliang; Choi, Daiwon; Lu, Xiaochuan; Yang, G.; Sun, C.

    2014-12-12

This is one of the chapters in the book titled "Advances in batteries for large- and medium-scale energy storage: Applications in power systems and electric vehicles," published by Woodhead Publishing Limited. The chapter discusses the basic electrochemical fundamentals of electrochemical energy storage devices with a focus on rechargeable batteries. Several practical secondary battery systems are also discussed as examples.

  6. Multimodel Design of Large Scale Systems with Multiple Decision Makers.

    DTIC Science & Technology

    1982-08-01

The author thanks his advisor for guidance during the course of this research, as well as Professors W. R. Perkins, P. V. Kokotovic, T. Basar, and T. N. Trick. The thesis concludes with Chapter 7, where the results obtained are summarized, the main contributions are outlined, and directions for future research are indicated.

  7. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment.
Together, these research efforts help to improve the efficiency
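
    The common objective underlying these schemes is balanced graph partitioning with minimum cut. A toy illustration of that objective follows; it is not the Multi-Personality or Information-Aware algorithm, which are heuristics for far larger, heterogeneous graphs.

```python
import itertools

# Toy undirected graph: two natural 3-node clusters joined by one edge (2, 3).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]

def cut_size(part_a, edges):
    """Number of edges crossing between part_a and its complement."""
    a = set(part_a)
    return sum((u in a) != (v in a) for u, v in edges)

# Exhaustive search over balanced bipartitions (fine at toy scale; real tools
# such as METIS use multilevel heuristics instead).
nodes = range(6)
best = min(itertools.combinations(nodes, 3), key=lambda p: cut_size(p, edges))
print(best, cut_size(best, edges))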

  8. Critical Problems in Very Large Scale Computer Systems

    DTIC Science & Technology

    1990-03-31

Semiannual Technical Report for the period beginning October 1, 1989. ... suitability for supporting popular models of parallel computation. During the reporting period they have developed an interface definition. A simulator has ... queries in computational geometry. Range queries are a fundamental problem in computational geometry with applications to computer graphics and

  9. Supporting large scale applications on networks of workstations

    NASA Technical Reports Server (NTRS)

    Cooper, Robert; Birman, Kenneth P.

    1989-01-01

    Distributed applications on networks of workstations are an increasingly common way to satisfy computing needs. However, existing mechanisms for distributed programming exhibit poor performance and reliability as application size increases. Extension of the ISIS distributed programming system to support large scale distributed applications by providing hierarchical process groups is discussed. Incorporation of hierarchy in the program structure and exploitation of this to limit the communication and storage required in any one component of the distributed system is examined.

  10. A Holistic Management Architecture for Large-Scale Adaptive Networks

    DTIC Science & Technology

    2007-09-01

Thesis by Michael R. Clement, Naval Postgraduate School, September 2007. Thesis Advisor: Dr. Alex Bordetsky.

  11. The large-scale anisotropy with the PAMELA calorimeter

    NASA Astrophysics Data System (ADS)

    Karelin, A.; Adriani, O.; Barbarino, G.; Bazilevskaya, G.; Bellotti, R.; Boezio, M.; Bogomolov, E.; Bongi, M.; Bonvicini, V.; Bottai, S.; Bruno, A.; Cafagna, F.; Campana, D.; Carbone, R.; Carlson, P.; Casolino, M.; Castellini, G.; De Donato, C.; De Santis, C.; De Simone, N.; Di Felice, V.; Formato, V.; Galper, A.; Koldashov, S.; Koldobskiy, S.; Krut'kov, S.; Kvashnin, A.; Leonov, A.; Malakhov, V.; Marcelli, L.; Martucci, M.; Mayorov, A.; Menn, W.; Mergé, M.; Mikhailov, V.; Mocchiutti, E.; Monaco, A.; Mori, N.; Munini, R.; Osteria, G.; Palma, F.; Panico, B.; Papini, P.; Pearce, M.; Picozza, P.; Ricci, M.; Ricciarini, S.; Sarkar, R.; Simon, M.; Scotti, V.; Sparvoli, R.; Spillantini, P.; Stozhkov, Y.; Vacchi, A.; Vannuccini, E.; Vasilyev, G.; Voronov, S.; Yurkin, Y.; Zampa, G.; Zampa, N.

    2015-10-01

The large-scale anisotropy (the so-called star-diurnal wave) has been studied using the calorimeter of the space-borne experiment PAMELA. The cosmic ray anisotropy has been obtained for the Southern and Northern hemispheres simultaneously in the equatorial coordinate system for the time period 2006-2014. The dipole amplitude and phase have been measured for energies of 1-20 TeV n^-1.
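
    A dipole amplitude and phase of the kind reported here are conventionally extracted from the first-harmonic Fourier coefficients of the event rate binned in right ascension. A minimal sketch on synthetic data follows; the numbers are invented, and this is not the PAMELA analysis chain.

```python
import numpy as np

# Synthetic relative intensity with a dipole: I(alpha) = 1 + A*cos(alpha - phi).
A_true, phi_true = 1e-3, np.deg2rad(120.0)
alpha = np.linspace(0.0, 2 * np.pi, 24, endpoint=False)  # 24 right-ascension bins
intensity = 1.0 + A_true * np.cos(alpha - phi_true)

# First-harmonic (dipole) amplitude and phase from the Fourier coefficients.
a1 = 2.0 * np.mean(intensity * np.cos(alpha))
b1 = 2.0 * np.mean(intensity * np.sin(alpha))
A_fit = np.hypot(a1, b1)
phi_fit = np.arctan2(b1, a1)

print(A_fit, np.rad2deg(phi_fit))  # recovers ~1e-3 and ~120 deg
```

    With equally spaced bins the discrete orthogonality of the harmonics makes the recovery exact up to floating-point error; with real counts, Poisson noise sets the uncertainty on the amplitude and phase.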

  12. Relic vector field and CMB large scale anomalies

    SciTech Connect

    Chen, Xingang; Wang, Yi E-mail: yw366@cam.ac.uk

    2014-10-01

We study the most general effects of relic vector fields on the inflationary background and density perturbations. Such effects are observable if the number of inflationary e-folds is close to the minimum requirement to solve the horizon problem. We show that this can potentially explain two CMB large scale anomalies: the quadrupole-octopole alignment and the quadrupole power suppression. We discuss its effect on the parity anomaly. We also provide an analytical template for more detailed data comparison.

  13. Large-scale controls on convective extreme precipitation

    NASA Astrophysics Data System (ADS)

    Loriaux, Jessica M.; Lenderink, Geert; Pier Siebesma, A.

    2017-04-01

The influence of large-scale conditions on extreme precipitation is not yet well understood. We present the results of Loriaux et al. (2017), in which we investigate the role of large-scale dynamics and environmental conditions on precipitation and on the precipitation response to climate change. To this end, we set up a composite LES case for convective precipitation using strong large-scale forcing based on idealized profiles for the highest 10 percentiles of peak intensities over the Netherlands, as described by Loriaux et al. (2016). In this setting, we performed sensitivity analyses for atmospheric stability, large-scale moisture convergence, and relative humidity, and compared the present-day climate to a warmer future climate. The results suggest that amplification of the moisture convergence and destabilization of the atmosphere both lead to an increase in precipitation, but through different effects: atmospheric stability mainly influences the precipitation intensity, while the moisture convergence mainly controls the precipitation area fraction. Extreme precipitation intensities show qualitatively similar sensitivities to atmospheric stability and moisture convergence. Precipitation increases with relative humidity due to an increase in area fraction, despite a decrease in intensity. The precipitation response to the climate perturbation is stronger for the precipitation intensity than for the overall precipitation, with no clear dependence on changes in atmospheric stability, moisture convergence, and relative humidity. The difference in response between the precipitation intensity and the overall precipitation is caused by a decrease in the precipitation area fraction from the present-day to the future climate. In other words, our climate perturbation indicates that with warming, it will rain more intensely but in fewer places. Loriaux, J.M., G. Lenderink, and A.P. Siebesma, 2016, doi:10.1002/2015JD024274. Loriaux, J.M., G. Lenderink, and A.P. Siebesma

  14. On a Game of Large-Scale Projects Competition

    NASA Astrophysics Data System (ADS)

    Nikonov, Oleg I.; Medvedeva, Marina A.

    2009-09-01

    The paper is devoted to game-theoretical control problems motivated by economic decision making situations arising in realization of large-scale projects, such as designing and putting into operations the new gas or oil pipelines. A non-cooperative two player game is considered with payoff functions of special type for which standard existence theorems and algorithms for searching Nash equilibrium solutions are not applicable. The paper is based on and develops the results obtained in [1]-[5].

  15. Measuring large scale space perception in literary texts

    NASA Astrophysics Data System (ADS)

    Rossi, Paolo

    2007-07-01

    A center and radius of “perception” (in the sense of environmental cognition) can be formally associated with a written text and operationally defined. Simple algorithms for their computation are presented, and indicators for anisotropy in large scale space perception are introduced. The relevance of these notions for the analysis of literary and historical records is briefly discussed and illustrated with an example taken from medieval historiography.
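
    The abstract does not reproduce the algorithms, but one natural operational definition, assuming the places a text mentions can be geocoded and weighted by mention frequency, is a weighted centroid for the center of perception and a weighted RMS distance for its radius. The following sketch is hypothetical, not the paper's actual procedure.

```python
import math

# Hypothetical input: (x, y) coordinates of places a text mentions, with counts.
mentions = [((0.0, 0.0), 5), ((3.0, 0.0), 2), ((0.0, 4.0), 1)]

total = sum(n for _, n in mentions)
# Center: mention-frequency-weighted centroid of the named places.
cx = sum(x * n for (x, _), n in mentions) / total
cy = sum(y * n for (_, y), n in mentions) / total
# Radius: weighted RMS distance of the places from that center.
radius = math.sqrt(
    sum(((x - cx) ** 2 + (y - cy) ** 2) * n for (x, y), n in mentions) / total
)
print((cx, cy), radius)
```

    Anisotropy indicators of the kind the abstract mentions could then be built by comparing such radii along different directions from the center.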

  16. Semantic Concept Discovery for Large Scale Zero Shot Event Detection

    DTIC Science & Technology

    2015-07-25

Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213-3815. Approved for public release; distribution is unlimited. Keywords: zero-shot event detection, semantic concept discovery.

  17. Large-scale Alfvén vortices

    SciTech Connect

    Onishchenko, O. G.; Horton, W.; Scullion, E.; Fedun, V.

    2015-12-15

A new type of large-scale vortex structure of dispersionless Alfvén waves in collisionless plasma is investigated. It is shown that Alfvén waves can propagate in the form of Alfvén vortices of finite characteristic radius, characterised by magnetic flux ropes carrying orbital angular momentum. The structure of the toroidal and radial velocity, the fluid and magnetic field vorticity, and the longitudinal electric current in the plane orthogonal to the external magnetic field are discussed.

  18. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    NASA Technical Reports Server (NTRS)

    Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dunner, R.; Essinger-Hileman, T.; Eimer, J.; hide

    2015-01-01

The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe approx. 70% of the sky. A variable-delay polarization modulator provides modulation of the polarization at approx. 10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that span both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  19. High Speed Networking and Large-scale Simulation in Geodynamics

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, peta-scale floating point operations (pflops) are often required to carry out a single physically meaningful numerical experiment. For example, to model convective flow in the Earth's core and generation of the geomagnetic field (geodynamo), simulation for one magnetic free-decay time (approximately 15000 years) with a modest resolution of 150 in each of the three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with, e.g., an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scale would be needed for one data assimilation analysis. Obviously, such a simulation would require enormous computing resources that exceed the capacity of any single facility currently at our disposal. One solution is to utilize a very fast network (e.g., 10 Gb/s optical networks) and available middleware (e.g., the Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results at the meeting.

  20. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    NASA Astrophysics Data System (ADS)

    Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dünner, R.; Essinger-Hileman, T.; Eimer, J.; Fluxa, P.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G.; Hubmayr, J.; Iuliano, J.; Marriage, T. A.; Miller, N.; Moseley, S. H.; Mumby, G.; Petroff, M.; Reintsema, C.; Rostem, K.; U-Yen, K.; Watts, D.; Wagner, E.; Wollack, E. J.; Xu, Z.; Zeng, L.

    2016-08-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe ~70% of the sky. A variable-delay polarization modulator provides modulation of the polarization at ~10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that spans both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  1. Domain nesting for multi-scale large eddy simulation

    NASA Astrophysics Data System (ADS)

    Fuka, Vladimir; Xie, Zheng-Tong

    2016-04-01

    The need to simulate city-scale areas (O(10 km)) with high resolution within street canyons in certain areas of interest necessitates different grid resolutions in different parts of the simulated area. General-purpose computational fluid dynamics codes typically employ unstructured refined grids, while mesoscale meteorological models more often employ nesting of computational domains. ELMM is a large eddy simulation model for the atmospheric boundary layer. It employs orthogonal uniform grids, and for this reason domain nesting was chosen as the approach for simulations at multiple scales. Domains are implemented as sets of MPI processes which communicate with each other as in a normal non-nested run, but also with processes from another (outer/inner) domain. It should be stressed that the solution of time-steps in the outer and the inner domain must be synchronized, so that the processes do not have to wait for the completion of their boundary conditions. This can be achieved by assigning an appropriate number of CPUs to each domain, thereby gaining high efficiency. When nesting is applied for large eddy simulation, the inner domain receives inflow boundary conditions which lack the turbulent motions not represented by the outer grid. ELMM remedies this by optionally adding turbulent fluctuations to the inflow using the efficient method of Xie and Castro (2008). The spatial scale of these fluctuations is in the subgrid-scale of the outer grid and their intensity is estimated from the subgrid turbulent kinetic energy in the outer grid.
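
    The CPU-assignment idea above can be sketched as a simple proportional split. This is an illustrative stand-in, not the actual ELMM scheduler, and the workload figures in the example are assumptions:

```python
def split_cpus(total_cpus, work_outer, work_inner):
    """Assign CPUs to the outer and inner nested domains in proportion to
    their per-time-step workload, so both domains finish each step at
    roughly the same time and neither idles waiting for boundary-condition
    exchanges. Illustrative sketch only, not the ELMM implementation."""
    n_outer = round(total_cpus * work_outer / (work_outer + work_inner))
    n_outer = min(max(n_outer, 1), total_cpus - 1)  # both domains get >= 1 CPU
    return n_outer, total_cpus - n_outer

# An inner domain refined in space and time might cost (assumed) ~2x the
# outer domain's work per outer time step:
print(split_cpus(96, 1.0, 2.0))  # -> (32, 64)
```

    In practice the workload ratio would be estimated from grid sizes and time-step ratios, or measured from trial runs.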

  2. High Speed Networking and Large-scale Simulation in Geodynamics

    NASA Technical Reports Server (NTRS)

    Kuang, Weijia; Gary, Patrick; Seablom, Michael; Truszkowski, Walt; Odubiyi, Jide; Jiang, Weiyuan; Liu, Dong

    2004-01-01

    Large-scale numerical simulation has been one of the most important approaches for understanding global geodynamical processes. In this approach, peta-scale floating point operations (pflops) are often required to carry out a single physically meaningful numerical experiment. For example, to model convective flow in the Earth's core and generation of the geomagnetic field (geodynamo), simulation for one magnetic free-decay time (approximately 15000 years) with a modest resolution of 150 in each of the three spatial dimensions would require approximately 0.2 pflops. If such a numerical model is used to predict geomagnetic secular variation over decades and longer, with, e.g., an ensemble Kalman filter assimilation approach, approximately 30 (and perhaps more) independent simulations of similar scale would be needed for one data assimilation analysis. Obviously, such a simulation would require enormous computing resources that exceed the capacity of any single facility currently at our disposal. One solution is to utilize a very fast network (e.g., 10 Gb/s optical networks) and available middleware (e.g., the Globus Toolkit) to allocate available but often heterogeneous resources for such large-scale computing efforts. At NASA GSFC, we are experimenting with such an approach by networking several clusters for geomagnetic data assimilation research. We shall present our initial testing results at the meeting.

  3. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    NASA Technical Reports Server (NTRS)

    Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dunner, R.; Essinger-Hileman, T.; Eimer, J.; Fluxa, P.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G.; Hubmayr, J.; Iuliano, J.; Marriage, T. A.; Miller, N.; Moseley, S. H.; Mumby, G.; Petroff, M.; Reintsema, C.; Rostem, K.; U-yen, K.; Watts, D.; Wagner, E.; Wollack, E. J.; Xu, Z.; Zeng, L.

    2015-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe approximately 70% of the sky. A variable-delay polarization modulator provides modulation of the polarization at approximately 10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that spans both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  4. Large-scale quantization from local correlations in space plasmas

    NASA Astrophysics Data System (ADS)

    Livadiotis, George; McComas, David J.

    2014-05-01

    This study examines the large-scale quantization that can characterize the phase space of certain physical systems. Plasmas are such systems where large-scale quantization, ħ*, is caused by Debye shielding that structures correlations between particles. The value of ħ* is constant—some 12 orders of magnitude larger than the Planck constant—across a wide range of space plasmas, from the solar wind in the inner heliosphere to the distant plasma in the inner heliosheath and the local interstellar medium. This paper develops the foundation and advances the understanding of the concept of plasma quantization; in particular, we (i) show the analogy of plasma to Planck quantization, (ii) show the key points of plasma quantization, (iii) construct some basic quantum mechanical concepts for the large-scale plasma quantization, (iv) investigate the correlation between plasma parameters that implies plasma quantization, when it is approximated by a relation between the magnetosonic energy and the plasma frequency, (v) analyze typical space plasmas throughout the heliosphere and show the constancy of plasma quantization over many orders of magnitude in plasma parameters, (vi) analyze Advanced Composition Explorer (ACE) solar wind measurements to develop another measurement of the value of ħ*, and (vii) apply plasma quantization to derive unknown plasma parameters when some key observable is missing.

  5. Large-scale investigation of genomic markers for severe periodontitis.

    PubMed

    Suzuki, Asami; Ji, Guijin; Numabe, Yukihiro; Ishii, Keisuke; Muramatsu, Masaaki; Kamoi, Kyuichi

    2004-09-01

    The purpose of the present study was to investigate the genomic markers for periodontitis, using large-scale single-nucleotide polymorphism (SNP) association studies comparing healthy volunteers and patients with periodontitis. Genomic DNA was obtained from 19 healthy volunteers and 22 patients with severe periodontitis, all of whom were Japanese. The subjects were genotyped at 637 SNPs in 244 genes on a large scale, using the TaqMan polymerase chain reaction (PCR) system. Statistically significant differences in allele and genotype frequencies were analyzed with Fisher's exact test. We found statistically significant differences (P < 0.01) between the healthy volunteers and patients with severe periodontitis in the following genes: gonadotropin-releasing hormone 1 (GNRH1), phosphatidylinositol 3-kinase regulatory 1 (PIK3R1), dipeptidylpeptidase 4 (DPP4), fibrinogen-like 2 (FGL2), and calcitonin receptor (CALCR). These results suggest that SNPs in the GNRH1, PIK3R1, DPP4, FGL2, and CALCR genes are genomic markers for severe periodontitis. Our findings indicate the necessity of analyzing SNPs in genes on a large scale (i.e., a genome-wide approach) to identify genomic markers for periodontitis.
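
    The allele-frequency comparison above rests on Fisher's exact test for 2×2 contingency tables. A minimal self-contained version of the two-sided test (summing the probabilities of all tables no more probable than the observed one under the hypergeometric distribution) might look like this; the allele counts in the example are hypothetical, not from the study:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]],
    e.g. counts of a risk allele vs. the other allele in patients (a, b)
    and healthy controls (c, d)."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_table(k):  # hypergeometric probability of top-left cell = k
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    # sum over all admissible tables at least as extreme as the observed one
    return sum(p_table(k) for k in range(lo, hi + 1) if p_table(k) <= p_obs + 1e-12)

# Hypothetical counts: 1 of 10 case alleles vs. 11 of 14 control alleles
print(round(fisher_exact_two_sided(1, 9, 11, 3), 5))  # -> 0.00276
```

    For the small sample sizes used here (19 vs. 22 subjects), the exact test is preferred over a chi-squared approximation.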

  6. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances.

    PubMed

    Parker, V Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the burial of cached seeds from a dispersal process into a plant fire-adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host.

  7. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
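
    The plane-pair intersections the method builds on reduce to a standard geometric computation. A minimal sketch (assuming numpy, and plane equations already recovered by some prior plane-fitting step; not the paper's LSHP algorithm) of extracting the infinite line two fitted planes share:

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Intersection line of two planes given in the form n . x = d.
    Returns a point on the line and a unit direction vector;
    assumes the planes are not (near-)parallel."""
    direction = np.cross(n1, n2)
    # Solve n1.x = d1, n2.x = d2, direction.x = 0 for a unique point.
    A = np.array([n1, n2, direction], dtype=float)
    point = np.linalg.solve(A, np.array([d1, d2, 0.0], dtype=float))
    return point, direction / np.linalg.norm(direction)
```

    In a full pipeline the segment's finite extent would then come from projecting the 3D line-support points onto this infinite line.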

  8. Large scale CMB anomalies from thawing cosmic strings

    SciTech Connect

    Ringeval, Christophe; Yamauchi, Daisuke; Yokoyama, Jun'ichi; Bouchet, François R. E-mail: yamauchi@resceu.s.u-tokyo.ac.jp E-mail: bouchet@iap.fr

    2016-02-01

    Cosmic strings formed during inflation are expected to be either diluted over super-Hubble distances, i.e., invisible today, or to have crossed our past light cone very recently. We discuss the latter situation in which a few strings imprint their signature in the Cosmic Microwave Background (CMB) anisotropies after recombination. Being almost frozen in the Hubble flow, these strings are quasi-static and evade almost all of the previously derived constraints on their tension while being able to source large scale anisotropies in the CMB sky. Using a local variance estimator on thousands of numerically simulated Nambu-Goto all-sky maps, we compute the expected signal and show that it can mimic a dipole modulation at large angular scales while being negligible at small angles. Interestingly, such a scenario generically produces one cold spot from the thawing of a cosmic string loop. Mixed with anisotropies of inflationary origin, we find that a few strings of tension Gμ = O(1) × 10^−6 match the amplitude of the dipole modulation reported in the Planck satellite measurements and could be at the origin of other large scale anomalies.

  9. An Emerging Infectious Disease Triggering Large-Scale Hyperpredation

    PubMed Central

    Moleón, Marcos; Almaraz, Pablo; Sánchez-Zapata, José A.

    2008-01-01

    Hyperpredation refers to an enhanced predation pressure on a secondary prey due to either an increase in the abundance of a predator population or a sudden drop in the abundance of the main prey. This scarcely documented mechanism has been previously studied in scenarios in which the introduction of a feral prey caused overexploitation of native prey. Here we provide evidence of a previously unreported link between Emergent Infectious Diseases (EIDs) and hyperpredation on a predator-prey community. We show how a viral outbreak caused the population collapse of a host prey at a large spatial scale, which subsequently promoted higher-than-normal predation intensity on a second prey from shared predators. Thus, the disease left a population dynamic fingerprint both in the primary host prey, through direct mortality from the disease, and indirectly in the secondary prey, through hyperpredation. This resulted in synchronized prey population dynamics at a large spatio-temporal scale. We therefore provide evidence for a novel mechanism by which EIDs can disrupt a predator-prey interaction from the individual behavior to the population dynamics. This mechanism can pose a further threat to biodiversity through the human-aided disruption of ecological interactions at large spatial and temporal scales. PMID:18523587

  10. [A large-scale accident in Alpine terrain].

    PubMed

    Wildner, M; Paal, P

    2015-02-01

    Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication to the control room, which is required to dispatch the necessary additional support. Only with a clear "concept", to which all have to adhere, can the subsequent chaos phase be limited. In this respect, time pressure, compounded by adverse weather conditions or darkness, is enormous. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, then treatment and procedure algorithms have proven successful. For evacuation of casualties, helicopter transport should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute the patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to optimization of rescue measures, regular training and exercises are rational, as is the analysis of previous large-scale Alpine accidents.

  11. Power suppression at large scales in string inflation

    SciTech Connect

    Cicoli, Michele; Downes, Sean; Dutta, Bhaskar E-mail: sddownes@physics.tamu.edu

    2013-12-01

    We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters.

  12. Large-scale columnar vortices in rotating turbulence

    NASA Astrophysics Data System (ADS)

    Yokoyama, Naoto; Takaoka, Masanori

    2016-11-01

    In rotating turbulence, flow structures are affected by the angular velocity of the system's rotation. When the angular velocity is small, a three-dimensional statistically isotropic flow, which has the Kolmogorov spectrum all over the inertial subrange, is formed. When the angular velocity increases, the flow becomes two-dimensional and anisotropic, and the energy spectrum has a power law k-2 at small wavenumbers in addition to the Kolmogorov spectrum at large wavenumbers. When the angular velocity decreases, the flow returns to the isotropic one. It is numerically found that the transition between the isotropic and anisotropic flows is hysteretic; the critical angular velocity at which the flow transitions from the anisotropic one to the isotropic one, and that of the reverse transition, are different. It is also observed that the large-scale columnar structures in the anisotropic flow depend on the external force which maintains a statistically steady state. In some cases, small-scale anticyclonic structures are aligned in a columnar structure apart from the cyclonic Taylor column. The formation mechanism of the large-scale columnar structures will be discussed. This work was partially supported by JSPS KAKENHI.

  13. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

    Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proven to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually code lengths shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost-values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve comparable performance to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
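
    The procedure above can be caricatured as: binarize each dimension, score each candidate bit with a cost function, keep the cheapest bits. The sketch below uses median thresholding and a bit-balance proxy in place of MCR's actual cost function, which is not reproduced here, so it is an illustrative stand-in only:

```python
import numpy as np

def short_codes(X, n_bits):
    """Toy stand-in for min-cost ranking: one candidate bit per feature
    dimension (median threshold), bits ranked by a simple balance proxy
    (a balanced bit carries more information), and the top n_bits kept."""
    bits = (X > np.median(X, axis=0)).astype(np.uint8)
    cost = np.abs(bits.mean(axis=0) - 0.5)   # proxy cost; lower is better
    keep = np.argsort(cost)[:n_bits]
    return bits[:, keep], keep

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))               # 200 items, 64-dim features
codes, kept = short_codes(X, 32)             # 32-bit codes per item
```

    Retrieval would then compare codes by Hamming distance, e.g. `np.count_nonzero(codes[i] != codes[j])`.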

  14. Origin of the large scale structures of the universe

    SciTech Connect

    Oaknin, David H.

    2004-11-15

    We revisit the statistical properties of the primordial cosmological density anisotropies that, at the time of matter-radiation equality, seeded the gravitational development of large scale structures in the otherwise homogeneous and isotropic Friedmann-Robertson-Walker flat universe. Our analysis shows that random fluctuations of the density field at the same instant of equality and with comoving wavelength shorter than the causal horizon at that time can naturally account, when globally constrained to conserve the total mass (energy) of the system, for the observed scale invariance of the anisotropies over cosmologically large comoving volumes. Statistical systems with similar features are generically known as glasslike or latticelike. Obviously, these conclusions conflict with the widely accepted understanding of the primordial structures reported in the literature, which requires an epoch of inflationary cosmology to precede the standard expansion of the universe. The origin of the conflict must be found in the widespread, but unjustified, claim that scale invariant mass (energy) anisotropies at the instant of equality over comoving volumes of cosmological size, larger than the causal horizon at the time, must be generated by fluctuations in the density field with comparably large comoving wavelength.

  15. Reliability assessment for components of large scale photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ahadi, Amir; Ghadimi, Noradin; Mirabbasi, Davar

    2014-10-01

    Photovoltaic (PV) systems have significantly shifted from independent power generation systems to large-scale grid-connected generation systems in recent years. The power output of PV systems is affected by the reliability of the various components in the system. This study proposes an analytical approach to evaluate the reliability of large-scale, grid-connected PV systems. The fault tree method with an exponential probability distribution function is used to analyze the components of large-scale PV systems. The system is considered in its various sequential and parallel fault combinations in order to find all realistic ways in which the top or undesired events can occur. Additionally, it can identify areas on which planned maintenance should focus. By monitoring the critical components of a PV system, it is possible not only to improve the reliability of the system, but also to optimize the maintenance costs. The latter is achieved by informing the operators about the status of the system components. This approach can be used to ensure secure operation of the system through its flexibility in monitoring system applications. The implementation demonstrates that the proposed method is effective and efficient and can conveniently incorporate more system maintenance plans and diagnostic strategies.
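
    The series/parallel fault combinations with an exponential failure law compose in the usual way. A minimal sketch with hypothetical failure rates (the rates and system layout are assumptions, not values from the study):

```python
from math import exp, prod

def reliability(failure_rate_per_hour, hours):
    """Exponential failure law: R(t) = exp(-lambda * t)."""
    return exp(-failure_rate_per_hour * hours)

def series(rs):
    """Series combination: the subsystem works only if every component works."""
    return prod(rs)

def parallel(rs):
    """Parallel (redundant) combination: works if at least one component works."""
    return 1.0 - prod(1.0 - r for r in rs)

# Hypothetical one-year reliability of a toy PV block: two module strings
# in parallel, each string a series chain of 10 modules, feeding one inverter.
t = 8760.0                                           # hours in one year
r_module = reliability(1e-7, t)
r_inverter = reliability(1e-5, t)
string = series([r_module] * 10)
system = series([parallel([string, string]), r_inverter])
print(system)
```

    The same building blocks can be nested to mirror an arbitrary fault tree, with AND gates mapping to `parallel` and OR gates to `series` at the reliability level.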

  16. Large-scale impact cratering on the terrestrial planets

    SciTech Connect

    Grieve, R.A.F.

    1982-01-01

    The crater densities on the earth and moon form the basis for a standard flux-time curve that can be used in dating unsampled planetary surfaces and constraining the temporal history of endogenic geologic processes. Abundant evidence is seen not only that impact cratering was an important surface process in planetary history but also that large impact events produced effects that were crustal in scale. By way of example, it is noted that the formation of multiring basins on the early moon was as important in defining the planetary tectonic framework as plate tectonics is on the earth. Evidence from several planets suggests that the effects of very-large-scale impacts go beyond the simple formation of an impact structure and serve to localize increased endogenic activity over an extended period of geologic time. Even though no longer occurring with the frequency and magnitude of early solar system history, it is noted that large scale impact events continue to affect the local geology of the planets. 92 references.

  17. Equivalent common path method in large-scale laser comparator

    NASA Astrophysics Data System (ADS)

    He, Mingzhao; Li, Jianshuang; Miao, Dongjing

    2015-02-01

    The large-scale laser comparator is the main standard device providing accurate, reliable and traceable measurements for high-precision large-scale line and 3D measurement instruments. It is mainly composed of a guide rail, a motion control system, an environmental parameter monitoring system and a displacement measurement system. In the laser comparator, the main error sources are the temperature distribution, the straightness of the guide rail, and the pitch and yaw of the measuring carriage. To minimize the measurement uncertainty, an equivalent common optical path scheme is proposed and implemented. Three laser interferometers are adjusted to be parallel with the guide rail. The displacement along an arbitrary virtual optical path is calculated from the three measured displacements without knowledge of the carriage orientations at the start and end positions. The orientation of the air-floating carriage is calculated from the displacements of the three optical paths and the positions of the three retroreflectors, which are precisely measured by a laser tracker. A fourth laser interferometer is used in the virtual optical path as a reference to verify this compensation method. This paper analyzes the effect of rail straightness on the displacement measurement. The proposed method, through experimental verification, can improve the measurement uncertainty of the large-scale laser comparator.
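
    For a rigid carriage with small pitch and yaw, the displacement measured at any transverse position is affine in that position, so three parallel interferometers suffice to interpolate a "virtual" optical path. A minimal sketch of that interpolation (illustrative, with assumed coordinates; not the published algorithm):

```python
import numpy as np

def virtual_displacement(retro_xy, displacements, virtual_xy):
    """Fit d(x, y) = a*x + b*y + c through the three displacements
    measured at the retroreflector transverse positions (x, y), then
    evaluate it at an arbitrary virtual transverse position. Valid for
    a rigid carriage with small pitch/yaw angles."""
    A = np.column_stack([retro_xy, np.ones(3)])
    a, b, c = np.linalg.solve(A, displacements)
    return a * virtual_xy[0] + b * virtual_xy[1] + c

# Pure translation: all three interferometers read the same displacement,
# so any virtual path must read it too.
pts = np.array([[0.0, 0.0], [0.2, 0.0], [0.0, 0.2]])   # assumed layout, metres
print(round(float(virtual_displacement(pts, np.array([1.5, 1.5, 1.5]),
                                       (0.07, 0.11))), 9))  # -> 1.5
```

    The same fitted coefficients a and b directly give the small pitch and yaw angles of the carriage, which is how the fourth interferometer can cross-check the compensation.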

  18. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances

    PubMed Central

    Parker, V. Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances also should play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and requires fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a sufficient depth to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the burial of cached seeds from a dispersal process into a plant fire-adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host. PMID:26151560

  19. Large-scale flow generation by inhomogeneous helicity.

    PubMed

    Yokoi, N; Brandenburg, A

    2016-03-01

    The effect of kinetic helicity (velocity-vorticity correlation) on turbulent momentum transport is investigated. The turbulent kinetic helicity (pseudoscalar) enters the Reynolds stress (mirror-symmetric tensor) expression in the form of a helicity gradient as the coupling coefficient for the mean vorticity and/or the angular velocity (axial vector), which suggests the possibility of mean-flow generation in the presence of inhomogeneous helicity. This inhomogeneous helicity effect, which was previously confirmed at the level of a turbulence- or closure-model simulation, is examined with the aid of direct numerical simulations of rotating turbulence with nonuniform helicity sustained by an external forcing. The numerical simulations show that the spatial distribution of the Reynolds stress is in agreement with the helicity-related term coupled with the angular velocity, and that a large-scale flow is generated in the direction of angular velocity. Such a large-scale flow is not induced in the case of homogeneous turbulent helicity. This result confirms the validity of the inhomogeneous helicity effect in large-scale flow generation and suggests that a vortex dynamo is possible even in incompressible turbulence where there is no baroclinicity effect.

  20. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, Christopher P.; Olden, Julian D.; Lytle, David A.; Melis, Theodore S.; Schmidt, John C.; Bray, Erin N.; Freeman, Mary C.; Gido, Keith B.; Hemphill, Nina P.; Kennard, Mark J.; McMullen, Laura E.; Mims, Meryl C.; Pyron, Mark; Robinson, Christopher T.; Williams, John G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems.

  1. Large scale stochastic spatio-temporal modelling with PCRaster

    NASA Astrophysics Data System (ADS)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  2. A new large-scale manufacturing platform for complex biopharmaceuticals.

    PubMed

    Vogel, Jens H; Nguyen, Huong; Giovannini, Roberto; Ignowski, Jolene; Garger, Steve; Salgotra, Anil; Tom, Jennifer

    2012-12-01

    Complex biopharmaceuticals, such as recombinant blood coagulation factors, are addressing critical medical needs and represent a growing multibillion-dollar market. For commercial manufacturing of such, sometimes inherently unstable, molecules it is important to minimize product residence time in non-ideal milieu in order to obtain acceptable yields and consistently high product quality. Continuous perfusion cell culture allows minimization of residence time in the bioreactor, but also brings unique challenges in product recovery, which requires innovative solutions. In order to maximize yield, process efficiency, facility and equipment utilization, we have developed, scaled-up and successfully implemented a new integrated manufacturing platform in commercial scale. This platform consists of a (semi-)continuous cell separation process based on a disposable flow path and integrated with the upstream perfusion operation, followed by membrane chromatography on large-scale adsorber capsules in rapid cycling mode. Implementation of the platform at commercial scale for a new product candidate led to a yield improvement of 40% compared to the conventional process technology, while product quality has been shown to be more consistently high. Over 1,000,000 L of cell culture harvest have been processed with 100% success rate to date, demonstrating the robustness of the new platform process in GMP manufacturing. While membrane chromatography is well established for polishing in flow-through mode, this is its first commercial-scale application for bind/elute chromatography in the biopharmaceutical industry and demonstrates its potential in particular for manufacturing of potent, low-dose biopharmaceuticals.

  3. Performance Assessment of a Large Scale Pulsejet- Driven Ejector System

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.; Litke, Paul J.; Schauer, Frederick R.; Bradley, Royce P.; Hoke, John L.

    2006-01-01

    Unsteady thrust augmentation was measured on a large scale driver/ejector system. A 72 in. long, 6.5 in. diameter, 100 lb(sub f) pulsejet was tested with a series of straight, cylindrical ejectors of varying length and diameter. A tapered ejector configuration of varying length was also tested. The objectives of the testing were to determine the ejector dimensions that maximize thrust augmentation, and to compare the dimensions and augmentation levels so obtained with those of other, similarly maximized, but smaller scale systems on which much of the recent unsteady ejector thrust augmentation research has been performed. An augmentation level of 1.71 was achieved with the cylindrical ejector configuration and 1.81 with the tapered ejector configuration. These levels are consistent with, but slightly lower than, the highest levels achieved with the smaller systems. The ejector diameter yielding maximum augmentation was 2.46 times the diameter of the pulsejet. This ratio closely matches those of the small scale experiments. For the straight ejector, the length yielding maximum augmentation was 10 times the diameter of the pulsejet. This was also nearly the same as in the small scale experiments. Testing procedures are described, as are the parametric variations in ejector geometry. Results are discussed in terms of their implications for general scaling of pulsed thrust ejector systems.
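
The reported optimal geometry follows from simple arithmetic; the quick check below (ours, not from the report) scales the two ratios from the 6.5 in. diameter pulsejet driver.

```python
# Quick arithmetic check (ours, not from the report) of the reported
# optimal ejector geometry, scaled from the 6.5 in. pulsejet driver.
pulsejet_diameter = 6.5                      # inches
optimal_diameter = 2.46 * pulsejet_diameter  # diameter ratio from the tests
optimal_length = 10 * pulsejet_diameter      # straight-ejector length ratio
print(optimal_diameter, optimal_length)      # ~16.0 in. diameter, 65.0 in. long
```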

  4. Foundational perspectives on causality in large-scale brain networks.

    PubMed

    Mannino, Michael; Bressler, Steven L

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  5. Robust large-scale parallel nonlinear solvers for simulations.

    SciTech Connect

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write
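
Broyden's method as described, with the Jacobian replaced by a secant approximation updated from successive residuals, can be sketched as follows. This toy 2-D example is ours, not the large-scale Sandia implementation; the initial approximation is seeded with a one-time finite-difference Jacobian.

```python
import numpy as np

# Minimal sketch of (good) Broyden's method: the Jacobian is replaced by a
# secant approximation B updated from successive residuals, so codes that
# cannot evaluate a Jacobian can still converge. Toy example, not the
# large-scale implementation discussed in the report.
def fd_jacobian(F, x, eps=1e-7):
    """One-time finite-difference Jacobian used only to seed B."""
    n = len(x)
    J = np.empty((n, n))
    Fx = F(x)
    for j in range(n):
        e = np.zeros(n)
        e[j] = eps
        J[:, j] = (F(x + e) - Fx) / eps
    return J

def broyden(F, x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)              # initial Jacobian approximation
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        dx = np.linalg.solve(B, -Fx)   # quasi-Newton step
        x = x + dx
        Fx_new = F(x)
        # Rank-one secant update: afterwards B dx matches the observed
        # change in the residual (Broyden's "good" update).
        B += np.outer(Fx_new - Fx - B @ dx, dx) / dx.dot(dx)
        Fx = Fx_new
    return x

# Example: intersection of a circle with the line x0 = x1.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
root = broyden(F, [1.0, 0.5])
print(root)  # both components ~ sqrt(2)
```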

  6. Foundational perspectives on causality in large-scale brain networks

    NASA Astrophysics Data System (ADS)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  7. The Impact of Large Scale Environments on Cluster Entropy Profiles

    NASA Astrophysics Data System (ADS)

    Trierweiler, Isabella; Su, Yuanyuan

    2017-01-01

    We perform a systematic analysis of 21 clusters imaged by the Suzaku satellite to determine the relation between the richness of cluster environments and entropy at large radii. Entropy profiles for clusters are expected to follow a power-law, but Suzaku observations show that the entropy profiles of many clusters are significantly flattened beyond 0.3 Rvir. While the entropy at the outskirts of clusters is thought to be highly dependent on the large scale cluster environment, the exact nature of the environment/entropy relation is unclear. Using the Sloan Digital Sky Survey and 6dF Galaxy Survey, we study the 20 Mpc large scale environment for all clusters in our sample. We find no strong relation between the entropy deviations at the virial radius and the total luminosity of the cluster surroundings, indicating that accretion and mergers have a more complex and indirect influence on the properties of the gas at large radii. We see a possible anti-correlation between virial temperature and richness of the cluster environment and find that density excess appears to play a larger role in the entropy flattening than temperature, suggesting that clumps of gas can lower entropy.

  8. Scaling statistical multiple sequence alignment to large datasets.

    PubMed

    Nute, Michael; Warnow, Tandy

    2016-11-11

    Multiple sequence alignment is an important task in bioinformatics, and alignments of large datasets containing hundreds or thousands of sequences are increasingly of interest. While many alignment methods exist, the most accurate alignments are likely to be based on stochastic models where sequences evolve down a tree with substitutions, insertions, and deletions. While some methods have been developed to estimate alignments under these stochastic models, only the Bayesian method BAli-Phy has been able to run on even moderately large datasets, containing 100 or so sequences. A technique to extend BAli-Phy to enable alignments of thousands of sequences could potentially improve alignment and phylogenetic tree accuracy on large-scale data beyond the best-known methods today. We use simulated data with up to 10,000 sequences representing a variety of model conditions, including some that are significantly divergent from the statistical models used in BAli-Phy and elsewhere. We give a method for incorporating BAli-Phy into PASTA and UPP, two strategies for enabling alignment methods to scale to large datasets, and give alignment and tree accuracy results measured against the ground truth from simulations. Comparable results are also given for other methods capable of aligning this many sequences. Extensions of BAli-Phy using PASTA and UPP produce significantly more accurate alignments and phylogenetic trees than the current leading methods.

  9. Some ecological guidelines for large-scale biomass plantations

    SciTech Connect

    Hoffman, W.; Cook, J.H.; Beyea, J.

    1993-12-31

    The National Audubon Society sees biomass as an appropriate and necessary source of energy to help replace fossil fuels in the near future, but is concerned that large-scale biomass plantations could displace significant natural vegetation and wildlife habitat, and reduce national and global biodiversity. We support the development of an industry large enough to provide significant portions of our energy budget, but we see a critical need to ensure that plantations are designed and sited in ways that minimize ecological disruption, or even provide environmental benefits. We have been studying the habitat value of intensively managed short-rotation tree plantations. Our results show that these plantations support large populations of some birds, but not all of the species using the surrounding landscape, and indicate that their value as habitat can be increased greatly by including small areas of mature trees within them. We believe short-rotation plantations can benefit regional biodiversity if they can be deployed as buffers for natural forests, or as corridors connecting forest tracts. To realize these benefits, and to avoid habitat degradation, regional biomass plantation complexes (e.g., the plantations supplying all the fuel for a powerplant) need to be planned, sited, and developed as large-scale units in the context of the regional landscape mosaic.

  10. Turbulence and entrainment length scales in large wind farms

    NASA Astrophysics Data System (ADS)

    Andersen, Søren J.; Sørensen, Jens N.; Mikkelsen, Robert F.

    2017-03-01

    A number of large wind farms are modelled using large eddy simulations to elucidate the entrainment process. A reference simulation without turbines and three farm simulations with different degrees of imposed atmospheric turbulence are presented. The entrainment process is assessed using proper orthogonal decomposition, which is employed to detect the largest and most energetic coherent turbulent structures. The dominant length scales responsible for the entrainment process are shown to grow further into the wind farm, but to be limited in extent by the streamwise turbine spacing, which could be taken into account when developing farm layouts. The self-organized motion or large coherent structures also yield high correlations between the power productions of consecutive turbines, which can be exploited through dynamic farm control. This article is part of the themed issue 'Wind energy in complex terrains'.

  11. Turbulence and entrainment length scales in large wind farms.

    PubMed

    Andersen, Søren J; Sørensen, Jens N; Mikkelsen, Robert F

    2017-04-13

    A number of large wind farms are modelled using large eddy simulations to elucidate the entrainment process. A reference simulation without turbines and three farm simulations with different degrees of imposed atmospheric turbulence are presented. The entrainment process is assessed using proper orthogonal decomposition, which is employed to detect the largest and most energetic coherent turbulent structures. The dominant length scales responsible for the entrainment process are shown to grow further into the wind farm, but to be limited in extent by the streamwise turbine spacing, which could be taken into account when developing farm layouts. The self-organized motion or large coherent structures also yield high correlations between the power productions of consecutive turbines, which can be exploited through dynamic farm control. This article is part of the themed issue 'Wind energy in complex terrains'.
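
Proper orthogonal decomposition, as used in this record to rank coherent structures by energy, is commonly computed from the SVD of a snapshot matrix. The sketch below is a minimal illustration on synthetic data, assuming nothing about the authors' LES fields.

```python
import numpy as np

# Hedged sketch (not the authors' LES post-processing): snapshot POD via
# the SVD. Columns of X are flow snapshots; the left singular vectors are
# the POD modes and the squared singular values rank them by energy --
# the criterion used to detect the most energetic coherent structures.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 128)
t = np.linspace(0.0, 10.0, 200)
# Synthetic "flow": a large-scale and a small-scale oscillating mode + noise
X = (3.0 * np.outer(np.sin(x), np.cos(2.0 * t))
     + 1.0 * np.outer(np.sin(4.0 * x), np.cos(7.0 * t))
     + 0.01 * rng.standard_normal((128, 200)))
X -= X.mean(axis=1, keepdims=True)      # subtract the temporal mean
modes, s, _ = np.linalg.svd(X, full_matrices=False)
energy = s**2 / np.sum(s**2)            # energy fraction per POD mode
print(energy[:2])  # the large-scale mode dominates the energy ranking
```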

  12. Nuclear-pumped lasers for large-scale applications

    SciTech Connect

    Anderson, R.E.; Leonard, E.M.; Shea, R.E.; Berggren, R.R.

    1988-01-01

    Efficient initiation of large-volume chemical lasers may be achieved by neutron induced reactions which produce charged particles in the final state. When a burst mode nuclear reactor is used as the neutron source, both a sufficiently intense neutron flux and a sufficiently short initiation pulse may be possible. Proof-of-principle experiments are planned to demonstrate lasing in a direct nuclear-pumped large-volume system; to study the effects of various neutron absorbing materials on laser performance; to study the effects of long initiation pulse lengths; to determine the performance of large-scale optics and the beam quality that may be obtained; and to assess the performance of alternative designs of burst systems that increase the neutron output and burst repetition rate. 21 refs., 7 figs., 5 tabs.

  13. Self-* and Adaptive Mechanisms for Large Scale Distributed Systems

    NASA Astrophysics Data System (ADS)

    Fragopoulou, P.; Mastroianni, C.; Montero, R.; Andrjezak, A.; Kondo, D.

    Large-scale distributed computing systems and infrastructure, such as Grids, P2P systems and desktop Grid platforms, are decentralized, pervasive, and composed of a large number of autonomous entities. The complexity of these systems is such that human administration is nearly impossible and centralized or hierarchical control is highly inefficient. These systems need to run on highly dynamic environments, where content, network topologies and workloads are continuously changing. Moreover, they are characterized by the high degree of volatility of their components and the need to provide efficient service management and to handle efficiently large amounts of data. This paper describes some of the areas for which adaptation emerges as a key feature, namely, the management of computational Grids, the self-management of desktop Grid platforms and the monitoring and healing of complex applications. It also elaborates on the use of bio-inspired algorithms to achieve self-management. Related future trends and challenges are described.

  14. Development of a single-phase harmonic power flow program to study the 20 kHz AC power system for large spacecraft

    NASA Technical Reports Server (NTRS)

    Kraft, L. Alan; Kankam, M. David

    1991-01-01

    The development of software to aid in the design and analysis of AC power systems for large spacecraft is described. The algorithm is an extension of the harmonic power flow program HARMFLO, used for the study of AC power quality. The new program is applicable to three-phase systems typified by terrestrial power systems, and single-phase systems characteristic of space power systems. The modified HARMFLO accommodates system operating frequencies ranging from the terrestrial 60 Hz to the aerospace 20 kHz and beyond, and can handle both source and load-end harmonic distortions. Comparison of simulation and test results of a representative spacecraft power system shows a satisfactory correlation. Recommendations are made for the direction of future improvements to the software, to enhance its usefulness to power system designers and analysts.

  15. LARGE-SCALE CO2 TRANSPORTATION AND DEEP OCEAN SEQUESTRATION

    SciTech Connect

    Hamid Sarv

    1999-03-01

    The technical and economic feasibility of large-scale CO{sub 2} transportation and ocean sequestration at depths of 3000 meters or greater was investigated. Two options were examined for transporting and disposing the captured CO{sub 2}. In one case, CO{sub 2} was pumped from a land-based collection center through long pipelines laid on the ocean floor. Another case considered oceanic tanker transport of liquid carbon dioxide to an offshore floating structure for vertical injection to the ocean floor. In the latter case, a novel concept based on subsurface towing of a 3000-meter pipe and attaching it to the offshore structure was considered. Budgetary cost estimates indicate that for distances greater than 400 km, tanker transportation and offshore injection through a 3000-meter vertical pipe provides the best method for delivering liquid CO{sub 2} to deep ocean floor depressions. For shorter distances, CO{sub 2} delivery by parallel-laid, subsea pipelines is more cost-effective. Estimated costs for 500-km transport and storage at a depth of 3000 meters by subsea pipelines and tankers were 1.5 and 1.4 dollars per ton of stored CO{sub 2}, respectively. At these prices, economics of ocean disposal are highly favorable. Future work should focus on addressing technical issues that are critical to the deployment of a large-scale CO{sub 2} transportation and disposal system. Pipe corrosion, structural design of the transport pipe, and dispersion characteristics of sinking CO{sub 2} effluent plumes have been identified as areas that require further attention. Our planned activities in the next Phase include laboratory-scale corrosion testing, structural analysis of the pipeline, analytical and experimental simulations of CO{sub 2} discharge and dispersion, and the conceptual economic and engineering evaluation of large-scale implementation.

  16. Large- and Small-Scale Ring Current Electrodynamic Coupling

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.

    2003-01-01

    In this talk we will address the two primary issues of ring current (RC) electrodynamic coupling: 1. RC self-consistent magnetosphere-ionosphere coupling that includes calculation of the magnetospheric electric field (large scale electrodynamic coupling); and 2. RC self-consistent coupling with electromagnetic ion cyclotron (EMIC) waves (small scale electrodynamic coupling). Our study will be based on two RC models that we have recently developed in our group. The first model by Khazanov et al. [2002] couples the system of two kinetic equations: one equation which describes the RC ion dynamics and another equation which describes the energy density evolution of EMIC waves. The second model by Khazanov et al. [2003] deals with large scale electrodynamic coupling processes and provides a self-consistent simulation of RC ions and the magnetospheric electric field. There is presently no model that addresses both of these issues simultaneously in a self-consistent calculation. However, the need exists for such a model, because these two processes directly influence each other, with the mesoscale coupling changing the drift paths of the thermal and energetic particle populations in the inner magnetosphere, thereby changing the wave interactions, and the microscale coupling altering the pitch angle distributions and ionospheric conductivities (through increased precipitation), thus changing the field-aligned currents and electric potential structure. The initial thrust of the work will be the development of a combined kinetic model of micro- and meso-scale RC electrodynamic coupling processes and to examine their interactions with each other on a global scale.

  17. Large and small color differences: predicting them from hue scaling

    NASA Astrophysics Data System (ADS)

    Chan, Hoover; Abramov, Israel; Gordon, James

    1991-06-01

    Color appearance can be specified by a procedure of direct hue scaling. In this procedure, subjects look at a stimulus and then simply state the proportions of their sensations using the four unique hue names red, yellow, green, and blue; for completeness, they also state the apparent saturation. Observers can scale stimuli quickly and reliably, and this is true even if they are relatively inexperienced. Thus stimuli can be rescaled whenever viewing conditions change such that a new specification of appearance is required. The scaled sensory values elicited by a set of stimuli are used to derive the locations of the stimuli on a color diagram that is based on appearance and which we term a Uniform Appearance Diagram (UAD). The orthogonal axes of this space are red-green and yellow-blue; the location of a stimulus specifies its hue and its distance from the origin specifies its apparent saturation. We have investigated the uniformity of this space by using a subject's UAD, for a particular set of viewing conditions, to predict both small and large color differences under comparable viewing conditions. For small-scale differences we compared wavelength discrimination functions derived from UADs with those obtained by direct adjustment of a bipartite field. For large-scale differences, subjects rated the degree of similarity of pairs of different wavelengths; these ratings were compared with the distances separating the same pairs of wavelengths on a UAD. In both cases, the agreements were very good, implying that UADs are metrically uniform. Thus, UADs could be used to adjust the hues in a pseudo-color display so that all transitions would be equally perceptible or would differ by specified amounts.
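
The mapping from hue-scaling reports to UAD coordinates can be sketched as follows; the normalization and axis conventions here are our assumptions for illustration, not the authors' exact procedure.

```python
import math

# Illustrative sketch of a UAD-style construction (our assumptions, not
# the authors' exact procedure): a stimulus is placed by its red-green and
# yellow-blue hue proportions, with distance from the origin encoding
# apparent saturation.
def uad_coords(red, yellow, green, blue, saturation):
    total = red + yellow + green + blue
    rg = (red - green) / total     # red-green axis, +1 for pure red
    yb = (yellow - blue) / total   # yellow-blue axis, +1 for pure yellow
    norm = math.hypot(rg, yb) or 1.0
    return (saturation * rg / norm, saturation * yb / norm)

def color_difference(p, q):
    """Euclidean distance between two stimuli on the diagram."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

orange = uad_coords(50, 50, 0, 0, 0.8)  # reported half red, half yellow
cyan = uad_coords(0, 0, 50, 50, 0.8)    # the opponent hue pair
print(color_difference(orange, cyan))   # opposite hues: 2 x saturation = 1.6
```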

  18. Optimally amplified large-scale streaks and drag reduction in turbulent pipe flow.

    PubMed

    Willis, Ashley P; Hwang, Yongyun; Cossu, Carlo

    2010-09-01

    The optimal amplifications of small coherent perturbations within turbulent pipe flow are computed for Reynolds numbers up to one million. Three standard frameworks are considered: the optimal growth of an initial condition, the response to harmonic forcing and the Karhunen-Loève (proper orthogonal decomposition) analysis of the response to stochastic forcing. Similar to analyses of the turbulent plane channel flow and boundary layer, it is found that streaks elongated in the streamwise direction can be greatly amplified from quasistreamwise vortices, despite linear stability of the mean flow profile. The most responsive perturbations are streamwise uniform and, for sufficiently large Reynolds number, the most responsive azimuthal mode is of wave number m=1. The response of this mode increases with the Reynolds number. A secondary peak, where m corresponds to azimuthal wavelengths λ_{θ}^{+}≈70-90 in wall units, also exists in the amplification of initial conditions and in premultiplied response curves for the forced problems. Direct numerical simulations at Re=5300 confirm that the forcing of m=1, 2 and m=4 optimal structures results in the large response of coherent large-scale streaks. For moderate amplitudes of the forcing, low-speed streaks become narrower and more energetic, whereas high-speed streaks become more spread. It is further shown that drag reduction can be achieved by forcing steady large-scale structures, as anticipated from earlier investigations. Here the energy balance is calculated. At Re=5300 it is shown that, due to the small power required by the forcing of optimal structures, a net power saving of the order of 10% can be achieved following this approach, which could be relevant for practical applications.
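
The response-to-harmonic-forcing framework amounts to computing the largest singular value of the resolvent of the linearized dynamics at each forcing frequency. The sketch below uses a toy non-normal 2x2 operator of our choosing (not the pipe-flow operator) to show how stable eigenvalues can coexist with large amplification.

```python
import numpy as np

# Hedged sketch of the "response to harmonic forcing" framework: for
# linearized dynamics dq/dt = A q + f exp(i w t), the optimal gain at
# frequency w is the largest singular value of the resolvent
# (i w I - A)^{-1}. The 2x2 non-normal A below is a toy stand-in for the
# pipe-flow operator; it exhibits large amplification despite stable
# eigenvalues, the hallmark of streak growth from streamwise vortices.
A = np.array([[-0.01, 1.0],
              [0.0, -0.02]])   # linearly stable but strongly non-normal
omegas = np.linspace(0.0, 0.5, 51)
gains = [np.linalg.svd(np.linalg.inv(1j * w * np.eye(2) - A),
                       compute_uv=False)[0]
         for w in omegas]
print(max(gains))  # amplification in the thousands although Re(eig) < 0
```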

  19. Large- and Very-Large-Scale Motions in Katabatic Flows Over Steep Slopes

    NASA Astrophysics Data System (ADS)

    Giometto, M. G.; Fang, J.; Salesky, S.; Parlange, M. B.

    2016-12-01

    Evidence of large- and very-large-scale motions populating the boundary layer in katabatic flows over steep slopes is presented via direct numerical simulations (DNSs). DNSs are performed at a modified Reynolds number (Rem = 967), considering four sloping angles (α = 60°, 70°, 80° and 90°). Large coherent structures prove to be strongly dependent on the inclination of the underlying surface. Spectra and co-spectra consistently show signatures of large-scale motions (LSMs), with streamwise extension on the order of the boundary layer thickness. A second low-wavenumber mode characterizes pre-multiplied spectra and co-spectra when the slope angle is below 70°, indicative of very-large-scale motions (VLSMs). In addition, conditional sampling and averaging shows how LSMs and VLSMs are induced by counter-rotating roll modes, in agreement with findings from canonical wall-bounded flows. VLSMs contribute to the streamwise velocity variance and shear stress in the above-jet regions up to 30% and 45%, respectively, whereas both LSMs and VLSMs are inactive in the near-wall regions.

  20. Small-Scale Variability of Large Cloud Drops

    NASA Technical Reports Server (NTRS)

    Marshak, Alexander; Knyazikhin, Y.; Wiscombe, Warren

    2004-01-01

    Cloud droplet size distribution is one of the most fundamental subjects in cloud physics. Understanding of spatial distribution and small-scale fluctuations of cloud droplets is essential for both cloud physics and atmospheric radiation. For cloud physics, it relates to the coalescence growth of raindrops while for radiation, it has a strong impact on a cloud's radiative properties. Most of the existing cloud radiation and precipitation formation models assume that the mean number of drops with a given radius varies proportionally to volume. The analysis of microphysical data on liquid water drop sizes shows that, for sufficiently small volumes, the number is proportional to a drop-size-dependent power of the volume. For the abundant small drops, the exponent is 1, as assumed in the conventional approach. However, for rarer large drops, the exponents fall below unity. At small scales, therefore, the mean number of large drops decreases with volume at a slower rate than the conventional approach assumes, suggesting more large drops at these scales than conventional models account for; their impact is consequently underestimated. Size dependent models of spatial distribution of cloud drops that simulate the observed power laws show strong drop clustering, the more so the larger the drops. The degree of clustering is determined by the observed exponents. The strong clustering of large drops arises naturally from the observed power-law statistics. Current theories of photon-cloud interaction and warm rain formation will need radical revision in order to produce these statistics; their underlying equations are unable to yield the observed power law.
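
The sub-unity power-law scaling described above can be illustrated with a 1-D toy model; the Cantor-set arrangement below is our stand-in for clustered drop positions, not the paper's data, and its exponent log 2 / log 3 ≈ 0.63 contrasts with the exponent 1 of unclustered (Poisson) positions.

```python
import numpy as np

# Toy illustration (ours, not the paper's data) of the reported scaling:
# for clustered drop positions the mean number of neighbouring drops in a
# window of size r grows as r^a with a < 1, while unclustered (Poisson)
# positions give a = 1. A 1-D Cantor-set arrangement stands in for the
# observed clustering; its exponent is log 2 / log 3 ~ 0.63.
rng = np.random.default_rng(1)

depth = 9
digits = ((np.arange(2**depth)[:, None] >> np.arange(depth)) & 1) * 2
cantor = (digits / 3.0 ** np.arange(1, depth + 1)).sum(axis=1)
poisson = rng.uniform(0.0, 1.0, cantor.size)   # unclustered reference

def exponent(points, sizes):
    """Log-log slope of the mean neighbour count (within r/2) vs r."""
    counts = [np.mean([np.sum(np.abs(points - p) < r / 2) - 1 for p in points])
              for r in sizes]
    return np.polyfit(np.log(sizes), np.log(counts), 1)[0]

sizes = 3.0 ** -np.arange(2, 6)   # window sizes 1/9 down to 1/243
print(exponent(cantor, sizes))    # ~0.63: clustered, sub-linear scaling
print(exponent(poisson, sizes))   # ~1.0: count proportional to volume
```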