Science.gov

Sample records for large non-orthogonal STBCs

  1. Non-orthogonal subband/transform coder

    NASA Technical Reports Server (NTRS)

    Glover, Daniel R. (Inventor)

    1993-01-01

    The present invention is directed to a simplified digital subband coder/decoder. In the present invention a signal is fed into a coder. The coder uses a non-orthogonal algorithm that is simply implemented in the coder hardware. The simple non-orthogonal design is then used in the implementation of the decoder to decode the signal.

  2. Accurate Calculation of Oscillator Strengths for Cl II Lines Using Non-orthogonal Wavefunctions

    NASA Technical Reports Server (NTRS)

    Tayal, S. S.

    2004-01-01

    The non-orthogonal orbitals technique in the multiconfiguration Hartree-Fock approach is used to calculate oscillator strengths and transition probabilities for allowed and intercombination lines in Cl II. The relativistic corrections are included through the Breit-Pauli Hamiltonian. The Cl II wave functions show strong term dependence. The non-orthogonal orbitals are used to describe the term dependence of radial functions. Large sets of spectroscopic and correlation functions are chosen to adequately describe strong interactions in the 3s²3p³nl ³P°, ¹P° and ³D° Rydberg series and to properly account for the important correlation and relaxation effects. The length and velocity forms of oscillator strength show good agreement for most transitions. The calculated radiative lifetime for the 3s3p⁵ ³P° state is in good agreement with experiment.

  3. The tensor properties of energy gradients within a non-orthogonal basis

    NASA Astrophysics Data System (ADS)

    White, Christopher A.; Maslen, Paul; Lee, Michael S.; Head-Gordon, Martin

    1997-09-01

    The application of standard minimization techniques to electronic structure theory calculations often requires the formation of an electronic energy gradient. The tensor nature of the electronic gradient, while implicitly treated within an orthogonal basis set, manifests itself explicitly in a non-orthogonal basis set. We apply simple tensor theory to define the electronic gradient in an arbitrary reference frame, using the energy minimization method of Li, Nunes and Vanderbilt in a non-orthogonal basis as a concrete example. The minimal basis HeH+ energy surface is used to portray the strong effect of consistently accounting for these tensor properties versus neglecting them.
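As a toy numerical illustration of this tensor distinction (a generalized Rayleigh quotient with random matrices, not the Li-Nunes-Vanderbilt functional itself): in a non-orthogonal basis with overlap matrix S, the naive derivative dE/dc is a covariant vector, and the proper steepest-descent step raises its index with S⁻¹.

```python
import numpy as np

# Illustrative sketch: minimize E(c) = (c^T H c)/(c^T S c) in a non-orthogonal
# basis with overlap (metric) matrix S. The naive gradient is covariant; the
# descent direction is the contravariant vector S^{-1} dE/dc. All matrices are
# randomly generated for illustration.

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
S = A @ A.T + n * np.eye(n)          # symmetric positive-definite overlap matrix
H = rng.standard_normal((n, n))
H = H + H.T                          # symmetric "Hamiltonian"

def energy(c):
    return (c @ H @ c) / (c @ S @ c)

def covariant_gradient(c):
    # dE/dc of the generalized Rayleigh quotient for c normalized so c @ S @ c = 1
    return 2.0 * (H @ c - energy(c) * (S @ c))

c = rng.standard_normal(n)
for _ in range(2000):
    c /= np.sqrt(c @ S @ c)
    c -= 0.1 * np.linalg.solve(S, covariant_gradient(c))  # contravariant step

# The minimum of the quotient is the lowest generalized eigenvalue of (H, S)
lowest = np.linalg.eigvals(np.linalg.solve(S, H)).real.min()
```

Stepping along the raw covariant gradient instead of S⁻¹ times it corresponds to silently assuming an orthogonal basis, which is exactly the inconsistency the abstract describes.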

  4. Orthogonal and Non-Orthogonal Tight Binding Parameters for III-V Semiconductors Nitrides

    NASA Astrophysics Data System (ADS)

    Martins, A. S.; Fellows, C. E.

    2016-08-01

    A simulated annealing (SA) approach is employed to determine different tight binding (TB) parameter sets for the nitride semiconductors AlN, GaN and InN; their limitations and potentialities are also discussed. Two kinds of atomic basis sets are considered: (i) the orthogonal sp³s* set with interactions up to second neighbors and (ii) a non-orthogonal spd set, with the Hamiltonian matrix elements calculated within the Extended Hückel Theory (EHT) prescriptions. For the non-orthogonal method, TB parameters are given for both zincblende and wurtzite crystalline structures.

  5. Non-Orthogonality of Seafloor Spreading: A New Look at Fast Spreading Centers

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Gordon, R. G.

    2015-12-01

    Most of Earth's surface is created by seafloor spreading. While most seafloor spreading is orthogonal, that is, the strike of mid-ocean ridge segments is perpendicular to nearby transform faults, examples of significant non-orthogonality have been noted since the 1970s, in particular in regions of slow seafloor spreading such as the western Gulf of Aden with non-orthogonality up to 45°. In contrast, here we focus on fast and ultra-fast seafloor spreading along the East Pacific Rise. To estimate non-orthogonality, we compare ridge-segment strikes with the direction of plate motion determined from the angular velocity that best fits all the data along the boundary of a single plate pair [DeMets et al., 2010]. The advantages of this approach include greater accuracy and the ability to estimate non-orthogonality where there are no nearby transform faults. Estimating the strikes of fast-spreading mid-ocean ridge segments presents several challenges as non-transform offsets on various scales affect the estimate of the strike. While spreading is orthogonal or nearly orthogonal along much of the East Pacific Rise, some ridge segments along the Pacific-Nazca boundary near 30°S and near 16°S-22°S deviate from orthogonality by as much as 6°-12° even when we exclude the portions of mid-ocean ridge segments involved in overlapping spreading centers. Thus modest but significant non-orthogonality occurs where seafloor spreading is the fastest on the planet. If a plume lies near the ridge segment, we assume it contributes to magma overpressure along the ridge segment [Abelson & Agnon, 1997]. We further assume that the contribution to magma overpressure is proportional to the buoyancy flux of the plume [Sleep, 1990] and inversely proportional to the distance between the mid-ocean ridge segment and a given plume. We find that the non-orthogonal angle tends to decrease with increasing spreading rate and with increasing distance between ridge segment and plume.
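The comparison described above can be sketched numerically: the local plate-motion direction at a ridge point is v = ω × r for a best-fit angular velocity ω, and non-orthogonality is the deviation of the ridge strike from perpendicular to that direction. The pole location, rotation rate, ridge location, and strike below are illustrative values, not MORVEL estimates.

```python
import numpy as np

def unit_position(lat_deg, lon_deg):
    """Unit position vector of a point on a spherical Earth."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def motion_azimuth(pole_lat, pole_lon, rate_deg_per_myr, lat, lon):
    """Azimuth (deg clockwise from north) of plate motion v = omega x r at (lat, lon)."""
    omega = np.radians(rate_deg_per_myr) * unit_position(pole_lat, pole_lon)
    r = unit_position(lat, lon)
    v = np.cross(omega, r)                       # linear velocity direction
    lam, phi = np.radians(lat), np.radians(lon)  # local east/north unit vectors
    east = np.array([-np.sin(phi), np.cos(phi), 0.0])
    north = np.array([-np.sin(lam) * np.cos(phi),
                      -np.sin(lam) * np.sin(phi),
                       np.cos(lam)])
    return np.degrees(np.arctan2(v @ east, v @ north)) % 360

# Illustrative Pacific-Nazca-like numbers (not actual plate-model values)
az = motion_azimuth(pole_lat=55.0, pole_lon=-90.0, rate_deg_per_myr=1.3,
                    lat=-30.0, lon=-112.0)
ridge_strike = 10.0                              # assumed ridge-segment strike, deg
non_orthogonality = abs((az - ridge_strike) % 180 - 90)
```

If the motion azimuth is exactly 90° from the ridge strike, the non-orthogonality is zero; any residual is the deviation the study quantifies.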

  6. Dynamical electron diffraction simulation for non-orthogonal crystal system by a revised real space method.

    PubMed

    Lv, C L; Liu, Q B; Cai, C Y; Huang, J; Zhou, G W; Wang, Y G

    2015-01-01

    In transmission electron microscopy, the revised real space (RRS) method has been confirmed to be a more accurate dynamical electron diffraction simulation method for low-energy electron diffraction than the conventional multislice method (CMS). However, the RRS method has so far only been formulated for orthogonal crystal systems. In this work, the expression of the RRS method for non-orthogonal crystal systems is derived. Taking Na2Ti3O7 and Si as examples, the correctness of the derived RRS formula for non-orthogonal crystal systems is confirmed by testing that the numerical results satisfy both sides of the Schrödinger equation; moreover, the difference between the RRS method and the CMS for non-orthogonal crystal systems is compared over the accelerating voltage range from 40 kV down to 10 kV. Our results show that the CMS method is almost the same as the RRS method at 40 kV and above. However, when the accelerating voltage is lowered to 20 kV or below, the CMS method introduces significant errors, not only for the higher-order Laue zone diffractions but also for the zero-order Laue zone. This indicates that the RRS method for non-orthogonal crystal systems should be used for accurate dynamical simulation when the accelerating voltage is low. Furthermore, the reason why the differences between diffraction patterns calculated by the RRS and CMS methods increase as the accelerating voltage decreases is discussed. PMID:26461207

  7. Non-orthogonal configuration interaction for the calculation of multielectron excited states

    NASA Astrophysics Data System (ADS)

    Sundstrom, Eric J.; Head-Gordon, Martin

    2014-03-01

    We apply Non-orthogonal Configuration Interaction (NOCI) to molecular systems where multielectron excitations, in this case double excitations, play a substantial role: the linear polyenes and β-carotene. We demonstrate that NOCI, when applied to systems with extended conjugation, provides a qualitatively correct wavefunction at a fraction of the cost of many other multireference treatments. We also present a new extension to this method allowing for purification of higher-order spin states by utilizing Generalized Hartree-Fock Slater determinants, and the details for computing ⟨S²⟩ for the ground and excited states.

  8. Fairness for Non-Orthogonal Multiple Access in 5G Systems

    NASA Astrophysics Data System (ADS)

    Timotheou, Stelios; Krikidis, Ioannis

    2015-10-01

    In non-orthogonal multiple access (NOMA) downlink, multiple data flows are superimposed in the power domain and user decoding is based on successive interference cancellation. NOMA's performance highly depends on the power split among the data flows and the associated power allocation (PA) problem. In this letter, we study NOMA from a fairness standpoint and we investigate PA techniques that ensure fairness for the downlink users under i) instantaneous channel state information (CSI) at the transmitter, and ii) average CSI. Although the formulated problems are non-convex, we have developed low-complexity polynomial algorithms that yield the optimal solution in both cases considered.
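The power-split structure behind NOMA fairness can be sketched for a single two-user pair. The bisection below is a minimal stand-in, not the letter's algorithms: it finds the power share that equalizes the two users' rates under successive interference cancellation, assuming unit-variance noise and fixed (instantaneous-CSI) channel gains g1 < g2.

```python
import numpy as np

# Two-user downlink NOMA sketch: the weak user (gain g1) decodes its own signal
# treating the strong user's as interference; the strong user (gain g2) removes
# the weak user's signal via SIC first. a is the weak user's power share.

def rates(a, P, g1, g2):
    r_weak = np.log2(1 + a * P * g1 / ((1 - a) * P * g1 + 1))  # interference-limited
    r_strong = np.log2(1 + (1 - a) * P * g2)                   # after SIC
    return r_weak, r_strong

def maxmin_power_split(P, g1, g2, iters=60):
    """Bisection on a: r_weak increases with a, r_strong decreases, so the
    max-min fair point is where the two rates cross."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        a = 0.5 * (lo + hi)
        r1, r2 = rates(a, P, g1, g2)
        if r1 < r2:
            lo = a
        else:
            hi = a
    return 0.5 * (lo + hi)

P, g1, g2 = 10.0, 0.3, 2.0          # illustrative transmit power and channel gains
a = maxmin_power_split(P, g1, g2)
r1, r2 = rates(a, P, g1, g2)
```

As expected for NOMA, the fair split gives the weak user the larger power share (a > 1/2), since its rate is interference-limited.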

  9. Non-orthogonal optical multicarrier access based on filter bank and SCMA.

    PubMed

    Liu, Bo; Zhang, Lijia; Xin, Xiangjun

    2015-10-19

    This paper proposes a novel non-orthogonal optical multicarrier access system based on filter banks and sparse code multiple access (SCMA). It offers relaxed frequency-offset requirements and better spectral efficiency for multicarrier access. An experiment with a 73.68 Gb/s filter bank-based multicarrier (FBMC) SCMA system over a 60 km single-mode fiber link is performed to demonstrate feasibility. A comparison between fast Fourier transform (FFT) based multicarrier transmission and the proposed scheme is also investigated in the experiment. PMID:26480395

  10. Non-Orthogonality of Seafloor Spreading: A New Look at Fast Spreading Centers

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Gordon, R. G.

    2014-12-01

    Most of Earth's surface is created by seafloor spreading, which is one of a handful of fundamental global tectonic processes. While most seafloor spreading is orthogonal, that is, the strike of mid-ocean ridge segments is perpendicular to transform faults, examples of significant non-orthogonality have been noted since the 1970s, in particular in regions of slow seafloor spreading such as the western Gulf of Aden with non-orthogonality up to 45°. In contrast, here we focus on fast and ultra-fast seafloor spreading along the East Pacific Rise. For our analysis, instead of comparing the strike of mid-ocean ridges with the strike of nearby transform faults, the azimuth of which can be uncertain, we compare with the direction of plate motion determined from the angular velocity that best fits all the data along the boundary of a single plate pair [DeMets, Gordon, and Argus, 2010]. The advantages of our approach include greater accuracy and the ability to estimate non-orthogonality where there are no nearby transform faults. Estimating the strikes of fast-spreading mid-ocean ridge segments presents several challenges as non-transform offsets on various scales affect the estimate of the strike. Moreover, the strike may vary considerably within a single ridge segment bounded by transform faults. This is especially evident near overlapping spreading centers, where the strike varies rapidly with distance along a ridge segment. We use various bathymetric data sets to make our estimates, including ETOPO1 [Amante and Eakins, 2009] and GeoMapApp [Ryan et al., 2009]. While spreading is orthogonal or nearly orthogonal along much of the East Pacific Rise, it appears that some ridge segments along the Pacific-Nazca boundary near 30°S and near 16°S-22°S deviate significantly from orthogonality, by as much as 6°-12°, even when we exclude the portions of mid-ocean ridge segments involved in overlapping spreading centers. Thus modest but significant non-orthogonality occurs where seafloor spreading is the fastest on the planet.

  11. Non-Orthogonality of Seafloor Spreading: A New Global Survey Building on the MORVEL Plate Motion Project

    NASA Astrophysics Data System (ADS)

    Throckmorton, C. R.; Zhang, T.; Gordon, R. G.

    2013-12-01

    Most of Earth's surface is created by seafloor spreading, which is one of a handful of fundamental global tectonic processes. While most seafloor spreading is orthogonal, that is, the strike of mid-ocean ridge segments is perpendicular to transform faults, examples of significant non-orthogonality have been noted since the 1970s, in particular in regions of slow seafloor spreading such as the western Gulf of Aden. Here we present a new global analysis of non-orthogonality of seafloor spreading, building on the results of the MORVEL global plate motion project, including both new estimates of plate angular velocities and global estimates of the strikes of mid-ocean ridge segments [DeMets, Gordon, & Argus, 2010]. For our analysis, instead of comparing the strike of mid-ocean ridges with the strike of nearby transform faults, the azimuth of which can be uncertain, we compare with the direction of plate motion determined from the angular velocity that best fits all the data along the boundary of a single plate pair. The advantages of our approach include greater accuracy and the ability to estimate non-orthogonality where there are no nearby transform faults. Unsurprisingly, we confirm that most seafloor spreading is within a few degrees of orthogonality. Moreover, we confirm non-orthogonality in many previously recognized regions of slow seafloor spreading. Surprisingly, however, we find non-orthogonality in several regions of fast seafloor spreading. Implications for mid-ocean ridge processes and hypothesized lithosphere deformation will be discussed.

  12. Simultaneous Source Localization and Polarization Estimation via Non-Orthogonal Joint Diagonalization with Vector-Sensors

    PubMed Central

    Gong, Xiao-Feng; Wang, Ke; Lin, Qiu-Hua; Liu, Zhi-Wen; Xu, You-Gen

    2012-01-01

    Joint estimation of direction-of-arrival (DOA) and polarization with electromagnetic vector-sensors (EMVS) is considered in the framework of complex-valued non-orthogonal joint diagonalization (CNJD). Two new CNJD algorithms are presented, which tackle the high-dimensional optimization problem in CNJD via a sequence of simple sub-optimization problems, using LU or LQ decompositions of the target matrices together with a Jacobi-type scheme. Furthermore, based on the above CNJD algorithms, we present a novel strategy to exploit the multi-dimensional structure present in the second-order statistics of EMVS outputs for simultaneous DOA and polarization estimation. Simulations are provided to compare the proposed strategy with existing tensorial or joint diagonalization based methods. PMID:22737015

  13. A Novel Attitude Estimation Algorithm Based on the Non-Orthogonal Magnetic Sensors

    PubMed Central

    Zhu, Jianliang; Wu, Panlong; Bo, Yuming

    2016-01-01

    Because the existing extremum ratio method for projectile attitude measurement is vulnerable to random disturbance, a novel integral ratio method is proposed to calculate the projectile attitude. First, the non-orthogonal measurement theory of the magnetic sensors is analyzed. It is found that the projectile rotating velocity is constant in one spinning circle and the attitude error is actually the pitch error. Next, by investigating the model of the extremum ratio method, an integral ratio mathematical model is established to improve the anti-disturbance performance. Finally, by combining the preprocessed magnetic sensor data based on the least-squares method and the rotating extremum features in one cycle, the analytical expression of the proposed integral ratio algorithm is derived with respect to the pitch angle. The simulation results show that the proposed integral ratio method gives more accurate attitude calculations than does the extremum ratio method, and that the attitude error variance can decrease by more than 90%. Compared to the extremum ratio method (which collects only a single data point per rotation cycle), the proposed integral ratio method can utilize all of the data collected in the high-spin environment, a clearly superior calculation approach that can be applied under actual projectile environment disturbances. PMID:27213389
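The advantage of integrating over a full cycle rather than sampling a single extremum can be shown with a toy model: both estimators below recover the amplitude of a noisy sinusoid, but the integral-based one averages the noise down. The signal model and noise level are illustrative, not the actual magnetic-sensor model.

```python
import numpy as np

# Toy comparison: estimate the amplitude of A*sin(t) from one noisy cycle,
# either from a single sample at the known extremum (extremum-style) or from
# the cycle integral, using mean(|A sin|) = 2A/pi (integral-style).

rng = np.random.default_rng(1)
t = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
peak_index = 250                      # t[250] = pi/2, the known extremum location
amplitude, noise_sd, trials = 2.0, 0.2, 2000

extremum_est, integral_est = [], []
for _ in range(trials):
    x = amplitude * np.sin(t) + noise_sd * rng.standard_normal(t.size)
    extremum_est.append(x[peak_index])               # one sample at the extremum
    integral_est.append(np.mean(np.abs(x)) * np.pi / 2)  # invert 2A/pi relation

var_ratio = np.var(integral_est) / np.var(extremum_est)
```

The variance ratio is far below one: averaging the whole cycle suppresses the per-sample noise, which is the same mechanism that makes the integral ratio method robust to random disturbance.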

  14. Spatio-Temporal Evolutions of Non-Orthogonal Equatorial Wave Modes Derived from Observations

    NASA Astrophysics Data System (ADS)

    Barton, C.; Cai, M.

    2015-12-01

    Equatorial waves have been studied extensively due to their importance to the tropical climate and weather systems. Historically, their activity is diagnosed mainly in the wavenumber-frequency domain. Recently, many studies have projected observational data onto parabolic cylinder functions (PCF), which represent the meridional structure of individual wave modes, to attain time-dependent spatial wave structures. In this study, we propose a methodology that seeks to identify individual wave modes in instantaneous fields of observations by determining their projections on PCF modes according to the equatorial wave theory. The new method has the benefit of yielding a closed system with a unique solution for all waves' spatial structures, including IG waves, for a given instantaneous observed field. We have applied our method to the ERA-Interim reanalysis dataset in the tropical stratosphere where the wave-mean flow interaction mechanism for the quasi-biennial oscillation (QBO) is well-understood. We have confirmed the continuous evolution of the selection mechanism for equatorial waves in the stratosphere from observations as predicted by the theory for the QBO. This also validates the proposed method for decomposition of observed tropical wave fields into non-orthogonal equatorial wave modes.
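The PCF projection step can be sketched with Hermite functions, which are the normalized parabolic cylinder functions in nondimensional meridional coordinates. The synthetic field and truncation below are illustrative, not the study's data.

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

# psi_n(y) = H_n(y) exp(-y^2/2) / sqrt(2^n n! sqrt(pi)) are orthonormal on the
# real line, so a field u(y) is expanded as sum_n a_n psi_n(y) with
# a_n = integral of u * psi_n.

def hermite_function(n, y):
    coeffs = [0] * n + [1]                       # selects physicists' H_n
    norm = sqrt(sqrt(pi) * (2 ** n) * factorial(n))
    return hermval(y, coeffs) * np.exp(-y ** 2 / 2) / norm

y = np.linspace(-8.0, 8.0, 2001)                 # nondimensional meridional coord
dy = y[1] - y[0]

# Synthetic field: a mix of the n=0 and n=2 meridional structures
u = 1.5 * hermite_function(0, y) - 0.7 * hermite_function(2, y)

# Orthonormality makes each projection a simple quadrature
amps = [float(np.sum(u * hermite_function(n, y)) * dy) for n in range(4)]
```

Recovering these mode amplitudes is the easy part; the methodology above addresses the harder step of attributing shared meridional structures to the individual, non-orthogonal wave classes.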

  15. A Novel Attitude Estimation Algorithm Based on the Non-Orthogonal Magnetic Sensors.

    PubMed

    Zhu, Jianliang; Wu, Panlong; Bo, Yuming

    2016-01-01

    Because the existing extremum ratio method for projectile attitude measurement is vulnerable to random disturbance, a novel integral ratio method is proposed to calculate the projectile attitude. First, the non-orthogonal measurement theory of the magnetic sensors is analyzed. It is found that the projectile rotating velocity is constant in one spinning circle and the attitude error is actually the pitch error. Next, by investigating the model of the extremum ratio method, an integral ratio mathematical model is established to improve the anti-disturbance performance. Finally, by combining the preprocessed magnetic sensor data based on the least-square method and the rotating extremum features in one cycle, the analytical expression of the proposed integral ratio algorithm is derived with respect to the pitch angle. The simulation results show that the proposed integral ratio method gives more accurate attitude calculations than does the extremum ratio method, and that the attitude error variance can decrease by more than 90%. Compared to the extremum ratio method (which collects only a single data point in one rotation cycle), the proposed integral ratio method can utilize all of the data collected in the high spin environment, which is a clearly superior calculation approach, and can be applied to the actual projectile environment disturbance. PMID:27213389

  16. A program for calculating photonic band structures, Green's functions and transmission/reflection coefficients using a non-orthogonal FDTD method

    NASA Astrophysics Data System (ADS)

    Ward, A. J.; Pendry, J. B.

    2000-06-01

    In this paper we present an updated version of our ONYX program for calculating photonic band structures using a non-orthogonal finite difference time domain method. This new version employs the same transparent formalism as the first version, with the same capabilities for calculating photonic band structures or causal Green's functions, but also includes extra subroutines for the calculation of transmission and reflection coefficients. Both the electric and magnetic fields are placed onto a discrete lattice by approximating the spatial and temporal derivatives with finite differences. This results in discrete versions of Maxwell's equations which can be used to integrate the fields forwards in time. The time required for a calculation using this method scales linearly with the number of real space points used in the discretization, so the technique is ideally suited to handling systems with large and complicated unit cells.
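The leapfrog time-stepping at the heart of such a program can be sketched in one dimension on an ordinary orthogonal grid. This is only the standard special case (ONYX generalizes the update to non-orthogonal cells); units are chosen so c = 1, with a Courant number of 0.5.

```python
import numpy as np

# Minimal 1D FDTD sketch: E and H live on interleaved grid points and are
# updated alternately from finite-difference curls (discrete Maxwell equations).

n_cells, n_steps, courant = 400, 800, 0.5
Ez = np.zeros(n_cells)
Hy = np.zeros(n_cells - 1)

for step in range(n_steps):
    # H update from the spatial difference of neighboring E values
    Hy += courant * np.diff(Ez)
    # E update from the spatial difference of H; boundary cells stay fixed (PEC walls)
    Ez[1:-1] += courant * np.diff(Hy)
    # soft Gaussian source in the middle of the grid
    Ez[n_cells // 2] += np.exp(-((step - 30) / 10.0) ** 2)
```

Each timestep costs O(n_cells), which is the linear scaling in the number of real-space points noted above.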

  17. Non-orthogonal spin-adaptation of coupled cluster methods: A new implementation of methods including quadruple excitations

    SciTech Connect

    Matthews, Devin A.; Stanton, John F.

    2015-02-14

    The theory of non-orthogonal spin-adaptation for closed-shell molecular systems is applied to coupled cluster methods with quadruple excitations (CCSDTQ). Calculations at this level of detail are of critical importance in describing the properties of molecular systems to an accuracy which can meet or exceed modern experimental techniques. Such calculations are of significant (and growing) importance in such fields as thermodynamics, kinetics, and atomic and molecular spectroscopies. With respect to the implementation of CCSDTQ and related methods, we show that non-orthogonal spin-adaptation offers significant advantages in the simplification and factorization of the working equations and in creating an efficient implementation. The resulting algorithm is implemented in the CFOUR program suite for CCSDT, CCSDTQ, and various approximate methods (CCSD(T), CC3, CCSDT-n, and CCSDT(Q)).

  18. Multireference Møller-Plesset perturbation theory with non-canonical and non-orthogonal orbitals

    NASA Astrophysics Data System (ADS)

    Finley, James P.; Hirao, Kimihiko

    2000-09-01

    Using non-orthogonal secondary orbitals and non-canonical (localized) inactive and active orbitals, a second-order multireference perturbation theory is formulated, based on a complete active space self-consistent field (CASSCF) wavefunction. The equations of interest are derived from the first-order Bloch equation by using an approach based on a bi-orthogonal basis and operators expressed in second-quantization.

  19. Spatio-temporal evolutions of non-orthogonal equatorial wave modes derived from observations

    NASA Astrophysics Data System (ADS)

    Barton, Cory

    Equatorial waves have been studied extensively due to their importance to the tropical climate and weather systems. Historically, their activity is diagnosed mainly in the wavenumber-frequency domain. Recently, many studies have projected observational data onto parabolic cylinder functions (PCFs), which represent the meridional structure of individual wave modes, to attain time-dependent spatial wave structures. The non-orthogonality of wave modes has, however, posed a problem when attempting to separate data into wave fields where the waves project onto the same structure functions. We propose the development and application of a new methodology for equatorial wave expansion of instantaneous flows using the full equatorial wave spectrum. By creating a mapping from the meridional structure function amplitudes to the equatorial wave class amplitudes, we are able to diagnose instantaneous wave fields and determine their evolution. Because all meridional modes are shared by some subset of the wave classes, we require constraints on the wave class amplitudes to yield a closed system with a unique solution for all waves' spatial structures, including IG waves. A synthetic field is analyzed using this method to determine its accuracy for data of a single vertical mode. The wave class spectra diagnosed using this method successfully match the correct dispersion curves even if the incorrect depth is chosen for the spatial decomposition. In the case of more than one depth scale, waves with varying equivalent depth may be similarly identified using the dispersion curves. The primary vertical mode is the 200 m equivalent depth mode, which is that of the peak projection response. A distinct spectral power peak along the Kelvin wave dispersion curve for this value validates our choice of equivalent depth, although the possibility of depth varying with time and height is explored. The wave class spectra diagnosed assuming this depth scale mostly match their expected dispersion curves.

  20. On the Performance of Non-Orthogonal Multiple Access in 5G Systems with Randomly Deployed Users

    NASA Astrophysics Data System (ADS)

    Ding, Zhiguo; Yang, Zheng; Fan, Pingzhi; Poor, H. Vincent

    2014-12-01

    In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users. The developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates; however, the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power. In particular, a wrong choice of the targeted data rates and allocated power can lead to a situation in which the user's outage probability is always one, i.e. the user's targeted quality of service will never be met.
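The always-in-outage regime described above has a simple origin for a two-user pair: the weak user's SINR is bounded above by the power-allocation ratio a1/a2 no matter how strong its channel becomes, so a target rate whose SINR threshold exceeds that ratio can never be met. The powers and rates below are illustrative numbers, not values from the letter.

```python
import numpy as np

# Weak user decoded first: SINR = a1*P*g / (a2*P*g + 1) -> a1/a2 as g -> inf.
# If the threshold 2^R - 1 implied by target rate R exceeds a1/a2, outage is
# certain for every channel realization.

def sinr_weak(g, P, a1, a2):
    return a1 * P * g / (a2 * P * g + 1.0)

P = 100.0
eps = 2 ** 1.5 - 1                    # SINR threshold for a 1.5 bit/s/Hz target
g = np.logspace(-2, 6, 100)           # channel gains from very weak to very strong

good_split = sinr_weak(g, P, 0.8, 0.2) > eps  # a1/a2 = 4 > eps: strong channels succeed
bad_split = sinr_weak(g, P, 0.6, 0.4) > eps   # a1/a2 = 1.5 < eps: no gain ever succeeds
```

With the bad split, the outage probability is one for any channel distribution, which is exactly the pathological rate/power choice the letter warns against.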

  1. The non-orthogonal fixed beam arrangement for the second proton therapy facility at the National Accelerator Center

    NASA Astrophysics Data System (ADS)

    Schreuder, A. N.; Jones, D. T. L.; Conradie, J. L.; Fourie, D. T.; Botha, A. H.; Müller, A.; Smit, H. A.; O'Ryan, A.; Vernimmen, F. J. A.; Wilson, J.; Stannard, C. E.

    1999-06-01

    The medical user group at the National Accelerator Center (NAC) is currently unable to treat all eligible patients with high energy protons. Developing a second proton treatment room is desirable since the 200 MeV proton beam from the NAC separated sector cyclotron is currently under-utilized during proton therapy sessions. During the patient positioning phase in one treatment room, the beam could be used for therapy in a second room. The second proton therapy treatment room at the NAC will be equipped with two non-orthogonal beam lines, one horizontal and one at 30 degrees to the vertical. The two beams will have a common isocentre. This beam arrangement, together with a versatile patient positioning system (a commercial robot arm), will provide the radiation oncologist with a diversity of possible beam arrangements and offers a reasonably cost-effective alternative to an isocentric gantry.

  2. The non-orthogonal fixed beam arrangement for the second proton therapy facility at the National Accelerator Center

    SciTech Connect

    Schreuder, A. N.; Jones, D. T. L.; Conradie, J. L.; Fourie, D. T.; Botha, A. H.; Mueller, A.; Smit, H. A.; O'Ryan, A.; Vernimmen, F. J. A.; Wilson, J.; Stannard, C. E.

    1999-06-10

    The medical user group at the National Accelerator Center (NAC) is currently unable to treat all eligible patients with high energy protons. Developing a second proton treatment room is desirable since the 200 MeV proton beam from the NAC separated sector cyclotron is currently under-utilized during proton therapy sessions. During the patient positioning phase in one treatment room, the beam could be used for therapy in a second room. The second proton therapy treatment room at the NAC will be equipped with two non-orthogonal beam lines, one horizontal and one at 30 degrees to the vertical. The two beams will have a common isocentre. This beam arrangement, together with a versatile patient positioning system (a commercial robot arm), will provide the radiation oncologist with a diversity of possible beam arrangements and offers a reasonably cost-effective alternative to an isocentric gantry.

  3. Size consistent formulations of the perturb-then-diagonalize Møller-Plesset perturbation theory correction to non-orthogonal configuration interaction

    NASA Astrophysics Data System (ADS)

    Yost, Shane R.; Head-Gordon, Martin

    2016-08-01

    In this paper we introduce two size consistent forms of the non-orthogonal configuration interaction with second-order Møller-Plesset perturbation theory method, NOCI-MP2. We show that the original NOCI-MP2 formulation [S. R. Yost, T. Kowalczyk, and T. Van Voorhis, J. Chem. Phys. 139, 174104 (2013)], which is a perturb-then-diagonalize multi-reference method, is not size consistent. We also show that this causes significant errors in large systems like the linear acenes. By contrast, the size consistent versions of the method give satisfactory results for singlet and triplet excited states when compared to other multi-reference methods that include dynamic correlation. For NOCI-MP2, however, the number of required determinants to yield similar levels of accuracy is significantly smaller. These results show the promise of the NOCI-MP2 method, though work still needs to be done in creating a more consistent black-box approach to computing the determinants that comprise the many-electron NOCI basis.

  4. Size consistent formulations of the perturb-then-diagonalize Møller-Plesset perturbation theory correction to non-orthogonal configuration interaction.

    PubMed

    Yost, Shane R; Head-Gordon, Martin

    2016-08-01

    In this paper we introduce two size consistent forms of the non-orthogonal configuration interaction with second-order Møller-Plesset perturbation theory method, NOCI-MP2. We show that the original NOCI-MP2 formulation [S. R. Yost, T. Kowalczyk, and T. Van Voorhis, J. Chem. Phys. 139, 174104 (2013)], which is a perturb-then-diagonalize multi-reference method, is not size consistent. We also show that this causes significant errors in large systems like the linear acenes. By contrast, the size consistent versions of the method give satisfactory results for singlet and triplet excited states when compared to other multi-reference methods that include dynamic correlation. For NOCI-MP2, however, the number of required determinants to yield similar levels of accuracy is significantly smaller. These results show the promise of the NOCI-MP2 method, though work still needs to be done in creating a more consistent black-box approach to computing the determinants that comprise the many-electron NOCI basis. PMID:27497537

  5. Functional Implications of Ubiquitous Semicircular Canal Non-Orthogonality in Mammals

    PubMed Central

    Berlin, Jeri C.; Kirk, E. Christopher; Rowe, Timothy B.

    2013-01-01

    The ‘canonical model’ of semicircular canal orientation in mammals assumes that 1) the three ipsilateral canals of an inner ear exist in orthogonal planes (i.e., orthogonality), 2) corresponding left and right canal pairs have equivalent angles (i.e., angle symmetry), and 3) contralateral synergistic canals occupy parallel planes (i.e., coplanarity). However, descriptions of vestibular anatomy that quantify semicircular canal orientation in single species often diverge substantially from this model. Data for primates further suggest that semicircular canal orthogonality varies predictably with the angular head velocities encountered in locomotion. These observations raise the possibility that orthogonality, symmetry, and coplanarity are misleading descriptors of semicircular canal orientation in mammals, and that deviations from these norms could have significant functional consequences. Here we critically assess the canonical model of semicircular canal orientation using high-resolution X-ray computed tomography scans of 39 mammal species. We find that substantial deviations from orthogonality, angle symmetry, and coplanarity are the rule for the mammals in our comparative sample. Furthermore, the degree to which the semicircular canals of a given species deviate from orthogonality is negatively correlated with estimated vestibular sensitivity. We conclude that the available comparative morphometric data do not support the canonical model and that its overemphasis as a heuristic generalization obscures a large amount of functionally relevant variation in semicircular canal orientation between species. PMID:24260256
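The deviation-from-orthogonality measure can be illustrated directly: given unit normal vectors to the three ipsilateral canal planes, compute the pairwise angles and their departure from 90°. The normal vectors below are made-up example values, not measurements from the study.

```python
import numpy as np

# For each pair of canal-plane normals, the angle between the planes is folded
# into [0, 90] degrees and reported as a deviation from orthogonality.

def pairwise_deviations(normals):
    devs = []
    for i in range(3):
        for j in range(i + 1, 3):
            cosang = np.clip(np.dot(normals[i], normals[j]), -1.0, 1.0)
            angle = np.degrees(np.arccos(abs(cosang)))   # fold to [0, 90]
            devs.append(90.0 - angle)
    return devs

normals = np.array([[1.0, 0.0, 0.0],
                    [0.05, 1.0, 0.0],      # tilted ~2.9 deg toward the first canal
                    [0.0, 0.1, 1.0]])      # tilted ~5.7 deg toward the second
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

devs = pairwise_deviations(normals)        # deviations from 90 deg, in degrees
```

Summarizing the three deviations per ear (e.g., their mean) gives the kind of species-level orthogonality index that the study correlates with vestibular sensitivity.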

  6. Three Dimensional Wind Speed and Flux Measurement over a Rain-fed Soybean Field Using Orthogonal and Non-orthogonal Sonic Anemometer Designs

    NASA Astrophysics Data System (ADS)

    Thomas, T.; Suyker, A.; Burba, G. G.; Billesbach, D.

    2014-12-01

    The eddy covariance method for estimating fluxes of trace gases, energy and momentum in the constant flux layer above a plant canopy fundamentally relies on accurate measurements of the vertical wind speed. This wind speed is typically measured using a three dimensional ultrasonic anemometer. These anemometers incorporate designs with transducer sets that are aligned either orthogonally or non-orthogonally. Previous studies comparing the two designs suggest differences in measured 3D wind speed components, in particular vertical wind speed, from the non-orthogonal transducer relative to the orthogonal design. These differences, attributed to additional flow distortion caused by the non-orthogonal transducer arrangement, directly affect fluxes of trace gases, energy and momentum. A field experiment is being conducted over a rain-fed soybean field at the AmeriFlux site (US-Ne3) near Mead, Nebraska. In this study, ultrasonic anemometers featuring orthogonal transducer sets (ATI Vx Probe) and non-orthogonal transducer sets (Gill R3-100) collect high frequency wind vector and sonic temperature data. Sensible heat and momentum fluxes and other key sonic performance data are evaluated based on environmental parameters including wind speed, wind direction, temperature, and angle of attack. Preliminary field experiment results are presented.

  7. Reliable Attention Network Scores and Mutually Inhibited Inter-network Relationships Revealed by Mixed Design and Non-orthogonal Method.

    PubMed

    Wang, Yi-Feng; Jing, Xiu-Juan; Liu, Feng; Li, Mei-Ling; Long, Zhi-Liang; Yan, Jin H; Chen, Hua-Fu

    2015-01-01

    The attention system can be divided into alerting, orienting, and executive control networks. The efficiency and independence of attention networks have been widely tested with the attention network test (ANT) and its revised versions. However, many studies have failed to find effects of attention network scores (ANSs) and inter-network relationships (INRs). Moreover, the low reliability of ANSs cannot meet the demands of theoretical and empirical investigations. Two methodological factors (the inter-trial influence in the event-related design and the inter-network interference in orthogonal contrast) may be responsible for the unreliability of the ANT. In this study, we combined the mixed design and non-orthogonal method to explore ANSs and directional INRs. With a small number of trials, we obtained reliable and independent ANSs (split-half reliability of alerting: 0.684; orienting: 0.588; and executive control: 0.616), suggesting an individual and specific attention system. Furthermore, mutual inhibition was observed when two networks were operated simultaneously, indicating a differentiated but integrated attention system. Overall, the reliable and individually specific ANSs and mutually inhibited INRs provide novel insight into the understanding of the developmental, physiological and pathological mechanisms of attention networks, and can benefit future experimental and clinical investigations of attention using the ANT.
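The split-half reliabilities quoted above (e.g. 0.684 for alerting) come from a standard procedure: correlate network scores computed from two random halves of the trials, then apply the Spearman-Brown correction. A sketch of that generic procedure, assuming a subjects × trials score array (the variable names and simulated data are illustrative, not from the study):

```python
import numpy as np

def split_half_reliability(scores, seed=0):
    """Spearman-Brown corrected split-half reliability.
    `scores`: (subjects x trials) array of per-trial network scores."""
    scores = np.asarray(scores, float)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(scores.shape[1])
    half = scores.shape[1] // 2
    a = scores[:, idx[:half]].mean(axis=1)   # mean score, first random half of trials
    b = scores[:, idx[half:]].mean(axis=1)   # mean score, second random half
    r = np.corrcoef(a, b)[0, 1]              # correlation between half-test scores
    return 2 * r / (1 + r)                   # Spearman-Brown step-up to full length

# Simulated check: a stable per-subject trait plus independent trial noise
rng = np.random.default_rng(1)
trait = rng.normal(size=(50, 1))
scores = trait + 0.5 * rng.normal(size=(50, 40))
rel = split_half_reliability(scores)
```

With a strong stable trait the corrected reliability approaches 1; pure noise would drive it toward 0, which is the pattern the unreliability critique of the ANT turns on.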

  8. Reliable Attention Network Scores and Mutually Inhibited Inter-network Relationships Revealed by Mixed Design and Non-orthogonal Method

    PubMed Central

    Wang, Yi-Feng; Jing, Xiu-Juan; Liu, Feng; Li, Mei-Ling; Long, Zhi-Liang; Yan, Jin H.; Chen, Hua-Fu

    2015-01-01

    The attention system can be divided into alerting, orienting, and executive control networks. The efficiency and independence of attention networks have been widely tested with the attention network test (ANT) and its revised versions. However, many studies have failed to find effects of attention network scores (ANSs) and inter-network relationships (INRs). Moreover, the low reliability of ANSs cannot meet the demands of theoretical and empirical investigations. Two methodological factors (the inter-trial influence in the event-related design and the inter-network interference in orthogonal contrast) may be responsible for the unreliability of the ANT. In this study, we combined the mixed design and non-orthogonal method to explore ANSs and directional INRs. With a small number of trials, we obtained reliable and independent ANSs (split-half reliability of alerting: 0.684; orienting: 0.588; and executive control: 0.616), suggesting an individual and specific attention system. Furthermore, mutual inhibition was observed when two networks were operated simultaneously, indicating a differentiated but integrated attention system. Overall, the reliable and individually specific ANSs and mutually inhibited INRs provide novel insight into the understanding of the developmental, physiological and pathological mechanisms of attention networks, and can benefit future experimental and clinical investigations of attention using the ANT. PMID:25997025

  9. Novel methods for configuration interaction and orbital optimization for wave functions containing non-orthogonal orbitals with applications to the chromium dimer and trimer.

    PubMed

    Olsen, Jeppe

    2015-09-21

    A novel algorithm for performing configuration interaction (CI) calculations using non-orthogonal orbitals is introduced. In the new algorithm, the explicit calculation of the Hamiltonian matrix is replaced by the direct evaluation of the Hamiltonian matrix times a vector, which allows expressing the CI-vector in a bi-orthonormal basis, thereby drastically reducing the computational complexity. A new non-orthogonal orbital optimization method that employs exponential mappings is also described. To allow non-orthogonal transformations of the orbitals, the standard exponential mapping using anti-symmetric operators is supplemented with an exponential mapping based on a symmetric operator in the active orbital space. Expressions are obtained for the orbital gradient and Hessian, which involve the calculation of at most two-body density matrices, thereby avoiding the time-consuming calculation of the three- and four-body density matrices of the previous approaches. An approach that completely avoids the calculation of any four-body terms with limited degradation of convergence is also devised. The novel methods for non-orthogonal configuration interaction and orbital optimization are applied to the chromium dimer and trimer. For internuclear distances that are typical for chromium clusters, it is shown that a reference configuration consisting of optimized singly occupied active orbitals is sufficient to give a potential curve that is in qualitative agreement with complete active space self-consistent field (CASSCF) calculations containing more than 500 × 10⁶ determinants. To obtain a potential curve that deviates from the CASSCF curve by less than 1 mHartree, it is sufficient to add single and double excitations out from the reference configuration. PMID:26395682
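The bi-orthonormal-basis device mentioned above can be illustrated with plain linear algebra: the mixed overlap between two non-orthogonal orbital sets is absorbed into one set, so the transformed bra and ket orbitals overlap as the identity. A minimal sketch under toy assumptions (random matrices stand in for real orbitals, and the det(S) normalization factor that accompanies such a transformation is omitted):

```python
import numpy as np

def biorthogonalize(C_bra, C_ket, S_ao):
    """Transform the ket MO coefficients so the mixed MO overlap with the
    bra set becomes the identity (a bi-orthonormal orbital pair).
    Note: the det(S) prefactor of the wave function is not tracked here."""
    S = C_bra.T @ S_ao @ C_ket        # mixed orbital overlap matrix
    return C_ket @ np.linalg.inv(S)   # now C_bra.T @ S_ao @ C_ket_new == I

# Toy check with a random SPD AO overlap and two random orbital sets
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
S_ao = A @ A.T + 4.0 * np.eye(4)      # symmetric positive definite AO overlap
C_bra = rng.normal(size=(4, 3))
C_ket = rng.normal(size=(4, 3))
C_new = biorthogonalize(C_bra, C_ket, S_ao)
```

Once the mixed overlap is the identity, Slater-Condon-like rules apply between the two determinants, which is what makes the direct Hamiltonian-times-vector evaluation tractable.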

  10. Divergence preserving discrete surface integral methods for Maxwell's curl equations using non-orthogonal unstructured grids

    NASA Technical Reports Server (NTRS)

    Madsen, Niel K.

    1992-01-01

    Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time-domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. Employing mixed polyhedral cells, (hexahedral, tetrahedral, etc.) these methods allow more accurate modeling of non-rectangular structures and objects because the traditional stair-stepped boundary approximations associated with the orthogonal grid based finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.
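The canonical staggered-grid finite-difference scheme that these DSI methods generalize is easy to sketch in one dimension: E and H live on interleaved grid points, and each field is advanced using the spatial difference (the discrete curl) of the other. This is a normalized-units illustration only; the DSI methods themselves operate on unstructured polyhedral grids:

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=300, courant=0.5):
    """Canonical 1D staggered-grid (Yee-style) update in normalized units.
    E lives on integer grid points, H on the half-integer points between them."""
    E = np.zeros(n_cells + 1)
    H = np.zeros(n_cells)
    for step in range(n_steps):
        H += courant * (E[1:] - E[:-1])        # advance H from the discrete curl of E
        E[1:-1] += courant * (H[1:] - H[:-1])  # advance E from the discrete curl of H
        E[n_cells // 4] += np.exp(-((step - 30) / 10.0) ** 2)  # soft Gaussian source
    return E, H

E, H = fdtd_1d()
```

On this uniform grid the update is stable for a Courant number of at most 1; the endpoint E values are held at zero, i.e. perfectly conducting boundaries.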

  11. Stability of a non-orthogonal stagnation flow to three dimensional disturbances

    NASA Technical Reports Server (NTRS)

    Lasseigne, D. G.; Jackson, T. L.

    1991-01-01

    A similarity solution for a low Mach number nonorthogonal flow impinging on a hot or cold plate is presented. For the constant density case, it is known that the stagnation point shifts in the direction of the incoming flow and that this shift increases as the angle of attack decreases. When the effects of density variations are included, a critical plate temperature exists; above this temperature the stagnation point shifts away from the incoming stream as the angle is decreased. This flow field is believed to have application to the reattachment zone of certain separated flows or to a lifting body at a high angle of attack. Finally, the stability of this nonorthogonal flow to self-similar, 3-D disturbances is examined. Stability properties of the flow are given as a function of the parameters of this study: the ratio of the plate temperature to that of the outer potential flow, and the angle of attack. In particular, it is shown that the angle of attack can be scaled out by a suitable definition of an equivalent wavenumber and temporal growth rate, and the stability problem for the nonorthogonal case is identical to the stability problem for the orthogonal case.

  12. Thinking large.

    PubMed

    Devries, Egbert

    2016-05-01

    Egbert Devries was brought up on a farm in the Netherlands and large animal medicine has always been his area of interest. After working in UK practice for 12 years he joined CVS and was soon appointed large animal director with responsibility for building a stronger large animal practice base. PMID:27154956

  13. Design of a large area 3D surface structure measurement system

    NASA Astrophysics Data System (ADS)

    Wang, Shenghuai; Li, Xin; Chen, Yurong; Xie, Tiebang

    2010-10-01

    Surface texture plays a vital role in modern engineering products. The surface metrology discipline is currently undergoing a paradigm shift from 2D profile to 3D areal characterization and from stochastic to structured surfaces. Areal surface texture measurements offer more functionally significant parameters, better repeatability, and more effective visualization than profile measurements. Existing white-light interference microscopy can measure areal surface texture without contact; however, its measurement field and lateral resolution are restricted by the numerical aperture of the objective. To address this issue, a large-area, seamless vertical-scanning white-light interference stitching measurement system has been built in this paper. The system combines a compound optical microscope with a long-travel, nanometer-level 3D precision displacement stage with integrated displacement measurement. CCD calibration and the calculation of angles between the CCD and the horizontal worktables are handled by the measurement system itself. A non-orthogonal worktable motion strategy provides the seamless stitching measurement, which reduces the stitching cost and enlarges the measurement field, thereby removing the restriction of lateral resolution and measurement field to the numerical aperture of the objective. An automatic fringe search-and-location method for white-light interference measurement, based on the normalized standard deviation of the gray values of interference microscopy images, is proposed to replace the inefficient manual search for interference fringes.
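The automatic fringe search described above can be illustrated directly from the metric it names: the normalized standard deviation (std/mean) of image gray values, which peaks when interference fringes are present in the field. A small sketch on a synthetic scan stack (the function names and synthetic data are illustrative):

```python
import numpy as np

def fringe_contrast(image):
    """Normalized standard deviation of gray values: std / mean.
    White-light fringes raise image contrast, so this metric peaks
    near the best fringe position in a vertical scan."""
    img = np.asarray(image, float)
    return img.std() / img.mean()

def find_fringe_frame(stack):
    """Index of the scan frame with the strongest fringe contrast."""
    return int(np.argmax([fringe_contrast(f) for f in stack]))

# Synthetic vertical-scan stack: flat background, fringes only in frame 3
x = np.arange(64)
stack = [np.full((64, 64), 100.0) for _ in range(6)]
stack[3] = 100.0 + 20.0 * np.sin(0.5 * x)[None, :]   # sinusoidal fringe pattern
best = find_fringe_frame(stack)
```

Flat frames score zero and the fringed frame scores highest, so the scan can stop and refine around that index instead of an operator searching by hand.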

  14. Large ethics.

    PubMed

    Chambers, David W

    2008-01-01

    This essay presents an alternative to the traditional view that ethics means judging individual behavior against standards of right and wrong. Instead, ethics is understood as creating ethical communities through the promises we make to each other. The "aim" of ethics is to demonstrate in our own behavior a credible willingness to work to create a mutually better world. The "game" of ethics then becomes searching for strategies that overlap with others' strategies so that we are all better for intending to act on a basis of reciprocal trust. This is a difficult process because we have partial, simultaneous, shifting, and inconsistent views of the world. But despite the reality that we each "frame" ethics in personal terms, it is still possible to create sufficient common understanding to prosper together. Large ethics does not make it a prerequisite for moral behavior that everyone adheres to a universally agreed set of ethical principles; all that is necessary is sufficient overlap in commitment to searching for better alternatives.

  15. A multireference perturbation method using non-orthogonal Hartree-Fock determinants for ground and excited states

    SciTech Connect

    Yost, Shane R.; Kowalczyk, Tim; Van Voorhis, Troy

    2013-11-07

    In this article we propose the ΔSCF(2) framework, a multireference strategy based on second-order perturbation theory, for ground and excited electronic states. Unlike the complete active space family of methods, ΔSCF(2) employs a set of self-consistent Hartree-Fock determinants, also known as ΔSCF states. Each ΔSCF electronic state is modified by a first-order correction from Møller-Plesset perturbation theory and used to construct a Hamiltonian in a configuration-interaction-like framework. We present formulas for the resulting matrix elements between non-orthogonal states that scale as N_occ²N_virt³. Unlike most active space methods, ΔSCF(2) treats the ground and excited state determinants even-handedly. We apply ΔSCF(2) to the H₂, hydrogen fluoride, and H₄ systems and show that the method provides accurate descriptions of ground- and excited-state potential energy surfaces with no single active space containing more than 10 ΔSCF states.

  16. On the Relative Merits of Non-Orthogonal and Orthogonal Valence Bond Methods Illustrated on the Hydrogen Molecule

    ERIC Educational Resources Information Center

    Angeli, Celestino; Cimiraglia, Renzo; Malrieu, Jean-Paul

    2008-01-01

    Valence bond (VB) is one of the cornerstone theories of quantum chemistry. Even if in practical applications the molecular orbital (MO) approach has obtained more attention, some basic chemical concepts (such as the nature of the chemical bond and the failure of the single determinant-based MO methods in describing the bond cleavage) are normally…

  17. A comparative study of scale-adaptive and large-eddy simulations of highly swirling turbulent flow through an abrupt expansion

    NASA Astrophysics Data System (ADS)

    Javadi, Ardalan; Nilsson, Håkan

    2014-03-01

    The strongly swirling turbulent flow through an abrupt expansion is investigated using highly resolved LES and SAS, to shed more light on the stagnation region and the helical vortex breakdown. The vortex breakdown in an abrupt expansion resembles the so-called vortex rope occurring in hydro power draft tubes. It is known that the large-scale helical vortex structures can be captured by regular RANS turbulence models. However, the spurious suppression of the small-scale structures should be avoided using less diffusive methods. The present work compares LES and SAS results with the experimental measurement of Dellenback et al. (1988). The computations are conducted using a general non-orthogonal finite-volume method with a fully collocated storage available in the OpenFOAM-2.1.x CFD code. The dynamics of the flow is studied at two Reynolds numbers, Re = 6.0×10⁴ and Re = 10⁵, at almost constant high swirl numbers of Sr = 1.16 and Sr = 1.23, respectively. The time-averaged velocity and pressure fields and the root mean square of the velocity fluctuations are captured and investigated qualitatively. The flow with the lower Reynolds number gives a much weaker outburst, although the frequency of the structures seems to be constant for the plateau swirl number.

  18. Large displacement spherical joint

    DOEpatents

    Bieg, Lothar F.; Benavides, Gilbert L.

    2002-01-01

    A new class of spherical joints has a very large accessible full cone angle, a property which is beneficial for a wide range of applications. Despite the large cone angles, these joints move freely without singularities.

  19. High-resolution combined global gravity field modelling: Solving large kite systems using distributed computational algorithms

    NASA Astrophysics Data System (ADS)

    Zingerle, Philipp; Fecher, Thomas; Pail, Roland; Gruber, Thomas

    2016-04-01

    One of the major obstacles in modern global gravity field modelling is the seamless combination of lower-degree inhomogeneous gravity field observations (e.g. data from satellite missions) with (very) high-degree homogeneous information (e.g. gridded and reduced gravity anomalies, beyond d/o 1000). Current approaches mostly combine such data only at the coefficient level, meaning that a spherical harmonic analysis is first performed independently for each observation class (resp. model), solving dense normal equations (NEQs) for the inhomogeneous model and block-diagonal NEQs for the homogeneous one. Such methods are clearly unable to identify or eliminate effects such as spectral leakage caused by band limitations of the models and the non-orthogonality of the spherical harmonic base functions. To counter these problems, a combination of both models at the NEQ level is desirable, which can in principle be achieved by NEQ-stacking. Because of the higher maximum degree of the homogeneous model, however, a reordering of the coefficients is needed, which inevitably destroys the block-diagonal structure of the NEQ-matrix and with it any simple sparsity. Hence a special coefficient ordering is needed to create a new, favorable sparsity pattern that admits an efficient solver. Such a pattern is found in the so-called kite structure (Bosch, 1993), obtained by applying the kite ordering to the stacked NEQ-matrix. In a first step it is shown what is needed to attain the kite (NEQ) system, how to solve it efficiently, and how to calculate the appropriate variance information from it. Further, because of the massive computational workload when operating on large kite systems (theoretically possible up to about max. d/o 100,000), the main emphasis is put on the presentation of special distributed algorithms which may solve those systems in parallel on an indeterminate number of processes and are

  20. Large mode radius resonators

    NASA Technical Reports Server (NTRS)

    Harris, Michael R.

    1987-01-01

    Resonator configurations permitting operation with large mode radius while maintaining good transverse mode discrimination are considered. Stable resonators incorporating an intracavity telescope and unstable resonator geometries utilizing an output coupler with a Gaussian reflectivity profile are shown to enable large-radius single-mode laser operation. Results of heterodyne studies of pulsed CO2 lasers with large (11 mm e⁻² radius) fundamental mode sizes are presented, demonstrating minimal frequency sweeping in accordance with the theory of laser-induced medium perturbations.

  1. Large wind turbine generators

    NASA Technical Reports Server (NTRS)

    Thomas, R. L.; Donovon, R. M.

    1978-01-01

    The development associated with large wind turbine systems is briefly described. The scope of this activity includes the development of several large wind turbines ranging in size from 100 kW to several megawatt levels. A description of the wind turbine systems, their programmatic status and a summary of their potential costs is included.

  2. Large Print Bibliography, 1990.

    ERIC Educational Resources Information Center

    South Dakota State Library, Pierre.

    This bibliography lists materials that are available in large print format from the South Dakota State Library. The annotated entries are printed in large print and include the title of the material and its author, call number, publication date, and type of story or subject area covered. Some recorded items are included in the list. The entries…

  3. LARGE BUILDING RADON MANUAL

    EPA Science Inventory

    The report summarizes information on how building systems -- especially the heating, ventilating, and air-conditioning (HVAC) system -- influence radon entry into large buildings and can be used to mitigate radon problems. It addresses the fundamentals of large building HVAC syst...

  4. The large hadron collider

    NASA Astrophysics Data System (ADS)

    Brüning, O.; Burkhardt, H.; Myers, S.

    2012-07-01

    The Large Hadron Collider (LHC) is the world’s largest and most energetic particle collider. It took many years to plan and build this large complex machine which promises exciting, new physics results for many years to come. We describe and review the machine design and parameters, with emphasis on subjects like luminosity and beam conditions which are relevant for the large community of physicists involved in the experiments at the LHC. First collisions in the LHC were achieved at the end of 2009 and followed by a period of a rapid performance increase. We discuss what has been learned so far and what can be expected for the future.

  5. Learning with Large Blocks.

    ERIC Educational Resources Information Center

    Cartwright, Sally

    1990-01-01

    Discusses how large hollow blocks can meet many preschool children's learning needs through creative dramatic play, and also gives some guidelines on how these blocks can be constructed by parents and teachers. (BB)

  6. Closed Large Cell Clouds

    Atmospheric Science Data Center

    2013-04-19

    Closed Large Cell Clouds in the South Pacific ... unperturbed by cyclonic or frontal activity. When the cell centers are cloudy and the main sinking motion is concentrated at cell ...

  7. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  8. Large intestine (colon) (image)

    MedlinePlus

    ... portion of the digestive system most responsible for absorption of water from the indigestible residue of food. The ileocecal valve of the ileum (small intestine) passes material into the large intestine at the ...

  9. Large Customers (DR Sellers)

    SciTech Connect

    Kiliccot, Sila

    2011-10-25

    State of the large customers for demand response integration of solar and wind into electric grid; openADR; CAISO; DR as a pseudo generation; commercial and industrial DR strategies; California regulations

  10. Large electrostatic accelerators

    SciTech Connect

    Jones, C.M.

    1984-01-01

    The increasing importance of energetic heavy ion beams in the study of atomic physics, nuclear physics, and materials science has partially or wholly motivated the construction of a new generation of large electrostatic accelerators designed to operate at terminal potentials of 20 MV or above. In this paper, the author briefly discusses the status of these new accelerators and also discusses several recent technological advances which may be expected to further improve their performance. The paper is divided into four parts: (1) a discussion of the motivation for the construction of large electrostatic accelerators, (2) a description and discussion of several large electrostatic accelerators which have been recently completed or are under construction, (3) a description of several recent innovations which may be expected to improve the performance of large electrostatic accelerators in the future, and (4) a description of an innovative new large electrostatic accelerator whose construction is scheduled to begin next year. Due to time and space constraints, discussion is restricted to consideration of only tandem accelerators.

  11. Large pore alumina

    SciTech Connect

    Ternan, M. )

    1994-04-01

    Earlier the authors reported preparation conditions for an alumina material which contained large diameter macropores (0.1-100 µm). The preparation variable that caused the formation of the uncommonly large macropores was the large acid/alumina ratio, which was very much greater than the ones used in the preparation of conventional porous aluminas. The alumina material had large BET surface areas (200 m²/g) and small mercury porosimetry surface areas (1 m²/g). This indicated that micropores (d_MIP < 2 nm) were present in the alumina, since they were large enough for nitrogen gas molecules to enter, but too small for mercury to enter. As a result they would be too small for significant diffusion rates of residuum molecules. In earlier work, the calcining temperature was fixed at 500 °C. In the current work, variations in both calcining temperature and calcining time were used in an attempt to convert some of the micropores into mesopores. 12 refs., 2 figs., 1 tab.

  12. Dynamic of large reflectors

    NASA Astrophysics Data System (ADS)

    Picard, P.; Dauviau, C.; Lefebvre, J. D.; Garnier, C.; Truchi, C.

    1991-10-01

    Work in the field of the unfurlable mesh reflectors as part of the dynamic of large reflectors project is presented. These studies use the unfurlable reflector design developed since 1983: gilded molybdenum reflective mesh supported by a deployable truss. From this strong background two specific critical points are studied: the deployment phase, where, for a deployment test, the test measurements are correlated with dynamic software predictions and the deployment bench chosen uses a 0 g compensation device by helium balloons; the antenna deployed configuration, where the interaction between a large structure and the attitude and orbit control subsystem is analyzed.

  13. Large TV display system

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor)

    1986-01-01

    A relatively small and low cost system is provided for projecting a large and bright television image onto a screen. A miniature liquid crystal array is driven by video circuitry to produce a pattern of transparencies in the array corresponding to a television image. Light is directed against the rear surface of the array to illuminate it, while a projection lens lies in front of the array to project the image of the array onto a large screen. Grid lines in the liquid crystal array are eliminated by a spatial filter which comprises a negative of the Fourier transform of the grid.

  14. The Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Myers, Stephen

    The Large Hadron Collider (LHC) was first suggested (in a documented way) in 1983 [1] as a possible future hadron collider to be installed in the 27 km "LEP" tunnel. More than thirty years later the collider has been operated successfully with beam for three years with spectacular performance and has discovered the long-sought-after Higgs boson. The LHC is the world's largest and most energetic particle collider. It took many years to plan and build this large complex machine which promises exciting, new physics results for many years to come...

  15. Risks of Large Portfolios

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Shi, Xiaofeng

    2014-01-01

    The risk of a large portfolio is often estimated by substituting a good estimator of the volatility matrix. However, the accuracy of such a risk estimator is largely unknown. We study factor-based risk estimators under a large amount of assets, and introduce a high-confidence level upper bound (H-CLUB) to assess the estimation. The H-CLUB is constructed using the confidence interval of risk estimators with either known or unknown factors. We derive the limiting distribution of the estimated risks in high dimensionality. We find that when the dimension is large, the factor-based risk estimators have the same asymptotic variance no matter whether the factors are known or not, which is slightly smaller than that of the sample covariance-based estimator. Numerically, H-CLUB outperforms the traditional crude bounds, and provides an insightful risk assessment. In addition, our simulated results quantify the relative error in the risk estimation, which is usually negligible using 3-month daily data. PMID:26195851
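The factor-based risk estimator that the H-CLUB assesses has a simple algebraic core: a covariance matrix assembled from factor loadings plus idiosyncratic variances, plugged into the portfolio quadratic form. A toy sketch of that core (illustrative numbers, not the paper's estimators of the loadings or factors):

```python
import numpy as np

def factor_portfolio_risk(weights, loadings, factor_cov, idio_var):
    """Portfolio variance under a factor model:
    Sigma = B F B' + diag(idio_var), risk = w' Sigma w."""
    w = np.asarray(weights, float)
    B = np.asarray(loadings, float)
    F = np.asarray(factor_cov, float)
    sigma = B @ F @ B.T + np.diag(np.asarray(idio_var, float))
    return float(w @ sigma @ w)

# Two assets, one common factor with variance 0.04
risk = factor_portfolio_risk(
    weights=[0.5, 0.5],
    loadings=[[1.0], [1.0]],
    factor_cov=[[0.04]],
    idio_var=[0.01, 0.01],
)
# risk = 0.04 common-factor variance + 0.005 diversified idiosyncratic part
```

As the number of assets grows, the idiosyncratic term diversifies away while the common-factor term does not, which is why the accuracy of the loading estimates dominates large-portfolio risk estimation.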

  16. LARGE BUILDING HVAC SIMULATION

    EPA Science Inventory

    The report discusses the monitoring and collection of data relating to indoor pressures and radon concentrations under several test conditions in a large school building in Bartow, Florida. The Florida Solar Energy Center (FSEC) used an integrated computational software, FSEC 3.0...

  17. Developing Large CAI Packages.

    ERIC Educational Resources Information Center

    Reed, Mary Jac M.; Smith, Lynn H.

    1983-01-01

    When developing large computer-assisted instructional (CAI) courseware packages, it is suggested that there be more attentive planning to the overall package design before actual lesson development is begun. This process has been simplified by modifying the systems approach used to develop single CAI lessons, followed by planning for the…

  18. Death Writ Large

    ERIC Educational Resources Information Center

    Kastenbaum, Robert

    2004-01-01

    Mainstream thanatology has devoted its efforts to improving the understanding, care, and social integration of people who are confronted with life-threatening illness or bereavement. This article suggests that it might now be time to expand the scope and mission to include large-scale death and death that occurs through complex and multi-domain…

  19. Teaching Large Evening Classes

    ERIC Educational Resources Information Center

    Wambuguh, Oscar

    2008-01-01

    High enrollments, conflicting student work schedules, and the sheer convenience of once-a-week classes are pushing many colleges to schedule evening courses. Held from 6 to 9 pm or 7 to 10 pm, these classes are typically packed, sometimes with more than 150 students in a large lecture theater. How can faculty effectively teach, control, or even…

  20. Novel large aperture EBCCD

    NASA Astrophysics Data System (ADS)

    Suzuki, Atsumu; Aoki, Shigeki; Haba, Junji; Sakuda, Makoto; Suyama, Motohiro

    2011-02-01

    A novel large aperture electron bombardment charge coupled device (EBCCD) has been developed. The diameter of its photocathode is 10 cm and it is the first EBCCD with such a large aperture. Its gain shows good linearity as a function of applied voltage up to -12 kV, where the gain is 2400. The spatial resolution was measured using ladder pattern charts. It is better than 2 line pairs/mm, which corresponds to 3.5 times the CCD pixel size. The spatial resolution was also measured with a copper foil pattern on a fluorescent screen irradiated with X-rays (14 and 18 keV) and a 60 keV gamma-ray from an americium source. The result was consistent with the measurement using ladder pattern charts. The output signal as a function of input light intensity shows better linearity than that of image intensifier tubes (IIT) as expected. We could detect cosmic rays passing through a scintillating fiber block and a plastic scintillator as a demonstration for a practical use in particle physics experiments. This kind of large aperture EBCCD can, for example, be used as an image sensor for a detector with a large number of readout channels and is expected to be additionally applied to other physics experiments.

  1. Large, Easily Deployable Structures

    NASA Technical Reports Server (NTRS)

    Agan, W. E.

    1983-01-01

    Study of concepts for large space structures will interest those designing scaffolding, radio towers, rescue equipment, and prefabricated shelters. Double-fold, double-cell module was selected for further design and for zero gravity testing. Concept is viable for deployment by humans outside space vehicle as well as by remotely operated manipulator.

  2. Estimating Large Numbers

    ERIC Educational Resources Information Center

    Landy, David; Silbert, Noah; Goldin, Aleah

    2013-01-01

    Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions…

  3. The Large Millimeter Telescope

    NASA Astrophysics Data System (ADS)

    Schloerb, F. Peter; Carrasco, Luis

    2004-10-01

    We present a summary of the Large Millimeter Telescope Project and its present status. The Large Millimeter Telescope (LMT) is a joint project of the University of Massachusetts (UMass) in the USA and the Instituto Nacional de Astrofisica, Optica y Electronica (INAOE) in Mexico to build a 50m-diameter millimeter-wave telescope. The LMT is being built at an altitude of 4600 m atop Volcan Sierra Negra, an extinct volcanic peak in the state of Puebla, Mexico, approximately 100 km east of the city of Puebla. Construction of the antenna is now well underway. The basic structure with a limited number of surface panels is expected to be completed in 2005. Engineering acceptance and telescope commissioning are expected to be completed in 2007.

  4. The Large Hadron Collider.

    PubMed

    Evans, Lyndon

    2012-02-28

    The construction of the Large Hadron Collider (LHC) has been a massive endeavour spanning almost 30 years from conception to commissioning. Building the machine with the highest possible energy (7 TeV) in the existing large electron-positron (LEP) collider tunnel of 27 km circumference and with a tunnel diameter of only 3.8 m has required considerable innovation. The first was the development of a two-in-one magnet, where the two rings are integrated into a single magnetic structure. This compact two-in-one structure was essential for the LHC owing to the limited space available in the existing LEP collider tunnel and the cost. The second was a bold move to the use of superfluid helium cooling on a massive scale, which was imposed by the need to achieve a high (8.3 T) magnetic field using an affordable Nb-Ti superconductor.

  5. Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Murdin, P.

    2000-11-01

    The larger of two nearby companions of the Milky Way Galaxy that can be seen with the naked eye in the southern hemisphere sky and which are named after the Portuguese navigator, Ferdinand Magellan, who observed them in 1519 during his circumnavigation of the world. Located in the constellation of Dorado, at a distance of about 170 000 light-years, the Large Magellanic Cloud (LMC) has an overall ...

  6. The Large Area Telescope

    SciTech Connect

    Michelson, Peter F.; /KIPAC, Menlo Park /Stanford U., HEPL

    2007-11-13

The Large Area Telescope (LAT), one of two instruments on the Gamma-ray Large Area Space Telescope (GLAST) mission, is an imaging, wide field-of-view, high-energy pair-conversion telescope, covering the energy range from ~20 MeV to more than 300 GeV. The LAT is being built by an international collaboration with contributions from space agencies, high-energy particle physics institutes, and universities in France, Italy, Japan, Sweden, and the United States. The scientific objectives the LAT will address include resolving the high-energy gamma-ray sky and determining the nature of the unidentified gamma-ray sources and the origin of the apparently isotropic diffuse emission observed by EGRET; understanding the mechanisms of particle acceleration in celestial sources, including active galactic nuclei, pulsars, and supernova remnants; studying the high-energy behavior of gamma-ray bursts and transients; using high-energy gamma-rays to probe the early universe to z ≥ 6; and probing the nature of dark matter. The components of the LAT include a precision silicon-strip detector tracker and a CsI(Tl) calorimeter, a segmented anticoincidence shield that covers the tracker array, and a programmable trigger and data acquisition system. The calorimeter's depth and segmentation enable the high-energy reach of the LAT and contribute significantly to background rejection. The aspect ratio of the tracker (height/width) is 0.4, allowing a large field-of-view and ensuring that nearly all pair-conversion showers initiated in the tracker will pass into the calorimeter for energy measurement. This paper includes a description of each of these LAT subsystems as well as a summary of the overall performance of the telescope.

  7. Large coil test facility

    SciTech Connect

    Nelms, L.W.; Thompson, P.B.

    1980-01-01

Final design of the facility is nearing completion, and 20% of the construction has been accomplished. A large vacuum chamber houses the test assembly, which is coupled to appropriate cryogenic, electrical, instrumentation, and diagnostic systems. Adequate assembly/disassembly areas, shop space, a test control center, offices, and test support laboratories are located in the same building. Assembly and installation operations are accomplished with an overhead crane. The major subsystems are the vacuum system, the test stand assembly, the cryogenic system, the experimental electric power system, the instrumentation and control system, and the data acquisition system.

  8. The large hadron computer

    NASA Astrophysics Data System (ADS)

    Hirstius, Andreas

    2008-11-01

    In the mid-1990s, when CERN physicists made their first cautious estimates of the amount of data that experiments at the Large Hadron Collider (LHC) would produce, the microcomputer component manufacturer Intel had just released the Pentium Pro processor. Windows was the dominant operating system, although Linux was gaining momentum. CERN had recently made the World Wide Web public, but the system was still a long way from the all-encompassing network it is today. And a single gigabyte (109 bytes) of disk space cost several hundred dollars.

  9. Death writ large.

    PubMed

    Kastenbaum, Robert

    2004-05-01

    Mainstream thanatology has devoted its efforts to improving the understanding, care, and social integration of people who are confronted with life-threatening illness or bereavement. This article suggests that it might now be time to expand the scope and mission to include large-scale death and death that occurs through complex and multi-domain processes. Obstacles to developing a systematic macrothanatology are identified. The 9-11-01 terrorist attacks on America are discussed as an example of mass death with complex correlates and consequences. Other examples are taken from the realms of war, disease, disaster, and extinction.

  10. Cogeneration in large complexes

    SciTech Connect

    Kovacik, J.M.; Franklin, J.C.

    1982-02-01

Power cogeneration in large chemical plants producing sulfuric acid and phosphate fertilizers is covered. In these plants, a large quantity of "by-product" steam is generated which can be expanded prior to extraction for process use. Steam generated in excess of process needs can be expanded through the steam turbine to a condenser. The combination of a sulfuric acid production facility with a phosphate complex producing wet-process phosphoric acid and diammonium phosphate provides a unique opportunity for cogeneration. The exothermic oxidation reactions in the production of sulfuric (or nitric) acid provide the thermal energy for "by-product" steam production at elevated steam conditions. Expanding the steam generated in an automatic-extraction, condensing steam turbine-generator permits power generation without any incremental fuel requirement in the process plant. Furthermore, steam demanded by the phosphate complex for evaporators, vaporizers, and other uses would be extracted from the steam turbine-generator. Many of the practical energy-system and hardware considerations are briefly discussed. The data and examples presented illustrate the attractive economics and operational flexibility available through the use of these cogeneration systems.

  11. Large Particle Titanate Sorbents

    SciTech Connect

    Taylor-Pashow, K.

    2015-10-08

    This research project was aimed at developing a synthesis technique for producing large particle size monosodium titanate (MST) to benefit high level waste (HLW) processing at the Savannah River Site (SRS). Two applications were targeted, first increasing the size of the powdered MST used in batch contact processing to improve the filtration performance of the material, and second preparing a form of MST suitable for deployment in a column configuration. Increasing the particle size should lead to improvements in filtration flux, and decreased frequency of filter cleaning leading to improved throughput. Deployment of MST in a column configuration would allow for movement from a batch process to a more continuous process. Modifications to the typical MST synthesis led to an increase in the average particle size. Filtration testing on dead-end filters showed improved filtration rates with the larger particle material; however, no improvement in filtration rate was realized on a crossflow filter. In order to produce materials suitable for column deployment several approaches were examined. First, attempts were made to coat zirconium oxide microspheres (196 µm) with a layer of MST. This proved largely unsuccessful. An alternate approach was then taken synthesizing a porous monolith of MST which could be used as a column. Several parameters were tested, and conditions were found that were able to produce a continuous structure versus an agglomeration of particles. This monolith material showed Sr uptake comparable to that of previously evaluated samples of engineered MST in batch contact testing.

  12. Large Surface Measuring Machine

    NASA Astrophysics Data System (ADS)

    Egdall, Mark; Breidenthal, Robert S.

    1983-09-01

A new surface measuring concept developed under government contract at Itek Optical Systems has been previously reported by Allen Greenleaf. The method uses four steerable distance-measuring interferometers at the corners of a tetrahedron to determine the positions of a retroreflecting target at various locations on the surface being measured. A small wooden breadboard had been built and tested, demonstrating the feasibility of the concept. This paper reports the building of a scaled-up prototype surface measuring machine to allow the measurement of large aspheric surfaces. A major advantage of the device is that, unlike conventional interferometry, it provides surface measurement in absolute coordinates, thus allowing direct determination of radius of curvature. In addition, the device is self-calibrating. Measurements of a 24-inch mirror have been made with the new machine, giving repeatability of 4 µm peak sag in the curvature and accuracy of 0.7 µm rms in the surface figure at best focus. The device is currently being used in the production grinding of large aspheric mirrors at Itek. The device is potentially scalable to other industries where highly accurate measurement of unusual surfaces is required.

  13. Infinitely Large New Dimensions

    SciTech Connect

    Arkani-Hamed, Nima; Dimopoulos, Savas; Dvali, Gia; Kaloper, Nemanja

    1999-07-29

We construct intersecting brane configurations in anti-de Sitter space localizing gravity to the intersection region, with any number n of extra dimensions. This allows us to construct two kinds of theories with infinitely large new dimensions, TeV scale quantum gravity and sub-millimeter deviations from Newton's Law. The effective 4D Planck scale M_Pl is determined in terms of the fundamental Planck scale M_* and the AdS radius of curvature L via the familiar relation M_Pl^2 ≈ M_*^(2+n) L^n; L acts as an effective radius of compactification for gravity on the intersection. Taking M_* ≈ TeV and L ≈ sub-mm reproduces the phenomenology of theories with large extra dimensions. Alternately, taking M_* ≈ L^(-1) ≈ M_Pl, and placing our 3-brane a distance ≈ 100 M_Pl^(-1) away from the intersection gives us a theory with an exponential determination of the Weak/Planck hierarchy.
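As a worked check of the quoted relation (the numerical inputs are illustrative choices consistent with the abstract, not values it states), take n = 2 extra dimensions, M_* ≈ 1 TeV, and L ≈ 0.1 mm ≈ 5 × 10^12 GeV^-1 in natural units:

```latex
M_{\mathrm{Pl}}^2 \approx M_*^{\,2+n} L^{\,n}
  \;\overset{n=2}{=}\; M_*^{4} L^{2}
  \approx (10^{3}\,\mathrm{GeV})^{4}\,(5\times 10^{12}\,\mathrm{GeV}^{-1})^{2}
  = 2.5\times 10^{37}\,\mathrm{GeV}^{2},
\qquad
M_{\mathrm{Pl}} \approx 5\times 10^{18}\,\mathrm{GeV}.
```

This recovers the observed 4D Planck scale from TeV-scale fundamental gravity. In the alternate limit M_* ≈ L^(-1) ≈ M_Pl the relation becomes trivial, and the hierarchy instead arises exponentially from the ≈ 100 M_Pl^(-1) separation of the 3-brane from the intersection.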

  14. Large Spectral Library Problem

    SciTech Connect

    Chilton, Lawrence K.; Walsh, Stephen J.

    2008-10-03

Hyperspectral imaging produces a spectrum or vector at each image pixel. These spectra can be used to identify materials present in the image. In some cases, spectral libraries representing atmospheric chemicals or ground materials are available. The challenge is to determine if any of the library chemicals or materials exist in the hyperspectral image. The number of spectra in these libraries can be very large, far exceeding the number of spectral channels collected in the field. Suppose an image pixel contains a mixture of p spectra from the library. Is it possible to uniquely identify these p spectra? We address this question in this paper and refer to it as the Large Spectral Library (LSL) problem. We show how to determine if unique identification is possible for any given library. We also show that if p is small compared to the number of spectral channels, it is very likely that unique identification is possible. We show that unique identification becomes less likely as p increases.
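As an illustrative aside (this is a standard compressed-sensing style condition, not necessarily the authors' test), uniqueness of a p-term mixture is equivalent to every set of 2p library columns being linearly independent. A brute-force check of that condition on a small synthetic library might look like:

```python
# Illustrative sketch: a mixture of p library spectra is uniquely
# identifiable exactly when every 2p-column subset of the library
# matrix has full rank (i.e., the matrix "spark" exceeds 2p).
# The library here is synthetic Gaussian data, not real spectra.
from itertools import combinations
import numpy as np

def supports_unique_id(library, p):
    """Brute-force test: True if all 2p-column subsets are full rank."""
    n_channels, n_spectra = library.shape
    k = 2 * p
    if k > n_channels:
        return False  # more mixture unknowns than channels: never unique
    return all(
        np.linalg.matrix_rank(library[:, list(cols)]) == k
        for cols in combinations(range(n_spectra), k)
    )

rng = np.random.default_rng(0)
lib = rng.standard_normal((20, 8))   # 20 spectral channels, 8 library spectra
print(supports_unique_id(lib, 3))    # random spectra: almost surely True
```

The exhaustive subset enumeration is exponential in 2p, which is exactly why the paper's question of deciding identifiability for *large* libraries is nontrivial.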

  15. Synchronizing large systolic arrays

    SciTech Connect

    Fisher, A.L.; Kung, H.T.

    1982-04-01

Parallel computing structures consist of many processors operating simultaneously. If a concurrent structure is regular, as in the case of a systolic array, it may be convenient to think of all processors as operating in lock step. Totally synchronized systems controlled by central clocks are difficult to implement because of the inevitable problem of clock skews and delays. An alternative means of enforcing the necessary synchronization is the use of self-timed, asynchronous schemes, at the expense of increased design complexity and hardware cost. Realizing that different circumstances call for different synchronization methods, this paper provides a spectrum of synchronization models; based on the assumptions made for each model, theoretical lower bounds on clock skew are derived, and appropriate or best-possible synchronization schemes for systolic arrays are proposed. This paper represents a first step towards a systematic study of synchronization problems for large systolic arrays.

  16. Large area plasma source

    NASA Technical Reports Server (NTRS)

    Foster, John (Inventor); Patterson, Michael (Inventor)

    2008-01-01

    An all permanent magnet Electron Cyclotron Resonance, large diameter (e.g., 40 cm) plasma source suitable for ion/plasma processing or electric propulsion, is capable of producing uniform ion current densities at its exit plane at very low power (e.g., below 200 W), and is electrodeless to avoid sputtering or contamination issues. Microwave input power is efficiently coupled with an ionizing gas without using a dielectric microwave window and without developing a throat plasma by providing a ferromagnetic cylindrical chamber wall with a conical end narrowing to an axial entrance hole for microwaves supplied on-axis from an open-ended waveguide. Permanent magnet rings are attached inside the wall with alternating polarities against the wall. An entrance magnet ring surrounding the entrance hole has a ferromagnetic pole piece that extends into the chamber from the entrance hole to a continuing second face that extends radially across an inner pole of the entrance magnet ring.

  17. The Large Millimeter Telescope

    NASA Astrophysics Data System (ADS)

    Pérez-Grovas, Alfonso Serrano; Schloerb, F. Peter; Hughes, David; Yun, Min

    2006-06-01

    We present a summary of the Large Millimeter Telescope (LMT) Project and its current status. The LMT is a joint project of the University of Massachusetts (UMass) in the USA and the Instituto Nacional de Astrofisica, Optica y Electronica (INAOE) in Mexico to build a 50m-diameter millimeter-wave telescope. The LMT site is at an altitude of 4600 m atop Volcan Sierra Negra, an extinct volcanic peak in the state of Puebla, Mexico, approximately 100 km east of the city of Puebla. Construction of the antenna steel structure has been completed and the antenna drive system has been installed. Fabrication of the reflector surface is underway. The telescope is expected to be completed in 2008.

  18. Contrasting Large Solar Events

    NASA Astrophysics Data System (ADS)

    Lanzerotti, Louis J.

    2010-10-01

    After an unusually long solar minimum, solar cycle 24 is slowly beginning. A large coronal mass ejection (CME) from sunspot 1092 occurred on 1 August 2010, with effects reaching Earth on 3 August and 4 August, nearly 38 years to the day after the huge solar event of 4 August 1972. The prior event, which those of us engaged in space research at the time remember well, recorded some of the highest intensities of solar particles and rapid changes of the geomagnetic field measured to date. What can we learn from the comparisons of these two events, other than their essentially coincident dates? One lesson I took away from reading press coverage and Web reports of the August 2010 event is that the scientific community and the press are much more aware than they were nearly 4 decades ago that solar events can wreak havoc on space-based technologies.

  19. Large area Czochralski silicon

    NASA Technical Reports Server (NTRS)

    Rea, S. N.; Gleim, P. S.

    1977-01-01

The overall cost effectiveness of the Czochralski process for producing large-area silicon was determined. The feasibility of growing several 12 cm diameter crystals sequentially at 12 cm/h during a furnace run and the subsequent slicing of the ingot using a multiblade slurry saw were investigated. The goal of the wafering process was a slice thickness of 0.25 mm with minimal kerf. A slice + kerf of 0.56 mm was achieved on 12 cm crystal using both 400 grit B4C and SiC abrasive slurries. Crystal growth experiments were performed at 12 cm diameter in a commercially available puller with both 10 and 12 kg melts. Several modifications to the puller hot zone were required to achieve stable crystal growth over the entire crystal length and to prevent crystallinity loss a few centimeters down the crystal. The maximum practical growth rate for 12 cm crystal in this puller design was 10 cm/h, with 12 to 14 cm/h being the absolute maximum range at which melt freeze occurred.

  20. Stability of large systems

    NASA Astrophysics Data System (ADS)

    Hastings, Harold

    2007-03-01

We address a long-standing dilemma concerning stability of large systems. MacArthur (1955) and Hutchinson (1959) argued that more ``complex'' natural systems tended to be more stable than less complex systems based upon energy flow. May (1972) argued the opposite, using random matrix models; see Cohen and Newman (1984, 1985), Bai and Yin (1986). We show that in some sense both are right: under reasonable scaling assumptions on interaction strength, Lyapunov stability increases but structural stability decreases as complexity is increased (cf. Harrison, 1979; Hastings, 1984). We apply this result to a variety of network systems. References: Bai, Z.D. & Yin, Y.Q. 1986. Probab. Th. Rel. Fields 73, 555. Cohen, J.E. & Newman, C.M. 1984. Annals Probab. 12, 283; 1985. Theoret. Biol. 113, 153. Harrison, G.W. 1979. Amer. Natur. 113, 659. Hastings, H.M. 1984. BioSystems 17, 171. Hastings, H.M., Juhasz, F. & Schreiber, M. 1992. Proc. Royal Soc., Ser. B 249, 223. Hutchinson, G.E. 1959. Amer. Natur. 93, 145. MacArthur, R.H. 1955. Ecology 35, 533. May, R.M. 1972. Nature 238, 413.
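May's random-matrix side of the dilemma can be illustrated numerically. The sketch below is an assumption-laden toy (not the model analyzed in this abstract): it builds community matrices with self-regulation -1 on the diagonal and random off-diagonal interactions of strength sigma and connectance C, then estimates the fraction that are Lyapunov stable on either side of May's threshold sigma·sqrt(N·C) = 1.

```python
# Toy illustration of May (1972): J has -1 self-regulation on the
# diagonal and random interactions (std dev sigma, connectance C)
# off the diagonal. Stability flips sharply near sigma*sqrt(N*C) = 1.
import numpy as np

def fraction_stable(n, sigma, connectance, trials=200, seed=0):
    """Fraction of random community matrices with all Re(eigenvalue) < 0."""
    rng = np.random.default_rng(seed)
    stable = 0
    for _ in range(trials):
        j = rng.normal(0.0, sigma, (n, n))
        j *= rng.random((n, n)) < connectance   # sparsify to connectance C
        np.fill_diagonal(j, -1.0)               # self-regulation
        if np.max(np.linalg.eigvals(j).real) < 0:
            stable += 1
    return stable / trials

# sigma*sqrt(N*C) = 0.35 (below threshold) vs 2.1 (above threshold):
print(fraction_stable(100, sigma=0.05, connectance=0.5))
print(fraction_stable(100, sigma=0.3, connectance=0.5))
```

By the circular law, the eigenvalues of the interaction part fill a disk of radius roughly sigma·sqrt(N·C) centered at -1, so stability is lost almost surely once that radius exceeds 1, which is the sharp transition this experiment exhibits.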

  1. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  2. The large binocular telescope.

    PubMed

    Hill, John M

    2010-06-01

    The Large Binocular Telescope (LBT) Observatory is a collaboration among institutions in Arizona, Germany, Italy, Indiana, Minnesota, Ohio, and Virginia. The telescope on Mount Graham in Southeastern Arizona uses two 8.4 m diameter primary mirrors mounted side by side. A unique feature of the LBT is that the light from the two Gregorian telescope sides can be combined to produce phased-array imaging of an extended field. This cophased imaging along with adaptive optics gives the telescope the diffraction-limited resolution of a 22.65 m aperture and a collecting area equivalent to an 11.8 m circular aperture. This paper describes the design, construction, and commissioning of this unique telescope. We report some sample astronomical results with the prime focus cameras. We comment on some of the technical challenges and solutions. The telescope uses two F/15 adaptive secondaries to correct atmospheric turbulence. The first of these adaptive mirrors has completed final system testing in Firenze, Italy, and is planned to be at the telescope by Spring 2010. PMID:20517352

  3. The Large Millimeter Telescope

    NASA Astrophysics Data System (ADS)

    Schloerb, F. Peter

    2008-07-01

    This paper, presented on behalf of the Large Millimeter Telescope (LMT) project team, describes the status and near-term plans for the telescope and its initial instrumentation. The LMT is a bi-national collaboration between Mexico and the USA, led by the Instituto Nacional de Astrofísica, Optica y Electronica (INAOE) and the University of Massachusetts at Amherst, to construct, commission and operate a 50m-diameter millimeter-wave radio telescope. Construction activities are nearly complete at the 4600m LMT site on the summit of Sierra Negra, an extinct volcano in the Mexican state of Puebla. Full movement of the telescope, under computer control in both azimuth and elevation, has been achieved. First-light at centimeter wavelengths on astronomical sources was obtained in November 2006. Installation of precision surface segments for millimeter-wave operation is underway, with the inner 32m-diameter of the surface now complete and ready to be used to obtain first light at millimeter wavelengths in 2008. Installation of the remainder of the reflector will continue during the next year and be completed in 2009 for final commissioning of the antenna. The full LMT antenna, outfitted with its initial complement of scientific instruments, will be a world-leading scientific research facility for millimeter-wave astronomy.

  4. The Large Millimeter Telescope

    NASA Astrophysics Data System (ADS)

    Hughes, D. H.; Schloerb, F. P.; LMT Project Team

    2009-05-01

This paper, presented on behalf of the Large Millimeter Telescope (LMT) project team, describes the status and near-term plans for the telescope and its initial instrumentation. The LMT is a bi-national collaboration between México and the USA, led by the Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE) and the University of Massachusetts at Amherst, to construct, commission and operate a 50 m diameter millimeter-wave radio telescope. Construction activities are nearly complete at the LMT site, at an altitude of ~4600 m on the summit of Sierra Negra, an extinct volcano in the Mexican state of Puebla. Full movement of the telescope, under computer control in both azimuth and elevation, has been achieved. First-light at centimeter wavelengths on astronomical sources was obtained in November 2006. Installation of precision surface segments for millimeter-wave operation is underway, with the inner 32 m diameter of the surface now complete and ready to be used to obtain first-light at millimeter wavelengths in 2008. Installation of the remainder of the reflector will continue during the next year and be completed in 2009 for final commissioning of the antenna. The full LMT antenna, outfitted with its initial complement of scientific instruments, will be a world-leading scientific research facility for millimeter-wave astronomy.

  5. Large forging manufacturing process

    DOEpatents

    Thamboo, Samuel V.; Yang, Ling

    2002-01-01

A process for forging large components of Alloy 718 material so that the components do not exhibit abnormal grain growth includes the steps of: a) providing a billet with an average grain size between ASTM 0 and ASTM 3; b) heating the billet to a temperature of between 1750 °F and 1800 °F; c) upsetting the billet to obtain a component part with a minimum strain of 0.125 in at least selected areas of the part; d) reheating the component part to a temperature between 1750 °F and 1800 °F; e) upsetting the component part to a final configuration such that said selected areas receive no strains between 0.01 and 0.125; f) solution treating the component part at a temperature of between 1725 °F and 1750 °F; and g) aging the component part over predetermined times at different temperatures. A modified process achieves abnormal grain growth in selected areas of a component where desirable.

  6. Large building characterization

    SciTech Connect

    Menetrez, M.Y.; Sanchez, D.C.; Kulp, R.N.; Pyle, B.; Williamson, A.; McDonough, S.

    1994-12-31

Buildings are characterized in this project by examining radon concentrations and indoor air quality (IAQ) levels as affected by building ventilation dynamics. IAQ data collection stations (IAQDS) for monitoring and data logging, remote switches (pressure and sail switches), and a weather station were installed. Measurements of indoor radon, carbon dioxide (CO2), and particle concentrations; temperature; humidity; indoor-to-outdoor and sub-slab pressure differentials; ambient and sub-slab radon concentrations; and outdoor air intake flow rates were collected. The outdoor air intake was adjusted, and fan cycles were controlled while tracer gas measurements were taken in all zones and IAQDS data were processed. Ventilation, infiltration, mixing rates, radon entry, pressure/temperature convective driving forces, CO2 generation/decay concentrations, and IAQ levels were defined. These dynamic interacting processes characterize the behavior of this and similar large buildings. The techniques incorporated into the experimental plan are discussed with project rationale. Results and the discussion of those results are beyond the limits of this paper.

  7. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
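The external penalty idea the abstract describes can be sketched in a few lines. The code below is a hypothetical toy (function names and the test problem are illustrative; it is not the BIGDOT implementation): each outer iteration minimizes an unconstrained penalized objective with SciPy's BFGS, then increases the penalty multiplier so the iterates approach the constrained optimum.

```python
# Toy exterior-penalty method: minimize f(x) s.t. g_i(x) <= 0 by
# minimizing  phi(x) = f(x) + r * sum(max(0, g_i(x))^2)  for an
# increasing sequence of penalty multipliers r.  Illustrative only.
import numpy as np
from scipy.optimize import minimize

def exterior_penalty(f, gs, x0, r0=1.0, growth=10.0, outer_iters=6):
    x = np.asarray(x0, dtype=float)
    r = r0
    for _ in range(outer_iters):
        def phi(x, r=r):
            violation = sum(max(0.0, g(x)) ** 2 for g in gs)
            return f(x) + r * violation
        x = minimize(phi, x, method="BFGS").x   # unconstrained subproblem
        r *= growth                             # tighten the penalty
    return x

# Example: minimize (x - 2)^2 subject to x <= 1; constrained optimum is x = 1.
f = lambda x: (x[0] - 2.0) ** 2
g = lambda x: x[0] - 1.0          # g(x) <= 0  encodes  x <= 1
x_star = exterior_penalty(f, [g], x0=[0.0])
print(x_star)   # approaches 1 from the infeasible side as r grows
```

A known appeal of this family of methods, consistent with the abstract's memory claim, is that the subproblem needs only function and gradient values of the single scalar phi, with no working set or constraint Jacobian storage.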

  8. Large Databases in Astronomy

    NASA Astrophysics Data System (ADS)

    Szalay, Alexander S.; Gray, Jim; Kunszt, Peter; Thakar, Anirudha; Slutz, Don

The next-generation astronomy digital archives will cover most of the sky at fine resolution in many wavelengths, from X-rays through ultraviolet, optical, and infrared. The archives will be stored at diverse geographical locations. The intensive use of advanced data archives will enable astronomers to explore their data interactively. Data access will be aided by multidimensional spatial and attribute indices. The data will be partitioned in many ways. Small tag indices consisting of the most popular attributes will accelerate frequent searches. Splitting the data among multiple servers will allow parallel, scalable I/O and parallel data analysis. Hashing techniques will allow efficient clustering and pair-wise comparison algorithms that should parallelize nicely. Randomly sampled subsets will allow debugging otherwise large queries at the desktop. Central servers will operate a data pump to support sweep searches touching most of the data. The anticipated queries will require special operators related to angular distances and complex similarity tests of object properties, like shapes, colors, velocity vectors, or temporal behaviors. These issues pose interesting data management challenges.

  9. Infinitely Large New Dimensions

    SciTech Connect

    Arkani-Hamed, Nima; Dimopoulos, Savas; Dvali, Gia; Kaloper, Nemanja

    2000-01-24

We construct intersecting brane configurations in anti-de Sitter (AdS) space which localize gravity to the intersection region, generalizing the trapping of gravity to any number n of infinite extra dimensions. Since the 4D Planck scale M_Pl is determined by the fundamental Planck scale M_* and the AdS radius L via the familiar relation M_Pl^2 ≈ M_*^(2+n) L^n, we get two kinds of theories with TeV scale quantum gravity and submillimeter deviations from Newton's law. With M_* ≈ TeV and L ≈ submillimeter, we recover the phenomenology of theories with large extra dimensions. Alternatively, if M_* ≈ L^(-1) ≈ M_Pl, and our 3-brane is at a distance of ≈ 100 M_Pl^(-1) from the intersection, we obtain a theory with an exponential determination of the weak/Planck hierarchy.

  10. Large Format Radiographic Imaging

    SciTech Connect

    J. S. Rohrer; Lacey Stewart; M. D. Wilke; N. S. King; S. A. Baker; Wilfred Lewis

    1999-08-01

    Radiographic imaging continues to be a key diagnostic in many areas at Los Alamos National Laboratory (LANL). Radiographic recording systems have taken on many forms, from high repetition-rate, gated systems to film recording and storage phosphors. Some systems are designed for synchronization to an accelerator while others may be single shot or may record a frame sequence in a dynamic radiography experiment. While film recording remains a reliable standby in the radiographic community, there is growing interest in investigating electronic recording for many applications. The advantages of real time access to remote data acquisition are highly attractive. Cooled CCD camera systems are capable of providing greater sensitivity with improved signal-to-noise ratio. This paper begins with a review of performance characteristics of the Bechtel Nevada large format imaging system, a gated system capable of viewing scintillators up to 300 mm in diameter. We then examine configuration alternatives in lens coupled and fiber optically coupled electro-optical recording systems. Areas of investigation include tradeoffs between fiber optic and lens coupling, methods of image magnification, and spectral matching from scintillator to CCD camera. Key performance features discussed include field of view, resolution, sensitivity, dynamic range, and system noise characteristics.

  11. The large binocular telescope.

    PubMed

    Hill, John M

    2010-06-01

    The Large Binocular Telescope (LBT) Observatory is a collaboration among institutions in Arizona, Germany, Italy, Indiana, Minnesota, Ohio, and Virginia. The telescope on Mount Graham in Southeastern Arizona uses two 8.4 m diameter primary mirrors mounted side by side. A unique feature of the LBT is that the light from the two Gregorian telescope sides can be combined to produce phased-array imaging of an extended field. This cophased imaging along with adaptive optics gives the telescope the diffraction-limited resolution of a 22.65 m aperture and a collecting area equivalent to an 11.8 m circular aperture. This paper describes the design, construction, and commissioning of this unique telescope. We report some sample astronomical results with the prime focus cameras. We comment on some of the technical challenges and solutions. The telescope uses two F/15 adaptive secondaries to correct atmospheric turbulence. The first of these adaptive mirrors has completed final system testing in Firenze, Italy, and is planned to be at the telescope by Spring 2010.

  12. The Large Millimeter Telescope

    NASA Astrophysics Data System (ADS)

    Hughes, David H.; Jáuregui Correa, Juan-Carlos; Schloerb, F. Peter; Erickson, Neal; Romero, Jose Guichard; Heyer, Mark; Reynoso, David Huerta; Narayanan, Gopal; Perez-Grovas, Alfonso Serrano; Souccar, Kamal; Wilson, Grant; Yun, Min

    2010-07-01

    This paper describes the current status of the Large Millimeter Telescope (LMT), the near-term plans for the telescope and the initial suite of instrumentation. The LMT is a bi-national collaboration between Mexico and the USA, led by the Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE) and the University of Massachusetts at Amherst, to construct, commission and operate a 50m-diameter millimeter-wave radio telescope. Construction activities are nearly complete at the 4600m LMT site on the summit of Volcán Sierra Negra, an extinct volcano in the Mexican state of Puebla. Full movement of the telescope, under computer control in both azimuth and elevation, has been achieved. The commissioning and scientific operation of the LMT is divided into two major phases. As part of phase 1, the installation of precision surface segments for millimeter-wave operation within the inner 32m-diameter of the LMT surface is now complete. The alignment of these surface segments is underway. The telescope (in its 32-m diameter format) will be commissioned later this year with first-light scientific observations at 1mm and 3mm expected in early 2011. In phase 2, we will continue the installation and alignment of the remainder of the reflector surface, following which the final commissioning of the full 50-m LMT will take place. The LMT antenna, outfitted with its initial complement of scientific instruments, will be a world-leading scientific research facility for millimeter-wave astronomy.

  13. Large planer for finishing smooth, flat surfaces of large pieces ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Large planer for finishing smooth, flat surfaces of large pieces of metal; in operating condition and used for public demonstrations. - Thomas A. Edison Laboratories, Building No. 5, Main Street & Lakeside Avenue, West Orange, Essex County, NJ

  14. Large-D gravity and low-D strings.

    PubMed

    Emparan, Roberto; Grumiller, Daniel; Tanabe, Kentaro

    2013-06-21

    We show that in the limit of a large number of dimensions a wide class of nonextremal neutral black holes has a universal near-horizon limit. The limiting geometry is the two-dimensional black hole of string theory with a two-dimensional target space. Its conformal symmetry explains the properties of massless scalars found recently in the large-D limit. For black branes with string charges, the near-horizon geometry is that of the three-dimensional black strings of Horne and Horowitz. The analogies between the α' expansion in string theory and the large-D expansion in gravity suggest a possible effective string description of the large-D limit of black holes. We comment on applications to several subjects, in particular to the problem of critical collapse. PMID:23829726

  16. Large landslides from oceanic volcanoes

    USGS Publications Warehouse

    Holcomb, R.T.; Searle, R.C.

    1991-01-01

    Large landslides are ubiquitous around the submarine flanks of Hawaiian volcanoes, and GLORIA has also revealed large landslides offshore from Tristan da Cunha and El Hierro. On both of the latter islands, steep flanks formerly attributed to tilting or marine erosion have been reinterpreted as landslide headwalls mantled by younger lava flows. These landslides occur in a wide range of settings and probably represent only a small sample from a large population. They may explain the large volumes of archipelagic aprons and the stellate shapes of many oceanic volcanoes. Large landslides and associated tsunamis pose hazards to many islands. -from Authors

  17. Health impacts of large dams

    SciTech Connect

    Lerer, L.B.; Scudder, T.

    1999-03-01

    Large dams have been criticized because of their negative environmental and social impacts. Public health interest largely has focused on vector-borne diseases, such as schistosomiasis, associated with reservoirs and irrigation projects. Large dams also influence health through changes in water and food security, increases in communicable diseases, and the social disruption caused by construction and involuntary resettlement. Communities living in close proximity to large dams often do not benefit from water transfer and electricity generation revenues. A comprehensive health component is required in environmental and social impact assessments for large dam projects.

  18. Ultra-fast computation of electronic spectra for large systems by tight-binding based simplified Tamm-Dancoff approximation (sTDA-xTB).

    PubMed

    Grimme, Stefan; Bannwarth, Christoph

    2016-08-01

    The computational bottleneck of the extremely fast simplified Tamm-Dancoff approximated (sTDA) time-dependent density functional theory procedure [S. Grimme, J. Chem. Phys. 138, 244104 (2013)] for the computation of electronic spectra for large systems is the determination of the ground state Kohn-Sham orbitals and eigenvalues. This limits such treatments to single structures with a few hundred atoms and hence, e.g., sampling along molecular dynamics trajectories for flexible systems or the calculation of chromophore aggregates is often not possible. The aim of this work is to solve this problem by a specifically designed semi-empirical tight binding (TB) procedure similar to the well established self-consistent-charge density functional TB scheme. The new special purpose method provides orbitals and orbital energies of hybrid density functional character for a subsequent and basically unmodified sTDA procedure. Compared to many previous semi-empirical excited state methods, an advantage of the ansatz is that a general eigenvalue problem in a non-orthogonal, extended atomic orbital basis is solved and therefore correct occupied/virtual orbital energy splittings as well as Rydberg levels are obtained. A key idea for the success of the new model is that the determination of atomic charges (describing an effective electron-electron interaction) and the one-particle spectrum is decoupled and treated by two differently parametrized Hamiltonians/basis sets. The three-diagonalization-step composite procedure can routinely compute broad range electronic spectra (0-8 eV) within minutes of computation time for systems composed of 500-1000 atoms with an accuracy typical of standard time-dependent density functional theory (0.3-0.5 eV average error). An easily extendable parametrization based on coupled-cluster and density functional computed reference data for the elements H-Zn including transition metals is described. The accuracy of the method termed sTDA-xTB is first
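
    The general eigenvalue problem in a non-orthogonal basis mentioned above can be illustrated generically via Löwdin (symmetric) orthogonalization. This is a minimal sketch, not the sTDA-xTB implementation, and the 2×2 model matrices are invented:

```python
# Generic illustration (not the sTDA-xTB code): the general eigenvalue
# problem H C = S C E in a non-orthogonal basis, solved by Loewdin
# (symmetric) orthogonalization. The 2x2 model matrices are invented.
import numpy as np

H = np.array([[-1.0, -0.5],
              [-0.5, -0.6]])   # model one-particle Hamiltonian
S = np.array([[ 1.0,  0.4],
              [ 0.4,  1.0]])   # overlap matrix of the non-orthogonal basis

# Build S^(-1/2) from the eigendecomposition of S.
s_vals, s_vecs = np.linalg.eigh(S)
S_inv_half = s_vecs @ np.diag(s_vals ** -0.5) @ s_vecs.T

# Diagonalize in the orthogonalized basis, then back-transform.
E, C_ortho = np.linalg.eigh(S_inv_half @ H @ S_inv_half)
C = S_inv_half @ C_ortho     # eigenvectors in the original basis

# The eigenvectors come out orthonormal in the S metric (C.T @ S @ C = I)
# and satisfy H @ C = S @ C @ diag(E).
```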

  1. Sharpen Your Skills: Large Type.

    ERIC Educational Resources Information Center

    Knisely, Phillis; Wickham, Marian

    1984-01-01

    Three short articles about large type transcribing are provided for braille transcribers and teachers of the visually handicapped. The first article lists general suggestions for simple typewriter maintenance. The second article reviews the guidelines for typing fractions in large type for mathematics exercises. The third article describes a…

  2. Team Learning in Large Classes.

    ERIC Educational Resources Information Center

    Roueche, Suanne D., Ed.

    1984-01-01

    Information and suggestions are provided on the use of team learning in large college classes. Introductory material discusses the negative cycle of student-teacher interaction that may be provoked by large classes, and the use of permanent, heterogeneous, six- or seven-member student learning groups as the central focus of class activity as a…

  3. Sharpen Your Skills: Large Type.

    ERIC Educational Resources Information Center

    Knisely, Phyllis

    1983-01-01

    Three short articles about large type transcribing are provided for braille transcribers and teachers of the visually handicapped. The first article explains section IV-B-2 of the National Braille Association Manual for Large Type Transcribing. The second article presents the results of a survey on the kinds of typewriters, types of…

  4. Querying Large Biological Network Datasets

    ERIC Educational Resources Information Center

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in increasing amounts of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of available data requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…

  5. Measuring happiness in large population

    NASA Astrophysics Data System (ADS)

    Wenas, Annabelle; Sjahputri, Smita; Takwin, Bagus; Primaldhi, Alfindra; Muhamad, Roby

    2016-01-01

    The ability to know the emotional states of large numbers of people is important, for example, to ensure the effectiveness of public policies. In this study, we propose a measure of happiness, based on the analysis of Indonesian-language lexicons, that can be used on large-scale populations. Here, we incorporate human assessments of Indonesian words, then quantify happiness on a large scale from texts gathered from Twitter conversations. We used two psychological constructs to measure happiness: valence and arousal. We found that Indonesian words have a tendency towards positive emotions. We also identified several happiness patterns across days of the week, hours of the day, and selected conversation topics.

  6. Large Area Vacuum Deposited Coatings

    SciTech Connect

    Martin, Peter M.

    2003-04-30

    It's easy to take the myriad types of large area and decorative coatings for granted. We probably don't even think about most of them: the low-e and heat mirror coatings on our windows and car windows, the mirrors in displays, antireflection coatings on windows and displays, protective coatings on aircraft windows, heater coatings on windshields and aircraft windows, solar reflectors, thin film solar cells, telescope mirrors, Hubble mirrors, transparent conductive coatings, and the list goes on. All these products require large deposition systems and chambers. Also, don't forget that large batches of small substrates or parts are coated in large chambers. In order to be cost effective, hundreds of ophthalmic lenses, automobile reflectors, display screens, lamp reflectors, cell phone windows, laser reflectors, or DWDM filters are coated in batches.

  7. Large engines and vehicles, 1958

    NASA Technical Reports Server (NTRS)

    1978-01-01

    During the mid-1950s, the Air Force sponsored work on the feasibility of building large, single-chamber engines, presumably for boost-glide aircraft or spacecraft. In 1956, the Army missile development group began studies of large launch vehicles. The possibilities opened up by Sputnik accelerated this work and gave the Army an opportunity to bid for the leading role in launch vehicles. The Air Force had the responsibility for the largest ballistic missiles and hence a ready-made base for extending their capability for spaceflight. During 1958, actions taken to establish a civilian space agency, and the launch vehicle needs seen by its planners, added a third contender to the space vehicle competition. These activities during 1958 are examined as to how they resulted in the initiation of a large rocket engine and the first large launch vehicle.

  8. Building large structures in space

    NASA Technical Reports Server (NTRS)

    Hagler, T.

    1976-01-01

    The building of large structures in space would be required for the establishment of a variety of systems needed for different forms of space utilization. The problems involved in the building of such structures in space and the approaches which can be used to solve these problems are illustrated with the aid of an example involving a concept for packaging, transporting, and assembling two representative large space structures. The structure of a radio-astronomy telescope 200 m in diam was felt to be representative of the many medium-size structures of the Shuttle era. A typical very large structure is represented by the supporting structure for the transmission system of a 5000-Mw space solar power station.

  9. Does Yellowstone need large fires

    SciTech Connect

    Romme, W.H.; Turner, M.G.; Gardner, R.H.; Hargrove, W.W.

    1994-06-01

    This paper synthesizes several studies initiated after the 1988 Yellowstone fires, to address the question whether the ecological effects of large fires differ qualitatively as well as quantitatively from small fires. Large burn patches had greater dominance and contagion of burn severity classes, and a higher proportion of crown fire. Burned aspen stands resprouted vigorously over an extensive area, but heavy ungulate browsing prevented establishment of new tree-sized stems. A burst of sexual reproduction occurred in forest herbs that usually reproduce vegetatively, and new aspen clones became established from seed - a rare event in this region. We conclude that the effects of large fires are qualitatively different, but less dramatically so than expected.

  10. Inflating with large effective fields

    SciTech Connect

    Burgess, C.P.; Cicoli, M.; Quevedo, F.; Williams, M.

    2014-11-01

    We re-examine large scalar fields within effective field theory, in particular focussing on the issues raised by their use in inflationary models (as suggested by BICEP2 to obtain primordial tensor modes). We argue that when the large-field and low-energy regimes coincide the scalar dynamics is most effectively described in terms of an asymptotic large-field expansion whose form can be dictated by approximate symmetries, which also help control the size of quantum corrections. We discuss several possible symmetries that can achieve this, including pseudo-Goldstone inflatons characterized by a coset G/H (based on abelian and non-abelian, compact and non-compact symmetries), as well as symmetries that are intrinsically higher dimensional. Besides the usual trigonometric potentials of Natural Inflation we also find in this way simple large-field power laws (like V ∝ φ^2) and exponential potentials, V(φ) = Σ_k V_k e^(−kφ/M). Both of these can describe the data well and give slow-roll inflation for large fields without the need for a precise balancing of terms in the potential. The exponential potentials achieve large r through the limit |η| ≪ ε and so predict r ≈ (8/3)(1 − n_s); consequently n_s ≈ 0.96 gives r ≈ 0.11 but not much larger (and so could be ruled out as measurements on r and n_s improve). We examine the naturalness issues for these models and give simple examples where symmetries protect these forms, using both pseudo-Goldstone inflatons (with non-abelian non-compact shift symmetries following familiar techniques from chiral perturbation theory) and extra-dimensional models.
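
    For reference, the quoted prediction follows from the standard slow-roll expressions (which the abstract assumes but does not spell out):

```latex
n_s - 1 = 2\eta - 6\epsilon, \qquad r = 16\epsilon,
\qquad |\eta| \ll \epsilon \;\Rightarrow\; n_s - 1 \approx -6\epsilon
\;\Rightarrow\; r \approx \tfrac{16}{6}(1 - n_s) = \tfrac{8}{3}(1 - n_s),
```

    so n_s ≈ 0.96 gives r ≈ (8/3)(0.04) ≈ 0.11, as stated.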

  11. Large Millimeter Telescope (LMT) status

    NASA Astrophysics Data System (ADS)

    Schloerb, F. Peter; Carrasco, Luis; Wilson, Grant W.

    2003-02-01

    We present a summary of the Large Millimeter Telescope Project and its present status. The Large Millimeter Telescope (LMT) is a joint project of the University of Massachusetts (UMass) in the USA and the Instituto Nacional de Astrofisica, Optica y Electronica (INAOE) in Mexico to build a 50m-diameter millimeter-wave telescope. The LMT is being built at an altitude of 4600 m atop Volcan Sierra Negra, an extinct volcanic peak in the state of Puebla, Mexico, approximately 100 km east of the city of Puebla. Construction of the antenna is now well underway, and it is expected to be completed in 2004.

  12. Teaching a Large Undergraduate Class

    ERIC Educational Resources Information Center

    Trowbridge, Norma

    1975-01-01

    A professor describes how she teaches large undergraduate classes by dividing the group into smaller groups and including: weekly films or audiovisual aids, weekly sessions with smaller groups, definite reading material, independent study, and a multiple evaluation system. (Author/PG)

  13. Large angle measurement by interferometry

    NASA Astrophysics Data System (ADS)

    Apostol, Dan; Blanaru, Constantin; Damian, Victor S.; Logofatu, Petre-Catalin; Tumbar, R.; Dobroiu, Adrian

    1995-03-01

    An interferometric set-up able to measure angles as large as ±180° is presented. The principle of the method is to measure a linear displacement (translation) produced by a crank-gear mechanism which converts the angular movement of a rotating table. The optical scheme and considerations on the accuracy of the method are presented.

  14. CERN's Large Hadron Collider project

    NASA Astrophysics Data System (ADS)

    Fearnley, Tom A.

    1997-03-01

    The paper gives a brief overview of CERN's Large Hadron Collider (LHC) project. After an outline of the physics motivation, we describe the LHC machine, interaction rates, experimental challenges, and some important physics channels to be studied. Finally we discuss the four experiments planned at the LHC: ATLAS, CMS, ALICE and LHC-B.

  15. Large gap magnetic suspension system

    NASA Technical Reports Server (NTRS)

    Abdelsalam, Moustafa K.; Eyssa, Y. M.

    1991-01-01

    The design of a large gap magnetic suspension system is discussed. Some of the topics covered include: the system configuration, permanent magnet material, levitation magnet system, superconducting magnets, resistive magnets, superconducting levitation coils, resistive levitation coils, levitation magnet system, and the nitrogen cooled magnet system.

  16. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  17. Fermi's Large Area Telescope (LAT)

    NASA Video Gallery

    Fermi’s Large Area Telescope (LAT) is the spacecraft’s main scientific instrument. This animation shows a gamma ray (purple) entering the LAT, where it is converted into an electron (red) and a...

  18. Large area CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Turchetta, R.; Guerrini, N.; Sedgwick, I.

    2011-01-01

    CMOS image sensors, also known as CMOS Active Pixel Sensors (APS) or Monolithic Active Pixel Sensors (MAPS), are today the dominant imaging devices. They are omnipresent in our daily life, as image sensors in cellular phones, web cams, digital cameras, ... In these applications, the pixels can be very small, in the micron range, and the sensors themselves tend to be limited in size. However, many scientific applications, like particle or X-ray detection, require large format, often with large pixels, as well as other specific performance, like low noise, radiation hardness or very fast readout. The sensors are also required to be sensitive to a broad spectrum of radiation: photons from the silicon cut-off in the IR down to UV and X- and gamma-rays through the visible spectrum as well as charged particles. This requirement calls for modifications to the substrate to be introduced to provide optimized sensitivity. This paper will review existing CMOS image sensors, whose size can be as large as a single CMOS wafer, and analyse the technical requirements and specific challenges of large format CMOS image sensors.

  19. Large deviations and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Sornette, Didier

    Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends with a general functional integral formulation. A major point is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return, the role of large deviations in multiplicative processes, and the different optimal strategies for investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
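
    The distinction between average return and typical return in multiplicative processes can be made concrete in a few lines. This is a hedged illustration under invented parameters (a 50/50 gain/loss wealth process), not drawn from the paper itself:

```python
# Hedged illustration of the average-vs-typical distinction in a
# multiplicative process; the 50/50 gain/loss factors are invented.
import math

up, down = 1.5, 0.6   # per-step wealth multipliers, each with probability 1/2

# Average (arithmetic-mean) growth factor per step:
avg = 0.5 * up + 0.5 * down   # 1.05 > 1: the mean portfolio grows

# Typical (most probable) growth factor per step, exp(E[log factor]):
typical = math.exp(0.5 * math.log(up) + 0.5 * math.log(down))
# sqrt(0.9) ~ 0.949 < 1: the typical trajectory shrinks; rare lucky
# paths (large deviations) carry the entire average.
```

    The gap between `avg` and `typical` is exactly why the full distribution of losses, not a single mean, matters for risk control.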

  20. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h^-1 Mpc, where the Hubble constant H0 = 100h km s^-1 Mpc^-1; 1 pc = 3.09 × 10^16 m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe.

  1. The physics of large eruptions

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Agust

    2015-04-01

    Based on eruptive volumes, eruptions can be classified as follows: small if the volumes are from less than 0.001 km³ to 0.1 km³, moderate if the volumes are from 0.1 to 10 km³, and large if the volumes are from 10 km³ to 1000 km³ or larger. The largest known explosive and effusive eruptions have eruptive volumes of 4000-5000 km³. The physics of small to moderate eruptions is reasonably well understood. For a typical mafic magma chamber in a crust that behaves as elastic, about 0.1% of the magma leaves the chamber (erupted and injected as a dyke) during rupture and eruption. Similarly, for a typical felsic magma chamber, the eruptive/injected volume during rupture and eruption is about 4%. To provide small to moderate eruptions, chamber volumes of the order of several tens to several hundred cubic kilometres would be needed. Shallow crustal chambers of these sizes are common, and deep-crustal and upper-mantle reservoirs of thousands of cubic kilometres exist. Thus, elastic and poro-elastic chambers of typical volumes can account for small to moderate eruptive volumes. When the eruptions become large, with volumes of tens or hundreds of cubic kilometres or more, an ordinary poro-elastic mechanism can no longer explain the eruptive volumes. The required sizes of the magma chambers and reservoirs to explain such volumes are simply too large to be plausible. Here I propose that the mechanics of large eruptions is fundamentally different from that of small to moderate eruptions. More specifically, I suggest that all large eruptions derive their magmas from chambers and reservoirs whose total cavity-volumes are mechanically reduced very much during the eruption. There are two mechanisms by which chamber/reservoir cavity-volumes can be reduced rapidly so as to squeeze out much of, or all, their magmas. One is piston-like caldera collapse. The other is graben subsidence. During large slip on the ring-faults/graben-faults the associated chamber/reservoir shrinks in volume.
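
    A back-of-the-envelope check makes the argument quantitative. The eruptive fractions (~0.1% mafic, ~4% felsic) are taken from the abstract; the example eruption volumes are invented for illustration:

```python
# Back-of-the-envelope check of the argument above. The eruptive
# fractions (~0.1% mafic, ~4% felsic) come from the text; the
# example eruption volumes are invented for illustration.
F_MAFIC, F_FELSIC = 0.001, 0.04   # fraction of chamber magma released

def chamber_volume_km3(eruptive_km3: float, fraction: float) -> float:
    """Elastic/poro-elastic chamber volume needed for a given eruption."""
    return eruptive_km3 / fraction

# A moderate 1 km3 felsic eruption needs a ~25 km3 chamber: plausible.
moderate = chamber_volume_km3(1.0, F_FELSIC)   # 25.0

# A large 100 km3 mafic eruption would need a 100,000 km3 chamber,
# which is why an ordinary poro-elastic mechanism cannot supply it.
large = chamber_volume_km3(100.0, F_MAFIC)     # 100000.0
```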

  2. Large Amplitude Oscillations in Prominences

    NASA Astrophysics Data System (ADS)

    Luna, Manuel

    2016-07-01

    Large-amplitude oscillations in prominences are spectacular manifestations of solar activity. In such events, nearby energetic disturbances induce periodic motions in filaments, with displacements comparable to the size of the filaments themselves and velocities larger than 20 km/s. Recent studies have shown that such oscillations open a new window on coronal connectivity and offer novel diagnostics for hard-to-measure prominence properties such as magnetic field strength and geometry. In addition, these oscillations could be related to the activation of filaments prior to eruptions. In this talk I will review past and current research on this subject in order to understand the nature of solar prominences. Additionally, a large catalogue of such events will be presented.

  3. Blue moons and large fires.

    PubMed

    Porch, W M

    1989-05-15

    Theoretical analysis of simulations of optical effects from the 1950 Canadian forest fires has revealed what conditions are necessary for large fires to cause blue moons and suns. This study shows how large fires can be used to improve our understanding of long-range pollution transport on a global scale, as well as the evolution of the aerosol radiative effects that are so important to global climate studies. The most important aerosol characteristics are the initial submicron smoke particle concentration and the areal extent of the fire and its effect on fire plume dispersion. Capping clouds above the fire and near-saturation humidity effects are simulated and found to help establish anomalous optical effects. Data are included showing probable anomalous extinction events associated with concentrated fire plumes.

  4. Large space structure damping design

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Haviland, J. K.

    1983-01-01

    Several FORTRAN subroutines and programs were developed which compute the complex eigenvalues of a damped system using different approaches, rescale mode shapes to unit generalized mass, and make rigid-body modes orthogonal to each other. An analytical proof of a Minimum Constrained Frequency Criterion (MCFC) for a single damper is presented. A method to minimize the effect of control spill-over for large space structures is proposed. The characteristic equation of an undamped system with a generalized control law is derived using reanalysis theory. This equation can be implemented in computer programs for efficient eigenvalue analysis or control synthesis. Methods to control vibrations in large space structures are reviewed and analyzed. The resulting prototype, using an electromagnetic actuator, is described.
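
    The complex-eigenvalue computation mentioned above can be sketched generically; the original FORTRAN routines are not reproduced here, so this is a minimal NumPy state-space formulation with an illustrative single-damper system.

```python
import numpy as np

def damped_eigenvalues(M, C, K):
    """Complex eigenvalues of M x'' + C x' + K x = 0 via the
    first-order state-space (companion) form."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ C]])
    return np.linalg.eigvals(A)

# Single damped oscillator m=1, c=0.2, k=1:
# characteristic roots are lambda = -0.1 +/- i*sqrt(0.99)
lam = damped_eigenvalues(np.eye(1), np.array([[0.2]]), np.eye(1))
```

Lightly damped systems give complex-conjugate pairs; the real part is the decay rate and the imaginary part the damped natural frequency.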

  5. Large aperture scanning airborne lidar

    NASA Technical Reports Server (NTRS)

    Smith, J.; Bindschadler, R.; Boers, R.; Bufton, J. L.; Clem, D.; Garvin, J.; Melfi, S. H.

    1988-01-01

    A large aperture scanning airborne lidar facility is being developed to provide important new capabilities for airborne lidar sensor systems. The proposed scanning mechanism allows for a large aperture telescope (25 in. diameter) in front of an elliptical flat (25 x 36 in.) turning mirror positioned at a 45 degree angle with respect to the telescope optical axis. The lidar scanning capability will provide opportunities for acquiring new data sets for the atmospheric, earth resources, and oceans communities. The completed facility will also make it possible to acquire simulated EOS lidar data on a near-global basis. The design and construction of this unique scanning mechanism present exciting technological challenges in maintaining the turning mirror's optical flatness during scanning while exposed to extreme temperatures, ambient pressures, aircraft vibrations, etc.

  6. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.
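
    The parallel-plate electrostatic actuation described above follows the standard parallel-plate force law; a minimal sketch, with dimensions that are assumptions rather than values from the paper:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def parallel_plate_force(voltage_v, area_m2, gap_m):
    """Attractive force between parallel-plate actuator electrodes:
    F = eps0 * A * V**2 / (2 * d**2)."""
    return EPS0 * area_m2 * voltage_v**2 / (2.0 * gap_m**2)

# Illustrative numbers (not from the paper): 1 mm^2 plate, 10 um gap, 100 V drive
F = parallel_plate_force(100.0, 1e-6, 10e-6)  # ~4.4e-4 N
```

The quadratic dependence on voltage and inverse-square dependence on gap are why such actuators favour small gaps and why the achievable stroke is limited by pull-in.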

  7. Extremely Large Cusp Diamagnetic Cavities

    NASA Astrophysics Data System (ADS)

    Chen, J.; Fritz, T. A.

    2002-05-01

    Extremely large diamagnetic cavities, with sizes as large as 6 Re, have been observed in the dayside high-altitude cusp regions. Some of the diamagnetic cavities were independent of the IMF direction, which is not expected from current MHD (or ISM) models and suggests that the cusp diamagnetic cavities are different from the magnetospheric sash; this poses a challenge to the existing MHD (or ISM) models. Associated with these cavities are ions with energies from 40 keV up to 8 MeV. The charge-state distribution of these cusp cavity ions indicates that their seed populations are a mixture of ionospheric and solar wind particles. The intensities of the cusp cavity energetic ions were observed to increase by as much as four orders of magnitude. During a high solar wind pressure period on April 21, 1999, the POLAR spacecraft observed a lower ion flux in the dayside high-latitude magnetosheath than in the neighbouring cusp cavities. These observations indicate that the dayside high-altitude cusp diamagnetic cavity is a key region for transferring solar wind energy, mass, and momentum into the Earth's magnetosphere. These energetic particles in the cusp diamagnetic cavity, together with the cusp's connectivity, have significant global impacts on geospace environment research and shed light on the long-standing unsolved fundamental issue of the origins of the energetic particles in the ring current and in upstream ion events.

  8. Extremely large cusp diamagnetic cavities

    NASA Astrophysics Data System (ADS)

    Chen, J.; Fritz, T.; Siscoe, G.

    Extremely large diamagnetic cavities, with sizes as large as 6 Re, have been observed in the dayside high-altitude cusp regions. These diamagnetic cavities are present day after day. Some of the diamagnetic cavities have been observed on the morningside during intervals when the IMF By component was positive (duskward), suggesting that the cusp diamagnetic cavities are different from the magnetospheric sash predicted by MHD simulations. Associated with these cavities are ions with energies from 40 keV up to 8 MeV. The charge-state distribution of these cusp cavity ions indicates that their seed populations are a mixture of ionospheric and solar wind particles. The intensities of the cusp cavity energetic ions were observed to increase by as much as four orders of magnitude. These observations indicate that the dayside high-altitude cusp diamagnetic cavity is a key region for transferring solar wind energy, mass, and momentum into the Earth's magnetosphere. These energetic particles in the cusp diamagnetic cavity, together with the cusp's connectivity to the entire magnetopause, may have significant global impacts on the geospace environment. They may also shed light on the long-standing unsolved fundamental issue of the origins of the energetic particles in the ring current and in the regions upstream of the subsolar magnetopause where energetic ion events are frequently observed.

  9. Large Component Removal/Disposal

    SciTech Connect

    Wheeler, D. M.

    2002-02-27

    This paper describes the removal and disposal of the large components from Maine Yankee Atomic Power Plant. The large components discussed include the three steam generators, pressurizer, and reactor pressure vessel. Two separate Exemption Requests, which included radiological characterizations, shielding evaluations, structural evaluations and transportation plans, were prepared and issued to the DOT for approval to ship these components; the first was for the three steam generators and one pressurizer, the second was for the reactor pressure vessel. Both Exemption Requests were submitted to the DOT in November 1999. The DOT approved the Exemption Requests in May and July of 2000, respectively. The steam generators and pressurizer have been removed from Maine Yankee and shipped to the processing facility. They were removed from Maine Yankee's Containment Building, loaded onto specially designed skid assemblies, transported onto two separate barges, tied down to the barges, then shipped 2750 miles to Memphis, Tennessee for processing. The Reactor Pressure Vessel Removal Project is currently under way and scheduled to be completed by Fall of 2002. The planning, preparation and removal of these large components have required extensive efforts in planning and implementation on the part of all parties involved.

  10. Deflectometric measurement of large mirrors

    NASA Astrophysics Data System (ADS)

    Olesch, Evelyn; Häusler, Gerd; Wörnlein, André; Stinzing, Friedrich; van Eldik, Christopher

    2014-06-01

    We discuss the inspection of large-sized, spherical mirror tiles by `Phase Measuring Deflectometry' (PMD). About 10 000 such mirror tiles, each satisfying strict requirements regarding the spatial extent of the point-spread-function (PSF), are planned to be installed on the Cherenkov Telescope Array (CTA), a future ground-based instrument to observe the sky in very high energy gamma-rays. Owing to their large radii of curvature of up to 60 m, a direct PSF measurement of these mirrors with concentric geometry requires large space. We present a PMD sensor with a footprint of only 5 × 2 × 1.2 m³ that overcomes this limitation. The sensor intrinsically acquires the surface slope; the shape data are calculated by integration. In this way, the PSF can be calculated for real case scenarios, e.g., when the light source is close to infinity and off-axis. The major challenge is the calibration of the PMD sensor, specifically because the PSF data have to be reconstructed from different camera views. The calibration of the setup is described, and measurements are presented and compared to results obtained with the direct approach.
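
    The slope-to-shape step noted above ("the shape data are calculated by integration") can be illustrated in one dimension; a minimal sketch assuming an ideal spherical tile, with cumulative trapezoidal integration standing in for the sensor's actual 2-D reconstruction:

```python
import numpy as np

def integrate_slope_1d(x, slope):
    """Recover a 1-D height profile from sampled surface slopes by
    cumulative trapezoidal integration (height pinned to 0 at x[0])."""
    steps = 0.5 * (slope[1:] + slope[:-1]) * np.diff(x)
    return np.concatenate(([0.0], np.cumsum(steps)))

# A spherical mirror of radius R has slope ~ x/R, so height ~ x**2/(2R) + const
R = 60.0                         # m, largest radius of curvature quoted above
x = np.linspace(-0.5, 0.5, 201)  # m, assumed tile coordinate
z = integrate_slope_1d(x, x / R)
```

Trapezoidal integration is exact for a linear slope field, so the recovered profile here matches the parabolic (small-angle spherical) shape up to the arbitrary height offset at x[0].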

  11. TURP for BPH. How Large is Too Large?

    PubMed Central

    Georgescu, D; Arabagiu, I; Cauni, V; Moldoveanu, C; Geavlete, P

    2010-01-01

    BPH remains one of the most common diseases that the urologist has to manage. The last decade has brought numerous new techniques aiming to improve the minimally invasive approach to BPH, but none has, for the moment, changed the place of TURP as the gold-standard treatment for medium-sized prostates. Based on a large personal experience, the authors present a study in which TURP is used for prostates over 80 ml, the cutoff point set by the guidelines of the European Association of Urology. The rationale for this study is that many situations require minimally invasive treatment, based on the express request of the patient, on other conditions that make open surgery very difficult or impossible, or on the need for quick discharge in an overcrowded service. The aim of the study was to prove that TURP is safe and effective even in larger prostates. The technique used is basically the classic one, with minor tactical alterations in some cases. Some cases required a two-stage approach, but offered good functional results after the first stage. The results proved that, with a good technique, a skilled urologist might achieve the same results using TURP or open surgery for large prostates. PMID:21254734

  12. Large wood recruitment and transport during large floods: A review

    NASA Astrophysics Data System (ADS)

    Comiti, F.; Lucía, A.; Rickenmann, D.

    2016-09-01

    Large wood (LW) elements transported during large floods have long been known to be capable of inducing dangerous obstructions along the channel network, mostly at bridges and at hydraulic structures such as weirs. However, our current knowledge of wood transport dynamics during high-magnitude flood events is still very scarce, mostly because such events are (locally) rare and thus unlikely to be directly monitored. Therefore, post-event surveys are invaluable ways to gain (albeit indirect) insights into LW recruitment processes, transport distances, and the factors inducing LW deposition - all aspects that are crucial for the proper management of river basins in relation to flood hazard mitigation. This paper presents a review of the (quite limited) literature available on LW transport during large floods, drawing extensively on the authors' own experience in mountain and piedmont rivers, published and unpublished. The overall picture emerging from these studies points to a high, catchment-specific variability in all the different processes affecting LW dynamics during floods. Specifically, in the LW recruitment phase, the relative floodplain (bank erosion) vs. hillslope (landslide and debris flow) contribution in mountain rivers varies substantially, as it relates to the extent of channel widening (which itself depends on many variables) but also to the hillslope-channel connectivity of LW mobilized on the slopes. As to the LW transport phase within the channel network, it appears to be widely characterized by supply-limited conditions, whereby LW transport rates (and thus volumes) are ultimately constrained by the amount of LW made available to the flow. Indeed, LW deposition during floods was mostly (in terms of volume) observed at artificial structures (bridges) in all the documented events.
This implies that the estimation of LW recruitment and the assessment of clogging probabilities for each structure (for a flood event of given magnitude) are the most important

  13. Radiosurgery for Large Brain Metastases

    SciTech Connect

    Han, Jung Ho; Kim, Dong Gyu; Chung, Hyun-Tai; Paek, Sun Ha; Park, Chul-Kee; Jung, Hee-Won

    2012-05-01

    Purpose: To determine the efficacy and safety of radiosurgery in patients with large brain metastases. Patients and Methods: Eighty patients with large brain metastases (>14 cm³) were treated with radiosurgery between 1998 and 2009. The mean age was 59 ± 11 years, and 49 (61.3%) were men. Neurologic symptoms were identified in 77 patients (96.3%), and 30 (37.5%) exhibited a dependent functional status. The primary disease was under control in 36 patients (45.0%), and 44 (55.0%) had a single lesion. The mean tumor volume was 22.4 ± 8.8 cm³, and the mean marginal dose prescribed was 13.8 ± 2.2 Gy. Results: The median survival time from radiosurgery was 7.9 months (95% confidence interval [CI], 5.343-10.46), and the 1-year survival rate was 39.2%. Functional improvement within 1-4 months or the maintenance of the initial independent status was observed in 48 (60.0%) and 20 (25.0%) patients after radiosurgery, respectively. Control of the primary disease, a marginal dose of ≥11 Gy, and a tumor volume ≥26 cm³ were significantly associated with overall survival (hazard ratio, 0.479; p = .018; 95% CI, 0.261-0.880; hazard ratio, 0.350; p = .004; 95% CI, 0.171-0.718; hazard ratio, 2.307; p = .006; 95% CI, 1.274-4.180, respectively). Unacceptable radiation-related toxicities (Radiation Therapy Oncology Group central nervous system toxicity Grade 3, 4, and 5 in 7, 6, and 2 patients, respectively) developed in 15 patients (18.8%). Conclusion: Radiosurgery seems to have efficacy comparable to surgery for large brain metastases. However, the rate of radiation-related toxicities after radiosurgery should be considered when deciding on a treatment modality.

  14. Large-bore pipe decontamination

    SciTech Connect

    Ebadian, M.A.

    1998-01-01

    The decontamination and decommissioning (D and D) of 1200 buildings within the US Department of Energy-Office of Environmental Management (DOE-EM) Complex will require the disposition of miles of pipe. The disposition of large-bore pipe, in particular, presents difficulties in the area of decontamination and characterization. The pipe is potentially contaminated internally as well as externally. This situation requires a system capable of decontaminating and characterizing both the inside and outside of the pipe. Current decontamination and characterization systems are not designed for application to this geometry, making the direct disposal of piping systems necessary in many cases. The pipe often creates voids in the disposal cell, which requires the pipe to be cut in half or filled with a grout material. These methods are labor intensive and costly to perform on large volumes of pipe. Direct disposal does not take advantage of recycling, which could provide monetary dividends. To facilitate the decontamination and characterization of large-bore piping and thereby reduce the volume of piping required for disposal, a detailed analysis will be conducted to document the pipe remediation problem set; determine potential technologies to solve this remediation problem set; design and laboratory test potential decontamination and characterization technologies; fabricate a prototype system; provide a cost-benefit analysis of the proposed system; and transfer the technology to industry. This report summarizes the activities performed during fiscal year 1997 and describes the planned activities for fiscal year 1998. Accomplishments for FY97 include the development of the applicable and relevant and appropriate regulations, the screening of decontamination and characterization technologies, and the selection and initial design of the decontamination system.

  15. Foreshock occurrence before large earthquakes

    USGS Publications Warehouse

    Reasenberg, P.A.

    1999-01-01

    Rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured in two worldwide catalogs over ~20-year intervals. The overall rates observed are similar to ones measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering based on patterns of small and moderate aftershocks in California. The aftershock model was extended to the case of moderate foreshocks preceding large mainshocks. Overall, the observed worldwide foreshock rates exceed the extended California generic model by a factor of ~2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, had a high foreshock rate, while a minority, located in continental thrust belts, had a low rate. These differences may explain why previous surveys have found low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich. If this is so, then the California generic model may significantly underestimate the conditional probability for a very large (M ≥ 8) earthquake following a potential (M ≥ 7) foreshock in Cascadia.
The magnitude differences among the identified foreshock-mainshock pairs in the Harvard catalog are consistent with a uniform

  16. LHC: The Large Hadron Collider

    SciTech Connect

    Lincoln, Don

    2015-03-04

    The Large Hadron Collider (or LHC) is the world’s most powerful particle accelerator. In 2012, scientists used data taken by it to discover the Higgs boson, before pausing operations for upgrades and improvements. In the spring of 2015, the LHC will return to operations with 163% of the energy it had before and with three times as many collisions per second. It’s essentially a new and improved version of itself. In this video, Fermilab’s Dr. Don Lincoln explains some of the absolutely amazing scientific and engineering properties of this modern scientific wonder.

  17. LHC: The Large Hadron Collider

    ScienceCinema

    Lincoln, Don

    2016-07-12

    The Large Hadron Collider (or LHC) is the world’s most powerful particle accelerator. In 2012, scientists used data taken by it to discover the Higgs boson, before pausing operations for upgrades and improvements. In the spring of 2015, the LHC will return to operations with 163% of the energy it had before and with three times as many collisions per second. It’s essentially a new and improved version of itself. In this video, Fermilab’s Dr. Don Lincoln explains some of the absolutely amazing scientific and engineering properties of this modern scientific wonder.

  18. Guam shaken by large quake

    NASA Astrophysics Data System (ADS)

    White, M. Catherine

    On August 8, an earthquake registering 8.1 on the Richter scale shook the Mariana Trench, which lies to the southeast of Guam, causing a rupture 55-60 km deep. Because large earthquakes are quite rare in the Mariana region, this significant thrust-type event offers a unique opportunity for study.The U.S. Geological Survey reports that seismic activity continued for several days after the earthquake. On August 11, an aftershock measuring 6.1 struck 40 miles southeast of Agana, Guam.

  19. Aeroacoustics of large wind turbines

    NASA Technical Reports Server (NTRS)

    Hubbard, Harvey H.; Shepherd, Kevin P.

    1991-01-01

    This paper reviews published information on aerodynamically generated noise from large horizontal axis wind turbines operated for electric power generation. Methods are presented for predicting both the discrete frequency rotational noise components and the broadband noise components, and results are compared with measurements. Refraction effects that result in the formation of high-frequency shadow zones in the upwind direction and channeling effects for the low frequencies in the downwind direction are illustrated. Special topics such as distributed source effects in prediction and the role of building dynamics in perception are also included.

  20. Large block test status report

    SciTech Connect

    Wilder, D.G.; Lin, W.; Blair, S.C.

    1997-08-26

    This report is intended to serve as a status report that transmits the data collected to date on the Large Block Test (LBT). The analyses of the data will be performed during FY98, after which a complete report will be prepared. This status report also includes introductory material that is not strictly needed to transmit the data but is available at this time and is therefore included. As such, this status report will serve as the template for the future report, and the information is thus preserved.

  1. The Large Aperture GRB Observatory

    SciTech Connect

    Bertou, Xavier

    2009-04-30

    The Large Aperture GRB Observatory (LAGO) aims at the detection of high energy photons from Gamma Ray Bursts (GRB) using the single particle technique (SPT) in ground-based water Cherenkov detectors (WCD). To reach a reasonable sensitivity, high-altitude mountain sites have been selected in Mexico (Sierra Negra, 4550 m a.s.l.), Bolivia (Chacaltaya, 5300 m a.s.l.) and Venezuela (Merida, 4765 m a.s.l.). We report on the project's progress and the first operation at high altitude, a search for bursts in 6 months of preliminary data, as well as a search for signals at ground level when satellites report a burst.

  2. Large scale cluster computing workshop

    SciTech Connect

    Dane Skow; Alan Silverman

    2002-12-23

    Recent revolutions in computer hardware and software technologies have paved the way for the large-scale deployment of clusters of commodity computers to address problems heretofore the domain of tightly coupled SMP processors. Near-term projects within High Energy Physics and other computing communities will deploy clusters of thousands of processors, to be used by hundreds to thousands of independent users. This will expand the reach in both dimensions by an order of magnitude from the current successful production facilities. The goals of this workshop were: (1) to determine what tools exist which can scale up to the cluster sizes foreseen for the next generation of HENP experiments (several thousand nodes) and, by implication, to identify areas where some investment of money or effort is likely to be needed; (2) to compare and record experiences gained with such tools; (3) to produce a practical guide to all stages of planning, installing, building and operating a large computing cluster in HENP; and (4) to identify and connect groups with similar interests within HENP and the larger clustering community.

  3. Damping characterization in large structures

    NASA Technical Reports Server (NTRS)

    Eke, Fidelis O.; Eke, Estelle M.

    1991-01-01

    This research project has as its main goal the development of methods for selecting the damping characteristics of components of a large structure or multibody system, in such a way as to produce some desired system damping characteristics. The main need for such an analytical device is in the simulation of the dynamics of multibody systems consisting, at least partially, of flexible components. The reason for this need is that all existing simulation codes for multibody systems require component-by-component characterization of complex systems, whereas requirements (including damping) often appear at the overall system level. The main goal was met in large part by the development of a method that will in fact synthesize component damping matrices from a given system damping matrix. The restrictions to the method are that the desired system damping matrix must be diagonal (which is almost always the case) and that interbody connections must be by simple hinges. In addition to the technical outcome, this project contributed positively to the educational and research infrastructure of Tuskegee University - a Historically Black Institution.

  4. Large phased-array radars

    SciTech Connect

    Brookner, D.E.

    1988-12-15

    Large phased-array radars can play a very important part in arms control. They can be used to determine the number of RVs being deployed, the type of targeting of the RVs (the same or different targets), the shape of the deployed objects, and possibly the weight and yields of the deployed RVs. They can provide this information at night as well as during the day, and under rain and cloud-covered conditions. The radar can be on the ground, on a ship, in an airplane, or space-borne. Airborne and space-borne radars can provide high-resolution map images of the ground for reconnaissance of anti-ballistic missile (ABM) ground radar installations, missile launch sites, and tactical targets such as trucks and tanks. The large ground-based radars can have microwave carrier frequencies or operate at HF (high frequency). For a ground-based HF radar, the signal is reflected off the ionosphere so as to provide over-the-horizon (OTH) viewing of targets. OTH radars can potentially be used to monitor stealth targets and missile traffic.

  5. Network discovery with large DCMs

    PubMed Central

    Seghier, Mohamed L.; Friston, Karl J.

    2013-01-01

    In this work, we address the problem of using dynamic causal modelling (DCM) to estimate the coupling parameters (effective connectivity) of large models with many regions. This is a potentially important problem because meaningful graph theoretic analyses of effective connectivity rest upon the statistics of the connections (edges). This calls for characterisations of networks with an appreciable number of regions (nodes). The problem here is that the number of coupling parameters grows quadratically with the number of nodes—leading to severe conditional dependencies among their estimates and a computational load that quickly becomes unsustainable. Here, we describe a simple solution, in which we use functional connectivity to provide prior constraints that bound the effective number of free parameters. In brief, we assume that priors over connections between individual nodes can be replaced by priors over connections between modes (patterns over nodes). By using a small number of modes, we can reduce the dimensionality of the problem in an informed way. The modes we use are the principal components or eigenvectors of the functional connectivity matrix. However, this approach begs the question of how many modes to use. This question can be addressed using Bayesian model comparison to optimise the number of modes. We imagine that this form of prior – over the extrinsic (endogenous) connections in large DCMs – may be useful for people interested in applying graph theory to distributed networks in the brain or to characterise connectivity beyond the subgraphs normally examined in DCM. PMID:23246991
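
    The mode-reduction idea above (priors over connections between eigenvector modes rather than between individual nodes) can be sketched with synthetic data; the function name, matrix sizes, and time series below are illustrative, not from the paper.

```python
import numpy as np

def connectivity_modes(fc_matrix, n_modes):
    """Leading eigenvectors of a symmetric functional connectivity
    matrix, used as modes (patterns over nodes)."""
    vals, vecs = np.linalg.eigh(fc_matrix)
    order = np.argsort(vals)[::-1]      # sort by descending eigenvalue
    return vecs[:, order[:n_modes]]     # shape (n_nodes, n_modes)

rng = np.random.default_rng(0)
n_nodes, n_modes = 32, 4
X = rng.standard_normal((200, n_nodes))  # synthetic node time series
FC = np.corrcoef(X, rowvar=False)        # functional connectivity matrix
U = connectivity_modes(FC, n_modes)

# Coupling is now parameterised between modes, not nodes:
# n_modes**2 = 16 free parameters instead of n_nodes**2 = 1024.
A_modes = rng.standard_normal((n_modes, n_modes))
A_nodes = U @ A_modes @ U.T              # implied node-level coupling
```

This is the dimensionality argument in miniature: the quadratic growth of coupling parameters with nodes is replaced by quadratic growth with modes, and the number of modes can then be optimised by Bayesian model comparison as the abstract describes.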

  6. How Large Asexual Populations Adapt

    NASA Astrophysics Data System (ADS)

    Desai, Michael

    2007-03-01

    We often think of beneficial mutations as being rare, and of adaptation as a sequence of selected substitutions: a beneficial mutation occurs, spreads through a population in a selective sweep, then later another beneficial mutation occurs, and so on. This simple picture is the basis for much of our intuition about adaptive evolution, and underlies a number of practical techniques for analyzing sequence data. Yet many large and mostly asexual populations -- including a wide variety of unicellular organisms and viruses -- live in a very different world. In these populations, beneficial mutations are common, and frequently interfere or cooperate with one another as they all attempt to sweep simultaneously. This radically changes the way these populations adapt: rather than an orderly sequence of selective sweeps, evolution is a constant swarm of competing and interfering mutations. I will describe some aspects of these dynamics, including why large asexual populations cannot evolve very quickly and the character of the diversity they maintain. I will explain how this changes our expectations of sequence data, how sex can help a population adapt, and the potential role of "mutator" phenotypes with abnormally high mutation rates. Finally, I will discuss comparisons of these predictions with evolution experiments in laboratory yeast populations.

  7. Large aperture diffractive space telescope

    DOEpatents

    Hyde, Roderick A.

    2001-01-01

    A large (10's of meters) aperture space telescope including two separate spacecraft--an optical primary objective lens functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart with the eyepiece directly behind the magnifying glass "aiming" at an intended target with their relative orientation determining the optical axis of the telescope and hence the targets being observed. The objective lens includes a very large-aperture, very-thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the objective lens, gathering up the incoming light and converting it to high quality images. The positions of the two spacecraft are controlled both to maintain a good optical focus and to point at desired targets which may be either earth bound or celestial.

  8. Histotripsy Liquefaction of Large Hematomas.

    PubMed

    Khokhlova, Tatiana D; Monsky, Wayne L; Haider, Yasser A; Maxwell, Adam D; Wang, Yak-Nam; Matula, Thomas J

    2016-07-01

    Intra- and extra-muscular hematomas result from repetitive injury as well as sharp and blunt limb trauma. The clinical consequences can be serious, including debilitating pain and functional deficit. There are currently no short-term treatment options for large hematomas, only lengthy conservative treatment. The goal of this work was to evaluate the feasibility of a high intensity focused ultrasound (HIFU)-based technique, termed histotripsy, for rapid (within a clinically relevant timeframe of 15-20 min) liquefaction of large volume (up to 20 mL) extra-vascular hematomas for subsequent fine-needle aspiration. Experiments were performed using in vitro extravascular hematoma phantoms-fresh bovine blood poured into 50 mL molds and allowed to clot. The resulting phantoms were treated by boiling histotripsy (BH), cavitation histotripsy (CH) or a combination in a degassed water tank under ultrasound guidance. Two different transducers operating at 1 MHz and 1.5 MHz with f-number = 1 were used. The liquefied lysate was aspirated and analyzed by histology and sized in a Coulter Counter. The peak instantaneous power to achieve BH was lower than (at 1.5 MHz) or equal to (at 1 MHz) that which was required to initiate CH. Under the same exposure duration, BH-induced cavities were one and a half to two times larger than the CH-induced cavities, but the CH-induced cavities were more regularly shaped, facilitating easier aspiration. The lysates contained a small amount of debris larger than 70 μm, and 99% of particulates were smaller than 10 μm. A combination treatment of BH (for initial debulking) and CH (for liquefaction of small residual fragments) yielded 20 mL of lysate within 17.5 minutes of treatment and was found to be most optimal for liquefaction of large extravascular hematomas. PMID:27126244

  10. Biotherapies in large vessel vasculitis.

    PubMed

    Ferfar, Y; Mirault, T; Desbois, A C; Comarmond, C; Messas, E; Savey, L; Domont, F; Cacoub, P; Saadoun, D

    2016-06-01

    Giant cell arteritis (GCA) and Takayasu's arteritis (TA) are large vessel vasculitis (LVV) and aortic involvement is not uncommon in Behcet's disease (BD) and relapsing polychondritis (RP). Glucocorticosteroids are the mainstay of therapy in LVV. However, a significant proportion of patients have glucocorticoid dependence, serious side effects or disease refractory to steroids and other immunosuppressive treatments such as cyclophosphamide, azathioprine, mycophenolate mofetil and methotrexate. Recent advances in the understanding of the pathogenesis have resulted in the use of biological agents in patients with LVV. Anti-tumor necrosis factor-α drugs seem effective in patients with refractory Takayasu arteritis and vascular BD, but not in giant cell arteritis. Preliminary reports on the use of the anti-IL6-receptor antibody (tocilizumab) in LVV have been encouraging. The development of new biologic targeted therapies will probably open a promising future for patients with LVV. PMID:26883459

  11. Safe handling of large animals.

    PubMed

    Grandin, T

    1999-01-01

    The major causes of accidents with cattle, horses, and other grazing animals are: panic due to fear, male dominance aggression, or the maternal aggression of a mother protecting her newborn. Danger is inherent when handling large animals. Understanding their behavior patterns improves safety, but working with animals will never be completely safe. Calm, quiet handling and non-slip flooring are beneficial. Rough handling and excessive use of electric prods increase chances of injury to both people and animals, because fearful animals may jump, kick, or rear. Training animals to voluntarily cooperate with veterinary procedures reduces stress and improves safety. Grazing animals have a herd instinct, and a lone, isolated animal can become agitated. Providing a companion animal helps keep an animal calm. PMID:10329901

  12. Black rings at large D

    NASA Astrophysics Data System (ADS)

    Tanabe, Kentaro

    2016-02-01

    We study the effective theory of slowly rotating black holes in the limit of infinite spacetime dimension D. This large D effective theory is obtained by integrating the Einstein equations with respect to the radial direction. The resulting effective equations describe non-linear dynamical deformations of a slowly rotating black hole. The effective equations contain the slowly rotating Myers-Perry black hole, the slowly boosted black string, the non-uniform black string and the black ring as stationary solutions. We obtain an analytic solution for the black ring by solving the effective equations. Furthermore, by perturbation analysis of the effective equations, we find a quasinormal mode condition for the black ring in analytic form. As a result we confirm that the thin black ring is unstable against non-axisymmetric perturbations. We also include 1/D corrections to the effective equations and discuss their effects.

  13. Gyrodampers for large space structures

    NASA Technical Reports Server (NTRS)

    Aubrun, J. N.; Margulies, G.

    1979-01-01

    The problem of controlling the vibrations of large space structures by the use of actively augmented damping devices distributed throughout the structure is addressed. The gyrodamper, which consists of a set of single-gimbal control moment gyros that are actively controlled to extract the structural vibratory energy through the local rotational deformations of the structure, is described and analyzed. Various linear and nonlinear dynamic simulations of gyrodamped beams are shown, including results on self-induced vibrations due to sensor noise and rotor imbalance. The complete nonlinear dynamic equations are included. The problem of designing and sizing a system of gyrodampers for a given structure, or extrapolating results from one gyrodamped structure to another, is solved in terms of scaling laws. Novel scaling laws for gyro systems are derived, based upon fundamental physical principles, and various examples are given.

  14. Large Aperture Electrostatic Dust Detector

    SciTech Connect

    C.H. Skinner, R. Hensley, and A.L Roquemore

    2007-10-09

    Diagnosis and management of dust inventories generated in next-step magnetic fusion devices is necessary for their safe operation. A novel electrostatic dust detector, based on a fine grid of interlocking circuit traces biased to 30 or 50 V, has been developed for the detection of dust particles on remote surfaces in air and vacuum environments. Impinging dust particles create a temporary short circuit and the resulting current pulse is recorded by counting electronics. Up to 90% of the particles are ejected from the grid or vaporized, suggesting the device may be useful for controlling dust inventories. We report measurements of the sensitivity of a large area (5x5 cm) detector to microgram quantities of dust particles and review its applications to contemporary tokamaks and ITER.

  15. Adaptive Optics for Large Telescopes

    SciTech Connect

    Olivier, S

    2008-06-27

    The use of adaptive optics was originally conceived by astronomers seeking to correct the blurring of images made with large telescopes due to the effects of atmospheric turbulence. The basic idea is to use a device, a wave front corrector, to adjust the phase of light passing through an optical system, based on some measurement of the spatial variation of the phase transverse to the light propagation direction, using a wave front sensor. Although the original concept was intended for application to astronomical imaging, the technique can be more generally applied. For instance, adaptive optics systems have been used for several decades to correct for aberrations in high-power laser systems. At Lawrence Livermore National Laboratory (LLNL), the world's largest laser system, the National Ignition Facility, uses adaptive optics to correct for aberrations in each of the 192 beams, all of which must be precisely focused on a millimeter scale target in order to perform nuclear physics experiments.
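The closed-loop idea described above (a wave front sensor measures the residual phase error and a corrector applies its negative) can be illustrated numerically. The grid size, gain, and single-integrator loop below are invented for illustration; real systems reconstruct the phase from slope measurements and run at kHz rates.

```python
import numpy as np

rng = np.random.default_rng(2)
phase = rng.normal(0.0, 1.0, size=(16, 16))   # static turbulent phase error, rad
gain = 0.5                                    # loop gain < 1 for stability
corrector = np.zeros_like(phase)

for _ in range(10):                           # iterate the closed loop
    measured = phase + corrector              # sensor sees the residual wave front
    corrector -= gain * measured              # push the correction toward -phase

residual = phase + corrector                  # residual shrinks by (1 - gain) per step
print(np.std(residual) / np.std(phase))
```

Each iteration multiplies the residual by (1 - gain), so after 10 steps the rms error is about 0.5^10, roughly a thousandth of its initial value.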

  16. Sweetwater, Texas Large N Experiment

    NASA Astrophysics Data System (ADS)

    Sumy, D. F.; Woodward, R.; Barklage, M.; Hollis, D.; Spriggs, N.; Gridley, J. M.; Parker, T.

    2015-12-01

    From 7 March to 30 April 2014, NodalSeismic, Nanometrics, and IRIS PASSCAL conducted a collaborative, spatially-dense seismic survey with several thousand nodal short-period geophones complemented by a backbone array of broadband sensors near Sweetwater, Texas. This pilot project demonstrates the efficacy of industry and academic partnerships, and leveraged a larger, commercial 3D survey to collect passive source seismic recordings to image the subsurface. This innovative deployment of a large-N mixed-mode array allows industry to explore array geometries and investigate the value of broadband recordings, while affording academics a dense wavefield imaging capability and an operational model for high volume instrument deployment. The broadband array consists of 25 continuously-recording stations from IRIS PASSCAL and Nanometrics, with an array design that maximized recording of horizontal-traveling seismic energy for surface wave analysis over the primary target area with sufficient offset for imaging objectives at depth. In addition, 2639 FairfieldNodal Zland nodes from NodalSeismic were deployed in three sub-arrays: the outlier, backbone, and active source arrays. The backbone array consisted of 292 nodes that covered the entire survey area, while the outlier array consisted of 25 continuously-recording nodes distributed at a ~3 km distance away from the survey perimeter. Both the backbone and outlier array provide valuable constraints for the passive source portion of the analysis. This project serves as a learning platform to develop best practices in the support of large-N arrays with joint industry and academic expertise. Here we investigate lessons learned from a facility perspective, and present examples of data from the various sensors and array geometries. We will explore first-order results from local and teleseismic earthquakes, and show visualizations of the data across the array. Data are archived at the IRIS DMC under stations codes XB and 1B.

  17. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  18. Large and small photovoltaic powerplants

    NASA Astrophysics Data System (ADS)

    Cormode, Daniel

    The installed base of photovoltaic power plants in the United States has roughly doubled every 1 to 2 years between 2008 and 2015. The primary economic drivers of this are government mandates for renewable power, falling prices for all PV system components, 3rd party ownership models, and a generous tariff scheme known as net-metering. Other drivers include a desire for decreasing the environmental impact of electricity generation and a desire for some degree of independence from the local electric utility. The result is that in coming years, PV power will move from being a minor niche to a mainstream source of energy. As additional PV power comes online this will create challenges for the electric grid operators. We examine some problems related to large scale adoption of PV power in the United States. We do this by first discussing questions of reliability and efficiency at the PV system level. We measure the output of a fleet of small PV systems installed at Tucson Electric Power, and we characterize the degradation of those PV systems over several years. We develop methods to predict energy output from PV systems and quantify the impact of negatives such as partial shading, inverter inefficiency and malfunction of bypass diodes. Later we characterize the variability from large PV systems, including fleets of geographically diverse utility scale power plants. We also consider the power and energy requirements needed to smooth those systems, both from the perspective of an individual system and as a fleet. Finally we report on experiments from a utility scale PV plus battery hybrid system deployed near Tucson, Arizona where we characterize the ability of this system to produce smoothly ramping power as well as production of ancillary energy services such as frequency response.

  19. Large hole rotary drill performance

    SciTech Connect

    Workman, J.L.; Calder, P.N.

    1996-12-31

    Large hole rotary drilling is one of the most common methods of producing blastholes in open pit mining. Large hole drilling generally refers to diameters from 9 to 17 inch (229 to 432 mm); however, a considerable amount of rotary drilling is done in diameters from 6-1/2 to 9 inch (165 to 229 mm). These smaller diameters are especially prevalent in gold mining and quarrying. Rotary drills are major mining machines having substantial capital cost. Drill bit costs can also be high, depending on the bit type and formation being drilled. To keep unit costs low, the drills must perform at a high productivity level. The most important factor in rotary drilling is the penetration rate. This paper discusses the factors affecting penetration rate. An empirical factor is given for calculating the penetration rate based on rock strength, pulldown weight and the RPM. The importance of using modern drill performance monitoring systems to calibrate the penetration equation for specific rock formations is discussed. Adequate air delivered to the bottom of the hole is very important to achieving maximum penetration rates. If there is insufficient bailing velocity, cuttings will not be transported from the bottom of the hole rapidly enough and the penetration rate is very likely to decrease. An expression for the balancing air velocity is given. The amount by which the air velocity must exceed the balancing velocity for effective operation is discussed. The effect of altitude on compressor size is also provided.
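The abstract states that an empirical factor is given for calculating penetration rate from rock strength, pulldown weight and RPM, without reproducing it. One widely quoted empirical relation of exactly this form, often attributed to Bauer and Calder, is sketched below; the functional form and constants are illustrative and should not be read as this paper's calibrated values.

```python
import math

def penetration_rate_ft_per_hr(ucs_kpsi, pulldown_klb_per_in, rpm):
    """Hypothetical Bauer-Calder-style estimate.

    ucs_kpsi: uniaxial compressive strength in 1000 psi
    pulldown_klb_per_in: pulldown weight in 1000 lb per inch of bit diameter
    rpm: rotary speed in revolutions per minute
    """
    return (61.0 - 28.0 * math.log10(ucs_kpsi)) * pulldown_klb_per_in * (rpm / 300.0)

# e.g. 30 kpsi rock, 5 klb per inch of bit diameter, 75 RPM
pr = penetration_rate_ft_per_hr(30.0, 5.0, 75.0)
print(round(pr, 1), "ft/hr")
```

The log term captures the paper's central point: penetration rate falls off with rock strength, which is why the authors stress calibrating the equation against drill-monitoring data for each formation.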

  20. Large Space Antenna Systems Technology, 1984

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1985-01-01

    Mission applications for large space antenna systems; large space antenna structural systems; materials and structures technology; structural dynamics and control technology, electromagnetics technology, large space antenna systems and the Space Station; and flight test and evaluation were examined.

  1. Gastric Large Cell Neuroendocrine Carcinoma

    PubMed Central

    Rustagi, Tarun; Alekshun, Todd J.

    2010-01-01

    Case: A 63-year-old male presented with unintentional weight loss of 20 pounds over a 4-month duration. He reported loss of appetite, intermittent post-prandial nausea, bloating and early satiety. He also complained of dyspepsia and had been treated for reflux during the previous 2 years. He denied vomiting, dysphagia, odynophagia, abdominal pain, melena, hematochezia, or alterations in bowel habits. Additionally, he denied fevers, night sweats, cough, or dyspnea. He quit smoking 25 years ago, and denied alcohol use. His past medical history was significant for basal cell carcinoma treated with local curative therapy and he was without recurrence on surveillance. Pertinent family history included a paternal uncle with lung cancer at the age of 74. Physical examination was unremarkable except for occult heme-positive stools. Laboratory evaluation revealed elevated liver enzymes (ALT-112, AST-81, AlkPhos-364). CT scan of the chest, abdomen and pelvis showed diffuse heterogeneous liver with extensive nodularity, raising the concern for metastases. Serum tumor-markers: PSA, CEA, CA 19-9, and AFP were all within normal limits. Screening colonoscopy was normal, but esophagogastroduodenoscopy revealed a malignant-appearing ulcerative lesion involving the gastro-esophageal junction and gastric cardia. Pathology confirmed an invasive gastric large cell neuroendocrine carcinoma. Ultrasound-guided fine needle aspiration of a hepatic lesion revealed malignant cells with cytologic features consistent with large-cell type carcinoma and positive immunostaining for synaptophysin favoring neuroendocrine differentiation. A PET-CT demonstrated intense diffuse FDG uptake of the liver, suggesting diffuse hepatic parenchymal infiltration by tumor. There were multiple foci of intense osseous FDG uptake with corresponding osteolytic lesions seen on CT scan. The remaining intra-abdominal and intra-thoracic structures were unremarkable. 
The patient will receive palliative systemic therapy.

  2. Large Block Test Final Report

    SciTech Connect

    Lin, W

    2001-12-01

    This report documents the Large-Block Test (LBT) conducted at Fran Ridge near Yucca Mountain, Nevada. The LBT was a thermal test conducted on an exposed block of middle non-lithophysal Topopah Spring tuff (Tptpmn) and was designed to assist in understanding the thermal-hydrological-mechanical-chemical (THMC) processes associated with heating and then cooling a partially saturated fractured rock mass. The LBT was unique in that it was a large (3 x 3 x 4.5 m) block with top and sides exposed. Because the block was exposed at the surface, boundary conditions on five of the six sides of the block were relatively well known and controlled, making this test both easier to model and easier to monitor. This report presents a detailed description of the test as well as analyses of the data and conclusions drawn from the test. The rock block that was tested during the LBT was exposed by excavation and removal of the surrounding rock. The block was characterized and instrumented, and the sides were sealed and insulated to inhibit moisture and heat loss. Temperature on the top of the block was also controlled. The block was heated for 13 months, during which time temperature, moisture distribution, and deformation were monitored. After the test was completed and the block cooled down, a series of boreholes were drilled, and one of the heater holes was over-cored to collect samples for post-test characterization of mineralogy and mechanical properties. Section 2 provides background on the test. Section 3 lists the test objectives and describes the block site, the site configuration, and measurements made during the test. Section 3 also presents a chronology of events associated with the LBT, characterization of the block, and the pre-heat analyses of the test. Section 4 describes the fracture network contained in the block. Section 5 describes the heating/cooling system used to control the temperature in the block and presents the thermal history of the block during the test.

  3. Large Volcanic Rises on Venus

    NASA Technical Reports Server (NTRS)

    Smrekar, Suzanne E.; Kiefer, Walter S.; Stofan, Ellen R.

    1997-01-01

    Large volcanic rises on Venus have been interpreted as hotspots, or the surface manifestation of mantle upwelling, on the basis of their broad topographic rises, abundant volcanism, and large positive gravity anomalies. Hotspots offer an important opportunity to study the behavior of the lithosphere in response to mantle forces. In addition to the four previously known hotspots, Atla, Bell, Beta, and western Eistla Regiones, five new probable hotspots, Dione, central Eistla, eastern Eistla, Imdr, and Themis, have been identified in the Magellan radar, gravity and topography data. These nine regions exhibit a wider range of volcano-tectonic characteristics than previously recognized for venusian hotspots, and have been classified as rift-dominated (Atla, Beta), coronae-dominated (central and eastern Eistla, Themis), or volcano-dominated (Bell, Dione, western Eistla, Imdr). The apparent depths of compensation for these regions ranges from 65 to 260 km. New estimates of the elastic thickness, using the 90 deg and order spherical harmonic field, are 15-40 km at Bell Regio, and 25 km at western Eistla Regio. Phillips et al. find a value of 30 km at Atla Regio. Numerous models of lithospheric and mantle behavior have been proposed to interpret the gravity and topography signature of the hotspots, with most studies focusing on Atla or Beta Regiones. Convective models with Earth-like parameters result in estimates of the thickness of the thermal lithosphere of approximately 100 km. Models of stagnant lid convection or thermal thinning infer the thickness of the thermal lithosphere to be 300 km or more. Without additional constraints, any of the model fits are equally valid. The thinner thermal lithosphere estimates are most consistent with the volcanic and tectonic characteristics of the hotspots. Estimates of the thermal gradient based on estimates of the elastic thickness also support a relatively thin lithosphere (Phillips et al.). 
The advantage of larger estimates of

  4. The Large Millimeter Telescope (LMT)

    NASA Astrophysics Data System (ADS)

    Young, J. S.; Carrasco, L.; Schloerb, F. P.

    2002-05-01

    The Large Millimeter Telescope (LMT) project is a collaboration between the University of Massachusetts (UMass) in the USA and the Instituto Nacional de Astrofisica, Optica y Electronica (INAOE) in Mexico to build a 50m-diameter millimeter-wave antenna which will operate with good efficiency at wavelengths as short as 1 mm. The LMT is being built at an altitude of 4600 m atop Volcan Sierra Negra, an extinct volcanic peak in the state of Puebla, Mexico, approximately 100 km east of the city of Puebla. At 18 degrees 59' N latitude, the site offers an excellent view of the Galactic Center and good sky coverage of both hemispheres. Construction of the telescope is now well underway, and it is expected to be completed in late 2004. The LMT specifications call for an overall effective surface accuracy of 75 microns rms and a pointing accuracy of 1" rms. The strategy for meeting these performance goals supplements conventional antenna designs with various "active" systems to bring the final performance within the requirements. For surface accuracy, the LMT will rely on an open loop active surface which includes 180 moveable surface segments. For pointing accuracy, we will use traditional approaches supplemented by measurements to characterize the behavior of the structure, including inclinometers and temperature sensors which may be used with finite element models to determine structural deformations and predict pointing behavior. The initial complement of instruments will include a 32 element, heterodyne focal plane array at 3mm; a large format, focal plane bolometer array; a unique wide band receiver and spectrometer to determine the redshifts of primordial galaxies; and a 4 element receiver for the 1mm band. With its excellent sensitivity and angular resolution, the LMT will enable unique studies of the early universe and galaxy evolution, the interstellar medium and star formation in galaxies, and planetary science. In particular, with nearly 2000 m2 of collecting
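The headline specifications above can be sanity-checked with two textbook formulas: the diffraction-limited beam of a filled 50 m aperture at 1 mm, and the Ruze estimate of aperture efficiency for a 75 micron rms surface. This arithmetic is ours, not the abstract's.

```python
import math

D = 50.0          # aperture diameter, m
lam = 1.0e-3      # observing wavelength, m (1 mm)
sigma = 75.0e-6   # rms surface accuracy, m

# Diffraction-limited beam: theta ~ 1.22 * lambda / D, converted to arcseconds
theta_arcsec = math.degrees(1.22 * lam / D) * 3600.0

# Ruze formula: aperture efficiency loss from rms surface error sigma
eta = math.exp(-(4.0 * math.pi * sigma / lam) ** 2)

print(f"beam ~ {theta_arcsec:.1f} arcsec, surface efficiency ~ {eta:.0%} at 1 mm")
```

The result, a roughly 5 arcsec beam with about 40% surface efficiency at the shortest design wavelength, is consistent with the abstract's claim of good efficiency down to 1 mm; efficiency improves rapidly at the longer 3 mm band.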

  5. Large cities are less green

    PubMed Central

    Oliveira, Erneson A.; Andrade, José S.; Makse, Hernán A.

    2014-01-01

    We study how urban quality evolves as a result of carbon dioxide emissions as urban agglomerations grow. We employ a bottom-up approach combining two unprecedented microscopic datasets on population and carbon dioxide emissions in the continental US. We first aggregate settlements that are close to each other into cities using the City Clustering Algorithm (CCA), defining cities beyond the administrative boundaries. Then, we use data on CO2 emissions at a fine geographic scale to determine the total emissions of each city. We find a superlinear scaling behavior, expressed by a power-law, between CO2 emissions and city population with average allometric exponent β = 1.46 across all cities in the US. This result suggests that the high productivity of large cities comes at the expense of a proportionally larger amount of emissions compared to small cities. Furthermore, our results are substantially different from those obtained by the standard administrative definition of cities, i.e. Metropolitan Statistical Area (MSA). Specifically, MSAs display isometric scaling of emissions, and we argue that this discrepancy is due to the overestimation of MSA areas. The results suggest that allometric studies based on administrative boundaries to define cities may suffer from endogeneity bias. PMID:24577263
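The allometric exponent reported above is the slope of a power-law fit, E = a * P^β, estimated in log-log space. A minimal sketch of that fitting procedure on synthetic data follows; the generated points stand in for the paper's city-level emission/population pairs, and β = 1.46 is the reported average, not derived here.

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true = 1.46                                          # reported US average exponent
pop = 10 ** rng.uniform(3, 7, size=500)                   # synthetic city populations
emissions = 2.0 * pop**beta_true * 10 ** rng.normal(0, 0.1, size=500)  # lognormal scatter

# Ordinary least squares in log-log space: slope is the allometric exponent
beta_hat, log_a = np.polyfit(np.log10(pop), np.log10(emissions), 1)
print(round(beta_hat, 2))   # recovers a value near 1.46; beta > 1 means superlinear
```

An isometric (MSA-like) result would correspond to a fitted slope near 1, which is how the paper distinguishes CCA-defined cities from administrative ones.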

  6. Natural Selection in Large Populations

    NASA Astrophysics Data System (ADS)

    Desai, Michael

    2011-03-01

    I will discuss theoretical and experimental approaches to the evolutionary dynamics and population genetics of natural selection in large populations. In these populations, many mutations are often present simultaneously, and because recombination is limited, selection cannot act on them all independently. Rather, it can only affect whole combinations of mutations linked together on the same chromosome. Methods common in theoretical population genetics have been of limited utility in analyzing this coupling between the fates of different mutations. In the past few years it has become increasingly clear that this is a crucial gap in our understanding, as sequence data has begun to show that selection appears to act pervasively on many linked sites in a wide range of populations, including viruses, microbes, Drosophila, and humans. I will describe approaches that combine analytical tools drawn from statistical physics and dynamical systems with traditional methods in theoretical population genetics to address this problem, and describe how experiments in budding yeast can help us directly observe these evolutionary dynamics.

  7. Chemotaxis of large granular lymphocytes

    SciTech Connect

    Pohajdak, B.; Gomez, J.; Orr, F.W.; Khalil, N.; Talgoy, M.; Greenberg, A.H.

    1986-01-01

    The hypothesis that large granular lymphocytes (LGL) are capable of directed locomotion (chemotaxis) was tested. A population of LGL isolated from discontinuous Percoll gradients migrated along concentration gradients of N-formyl-methionyl-leucyl-phenylalanine (f-MLP), casein, and C5a, well known chemoattractants for polymorphonuclear leukocytes and monocytes, as well as interferon-β and colony-stimulating factor. Interleukin 2, tuftsin, platelet-derived growth factor, and fibronectin were inactive. Migratory responses were greater in Percoll fractions with the highest lytic activity and HNK-1+ cells. The chemotactic response to f-MLP, casein, and C5a was always greater when the chemoattractant was present in greater concentration in the lower compartment of the Boyden chamber. Optimum chemotaxis was observed after a 1 hr incubation that made use of 12 μm nitrocellulose filters. LGL exhibited a high degree of nondirected locomotion when allowed to migrate for longer periods (> 2 hr), and when cultured in vitro for 24 to 72 hr in the presence or absence of IL 2 containing phytohemagglutinin-conditioned medium. LGL chemotaxis to f-MLP could be inhibited in a dose-dependent manner by the inactive structural analog CBZ-phe-met, and the RNK tumor line specifically bound f-ML(3H)P, suggesting that LGL bear receptors for the chemotactic peptide.

  8. Anthropogenic triggering of large earthquakes.

    PubMed

    Mulargia, Francesco; Bizzarri, Andrea

    2014-01-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1-10 MPa, we find that injecting fluids into the subsoil at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults within a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor "foreshocks", since the induction may occur with a delay of up to several years.
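
The headline number can be illustrated with a static Coulomb-failure check, which is far simpler than the paper's time-dependent poroelastic solution: pore pressure p reduces the effective normal stress, so slip starts when tau >= mu * (sigma_n - p). All stress values below are assumed for illustration only:

```python
# Coulomb failure with pore pressure: slip when tau >= mu * (sigma_n - p).
mu = 0.6            # static friction coefficient typical of faults (assumed)
sigma_n = 100.0     # normal stress at hypocentral depth, MPa (assumed)
tau = 59.95         # shear stress, MPa: a fault loaded to just short of failure (assumed)

# fluid overpressure needed to bring this fault to failure
dp = sigma_n - tau / mu
print(f"triggering overpressure: {dp:.3f} MPa")
```

For a fault already loaded close to its frictional strength, the required overpressure comes out well below 0.1 MPa, consistent with the abstract's threshold.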

  10. The Mass of Large Impactors

    NASA Technical Reports Server (NTRS)

    Parisi, M. G.; Brunini, A.

    1996-01-01

    By means of a simplified dynamical model, we have computed the eccentricity change in the orbit of each giant planet, caused by a single, large impact at the end of the accretion process. In order to set an upper bound on this eccentricity change, we have considered the giant planets' present eccentricities as primordial ones. By means of this procedure, we were able to obtain an implicit relation for the impactor masses and maximum velocities. We have estimated by this method the maximum allowed mass to impact Jupiter to be approx. 1.136 x 10(exp -1) and, in the case of Neptune, approx. 3.99 x 10(exp -2) (expressed in units of each planet's final mass). Due to the similar present eccentricities of Saturn, Uranus and Jupiter, the constraints on the masses and velocities of the bodies to impact them (in units of each planet's final mass and velocity, respectively) are almost the same for the three planets. These results are in good agreement with those obtained by Lissauer and Safronov. These bounds might be used to derive the mass distribution of planetesimals in the early solar system.
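
The paper's implicit relation comes from a dynamical model; the scaling it rests on can be sketched with a cruder impulsive-kick estimate. The head-on geometry and the numerical values below are assumptions for illustration, and this simplified version does not reproduce the paper's bounds:

```python
def mass_ratio_bound(e, v_orb_kms, v_imp_kms):
    """Order-of-magnitude impactor mass bound from an impulsive kick.

    Momentum conservation gives the planet a velocity change dv ~ (m/M) * v_imp,
    and a tangential kick changes the eccentricity by de ~ 2 * dv / v_orb.
    Setting de equal to the present eccentricity bounds m/M.
    """
    dv = e * v_orb_kms / 2.0
    return dv / v_imp_kms

# Jupiter: e = 0.048, v_orb ~ 13.1 km/s; impact speed near escape speed ~ 60 km/s (assumed)
print(f"m/M <~ {mass_ratio_bound(0.048, 13.1, 60.0):.1e}")
```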

  11. How large should whales be?

    PubMed

    Clauset, Aaron

    2013-01-01

    The evolution and distribution of species body sizes for terrestrial mammals is well-explained by a macroevolutionary tradeoff between short-term selective advantages and long-term extinction risks from increased species body size, unfolding above the 2 g minimum size induced by thermoregulation in air. Here, we consider whether this same tradeoff, formalized as a constrained convection-reaction-diffusion system, can also explain the sizes of fully aquatic mammals, which have not previously been considered. By replacing the terrestrial minimum with a pelagic one, at roughly 7000 g, the terrestrial mammal tradeoff model accurately predicts, with no tunable parameters, the observed body masses of all extant cetacean species, including the 175,000,000 g Blue Whale. This strong agreement between theory and data suggests that a universal macroevolutionary tradeoff governs body size evolution for all mammals, regardless of their habitat. The dramatic sizes of cetaceans can thus be attributed mainly to the increased convective heat loss in water, which shifts the species size distribution upward and pushes its right tail into ranges inaccessible to terrestrial mammals. Under this macroevolutionary tradeoff, the largest expected species occurs where the rate at which smaller-bodied species move up into large-bodied niches approximately equals the rate at which extinction removes them. PMID:23342050

  12. Large cities are less green.

    PubMed

    Oliveira, Erneson A; Andrade, José S; Makse, Hernán A

    2014-02-28

    We study how urban quality evolves as a result of carbon dioxide emissions as urban agglomerations grow. We employ a bottom-up approach combining two unprecedented microscopic datasets on population and carbon dioxide emissions in the continental US. We first aggregate settlements that are close to each other into cities using the City Clustering Algorithm (CCA), defining cities beyond their administrative boundaries. Then, we use data on CO2 emissions at a fine geographic scale to determine the total emissions of each city. We find a superlinear scaling behavior, expressed by a power law, between CO2 emissions and city population, with average allometric exponent β = 1.46 across all cities in the US. This result suggests that the high productivity of large cities comes at the expense of a proportionally larger amount of emissions compared to small cities. Furthermore, our results are substantially different from those obtained by the standard administrative definition of cities, i.e. the Metropolitan Statistical Area (MSA). Specifically, MSAs display isometric scaling of emissions, and we argue that this discrepancy is due to the overestimation of MSA areas. The results suggest that allometric studies based on administrative boundaries to define cities may suffer from endogeneity bias.
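
Allometric exponents in studies of this kind are typically estimated by ordinary least squares in log-log space; a self-contained sketch on synthetic data (the CCA clustering step itself is not reproduced here, and the noise model is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic city data obeying E = a * P^beta with beta = 1.46, plus lognormal scatter
pop = 10 ** rng.uniform(4, 7, 500)                     # populations, 10^4 .. 10^7
co2 = 2.0 * pop ** 1.46 * rng.lognormal(0, 0.3, 500)   # emissions, arbitrary units

# allometric exponent recovered as the slope of an OLS fit in log-log space
beta, log_a = np.polyfit(np.log(pop), np.log(co2), 1)
print(f"fitted exponent beta = {beta:.2f}")
```

A slope above 1 in this fit is exactly the superlinear (β = 1.46) scaling reported in the abstract; an MSA-style isometric result would give a slope near 1.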

  13. Anthropogenic Triggering of Large Earthquakes

    PubMed Central

    Mulargia, Francesco; Bizzarri, Andrea

    2014-01-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1–10 MPa, we find that injecting fluids into the subsoil at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults within a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor “foreshocks”, since the induction may occur with a delay of up to several years. PMID:25156190

  14. Control of large space structures

    NASA Technical Reports Server (NTRS)

    Gran, R.; Rossi, M.; Moyer, H. G.; Austin, F.

    1979-01-01

    The control of large space structures was studied to determine what, if any, limitations are imposed on the size of spacecraft which may be controlled using current control system design technology. Using a typical structure in the 35 to 70 meter size category, a control system design that used actuators that are currently available was designed. The amount of control power required to maintain the vehicle in a stabilized gravity gradient pointing orientation that also damped various structural motions was determined. The moment of inertia and mass properties of this structure were varied to verify that stability and performance were maintained. The study concludes that the structure's size is required to change by at least a factor of two before any stability problems arise. The stability margin that is lost is due to the scaling of the gravity gradient torques (the rigid body control) and as such can easily be corrected by changing the control gains associated with the rigid body control. A secondary conclusion from the study is that the control design that accommodates the structural motions (to damp them) is a little more sensitive than the design that works on attitude control of the rigid body only.
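
The rigid-body part of such a controller works against the classical gravity-gradient torque; a textbook formula (not the study's full flexible-body model, and with orbit and inertia values assumed for illustration) shows how the torque, and hence the rigid-body control gains, scale with the inertia difference:

```python
import math

MU_EARTH = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R = 6.778e6              # orbit radius, m (~400 km altitude, assumed)

def gg_torque(delta_I, theta_rad):
    """Gravity-gradient torque for inertia difference delta_I (kg m^2)
    at pitch offset theta from the local vertical."""
    return 1.5 * MU_EARTH / R**3 * delta_I * math.sin(2 * theta_rad)

# a large structure with delta_I = 1e7 kg m^2 (assumed), offset 1 degree
print(f"{gg_torque(1.0e7, math.radians(1)):.2f} N m")
```

Because the torque is proportional to the inertia difference, resizing the structure rescales the rigid-body restoring torque directly, which is why the study finds the lost stability margin can be recovered simply by retuning the rigid-body control gains.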

  15. Large cities are less green

    NASA Astrophysics Data System (ADS)

    Oliveira, Erneson A.; Andrade, José S.; Makse, Hernán A.

    2014-02-01

    We study how urban quality evolves as a result of carbon dioxide emissions as urban agglomerations grow. We employ a bottom-up approach combining two unprecedented microscopic datasets on population and carbon dioxide emissions in the continental US. We first aggregate settlements that are close to each other into cities using the City Clustering Algorithm (CCA), defining cities beyond their administrative boundaries. Then, we use data on CO2 emissions at a fine geographic scale to determine the total emissions of each city. We find a superlinear scaling behavior, expressed by a power law, between CO2 emissions and city population, with average allometric exponent β = 1.46 across all cities in the US. This result suggests that the high productivity of large cities comes at the expense of a proportionally larger amount of emissions compared to small cities. Furthermore, our results are substantially different from those obtained by the standard administrative definition of cities, i.e. the Metropolitan Statistical Area (MSA). Specifically, MSAs display isometric scaling of emissions, and we argue that this discrepancy is due to the overestimation of MSA areas. The results suggest that allometric studies based on administrative boundaries to define cities may suffer from endogeneity bias.

  17. Large Isotope Spectrometer for Astromag

    NASA Technical Reports Server (NTRS)

    Binns, W. R.; Klarmann, J.; Israel, M. H.; Garrard, T. L.; Mewaldt, R. A.; Stone, E. C.; Ormes, J. F.; Streitmatter, R. E.; Rasmussen, I. L.; Wiedenbeck, M. E.

    1990-01-01

    The Large Isotope Spectrometer for Astromag (LISA) is an experiment designed to measure the isotopic composition and energy spectra of cosmic rays for elements extending from beryllium through zinc. The overall objectives of this investigation are to study the origin and evolution of galactic matter; the acceleration, transport, and time scales of cosmic rays in the galaxy; and to search for heavy antinuclei in the cosmic radiation. To achieve these objectives, the LISA experiment will make the first identifications of individual heavy cosmic ray isotopes in the energy range from about 2.5 to 4 GeV/n, where relativistic time dilation effects enhance the abundances of radioactive clocks and where the effects of solar modulation and cross-section variations are minimized. It will extend high resolution measurements of individual element abundances and their energy spectra to energies of nearly 1 TeV/n, and has the potential for discovering heavy antinuclei which could not have been formed except in extragalactic sources.
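
The time-dilation window mentioned above is easy to quantify: at kinetic energy Ek per nucleon, the Lorentz factor is gamma = 1 + Ek/(m c^2) with m c^2 ~ 0.9315 GeV per nucleon, so radioactive-clock isotopes such as 10Be decay several times more slowly in the lab frame:

```python
AMU_GEV = 0.9315  # rest energy per nucleon, GeV

def lorentz_gamma(ek_per_nucleon_gev):
    """Lorentz factor for a nucleus with the given kinetic energy per nucleon."""
    return 1.0 + ek_per_nucleon_gev / AMU_GEV

for ek in (2.5, 4.0):
    g = lorentz_gamma(ek)
    print(f"{ek} GeV/n: gamma = {g:.2f}, radioactive clocks run {g:.1f}x slower in the lab frame")
```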

  18. Facilitating Navigation Through Large Archives

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.; Smith, Stephanie L.; Troung, Dat; Hodgson, Terry R.

    2005-01-01

    Automated Visual Access (AVA) is a computer program that effectively makes a large collection of information visible in a manner that enables a user to quickly and efficiently locate information resources, with minimal need for conventional keyword searches and perusal of complex hierarchical directory systems. AVA includes three key components: (1) a taxonomy that comprises a collection of words and phrases, clustered according to meaning, that are used to classify information resources; (2) a statistical indexing and scoring engine; and (3) a component that generates a graphical user interface that uses the scoring data to generate a visual map of resources and topics. The top level of an AVA display is a pictorial representation of an information archive. The user enters the depicted archive by either clicking on a depiction of subject area cluster, selecting a topic from a list, or entering a query into a text box. The resulting display enables the user to view candidate information entities at various levels of detail. Resources are grouped spatially by topic with greatest generality at the top layer and increasing detail with depth. The user can zoom in or out of specific sites or into greater or lesser content detail.
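
The abstract does not specify how AVA's statistical indexing and scoring engine works internally; a common choice for this kind of component is tf-idf scoring, sketched here on three hypothetical documents (the document names and contents are invented for illustration):

```python
import math
from collections import Counter

docs = {
    "propulsion": "ion thruster propellant efficiency thrust power",
    "imaging":    "camera sensor optics resolution image",
    "archives":   "archive index search retrieval metadata image",
}

tf = {name: Counter(text.split()) for name, text in docs.items()}   # term frequencies
df = Counter(w for counts in tf.values() for w in counts)           # document frequencies
N = len(docs)

def score(query, name):
    """Sum of tf-idf weights of the query terms within one document."""
    counts = tf[name]
    return sum(counts[w] * math.log(N / df[w]) for w in query.split() if w in counts)

best = max(docs, key=lambda n: score("image search", n))
print(best)
```

Scores of this kind can then drive both ranked retrieval and the spatial grouping of resources by topic that the AVA display describes.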

  19. Large excimer lasers for fusion

    SciTech Connect

    Jensen, R.J.

    1986-01-01

    Important goals in DOE and DOD programs require multimegajoule laser pulses. For inertial confinement fusion there is also a requirement to deliver the pulse in about 25 nsec with a very particular power vs time profile - all at high overall efficiency and low cost per joule. After exhaustive consideration of various alternatives, our studies have shown that the most cost effective approach to energy scaling is to increase the size of the final amplifiers up to the 200 to 300 kJ level. This conclusion derives largely from the fact that, at a given complexity, costs increase slowly with increasing part size while output energy should increase dramatically. Extrapolations to low cost by drastic cuts in the unit cost of smaller devices through mass production are considered highly risky. At a minimum, the requirement to provide space, optics, and mounts for such systems will remain expensive. In recent years there have been dramatic advances in scaling. The Los Alamos LAM has produced over 10 kJ in a single 1/2 nsec pulse. In this paper we explore the issues involved in scaling to higher energy while still maintaining high efficiencies. In the remainder of this paper we will discuss KrF laser scaling for the fusion mission. We will omit most of the discussion of the laser system design, and address only the KrF amplifiers.

  20. Large Scale Magnetostrictive Valve Actuator

    NASA Technical Reports Server (NTRS)

    Richard, James A.; Holleman, Elizabeth; Eddleman, David

    2008-01-01

    Marshall Space Flight Center's Valves, Actuators and Ducts Design and Development Branch developed a large scale magnetostrictive valve actuator. The potential advantages of this technology are faster, more efficient valve actuators that consume less power, provide precise position control, and deliver higher flow rates than conventional solenoid valves. Magnetostrictive materials change dimensions when a magnetic field is applied; this property is referred to as magnetostriction. Magnetostriction is caused by the alignment of the magnetic domains in the material's crystalline structure with the applied magnetic field lines. Typically, the material changes shape by elongating in the axial direction and constricting in the radial direction, resulting in no net change in volume. All hardware fabrication and testing are complete. This paper will discuss: the potential applications of the technology; an overview of the as-built actuator design; problems that were uncovered during development testing; a review of the test data and weaknesses of the design; and areas for improvement for future work. This actuator holds promise as a low-power, high-load, proportionally controlled actuator for valves requiring 440 to 1500 newtons of load.
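
The strains involved in magnetostriction are small, which is why such actuators trade stroke for force; a toy calculation makes the scale concrete (the ~1000 ppm saturation strain below is an assumed Terfenol-D-class value, not a figure from the paper):

```python
LAMBDA_SAT = 1.0e-3   # saturation magnetostrain, ~1000 ppm (assumed Terfenol-D-class value)

def elongation(rod_length_m, strain=LAMBDA_SAT):
    """Axial stroke of a magnetostrictive rod driven to the given strain."""
    return rod_length_m * strain

# a 0.2 m rod yields only ~0.2 mm of stroke, so valve designs must either
# accept short strokes or amplify the motion mechanically or hydraulically
print(f"{elongation(0.2) * 1e3:.1f} mm of stroke from a 0.2 m rod")
```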

  1. Disorder in large-N theories

    NASA Astrophysics Data System (ADS)

    Aharony, Ofer; Komargodski, Zohar; Yankielowicz, Shimon

    2016-04-01

    We consider Euclidean Conformal Field Theories perturbed by quenched disorder, namely by random fluctuations in their couplings. Such theories are relevant for second-order phase transitions in the presence of impurities or other forms of disorder. Theories with quenched disorder often flow to new fixed points of the renormalization group. We begin with disorder in free field theories. Imry and Ma showed that disordered free fields can only exist for d > 4. For d > 4 we show that disorder leads to new fixed points which are not scale-invariant. We then move on to large-N theories (vector models or gauge theories in the 't Hooft limit). We compute exactly the beta function for the disorder, and the correlation functions of the disordered theory. We generalize the results of Imry and Ma by showing that such disordered theories exist only when disorder couples to operators of dimension Δ > d/4. Sometimes the disordered fixed points are not scale-invariant, and in other cases they have unconventional dependence on the disorder, including non-trivial effects due to irrelevant operators. Holography maps disorder in conformal theories to stochastic differential equations in a higher dimensional space. We use this dictionary to reproduce our field theory results. We also study the leading 1/N corrections, both by field theory methods and by holography. These corrections are particularly important when disorder scales with the number of degrees of freedom.
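
A quick dimension count behind these statements (a standard scaling argument, sketched under assumed conventions rather than taken from the paper):

```latex
% Disorder couples linearly to a local operator $\mathcal{O}$ of dimension $\Delta$:
\[
  S \;\to\; S + \int d^d x \, h(x)\,\mathcal{O}(x),
  \qquad
  \overline{h(x)\,h(y)} = v\,\delta^{(d)}(x-y).
\]
% For the action term to be dimensionless, $[h] = d - \Delta$, so the variance
% carries dimension $[v] = 2(d-\Delta) - d = d - 2\Delta$: disorder is a relevant
% coupling when $\Delta < d/2$ (the Harris criterion). For a free scalar,
% $\Delta = (d-2)/2$, and the existence condition $\Delta > d/4$ quoted in the
% abstract reduces to $d > 4$, recovering the Imry--Ma result.
```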

  2. How Large Should Whales Be?

    PubMed Central

    Clauset, Aaron

    2013-01-01

    The evolution and distribution of species body sizes for terrestrial mammals is well-explained by a macroevolutionary tradeoff between short-term selective advantages and long-term extinction risks from increased species body size, unfolding above the 2 g minimum size induced by thermoregulation in air. Here, we consider whether this same tradeoff, formalized as a constrained convection-reaction-diffusion system, can also explain the sizes of fully aquatic mammals, which have not previously been considered. By replacing the terrestrial minimum with a pelagic one, at roughly 7000 g, the terrestrial mammal tradeoff model accurately predicts, with no tunable parameters, the observed body masses of all extant cetacean species, including the 175,000,000 g Blue Whale. This strong agreement between theory and data suggests that a universal macroevolutionary tradeoff governs body size evolution for all mammals, regardless of their habitat. The dramatic sizes of cetaceans can thus be attributed mainly to the increased convective heat loss in water, which shifts the species size distribution upward and pushes its right tail into ranges inaccessible to terrestrial mammals. Under this macroevolutionary tradeoff, the largest expected species occurs where the rate at which smaller-bodied species move up into large-bodied niches approximately equals the rate at which extinction removes them. PMID:23342050

  3. Large optics inspection, tilting, and washing stand

    DOEpatents

    Ayers, Marion Jay; Ayers, Shannon Lee

    2010-08-24

    A large optics stand provides a risk-free means, and a corresponding method, of safely tilting large optics with ease. The optics are supported in the horizontal position by pads. In the vertical plane the optics are supported by saddles that evenly distribute the optics' weight over a large area.

  4. Large optics inspection, tilting, and washing stand

    DOEpatents

    Ayers, Marion Jay; Ayers, Shannon Lee

    2012-10-09

    A large optics stand provides a risk-free means, and a corresponding method, of safely tilting large optics with ease. The optics are supported in the horizontal position by pads. In the vertical plane the optics are supported by saddles that evenly distribute the optics' weight over a large area.

  5. Fronts in Large Marine Ecosystems

    NASA Astrophysics Data System (ADS)

    Belkin, Igor M.; Cornillon, Peter C.; Sherman, Kenneth

    2009-04-01

    Oceanic fronts shape marine ecosystems; therefore front mapping and characterization are among the most important aspects of physical oceanography. Here we report on the first global remote sensing survey of fronts in the Large Marine Ecosystems (LME). This survey is based on a unique frontal data archive assembled at the University of Rhode Island. Thermal fronts were automatically derived with the edge detection algorithm of Cayula and Cornillon (1992, 1995, 1996) from 12 years of twice-daily, global, 9-km resolution satellite sea surface temperature (SST) fields to produce synoptic (nearly instantaneous) frontal maps, and to compute the long-term mean frequency of occurrence of SST fronts and their gradients. These synoptic and long-term maps were used to identify major quasi-stationary fronts and to derive provisional frontal distribution maps for all LMEs. Since SST fronts are typically collocated with fronts in other water properties such as salinity, density and chlorophyll, digital frontal paths from SST frontal maps can be used in studies of physical-biological correlations at fronts. Frontal patterns in several exemplary LMEs are described and compared, including those for: the East and West Bering Sea LMEs, Sea of Okhotsk LME, East China Sea LME, Yellow Sea LME, North Sea LME, East and West Greenland Shelf LMEs, Newfoundland-Labrador Shelf LME, Northeast and Southeast US Continental Shelf LMEs, Gulf of Mexico LME, and Patagonian Shelf LME. Seasonal evolution of frontal patterns in major upwelling zones reveals an order-of-magnitude growth of frontal scales from summer to winter. A classification of LMEs with regard to the origin and physics of their respective dominant fronts is presented. The proposed classification lends itself to comparative studies of frontal ecosystems.
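
The edge detector used in the study is the histogram-based Cayula-Cornillon algorithm; a much cruder gradient-threshold proxy conveys the idea on a synthetic SST field (the threshold value and the synthetic front are assumptions for illustration):

```python
import numpy as np

def front_mask(sst, threshold_k_per_px=0.5):
    """Flag pixels whose SST gradient magnitude exceeds a threshold.

    A crude gradient proxy only; the Cayula-Cornillon algorithm used in the
    study detects fronts from bimodal SST histograms and is far more robust.
    """
    gy, gx = np.gradient(sst)
    return np.hypot(gx, gy) > threshold_k_per_px

# synthetic field: two water masses separated by a sharp zonal front
sst = np.where(np.arange(100)[:, None] < 50, 18.0, 12.0) + np.zeros((100, 100))
mask = front_mask(sst)
print(int(mask.sum()), "frontal pixels flagged")
```

Long-term frequency-of-occurrence maps like those in the study amount to averaging such per-image masks over many scenes.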

  6. Ultra-Large Solar Sail

    NASA Technical Reports Server (NTRS)

    Burton, Rodney; Coverstone, Victoria

    2009-01-01

    UltraSail is a next-generation ultra-large (km2 class) sail system. Analysis of the launch, deployment, stabilization, and control of these sails shows that high-payload-mass fractions for interplanetary and deep-space missions are possible. UltraSail combines propulsion and control systems developed for formation-flying microsatellites with a solar sail architecture to achieve controllable sail areas approaching 1 km2. Electrically conductive CP-1 polyimide film results in sail subsystem area densities as low as 5 g/m2. UltraSail produces thrust levels many times those of ion thrusters used for comparable deep-space missions. The primary innovation involves the near-elimination of sail-supporting structures by attaching each blade tip to a formation- flying microsatellite, which deploys the sail and then articulates the sail to provide attitude control, including spin stabilization and precession of the spin axis. These microsatellite tips are controlled by microthrusters for sail-film deployment and mission operations. UltraSail also avoids the problems inherent in folded sail film, namely stressing, yielding, or perforating, by storing the film in a roll for launch and deployment. A 5-km long by 2 micrometer thick film roll on a mandrel with a 1 m circumference (32 cm diameter) has a stored thickness of 5 cm. A 5 m-long mandrel can store a film area of 25,000 m2, and a four-blade system has an area of 0.1 sq km.

  7. Temporal Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Thomas, B. C.

    2004-01-01

    In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes simulation (RANS). The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and a highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re(sub tau) = 180 and Re(sub tau) = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.
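
The defiltering idea at the heart of the ADM is compact enough to demonstrate in one dimension: approximate the inverse filter by the truncated Neumann series Q_N = sum over k from 0 to N of (I - G)^k and apply it to a filtered field. The three-point filter below is an assumed stand-in for the actual grid filter, not the one used in the paper:

```python
import numpy as np

def box_filter(u):
    """Simple three-point filter G (periodic), standing in for the grid filter."""
    return 0.25 * (np.roll(u, 1) + 2 * u + np.roll(u, -1))

def adm_deconvolve(u_bar, order=5):
    """Approximate deconvolution via the truncated Neumann series
    Q_N = sum_{k=0..N} (I - G)^k applied to the filtered field."""
    v = u_bar.copy()
    approx = u_bar.copy()
    for _ in range(order):
        v = v - box_filter(v)   # iterates (I - G)^k u_bar
        approx += v
    return approx

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(3 * x)
err_filtered = np.abs(box_filter(u) - u).max()
err_deconv = np.abs(adm_deconvolve(box_filter(u)) - u).max()
print(f"filtered error {err_filtered:.2e} -> deconvolved error {err_deconv:.2e}")
```

For well-resolved modes the series recovers the unfiltered field almost exactly, which is what makes the reconstructed field usable in a subgrid-scale closure.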

  8. India's National Large Solar Telescope

    NASA Astrophysics Data System (ADS)

    Hasan, S. S.

    2012-12-01

    India's 2-m National Large Solar Telescope (NLST) is aimed primarily at carrying out observations of the solar atmosphere with high spatial and spectral resolution. A comprehensive site characterization program, which commenced in 2007, has identified two superb sites in the Himalayan region at altitudes greater than 4000 m that have extremely low water vapor content and are unaffected by monsoons. With an innovative optical design, the NLST is an on-axis Gregorian telescope with a small number of optical elements to reduce the number of reflections and yield a high throughput with low polarization. In addition, it is equipped with high-order adaptive optics to produce close to diffraction-limited performance. To control atmospheric and thermal perturbations of the observations, the telescope will operate with a fully open dome atop a 25 m tower. Given its design, the NLST can also operate at night without compromising its solar performance. The post-focus instruments include broad-band and tunable Fabry-Pérot narrow-band imaging instruments; a high resolution spectropolarimeter; and an Echelle spectrograph for night-time astronomy. This project is led by the Indian Institute of Astrophysics and has national and international partners. Its geographical location will fill the longitudinal gap between Japan and Europe, and it is expected to be the largest solar telescope with an aperture larger than 1.5 m until the ATST and EST come into operation. An international consortium has been identified to build the NLST. The facility is expected to be commissioned by 2016.

  9. The Large Synoptic Survey Telescope

    NASA Astrophysics Data System (ADS)

    Ivezic, Zeljko

    2007-05-01

    The Large Synoptic Survey Telescope (LSST) is currently by far the most ambitious proposed ground-based optical survey. With initial funding from the US National Science Foundation (NSF), Department of Energy (DOE) laboratories and private sponsors, the design and development efforts are well underway at many institutions, including top universities and leading national laboratories. The main science themes that drive the LSST system design are Dark Energy and Matter, the Solar System Inventory, Transient Optical Sky and the Milky Way Mapping. The LSST system, with its 8.4m telescope and 3,200 Megapixel camera, will be sited at Cerro Pachon in northern Chile, with the first light scheduled for 2014. In a continuous observing campaign, LSST will cover the entire available sky every three nights in two photometric bands to a depth of V=25 per visit (two 15 second exposures), with exquisitely accurate astrometry and photometry. Over the proposed survey lifetime of 10 years, each sky location would be observed about 1000 times, with the total exposure time of 8 hours distributed over six broad photometric bandpasses (ugrizY). This campaign will open a movie-like window on objects that change brightness, or move, on timescales ranging from 10 seconds to 10 years, and will produce a catalog containing over 10 billion galaxies and a similar number of stars. The survey will have a data rate of about 30 TB/night, and will collect over 60 PB of raw data over its lifetime, resulting in an incredibly rich and extensive public archive that will be a treasure trove for breakthroughs in many areas of astronomy and astrophysics.

  10. Large methane reserves beneath Antarctica?

    NASA Astrophysics Data System (ADS)

    Wadham, J. L.; Tulaczyk, S. M.; Stibal, M.; Arndt, S.; Telling, J.; Lis, G.; Lawson, E. C.; Dubnick, A.; Tranter, M.; Sharp, M. J.; Anesio, A.

    2010-12-01

Once thought to be devoid of life, the Antarctic Ice Sheet is now known to be a dynamic reservoir of metabolically active microbial cells and organic carbon. Its potential to support the degradation of organic carbon to methane, however, has not yet been evaluated. Large marine sedimentary basins beneath the ice sheet (estimated to cover up to 50% by area and contain sedimentary sequences up to 3 km thick) remain thawed during glaciation. These basins are estimated to contain ~7000 Pg of organic carbon, assuming that sedimentary basins account for 1 and 2 M km2 of the West and East Antarctic Ice Sheets respectively, the organic carbon content of overridden marine sediments is 0.5 % and the mean sediment depth is 1 km. We predict that this carbon is microbially cycled to methane under anoxic conditions beneath the ice sheet. Laboratory experimental data are consistent with this and show that organic carbon overridden by glaciers and ice sheets produces methane under anoxic conditions, and at rates similar to those observed in sub-seafloor sediments. We numerically model the accumulation of methane in Antarctic sedimentary basins and show that sediment porewaters become over-saturated with methane over >1 Myr and that typical pressure/temperature conditions favour methane hydrate formation down to between ~500 m and ~1000 m in the sedimentary column. We calculate conservatively that a minimum of ~70 and ~360 PgC of releasable methane (clathrate + free gas) could be produced beneath the West and East Antarctic Ice Sheets over 3 and 30 Myr of glaciation respectively, which is of a similar order of magnitude to methane present as hydrate in Arctic permafrost. The stability of this releasable methane reserve depends sensitively upon in situ pressure conditions, and hence ice thickness. We show that only modest ice sheet retreat rates (700-2000 km2 a-1) are required to stimulate outgassing of releasable methane from Antarctic sedimentary basins at rates sufficient to
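The ~7000 Pg figure can be roughly reconstructed from the assumptions listed in the abstract. One input is missing: the abstract does not state a sediment dry bulk density, so the ~500 kg/m3 used below is our assumption, chosen as a plausible value for porous marine sediment:

```python
# Rough reconstruction of the ~7000 Pg organic-carbon estimate.
# Basin area, mean depth, and organic-carbon fraction are from the abstract;
# the dry bulk density is an ASSUMED value, not stated in the abstract.
basin_area_m2 = (1 + 2) * 1e6 * 1e6   # 1 + 2 million km^2 (West + East) in m^2
mean_depth_m = 1000                   # 1 km mean sediment depth
org_c_fraction = 0.005                # 0.5 % organic carbon
dry_bulk_density = 500                # kg/m^3 (assumed)

carbon_kg = basin_area_m2 * mean_depth_m * dry_bulk_density * org_c_fraction
carbon_pg = carbon_kg / 1e12          # 1 Pg = 1e12 kg
print(round(carbon_pg))               # ~7500, the same order as the quoted ~7000 Pg
```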

  11. The Large Millimeter Telescope (LMT)

    NASA Astrophysics Data System (ADS)

    Baars, J. W. M.; Carrasco, L.; Schloerb, F. P.

    1999-05-01

The University of Massachusetts at Amherst, through the FCRAO, and the Instituto Nacional de Astrofisica, Optica y Electronica (INAOE) in Puebla, Mexico, are collaborating in the design, construction and joint operation of the Large Millimeter Telescope (LMT). The LMT is a full aperture telescope of 50 m diameter for operation to a shortest wavelength of 1 mm. First generation facility instruments include a 32-channel spectroscopy receiver for the 85-115 GHz band and a 144-channel bolometer system at 250 GHz. A joint institute, the LMT Observatory, will operate the telescope for the astronomers from the participating institutes and outside observers. Commissioning of the LMT is scheduled to start in 2001. The LMT is expected to contribute in particular to the study of the Universe at high redshifts. Its size and southern location also make it a powerful member of the growing mm-wavelength VLBI activity. The LMT is located on Cerro la Negra in Central Mexico at 4600 m altitude and a latitude of 19 degrees. The site is 100 km east of Puebla. The opacity shows median tau-values of less than 0.15 at 230 GHz from Sep through May, good for operation to 300 GHz. Site preparation and installation of utilities is under way. Work on the telescope foundation will begin in Spring 1999, with steel assembly expected to commence in early 2000. The LMT is being designed by MAN Technologie. It is an exposed, alt-azimuth antenna with a wheel-on-track azimuth drive and double bull-gear elevation drive. An advanced servo-system will aid in achieving the pointing accuracy of 1''. A spacious receiver cabin behind the reflector allows the deployment of, and easy access to, several receiver systems. The reflector is a space-frame structure, supporting 130 reflector subframes of about 5x3 m2 which carry the reflector surface panels.
The subframes are supported on actuators to enable real-time correction of the reflector surface for deformations, caused by gravity, temperature gradients and

  12. The Very Large Ecological Array

    NASA Astrophysics Data System (ADS)

    Hamilton, M. P.; Dawson, T. E.; Thompson, S. E.

    2011-12-01

    Regional climatic change and variability is expected to alter the boundary conditions to which ecosystems and landscapes are subject. Unambiguously identifying how these changes alter the biophysics of ecosystems or the phenology or behavior of individual organisms, however, remains challenging due to the complexity and heterogeneity of real landscapes. One of the aims of the Very Large Ecological Array (VeLEA) - a landscape-scale distributed wireless environmental monitoring system under deployment at the University of California, Blue Oak Ranch Reserve (Mount Hamilton Range, Santa Clara County, California) - is to allow a sufficiently fine-resolution understanding of spatial and temporal variability in the landscape that such changes can be reliably quantified. The VeLEA is structured around two wireless mesh radio networks, with solar-powered nodes spaced by up to 2 miles. This allows widely distributed arrays of instrumentation to be deployed over hundreds to thousands of hectares. The first network supports ten weather stations (recording barometric pressure, temperature, humidity, wind, rainfall, total solar radiation and leaf wetness), along with sixty nodes measuring humidity and air temperature at 1m above ground. Future deployments will extend the network to include soil moisture, soil temperature, piezometric head and streamflow across the site. The second network supports an array of 10 networked cameras providing real-time viewing and time-lapse recording of animal behavior, vegetation phenology and aquatic variability. An important goal of the VeLEA project is to optimize the deployment of wireless nodes with respect to spatial and temporal variation at the site. Preliminary data obtained from the initial deployments are being used to characterize spatial and temporal variability across the site and to investigate mechanistic and statistical methods for interpolating and up-scaling that data. Observing and characterizing such spatio

  13. Large Alluvial Fans on Mars

    NASA Technical Reports Server (NTRS)

    Moore, Jeffrey M.; Howard, Alan D.

    2004-01-01

Several dozen distinct alluvial fans, 10 to greater than 40 km long downslope, are observed exclusively in highlands craters. Within a search region between 0 deg. and 30 deg. S, alluvial fan-containing craters were only found between 18 and 29 S, and they all occur within about plus or minus 1 km of the MOLA-defined Martian datum. Within the study area they are not randomly distributed but instead form three distinct clusters. Fans typically descend greater than 1 km from where they disgorge from their alcoves. Longitudinal profiles show that their surfaces are very slightly concave with a mean slope of 2 degrees. Many fans exhibit very long, narrow low-relief ridges radially oriented down-slope, often branching at their distal ends, suggestive of distributaries. Morphometric data for 31 fans were derived from MOLA data and compared with terrestrial fans with high-relief source areas, terrestrial low gradient alluvial ramps in inactive tectonic settings, and older Martian alluvial ramps along crater floors. The Martian alluvial fans generally fall on the same trends as the terrestrial alluvial fans, whereas the gentler Martian crater floor ramps are similar in gradient to the low relief terrestrial alluvial surfaces. For a given fan gradient, Martian alluvial fans generally have greater source basin relief than terrestrial fans in active tectonic settings. This suggests that the terrestrial source basins either yield coarser debris or have higher sediment concentrations than their Martian counterparts. Martian fans and Basin and Range fans have steeper gradients than the older Martian alluvial ramps and terrestrial low relief alluvial surfaces, which is consistent with a supply of coarse sediment. Martian fans are relatively large and of low gradient, similar to terrestrial fluvial fans rather than debris flow fans. However, gravity scaling uncertainties make the flow regime forming Martian fans uncertain. Martian fans, at least those in Holden crater, apparently

  14. Large space systems technology, 1981. [conferences

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1982-01-01

    A total systems approach including structures, analyses, controls, and antennas is presented as a cohesive, programmatic plan for large space systems. Specifically, program status, structures, materials, and analyses, and control of large space systems are addressed.

  15. Large Space Antenna Systems Technology, 1984

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1985-01-01

    Papers are presented which provide a comprehensive review of space missions requiring large antenna systems and of the status of key technologies required to enable these missions. Topic areas include mission applications for large space antenna systems, large space antenna structural systems, materials and structures technology, structural dynamics and control technology, electromagnetics technology, large space antenna systems and the space station, and flight test and evaluation.

  16. Large Devaluations and the Real Exchange Rate

    ERIC Educational Resources Information Center

    Burstein, Ariel; Eichenbaum, Martin; Rebelo, Sergio

    2005-01-01

    In this paper we argue that the primary force behind the large drop in real exchange rates that occurs after large devaluations is the slow adjustment in the prices of nontradable goods and services. Our empirical analysis uses data from five large devaluation episodes: Argentina (2002), Brazil (1999), Korea (1997), Mexico (1994), and Thailand…

  17. Large space systems technology, 1980, volume 1

    NASA Technical Reports Server (NTRS)

    Kopriver, F., III (Compiler)

    1981-01-01

The technological and developmental efforts in support of large space systems technology are described. Three major areas of interest are emphasized: (1) technology pertinent to large antenna systems; (2) technology related to large platform systems; and (3) activities that support both antenna and platform systems.

  18. 27 CFR 19.915 - Large plants.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Large plants. 19.915... OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Distilled Spirits For Fuel Use Permits § 19.915 Large plants. Any person wishing to establish a large plant shall make application for and obtain...

  19. Large variable conductance heat pipe. Transverse header

    NASA Technical Reports Server (NTRS)

    Edelstein, F.

    1975-01-01

    The characteristics of gas-loaded, variable conductance heat pipes (VCHP) are discussed. The difficulties involved in developing a large VCHP header are analyzed. The construction of the large capacity VCHP is described. A research project to eliminate some of the problems involved in large capacity VCHP operation is explained.

  20. Reading Materials in Large Type. Reference Circular.

    ERIC Educational Resources Information Center

    Ovenshire, Ruthann, Comp.

    Listed in the circular are approximately 32 commercial and volunteer producers of large type materials, approximately 50 large type books for reference and special needs, and five further sources of large type materials. Usually given for each alphabetically listed producer are the address, specialty (whether producer of specific categories or of…

  1. Large Space Systems Technology, Part 2, 1981

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1982-01-01

    Four major areas of interest are covered: technology pertinent to large antenna systems; technology related to the control of large space systems; basic technology concerning structures, materials, and analyses; and flight technology experiments. Large antenna systems and flight technology experiments are described. Design studies, structural testing results, and theoretical applications are presented with accompanying validation data. These research studies represent state-of-the art technology that is necessary for the development of large space systems. A total systems approach including structures, analyses, controls, and antennas is presented as a cohesive, programmatic plan for large space systems.

  2. Large fluctuations at the lasing threshold of solid- and liquid-state dye lasers

    NASA Astrophysics Data System (ADS)

    Basak, Supratim; Blanco, Alvaro; López, Cefe

    2016-08-01

Intensity fluctuations in lasers are commonly studied above threshold in some special configurations (especially when emission is fed back into the cavity or when two lasers are coupled) and related to their chaotic behaviour. Similar fluctuating instabilities are usually observed in random lasers, which are open systems with plenty of quasi-modes whose non-orthogonality enables them to exchange energy and provides the sort of loss mechanism whose interplay with pumping leads to replica symmetry breaking. The latter, however, had never been observed in plain cavity lasers where disorder is absent or not intentionally added. Here we show fluctuating lasing behaviour at the lasing threshold in both solid and liquid dye lasers. Above and below a narrow range around the threshold the spectral line-shape is well correlated with the pump energy. At the threshold such correlation disappears, and the system enters a regime where the emitted laser light fluctuates between narrow, intense peaks and broad, weak ones. The immense number of modes and the reduced resonator quality favour the coupling of modes and prepare the system so that replica symmetry breaking occurs without added disorder.

  3. Large fluctuations at the lasing threshold of solid- and liquid-state dye lasers

    PubMed Central

    Basak, Supratim; Blanco, Alvaro; López, Cefe

    2016-01-01

Intensity fluctuations in lasers are commonly studied above threshold in some special configurations (especially when emission is fed back into the cavity or when two lasers are coupled) and related to their chaotic behaviour. Similar fluctuating instabilities are usually observed in random lasers, which are open systems with plenty of quasi-modes whose non-orthogonality enables them to exchange energy and provides the sort of loss mechanism whose interplay with pumping leads to replica symmetry breaking. The latter, however, had never been observed in plain cavity lasers where disorder is absent or not intentionally added. Here we show fluctuating lasing behaviour at the lasing threshold in both solid and liquid dye lasers. Above and below a narrow range around the threshold the spectral line-shape is well correlated with the pump energy. At the threshold such correlation disappears, and the system enters a regime where the emitted laser light fluctuates between narrow, intense peaks and broad, weak ones. The immense number of modes and the reduced resonator quality favour the coupling of modes and prepare the system so that replica symmetry breaking occurs without added disorder. PMID:27558968

  6. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

The problems inherent to large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large-scale systems. A very similar classification is used, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is brought to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  7. Large fullerenes and metallofullerenes: Structure and stability

    SciTech Connect

    Achiba, Y.

    1993-12-31

Isolation and characterization of large fullerenes up to n=160 are described. The fullerenes were extracted from carbon soot and isolated by high-performance liquid chromatography. Preparative amounts of well-separated large fullerenes were characterized by UV/visible absorption, IR, {sup 13}C NMR in solution, and VUV photoelectron measurements. Isolation and characterization of isomers of some of the large fullerenes are also described. A tentative report on the isolation of metallofullerenes such as LaC82 is presented.

  8. Shape control of large space structures

    NASA Technical Reports Server (NTRS)

    Hagan, M. T.

    1982-01-01

    A survey has been conducted to determine the types of control strategies which have been proposed for controlling the vibrations in large space structures. From this survey several representative control strategies were singled out for detailed analyses. The application of these strategies to a simplified model of a large space structure has been simulated. These simulations demonstrate the implementation of the control algorithms and provide a basis for a preliminary comparison of their suitability for large space structure control.

  9. Fabrication of large ceramic electrolyte disks

    NASA Technical Reports Server (NTRS)

    Ring, S. A.

    1972-01-01

    Process for sintering compressed ceramic powders produces large ceramic disks for use as electrolytes in high-temperature electrolytic cells. Thin, strain-free uniformly dense disks as large as 30 cm squared have been fabricated by slicing ceramic slugs produced by this technique.

  10. Collaborative Working for Large Digitisation Projects

    ERIC Educational Resources Information Center

    Yeates, Robin; Guy, Damon

    2006-01-01

    Purpose: To explore the effectiveness of large-scale consortia for disseminating local heritage via the web. To describe the creation of a large geographically based cultural heritage consortium in the South East of England and management lessons resulting from a major web site digitisation project. To encourage the improved sharing of experience…

  11. Implementing Large Projects in Software Engineering Courses

    ERIC Educational Resources Information Center

    Coppit, David

    2006-01-01

    In software engineering education, large projects are widely recognized as a useful way of exposing students to the real-world difficulties of team software development. But large projects are difficult to put into practice. First, educators rarely have additional time to manage software projects. Second, classrooms have inherent limitations that…

  12. Meson dynamics in large-N limit

    SciTech Connect

    Tan, C.I.

    1983-01-01

    The large-N limit of QCD with matter fields present is considered in a Hamiltonian loop space approach. The semi-classical nature of the large-N limit is clarified where a valence approximation emerges naturally. A pseudospin algebra is introduced for handling fermions.

  13. The algebras of large N matrix mechanics

    SciTech Connect

    Halpern, M.B.; Schwartz, C.

    1999-09-16

    Extending early work, we formulate the large N matrix mechanics of general bosonic, fermionic and supersymmetric matrix models, including Matrix theory: The Hamiltonian framework of large N matrix mechanics provides a natural setting in which to study the algebras of the large N limit, including (reduced) Lie algebras, (reduced) supersymmetry algebras and free algebras. We find in particular a broad array of new free algebras which we call symmetric Cuntz algebras, interacting symmetric Cuntz algebras, symmetric Bose/Fermi/Cuntz algebras and symmetric Cuntz superalgebras, and we discuss the role of these algebras in solving the large N theory. Most important, the interacting Cuntz algebras are associated to a set of new (hidden!) local quantities which are generically conserved only at large N. A number of other new large N phenomena are also observed, including the intrinsic nonlocality of the (reduced) trace class operators of the theory and a closely related large N field identification phenomenon which is associated to another set (this time nonlocal) of new conserved quantities at large N.

  14. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A where structured means that a matrix-vector product w
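In practice ARPACK is usually reached through higher-level wrappers. A minimal sketch using SciPy's `eigsh`, which calls ARPACK's symmetric Lanczos driver; the matrix choice and sizes here are illustrative only:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh  # SciPy wrapper around ARPACK

# 1-D Dirichlet Laplacian: tridiagonal, sparse, n x n. ARPACK needs only
# matrix-vector products (plus a factorization in shift-invert mode).
n = 2000
L = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")

# Ask for the 4 smallest eigenvalues via shift-invert about sigma = 0.
vals, vecs = eigsh(L, k=4, sigma=0, which="LM")

# Analytic spectrum for comparison: 2 - 2*cos(j*pi/(n+1)), j = 1..n
exact = 2 - 2 * np.cos(np.arange(1, 5) * np.pi / (n + 1))
print(np.allclose(np.sort(vals), exact))
```

Shift-invert mode is the standard way to resolve the small end of the spectrum, where the directly transformed eigenvalues are well separated and Lanczos converges quickly.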

  15. 76 FR 17521 - Assessments, Large Bank Pricing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-30

    ... Register of February 25, 2011 (76 FR 10672), regarding Assessments, Large Bank Pricing. This correction... 17th Street, NW., Washington, DC 20429. SUPPLEMENTARY INFORMATION: In FR Doc. 2011-3086, appearing on... 327 RIN 3064-AD66 Assessments, Large Bank Pricing AGENCY: Federal Deposit Insurance Corporation...

  16. 75 FR 73983 - Assessments, Large Bank Pricing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-30

    ... Kapoor, Counsel, Legal Division, (202) 898-3960. Correction In proposed rule FR Doc. 2010-29138...; ] FEDERAL DEPOSIT INSURANCE CORPORATION 12 CFR Part 327 RIN 3064-AD66 Assessments, Large Bank Pricing AGENCY..., 2010, regarding Assessments, Large Bank Pricing. This correction clarifies that the comment period...

  17. LARGE AND GREAT RIVERS: NEW ASSESSMENT TOOLS

    EPA Science Inventory

    The Ecological Exposure Research Division has been conducting research to support the development of the next generation of bioassessment and monitoring tools for large and great rivers. Focus has largely been on the development of standardized protocols for the traditional indi...

  18. World atlas of large optical telescopes

    NASA Technical Reports Server (NTRS)

    Meszaros, S. P.

    1979-01-01

    By 1980 there will be approximately 100 large optical telescopes in the world with mirror or lens diameters of one meter (39 inches) and larger. This atlas gives information on these telescopes and shows their locations on continent-sized maps. Observatory locations considered suitable for the construction of future large telescopes are also shown.

  19. Scalar gain interpretation of large order filters

    NASA Technical Reports Server (NTRS)

    Mason, Paul A. C.; Mook, D. Joseph

    1993-01-01

    A technique is developed which demonstrates how to interpret a large fully-populated filter gain matrix as a set of scalar gains. The inverse problem is also solved, namely, how to develop a large-order filter gain matrix from a specified set of scalar gains. Examples are given to illustrate the method.

  20. Perception for a large deployable reflector telescope

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. M.; Swanson, P. N.; Meinel, A. B.; Meinel, M. P.

    1984-01-01

    Optical science and technology concepts for a large deployable reflector for far-infrared and submillimeter astronomy from above the earth's atmosphere are discussed. Requirements given at the Asilomar Conference are reviewed. The technical challenges of this large-aperture (about 20-meter) telescope, which will be diffraction limited in the infrared, are highlighted in a brief discussion of one particular configuration.

  1. Automating large-scale reactor systems

    SciTech Connect

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig.

  2. Measuring Leakage From Large, Complicated Machinery

    NASA Technical Reports Server (NTRS)

    Bottemiller, S.

    1987-01-01

Test chamber improvised from a large bag. Cumulative sizes of leaks in large, complicated machinery are measured with a relatively simple variation of the helium leak-checking technique. When used to check the Space Shuttle main engine, the new technique gave repeatable and correct results within 0.5 std in.3/min (1.4 x 10-7 std m3/s).

  3. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  4. Environmental effects and large space systems

    NASA Technical Reports Server (NTRS)

    Garrett, H. B.

    1981-01-01

When planning large-scale operations in space, environmental impact must be considered in addition to radiation, spacecraft charging, contamination, high power and size. Pollution of the atmosphere and space is caused by rocket effluents and by photoelectrons generated by sunlight falling on satellite surfaces; even light pollution may result (the SPS may reflect so much light as to be a nuisance to astronomers). Large (100 km2) structures also will absorb the high energy particles that impinge on them. Altogether, these effects may drastically alter the Earth's magnetosphere. It is not clear if these alterations will in any way affect the Earth's surface climate. Large structures will also generate large plasma wakes and waves which may cause interference with communications to the vehicle. A high energy microwave beam from the SPS will cause ionospheric turbulence, affecting UHF and VHF communications. Although none of these effects may ultimately prove critical, they must be considered in the design of large structures.

  5. Large Eddy Simulation of a Turbulent Jet

    NASA Technical Reports Server (NTRS)

    Webb, A. T.; Mansour, Nagi N.

    2001-01-01

    Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.

  6. Generically large nongaussianity in small multifield inflation

    SciTech Connect

    Bramante, Joseph

    2015-07-07

    If forthcoming measurements of cosmic photon polarization restrict the primordial tensor-to-scalar ratio to r<0.01, small field inflation will be a principal candidate for the origin of the universe. Here we show that small multifield inflation, without the hybrid mechanism, typically results in large squeezed nongaussianity. Small multifield potentials contain multiple flat field directions, often identified with the gauge invariant field directions in supersymmetric potentials. We find that unless these field directions have equal slopes, large nongaussianity arises. After identifying relevant differences between large and small two-field potentials, we demonstrate that the latter naturally fulfill the Byrnes-Choi-Hall large nongaussianity conditions. Computations of the primordial power spectrum, spectral index, and squeezed bispectrum, reveal that small two-field models which otherwise match observed primordial perturbations, produce excludably large nongaussianity if the inflatons’ field directions have unequal slopes.

  7. Testing Large Structures in the Field

    NASA Technical Reports Server (NTRS)

    James, George; Carne, Thomas G.

    2009-01-01

    Field testing large structures creates unique challenges such as limited choices for boundary conditions and the fact that natural excitation sources cannot be removed. Several critical developments in field testing of large structures are reviewed, including: step relaxation testing which has been developed into a useful technique to apply large forces to operational systems by careful windowing; the capability of large structures testing with free support conditions which has been expanded by implementing modeling of the support structure; natural excitation which has been developed as a viable approach to field testing; and the hybrid approach which has been developed to allow forces to be estimated in operating structures. These developments have increased the ability to extract information from large structures and are highlighted in this presentation.

  8. Large eddy simulations of compressible turbulent flows

    NASA Technical Reports Server (NTRS)

    Porter-Locklear, Freda

    1995-01-01

    An evaluation of existing models for Large Eddy Simulations (LES) of incompressible turbulent flows has been completed. LES is a computation in which the contribution of the large, energy-carrying structures to momentum and energy transfer is computed exactly, and only the effect of the smallest scales of turbulence is modeled. That is, the large eddies are computed and the smaller eddies are modeled. The dynamics of the largest eddies are believed to account for most of the sound generation and transport properties in a turbulent flow. LES analysis is based on the observation that pressure, velocity, temperature, and other variables are the sum of their large-scale and small-scale parts. For instance, the velocity u_i can be written as the sum of a large-scale part bar-u_i and a subgrid-scale (SGS) part u_i-prime. The governing equations for large eddies in compressible flows are obtained by filtering the continuity, momentum, and energy equations and recasting them in terms of Favre averages. The filtering operation retains only the large scales. The effects of the small scales enter the governing equations through the SGS stress tensor tau(ij) and the SGS heat flux q(i). The mathematical formulation of the Favre-averaged equations of motion for LES is complete.
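    In standard LES notation (a generic sketch of the formulation described above, not copied from the report), the decomposition and the Favre-filtered momentum equation read:

```latex
% Decomposition into resolved and subgrid parts, and the Favre average:
u_i = \bar{u}_i + u_i', \qquad
\tilde{f} \equiv \frac{\overline{\rho f}}{\bar{\rho}}
% Filtered momentum equation; the unclosed SGS stress \tau_{ij}
% carries the effect of the unresolved scales:
\frac{\partial \bar{\rho}\,\tilde{u}_i}{\partial t}
+ \frac{\partial \bar{\rho}\,\tilde{u}_i\tilde{u}_j}{\partial x_j}
= -\frac{\partial \bar{p}}{\partial x_i}
+ \frac{\partial \tilde{\sigma}_{ij}}{\partial x_j}
- \frac{\partial \tau_{ij}}{\partial x_j},
\qquad
\tau_{ij} \equiv \bar{\rho}\left(\widetilde{u_i u_j} - \tilde{u}_i\,\tilde{u}_j\right)
```

    The SGS heat flux q_i arises in the same way from filtering the energy equation.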

  9. Do large hiatal hernias affect esophageal peristalsis?

    PubMed Central

    Roman, Sabine; Kahrilas, Peter J; Kia, Leila; Luger, Daniel; Soper, Nathaniel; Pandolfino, John E

    2013-01-01

    Background & Aim Large hiatal hernias can be associated with a shortened or tortuous esophagus. We hypothesized that these anatomic changes may alter esophageal pressure topography (EPT) measurements made during high-resolution manometry (HRM). Our aim was to compare EPT measures of esophageal motility in patients with large hiatal hernias to those of patients without hernia. Methods Among 2000 consecutive clinical EPT, we identified 90 patients with large (>5 cm) hiatal hernias on endoscopy and at least 7 evaluable swallows on EPT. Within the same database a control group without hernia was selected. EPT was analyzed for lower esophageal sphincter (LES) pressure, Distal Contractile Integral (DCI), contraction amplitude, Contractile Front Velocity (CFV) and Distal Latency time (DL). Esophageal length was measured on EPT from the distal border of upper esophageal sphincter to the proximal border of the LES. EPT diagnosis was based on the Chicago Classification. Results The manometry catheter was coiled in the hernia and did not traverse the crural diaphragm in 44 patients (49%) with large hernia. Patients with large hernias had lower average LES pressures, lower DCI, slower CFV and shorter DL than patients without hernia. They also exhibited a shorter mean esophageal length. However, the distribution of peristaltic abnormalities was not different in patients with and without large hernia. Conclusions Patients with large hernias had an alteration of EPT measurements as a consequence of the associated shortened esophagus. However, the distribution of peristaltic disorders was unaffected by the presence of hernia. PMID:22508779

  10. Learning to build large structures in space

    NASA Technical Reports Server (NTRS)

    Hagler, T.; Patterson, H. G.; Nathan, C. A.

    1977-01-01

    The paper examines some of the key technologies and forms of construction know-how that will have to be developed and tested for eventual application to building large structures in space. Construction of a shuttle-tended space construction/demonstration platform would comprehensively demonstrate large structure technology, develop construction capability, and furnish a construction platform for a variety of operational large structures. Completion of this platform would lead to demonstrations of the Satellite Power System (SPS) concept, including microwave transmission, fabrication of 20-m-deep beams, conductor installation, rotary joint installation, and solar blanket installation.

  11. Is the universe homogeneous on large scale?

    NASA Astrophysics Data System (ADS)

    Zhu, Xingfen; Chu, Yaoquan

    Whether the distribution of matter in the universe is homogeneous or fractal on large scales has been vigorously debated in observational cosmology in recent years. Pietronero and his co-workers have strongly advocated that the fractal behaviour in the galaxy distribution extends to the largest scales observed (≈1000 h^-1 Mpc) with fractal dimension D ≈ 2. Most cosmologists who hold to the standard model, however, insist that the universe is homogeneous on large scales. The answer to whether the universe is homogeneous on large scales must await the results of the next generation of galaxy redshift surveys.

  12. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  13. Passive Cooling For Large Infrared Telescopes

    NASA Technical Reports Server (NTRS)

    Lin, Edward I.

    1993-01-01

    Conceptual passive-cooling technique enables very large infrared telescope in vacuum of outer space cooled to below 20 K without using cryogen. Telescope orbiting Earth at high altitude of around 100,000 km. Scheme also offers very small gradient of temperature across primary telescope reflector, so thermal distortions smaller; accuracy of surface figure of reflector significantly enhanced. Passive-cooling technique also applied to building of very large cryostats and to development of very large sun shields in traditional manner, and some elements of technique adapted for current small observatories.

  14. Very Large System Dynamics Models - Lessons Learned

    SciTech Connect

    Jacob J. Jacobson; Leonard Malczynski

    2008-10-01

    This paper provides lessons learned from developing several large system dynamics (SD) models. System dynamics modeling practice emphasizes the need to keep models small so that they are manageable and understandable. This practice is generally reasonable and prudent; however, there are times that large SD models are necessary. This paper outlines two large SD projects that were done at two Department of Energy National Laboratories, the Idaho National Laboratory and Sandia National Laboratories. This paper summarizes the models and then discusses some of the valuable lessons learned during these two modeling efforts.

  15. The Amateurs' Love Affair with Large Datasets

    NASA Astrophysics Data System (ADS)

    Price, Aaron; Jacoby, S. H.; Henden, A.

    2006-12-01

    Amateur astronomers are professionals in other areas. They bring expertise from such varied and technical careers as computer science, mathematics, engineering, and marketing. These skills, coupled with an enthusiasm for astronomy, can be used to help manage the large data sets coming online in the next decade. We will show specific examples where teams of amateurs have been involved in mining large, online data sets and have authored and published their own papers in peer-reviewed astronomical journals. Using the proposed LSST database as an example, we will outline a framework for involving amateurs in data analysis and education with large astronomical surveys.

  16. [Surgeon's strategy in forming large intestine anastomoses].

    PubMed

    Shestopalov, S S; Mikhaĭlova, S A; Ibatullin, R D; Bogdanov, A V; Komkov, A V; Dan'ko, N A

    2009-01-01

    The article reports experience with 103 patients in whom different types of large intestine anastomoses were formed. Primary operations for cancer of the rectum were performed in 76 patients and restorative operations in 27. The following techniques were used: manual formation of the large intestine anastomosis, stapled anastomoses using the AKA-2 and "ETHICON CDH" devices, and a double-apparatus method using "CONTOUR" and "ETHICON CDH". It was found that the use of stapling devices shortened the time needed to form the large intestine anastomosis and the operation as a whole. When forming large intestine anastomoses in the abdominal cavity, the manual method should be preferred. Formation of an anastomosis in the small pelvis is accompanied by technical problems and requires stapling devices. The method using the "CONTOUR" and "ETHICON CDH" devices decreases the number of postoperative complications and can extend the indications for sphincter-sparing operations.

  17. Large & Small: Exploring the Laws of Nature

    ERIC Educational Resources Information Center

    Creutz, E.

    1976-01-01

    Illustrates how both large entities (such as stars and galaxies) and small entities (such as fundamental particles) obey the same physical laws. Discusses quantum mechanics, Newton's laws, and general relativity. (MLH)

  18. Large Space Antenna Systems Technology, part 1

    NASA Technical Reports Server (NTRS)

    Lightner, E. B. (Compiler)

    1983-01-01

    A compilation of the unclassified papers presented at the NASA Conference on Large Space Antenna Systems Technology covers the following areas: systems, structures technology, control technology, electromagnetics, and space flight test and evaluation.

  19. Large Grain Superconducting RF Cavities at DESY

    SciTech Connect

    Singer, W.; Brinkmann, A.; Ermakov, A.; Iversen, J.; Kreps, G.; Matheisen, A.; Proch, D.; Reschke, D.; Singer, X.; Spiwek, M.; Wen, H.; Brokmeier, H. G.

    2007-08-09

    The DESY R and D program on cavities fabricated from large grain niobium explores the potential of this material for the production of approx. 1000 nine-cell cavities for the European XFEL. The program investigates basic material properties, comparing large grain material to standard sheet niobium, as well as fabrication and preparation aspects. Several single-cell cavities of TESLA shape have been fabricated from large grain niobium. A gradient up to 41 MV/m at Q0 = 1.4·10^10 (TB = 2 K) was measured after electropolishing. The first three large grain nine-cell cavities worldwide have been produced under contract of DESY with ACCEL Instruments Co. The first tests have shown that all three cavities reach an accelerating gradient up to 30 MV/m after BCP (Buffered Chemical Polishing) treatment, which exceeds the XFEL requirements for the RF test in the vertical cryostat.

  20. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  1. Teaching a Large Physics Class at Cornell.

    ERIC Educational Resources Information Center

    Orear, Jay

    1979-01-01

    A professor discusses the advantages and disadvantages of teaching a physics class in a large research-oriented university. Various innovative teaching techniques and the ways in which they benefit the students are presented. (SA)

  2. Large Numbers and Calculators: A Classroom Activity.

    ERIC Educational Resources Information Center

    Arcavi, Abraham; Hadas, Nurit

    1989-01-01

    Described is an activity demonstrating how a scientific calculator can be used in a mathematics classroom to introduce new content while studying a conventional topic. Examples of reading and writing large numbers, and reading hidden results are provided. (YP)

  3. Large, horizontal-axis wind turbines

    NASA Technical Reports Server (NTRS)

    Linscott, B. S.; Perkins, P.; Dennett, J. T.

    1984-01-01

    Development of the technology for safe, reliable, environmentally acceptable large wind turbines that have the potential to generate a significant amount of electricity at costs competitive with conventional electric generating systems is presented. In addition, these large wind turbines must be fully compatible with electric utility operations and interface requirements. There are several ongoing large wind system development projects and applied research efforts directed toward meeting the technology requirements for utility applications. Detailed information on these projects is provided. The Mod-O research facility and current applied research efforts in aerodynamics, structural dynamics and aeroelasticity, composite and hybrid composite materials, and multiple system interaction are described. A chronology of component research and technology development for large, horizontal axis wind turbines is presented. Wind characteristics, wind turbine economics, and the impact of wind turbines on the environment are reported. The need for continued wind turbine research and technology development is explored. Over 40 references are cited and a bibliography is included.

  4. A Computer Program for Clustering Large Matrices

    ERIC Educational Resources Information Center

    Koch, Valerie L.

    1976-01-01

    A Fortran V program, developed for the Univac 1100 Series computer, is described for clustering large matrices of interassociations between objects, up to 1000 x 1000 and larger, into hierarchical structures. (RC)

  5. Large Meteor Tracked over Northeast Alabama

    NASA Video Gallery

    On the evening of May 18, NASA all-sky meteor cameras located at NASA’s Marshall Space Flight Center and at the Walker County Science Center near Chickamauga, Ga. tracked the entry of a large meteo...

  6. Have Large Dams Altered Extreme Precipitation Patterns?

    NASA Astrophysics Data System (ADS)

    Hossain, Faisal; Jeyachandran, Indumathi; Pielke, Roger

    2009-12-01

    Dams and their impounded waters are among the most common civil infrastructures, with a long heritage of modern design and operations experience. In particular, large dams, defined by the International Commission on Large Dams (ICOLD) as having a height greater than 15 meters from the foundation and holding a reservoir volume of more than 3 million cubic meters, have the potential to vastly transform local climate, landscapes, regional economics, and urbanization patterns. In the United States alone, about 75,000 dams are capable of storing a volume of water equaling almost 1 year's mean runoff of the nation [Graf, 1999]. The World Commission on Dams (WCD) reports that at least 45,000 large dams have been built worldwide since the 1930s. These sheer numbers raise the question of the extent to which large dams and their impounded waters alter patterns that would have been pervasive had the dams not been built.
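    The ICOLD size criterion quoted above reduces to a one-line predicate; the function name is ours, and the thresholds are exactly those stated in the text:

```python
def is_large_dam(height_m: float, reservoir_volume_m3: float) -> bool:
    """ICOLD 'large dam' criterion as quoted above: height greater than
    15 meters from the foundation and a reservoir volume of more than
    3 million cubic meters."""
    return height_m > 15.0 and reservoir_volume_m3 > 3e6

# e.g. a 20 m dam impounding 5 million cubic meters qualifies,
# while a 12 m dam with the same reservoir does not
print(is_large_dam(20.0, 5e6))   # True
print(is_large_dam(12.0, 5e6))   # False
```
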

  7. The large bowel--a supplementary rumen?

    PubMed

    Argenzio, R A; Stevens, C E

    1984-01-01

    The rumen and the mammalian large intestine are similar in many respects. Microbial protein appears to be synthesized and degraded in the digesta of both organs in a comparable manner. The VFA end-products of carbohydrate fermentation are produced in similar concentrations. Digesta pH is maintained with buffer added by the saliva or ileal fluid, HCO3 released into the lumen and rapid absorption of the organic acids. VFA are absorbed at equivalent rates by rumen epithelium and large intestinal mucosa. Over-production of VFA produces similar adverse effects. There is a considerable amount of species variation in the relative length and volume as well as the extent of sacculation of the large intestine. The caecum is the primary site for retention of digesta and microbial fermentation in the large intestine of rabbits, rodents and a few other species. However, the proximal colon is the major site of retention and fermentation in most mammals. Absorptions of Na and VFA appear to account for absorption of most of the water removed during passage of digesta through the large intestine. A relatively slow rate of Na absorption and release of HCO3 appears to provide the fluid and buffering capacity needed for efficient microbial digestion in the rumen and in the large intestine of some species. A more rapid absorption of Na by the large intestine of other species would aid in the conservation of Na and water. The many similarities between the large intestine and the rumen suggest that further comparison can provide additional information on both the function and diseases of these two organs. The rumen has proved to be accessible to a variety of procedures useful for the study of microbial digestive processes and its epithelium has provided a non-glandular tissue for studies of inorganic ion transport as well as the transport and metabolism of VFA. Comparative studies of the large intestine also can provide a better understanding of the functions and malfunctions of the

  8. Large diameter carbon-boron fiber

    NASA Technical Reports Server (NTRS)

    Veltri, R. D.; Jacob, B. A.; Galasso, F. S.

    1975-01-01

    Investigations concerned with the development of large-diameter carbon fibers are considered, taking into account the employment of vapor deposition techniques. In the experiments a carbon monofilament substrate is used together with reacting gases consisting of combinations of hydrogen, methane, and boron trichloride. It is found that the described approach can be used to obtain a large-diameter carbon filament containing boron. The filament has reasonable strength and modulus properties.

  9. Adaptive Machining Of Large, Somewhat Flexible Parts

    NASA Technical Reports Server (NTRS)

    Gutow, David; Wagner, Garrett; Gilbert, Jeffrey L.; Deily, David

    1996-01-01

    Adaptive machining is method of machining large, somewhat flexible workpieces to close tolerances. Devised for machining precise weld lands on aft skirts of rocket nozzles, but underlying concept generally applicable to precise machining of any of large variety of workpieces deformed by thermal, gravitational, and/or machining forces. For example, in principle, method used to bore precise hole on unanchored end of long cantilever beam.

  10. Radio astronomy with the Very Large Array.

    PubMed

    Hjellming, R M; Bignell, R C

    1982-06-18

    The construction of the Very Large Array of radio telescopes has been completed, and this new research instrument is now being used to make radio images of astronomical objects with a resolution comparable to or better than that of ground-based optical telescopes. The role of the Very Large Array in current and future research is discussed both in principle and in terms of a sample of observing projects.

  11. Zone generator for Large Space Telescope technology

    NASA Technical Reports Server (NTRS)

    Erickson, K. E.

    1974-01-01

    A concept is presented for monitoring the optical adjustment and performance of a Large Space Telescope which consists of a 1.2m diameter turntable with a laser stylus to operate at speeds up to 30 rpm. The focus of the laser stylus is under closed loop control. A technique for scribing zones of suitable depth, width, and uniformity applicable to large telescope mirrors is also reported.

  12. Large Format Detector Arrays for Astrophysics

    NASA Technical Reports Server (NTRS)

    Moseley, Harvey

    2006-01-01

    Improvements in detector design and advances in fabrication techniques have resulted in devices that can reach fundamental sensitivity limits in many cases. Many pressing astrophysical questions require large arrays of such sensitive detectors. I will describe the state of far-infrared through millimeter detector development at NASA/GSFC, the design and production of large format arrays, and the initial deployment of these powerful new tools.

  13. 26 CFR 54.4980H-2 - Applicable large employer and applicable large employer member.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... employer/controlled group). (i) Facts. For all of 2015 and 2016, Corporation Z owns 100 percent of all... employees during 2015, Corporations Z, Y, and X together are an applicable large employer for 2016. Each of Corporations Z, Y and X is an applicable large employer member for 2016. Example 2 (Applicable large...

  14. Large Payload Ground Transportation and Test Considerations

    NASA Technical Reports Server (NTRS)

    Rucker, Michelle A.

    2016-01-01

    Many spacecraft concepts under consideration by the National Aeronautics and Space Administration’s (NASA’s) Evolvable Mars Campaign take advantage of a Space Launch System payload shroud that may be 8 to 10 meters in diameter. Large payloads can theoretically save cost by reducing the number of launches needed--but only if it is possible to build, test, and transport a large payload to the launch site in the first place. Analysis performed previously for the Altair project identified several transportation and test issues with an 8.973 meters diameter payload. Although the entire Constellation Program—including Altair—has since been canceled, these issues serve as important lessons learned for spacecraft designers and program managers considering large payloads for future programs. A transportation feasibility study found that, even broken up into an Ascent and Descent Module, the Altair spacecraft would not fit inside available aircraft. Ground transportation of such large payloads over extended distances is not generally permitted, so overland transportation alone would not be an option. Limited ground transportation to the nearest waterway may be possible, but water transportation could take as long as 67 days per production unit, depending on point of origin and acceptance test facility; transportation from the western United States would require transit through the Panama Canal to access the Kennedy Space Center launch site. Large payloads also pose acceptance test and ground processing challenges. Although propulsion, mechanical vibration, and reverberant acoustic test facilities at NASA’s Plum Brook Station have been designed to accommodate large spacecraft, special handling and test work-arounds may be necessary, which could increase cost, schedule, and technical risk. Once at the launch site, there are no facilities currently capable of accommodating the combination of large payload size and hazardous processing such as hypergolic fuels

  15. Large Payload Transportation and Test Considerations

    NASA Technical Reports Server (NTRS)

    Rucker, Michelle A.; Pope, James C.

    2011-01-01

    Ironically, the limiting factor to a national heavy lift strategy may not be the rocket technology needed to throw a heavy payload, but rather the terrestrial infrastructure - roads, bridges, airframes, and buildings - necessary to transport, acceptance test, and process large spacecraft. Failure to carefully consider how large spacecraft are designed, and where they are manufactured, tested, or launched, could result in unforeseen cost to modify/develop infrastructure, or incur additional risk due to increased handling or elimination of key verifications. During test and verification planning for the Altair project, a number of transportation and test issues related to the large payload diameter were identified. Although the entire Constellation Program - including Altair - was canceled in the 2011 NASA budget, issues identified by the Altair project serve as important lessons learned for future payloads that may be developed to support national "heavy lift" strategies. A feasibility study performed by the Constellation Ground Operations (CxGO) project found that neither the Altair Ascent nor Descent Stage would fit inside available transportation aircraft. Ground transportation of a payload this large over extended distances is generally not permitted by most states, so overland transportation alone would not have been an option. Limited ground transportation to the nearest waterway may be permitted, but water transportation could take as long as 66 days per production unit, depending on point of origin and acceptance test facility; transportation from the western United States would require transit through the Panama Canal to access the Kennedy Space Center launch site. Large payloads also pose acceptance test and ground processing challenges. 
Although propulsion, mechanical vibration, and reverberant acoustic test facilities at NASA's Plum Brook Station have been designed to accommodate large spacecraft, special handling and test work-arounds may be necessary

  16. Megascours: the morphodynamics of large river confluences

    NASA Astrophysics Data System (ADS)

    Dixon, Simon; Sambrook Smith, Greg; Nicholas, Andrew; Best, Jim; Bull, Jon; Vardy, Mark; Goodbred, Steve; Haque Sarker, Maminul

    2015-04-01

    River confluences are widely acknowledged as crucial controlling influences upon upstream and downstream morphology, and thus landscape evolution. Despite their importance, very little is known about their evolution and morphodynamics, and there is a consensus in the literature that confluences represent fixed, nodal points in the fluvial network. Confluences have been shown to generate substantial bed scours around five times greater than mean depth. Previous research on the Ganges-Jamuna junction has shown that large river confluences can be highly mobile, potentially 'combing' bed scours across a large area, although the extent to which this is representative of large confluences in general is unknown. Understanding the migration of confluences and associated scours is important for multiple applications, including: designing civil engineering infrastructure (e.g. bridges, cable-laying, pipelines), sequence stratigraphic interpretation for reconstruction of past environmental and sea level change, and the hydrocarbon industry, where it is crucial to discriminate autocyclic confluence scours from widespread allocyclic surfaces. Here we present a wide-ranging global review of large river confluence planforms based on analysis of Landsat imagery from 1972 through to 2014. This demonstrates that there is an array of confluence morphodynamic types: from freely migrating confluences such as the Ganges-Jamuna, through confluences migrating on decadal timescales, to fixed confluences. Along with data from recent geophysical field studies in the Ganges-Brahmaputra-Meghna basin, we propose a conceptual model of large river confluence types and hypothesise how these influence morphodynamics and preservation of 'megascours' in the rock record. This conceptual model has implications for sequence stratigraphic models and the correct identification of surfaces related to past sea level change. 
We quantify the abundance of mobile confluence types by classifying all large confluences

  17. Metrology of Large Parts. Chapter 5

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2012-01-01

    As discussed in the first chapter of this book, there are many different methods to measure a part using optical technology. Chapter 2 discussed the use of machine vision to measure macroscopic features such as length and position, which was extended to the use of interferometry as a linear measurement tool in chapter 3, and laser or other trackers to find the relation of key points on large parts in chapter 4. This chapter looks at measuring large parts to optical tolerances in the sub-micron range using interferometry, ranging, and optical tools discussed in the previous chapters. The purpose of this chapter is not to discuss specific metrology tools (such as interferometers or gauges), but to describe a systems engineering approach to testing large parts. Issues such as material warpage and temperature drifts that may be insignificant when measuring a part to micron levels under a microscope, as will be discussed in later chapters, can prove to be very important when making the same measurement over a larger part. In this chapter, we will define a set of guiding principles for successfully overcoming these challenges and illustrate the application of these principles with real world examples. While these examples are drawn from specific large optical testing applications, they inform the problems associated with testing any large part to optical tolerances. Manufacturing today relies on micrometer level part performance. Fields such as energy and transportation are demanding higher tolerances to provide increased efficiencies and fuel savings. By looking at how the optics industry approaches sub-micrometer metrology, one can gain a better understanding of the metrology challenges for any larger part specified to micrometer tolerances. Testing large parts, whether optical components or precision structures, to optical tolerances is just like testing small parts, only harder. 
Just as for small parts, a metrologist tests large parts and optics

  18. Development of large aperture composite adaptive optics

    NASA Astrophysics Data System (ADS)

    Kmetik, Viliam; Vitovec, Bohumil; Jiran, Lukas; Nemcova, Sarka; Zicha, Josef; Inneman, Adolf; Mikulickova, Lenka; Pavlica, Richard

    2015-01-01

    Large aperture composite adaptive optics for laser applications is investigated in cooperation between the Institute of Plasma Physics, the Department of Instrumentation and Control Engineering FME CTU, and 5M Ltd. We are exploring the possibility of producing a large-size high-power-laser deformable mirror using a lightweight bimorph-actuated structure with a composite core. In order to produce a sufficiently large operational free aperture we are developing new technologies for production of the flexible core, the bimorph actuator, and the deformable mirror reflector. A full simulation of the deformable-mirror structure was prepared and validated by complex testing. The deformable-mirror actuation and the response of the complicated structure are investigated for accurate control of the adaptive optics. An original adaptive optics control system and a bimorph deformable mirror driver were developed. Tests of material samples, components, and sub-assemblies were completed. A subscale 120 mm bimorph deformable mirror prototype was designed, fabricated, and thoroughly tested. A large-size 300 mm composite-core bimorph deformable mirror was simulated and optimized, and fabrication of a prototype is under way. A measurement and testing facility is being modified to accommodate large-size optics.

  19. Large Fluvial Fans and Exploration for Hydrocarbons

    NASA Technical Reports Server (NTRS)

    Wilkinson, Murray Justin

    2005-01-01

    A report discusses the geological phenomena known, variously, as modern large (or large modern) fluvial fans or large continental fans, from a perspective of exploring for hydrocarbons. These fans are partial cones of river sediment that spread out to radii of 100 km or more. Heretofore, they have not been much recognized in the geological literature probably because they are difficult to see from the ground. They can, however, be seen in photographs taken by astronauts and on other remotely sensed imagery. Among the topics discussed in the report is the need for research to understand what seems to be an association among fluvial fans, alluvial fans, and hydrocarbon deposits. Included in the report is an abstract that summarizes the global distribution of large modern fluvial fans and a proposal to use that distribution as a guide to understanding paleo-fluvial reservoir systems where oil and gas have formed. Also included is an abstract that summarizes what a continuing mapping project has thus far revealed about the characteristics of large fans that have been found in a variety of geological environments.

  20. Astronomy Outreach for Large and Unique Audiences

    NASA Astrophysics Data System (ADS)

    Lubowich, D.; Sparks, R. T.; Pompea, S. M.; Kendall, J. S.; Dugan, C.

    2013-04-01

    In this session, we discuss different approaches to reaching large audiences. In addition to star parties and astronomy events, the audiences for some of the events include music concerts or festivals, sick children and their families, minority communities, American Indian reservations, and tourist sites such as the National Mall. The goal is to bring science directly to the public—to people who attend astronomy events and to people who do not come to star parties, science museums, or science festivals. These programs allow the entire community to participate in astronomy activities to enhance the public appreciation of science. These programs attract large, enthusiastic crowds, often with young children participating in family learning experiences. The public will become more informed, educated, and inspired about astronomy and will also be provided with information that will allow them to continue to learn after the outreach activity. Large and unique audiences often share common problems, and their solutions and the lessons learned will be presented. Interaction with the participants in this session will provide important community feedback used to improve astronomy outreach for large and unique audiences. New ways to expand astronomy outreach to new large audiences will be discussed.

  1. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.
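    The ensemble-averaged dynamic procedure lends itself to a compact sketch. The toy below is illustrative only (not the authors' code): it forms a Germano-style least-squares model coefficient at each grid point by averaging the contracted numerator and denominator over ensemble members instead of over homogeneous spatial directions; `L` and `M` are scalar surrogates for the contracted Leonard and model tensors.

```python
# Illustrative ensemble-averaged dynamic coefficient: the spatial average
# of the usual dynamic procedure is replaced by an average over ensemble
# realizations, so the coefficient stays local in fully inhomogeneous flows.

def ensemble_dynamic_coefficient(L, M):
    """L[e][i], M[e][i]: contracted Leonard and model tensors (toy scalar
    surrogates) for ensemble member e at grid point i."""
    n_pts = len(L[0])
    coeff = []
    for i in range(n_pts):
        num = sum(L[e][i] * M[e][i] for e in range(len(L)))
        den = sum(M[e][i] * M[e][i] for e in range(len(M)))
        coeff.append(num / den if den else 0.0)
    return coeff

# Toy data: 3 ensemble members, 4 grid points.
L = [[1.0, 2.0, 0.5, 0.0],
     [1.2, 1.8, 0.4, 0.1],
     [0.8, 2.2, 0.6, -0.1]]
M = [[2.0, 2.0, 1.0, 1.0],
     [2.0, 2.0, 1.0, 1.0],
     [2.0, 2.0, 1.0, 1.0]]
c = ensemble_dynamic_coefficient(L, M)
```

Because only ensemble sums enter, the coefficient requires no spatial homogeneity, mirroring the key property claimed in the abstract.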

  2. Large cell lymphoma stage IA/IAE.

    PubMed

    Nussbaum, H; Koo, C; Kagan, A R; Rao, A; Ryoo, M C

    1991-06-01

    Fifty-two patients with large cell lymphoma stage IA/IAE were retrospectively reviewed for the purpose of evaluation of treatment methods. All pathology slides were reviewed by one pathologist with a special interest in lymphoma. There were 24 patients at stage IA and 28 at stage IAE. Twenty-six patients were treated with radiation alone (10 IA, 16 IAE) and 26 patients were treated with radiation therapy and chemotherapy (13 IA, 13 IAE). Patients treated with radiation therapy alone and those with combined modality therapy (CMT) have similar survival curves with p values greater than 0.05. Recurrence patterns are similar for either method of treatment. While the majority of the literature recommends CMT for large cell lymphoma, our study of 52 patients reveals no difference in survival or recurrence patterns for these patients by either method of treatment. We recommend radiation therapy alone for stage IA/IAE large cell lymphoma, with chemotherapy held in reserve for failure.

  3. Visualizing large geospatial datasets with KML Regions

    NASA Astrophysics Data System (ADS)

    Ilyushchenko, S.; Wheeler, D.; Ummel, K.; Hammer, D.; Kraft, R.

    2008-12-01

    Regions are a powerful KML feature that makes it possible to view very large datasets in Google Earth without sacrificing performance. Data is loaded and drawn only when it falls within the user's view and occupies a certain portion of the screen. Using Regions, it is possible to supply separate levels of detail for the data, so that fine details are loaded only when the data fills a portion of the screen that is large enough for the details to be visible. It becomes easy to create compelling interactive presentations of geospatial datasets that are meaningful at both large and small scales. We present two example datasets: worldwide past, present, and future carbon dioxide emissions by power plants, provided by Carbon Monitoring for Action, Center for Global Development (http://carma.org), as well as 2007 US bridge safety ratings from the Federal Highway Administration (http://www.fhwa.dot.gov/BRIDGE/nbi/ascii.cfm).
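    A minimal sketch of the mechanism (hypothetical coordinates and pixel threshold): a KML Region pairs a bounding box with a level-of-detail rule, and content guarded by the Region is only fetched and drawn once the box occupies at least `minLodPixels` on screen.

```python
# Build a bare KML <Region> element with Python's stdlib ElementTree.
# Coordinates and the 128-pixel threshold are illustrative values only.
import xml.etree.ElementTree as ET

def make_region(north, south, east, west, min_pixels=128):
    region = ET.Element("Region")
    box = ET.SubElement(region, "LatLonAltBox")
    for tag, val in [("north", north), ("south", south),
                     ("east", east), ("west", west)]:
        ET.SubElement(box, tag).text = str(val)
    lod = ET.SubElement(region, "Lod")
    # Region activates once it covers ~min_pixels on screen; -1 = no upper bound.
    ET.SubElement(lod, "minLodPixels").text = str(min_pixels)
    ET.SubElement(lod, "maxLodPixels").text = "-1"
    return region

region = make_region(37.5, 37.0, -121.5, -122.0)
kml_fragment = ET.tostring(region, encoding="unicode")
```

Nesting such Regions at successively tighter boxes and higher thresholds yields the multi-level-of-detail behavior described above.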

  4. Large-scale motions in the universe

    SciTech Connect

    Rubin, V.C.; Coyne, G.V.

    1988-01-01

    The present conference on the large-scale motions of the universe discusses topics on the problems of two-dimensional and three-dimensional structures, large-scale velocity fields, the motion of the local group, small-scale microwave fluctuations, ab initio and phenomenological theories, and properties of galaxies at high and low Z. Attention is given to the Pisces-Perseus supercluster, large-scale structure and motion traced by galaxy clusters, distances to galaxies in the field, the origin of the local flow of galaxies, the peculiar velocity field predicted by the distribution of IRAS galaxies, the effects of reionization on microwave background anisotropies, the theoretical implications of cosmological dipoles, and n-body simulations of a universe dominated by cold dark matter.

  5. Atom interferometry with large momentum transfer

    NASA Astrophysics Data System (ADS)

    Lan, Shau-Yu; Kuan, Pei-Chen; Estey, Brian; Müller, Holger

    2011-05-01

    The sensitivity of light-pulse atom interferometers can be greatly improved by large momentum transfer (LMT) beam splitters and long interrogation times. A large momentum-space separation Δp between the two interferometric arms results in an increased phase shift proportional to Δp or even (Δp)2, and therefore leads to superior tools for precision measurements. ``BBB'' beam splitters, using high-order Bragg diffraction combined with Bloch oscillations, have already been demonstrated and are scalable, as their momentum transfer is not limited by the available laser power. By running an additional conjugate interferometer at the same time, noise common to both interferometers can be eliminated. We will present our work aiming at further improvements, which would allow applications requiring extremely large enclosed areas, such as tests of the Einstein equivalence principle, measurements of fundamental constants, or searches for new gravitational effects.
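    The scalings referred to above can be made explicit. As a hedged illustration using standard atom-interferometer formulas (not taken from this abstract): an n-photon LMT beam splitter multiplies the effective wave vector, so a Mach-Zehnder gravity phase grows linearly with Δp, while the recoil phase exploited by conjugate interferometer pairs grows quadratically:

```latex
% Standard LMT scalings (schematic): k_eff = 2nk for an n-photon splitter.
\Delta\phi_{\mathrm{grav}} = k_{\mathrm{eff}}\, g\, T^{2},
\qquad
k_{\mathrm{eff}} = 2 n k,
\qquad
\Delta\phi_{\mathrm{rec}} \propto n^{2}\,\omega_{r}\,T .
```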

  7. First-Principle Calculations of Large Fullerenes.

    PubMed

    Calaminici, Patrizia; Geudtner, Gerald; Köster, Andreas M

    2009-01-13

    State-of-the-art density functional theory calculations have been performed for the large fullerenes C180, C240, C320, and C540 using the linear combination of Gaussian-type orbitals density functional theory (LCGTO-DFT) approach. All-electron basis sets were employed for the calculations. All fullerene structures were fully optimized without symmetry constraints. An analysis of the obtained structures as well as a study of the evolution of the bond lengths and calculated binding energies are presented. The fullerene results are compared to diamond and graphene, which were calculated at the same level of theory. This represents the first systematic study of these large fullerenes based on non-symmetry-adapted first-principle calculations, and it demonstrates the capability of DFT calculations for energy and structure computations of large-scale structures without any symmetry constraint.

  8. NASA technology for large space antennas

    NASA Technical Reports Server (NTRS)

    Russell, R. A.; Campbell, T. G.; Freeland, R. E.

    1979-01-01

    Technology developed by NASA in conjunction with industry for potential large, deployable space antennas with applications in communication, radio astronomy and earth observation is reviewed. Concepts for deployable antennas that have been developed to the point of detail design are summarized, including the advanced sunflower precision antenna, the radial rib antenna, the maypole (hoop/column) antenna and the parabolic erectable truss antenna. The assessment of state-of-the-art deployable antenna technology is discussed, and the approach taken by the NASA Large Space Systems Technology (LSST) Program to the development of technology for large space antenna systems is outlined. Finally, the further development of the wrap-rib antenna and the maypole (hoop/column) concept, which meet mission model requirements, to satisfy LSST size and frequency requirements is discussed.

  9. Large bulk Micromegas detectors for TPC applications

    NASA Astrophysics Data System (ADS)

    Anvar, S.; Baron, P.; Boyer, M.; Beucher, J.; Calvet, D.; Colas, P.; De La Broise, X.; Delagnes, E.; Delbart, A.; Druillole, F.; Emery, S.; Giganti, C.; Giomataris, I.; Mazzucato, E.; Monmarthe, E.; Nizery, F.; Pierre, F.; Ritou, J.-L.; Sarrat, A.; Zito, M.; Catanesi, M. G.; Radicioni, E.; De Oliveira, R.; Blondel, A.; Di Marco, M.; Ferrere, D.; Perrin, E.; Ravonel, M.; Jover, G.; Lux, T.; Rodriguez, A. Y.; Sanchez, F.; Cervera, A.; Hansen, C.; Monfregola, L.

    2009-04-01

    A large volume TPC will be used in the near future in a variety of experiments including T2K. The bulk Micromegas detector for this TPC is built using a novel production technique particularly suited for compact, thin and robust low mass detectors. The capability to pave a large surface with a simple mounting solution and small dead space is of particular interest for these applications. We have built several large bulk Micromegas detectors (36×34 cm2) and we have tested one in the former HARP field cage with a magnetic field. Prototype cards of the T2K front-end electronics, based on the AFTER ASIC chip, have been used in this TPC test for the first time. Cosmic ray data have been acquired in a variety of experimental conditions. Good detector performance, space-point resolution, and energy-loss measurement have been achieved.

  10. Bulk micromegas detectors for large TPC applications

    NASA Astrophysics Data System (ADS)

    Bouchez, J.; Burke, D. R.; Cavata, Ch.; Colas, P.; De La Broise, X.; Delbart, A.; Giganon, A.; Giomataris, I.; Graffin, P.; Mols, J.-Ph.; Pierre, F.; Ritou, J.-L.; Sarrat, A.; Virique, E.; Zito, M.; Radicioni, E.; De Oliveira, R.; Dumarchez, J.; Abgrall, N.; Bene, P.; Blondel, A.; Cervera, A.; Ferrere, D.; Maschiocchi, F.; Perrin, E.; Richeux, J.-P.; Schroeter, R.; Jover, G.; Lux, T.; Rodriguez, A. Y.; Sanchez, F.

    2007-05-01

    A large volume TPC will be used in the near future in a variety of experiments including T2K. The bulk Micromegas detector for this TPC is built using a novel production technique particularly suited for compact and robust low mass detectors. The capability to pave a large surface with a simple mounting solution and small dead space between modules is of particular interest for these applications. We have built several large bulk Micromegas detectors (27×26 cm2) and we have tested them in the former HARP field cage setup with a magnetic field. Cosmic ray data have been acquired in a variety of experimental conditions. Good detector performances and space point resolution have been achieved.

  11. Public health impact of large airports.

    PubMed

    Passchier, W; Knottnerus, A; Albering, H; Walda, I

    2000-01-01

    Large airports with the related infrastructure, businesses and industrial activities affect the health of the population living, travelling and working in the surroundings of or at the airport. The employment and contributions to the economy from the airport and related operations are expected to have a beneficial effect, which, however, is difficult to quantify. More pertinent data are available on the largely negative health effects of environmental factors, such as air and soil pollution, noise, accident risk, and landscape changes. Information on the concurrent and cumulative impact of these factors is lacking, but is of primary relevance for public health policy. A committee of the Health Council of The Netherlands recently reviewed the data on the health impact of large airports. It was concluded that, generally, integrated health assessments are not available. Such assessments, as part of sustainable mobility policy, should accompany the further development of the global aviation system.

  12. Challenges in large scale distributed computing: bioinformatics.

    SciTech Connect

    Disz, T.; Kubal, M.; Olson, R.; Overbeek, R.; Stevens, R.; Mathematics and Computer Science; Univ. of Chicago; The Fellowship for the Interpretation of Genomes

    2005-01-01

    The amount of genomic data available for study is increasing at a rate similar to that of Moore's law. This deluge of data is challenging bioinformaticians to develop newer, faster and better algorithms for analysis and examination of this data. The growing availability of large scale computing grids coupled with high-performance networking is challenging computer scientists to develop better, faster methods of exploiting parallelism in these biological computations and deploying them across computing grids. In this paper, we describe two computations that are required to be run frequently and which require large amounts of computing resource to complete in a reasonable time. The data for these computations are very large and the sequential computational time can exceed thousands of hours. We show the importance and relevance of these computations, the nature of the data and parallelism and we show how we are meeting the challenge of efficiently distributing and managing these computations in the SEED project.

  13. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left- and right-singular vectors) for sparse matrices arising from two practical applications, information retrieval and seismic reflection tomography, are emphasized. The target architectures for implementations are the CRAY-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques, in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications, which require approximate pseudo-inverses of large sparse Jacobian matrices.
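    The flavor of such iterative singular-triplet solvers can be conveyed with a toy: the sketch below is plain power iteration on A^T A, far simpler than the Lanczos and subspace methods the report studies, but it shows how a dominant singular triplet is extracted from a sparse matrix without ever forming it densely.

```python
# Toy dominant-singular-triplet solver (illustrative, not the paper's code):
# power iteration on A^T A, with A stored sparsely as {(row, col): value}.
import math

def dominant_triplet(A, n_rows, n_cols, iters=200):
    v = [1.0 / math.sqrt(n_cols)] * n_cols          # right-vector guess
    for _ in range(iters):
        u = [0.0] * n_rows                          # u = A v
        for (i, j), a in A.items():
            u[i] += a * v[j]
        w = [0.0] * n_cols                          # w = A^T u
        for (i, j), a in A.items():
            w[j] += a * u[i]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    u = [0.0] * n_rows
    for (i, j), a in A.items():
        u[i] += a * v[j]
    sigma = math.sqrt(sum(x * x for x in u))        # ||A v|| = sigma
    return sigma, [x / sigma for x in u], v

# 3x3 diagonal test matrix with singular values 3, 2, 1.
A = {(0, 0): 3.0, (1, 1): 2.0, (2, 2): 1.0}
sigma, u, v = dominant_triplet(A, 3, 3)
```

Lanczos and subspace iteration accelerate exactly this matrix-vector-product structure while recovering several triplets at once.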

  14. Large Dynamic Range Beam Profile Measurements

    SciTech Connect

    A. P. Freyberger

    2005-06-01

    Large dynamic range (peak/noise > 10^5) beam profile measurements are routinely performed in the Hall-B beamline at Jefferson Lab. These measurements are made with a 1 to 10 nA electron beam current at energies between 1 and 6 GeV. The electron beam scatters off a thin W or Fe wire, and the scattered particle/shower is detected via scintillation or Cerenkov light several meters downstream of the wire. This report describes results on increasing the dynamic range by using multiple wires of varying diameters. Profile measurements with this large dynamic range are of use for accelerators with very high stored energy (e.g., energy-recovering linacs [ERLs]), where small beam loss represents a significant amount of beam power. Results on measuring the transverse profile with large dynamic range during the CEBAF energy recovery experiment are also presented.

  15. Large Deformations of a Soft Porous Material

    NASA Astrophysics Data System (ADS)

    MacMinn, Christopher W.; Dufresne, Eric R.; Wettlaufer, John S.

    2016-04-01

    Compressing a porous material will decrease the volume of the pore space, driving fluid out. Similarly, injecting fluid into a porous material can expand the pore space, distorting the solid skeleton. This poromechanical coupling has applications ranging from cell and tissue mechanics to geomechanics and hydrogeology. The classical theory of linear poroelasticity captures this coupling by combining Darcy's law with Terzaghi's effective stress and linear elasticity in a linearized kinematic framework. Linear poroelasticity is a good model for very small deformations, but it becomes increasingly inappropriate for moderate to large deformations, which are common in the context of phenomena such as swelling and damage, and for soft materials such as gels and tissues. The well-known theory of large-deformation poroelasticity combines Darcy's law with Terzaghi's effective stress and nonlinear elasticity in a rigorous kinematic framework. This theory has been used extensively in biomechanics to model large elastic deformations in soft tissues and in geomechanics to model large elastoplastic deformations in soils. Here, we first provide an overview and discussion of this theory with an emphasis on the physics of poromechanical coupling. We present the large-deformation theory in an Eulerian framework to minimize the mathematical complexity, and we show how this nonlinear theory simplifies to linear poroelasticity under the assumption of small strain. We then compare the predictions of linear poroelasticity with those of large-deformation poroelasticity in the context of two uniaxial model problems: fluid outflow driven by an applied mechanical load (the consolidation problem) and compression driven by a steady fluid throughflow. We explore the steady and dynamical errors associated with the linear model in both situations, as well as the impact of introducing a deformation-dependent permeability. We show that the error in linear poroelasticity is due primarily to kinematic
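    The constitutive ingredients named above have standard forms; as a hedged sketch (sign conventions vary between references):

```latex
% Darcy's law, Terzaghi's effective stress (tension-positive convention),
% and the linear-elastic closure used in linear poroelasticity:
\mathbf{q} = -\frac{k}{\mu}\,\nabla p ,
\qquad
\boldsymbol{\sigma}' = \boldsymbol{\sigma} + p\,\mathbf{I},
\qquad
\boldsymbol{\sigma}' = 2G\,\boldsymbol{\varepsilon}
  + \lambda\,(\nabla\!\cdot\!\mathbf{u})\,\mathbf{I}.
```

The large-deformation theory keeps the first two ingredients but replaces the linear-elastic closure and the linearized kinematics with finite-strain counterparts.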

  16. Large-scale nanophotonic phased array.

    PubMed

    Sun, Jie; Timurdogan, Erman; Yaacobi, Ami; Hosseini, Ehsan Shah; Watts, Michael R

    2013-01-10

    Electromagnetic phased arrays at radio frequencies are well known and have enabled applications ranging from communications to radar, broadcasting and astronomy. The ability to generate arbitrary radiation patterns with large-scale phased arrays has long been pursued. Although it is extremely expensive and cumbersome to deploy large-scale radiofrequency phased arrays, optical phased arrays have a unique advantage in that the much shorter optical wavelength holds promise for large-scale integration. However, the short optical wavelength also imposes stringent requirements on fabrication. As a consequence, although optical phased arrays have been studied with various platforms and recently with chip-scale nanophotonics, all of the demonstrations so far are restricted to one-dimensional or small-scale two-dimensional arrays. Here we report the demonstration of a large-scale two-dimensional nanophotonic phased array (NPA), in which 64 × 64 (4,096) optical nanoantennas are densely integrated on a silicon chip within a footprint of 576 μm × 576 μm with all of the nanoantennas precisely balanced in power and aligned in phase to generate a designed, sophisticated radiation pattern in the far field. We also show that active phase tunability can be realized in the proposed NPA by demonstrating dynamic beam steering and shaping with an 8 × 8 array. This work demonstrates that a robust design, together with state-of-the-art complementary metal-oxide-semiconductor technology, allows large-scale NPAs to be implemented on compact and inexpensive nanophotonic chips. In turn, this enables arbitrary radiation pattern generation using NPAs and therefore extends the functionalities of phased arrays beyond conventional beam focusing and steering, opening up possibilities for large-scale deployment in applications such as communication, laser detection and ranging, three-dimensional holography and biomedical sciences, to name just a few.
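    The beam-steering result can be illustrated with the textbook one-dimensional array factor (a generic sketch, not the paper's 64 × 64 design): applying a linear phase ramp phi_n = -n·k·d·sin(theta0) across the elements moves the main lobe to theta0.

```python
# Normalized array factor of a uniform linear phased array steered to
# theta0 (hypothetical parameters; the 2-D NPA generalizes this idea).
import cmath, math

def array_factor(n_elem, d_over_lambda, theta0_deg, theta_deg):
    k_d = 2 * math.pi * d_over_lambda              # k*d in radians
    s0 = math.sin(math.radians(theta0_deg))
    s = math.sin(math.radians(theta_deg))
    af = sum(cmath.exp(1j * n * k_d * (s - s0)) for n in range(n_elem))
    return abs(af) / n_elem                        # normalized magnitude

# Steer a 64-element, half-wavelength-pitch array to 10 degrees.
peak = array_factor(64, 0.5, 10.0, 10.0)           # on the steered beam
off = array_factor(64, 0.5, 10.0, 25.0)            # away from it
```

Dynamic beam steering, as demonstrated with the 8 × 8 array, amounts to updating the per-element phases phi_n in real time.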

  17. Large muon electric dipole moment from flavor?

    SciTech Connect

    Hiller, Gudrun; Huitu, Katri; Rueppell, Timo; Laamanen, Jari

    2010-11-01

    We study the prospects and opportunities of a large muon electric dipole moment (EDM) of the order of 10^-24 to 10^-22 e cm. We investigate how natural such a value is within the general minimal supersymmetric extension of the standard model with CP violation from lepton flavor violation, in view of the experimental constraints. In models with hybrid gauge-gravity-mediated supersymmetry breaking, a large muon EDM is indicative of the structure of flavor breaking at the Planck scale and points towards a high messenger scale.

  18. Timing signatures of large scale solar eruptions

    NASA Astrophysics Data System (ADS)

    Balasubramaniam, K. S.; Hock-Mysliwiec, Rachel; Henry, Timothy; Kirk, Michael S.

    2016-05-01

    We examine the timing signatures of large solar eruptions resulting in flares, CMEs, and Solar Energetic Particle events. We probe solar active regions from the chromosphere through the corona, using data from space- and ground-based observations, including ISOON, SDO, GONG, and GOES. Our studies include a number of flares and CMEs of mostly the M- and X-strengths as categorized by GOES. We find that the chromospheric signatures of these large eruptions occur 5-30 minutes in advance of coronal high-temperature signatures. These timing measurements are then used as inputs to models to reconstruct the eruptive nature of these systems and to explore their utility in forecasts.

  19. Knowledge Discovery in Large Data Sets

    SciTech Connect

    Simas, Tiago; Silva, Gabriel; Miranda, Bruno; Ribeiro, Rita

    2008-12-05

    In this work we briefly address the problem of unsupervised classification on large datasets of around 100,000,000 objects. These are variable objects, which constitute around 10% of the 1,000,000,000 astronomical objects that will be collected by the GAIA/ESA mission. We tested unsupervised classification algorithms on known datasets such as the OGLE and Hipparcos catalogs. Moreover, we are building several templates to represent the main classes of variable objects, as well as new classes, in order to build a synthetic dataset of this dimension. In the future we will run the GAIA satellite scanning law on these templates to obtain a testable large dataset.
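    The abstract does not name its algorithms; as a generic stand-in for unsupervised grouping of variable-object features (periods, amplitudes, and the like), here is a minimal one-dimensional k-means sketch:

```python
# Minimal 1-D k-means (illustrative stand-in, not the authors' method):
# alternate nearest-center assignment and center recomputation.
def kmeans_1d(xs, centers, iters=50):
    for _ in range(iters):
        groups = [[] for _ in centers]
        for x in xs:                      # assign each point to nearest center
            j = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            groups[j].append(x)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers

# Two well-separated feature clusters around 1.0 and 10.0.
data = [0.9, 1.0, 1.1, 9.8, 10.0, 10.2]
centers = kmeans_1d(data, [0.0, 5.0])
```

At the scale of 10^8 objects, the same assign/update structure is what must be distributed, which is where the template-based synthetic datasets become useful for validation.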

  20. Large Scale Shape Optimization for Accelerator Cavities

    SciTech Connect

    Akcelik, Volkan; Lee, Lie-Quan; Li, Zenghai; Ng, Cho; Xiao, Li-Ling; Ko, Kwok; /SLAC

    2011-12-06

    We present a shape optimization method for designing accelerator cavities with large scale computations. The objective is to find the best accelerator cavity shape with the desired spectral response, such as with the specified frequencies of resonant modes, field profiles, and external Q values. The forward problem is the large scale Maxwell equation in the frequency domain. The design parameters are the CAD parameters defining the cavity shape. We develop scalable algorithms with a discrete adjoint approach and use the quasi-Newton method to solve the nonlinear optimization problem. Two realistic accelerator cavity design examples are presented.
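    The adjoint-based gradient named above can be written schematically (symbols are illustrative, not the authors' notation): with p the CAD shape parameters and E the discrete field solution of the frequency-domain Maxwell problem r(E, p) = 0, one adjoint solve per objective yields the full design gradient:

```latex
% Schematic PDE-constrained design problem and discrete adjoint gradient:
\min_{p}\; J\bigl(E(p), p\bigr)
\quad \text{s.t.} \quad r(E, p) = 0,
\qquad
\left(\frac{\partial r}{\partial E}\right)^{\!\mathsf T} w
  = \frac{\partial J}{\partial E},
\qquad
\frac{dJ}{dp} = \frac{\partial J}{\partial p}
  - w^{\mathsf T}\,\frac{\partial r}{\partial p}.
```

The cost of the gradient is thus independent of the number of CAD parameters, which is what makes quasi-Newton optimization tractable at large scale.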

  1. Design and construction of large capacitor banks

    SciTech Connect

    Whitham, K.; Gritton, D.G.; Holloway, R.W.; Merritt, B.T.

    1983-01-01

    Over the past 12 years, the Laser Program at LLNL has actively pursued laser fusion, using a series of large, solid-state lasers to develop target data leading to reactor designs using the concept of inertial confinement fusion. These lasers are all linear chains of flashlamp driven, Nd-doped glass amplifiers with a master oscillator at the front end. Techniques have been developed during this time to scale the lasers to an arbitrarily large size. A table shows the series of lasers and their parameters that have been developed to date.

  2. Sclerotherapy for large hydrocoeles in Nigeria.

    PubMed

    Onu, P E

    2000-07-01

    Sclerotherapy with tetracycline hydrochloride was used to treat 99 patients with large hydrocoeles (range 300-1500 ml). The mean age of these patients was 52 years. In 55.5% of the patients one treatment was adequate. Two treatments were required in 22%; three in 10%; four in 3%; and five in 7% of the patients. In two patients sclerotherapy failed. Complications were minimal. Only 15% of the patients complained of severe pain. The overall success rate was 98%. Tetracycline sclerotherapy for large hydrocoeles is effective, safe and economical and is preferred for older patients who are at risk from anaesthetic complications.

  3. [Large vessels vasculopathy in systemic sclerosis].

    PubMed

    Tejera Segura, Beatriz; Ferraz-Amaro, Iván

    2015-12-01

    Vasculopathy in systemic sclerosis is a severe, in many cases irreversible, manifestation that can lead to amputation. While the classical clinical manifestations of the disease have to do with the involvement of microcirculation, proximal vessels of upper and lower limbs can also be affected. This involvement of large vessels may be related to systemic sclerosis, vasculitis or atherosclerotic, and the differential diagnosis is not easy. To conduct a proper and early diagnosis, it is essential to start prompt appropriate treatment. In this review, we examine the involvement of large vessels in scleroderma, an understudied manifestation with important prognostic and therapeutic implications. PMID:25726305

  4. Indian LSSC (Large Space Simulation Chamber) facility

    NASA Technical Reports Server (NTRS)

    Brar, A. S.; Prasadarao, V. S.; Gambhir, R. D.; Chandramouli, M.

    1988-01-01

    The Indian Space Agency has undertaken a major project to acquire in-house capability for thermal and vacuum testing of large satellites. This Large Space Simulation Chamber (LSSC) facility will be located in Bangalore and is to be operational in 1989. The facility is capable of providing 4 meter diameter solar simulation with provision to expand to 4.5 meter diameter at a later date. With such provisions as controlled variations of shroud temperatures and availability of infrared equipment as alternative sources of thermal radiation, this facility will be amongst the finest anywhere. The major design concept and major aspects of the LSSC facility are presented here.

  5. Large natural geophysical events: planetary planning

    SciTech Connect

    Knox, J.B.; Smith, J.V.

    1984-09-01

    Geological and geophysical data suggest that, during the evolution of the earth and its species, there have been many mass extinctions due to large impacts from comets and large asteroids, and major volcanic events. Today, technology has developed to the stage where we can begin to consider protective measures for the planet. Evidence of the ecological disruption and frequency of these major events is presented. It is most critical to develop surveillance and warning systems that provide sufficient lead times so that appropriate interventions can be designed. The long-term research undergirding these warning systems, their implementation, and proof testing is rich in opportunities for collaboration for peace.

  6. Large Terrain Modeling and Visualization for Planets

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Cameron, Jonathan; Lim, Christopher

    2011-01-01

    Physics-based simulations are actively used in the design, testing, and operations phases of surface and near-surface planetary space missions. One of the challenges in real-time simulations is the ability to handle large multi-resolution terrain data sets within models as well as for visualization. In this paper, we describe special techniques that we have developed for visualization, paging, and data storage for dealing with these large data sets. The visualization technique uses a real-time GPU-based continuous level-of-detail method that delivers multiple-frames-per-second performance even for planetary-scale terrain model sizes.
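    A common way to realize such continuous level-of-detail selection (a hypothetical sketch, not the authors' GPU implementation) is to pick a tile's resolution level from viewer distance so that screen-space error stays roughly constant; the pager then streams tiles at that level.

```python
# Hypothetical distance-based LOD selection: far tiles get coarser (higher)
# levels; tau is an illustrative target screen-space-error ratio.
import math

def lod_level(distance, tile_size, max_level=10, tau=2.0):
    if distance <= 0:
        return 0                                   # at the viewer: finest level
    level = math.log2(max(distance * tau / tile_size, 1.0))
    return min(max_level, int(level))

near = lod_level(10.0, 100.0)      # close to the viewer -> finest level
far = lod_level(100000.0, 100.0)   # far away -> coarsest level
```

A real pager would hysterese these transitions and prefetch neighboring tiles to avoid popping, but the level choice itself reduces to a formula of this kind.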

  7. Method and apparatus for extruding large honeycombs

    DOEpatents

    Kragle, Harry A.; Lambert, David W.; Lipp, G. Daniel

    1996-09-03

    Extrusion die apparatus and an extrusion method for extruding large-cross-section honeycomb structures from plasticized ceramic batch materials are described, the apparatus comprising a die having a support rod connected to its central portion, the support rod being anchored to support means upstream of the die. The support rod and support means act to limit die distortion during extrusion, reducing die strain and stress to levels permitting large honeycomb extrusion without die failure. Dies of optimal thickness are disclosed which reduce the maximum stresses exerted on the die during extrusion.

  8. Tensor methods for large, sparse unconstrained optimization

    SciTech Connect

    Bouaricha, A.

    1996-11-01

    Tensor methods for unconstrained optimization were first introduced by Schnabel and Chow [SIAM J. Optimization, 1 (1991), pp. 293-315], who describe these methods for small to moderate size problems. This paper extends these methods to large, sparse unconstrained optimization problems. This requires an entirely new way of solving the tensor model that makes the methods suitable for solving large, sparse optimization problems efficiently. We present test results for sets of problems where the Hessian at the minimizer is nonsingular and where it is singular. These results show that tensor methods are significantly more efficient and more reliable than standard methods based on Newton's method.
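    For reference, the tensor model of Schnabel and Chow augments Newton's local quadratic with low-rank third- and fourth-order terms chosen to interpolate information from past iterates (a schematic form only; see the cited paper for the exact construction):

```latex
% Schematic tensor model: Newton's quadratic plus low-rank
% third- and fourth-order correction terms T_k, V_k.
m_T(x_k + d) = f(x_k) + \nabla f(x_k)^{\mathsf T} d
  + \tfrac{1}{2}\, d^{\mathsf T} \nabla^{2} f(x_k)\, d
  + \tfrac{1}{6}\, T_k d^{3} + \tfrac{1}{24}\, V_k d^{4}.
```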

  9. Production of a large, quiescent, magnetized plasma

    NASA Technical Reports Server (NTRS)

    Landt, D. L.; Ajmera, R. C.

    1976-01-01

    An experimental device is described which produces a large homogeneous quiescent magnetized plasma. In this device, the plasma is created in an evacuated brass cylinder by ionizing collisions between electrons emitted from a large-diameter electron gun and argon atoms in the chamber. Typical experimentally measured values of the electron temperature and density are presented which were obtained with a glass-insulated planar Langmuir probe. It is noted that the present device facilitates the study of phenomena such as waves and diffusion in magnetized plasmas.

  10. Structure of large dsDNA viruses

    PubMed Central

    Klose, Thomas; Rossmann, Michael G.

    2015-01-01

    Nucleocytoplasmic large dsDNA viruses (NCLDVs) encompass an ever-increasing group of large eukaryotic viruses, infecting a wide variety of organisms. The set of core genes shared by all these viruses includes a major capsid protein with a double jelly-roll fold forming an icosahedral capsid, which surrounds a double layer membrane that contains the viral genome. Furthermore, some of these viruses, such as the members of the Mimiviridae and Phycodnaviridae, have a unique vertex that is used during infection to transport DNA into the host. PMID:25003382

  11. Large volume flow-through scintillating detector

    DOEpatents

    Gritzo, Russ E.; Fowler, Malcolm M.

    1995-01-01

    A large volume flow-through radiation detector for use in large air-flow situations such as incinerator stacks or building air systems comprises a plurality of flat plates made of a scintillating material arranged parallel to the air flow. Each scintillating plate has an attached light guide which transfers light generated inside the plate to an associated photomultiplier tube. The outputs of the photomultiplier tubes are connected to electronics which can record any radiation and provide an alarm if appropriate for the application.

  12. The Cosmology Large Angular Scale Surveyor (CLASS)

    NASA Astrophysics Data System (ADS)

    Eimer, Joseph; Ali, A.; Amiri, M.; Appel, J. W.; Araujo, D.; Bennett, C. L.; Boone, F.; Chan, M.; Cho, H.; Chuss, D. T.; Colazo, F.; Crowe, E.; Denis, K.; Dünner, R.; Essinger-Hileman, T.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G. F.; Huang, C.; Irwin, K.; Jones, G.; Karakla, J.; Kogut, A. J.; Larson, D.; Limon, M.; Lowry, L.; Marriage, T.; Mehrle, N.; Miller, A. D.; Miller, N.; Moseley, S. H.; Novak, G.; Reintsema, C.; Rostem, K.; Stevenson, T.; Towner, D.; U-Yen, K.; Wagner, E.; Watts, D.; Wollack, E.; Xu, Z.; Zeng, L.

    2014-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is an array of telescopes designed to search for the signature of inflation in the polarization of the Cosmic Microwave Background (CMB). By combining the strategy of targeting large scales (>2 deg) with novel front-end polarization modulation and novel detectors at multiple frequencies, CLASS will pioneer a new frontier in ground-based CMB polarization surveys. In this talk, I give an overview of the CLASS instrument, survey, and outlook on setting important new limits on the energy scale of inflation.

  13. Lattice study of large Nc QCD

    NASA Astrophysics Data System (ADS)

    DeGrand, Thomas; Liu, Yuzhi

    2016-08-01

    We present a lattice simulation study of large Nc regularities of meson and baryon spectroscopy in SU(Nc) gauge theory with two flavors of dynamical fundamental-representation fermions. Systems investigated include Nc = 2, 3, 4, and 5, over a range of fermion masses parametrized by a squared pseudoscalar-to-vector meson mass ratio between about 0.2 and 0.7. Good agreement with large Nc scaling is observed in the static potential, in meson masses and decay constants, and in baryon spectroscopy.

  14. Pions in Large N Quantum Chromodynamics

    SciTech Connect

    Weinberg, Steven

    2010-12-31

    An effective field theory of quarks, gluons, and pions, with the number N of colors treated as large, is proposed as a basis for calculations of hadronic phenomena at moderate energies. The qualitative consequences of the large N limit are similar though not identical to those in pure quantum chromodynamics, but because constituent quark masses appear in the effective Lagrangian, the 't Hooft coupling in the effective theory need not be strong at moderate energies. To leading order in 1/N the effective theory is renormalizable, with only a finite number of terms in the Lagrangian.

  15. Fermi Observations of the Large Magellanic Cloud

    NASA Astrophysics Data System (ADS)

    Knödlseder, J.

    2010-05-01

    We report on observations of the Large Magellanic Cloud (LMC) with the Fermi Gamma-Ray Space Telescope. The LMC is clearly detected with the Large Area Telescope (LAT) and for the first time the emission is spatially well resolved in gamma-rays. Our observations reveal that the bulk of the gamma-ray emission arises from the 30 Doradus region. We discuss this result in light of the massive star populations that are hosted in this area and address implications for cosmic ray physics. We conclude by exploring the scientific potential of the ongoing Fermi observations on the study of high-energy phenomena in massive stars.

  16. Ground test experiment for large space structures

    NASA Technical Reports Server (NTRS)

    Tollison, D. K.; Waites, H. B.

    1985-01-01

    In recent years a new body of control theory has been developed for the design of control systems for Large Space Structures (LSS). The problems of testing this theory on LSS hardware are aggravated by the expense and risk of actual in orbit tests. Ground tests on large space structures can provide a proving ground for candidate control systems, but such tests require a unique facility for their execution. The current development of such a facility at the NASA Marshall Space Flight Center (MSFC) is the subject of this report.

  17. Boost covariant gluon distributions in large nuclei

    NASA Astrophysics Data System (ADS)

    McLerran, Larry; Venugopalan, Raju

    1998-04-01

    It has been shown recently that there exist analytical solutions of the Yang-Mills equations for non-Abelian Weizsäcker-Williams fields which describe the distribution of gluons in large nuclei at small x. These solutions however depend on the color charge distribution at large rapidities. We here construct a model of the color charge distribution of partons in the fragmentation region and use it to compute the boost covariant momentum distributions of wee gluons. The phenomenological applications of our results are discussed.

  18. Large eddy simulation in the ocean

    NASA Astrophysics Data System (ADS)

    Scotti, Alberto

    2010-12-01

    Large eddy simulation (LES) is a relative newcomer to oceanography. In this review, both applications of traditional LES to oceanic flows and new oceanic LES still in an early stage of development are discussed. The survey covers LES applied to boundary layer flows, traditionally an area where LES has provided considerable insight into the physics of the flow, as well as more innovative applications, where new SGS closure schemes need to be developed. The merging of LES with large-scale models is also briefly reviewed.

  19. Large Horizontal-Axis Wind Turbines

    NASA Technical Reports Server (NTRS)

    Thresher, R. W. (Editor)

    1982-01-01

    The proceedings of a workshop held in Cleveland, July 28-30, 1981 are described. The workshop emphasized recent experience in building and testing large propeller-type wind turbines, expanding upon the proceedings of three previous DOE/NASA workshops at which design and analysis topics were considered. A total of 41 papers were presented on the following subjects: current and advanced large wind turbine systems, rotor blade design and manufacture, electric utility activities, research and supporting technology, meteorological characteristics for design and operation, and wind resources assessments for siting.

  20. Black holes at the Large Hadron Collider.

    PubMed

    Dimopoulos, S; Landsberg, G

    2001-10-15

    If the scale of quantum gravity is near a TeV, the CERN Large Hadron Collider will be producing one black hole (BH) about every second. The decays of the BHs into final states with prompt, hard photons, electrons, or muons provide a clean signature with low background. The correlation between the BH mass and its temperature, deduced from the energy spectrum of the decay products, can test Hawking's evaporation law and determine the number of large new dimensions and the scale of quantum gravity.
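    As a hedged aside on the mass-temperature correlation mentioned above: for a black hole of horizon radius $r_s$ in a scenario with $n$ large extra dimensions, the standard semiclassical relations (not quoted from this abstract) are:

    ```latex
    T_H = \frac{n+1}{4\pi r_s}, \qquad
    r_s \sim \frac{1}{M_{\mathrm{Pl}}}\left(\frac{M_{BH}}{M_{\mathrm{Pl}}}\right)^{\frac{1}{n+1}}
    ```

    so measuring $T_H$ as a function of $M_{BH}$ from the decay spectra constrains both $n$ and the fundamental Planck scale $M_{\mathrm{Pl}}$.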

  1. Construction and assembly of large space structures

    NASA Technical Reports Server (NTRS)

    Mar, J. W.; Miller, R. H.; Bowden, M. L.

    1980-01-01

    Three aspects of the construction and assembly of large space structures, namely transportation costs, human productivity in space and the source of materials (lunar vs terrestrial), are considered. Studies on human productivity have been so encouraging that the cost of human labor is now regarded as much less important than transportation costs. It is pointed out that these costs, although high, are extremely demand-sensitive. Even with high demand, however, the construction of several large systems would warrant the use of lunar materials and space manufacturing. The importance of further research is stressed in order to establish the optimum tradeoff between automation and manual assembly.

  2. Deflection of large near-earth objects

    SciTech Connect

    Canavan, G.H.

    1999-01-11

    The Earth is periodically hit by near Earth objects (NEOs) ranging in size from dust to mountains. The small ones are a useful source of information, but those larger than about 1 km can cause global damage. The requirements for the deflection of NEOs with significant material strength are known reasonably well; however, the strength of large NEOs is not known, so those requirements may not apply. Meteor impacts on the Earth's atmosphere give some information on strength as a function of object size and composition. This information is used here to show that large, weak objects could also be deflected efficiently, if addressed properly.

  3. Formal Verification of Large Software Systems

    NASA Technical Reports Server (NTRS)

    Yin, Xiang; Knight, John

    2010-01-01

    We introduce a scalable proof structure to facilitate formal verification of large software systems. In our approach, we mechanically synthesize an abstract specification from the software implementation, match its static operational structure to that of the original specification, and organize the proof as the conjunction of a series of lemmas about the specification structure. By setting up a different lemma for each distinct element and proving each lemma independently, we obtain the important benefit that the proof scales easily for large systems. We present details of the approach and an illustration of its application on a challenge problem from the security domain.

  4. Sensitivity analysis for large-scale problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  5. Analysis of large soil samples for actinides

    DOEpatents

    Maxwell, III; Sherrod L.

    2009-03-24

    A method of analyzing relatively large soil samples for actinides employs a separation process that includes cerium fluoride precipitation, which removes the soil matrix and precipitates plutonium, americium, and curium with cerium and hydrofluoric acid, followed by separation of these actinides using chromatography cartridges.

  6. Control problems in very large accelerators

    NASA Astrophysics Data System (ADS)

    Crowley-Milling, M. C.

    1985-06-01

    There is no fundamental difference of kind in the control requirements between a small and a large accelerator since they are built of the same types of components, which individually have similar control inputs and outputs. The main difference is one of scale; the large machine has many more components of each type, and the distances involved are much greater. Both of these factors must be taken into account in determining the optimum way of carrying out the control functions. Small machines should use standard equipment and software for control as much as possible, as special developments for small quantities cannot normally be justified if all costs are taken into account. On the other hand, the very great number of devices needed for a large machine means that, if special developments can result in simplification, they may make possible an appreciable reduction in the control equipment costs. It is the purpose of this report to look at the special control problems of large accelerators, which the author shall arbitrarily define as those with a length of circumference in excess of 10 km, and point out where special developments, or the adoption of developments from outside the accelerator control field, can be of assistance in minimizing the cost of the control system. Most of the first part of this report was presented as a paper to the 1985 Particle Accelerator Conference. It has now been extended to include a discussion on the special case of the controls for the SSC.

  7. Linking Large-Scale Reading Assessments: Comment

    ERIC Educational Resources Information Center

    Hanushek, Eric A.

    2016-01-01

    E. A. Hanushek points out in this commentary that applied researchers in education have only recently begun to appreciate the value of international assessments, even though there are now 50 years of experience with these. Until recently, these assessments have been stand-alone surveys that have not been linked, and analysis has largely focused on…

  8. Unification and large-scale structure.

    PubMed Central

    Laing, R A

    1995-01-01

    The hypothesis of relativistic flow on parsec scales, coupled with the symmetrical (and therefore subrelativistic) outer structure of extended radio sources, requires that jets decelerate on scales observable with the Very Large Array. The consequences of this idea for the appearances of FRI and FRII radio sources are explored. PMID:11607609

  9. Visualization of large elongated DNA molecules.

    PubMed

    Lee, Jinyong; Kim, Yongkyun; Lee, Seonghyun; Jo, Kyubong

    2015-09-01

    Long and linear DNA molecules are the mainstream single-molecule analytes for a variety of biochemical analyses within microfluidic devices, including functionalized surfaces and nanostructures. However, for biochemical analysis, large DNA molecules have to be unraveled, elongated, and visualized to obtain biochemical and genomic information. To date, elongated DNA molecules have been exploited in the development of a number of genome analysis systems as well as for the study of polymer physics, owing to the advantage of directly visualizing single DNA molecules. Moreover, each single DNA molecule provides individual information, which makes it useful for stochastic event analysis; numerous studies of enzymatic random motions have therefore been performed on large elongated DNA molecules. In this review, we first introduce mechanisms for elongating DNA molecules using microfluidics and nanostructures. Second, we discuss how elongated DNA molecules have been utilized to obtain biochemical and genomic information by direct visualization. Finally, we review the approaches used to study the interaction of proteins and large DNA molecules. Although DNA-protein interactions have been investigated for many decades, there have been significant achievements in the last five years, and we focus mainly on these recent developments.

  10. The Global Climatology of Large Lakes

    NASA Astrophysics Data System (ADS)

    Merchant, Christopher J.; MacCallum, Stuart N.

    2010-12-01

    There are ~250 large lakes in the world, large here meaning lakes with surface areas exceeding 500 km². Lakes are potentially sensitive indicators of regional climatic changes and large lakes also significantly modulate climate locally, via their effects on surface-atmosphere fluxes and atmospheric stability. The interactions of lakes and atmosphere are therefore significant for climatology and weather forecasting. Relatively few lakes are permanently instrumented (mostly in N America and Europe), and in assimilation systems for numerical weather prediction, highly simplistic approximations for lake surface temperature are giving way to lake temperature models within the land-atmosphere exchange schemes. It is therefore important to exploit satellite observations to inform and constrain lake and weather models, and to provide observations of lake temperature changes for a wider range of lakes. Here we present results from the first phase of an ESA-funded project, ARCLake, to demonstrate accurate lake surface temperatures and detection of ice cover, using ATSR-2 and AATSR, for large lakes globally. It is sometimes assumed that sea surface temperature (SST) techniques are applicable, but in fact, lake-specific approaches are required for cloud detection and for temperature retrieval. In this paper, we present preliminary results from application of the techniques described in a companion paper [1].

  11. Responses of large mammals to climate change

    PubMed Central

    Hetem, Robyn S; Fuller, Andrea; Maloney, Shane K; Mitchell, Duncan

    2014-01-01

    Most large terrestrial mammals, including the charismatic species so important for ecotourism, do not have the luxury of rapid micro-evolution or sufficient range shifts as strategies for adjusting to climate change. The rate of climate change is too fast for genetic adaptation to occur in mammals with longevities of decades, typical of large mammals, and landscape fragmentation and population by humans too widespread to allow spontaneous range shifts of large mammals, leaving only the expression of latent phenotypic plasticity to counter effects of climate change. The expression of phenotypic plasticity includes anatomical variation within the same species, changes in phenology, and employment of intrinsic physiological and behavioral capacity that can buffer an animal against the effects of climate change. Whether that buffer will be realized is unknown, because little is known about the efficacy of the expression of plasticity, particularly for large mammals. Future research in climate change biology requires measurement of physiological characteristics of many identified free-living individual animals for long periods, probably decades, to allow us to detect whether expression of phenotypic plasticity will be sufficient to cope with climate change.

  12. Black Holes and the Large Hadron Collider

    ERIC Educational Resources Information Center

    Roy, Arunava

    2011-01-01

    The European Center for Nuclear Research or CERN's Large Hadron Collider (LHC) has caught our attention partly due to the film "Angels and Demons." In the movie, an antimatter bomb attack on the Vatican is foiled by the protagonist. Perhaps just as controversial is the formation of mini black holes (BHs). Recently, the American Physical Society…

  13. Kids and Chemistry: Large Event Guide.

    ERIC Educational Resources Information Center

    Tinnesand, Michael

    This guide is intended to provide Kids and Chemistry (K&C) with a variety of age-appropriate, fun, and safe demonstrations. It features information on planning a large event and includes safety guidelines. Several activities are included under each major topic. Topics include: (1) Acids and Bases; (2) Unsigned; (3) Kool Tie-Dye; (4) Secret…

  14. The repetition of large-earthquake ruptures.

    PubMed Central

    Sieh, K

    1996-01-01

    This survey of well-documented repeated fault rupture confirms that some faults have exhibited a "characteristic" behavior during repeated large earthquakes--that is, the magnitude, distribution, and style of slip on the fault has repeated during two or more consecutive events. In two cases faults exhibit slip functions that vary little from earthquake to earthquake. In one other well-documented case, however, fault lengths contrast markedly for two consecutive ruptures, but the amount of offset at individual sites was similar. Adjacent individual patches, 10 km or more in length, failed singly during one event and in tandem during the other. More complex cases of repetition may also represent the failure of several distinct patches. The faults of the 1992 Landers earthquake provide an instructive example of such complexity. Together, these examples suggest that large earthquakes commonly result from the failure of one or more patches, each characterized by a slip function that is roughly invariant through consecutive earthquake cycles. The persistence of these slip-patches through two or more large earthquakes indicates that some quasi-invariant physical property controls the pattern and magnitude of slip. These data seem incompatible with theoretical models that produce slip distributions that are highly variable in consecutive large events. PMID:11607662

  15. Sterilization for Large Volunteer Temporary Clinics.

    PubMed

    Cuny, Eve

    2015-12-01

    Large portable clinics staffed by volunteers present many unique challenges, including establishing appropriate instrument processing services. This article explores many of the specific steps an organization can take to ensure a safe care environment for patients and a safe working environment for volunteers.

  16. Technology for large tandem mirror experiments

    SciTech Connect

    Thomassen, K.I.

    1980-09-04

    Construction of a large tandem mirror (MFTF-B) will soon begin at Lawrence Livermore National Laboratory (LLNL). Designed to reach break-even plasma conditions, the facility will significantly advance the physics and technology of magnetic-mirror-based fusion reactors. This paper describes the objectives and the design of the facility.

  17. Improving Interactions in the Large Language Class.

    ERIC Educational Resources Information Center

    Raymond, Patricia M.; Raymond, Jacques; Pilon, Daniel

    1998-01-01

    Describes a prototypical microcomputer system that improves the interactions between teacher and large language classes in a traditional language classroom setting. This system achieves dynamic interactions through multiple student/professor interventions, immediate and delayed feedback, and individual teacher/student conferences. The system uses…

  18. Demonstrations to Wake Up Large Classes.

    ERIC Educational Resources Information Center

    Howes, Ruth; Watson, James

    1982-01-01

    A general strategy and specific examples for demonstrations in large physics classes are given. Such "action" demonstrations may involve students moving around the class to demonstrate molecular behavior in different states of matter and the effect of heat in changing state. (JN)

  19. The Imaging of Large Nerve Perineural Spread.

    PubMed

    Gandhi, Mitesh; Sommerville, Jennifer

    2016-04-01

    We present a review of the imaging findings of large nerve perineural spread within the skull base. The MRI techniques and reasons for performing different sequences are discussed. A series of imaging examples illustrates the appearance of perineural tumor spread with an emphasis on the zonal staging system.

  20. 76 FR 46959 - Large Trader Reporting

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-03

    ...: Effective Date: October 3, 2011. Compliance Dates: December 1, 2011 for the requirement on large traders to... events, and has highlighted the need for an efficient and effective mechanism for gathering data on the.... 62174 (May 26, 2010), 75 FR 32556 (June 8, 2010) (proposed Consolidated Audit Trail) (File No....

  1. Forecasting distribution of numbers of large fires

    USGS Publications Warehouse

    Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.

    2014-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however they indicate neither the chance that a large fire will occur, nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, and satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1 acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.
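    The abstract does not specify the statistical model behind the "at least 1, 2, 3, or 4 large fires" probabilities; as a minimal sketch, a Poisson count model is one common way to turn an expected weekly number of large fires into such probabilities. The rate `lam` below is a hypothetical input, not a value from the paper.

    ```python
    import math

    def prob_at_least(lam: float, k: int) -> float:
        """P(N >= k) for N ~ Poisson(lam): one minus the CDF up to k - 1."""
        cdf = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
        return 1.0 - cdf

    # Hypothetical expected number of large fires in one Predictive Services
    # Area for the forthcoming week.
    lam = 0.8
    probs = {k: prob_at_least(lam, k) for k in (1, 2, 3, 4)}
    ```

    The actual forecasts are built from the Fire Potential Index and historical fire-occurrence data, so this stands in only for the final step of reporting exceedance probabilities.
    
    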

  2. The Large Area Crop Inventory Experiment (LACIE)

    NASA Technical Reports Server (NTRS)

    Macdonald, R. B.

    1976-01-01

    A Large Area Crop Inventory Experiment (LACIE) was undertaken to prove out an economically important application of remote sensing from space. The experiment focused upon determination of wheat acreages in the U.S. Great Plains and upon the development and testing of yield models. The results and conclusions are presented.

  3. Very large radio surveys of the sky.

    PubMed

    Condon, J J

    1999-04-27

    Recent advances in electronics and computing have made possible a new generation of large radio surveys of the sky that yield an order-of-magnitude higher sensitivity and positional accuracy. Combined with the unique properties of the radio universe, these quantitative improvements open up qualitatively different and exciting new scientific applications of radio surveys.

  4. Very large radio surveys of the sky

    PubMed Central

    Condon, J. J.

    1999-01-01

    Recent advances in electronics and computing have made possible a new generation of large radio surveys of the sky that yield an order-of-magnitude higher sensitivity and positional accuracy. Combined with the unique properties of the radio universe, these quantitative improvements open up qualitatively different and exciting new scientific applications of radio surveys. PMID:10220365

  5. Large-Eddy Simulation and Multigrid Methods

    SciTech Connect

    Falgout,R D; Naegle,S; Wittum,G

    2001-06-18

    A method to simulate turbulent flows with Large-Eddy Simulation on unstructured grids is presented. Two kinds of dynamic models are used to model the unresolved scales of motion and are compared with each other on different grids. Thereby the behavior of the models is shown and additionally the feature of adaptive grid refinement is investigated. Furthermore the parallelization aspect is addressed.

  6. Camera Systems Rapidly Scan Large Structures

    NASA Technical Reports Server (NTRS)

    2013-01-01

    Needing a method to quickly scan large structures like an aircraft wing, Langley Research Center developed the line scanning thermography (LST) system. LST works in tandem with a moving infrared camera to capture how a material responds to changes in temperature. Princeton Junction, New Jersey-based MISTRAS Group Inc. now licenses the technology and uses it in power stations and industrial plants.

  7. Large Indoor Sports and Recreation Facilities.

    ERIC Educational Resources Information Center

    Seidler, Todd

    This paper presents an overview and analysis of field houses, stadiums, arenas, and campus recreation centers. All are large indoor sports or recreation facilities. In general, stadiums and arenas are spectator facilities while field houses and campus recreation centers are primarily designed for activity. A college field house is a structure that…

  8. Large astronomical catalog management for telescope operations

    NASA Astrophysics Data System (ADS)

    Baruffolo, Andrea; Benacchio, Leopoldo

    1998-07-01

    Large astronomical catalogues containing from a million up to hundreds of millions of records are currently available, and even larger catalogues will be released in the near future. They will have an important operational role, since they will be used throughout the observing cycle of next-generation large telescopes: for proposal and observation preparation, telescope scheduling, selection of guide stars, etc. These large databases pose new problems for fast and general access. Solutions based on custom software or on customized versions of specific catalogues have been proposed, but the problem will benefit from a more general database approach. While traditional database technologies have proven inadequate for this task, new technologies are emerging, in particular Object Relational DBMSs, that appear suitable for solving the problem. In this paper we describe our experiences in experimenting with ORDBMSs for the management of large astronomical catalogues. We worked especially on the database query language and on access methods: in the first area, to extend the query language with astronomical functionality and to support typical astronomical queries; in the second, to speed up the execution of queries containing astronomical predicates.
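    A typical "astronomical predicate" of the kind such systems must support is a cone search: return all catalogue records within some angular radius of a sky position. The names `ang_sep_deg` and `cone_search` below are illustrative, not from the paper; a real ORDBMS would back this predicate with a sky index rather than a linear scan.

    ```python
    import math

    def ang_sep_deg(ra1, dec1, ra2, dec2):
        """Angular separation in degrees between two sky positions (all args in degrees)."""
        ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
        # Haversine formula: numerically stable for small separations.
        sd = math.sin((dec2 - dec1) / 2.0) ** 2
        sr = math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2.0) ** 2
        return math.degrees(2.0 * math.asin(math.sqrt(sd + sr)))

    def cone_search(catalog, ra0, dec0, radius_deg):
        """Return records within radius_deg of (ra0, dec0); shown as a linear scan."""
        return [rec for rec in catalog
                if ang_sep_deg(rec["ra"], rec["dec"], ra0, dec0) <= radius_deg]
    ```

    Speeding up exactly this kind of query, by indexing on sky position instead of scanning, is the access-method problem the abstract describes.
    
    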

  9. Mass extinctions caused by large bolide impacts

    SciTech Connect

    Alvarez, L.W.

    1987-07-01

    Evidence indicates that the collision of Earth and a large piece of Solar System debris such as a meteoroid, asteroid or comet caused the great extinctions of 65 million years ago, leading to the transition from the age of the dinosaurs to the age of the mammals.

  10. Large N duality beyond the genus expansion

    NASA Astrophysics Data System (ADS)

    Mariño, Marcos; Pasquetti, Sara; Putrov, Pavel

    2010-07-01

    We study non-perturbative aspects of the large N duality between Chern-Simons theory and topological strings, and we find a rich structure of large N phase transitions in the complex plane of the 't Hooft parameter. These transitions are due to large N instanton effects, and they can be regarded as a deformation of the Stokes phenomenon. Moreover, we show that, for generic values of the 't Hooft coupling, instanton effects are not exponentially suppressed at large N and they correct the genus expansion. This phenomenon was first discovered in the context of matrix models, and we interpret it as a generalization of the oscillatory asymptotics along anti-Stokes lines. In the string dual, the instanton effects can be interpreted as corrections to the saddle string geometry due to discretized neighbouring geometries. As a mathematical application, we obtain the 1/N asymptotics of the partition function of Chern-Simons theory on the lens space L(2, 1), and we test it numerically to high precision in order to exhibit the importance of instanton effects.

  11. Mass extinctions caused by large bolide impacts.

    PubMed

    Alvarez, L W

    1987-07-01

    Evidence indicates that the collisions of Earth and a large piece of Solar System debris such as a meteoroid, asteroid or comet caused the great extinctions of 65 million years ago, leading to the transition from the age of the dinosaurs to the age of the mammals.

  12. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  13. Design evolution of large wind turbine generators

    NASA Technical Reports Server (NTRS)

    Spera, D. A.

    1979-01-01

During the past five years, the goals of economy and reliability have led to a significant evolution in the basic design--both external and internal--of large wind turbine systems. To show the scope and nature of recent changes in wind turbine designs, three types of development are described: (1) system configuration developments; (2) computer code developments; and (3) blade technology developments.

  14. Colonic stenting in malignant large bowel obstruction.

    PubMed

    Rajadurai, Vinita A; Levitt, Michael

    2016-06-01

    In patients who are surgical candidates, colonic stenting is beneficial for preoperative decompression in large bowel obstruction, as it can convert a surgical procedure from an emergent two-step approach into an elective one-step resection with a primary anastomosis. PMID:27398210

  15. Large area space solar cell assemblies

    NASA Technical Reports Server (NTRS)

    Spitzer, M. B.; Nowlan, M. J.

    1982-01-01

Development of a large area space solar cell assembly is presented. The assembly consists of an ion implanted silicon cell and glass cover. The important attributes of fabrication are (1) use of a back surface field which is compatible with a back surface reflector, and (2) integration of coverglass application and cell fabrication.

  16. Slug flow in a large diameter pipe

    SciTech Connect

    Crowley, C.J.; Sam, R.G.; Wallis, G.B.; Mehta, D.C.

    1985-01-01

Experimental and analytical results are presented for two-phase slug flow in a transparent, large diameter pipe (6.75 inches ID) at high gas density. Slug characteristics of velocity, length, frequency, carpet profile and carpet velocity, as well as pressure drop, have been measured and compared with correlations and mechanistic models.

  17. Microwave performance characterization of large space antennas

    NASA Technical Reports Server (NTRS)

    Bathker, D. A. (Editor)

    1977-01-01

    Performance capabilities of large microwave space antenna configurations with apertures generally from 100 wavelengths upwards are discussed. Types of antennas considered include: phased arrays, lenses, reflectors, and hybrid combinations of phased arrays with reflectors or lenses. The performance characteristics of these broad classes of antennas are examined and compared in terms of applications.

  18. Understanding Student Performance in a Large Class

    ERIC Educational Resources Information Center

    Snowball, Jen D.; Boughey, Chrissie

    2012-01-01

Across the world, university teachers are increasingly being required to engage with diversity in the classes they teach. Using data from a large Economics 1 class at a South African university, this study attempts to understand the effects of diversity on chances of success and how assessment can impact on this. By demonstrating how theory can be…

  19. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared during a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  20. Large Format Multicolor QWIP Focal Plane Arrays

    NASA Technical Reports Server (NTRS)

    Soibel, A.; Gunapala, S. D.; Bandara, S. V.; Liu, J. K.; Mumolo, J. M.; Ting, D. Z.; Hill, C. J.; Nguyen, J.

    2009-01-01

    Mid-wave infrared (MWIR) and long-wave infrared (LWIR) multicolor focal plane array (FPA) cameras are essential for many DoD and NASA applications including Earth and planetary remote sensing. In this paper we summarize our recent development of large format multicolor QWIP FPA that cover MWIR and LWIR bands.

  1. Global Alignment System for Large Genomic Sequencing

    2002-03-01

    AVID is a global alignment system tailored for the alignment of large genomic sequences up to megabases in length. Features include the possibility of one sequence being in draft form, fast alignment, robustness and accuracy. The method is an anchor based alignment using maximal matches derived from suffix trees.
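Per this record, AVID's anchoring step derives maximal exact matches from suffix trees. As an illustration only, the toy sketch below recovers maximal exact matches by seeding on shared k-mers and extending each seed outward; the function name and parameters are hypothetical, and a real implementation would use a suffix tree or suffix array to scale to megabase sequences.

```python
def find_anchors(a, b, k=4):
    """Toy anchor finder: seed exact k-mer matches between sequences a and b,
    then extend each seed to a maximal exact match. AVID itself derives
    maximal matches from suffix trees; this k-mer sketch only illustrates
    the idea of anchor-based alignment."""
    # Index every k-mer of a by its start positions.
    index = {}
    for i in range(len(a) - k + 1):
        index.setdefault(a[i:i + k], []).append(i)
    anchors = set()
    for j in range(len(b) - k + 1):
        for i in index.get(b[j:j + k], []):
            # Extend the seed left and right while characters agree.
            s, t = i, j
            while s > 0 and t > 0 and a[s - 1] == b[t - 1]:
                s, t = s - 1, t - 1
            e, f = i + k, j + k
            while e < len(a) and f < len(b) and a[e] == b[f]:
                e, f = e + 1, f + 1
            anchors.add((s, t, e - s))  # (start in a, start in b, length)
    return sorted(anchors)

print(find_anchors("ACGTACGTTTGACGT", "ACGTTTGA"))
# → [(0, 0, 4), (4, 0, 8), (11, 0, 4)]
```

The middle anchor (4, 0, 8) spans the whole of the second sequence, which is the kind of long exact match an anchor-based aligner fixes first before aligning the gaps between anchors.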

  2. Reading the World through Very Large Numbers

    ERIC Educational Resources Information Center

    Greer, Brian; Mukhopadhyay, Swapna

    2010-01-01

    One original, and continuing, source of interest in large numbers is observation of the natural world, such as trying to count the stars on a clear night or contemplation of the number of grains of sand on the seashore. Indeed, a search of the internet quickly reveals many discussions of the relative numbers of stars and grains of sand. Big…

  3. Large communications platforms versus smaller satellites

    NASA Technical Reports Server (NTRS)

    1979-01-01

Communications systems using large platforms are compared with systems using conventional satellites. Systems models were generated and compared for U.S. domestic application and for INTELSAT's international and domestic transponder lease application. Technology advances were assumed for the platforms and for the evolution of conventional satellites.

  4. Sterilization for Large Volunteer Temporary Clinics.

    PubMed

    Cuny, Eve

    2015-12-01

    Large portable clinics staffed by volunteers present many unique challenges, including establishing appropriate instrument processing services. This article explores many of the specific steps an organization can take to ensure a safe care environment for patients and a safe working environment for volunteers. PMID:26819989

  5. Responses of large mammals to climate change

    PubMed Central

    Hetem, Robyn S; Fuller, Andrea; Maloney, Shane K; Mitchell, Duncan

    2014-01-01

    Most large terrestrial mammals, including the charismatic species so important for ecotourism, do not have the luxury of rapid micro-evolution or sufficient range shifts as strategies for adjusting to climate change. The rate of climate change is too fast for genetic adaptation to occur in mammals with longevities of decades, typical of large mammals, and landscape fragmentation and population by humans too widespread to allow spontaneous range shifts of large mammals, leaving only the expression of latent phenotypic plasticity to counter effects of climate change. The expression of phenotypic plasticity includes anatomical variation within the same species, changes in phenology, and employment of intrinsic physiological and behavioral capacity that can buffer an animal against the effects of climate change. Whether that buffer will be realized is unknown, because little is known about the efficacy of the expression of plasticity, particularly for large mammals. Future research in climate change biology requires measurement of physiological characteristics of many identified free-living individual animals for long periods, probably decades, to allow us to detect whether expression of phenotypic plasticity will be sufficient to cope with climate change. PMID:27583293

  6. Employment of the Disabled in Large Corporations.

    ERIC Educational Resources Information Center

    Rabby, Rami

    1983-01-01

    Large corporations are in a unique position to employ the disabled, but they sometimes lack the motivation to do so. The author discusses elements of a corporate policy for the disabled, ways of formulating and disseminating it, assignment of responsibility, changes in management attitudes, and the special case of the multinational company.…

  7. Estimation and Compression over Large Alphabets

    ERIC Educational Resources Information Center

    Acharya, Jayadev

    2014-01-01

    Compression, estimation, and prediction are basic problems in Information theory, statistics and machine learning. These problems have been extensively studied in all these fields, though the primary focus in a large portion of the work has been on understanding and solving the problems in the asymptotic regime, "i.e." the alphabet size…

  8. Large size space construction for space exploitation

    NASA Astrophysics Data System (ADS)

    Kondyurin, Alexey

    2016-07-01

Space exploitation is impossible without large space structures. We need sufficiently large volumes of pressurized protective frames for crews, passengers, space processing equipment, etc. We have to be unlimited in space. At present the size and mass of space constructions are limited by the capacity of the launch vehicle, which limits our future in human exploitation of space and in the development of space industry. Large-size space constructions can be made using the curing technology of fiber-filled composites with a reactive matrix, applied directly in free space. For curing, the fabric impregnated with a liquid matrix (prepreg) is prepared under terrestrial conditions and shipped in a container to orbit. In due time the prepreg is unfolded by inflation. After the polymerization reaction, the durable construction can be fitted out with air, apparatus and life support systems. Our experimental studies of the curing processes in a simulated free-space environment showed that curing of the composite in free space is possible, so large-size space constructions can be developed. Projects for a space station, Moon base, Mars base, mining station, interplanetary spaceship, telecommunication station, space observatory, space factory, antenna dish, radiation shield and solar sail are proposed and overviewed. The study was supported by the Humboldt Foundation, ESA (contract 17083/03/NL/SFe), the NASA stratospheric balloon program and RFBR grants (05-08-18277, 12-08-00970 and 14-08-96011).

  9. Large abdominoscrotal hydrocele: Uncommon surgical entity

    PubMed Central

    Kamble, Pramod M.; Deshpande, Aparna A.; Thapar, Vinaykumar B.; Das, Krishanu

    2015-01-01

Introduction An abdominoscrotal hydrocele (ASH) consists of a large inguinoscrotal hydrocele that communicates in an hourglass fashion with a large intraabdominal component. It mostly affects a single testis but can very rarely present bilaterally. Presentation of case We present a 25-year-old patient with a large right-sided scrotal swelling encroaching on the lower abdomen. Clinically it was an abdominoscrotal hydrocele, which was confirmed with CT of the abdomen; the patient subsequently underwent surgery. Discussion Abdominoscrotal hydrocele is the rarest type of hydrocele, first described by Dupuytren. The etiology of ASH is unknown; however, different theories have been described in the literature to explain its pathogenesis. Diagnosis of ASH is made by clinical examination and is confirmed by radiological examination. Though ultrasonography is the first choice, in a few selected cases contrast-enhanced computerized tomography or magnetic resonance imaging may be helpful for more anatomical delineation. ASH may present with various complications secondary to pressure exerted by its components. Surgical excision of the sac is the only definitive treatment option; there is no role for conservative treatment. Sometimes decompression of the cyst is needed to ease dissection of the sac. Conclusion Abdominoscrotal hydrocele should be considered in the differential diagnosis of a large lower abdominal swelling accompanied by scrotal swelling. PMID:26363104

  11. Interdisciplinary science with large aperture detectors

    NASA Astrophysics Data System (ADS)

    Wiencke, Lawrence

    2013-06-01

    Large aperture detector systems to measure high energy cosmic rays also offer unique opportunities in other areas of science. Disciplines include geophysics such as seismic and volcanic activity, and atmospheric science ranging from clouds to lightning to aerosols to optical transients. This paper will discuss potential opportunities based on the ongoing experience of the Pierre Auger Observatory.

  12. Small and large number discrimination in guppies.

    PubMed

    Piffer, Laura; Agrillo, Christian; Hyde, Daniel C

    2012-03-01

Non-verbal numerical behavior in human infants, human adults, and non-human primates appears to be rooted in two distinct mechanisms: a precise system for tracking and comparing small numbers of items simultaneously (up to 3 or 4 items) and an approximate system for estimating the numerical magnitude of a group of objects. The most striking evidence that these two mechanisms are distinct comes from the apparent inability of young human infants and non-human primates to compare quantities across the small (<3 or 4)/large (>4) number boundary. We ask whether this distinction is present in a species more distantly related to humans, the guppy (Poecilia reticulata). We found that, like human infants and non-human primates, fish succeed at comparisons between large numbers only (5 vs. 10), succeed at comparisons between small numbers only (3 vs. 4), but systematically fail at comparisons that closely span the small/large boundary (3 vs. 5). Furthermore, increasing the distance between the small and large number resulted in successful discriminations (3 vs. 6, 3 vs. 7, and 3 vs. 9). This pattern of successes and failures is similar to that observed in human infants and non-human primates, suggesting that the two systems are present and functionally distinct across a wide variety of animal species. PMID:21909934

  13. Aerodynamic beam generator for large particles

    DOEpatents

    Brockmann, John E.; Torczynski, John R.; Dykhuizen, Ronald C.; Neiser, Richard A.; Smith, Mark F.

    2002-01-01

    A new type of aerodynamic particle beam generator is disclosed. This generator produces a tightly focused beam of large material particles at velocities ranging from a few feet per second to supersonic speeds, depending on the exact configuration and operating conditions. Such generators are of particular interest for use in additive fabrication techniques.

  14. A large vascular leiomyoma of the leg.

    PubMed

    Cigna, E; Maruccia, M; Malzone, G; Malpassini, F; Soda, G; Drudi, F M

    2012-06-01

A 69-year-old woman with a subcutaneous, large vascular leiomyoma of the leg is presented. The patient had a painful, slow-growing mass at the right medial malleolus. Clinical symptoms, US images and histopathologic features are reported. Vascular leiomyoma should be included in the differential diagnosis of painful, lower-extremity subcutaneous masses, even in lesions of larger dimensions.

  15. An electronic aromaticity index for large rings.

    PubMed

    Matito, Eduard

    2016-04-28

    We introduce a new electronic aromaticity index, AV1245, consisting of an average of the 4-center multicenter indices (MCI) along the ring that keeps a positional relationship of 1, 2, 4, 5. AV1245 measures the extent of transferability of the delocalized electrons between bonds 1-2 and 4-5, which is expected to be large in conjugated circuits and, therefore, in aromatic molecules. A new algorithm for the calculation of MCI for large rings is also introduced and used to produce the data for the calibration of the new aromaticity index. AV1245 does not rely on reference values, does not suffer from large numerical precision errors, and it does not present any limitation on the nature of atoms, the molecular geometry or the level of calculation. It is a size-extensive measure with low computational cost that grows linearly with the number of ring members. Therefore, it is especially suitable to study the aromaticity of large molecular rings such as those occurring in belt-shaped Möbius structures or porphyrins. The analysis of AV1245 in free-base and bis-metalated Pd [32]octaphyrins(1,0,1,0,1,0,1,0) completes this study. PMID:26878146
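The averaging scheme this record describes (4-center MCI values taken over ring atoms in the 1, 2, 4, 5 positional relationship) can be sketched as follows. The names `av1245` and `mci4` are hypothetical, and computing genuine MCI values requires the molecular wavefunction, so the example passes a stub in place of a real electronic-structure calculation.

```python
def av1245(ring_atoms, mci4):
    """Average the 4-center multicenter index (MCI) over all quadruplets
    of ring atoms in the positional relationship (1, 2, 4, 5), i.e.
    atoms i, i+1, i+3, i+4 around the ring (indices wrap around).
    `mci4` is a caller-supplied function returning the 4-center index."""
    n = len(ring_atoms)
    total = 0.0
    for i in range(n):
        a, b, c, d = (ring_atoms[i], ring_atoms[(i + 1) % n],
                      ring_atoms[(i + 3) % n], ring_atoms[(i + 4) % n])
        total += mci4(a, b, c, d)
    return total / n

# Stub MCI: a constant stands in for a real 4-center calculation,
# so the average over a 6-membered ring returns that constant.
print(av1245(list(range(6)), lambda a, b, c, d: 2.5))  # → 2.5
```

Because the cost is one 4-center index per ring atom, the work grows linearly with ring size, which is the size-extensivity property the abstract emphasizes.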

  16. Computerized Torque Control for Large dc Motors

    NASA Technical Reports Server (NTRS)

    Willett, Richard M.; Carroll, Michael J.; Geiger, Ronald V.

    1987-01-01

    Speed and torque ranges in generator mode extended. System of shunt resistors, electronic switches, and pulse-width modulation controls torque exerted by large, three-phase, electronically commutated dc motor. Particularly useful for motor operating in generator mode because it extends operating range to low torque and high speed.

  17. Large Deviations: Advanced Probability for Undergrads

    ERIC Educational Resources Information Center

    Rolls, David A.

    2007-01-01

    In the branch of probability called "large deviations," rates of convergence (e.g. of the sample mean) are considered. The theory makes use of the moment generating function. So, particularly for sums of independent and identically distributed random variables, the theory can be made accessible to senior undergraduates after a first course in…
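For the i.i.d. sums this record alludes to, the decay rate of tail probabilities is the Legendre transform of the log moment generating function (Cramér's theorem). A minimal sketch, assuming fair-coin Bernoulli trials and illustrative function names:

```python
import math
import random

def rate_bernoulli(x, p=0.5):
    """Cramér rate function I(x) for i.i.d. Bernoulli(p) trials:
    the Legendre transform of log E[exp(t*X)] = log(1 - p + p*e^t)."""
    return x * math.log(x / p) + (1 - x) * math.log((1 - x) / (1 - p))

def tail_prob(n, a, p=0.5, trials=50_000, seed=1):
    """Monte Carlo estimate of P(sample mean of n coin flips >= a)."""
    rng = random.Random(seed)
    hits = sum(
        sum(rng.random() < p for _ in range(n)) >= a * n
        for _ in range(trials)
    )
    return hits / trials

# Large deviation theory predicts -(1/n) log P(mean >= a) -> I(a):
a = 0.7
for n in (10, 20, 40):
    print(n, -math.log(tail_prob(n, a)) / n)
print(rate_bernoulli(a))  # I(0.7) ≈ 0.0823
```

The printed values of -(1/n) log P(mean ≥ 0.7) drift down toward I(0.7) as n grows; the subexponential prefactor explains the slow approach, which is itself a useful classroom point.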

  19. Solar Rejection Filter for Large Telescopes

    NASA Technical Reports Server (NTRS)

    Hemmati, Hamid; Lesh, James

    2009-01-01

To reject solar radiation photons at the front aperture for large telescopes, a mosaic of large transmission mode filters is placed in front of the telescope or at the aperture of the dome. Filtering options for effective rejection of sunlight include a smaller filter down-path near the focus of the telescope, and a large-diameter filter located in front of the main aperture. Two types of large filters are viable: reflectance mode and transmittance mode. In the case of reflectance mode, a dielectric coating on a suitable substrate (e.g. a low-thermal-expansion glass) is arranged to reflect only a single, narrow wavelength and to efficiently transmit all other wavelengths. Such coatings are commonly referred to as notch filters. In this case, the large mirror located in front of the telescope aperture reflects the received (signal and background) light into the telescope. In the case of transmittance mode, a dielectric coating on a suitable substrate (glass, sapphire, clear plastic, membrane, and the like) is arranged to transmit only a single wavelength and to reject all other wavelengths (visible and near IR) of light. The substrate of the large filter will determine its mass. At first glance, a large optical filter with a diameter of up to 10 m, located in front of the main aperture, would require a significant thickness to avoid sagging. However, a segmented filter supported by a structurally rugged grid can support smaller filters. The obscuration introduced by the grid is minimal because its total area can be made insignificant. This configuration can be detrimental to a diffraction-limited telescope due to diffraction effects at the edges of each sub-panel. However, no discernible degradation would result for a telescope operating at 20 times the diffraction limit (a photon bucket). Even the small amount of sagging in each subpanel should have minimal effect on the performance of a non-diffraction-limited telescope because the part has no appreciable optical power. If the

  20. Large Payload Ground Transportation and Test Considerations

    NASA Technical Reports Server (NTRS)

    Rucker, Michelle A.

    2016-01-01

    During test and verification planning for the Altair lunar lander project, a National Aeronautics and Space Administration (NASA) study team identified several ground transportation and test issues related to the large payload diameter. Although the entire Constellation Program-including Altair-has since been canceled, issues identified by the Altair project serve as important lessons learned for payloads greater than 7 m diameter being considered for NASA's new Space Launch System (SLS). A transportation feasibility study found that Altair's 8.97 m diameter Descent Module would not fit inside available aircraft. Although the Ascent Module cabin was only 2.35 m diameter, the long reaction control system booms extended nearly to the Descent Module diameter, making it equally unsuitable for air transportation without removing the booms and invalidating assembly workmanship screens or acceptance testing that had already been performed. Ground transportation of very large payloads over extended distances is not generally permitted by most states, so overland transportation alone would not be an option. Limited ground transportation to the nearest waterway may be possible, but water transportation could take as long as 66 days per production unit, depending on point of origin and acceptance test facility; transportation from the western United States would require transit through the Panama Canal to access the Kennedy Space Center launch site. Large payloads also pose acceptance test and ground processing challenges. Although propulsion, mechanical vibration, and reverberant acoustic test facilities at NASA's Plum Brook Station have been designed to accommodate large spacecraft, special handling and test work-arounds may be necessary, which could increase cost, schedule, and technical risk. Once at the launch site, there are no facilities currently capable of accommodating the combination of large payload size and hazardous processing such as hypergolic fuels

  1. Hayward fault: Large earthquakes versus surface creep

    USGS Publications Warehouse

    Lienkaemper, James J.; Borchardt, Glenn; Borchardt, Glenn; Hirschfeld, Sue E.; Lienkaemper, James J.; McClellan, Patrick H.; Williams, Patrick L.; Wong, Ivan G.

    1992-01-01

The Hayward fault, thought to be a likely source of large earthquakes in the next few decades, has generated two large historic earthquakes (about magnitude 7), one in 1836 and another in 1868. We know little about the 1836 event, but the 1868 event had a surface rupture extending 41 km along the southern Hayward fault. Right-lateral surface slip occurred in 1868, but was not well measured. Witness accounts suggest coseismic right slip and afterslip of under a meter. We measured the spatial variation of the historic creep rate along the Hayward fault, deriving rates mainly from surveys of offset cultural features (curbs, fences, and buildings). Creep occurs along at least 69 km of the fault's 82-km length (13 km is underwater). Creep rate seems nearly constant over many decades with short-term variations. The creep rate mostly ranges from 3.5 to 6.5 mm/yr, varying systematically along strike. The fastest creep is along a 4-km section near the south end. Here creep has been about 9 mm/yr since 1921, and possibly since the 1868 event as indicated by offset railroad track rebuilt in 1869. This 9 mm/yr slip rate may approach the long-term or deep slip rate related to the strain buildup that produces large earthquakes, a hypothesis supported by geologic studies (Lienkaemper and Borchardt, 1992). If so, the potential for slip in large earthquakes, which originate below the surficial creeping zone, may now be 1/1m along the southern (1868) segment and ≥1.4m along the northern (1836?) segment. Subtracting surface creep rates from a long-term slip rate of 9 mm/yr gives a present potential for surface slip in large earthquakes of up to 0.8 m. Our earthquake potential model, which accounts for historic creep rate, microseismicity distribution, and geodetic data, suggests that enough strain may now be available for large magnitude earthquakes (magnitude 6.8 in the northern (1836?) segment, 6.7 in the southern (1868) segment, and 7.0 for both).
Thus despite surficial creep, the fault may be
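The slip-deficit arithmetic in this abstract (deep slip rate minus surface creep, accumulated since the 1868 rupture) can be sketched as below. The function name is illustrative; the 9 mm/yr deep rate and 3.5-6.5 mm/yr creep range are the values quoted above, and 1992 (the publication date) is assumed as the evaluation year.

```python
def slip_deficit_m(deep_rate_mm_yr, creep_rate_mm_yr, years):
    """Slip stored for release in a future earthquake: the deep
    (long-term) slip rate minus the surface creep rate, accumulated
    over `years` and converted from mm to m."""
    return (deep_rate_mm_yr - creep_rate_mm_yr) * years / 1000.0

years_since_1868 = 1992 - 1868  # 124 years at publication
# Creep along most of the fault ranges from 3.5 to 6.5 mm/yr:
for creep in (3.5, 6.5):
    print(creep, slip_deficit_m(9.0, creep, years_since_1868))
```

With these inputs the stored slip comes out at roughly 0.3-0.7 m, consistent with the abstract's "up to 0.8 m" (the exact figure depends on the elapsed time assumed).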

  2. From the Law of Large Numbers to Large Deviation Theory in Statistical Physics: An Introduction

    NASA Astrophysics Data System (ADS)

    Cecconi, Fabio; Cencini, Massimo; Puglisi, Andrea; Vergni, Davide; Vulpiani, Angelo

    This contribution aims at introducing the topics of this book. We start with a brief historical excursion on the developments from the law of large numbers to the central limit theorem and large deviations theory. The same topics are then presented using the language of probability theory. Finally, some applications of large deviations theory in physics are briefly discussed through examples taken from statistical mechanics, dynamical and disordered systems.

  3. Large-scale instabilities of helical flows

    NASA Astrophysics Data System (ADS)

    Cameron, Alexandre; Alexakis, Alexandros; Brachet, Marc-Étienne

    2016-10-01

Large-scale hydrodynamic instabilities of periodic helical flows of a given wave number K are investigated using three-dimensional Floquet numerical computations. In the Floquet formalism the unstable field is expanded in modes of different spatial periodicity. This allows us (i) to clearly distinguish large- from small-scale instabilities and (ii) to study modes of wave number q of arbitrarily large scale separation q ≪ K. Different flows are examined, including flows that exhibit small-scale turbulence. The growth rate σ of the most unstable mode is measured as a function of the scale separation q/K ≪ 1 and the Reynolds number Re. It is shown that the growth rate follows the scaling σ ∝ q if an AKA effect [Frisch et al., Physica D: Nonlinear Phenomena 28, 382 (1987), 10.1016/0167-2789(87)90026-1] is present, or a negative eddy viscosity scaling σ ∝ q² in its absence. This holds both for the Re ≪ 1 regime, where previously derived asymptotic results are verified, and for Re = O(1), which is beyond their range of validity. Furthermore, for values of Re above a critical value Re_Sc beyond which small-scale instabilities are present, the growth rate becomes independent of q and the energy of the perturbation at large scales decreases with scale separation. The nonlinear behavior of these large-scale instabilities is also examined in the nonlinear regime, where the largest scales of the system are found to be the most dominant energetically. These results are interpreted by low-order models.
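Distinguishing the two scalings reported here, σ ∝ q (AKA effect) versus σ ∝ q² (negative eddy viscosity), amounts to measuring the slope of log σ against log q. A minimal sketch with an illustrative function name and synthetic data (not the paper's):

```python
import math

def scaling_exponent(qs, sigmas):
    """Least-squares slope of log(sigma) vs log(q): the exponent alpha
    in sigma ~ q**alpha. alpha ≈ 1 would indicate an AKA-type
    instability, alpha ≈ 2 a negative-eddy-viscosity one."""
    xs = [math.log(q) for q in qs]
    ys = [math.log(s) for s in sigmas]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Synthetic growth rates following sigma = 0.3 * q**2 (illustrative only):
qs = [0.05, 0.1, 0.2, 0.4]
sigmas = [0.3 * q ** 2 for q in qs]
print(scaling_exponent(qs, sigmas))  # ≈ 2.0
```

In practice one would also check that the fitted exponent is stable as the smallest q values are dropped, since the asymptotic scaling only holds for q/K ≪ 1.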

  4. Large ejecta fragments from asteroids. [Abstract only

    NASA Technical Reports Server (NTRS)

    Asphaug, E.

    1994-01-01

The asteroid 4 Vesta, with its unique basaltic crust, remains a key mystery of planetary evolution. A localized olivine feature suggests excavation of subcrustal material in a crater or impact basin comparable in size to the planetary radius (R(sub vesta) is approximately = 280 km). Furthermore, a 'clan' of small asteroids associated with Vesta (by spectral and orbital similarities) may be ejecta from this impact and direct parents of the basaltic achondrites. To escape, these smaller (about 4-7 km) asteroids had to be ejected at speeds greater than the escape velocity, v(sub esc) is approximately = 350 m/s. This evidence that large fragments were ejected at high speed from Vesta has not been reconciled with the present understanding of impact physics. Analytical spallation models predict that an impactor capable of ejecting these 'chips off Vesta' would be almost the size of Vesta! Such an impact would lead to the catastrophic disruption of both bodies. A simpler analysis is outlined, based on comparison with cratering on Mars, and it is shown that Vesta could survive an impact capable of ejecting kilometer-scale fragments at sufficient speed. To what extent does Vesta survive the formation of such a large crater? This is best addressed using a hydrocode such as SALE 2D with centroidal gravity to predict velocities subsequent to impact. The fragmentation outcome and post-impact velocities are described to demonstrate that Vesta survives without large-scale disassembly or overturning of the crust. Vesta and its clan represent a valuable dataset for testing fragmentation hydrocodes such as SALE 2D and SPH 3D at planetary scales. Resolution required to directly model spallation 'chips' on a body 100 times as large is now marginally possible on modern workstations. These boundaries are important in near-surface ejection processes and in large-scale disruption leading to asteroid families and stripped cores.

  5. Navigation and control of large satellite formations

    NASA Astrophysics Data System (ADS)

    Bamford, William Alfred, Jr.

In recent years, there has been substantial interest in autonomous satellite formations, driven by the new technologies that enable smaller and cheaper spacecraft. Formation flying allows for mission designs, such as stereoscopic imaging, that are impractical or impossible for a single satellite. Much of the current work focuses upon small formations, which can be defined as four or fewer satellites in a relatively tight grouping. Next generation formations may be composed of more satellites spanning greater spatial distances. The large formation problem becomes more difficult for several reasons, including an increased amount of communication required between the satellites, and orbit perturbations, which become more important as the formation size grows. The purpose of this dissertation is to examine formation flying for large formations, and determine whether or not generalizations can be made linking the large and small formation regimes. In order to model formations with many satellites, a simulation environment was constructed in which different observers, controllers, and formation architectures can be modelled. This dissertation focuses on a decentralized control scheme, but the software is general enough to accommodate a variety of control architectures. Validation of the large formation models is accomplished by initially modelling only a pair of satellites and comparing the results against those found in the literature. As a demonstration of the theoretical results, a real-time, closed-loop, hardware-in-the-loop simulation was constructed using GPS receivers as the measurement source. A large constellation, real-time simulation system was developed that utilized the Internet to connect simulation equipment from research centers in different locations.

  6. The paradox of large alluvial rivers (Invited)

    NASA Astrophysics Data System (ADS)

    Latrubesse, E. M.

    2010-12-01

    Large alluvial rivers exhibit large floodplains, very gentle slopes, a good selection of bed materials (generally sand), low specific stream power, and could represent the ultimate examples of “dynamic equilibrium” in fluvial systems. However, equilibrium can be discussed at different temporal scales. Base level changes by tectonic or climatic effects, modifications in sediment and water supply, or different kinds of human impacts are the traditional causes that could trigger “disequilibrium” and changes in the longitudinal profile. Simultaneously, adjustments of longitudinal profiles were thought to evolve from downstream to upstream by several processes, the most common being receding erosion. Some authors have demonstrated that when changes in base level happen, a variety of adjustments can be reached in the lower course as a function of the available sediment and water discharge and of the slope articulations between the fluvial reach and the continental shelf, among other factors, and that the adjustments can be transferred significantly upstream in small rivers but not far upstream along large fluvial systems. When analyzing the Quaternary fluvial belts of large rivers at the millennial scale, paleohydrological changes and modifications in floodplain constructional processes or erosion are normally associated with late Quaternary climatic changes. The study of several of the largest rivers demonstrates that climatic changes and fluvial responses are not always working totally in phase and that direct cause-consequence relations are not the rule. This paper describes floodplain evolution and the lagged geomorphic responses of some large river systems to recent climatic changes. Information from some of the largest rivers of the world, such as the Amazon, the Parana, several tributaries of the Amazon (Negro, Xingú, Tapajos), as well as some large Siberian rivers, was used. Since the last deglaciation, these large fluvial systems have not had enough time to reach equilibrium.

  7. Large spin magnetism with cold atoms

    NASA Astrophysics Data System (ADS)

    Laburthe-Tolra, Bruno

    2016-05-01

    The properties of quantum gases made of ultra-cold atoms strongly depend on the interactions between atoms. These interactions lead to condensed-matter-like collective behavior, so that quantum gases appear to be a new platform to study quantum many-body physics. In this seminar, I will focus on the case where the atoms possess an internal (spin) degree of freedom. The spin of atoms is naturally larger than that of electrons. Therefore, the study of the magnetic properties of ultra-cold gases allows for an exploration of magnetism beyond the typical situation in solid-state physics where magnetism is associated with the s = 1/2 spin of the electron. I will describe three specific cases: spinor Bose-Einstein condensates, where spin-dependent contact interactions introduce new quantum phases and spin dynamics; large spin magnetic atoms, where strong dipole-dipole interactions lead to exotic quantum magnetism; and large spin Fermi gases.

  8. Sea-level changes before large earthquakes

    USGS Publications Warehouse

    Wyss, M.

    1978-01-01

    Changes in sea level have long been used as a measure of local uplift and subsidence associated with large earthquakes. For instance, in 1835, the British naturalist Charles Darwin observed that sea level dropped by 2.7 meters during the large earthquake in Concepcion, Chile. From this piece of evidence and the terraces along the beach that he saw, Darwin concluded that the Andes had grown to their present height through earthquakes. Much more recently, George Plafker and James C. Savage of the U.S. Geological Survey have shown, from barnacle lines, that the great 1960 Chile and the 1964 Alaska earthquakes caused several meters of vertical displacement of the shoreline.

  9. Large scale study of tooth enamel

    SciTech Connect

    Bodart, F.; Deconninck, G.; Martin, M.Th.

    1981-04-01

    Human tooth enamel contains traces of foreign elements. The presence of these elements is related to the history and the environment of the human body and can be considered as the signature of perturbations which occur during the growth of a tooth. A map of the distribution of these traces on a large scale sample of the population will constitute a reference for further investigations of environmental effects. One hundred eighty samples of teeth were first analysed using PIXE, backscattering and nuclear reaction techniques. The results were analysed using statistical methods. Correlations between O, F, Na, P, Ca, Mn, Fe, Cu, Zn, Pb and Sr were observed and cluster analysis was in progress. The techniques described in the present work have been developed in order to establish a method for the exploration of very large samples of the Belgian population.

  10. Distributed control of large space antennas

    NASA Technical Reports Server (NTRS)

    Cameron, J. M.; Hamidi, M.; Lin, Y. H.; Wang, S. J.

    1983-01-01

    A systematic way to choose control design parameters and to evaluate performance for large space antennas is presented. The structural dynamics and control properties for a Hoop and Column Antenna and a Wrap-Rib Antenna are characterized. Some results of the effects of model parameter uncertainties on the stability, surface accuracy, and pointing errors are presented. Critical dynamics and control problems for these antenna configurations are identified and potential solutions are discussed. It was concluded that structural uncertainties and model error can cause serious performance deterioration and can even destabilize the controllers. For the hoop and column antenna, the large hoop, the long mast, and the lack of stiffness between the two substructures result in low structural frequencies. Performance can be improved if this design can be strengthened. The two-site control system is more robust than either single-site control system for the hoop and column antenna.

  11. Design problems of large space mirror radiotelescopes

    NASA Astrophysics Data System (ADS)

    Gvamichava, A. S.; Buiakas, V. I.; Kardashev, N. S.; Melnikov, N. P.; Sokolov, A. S.; Tsarevskii, G. S.; Usiukin, V. I.

    1981-04-01

    It is noted that large space antennas can solve problems of theoretical and practical importance. Large-diameter (tens or hundreds of meters) mirror antennas have been designed to use an automatically deployed truss frame as a base onto which the radio-reflecting surface is pulled (long-wave version) or on which controllable panels are mounted (short-wave version). The reasons why antennas of mm range can be promptly developed are discussed. Consideration is given to the factors that influence the precision of the reflecting surface of the space antenna, that is technological errors during the process of frame manufacture, technological errors during the manufacture of the reflecting surface, and deformation arising from thermal or force effects. The need to design antennas with automatic control of the reflecting surface in order to operate in the mm wavelength range is stressed.

  12. Online Community Detection for Large Complex Networks

    PubMed Central

    Pan, Gang; Zhang, Wangsheng; Wu, Zhaohui; Li, Shijian

    2014-01-01

    Complex networks describe a wide range of systems in nature and society. To understand complex networks, it is crucial to investigate their community structure. In this paper, we develop an online community detection algorithm with linear time complexity for large complex networks. Our algorithm processes a network edge by edge in the order that the network is fed to the algorithm. If a new edge is added, it just updates the existing community structure in constant time, and does not need to re-compute the whole network. Therefore, it can efficiently process large networks in real time. Our algorithm optimizes expected modularity instead of modularity at each step to avoid poor performance. The experiments are carried out using 11 public data sets, and are measured by two criteria, modularity and NMI (Normalized Mutual Information). The results show that our algorithm's running time is less than that of the commonly used Louvain algorithm while it gives competitive performance. PMID:25061683
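
    The abstract's edge-by-edge, constant-time update pattern can be sketched as follows. This is only an illustrative streaming-assignment skeleton, not the authors' expected-modularity algorithm: the function names and the merge policy are assumptions for demonstration.

    ```python
    # Illustrative sketch of streaming, edge-by-edge community assignment.
    # NOT the paper's expected-modularity method; it only shows the
    # O(1)-per-edge update structure the abstract describes.

    def process_edge(communities, next_id, u, v):
        """Update community labels for one incoming edge in O(1)."""
        cu, cv = communities.get(u), communities.get(v)
        if cu is None and cv is None:
            # Both endpoints are new: start a fresh community.
            communities[u] = communities[v] = next_id
            return next_id + 1
        if cu is None:
            communities[u] = cv  # attach the new node to the existing community
        elif cv is None:
            communities[v] = cu
        # If both nodes already have communities, a real algorithm would
        # decide whether to merge them using an objective such as expected
        # modularity; this sketch leaves the labels unchanged.
        return next_id

    communities = {}
    next_id = 0
    for u, v in [("a", "b"), ("b", "c"), ("d", "e")]:
        next_id = process_edge(communities, next_id, u, v)

    print(communities)  # {'a': 0, 'b': 0, 'c': 0, 'd': 1, 'e': 1}
    ```

    Because each edge touches only its two endpoints' labels, the total cost is linear in the number of edges, which is the property that makes online processing of large networks feasible.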

  13. Timing Characteristics of Large Area Picosecond Photodetectors

    SciTech Connect

    Adams, Bernhard W.; Elagin, Andrey L.; Frisch, H.; Obaid, Razib; Oberla, E; Vostrikov, Alexander; Wagner, Robert G.; Wang, Jingbo; Wetstein, Matthew J.; Northrop, R

    2015-09-21

    The LAPPD Collaboration was formed to develop ultrafast large-area imaging photodetectors based on new methods for fabricating microchannel plates (MCPs). In this paper we characterize the time response using a pulsed, sub-picosecond laser. We observe single photoelectron time resolutions of a 20 cm x 20 cm MCP consistently below 70 ps, spatial resolutions of roughly 500 μm, and median gains higher than 10(7). The RMS measured at one particular point on an LAPPD detector is 58 ps, with σ of 47 ps. The differential time resolution between the signal reaching the two ends of the delay line anode is measured to be 5.1 ps for large signals, with an asymptotic limit falling below 2 ps as noise-over-signal approaches zero.
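
    The differential timing mentioned above determines where along the delay-line anode a hit occurred. A minimal sketch of that reconstruction, using hypothetical strip length and propagation speed (not LAPPD-specific values):

    ```python
    # Illustrative hit-position reconstruction from the differential arrival
    # time at the two ends of a delay-line anode. Length and propagation
    # speed are hypothetical example values, not LAPPD specifications.

    def hit_position(t_left_ns, t_right_ns, length_cm=20.0, v_cm_per_ns=10.0):
        """Return the hit position in cm from the left end of the strip.

        A pulse created at position x reaches the left end after x / v and
        the right end after (length - x) / v, so the time difference
        dt = t_left - t_right = (2x - length) / v, giving
        x = (length + v * dt) / 2.
        """
        dt = t_left_ns - t_right_ns
        return (length_cm + v_cm_per_ns * dt) / 2.0

    # A hit at the exact center arrives at both ends simultaneously:
    print(hit_position(1.0, 1.0))  # 10.0
    ```

    Since the position depends on the time *difference*, its uncertainty is set by the differential time resolution quoted in the abstract rather than by the absolute single-ended resolution.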

  14. Large volume axionic Swiss cheese inflation

    NASA Astrophysics Data System (ADS)

    Misra, Aalok; Shukla, Pramod

    2008-09-01

    Continuing with the ideas of (Section 4 of) [A. Misra, P. Shukla, Moduli stabilization, large-volume dS minimum without anti-D3-branes, (non-)supersymmetric black hole attractors and two-parameter Swiss cheese Calabi-Yau's, arXiv: 0707.0105 [hep-th], Nucl. Phys. B, in press], after inclusion of perturbative and non-perturbative α′ corrections to the Kähler potential and (D1- and D3-) instanton generated superpotential, we show the possibility of slow roll axionic inflation in the large volume limit of Swiss cheese Calabi-Yau orientifold compactifications of type IIB string theory. We also include one- and two-loop corrections to the Kähler potential but find the same to be subdominant to the (perturbative and non-perturbative) α′ corrections. The NS-NS axions provide a flat direction for slow roll inflation to proceed from a saddle point to the nearest dS minimum.

  15. Large area position sensitive β-detector

    NASA Astrophysics Data System (ADS)

    Vaintraub, S.; Hass, M.; Edri, H.; Morali, N.; Segal, T.

    2015-03-01

    A new conceptual design of a large area electron detector, which is position and energy sensitive, was developed. This detector is designed for beta decay energies up to 4 MeV, but in principle can be re-designed for higher energies. The detector incorporates one large plastic scintillator and, in general, a limited number of photomultipliers (7 presently). The current setup was designed and constructed after an extensive Geant4 simulation study. By comparing the light distribution of a single hit among the various photomultipliers to an accurate pre-measured position-response map, the anticipated position resolution is around 5 mm. The first benchmark experiments have been conducted in order to calibrate and confirm the position resolution of the detector. The new method, results of the first test experiments, and comparison to simulations are presented.

  16. The NASA Lewis large wind turbine program

    NASA Technical Reports Server (NTRS)

    Thomas, R. L.; Baldwin, D. H.

    1981-01-01

    The program is directed toward development of the technology for safe, reliable, environmentally acceptable large wind turbines that have the potential to generate a significant amount of electricity at costs competitive with conventional electric generation systems. In addition, these large wind turbines must be fully compatible with electric utility operations and interface requirements. Advances are made by gaining a better understanding of the system design drivers, improvements in the analytical design tools, verification of design methods with operating field data, and the incorporation of new technology and innovative designs. An overview of the program activities is presented and includes results from the first and second generation field machines (Mod-OA, -1, and -2), the design phase of the third generation wind turbine (Mod-5) and the advanced technology projects. Also included is the status of the Department of Interior WTS-4 machine.

  17. Materials response to large plastic deformation

    SciTech Connect

    Stout, M.G.; Hecker, S.S.

    1982-01-01

    Strain hardening at large plastic strains cannot be inferred from small-strain tensile tests. Most metals and alloys at room temperature do not reach steady state saturation at strain levels of 3 to 5. Typically, some disturbing influence offsets the balance between dislocation generation and annihilation. The most prominent of these appears to be texture formation. However, grain size, second-phase particles, and deformation on shear bands are also important. The effect on hardening of most of these features depends on geometry (or deformation mode) and, hence, no single intrinsic hardening curve can be expected at large strains. It should be noted that high material purity and a torsional deformation mode favor saturation. 42 references, 15 figures.

  18. Large igneous provinces linked to supercontinent assembly

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Santosh, M.; Luo, Zhaohua; Hao, Jinhua

    2015-04-01

    Models for the disruption of supercontinents have considered mantle plumes as potential triggers for continental extension and the formation of large igneous provinces (LIPs). An alternative hypothesis of top-down tectonics links large volcanic eruptions to lithospheric delamination. Here we argue that the formation of several LIPs in Tarim, Yangtze, Lhasa and other terranes on the Eurasian continent was coeval with the assembly of the Pangean supercontinent, in the absence of plumes rising up from the mantle transition zone or super-plumes from the core-mantle boundary. The formation of these LIPs was accompanied by subduction and convergence of continents and micro-continents, with no obvious relation to major continental rifting or mantle plume activity. Our model correlates LIPs with lithospheric extension caused by asthenospheric flow triggered by multiple convergent systems associated with supercontinent formation.

  19. A superconducting large-angle magnetic suspension

    NASA Technical Reports Server (NTRS)

    Downer, James; Goldie, James; Torti, Richard

    1991-01-01

    The component technologies required for an advanced control moment gyro (CMG) type of slewing actuator for large payloads were developed. The key component of the CMG is a large-angle magnetic suspension (LAMS). The LAMS combines the functions of the gimbal structure, torque motors, and rotor bearings of a CMG. The LAMS uses a single superconducting source coil and an array of cryoresistive control coils to produce a specific output torque more than an order of magnitude greater than conventional devices. The designed and tested LAMS system is based around an available superconducting solenoid, an array of twelve room-temperature normal control coils, and a multi-input, multi-output control system. The control laws were demonstrated for stabilizing and controlling the LAMS system.

  20. Estimation for large non-centrality parameters

    NASA Astrophysics Data System (ADS)

    Inácio, Sónia; Mexia, João; Fonseca, Miguel; Carvalho, Francisco

    2016-06-01

    We introduce the concept of estimability for models for which accurate estimators can be obtained for the respective parameters. The study was conducted for models with an almost scalar matrix, using the study of estimability after validation of these models. In the validation of these models we use F statistics with non-centrality parameter τ = ‖λ‖²/σ²; when this parameter is sufficiently large we obtain good estimators for λ and α, so there is estimability. Thus, we are interested in obtaining a lower bound for the non-centrality parameter. In this context we use, for the statistical inference, inducing pivot variables (see Ferreira et al. 2013) and asymptotic linearity (introduced by Mexia & Oliveira 2011) to derive confidence intervals for large non-centrality parameters (see Inácio et al. 2015). These results enable us to measure the relevance of effects and interactions in multifactor models when the values of the F test statistics are highly statistically significant.
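
    Written out in standard notation (assuming the usual definition of the non-centrality parameter for a model with effect vector λ and error variance σ²), the quantity driving the validation is

    ```latex
    \tau \;=\; \frac{\lVert \boldsymbol{\lambda} \rVert^{2}}{\sigma^{2}},
    \qquad
    F \;\sim\; F\!\left(\nu_{1}, \nu_{2};\, \tau\right),
    ```

    so that a highly significant F statistic corresponds to a large τ, which is what permits accurate estimation of λ.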

  1. Pivoting micromirror designs for large orientation angles

    NASA Astrophysics Data System (ADS)

    Garcia, Ernest J.

    2000-08-01

    This paper describes mechanical design concepts for a class of pivoting micromirrors that permit relatively large angles of orientation to be obtained when configured in large arrays. Micromirror arrays can be utilized in a variety of applications ranging from optical switching to beam-front correction in a variety of technologies. This particular work is concerned with silicon surface micromachining. The multi-layer polysilicon surface micromachined process developed at Sandia National Laboratories is used to fabricate micromirror arrays that consist of capacitive electrode pairs, which are used to electrostatically actuate mirrors to their desired positions, and suitable elastic suspensions, which support the 2 micrometer thick mirror structures. The designs described have been fabricated and successfully operated.

  2. Vibration suppression in a large space structure

    NASA Technical Reports Server (NTRS)

    Narendra, Kumpati S.

    1988-01-01

    The Yale University Center for Systems Science and the NASA Johnson Space Center collaborated in a study of vibration suppression in a large space structure during the period January 1985 to August 1987. The research proposal submitted by the Center to NASA concerned disturbance isolation in flexible space structures. The general objective of the proposal was to create within the Center a critical mass of expertise on problems related to the dynamics and control of large flexible space structures. A specific objective was to formulate both passive and active control strategies for the disturbance isolation problem. Both objectives were achieved during the period of the contract. While an extensive literature exists on the control of flexible space structures, it is generally acknowledged that many important questions remain open at even a fundamental level. Hence, instead of studying grossly simplified models of complex structural systems, it was decided as a first step to confine attention to detailed and thorough analyses of simple structures.

  3. GLAST Large Area Telescope Multiwavelength Planning

    SciTech Connect

    Reimer, O.; Michelson, P.F.; Cameron, R.A.; Digel, S.W.; Thompson, D.J.; Wood, K.S.

    2007-01-03

    Gamma-ray astrophysics depends in many ways on multiwavelength studies. The Gamma-ray Large Area Space Telescope (GLAST) Large Area Telescope (LAT) Collaboration has started multiwavelength planning well before the scheduled 2007 launch of the observatory. Some of the high-priority multiwavelength needs include: (1) availability of contemporaneous radio and X-ray timing of pulsars; (2) expansion of blazar catalogs, including redshift measurements; (3) improved observations of molecular clouds, especially at high galactic latitudes; (4) simultaneous broad-band blazar monitoring; (5) characterization of gamma-ray transients, including gamma ray bursts; (6) radio, optical, X-ray and TeV counterpart searches for reliable and effective source identification and characterization. Several of these activities need to be in place before launch.

  4. Future large broadband switched satellite communications networks

    NASA Technical Reports Server (NTRS)

    Staelin, D. H.; Harvey, R. R.

    1979-01-01

    Critical technical, market, and policy issues relevant to future large broadband switched satellite networks are summarized. Our market projections for the period 1980 to 2000 are compared. Clusters of switched satellites, in lieu of large platforms, etc., are shown to have significant advantages. Analysis of an optimum terrestrial network architecture suggests the proper densities of ground stations, and that link reliabilities of 99.99% may entail less than a 10% cost premium for diversity protection at 20/30 GHz. These analyses suggest that system costs increase as the 0.6 power of traffic. Cost estimates for nominal 20/30 GHz satellite and ground facilities suggest optimum system configurations might employ satellites with 285 beams, multiple TDMA bands each carrying 256 Mbps, and 16 ft ground station antennas. A nominal development program is outlined.

  5. Actinide Recovery Method for Large Soil Samples

    SciTech Connect

    Maxwell, S.L. III; Nichols, S.

    1998-11-01

    A new Actinide Recovery Method has been developed by the Savannah River Site Central Laboratory to preconcentrate actinides in very large soil samples. Diphonix Resin(r) is used to eliminate soil matrix interferences and preconcentrate actinides after soil leaching or soil fusion. A rapid microwave digestion technique is used to remove the actinides from the Diphonix Resin(r). After the resin digestion, the actinides are recovered in a small volume of nitric acid which can be easily loaded onto small extraction-chromatography columns, such as TEVA Resin(r), U-TEVA Resin(r) or TRU Resin(r) (Eichrom Industries). This method enables the application of small, selective extraction columns to recover actinides from very large soil samples with high selectivity, consistent tracer recoveries, and minimal liquid waste.

  6. Advances in Structures for Large Space Systems

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith

    2004-01-01

    The development of structural systems for scientific remote sensing and space exploration has been underway for four decades. The seminal work from 1960 to 1980 provided the basis for many of the design principles of modern space systems. From 1980 to 2000, advances in active materials and structures and the maturing of composites technology led to high precision active systems such as those used in the Space Interferometry Mission. Recently, thin-film membrane or gossamer structures are being investigated for use in large area space systems because of their low mass and high packaging efficiency. Various classes of Large Space Systems (LSS) are defined in order to describe the goals and system challenges in structures and materials technologies. With an appreciation of both past and current technology developments, future technology challenges are used to develop a list of technology investments that can have significant impacts on LSS development.

  7. Very Large Aperture Diffractive Space Telescope

    SciTech Connect

    Hyde, Roderick Allen

    1998-04-20

    A very large (tens of meters) aperture space telescope including two separate spacecraft--an optical primary functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart with the eyepiece directly behind the magnifying glass ''aiming'' at an intended target, with their relative orientation determining the optical axis of the telescope and hence the targets being observed. The magnifying glass includes a very large-aperture, very-thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the magnifying glass, gathering up the incoming light and converting it to high quality images. The positions of the two spacecraft are controlled both to maintain a good optical focus and to point at desired targets.

  8. Quasisymmetric toroidal plasmas with large mean flows

    SciTech Connect

    Sugama, H.; Watanabe, T.-H.; Nunami, M.; Nishimura, S.

    2011-08-15

    Geometric conditions for quasisymmetric toroidal plasmas with large mean flows on the order of the ion thermal speed are investigated. Equilibrium momentum balance equations including the inertia term due to the large flow velocity are used to show that, for rotating quasisymmetric plasmas with no local currents crossing flux surfaces, all components of the metric tensor should be independent of the toroidal angle in the Boozer coordinates, and consequently these systems need to be rigorously axisymmetric. Unless the local radial currents vanish, the Boozer coordinates do not exist and the toroidal flow velocity cannot take any value other than a very limited class of eigenvalues corresponding to very rapid rotation especially for low beta plasmas.

  9. Optical encryption for large-sized images

    NASA Astrophysics Data System (ADS)

    Sanpei, Takuho; Shimobaba, Tomoyoshi; Kakue, Takashi; Endo, Yutaka; Hirayama, Ryuji; Hiyama, Daisuke; Hasegawa, Satoki; Nagahama, Yuki; Sano, Marie; Oikawa, Minoru; Sugie, Takashige; Ito, Tomoyoshi

    2016-02-01

    We propose an optical encryption framework that can encrypt and decrypt large-sized images beyond the size of the encrypted image using our two methods: random phase-free method and scaled diffraction. In order to record the entire image information on the encrypted image, the large-sized images require the random phase to widely diffuse the object light over the encrypted image; however, the random phase gives rise to the speckle noise on the decrypted images, and it may be difficult to recognize the decrypted images. In order to reduce the speckle noise, we apply our random phase-free method to the framework. In addition, we employ scaled diffraction that calculates light propagation between planes with different sizes by changing the sampling rates.

  10. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range--from galaxies to rare bright quasars--we need to be able to cover a significant volume of the universe in our simulation without losing the important small scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280h{sup -1} Mpc with 10h{sup -1} Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-{alpha} forest.

  11. Operations analysis for a large lunar telescope

    NASA Technical Reports Server (NTRS)

    Thyen, Christopher

    1992-01-01

    Consideration is given to a study of the operations and assembly of a 16-m large lunar telescope (LLT), covering operations from LEO through transfer to the lunar surface for assembly. The study of LLT operations and assembly is broken down into three divisions to allow easier operations analysis: earth to orbit operations, LEO operations (transfer to lunar surface operations), and lunar surface operations. The following guidelines were set down to ensure a reasonable starting point for a large, lunar, untended installation: the existence of a lunar base, a space transportation system from LEO to the lunar surface, continuous manning of the lunar base during the assembly period, and availability/capability to perform lunar assembly with the lunar base crew. The launch/vehicle packaging options, lunar site selection and assembly options, and assembly crew assumptions are discussed.

  12. Large charged drop levitation against gravity

    NASA Technical Reports Server (NTRS)

    Rhim, Won-Kyu; Chung, Sang Kun; Hyson, Michael T.; Trinh, Eugene H.; Elleman, Daniel D.

    1987-01-01

    A hybrid electrostatic-acoustic levitator that can levitate and manipulate a large liquid drop in one gravity is presented. To the authors' knowledge, this is the first time such large drops (up to 4 mm in diameter in the case of water) have been levitated against 1-gravity. This makes possible, for the first time, many new experiments both in space and in ground-based laboratories, such as 1) supercooling and superheating, 2) containerless crystal growth from various salt solutions or melts, 3) drop dynamics of oscillating or rotating liquid drops, 4) drop evaporation and Rayleigh bursting, and 5) containerless material processing in space. The digital control system, liquid drop launch process, principles of electrode design, and design of a multipurpose room temperature levitation chamber are described. Preliminary results that demonstrate drop oscillation and rotation, and crystal growth from supersaturated salt solutions, are presented.

  13. A charged membrane paradigm at large D

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Sayantani; Mandlik, Mangesh; Minwalla, Shiraz; Thakur, Somyadip

    2016-04-01

    We study the effective dynamics of black hole horizons in Einstein-Maxwell theory in a large number of spacetime dimensions D. We demonstrate that horizon dynamics may be recast as a well-posed initial value problem for the motion of a codimension-one non-gravitational membrane moving in flat space. The dynamical degrees of freedom of this membrane are its shape, charge density and a divergence-free velocity field. We determine the equations that govern membrane dynamics at leading order in the large D expansion. Our derivation of the membrane equations assumes that the solution preserves an SO( D - p - 2) isometry with p held fixed as D is taken to infinity. However we are able to cast our final membrane equations into a completely geometric form that makes no reference to this symmetry algebra.

  14. The missing large impact craters on Ceres

    NASA Astrophysics Data System (ADS)

    Marchi, S.; Ermakov, A. I.; Raymond, C. A.; Fu, R. R.; O'Brien, D. P.; Bland, M. T.; Ammannito, E.; de Sanctis, M. C.; Bowling, T.; Schenk, P.; Scully, J. E. C.; Buczkowski, D. L.; Williams, D. A.; Hiesinger, H.; Russell, C. T.

    2016-07-01

    Asteroids provide fundamental clues to the formation and evolution of planetesimals. Collisional models based on the depletion of the primordial main belt of asteroids predict 10-15 craters >400 km should have formed on Ceres, the largest object between Mars and Jupiter, over the last 4.55 Gyr. Likewise, an extrapolation from the asteroid Vesta would require at least 6-7 such basins. However, Ceres' surface appears devoid of impact craters >~280 km. Here, we show a significant depletion of cerean craters down to 100-150 km in diameter. The overall scarcity of recognizable large craters is incompatible with collisional models, even in the case of a late implantation of Ceres in the main belt, a possibility raised by the presence of ammoniated phyllosilicates. Our results indicate that a significant population of large craters has been obliterated, implying that long-wavelength topography viscously relaxed or that Ceres experienced protracted widespread resurfacing.

  15. Large-scale Advanced Propfan (LAP) program

    NASA Technical Reports Server (NTRS)

    Sagerser, D. A.; Ludemann, S. G.

    1985-01-01

    The propfan is an advanced propeller concept which maintains the high efficiencies traditionally associated with conventional propellers at the higher aircraft cruise speeds associated with jet transports. The large-scale advanced propfan (LAP) program extends the research done on 2 ft diameter propfan models to a 9 ft diameter article. The program includes design, fabrication, and testing of both an eight bladed, 9 ft diameter propfan, designated SR-7L, and a 2 ft diameter aeroelastically scaled model, SR-7A. The LAP program is complemented by the propfan test assessment (PTA) program, which takes the large-scale propfan and mates it with a gas generator and gearbox to form a propfan propulsion system and then flight tests this system on the wing of a Gulfstream 2 testbed aircraft.

  16. GLAST Large Area Telescope Multiwavelength Planning

    NASA Technical Reports Server (NTRS)

    Reimer, O.; Michelson, P. F.; Cameron, R. A.; Digel, S. W.; Thompson, D. J.; Wood, K. S.

    2007-01-01

Gamma-ray astrophysics depends in many ways on multiwavelength studies. The Gamma-ray Large Area Space Telescope (GLAST) Large Area Telescope (LAT) Collaboration has started multiwavelength planning well before the scheduled 2007 launch of the observatory. Some of the high-priority multiwavelength needs include: (1) availability of contemporaneous radio and X-ray timing of pulsars; (2) expansion of blazar catalogs, including redshift measurements; (3) improved observations of molecular clouds, especially at high galactic latitudes; (4) simultaneous broad-spectrum blazar monitoring; (5) characterization of gamma-ray transients, including gamma-ray bursts; (6) radio, optical, X-ray and TeV counterpart searches for reliable and effective source identification and characterization. Several of these activities need to be in place before launch.

  17. Metrication study for large space telescope

    NASA Technical Reports Server (NTRS)

    Creswick, F. A.; Weller, A. E.

    1973-01-01

Various approaches which could be taken in developing a metric-system design for the Large Space Telescope, considering potential penalties on development cost and time, commonality with other satellite programs, and contribution to national goals for conversion to the metric system of units were investigated. Information on the problems, potential approaches, and impacts of metrication was collected from published reports on previous aerospace-industry metrication-impact studies and through numerous telephone interviews. The recommended approach to LST metrication formulated in this study calls for new components and subsystems to be designed in metric-module dimensions, but U.S. customary practice is allowed where U.S. metric standards and metric components are not available or would be unsuitable. Electrical/electronic-system design, which is presently largely metric, is considered exempt from further metrication. An important guideline is that metric design and fabrication should in no way compromise the effectiveness of the LST equipment.

  18. Numerical solution of large Lyapunov equations

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

A few methods are proposed for solving large Lyapunov equations that arise in control problems. The common case where the right-hand side is a low-rank matrix is considered. For the single-input case, i.e., when the equation considered is of the form AX + XA^T + bb^T = 0, where b is a column vector, the existence of approximate solutions of the form X = VGV^T, where V is N x m and G is m x m with m small, is established. The first class of methods proposed is based on the use of numerical quadrature formulas, such as Gauss-Laguerre formulas, applied to the controllability Grammian. The second is based on a projection process of Galerkin type. Numerical experiments are presented to test the effectiveness of these methods for large problems.
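The low-rank structure established in the abstract can be checked numerically on a small instance with a dense solver (a sketch using SciPy's general-purpose Lyapunov solver rather than the quadrature or Galerkin methods the paper proposes; the matrix size and stabilizing shift are illustrative):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n = 50
# Shift a random matrix so that A is stable (all eigenvalues in the left half-plane)
A = rng.standard_normal((n, n)) - 10.0 * np.eye(n)
b = rng.standard_normal((n, 1))

# Solve A X + X A^T + b b^T = 0, i.e. A X + X A^T = -b b^T
X = solve_continuous_lyapunov(A, -b @ b.T)

# Verify the residual and inspect the singular value decay of X;
# rapid decay is what makes a low-rank X = V G V^T approximation possible
res = np.linalg.norm(A @ X + X @ A.T + b @ b.T)
s = np.linalg.svd(X, compute_uv=False)
print(res, s[:12] / s[0])
```

With a rank-one right-hand side and a stable A, the singular values of X typically fall off by many orders of magnitude within a few indices, which is exactly the behavior that justifies the N x m factor V with m small.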

  19. Timing characteristics of Large Area Picosecond Photodetectors

    NASA Astrophysics Data System (ADS)

    Adams, B. W.; Elagin, A.; Frisch, H. J.; Obaid, R.; Oberla, E.; Vostrikov, A.; Wagner, R. G.; Wang, J.; Wetstein, M.

    2015-09-01

The LAPPD Collaboration was formed to develop ultrafast large-area imaging photodetectors based on new methods for fabricating microchannel plates (MCPs). In this paper we characterize the time response using a pulsed, sub-picosecond laser. We observe single-photoelectron time resolutions of a 20 cm × 20 cm MCP consistently below 70 ps, spatial resolutions of roughly 500 μm, and median gains higher than 10⁷. The RMS measured at one particular point on an LAPPD detector is 58 ps, with a ±1σ of 47 ps. The differential time resolution between the signal reaching the two ends of the delay line anode is measured to be 5.1 ps for large signals, with an asymptotic limit falling below 2 ps as noise-over-signal approaches zero.

  20. Genetics of hereditary large vessel diseases.

    PubMed

    Morisaki, Takayuki; Morisaki, Hiroko

    2016-01-01

    Recent progress in the study of hereditary large vessel diseases such as Marfan syndrome (MFS) have not only identified responsible genes but also provided better understanding of the pathophysiology and revealed possible new therapeutic targets. Genes identified for these diseases include FBN1, TGFBR1, TGFBR2, SMAD3, TGFB2, TGFB3, SKI, EFEMP2, COL3A1, FLNA, ACTA2, MYH11, MYLK and SLC2A10, as well as others. Their dysfunction disrupts the function of transforming growth factor-β (TGF-β) signaling pathways, as well as that of the extracellular matrix and smooth muscle contractile apparatus, resulting in progression of structural damage to large vessels, including aortic aneurysms and dissections. Notably, it has been shown that the TGF-β signaling pathway has a key role in the pathogenesis of MFS and related disorders, which may be important for development of strategies for medical and surgical treatment of thoracic aortic aneurysms and dissections. PMID:26446364

  1. Airway obstruction secondary to large thyroid adenolipoma

    PubMed Central

    Fitzpatrick, Nicholas; Malik, Paras; Hinton-Bayre, Anton; Lewis, Richard

    2014-01-01

    Adenolipoma of the thyroid gland is a rare benign neoplasm composed of normal thyroid and mature adipose tissue. Ordinarily, only a small amount of fat exists in a normal thyroid gland. CT and MRI may differentiate between benign and malignant lesions, and fine-needle aspirate often assists diagnosis. Surgical excision for adenolipoma is considered curative. We report the case of a 67-year-old man presenting with a large neck lump and evidence of airway obstruction. Imaging revealed a 97×70 mm left thyroid mass with retropharyngeal extension and laryngotracheal compression. Hemithyroidectomy was performed with subsequent histology confirming a large thyroid adenolipoma. The patient's symptoms resolved and he remains asymptomatic with no sign of recurrence 2 years postsurgery. PMID:25199190

  2. Examination of large intestine resection specimens

    PubMed Central

    Burroughs, S; Williams, G

    2000-01-01

Macroscopic examination of large intestinal resection specimens by the surgical pathologist provides important diagnostic and prognostic information. This review summarises current recommended protocols and evidence based guidelines for gross description, dissection, and histological block selection in both neoplastic and non-neoplastic colorectal disease. Specific lesions discussed include colorectal cancer, polypectomies and polyposis syndromes, and inflammatory bowel disease. Microscopic examination is briefly described, with emphasis on certain pitfalls that might be encountered in routine practice. A section covering special techniques for the investigation of occult bleeding is included. (J Clin Pathol 2000;53:344–349) Key Words: large intestine • colorectal cancer • inflammatory bowel disease PMID:10889815

  3. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.

  4. The Large Hadron Collider: Redefining High Energy

    SciTech Connect

    Demers, Sarah

    2007-06-19

    Particle physicists have a description of the forces of nature known as the Standard Model that has successfully withstood decades of testing at laboratories around the world. Though the Standard Model is powerful, it is not complete. Important details like the masses of particles are not explained well, and realities as fundamental as gravity, dark matter, and dark energy are left out altogether. I will discuss gaps in the model and why there is hope that some puzzles will be solved by probing high energies with the Large Hadron Collider. Beginning next year, this machine will accelerate protons to record energies, hurling them around a 27 kilometer ring before colliding them 40 million times per second. Detectors the size of five-story buildings will record the debris of these collisions. The new energy frontier made accessible by the Large Hadron Collider will allow thousands of physicists to explore nature's fundamental forces and particles from a fantastic vantage point.

  5. Big Science and the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Giudice, Gian Francesco

    2012-03-01

    The Large Hadron Collider (LHC), the particle accelerator operating at CERN, is probably the most complex and ambitious scientific project ever accomplished by humanity. The sheer size of the enterprise, in terms of financial and human resources, naturally raises the question whether society should support such costly basic-research programs. I address this question by first reviewing the process that led to the emergence of Big Science and the role of large projects in the development of science and technology. I then compare the methodologies of Small and Big Science, emphasizing their mutual linkage. Finally, after examining the cost of Big Science projects, I highlight several general aspects of their beneficial implications for society.

  6. An economic model of large Medicaid practices.

    PubMed

    Cromwell, J; Mitchell, J B

    1984-06-01

    Public attention given to Medicaid "mills" prompted this more general investigation of the origins of large Medicaid practices. A dual market demand model is proposed showing how Medicaid competes with private insurers for scarce physician time. Various program parameters--fee schedules, coverage, collection costs--are analyzed along with physician preferences, specialties, and other supply-side characteristics. Maximum likelihood techniques are used to test the model. The principal finding is that in raising Medicaid fees, as many physicians opt into the program as expand their Medicaid caseloads to exceptional levels, leaving the maldistribution of patients unaffected while notably improving access. Still, the fact that Medicaid fees are lower than those of private insurers does lead to reduced access to more qualified practitioners. Where anti-Medicaid sentiment is stronger, access is also reduced and large Medicaid practices more likely to flourish.

  7. Genesis of the Large Hadron Collider.

    PubMed

    Smith, Chris Llewellyn

    2015-01-13

    This paper describes the scientific, technical and political genesis of the Large Hadron Collider (LHC). It begins with an outline of the early history of the LHC, from first thoughts and accelerator and detector developments that underwrote the project, through the first studies of the LHC and its scientific potential and the genesis of the experimental programme, to the presentation of the proposal to build the LHC to the CERN Council in December 1993. The events that led to the proposal to build the LHC in two stages, which was approved in December 1994, are then described. Next, the role of non-Member State contributions and of the agreement that CERN could take loans, which allowed single stage construction to be approved in December 1996, despite a cut in the Members' contributions, are explained. The paper concludes by identifying points of potential relevance for the approval of possible future large particle physics projects.

  8. Development of large-area glass GEM

    NASA Astrophysics Data System (ADS)

    Mitsuya, Yuki; Fujiwara, Takeshi; Fushie, Takashi; Maekawa, Tatsuyuki; Takahashi, Hiroyuki

    2015-09-01

We have developed a new gaseous radiation detector, referred to as the Glass GEM (G-GEM). The G-GEM is composed of a photosensitive etching glass (PEG3) substrate from HOYA Corporation, Japan. Since a large-area detector is required for imaging device applications, we newly developed a large-area G-GEM prototype with a sensitive area of 280×280 mm². In this study, we investigated its basic characteristics and confirmed that it worked properly and had sufficient uniformity across the entire sensitive area. It had high gas gain of up to approximately 7700, along with good energy resolution of 26.2% (FWHM) for a 5.9-keV X-ray with a gas mixture of Ar (90%) and CH4 (10%). The gain variation across the sensitive area was almost within the range of ±10%.

  9. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  10. Exploring Human Cognition Using Large Image Databases.

    PubMed

    Griffiths, Thomas L; Abbott, Joshua T; Hsu, Anne S

    2016-07-01

    Most cognitive psychology experiments evaluate models of human cognition using a relatively small, well-controlled set of stimuli. This approach stands in contrast to current work in neuroscience, perception, and computer vision, which have begun to focus on using large databases of natural images. We argue that natural images provide a powerful tool for characterizing the statistical environment in which people operate, for better evaluating psychological theories, and for bringing the insights of cognitive science closer to real applications. We discuss how some of the challenges of using natural images as stimuli in experiments can be addressed through increased sample sizes, using representations from computer vision, and developing new experimental methods. Finally, we illustrate these points by summarizing recent work using large image databases to explore questions about human cognition in four different domains: modeling subjective randomness, defining a quantitative measure of representativeness, identifying prior knowledge used in word learning, and determining the structure of natural categories. PMID:27489200

  11. Large orb-webs adapted to maximise total biomass not rare, large prey

    PubMed Central

    Harmer, Aaron M. T.; Clausen, Philip D.; Wroe, Stephen; Madin, Joshua S.

    2015-01-01

Spider orb-webs are the ultimate anti-ballistic devices, capable of dissipating the relatively massive kinetic energy of flying prey. Increased web size and prey stopping capacity have co-evolved in a number of orb-web taxa, but the selective forces driving web size and performance increases are under debate. The rare, large prey hypothesis maintains that the energetic benefits of rare, very large prey are so much greater than the gains from smaller, more common prey that smaller prey are irrelevant for reproduction. Here, we integrate biophysical and ecological data and models to test a major prediction of the rare, large prey hypothesis, that selection should favour webs with increased stopping capacity and that large prey should comprise a significant proportion of prey stopped by a web. We find that larger webs indeed have a greater capacity to stop large prey. However, based on prey ecology, we also find that these large prey make up a tiny fraction of the total biomass (=energy) potentially captured. We conclude that large webs are adapted to stop more total biomass, and that the capacity to stop rare, but very large, prey is an incidental consequence of the longer radial silks that scale with web size. PMID:26374379

  12. LEAP - A LargE Area Burst Polarimeter for the ISS

    NASA Astrophysics Data System (ADS)

    McConnell, M. L.; LEAP Collaboration

    2016-10-01

    The LargE Area burst Polarimeter (LEAP) is a mission concept for a 50-500 keV Compton scatter polarimeter instrument that would be deployed on the ISS. It will be proposed as an astrophysics Mission of Opportunity (MoO) in late 2016.

  13. Large-area lanthanum hexaboride electron emitter

    NASA Astrophysics Data System (ADS)

    Goebel, D. M.; Hirooka, Y.; Sketchley, T. A.

    1985-09-01

    The characteristics of lanthanum-boron thermionic electron emitters are discussed, and a large-area, continuously operating cathode assembly and heater are described. Impurity production and structural problems involving the support of the LaB6 have been eliminated in the presented configuration. The performance of the cathode in a plasma discharge, where surface modification occurs by ion sputtering, is presented. Problem areas which affect lifetime and emission current capability are discussed.

  14. Large capacity cryopropellant orbital storage facility

    NASA Technical Reports Server (NTRS)

    Schuster, J. R.

    1987-01-01

    A comprehensive study was performed to develop the major features of a large capacity orbital propellant storage facility for the space-based cryogenic orbital transfer vehicle. Projected propellant usage and delivery schedules can be accommodated by two orbital tank sets of 100,000 lb storage capacity, with advanced missions expected to require increased capacity. Information is given on tank pressurization schemes, propellant transfer configurations, pump specifications, the refrigeration system, and flight tests.

  15. Accuracy potentials for large space antenna structures

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1980-01-01

    The relationships among materials selection, truss design, and manufacturing techniques in the interest of surface accuracies for large space antennas are discussed. Among the antenna configurations considered are: tetrahedral truss, pretensioned truss, and geodesic dome and radial rib structures. Comparisons are made of the accuracy achievable by truss and dome structure types for a wide variety of diameters, focal lengths, and wavelength of radiated signal, taking into account such deforming influences as solar heating-caused thermal transients and thermal gradients.

  16. Large electron screening effect in different environments

    SciTech Connect

    Cvetinović, Aleksandra Lipoglavšek, Matej; Markelj, Sabina; Vesić, Jelena

    2015-10-15

Electron screening effect was studied in the ¹H(⁷Li,α)⁴He, ¹H(¹¹B,α)⁴He and ¹H(¹⁹F,αγ)¹⁶O reactions in inverse kinematics on different hydrogen-implanted targets. Results show large electron screening potentials strongly dependent on the proton number Z of the projectile.

  17. Large-Angle Anomalies in the CMB

    DOE PAGES

    Copi, Craig J.; Huterer, Dragan; Schwarz, Dominik J.; Starkman, Glenn D.

    2010-01-01

We review the recently found large-scale anomalies in the maps of temperature anisotropies in the cosmic microwave background. These include alignments of the largest modes of CMB anisotropy with each other and with the geometry and direction of motion of the solar system, and the unusually low power at these largest scales. We discuss these findings in relation to expectations from standard inflationary cosmology, their statistical significance, the tools to study them, and the various attempts to explain them.

  18. Large icebergs characteristics from altimeter waveforms analysis

    NASA Astrophysics Data System (ADS)

    Tournadre, J.; Bouhier, N.; Girard-Ardhuin, F.; Rémy, F.

    2015-03-01

Large uncertainties exist on the volume of ice transported by the Southern Ocean large icebergs, a key parameter for climate studies, because of the paucity of information, especially on iceberg thickness. Using iceberg tracks from the National Ice Center (NIC) and Brigham Young University (BYU) databases to select altimeter data over icebergs, together with a method of analysis of altimeter waveforms, a database of 5366 iceberg freeboard elevations, lengths, and backscatter values covering the 2002-2012 period has been created. The database is analyzed in terms of distributions of freeboard, length, and backscatter, showing differences as a function of the iceberg's quadrant of origin. The database also makes it possible to analyze the temporal evolution of icebergs and to estimate a melt rate of 35-39 m·yr⁻¹ (neglecting the firn compaction). The total daily volume of ice, estimated by combining the NIC and altimeter sizes and the altimeter freeboards, decreases regularly from 2.2 × 10⁴ km³ in 2002 to 0.9 × 10⁴ km³ in 2012. During this decade, the total loss of ice (~1800 km³·yr⁻¹) is twice as large as the input (~960 km³·yr⁻¹), showing that the system is out of equilibrium after a very large input of ice between 1997 and 2002. Breaking into small icebergs represents 80% (~1500 km³·yr⁻¹) of the total ice loss, while basal melting is only 18% (~320 km³·yr⁻¹). Small icebergs are thus the major vector of freshwater input in the Southern Ocean.

  19. Big Data Challenges for Large Radio Arrays

    NASA Technical Reports Server (NTRS)

    Jones, Dayton L.; Wagstaff, Kiri; Thompson, David; D'Addario, Larry; Navarro, Robert; Mattmann, Chris; Majid, Walid; Lazio, Joseph; Preston, Robert; Rebbapragada, Umaa

    2012-01-01

    Future large radio astronomy arrays, particularly the Square Kilometre Array (SKA), will be able to generate data at rates far higher than can be analyzed or stored affordably with current practices. This is, by definition, a "big data" problem, and requires an end-to-end solution if future radio arrays are to reach their full scientific potential. Similar data processing, transport, storage, and management challenges face next-generation facilities in many other fields.

  20. Pioneer Venus large probe neutral mass spectrometer

    NASA Technical Reports Server (NTRS)

    Hoffman, J.

    1982-01-01

The deuterium/hydrogen abundance ratio in the Venus atmosphere was measured while the inlets to the Pioneer Venus large probe mass spectrometer were coated with sulfuric acid from Venus' clouds. The ratio is (1.6 ± 0.2) × 10⁻². It was found that the 100-fold enrichment of deuterium means that Venus outgassed at least 0.3% of a terrestrial ocean and possibly more.

  1. Large Penile Mass With Unusual Benign Histopathology.

    PubMed

    Johnson, Nate; Voznesensky, Maria; VerLee, Graham

    2015-09-01

    Pseudoepitheliomatous hyperplasia is an extremely rare condition presenting as a lesion on the glans penis in older men. Physical exam without biopsy cannot differentiate malignant from nonmalignant growth. We report a case of large penile mass in an elderly male with a history of lichen sclerosis, highly suspicious for malignancy. Subsequent surgical removal and biopsy demonstrated pseudoepitheliomatous hyperplasia, an unusual benign histopathologic diagnosis with unclear prognosis. We review the literature and discuss options for treatment and surveillance. PMID:26793536

  2. Large lighter-than-air vehicles

    NASA Technical Reports Server (NTRS)

    Mayer, N. J.

    1979-01-01

    The background of experience and the results achieved in building large airships are discussed. Two current applications are identified. These are in heavy vertical lift and in long endurance patrol. The most promising concepts for these missions include hybrid combinations of helicopters and aerostats and more conventional rigid types. These new approaches will require some technology development in aerodynamics and structures, but all vehicles will benefit from application of modern methods and materials.

  3. Large scale properties of the Webgraph

    NASA Astrophysics Data System (ADS)

    Donato, D.; Laura, L.; Leonardi, S.; Millozzi, S.

    2004-03-01

In this paper we present an experimental study of the properties of web graphs. We study a large crawl from 2001 of 200M pages and about 1.4 billion edges made available by the WebBase project at Stanford. We report our experimental findings on the topological properties of such graphs, such as the number of bipartite cores and the distributions of degree, PageRank values, and strongly connected components.

  4. Direct clipping of large basilar trunk aneurysm.

    PubMed

    Kimura, Toshikazu; Nakagawa, Daichi; Kawai, Kensuke

    2015-01-01

    A large basilar trunk aneurysm was incidentally found in a 77-year-old woman in examination for headache. Though it was asymptomatic, high signal intensity was noticed in the brainstem around the aneurysm on FLAIR image of MRI. As she was otherwise healthy, surgical clipping was performed through anterior temporal approach. The video can be found here: http://youtu.be/0soWM8meCW8 . PMID:25554839

  5. Managing large SNP datasets with SNPpy.

    PubMed

    Mitha, Faheem

    2013-01-01

    Using relational databases to manage SNP datasets is a very useful technique that has significant advantages over alternative methods, including the ability to leverage the power of relational databases to perform data validation, and the use of the powerful SQL query language to export data. SNPpy is a Python program which uses the PostgreSQL database and the SQLAlchemy Python library to automate SNP data management. This chapter shows how to use SNPpy to store and manage large datasets.
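SNPpy's actual schema and API are not reproduced here; the following is a minimal hypothetical sketch of the pattern the chapter describes — SQLAlchemy ORM models over a relational genotype table, with an in-memory SQLite database standing in for PostgreSQL (table, column, and sample names are all illustrative):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Genotype(Base):
    """Hypothetical genotype table; SNPpy's real schema differs."""
    __tablename__ = "genotype"
    id = Column(Integer, primary_key=True)
    sample_id = Column(String, nullable=False)  # NOT NULL constraints give basic validation
    snp_id = Column(String, nullable=False)
    call = Column(String(2), nullable=False)    # e.g. "AA", "AG"

engine = create_engine("sqlite://")  # in-memory stand-in for PostgreSQL
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        Genotype(sample_id="S1", snp_id="rs123", call="AA"),
        Genotype(sample_id="S1", snp_id="rs456", call="AG"),
        Genotype(sample_id="S2", snp_id="rs123", call="GG"),
    ])
    session.commit()
    # SQL-backed export: all calls observed for one SNP
    rows = session.query(Genotype).filter_by(snp_id="rs123").all()
    calls = sorted((r.sample_id, r.call) for r in rows)

print(calls)
```

The same query-and-export step could be written in raw SQL; the ORM layer is what lets column constraints do the data validation the abstract mentions.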

  6. (Workshop on nuclear physics with large arrays)

    SciTech Connect

    Beene, J.R.

    1989-11-17

The traveler attended the third and final part of the three-month-long Workshop on Nuclear Structure in the Era of New Spectroscopy, held from September through November at the Niels Bohr Institute in Copenhagen, Denmark. The third, or C, part of this ambitious series of workshops was titled "Nuclear Physics with Large Arrays." The author presented four talks over a two-week period, at the invitation of the organizers.

  7. Large Hadron Collider commissioning and first operation.

    PubMed

    Myers, S

    2012-02-28

    A history of the commissioning and the very successful early operation of the Large Hadron Collider (LHC) is described. The accident that interrupted the first commissioning, its repair and the enhanced protection system put in place are fully described. The LHC beam commissioning and operational performance are reviewed for the period from 2010 to mid-2011. Preliminary plans for operation and future upgrades for the LHC are given for the short and medium term.

  8. Large rivers of the United States

    USGS Publications Warehouse

    Iseri, Kathleen T.; Langbein, Walter Basil

    1974-01-01

    Information on the flow of the 28 largest rivers in the United States is presented for the base periods 1931-60 and 1941-70. Drainage area, stream length, source, and mouth are included. Table 1 shows the average discharge at downstream gaging stations. Table 2 lists large rivers in order of average discharge at the mouth, based on the period 1941-70.

  9. Large-area thin-film modules

    NASA Technical Reports Server (NTRS)

    Tyan, Y. S.; Perez-Albuerne, E. A.

    1985-01-01

    The low cost potential of thin film solar cells can only be fully realized if large area modules can be made economically with good production yields. This paper deals with two of the critical challenges. A scheme is presented which allows the simple, economical realization of the long recognized, preferred module structure of monolithic integration. Another scheme reduces the impact of shorting defects and, as a result, increases the production yields. Analytical results demonstrating the utilization and advantages of such schemes are discussed.

  10. Quality Function Deployment for Large Systems

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1992-01-01

    Quality Function Deployment (QFD) is typically applied to small subsystems. This paper describes efforts to extend QFD to large scale systems. It links QFD to the system engineering process, the concurrent engineering process, the robust design process, and the costing process. The effect is to generate a tightly linked project management process of high dimensionality which flushes out issues early to provide a high quality, low cost, and, hence, competitive product. A pre-QFD matrix linking customers to customer desires is described.

  11. Mechanical Properties Of Large Sodium Iodide Crystals

    NASA Technical Reports Server (NTRS)

    Lee, Henry M.

    1988-01-01

    Report presents data on mechanical properties of large crystals of thallium-doped sodium iodide. Five specimens in shape of circular flat plates subjected to mechanical tests. Presents test results for each specimen as plots of differential pressure versus center displacement and differential pressure versus stress at center. Also tabulates raw data. Test program also developed procedure for screening candidate crystals for gamma-ray sensor. Procedure eliminates potentially weak crystals before installed and ensures material yielding kept to minimum.

  12. NASA/MSFC Large Stretch Press Study

    NASA Technical Reports Server (NTRS)

    Choate, M. W.; Nealson, W. P.; Jay, G. C.; Buss, W. D.

    1985-01-01

    The purpose of this study was to: A. assess and document the advantages/disadvantages of a government agency investment in a large stretch form press on the order of 5000 tons capacity (per jaw); B. develop a procurement specification for the press; and C. provide trade study data that will permit an optimum site location. Tasks were separated into four major elements: cost study, user survey, site selection, and press design/procurement specification.

  13. Large-Area Vacuum Ultraviolet Sensors

    NASA Technical Reports Server (NTRS)

    Aslam, Shahid; Franz, David

    2012-01-01

    Pt/(n-doped GaN) Schottky-barrier diodes having active areas as large as 1 cm square have been designed and fabricated as prototypes of photodetectors for the vacuum ultraviolet portion (wavelengths approximately equal 200 nm) of the solar spectrum. In addition to having adequate sensitivity to photons in this wavelength range, these photodetectors are required to be insensitive to visible and infrared components of sunlight and to have relatively low levels of dark current.

  14. Ground state energy of large polaron systems

    SciTech Connect

    Benguria, Rafael D.; Frank, Rupert L.; Lieb, Elliott H.

    2015-02-15

The last unsolved problem about the many-polaron system, in the Pekar–Tomasevich approximation, is the case of bosons with the electron-electron Coulomb repulsion of strength exactly 1 (the "neutral case"). We prove that the ground state energy, for large N, goes exactly as −N^(7/5), and we give upper and lower bounds on the asymptotic coefficient that agree to within a factor of 2^(2/5).

  15. Large space structures control algorithm characterization

    NASA Technical Reports Server (NTRS)

    Fogel, E.

    1983-01-01

    Feedback control algorithms are developed for sensor/actuator pairs on large space systems. These algorithms have been sized in terms of (1) floating point operation (FLOP) demands; (2) storage for variables; and (3) input/output data flow. FLOP sizing (per control cycle) was done as a function of the number of control states and the number of sensor/actuator pairs. Storage for variables and I/O sizing was done for specific structure examples.

  16. Spatial occupancy models for large data sets

    USGS Publications Warehouse

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

    Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108 000 km²) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.
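The probit occupancy formulation described above can be illustrated with a minimal simulation; everything here except the site count (1080, from the case study) is hypothetical, and the simple Gaussian site effect `w` merely stands in for the paper's reduced-dimensional spatial process:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_sites, n_visits = 1080, 4            # survey units (from the case study) and repeat visits

# Probit occupancy: z_i ~ Bernoulli(Phi(beta0 + beta1 * x_i + w_i))
x = rng.normal(size=n_sites)            # hypothetical site-level covariate
w = 0.5 * rng.normal(size=n_sites)      # stand-in for a reduced-dimension spatial effect
psi = norm.cdf(-0.2 + 0.8 * x + w)      # occupancy probability
z = rng.binomial(1, psi)                # latent occupancy state (0/1)

# Detection is only possible at occupied sites
p_detect = 0.4
y = rng.binomial(n_visits, p_detect * z)  # detections per site over all visits

naive = (y > 0).mean()                  # naive occupancy ignores imperfect detection
print(naive, z.mean())
```

The naive estimate (fraction of sites with at least one detection) necessarily under-counts true occupancy, which is the basic reason occupancy models treat the latent state explicitly.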

  17. Range-balancing the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    Rakich, A.; Thompson, D.; Kuhn, O. P.

    2011-10-01

    The Large Binocular Telescope (LBT) consists of two 8.4 m telescopes mounted on a common alt-az gimbal. The telescope has various modes of operation, including prime-focus, bent- and direct-Gregorian modes. The telescopes can feed independent instruments, or their light can be combined in one of two interferometric instruments, giving an interferometric baseline of over 22 m. As with all large telescopes, the LBT requires collimation models, or modeled values for hexapod positions, to maintain reasonable optical alignment over the working range of temperatures and telescope elevations. Unlike other telescopes, the LBT has a highly asymmetric mechanical structure, and as a result the collimation models are required to do a lot more "work" than on an equivalent-aperture monocular telescope, which would usually be designed around a Serrurier truss arrangement. LBT has been phasing in science operations over the last 5 years, with first light on the prime-focus cameras in 2006 and first light in Gregorian mode in 2008. In this time the generation of collimation models for LBT has proven problematic, with large departures from a given model, and large changes in pointing, being the norm. A refined approach to generating collimation models, "range balancing", has greatly improved this situation. The range-balancing approach has delivered reliable collimation and pointing in both prime-focus and Gregorian modes, which has led to greatly increased operational efficiency. The details of the range-balancing approach, involving the removal of pointing "contamination" from collimation data, are given in this paper.

  18. Large space antenna concepts for ESGP

    NASA Technical Reports Server (NTRS)

    Love, Allan W.

    1989-01-01

    It is appropriate to note that 1988 marks the 100th anniversary of the birth of the reflector antenna. It was in 1888 that Heinrich Hertz constructed the first one, a parabolic cylinder made of sheet zinc bent to shape and supported by a wooden frame. Hertz demonstrated the existence of the electromagnetic waves that had been predicted theoretically by James Clerk Maxwell some 22 years earlier. In the 100 years since Hertz's pioneering work the field of electromagnetics has grown explosively: one of the technologies is that of remote sensing of planet Earth by means of electromagnetic waves, using both passive and active sensors located on an Earth Science Geostationary Platform (ESGP). For these purposes some exquisitely sensitive instruments were developed, capable of reaching to the fringes of the known universe, and relying on large reflector antennas to collect the minute signals and direct them to appropriate receiving devices. These antennas are electrically large, with diameters of 3000 to 10,000 wavelengths and with gains approaching 80 to 90 dB. Some of the reflector antennas proposed for ESGP are also electrically large. For example, at 220 GHz a 4-meter reflector is nearly 3000 wavelengths in diameter, and is electrically quite comparable with a number of the millimeter wave radiotelescopes that are being built around the world. Its surface must meet stringent requirements on rms smoothness, and ability to resist deformation. Here, however, the environmental forces at work are different. There are no varying forces due to wind and gravity, but inertial forces due to mechanical scanning must be reckoned with. With this form of beam scanning, minimizing momentum transfer to the space platform is a problem that demands an answer.
Finally, reflector surface distortion due to thermal gradients caused by the solar flux probably represents the most challenging problem to be solved if these Large Space Antennas are to achieve the gain and resolution required of
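The electrical-size figures quoted in this abstract can be checked in a few lines; the idealized directivity formula G = (πD/λ)², which ignores aperture efficiency, is the standard textbook estimate and is not taken from the record itself:

```python
import math

c = 2.998e8                 # speed of light, m/s
f = 220e9                   # frequency cited in the abstract, Hz
D = 4.0                     # reflector diameter cited in the abstract, m

lam = c / f                          # wavelength, ~1.36 mm
d_wavelengths = D / lam              # electrical diameter in wavelengths
gain_db = 10 * math.log10((math.pi * D / lam) ** 2)  # ideal circular-aperture directivity

print(round(d_wavelengths), round(gain_db, 1))  # "nearly 3000 wavelengths", gain near 80 dB
```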

  19. Shielding and grounding in large detectors

    SciTech Connect

    Radeka, V.

    1998-09-01

    Prevention of electromagnetic interference (EMI), or "noise pickup," is an important design aspect in large detectors in accelerator environments. Shielding effectiveness as a function of shield thickness and conductivity vs the type and frequency of the interference field is described. Noise induced in transmission lines by ground loop driven currents in the shield is evaluated and the importance of low shield resistance is emphasized. Some measures for prevention of ground loops and isolation of detector-readout systems are discussed.
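The thickness/conductivity/frequency trade-off mentioned above is commonly captured by the skin depth and the resulting absorption term of shielding effectiveness. A sketch, using the standard textbook relations rather than anything specific to this report (the ~8.686 dB-per-skin-depth rule and the copper conductivity value are generic figures):

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability, H/m

def skin_depth(f_hz, sigma, mu_r=1.0):
    """Skin depth (m) in a good conductor: 1 / sqrt(pi * f * mu * sigma)."""
    return 1.0 / math.sqrt(math.pi * f_hz * mu_r * MU0 * sigma)

def absorption_loss_db(thickness_m, f_hz, sigma, mu_r=1.0):
    """Absorption part of shielding effectiveness: ~8.686 dB per skin depth."""
    return 8.686 * thickness_m / skin_depth(f_hz, sigma, mu_r)

sigma_cu = 5.8e7              # copper conductivity, S/m
for f in (1e3, 1e6, 1e9):     # absorption loss of a 1 mm copper shield vs frequency
    print(f, skin_depth(f, sigma_cu), absorption_loss_db(1e-3, f, sigma_cu))
```

The loop makes the report's point concrete: the same shield that is nearly transparent at low frequency provides enormous absorption loss once the shield is many skin depths thick.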

  20. Design concepts for large reflector antenna structures

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.; Adams, L. R.

    1983-01-01

    Practical approaches for establishing large, precise antenna reflectors in space are described. Reflector surfaces consisting of either solid panels or knitted mesh are considered. The approach using a deep articulated truss structure to support a mesh reflector is selected for detailed investigations. A new sequential deployment concept for the tetrahedral truss is explained. Good joint design is discussed, and examples are described both analytically and by means of demonstration models. The influence of curvature on the design and its vibration characteristics are investigated.

  1. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data. In some cases, performance is a moot issue. This is true in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  2. Large animal hepatotoxic and nephrotoxic plants.

    PubMed

    Oladosu, L A; Case, A A

    1979-10-01

    The hepatotoxic and nephrotoxic plants of large domestic animals have been reviewed. The most important ones are those widely distributed as weeds over pastures, neglected forests and grasslands, those used as ornamentals, the nitrate-concentrating forage crops, and the cyanophoric plants. Crotalaria spp., ragwort (Senecio jacobaea), Lantana spp. and Heliotropium are common hepatotoxic plants. Amaranthus retroflexus, Datura stramonium, Solanum rostratum, and the castor oil plant (Ricinus communis) are nephrotoxic plants.

  3. Large-scale fibre-array multiplexing

    SciTech Connect

    Cheremiskin, I V; Chekhlova, T K

    2001-05-31

    The possibility of creating a fibre multiplexer/demultiplexer with large-scale multiplexing without any basic restrictions on the number of channels and the spectral spacing between them is shown. The operating capacity of a fibre multiplexer based on a four-fibre array ensuring a spectral spacing of 0.7 pm (≈10 GHz) between channels is demonstrated. (laser applications and other topics in quantum electronics)

  4. Large extinction ratio optical electrowetting shutter.

    PubMed

    Montoya, Ryan D; Underwood, Kenneth; Terrab, Soraya; Watson, Alexander M; Bright, Victor M; Gopinath, Juliet T

    2016-05-01

    A large extinction ratio optical shutter has been demonstrated using electrowetting liquids. The device is based on switching between a liquid-liquid interface curvature that produces total internal reflection and one that does not. The interface radius of curvature can be tuned continuously from 9 mm at 0 V to -45 mm at 26 V. Extinction ratios from 55.8 to 66.5 dB were measured. The device shows promise for ultracold chip-scale atomic clocks.
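The extinction ratios quoted above (55.8 to 66.5 dB) are simply power ratios between the transmitting and blocking states expressed in decibels; a minimal helper (the function name is illustrative, not from the paper):

```python
import math

def extinction_ratio_db(p_open, p_closed):
    """Extinction ratio in dB between open- and closed-state transmitted power."""
    return 10.0 * math.log10(p_open / p_closed)

# A 60 dB shutter passes 10**6 times more light when open than when closed
leak = 1e-6                                   # closed-state transmission, open-state = 1
print(extinction_ratio_db(1.0, leak))
```

By the same conversion, the reported 55.8 dB corresponds to blocking all but a few parts in 10⁶ of the incident light.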

  5. Large proximal ureteral stones: Ideal treatment modality?

    PubMed Central

    Kadyan, B.; Sabale, V.; Mane, D.; Satav, V.; Mulay, A.; Thakur, N.; Kankalia, S. P.

    2016-01-01

    Background and Purpose: The ideal treatment modality for patients with a large impacted proximal ureteral stone remains controversial. We compared laparoscopic transperitoneal ureterolithotomy (Lap-TPUL) and semirigid ureteroscopy for large proximal ureteric stones to evaluate their efficacy and safety. Patients and Methods: From November 2012 to December 2014, we enrolled 122 patients with a large (≥1.5 cm) proximal ureteral stone in the study. Patients were randomly divided into two groups: Group A (60 patients), retrograde ureteroscopic lithotripsy using a semirigid ureteroscope; Group B (62 patients), transperitoneal LU (Lap-TPUL). Results: The overall stone-free rate was 71.6% and 93.5% for Group A and Group B, respectively (P = 0.008). The auxiliary procedure rate was higher in Group A than in Group B (27.3% vs. 5.6%). The complication rate was 11.2% in Group B versus 25% in Group A. Mean procedure time was higher in the laparoscopy group than in the ureterorenoscopy (URS) group (84.07 ± 16.80 vs. 62.82 ± 12.71 min). Hospital stay was 4.16 ± 0.67 days in the laparoscopy group and 1.18 ± 0.38 days in the URS group (P < 0.0001). Conclusion: Laparoscopic transperitoneal ureterolithotomy is a minimally invasive, safe and effective treatment modality and should be recommended for all patients with impacted large proximal stones that are not amenable to URS or extracorporeal shock-wave lithotripsy, or as a primary modality of choice, especially if the patient is otherwise a candidate for open surgery. PMID:27141190

  6. Science Diplomacy in Large International Collaborations

    NASA Astrophysics Data System (ADS)

    Barish, Barry C.

    2011-04-01

    What opportunities and challenges does the rapidly growing internationalization of science, especially large scale science and technology projects, present for US science policy? On one hand, the interchange of scientists, the sharing of technology and facilities and the working together on common scientific goals promotes better understanding and better science. On the other hand, challenges are presented, because the science cannot be divorced from government policies, and solutions must be found for issues varying from visas to making reliable international commitments.

  7. Survey of future requirements for large space structures. [space platforms, large antennas, and power surfaces

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1976-01-01

    The future requirements for large space structures were examined and the foundation for long-range planning of technology development for such structures is provided. Attention is concentrated on a period after 1985 for actual use. The basic ground rule of the study was that applications be of significant importance and have promise of direct economic benefit to mankind. The inputs to the study came from visits to a large number of government and industrial organizations, written studies in the current literature, and approximate analyses of potential applications. The paper identifies diverse space applications for large-area structures in three general categories: (1) large surfaces for power, (2) large antennas to receive and transmit energy over the radio frequency bandwidth, and (3) space platforms to provide area for general utilization.

  8. Large inflatable deployable antenna flight experiment results

    NASA Astrophysics Data System (ADS)

    Freeland, R. E.; Bilyeu, G. D.; Veal, G. R.; Steiner, M. D.; Carson, D. E.

    Large space-based deployable antenna structures are needed for a variety of applications. However, recent reductions of antenna user resources have resulted in a real need for low-cost, large-size, light-weight, and reliable deployable space antenna structures. Fortunately, a new class of deployable space structures, called "inflatable space structures", is under development at L'Garde, Inc. The potential of this new concept was recognized by NASA, who selected it for a flight experiment. The objective of the experiment was to develop a large, low-cost inflatable antenna structure and demonstrate its mechanical performance in the space environment. The carrier for this free-flying experiment was the STS-launched and recovered Spartan spacecraft. The experiment hardware consisted of a 14-meter diameter offset parabolic reflector structure. The Spartan 207/IAE was successfully flown on STS-77, deployed on May 20, 1996, with Spartan recovery on May 21, 1996. The basic antenna structure deployed successfully, though in an uncontrolled manner that clearly demonstrated the robustness of this new type of space structure. The low cost of the flight antenna structure hardware and the outstanding mechanical packaging demonstrated on orbit clearly validated the potential of this new class of space structure for enabling new, low-cost missions.

  9. On large deviations for ensembles of distributions

    SciTech Connect

    Khrychev, D A

    2013-11-30

    The paper is concerned with the large deviations problem in the Freidlin-Wentzell formulation without the assumption of the uniqueness of the solution to the equation involving white noise. In other words, it is assumed that for each ε>0 the nonempty set P_ε of weak solutions is not necessarily a singleton. Analogues of a number of concepts in the theory of large deviations are introduced for the set (P_ε, ε>0), hereafter referred to as an ensemble of distributions. The ensembles of weak solutions of an n-dimensional stochastic Navier-Stokes system and stochastic wave equation with power-law nonlinearity are shown to be uniformly exponentially tight. An idempotent Wiener process in a Hilbert space and idempotent partial differential equations are defined. The accumulation points in the sense of large deviations of the ensembles in question are shown to be weak solutions of the corresponding idempotent equations. Bibliography: 14 titles.

  10. Implementing large projects in software engineering courses

    NASA Astrophysics Data System (ADS)

    Coppit, David

    2006-03-01

    In software engineering education, large projects are widely recognized as a useful way of exposing students to the real-world difficulties of team software development. But large projects are difficult to put into practice. First, educators rarely have additional time to manage software projects. Second, classrooms have inherent limitations that threaten the realism of large projects. Third, quantitative evaluation of individuals who work in groups is notoriously difficult. As a result, many software engineering courses compromise the project experience by reducing the team sizes, project scope, and risk. In this paper, we present an approach to teaching a one-semester software engineering course in which 20 to 30 students work together to construct a moderately sized (15KLOC) software system. The approach combines carefully coordinated lectures and homeworks, a hierarchical project management structure, modern communication technologies, and a web-based project tracking and individual assessment system. Our approach provides a more realistic project experience for the students, without incurring significant additional overhead for the instructor. We present our experiences using the approach the last 2 years for the software engineering course at The College of William and Mary. Although the approach has some weaknesses, we believe that they are strongly outweighed by the pedagogical benefits.

  11. Flexible language constructs for large parallel programs

    NASA Technical Reports Server (NTRS)

    Rosing, Matthew; Schnabel, Robert

    1993-01-01

    The goal of the research described is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (MIMD) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include SIMD (Single Instruction Multiple Data), SPMD (Single Program Multiple Data), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. An overview of a new language that combines many of these programming models in a clean manner is given. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. An overview of the language and discussion of some of the critical implementation details is given.

  12. Stability of subsea pipelines during large storms

    PubMed Central

    Draper, Scott; An, Hongwei; Cheng, Liang; White, David J.; Griffiths, Terry

    2015-01-01

    On-bottom stability design of subsea pipelines transporting hydrocarbons is important to ensure safety and reliability but is challenging to achieve in the onerous metocean (meteorological and oceanographic) conditions typical of large storms (such as tropical cyclones, hurricanes or typhoons). This challenge is increased by the fact that industry design guidelines presently give no guidance on how to incorporate the potential benefits of seabed mobility, which can lead to lowering and self-burial of the pipeline on a sandy seabed. In this paper, we demonstrate recent advances in experimental modelling of pipeline scour and present results investigating how pipeline stability can change in a large storm. An emphasis is placed on the initial development of the storm, where scour is inevitable on an erodible bed as the storm velocities build up to peak conditions. During this initial development, we compare the rate at which peak near-bed velocities increase in a large storm (typically less than 10⁻³ m s⁻²) to the rate at which a pipeline scours and subsequently lowers (which is dependent not only on the storm velocities, but also on the mechanism of lowering and the pipeline properties). We show that the relative magnitude of these rates influences pipeline embedment during a storm and the stability of the pipeline. PMID:25512592

  13. Large aperture adaptive optics for intense lasers

    NASA Astrophysics Data System (ADS)

    Deneuville, François; Ropert, Laurent; Sauvageot, Paul; Theis, Sébastien

    2015-05-01

    ISP SYSTEM has developed a range of large-aperture electro-mechanical deformable mirrors (DM) suitable for ultra-short-pulse intense lasers. The design of the MD-AME deformable mirror is based on force application at numerous locations by electromechanical actuators driven by stepper motors. The DM design and assembly method have been adapted to large-aperture beams, and the performance was evaluated in a first application for a beam with a diameter of 250 mm at 45° angle of incidence. A Strehl ratio above 0.9 was reached for this application. Simulations were correlated with measurements on an optical bench, and the design has been validated by calculation for very large apertures (up to Ø550 mm). Optical aberrations up to Zernike order 5 can be corrected with a very low residual error, as for the actual MD-AME mirror. Amplitude can reach up to several hundred μm for low-order corrections. Hysteresis is lower than 0.1% and linearity better than 99%. Contrary to piezo-electric actuators, the μ-AME actuators avoid print-through effects and keep the mirror shape stable even unpowered, providing high resistance to electromagnetic pulses. The MD-AME mirrors can be adapted to circular, square or elliptical beams and are compatible with all dielectric or metallic coatings.

  14. Basics for Testing Large Earthquake Precursors

    NASA Astrophysics Data System (ADS)

    Romashkova, L. L.; Kossobokov, V. G.; Peresan, A.

    2008-12-01

    Earthquakes, the large or significant ones in particular, are extreme events that, by definition, are rare. Testing candidate precursors of large earthquakes implies investigating a small sample of case-histories with the support of specific and sensitive statistical methods and data of different quality, collected in various conditions. Regretfully, in many seismological studies the methods of mathematical statistics are used outside their applicability: earthquakes are evidently not independent events and have a heterogeneous, perhaps fractal, distribution in space and time. Moreover, naïve or, conversely, delicately designed models are considered a full replacement of seismic phenomena. Although there are many claims of earthquake precursors, most of them should remain in the list of precursor candidates, which have never been tested in any rigorous way and are, in fact, anecdotal cases of coincidental occurrence. To establish a precursory link between sequences of events of the same or different phenomena, it is necessary to accumulate enough statistics in a rigorous forecast/prediction test, whose results, i.e. success-to-failure scores and the space-time volume of alarms, must support rejecting hypotheses of random coincidental appearance. We reiterate our suggestion to use the so-called "Seismic Roulette" null hypothesis as the most adequate random alternative accounting for the empirical spatial distribution of the earthquakes in question, and we illustrate a few outcomes of testing large earthquake precursors.

  15. Flexible Language Constructs for Large Parallel Programs

    DOE PAGES

    Rosing, Matt; Schnabel, Robert

    1994-01-01

    The goal of the research described in this article is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (multiple instruction multiple data [MIMD]) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include single instruction multiple data (SIMD), single program multiple data (SPMD), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. In this article, we give an overview of a new language that combines many of these programming models in a clean manner. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. In this article, we give an overview of the language and discuss some of the critical implementation details.

  16. Actinide recovery method -- Large soil samples

    SciTech Connect

    Maxwell, S.L. III

    2000-04-25

    There is a need to measure actinides in environmental samples with lower and lower detection limits, requiring larger sample sizes. This analysis is adversely affected by sample-matrix interferences, which make analyzing soil samples above five grams very difficult. A new Actinide-Recovery Method has been developed by the Savannah River Site Central Laboratory to preconcentrate actinides from large soil samples. Diphonix Resin (Eichrom Industries), a 1994 R&D 100 winner, is used to preconcentrate the actinides from large soil samples; the actinides bind powerfully to the resin's diphosphonic acid groups. A rapid microwave-digestion technique is used to remove the actinides from the Diphonix Resin, which effectively eliminates interfering matrix components from the soil matrix. The microwave-digestion technique is more effective and less tedious than catalyzed hydrogen peroxide digestions of the resin or digestion of diphosphonic stripping agents such as HEDPA. After resin digestion, the actinides are recovered in a small volume of nitric acid which can be loaded onto small extraction chromatography columns, such as TEVA Resin, U-TEVA Resin or TRU Resin (Eichrom Industries). Small, selective extraction columns do not generate large volumes of liquid waste and provide consistent tracer recoveries after soil matrix elimination.

  17. Large classical universes emerging from quantum cosmology

    SciTech Connect

    Pinto-Neto, Nelson

    2009-04-15

    It is generally believed that one cannot obtain a large universe from quantum cosmological models without an inflationary phase in the classical expanding era because the typical size of the universe after leaving the quantum regime should be around the Planck length, and the standard decelerated classical expansion after that is not sufficient to enlarge the universe in the time available. For instance, in many quantum minisuperspace bouncing models studied in the literature, solutions where the universe leaves the quantum regime in the expanding phase with appropriate size have negligible probability amplitude with respect to solutions leaving this regime around the Planck length. In this paper, I present a general class of moving Gaussian solutions of the Wheeler-DeWitt equation where the velocity of the wave in minisuperspace along the scale factor axis, which is the new large parameter introduced in order to circumvent the above-mentioned problem, induces a large acceleration around the quantum bounce, forcing the universe to leave the quantum regime sufficiently big to increase afterwards to the present size, without needing any classical inflationary phase in between, and with reasonable relative probability amplitudes with respect to models leaving the quantum regime around the Planck scale. Furthermore, linear perturbations around this background model are free of any trans-Planckian problem.

  18. Bulk Micromegas detectors for large TPC applications

    NASA Astrophysics Data System (ADS)

    Sarrat, A.

    2007-10-01

    A large volume TPC will be used in the near future for a variety of experiments, including T2K and possibly the Linear Collider detector. The bulk Micromegas detector is a novel construction technique suited for building compact and robust low mass detectors. The ability to pave a large surface with a simple mechanical solution and negligible dead space between modules is of particular interest for these applications, offering a simple and low cost alternative to wire chambers. We have built and tested two large bulk detectors (26×27 cm² with 8×8 mm² pads) in the former HARP field cage setup at CERN, with cosmic ray data in a magnetic field up to 0.4 T. We present the excellent detector performances, with gains in excess of 10⁴, space point resolution of 700 μm at 1 m drift, and dE/dx resolution of 12%. Improvement on the point resolution with the use of a resistive anode is also discussed.

  19. Stability analysis of large electric power systems

    SciTech Connect

    Elwood, D.M.

    1993-01-01

    Modern electric power systems are large and complicated, and, in many regions of the world, the generation and transmission systems are operating near their limits. Ensuring the reliable operation of the power system requires engineers to study the response of the system to various disturbances. The responses to large disturbances are examined by numerically solving the nonlinear differential-algebraic equations describing the power system. The response to small disturbances is typically studied via eigenanalysis. The Electric Power Research Institute (EPRI) recently developed the Extended Transient/Mid-term Stability Program (ETMSP) to study large disturbance stability and the Small Signal Stability Program Package (SSSP) to study small signal stability. The primary objectives of the work described in this report were to (1) explore ways of speeding up ETMSP, especially on mid-term voltage stability problems, (2) explore ways of speeding up the Multi-Area Small-Signal Stability program (MASS), one of the codes in SSSP, and (3) explore ways of increasing the size of problem that can be solved by the Cray version of MASS.
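Small-signal stability analysis of the kind SSSP/MASS performs reduces to eigenanalysis of the linearized system matrix: an operating point is stable when every eigenvalue has a negative real part. A toy sketch with a hypothetical one-machine swing-equation model (all constants invented for illustration):

```python
import numpy as np

# Linearized swing equation: state x = [rotor angle deviation, speed deviation]
#   d(delta)/dt = omega
#   d(omega)/dt = -(K/M) * delta - (D/M) * omega
M, D, K = 5.0, 1.0, 50.0                 # inertia, damping, synchronizing coefficients
A = np.array([[0.0,     1.0],
              [-K / M, -D / M]])

eigvals = np.linalg.eigvals(A)            # modes of the linearized system
stable = bool(np.all(eigvals.real < 0))   # stable iff all modes decay
print(eigvals, stable)
```

Production codes like MASS apply exactly this criterion, but to sparse state matrices with thousands of states, which is why the report's focus is on speeding up and scaling the eigenanalysis rather than changing it.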

  20. Design of Large Momentum Acceptance Transport Systems

    SciTech Connect

    D.R. Douglas

    2005-05-01

    The use of energy recovery to enable high power linac operation often gives rise to an attendant challenge--the transport of high power beams subtending large phase space volumes. In particular applications--such as FEL driver accelerators--this manifests itself as a requirement for beam transport systems with large momentum acceptance. We will discuss the design, implementation, and operation of such systems. Though at times counterintuitive in behavior (perturbative descriptions may, for example, be misleading), large acceptance systems have been successfully utilized for generations as spectrometers and accelerator recirculators [1]. Such systems are in fact often readily designed using appropriate geometric descriptions of beam behavior; insight provided using such a perspective may in addition reveal inherent symmetries that simplify construction and improve operability. Our discussion will focus on two examples: the Bates-clone recirculator used in the Jefferson Lab 10 kW IR Upgrade FEL (which has an observed acceptance of 10% or more) and a compaction-managed mirror-bend achromat concept with an acceptance ranging from 50 to 150 MeV.

  1. Infrasonic observations of large scale HE events

    SciTech Connect

    Whitaker, R.W.; Mutschlecner, J.P.; Davidson, M.B.; Noel, S.D.

    1990-01-01

    The Los Alamos Infrasound Program has been operating since about mid-1982, making routine measurements of low frequency atmospheric acoustic propagation. Generally, we work between 0.1 Hz to 10 Hz; however, much of our work is concerned with the narrower range of 0.5 to 5.0 Hz. Two permanent stations, St. George, UT, and Los Alamos, NM, have been operational since 1983, collecting data 24 hours a day. This discussion will concentrate on measurements of large, high explosive (HE) events at ranges of 250 km to 5330 km. Because the equipment is well suited for mobile deployments, it can easily establish temporary observing sites for special events. The measurements in this report are from our permanent sites, as well as from various temporary sites. In this short report we will not give detailed data from all sites for all events, but rather will present a few observations that are typical of the full data set. The Defense Nuclear Agency sponsors these large explosive tests as part of their program to study airblast effects. A wide variety of experiments are fielded near the explosive by numerous Department of Defense (DOD) services and agencies. This measurement program is independent of this work; use is made of these tests as energetic known sources, which can be measured at large distances. Ammonium nitrate and fuel oil (ANFO) is the specific explosive used by DNA in these tests. 6 refs., 6 figs.

  2. Compressed state Kalman filter for large systems

    NASA Astrophysics Data System (ADS)

    Kitanidis, Peter K.

    2015-02-01

    The Kalman filter (KF) is a recursive filter that allows the assimilation of data in real time and has found numerous applications. In earth sciences, the method is applied to systems with very large state vectors obtained from the discretization of functions such as pressure, velocity, solute concentration, and voltage. With state dimension running in the millions, the implementation of the standard or textbook version of KF is very expensive and low-rank approximations have been devised such as EnKF and SEEK. Although widely applied, the error behavior of these methods is not adequately understood. This article focuses on very large linear systems and presents a complete computational method that scales roughly linearly with the dimension of the state vector. The method is suited for problems for which the effective rank of the state covariance matrix is much smaller than its dimension. This method is closest to SEEK but uses a fixed basis that should be selected in accordance with the characteristics of the problem, mainly the transition matrix and the system noise covariance. The method is matrix free, i.e., does not require computation of Jacobian matrices and uses the forward model as a black box. Computational results demonstrate the ability of the method to solve very large, say 10⁶, state vectors.
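
    The fixed-basis, matrix-free idea can be sketched in a few lines: keep the state as coefficients on r basis vectors, push each basis column through the forward model as a black box, and do the Kalman update entirely in the reduced space. Everything below (the random basis, the decaying forward model, the observation operator, the noise levels) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 500, 10, 20                     # state dim, basis rank, observations

# Fixed orthonormal basis (random here for illustration; in practice chosen
# from the transition matrix and system noise characteristics).
V, _ = np.linalg.qr(rng.standard_normal((n, r)))

def forward(x):                           # black-box forward model (assumed: mild decay)
    return 0.95 * x

H = rng.standard_normal((m, n)) / np.sqrt(n)   # observation operator
R = 0.01 * np.eye(m)                           # observation noise covariance

a, P = np.zeros(r), np.eye(r)             # reduced state and covariance: x ~ V @ a

# Forecast: apply the forward model to each basis column (matrix free),
# project back, then propagate the r x r covariance; cost is O(n r), not O(n^2).
Fr = V.T @ np.column_stack([forward(V[:, j]) for j in range(r)])
a, P = Fr @ a, Fr @ P @ Fr.T + 0.01 * np.eye(r)

# Analysis: the standard Kalman update carried out in the reduced space.
y = H @ (0.1 * V.sum(axis=1))             # synthetic observation of a state in span(V)
Hr = H @ V
K = P @ Hr.T @ np.linalg.inv(Hr @ P @ Hr.T + R)
a = a + K @ (y - Hr @ a)
P = (np.eye(r) - K @ Hr) @ P
```

    The full state is never stored as a covariance matrix; only the n×r basis and the r×r reduced covariance appear, which is what makes million-dimensional state vectors tractable.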

  3. Control challenges for extremely large telescopes

    NASA Astrophysics Data System (ADS)

    MacMartin, Douglas G.

    2003-08-01

    The next generation of large ground-based optical telescopes is likely to involve a highly segmented primary mirror that must be controlled in the presence of wind and other disturbances, resulting in a new set of challenges for control. The current design concept for the California Extremely Large Telescope (CELT) includes 1080 segments in the primary mirror, with the out-of-plane degrees of freedom actively controlled. In addition to the 3240 primary mirror actuators, the secondary mirror of the telescope will also require control in at least 5 degrees of freedom. The bandwidth of both control systems will be limited by coupling to structural modes. I discuss three control issues for extremely large telescopes in the context of the CELT design, describing both the status and remaining challenges. First, with many actuators and sensors, the cost and reliability of the control hardware is critical; the hardware requirements and current actuator design are discussed. Second, wind buffeting due to turbulence inside the telescope enclosure is likely to drive the control bandwidth higher, and hence limitations resulting from control-structure interaction must be understood. Finally, the impact on the control architecture is briefly discussed.

  4. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  6. Accelerating Large Data Analysis By Exploiting Regularities

    NASA Technical Reports Server (NTRS)

    Moran, Patrick J.; Ellsworth, David

    2003-01-01

    We present techniques for discovering and exploiting regularity in large curvilinear data sets. The data can be based on a single mesh or a mesh composed of multiple submeshes (also known as zones). Multi-zone data are typical of Computational Fluid Dynamics (CFD) simulations. Regularities include axis-aligned rectilinear and cylindrical meshes as well as cases where one zone is equivalent to a rigid-body transformation of another. Our algorithms can also discover rigid-body motion of meshes in time-series data. Next, we describe a data model where we can utilize the results from the discovery process in order to accelerate large data visualizations. Where possible, we replace general curvilinear zones with rectilinear or cylindrical zones. In rigid-body motion cases we replace a time-series of meshes with a transformed mesh object where a reference mesh is dynamically transformed based on a given time value in order to satisfy geometry requests, on demand. The data model enables us to make these substitutions and dynamic transformations transparently with respect to the visualization algorithms. We present results with large data sets where we combine our mesh replacement and transformation techniques with out-of-core paging in order to achieve significant speed-ups in analysis.
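
    Testing whether one zone is a rigid-body transformation of another can be done with a best-fit rotation and translation (the Kabsch/SVD method). The sketch below, run on synthetic point data, is one way to implement such a test and is not the paper's actual discovery algorithm:

```python
import numpy as np

def rigid_transform(P, Q):
    """Best-fit rotation R and translation t with Q ~= P @ R.T + t (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Synthetic check: zone Q is zone P rotated 90 degrees about z and translated.
rng = np.random.default_rng(1)
P = rng.standard_normal((100, 3))
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
Q = P @ Rz.T + np.array([5.0, 0.0, 0.0])

R, t = rigid_transform(P, Q)
residual = np.max(np.abs(P @ R.T + t - Q))
# A residual near machine precision flags Q as a rigid-body copy of P.
```

    Once such a zone is detected, only the reference mesh and the (R, t) pair need be stored, which is the storage and paging saving the abstract describes.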

  7. Plasma surface figuring of large optical components

    NASA Astrophysics Data System (ADS)

    Jourdain, R.; Castelli, M.; Morantz, P.; Shore, P.

    2012-04-01

    Fast figuring of large optical components is well known as a highly challenging manufacturing issue. Different manufacturing technologies, including magnetorheological finishing, loose abrasive polishing, and ion beam figuring, are presently employed. Yet, these technologies are slow and lead to expensive optics. This explains why plasma-based processes operating at atmospheric pressure have been researched as a cost effective means for figure correction of metre scale optical surfaces. In this paper, fast figure correction of a large optical surface is reported using the Reactive Atom Plasma (RAP) process. Achievements are shown following the scaling-up of the RAP figuring process to a 400 mm diameter area of a substrate made of Corning ULE®. The pre-processing spherical surface is characterized by a 3 metre radius of curvature, 2.3 μm PVr (373 nm RMS), and 1.2 nm Sq roughness. The nanometre-scale figure correction system used for this research work is named HELIOS 1200, and it is equipped with a unique plasma torch driven by a dedicated tool path algorithm. Topography map measurements were carried out using a vertical work station instrumented with a Zygo DynaFiz interferometer. Figuring results, together with the processing times, convergence levels and numbers of iterations, are reported. The results illustrate the significant potential and advantage of plasma processing for figure correction of large silicon-based optical components.
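
    The core of any such figure-correction process is dwell-time deconvolution: the depth removed equals the tool footprint convolved with the dwell-time map. A 1D sketch; the Gaussian footprint (8 mm sigma), 0.5 mm grid, and cosine target error are all assumed values, and the actual HELIOS 1200 tool-path algorithm is its own dedicated method:

```python
import numpy as np

# Target error map: material to remove (um) over a 100 mm line.
x = np.linspace(-50, 50, 201)                     # lateral position, mm
target = 0.5 * (1 + np.cos(2 * np.pi * x / 100))  # um to remove

# Convolution matrix: A[i, j] = footprint(x_i - x_j), assumed Gaussian.
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 8.0**2))

# Least-squares dwell-time map and the removal it predicts.
dwell, *_ = np.linalg.lstsq(A, target, rcond=None)
predicted = A @ dwell
```

    A real process must additionally constrain dwell times to be non-negative and regularize against the severe ill-conditioning of A; this unconstrained least-squares solve only illustrates the forward model being inverted.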

  8. Secondary containment large fertilizer storage tanks

    SciTech Connect

    Waddell, E.L.; Broder, M.F.

    1991-12-31

    The large quantities of fertilizer and pesticide, which are handled by retail facilities, have made these operations the target of regulations aimed at protecting water supplies. These regulations and dealers' desire to protect water supplies have made environmental protection a primary concern. Currently, nine states have adopted regulations which require secondary containment of fertilizers and agrichemicals. An additional seven states are developing regulations. Volume requirements and performance specifications of secondary containment structures for fertilizer storage tanks are included in all regulations. Among the different containment problems presented by retail sites, the large tanks (tanks with capacities greater than 100,000 gallons) present the greatest challenge for design and cost evaluation to determine the most effective containment system. The objective of this paper is to provide secondary containment designs for large fertilizer tanks using readily available construction materials. These designs may be innovative to some extent, but they must incorporate field experience and knowledge from trials, errors, and successful installations for existing and newly constructed fertilizer storage tanks. Case studies are presented to indicate projected costs for these alternatives.

  10. Large-Scale Visual Data Analysis

    NASA Astrophysics Data System (ADS)

    Johnson, Chris

    2014-04-01

    Modern high performance computers have speeds measured in petaflops and handle data set sizes measured in terabytes and petabytes. Although these machines offer enormous potential for solving very large-scale realistic computational problems, their effectiveness will hinge upon the ability of human experts to interact with their simulation results and extract useful information. One of the greatest scientific challenges of the 21st century is to effectively understand and make use of the vast amount of information being produced. Visual data analysis will be among our most important tools in helping to understand such large-scale information. Our research at the Scientific Computing and Imaging (SCI) Institute at the University of Utah has focused on innovative, scalable techniques for large-scale 3D visual data analysis. In this talk, I will present state-of-the-art visualization techniques, including scalable visualization algorithms and software, cluster-based visualization methods and innovative visualization techniques applied to problems in computational science, engineering, and medicine. I will conclude with an outline of future high performance visualization research challenges and opportunities.

  11. Stability of subsea pipelines during large storms.

    PubMed

    Draper, Scott; An, Hongwei; Cheng, Liang; White, David J; Griffiths, Terry

    2015-01-28

    On-bottom stability design of subsea pipelines transporting hydrocarbons is important to ensure safety and reliability but is challenging to achieve in the onerous metocean (meteorological and oceanographic) conditions typical of large storms (such as tropical cyclones, hurricanes or typhoons). This challenge is increased by the fact that industry design guidelines presently give no guidance on how to incorporate the potential benefits of seabed mobility, which can lead to lowering and self-burial of the pipeline on a sandy seabed. In this paper, we demonstrate recent advances in experimental modelling of pipeline scour and present results investigating how pipeline stability can change in a large storm. An emphasis is placed on the initial development of the storm, where scour is inevitable on an erodible bed as the storm velocities build up to peak conditions. During this initial development, we compare the rate at which peak near-bed velocities increase in a large storm (typically less than 10⁻³ m s⁻²) to the rate at which a pipeline scours and subsequently lowers (which is dependent not only on the storm velocities, but also on the mechanism of lowering and the pipeline properties). We show that the relative magnitude of these rates influences pipeline embedment during a storm and the stability of the pipeline.

  12. Large-eddy simulations with wall models

    NASA Technical Reports Server (NTRS)

    Cabot, W.

    1995-01-01

    The near-wall viscous and buffer regions of wall-bounded flows generally require a large expenditure of computational resources to be resolved adequately, even in large-eddy simulation (LES). Often as much as 50% of the grid points in a computational domain are devoted to these regions. The dense grids that this implies also generally require small time steps for numerical stability and/or accuracy. It is commonly assumed that the inner wall layers are near equilibrium, so that the standard logarithmic law can be applied as the boundary condition for the wall stress well away from the wall, for example, in the logarithmic region, obviating the need to expend large amounts of grid points and computational time in this region. This approach is commonly employed in LES of planetary boundary layers, and it has also been used for some simple engineering flows. In order to calculate accurately a wall-bounded flow with coarse wall resolution, one requires the wall stress as a boundary condition. The goal of this work is to determine the extent to which equilibrium and boundary layer assumptions are valid in the near-wall regions, to develop models for the inner layer based on such assumptions, and to test these modeling ideas in some relatively simple flows with different pressure gradients, such as channel flow and flow over a backward-facing step. Ultimately, models that perform adequately in these situations will be applied to more complex flow configurations, such as an airfoil.
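
    The equilibrium boundary condition described above amounts to solving the logarithmic law for the friction velocity at a matching point well off the wall. A minimal fixed-point sketch; the constants and matching-point data are illustrative, not taken from the simulations in this report:

```python
import numpy as np

def wall_stress_loglaw(u, y, nu, rho=1.0, kappa=0.41, B=5.2, iters=50):
    """Solve u/u_tau = (1/kappa) ln(y u_tau / nu) + B for u_tau by fixed-point
    iteration and return the wall shear stress tau_w = rho * u_tau**2."""
    u_tau = 0.05 * u                     # initial guess
    for _ in range(iters):
        u_tau = u / (np.log(y * u_tau / nu) / kappa + B)
    return rho * u_tau**2

# Matching point in the log region of a channel-like flow (assumed values):
# velocity 10 m/s sampled 5 cm from the wall, kinematic viscosity 1e-5 m^2/s.
tau_w = wall_stress_loglaw(u=10.0, y=0.05, nu=1e-5)
```

    The returned tau_w would then be imposed as the wall boundary condition of the LES, replacing the tens of grid points otherwise needed to resolve the viscous and buffer layers.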

  13. Large fragments of human serum albumin.

    PubMed

    Geisow, M J; Beaven, G H

    1977-03-01

    Large fragments of human serum albumin were produced by treatment of the native protein with pepsin at pH 3.5. Published sequences of human albumin [Behrens, Spiekerman & Brown (1975) Fed. Proc. Fed. Am. Soc. Exp. Biol. 34, 591; Meloun, Moravek & Kostka (1975) FEBS Lett. 58, 134-137] were used to locate the fragments in the primary structure. The fragments support both the sequence and proposed disulphide-linkage pattern (Behrens et al., 1975). As the pH of a solution of albumin is lowered from pH 4 to pH 3.5, the protein undergoes a reversible conformational change known as the N-F transition. The distribution of large fragments of human albumin digested with pepsin in the above pH region was critically dependent on pH. It appeared that this distribution was dependent on the conformation of the protein at low pH, rather than the activity of pepsin. The yields of the large fragments produced by peptic digestion at different values of pH suggested that the C-terminal region of albumin unfolds or separates from the rest of the molecule during the N-F transition. The similarity of peptic fragments of human and bovine albumin produced under identical conditions supports the proposed similar tertiary structure of these molecules.

  14. Research in large adaptive antenna arrays

    NASA Technical Reports Server (NTRS)

    Berkowitz, R. S.; Dzekov, T.

    1976-01-01

    The feasibility of microwave holographic imaging of targets near the earth using a large random conformal array on the earth's surface and illumination by a CW source on a geostationary satellite is investigated. A geometrical formulation for the illuminator-target-array relationship is applied to the calculation of signal levels resulting from L-band illumination supplied by a satellite similar to ATS-6. The relations between direct and reflected signals are analyzed and the composite resultant signal seen at each antenna element is described. Processing techniques for developing directional beam formation as well as SNR enhancement are developed. The angular resolution and focusing characteristics of a large array covering an approximately circular area on the ground are determined. The necessary relations are developed between the achievable SNR and the size and number of elements in the array. Numerical results are presented for a possible air traffic surveillance system. Finally, a simple phase correlation experiment is defined that can establish how large an array may be constructed.

  15. Large scale processes in the solar nebula.

    NASA Astrophysics Data System (ADS)

    Boss, A. P.

    Most proposed chondrule formation mechanisms involve processes occurring inside the solar nebula, so the large scale (roughly 1 to 10 AU) structure of the nebula is of general interest for any chondrule-forming mechanism. Chondrules and Ca, Al-rich inclusions (CAIs) might also have been formed as a direct result of the large scale structure of the nebula, such as passage of material through high temperature regions. While recent nebula models do predict the existence of relatively hot regions, the maximum temperatures in the inner planet region may not be high enough to account for chondrule or CAI thermal processing, unless the disk mass is considerably greater than the minimum mass necessary to restore the planets to solar composition. Furthermore, it does not seem to be possible to achieve both rapid heating and rapid cooling of grain assemblages in such a large scale furnace. However, if the accretion flow onto the nebula surface is clumpy, as suggested by observations of variability in young stars, then clump-disk impacts might be energetic enough to launch shock waves which could propagate through the nebula to the midplane, thermally processing any grain aggregates they encounter, and leaving behind a trail of chondrules.

  16. Scale up of large ALON windows

    NASA Astrophysics Data System (ADS)

    Goldman, Lee M.; Balasubramanian, Sreeram; Kashalikar, Uday; Foti, Robyn; Sastri, Suri

    2013-06-01

    Aluminum Oxynitride (ALON® Optical Ceramic) combines broadband transparency with excellent mechanical properties. ALON's cubic structure means that it is transparent in its polycrystalline form, allowing it to be manufactured by conventional powder processing techniques. Surmet has established a robust manufacturing process, beginning with synthesis of ALON® powder, continuing through forming/heat treatment of blanks, and ending with optical fabrication of ALON® windows. Surmet has made significant progress in its production capability in recent years. Additional scale up of Surmet's manufacturing capability, for larger sizes and higher quantities, is currently underway. ALON® transparent armor represents the state of the art in protection against armor piercing threats, offering a factor of two in weight and thickness savings over conventional glass laminates. Tiled and monolithic windows have been successfully produced and tested against a range of threats. Large ALON® windows are also of interest for a range of visible to Mid-Wave Infra-Red (MWIR) sensor applications. These applications often have stressing imaging requirements which in turn require that these large windows have optical characteristics including excellent homogeneity of index of refraction and very low stress birefringence. Surmet is currently scaling up its production facility to be able to make and deliver ALON® monolithic windows as large as ~19×36 in. Additionally, Surmet has plans to scale up to windows ~3 ft × 3 ft in size in the coming years. Recent results with scale up and characterization of the resulting blanks will be presented.

  17. Visualization of Large Terrains Made Easy

    SciTech Connect

    Lindstrom, P; Pascucci, V

    2001-08-07

    We present an elegant and simple to implement framework for performing out-of-core visualization and view-dependent refinement of large terrain surfaces. Contrary to the recent trend of increasingly elaborate algorithms for large-scale terrain visualization, our algorithms and data structures have been designed with the primary goal of simplicity and efficiency of implementation. Our approach to managing large terrain data also departs from more conventional strategies based on data tiling. Rather than emphasizing how to segment and efficiently bring data in and out of memory, we focus on the manner in which the data is laid out to achieve good memory coherency for data accesses made in a top-down (coarse-to-fine) refinement of the terrain. We present and compare the results of using several different data indexing schemes, and propose a simple to compute index that yields substantial improvements in locality and speed over more commonly used data layouts. Our second contribution is a new and simple, yet easy to generalize method for view-dependent refinement. Similar to several published methods in this area, we use longest edge bisection in a top-down traversal of the mesh hierarchy to produce a continuous surface with subdivision connectivity. In tandem with the refinement, we perform view frustum culling and triangle stripping. These three components are done together in a single pass over the mesh. We show how this framework supports virtually any error metric, while still being highly memory and compute efficient.
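
    The payoff of a good data layout can be illustrated with the simplest locality-preserving index, Morton (Z) order, which interleaves the bits of the grid coordinates. This is one of the commonly used layouts such a paper would compare against, not the specific hierarchical index the authors propose:

```python
def morton_index(x, y, bits=16):
    """Interleave the bits of (x, y) into a single Z-order index so that
    spatially nearby grid points tend to be nearby in memory."""
    idx = 0
    for b in range(bits):
        idx |= ((x >> b) & 1) << (2 * b)        # x bits at even positions
        idx |= ((y >> b) & 1) << (2 * b + 1)    # y bits at odd positions
    return idx

# The four cells of a 2x2 block occupy four consecutive indices:
block = [morton_index(x, y) for y in (0, 1) for x in (0, 1)]
print(block)  # -> [0, 1, 2, 3]
```

    Because every power-of-two block is contiguous in the index, a top-down coarse-to-fine refinement touches long contiguous runs of the file, which is exactly the memory coherency property the abstract emphasizes.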

  18. Large Angle Transient Dynamics (LATDYN) user's manual

    NASA Technical Reports Server (NTRS)

    Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.

    1991-01-01

    A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.

  19. W production at large transverse momentum at the CERN Large Hadron Collider.

    PubMed

    Gonsalves, Richard J; Kidonakis, Nikolaos; Sabio Vera, Agustín

    2005-11-25

    We study the production of W bosons at large transverse momentum in pp collisions at the CERN Large Hadron Collider. We calculate the complete next-to-leading order (NLO) corrections to the differential cross section. We find that the NLO corrections provide a large increase to the cross section but, surprisingly, do not reduce the scale dependence relative to leading order (LO). We also calculate next-to-next-to-leading-order (NNLO) soft-gluon corrections and find that, although they are small, they significantly reduce the scale dependence thus providing a more stable result.

  20. Geomechanical analysis of the large block test

    SciTech Connect

    Blair, S.C.; Berge, P.A.; Wang, H.F.

    1996-08-01

    The Yucca Mountain Site Characterization Project is investigating the Topopah Spring tuff at Yucca Mountain, Nevada, to determine whether it is suitable as a host rock for the disposal of high-level nuclear wastes. The Large Block Test (LBT) at Fran Ridge was planned as part of the project to investigate coupled thermal-mechanical-hydrological and geochemical processes that may occur in the repository near-field environment. This test would be performed on an excavated block of Topopah Spring tuff and would provide information at an intermediate scale (1-10 m) that would help evaluate existing models for repository performance. As part of the LBT, we are analyzing the coupled thermal-mechanical-hydrological behavior of the block in response to heating. Our objectives are to aid in the experimental design of the test, to evaluate different thermal and constitutive models, and to evaluate several different numerical methods. In this report, we present results of thermal-mechanical simulations of the heat-up phase of the LBT conducted using two different numerical codes that are commercially available: a two-dimensional (2D) finite-difference model called FLAC and a three-dimensional (3D) finite-element model called ABAQUS. The purpose of this initial numerical modeling is to calculate temperatures, stresses, and displacements in two and three dimensions for a simplified representation of the large block. In reality, numerous joints and fractures complicate the behavior of the large block significantly. Nonetheless, these simulations provide a general understanding of the thermal-mechanical behavior to be expected in the LBT.
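
    The thermal half of such a simulation can be illustrated with a minimal 1D explicit finite-difference heat-up calculation. The report's actual models are 2D FLAC and 3D ABAQUS runs; the material properties and boundary temperatures below are assumed, tuff-like values, not parameters of the LBT:

```python
import numpy as np

# 1D transient heat conduction through a 3 m block, explicit scheme.
L, nx = 3.0, 61                     # block length (m), grid points
dx = L / (nx - 1)
k, rho, cp = 2.0, 2300.0, 900.0     # assumed conductivity, density, heat capacity
alpha = k / (rho * cp)              # thermal diffusivity, m^2/s
dt = 0.4 * dx * dx / alpha          # time step satisfying the stability limit r <= 0.5

T = np.full(nx, 25.0)               # initial temperature, deg C
for _ in range(2000):
    T[0] = 140.0                    # heater face held at 140 C
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = 25.0                    # far face held at ambient
```

    The production codes add the mechanical coupling (thermal expansion driving stresses and displacements) on top of a temperature field computed in essentially this way, plus the joints and fractures the abstract notes are neglected here.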

  1. Large Bore Powder Gun Qualification (U)

    SciTech Connect

    Rabern, Donald A.; Valdiviez, Robert

    2012-04-02

    A Large Bore Powder Gun (LBPG) is being designed to enable experimentalists to characterize material behavior outside the capabilities of the NNSS JASPER and LANL TA-55 PF-4 guns. The combination of these three guns will create a capability to conduct impact experiments over a wide range of pressures and shock profiles. The Large Bore Powder Gun will be fielded at the Nevada National Security Site (NNSS) U1a Complex. The Complex is nearly 1000 ft below ground with dedicated drifts for testing, instrumentation, and post-shot entombment. To ensure the reliability, safety, and performance of the LBPG, a qualification plan has been established and documented here. Requirements for the LBPG have been established and documented in WE-14-TR-0065 U A, Large Bore Powder Gun Customer Requirements. The document includes the requirements for the physics experiments, the gun and confinement systems, and operations at NNSS. A detailed description of the requirements is established in that document and is referred to and quoted throughout this document. Two Gun and Confinement Systems will be fielded. The Prototype Gun will be used primarily to characterize the gun and confinement performance and be the primary platform for qualification actions. This gun will also be used to investigate and qualify target and diagnostic modifications through the life of the program (U1a.104 Drift). An identical gun, the Physics Gun, will be fielded for confirmatory and Pu experiments (U1a.102D Drift). Both guns will be qualified for operation. The Gun and Confinement System design will be qualified through analysis, inspection, and testing using the Prototype Gun for the majority of the process. The Physics Gun will be qualified through inspection and a limited number of qualification tests to ensure performance and behavior equivalent to the Prototype gun. Figure 1.1 shows the partial configuration of U1a and the locations of the Prototype and Physics Gun/Confinement Systems.

  2. Metrological large range scanning probe microscope

    NASA Astrophysics Data System (ADS)

    Dai, Gaoliang; Pohlenz, Frank; Danzebrink, Hans-Ulrich; Xu, Min; Hasche, Klaus; Wilkening, Guenter

    2004-04-01

    We describe a metrological large range scanning probe microscope (LR-SPM) with an Abbe-error-free design and direct interferometric position measurement capability, aimed at versatile traceable topographic measurements that require nanometer accuracy. A dual-stage positioning system was designed to achieve both a large measurement range and a high measurement speed. This dual-stage system consists of a commercially available stage, referred to as the nanomeasuring machine (NMM), with a motion range of 25 mm × 25 mm × 5 mm along the x, y, and z axes, and a compact z-axis piezoelectric positioning stage (compact z stage) with an extension range of 2 μm. The metrological LR-SPM described here senses the surface using a fixed scanning force microscope (SFM) head working in contact mode. During operation, lateral scanning of the sample is performed solely by the NMM, whereas the z motion, controlled by the SFM signal, is carried out by a combination of the NMM and the compact z stage. In this case the compact z stage, with its high mechanical resonance frequency (greater than 20 kHz), is responsible for the rapid motion while the NMM simultaneously makes slower movements over a larger motion range. To reduce the Abbe offset to a minimum, the SFM tip is located at the intersection of three interferometer measurement beams oriented in the x, y, and z directions. To improve real-time performance, two high-end digital signal processing (DSP) systems are used for NMM positioning and SFM servocontrol. Comprehensive DSP firmware and Windows XP-based software are implemented, providing a flexible and user-friendly interface. The instrument is able to perform large-area imaging or profile scanning directly, without stitching small scanned images. Several measurements on different samples such as flatness standards, nanostep height standards, roughness standards as well as sharp nanoedge samples and 1D gratings demonstrate the outstanding metrological capabilities of the instrument.

  3. National Large Solar Telescope of Russia

    NASA Astrophysics Data System (ADS)

    Demidov, Mikhail

    One of the most important tasks of modern solar physics is multi-wavelength observation of the small-scale structure of the solar atmosphere at different heights, including the chromosphere and corona. Large-aperture telescopes are necessary for this. Several challenging projects for large (and even giant) solar telescopes are currently under construction or design around the world, the best known among them being the 4-meter-class telescopes ATST in the USA and EST in Europe. Since 2013, a new Large Solar Telescope (LST) with a 3-meter main mirror has been under development in Russia as a part (sub-project) of the National Heliogeophysical Complex (NHGC) of the Russian Academy of Sciences. It is to be located at the Sayan solar observatory at an altitude of more than 2000 m. To avoid the numerous problems of off-axis optical telescopes (despite some obvious advantages of the off-axis configuration) and to stay within the available budget, a classical on-axis Gregorian scheme on an alt-azimuth mount has been chosen. The scientific equipment of the LST-3 will include several narrow-band tunable filter devices and spectrographs for different wavelength bands, including the infrared. The units will be installed either at the Nasmyth focus or on the rotating coude platform. To minimize instrumental polarization, the polarization analyzer is located near the diagonal mirror after the M2 mirror. High-order adaptive optics is used to achieve diffraction-limited performance. It is expected that, after some modification of the optical configuration, the LST-3 will operate as an approximately 1-m mirror coronagraph in near-infrared spectral lines. Possibilities for stellar observations during night time are provided as well.

  4. Large Geomagnetic Storms: Introduction to Special Section

    NASA Technical Reports Server (NTRS)

    Gopalswamy, N.

    2010-01-01

    Solar cycle 23 witnessed the accumulation of rich data sets that reveal various aspects of geomagnetic storms in unprecedented detail, both at the Sun, where the storm-causing disturbances originate, and in geospace, where the effects of the storms are directly felt. During two recent coordinated data analysis workshops (CDAWs), the large geomagnetic storms (Dst ≤ -100 nT) of solar cycle 23 were studied in order to understand their solar, interplanetary, and geospace connections. This special section grew out of these CDAWs, with additional contributions relevant to these storms. Here I provide a brief summary of the results presented in the special section.

  5. Large Ensembles of Regional Climate Projections

    NASA Astrophysics Data System (ADS)

    Massey, Neil; Allen, Myles; Hall, Jim

    2016-04-01

    Projections of regional climate change have great utility for impact assessment at a local scale. The CORDEX climate projection framework presents a method of providing these regional projections by driving a regional climate model (RCM) with output from CMIP5 climate projection runs of global climate models (GCMs). This produces an ensemble of regional climate projections, sampling the model uncertainty, the forcing uncertainty and the uncertainty of the response of the climate system to the increase in greenhouse gas (GHG) concentrations. Using the weather@home project to compute large ensembles of RCMs via volunteer distributed computing presents another method of generating projections of climate variables and also allows the sampling of the uncertainty due to internal variability. weather@home runs both an RCM and a GCM on volunteers' home computers, with the free-running GCM driving the boundaries of the RCM. The GCM is an atmosphere-only model and requires forcing at the lower boundary with sea-surface temperature (SST) and sea-ice concentration (SIC) data. By constructing SST and SIC projections, using projections of GHG and other atmospheric gases, and running the weather@home RCM and GCM with these forcings, large ensembles of projections of climate variables at regional scales can be made. To construct the SSTs and SICs, a statistical model is built to represent the response of SST and SIC to increases in GHG concentrations in the CMIP5 ensemble, for both the RCP4.5 and RCP8.5 scenarios. This statistical model uses empirical orthogonal functions (EOFs) to represent the change in the long-term trend of SSTs in the CMIP5 projections. A multivariate distribution of the leading principal components (PCs) is produced using a copula and sampled to produce a time series of PCs, which are recombined with the EOFs to generate a time series of SSTs, with internal variability added from observations. Hence, a large ensemble of SST projections is generated, with each SST
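    The EOF/PC decomposition-and-recombination step described above can be sketched with synthetic data. The field, grid size, and number of retained modes below are illustrative, not those of the study, and the copula sampling of the PC distribution is omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for a CMIP5-style SST anomaly field:
    # 120 monthly time steps over 50 spatial grid points.
    n_time, n_space = 120, 50
    sst = rng.standard_normal((n_time, n_space)).cumsum(axis=0) * 0.05

    # Centre in time, then compute EOFs via the SVD:
    # rows of Vt are the EOFs (spatial patterns), and the columns of
    # U scaled by the singular values are the PC time series.
    sst_anom = sst - sst.mean(axis=0)
    U, s, Vt = np.linalg.svd(sst_anom, full_matrices=False)

    k = 3                       # retain the leading EOFs
    eofs = Vt[:k]               # (k, n_space) spatial patterns
    pcs = U[:, :k] * s[:k]      # (n_time, k) PC time series

    # Recombining the retained PCs with the EOFs reconstructs the
    # low-order part of the field, as in the SST-construction step.
    recon = pcs @ eofs
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()
    print(f"variance explained by {k} EOFs: {explained:.2f}")
    ```

    In the study's setting the sampled PC time series would replace `pcs` before the recombination, and observed internal variability would be added to `recon`.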

  6. Large autonomous spacecraft electrical power system (LASEPS)

    NASA Technical Reports Server (NTRS)

    Dugal-Whitehead, Norma R.; Johnson, Yvette B.

    1992-01-01

    NASA - Marshall Space Flight Center is creating a large high voltage electrical power system testbed called LASEPS. This testbed is being developed to simulate an end-to-end power system from power generation and source to loads. When the system is completed it will have several power configurations, which will include several battery configurations. These configurations are: two 120 V batteries, one or two 150 V batteries, and one 250 to 270 V battery. This breadboard encompasses varying levels of autonomy from remote power converters to conventional software control to expert system control of the power system elements. In this paper, the construction and provisions of this breadboard are discussed.

  8. Large field cutoffs make perturbative series converge

    NASA Astrophysics Data System (ADS)

    Meurice, Yannick

    2002-03-01

    For λφ⁴ problems, convergent perturbative series can be obtained by cutting off the large field configurations. The modified series converge to values exponentially close to the exact ones. For λ larger than some critical value, the method outperforms Padé approximants and Borel summations. We discuss some aspects of the semi-classical methods used to calculate the modified Feynman rules and estimate the error associated with the procedure. We provide a simple numerical example where the procedure works despite the fact that the Borel sum has singularities on the positive real axis.
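    A zero-dimensional analogue makes the convergence claim concrete: for Z(λ) = ∫ exp(-φ²/2 - λφ⁴) dφ, the usual expansion in λ is only asymptotic, but cutting the integration at |φ| ≤ φ_max yields a series that converges to the cut integral. The toy sketch below (with illustrative parameters, not the paper's lattice setup) checks this numerically:

    ```python
    import math
    import numpy as np

    # Zero-dimensional toy lambda*phi^4 "partition function" with a
    # large-field cutoff: Z_cut = ∫_{|phi|<=phi_max} exp(-phi^2/2 - lam*phi^4).
    lam, phi_max = 0.1, 3.0
    phi = np.linspace(-phi_max, phi_max, 200001)
    dphi = phi[1] - phi[0]
    w = np.exp(-phi**2 / 2)

    def integral(y):
        # trapezoidal rule on the uniform phi grid
        return dphi * (y.sum() - 0.5 * (y[0] + y[-1]))

    z_cut = integral(w * np.exp(-lam * phi**4))

    # Field-cut perturbative series:
    #   Z_cut = sum_k (-lam)^k / k! * ∫_{|phi|<=phi_max} phi^(4k) e^{-phi^2/2}
    # Because every moment is bounded by phi_max^(4k), the terms are
    # dominated by (lam*phi_max^4)^k / k! and the sum converges.
    partial = 0.0
    for k in range(60):
        partial += (-lam) ** k / math.factorial(k) * integral(w * phi ** (4 * k))

    print(f"series: {partial:.12f}   cut integral: {z_cut:.12f}")
    ```

    Without the cutoff the k-th moment grows like (4k)!!, so the same expansion has zero radius of convergence; the cutoff is what tames it.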

  9. Large-scale brightenings associated with flares

    NASA Technical Reports Server (NTRS)

    Mandrini, Cristina H.; Machado, Marcos E.

    1992-01-01

    It is shown that large-scale brightenings (LSBs) associated with solar flares, similar to the 'giant arches' discovered by Svestka et al. (1982) in images obtained by the SMM HXIS hours after the onset of two-ribbon flares, can also occur in association with confined flares in complex active regions. For these events, a link between the LSB and the underlying flare is clearly evident from the active-region magnetic field topology. The implications of these findings are discussed within the framework of the interacting loops of flares and the giant arch phenomenology.

  10. Microwave radiation hazards around large microwave antenna.

    NASA Technical Reports Server (NTRS)

    Klascius, A.

    1973-01-01

    The microwave radiation hazards associated with the use of large antennas become increasingly dangerous to personnel as the transmitters go to ever higher powers. The near-field area is of the greatest concern: it contains spillover from the subreflector and reflections from nearby objects. Centimeter waves meeting in phase will reinforce each other and create hot spots of microwave energy. This has been measured in front of and around several 26-meter antennas. Hot spots have been found and are going to be the determining factor in delineating safe areas for personnel to work. Better techniques and instruments to measure these fields are needed for the evaluation of hazard areas.

  11. Antineutrino spectroscopy with large water Cerenkov detectors.

    PubMed

    Beacom, John F; Vagins, Mark R

    2004-10-22

    We propose modifying large water Čerenkov detectors by the addition of 0.2% gadolinium trichloride, which is highly soluble, newly inexpensive, and transparent in solution. Since Gd has an enormous cross section for radiative neutron capture, with ΣE(γ) = 8 MeV, this would make neutrons visible for the first time in such detectors, allowing antineutrino tagging by the coincidence detection reaction ν̄_e + p → e⁺ + n (similarly for ν̄_μ). Taking Super-Kamiokande as a working example, dramatic consequences for reactor neutrino measurements, first observation of the diffuse supernova neutrino background, galactic supernova detection, and other topics are discussed. PMID:15525063

  12. Large scale phononic metamaterials for seismic isolation

    SciTech Connect

    Aravantinos-Zafiris, N.; Sigalas, M. M.

    2015-08-14

    In this work, we numerically examine structures that could be characterized as large scale phononic metamaterials. These novel structures could have band gaps in the frequency spectrum of seismic waves when their dimensions are chosen appropriately, suggesting that they could be serious candidates for seismic isolation structures. Different, easy-to-fabricate structures made from construction materials such as concrete and steel were examined. The well-known finite difference time domain method is used in our calculations in order to calculate the band structures of the proposed metamaterials.
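    The paper's band-structure calculations use FDTD; as a much simpler illustration of how a periodic concrete/steel structure opens a band gap, the sketch below uses the standard 1D bilayer Bloch dispersion relation instead. All material parameters and layer thicknesses are hypothetical, not taken from the paper:

    ```python
    import numpy as np

    # For normal-incidence longitudinal waves in a periodic bilayer,
    # the Bloch dispersion relation is
    #   cos(q a) = cos(k1 d1) cos(k2 d2)
    #              - 0.5 (Z1/Z2 + Z2/Z1) sin(k1 d1) sin(k2 d2),
    # with k_i = w/c_i and acoustic impedance Z_i = rho_i * c_i.
    # Frequencies where |RHS| > 1 admit no real Bloch wavenumber: a band gap.
    rho1, c1, d1 = 2400.0, 3500.0, 10.0   # "concrete": density, speed, thickness
    rho2, c2, d2 = 7800.0, 5800.0, 10.0   # "steel"
    Z1, Z2 = rho1 * c1, rho2 * c2

    f = np.linspace(0.1, 200.0, 4000)     # Hz, roughly seismic-adjacent range
    w = 2 * np.pi * f
    k1, k2 = w / c1, w / c2
    rhs = (np.cos(k1 * d1) * np.cos(k2 * d2)
           - 0.5 * (Z1 / Z2 + Z2 / Z1) * np.sin(k1 * d1) * np.sin(k2 * d2))

    gap = np.abs(rhs) > 1.0
    print(f"band-gap fraction of scanned range: {gap.mean():.2f}")
    print(f"first gap starts near {f[gap.argmax()]:.0f} Hz")
    ```

    The impedance mismatch between the two materials is what makes the sin·sin term exceed the ±1 window; larger layers push the gaps toward lower, more seismically relevant frequencies.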

  13. Large-scale dynamics and global warming

    SciTech Connect

    Held, I. M.

    1993-02-01

    Predictions of future climate change raise a variety of issues in large-scale atmospheric and oceanic dynamics. Several of these are reviewed in this essay, including the sensitivity of the circulation of the Atlantic Ocean to increasing freshwater input at high latitudes; the possibility of greenhouse cooling in the southern oceans; the sensitivity of monsoonal circulations to differential warming of the two hemispheres; the response of midlatitude storms to changing temperature gradients and increasing water vapor in the atmosphere; and the possible importance of positive feedback between the mean winds and eddy-induced heating in the polar stratosphere.

  14. Large Meteorite Impacts and Planetary Evolution

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Topics considered include: petrography, geochemistry and geochronology; impact-induced hydrothermal base metal mineralization; nickel- and platinum-group-element-enriched quartz norite in the latest Jurassic Morokweng impact structure, South Africa; extraterrestrial helium trapped in fullerenes in the Sudbury; synthetic aperture radar characteristics of a glacially modified meltsheet; the Chicxulub seismic experiment; chemical compositions of Chicxulub impact breccias; experimental investigation of the chemistry of vaporization of targets in relation to the Chicxulub impact; artificial ozone hole generation following a large meteoroid impact into an oceanic site; three-dimensional modeling of impactite bodies of the Popigai impact crater, Russia.

  15. Preconditioned techniques for large eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Wu, Kesheng

    1997-11-01

    This research focuses on finding a large number of eigenvalues and eigenvectors of a sparse symmetric or Hermitian matrix, for example, finding 1000 eigenpairs of a 100,000 × 100,000 matrix. These eigenvalue problems are challenging because the matrix size is too large for traditional QR-based algorithms and the number of desired eigenpairs is too large for most common sparse eigenvalue algorithms. In this thesis, we approach this problem in two steps. First, we identify a sound preconditioned eigenvalue procedure for computing multiple eigenpairs. Second, we improve the basic algorithm through new preconditioning schemes and spectrum transformations. Through careful analysis, we see that both the Arnoldi and Davidson methods have an appropriate structure for computing a large number of eigenpairs with preconditioning. We also study three variations of these two basic algorithms. Without preconditioning, these methods are mathematically equivalent, but they differ in numerical stability and complexity. However, the Davidson method is much more successful when preconditioned. Despite its success, the preconditioning scheme in the Davidson method is seen as flawed because the preconditioner becomes ill-conditioned near convergence. After comparison with other methods, we find that the effectiveness of the Davidson method is due to its preconditioning step being an inexact Newton method. We proceed to explore other Newton methods for eigenvalue problems to develop preconditioning schemes without the same flaws. We found that the simplest and most effective preconditioner is to use the Conjugate Gradient method to approximately solve equations generated by the Newton methods. Also, a different strategy for enhancing the performance of the Davidson method is to alternate between the regular Davidson iteration and a polynomial method for eigenvalue problems. To use these polynomials, the user must decide which intervals of the spectrum the polynomial should suppress.
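    A minimal sketch of the preconditioned Davidson iteration discussed above, using the classic diagonal (Jacobi-style) preconditioner on a synthetic diagonally dominant test matrix. This illustrates the idea only, not the thesis's production algorithm:

    ```python
    import numpy as np

    def davidson_lowest(A, tol=1e-8, max_iter=100):
        """Lowest eigenpair of symmetric A via a basic Davidson iteration."""
        n = A.shape[0]
        diag = np.diag(A)
        V = np.zeros((n, 0))
        t = np.eye(n, 1).ravel()           # initial guess: first unit vector
        theta, x = 0.0, t
        for _ in range(max_iter):
            # Orthogonalise the new direction against the search space
            # (two Gram-Schmidt passes for numerical safety).
            for _ in range(2):
                if V.shape[1] > 0:
                    t = t - V @ (V.T @ t)
            t = t / np.linalg.norm(t)
            V = np.hstack([V, t[:, None]])
            # Rayleigh-Ritz step on the subspace spanned by V.
            ritz_vals, ritz_vecs = np.linalg.eigh(V.T @ A @ V)
            theta, y = ritz_vals[0], ritz_vecs[:, 0]
            x = V @ y
            r = A @ x - theta * x          # residual of the Ritz pair
            if np.linalg.norm(r) < tol:
                break
            # Davidson preconditioner: approximately solve (diag(A) - theta) t = r,
            # guarding the denominator away from zero near convergence.
            denom = diag - theta
            denom = np.where(np.abs(denom) < 1e-6, 1e-6, denom)
            t = r / denom
        return theta, x

    rng = np.random.default_rng(1)
    n = 200
    A = np.diag(np.arange(1.0, n + 1)) + 1e-2 * rng.standard_normal((n, n))
    A = (A + A.T) / 2                      # symmetrise the test matrix
    lam, x = davidson_lowest(A)
    print(f"Davidson lowest eigenvalue: {lam:.8f}")
    print(f"dense eigh reference:       {np.linalg.eigh(A)[0][0]:.8f}")
    ```

    The guarded denominator is exactly the flaw the abstract mentions: `diag(A) - theta` becomes nearly singular as the Ritz value converges, which motivates the inexact-Newton and CG-based preconditioners explored in the thesis.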

  16. Construction and control of large space structures

    NASA Technical Reports Server (NTRS)

    Card, M. F.; Heard, W. L., Jr.; Akin, D. L.

    1986-01-01

    Recent NASA research efforts on space construction are reviewed. Preliminary results of the EASE/ACCESS Shuttle experiments are discussed. A 45-foot beam was constructed on orbit in 30 minutes using a manual assembly technique at a work station. A large tetrahedron was constructed several times using a free-floating technique. The capability of repair, utilities installation, and handling the structures using a mobile foot restraint on the RMS was also demonstrated. Implications of the experiments for the space station are presented. Models of 5-meter space station structure together with neutral buoyancy simulations suggest that manual assembly techniques are feasible. Selected research on control of flexible structures is discussed.

  17. Resistojet propulsion for large spacecraft systems

    NASA Technical Reports Server (NTRS)

    Mirtich, M. J.

    1982-01-01

    Resistojet propulsion systems have characteristics that are ideally suited for the on-orbit and primary propulsion requirements of large spacecraft systems. These characteristics, which offer advantages over other forms of propulsion, are reviewed and presented. The feasibility of resistojets was demonstrated in space, whereas only a limited number of ground life tests have been performed. The major technology issues associated with these ground tests are evaluated. The past performance of resistojets is summarized, and the present-day technology status is reviewed. The material criteria, along with possible concepts, needed to attain high-performance resistojets are presented.

  18. Capacity Choice in a Large Market

    PubMed Central

    Godenhielm, Mats; Kultti, Klaus

    2014-01-01

    We analyze endogenous capacity formation in a large frictional market with perfectly divisible goods. Each seller posts a price and decides on a capacity. The buyers base their decision on which seller to visit on both characteristics. In this setting we determine the conditions for the existence and uniqueness of a symmetric equilibrium. When capacity is unobservable there exists a continuum of equilibria. We show that the “best” of these equilibria leads to the same seller capacities and the same number of trades as the symmetric equilibrium under observable capacity. PMID:25133676

  19. Modelling sediment input in large river basins

    NASA Astrophysics Data System (ADS)

    Scherer, U.

    2012-04-01

    Erosion and sediment redistribution play a pivotal role in the terrestrial ecosystem as they directly influence soil functions and water quality. In particular, surface waters are threatened by emissions of nutrients and contaminants via erosion. The sustainable management of sediments is thus a key challenge in river basin management. Besides the planning and implementation of mitigation measures, typically focusing on small and mesoscale catchments, knowledge of sediment emissions and associated substances in large drainage basins is of utmost importance for water quality protection of large rivers and the seas. The objective of this study was thus to quantify the sediment input into the large drainage basins of Germany (Rhine, Elbe, Odra, Weser, Ems, Danube) as a basis for estimating nutrient and contaminant emissions via erosion. The sediment input was quantified for all watersheds of Germany and added up along the flow paths of the river systems. Due to the large scale, sediment production within the watersheds was estimated based on the USLE for cultivated land and naturally covered areas and on specific erosion rates for mountainous areas without vegetation cover. To quantify the sediment delivery ratio, a model approach was developed using data on calculated sediment production rates and long-term sediment loads observed at monitoring stations of 13 watersheds located in different landscape regions of Germany. A variety of morphological parameters and catchment properties such as slope, drainage density, share of morphological sinks, hypsometric integral, flow distance between sediment source areas and the next stream as well as soil and land use properties were tested to explain the variation in the sediment delivery ratios for the 13 watersheds. The sediment input into streams is mainly controlled by the location of sediment source areas and the morphology along the flow pathways to surface waters. Thus, this complex interaction of spatially distributed catchment
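    The USLE step described above multiplies rainfall erosivity R, soil erodibility K, the slope factor LS, the cover factor C, and the practice factor P, then scales gross erosion by a sediment delivery ratio to get the input to the stream network. A back-of-envelope sketch, with illustrative parameter values that are not the study's:

    ```python
    # Universal Soil Loss Equation, A = R * K * LS * C * P, reduced by a
    # sediment delivery ratio (SDR) to give sediment input to the stream.

    def usle_soil_loss(R, K, LS, C, P):
        """Gross erosion rate [t/(ha*yr)] from the USLE factors."""
        return R * K * LS * C * P

    def sediment_input(area_ha, R, K, LS, C, P, sdr):
        """Sediment delivered to the stream network [t/yr]."""
        return usle_soil_loss(R, K, LS, C, P) * area_ha * sdr

    # Hypothetical cultivated watershed: 5000 ha, moderate slope, 15% delivery.
    load = sediment_input(area_ha=5000, R=60, K=0.35, LS=1.2, C=0.2, P=1.0,
                          sdr=0.15)
    print(f"sediment input: {load:.0f} t/yr")
    ```

    The study's contribution is essentially the `sdr` term: a regression of delivery ratios on morphological catchment properties, fitted to observed loads at the 13 monitoring stations.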

  20. Application of NASTRAN to large space structures

    NASA Technical Reports Server (NTRS)

    Balderes, T.; Zalesak, J.; Dyreyes, V.; Lee, E.

    1976-01-01

    The application of NASTRAN to design studies of two very large-area lightweight structures is described. The first is the Satellite Solar Power Station, while the second is a deployable three hundred meter diameter antenna. A brief discussion of the operation of the SSPS is given, followed by a description of the structure. The use of the NASTRAN program for static, vibration and thermal analysis is illustrated and some results are given. Next, the deployable antenna is discussed and the use of NASTRAN for static analysis, buckling analysis and vibration analysis is detailed.