Science.gov

Sample records for large non-orthogonal STBCs

  1. Heat welding of non-orthogonal X-junction of single-walled carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Yang, Xueming; Han, Zhonghe; Li, Yonghua; Chen, Dongci; Zhang, Pu; To, Albert C.

    2012-09-01

    Though X-junctions of single-walled carbon nanotubes (SWCNTs) have been intensively studied, studies concerning non-orthogonal X-junctions are still very rare. In this paper, the heat welding of defect-free non-orthogonal X-junctions with different crossing angles is investigated by molecular dynamics simulations. The difference between the heat welding of non-orthogonal and orthogonal X-junctions is described, and the effect of the crossing angle on the configuration and stability of the heat-welded non-orthogonal X-junctions is discussed. Compared with the orthogonal X-junction, two crossed SWCNTs with a smaller non-orthogonal angle are easier to join by heat welding, and this may be an important reason why large tubes are difficult to join, whereas large nanotube bundles are easier to observe in experiments.

  2. Accurate Calculation of Oscillator Strengths for Cl II Lines Using Non-orthogonal Wavefunctions

    NASA Technical Reports Server (NTRS)

    Tayal, S. S.

    2004-01-01

    The non-orthogonal orbitals technique in the multiconfiguration Hartree-Fock approach is used to calculate oscillator strengths and transition probabilities for allowed and intercombination lines in Cl II. The relativistic corrections are included through the Breit-Pauli Hamiltonian. The Cl II wave functions show strong term dependence, and the non-orthogonal orbitals are used to describe this term dependence of the radial functions. Large sets of spectroscopic and correlation functions are chosen to adequately describe the strong interactions in the 3s²3p³nl ³P°, ¹P°, and ³D° Rydberg series and to properly account for the important correlation and relaxation effects. The length and velocity forms of the oscillator strengths show good agreement for most transitions. The calculated radiative lifetime for the 3s3p⁵ ³P° state is in good agreement with experiment.

  3. Asymptotic Performance Analysis of STBCs from Coordinate Interleaved Orthogonal Designs in Shadowed Rayleigh Fading Channels

    NASA Astrophysics Data System (ADS)

    Yoon, Chanho; Lee, Hoojin; Kang, Joonhyuk

    In this letter, we provide an asymptotic error rate performance evaluation of space-time block codes from coordinate interleaved orthogonal designs (STBCs-CIODs), especially in shadowed Rayleigh fading channels. By evaluating a simplified probability density function (PDF) of the Rayleigh and Rayleigh-lognormal channels affecting the STBC-CIOD system, we derive an accurate closed-form approximation for the tight upper and lower bounds on the symbol error rate (SER). We show that shadowing asymptotically affects the coding gain only, and conclude that an increase in diversity order under shadowing causes slower convergence to the asymptotic bound due to the relatively larger loss of coding gain. By comparing the derived formulas with Monte-Carlo simulations, we validate the accuracy of the theoretical results.
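
    The closed-form bounds here are specific to STBCs-CIODs, but the asymptotic picture they describe, with diversity order setting the slope of the SER-versus-SNR curve and coding gain shifting it, can be visualized with a generic Monte-Carlo sketch. The following uses BPSK with maximal-ratio combining over i.i.d. Rayleigh branches purely as an illustrative stand-in for a diversity scheme; it is not the letter's system or derivation.

```python
import numpy as np

def mrc_ser(snr_db, n_branches, n_sym=200_000, rng=None):
    """Monte-Carlo SER of BPSK with maximal-ratio combining over
    n_branches i.i.d. Rayleigh branches (a generic diversity stand-in,
    not the STBC-CIOD system of the letter)."""
    rng = rng or np.random.default_rng(0)
    snr = 10 ** (snr_db / 10)
    s = rng.choice([-1.0, 1.0], size=n_sym)
    h = (rng.standard_normal((n_sym, n_branches)) +
         1j * rng.standard_normal((n_sym, n_branches))) / np.sqrt(2)
    n = (rng.standard_normal((n_sym, n_branches)) +
         1j * rng.standard_normal((n_sym, n_branches))) / np.sqrt(2)
    y = np.sqrt(snr) * h * s[:, None] + n
    z = np.real(np.sum(np.conj(h) * y, axis=1))   # MRC combiner output
    return np.mean(np.sign(z) != s)

# Diversity order appears as the slope of SER vs SNR on a log-log plot:
for L in (1, 2):
    print(L, [mrc_ser(snr, L) for snr in (5, 10, 15)])
```

    Higher diversity order steepens the SER curve; a channel impairment that only costs coding gain (like the shadowing analyzed in the letter) shifts the curve without changing its slope.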

  4. The VOLMAX Transient Electromagnetic Modeling System, Including Sub-Cell Slots and Wires on Random Non-Orthogonal Cells

    SciTech Connect

    Riley, D.J.; Turner, C.D.

    1997-12-31

    VOLMAX is a three-dimensional transient volumetric Maxwell equation solver that operates on standard rectilinear finite-difference time-domain (FDTD) grids, non-orthogonal unstructured grids, or a combination of both types (hybrid grids). The algorithm is fully explicit. Open geometries are typically solved by embedding multiple unstructured regions into a simple rectilinear FDTD mesh. The grid types are fully connected at the mesh interfaces without the need for complex spatial interpolation. The approach permits detailed modeling of complex geometry while mitigating the large cell count typical of non-orthogonal cells such as tetrahedral elements. To further improve efficiency, the unstructured region carries a separate time step that sub-cycles relative to the time-step used in the FDTD mesh.

  5. Implementation of generalized quantum measurements for unambiguous discrimination of multiple non-orthogonal coherent states.

    PubMed

    Becerra, F E; Fan, J; Migdall, A

    2013-01-01

    Generalized quantum measurements implemented to allow for measurement outcomes termed inconclusive can perform perfect discrimination of non-orthogonal states, a task which is impossible using only measurements with definitive outcomes. Here we demonstrate such generalized quantum measurements for unambiguous discrimination of four non-orthogonal coherent states and obtain their quantum mechanical description, the positive-operator valued measure. For practical realizations of this positive-operator valued measure, where noise and realistic imperfections prevent perfect unambiguous discrimination, we show that our experimental implementation outperforms any ideal standard-quantum-limited measurement performing the same non-ideal unambiguous state discrimination task for coherent states with low mean photon numbers. PMID:23774177
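
    For the simplest textbook instance, two equiprobable coherent states |α⟩ and |-α⟩, the maximum unambiguous-discrimination success probability is the Ivanovic-Dieks-Peres bound 1 - |⟨α|-α⟩| = 1 - exp(-2|α|²); the four-state POVM demonstrated in this paper generalizes this idea. A small illustrative sketch of the two-state bound only (not the paper's four-state measurement):

```python
import numpy as np

def usd_success_bound(alpha):
    """Ivanovic-Dieks-Peres bound for unambiguously discriminating the two
    equiprobable coherent states |alpha> and |-alpha>:
    P_success <= 1 - |<alpha|-alpha>| = 1 - exp(-2|alpha|^2)."""
    return 1.0 - np.exp(-2.0 * abs(alpha) ** 2)

# At low mean photon number |alpha|^2 the states overlap strongly and
# most trials must end inconclusive:
for n_mean in (0.2, 0.5, 1.0, 2.0):
    print(n_mean, round(usd_success_bound(np.sqrt(n_mean)), 4))
```
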

  6. Non-Orthogonality of Seafloor Spreading: A New Look at Fast Spreading Centers

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Gordon, R. G.

    2015-12-01

    Most of Earth's surface is created by seafloor spreading. While most seafloor spreading is orthogonal, that is, the strike of mid-ocean ridge segments is perpendicular to nearby transform faults, examples of significant non-orthogonality have been noted since the 1970s, in particular in regions of slow seafloor spreading such as the western Gulf of Aden with non-orthogonality up to 45°. In contrast, here we focus on fast and ultra-fast seafloor spreading along the East Pacific Rise. To estimate non-orthogonality, we compare ridge-segment strikes with the direction of plate motion determined from the angular velocity that best fits all the data along the boundary of a single plate pair [DeMets et al., 2010]. The advantages of this approach include greater accuracy and the ability to estimate non-orthogonality where there are no nearby transform faults. Estimating the strikes of fast-spreading mid-ocean ridge segments presents several challenges, as non-transform offsets on various scales affect the estimate of the strike. While spreading is orthogonal or nearly orthogonal along much of the East Pacific Rise, some ridge segments along the Pacific-Nazca boundary near 30°S and near 16°S-22°S deviate from orthogonality by as much as 6°-12° even when we exclude the portions of mid-ocean ridge segments involved in overlapping spreading centers. Thus modest but significant non-orthogonality occurs where seafloor spreading is the fastest on the planet. If a plume lies near the ridge segment, we assume it contributes to magma overpressure along the ridge segment [Abelson & Agnon, 1997]. We further assume that the contribution to magma overpressure is proportional to the buoyancy flux of the plume [Sleep, 1990] and inversely proportional to the distance between the mid-ocean ridge segment and a given plume. We find that the non-orthogonal angle tends to decrease with increasing spreading rate and with increasing distance between ridge segment and plume.

  7. The spatial-matched-filter beam pattern of a biaxial non-orthogonal velocity sensor

    NASA Astrophysics Data System (ADS)

    Lee, Charles Hung; Lee, Hye Rin Lindsay; Wong, Kainam Thomas; Razo, Mario

    2016-04-01

    This work derives the "spatial matched filter" beam pattern of a "u-u probe", which comprises two uniaxial velocity sensors that are identical, collocated, and nominally oriented in orthogonality. In real-world hardware implementation this orthogonality may not be realized, and the resulting non-orthogonality would cause a beamformer to have a systematic pointing error, which is derived analytically in this paper. Apart from this pointing error, the paper's analysis shows that the beam shape would otherwise be unchanged.

  8. The gravitational Hamiltonian in the presence of non-orthogonal boundaries

    NASA Astrophysics Data System (ADS)

    Hawking, S. W.; Hunter, C. J.

    1996-10-01

    This paper generalizes earlier work on Hamiltonian boundary terms by omitting the requirement that the spacelike hypersurfaces intersect the timelike boundary orthogonally. The expressions for the action and Hamiltonian are calculated and the required subtraction of a background contribution is discussed. The new features of a Hamiltonian formulation with non-orthogonal boundaries are then illustrated in two examples.

  9. Fairness for Non-Orthogonal Multiple Access in 5G Systems

    NASA Astrophysics Data System (ADS)

    Timotheou, Stelios; Krikidis, Ioannis

    2015-10-01

    In non-orthogonal multiple access (NOMA) downlink, multiple data flows are superimposed in the power domain and user decoding is based on successive interference cancellation. NOMA's performance highly depends on the power split among the data flows and the associated power allocation (PA) problem. In this letter, we study NOMA from a fairness standpoint and we investigate PA techniques that ensure fairness for the downlink users under i) instantaneous channel state information (CSI) at the transmitter, and ii) average CSI. Although the formulated problems are non-convex, we have developed low-complexity polynomial algorithms that yield the optimal solution in both cases considered.
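
    In the simplest two-user instance of this setup, the weak user's rate increases with its power share while the strong user's rate decreases, so the max-min fair split is where the two rates cross, and it can be found by bisection. A minimal sketch under illustrative assumptions (two users, fixed channel gains, unit noise); the letter's algorithms handle the general instantaneous- and average-CSI problems:

```python
import numpy as np

def noma_rates(a1, g1, g2):
    """Two-user downlink NOMA rates (bits/s/Hz) for power split (a1, 1-a1).
    g1, g2: received SNRs of the weak and strong user; the strong user
    removes the weak user's signal by successive interference cancellation."""
    a2 = 1.0 - a1
    r1 = np.log2(1 + a1 * g1 / (a2 * g1 + 1))   # weak user, interference-limited
    r2 = np.log2(1 + a2 * g2)                   # strong user, after SIC
    return r1, r2

def maxmin_split(g1, g2, tol=1e-9):
    """Bisect on a1: r1 grows and r2 shrinks in a1, so the max-min fair
    point is where the two rates are equal."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        r1, r2 = noma_rates(mid, g1, g2)
        if r1 < r2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a1 = maxmin_split(g1=2.0, g2=20.0)
print(a1, noma_rates(a1, 2.0, 20.0))   # weak user gets the larger power share
```
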

  10. Non-Orthogonality of Seafloor Spreading: A New Look at Fast Spreading Centers

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Gordon, R. G.

    2014-12-01

    Most of Earth's surface is created by seafloor spreading, which is one of a handful of fundamental global tectonic processes. While most seafloor spreading is orthogonal, that is, the strike of mid-ocean ridge segments is perpendicular to transform faults, examples of significant non-orthogonality have been noted since the 1970s, in particular in regions of slow seafloor spreading such as the western Gulf of Aden, with non-orthogonality up to 45°. In contrast, here we focus on fast and ultra-fast seafloor spreading along the East Pacific Rise. For our analysis, instead of comparing the strike of mid-ocean ridges with the strike of nearby transform faults, the azimuth of which can be uncertain, we compare with the direction of plate motion determined from the angular velocity that best fits all the data along the boundary of a single plate pair [DeMets, Gordon, and Argus, 2010]. The advantages of our approach include greater accuracy and the ability to estimate non-orthogonality where there are no nearby transform faults. Estimating the strikes of fast-spreading mid-ocean ridge segments presents several challenges, as non-transform offsets on various scales affect the estimate of the strike. Moreover, the strike may vary considerably within a single ridge segment bounded by transform faults. This is especially evident near overlapping spreading centers, along which the strike varies rapidly with distance along a ridge segment. We use various bathymetric data sets to make our estimates, including ETOPO1 [Amante and Eakins, 2009] and GeoMapApp [Ryan et al., 2009]. While spreading is orthogonal or nearly orthogonal along much of the East Pacific Rise, it appears that some ridge segments along the Pacific-Nazca boundary near 30°S and near 16°S-22°S deviate significantly from orthogonality, by as much as 6°-12°, even when we exclude the portions of mid-ocean ridge segments involved in overlapping spreading centers. Thus modest but significant non-orthogonality occurs where seafloor spreading is the fastest on the planet.

  11. Non-orthogonal optical multicarrier access based on filter bank and SCMA.

    PubMed

    Liu, Bo; Zhang, Lijia; Xin, Xiangjun

    2015-10-19

    This paper proposes a novel non-orthogonal optical multicarrier access system based on filter bank and sparse code multiple access (SCMA). It offers relaxed frequency-offset requirements and better spectral efficiency for multicarrier access. An experiment with a 73.68 Gb/s filter bank-based multicarrier (FBMC) SCMA system over a 60 km single-mode fiber link is performed to demonstrate the feasibility. The comparison between fast Fourier transform (FFT) based multicarrier and the proposed scheme is also investigated in the experiment. PMID:26480395

  12. Transducer Shadowing Explains Observed Underestimates in Vertical Wind Velocity from Non-orthogonal Sonic Anemometers

    NASA Astrophysics Data System (ADS)

    Frank, J. M.; Massman, W. J.; Swiatek, E.; Zimmerman, H.; Ewers, B. E.

    2014-12-01

    Sonic anemometry is fundamental to all eddy-covariance studies of surface energy and ecosystem carbon and water balance. While recent studies have shown that some anemometers underestimate vertical wind, we hypothesize that this is caused by the lack of transducer shadowing correction in non-orthogonal models. We tested this in an experiment comparing three sonic anemometer designs: orthogonal (O), non-orthogonal (NO), and quasi-orthogonal (QO), using four models: K-probe (O) and A-probe (NO) (Applied Technologies, Inc.) and CSAT3 (NO) and CSAT3V (QO) (Campbell Scientific, Inc.). For each week of a 12-week experiment at the GLEES AmeriFlux site, five instruments from a pool of twelve (three of each model) were randomly selected and located around a control (CSAT3); mid-week, all but the control were re-mounted horizontally. We used Bayesian analysis to test differences between models in half-hour standard deviations (σu, σv, σw, and σT), turbulent kinetic energy (TKE), and the ratio between vertical/horizontal TKE (VHTKE). The K-probe experiences horizontal transducer shadowing which is effectively corrected using an established wind-tunnel derived algorithm. We constructed shadow correction algorithms for the NO/QO anemometers by applying the K-probe function to each non-orthogonal transducer pair (SC1), as well as a stronger correction of twice the magnitude (SC2). While the partitioning of VHTKE was higher in O than in NO/QO anemometers, the application of SC1 explained 45-60% of this discrepancy while SC2 overcorrected it. During the horizontal manipulation, changes in the NO/QO anemometers were moderate in σu (4-8% decrease), very strong in σv (9-11% decrease), and minimal in σw (-3 to 4% change), while only σu measurements changed (3% decrease) with the K-probe. These changes were predicted by both shadow correction algorithms, with SC2 better explaining the data. This confirms our hypothesis while eliminating others that attribute the underestimate to a systematic bias in

  13. Non-Orthogonality of Seafloor Spreading: A New Global Survey Building on the MORVEL Plate Motion Project

    NASA Astrophysics Data System (ADS)

    Throckmorton, C. R.; Zhang, T.; Gordon, R. G.

    2013-12-01

    Most of Earth's surface is created by seafloor spreading, which is one of a handful of fundamental global tectonic processes. While most seafloor spreading is orthogonal, that is, the strike of mid-ocean ridge segments is perpendicular to transform faults, examples of significant non-orthogonality have been noted since the 1970s, in particular in regions of slow seafloor spreading such as the western Gulf of Aden. Here we present a new global analysis of non-orthogonality of seafloor spreading by building on the results of the MORVEL global plate motion project, including both new estimates of plate angular velocities and global estimates of the strikes of mid-ocean ridge segments [DeMets, Gordon, & Argus, 2010]. For our analysis, instead of comparing the strike of mid-ocean ridges with the strike of nearby transform faults, the azimuth of which can be uncertain, we compare with the direction of plate motion determined from the angular velocity that best fits all the data along the boundary of a single plate pair. The advantages of our approach include greater accuracy and the ability to estimate non-orthogonality where there are no nearby transform faults. Unsurprisingly, we confirm that most seafloor spreading is within a few degrees of orthogonality. Moreover, we confirm non-orthogonality in many previously recognized regions of slow seafloor spreading. Surprisingly, however, we find non-orthogonality in several regions of fast seafloor spreading. Implications for mid-ocean ridge processes and hypothesized lithosphere deformation will be discussed.

  14. A New Algorithm for Complex Non-Orthogonal Joint Diagonalization Based on Shear and Givens Rotations

    NASA Astrophysics Data System (ADS)

    Mesloub, Ammar; Abed-Meraim, Karim; Belouchrani, Adel

    2014-04-01

    This paper introduces a new algorithm to approximate the non-orthogonal joint diagonalization (NOJD) of a set of complex matrices. The algorithm is based on the Frobenius norm formulation of the joint diagonalization (JD) problem and takes advantage of combining Givens and Shear rotations to attempt the approximate JD. It represents a non-trivial generalization of the JDi (Joint Diagonalization) algorithm (Souloumiac 2009) to the complex case. The JDi is first slightly modified and then generalized to the CJDi (i.e., Complex JDi) using a complex-to-real matrix transformation. Also, since several NOJD methods already exist in the literature, we provide herein a brief overview of existing NOJD algorithms and then an extensive comparative study to illustrate the effectiveness and stability of the CJDi w.r.t. various system parameters and application contexts.
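
    As a hedged sketch of what NOJD algorithms such as CJDi minimize, here is the Frobenius-norm off-diagonality criterion evaluated on a synthetic matrix set built with a common non-unitary mixing, for which the inverse mixing is the exact diagonalizer. The Givens-plus-Shear minimization itself is beyond this sketch; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def off_cost(B, mats):
    """Frobenius-norm joint-diagonalization criterion:
    sum_k || off-diagonal part of B @ C_k @ B^H ||_F^2."""
    c = 0.0
    for C in mats:
        T = B @ C @ B.conj().T
        c += np.linalg.norm(T - np.diag(np.diag(T))) ** 2
    return c

# Synthetic set C_k = A D_k A^H sharing one (non-unitary) mixing matrix A.
n, K = 4, 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
mats = [A @ np.diag(rng.standard_normal(n)) @ A.conj().T for _ in range(K)]

B = np.linalg.inv(A)   # the exact non-orthogonal joint diagonalizer
print(off_cost(np.eye(n), mats), off_cost(B, mats))   # cost drops to ~0 at B
```

    An NOJD solver searches for B without knowing A, driving this cost down with structured elementary transformations instead of an explicit inverse.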

  15. Simultaneous Source Localization and Polarization Estimation via Non-Orthogonal Joint Diagonalization with Vector-Sensors

    PubMed Central

    Gong, Xiao-Feng; Wang, Ke; Lin, Qiu-Hua; Liu, Zhi-Wen; Xu, You-Gen

    2012-01-01

    Joint estimation of direction-of-arrival (DOA) and polarization with electromagnetic vector-sensors (EMVS) is considered in the framework of complex-valued non-orthogonal joint diagonalization (CNJD). Two new CNJD algorithms are presented, which propose to tackle the high dimensional optimization problem in CNJD via a sequence of simple sub-optimization problems, by using LU or LQ decompositions of the target matrices as well as the Jacobi-type scheme. Furthermore, based on the above CNJD algorithms we present a novel strategy to exploit the multi-dimensional structure present in the second-order statistics of EMVS outputs for simultaneous DOA and polarization estimation. Simulations are provided to compare the proposed strategy with existing tensorial or joint diagonalization based methods. PMID:22737015

  16. Efficient computation of Hamiltonian matrix elements between non-orthogonal Slater determinants

    NASA Astrophysics Data System (ADS)

    Utsuno, Yutaka; Shimizu, Noritaka; Otsuka, Takaharu; Abe, Takashi

    2013-01-01

    We present an efficient numerical method for computing Hamiltonian matrix elements between non-orthogonal Slater determinants, focusing on the most time-consuming component of the calculation that involves a sparse array. In the usual case where many matrix elements should be calculated, this computation can be transformed into a multiplication of dense matrices. It is demonstrated that the present method based on the matrix-matrix multiplication attains ~80% of the theoretical peak performance measured on systems equipped with modern microprocessors, a factor of 5-10 better than the normal method using indirectly indexed arrays to treat a sparse array. The reason for such different performances is discussed from the viewpoint of memory access.
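
    The loop-to-GEMM transformation described above is generic: when many bra-ket style contractions share the same operator, stacking the vectors into matrices turns scattered, indirectly indexed work into dense matrix-matrix multiplications that optimized BLAS executes near peak. A minimal illustrative sketch (generic vectors and operator, not the authors' Slater-determinant code):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 120, 64
X = rng.standard_normal((m, n))   # rows play the role of bra states
Y = rng.standard_normal((m, n))   # rows play the role of ket states
M = rng.standard_normal((n, n))   # an operator in some single-particle basis

# Naive: one element at a time, with per-element indexing overhead.
naive = np.empty((m, m))
for i in range(m):
    for j in range(m):
        naive[i, j] = X[i] @ M @ Y[j]

# Batched: the same numbers as two dense matrix multiplications,
# which run close to the hardware's peak through optimized BLAS.
batched = X @ M @ Y.T

print(np.max(np.abs(naive - batched)))   # agreement to rounding error
```
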

  17. A Novel Attitude Estimation Algorithm Based on the Non-Orthogonal Magnetic Sensors.

    PubMed

    Zhu, Jianliang; Wu, Panlong; Bo, Yuming

    2016-01-01

    Because the existing extremum ratio method for projectile attitude measurement is vulnerable to random disturbance, a novel integral ratio method is proposed to calculate the projectile attitude. First, the non-orthogonal measurement theory of the magnetic sensors is analyzed. It is found that the projectile rotating velocity is constant in one spinning circle and the attitude error is actually the pitch error. Next, by investigating the model of the extremum ratio method, an integral ratio mathematical model is established to improve the anti-disturbance performance. Finally, by combining the preprocessed magnetic sensor data based on the least-squares method and the rotating extremum features in one cycle, the analytical expression of the proposed integral ratio algorithm is derived with respect to the pitch angle. The simulation results show that the proposed integral ratio method gives more accurate attitude calculations than does the extremum ratio method, and that the attitude error variance can decrease by more than 90%. Compared to the extremum ratio method (which collects only a single data point in one rotation cycle), the proposed integral ratio method can utilize all of the data collected in the high-spin environment, which is a clearly superior calculation approach and can be applied under actual projectile environment disturbances. PMID:27213389

  18. A Novel Attitude Estimation Algorithm Based on the Non-Orthogonal Magnetic Sensors

    PubMed Central

    Zhu, Jianliang; Wu, Panlong; Bo, Yuming

    2016-01-01

    Because the existing extremum ratio method for projectile attitude measurement is vulnerable to random disturbance, a novel integral ratio method is proposed to calculate the projectile attitude. First, the non-orthogonal measurement theory of the magnetic sensors is analyzed. It is found that the projectile rotating velocity is constant in one spinning circle and the attitude error is actually the pitch error. Next, by investigating the model of the extremum ratio method, an integral ratio mathematical model is established to improve the anti-disturbance performance. Finally, by combining the preprocessed magnetic sensor data based on the least-squares method and the rotating extremum features in one cycle, the analytical expression of the proposed integral ratio algorithm is derived with respect to the pitch angle. The simulation results show that the proposed integral ratio method gives more accurate attitude calculations than does the extremum ratio method, and that the attitude error variance can decrease by more than 90%. Compared to the extremum ratio method (which collects only a single data point in one rotation cycle), the proposed integral ratio method can utilize all of the data collected in the high-spin environment, which is a clearly superior calculation approach and can be applied under actual projectile environment disturbances. PMID:27213389

  19. Spatio-Temporal Evolutions of Non-Orthogonal Equatorial Wave Modes Derived from Observations

    NASA Astrophysics Data System (ADS)

    Barton, C.; Cai, M.

    2015-12-01

    Equatorial waves have been studied extensively due to their importance to the tropical climate and weather systems. Historically, their activity is diagnosed mainly in the wavenumber-frequency domain. Recently, many studies have projected observational data onto parabolic cylinder functions (PCF), which represent the meridional structure of individual wave modes, to attain time-dependent spatial wave structures. In this study, we propose a methodology that seeks to identify individual wave modes in instantaneous fields of observations by determining their projections on PCF modes according to the equatorial wave theory. The new method has the benefit of yielding a closed system with a unique solution for all waves' spatial structures, including IG waves, for a given instantaneous observed field. We have applied our method to the ERA-Interim reanalysis dataset in the tropical stratosphere where the wave-mean flow interaction mechanism for the quasi-biennial oscillation (QBO) is well-understood. We have confirmed the continuous evolution of the selection mechanism for equatorial waves in the stratosphere from observations as predicted by the theory for the QBO. This also validates the proposed method for decomposition of observed tropical wave fields into non-orthogonal equatorial wave modes.
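
    The PCF meridional structures that such studies project onto are, in non-dimensional form, the normalized Hermite functions of equatorial shallow-water theory. As a hedged, minimal sketch of the basic projection step only (not the paper's closed-system method, which additionally resolves the non-orthogonality between wave types), with an illustrative synthetic profile:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def pcf_mode(n, y):
    """Normalized parabolic-cylinder (Hermite) function: the meridional
    structure of equatorial wave mode n in shallow-water theory."""
    c = np.zeros(n + 1); c[n] = 1.0
    norm = math.sqrt(math.sqrt(math.pi) * 2.0 ** n * math.factorial(n))
    return hermval(y, c) * np.exp(-y ** 2 / 2.0) / norm

y = np.linspace(-8.0, 8.0, 4001)   # non-dimensional meridional coordinate
dy = y[1] - y[0]

# Illustrative synthetic field: a mix of modes 0 and 2.
field = 0.3 * pcf_mode(0, y) + 0.7 * pcf_mode(2, y)

# Project back onto the first few PCF modes by quadrature.
coeffs = [np.sum(field * pcf_mode(n, y)) * dy for n in range(4)]
print(np.round(coeffs, 4))   # ≈ 0.3, 0.0, 0.7, 0.0
```
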

  20. Optimized Non-Orthogonal Localized Orbitals for Linear Scaling Quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Williamson, Andrew; Reboredo, Fernando; Galli, Giulia

    2004-03-01

    It has been shown [1] that Quantum Monte Carlo calculations of total energies of interacting systems can be made to scale nearly linearly with the number of electrons (N) by using localized single-particle orbitals to construct Slater determinants. Here we propose a new way of defining the localized orbitals required for O(N)-QMC calculations, by minimizing an appropriate cost function to yield a set of N non-orthogonal (NO) localized orbitals that are considerably smoother in real space than maximally localized Wannier functions (MLWFs). These NO orbitals have better localization properties than MLWFs. We show that for semiconducting systems NO orbitals can be localized in a much smaller region of space than orthogonal orbitals (typically, one eighth of the volume) and give total energies with the same accuracy, thus yielding a linear-scaling QMC algorithm which is 5 times faster than the one originally proposed [1]. We also discuss the extension of O(N)-QMC with NO orbitals to calculations of total energies of metallic systems. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. [1] A. J. Williamson, R. Q. Hood and J. C. Grossman, Phys. Rev. Lett. 87, 246406 (2001)

  1. A Non-Orthogonal Block-Localized Effective Hamiltonian Approach for Chemical and Enzymatic Reactions

    PubMed Central

    Cembran, Alessandro; Payaka, Apirak; Lin, Yen-lin; Xie, Wangshen; Mo, Yirong; Song, Lingchun; Gao, Jiali

    2010-01-01

    The effective Hamiltonian-molecular orbital and valence bond (EH-MOVB) method based on non-orthogonal block-localized fragment orbitals has been implemented into the program CHARMM for molecular dynamics simulations of chemical and enzymatic reactions, making use of semiempirical quantum mechanical models. Building upon ab initio MOVB theory, we make use of two parameters in the EH-MOVB method to fit the barrier height and the relative energy between the reactant and product states for a given chemical reaction, bringing them into agreement with experiment or with high-level ab initio or density functional results. Consequently, the EH-MOVB method provides a highly accurate and computationally efficient QM/MM model for dynamics simulation of chemical reactions in solution. The EH-MOVB method is illustrated by examination of the potential energy surface of the hydride transfer reaction from trimethylamine to a flavin cofactor model in the gas phase. In the present study, we employed the semiempirical AM1 model, which yields a reaction barrier that is more than 5 kcal/mol too high. We use a parameter calibration procedure for the EH-MOVB method similar to that employed to adjust the results of semiempirical and empirical models. Thus, the relative energy of these two diabatic states can be shifted to reproduce the experimental energy of reaction, and the barrier height is optimized to reproduce the desired (accurate) value by adding a constant to the off-diagonal matrix element. The present EH-MOVB method offers a viable approach to characterizing solvent and protein-reorganization effects in the realm of combined QM/MM simulations. PMID:20694172

  2. A program for calculating photonic band structures, Green's functions and transmission/reflection coefficients using a non-orthogonal FDTD method

    NASA Astrophysics Data System (ADS)

    Ward, A. J.; Pendry, J. B.

    2000-06-01

    In this paper we present an updated version of our ONYX program for calculating photonic band structures using a non-orthogonal finite difference time domain method. This new version employs the same transparent formalism as the first version, with the same capabilities for calculating photonic band structures or causal Green's functions, but also includes extra subroutines for the calculation of transmission and reflection coefficients. Both the electric and magnetic fields are placed onto a discrete lattice by approximating the spatial and temporal derivatives with finite differences. This results in discrete versions of Maxwell's equations which can be used to integrate the fields forwards in time. The time required for a calculation using this method scales linearly with the number of real space points used in the discretization, so the technique is ideally suited to handling systems with large and complicated unit cells.
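
    ONYX's non-orthogonal formulation is beyond a short sketch, but the underlying FDTD machinery it generalizes (fields on a staggered lattice, finite-difference curls, leapfrog time stepping with cost linear in the number of grid points) can be illustrated in one dimension. A minimal sketch on an ordinary orthogonal grid in vacuum, with normalized units and illustrative parameters:

```python
import numpy as np

# Minimal 1-D FDTD leapfrog: E and H live on staggered grid points, and
# each step touches every point once, so cost per step is linear in grid size.
nx, nt = 400, 900
courant = 0.5                   # stable in 1-D for values <= 1
ez = np.zeros(nx)               # E on integer grid points
hy = np.zeros(nx - 1)           # H on half-integer grid points

for t in range(nt):
    hy += courant * np.diff(ez)                 # update H from curl of E
    ez[1:-1] += courant * np.diff(hy)           # update E from curl of H
    ez[50] += np.exp(-((t - 60) / 15.0) ** 2)   # soft Gaussian source

print(np.max(np.abs(ez)))   # pulse propagates; the scheme stays bounded
```

    The endpoints of `ez` are held at zero, which acts as a perfectly conducting (reflecting) boundary; band-structure codes instead impose Bloch-periodic boundary conditions on the unit cell.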

  3. Non-orthogonal spin-adaptation of coupled cluster methods: A new implementation of methods including quadruple excitations

    SciTech Connect

    Matthews, Devin A.; Stanton, John F.

    2015-02-14

    The theory of non-orthogonal spin-adaptation for closed-shell molecular systems is applied to coupled cluster methods with quadruple excitations (CCSDTQ). Calculations at this level of detail are of critical importance in describing the properties of molecular systems to an accuracy which can meet or exceed modern experimental techniques. Such calculations are of significant (and growing) importance in such fields as thermodynamics, kinetics, and atomic and molecular spectroscopies. With respect to the implementation of CCSDTQ and related methods, we show that there are significant advantages to non-orthogonal spin-adaptation with respect to simplification and factorization of the working equations and to creating an efficient implementation. The resulting algorithm is implemented in the CFOUR program suite for CCSDT, CCSDTQ, and various approximate methods (CCSD(T), CC3, CCSDT-n, and CCSDT(Q)).

  4. On the Performance of Non-Orthogonal Multiple Access in 5G Systems with Randomly Deployed Users

    NASA Astrophysics Data System (ADS)

    Ding, Zhiguo; Yang, Zheng; Fan, Pingzhi; Poor, H. Vincent

    2014-12-01

    In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users. The developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates; however, the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power. In particular, a wrong choice of the targeted data rates and allocated power can lead to a situation in which the user's outage probability is always one, i.e. the user's targeted quality of service will never be met.
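
    The always-in-outage observation can be made concrete in the two-user case: the weak user's SINR is bounded above by the ratio of the power allocations no matter how strong the channel is, so any target rate above that ceiling fails with probability one. A hedged numerical sketch under illustrative assumptions (generic two-user NOMA over Rayleigh fading, not the letter's general analysis with randomly deployed users):

```python
import numpy as np

def outage_prob(a1, r1_target, snr_db, n=100_000, rng=None):
    """Monte-Carlo outage probability of the weak NOMA user over Rayleigh
    fading: outage when log2(1 + a1*g / ((1-a1)*g + 1)) < r1_target."""
    rng = rng or np.random.default_rng(0)
    snr = 10 ** (snr_db / 10)
    g = snr * rng.exponential(1.0, n)        # received SNR, |h|^2 ~ Exp(1)
    sinr = a1 * g / ((1 - a1) * g + 1)
    return np.mean(np.log2(1 + sinr) < r1_target)

# The weak user's SINR is capped at a1/(1-a1) regardless of the channel,
# so any target above log2(1 + a1/(1-a1)) is in outage with certainty:
a1 = 0.6
cap = np.log2(1 + a1 / (1 - a1))         # ceiling ~ 1.32 bits/s/Hz here
print(outage_prob(a1, 1.0, 30))          # feasible target: rare outage
print(outage_prob(a1, 1.5, 30))          # infeasible target: always one
```
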

  5. The non-orthogonal fixed beam arrangement for the second proton therapy facility at the National Accelerator Center

    NASA Astrophysics Data System (ADS)

    Schreuder, A. N.; Jones, D. T. L.; Conradie, J. L.; Fourie, D. T.; Botha, A. H.; Müller, A.; Smit, H. A.; O'Ryan, A.; Vernimmen, F. J. A.; Wilson, J.; Stannard, C. E.

    1999-06-01

    The medical user group at the National Accelerator Center (NAC) is currently unable to treat all eligible patients with high energy protons. Developing a second proton treatment room is desirable since the 200 MeV proton beam from the NAC separated sector cyclotron is currently under-utilized during proton therapy sessions. During the patient positioning phase in one treatment room, the beam could be used for therapy in a second room. The second proton therapy treatment room at the NAC will be equipped with two non-orthogonal beam lines, one horizontal and one at 30 degrees to the vertical. The two beams will have a common isocentre. This beam arrangement, together with a versatile patient positioning system (commercial robot arm), will provide the radiation oncologist with a diversity of possible beam arrangements and offers a reasonably cost-effective alternative to an isocentric gantry.

  6. The non-orthogonal fixed beam arrangement for the second proton therapy facility at the National Accelerator Center

    SciTech Connect

    Schreuder, A. N.; Jones, D. T. L.; Conradie, J. L.; Fourie, D. T.; Botha, A. H.; Mueller, A.; Smit, H. A.; O'Ryan, A.; Vernimmen, F. J. A.; Wilson, J.; Stannard, C. E.

    1999-06-10

    The medical user group at the National Accelerator Center (NAC) is currently unable to treat all eligible patients with high energy protons. Developing a second proton treatment room is desirable since the 200 MeV proton beam from the NAC separated sector cyclotron is currently under-utilized during proton therapy sessions. During the patient positioning phase in one treatment room, the beam could be used for therapy in a second room. The second proton therapy treatment room at the NAC will be equipped with two non-orthogonal beam lines, one horizontal and one at 30 degrees to the vertical. The two beams will have a common isocentre. This beam arrangement together with a versatile patient positioning system (commercial robot arm) will provide the radiation oncologist with a diversity of possible beam arrangements and offers a reasonably cost-effective alternative to an isocentric gantry.

  7. Size consistent formulations of the perturb-then-diagonalize Møller-Plesset perturbation theory correction to non-orthogonal configuration interaction.

    PubMed

    Yost, Shane R; Head-Gordon, Martin

    2016-08-01

    In this paper we introduce two size consistent forms of the non-orthogonal configuration interaction with second-order Møller-Plesset perturbation theory method, NOCI-MP2. We show that the original NOCI-MP2 formulation [S. R. Yost, T. Kowalczyk, and T. Van Voorhis, J. Chem. Phys. 139, 174104 (2013)], which is a perturb-then-diagonalize multi-reference method, is not size consistent. We also show that this causes significant errors in large systems like the linear acenes. By contrast, the size consistent versions of the method give satisfactory results for singlet and triplet excited states when compared to other multi-reference methods that include dynamic correlation. For NOCI-MP2, however, the number of required determinants to yield similar levels of accuracy is significantly smaller. These results show the promise of the NOCI-MP2 method, though work still needs to be done in creating a more consistent black-box approach to computing the determinants that comprise the many-electron NOCI basis. PMID:27497537

  8. Size consistent formulations of the perturb-then-diagonalize Møller-Plesset perturbation theory correction to non-orthogonal configuration interaction

    NASA Astrophysics Data System (ADS)

    Yost, Shane R.; Head-Gordon, Martin

    2016-08-01

    In this paper we introduce two size consistent forms of the non-orthogonal configuration interaction with second-order Møller-Plesset perturbation theory method, NOCI-MP2. We show that the original NOCI-MP2 formulation [S. R. Yost, T. Kowalczyk, and T. Van Voorhis, J. Chem. Phys. 139, 174104 (2013)], which is a perturb-then-diagonalize multi-reference method, is not size consistent. We also show that this causes significant errors in large systems like the linear acenes. By contrast, the size consistent versions of the method give satisfactory results for singlet and triplet excited states when compared to other multi-reference methods that include dynamic correlation. For NOCI-MP2, however, the number of required determinants to yield similar levels of accuracy is significantly smaller. These results show the promise of the NOCI-MP2 method, though work still needs to be done in creating a more consistent black-box approach to computing the determinants that comprise the many-electron NOCI basis.

  9. Multiphase flow modelling using non orthogonal collocated finite volumes : application to fluid catalytical cracking and large scale geophysical flows.

    NASA Astrophysics Data System (ADS)

    Martin, R. M.; Nicolas, A. N.

    2003-04-01

    A modeling approach to gas-solid flow, taking into account different physical phenomena such as gas turbulence and inter-particle interactions, is presented. Moment transport equations are derived for the second-order fluctuating velocity tensor, allowing practical closures based on single-phase turbulence modeling on the one hand and on the kinetic theory of granular media on the other. The model is applied to fluid catalytic cracking processes and explosive volcanism. In industry as well as in the geophysical community, multiphase flows are modeled using a finite volume approach and a multicorrector algorithm in time in order to determine implicitly the pressures, velocities and volume fractions of each phase. Pressures and velocities are generally determined half a mesh step apart, following the staggered grid approach. This ensures stability and prevents oscillations in pressure, and it can handle almost all Reynolds number ranges for all speeds and viscosities. The disadvantages appear when more complex geometries are treated or when a generalized curvilinear formulation of the conservation equations is considered: too many interpolations have to be done, and accuracy is then lost. In order to overcome these problems, we use here a similar algorithm in time and a Rhie and Chow (1983) interpolation of the collocated variables, essentially the velocities at the interfaces. The Rhie and Chow interpolation of the velocities at the finite volume interfaces prevents pressure oscillations (checkerboard effects) and stabilizes the whole algorithm. In a first predictor step, fluxes at the interfaces of the finite volumes are computed using 2nd- and 3rd-order shock-capturing schemes of MUSCL/TVD or Van Leer type; the orthogonal stress components are treated implicitly, while cross viscous/diffusion terms are treated explicitly.
Pentadiagonal linear systems are solved in each geometrical direction (the so-called Alternate Direction Implicit algorithm) to reduce the cost of computation. Then a multi-correction of the interpolated velocities, pressures and volume fractions of each phase is done in the Cartesian frame, or in the deformed local curvilinear coordinate system, until convergence and mass conservation are reached. Throughout this process the momentum exchange forces and the interphase heat exchanges are treated implicitly to ensure stability. To reduce the computational cost, a domain decomposition strategy is adopted with an overlapping procedure at the interfaces between subdomains. We show here two cases involving non-Cartesian computational domains: a two-phase volcanic flow along a realistic topography, and a gas-particle flow in the complex vertical conduit (riser) geometry used in industrial fluid catalytic cracking plants. With an initial Richardson number of 0.16, slightly higher than the critical Richardson number of 0.1, particles and water vapor are injected at the bottom of the riser. Countercurrents appear near the walls, and gravity effects begin to dominate, inducing an increase of the particulate volume fractions near the walls. We show the hydrodynamics over 13 s.
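The Rhie and Chow interpolation mentioned in the abstract can be sketched in one dimension. The face-velocity formula below is a schematic collocated-grid illustration (the variable names and the momentum coefficients `d_P`, `d_E` are generic, not the paper's notation): the compact pressure-difference term couples adjacent cell pressures and suppresses checkerboard modes that the simple average would not see.

```python
def rhie_chow_face_velocity(u_P, u_E, p_W, p_P, p_E, p_EE, d_P, d_E, dx):
    """Face velocity between collocated cells P and E (1D sketch).

    u_face = mean cell velocity
             + d_f * (average of cell-centred pressure gradients
                      - compact face gradient (p_E - p_P)/dx)
    The compact term feels odd-even (checkerboard) pressure modes that the
    wide cell-centred gradients miss, which is what stabilizes the scheme.
    """
    d_f = 0.5 * (d_P + d_E)                 # interpolated momentum coefficient
    grad_P = (p_E - p_W) / (2.0 * dx)       # cell-centred gradient at P
    grad_E = (p_EE - p_P) / (2.0 * dx)      # cell-centred gradient at E
    u_bar = 0.5 * (u_P + u_E)
    return u_bar + d_f * (0.5 * (grad_P + grad_E) - (p_E - p_P) / dx)

# Uniform pressure: no correction, the face velocity is the plain average.
print(rhie_chow_face_velocity(1.0, 1.0, 5.0, 5.0, 5.0, 5.0, 0.1, 0.1, 1.0))  # 1.0
# Checkerboard pressure (+1, -1, +1, -1): the wide gradients vanish but the
# compact term does not, so the correction is nonzero and damps the mode.
print(rhie_chow_face_velocity(1.0, 1.0, 1.0, -1.0, 1.0, -1.0, 0.1, 0.1, 1.0))  # 0.8
```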

  10. Functional Implications of Ubiquitous Semicircular Canal Non-Orthogonality in Mammals

    PubMed Central

    Berlin, Jeri C.; Kirk, E. Christopher; Rowe, Timothy B.

    2013-01-01

    The ‘canonical model’ of semicircular canal orientation in mammals assumes that 1) the three ipsilateral canals of an inner ear exist in orthogonal planes (i.e., orthogonality), 2) corresponding left and right canal pairs have equivalent angles (i.e., angle symmetry), and 3) contralateral synergistic canals occupy parallel planes (i.e., coplanarity). However, descriptions of vestibular anatomy that quantify semicircular canal orientation in single species often diverge substantially from this model. Data for primates further suggest that semicircular canal orthogonality varies predictably with the angular head velocities encountered in locomotion. These observations raise the possibility that orthogonality, symmetry, and coplanarity are misleading descriptors of semicircular canal orientation in mammals, and that deviations from these norms could have significant functional consequences. Here we critically assess the canonical model of semicircular canal orientation using high-resolution X-ray computed tomography scans of 39 mammal species. We find that substantial deviations from orthogonality, angle symmetry, and coplanarity are the rule for the mammals in our comparative sample. Furthermore, the degree to which the semicircular canals of a given species deviate from orthogonality is negatively correlated with estimated vestibular sensitivity. We conclude that the available comparative morphometric data do not support the canonical model and that its overemphasis as a heuristic generalization obscures a large amount of functionally relevant variation in semicircular canal orientation between species. PMID:24260256
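Deviation from orthogonality as used above can be quantified directly: given the plane of each canal (represented by its unit normal), the deviation is the difference between the inter-canal angle and the 90° ideal. A minimal sketch with hypothetical canal normals:

```python
import numpy as np

def canal_deviation_deg(n1, n2):
    """Deviation (degrees) of the angle between two canal planes from the
    orthogonal ideal of 90 degrees; 0 means perfectly orthogonal canals."""
    n1 = np.asarray(n1, float)
    n2 = np.asarray(n2, float)
    cosang = abs(n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    angle = np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
    return 90.0 - angle

# Hypothetical normals: a lateral canal in the horizontal plane and an
# anterior canal tilted 10 degrees away from the orthogonal ideal.
ideal = canal_deviation_deg([0, 0, 1], [1, 0, 0])            # ~0.0
tilt = np.radians(80.0)
skewed = canal_deviation_deg([0, 0, 1],
                             [np.sin(tilt), 0.0, np.cos(tilt)])  # ~10.0
```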

  11. Three Dimensional Wind Speed and Flux Measurement over a Rain-fed Soybean Field Using Orthogonal and Non-orthogonal Sonic Anemometer Designs

    NASA Astrophysics Data System (ADS)

    Thomas, T.; Suyker, A.; Burba, G. G.; Billesbach, D.

    2014-12-01

    The eddy covariance method for estimating fluxes of trace gases, energy and momentum in the constant flux layer above a plant canopy fundamentally relies on accurate measurements of the vertical wind speed. This wind speed is typically measured using a three dimensional ultrasonic anemometer. These anemometers incorporate designs with transducer sets that are aligned either orthogonally or non-orthogonally. Previous studies comparing the two designs suggest differences in measured 3D wind speed components, in particular vertical wind speed, from the non-orthogonal transducer relative to the orthogonal design. These differences, attributed to additional flow distortion caused by the non-orthogonal transducer arrangement, directly affect fluxes of trace gases, energy and momentum. A field experiment is being conducted over a rain-fed soybean field at the AmeriFlux site (US-Ne3) near Mead, Nebraska. In this study, ultrasonic anemometers featuring orthogonal transducer sets (ATI Vx Probe) and non-orthogonal transducer sets (Gill R3-100) collect high frequency wind vector and sonic temperature data. Sensible heat and momentum fluxes and other key sonic performance data are evaluated based on environmental parameters including wind speed, wind direction, temperature, and angle of attack. Preliminary field experiment results are presented.
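The sensible heat flux evaluated in such sonic-anemometer comparisons is the covariance of vertical wind speed and sonic temperature fluctuations, H = ρ c_p ⟨w′T′⟩. A minimal block-averaged sketch (no detrending, coordinate rotation, or sonic-temperature corrections, which real eddy covariance processing would add):

```python
import numpy as np

def sensible_heat_flux(w, T, rho_air=1.2, cp=1005.0):
    """Eddy-covariance sensible heat flux H = rho * cp * <w'T'> in W m^-2.

    w : vertical wind speed samples (m/s), e.g. 10 or 20 Hz sonic data
    T : sonic temperature samples (K)
    Primes denote fluctuations about the averaging-period mean
    (simple block average over the whole record).
    """
    w = np.asarray(w, float)
    T = np.asarray(T, float)
    w_prime = w - w.mean()
    T_prime = T - T.mean()
    return rho_air * cp * float(np.mean(w_prime * T_prime))

# Tiny synthetic record: updrafts carry warmer air, so <w'T'> > 0
# and the flux is upward (positive).
H = sensible_heat_flux([1.0, -1.0, 1.0, -1.0],
                       [300.1, 299.9, 300.1, 299.9])
```

A systematic underestimate of w by one sonic design relative to the other propagates linearly into H, which is why vertical-wind accuracy is the focus of the comparison.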

  12. Reliable Attention Network Scores and Mutually Inhibited Inter-network Relationships Revealed by Mixed Design and Non-orthogonal Method

    PubMed Central

    Wang, Yi-Feng; Jing, Xiu-Juan; Liu, Feng; Li, Mei-Ling; Long, Zhi-Liang; Yan, Jin H.; Chen, Hua-Fu

    2015-01-01

    The attention system can be divided into alerting, orienting, and executive control networks. The efficiency and independence of attention networks have been widely tested with the attention network test (ANT) and its revised versions. However, many studies have failed to find effects of attention network scores (ANSs) and inter-network relationships (INRs). Moreover, the low reliability of ANSs cannot meet the demands of theoretical and empirical investigations. Two methodological factors (the inter-trial influence in the event-related design and the inter-network interference in the orthogonal contrast) may be responsible for the unreliability of the ANT. In this study, we combined a mixed design and a non-orthogonal method to explore ANSs and directional INRs. With a small number of trials, we obtained reliable and independent ANSs (split-half reliability of alerting: 0.684; orienting: 0.588; and executive control: 0.616), suggesting an individual and specific attention system. Furthermore, mutual inhibition was observed when two networks were operated simultaneously, indicating a differentiated but integrated attention system. Overall, the reliable and individually specific ANSs and mutually inhibited INRs provide novel insight into the understanding of the developmental, physiological and pathological mechanisms of attention networks, and can benefit future experimental and clinical investigations of attention using the ANT. PMID:25997025
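For reference, the classic orthogonal-contrast ANT scores are simple reaction-time subtractions; the study's non-orthogonal method changes how conditions are combined, but these baseline contrasts are what it is compared against. The numbers below are made up for illustration:

```python
def attention_network_scores(rt):
    """Classic ANT subtraction scores from condition-mean reaction times (ms).

    alerting  = no-cue RT      - double-cue RT   (benefit of a warning cue)
    orienting = centre-cue RT  - spatial-cue RT  (benefit of spatial information)
    executive = incongruent RT - congruent RT    (cost of flanker conflict)
    """
    return {
        "alerting":  rt["no_cue"] - rt["double_cue"],
        "orienting": rt["center_cue"] - rt["spatial_cue"],
        "executive": rt["incongruent"] - rt["congruent"],
    }

# Illustrative condition means (ms), not data from the paper.
scores = attention_network_scores({
    "no_cue": 560.0, "double_cue": 520.0,
    "center_cue": 540.0, "spatial_cue": 500.0,
    "incongruent": 620.0, "congruent": 530.0,
})
print(scores)   # {'alerting': 40.0, 'orienting': 40.0, 'executive': 90.0}
```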

  13. Novel methods for configuration interaction and orbital optimization for wave functions containing non-orthogonal orbitals with applications to the chromium dimer and trimer.

    PubMed

    Olsen, Jeppe

    2015-09-21

    A novel algorithm for performing configuration interaction (CI) calculations using non-orthogonal orbitals is introduced. In the new algorithm, the explicit calculation of the Hamiltonian matrix is replaced by the direct evaluation of the Hamiltonian matrix times a vector, which allows expressing the CI-vector in a bi-orthonormal basis, thereby drastically reducing the computational complexity. A new non-orthogonal orbital optimization method that employs exponential mappings is also described. To allow non-orthogonal transformations of the orbitals, the standard exponential mapping using anti-symmetric operators is supplemented with an exponential mapping based on a symmetric operator in the active orbital space. Expressions are obtained for the orbital gradient and Hessian, which involve the calculation of at most two-body density matrices, thereby avoiding the time-consuming calculation of the three- and four-body density matrices of the previous approaches. An approach that completely avoids the calculation of any four-body terms with limited degradation of convergence is also devised. The novel methods for non-orthogonal configuration interaction and orbital optimization are applied to the chromium dimer and trimer. For internuclear distances that are typical for chromium clusters, it is shown that a reference configuration consisting of optimized singly occupied active orbitals is sufficient to give a potential curve that is in qualitative agreement with complete active space self-consistent field (CASSCF) calculations containing more than 500 × 10⁶ determinants. To obtain a potential curve that deviates from the CASSCF curve by less than 1 mHartree, it is sufficient to add single and double excitations out from the reference configuration. PMID:26395682
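The central trick above, replacing the explicit Hamiltonian matrix by the direct evaluation of "Hamiltonian times a vector", can be illustrated generically with a matrix-free eigensolver. The toy Hamiltonian below is a stand-in (diagonal "configuration energies" with a weak nearest-neighbour coupling), not the paper's non-orthogonal CI Hamiltonian; the point is the access pattern, where only the action H·v is ever coded and H is never stored:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# Toy direct-CI setup: n "configurations" with energies 1..n and a weak
# constant coupling of 0.1 between neighbouring configurations.
n = 1000
diag = np.arange(1, n + 1, dtype=float)

def h_times_v(v):
    """Apply H to a vector in O(n) work and O(n) memory; H is never formed."""
    v = np.asarray(v).ravel()
    hv = diag * v
    hv[:-1] += 0.1 * v[1:]    # coupling to the next configuration
    hv[1:] += 0.1 * v[:-1]    # coupling to the previous configuration
    return hv

H_op = LinearOperator((n, n), matvec=h_times_v, dtype=float)
e0 = eigsh(H_op, k=1, which="SA", return_eigenvectors=False)[0]
# Weak-coupling perturbation estimate: e0 ~ 1 - 0.1**2 = 0.99.
print(f"lowest eigenvalue ~ {e0:.6f}")
```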

  14. Novel methods for configuration interaction and orbital optimization for wave functions containing non-orthogonal orbitals with applications to the chromium dimer and trimer

    NASA Astrophysics Data System (ADS)

    Olsen, Jeppe

    2015-09-01

    A novel algorithm for performing configuration interaction (CI) calculations using non-orthogonal orbitals is introduced. In the new algorithm, the explicit calculation of the Hamiltonian matrix is replaced by the direct evaluation of the Hamiltonian matrix times a vector, which allows expressing the CI-vector in a bi-orthonormal basis, thereby drastically reducing the computational complexity. A new non-orthogonal orbital optimization method that employs exponential mappings is also described. To allow non-orthogonal transformations of the orbitals, the standard exponential mapping using anti-symmetric operators is supplemented with an exponential mapping based on a symmetric operator in the active orbital space. Expressions are obtained for the orbital gradient and Hessian, which involve the calculation of at most two-body density matrices, thereby avoiding the time-consuming calculation of the three- and four-body density matrices of the previous approaches. An approach that completely avoids the calculation of any four-body terms with limited degradation of convergence is also devised. The novel methods for non-orthogonal configuration interaction and orbital optimization are applied to the chromium dimer and trimer. For internuclear distances that are typical for chromium clusters, it is shown that a reference configuration consisting of optimized singly occupied active orbitals is sufficient to give a potential curve that is in qualitative agreement with complete active space self-consistent field (CASSCF) calculations containing more than 500 × 10⁶ determinants. To obtain a potential curve that deviates from the CASSCF curve by less than 1 mHartree, it is sufficient to add single and double excitations out from the reference configuration.

  15. New Advances In Multiphase Flow Numerical Modelling Using A General Domain Decomposition and Non-orthogonal Collocated Finite Volume Algorithm: Application To Industrial Fluid Catalytical Cracking Process and Large Scale Geophysical Fluids.

    NASA Astrophysics Data System (ADS)

    Martin, R.; Gonzalez Ortiz, A.

    In industry as well as in the geophysical community, multiphase flows are modelled using a finite volume approach and a multicorrector algorithm in time in order to determine implicitly the pressures, velocities and volume fractions of each phase. Pressures and velocities are generally determined half a mesh step apart, following the staggered grid approach. This ensures stability and prevents oscillations in pressure, and it can handle almost all Reynolds number ranges for all speeds and viscosities. The disadvantages appear when more complex geometries are treated or when a generalized curvilinear formulation of the conservation equations is considered: too many interpolations have to be done, and accuracy is then lost. In order to overcome these problems, we use here a similar algorithm in time and a Rhie and Chow (1983) interpolation of the collocated variables, essentially the velocities at the interfaces. The Rhie and Chow interpolation of the velocities at the finite volume interfaces prevents pressure oscillations (checkerboard effects) and stabilizes the whole algorithm. In a first predictor step, fluxes at the interfaces of the finite volumes are computed using 2nd- and 3rd-order shock-capturing schemes of MUSCL/TVD or Van Leer type; the orthogonal stress components are treated implicitly, while cross viscous/diffusion terms are treated explicitly. A pentadiagonal system in 2D, or a heptadiagonal system in 3D, must be solved, but here we have chosen to solve three tridiagonal linear systems (the so-called Alternate Direction Implicit algorithm), one in each spatial direction, to reduce the cost of computation. Then a multi-correction of the interpolated velocities, pressures and volume fractions of each phase is done in the Cartesian frame or the deformed local curvilinear coordinate system until convergence and mass conservation. At the end, the energy conservation equations are solved.
    Throughout this process the momentum exchange forces and the interphase heat exchanges are treated implicitly to ensure stability. To reduce the computational cost further, a decomposition of the global domain into N subdomains is introduced, and all the previous algorithms applied to one block are performed in each block. At the interfaces between subdomains, an overlapping procedure is used. Another advantage is that different sets of equations can be solved in each block, such as fluid/structure interactions. We show here the hydrodynamics of a two-phase flow in a vertical conduit, as in industrial fluid catalytic cracking plants, with a complex geometry. With an initial Richardson number of 0.16, slightly higher than the critical Richardson number of 0.1, particles and water vapor are injected at the bottom of the riser. Countercurrents appear near the walls, and gravity effects begin to dominate, inducing an increase of particulate volume fractions near the walls. We show the hydrodynamics over 13 s.
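Each tridiagonal solve in the ADI sweep described above is typically done with the Thomas algorithm, a single O(n) forward-elimination and back-substitution pass per spatial direction. A standard sketch:

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system by the Thomas algorithm in O(n).

    a : sub-diagonal   (length n; a[0] is unused)
    b : main diagonal  (length n)
    c : super-diagonal (length n; c[-1] is unused)
    d : right-hand side (length n)
    Assumes the system is diagonally dominant (no pivoting is done), which
    holds for the implicit viscous/diffusion operators of an ADI sweep.
    """
    n = len(b)
    cp = np.empty(n)
    dp = np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

One such solve per grid line, per direction, per corrector pass is what keeps the ADI cost linear in the number of cells.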

  16. Divergence preserving discrete surface integral methods for Maxwell's curl equations using non-orthogonal unstructured grids

    NASA Technical Reports Server (NTRS)

    Madsen, Niel K.

    1992-01-01

    Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time-domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. Employing mixed polyhedral cells, (hexahedral, tetrahedral, etc.) these methods allow more accurate modeling of non-rectangular structures and objects because the traditional stair-stepped boundary approximations associated with the orthogonal grid based finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.
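The canonical staggered-grid finite difference scheme that these DSI methods generalize can be sketched in one dimension: E and H components live on interleaved grids and leapfrog in time. This is the textbook Yee update in normalized units with an illustrative soft source, not the mixed-polyhedral DSI scheme itself:

```python
import numpy as np

# 1D Yee/FDTD leapfrog in normalized units (c = 1, dx = 1, dt = courant * dx).
# Ez lives on integer nodes, Hy on the half-integer nodes between them.
nx, nt, courant = 200, 150, 0.5
ez = np.zeros(nx)
hy = np.zeros(nx - 1)

for n in range(nt):
    hy += courant * (ez[1:] - ez[:-1])              # update H from curl E
    ez[1:-1] += courant * (hy[1:] - hy[:-1])        # update E from curl H
    ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

# With courant <= 1 the scheme is stable: the injected pulse propagates
# outward from the source without blowing up.
```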

  17. Stability of a non-orthogonal stagnation flow to three dimensional disturbances

    NASA Technical Reports Server (NTRS)

    Lasseigne, D. G.; Jackson, T. L.

    1991-01-01

    A similarity solution for a low Mach number nonorthogonal flow impinging on a hot or cold plate is presented. For the constant density case, it is known that the stagnation point shifts in the direction of the incoming flow and that this shift increases as the angle of attack decreases. When the effects of density variations are included, a critical plate temperature exists; above this temperature the stagnation point shifts away from the incoming stream as the angle is decreased. This flow field is believed to have application to the reattachment zone of certain separated flows or to a lifting body at a high angle of attack. Finally, the stability of this nonorthogonal flow to self-similar, 3-D disturbances is examined. Stability properties of the flow are given as functions of the parameters of this study: the ratio of the plate temperature to that of the outer potential flow, and the angle of attack. In particular, it is shown that the angle of attack can be scaled out by a suitable definition of an equivalent wavenumber and temporal growth rate, so that the stability problem for the nonorthogonal case is identical to that for the orthogonal case.

  18. Thinking large.

    PubMed

    Devries, Egbert

    2016-05-01

    Egbert Devries was brought up on a farm in the Netherlands and large animal medicine has always been his area of interest. After working in UK practice for 12 years he joined CVS and was soon appointed large animal director with responsibility for building a stronger large animal practice base. PMID:27154956

  19. A multireference perturbation method using non-orthogonal Hartree-Fock determinants for ground and excited states

    SciTech Connect

    Yost, Shane R.; Kowalczyk, Tim; Van Voorhis, Troy

    2013-11-07

    In this article we propose the ΔSCF(2) framework, a multireference strategy based on second-order perturbation theory, for ground and excited electronic states. Unlike the complete active space family of methods, ΔSCF(2) employs a set of self-consistent Hartree-Fock determinants, also known as ΔSCF states. Each ΔSCF electronic state is modified by a first-order correction from Møller-Plesset perturbation theory and used to construct a Hamiltonian in a configuration-interaction-like framework. We present formulas for the resulting matrix elements between nonorthogonal states that scale as N_occ^2 N_virt^3. Unlike most active space methods, ΔSCF(2) treats the ground and excited state determinants even-handedly. We apply ΔSCF(2) to the H2, hydrogen fluoride, and H4 systems and show that the method provides accurate descriptions of ground- and excited-state potential energy surfaces with no single active space containing more than 10 ΔSCF states.
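The nonorthogonality enters through overlaps between determinants built from different SCF orbital sets: for two Slater determinants, the overlap is the determinant of the occupied-occupied MO overlap block. A minimal sketch of that generic formula (not the paper's N_occ^2 N_virt^3 matrix-element code):

```python
import numpy as np

def determinant_overlap(C_A, C_B, S=None):
    """Overlap <Phi_A|Phi_B> of two Slater determinants.

    C_A, C_B : occupied MO coefficient matrices (n_AO x n_occ)
    S        : AO overlap matrix (identity for an orthonormal AO basis)
    The overlap is det(C_A^T S C_B), the determinant of the
    occupied-occupied MO overlap block.
    """
    if S is None:
        S = np.eye(C_A.shape[0])
    return float(np.linalg.det(C_A.T @ S @ C_B))

# Toy example in a 3-orbital orthonormal basis: determinant B replaces the
# second occupied orbital by a 45-degree mix with a third orbital, so the
# determinants are non-orthogonal with overlap 1/sqrt(2).
I = np.eye(3)
C_A = I[:, :2]
C_B = np.column_stack([I[:, 0], (I[:, 1] + I[:, 2]) / np.sqrt(2.0)])
same = determinant_overlap(C_A, C_A)      # 1.0
mixed = determinant_overlap(C_A, C_B)     # ~0.7071
```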

  20. On the Relative Merits of Non-Orthogonal and Orthogonal Valence Bond Methods Illustrated on the Hydrogen Molecule

    ERIC Educational Resources Information Center

    Angeli, Celestino; Cimiraglia, Renzo; Malrieu, Jean-Paul

    2008-01-01

    Valence bond (VB) is one of the cornerstone theories of quantum chemistry. Even if in practical applications the molecular orbital (MO) approach has obtained more attention, some basic chemical concepts (such as the nature of the chemical bond and the failure of the single determinant-based MO methods in describing the bond cleavage) are normally…

  1. A multireference perturbation method using non-orthogonal Hartree-Fock determinants for ground and excited states.

    PubMed

    Yost, Shane R; Kowalczyk, Tim; Van Voorhis, Troy

    2013-11-01

    In this article we propose the ΔSCF(2) framework, a multireference strategy based on second-order perturbation theory, for ground and excited electronic states. Unlike the complete active space family of methods, ΔSCF(2) employs a set of self-consistent Hartree-Fock determinants, also known as ΔSCF states. Each ΔSCF electronic state is modified by a first-order correction from Møller-Plesset perturbation theory and used to construct a Hamiltonian in a configuration-interaction-like framework. We present formulas for the resulting matrix elements between nonorthogonal states that scale as N_occ^2 N_virt^3. Unlike most active space methods, ΔSCF(2) treats the ground and excited state determinants even-handedly. We apply ΔSCF(2) to the H2, hydrogen fluoride, and H4 systems and show that the method provides accurate descriptions of ground- and excited-state potential energy surfaces with no single active space containing more than 10 ΔSCF states. PMID:24206284

  2. Large-scale B-spline R-matrix calculations of electron impact excitation and ionization processes in complex atoms

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg

    2013-09-01

    In recent years, the B-spline R-matrix (BSR) method has been applied to the treatment of a large number of atomic structure and electron-atom collision problems. Characteristic features of the BSR approach include the use of B-splines as a universal basis to describe the projectile electron inside the R-matrix box and the employment of term-dependent, and hence non-orthogonal, orbitals to construct the target states. The latter flexibility has proven to be of crucial importance for complex targets with several partially filled subshells. The published computer code has since been updated and extended to allow for a fully relativistic description at the level of the Dirac-Coulomb Hamiltonian. Also, the systematic inclusion of a large number of pseudo-states in the close-coupling expansion has made it possible to extend the range of applicability from elastic and inelastic low-energy near-threshold phenomena to intermediate energies (up to several times the ionization threshold) and, in particular, to describe ionization processes as well. The basic ideas of the BSR approach will be reviewed, and its application will be illustrated for a variety of targets. Particular emphasis will be placed on systems of relevance for applications in gaseous electronics, such as the generation of complete datasets for electron collisions with the heavy noble gases Ne-Xe. Many of our data, which are needed for the description of transport processes in plasmas, are available through the LXCat database. This work was performed in collaboration with Klaus Bartschat. It is supported by the National Science Foundation under Grant No. PHY-1212450 and the XSEDE Allocation PHY-090031.

  3. Large bowel resection - slideshow

    MedlinePlus

    ... this page: //medlineplus.gov/ency/presentations/100089.htm Large bowel resection - Series. Overview: The large bowel [large intestine or the colon] is part ...

  4. Instantons and Large N

    NASA Astrophysics Data System (ADS)

    Mariño, Marcos

    2015-09-01

    Preface; Part I. Instantons: 1. Instantons in quantum mechanics; 2. Unstable vacua in quantum field theory; 3. Large order behavior and Borel summability; 4. Non-perturbative aspects of Yang–Mills theories; 5. Instantons and fermions; Part II. Large N: 6. Sigma models at large N; 7. The 1/N expansion in QCD; 8. Matrix models and matrix quantum mechanics at large N; 9. Large N QCD in two dimensions; 10. Instantons at large N; Appendix A. Harmonic analysis on S3; Appendix B. Heat kernel and zeta functions; Appendix C. Effective action for large N sigma models; References; Author index; Subject index.

  5. Large intestine (colon) (image)

    MedlinePlus

    The large intestine is the portion of the digestive system most responsible for absorption of water from the indigestible ... the ileum (small intestine) passes material into the large intestine at the cecum. Material passes through the ...

  6. Large displacement spherical joint

    DOEpatents

    Bieg, Lothar F.; Benavides, Gilbert L.

    2002-01-01

    A new class of spherical joints has a very large accessible full cone angle, a property which is beneficial for a wide range of applications. Despite the large cone angles, these joints move freely without singularities.

  7. High-resolution combined global gravity field modelling: Solving large kite systems using distributed computational algorithms

    NASA Astrophysics Data System (ADS)

    Zingerle, Philipp; Fecher, Thomas; Pail, Roland; Gruber, Thomas

    2016-04-01

    One of the major obstacles in modern global gravity field modelling is the seamless combination of lower-degree inhomogeneous gravity field observations (e.g. data from satellite missions) with (very) high-degree homogeneous information (e.g. gridded and reduced gravity anomalies beyond d/o 1000). Current approaches mostly combine such data only on the basis of the coefficients, meaning that a spherical harmonic analysis is first done independently for each observation class (resp. model), solving dense normal equations (NEQs) for the inhomogeneous model and block-diagonal NEQs for the homogeneous one. Obviously, such methods are unable to identify or eliminate effects such as spectral leakage due to the band limitation of the models and the non-orthogonality of the spherical harmonic base functions. To counter such problems, a combination of both models at the NEQ level is desirable. Theoretically this can be achieved using NEQ stacking. Because of the higher maximum degree of the homogeneous model, a reordering of the coefficients is needed, which inevitably destroys the block-diagonal structure of the corresponding NEQ matrix and therefore also its simple sparsity. Hence, a special coefficient ordering is needed to create a new favorable sparsity pattern that admits an efficient solution method. Such a pattern can be found in the so-called kite structure (Bosch, 1993), obtained when the kite ordering is applied to the stacked NEQ matrix. In a first step it is shown what is needed to obtain the kite (NEQ) system, how to solve it efficiently, and how to calculate the appropriate variance information from it. Further, because of the massive computational workload when operating on large kite systems (theoretically possible up to about max. d/o 100,000), the main emphasis is placed on the presentation of special distributed algorithms which may solve those systems in parallel on an indeterminate number of processes and are
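The favorable sparsity described above can be caricatured as an arrowhead system: a block-diagonal part from the high-degree homogeneous NEQs, bordered by a small dense block from the low-degree inhomogeneous NEQs. A sketch of solving such a system via the Schur complement, without ever forming the block-diagonal part as a dense matrix (all dimensions and matrices below are made up for illustration; this is not the actual kite ordering):

```python
import numpy as np

rng = np.random.default_rng(1)

# Arrowhead sketch of a stacked NEQ system:
#   [ D   B ] [x1]   [f1]    D: block-diagonal (homogeneous, high degrees)
#   [ B^T A ] [x2] = [f2]    A: small dense block (inhomogeneous, low degrees)
n_hi, n_lo, blk_sz = 400, 20, 4
D_blocks = [rng.random((blk_sz, blk_sz)) for _ in range(n_hi // blk_sz)]
D_blocks = [b @ b.T + 4.0 * np.eye(blk_sz) for b in D_blocks]   # SPD blocks
B = rng.random((n_hi, n_lo))
A = rng.random((n_lo, n_lo))
A = A @ A.T + n_hi * np.eye(n_lo)                               # SPD, well conditioned
f1, f2 = rng.random(n_hi), rng.random(n_lo)

def apply_Dinv(v):
    """Apply D^{-1} block by block; D is never assembled densely."""
    out = np.empty_like(v)
    for k, blk in enumerate(D_blocks):
        sl = slice(blk_sz * k, blk_sz * (k + 1))
        out[sl] = np.linalg.solve(blk, v[sl])
    return out

# Schur complement on the small dense block, then back-substitution.
S = A - B.T @ apply_Dinv(B)                  # apply_Dinv also works column-wise
x2 = np.linalg.solve(S, f2 - B.T @ apply_Dinv(f1))
x1 = apply_Dinv(f1 - B @ x2)
```

Because the expensive part only ever appears through block-wise solves, the work parallelizes trivially over the diagonal blocks, which is the spirit of the distributed algorithms the abstract advertises.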

  8. Large mode radius resonators

    NASA Technical Reports Server (NTRS)

    Harris, Michael R.

    1987-01-01

    Resonator configurations permitting operation with large mode radius while maintaining good transverse mode discrimination are considered. Stable resonators incorporating an intracavity telescope and unstable resonator geometries utilizing an output coupler with a Gaussian reflectivity profile are shown to enable large-radius single-mode laser operation. Results of heterodyne studies of pulsed CO2 lasers with large (11 mm e⁻² radius) fundamental mode sizes are presented, demonstrating minimal frequency sweeping in accordance with the theory of laser-induced medium perturbations.

  9. LARGE BUILDING RADON MANUAL

    EPA Science Inventory

    The report summarizes information on how building systems -- especially the heating, ventilating, and air-conditioning (HVAC) system -- influence radon entry into large buildings and can be used to mitigate radon problems. It addresses the fundamentals of large building HVAC syst...

  10. Large Print Bibliography, 1990.

    ERIC Educational Resources Information Center

    South Dakota State Library, Pierre.

    This bibliography lists materials that are available in large print format from the South Dakota State Library. The annotated entries are printed in large print and include the title of the material and its author, call number, publication date, and type of story or subject area covered. Some recorded items are included in the list. The entries…

  11. Large wind turbine generators

    NASA Technical Reports Server (NTRS)

    Thomas, R. L.; Donovon, R. M.

    1978-01-01

    The development associated with large wind turbine systems is briefly described. The scope of this activity includes the development of several large wind turbines ranging in size from 100 kW to several megawatt levels. A description of the wind turbine systems, their programmatic status and a summary of their potential costs is included.

  12. Large Scale Computing

    NASA Astrophysics Data System (ADS)

    Capiluppi, Paolo

    2005-04-01

    Large Scale Computing is acquiring an important role in the field of data analysis and treatment for many Sciences and also for some Social activities. The present paper discusses the characteristics of Computing when it becomes "Large Scale" and the current state of the art for some particular applications needing such large distributed resources and organization. High Energy Particle Physics (HEP) Experiments are discussed in this respect; in particular the Large Hadron Collider (LHC) Experiments are analyzed. The Computing Models of LHC Experiments represent the current prototype implementation of Large Scale Computing and describe the level of maturity of the possible deployment solutions. Some of the most recent results on the measurements of the performances and functionalities of the LHC Experiments' testing are discussed.

  13. Large Customers (DR Sellers)

    SciTech Connect

    Kiliccot, Sila

    2011-10-25

    State of the large customers for demand response integration of solar and wind into electric grid; openADR; CAISO; DR as a pseudo generation; commercial and industrial DR strategies; California regulations

  14. Closed Large Cell Clouds

    Atmospheric Science Data Center

    2013-04-19

    article title:  Closed Large Cell Clouds in the South Pacific     ... unperturbed by cyclonic or frontal activity. When the cell centers are cloudy and the main sinking motion is concentrated at cell ...

  15. Large bowel resection - discharge

    MedlinePlus

    ... large bowel). You may also have had a colostomy . ... have diarrhea. You may have problems with your colostomy. ... protect it if needed. If you have a colostomy, follow care instructions from your provider. Sitting on ...

  16. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  17. Large pore alumina

    SciTech Connect

    Ternan, M. )

    1994-04-01

    Earlier the authors reported preparation conditions for an alumina material which contained large diameter macropores (0.1-100 μm). The preparation variable that caused the formation of the uncommonly large macropores was the acid/alumina ratio, which was very much greater than the ones used in the preparation of conventional porous aluminas. The alumina material had large BET surface areas (200 m²/g) and small mercury porosimetry surface areas (1 m²/g). This indicated that micropores (d_MIP < 2 nm) were present in the alumina, since they were large enough for nitrogen gas molecules to enter, but too small for mercury to enter. As a result they would be too small for significant diffusion rates of residuum molecules. In earlier work, the calcining temperature was fixed at 500°C. In the current work, variations in both calcining temperature and calcining time were used in an attempt to convert some of the micropores into mesopores. 12 refs., 2 figs., 1 tab.

  18. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  19. Large TV display system

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor)

    1986-01-01

    A relatively small and low cost system is provided for projecting a large and bright television image onto a screen. A miniature liquid crystal array is driven by video circuitry to produce a pattern of transparencies in the array corresponding to a television image. Light is directed against the rear surface of the array to illuminate it, while a projection lens lies in front of the array to project the image of the array onto a large screen. Grid lines in the liquid crystal array are eliminated by a spatial filter which comprises a negative of the Fourier transform of the grid.

  20. Large gauged Q balls

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, K. N.; Axenides, M.; Floratos, E. G.; Tetradis, N.

    2001-12-01

    We study Q balls associated with local U(1) symmetries. Such Q balls are expected to become unstable for large values of their charge because of the repulsion mediated by the gauge force. We consider the possibility that the repulsion is eliminated through the presence in the interior of the Q ball of fermions with charge opposite to that of the scalar condensate. Another possibility is that two scalar condensates of opposite charge form in the interior. We demonstrate that both these scenarios can lead to the existence of classically stable, large, gauged Q balls. We present numerical solutions, as well as an analytical treatment of the ``thin-wall'' limit.

  1. LARGE BUILDING HVAC SIMULATION

    EPA Science Inventory

    The report discusses the monitoring and collection of data relating to indoor pressures and radon concentrations under several test conditions in a large school building in Bartow, Florida. The Florida Solar Energy Center (FSEC) used an integrated computational software, FSEC 3.0...

  2. Developing Large CAI Packages.

    ERIC Educational Resources Information Center

    Reed, Mary Jac M.; Smith, Lynn H.

    1983-01-01

    When developing large computer-assisted instructional (CAI) courseware packages, it is suggested that there be more attentive planning to the overall package design before actual lesson development is begun. This process has been simplified by modifying the systems approach used to develop single CAI lessons, followed by planning for the…

  3. Risks of Large Portfolios

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Shi, Xiaofeng

    2014-01-01

    The risk of a large portfolio is often estimated by substituting a good estimator of the volatility matrix. However, the accuracy of such a risk estimator is largely unknown. We study factor-based risk estimators with a large number of assets, and introduce a high-confidence level upper bound (H-CLUB) to assess the estimation. The H-CLUB is constructed using the confidence interval of risk estimators with either known or unknown factors. We derive the limiting distribution of the estimated risks in high dimensionality. We find that when the dimension is large, the factor-based risk estimators have the same asymptotic variance no matter whether the factors are known or not, which is slightly smaller than that of the sample covariance-based estimator. Numerically, H-CLUB outperforms the traditional crude bounds, and provides an insightful risk assessment. In addition, our simulated results quantify the relative error in the risk estimation, which is usually negligible using 3-month daily data. PMID:26195851
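The factor-based estimator discussed above can be sketched as follows: estimate loadings by regression, assemble the covariance as the factor part plus a diagonal of residual variances, then plug it into the portfolio risk w'Σw. This is a hedged toy sketch of the generic factor approach, not the paper's H-CLUB construction; all names and sizes are illustrative.

```python
import numpy as np

# Hedged sketch: factor-based estimate of portfolio risk w' Σ w.
rng = np.random.default_rng(1)
n_assets, n_factors, T = 200, 3, 500

B = rng.standard_normal((n_assets, n_factors))        # true factor loadings
f = rng.standard_normal((T, n_factors))               # factor returns
eps = 0.5 * rng.standard_normal((T, n_assets))        # idiosyncratic noise
R = f @ B.T + eps                                     # simulated asset returns

# OLS loadings, then the factor-based covariance:
#   Sigma_hat = B_hat' Cov(f) B_hat + diag(residual variances)
B_hat, *_ = np.linalg.lstsq(f, R, rcond=None)         # shape (n_factors, n_assets)
resid = R - f @ B_hat
Sigma_hat = B_hat.T @ np.cov(f, rowvar=False) @ B_hat + np.diag(resid.var(axis=0))

w = np.full(n_assets, 1.0 / n_assets)                 # equal-weight portfolio
risk_hat = float(w @ Sigma_hat @ w)                   # estimated portfolio risk
```

The diagonal residual term is what keeps the estimator well-conditioned when the number of assets exceeds the sample size, the regime the abstract is concerned with.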

  4. Teaching Large Evening Classes

    ERIC Educational Resources Information Center

    Wambuguh, Oscar

    2008-01-01

    High enrollments, conflicting student work schedules, and the sheer convenience of once-a-week classes are pushing many colleges to schedule evening courses. Held from 6 to 9 pm or 7 to 10 pm, these classes are typically packed, sometimes with more than 150 students in a large lecture theater. How can faculty effectively teach, control, or even…

  5. Teaching Very Large Classes

    ERIC Educational Resources Information Center

    DeRogatis, Amy; Honerkamp, Kenneth; McDaniel, Justin; Medine, Carolyn; Nyitray, Vivian-Lee; Pearson, Thomas

    2014-01-01

    The editor of "Teaching Theology and Religion" facilitated this reflective conversation with five teachers who have extensive experience and success teaching extremely large classes (150 students or more). In the course of the conversation these professors exchange and analyze the effectiveness of several active learning strategies they…

  6. Large N Cosmology

    NASA Astrophysics Data System (ADS)

    Hawking, S. W.

    2001-09-01

    The large N approximation should hold in cosmology even at the origin of the universe. I use AdS/CFT to calculate the effective action and obtain a cosmological model in which inflation is driven by the trace anomaly. Despite having ghosts, this model can agree with observations.

  7. Death Writ Large

    ERIC Educational Resources Information Center

    Kastenbaum, Robert

    2004-01-01

    Mainstream thanatology has devoted its efforts to improving the understanding, care, and social integration of people who are confronted with life-threatening illness or bereavement. This article suggests that it might now be time to expand the scope and mission to include large-scale death and death that occurs through complex and multi-domain…

  8. Estimating Large Numbers

    ERIC Educational Resources Information Center

    Landy, David; Silbert, Noah; Goldin, Aleah

    2013-01-01

    Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions…

  9. Coherent large telescopes

    NASA Astrophysics Data System (ADS)

    Nelson, J. E.

    Present ground-based telescopes are compared with those of the future. The inherent limitations of ground-based telescopes are reviewed, and existing telescopes and their evolution are briefly surveyed in order to see the trends that led to the present period of innovative telescope design. The major telescope types and the critical design factors that must be considered in designing large telescopes for the future are reviewed, emphasizing economy. As an example, the Ten Meter Telescope project at the University of California is discussed in detail, including the telescope buildings, domes, and apertures, the telescope moving weights, the image quality, and the equipment. Finally, a brief review of current work in progress on large telescopes is given.

  10. Launching large antennas

    NASA Astrophysics Data System (ADS)

    Brandli, H. W.

    1983-09-01

    Large antennas will provide communication to rural and remote areas in times of need. This is seen as facilitating the work of law enforcement agencies. All mobile radio communications will enjoy advantages in distances covered and information relayed owing to the large number of beams possible from super radio transmitters in space. If the antennas are placed in low-earth orbit, advantages will be realized in the remote sensing of the earth's resources. It is pointed out that with umbrella or bicycle-like antennas turned outward toward space, the universe could be scouted for signals from intelligent life. Various concepts that have been put forward by U.S. companies are described. These include the radial rib, wrap rib, and parabolic erectable truss designs. Others are the mesh hoop column collapsible umbrella made of gold and molybdenum and the maypole design.

  11. Hulls for Large Seaplanes

    NASA Technical Reports Server (NTRS)

    Magaldi, Giulio

    1925-01-01

    In reality, the principle of similitude is not applicable to the hulls, the designing of which increases in difficulty with increasing size of the seaplanes. In order to formulate, at least in a general way, the basic principles of calculation, we must first summarize the essential characteristics of a hull with reference to its gradual enlargement. In this study, we will disregard hulls with wing stubs, as being inapplicable to large seaplanes.

  12. The Large Area Telescope

    SciTech Connect

    Michelson, Peter F.; /KIPAC, Menlo Park /Stanford U., HEPL

    2007-11-13

    The Large Area Telescope (LAT), one of two instruments on the Gamma-ray Large Area Space Telescope (GLAST) mission, is an imaging, wide field-of-view, high-energy pair-conversion telescope, covering the energy range from ~20 MeV to more than 300 GeV. The LAT is being built by an international collaboration with contributions from space agencies, high-energy particle physics institutes, and universities in France, Italy, Japan, Sweden, and the United States. The scientific objectives the LAT will address include resolving the high-energy gamma-ray sky and determining the nature of the unidentified gamma-ray sources and the origin of the apparently isotropic diffuse emission observed by EGRET; understanding the mechanisms of particle acceleration in celestial sources, including active galactic nuclei, pulsars, and supernova remnants; studying the high-energy behavior of gamma-ray bursts and transients; using high-energy gamma-rays to probe the early universe to z ≥ 6; and probing the nature of dark matter. The components of the LAT include a precision silicon-strip detector tracker and a CsI(Tl) calorimeter, a segmented anticoincidence shield that covers the tracker array, and a programmable trigger and data acquisition system. The calorimeter's depth and segmentation enable the high-energy reach of the LAT and contribute significantly to background rejection. The aspect ratio of the tracker (height/width) is 0.4, allowing a large field-of-view and ensuring that nearly all pair-conversion showers initiated in the tracker will pass into the calorimeter for energy measurement. This paper includes a description of each of these LAT subsystems as well as a summary of the overall performance of the telescope.

  13. Large area LED package

    NASA Astrophysics Data System (ADS)

    Goullon, L.; Jordan, R.; Braun, T.; Bauer, J.; Becker, F.; Hutter, M.; Schneider-Ramelow, M.; Lang, K.-D.

    2015-03-01

    Solid state lighting using LED-dies is a rapidly growing market. LED-dies with the needed increasing luminous flux per chip area produce a lot of heat. Therefore an appropriate thermal management is required for general lighting with LED-dies. One way to avoid overheating and shortened lifetime is the use of many small LED-dies (down to 70 μm edge length) on a large area heat sink, so that heat can spread over a large area while at the same time light also appears over a larger area. Handling such small LED-dies is very difficult because they are too small to be picked with common equipment. Therefore a new concept called collective transfer bonding using a temporary carrier chip was developed. A further benefit of this new technology is the high precision assembly as well as the plane parallel assembly of the LED-dies, which is necessary for wire bonding. It has been shown that a hundred functional LED-dies were transferred and soldered at the same time. After the assembly a cost effective established PCB technology was applied to produce a large-area light source consisting of many small LED-dies electrically connected on a PCB substrate. The top contacts of the LED-dies were realized by laminating an adhesive copper sheet followed by LDI structuring, as known from PCB via technology. This assembly can be completed by adding converting and light forming optical elements. In summary, two technologies based on standard SMD and PCB technology have been developed for panel level LED packaging up to 610 × 457 mm² area size.

  14. Large space structures testing

    NASA Technical Reports Server (NTRS)

    Waites, Henry; Worley, H. Eugene

    1987-01-01

    There is considerable interest in the development of testing concepts and facilities that accurately simulate the pathologies believed to exist in future spacecraft. Both the Government and Industry have participated in the development of facilities over the past several years. The progress and problems associated with the development of the Large Space Structure Test Facility at the Marshall Space Flight Center are presented. This facility was in existence for a number of years, and its utilization has run the gamut from total in-house involvement and third party contractor testing to the mutual participation of other Government Agencies in joint endeavors.

  15. Gyrokinetic large eddy simulations

    SciTech Connect

    Morel, P.; Navarro, A. Banon; Albrecht-Marc, M.; Carati, D.; Merz, F.; Goerler, T.; Jenko, F.

    2011-07-15

    The large eddy simulation approach is adapted to the study of plasma microturbulence in a fully three-dimensional gyrokinetic system. Ion temperature gradient driven turbulence is studied with the GENE code for both a standard resolution and a reduced resolution with a model for the sub-grid scale turbulence. A simple dissipative model for representing the effect of the sub-grid scales on the resolved scales is proposed and tested. Once calibrated, the model appears to be able to reproduce most of the features of the free energy spectra for various values of the ion temperature gradient.

  16. Large Windblown Ripples

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-519, 20 October 2003

    This April 2003 Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) high resolution image shows a depression in the martian southern cratered highlands near 1.3°S, 244.3°W. The floor of the depression and some nearby craters are covered by large windblown ripples or small sand dunes. This image of ancient martian terrain covers an area 3 km (1.9 mi) across and is illuminated by sunlight from the upper left.

  17. Large, Bright Wind Ripples

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-397, 20 June 2003

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows large, relatively bright ripples of windblown sediment in the Sinus Sabaeus region south of Schiaparelli Basin. The surrounding substrate is thickly mantled by very dark material, possibly windblown silt that settled out of the atmosphere. The picture is located near 7.1°S, 343.7°W. Sunlight illuminates the scene from the left.

  18. Large Spectral Library Problem

    SciTech Connect

    Chilton, Lawrence K.; Walsh, Stephen J.

    2008-10-03

    Hyperspectral imaging produces a spectrum or vector at each image pixel. These spectra can be used to identify materials present in the image. In some cases, spectral libraries representing atmospheric chemicals or ground materials are available. The challenge is to determine if any of the library chemicals or materials exist in the hyperspectral image. The number of spectra in these libraries can be very large, far exceeding the number of spectral channels collected in the field. Suppose an image pixel contains a mixture of p spectra from the library. Is it possible to uniquely identify these p spectra? We address this question in this paper and refer to it as the Large Spectral Library (LSL) problem. We show how to determine if unique identification is possible for any given library. We also show that if p is small compared to the number of spectral channels, it is very likely that unique identification is possible. We show that unique identification becomes less likely as p increases.
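One standard way to test the uniqueness question posed above (a hedged sketch, not necessarily the paper's own criterion): a mixture of p library spectra is uniquely identifiable for every choice of p columns iff every 2p columns of the library matrix are linearly independent, i.e. the library's spark exceeds 2p. A brute-force check is only feasible for tiny libraries; all data here is synthetic.

```python
import numpy as np
from itertools import combinations

def unique_identification_possible(L, p):
    """Return True iff every 2p columns of library L are linearly independent
    (spark(L) > 2p), which guarantees unique recovery of any p-sparse mixture."""
    channels, n = L.shape
    if 2 * p > channels:
        return False                      # more unknowns than equations
    return all(np.linalg.matrix_rank(L[:, list(S)]) == 2 * p
               for S in combinations(range(n), 2 * p))

rng = np.random.default_rng(2)
library = rng.standard_normal((10, 6))    # 10 spectral channels, 6 library spectra
print(unique_identification_possible(library, p=2))
```

This also illustrates the abstract's two qualitative findings: a generic (random) library passes the check when p is small relative to the channel count, and the check necessarily fails once 2p exceeds it.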

  19. Infinitely Large New Dimensions

    SciTech Connect

    Arkani-Hamed, Nima; Dimopoulos, Savas; Dvali, Gia; Kaloper, Nemanja

    1999-07-29

    We construct intersecting brane configurations in Anti-de-Sitter space localizing gravity to the intersection region, with any number n of extra dimensions. This allows us to construct two kinds of theories with infinitely large new dimensions, TeV scale quantum gravity and sub-millimeter deviations from Newton's Law. The effective 4D Planck scale M_Pl is determined in terms of the fundamental Planck scale M_* and the AdS radius of curvature L via the familiar relation M_Pl^2 ~ M_*^(2+n) L^n; L acts as an effective radius of compactification for gravity on the intersection. Taking M_* ~ TeV and L ~ sub-mm reproduces the phenomenology of theories with large extra dimensions. Alternately, taking M_* ~ L^-1 ~ M_Pl, and placing our 3-brane a distance ~ 100 M_Pl^-1 away from the intersection gives us a theory with an exponential determination of the Weak/Planck hierarchy.
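The relation M_Pl^2 ~ M_*^(2+n) L^n can be checked numerically: taking M_* ~ 1 TeV and n = 2 extra dimensions, the implied radius L indeed lands near the sub-millimeter scale quoted in the abstract. The constants below are standard values; the script is a back-of-the-envelope sketch in natural units.

```python
# Hedged numerical check of M_Pl^2 ~ M_*^(2+n) L^n for M_* ~ 1 TeV, n = 2.
M_PL = 1.22e19          # 4D Planck scale, GeV
M_STAR = 1.0e3          # fundamental scale ~ 1 TeV, GeV
N = 2                   # number of extra dimensions
HBARC = 1.973e-16       # conversion factor: 1 GeV^-1 in meters

# Solve the relation for L:  L = (M_Pl^2 / M_*^(2+n))^(1/n), in GeV^-1.
L_inv_gev = (M_PL**2 / M_STAR**(2 + N)) ** (1.0 / N)
L_meters = L_inv_gev * HBARC
print(f"L ≈ {L_meters * 1e3:.1f} mm")
```

For n = 2 this gives L of order a few millimeters, consistent with the "sub-millimeter deviations from Newton's Law" phenomenology the abstract describes.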

  20. Large Particle Titanate Sorbents

    SciTech Connect

    Taylor-Pashow, K.

    2015-10-08

    This research project was aimed at developing a synthesis technique for producing large particle size monosodium titanate (MST) to benefit high level waste (HLW) processing at the Savannah River Site (SRS). Two applications were targeted, first increasing the size of the powdered MST used in batch contact processing to improve the filtration performance of the material, and second preparing a form of MST suitable for deployment in a column configuration. Increasing the particle size should lead to improvements in filtration flux, and decreased frequency of filter cleaning leading to improved throughput. Deployment of MST in a column configuration would allow for movement from a batch process to a more continuous process. Modifications to the typical MST synthesis led to an increase in the average particle size. Filtration testing on dead-end filters showed improved filtration rates with the larger particle material; however, no improvement in filtration rate was realized on a crossflow filter. In order to produce materials suitable for column deployment several approaches were examined. First, attempts were made to coat zirconium oxide microspheres (196 µm) with a layer of MST. This proved largely unsuccessful. An alternate approach was then taken synthesizing a porous monolith of MST which could be used as a column. Several parameters were tested, and conditions were found that were able to produce a continuous structure versus an agglomeration of particles. This monolith material showed Sr uptake comparable to that of previously evaluated samples of engineered MST in batch contact testing.

  1. Large furlable antenna study

    NASA Technical Reports Server (NTRS)

    Campbell, G. K. C.

    1975-01-01

    The parametric study of the performance of large furlable antennas is described and the availability of various size antennas is discussed. Three types of unfurlable reflector designs are considered: the wrapped rib, the polyconic, and the maypole. On the basis of these approaches, a space shuttle launch capability, and state-of-the-art materials, it is possible to design unfurlable reflectors as large as 130 feet (40 meters) in diameter to operate at 10 GHz and 600 feet (183 meters) in diameter at 0.5 GHz. These figures can be increased if very low thermal coefficient of expansion materials can be developed over the next 2-5 years. It is recommended that a special effort be made to develop lightweight materials that would provide nearly zero thermal coefficient of expansion and good thermal conductivity within the next 10 years. A conservative prediction of the kinds of unfurlable spacecraft antennas that will be available by 1985 with orbital performance predicted on the basis of test data and with developed manufacturing processes is summarized.

  2. Contrasting Large Solar Events

    NASA Astrophysics Data System (ADS)

    Lanzerotti, Louis J.

    2010-10-01

    After an unusually long solar minimum, solar cycle 24 is slowly beginning. A large coronal mass ejection (CME) from sunspot 1092 occurred on 1 August 2010, with effects reaching Earth on 3 August and 4 August, nearly 38 years to the day after the huge solar event of 4 August 1972. The prior event, which those of us engaged in space research at the time remember well, recorded some of the highest intensities of solar particles and rapid changes of the geomagnetic field measured to date. What can we learn from the comparisons of these two events, other than their essentially coincident dates? One lesson I took away from reading press coverage and Web reports of the August 2010 event is that the scientific community and the press are much more aware than they were nearly 4 decades ago that solar events can wreak havoc on space-based technologies.

  3. Synchronizing large systolic arrays

    SciTech Connect

    Fisher, A.L.; Kung, H.T.

    1982-04-01

    Parallel computing structures consist of many processors operating simultaneously. If a concurrent structure is regular, as in the case of a systolic array, it may be convenient to think of all processors as operating in lock step. Totally synchronized systems controlled by central clocks are difficult to implement because of the inevitable problem of clock skews and delays. An alternate means of enforcing the necessary synchronization is the use of self-timed, asynchronous schemes, at the price of increased design complexity and hardware cost. Realizing that different circumstances call for different synchronization methods, this paper provides a spectrum of synchronization models; based on the assumptions made for each model, theoretical lower bounds on clock skew are derived, and appropriate or best-possible synchronization schemes for systolic arrays are proposed. This paper represents a first step towards a systematic study of synchronization problems for large systolic arrays.

  4. Large area plasma source

    NASA Technical Reports Server (NTRS)

    Foster, John (Inventor); Patterson, Michael (Inventor)

    2008-01-01

    An all permanent magnet Electron Cyclotron Resonance, large diameter (e.g., 40 cm) plasma source suitable for ion/plasma processing or electric propulsion, is capable of producing uniform ion current densities at its exit plane at very low power (e.g., below 200 W), and is electrodeless to avoid sputtering or contamination issues. Microwave input power is efficiently coupled with an ionizing gas without using a dielectric microwave window and without developing a throat plasma by providing a ferromagnetic cylindrical chamber wall with a conical end narrowing to an axial entrance hole for microwaves supplied on-axis from an open-ended waveguide. Permanent magnet rings are attached inside the wall with alternating polarities against the wall. An entrance magnet ring surrounding the entrance hole has a ferromagnetic pole piece that extends into the chamber from the entrance hole to a continuing second face that extends radially across an inner pole of the entrance magnet ring.

  5. Large Binocular Telescope Project

    NASA Astrophysics Data System (ADS)

    Hill, John M.; Salinari, Piero

    1998-08-01

    The Large Binocular Telescope (LBT) Project is a collaboration between institutions in Arizona, Germany, Italy, and Ohio. With the addition of the partners from Ohio State and Germany in February 1997, the Large Binocular Telescope Corporation has the funding required to build the full telescope populated with both 8.4 meter optical trains. The first of two 8.4 meter borosilicate honeycomb primary mirrors for LBT was cast at the Steward Observatory Mirror Lab in 1997. The baseline optical configuration of LBT includes adaptive infrared secondaries of a Gregorian design. The F/15 secondaries are undersized to provide a low thermal background focal plane. The interferometric focus combining the light from the two 8.4 meter primaries will reimage the two folded Gregorian focal planes to three central locations. The telescope elevation structure accommodates swing arms which allow rapid interchange of the various secondary and tertiary mirrors. Maximum stiffness and minimal thermal disturbance were important drivers for the design of the telescope in order to provide the best possible images for interferometric observations. The telescope structure accommodates installation of a vacuum bell jar for aluminizing the primary mirrors in-situ on the telescope. The detailed design of the telescope structure was completed in 1997 by ADS Italia (Lecco) and European Industrial Engineering (Mestre). A series of contracts for the fabrication and machining of the telescope structure had been placed at the end of 1997. The final enclosure design was completed at M3 Engineering & Technology (Tucson), EIE and ADS Italia. During 1997, the telescope pier and the concrete ring wall for the rotating enclosure were completed along with the steel structure of the fixed portion of the enclosure. The erection of the steel structure for the rotating portion of the enclosure will begin in the Spring of 1998.

  6. Large area Czochralski silicon

    NASA Technical Reports Server (NTRS)

    Rea, S. N.; Gleim, P. S.

    1977-01-01

    The overall cost effectiveness of the Czochralski process for producing large-area silicon was determined. The feasibility of growing several 12 cm diameter crystals sequentially at 12 cm/h during a furnace run and the subsequent slicing of the ingot using a multiblade slurry saw were investigated. The goal of the wafering process was a slice thickness of 0.25 mm with minimal kerf. A slice + kerf of 0.56 mm was achieved on 12 cm crystal using both 400 grit B4C and SiC abrasive slurries. Crystal growth experiments were performed at 12 cm diameter in a commercially available puller with both 10 and 12 kg melts. Several modifications to the puller hot zone were required to achieve stable crystal growth over the entire crystal length and to prevent crystallinity loss a few centimeters down the crystal. The maximum practical growth rate for 12 cm crystal in this puller design was 10 cm/h, with 12 to 14 cm/h being the absolute maximum range at which melt freeze occurred.

  7. Large forging manufacturing process

    DOEpatents

    Thamboo, Samuel V.; Yang, Ling

    2002-01-01

    A process for forging large components of Alloy 718 material so that the components do not exhibit abnormal grain growth includes the steps of: a) providing a billet with an average grain size between ASTM 0 and ASTM 3; b) heating the billet to a temperature of between 1750°F and 1800°F; c) upsetting the billet to obtain a component part with a minimum strain of 0.125 in at least selected areas of the part; d) reheating the component part to a temperature between 1750°F and 1800°F; e) upsetting the component part to a final configuration such that said selected areas receive no strains between 0.01 and 0.125; f) solution treating the component part at a temperature of between 1725°F and 1750°F; and g) aging the component part over predetermined times at different temperatures. A modified process achieves abnormal grain growth in selected areas of a component where desirable.

  8. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination, and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
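The combinatorial explosion mentioned above can be made concrete with a back-of-the-envelope count (an illustration, not the report's analysis): with m gated measurements assigned to m tracks, an exhaustive multi-hypothesis tracker faces up to m! association hypotheses per frame.

```python
import math

# Hypothetical illustration of the combinatorial explosion in
# multi-hypothesis tracking: assigning m measurements to m tracks
# admits up to m! association hypotheses per frame.
for m in (5, 10, 20):
    print(f"{m} targets -> {math.factorial(m)} hypotheses")
```

Even modest target counts exceed any practical hypothesis budget, which is why gating and pruning are essential in such trackers.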

  9. Stability of large systems

    NASA Astrophysics Data System (ADS)

    Hastings, Harold

    2007-03-01

    We address a long-standing dilemma concerning stability of large systems. MacArthur (1955) and Hutchinson (1959) argued that more ``complex'' natural systems tended to be more stable than less complex systems based upon energy flow. May (1972) argued the opposite, using random matrix models; see Cohen and Newman (1984, 1985), Bai and Yin (1986). We show that in some sense both are right: under reasonable scaling assumptions on interaction strength, Lyapunov stability increases but structural stability decreases as complexity is increased (cf. Harrison, 1979; Hastings, 1984). We apply this result to a variety of network systems. References: Bai, Z.D. & Yin, Y.Q. 1986. Probab. Th. Rel. Fields 73, 555. Cohen, J.E., & Newman, C.M. 1984. Annals Probab. 12, 283; 1985. Theoret. Biol. 113, 153. Harrison, G.W. 1979. Amer. Natur. 113, 659. Hastings, H.M. 1984. BioSystems 17, 171. Hastings, H.M., Juhasz, F., & Schreiber, M. 1992. Proc. Royal Soc., Ser. B. 249, 223. Hutchinson, G.E. 1959. Amer. Natur. 93, 145. MacArthur, R.H. 1955. Ecology 35, 533. May, R.M. 1972. Nature 238, 413.
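May's random-matrix argument referenced above reduces to a simple threshold: a random community of N species with connectance C and interaction strength σ is almost surely stable if and only if σ√(NC) < 1. A minimal sketch (the parameter values are invented for illustration):

```python
import math

# May (1972): a random community of n species, connectance C and
# interaction strength sigma is almost surely stable iff
# sigma * sqrt(n * C) < 1.  Parameter values here are hypothetical.
def may_stable(n, connectance, sigma):
    return sigma * math.sqrt(n * connectance) < 1

assert may_stable(10, 0.3, 0.4)        # small, weakly coupled community
assert not may_stable(1000, 0.3, 0.4)  # same coupling, larger system
```

Holding C and σ fixed while increasing N eventually crosses the threshold, which is the sense in which larger random systems are less stable.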

  10. The Large Millimeter Telescope

    NASA Astrophysics Data System (ADS)

    Hughes, D. H.; Schloerb, F. P.; LMT Project Team

    2009-05-01

    This paper, presented on behalf of the Large Millimeter Telescope (LMT) project team, describes the status and near-term plans for the telescope and its initial instrumentation. The LMT is a bi-national collaboration between México and the USA, led by the Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE) and the University of Massachusetts at Amherst, to construct, commission and operate a 50 m diameter millimeter-wave radio telescope. Construction activities are nearly complete at the LMT site, at an altitude of ˜ 4600 m on the summit of Sierra Negra, an extinct volcano in the Mexican state of Puebla. Full movement of the telescope, under computer control in both azimuth and elevation, has been achieved. First-light at centimeter wavelengths on astronomical sources was obtained in November 2006. Installation of precision surface segments for millimeter-wave operation is underway, with the inner 32 m diameter of the surface now complete and ready to be used to obtain first-light at millimeter wavelengths in 2008. Installation of the remainder of the reflector will continue during the next year and be completed in 2009 for final commissioning of the antenna. The full LMT antenna, outfitted with its initial complement of scientific instruments, will be a world-leading scientific research facility for millimeter-wave astronomy.

  11. The Large Millimeter Telescope

    NASA Astrophysics Data System (ADS)

    Schloerb, F. Peter

    2008-07-01

    This paper, presented on behalf of the Large Millimeter Telescope (LMT) project team, describes the status and near-term plans for the telescope and its initial instrumentation. The LMT is a bi-national collaboration between Mexico and the USA, led by the Instituto Nacional de Astrofísica, Optica y Electronica (INAOE) and the University of Massachusetts at Amherst, to construct, commission and operate a 50m-diameter millimeter-wave radio telescope. Construction activities are nearly complete at the 4600m LMT site on the summit of Sierra Negra, an extinct volcano in the Mexican state of Puebla. Full movement of the telescope, under computer control in both azimuth and elevation, has been achieved. First-light at centimeter wavelengths on astronomical sources was obtained in November 2006. Installation of precision surface segments for millimeter-wave operation is underway, with the inner 32m-diameter of the surface now complete and ready to be used to obtain first light at millimeter wavelengths in 2008. Installation of the remainder of the reflector will continue during the next year and be completed in 2009 for final commissioning of the antenna. The full LMT antenna, outfitted with its initial complement of scientific instruments, will be a world-leading scientific research facility for millimeter-wave astronomy.

  12. The large binocular telescope.

    PubMed

    Hill, John M

    2010-06-01

    The Large Binocular Telescope (LBT) Observatory is a collaboration among institutions in Arizona, Germany, Italy, Indiana, Minnesota, Ohio, and Virginia. The telescope on Mount Graham in Southeastern Arizona uses two 8.4 m diameter primary mirrors mounted side by side. A unique feature of the LBT is that the light from the two Gregorian telescope sides can be combined to produce phased-array imaging of an extended field. This cophased imaging along with adaptive optics gives the telescope the diffraction-limited resolution of a 22.65 m aperture and a collecting area equivalent to an 11.8 m circular aperture. This paper describes the design, construction, and commissioning of this unique telescope. We report some sample astronomical results with the prime focus cameras. We comment on some of the technical challenges and solutions. The telescope uses two F/15 adaptive secondaries to correct atmospheric turbulence. The first of these adaptive mirrors has completed final system testing in Firenze, Italy, and is planned to be at the telescope by Spring 2010. PMID:20517352

  13. [Large granular lymphocyte leukemia].

    PubMed

    Lazaro, Estibaliz; Caubet, Olivier; Menard, Fanny; Pellegrin, Jean-Luc; Viallard, Jean-François

    2007-11-01

    Large granular lymphocyte (LGL) leukemia is a clonal proliferation of cytotoxic cells, either CD3(+) (T-cell) or CD3(-) (natural killer, or NK). Both subtypes can manifest as indolent or aggressive disorders. T-LGL leukemia is associated with cytopenias and autoimmune diseases and most often has an indolent course and good prognosis. Rheumatoid arthritis and Felty syndrome are frequent. NK-LGL leukemias can be more aggressive. LGL expansion is currently hypothesized to be an antigen-driven T-cell response to a virus (Epstein-Barr virus or human T-cell leukemia virus) that involves disruption of apoptosis. The diagnosis of T-LGL is suggested by flow cytometry and confirmed by T-cell receptor gene rearrangement studies. Clonality is difficult to determine in NK-LGL, but use of monoclonal antibodies specific for killer cell immunoglobulin-like receptor (KIR) has improved this process. Treatment is required when T-LGL leukemia is associated with recurrent infections secondary to chronic neutropenia. Long-lasting remission can be obtained with immunosuppressive treatments such as methotrexate, cyclophosphamide, and cyclosporine A. NK-LGL leukemias may be more aggressive and refractory to conventional therapy. PMID:17596907

  14. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
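The exterior penalty idea behind this approach can be sketched on a one-dimensional toy problem (a hypothetical illustration, not the BIGDOT implementation): the constrained minimum of f(x) = x² subject to x ≥ 1 is recovered by minimizing an unconstrained penalized function with increasing penalty weight.

```python
import math

# Exterior penalty method sketch (hypothetical 1-D toy problem):
# minimize f(x) = x^2 subject to g(x) = 1 - x <= 0, i.e. x >= 1.
# The constrained optimum is x* = 1.
def penalized(x, r):
    g = max(0.0, 1.0 - x)          # constraint violation (zero if feasible)
    return x * x + r * g * g       # quadratic exterior penalty term

def minimize_1d(fn, lo=-5.0, hi=5.0, iters=100):
    # golden-section search; stands in for the gradient-based methods
    # a large-scale optimizer would actually use
    phi = (math.sqrt(5) - 1) / 2
    for _ in range(iters):
        a = hi - phi * (hi - lo)
        b = lo + phi * (hi - lo)
        if fn(a) < fn(b):
            hi = b
        else:
            lo = a
    return (lo + hi) / 2

for r in (1.0, 10.0, 100.0, 1000.0):   # increasing penalty weight
    x = minimize_1d(lambda t: penalized(t, r))
```

As r grows, the unconstrained minimizer approaches the constrained optimum x = 1 from the infeasible side, which is the characteristic behavior of an exterior penalty scheme.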

  15. Large Format Radiographic Imaging

    SciTech Connect

    J. S. Rohrer; Lacey Stewart; M. D. Wilke; N. S. King; S. A Baker; Wilfred Lewis

    1999-08-01

    Radiographic imaging continues to be a key diagnostic in many areas at Los Alamos National Laboratory (LANL). Radiographic recording systems have taken on many forms, from high repetition-rate, gated systems to film recording and storage phosphors. Some systems are designed for synchronization to an accelerator while others may be single shot or may record a frame sequence in a dynamic radiography experiment. While film recording remains a reliable standby in the radiographic community, there is growing interest in investigating electronic recording for many applications. The advantages of real time access to remote data acquisition are highly attractive. Cooled CCD camera systems are capable of providing greater sensitivity with improved signal-to-noise ratio. This paper begins with a review of performance characteristics of the Bechtel Nevada large format imaging system, a gated system capable of viewing scintillators up to 300 mm in diameter. We then examine configuration alternatives in lens coupled and fiber optically coupled electro-optical recording systems. Areas of investigation include tradeoffs between fiber optic and lens coupling, methods of image magnification, and spectral matching from scintillator to CCD camera. Key performance features discussed include field of view, resolution, sensitivity, dynamic range, and system noise characteristics.

  16. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual persons' behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches depend both on the specific questions and on the prospective user community. The approaches range from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for statistical physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.

  17. Large Deployable Reflectarray Antenna

    NASA Technical Reports Server (NTRS)

    Fang, Houfei; Huang, John; Lou, Michael

    2006-01-01

    A report discusses a 7-meter-diameter reflectarray antenna that has been conceived in a continuing effort to develop large reflectarray antennas to be deployed in outer space. Major underlying concepts were reported in three prior NASA Tech Briefs articles: "Inflatable Reflectarray Antennas" (NPO-20433), Vol. 23, No. 10 (October 1999), page 50; "Tape-Spring Reinforcements for Inflatable Structural Tubes" (NPO-20615), Vol. 24, No. 7 (July 2000), page 58; and "Self-Inflatable/Self-Rigidizable Reflectarray Antenna" (NPO-30662), Vol. 28, No. 1 (January 2004), page 61. Like previous antennas in the series, the antenna now proposed would include a reflectarray membrane stretched flat on a frame of multiple inflatable booms. The membrane and booms would be rolled up and folded for compact stowage during transport. Deployment in outer space would be effected by inflating the booms to unroll and then to unfold the membrane, thereby stretching the membrane out flat to its full size. The membrane would achieve the flatness for a Ka-band application. The report gives considerable emphasis to designing the booms to rigidify themselves upon deployment: for this purpose, the booms could be made as spring-tape-reinforced aluminum laminate tubes like those described in two of the cited prior articles.

  18. Applied large eddy simulation.

    PubMed

    Tucker, Paul G; Lardeau, Sylvain

    2009-07-28

    Large eddy simulation (LES) is now seen more and more as a viable alternative to current industrial practice, usually based on problem-specific Reynolds-averaged Navier-Stokes (RANS) methods. Access to detailed flow physics is attractive to industry, especially in an environment in which computer modelling is bound to play an ever increasing role. However, the improvement in accuracy and flow detail has substantial cost. This has so far prevented wider industrial use of LES. The purpose of the applied LES discussion meeting was to address questions regarding what is achievable and what is not, given the current technology and knowledge, for an industrial practitioner who is interested in using LES. The use of LES was explored in an application-centred context between diverse fields. The general flow-governing equation form was explored along with various LES models. The errors occurring in LES were analysed. Also, the hybridization of RANS and LES was considered. The importance of modelling relative to boundary conditions, problem definition and other more mundane aspects were examined. It was to an extent concluded that for LES to make most rapid industrial impact, pragmatic hybrid use of LES, implicit LES and RANS elements will probably be needed. Added to this further, highly industrial sector model parametrizations will be required with clear thought on the key target design parameter(s). The combination of good numerical modelling expertise, a sound understanding of turbulence, along with artistry, pragmatism and the use of recent developments in computer science should dramatically add impetus to the industrial uptake of LES. In the light of the numerous technical challenges that remain it appears that for some time to come LES will have echoes of the high levels of technical knowledge required for safe use of RANS but with much greater fidelity. PMID:19531503

  19. Large Binocular Telescope Project

    NASA Astrophysics Data System (ADS)

    Hill, John M.

    1997-03-01

    The large binocular telescope (LBT) project has evolved from concepts first proposed in 1985. The present partners involved in the design and construction of this 2 by 8.4 meter binocular telescope are the University of Arizona, Italy represented by the Osservatorio Astrofisico di Arcetri, and the Research Corporation based in Tucson, Arizona. These three partners have committed sufficient funds to build the enclosure and the telescope populated with a single 8.4 meter optical train -- approximately 40 million dollars (1989). Based on this commitment, design and construction activities are now moving forward. Additional partners are being sought. The next mirror to be cast at the Steward Observatory Mirror Lab in the fall of 1996 will be the first borosilicate honeycomb primary for LBT. The baseline optical configuration of LBT includes wide field Cassegrain secondaries with optical foci above the primaries to provide a corrected one degree field at F/4. The infrared F/15 secondaries are a Gregorian design to allow maximum flexibility for adaptive optics. The F/15 secondaries are undersized to provide a low thermal background focal plane which is unvignetted over a 4 arcminute diameter field-of-view. The interferometric focus combining the light from the two 8.4 meter primaries will reimage two folded Gregorian focal planes to a central location. The telescope elevation structure accommodates swing arms which allow rapid interchange of the various secondary and tertiary mirrors. Maximum stiffness and minimal thermal disturbance continue to be important drivers for the detailed design of the telescope. The telescope structure accommodates installation of a vacuum bell jar for aluminizing the primary mirrors in-situ on the telescope. The detailed design of the telescope structure will be completed in 1996 by ADS Italia (Lecco) and European Industrial Engineering (Mestre). The final enclosure design is now in progress at M3 Engineering (Tucson), EIE and ADS Italia

  20. Large planer for finishing smooth, flat surfaces of large pieces ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Large planer for finishing smooth, flat surfaces of large pieces of metal; in operating condition and used for public demonstrations. - Thomas A. Edison Laboratories, Building No. 5, Main Street & Lakeside Avenue, West Orange, Essex County, NJ

  1. Large for gestational age (LGA)

    MedlinePlus

    Large for gestational age means that a fetus or infant is larger ...

  2. Large-D gravity and low-D strings.

    PubMed

    Emparan, Roberto; Grumiller, Daniel; Tanabe, Kentaro

    2013-06-21

    We show that in the limit of a large number of dimensions a wide class of nonextremal neutral black holes has a universal near-horizon limit. The limiting geometry is the two-dimensional black hole of string theory with a two-dimensional target space. Its conformal symmetry explains the properties of massless scalars found recently in the large-D limit. For black branes with string charges, the near-horizon geometry is that of the three-dimensional black strings of Horne and Horowitz. The analogies between the α' expansion in string theory and the large-D expansion in gravity suggest a possible effective string description of the large-D limit of black holes. We comment on applications to several subjects, in particular to the problem of critical collapse. PMID:23829726

  3. Ultra-fast computation of electronic spectra for large systems by tight-binding based simplified Tamm-Dancoff approximation (sTDA-xTB).

    PubMed

    Grimme, Stefan; Bannwarth, Christoph

    2016-08-01

    The computational bottleneck of the extremely fast simplified Tamm-Dancoff approximated (sTDA) time-dependent density functional theory procedure [S. Grimme, J. Chem. Phys. 138, 244104 (2013)] for the computation of electronic spectra for large systems is the determination of the ground state Kohn-Sham orbitals and eigenvalues. This limits such treatments to single structures with a few hundred atoms and hence, e.g., sampling along molecular dynamics trajectories for flexible systems or the calculation of chromophore aggregates is often not possible. The aim of this work is to solve this problem by a specifically designed semi-empirical tight binding (TB) procedure similar to the well established self-consistent-charge density functional TB scheme. The new special purpose method provides orbitals and orbital energies of hybrid density functional character for a subsequent and basically unmodified sTDA procedure. Compared to many previous semi-empirical excited state methods, an advantage of the ansatz is that a general eigenvalue problem in a non-orthogonal, extended atomic orbital basis is solved and therefore correct occupied/virtual orbital energy splittings as well as Rydberg levels are obtained. A key idea for the success of the new model is that the determination of atomic charges (describing an effective electron-electron interaction) and the one-particle spectrum is decoupled and treated by two differently parametrized Hamiltonians/basis sets. The three-diagonalization-step composite procedure can routinely compute broad range electronic spectra (0-8 eV) within minutes of computation time for systems composed of 500-1000 atoms with an accuracy typical of standard time-dependent density functional theory (0.3-0.5 eV average error). An easily extendable parametrization based on coupled-cluster and density functional computed reference data for the elements H-Zn including transition metals is described. The accuracy of the method termed sTDA-xTB is first

  4. Ultra-fast computation of electronic spectra for large systems by tight-binding based simplified Tamm-Dancoff approximation (sTDA-xTB)

    NASA Astrophysics Data System (ADS)

    Grimme, Stefan; Bannwarth, Christoph

    2016-08-01

    The computational bottleneck of the extremely fast simplified Tamm-Dancoff approximated (sTDA) time-dependent density functional theory procedure [S. Grimme, J. Chem. Phys. 138, 244104 (2013)] for the computation of electronic spectra for large systems is the determination of the ground state Kohn-Sham orbitals and eigenvalues. This limits such treatments to single structures with a few hundred atoms and hence, e.g., sampling along molecular dynamics trajectories for flexible systems or the calculation of chromophore aggregates is often not possible. The aim of this work is to solve this problem by a specifically designed semi-empirical tight binding (TB) procedure similar to the well established self-consistent-charge density functional TB scheme. The new special purpose method provides orbitals and orbital energies of hybrid density functional character for a subsequent and basically unmodified sTDA procedure. Compared to many previous semi-empirical excited state methods, an advantage of the ansatz is that a general eigenvalue problem in a non-orthogonal, extended atomic orbital basis is solved and therefore correct occupied/virtual orbital energy splittings as well as Rydberg levels are obtained. A key idea for the success of the new model is that the determination of atomic charges (describing an effective electron-electron interaction) and the one-particle spectrum is decoupled and treated by two differently parametrized Hamiltonians/basis sets. The three-diagonalization-step composite procedure can routinely compute broad range electronic spectra (0-8 eV) within minutes of computation time for systems composed of 500-1000 atoms with an accuracy typical of standard time-dependent density functional theory (0.3-0.5 eV average error). An easily extendable parametrization based on coupled-cluster and density functional computed reference data for the elements H-Zn including transition metals is described. The accuracy of the method termed sTDA-xTB is first
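The non-orthogonal basis mentioned above means solving a generalized eigenvalue problem F c = εS c with a non-trivial overlap matrix S rather than an ordinary one. A hypothetical 2×2 illustration (the matrix entries are invented, not sTDA-xTB parameters), solved via det(F − εS) = 0:

```python
import math

# Hypothetical 2x2 generalized eigenvalue problem F c = e S c in a
# non-orthogonal basis (overlap matrix S != I); invented numbers.
F = [[-0.5, -0.2], [-0.2, -0.3]]   # model Hamiltonian matrix
S = [[1.0, 0.4], [0.4, 1.0]]       # overlap of non-orthogonal orbitals

# For 2x2, det(F - e S) = 0 reduces to a quadratic a e^2 + b e + c = 0.
a = S[0][0] * S[1][1] - S[0][1] * S[1][0]
b = -(F[0][0] * S[1][1] + F[1][1] * S[0][0]
      - F[0][1] * S[1][0] - F[1][0] * S[0][1])
c = F[0][0] * F[1][1] - F[0][1] * F[1][0]
d = math.sqrt(b * b - 4 * a * c)
eigenvalues = sorted(((-b - d) / (2 * a), (-b + d) / (2 * a)))
```

With S = I this reduces to the ordinary eigenproblem; the off-diagonal overlap shifts the orbital-energy splitting, which is the effect the abstract credits for correct occupied/virtual gaps.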

  5. Health impacts of large dams

    SciTech Connect

    Lerer, L.B.; Scudder, T.

    1999-03-01

    Large dams have been criticized because of their negative environmental and social impacts. Public health interest largely has focused on vector-borne diseases, such as schistosomiasis, associated with reservoirs and irrigation projects. Large dams also influence health through changes in water and food security, increases in communicable diseases, and the social disruption caused by construction and involuntary resettlement. Communities living in close proximity to large dams often do not benefit from water transfer and electricity generation revenues. A comprehensive health component is required in environmental and social impact assessments for large dam projects.

  6. Analytic bootstrap at large spin

    NASA Astrophysics Data System (ADS)

    Kaviraj, Apratim; Sen, Kallol; Sinha, Aninda

    2015-11-01

    We use analytic conformal bootstrap methods to determine the anomalous dimensions and OPE coefficients for large spin operators in general conformal field theories in four dimensions containing a scalar operator of conformal dimension Δφ. It is known that such theories will contain an infinite sequence of large spin operators with twists approaching 2Δφ + 2n for each integer n. By considering the case where such operators are separated by a twist gap from other operators at large spin, we analytically determine the n and Δφ dependence of the anomalous dimensions. We find that for all n, the anomalous dimensions are negative for Δφ satisfying the unitarity bound. We further compute the first subleading correction at large spin and show that it becomes universal for large twist. In the limit when n is large, we find exact agreement with the AdS/CFT prediction corresponding to the eikonal limit of 2-to-2 scattering with dominant graviton exchange.
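The twist structure described above can be summarized in standard double-twist notation (a paraphrase of the abstract, not the paper's exact expressions):

```latex
% Twist of the large-spin double-twist operators [\phi\phi]_{n,\ell}:
\tau_{n,\ell} = 2\Delta_\phi + 2n + \gamma(n,\ell),
% with \gamma(n,\ell) < 0 for \Delta_\phi above the unitarity bound,
% and \gamma(n,\ell) \to 0 as the spin \ell \to \infty.
```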

  7. Querying Large Biological Network Datasets

    ERIC Educational Resources Information Center

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in an increasing amount of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of data available requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…

  8. Team Learning in Large Classes.

    ERIC Educational Resources Information Center

    Roueche, Suanne D., Ed.

    1984-01-01

    Information and suggestions are provided on the use of team learning in large college classes. Introductory material discusses the negative cycle of student-teacher interaction that may be provoked by large classes, and the use of permanent, heterogeneous, six- or seven-member student learning groups as the central focus of class activity as a…

  9. Sharpen Your Skills: Large Type.

    ERIC Educational Resources Information Center

    Knisely, Phillis; Wickham, Marian

    1984-01-01

    Three short articles about large type transcribing are provided for braille transcribers and teachers of the visually handicapped. The first article lists general suggestions for simple typewriter maintenance. The second article reviews the guidelines for typing fractions in large type for mathematics exercises. The third article describes a…

  10. Robust large dimension terahertz cloaking.

    PubMed

    Liang, Dachuan; Gu, Jianqiang; Han, Jiaguang; Yang, Yuanmu; Zhang, Shuang; Zhang, Weili

    2012-02-14

    A large scale homogenous invisibility cloak functioning at terahertz frequencies is reported. The terahertz invisibility device features a large concealed volume, low loss, and broad bandwidth. In particular, it is capable of hiding objects with a dimension nearly an order of magnitude larger than that of its lithographic counterpart, but without involving complex and time-consuming cleanroom processing. PMID:22253094

  11. Measuring happiness in large population

    NASA Astrophysics Data System (ADS)

    Wenas, Annabelle; Sjahputri, Smita; Takwin, Bagus; Primaldhi, Alfindra; Muhamad, Roby

    2016-01-01

    The ability to know the emotional states of a large number of people is important, for example, to ensure the effectiveness of public policies. In this study, we propose a measure of happiness that can be used on a large-scale population and is based on the analysis of Indonesian-language lexicons. Here, we incorporate human assessment of Indonesian words, then quantify happiness on a large scale from texts gathered from Twitter conversations. We used two psychological constructs to measure happiness: valence and arousal. We found that Indonesian words have a tendency towards positive emotions. We also identified several happiness patterns across days of the week, hours of the day, and selected conversation topics.
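Lexicon-based scoring of the kind described can be sketched in a few lines (the words and valence/arousal values below are invented for illustration, not the study's actual ratings):

```python
# Minimal sketch of lexicon-based happiness scoring; the lexicon
# entries and ratings here are hypothetical, not the study's data.
VALENCE = {"senang": 8.2, "bahagia": 8.5, "sedih": 2.1, "marah": 2.4}
AROUSAL = {"senang": 6.0, "bahagia": 6.4, "sedih": 3.9, "marah": 7.1}

def score(text, lexicon):
    # mean lexicon rating over the words of the text found in the lexicon
    words = [w for w in text.lower().split() if w in lexicon]
    if not words:
        return None
    return sum(lexicon[w] for w in words) / len(words)

tweet = "hari ini saya sangat senang dan bahagia"
v = score(tweet, VALENCE)   # mean valence of matched words
a = score(tweet, AROUSAL)   # mean arousal of matched words
```

Averaging such scores over many tweets, bucketed by hour, weekday, or topic, yields the kind of happiness patterns the abstract reports.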

  12. Large engines and vehicles, 1958

    NASA Technical Reports Server (NTRS)

    1978-01-01

    During the mid-1950s, the Air Force sponsored work on the feasibility of building large, single-chamber engines, presumably for boost-glide aircraft or spacecraft. In 1956, the Army missile development group began studies of large launch vehicles. The possibilities opened up by Sputnik accelerated this work and gave the Army an opportunity to bid for the leading role in launch vehicles. The Air Force had the responsibility for the largest ballistic missiles and hence a ready-made base for extending their capability for spaceflight. During 1958, actions taken to establish a civilian space agency, and the launch vehicle needs seen by its planners, added a third contender to the space vehicle competition. These activities during 1958 are examined as to how they resulted in the initiation of a large rocket engine and the first large launch vehicle.

  13. LSD: Large Survey Database framework

    NASA Astrophysics Data System (ADS)

    Juric, Mario

    2012-09-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than >10^2 nodes, and can be made to function in "shared nothing" architectures.

  14. Does Yellowstone need large fires

    SciTech Connect

    Romme, W.H. ); Turner, M.G.; Gardner, R.H.; Hargrove, W.W. )

    1994-06-01

    This paper synthesizes several studies initiated after the 1988 Yellowstone fires, to address the question whether the ecological effects of large fires differ qualitatively as well as quantitatively from small fires. Large burn patches had greater dominance and contagion of burn severity classes, and a higher proportion of crown fire. Burned aspen stands resprouted vigorously over an extensive area, but heavy ungulate browsing prevented establishment of new tree-sized stems. A burst of sexual reproduction occurred in forest herbs that usually reproduce vegetatively, and new aspen clones became established from seed - a rare event in this region. We conclude that the effects of large fires are qualitatively different, but less dramatically so than expected.

  15. Inflating with large effective fields

    SciTech Connect

    Burgess, C.P.; Cicoli, M.; Quevedo, F.; Williams, M. E-mail: mcicoli@ictp.it E-mail: mwilliams@perimeterinsititute.ca

    2014-11-01

    We re-examine large scalar fields within effective field theory, in particular focussing on the issues raised by their use in inflationary models (as suggested by BICEP2 to obtain primordial tensor modes). We argue that when the large-field and low-energy regimes coincide the scalar dynamics is most effectively described in terms of an asymptotic large-field expansion whose form can be dictated by approximate symmetries, which also help control the size of quantum corrections. We discuss several possible symmetries that can achieve this, including pseudo-Goldstone inflatons characterized by a coset G/H (based on abelian and non-abelian, compact and non-compact symmetries), as well as symmetries that are intrinsically higher dimensional. Besides the usual trigonometric potentials of Natural Inflation we also find in this way simple large-field power laws (like V ∝ φ²) and exponential potentials, V(φ) = ∑_k V_k e^(−kφ/M). Both of these can describe the data well and give slow-roll inflation for large fields without the need for a precise balancing of terms in the potential. The exponential potentials achieve large r through the limit |η| ≪ ε and so predict r ≅ (8/3)(1 − n_s); consequently n_s ≅ 0.96 gives r ≅ 0.11 but not much larger (and so could be ruled out as measurements on r and n_s improve). We examine the naturalness issues for these models and give simple examples where symmetries protect these forms, using both pseudo-Goldstone inflatons (with non-abelian non-compact shift symmetries following familiar techniques from chiral perturbation theory) and extra-dimensional models.
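The quoted numbers follow from the standard single-field slow-roll relations (a consistency check, not taken from the paper itself):

```latex
% Standard slow-roll relations:
n_s - 1 = 2\eta - 6\epsilon, \qquad r = 16\epsilon .
% In the limit |\eta| \ll \epsilon:
1 - n_s \simeq 6\epsilon
  \;\Rightarrow\; r = 16\epsilon \simeq \tfrac{8}{3}\,(1 - n_s),
% so n_s \simeq 0.96 gives r \simeq (8/3)(0.04) \simeq 0.11.
```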

  16. The MAGNEX large acceptance spectrometer

    SciTech Connect

    Cavallaro, M.; Cappuzzello, F.; Cunsolo, A.; Carbone, D.; Foti, A.

    2010-03-01

    The main features of the MAGNEX large acceptance magnetic spectrometer are described. It has a quadrupole + dipole layout and a hybrid detector located at the focal plane. The aberrations due to the large angular (50 msr) and momentum (±13%) acceptance are reduced by an accurate hardware design and then compensated by an innovative software ray-reconstruction technique. The resolutions obtained in energy, angle, and mass are presented in the paper. MAGNEX has been used up to now for different experiments in nuclear physics and astrophysics, confirming it to be a multipurpose device.

  17. Detecting communities in large networks

    NASA Astrophysics Data System (ADS)

    Capocci, A.; Servedio, V. D. P.; Caldarelli, G.; Colaiori, F.

    2005-07-01

    We develop an algorithm to detect community structure in complex networks. The algorithm is based on spectral methods and takes into account weights and link orientation. Since the method efficiently detects clustered nodes in large networks even when these are not sharply partitioned, it turns out to be especially suitable for the analysis of social and information networks. We test the algorithm on a large-scale data set from a psychological experiment of word association. In this case, it proves to be successful both in clustering words and in uncovering mental association patterns.
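
    As a toy illustration of the spectral idea behind such algorithms (our simplified, unweighted stand-in for the paper's weighted, orientation-aware method), the sign pattern of the graph Laplacian's second eigenvector separates two loosely joined clusters:

```python
import numpy as np

# Two triangles {0,1,2} and {3,4,5} joined by the single edge (2,3);
# the node and edge choices are ours, purely for illustration.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A      # graph Laplacian L = D - A
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]             # eigenvector of the 2nd-smallest eigenvalue
community = fiedler > 0             # its sign pattern splits the two clusters
print(community)
```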

  18. Energy conservation in large buildings

    NASA Astrophysics Data System (ADS)

    Rosenfeld, A.; Hafemeister, D.

    1985-11-01

    As energy prices rise, newly energy-aware designers use better tools and technology to create energy-efficient buildings. Thus the U.S. office stock (average age 20 years) uses 250 kBTU/ft² of resource energy, but the guzzler of 1972 uses 500 (twice as much), and the 1986 ASHRAE standards call for 100-125 (less than 25% of their 1972 ancestors). Surprisingly, the first real cost of these efficient buildings has not risen since 1972. Scaling laws are used to calculate heat gains and losses of buildings to obtain the ΔT(free), which can be as large as 15-30 °C (30-60 °F) for large buildings. The net thermal demand and thermal time constants are determined for the Swedish Thermodeck buildings, which need essentially no heat in the winter and no chillers in summer. The BECA and other data bases for large buildings are discussed. Off-peak cooling for large buildings is analyzed in terms of saving peak electrical power. By downsizing chillers and using cheaper, off-peak power, cost-effective thermal storage in new commercial buildings can reduce U.S. peak power demands by 10-20 GW in 15 years. A further potential of about 40 GW is available from adopting partial thermal storage and more efficient air conditioners in existing buildings.

  19. Ideas for Managing Large Classes.

    ERIC Educational Resources Information Center

    Kabel, Robert L.

    1983-01-01

    Describes management strategies used in a large kinetics/industrial chemistry course. Strategies are designed to make instruction in such classes more efficient and effective. Areas addressed include homework assignment, quizzes, final examination, grading and feedback, and rewards for conducting the class in the manner described. (JN)

  20. CERN's Large Hadron Collider project

    NASA Astrophysics Data System (ADS)

    Fearnley, Tom A.

    1997-03-01

    The paper gives a brief overview of CERN's Large Hadron Collider (LHC) project. After an outline of the physics motivation, we describe the LHC machine, interaction rates, experimental challenges, and some important physics channels to be studied. Finally we discuss the four experiments planned at the LHC: ATLAS, CMS, ALICE and LHC-B.

  1. Large area CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Turchetta, R.; Guerrini, N.; Sedgwick, I.

    2011-01-01

    CMOS image sensors, also known as CMOS Active Pixel Sensors (APS) or Monolithic Active Pixel Sensors (MAPS), are today the dominant imaging devices. They are omnipresent in our daily life, as image sensors in cellular phones, web cams, digital cameras, ... In these applications, the pixels can be very small, in the micron range, and the sensors themselves tend to be limited in size. However, many scientific applications, like particle or X-ray detection, require large format, often with large pixels, as well as other specific performance, like low noise, radiation hardness or very fast readout. The sensors are also required to be sensitive to a broad spectrum of radiation: photons from the silicon cut-off in the IR down to UV and X- and gamma-rays through the visible spectrum as well as charged particles. This requirement calls for modifications to the substrate to be introduced to provide optimized sensitivity. This paper will review existing CMOS image sensors, whose size can be as large as a single CMOS wafer, and analyse the technical requirements and specific challenges of large format CMOS image sensors.

  2. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20h⁻¹ Mpc, where the Hubble constant H₀ = 100h km s⁻¹ Mpc⁻¹; 1 pc = 3.09 × 10¹⁶ m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400

  3. Fermi's Large Area Telescope (LAT)

    NASA Video Gallery

    Fermi’s Large Area Telescope (LAT) is the spacecraft’s main scientific instrument. This animation shows a gamma ray (purple) entering the LAT, where it is converted into an electron (red) and a...

  4. The very large hadron collider

    SciTech Connect

    1998-09-01

    This paper reviews the purposes to be served by a very large hadron collider and the organization and coordination of efforts to bring it about. There is some discussion of magnet requirements and R&D and the suitability of the Fermilab site.

  5. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  6. Unusually large submandibular gland stone.

    PubMed

    Al-Hussona, Aws Adel

    2015-01-01

    Submandibular gland calculi is the most common disease of the gland. In this article, we report a case with unusually large stone located at the hilum of the gland causing necrosis of the overlying duct and the oral mucosa (floor of mouth). PMID:25934409

  7. Mass spectrometry of large complexes.

    PubMed

    Bich, Claudia; Zenobi, Renato

    2009-10-01

    Mass spectrometry is becoming a more and more powerful tool for investigating protein complexes. Recent developments, based on different ionization techniques, electrospray, desorption/ionization and others are contributing to the usefulness of MS to describe the organization and structure of large non-covalent assemblies. PMID:19782560

  8. Large deviations and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Sornette, Didier

    Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends with a general functional integral formulation. A major point is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return, the role of large deviations in multiplicative processes, and the different optimal strategies for investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
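
    The distinction between the average and the typical return in a multiplicative process can be illustrated with a short simulation (parameters assumed for illustration, not from the paper):

```python
import math
import random

# Each period, wealth is multiplied by 1.5 or 0.6 with equal probability:
# the average per-period return E[r] = 1.05 > 1, yet the typical (median)
# outcome decays, since E[log r] = 0.5 * log(0.9) < 0.
random.seed(0)
T, N = 100, 10_000
mean_log = 0.5 * (math.log(1.5) + math.log(0.6))
mean_ret = 0.5 * (1.5 + 0.6)

finals = []
for _ in range(N):
    w = 1.0
    for _ in range(T):
        w *= 1.5 if random.random() < 0.5 else 0.6
    finals.append(w)

average = sum(finals) / N          # dominated by rare, very lucky paths
typical = sorted(finals)[N // 2]   # median ~ exp(T * E[log r]), far below 1
print(f"average = {average:.3g}, typical = {typical:.3g}")
```

With these numbers the sample average stays far above the typical outcome, which is the multiplicative-process effect, and the size-dependent optimal strategies, that the abstract alludes to.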

  9. Large gap magnetic suspension system

    NASA Technical Reports Server (NTRS)

    Abdelsalam, Moustafa K.; Eyssa, Y. M.

    1991-01-01

    The design of a large gap magnetic suspension system is discussed. Some of the topics covered include: the system configuration, permanent magnet material, levitation magnet system, superconducting magnets, resistive magnets, superconducting levitation coils, resistive levitation coils, levitation magnet system, and the nitrogen cooled magnet system.

  10. Large-scale polarimetry of large optical galaxies

    NASA Astrophysics Data System (ADS)

    Sholomitskii, G. B.; Maslov, I. A.; Vitrichenko, E. A.

    1999-11-01

    We present preliminary results of wide-field visual CCD polarimetry for large optical galaxies through a concentric multisector radial-tangential polaroid analyzer mounted at the intermediate focus of a Zeiss-1000 telescope. The mean degree of tangential polarization in a 13-arcmin field, which was determined by processing images with imprinted "orthogonal" sectors, ranges from several percent (M 82) and 0.51% (the spirals M 51, M 81) to lower values for elliptical galaxies (M 49, M 87). It is emphasized that the parameters of large-scale polarization can be properly determined by using physical models for galaxies; inclination and azimuthal dependences of the degree of polarization are given for spirals.

  11. The physics of large eruptions

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Agust

    2015-04-01

    Based on eruptive volumes, eruptions can be classified as follows: small if the volumes are from less than 0.001 km³ to 0.1 km³, moderate if the volumes are from 0.1 to 10 km³, and large if the volumes are from 10 km³ to 1000 km³ or larger. The largest known explosive and effusive eruptions have eruptive volumes of 4000-5000 km³. The physics of small to moderate eruptions is reasonably well understood. For a typical mafic magma chamber in a crust that behaves elastically, about 0.1% of the magma leaves the chamber (erupted and injected as a dyke) during rupture and eruption. Similarly, for a typical felsic magma chamber, the eruptive/injected volume during rupture and eruption is about 4%. To provide small to moderate eruptions, chamber volumes of the order of several tens to several hundred cubic kilometres would be needed. Shallow crustal chambers of these sizes are common, and deep-crustal and upper-mantle reservoirs of thousands of cubic kilometres exist. Thus, elastic and poro-elastic chambers of typical volumes can account for small to moderate eruptive volumes. When the eruptions become large, with volumes of tens or hundreds of cubic kilometres or more, an ordinary poro-elastic mechanism can no longer explain the eruptive volumes. The required sizes of the magma chambers and reservoirs to explain such volumes are simply too large to be plausible. Here I propose that the mechanics of large eruptions is fundamentally different from that of small to moderate eruptions. More specifically, I suggest that all large eruptions derive their magmas from chambers and reservoirs whose total cavity-volumes are mechanically reduced very much during the eruption. There are two mechanisms by which chamber/reservoir cavity-volumes can be reduced rapidly so as to squeeze out much of, or all of, their magmas. One is piston-like caldera collapse. The other is graben subsidence. During large slip on the ring-faults/graben-faults the associated chamber/reservoir shrinks in volume
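
    The argument that elastic chambers cannot supply large eruptions reduces to simple arithmetic; a hedged sketch using the eruptive fractions quoted above (the function name is ours):

```python
# Only a small fraction f of the stored magma leaves an elastic chamber at
# rupture, so supplying an eruption of volume V requires a chamber of V / f.
def required_chamber_volume(v_eruption_km3: float, fraction: float) -> float:
    return v_eruption_km3 / fraction

# Mafic chamber, f ~ 0.1%: even a moderate 10 km3 eruption needs ~10^4 km3.
print(required_chamber_volume(10.0, 0.001))
# Felsic chamber, f ~ 4%: a 100 km3 eruption still needs ~2.5 x 10^3 km3.
print(required_chamber_volume(100.0, 0.04))
```

Chamber volumes in the thousands of cubic kilometres are exactly the "too large to be plausible" sizes the abstract argues against, motivating the cavity-shrinkage mechanisms proposed instead.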

  12. Large aperture Fresnel telescopes/011

    SciTech Connect

    Hyde, R.A., LLNL

    1998-07-16

    At Livermore we've spent the last two years examining an alternative approach towards very large aperture (VLA) telescopes, one based upon transmissive Fresnel lenses rather than on mirrors. Fresnel lenses are attractive for VLA telescopes because they are launchable (lightweight, packagable, and deployable) and because they virtually eliminate the traditional, very tight, surface shape requirements faced by reflecting telescopes. Their (potentially severe) optical drawback, a very narrow spectral bandwidth, can be eliminated by use of a second (much smaller) chromatically-correcting Fresnel element. This enables Fresnel VLA telescopes to provide either single band (Δλ/λ ≈ 0.1), multiple band, or continuous spectral coverage. Building and fielding such large Fresnel lenses will present a significant challenge, but one which appears, with effort, to be solvable.

  13. Chunking of Large Multidimensional Arrays

    SciTech Connect

    Rotem, Doron; Otoo, Ekow J.; Seshadri, Sridhar

    2007-02-28

    Data intensive scientific computations as well as on-line analytical processing applications are done on very large datasets that are modeled as k-dimensional arrays. The storage organization of such arrays on disks is done by partitioning the large global array into fixed-size hyper-rectangular sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic workloads on real life data sets, show that our chunking is much more efficient than the existing approximate solutions.
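
    A sketch of the metric being optimized, under the simplifying assumption of uniformly placed, axis-aligned queries on an integer grid (the paper's exact probabilistic models may differ): per dimension, a query of edge q on chunks of edge c touches (q + c − 1)/c chunks on average, and the dimensions multiply:

```python
# Expected number of chunks a query touches, assuming uniformly placed,
# axis-aligned queries on an integer grid: (q + c - 1) / c per dimension.
def expected_chunks(query_shape, chunk_shape):
    e = 1.0
    for q, c in zip(query_shape, chunk_shape):
        e *= (q + c - 1) / c
    return e

# Three chunk shapes with the same volume (1024 cells) against one
# elongated query: matching the chunk shape to the workload pays off.
query = (100, 4)
for chunk in [(32, 32), (64, 16), (128, 8)]:
    print(chunk, expected_chunks(query, chunk))
```

For this workload the elongated (128, 8) chunks touch the fewest chunks per query, illustrating why chunk shape, not just chunk size, matters.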

  14. Large space structure damping design

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Haviland, J. K.

    1983-01-01

    Several FORTRAN subroutines and programs were developed which compute complex eigenvalues of a damped system using different approaches, and which rescale mode shapes to unit generalized mass and make rigid bodies orthogonal to each other. An analytical proof of a Minimum Constrained Frequency Criterion (MCFC) for a single damper is presented. A method to minimize the effect of control spill-over for large space structures is proposed. The characteristic equation of an undamped system with a generalized control law is derived using reanalysis theory. This equation can be implemented in computer programs for efficient eigenvalue analysis or control quasi-synthesis. Methods to control vibrations in large space structures are reviewed and analyzed. The resulting prototype, using an electromagnetic actuator, is described.

  15. Large aperture scanning airborne lidar

    NASA Technical Reports Server (NTRS)

    Smith, J.; Bindschadler, R.; Boers, R.; Bufton, J. L.; Clem, D.; Garvin, J.; Melfi, S. H.

    1988-01-01

    A large aperture scanning airborne lidar facility is being developed to provide important new capabilities for airborne lidar sensor systems. The proposed scanning mechanism allows for a large aperture telescope (25 in. diameter) in front of an elliptical flat (25 x 36 in.) turning mirror positioned at a 45 degree angle with respect to the telescope optical axis. The lidar scanning capability will provide opportunities for acquiring new data sets for atmospheric, earth resources, and oceans communities. This completed facility will also make available the opportunity to acquire simulated EOS lidar data on a near global basis. The design and construction of this unique scanning mechanism presents exciting technological challenges of maintaining the turning mirror optical flatness during scanning while exposed to extreme temperatures, ambient pressures, aircraft vibrations, etc.

  16. Progress on large area GEMs

    NASA Astrophysics Data System (ADS)

    Villa, Marco; Duarte Pinto, Serge; Alfonsi, Matteo; Brock, Ian; Croci, Gabriele; David, Eric; de Oliveira, Rui; Ropelewski, Leszek; Taureg, Hans; van Stenis, Miranda

    2011-02-01

    The Gas Electron Multiplier (GEM) manufacturing technique has recently evolved to allow the production of large area GEMs. A novel approach based on single mask photolithography eliminates the mask alignment issue, which limits the dimensions in the traditional double mask process. Moreover, a splicing technique overcomes the limited width of the raw material. Stretching and handling issues in large area GEMs have also been addressed. Using the new improvements it was possible to build a prototype triple-GEM detector of ~2000 cm² active area, aimed at an application for the TOTEM T1 upgrade. Further refinements of the single mask technique allow great control over the shape of the GEM holes and the size of the rims, which can be tuned as needed. In this framework, simulation studies can help to understand the GEM behavior depending on the hole shape.

  17. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.
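
    For orientation, the parallel-plate electrostatic force these actuators rely on is F = ε₀AV²/(2d²); a sketch with assumed, illustrative dimensions (not the device's actual parameters):

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_force(area_m2: float, volts: float, gap_m: float) -> float:
    """Attractive force between parallel plates: eps0 * A * V^2 / (2 * d^2)."""
    return EPS0 * area_m2 * volts**2 / (2.0 * gap_m**2)

# Assumed numbers: a 1 mm x 1 mm actuator pad, 100 V across a 10 um gap.
F = plate_force(1e-6, 100.0, 10e-6)
print(f"{F * 1e6:.1f} uN")  # a few hundred micronewtons
```

The strong 1/d² dependence is why such actuators favour small gaps, and the V² scaling is what the lithographically defined electrodes exploit to deform the foil.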

  18. Measuring Diameters Of Large Vessels

    NASA Technical Reports Server (NTRS)

    Currie, James R.; Kissel, Ralph R.; Oliver, Charles E.; Smith, Earnest C.; Redmon, John W., Sr.; Wallace, Charles C.; Swanson, Charles P.

    1990-01-01

    Computerized apparatus produces accurate results quickly. Apparatus measures diameter of tank or other large cylindrical vessel, without prior knowledge of exact location of cylindrical axis. Produces plot of inner circumference, estimate of true center of vessel, data on radius, diameter of best-fit circle, and negative and positive deviations of radius from circle at closely spaced points on circumference. Eliminates need for time-consuming and error-prone manual measurements.
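
    The abstract does not specify the fitting algorithm; one standard way to obtain a best-fit circle and center estimate from measured circumference points is the algebraic (Kåsa) least-squares fit, sketched here:

```python
import math
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit: returns (cx, cy, R)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Solve x^2 + y^2 + D*x + E*y + F = 0 for D, E, F in least squares.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, math.sqrt(cx**2 + cy**2 - F)

# Synthetic test: points on a circle of radius 2 centred at (1, -3).
pts = [(1 + 2 * math.cos(t), -3 + 2 * math.sin(t))
       for t in (k * 2 * math.pi / 12 for k in range(12))]
cx, cy, R = fit_circle(pts)
print(round(cx, 6), round(cy, 6), round(R, 6))  # 1.0 -3.0 2.0
```

Deviations of each measured radius from R then give the negative and positive excursions the apparatus reports.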

  19. Extremely Large Cusp Diamagnetic Cavities

    NASA Astrophysics Data System (ADS)

    Chen, J.; Fritz, T. A.

    2002-05-01

    Extremely large diamagnetic cavities, with sizes as large as 6 Re, have been observed in the dayside high-altitude cusp regions. Some of the diamagnetic cavities were independent of the IMF directions, which is unexpected in current MHD (or ISM) models, suggesting that the cusp diamagnetic cavities are different from the magnetospheric sash; this provides a challenge to the existing MHD (or ISM) models. Associated with these cavities are ions with energies from 40 keV up to 8 MeV. The charge state distribution of these cusp cavity ions was indicative of their seed populations being a mixture of ionospheric and solar wind particles. The intensities of the cusp cavity energetic ions were observed to increase by as much as four orders of magnitude. During a high solar wind pressure period on April 21, 1999, the POLAR spacecraft observed lower ion flux in the dayside high-latitude magnetosheath than in the neighbouring cusp cavities. These observations indicate that the dayside high-altitude cusp diamagnetic cavity is a key region for transferring solar wind energy, mass, and momentum into the Earth's magnetosphere. These energetic particles in the cusp diamagnetic cavity, together with the cusp's connectivity, have significant global impacts on geospace environment research and shed light on the long-standing unsolved fundamental issue of the origins of the energetic particles in the ring current and in upstream ion events.

  20. Extremely large cusp diamagnetic cavities

    NASA Astrophysics Data System (ADS)

    Chen, J.; Fritz, T.; Siscoe, G.

    Extremely large diamagnetic cavities, with sizes as large as 6 Re, have been observed in the dayside high-altitude cusp regions. These diamagnetic cavities are present day after day. Some of the diamagnetic cavities have been observed on the morningside during intervals when the IMF By component was positive (duskward), suggesting that the cusp diamagnetic cavities are different from the magnetospheric sash predicted by MHD simulations. Associated with these cavities are ions with energies from 40 keV up to 8 MeV. The charge state distribution of these cusp cavity ions was indicative of their seed populations being a mixture of ionospheric and solar wind particles. The intensities of the cusp cavity energetic ions were observed to increase by as much as four orders of magnitude. These observations indicate that the dayside high-altitude cusp diamagnetic cavity is a key region for transferring solar wind energy, mass, and momentum into the Earth's magnetosphere. These energetic particles in the cusp diamagnetic cavity, together with the cusp's connectivity to the entire magnetopause, may have significant global impacts on the geospace environment and may shed light on the long-standing unsolved fundamental issue of the origins of the energetic particles in the ring current and in the regions upstream of the subsolar magnetopause where energetic ion events are frequently observed.

  1. Large Component Removal/Disposal

    SciTech Connect

    Wheeler, D. M.

    2002-02-27

    This paper describes the removal and disposal of the large components from the Maine Yankee Atomic Power Plant. The large components discussed include the three steam generators, pressurizer, and reactor pressure vessel. Two separate Exemption Requests, which included radiological characterizations, shielding evaluations, structural evaluations and transportation plans, were prepared and issued to the DOT for approval to ship these components; the first was for the three steam generators and one pressurizer, the second was for the reactor pressure vessel. Both Exemption Requests were submitted to the DOT in November 1999. The DOT approved the Exemption Requests in May and July of 2000, respectively. The steam generators and pressurizer have been removed from Maine Yankee and shipped to the processing facility. They were removed from Maine Yankee's Containment Building, loaded onto specially designed skid assemblies, transported onto two separate barges, tied down to the barges, then shipped 2750 miles to Memphis, Tennessee for processing. The Reactor Pressure Vessel Removal Project is currently under way and scheduled to be completed by the fall of 2002. The planning, preparation, and removal of these large components have required extensive efforts in planning and implementation on the part of all parties involved.

  2. Deflectometric measurement of large mirrors

    NASA Astrophysics Data System (ADS)

    Olesch, Evelyn; Häusler, Gerd; Wörnlein, André; Stinzing, Friedrich; van Eldik, Christopher

    2014-06-01

    We discuss the inspection of large-sized, spherical mirror tiles by 'Phase Measuring Deflectometry' (PMD). About 10,000 of such mirror tiles, each satisfying strict requirements regarding the spatial extent of the point-spread-function (PSF), are planned to be installed on the Cherenkov Telescope Array (CTA), a future ground-based instrument to observe the sky in very high energy gamma-rays. Owing to their large radii of curvature of up to 60 m, a direct PSF measurement of these mirrors with concentric geometry requires large space. We present a PMD sensor with a footprint of only 5×2×1.2 m³ that overcomes this limitation. The sensor intrinsically acquires the surface slope; the shape data are calculated by integration. In this way, the PSF can be calculated for real case scenarios, e.g., when the light source is close to infinity and off-axis. The major challenge is the calibration of the PMD sensor, specifically because the PSF data have to be reconstructed from different camera views. The calibration of the setup is described, and measurements are presented and compared to results obtained with the direct approach.
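
    Shape-from-slope integration can be sketched in one dimension (a much-simplified stand-in for the sensor's 2-D reconstruction; all values assumed): integrate the slopes of a sphere with the mirrors' 60 m radius of curvature and compare with the true sag:

```python
import numpy as np

# 1-D sketch: slopes of a sphere with a 60 m radius of curvature,
# sampled across an assumed 1 m aperture.
x = np.linspace(-0.5, 0.5, 201)
R = 60.0
slope = x / np.sqrt(R**2 - x**2)   # dz/dx for z(x) = R - sqrt(R^2 - x^2)

# Trapezoidal integration of the measured slope recovers the shape.
dx = x[1] - x[0]
z = np.concatenate([[0.0], np.cumsum((slope[1:] + slope[:-1]) / 2.0 * dx)])
z -= z[x.size // 2]                # anchor the vertex at z = 0

z_true = R - np.sqrt(R**2 - x**2)
z_true -= z_true[x.size // 2]
print(float(np.max(np.abs(z - z_true))))  # integration error, metres
```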

  3. Large wood recruitment and transport during large floods: A review

    NASA Astrophysics Data System (ADS)

    Comiti, F.; Lucía, A.; Rickenmann, D.

    2016-09-01

    Large wood (LW) elements transported during large floods are long known to have the capacity to induce dangerous obstructions along the channel network, mostly at bridges and at hydraulic structures such as weirs. However, our current knowledge of wood transport dynamics during high-magnitude flood events is still very scarce, mostly because these are (locally) rare and thus unlikely to be directly monitored. Therefore, post-event surveys are invaluable ways to get insights (although indirectly) on LW recruitment processes, transport distance, and factors inducing LW deposition - all aspects that are crucial for the proper management of river basins related to flood hazard mitigation. This paper presents a review of the (quite limited) literature available on LW transport during large floods, drawing extensively on the authors' own experience in mountain and piedmont rivers, published and unpublished. The overall picture emerging from these studies points to a high, catchment-specific variability in all the different processes affecting LW dynamics during floods. Specifically, in the LW recruitment phase, the relative floodplain (bank erosion) vs. hillslope (landslide and debris flows) contribution in mountain rivers varies substantially, as it relates to the extent of channel widening (which depends on many variables itself) but also to the hillslope-channel connectivity of LW mobilized on the slopes. As to the LW transport phase within the channel network, it appears to be widely characterized by supply-limited conditions; whereby LW transport rates (and thus volumes) are ultimately constrained by the amount of LW that is made available to the flow. Indeed, LW deposition during floods was mostly (in terms of volume) observed at artificial structures (bridges) in all the documented events. 
This implies that the estimation of LW recruitment and the assessment of clogging probabilities for each structure (for a flood event of given magnitude) are the most important

  4. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous the growing impact of large scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvement in hardware. Equally impressive is the performance gain due to new algorithms - as I will illustrate using some recently developed algorithms. At the same time modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond, a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large scale computing machines, tools to analyze the large volume of data obtained from such simulations, and as an emerging field provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures. 
Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational

  5. Large-bore pipe decontamination

    SciTech Connect

    Ebadian, M.A.

    1998-01-01

    The decontamination and decommissioning (D and D) of 1200 buildings within the US Department of Energy-Office of Environmental Management (DOE-EM) Complex will require the disposition of miles of pipe. The disposition of large-bore pipe, in particular, presents difficulties in the area of decontamination and characterization. The pipe is potentially contaminated internally as well as externally. This situation requires a system capable of decontaminating and characterizing both the inside and outside of the pipe. Current decontamination and characterization systems are not designed for application to this geometry, making the direct disposal of piping systems necessary in many cases. The pipe often creates voids in the disposal cell, which requires the pipe to be cut in half or filled with a grout material. These methods are labor intensive and costly to perform on large volumes of pipe. Direct disposal does not take advantage of recycling, which could provide monetary dividends. To facilitate the decontamination and characterization of large-bore piping and thereby reduce the volume of piping required for disposal, a detailed analysis will be conducted to document the pipe remediation problem set; determine potential technologies to solve this remediation problem set; design and laboratory test potential decontamination and characterization technologies; fabricate a prototype system; provide a cost-benefit analysis of the proposed system; and transfer the technology to industry. This report summarizes the activities performed during fiscal year 1997 and describes the planned activities for fiscal year 1998. Accomplishments for FY97 include the development of the applicable and relevant and appropriate regulations, the screening of decontamination and characterization technologies, and the selection and initial design of the decontamination system.

  6. Radiosurgery for Large Brain Metastases

    SciTech Connect

    Han, Jung Ho; Kim, Dong Gyu; Chung, Hyun-Tai; Paek, Sun Ha; Park, Chul-Kee; Jung, Hee-Won

    2012-05-01

Purpose: To determine the efficacy and safety of radiosurgery in patients with large brain metastases. Patients and Methods: Eighty patients with large brain metastases (>14 cm³) were treated with radiosurgery between 1998 and 2009. The mean age was 59 ± 11 years, and 49 (61.3%) were men. Neurologic symptoms were identified in 77 patients (96.3%), and 30 (37.5%) exhibited a dependent functional status. The primary disease was under control in 36 patients (45.0%), and 44 (55.0%) had a single lesion. The mean tumor volume was 22.4 ± 8.8 cm³, and the mean marginal dose prescribed was 13.8 ± 2.2 Gy. Results: The median survival time from radiosurgery was 7.9 months (95% confidence interval [CI], 5.343-10.46), and the 1-year survival rate was 39.2%. Functional improvement within 1-4 months or the maintenance of the initial independent status was observed in 48 (60.0%) and 20 (25.0%) patients after radiosurgery, respectively. Control of the primary disease, a marginal dose of ≥11 Gy, and a tumor volume ≥26 cm³ were significantly associated with overall survival (hazard ratio, 0.479; p = .018; 95% CI, 0.261-0.880; hazard ratio, 0.350; p = .004; 95% CI, 0.171-0.718; hazard ratio, 2.307; p = .006; 95% CI, 1.274-4.180, respectively). Unacceptable radiation-related toxicities (Radiation Therapy Oncology Group central nervous system toxicity Grade 3, 4, and 5 in 7, 6, and 2 patients, respectively) developed in 15 patients (18.8%). Conclusion: Radiosurgery seems to have efficacy comparable to that of surgery for large brain metastases. However, the rate of radiation-related toxicities after radiosurgery should be considered when deciding on a treatment modality.

  7. The Large Synoptic Survey Telescope

    NASA Astrophysics Data System (ADS)

    Axelrod, T. S.

    2006-07-01

The Large Synoptic Survey Telescope (LSST) is an 8.4 meter telescope with a 10 square degree field of view and a 3 Gigapixel imager, planned to be on-sky in 2012. It is a dedicated all-sky survey instrument, with several complementary science missions. These include understanding dark energy through weak lensing and supernovae; exploring transients and variable objects; creating and maintaining a solar system map, with particular emphasis on potentially hazardous objects; and increasing the precision with which we understand the structure of the Milky Way. The instrument operates continuously at a rapid cadence, repetitively scanning the visible sky every few nights. The data flow rates from LSST are larger than those from current surveys by roughly a factor of 1000: a few GB/night are typical today; LSST will deliver a few TB/night. From a computing hardware perspective, this factor of 1000 can be dealt with easily in 2012. The major issues in designing the LSST data management system arise from the fact that the number of people available to critically examine the data will not grow from current levels. This has a number of implications. For example, every large imaging survey today is resigned to the fact that its image reduction pipeline fails at some significant rate. Many of these failures are dealt with by rerunning the reduction pipeline under human supervision, with carefully ``tweaked'' parameters to deal with the original problem. For LSST, this will no longer be feasible. The problem is compounded by the fact that the processing must of necessity occur on clusters with large numbers of CPUs and disk drives, and with some components connected by long-haul networks. This inevitably results in a significant rate of hardware component failures, which can easily lead to further software failures. Both hardware and software failures must be seen as a routine fact of life rather than rare exceptions to normality.

  8. Large spin systematics in CFT

    NASA Astrophysics Data System (ADS)

    Alday, Luis F.; Bissi, Agnese; Lukowski, Tomasz

    2015-11-01

Using conformal field theory (CFT) arguments we derive an infinite number of constraints on the large spin expansion of the anomalous dimensions and structure constants of higher spin operators. These arguments rely only on analyticity, unitarity, crossing symmetry and the structure of the conformal partial wave expansion. We obtain results both for perturbative CFT, to all orders in the perturbation parameter, and non-perturbatively. For the case of conformal gauge theories this provides a proof of the reciprocity principle to all orders in perturbation theory and yields a new "reciprocity" principle for structure constants. We argue that these results extend also to non-conformal theories.

  9. LHC: The Large Hadron Collider

    SciTech Connect

    Lincoln, Don

    2015-03-04

The Large Hadron Collider (or LHC) is the world’s most powerful particle accelerator. In 2012, scientists used data taken by it to discover the Higgs boson, before pausing operations for upgrades and improvements. In the spring of 2015, the LHC will return to operations with 163% of the energy it had before and with three times as many collisions per second. It’s essentially a new and improved version of itself. In this video, Fermilab’s Dr. Don Lincoln explains both some of the absolutely amazing scientific and engineering properties of this modern scientific wonder.

  10. Uncertainties in large space systems

    NASA Technical Reports Server (NTRS)

    Fuh, Jon-Shen

    1988-01-01

    Uncertainties of a large space system (LSS) can be deterministic or stochastic in nature. The former may result in, for example, an energy spillover problem by which the interaction between unmodeled modes and controls may cause system instability. The stochastic uncertainties are responsible for mode localization and estimation errors, etc. We will address the effects of uncertainties on structural model formulation, use of available test data to verify and modify analytical models before orbiting, and how the system model can be further improved in the on-orbit environment.

  11. Microfluidic large-scale integration.

    PubMed

    Thorsen, Todd; Maerkl, Sebastian J; Quake, Stephen R

    2002-10-18

    We developed high-density microfluidic chips that contain plumbing networks with thousands of micromechanical valves and hundreds of individually addressable chambers. These fluidic devices are analogous to electronic integrated circuits fabricated using large-scale integration. A key component of these networks is the fluidic multiplexor, which is a combinatorial array of binary valve patterns that exponentially increases the processing power of a network by allowing complex fluid manipulations with a minimal number of inputs. We used these integrated microfluidic networks to construct the microfluidic analog of a comparator array and a microfluidic memory storage device whose behavior resembles random-access memory. PMID:12351675
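The exponential gain from the fluidic multiplexor can be sketched in a few lines. This is an illustrative model, not the authors' exact valve layout: control line 2b+v is assumed to pinch every flow channel whose address bit b equals v, so n channels need only 2·ceil(log₂ n) control lines.

```python
import math

def control_lines_needed(n_channels):
    """A binary fluidic multiplexor pairs each address bit with its
    complement, so n channels need 2*ceil(log2(n)) control lines."""
    return 2 * math.ceil(math.log2(n_channels))

def select_channel(k, n_bits):
    """Pressurize, for each bit b, the line that pinches channels whose
    bit b differs from k's; every channel except k is then closed."""
    return {2 * b + (1 - ((k >> b) & 1)) for b in range(n_bits)}

def is_channel_open(channel, pressurized_lines, n_bits):
    """A channel flows only if no pressurized line pinches it; line
    2*b + v is modeled as closing channels whose bit b equals v."""
    return all(
        2 * b + ((channel >> b) & 1) not in pressurized_lines
        for b in range(n_bits)
    )
```

In this model, 1024 chambers need only 20 control inputs, which is the combinatorial leverage the abstract describes.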

  12. the Large Aperture GRB Observatory

    SciTech Connect

    Bertou, Xavier

    2009-04-30

The Large Aperture GRB Observatory (LAGO) aims at the detection of high energy photons from Gamma Ray Bursts (GRB) using the single particle technique (SPT) in ground based water Cherenkov detectors (WCD). To reach a reasonable sensitivity, high altitude mountain sites have been selected in Mexico (Sierra Negra, 4550 m a.s.l.), Bolivia (Chacaltaya, 5300 m a.s.l.) and Venezuela (Merida, 4765 m a.s.l.). We report on the project's progress and its first operation at high altitude, on a search for bursts in 6 months of preliminary data, and on a search for signals at ground level when satellites report a burst.

  13. Large block test status report

    SciTech Connect

    Wilder, D.G.; Lin, W.; Blair, S.C.

    1997-08-26

This report is intended to serve as a status report and essentially transmits the data that have been collected to date on the Large Block Test (LBT). The analyses of the data will be performed during FY98, after which a complete report will be prepared. This status report includes introductory material that is not strictly needed to transmit data but is available at this time and is therefore included. As such, this status report will serve as the template for the future report, and the information is thus preserved.

  14. Large aperture diffractive space telescope

    DOEpatents

    Hyde, Roderick A.

    2001-01-01

A large (10's of meters) aperture space telescope including two separate spacecraft--an optical primary objective lens functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart with the eyepiece directly behind the magnifying glass "aiming" at an intended target with their relative orientation determining the optical axis of the telescope and hence the targets being observed. The objective lens includes a very large-aperture, very-thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the objective lens, gathering up the incoming light and converting it to high quality images. The positions of the two spacecraft are controlled both to maintain a good optical focus and to point at desired targets which may be either Earth-bound or celestial.
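For a diffractive primary such as a Fresnel zone plate, the first-order focal length follows directly from the innermost zone radius, f = r₁²/λ. The patent record above gives no lens parameters, so the numbers below are purely illustrative.

```python
import math

def zone_plate_focal_length(r1_m, wavelength_m):
    """First-order focal length of a Fresnel zone plate: f = r1^2 / lambda."""
    return r1_m ** 2 / wavelength_m

def innermost_zone_radius(focal_length_m, wavelength_m):
    """Invert the relation: r1 = sqrt(f * lambda)."""
    return math.sqrt(focal_length_m * wavelength_m)
```

For example, a kilometer-scale objective-to-eyepiece separation at 550 nm corresponds to an innermost zone radius of about 23 mm; since f scales as 1/λ, the focus is strongly wavelength-dependent, which is one of the design challenges for diffractive telescopes.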

  15. How Large Asexual Populations Adapt

    NASA Astrophysics Data System (ADS)

    Desai, Michael

    2007-03-01

    We often think of beneficial mutations as being rare, and of adaptation as a sequence of selected substitutions: a beneficial mutation occurs, spreads through a population in a selective sweep, then later another beneficial mutation occurs, and so on. This simple picture is the basis for much of our intuition about adaptive evolution, and underlies a number of practical techniques for analyzing sequence data. Yet many large and mostly asexual populations -- including a wide variety of unicellular organisms and viruses -- live in a very different world. In these populations, beneficial mutations are common, and frequently interfere or cooperate with one another as they all attempt to sweep simultaneously. This radically changes the way these populations adapt: rather than an orderly sequence of selective sweeps, evolution is a constant swarm of competing and interfering mutations. I will describe some aspects of these dynamics, including why large asexual populations cannot evolve very quickly and the character of the diversity they maintain. I will explain how this changes our expectations of sequence data, how sex can help a population adapt, and the potential role of ``mutator'' phenotypes with abnormally high mutation rates. Finally, I will discuss comparisons of these predictions with evolution experiments in laboratory yeast populations.

  16. Large amplitude drop shape oscillations

    NASA Technical Reports Server (NTRS)

    Trinh, E. H.; Wang, T. G.

    1982-01-01

    An experimental study of large amplitude drop shape oscillation was conducted in immiscible liquids systems and with levitated free liquid drops in air. In liquid-liquid systems the results indicate the existence of familiar characteristics of nonlinear phenomena. The resonance frequency of the fundamental quadrupole mode of stationary, low viscosity Silicone oil drops acoustically levitated in water falls to noticeably low values as the amplitude of oscillation is increased. A typical, experimentally determined relative frequency decrease of a 0.5 cubic centimeters drop would be about 10% when the maximum deformed shape is characterized by a major to minor axial ratio of 1.9. On the other hand, no change in the fundamental mode frequency could be detected for 1 mm drops levitated in air. The experimental data for the decay constant of the quadrupole mode of drops immersed in a liquid host indicate a slight increase for larger oscillation amplitudes. A qualitative investigation of the internal fluid flows for such drops revealed the existence of steady internal circulation within drops oscillating in the fundamental and higher modes. The flow field configuration in the outer host liquid is also significantly altered when the drop oscillation amplitude becomes large.

  17. Large-mode enhancement cavities.

    PubMed

    Carstens, Henning; Holzberger, Simon; Kaster, Jan; Weitenberg, Johannes; Pervak, Volodymyr; Apolonski, Alexander; Fill, Ernst; Krausz, Ferenc; Pupeza, Ioachim

    2013-05-01

In passive enhancement cavities the achievable power level is limited by mirror damage. Here, we address the design of robust optical resonators with large spot sizes on all mirrors, a measure that promises to mitigate this limitation by decreasing both the intensity and the thermal gradient on the mirror surfaces. We introduce a misalignment sensitivity metric to evaluate the robustness of resonator designs. We identify the standard bow-tie resonator operated close to the inner stability edge as the most robust large-mode cavity and implement this cavity with two spherical mirrors with 600 mm radius of curvature, two plane mirrors and a round trip length of 1.2 m, demonstrating a stable power enhancement of near-infrared laser light by a factor of 2000. Beam radii of 5.7 mm × 2.6 mm (sagittal × tangential 1/e² intensity radius) on all mirrors are obtained. We propose a simple all-reflective ellipticity compensation scheme. This will enable a significant increase of the attainable power and intensity levels in enhancement cavities. PMID:23670017

  18. Mesoscale Ocean Large Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank

    2015-11-01

    The highest resolution global climate models (GCMs) can now resolve the largest scales of mesoscale dynamics in the ocean. This has the potential to increase the fidelity of GCMs. However, the effects of the smallest, unresolved, scales of mesoscale dynamics must still be parametrized. One such family of parametrizations are mesoscale ocean large eddy simulations (MOLES), but the effects of including MOLES in a GCM are not well understood. In this presentation, several MOLES schemes are implemented in a mesoscale-resolving GCM (CESM), and the resulting flow is compared with that produced by more traditional sub-grid parametrizations. Large eddy simulation (LES) is used to simulate flows where the largest scales of turbulent motion are resolved, but the smallest scales are not resolved. LES has traditionally been used to study 3D turbulence, but recently it has also been applied to idealized 2D and quasi-geostrophic (QG) turbulence. The MOLES presented here are based on 2D and QG LES schemes.
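As a concrete reference point, the classic Smagorinsky closure (the ancestor of the 2D and QG LES schemes mentioned above) computes an eddy viscosity from the resolved strain rate. The grid, constant, and velocity fields below are illustrative, not the specific MOLES schemes from the presentation.

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, cs=0.17):
    """Smagorinsky eddy viscosity nu_t = (cs*dx)^2 * |S| for a 2D
    velocity field on a uniform grid, with |S| = sqrt(2 S_ij S_ij).
    Arrays are indexed [y, x]."""
    dudy, dudx = np.gradient(u, dx)
    dvdy, dvdx = np.gradient(v, dx)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)          # symmetric strain-rate tensor
    strain = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * strain
```

A sanity check on this closure: solid-body rotation has zero strain and so contributes no eddy viscosity, while a uniform shear u = y gives |S| = 1 everywhere.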

  19. Large phased-array radars

    SciTech Connect

    Brookner, D.E.

    1988-12-15

    Large phased-array radars can play a very important part in arms control. They can be used to determine the number of RVs being deployed, the type of targeting of the RVs (the same or different targets), the shape of the deployed objects, and possibly the weight and yields of the deployed RVs. They can provide this information at night as well as during the day and during rain and cloud covered conditions. The radar can be on the ground, on a ship, in an airplane, or space-borne. Airborne and space-borne radars can provide high resolution map images of the ground for reconnaissance, of anti-ballistic missile (ABM) ground radar installations, missile launch sites, and tactical targets such as trucks and tanks. The large ground based radars can have microwave carrier frequencies or be at HF (high frequency). For a ground-based HF radar the signal is reflected off the ionosphere so as to provide over-the-horizon (OTH) viewing of targets. OTH radars can potentially be used to monitor stealth targets and missile traffic.

  20. Electron Collisions with Large Molecules

    NASA Astrophysics Data System (ADS)

    McKoy, Vincent

    2006-10-01

    In recent years, interest in electron-molecule collisions has increasingly shifted to large molecules. Applications within the semiconductor industry, for example, require electron collision data for molecules such as perfluorocyclobutane, while almost all biological applications involve macromolecules such as DNA. A significant development in recent years has been the realization that slow electrons can directly damage DNA. This discovery has spurred studies of low-energy collisions with the constituents of DNA, including the bases, deoxyribose, the phosphate, and larger moieties assembled from them. In semiconductor applications, a key goal is development of electron cross section sets for plasma chemistry modeling, while biological studies are largely focused on understanding the role of localized resonances in inducing DNA strand breaks. Accurate calculations of low-energy electron collisions with polyatomic molecules are computationally demanding because of the low symmetry and inherent many-electron nature of the problem; moreover, the computational requirements scale rapidly with the size of the molecule. To pursue such studies, we have adapted our computational procedure, known as the Schwinger multichannel method, to run efficiently on highly parallel computers. In this talk, we will present some of our recent results for fluorocarbon etchants used in the semiconductor industry and for constituents of DNA and RNA. In collaboration with Carl Winstead, California Institute of Technology.

  1. Histotripsy Liquefaction of Large Hematomas.

    PubMed

    Khokhlova, Tatiana D; Monsky, Wayne L; Haider, Yasser A; Maxwell, Adam D; Wang, Yak-Nam; Matula, Thomas J

    2016-07-01

Intra- and extra-muscular hematomas result from repetitive injury as well as sharp and blunt limb trauma. The clinical consequences can be serious, including debilitating pain and functional deficit. There are currently no short-term treatment options for large hematomas, only lengthy conservative treatment. The goal of this work was to evaluate the feasibility of a high intensity focused ultrasound (HIFU)-based technique, termed histotripsy, for rapid (within a clinically relevant timeframe of 15-20 min) liquefaction of large volume (up to 20 mL) extra-vascular hematomas for subsequent fine-needle aspiration. Experiments were performed using in vitro extravascular hematoma phantoms: fresh bovine blood poured into 50 mL molds and allowed to clot. The resulting phantoms were treated by boiling histotripsy (BH), cavitation histotripsy (CH) or a combination in a degassed water tank under ultrasound guidance. Two different transducers operating at 1 MHz and 1.5 MHz with f-number = 1 were used. The liquefied lysate was aspirated and analyzed by histology and sized in a Coulter Counter. The peak instantaneous power to achieve BH was lower than (at 1.5 MHz) or equal to (at 1 MHz) that which was required to initiate CH. Under the same exposure duration, BH-induced cavities were one and a half to two times larger than the CH-induced cavities, but the CH-induced cavities were more regularly shaped, facilitating easier aspiration. The lysates contained a small amount of debris larger than 70 μm, and 99% of particulates were smaller than 10 μm. A combination treatment of BH (for initial debulking) and CH (for liquefaction of small residual fragments) yielded 20 mL of lysate within 17.5 minutes of treatment and was found to be most optimal for liquefaction of large extravascular hematomas. PMID:27126244

  2. Large Space Antenna Systems Technology, 1984

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1985-01-01

Mission applications for large space antenna systems; large space antenna structural systems; materials and structures technology; structural dynamics and control technology; electromagnetics technology; large space antenna systems and the Space Station; and flight test and evaluation were examined.

  3. Large scale topography of Io

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.; Synnott, S. P.

    1987-01-01

    To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.

  4. Safe handling of large animals.

    PubMed

    Grandin, T

    1999-01-01

    The major causes of accidents with cattle, horses, and other grazing animals are: panic due to fear, male dominance aggression, or the maternal aggression of a mother protecting her newborn. Danger is inherent when handling large animals. Understanding their behavior patterns improves safety, but working with animals will never be completely safe. Calm, quiet handling and non-slip flooring are beneficial. Rough handling and excessive use of electric prods increase chances of injury to both people and animals, because fearful animals may jump, kick, or rear. Training animals to voluntarily cooperate with veterinary procedures reduces stress and improves safety. Grazing animals have a herd instinct, and a lone, isolated animal can become agitated. Providing a companion animal helps keep an animal calm. PMID:10329901

  5. Biotherapies in large vessel vasculitis.

    PubMed

    Ferfar, Y; Mirault, T; Desbois, A C; Comarmond, C; Messas, E; Savey, L; Domont, F; Cacoub, P; Saadoun, D

    2016-06-01

Giant cell arteritis (GCA) and Takayasu's arteritis (TA) are large vessel vasculitis (LVV), and aortic involvement is not uncommon in Behcet's disease (BD) and relapsing polychondritis (RP). Glucocorticosteroids are the mainstay of therapy in LVV. However, a significant proportion of patients have glucocorticoid dependence, serious side effects or disease refractory to steroids and other immunosuppressive treatments such as cyclophosphamide, azathioprine, mycophenolate mofetil and methotrexate. Recent advances in the understanding of the pathogenesis have resulted in the use of biological agents in patients with LVV. Anti-tumor necrosis factor-α drugs seem effective in patients with refractory Takayasu arteritis and vascular BD but not in giant cell arteritis. Preliminary reports on the use of the anti-IL6-receptor antibody (tocilizumab) in LVV have been encouraging. The development of new biologic targeted therapies will probably open a promising future for patients with LVV. PMID:26883459

  6. Large Aperture Electrostatic Dust Detector

    SciTech Connect

C.H. Skinner, R. Hensley, and A.L. Roquemore

    2007-10-09

Diagnosis and management of dust inventories generated in next-step magnetic fusion devices are necessary for their safe operation. A novel electrostatic dust detector, based on a fine grid of interlocking circuit traces biased to 30 or 50 V, has been developed for the detection of dust particles on remote surfaces in air and vacuum environments. Impinging dust particles create a temporary short circuit, and the resulting current pulse is recorded by counting electronics. Up to 90% of the particles are ejected from the grid or vaporized, suggesting the device may be useful for controlling dust inventories. We report measurements of the sensitivity of a large area (5x5 cm) detector to microgram quantities of dust particles and review its applications to contemporary tokamaks and ITER.

  7. Large area pulsed solar simulator

    NASA Technical Reports Server (NTRS)

    Kruer, Mark A. (Inventor)

    1999-01-01

An advanced solar simulator illuminates the surface of a very large solar array, such as one twenty feet by twenty feet in area, from a distance of about twenty-six feet with an essentially uniform field of pulsed light at an intensity of one AM0, enabling the solar array to be efficiently tested with light that emulates the sun. Light modifiers sculpt a portion of the light generated by an electrically powered high power Xenon lamp and, together with direct light from the lamp, provide uniform intensity illumination throughout the solar array, compensating for the square law and cosine law reduction in direct light intensity, particularly at the corner locations of the array. At any location within the array the sum of the direct light and reflected light is essentially constant.

  8. Adaptive Optics for Large Telescopes

    SciTech Connect

    Olivier, S

    2008-06-27

    The use of adaptive optics was originally conceived by astronomers seeking to correct the blurring of images made with large telescopes due to the effects of atmospheric turbulence. The basic idea is to use a device, a wave front corrector, to adjust the phase of light passing through an optical system, based on some measurement of the spatial variation of the phase transverse to the light propagation direction, using a wave front sensor. Although the original concept was intended for application to astronomical imaging, the technique can be more generally applied. For instance, adaptive optics systems have been used for several decades to correct for aberrations in high-power laser systems. At Lawrence Livermore National Laboratory (LLNL), the world's largest laser system, the National Ignition Facility, uses adaptive optics to correct for aberrations in each of the 192 beams, all of which must be precisely focused on a millimeter scale target in order to perform nuclear physics experiments.

  9. Intergalactic shells at large redshift

    NASA Technical Reports Server (NTRS)

    Shull, J. M.; Silk, J.

    1981-01-01

The intergalactic shells produced by galactic explosions at large redshift, whose interiors cool by inverse Compton scattering off the cosmic background radiation, have a characteristic angular size of about 1 arcmin at peak brightness. At z values lower than 2, the shells typically have a radius of 0.5 Mpc, a velocity of about 50 km/sec, a metal abundance of about 0.0001 of cosmic values, and strong radiation in H I(Lyman-alpha), He II 304 Å, and the IR fine-structure lines of C II and Si II. The predicted extragalactic background emission from many shells, strongly peaked toward the UV, sets an upper limit to the number of exploding sources at z values of about 10. Shell absorption lines of H I, C II, Si II, and Fe II, which may be seen at more recent epochs in quasar spectra, may probe otherwise invisible explosions in the early universe.

  10. Analysis of large urban fires

    SciTech Connect

    Kang, S.W.; Reitter, T.A.; Takata, A.N.

    1984-11-01

Fires in urban areas caused by a nuclear burst are analyzed as a first step towards determining their smoke-generation characteristics, which may have grave implications for global-scale climatic consequences. A chain of events and their component processes which would follow a nuclear attack are described. A numerical code is currently being developed to calculate ultimately the smoke production rate for a given attack scenario. Available models for most of the processes are incorporated into the code. Sample calculations of urban fire-development history performed in the code for an idealized uniform city are presented. Preliminary results indicate the importance of the wind, thermal radiation transmission, fuel distributions, and ignition thresholds on the urban fire spread characteristics. Future plans are to improve the existing models and develop new ones to characterize smoke production from large urban fires. 21 references, 18 figures.

  11. Large hole rotary drill performance

    SciTech Connect

    Workman, J.L.; Calder, P.N.

    1996-12-31

Large hole rotary drilling is one of the most common methods of producing blastholes in open pit mining. Large hole drilling generally refers to diameters from 9 to 17 inch (229 to 432 mm); however, a considerable amount of rotary drilling is done in diameters from 6½ to 9 inch (165 to 229 mm). These smaller diameters are especially prevalent in gold mining and quarrying. Rotary drills are major mining machines having substantial capital cost. Drill bit costs can also be high, depending on the bit type and formation being drilled. To keep unit costs low the drills must perform at a high productivity level. The most important factor in rotary drilling is the penetration rate. This paper discusses the factors affecting penetration rate. An empirical formula is given for calculating the penetration rate based on rock strength, pulldown weight and the RPM. The importance of using modern drill performance monitoring systems to calibrate the penetration equation for specific rock formations is discussed. Adequate air delivered to the bottom of the hole is very important to achieving maximum penetration rates. If there is insufficient bailing velocity, cuttings will not be transported from the bottom of the hole rapidly enough and the penetration rate is very likely to decrease. An expression for the balancing air velocity is given. The amount by which the air velocity must exceed the balancing velocity for effective operation is discussed. The effect of altitude on compressor size is also provided.
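The abstract does not reproduce the formula itself; the sketch below uses a commonly cited empirical form of the same type (rock strength, pulldown per inch of bit diameter, and rotary speed). The coefficients are an assumption here, and, as the paper stresses, any such equation must be calibrated against drill performance monitoring data for the specific formation.

```python
import math

def penetration_rate_ft_per_hr(ucs_kpsi, pulldown_klb_per_inch, rpm):
    """Empirical rotary penetration rate of the Bauer-Calder type:
        P = (61 - 28*log10(Sc)) * (W/D) * (RPM/300)
    Sc: uniaxial compressive strength in thousands of psi,
    W/D: pulldown in thousands of pounds per inch of bit diameter.
    Coefficients are illustrative, not from this paper."""
    return (61.0 - 28.0 * math.log10(ucs_kpsi)) * pulldown_klb_per_inch * (rpm / 300.0)
```

For 20 kpsi rock, 5,000 lb of pulldown per inch of bit diameter, and 60 RPM this form predicts roughly 25 ft/h, which is the kind of baseline estimate a monitoring system would then refine per formation.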

  12. Large and small photovoltaic powerplants

    NASA Astrophysics Data System (ADS)

    Cormode, Daniel

    The installed base of photovoltaic power plants in the United States has roughly doubled every 1 to 2 years between 2008 and 2015. The primary economic drivers of this are government mandates for renewable power, falling prices for all PV system components, 3rd party ownership models, and a generous tariff scheme known as net-metering. Other drivers include a desire for decreasing the environmental impact of electricity generation and a desire for some degree of independence from the local electric utility. The result is that in coming years, PV power will move from being a minor niche to a mainstream source of energy. As additional PV power comes online this will create challenges for the electric grid operators. We examine some problems related to large scale adoption of PV power in the United States. We do this by first discussing questions of reliability and efficiency at the PV system level. We measure the output of a fleet of small PV systems installed at Tucson Electric Power, and we characterize the degradation of those PV systems over several years. We develop methods to predict energy output from PV systems and quantify the impact of negatives such as partial shading, inverter inefficiency and malfunction of bypass diodes. Later we characterize the variability from large PV systems, including fleets of geographically diverse utility scale power plants. We also consider the power and energy requirements needed to smooth those systems, both from the perspective of an individual system and as a fleet. Finally we report on experiments from a utility scale PV plus battery hybrid system deployed near Tucson, Arizona where we characterize the ability of this system to produce smoothly ramping power as well as production of ancillary energy services such as frequency response.
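The smoothing requirement in the last paragraph can be illustrated with a minimal ramp-rate limiter: the battery power at each step is simply the difference between raw PV output and the clamped output. The signal and ramp limit below are made up for illustration, not data from the Tucson system.

```python
def smooth_ramp(pv_kw, max_step_kw):
    """Clamp step-to-step changes in PV output to +/- max_step_kw.
    Returns the smoothed series and the battery power that absorbs
    (positive) or supplies (negative) the difference at each step."""
    out = [pv_kw[0]]
    battery = [0.0]
    for p in pv_kw[1:]:
        delta = max(-max_step_kw, min(max_step_kw, p - out[-1]))
        out.append(out[-1] + delta)
        battery.append(p - out[-1])  # + charge surplus, - discharge
    return out, battery
```

A passing cloud that drops output from 100 kW to 20 kW in one step becomes a sequence of 20 kW ramps, with the battery covering the shortfall; the deeper and longer the cloud transient, the larger the energy the battery must hold, which is exactly the fleet-level sizing question raised above.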

  13. Sweetwater, Texas Large N Experiment

    NASA Astrophysics Data System (ADS)

    Sumy, D. F.; Woodward, R.; Barklage, M.; Hollis, D.; Spriggs, N.; Gridley, J. M.; Parker, T.

    2015-12-01

    From 7 March to 30 April 2014, NodalSeismic, Nanometrics, and IRIS PASSCAL conducted a collaborative, spatially-dense seismic survey with several thousand nodal short-period geophones complemented by a backbone array of broadband sensors near Sweetwater, Texas. This pilot project demonstrated the efficacy of industry and academic partnerships, and leveraged a larger, commercial 3D survey to collect passive source seismic recordings to image the subsurface. This innovative deployment of a large-N mixed-mode array allows industry to explore array geometries and investigate the value of broadband recordings, while affording academics a dense wavefield imaging capability and an operational model for high volume instrument deployment. The broadband array consists of 25 continuously-recording stations from IRIS PASSCAL and Nanometrics, with an array design that maximized recording of horizontal-traveling seismic energy for surface wave analysis over the primary target area with sufficient offset for imaging objectives at depth. In addition, 2639 FairfieldNodal Zland nodes from NodalSeismic were deployed in three sub-arrays: the outlier, backbone, and active source arrays. The backbone array consisted of 292 nodes that covered the entire survey area, while the outlier array consisted of 25 continuously-recording nodes distributed at a ~3 km distance away from the survey perimeter. Both the backbone and outlier arrays provide valuable constraints for the passive source portion of the analysis. This project serves as a learning platform to develop best practices in the support of large-N arrays with joint industry and academic expertise. Here we investigate lessons learned from a facility perspective, and present examples of data from the various sensors and array geometries. We will explore first-order results from local and teleseismic earthquakes, and show visualizations of the data across the array. Data are archived at the IRIS DMC under station codes XB and 1B.

  14. Low Cost Large Space Antennas

    NASA Technical Reports Server (NTRS)

    Chmielewski, Artur B.; Freeland, Robert

    1997-01-01

    The mobile communication community could significantly benefit from the availability of low-cost, large space-deployable antennas. A new class of space structures, called inflatable deployable structures, will become an option for this industry in the near future. This new technology recently made significant progress with respect to reducing the risk of flying large inflatable structures in space. This progress can be attributed to the successful space flight of the Inflatable Antenna Experiment in May of 1996, which prompted the initiation in 1997 of the NASA portion of the joint NASA/DOD coordinated Space Inflatables Program, which will develop the technology to be used in future mobile communications antennas along with other applications. The program adds a new NASA initiative to a substantial DOD program that involves developing a series of ground test hardware, starting with 3 meter diameter units and advancing the manufacturing techniques to fabricate a 25 meter ground demonstrator unit with surface accuracy exceeding the requirements for mobile communication applications. Simultaneously, the program will be advancing the state of the art in several important inflatable technology areas, such as developing rigidizable materials for struts and tori and investigating thin film technology issues, such as application of coatings, property measurement and materials processing and assembly techniques. A very important technology area being addressed by the program is deployment control techniques. The program will sponsor activities that will lead to understanding the effects of material strain energy release, residual air in the stowed structure, and the design of the launch restraint and release system needed to control deployment dynamics.
Other technology areas directly applicable to developing inflatable mobile communication antennas in the near

  15. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  16. Large and small volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Agust; Mohajeri, Nahid

    2013-04-01

    Despite great progress in volcanology in the past decades, we still cannot make reliable forecasts as to the likely size (volume, mass) of an eruption once it has started. Empirical data collected from volcanoes worldwide indicate that the volumes (or masses) of eruptive materials in volcanic eruptions are heavy-tailed. This means that most of the volumes erupted from a given magma chamber are comparatively small. Yet, the same magma chamber can, under certain conditions, squeeze out large volumes of magma. To know these conditions is of fundamental importance for forecasting the likely size of an eruption. Thermodynamics provides the basis for understanding the elastic energy available to (i) propagate an injected dyke from the chamber to the surface to feed an eruption, and (ii) squeeze magma out of the chamber during the eruption. The elastic energy consists of two main parts: first, the strain energy stored in the volcano before magma-chamber rupture and dyke injection, and, second, the work done through displacement of the flanks of the volcano (or the margins of a rift zone) and the expansion and shrinkage of the magma chamber itself. Other forms of energy in volcanoes - thermal, seismic, kinetic - are generally important but less so for squeezing magma out of a chamber during an eruption. Here we suggest that for (basaltic) eruptions in rift zones the strain energy is partly related to minor doming above the reservoir, and partly to stretching of the rift zone before rupture. The larger the reservoir, the larger is the stored strain energy before eruption. However, for the eruption to be really large, the strain energy has to accumulate in the entire crustal segment above the reservoir and there will be additional energy input into the system during the eruption which relates to the displacements of the boundary of the rift-zone segment. This is presumably why feeder dykes commonly propagate laterally at the surface following the initial fissure

  17. Large Block Test Final Report

    SciTech Connect

    Lin, W

    2001-12-01

    This report documents the Large-Block Test (LBT) conducted at Fran Ridge near Yucca Mountain, Nevada. The LBT was a thermal test conducted on an exposed block of middle non-lithophysal Topopah Spring tuff (Tptpmn) and was designed to assist in understanding the thermal-hydrological-mechanical-chemical (THMC) processes associated with heating and then cooling a partially saturated fractured rock mass. The LBT was unique in that it was a large (3 x 3 x 4.5 m) block with top and sides exposed. Because the block was exposed at the surface, boundary conditions on five of the six sides of the block were relatively well known and controlled, making this test both easier to model and easier to monitor. This report presents a detailed description of the test as well as analyses of the data and conclusions drawn from the test. The rock block that was tested during the LBT was exposed by excavation and removal of the surrounding rock. The block was characterized and instrumented, and the sides were sealed and insulated to inhibit moisture and heat loss. Temperature on the top of the block was also controlled. The block was heated for 13 months, during which time temperature, moisture distribution, and deformation were monitored. After the test was completed and the block cooled down, a series of boreholes were drilled, and one of the heater holes was over-cored to collect samples for post-test characterization of mineralogy and mechanical properties. Section 2 provides background on the test. Section 3 lists the test objectives and describes the block site, the site configuration, and measurements made during the test. Section 3 also presents a chronology of events associated with the LBT, characterization of the block, and the pre-heat analyses of the test. Section 4 describes the fracture network contained in the block. Section 5 describes the heating/cooling system used to control the temperature in the block and presents the thermal history of the block during the test

  18. Large Volcanic Rises on Venus

    NASA Technical Reports Server (NTRS)

    Smrekar, Suzanne E.; Kiefer, Walter S.; Stofan, Ellen R.

    1997-01-01

    Large volcanic rises on Venus have been interpreted as hotspots, or the surface manifestation of mantle upwelling, on the basis of their broad topographic rises, abundant volcanism, and large positive gravity anomalies. Hotspots offer an important opportunity to study the behavior of the lithosphere in response to mantle forces. In addition to the four previously known hotspots, Atla, Bell, Beta, and western Eistla Regiones, five new probable hotspots, Dione, central Eistla, eastern Eistla, Imdr, and Themis, have been identified in the Magellan radar, gravity and topography data. These nine regions exhibit a wider range of volcano-tectonic characteristics than previously recognized for venusian hotspots, and have been classified as rift-dominated (Atla, Beta), coronae-dominated (central and eastern Eistla, Themis), or volcano-dominated (Bell, Dione, western Eistla, Imdr). The apparent depths of compensation for these regions range from 65 to 260 km. New estimates of the elastic thickness, using the degree and order 90 spherical harmonic field, are 15-40 km at Bell Regio, and 25 km at western Eistla Regio. Phillips et al. find a value of 30 km at Atla Regio. Numerous models of lithospheric and mantle behavior have been proposed to interpret the gravity and topography signature of the hotspots, with most studies focusing on Atla or Beta Regiones. Convective models with Earth-like parameters result in estimates of the thickness of the thermal lithosphere of approximately 100 km. Models of stagnant lid convection or thermal thinning infer the thickness of the thermal lithosphere to be 300 km or more. Without additional constraints, any of the model fits are equally valid. The thinner thermal lithosphere estimates are most consistent with the volcanic and tectonic characteristics of the hotspots. Estimates of the thermal gradient based on estimates of the elastic thickness also support a relatively thin lithosphere (Phillips et al.). 
The advantage of larger estimates of

  19. How Large Should Whales Be?

    PubMed Central

    Clauset, Aaron

    2013-01-01

    The evolution and distribution of species body sizes for terrestrial mammals is well-explained by a macroevolutionary tradeoff between short-term selective advantages and long-term extinction risks from increased species body size, unfolding above the 2 g minimum size induced by thermoregulation in air. Here, we consider whether this same tradeoff, formalized as a constrained convection-reaction-diffusion system, can also explain the sizes of fully aquatic mammals, which have not previously been considered. By replacing the terrestrial minimum with a pelagic one, at roughly 7000 g, the terrestrial mammal tradeoff model accurately predicts, with no tunable parameters, the observed body masses of all extant cetacean species, including the 175,000,000 g Blue Whale. This strong agreement between theory and data suggests that a universal macroevolutionary tradeoff governs body size evolution for all mammals, regardless of their habitat. The dramatic sizes of cetaceans can thus be attributed mainly to the increased convective heat loss in water, which shifts the species size distribution upward and pushes its right tail into ranges inaccessible to terrestrial mammals. Under this macroevolutionary tradeoff, the largest expected species occurs where the rate at which smaller-bodied species move up into large-bodied niches approximately equals the rate at which extinction removes them. PMID:23342050

  20. Large N{sub c}

    SciTech Connect

    Jenkins, Elizabeth E.

    2009-12-17

    The 1/N{sub c} expansion of QCD with N{sub c} = 3 has been successful in explaining a wide variety of QCD phenomenology. Here I focus on the contracted spin-flavor symmetry of baryons in the large-N{sub c} limit and deviations from spin-flavor symmetry due to corrections suppressed by powers of 1/N{sub c}. Baryon masses provide an important example of the 1/N{sub c} expansion, and successful predictions of masses of heavy-quark baryons continue to be tested by experiment. The ground state charmed baryon masses have all been measured, and five of the eight ground state bottom baryon masses have been found. Results of the 1/N{sub c} expansion can aid in the discovery of the remaining bottom baryons. The brand new measurement of the {omega}{sub b}{sup -} mass by the CDF collaboration conflicts with the original D0 discovery value and is in excellent agreement with the prediction of the 1/N{sub c} expansion.

  1. Anthropogenic Triggering of Large Earthquakes

    PubMed Central

    Mulargia, Francesco; Bizzarri, Andrea

    2014-01-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1–10 MPa, we find that injecting in the subsoil fluids at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults at a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor “foreshocks”, since the induction may occur with a delay up to several years. PMID:25156190
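The sensitivity to small overpressures can be conveyed with a simpler Coulomb effective-stress check (an illustrative stand-in for the paper's time-dependent Tresca-Von Mises poroelastic analysis; the stress values below are assumed, not taken from the paper):

```python
def coulomb_failure(shear_mpa, normal_mpa, pore_mpa, mu_static=0.6):
    """True if a fault satisfies the Coulomb failure condition
    tau >= mu * (sigma_n - p), i.e. fluid pressure p reduces the
    effective normal stress clamping the fault.  Simplified sketch
    of why near-critically stressed faults are so sensitive.
    """
    return shear_mpa >= mu_static * (normal_mpa - pore_mpa)

# A near-critically stressed fault: 5.95 MPa shear vs 10 MPa normal stress
stable = coulomb_failure(5.95, 10.0, 0.0)    # False: 5.95 < 0.6 * 10
# An extra 0.1 MPa of fluid overpressure tips it into failure
triggered = coulomb_failure(5.95, 10.0, 0.1)  # True: 5.95 >= 0.6 * 9.9
```

Because tectonic faults can sit arbitrarily close to the failure threshold, even overpressures far below the ambient deviatoric stress (the abstract's < 0.1 MPa versus 1–10 MPa) can suffice to trigger slip.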

  2. Large Angle Satellite Attitude Maneuvers

    NASA Technical Reports Server (NTRS)

    Cochran, J. E.; Junkins, J. L.

    1975-01-01

    Two methods are proposed for performing large angle reorientation maneuvers. The first method is based upon Euler's rotation theorem; an arbitrary reorientation is ideally accomplished by rotating the spacecraft about a line which is fixed in both the body and in space. This scheme has been found to be best suited for the case in which the initial and desired attitude states have small angular velocities. The second scheme is more general in that a general class of transition trajectories is introduced which, in principle, allows transfer between arbitrary orientation and angular velocity states. The method generates transition maneuvers in which the uncontrolled (free) initial and final states are matched in orientation and angular velocity. The forced transition trajectory is obtained by using a weighted average of the unforced forward integration of the initial state and the unforced backward integration of the desired state. The current effort is centered around practical validation of this second class of maneuvers. Of particular concern is enforcement of given control system constraints and methods for suboptimization by proper selection of maneuver initiation and termination times. Analogous reorientation strategies which force smooth transition in angular momentum and/or rotational energy are under consideration.
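The weighted-average construction of the second scheme can be sketched for a single-axis toy problem: blend the unforced forward motion of the initial state with the unforced backward motion of the desired state using a smooth weight. The smoothstep weight and the torque-free single-axis model are illustrative assumptions, not the paper's formulation:

```python
def transition(theta0, omega0, thetaf, omegaf, T, n=11):
    """Weighted-average transition trajectory for one axis.

    Blends the free forward motion of the initial state with the free
    backward motion of the desired state.  The weight w(t) = 3s^2 - 2s^3
    (s = t/T) satisfies w=0, w'=0 at t=0 and w=1, w'=0 at t=T, so both
    attitude and rate match the free trajectories at the endpoints.
    """
    path = []
    for i in range(n):
        t = T * i / (n - 1)
        s = t / T
        w = 3 * s * s - 2 * s ** 3        # smoothstep weight
        fwd = theta0 + omega0 * t          # free motion of initial state
        bwd = thetaf + omegaf * (t - T)    # free motion of final state
        path.append((1 - w) * fwd + w * bwd)
    return path

# Rest-to-rest reorientation through 1 rad over 10 s
path = transition(0.0, 0.0, 1.0, 0.0, T=10.0)
```

The control torque implied by such a trajectory follows from differentiating it twice, which is where the paper's concerns about control-system constraints and maneuver timing enter.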

  3. Anthropogenic Triggering of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Mulargia, Francesco; Bizzarri, Andrea

    2014-08-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1-10 MPa, we find that injecting in the subsoil fluids at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults at a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor "foreshocks", since the induction may occur with a delay up to several years.

  4. Anthropogenic triggering of large earthquakes.

    PubMed

    Mulargia, Francesco; Bizzarri, Andrea

    2014-01-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1-10 MPa, we find that injecting in the subsoil fluids at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults at a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor "foreshocks", since the induction may occur with a delay up to several years. PMID:25156190

  5. Amplification of large artificial chromosomes.

    PubMed Central

    Smith, D R; Smyth, A P; Moir, D T

    1990-01-01

    Yeast artificial chromosome cloning is an attractive technology for genomic mapping studies because very large DNA segments can be readily propagated. However, detailed analyses often require the extensive application of blotting-hybridization techniques because artificial chromosomes are normally present at only one copy per haploid genome. We have developed a cloning vector and host strain that alleviate this problem by permitting copy number amplification of artificial chromosomes. The vector includes a conditional centromere that can be turned on or off by changing the carbon source. Strong selective pressure for extra copies of the artificial chromosome can be applied by selecting for the expression of a heterologous thymidine kinase gene. When this system was used, artificial chromosomes ranging from about 100 to 600 kilobases in size were readily amplified 10- to 20-fold. The selective conditions did not induce obvious rearrangements in any of the clones tested. Reactivation of the centromere in amplified artificial chromosome clones resulted in stable maintenance of an elevated copy number for 20 generations. Applications of copy number control to various aspects of artificial chromosome analysis are addressed. Images PMID:2236036

  6. Large Scale Homing in Honeybees

    PubMed Central

    Pahl, Mario; Zhu, Hong; Tautz, Jürgen; Zhang, Shaowu

    2011-01-01

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama. PMID:21602920

  7. The Mass of Large Impactors

    NASA Technical Reports Server (NTRS)

    Parisi, M. G.; Brunini, A.

    1996-01-01

    By means of a simplified dynamical model, we have computed the eccentricity change in the orbit of each giant planet, caused by a single, large impact at the end of the accretion process. In order to set an upper bound on this eccentricity change, we have considered the giant planets' present eccentricities as primordial ones. By means of this procedure, we were able to obtain an implicit relation for the impactor masses and maximum velocities. We have estimated by this method the maximum allowed mass to impact Jupiter to be approximately 1.136 x 10^-1, and in the case of Neptune approximately 3.99 x 10^-2 (expressed in units of each planet's final mass). Due to the similar present eccentricities of Saturn, Uranus and Jupiter, the constraints on the masses and velocities of the bodies impacting them (in units of each planet's final mass and velocity, respectively) are almost the same for the three planets. These results are in good agreement with those obtained by Lissauer and Safronov. These bounds might be used to derive the mass distribution of planetesimals in the early solar system.
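For scale, the quoted upper bounds can be converted to absolute masses. The planetary mass values below are standard reference figures, not taken from the paper, which works throughout in units of each planet's final mass:

```python
# Standard reference planetary masses (assumed values, kg)
M_JUPITER_KG = 1.898e27
M_NEPTUNE_KG = 1.024e26

# Upper bounds on impactor mass from the abstract, in planet-mass units
jupiter_impactor_kg = 1.136e-1 * M_JUPITER_KG   # roughly 2.2e26 kg
neptune_impactor_kg = 3.99e-2 * M_NEPTUNE_KG    # roughly 4.1e24 kg
```

Even the smaller of these bounds is a few Earth masses, consistent with the picture of very large planetesimals surviving to the end of giant-planet accretion.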

  8. Natural Selection in Large Populations

    NASA Astrophysics Data System (ADS)

    Desai, Michael

    2011-03-01

    I will discuss theoretical and experimental approaches to the evolutionary dynamics and population genetics of natural selection in large populations. In these populations, many mutations are often present simultaneously, and because recombination is limited, selection cannot act on them all independently. Rather, it can only affect whole combinations of mutations linked together on the same chromosome. Methods common in theoretical population genetics have been of limited utility in analyzing this coupling between the fates of different mutations. In the past few years it has become increasingly clear that this is a crucial gap in our understanding, as sequence data has begun to show that selection appears to act pervasively on many linked sites in a wide range of populations, including viruses, microbes, Drosophila, and humans. I will describe approaches that combine analytical tools drawn from statistical physics and dynamical systems with traditional methods in theoretical population genetics to address this problem, and describe how experiments in budding yeast can help us directly observe these evolutionary dynamics.

  9. Chemotaxis of large granular lymphocytes

    SciTech Connect

    Pohajdak, B.; Gomez, J.; Orr, F.W.; Khalil, N.; Talgoy, M.; Greenberg, A.H.

    1986-01-01

    The hypothesis that large granular lymphocytes (LGL) are capable of directed locomotion (chemotaxis) was tested. A population of LGL isolated from discontinuous Percoll gradients migrated along concentration gradients of N-formyl-methionyl-leucyl-phenylalanine (f-MLP), casein, and C5a, well known chemoattractants for polymorphonuclear leukocytes and monocytes, as well as interferon-β and colony-stimulating factor. Interleukin 2, tuftsin, platelet-derived growth factor, and fibronectin were inactive. Migratory responses were greater in Percoll fractions with the highest lytic activity and HNK-1+ cells. The chemotactic response to f-MLP, casein, and C5a was always greater when the chemoattractant was present in greater concentration in the lower compartment of the Boyden chamber. Optimum chemotaxis was observed after a 1 hr incubation that made use of 12-μm nitrocellulose filters. LGL exhibited a high degree of nondirected locomotion when allowed to migrate for longer periods (> 2 hr), and when cultured in vitro for 24 to 72 hr in the presence or absence of IL 2 containing phytohemagglutinin-conditioned medium. LGL chemotaxis to f-MLP could be inhibited in a dose-dependent manner by the inactive structural analog CBZ-phe-met, and the RNK tumor line specifically bound f-ML[³H]P, suggesting that LGL bear receptors for the chemotactic peptide.

  10. Disorder in large-N theories

    NASA Astrophysics Data System (ADS)

    Aharony, Ofer; Komargodski, Zohar; Yankielowicz, Shimon

    2016-04-01

    We consider Euclidean Conformal Field Theories perturbed by quenched disorder, namely by random fluctuations in their couplings. Such theories are relevant for second-order phase transitions in the presence of impurities or other forms of disorder. Theories with quenched disorder often flow to new fixed points of the renormalization group. We begin with disorder in free field theories. Imry and Ma showed that disordered free fields can only exist for d > 4. For d > 4 we show that disorder leads to new fixed points which are not scale-invariant. We then move on to large-N theories (vector models or gauge theories in the 't Hooft limit). We compute exactly the beta function for the disorder, and the correlation functions of the disordered theory. We generalize the results of Imry and Ma by showing that such disordered theories exist only when disorder couples to operators of dimension Δ > d/4. Sometimes the disordered fixed points are not scale-invariant, and in other cases they have unconventional dependence on the disorder, including non-trivial effects due to irrelevant operators. Holography maps disorder in conformal theories to stochastic differential equations in a higher dimensional space. We use this dictionary to reproduce our field theory results. We also study the leading 1/N corrections, both by field theory methods and by holography. These corrections are particularly important when disorder scales with the number of degrees of freedom.

  11. Facilitating Navigation Through Large Archives

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.; Smith, Stephanie L.; Troung, Dat; Hodgson, Terry R.

    2005-01-01

    Automated Visual Access (AVA) is a computer program that effectively makes a large collection of information visible in a manner that enables a user to quickly and efficiently locate information resources, with minimal need for conventional keyword searches and perusal of complex hierarchical directory systems. AVA includes three key components: (1) a taxonomy that comprises a collection of words and phrases, clustered according to meaning, that are used to classify information resources; (2) a statistical indexing and scoring engine; and (3) a component that generates a graphical user interface that uses the scoring data to generate a visual map of resources and topics. The top level of an AVA display is a pictorial representation of an information archive. The user enters the depicted archive by either clicking on a depiction of subject area cluster, selecting a topic from a list, or entering a query into a text box. The resulting display enables the user to view candidate information entities at various levels of detail. Resources are grouped spatially by topic with greatest generality at the top layer and increasing detail with depth. The user can zoom in or out of specific sites or into greater or lesser content detail.
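A toy version of AVA's second component, the scoring step that maps a query onto taxonomy clusters, might look like the following. This is a hypothetical implementation: the abstract does not describe the actual indexing and scoring engine at this level of detail, and the cluster names and vocabulary are invented for illustration:

```python
def score_clusters(query, taxonomy):
    """Score each taxonomy cluster against a free-text query.

    taxonomy maps cluster name -> set of words/phrases; the score is
    the fraction of query terms found in the cluster's vocabulary.
    A real engine would weight terms statistically rather than count
    exact matches.
    """
    terms = set(query.lower().split())
    scores = {}
    for cluster, vocab in taxonomy.items():
        hits = len(terms & {w.lower() for w in vocab})
        scores[cluster] = hits / len(terms) if terms else 0.0
    return scores

# Invented example taxonomy with two subject-area clusters
taxonomy = {
    "propulsion": {"engine", "thrust", "nozzle"},
    "materials":  {"alloy", "composite", "fatigue"},
}
scores = score_clusters("composite fatigue testing", taxonomy)
best = max(scores, key=scores.get)
```

Scores like these are what the interface component would then render spatially, grouping high-scoring resources near their topic clusters on the visual map.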

  12. How large should whales be?

    PubMed

    Clauset, Aaron

    2013-01-01

    The evolution and distribution of species body sizes for terrestrial mammals is well-explained by a macroevolutionary tradeoff between short-term selective advantages and long-term extinction risks from increased species body size, unfolding above the 2 g minimum size induced by thermoregulation in air. Here, we consider whether this same tradeoff, formalized as a constrained convection-reaction-diffusion system, can also explain the sizes of fully aquatic mammals, which have not previously been considered. By replacing the terrestrial minimum with a pelagic one, at roughly 7000 g, the terrestrial mammal tradeoff model accurately predicts, with no tunable parameters, the observed body masses of all extant cetacean species, including the 175,000,000 g Blue Whale. This strong agreement between theory and data suggests that a universal macroevolutionary tradeoff governs body size evolution for all mammals, regardless of their habitat. The dramatic sizes of cetaceans can thus be attributed mainly to the increased convective heat loss in water, which shifts the species size distribution upward and pushes its right tail into ranges inaccessible to terrestrial mammals. Under this macroevolutionary tradeoff, the largest expected species occurs where the rate at which smaller-bodied species move up into large-bodied niches approximately equals the rate at which extinction removes them. PMID:23342050

  13. Large Isotope Spectrometer for Astromag

    NASA Technical Reports Server (NTRS)

    Binns, W. R.; Klarmann, J.; Israel, M. H.; Garrard, T. L.; Mewaldt, R. A.; Stone, E. C.; Ormes, J. F.; Streitmatter, R. E.; Rasmussen, I. L.; Wiedenbeck, M. E.

    1990-01-01

    The Large Isotope Spectrometer for Astromag (LISA) is an experiment designed to measure the isotopic composition and energy spectra of cosmic rays for elements extending from beryllium through zinc. The overall objectives of this investigation are to study the origin and evolution of galactic matter; the acceleration, transport, and time scales of cosmic rays in the galaxy; and to search for heavy antinuclei in the cosmic radiation. To achieve these objectives, the LISA experiment will make the first identifications of individual heavy cosmic ray isotopes in the energy range from about 2.5 to 4 GeV/n, where relativistic time dilation effects enhance the abundances of radioactive clocks and where the effects of solar modulation and cross-section variations are minimized. It will extend high resolution measurements of individual element abundances and their energy spectra to energies of nearly 1 TeV/n, and has the potential for discovering heavy anti-nuclei which could not have been formed except in extragalactic sources.

  14. Control of large space structures

    NASA Technical Reports Server (NTRS)

    Gran, R.; Rossi, M.; Moyer, H. G.; Austin, F.

    1979-01-01

    The control of large space structures was studied to determine what limitations, if any, are imposed on the size of spacecraft which may be controlled using current control system design technology. Using a typical structure in the 35 to 70 meter size category, a control system was designed using currently available actuators. The amount of control power required to maintain the vehicle in a stabilized gravity gradient pointing orientation, while also damping various structural motions, was determined. The moment of inertia and mass properties of this structure were varied to verify that stability and performance were maintained. The study concludes that the structure's size must change by at least a factor of two before any stability problems arise. The stability margin that is lost is due to the scaling of the gravity gradient torques (the rigid body control) and as such can easily be restored by changing the control gains associated with the rigid body control. A secondary conclusion from the study is that the control design that accommodates the structural motions (to damp them) is somewhat more sensitive than the design that addresses attitude control of the rigid body only.

  15. Large cities are less green

    NASA Astrophysics Data System (ADS)

    Oliveira, Erneson A.; Andrade, José S.; Makse, Hernán A.

    2014-02-01

    We study how urban quality evolves as a result of carbon dioxide emissions as urban agglomerations grow. We employ a bottom-up approach combining two unprecedented microscopic datasets on population and carbon dioxide emissions in the continental US. We first aggregate settlements that are close to each other into cities using the City Clustering Algorithm (CCA), which defines cities beyond administrative boundaries. Then, we use data on CO2 emissions at a fine geographic scale to determine the total emissions of each city. We find a superlinear scaling behavior, expressed by a power-law, between CO2 emissions and city population, with an average allometric exponent β = 1.46 across all cities in the US. This result suggests that the high productivity of large cities comes at the expense of a proportionally larger amount of emissions compared to small cities. Furthermore, our results are substantially different from those obtained by the standard administrative definition of cities, i.e. the Metropolitan Statistical Area (MSA). Specifically, MSAs display isometric scaling of emissions, and we argue that this discrepancy is due to the overestimation of MSA areas. The results suggest that allometric studies based on administrative boundaries to define cities may suffer from endogeneity bias.
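The allometric exponent reported above can be estimated by ordinary least squares in log-log space, since a power law E = c·P^β becomes linear there: log E = β·log P + log c. The sketch below illustrates this on synthetic data; the populations, noise level, and the "true" exponent of 1.46 are assumptions chosen only to mirror the value in the abstract, not the authors' actual pipeline or data.

```python
import numpy as np

# Illustrative sketch (not the authors' pipeline): estimate an allometric
# exponent beta in E = c * P**beta by least squares in log-log space.
rng = np.random.default_rng(0)
population = 10 ** rng.uniform(4, 7, size=500)   # synthetic city populations
true_beta = 1.46                                  # assumed, mirrors abstract
emissions = 2.0 * population**true_beta * np.exp(rng.normal(0, 0.1, 500))

# Slope of the log-log fit is the allometric exponent.
beta, log_c = np.polyfit(np.log(population), np.log(emissions), 1)
print(f"estimated beta = {beta:.2f}")             # ~1.46, i.e. superlinear
```

An exponent above 1 indicates superlinear scaling: doubling the population more than doubles emissions.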

  16. Large optics inspection, tilting, and washing stand

    DOEpatents

    Ayers, Marion Jay; Ayers, Shannon Lee

    2012-10-09

    A large optics stand provides a risk-free means, and a corresponding method, of safely tilting large optics with ease. The optics are supported in the horizontal position by pads. In the vertical plane the optics are supported by saddles that evenly distribute the optics' weight over a large area.

  17. Large optics inspection, tilting, and washing stand

    DOEpatents

    Ayers, Marion Jay; Ayers, Shannon Lee

    2010-08-24

    A large optics stand provides a risk-free means, and a corresponding method, of safely tilting large optics with ease. The optics are supported in the horizontal position by pads. In the vertical plane the optics are supported by saddles that evenly distribute the optics' weight over a large area.

  18. Fronts in Large Marine Ecosystems

    NASA Astrophysics Data System (ADS)

    Belkin, Igor M.; Cornillon, Peter C.; Sherman, Kenneth

    2009-04-01

    Oceanic fronts shape marine ecosystems; therefore front mapping and characterization are among the most important aspects of physical oceanography. Here we report on the first global remote sensing survey of fronts in the Large Marine Ecosystems (LME). This survey is based on a unique frontal data archive assembled at the University of Rhode Island. Thermal fronts were automatically derived with the edge detection algorithm of Cayula and Cornillon (1992, 1995, 1996) from 12 years of twice-daily, global, 9-km resolution satellite sea surface temperature (SST) fields to produce synoptic (nearly instantaneous) frontal maps, and to compute the long-term mean frequency of occurrence of SST fronts and their gradients. These synoptic and long-term maps were used to identify major quasi-stationary fronts and to derive provisional frontal distribution maps for all LMEs. Since SST fronts are typically collocated with fronts in other water properties such as salinity, density and chlorophyll, digital frontal paths from SST frontal maps can be used in studies of physical-biological correlations at fronts. Frontal patterns in several exemplary LMEs are described and compared, including those for: the East and West Bering Sea LMEs, Sea of Okhotsk LME, East China Sea LME, Yellow Sea LME, North Sea LME, East and West Greenland Shelf LMEs, Newfoundland-Labrador Shelf LME, Northeast and Southeast US Continental Shelf LMEs, Gulf of Mexico LME, and Patagonian Shelf LME. Seasonal evolution of frontal patterns in major upwelling zones reveals an order-of-magnitude growth of frontal scales from summer to winter. A classification of LMEs with regard to the origin and physics of their respective dominant fronts is presented. The proposed classification lends itself to comparative studies of frontal ecosystems.
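The survey above derives fronts with the histogram-based Cayula and Cornillon edge-detection algorithm. As a much simpler illustration of the underlying idea, that fronts appear as strong local SST gradients, the sketch below flags pixels of a synthetic SST field whose gradient magnitude exceeds a threshold. The field, grid spacing, and threshold are illustrative assumptions; this is a gradient proxy, not the Cayula-Cornillon method itself.

```python
import numpy as np

# Toy gradient-based front detection on a synthetic SST field (deg C).
# NOT the Cayula-Cornillon histogram algorithm used in the survey.
ny, nx = 100, 100
y = np.linspace(0.0, 1.0, ny)[:, None]
sst = 15.0 + 10.0 / (1.0 + np.exp(-(y - 0.5) / 0.02))  # sharp zonal front
sst = np.tile(sst, (1, nx))                             # uniform in x

# Gradient magnitude on an assumed ~9 km grid, then threshold in deg C/km.
gy, gx = np.gradient(sst, 9.0)
grad_mag = np.hypot(gx, gy)
front_mask = grad_mag > 0.05

print("front pixels:", int(front_mask.sum()))
```

On this field the flagged pixels form a zonal band around the mid-basin temperature jump, which is the qualitative signature a front-detection pass looks for.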

  19. Temporal Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Thomas, B. C.

    2004-01-01

    In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes simulation (RANS). The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and a highly accurate data base of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re(sub tau) = 180 and Re(sub tau) = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.
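TLES replaces the spatial grid filter of conventional LES with a causal filter in time. A minimal sketch of such a filter, assuming a first-order exponential (low-pass) form with filter width Δ, is shown below; the test signal and parameters are illustrative, not taken from the paper.

```python
import numpy as np

# Minimal sketch of a causal exponential time filter of the kind used in
# temporal LES: u_bar(t_n) = a*u(t_n) + (1-a)*u_bar(t_{n-1}),
# a first-order discrete low-pass with assumed filter width delta.
def time_filter(u, dt, delta):
    a = dt / (delta + dt)                 # smoothing coefficient
    u_bar = np.empty_like(u)
    u_bar[0] = u[0]
    for n in range(1, len(u)):
        u_bar[n] = a * u[n] + (1.0 - a) * u_bar[n - 1]
    return u_bar

t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
u = np.sin(t) + 0.3 * np.sin(40.0 * t)    # slow mode + fast "subfilter" mode
u_bar = time_filter(u, dt, delta=0.5)     # fast mode is strongly attenuated
```

The filtered signal retains the slow mode while damping the fast one, which is the temporal analogue of separating resolved from subgrid scales in spatial LES.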

  20. The Large Synoptic Survey Telescope

    NASA Astrophysics Data System (ADS)

    Ivezic, Zeljko

    2007-05-01

    The Large Synoptic Survey Telescope (LSST) is currently by far the most ambitious proposed ground-based optical survey. With initial funding from the US National Science Foundation (NSF), Department of Energy (DOE) laboratories and private sponsors, the design and development efforts are well underway at many institutions, including top universities and leading national laboratories. The main science themes that drive the LSST system design are Dark Energy and Matter, the Solar System Inventory, Transient Optical Sky and the Milky Way Mapping. The LSST system, with its 8.4m telescope and 3,200 Megapixel camera, will be sited at Cerro Pachon in northern Chile, with the first light scheduled for 2014. In a continuous observing campaign, LSST will cover the entire available sky every three nights in two photometric bands to a depth of V=25 per visit (two 15 second exposures), with exquisitely accurate astrometry and photometry. Over the proposed survey lifetime of 10 years, each sky location would be observed about 1000 times, with the total exposure time of 8 hours distributed over six broad photometric bandpasses (ugrizY). This campaign will open a movie-like window on objects that change brightness, or move, on timescales ranging from 10 seconds to 10 years, and will produce a catalog containing over 10 billion galaxies and a similar number of stars. The survey will have a data rate of about 30 TB/night, and will collect over 60 PB of raw data over its lifetime, resulting in an incredibly rich and extensive public archive that will be a treasure trove for breakthroughs in many areas of astronomy and astrophysics.

  1. India's National Large Solar Telescope

    NASA Astrophysics Data System (ADS)

    Hasan, S. S.

    2012-12-01

    India's 2-m National Large Solar Telescope (NLST) is aimed primarily at carrying out observations of the solar atmosphere with high spatial and spectral resolution. A comprehensive site characterization program, which commenced in 2007, has identified two superb sites in the Himalayan region at altitudes greater than 4000 m that have extremely low water vapor content and are unaffected by monsoons. With an innovative optical design, the NLST is an on-axis Gregorian telescope with a small number of optical elements to reduce the number of reflections and yield a high throughput with low polarization. In addition, it is equipped with high-order adaptive optics to produce close to diffraction-limited performance. To control atmospheric and thermal perturbations of the observations, the telescope will function with a fully open dome atop a 25 m tower, allowing it to achieve its full potential. Given its design, the NLST can also operate at night without compromising its solar performance. The post-focus instruments include broad-band and tunable Fabry-Pérot narrow-band imaging instruments, a high-resolution spectropolarimeter, and an Echelle spectrograph for night-time astronomy. This project is led by the Indian Institute of Astrophysics and has national and international partners. Its geographical location will fill the longitudinal gap between Japan and Europe, and it is expected to be the largest solar telescope with an aperture larger than 1.5 m until the ATST and EST come into operation. An international consortium has been identified to build the NLST. The facility is expected to be commissioned by 2016.

  2. Large Alluvial Fans on Mars

    NASA Technical Reports Server (NTRS)

    Moore, Jeffrey M.; Howard, Alan D.

    2004-01-01

    Several dozen distinct alluvial fans, 10 to greater than 40 km long downslope, are observed exclusively in highlands craters. Within a search region between 0 deg. and 30 deg. S, alluvial fan-containing craters were only found between 18 and 29 S, and they all occur at around plus or minus 1 km of the MOLA-defined Martian datum. Within the study area they are not randomly distributed but instead form three distinct clusters. Fans typically descend greater than 1 km from where they disgorge from their alcoves. Longitudinal profiles show that their surfaces are very slightly concave, with a mean slope of 2 degrees. Many fans exhibit very long, narrow low-relief ridges radially oriented down-slope, often branching at their distal ends, suggestive of distributaries. Morphometric data for 31 fans were derived from MOLA data and compared with terrestrial fans with high-relief source areas, terrestrial low gradient alluvial ramps in inactive tectonic settings, and older Martian alluvial ramps along crater floors. The Martian alluvial fans generally fall on the same trends as the terrestrial alluvial fans, whereas the gentler Martian crater floor ramps are similar in gradient to the low relief terrestrial alluvial surfaces. For a given fan gradient, Martian alluvial fans generally have greater source basin relief than terrestrial fans in active tectonic settings. This suggests that the terrestrial source basins either yield coarser debris or have higher sediment concentrations than their Martian counterparts. Martian fans and Basin and Range fans have steeper gradients than the older Martian alluvial ramps and terrestrial low relief alluvial surfaces, which is consistent with a supply of coarse sediment. Martian fans are relatively large and of low gradient, similar to terrestrial fluvial fans rather than debris flow fans. However, gravity scaling uncertainties make the flow regime forming Martian fans uncertain. Martian fans, at least those in Holden crater, apparently

  3. Interface structure at large supercooling

    NASA Astrophysics Data System (ADS)

    Misbah, C.; Müller-Krumbhaar, H.; Temkin, D. E.

    1991-04-01

    The front dynamics during the growth of a pure substance in the large-undercooling limit, including interface kinetics, is analyzed. There exists a critical dimensionless undercooling Δ_s (>1) above which a planar front is linearly stable. For Δ < Δ_s the planar front is unstable against perturbations with small wavenumbers k, 0 < k < k_c(Δ).

  4. Large space systems technology, 1981. [conferences

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1982-01-01

    A total systems approach including structures, analyses, controls, and antennas is presented as a cohesive, programmatic plan for large space systems. Specifically, program status, structures, materials, and analyses, and control of large space systems are addressed.

  5. Investing in a Large Stretch Press

    NASA Technical Reports Server (NTRS)

    Choate, M.; Nealson, W.; Jay, G.; Buss, W.

    1986-01-01

    Press for forming large aluminum parts from plates provides substantial economies. Study assessed advantages and disadvantages of investing in large stretch-forming press, and also developed procurement specification for press.

  6. Large Space Antenna Systems Technology, 1984

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1985-01-01

    Papers are presented which provide a comprehensive review of space missions requiring large antenna systems and of the status of key technologies required to enable these missions. Topic areas include mission applications for large space antenna systems, large space antenna structural systems, materials and structures technology, structural dynamics and control technology, electromagnetics technology, large space antenna systems and the space station, and flight test and evaluation.

  7. Large space systems technology, 1980, volume 1

    NASA Technical Reports Server (NTRS)

    Kopriver, F., III (Compiler)

    1981-01-01

    The technological and developmental efforts in support of the large space systems technology are described. Three major areas of interest are emphasized: (1) technology pertinent to large antenna systems; (2) technology related to large platform systems; and (3) activities that support both antenna and platform systems.

  8. 27 CFR 19.915 - Large plants.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Large plants. 19.915... OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Distilled Spirits For Fuel Use Permits § 19.915 Large plants. Any person wishing to establish a large plant shall make application for and obtain...

  9. Large Devaluations and the Real Exchange Rate

    ERIC Educational Resources Information Center

    Burstein, Ariel; Eichenbaum, Martin; Rebelo, Sergio

    2005-01-01

    In this paper we argue that the primary force behind the large drop in real exchange rates that occurs after large devaluations is the slow adjustment in the prices of nontradable goods and services. Our empirical analysis uses data from five large devaluation episodes: Argentina (2002), Brazil (1999), Korea (1997), Mexico (1994), and Thailand…

  10. Large variable conductance heat pipe. Transverse header

    NASA Technical Reports Server (NTRS)

    Edelstein, F.

    1975-01-01

    The characteristics of gas-loaded, variable conductance heat pipes (VCHP) are discussed. The difficulties involved in developing a large VCHP header are analyzed. The construction of the large capacity VCHP is described. A research project to eliminate some of the problems involved in large capacity VCHP operation is explained.

  11. Large N reduction on coset spaces

    SciTech Connect

    Kawai, Hikaru; Shimasaki, Shinji; Tsuchiya, Asato

    2010-04-15

    As an extension of our previous work concerning the large N reduction on group manifolds, we study the large N reduction on coset spaces. We show that large N field theories on coset spaces are described by certain corresponding matrix models. We also construct Chern-Simons-like theories on group manifolds and coset spaces, and give their reduced models.

  12. Large Space Systems Technology, Part 2, 1981

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1982-01-01

    Four major areas of interest are covered: technology pertinent to large antenna systems; technology related to the control of large space systems; basic technology concerning structures, materials, and analyses; and flight technology experiments. Large antenna systems and flight technology experiments are described. Design studies, structural testing results, and theoretical applications are presented with accompanying validation data. These research studies represent state-of-the art technology that is necessary for the development of large space systems. A total systems approach including structures, analyses, controls, and antennas is presented as a cohesive, programmatic plan for large space systems.

  13. Large fluctuations at the lasing threshold of solid- and liquid-state dye lasers.

    PubMed

    Basak, Supratim; Blanco, Alvaro; López, Cefe

    2016-01-01

    Intensity fluctuations in lasers are commonly studied above threshold in some special configurations (especially when emission is fed back into the cavity or when two lasers are coupled) and are related to their chaotic behaviour. Similar fluctuating instabilities are usually observed in random lasers, which are open systems with plenty of quasi-modes whose non-orthogonality enables them to exchange energy and provides the sort of loss mechanism whose interplay with pumping leads to replica symmetry breaking. The latter, however, had never been observed in plain cavity lasers where disorder is absent or not intentionally added. Here we show fluctuating lasing behaviour at the lasing threshold in both solid and liquid dye lasers. Above and below a narrow range around the threshold the spectral line-shape is well correlated with the pump energy. At the threshold such correlation disappears, and the system enters a regime where the emitted laser light fluctuates between narrow, intense peaks and broad, weak ones. The immense number of modes and the reduced resonator quality favour the coupling of modes and prepare the system so that replica symmetry breaking occurs without added disorder. PMID:27558968

  14. Large fluctuations at the lasing threshold of solid- and liquid-state dye lasers

    PubMed Central

    Basak, Supratim; Blanco, Alvaro; López, Cefe

    2016-01-01

    Intensity fluctuations in lasers are commonly studied above threshold in some special configurations (especially when emission is fed back into the cavity or when two lasers are coupled) and are related to their chaotic behaviour. Similar fluctuating instabilities are usually observed in random lasers, which are open systems with plenty of quasi-modes whose non-orthogonality enables them to exchange energy and provides the sort of loss mechanism whose interplay with pumping leads to replica symmetry breaking. The latter, however, had never been observed in plain cavity lasers where disorder is absent or not intentionally added. Here we show fluctuating lasing behaviour at the lasing threshold in both solid and liquid dye lasers. Above and below a narrow range around the threshold the spectral line-shape is well correlated with the pump energy. At the threshold such correlation disappears, and the system enters a regime where the emitted laser light fluctuates between narrow, intense peaks and broad, weak ones. The immense number of modes and the reduced resonator quality favour the coupling of modes and prepare the system so that replica symmetry breaking occurs without added disorder. PMID:27558968

  15. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large scale systems, such as power networks, communication networks, and economic or ecological systems, were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to deal with large scale systems. A very similar classification is used, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is brought to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  16. Shape control of large space structures

    NASA Technical Reports Server (NTRS)

    Hagan, M. T.

    1982-01-01

    A survey has been conducted to determine the types of control strategies which have been proposed for controlling the vibrations in large space structures. From this survey several representative control strategies were singled out for detailed analyses. The application of these strategies to a simplified model of a large space structure has been simulated. These simulations demonstrate the implementation of the control algorithms and provide a basis for a preliminary comparison of their suitability for large space structure control.

  17. Force Sensor for Large Robot Arms

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.; Primus, H. C.; Scheinman, V. D.

    1985-01-01

    Modified Maltese-cross force sensor larger and more sensitive than earlier designs. Measures inertial forces and torques exerted on large robot arms during free movement as well as those exerted by claw on manipulated objects. Large central hole of sensor allows claw drive mounted inside arm instead of perpendicular to its axis, eliminating potentially hazardous projection. Originally developed for Space Shuttle, sensor finds applications in large industrial robots.

  18. Improved Large-Field Focusing Schlieren System

    NASA Technical Reports Server (NTRS)

    Weinstein, Leonard M.

    1993-01-01

    System used to examine complicated two- and three-dimensional flows. High-brightness large-field focusing schlieren system incorporates Fresnel lens instead of glass diffuser. In system with large field of view, image may also be very large. Relay optical subsystem minifies large image while retaining all of light. Candidate facilities for focusing schlieren include low-speed wind and water tunnels. Heated or cooled flow tracers or injected low- or high-density tracers used to make flows visible for photographic recording.

  19. Perception for a large deployable reflector telescope

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. M.; Swanson, P. N.; Meinel, A. B.; Meinel, M. P.

    1984-01-01

    Optical science and technology concepts for a large deployable reflector for far-infrared and submillimeter astronomy from above the earth's atmosphere are discussed. Requirements given at the Asilomar Conference are reviewed. The technical challenges of this large-aperture (about 20-meter) telescope, which will be diffraction limited in the infrared, are highlighted in a brief discussion of one particular configuration.

  20. World atlas of large optical telescopes

    NASA Technical Reports Server (NTRS)

    Meszaros, S. P.

    1979-01-01

    By 1980 there will be approximately 100 large optical telescopes in the world with mirror or lens diameters of one meter (39 inches) and larger. This atlas gives information on these telescopes and shows their locations on continent-sized maps. Observatory locations considered suitable for the construction of future large telescopes are also shown.

  1. Densifying forest biomass into large round bales

    SciTech Connect

    Fridley, J.; Burkhardt, T.H.

    1981-01-01

    A large round-bale hay baler was modified to examine the concept of baling forest biomass in large round bales. Material baled, feed orientation, and baler belt tension were varied to observe their effects on the baling process and bale density. The torque and power required to drive the baler were measured. 10 refs.

  2. Large-Scale Reform Comes of Age

    ERIC Educational Resources Information Center

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that while the pressure for reform was mounting, there were very few examples of deliberate or…

  3. Large Lecture Format: Some Lessons Learned.

    ERIC Educational Resources Information Center

    Kryder, LeeAnne G.

    2002-01-01

    Shares some surprising results from a business communication program's recent experiment in using a large lecture format to teach an upper-division business communication course: approximately 90-95% of the students liked the large lecture format, and the quality of their communication deliverables was as good as that produced by students who took…

  4. Large-scale infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Murray, Darin A.

    1999-07-01

    Large-scale infrared scene projectors typically have unique opto-mechanical characteristics associated with their applications. This paper outlines two large-scale zoom lens assemblies with different environmental and package constraints. Various challenges and their respective solutions are discussed and presented.

  5. 75 FR 73983 - Assessments, Large Bank Pricing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-30

    ... Kapoor, Counsel, Legal Division, (202) 898-3960. Correction In proposed rule FR Doc. 2010-29138...; ] FEDERAL DEPOSIT INSURANCE CORPORATION 12 CFR Part 327 RIN 3064-AD66 Assessments, Large Bank Pricing AGENCY..., 2010, regarding Assessments, Large Bank Pricing. This correction clarifies that the comment period...

  6. 76 FR 17521 - Assessments, Large Bank Pricing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-30

    ... Register of February 25, 2011 (76 FR 10672), regarding Assessments, Large Bank Pricing. This correction... 17th Street, NW., Washington, DC 20429. SUPPLEMENTARY INFORMATION: In FR Doc. 2011-3086, appearing on... 327 RIN 3064-AD66 Assessments, Large Bank Pricing AGENCY: Federal Deposit Insurance Corporation...

  7. The algebras of large N matrix mechanics

    SciTech Connect

    Halpern, M.B.; Schwartz, C.

    1999-09-16

    Extending early work, we formulate the large N matrix mechanics of general bosonic, fermionic and supersymmetric matrix models, including Matrix theory: The Hamiltonian framework of large N matrix mechanics provides a natural setting in which to study the algebras of the large N limit, including (reduced) Lie algebras, (reduced) supersymmetry algebras and free algebras. We find in particular a broad array of new free algebras which we call symmetric Cuntz algebras, interacting symmetric Cuntz algebras, symmetric Bose/Fermi/Cuntz algebras and symmetric Cuntz superalgebras, and we discuss the role of these algebras in solving the large N theory. Most important, the interacting Cuntz algebras are associated to a set of new (hidden!) local quantities which are generically conserved only at large N. A number of other new large N phenomena are also observed, including the intrinsic nonlocality of the (reduced) trace class operators of the theory and a closely related large N field identification phenomenon which is associated to another set (this time nonlocal) of new conserved quantities at large N.

  8. Implementing Large Projects in Software Engineering Courses

    ERIC Educational Resources Information Center

    Coppit, David

    2006-01-01

    In software engineering education, large projects are widely recognized as a useful way of exposing students to the real-world difficulties of team software development. But large projects are difficult to put into practice. First, educators rarely have additional time to manage software projects. Second, classrooms have inherent limitations that…

  9. Collaborative Working for Large Digitisation Projects

    ERIC Educational Resources Information Center

    Yeates, Robin; Guy, Damon

    2006-01-01

    Purpose: To explore the effectiveness of large-scale consortia for disseminating local heritage via the web. To describe the creation of a large geographically based cultural heritage consortium in the South East of England and management lessons resulting from a major web site digitisation project. To encourage the improved sharing of experience…

  10. How Do People Apprehend Large Numerosities?

    ERIC Educational Resources Information Center

    Sophian, Catherine; Chu, Yun

    2008-01-01

    People discriminate remarkably well among large numerosities. These discriminations, however, need not entail numerical representation of the quantities being compared. This research evaluated the role of both non-numerical and numerical information in adult judgments of relative numerosity for large-numerosity spatial arrays. Results of…

  11. Fabrication of large ceramic electrolyte disks

    NASA Technical Reports Server (NTRS)

    Ring, S. A.

    1972-01-01

    Process for sintering compressed ceramic powders produces large ceramic disks for use as electrolytes in high-temperature electrolytic cells. Thin, strain-free, uniformly dense disks as large as 30 cm squared have been fabricated by slicing ceramic slugs produced by this technique.

  12. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  13. LARGE AND GREAT RIVERS: NEW ASSESSMENT TOOLS

    EPA Science Inventory

    The Ecological Exposure Research Division has been conducting research to support the development of the next generation of bioassessment and monitoring tools for large and great rivers. Focus has largely been on the development of standardized protocols for the traditional indi...

  14. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A where structured means that a matrix-vector product w…
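
    ARPACK itself is Fortran77, but its implicitly restarted Arnoldi iteration is exposed through wrappers such as SciPy's `scipy.sparse.linalg.eigs`. A minimal sketch (assuming SciPy is available) that extracts a few of the smallest eigenvalues of a large sparse matrix via shift-invert:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

# Large sparse test matrix: the n-by-n 1-D Laplacian (tridiagonal).
n = 2000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# ARPACK computes the k=4 eigenvalues nearest sigma=0 (here, the smallest
# ones) in shift-invert mode; A enters only through a sparse factorization
# and matrix-vector products.
vals, vecs = eigs(A, k=4, sigma=0)

# Analytic eigenvalues of the discrete Laplacian: 2 - 2*cos(j*pi/(n+1)).
exact = 2.0 - 2.0 * np.cos(np.arange(1, 5) * np.pi / (n + 1))
print(np.sort(vals.real), exact)
```

    Shift-invert (`sigma=0`) is the usual way to reach the smallest eigenvalues quickly; asking ARPACK for `which="SM"` directly tends to converge slowly.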

  15. Environmental effects and large space systems

    NASA Technical Reports Server (NTRS)

    Garrett, H. B.

    1981-01-01

    When planning large-scale operations in space, environmental impact must be considered in addition to radiation, spacecraft charging, contamination, high power, and size. Pollution of the atmosphere and space is caused by rocket effluents and by photoelectrons generated by sunlight falling on satellite surfaces; even light pollution may result (the SPS may reflect so much light as to be a nuisance to astronomers). Large (100 km²) structures also will absorb the high-energy particles that impinge on them. Altogether, these effects may drastically alter the Earth's magnetosphere. It is not clear whether these alterations will in any way affect the Earth's surface climate. Large structures will also generate large plasma wakes and waves, which may cause interference with communications to the vehicle. A high-energy microwave beam from the SPS will cause ionospheric turbulence, affecting UHF and VHF communications. Although none of these effects may ultimately prove critical, they must be considered in the design of large structures.

  16. Large Eddy Simulation of a Turbulent Jet

    NASA Technical Reports Server (NTRS)

    Webb, A. T.; Mansour, Nagi N.

    2001-01-01

    Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.

  17. Generically large nongaussianity in small multifield inflation

    SciTech Connect

    Bramante, Joseph

    2015-07-07

    If forthcoming measurements of cosmic photon polarization restrict the primordial tensor-to-scalar ratio to r<0.01, small field inflation will be a principal candidate for the origin of the universe. Here we show that small multifield inflation, without the hybrid mechanism, typically results in large squeezed nongaussianity. Small multifield potentials contain multiple flat field directions, often identified with the gauge invariant field directions in supersymmetric potentials. We find that unless these field directions have equal slopes, large nongaussianity arises. After identifying relevant differences between large and small two-field potentials, we demonstrate that the latter naturally fulfill the Byrnes-Choi-Hall large nongaussianity conditions. Computations of the primordial power spectrum, spectral index, and squeezed bispectrum, reveal that small two-field models which otherwise match observed primordial perturbations, produce excludably large nongaussianity if the inflatons’ field directions have unequal slopes.

  18. Testing Large Structures in the Field

    NASA Technical Reports Server (NTRS)

    James, George; Carne, Thomas G.

    2009-01-01

    Field testing large structures creates unique challenges such as limited choices for boundary conditions and the fact that natural excitation sources cannot be removed. Several critical developments in field testing of large structures are reviewed, including: step relaxation testing which has been developed into a useful technique to apply large forces to operational systems by careful windowing; the capability of large structures testing with free support conditions which has been expanded by implementing modeling of the support structure; natural excitation which has been developed as a viable approach to field testing; and the hybrid approach which has been developed to allow forces to be estimated in operating structures. These developments have increased the ability to extract information from large structures and are highlighted in this presentation.

  19. Do large hiatal hernias affect esophageal peristalsis?

    PubMed Central

    Roman, Sabine; Kahrilas, Peter J; Kia, Leila; Luger, Daniel; Soper, Nathaniel; Pandolfino, John E

    2013-01-01

    Background & Aim: Large hiatal hernias can be associated with a shortened or tortuous esophagus. We hypothesized that these anatomic changes may alter esophageal pressure topography (EPT) measurements made during high-resolution manometry (HRM). Our aim was to compare EPT measures of esophageal motility in patients with large hiatal hernias to those of patients without hernia.

    Methods: Among 2000 consecutive clinical EPT, we identified 90 patients with large (>5 cm) hiatal hernias on endoscopy and at least 7 evaluable swallows on EPT. Within the same database a control group without hernia was selected. EPT was analyzed for lower esophageal sphincter (LES) pressure, Distal Contractile Integral (DCI), contraction amplitude, Contractile Front Velocity (CFV) and Distal Latency time (DL). Esophageal length was measured on EPT from the distal border of the upper esophageal sphincter to the proximal border of the LES. EPT diagnosis was based on the Chicago Classification.

    Results: The manometry catheter was coiled in the hernia and did not traverse the crural diaphragm in 44 patients (49%) with large hernia. Patients with large hernias had lower average LES pressures, lower DCI, slower CFV and shorter DL than patients without hernia. They also exhibited a shorter mean esophageal length. However, the distribution of peristaltic abnormalities was not different in patients with and without large hernia.

    Conclusions: Patients with large hernias had an alteration of EPT measurements as a consequence of the associated shortened esophagus. However, the distribution of peristaltic disorders was unaffected by the presence of hernia. PMID:22508779

  20. On the distinction between large deformation and large distortion for anisotropic materials

    SciTech Connect

    BRANNON,REBECCA M.

    2000-02-24

    A motion involves large distortion if the ratios of principal stretches differ significantly from unity. A motion involves large deformation if the deformation gradient tensor is significantly different from the identity. Unfortunately, rigid rotation fits the definition of large deformation, and models that claim to be valid for large deformation are often inadequate for large distortion. An exact solution for the stress in an idealized fiber-reinforced composite is used to show that conventional large deformation representations for transverse isotropy give errant results. Possible alternative approaches are discussed.
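
    The distinction can be made concrete through the polar decomposition F = RU: the principal stretches are the singular values of the deformation gradient F and are blind to the rotation R. A NumPy illustration of the definitions above (not the paper's fiber-composite solution):

```python
import numpy as np

def principal_stretches(F):
    """Singular values of the deformation gradient F = R U:
    the principal stretches, independent of the rotation R."""
    return np.linalg.svd(F, compute_uv=False)

theta = np.pi / 3  # 60-degree rigid rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
U = np.diag([2.0, 1.0, 0.5])   # pure stretch: ratios far from unity

# Rigid rotation: F differs greatly from the identity (large deformation)...
print(np.linalg.norm(R - np.eye(3)))   # clearly nonzero
# ...yet every principal stretch equals 1, so there is no distortion.
print(principal_stretches(R))
# A pure stretch composed with the rotation: large distortion.
print(principal_stretches(R @ U))
```

    A model validated only against motions like `R` (large deformation, no distortion) can still fail badly on motions like `R @ U`, which is the paper's point.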

  1. 1919 + 742 - A large double radio source

    NASA Astrophysics Data System (ADS)

    Strom, R. G.; Eckart, A.; Biermann, P.

    1985-10-01

    A 49-cm map of the source 1919 + 742 (previously catalogued as 4C74.24 and NB74.26) shows it to be a large double. Although the optical identification is unclear, there are several candidate objects near the source centroid which appear to be members of a small group of galaxies. The absence of bright identification candidates argues for a moderately distant, and hence intrinsically large (750 kpc overall size for z = 0.1) object. The spectrum between 80 and 600 MHz is determined, some intrinsic properties are derived, and these are briefly discussed in the context of other large sources.

  2. Experimental verification of a large flexible manipulator

    NASA Technical Reports Server (NTRS)

    Lee, Jac Won; Huggins, James D.; Book, Wayne J.

    1988-01-01

    A large experimental lightweight manipulator would be useful for material handling, for welding, or for ultrasonic inspection of a large structure, such as an airframe. The flexible parallel link mechanism is designed for high rigidity without increasing weight. This constrained system is analyzed by singular value decomposition of the constraint Jacobian matrix. A verification of the modeling using the assumed mode method is presented. Eigenvalues and eigenvectors of the linearized model are compared to the measured system natural frequencies and their associated mode shapes. The modeling results for large motions are compared to the time response data from the experiments. The hydraulic actuator is verified.
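
    The singular-value-decomposition step can be sketched as follows, with a small hypothetical constraint Jacobian standing in for the manipulator's: admissible joint velocities span the null space of J, which the SVD exposes directly.

```python
import numpy as np

# Hypothetical constraint Jacobian J (2 constraints on 4 joint rates):
# admissible velocities q_dot satisfy J @ q_dot = 0.
J = np.array([[1.0, 0.0, -1.0, 0.0],
              [0.0, 1.0,  0.0, -1.0]])

U, s, Vt = np.linalg.svd(J)
rank = int(np.sum(s > 1e-12))
null_basis = Vt[rank:].T   # columns span the motion space of the linkage

# Every basis direction satisfies the constraints to machine precision.
print(np.abs(J @ null_basis).max())
```

    For this 2x4 example the mechanism retains 4 - 2 = 2 independent motion directions; the same decomposition scales to the full constrained model.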

  3. Learning to build large structures in space

    NASA Technical Reports Server (NTRS)

    Hagler, T.; Patterson, H. G.; Nathan, C. A.

    1977-01-01

    The paper examines some of the key technologies and forms of construction know-how that will have to be developed and tested for eventual application to building large structures in space. Construction of a shuttle-tended space construction/demonstration platform would comprehensively demonstrate large structure technology, develop construction capability, and furnish a construction platform for a variety of operational large structures. Completion of this platform would lead to demonstrations of the Satellite Power System (SPS) concept, including microwave transmission, fabrication of 20-m-deep beams, conductor installation, rotary joint installation, and solar blanket installation.

  4. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  5. Very Large System Dynamics Models - Lessons Learned

    SciTech Connect

    Jacob J. Jacobson; Leonard Malczynski

    2008-10-01

    This paper provides lessons learned from developing several large system dynamics (SD) models. System dynamics modeling practice emphasizes the need to keep models small so that they remain manageable and understandable. This practice is generally reasonable and prudent; however, there are times when large SD models are necessary. This paper outlines two large SD projects that were done at two Department of Energy National Laboratories, the Idaho National Laboratory and Sandia National Laboratories. This paper summarizes the models and then discusses some of the valuable lessons learned during these two modeling efforts.

  6. The Amateurs' Love Affair with Large Datasets

    NASA Astrophysics Data System (ADS)

    Price, Aaron; Jacoby, S. H.; Henden, A.

    2006-12-01

    Amateur astronomers are professionals in other areas. They bring expertise from such varied and technical careers as computer science, mathematics, engineering, and marketing. These skills, coupled with an enthusiasm for astronomy, can be used to help manage the large data sets coming online in the next decade. We will show specific examples where teams of amateurs have been involved in mining large, online data sets and have authored and published their own papers in peer-reviewed astronomical journals. Using the proposed LSST database as an example, we will outline a framework for involving amateurs in data analysis and education with large astronomical surveys.

  7. Efficient generation of large random networks

    NASA Astrophysics Data System (ADS)

    Batagelj, Vladimir; Brandes, Ulrik

    2005-03-01

    Random networks are frequently generated, for example, to investigate the effects of model parameters on network properties or to test the performance of algorithms. Recent interest in the statistics of large-scale networks sparked a growing demand for network generators that can generate large numbers of large networks quickly. We here present simple and efficient algorithms to randomly generate networks according to the most commonly used models. Their running time and space requirement is linear in the size of the network generated, and they are easily implemented.
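
    The linear-time G(n, p) generator described in this paper skips over runs of absent edges with geometrically distributed jumps rather than testing all n(n-1)/2 pairs; a Python sketch of that algorithm:

```python
import math
import random

def fast_gnp(n, p, seed=None):
    """Generate the edge list of G(n, p) in expected O(n + m) time by
    jumping over non-edges with geometrically distributed skip lengths
    (Batagelj & Brandes)."""
    rng = random.Random(seed)
    edges = []
    log_q = math.log(1.0 - p)
    v, w = 1, -1
    while v < n:
        r = rng.random()
        # Geometric skip: number of absent edges before the next present one.
        w = w + 1 + int(math.log(1.0 - r) / log_q)
        while w >= v and v < n:   # carry the skip over row boundaries
            w -= v
            v += 1
        if v < n:
            edges.append((v, w))
    return edges

edges = fast_gnp(1000, 0.01, seed=42)
# Expected edge count: p * n*(n-1)/2 = 4995; any sample should be close.
print(len(edges))
```

    The running time is proportional to the number of edges actually generated, so dense and sparse graphs alike are produced in time linear in output size.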

  8. Optical metrology for very large convex aspheres

    NASA Astrophysics Data System (ADS)

    Burge, J. H.; Su, P.; Zhao, C.

    2008-07-01

    Telescopes with very large diameter or with wide fields require convex secondary mirrors that may be many meters in diameter. The optical surfaces for these mirrors can be manufactured to an accuracy limited by the surface metrology. We have developed metrology systems that are specifically optimized for measuring very large convex aspheric surfaces. A large-aperture, vibration-insensitive sub-aperture Fizeau interferometer combined with stitching software gives high-resolution surface measurements. The global shape is corroborated with a coordinate measuring machine based on a swing-arm profilometer.

  9. Microwave sintering of large alumina bodies

    SciTech Connect

    Blake, R.D.; Katz, J.D.

    1993-05-01

    The application of microwaves as an energy source for materials processing of large alumina bodies at elevated temperatures has been limited to date. Most work has concerned itself with small laboratory samples. The nonuniformity of the microwave field within a cavity subjects large alumina bodies to areas of concentrated energy, resulting in uneven heating and subsequent cracking. Smaller bodies are not significantly affected by field nonuniformity due to their smaller mass. This work will demonstrate a method for microwave sintering of large alumina bodies while maintaining their structural integrity. Several alumina configurations were successfully sintered using a method which creates an artificial field or environment within the microwave cavity.
