Science.gov

Sample records for large non-orthogonal stbcs

  1. Heat welding of non-orthogonal X-junction of single-walled carbon nanotubes

    NASA Astrophysics Data System (ADS)

    Yang, Xueming; Han, Zhonghe; Li, Yonghua; Chen, Dongci; Zhang, Pu; To, Albert C.

    2012-09-01

    Though X-junctions of single-walled carbon nanotubes (SWCNTs) have been intensively studied, studies concerning non-orthogonal X-junctions are still very rare. In this paper, the heat welding of defect-free non-orthogonal X-junctions with different crossed angles is investigated by molecular dynamics simulations. The difference between the heat welding of non-orthogonal and orthogonal X-junctions is described, and the effect of the crossed angle on the configuration and stability of the heat-welded non-orthogonal X-junctions is discussed. Compared with the orthogonal X-junction, two crossed SWCNTs with a smaller non-orthogonal angle are easier to join by heat welding; this may be an important reason why large tubes are difficult to join, whereas large nanotube bundles are easier to observe in experiments.

  2. Accurate Calculation of Oscillator Strengths for Cl II Lines Using Non-orthogonal Wavefunctions

    NASA Technical Reports Server (NTRS)

    Tayal, S. S.

    2004-01-01

    The non-orthogonal orbitals technique in the multiconfiguration Hartree-Fock approach is used to calculate oscillator strengths and transition probabilities for allowed and intercombination lines in Cl II. The relativistic corrections are included through the Breit-Pauli Hamiltonian. The Cl II wave functions show strong term dependence. The non-orthogonal orbitals are used to describe the term dependence of radial functions. Large sets of spectroscopic and correlation functions are chosen to adequately describe strong interactions in the 3s²3p³nl ³P°, ¹P° and ³D° Rydberg series and to properly account for the important correlation and relaxation effects. The length and velocity forms of the oscillator strength show good agreement for most transitions. The calculated radiative lifetime for the 3s3p⁵ ³P° state is in good agreement with experiment.

  3. Asymptotic Performance Analysis of STBCs from Coordinate Interleaved Orthogonal Designs in Shadowed Rayleigh Fading Channels

    NASA Astrophysics Data System (ADS)

    Yoon, Chanho; Lee, Hoojin; Kang, Joonhyuk

    In this letter, we provide an asymptotic error rate performance evaluation of space-time block codes from coordinate interleaved orthogonal designs (STBCs-CIODs), especially in shadowed Rayleigh fading channels. By evaluating a simplified probability density function (PDF) of Rayleigh and Rayleigh-lognormal channels affecting the STBC-CIOD system, we derive an accurate closed-form approximation for the tight upper and lower bounds on the symbol error rate (SER). We show that shadowing asymptotically affects coding gain only, and conclude that an increase in diversity order under shadowing causes slower convergence to the asymptotic bound due to the relatively larger loss of coding gain. By comparing the derived formulas and Monte-Carlo simulations, we validate the accuracy of the theoretical results.

  4. The VOLMAX Transient Electromagnetic Modeling System, Including Sub-Cell Slots and Wires on Random Non-Orthogonal Cells

    SciTech Connect

    Riley, D.J.; Turner, C.D.

    1997-12-31

    VOLMAX is a three-dimensional transient volumetric Maxwell equation solver that operates on standard rectilinear finite-difference time-domain (FDTD) grids, non-orthogonal unstructured grids, or a combination of both types (hybrid grids). The algorithm is fully explicit. Open geometries are typically solved by embedding multiple unstructured regions into a simple rectilinear FDTD mesh. The grid types are fully connected at the mesh interfaces without the need for complex spatial interpolation. The approach permits detailed modeling of complex geometry while mitigating the large cell count typical of non-orthogonal cells such as tetrahedral elements. To further improve efficiency, the unstructured region carries a separate time step that sub-cycles relative to the time-step used in the FDTD mesh.

  5. Implementation of generalized quantum measurements for unambiguous discrimination of multiple non-orthogonal coherent states.

    PubMed

    Becerra, F E; Fan, J; Migdall, A

    2013-01-01

    Generalized quantum measurements implemented to allow for measurement outcomes termed inconclusive can perform perfect discrimination of non-orthogonal states, a task which is impossible using only measurements with definitive outcomes. Here we demonstrate such generalized quantum measurements for unambiguous discrimination of four non-orthogonal coherent states and obtain their quantum mechanical description, the positive-operator valued measure. For practical realizations of this positive-operator valued measure, where noise and realistic imperfections prevent perfect unambiguous discrimination, we show that our experimental implementation outperforms any ideal standard-quantum-limited measurement performing the same non-ideal unambiguous state discrimination task for coherent states with low mean photon numbers. PMID:23774177

  6. Non-Orthogonality of Seafloor Spreading: A New Look at Fast Spreading Centers

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Gordon, R. G.

    2015-12-01

    Most of Earth's surface is created by seafloor spreading. While most seafloor spreading is orthogonal, that is, the strike of mid-ocean ridge segments is perpendicular to nearby transform faults, examples of significant non-orthogonality have been noted since the 1970s, in particular in regions of slow seafloor spreading such as the western Gulf of Aden with non-orthogonality up to 45°. In contrast, here we focus on fast and ultra-fast seafloor spreading along the East Pacific Rise. To estimate non-orthogonality, we compare ridge-segment strikes with the direction of plate motion determined from the angular velocity that best fits all the data along the boundary of a single plate pair [DeMets et al., 2010]. The advantages of this approach include greater accuracy and the ability to estimate non-orthogonality where there are no nearby transform faults. Estimating the strikes of fast-spreading mid-ocean ridge segments presents several challenges, as non-transform offsets on various scales affect the estimate of the strike. While spreading is orthogonal or nearly orthogonal along much of the East Pacific Rise, some ridge segments along the Pacific-Nazca boundary near 30°S and near 16°S-22°S deviate from orthogonality by as much as 6°-12° even when we exclude the portions of mid-ocean ridge segments involved in overlapping spreading centers. Thus modest but significant non-orthogonality occurs where seafloor spreading is the fastest on the planet. If a plume lies near the ridge segment, we assume it contributes to magma overpressure along the ridge segment [Abelson & Agnon, 1997]. We further assume that the contribution to magma overpressure is proportional to the buoyancy flux of the plume [Sleep, 1990] and inversely proportional to the distance between the mid-ocean ridge segment and a given plume. We find that the non-orthogonal angle tends to decrease with increasing spreading rate and with increasing distance between ridge segment and plume.
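
    The comparison described above reduces to simple azimuth arithmetic. The short Python sketch below shows one way to compute the deviation from orthogonality given a ridge-segment strike and a plate-motion azimuth; the function name and the example values are illustrative, not data from the abstract.

```python
def non_orthogonality(ridge_strike_deg, plate_motion_azimuth_deg):
    """Deviation of spreading from orthogonality, in degrees.

    Orthogonal spreading means the plate-motion direction is perpendicular to
    the ridge-segment strike, so the deviation is how far the angle between
    the two azimuths departs from 90 degrees.  Both azimuths are measured
    clockwise from north and treated as undirected lines (modulo 180).
    """
    diff = abs(ridge_strike_deg - plate_motion_azimuth_deg) % 180.0
    angle_between = min(diff, 180.0 - diff)   # acute or right angle, 0-90
    return abs(90.0 - angle_between)

# Illustrative values only: a ridge striking N10E with plate motion toward
# N106E deviates 6 degrees from orthogonal spreading.
print(non_orthogonality(10.0, 106.0))  # -> 6.0
```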

  7. The gravitational Hamiltonian in the presence of non-orthogonal boundaries

    NASA Astrophysics Data System (ADS)

    Hawking, S. W.; Hunter, C. J.

    1996-10-01

    This paper generalizes earlier work on Hamiltonian boundary terms by omitting the requirement that the spacelike hypersurfaces intersect the timelike boundary orthogonally. The expressions for the action and Hamiltonian are calculated and the required subtraction of a background contribution is discussed. The new features of a Hamiltonian formulation with non-orthogonal boundaries are then illustrated in two examples.

  8. The spatial-matched-filter beam pattern of a biaxial non-orthogonal velocity sensor

    NASA Astrophysics Data System (ADS)

    Lee, Charles Hung; Lee, Hye Rin Lindsay; Wong, Kainam Thomas; Razo, Mario

    2016-04-01

    This work derives the "spatial matched filter" beam pattern of a "u-u probe", which comprises two identical, collocated uniaxial velocity sensors that are nominally oriented orthogonally to each other. This orthogonality may not be realized in real-world hardware implementation, and the resulting non-orthogonality would consequently cause a beamformer to have a systematic pointing error, which is derived analytically in this paper. Other than this pointing error, the paper's analysis shows that the beam shape would be unchanged.

  9. Non-Orthogonality of Seafloor Spreading: A New Look at Fast Spreading Centers

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Gordon, R. G.

    2014-12-01

    Most of Earth's surface is created by seafloor spreading, which is one of a handful of fundamental global tectonic processes. While most seafloor spreading is orthogonal, that is, the strike of mid-ocean ridge segments is perpendicular to transform faults, examples of significant non-orthogonality have been noted since the 1970s, in particular in regions of slow seafloor spreading such as the western Gulf of Aden, with non-orthogonality up to 45°. In contrast, here we focus on fast and ultra-fast seafloor spreading along the East Pacific Rise. For our analysis, instead of comparing the strike of mid-ocean ridges with the strike of nearby transform faults, the azimuth of which can be uncertain, we compare it with the direction of plate motion determined from the angular velocity that best fits all the data along the boundary of a single plate pair [DeMets, Gordon, and Argus, 2010]. The advantages of our approach include greater accuracy and the ability to estimate non-orthogonality where there are no nearby transform faults. Estimating the strikes of fast-spreading mid-ocean ridge segments presents several challenges, as non-transform offsets on various scales affect the estimate of the strike. Moreover, the strike may vary considerably within a single ridge segment bounded by transform faults. This is especially evident near overlapping spreading centers, where the strike varies rapidly with distance along a ridge segment. We use various bathymetric data sets to make our estimates, including ETOPO1 [Amante and Eakins, 2009] and GeoMapApp [Ryan et al., 2009]. While spreading is orthogonal or nearly orthogonal along much of the East Pacific Rise, it appears that some ridge segments along the Pacific-Nazca boundary near 30°S and near 16°S-22°S deviate significantly from orthogonality, by as much as 6°-12°, even when we exclude the portions of mid-ocean ridge segments involved in overlapping spreading centers. Thus modest but significant non-orthogonality occurs where seafloor spreading is the fastest on the planet.

  10. Fairness for Non-Orthogonal Multiple Access in 5G Systems

    NASA Astrophysics Data System (ADS)

    Timotheou, Stelios; Krikidis, Ioannis

    2015-10-01

    In non-orthogonal multiple access (NOMA) downlink, multiple data flows are superimposed in the power domain and user decoding is based on successive interference cancellation. NOMA's performance highly depends on the power split among the data flows and the associated power allocation (PA) problem. In this letter, we study NOMA from a fairness standpoint and we investigate PA techniques that ensure fairness for the downlink users under i) instantaneous channel state information (CSI) at the transmitter, and ii) average CSI. Although the formulated problems are non-convex, we have developed low-complexity polynomial algorithms that yield the optimal solution in both cases considered.
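
    As a concrete illustration of the power-domain superposition and SIC decoding described above, the Python sketch below computes the two users' achievable rates in a two-user NOMA downlink with instantaneous CSI and searches for a max-min-fair power split by brute force; the channel gains, SNR value, and grid search are illustrative assumptions, not the letter's algorithm.

```python
import math

def noma_two_user_rates(g_weak, g_strong, alpha, snr):
    """Achievable rates (bit/s/Hz) for a two-user power-domain NOMA downlink.

    g_weak, g_strong : normalized channel power gains |h|^2, g_weak <= g_strong
    alpha            : fraction of transmit power allocated to the weak user
    snr              : total transmit SNR (linear scale)

    The weak user decodes its own signal treating the strong user's signal as
    interference; the strong user first cancels the weak user's signal via
    successive interference cancellation and then decodes interference-free.
    """
    r_weak = math.log2(1.0 + alpha * snr * g_weak
                       / ((1.0 - alpha) * snr * g_weak + 1.0))
    r_strong = math.log2(1.0 + (1.0 - alpha) * snr * g_strong)
    return r_weak, r_strong

# Brute-force max-min fairness over the power split (illustrative only).
g1, g2, snr = 0.2, 1.5, 100.0
best_rate, best_alpha = max(
    (min(noma_two_user_rates(g1, g2, a / 1000.0, snr)), a / 1000.0)
    for a in range(1, 1000))
print(best_rate, best_alpha)
```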

  11. Non-orthogonal optical multicarrier access based on filter bank and SCMA.

    PubMed

    Liu, Bo; Zhang, Lijia; Xin, Xiangjun

    2015-10-19

    This paper proposes a novel non-orthogonal optical multicarrier access system based on filter bank and sparse code multiple access (SCMA). It offers relaxed frequency-offset requirements and better spectral efficiency for multicarrier access. An experiment with a 73.68 Gb/s filter-bank-based multicarrier (FBMC) SCMA system over a 60 km single-mode fiber link is performed to demonstrate the feasibility. A comparison between fast Fourier transform (FFT) based multicarrier transmission and the proposed scheme is also investigated in the experiment. PMID:26480395

  12. Transducer Shadowing Explains Observed Underestimates in Vertical Wind Velocity from Non-orthogonal Sonic Anemometers

    NASA Astrophysics Data System (ADS)

    Frank, J. M.; Massman, W. J.; Swiatek, E.; Zimmerman, H.; Ewers, B. E.

    2014-12-01

    Sonic anemometry is fundamental to all eddy-covariance studies of surface energy and ecosystem carbon and water balance. While recent studies have shown that some anemometers underestimate vertical wind, we hypothesize that this is caused by the lack of transducer shadowing correction in non-orthogonal models. We tested this in an experiment comparing three sonic anemometer designs: orthogonal (O), non-orthogonal (NO), and quasi-orthogonal (QO); using four models: K-probe (O) and A-probe (NO) (Applied Technologies, Inc.) and CSAT3 (NO) and CSAT3V (QO) (Campbell Scientific, Inc.). For each week of a 12-week experiment at the GLEES AmeriFlux site, five instruments from a pool of twelve (three of each model) were randomly selected and located around a control (CSAT3); mid-week, all but the control were re-mounted horizontally. We used Bayesian analysis to test differences between models in half-hour standard deviations (σu, σv, σw, and σT), turbulent kinetic energy (TKE), and the ratio between vertical/horizontal TKE (VHTKE). The K-probe experiences horizontal transducer shadowing, which is effectively corrected using an established wind-tunnel-derived algorithm. We constructed shadow correction algorithms for the NO/QO anemometers by applying the K-probe function to each non-orthogonal transducer pair (SC1), as well as a stronger correction of twice the magnitude (SC2). While the partitioning of VHTKE was higher in O than in NO/QO anemometers, the application of SC1 explained 45-60% of this discrepancy while SC2 overcorrected it. During the horizontal manipulation, changes in the NO/QO anemometers were moderate in σu (4-8% decrease), very strong in σv (9-11% decrease), and minimal in σw (-3 to 4% change), while only σu measurements changed (3% decrease) with the K-probe. These changes were predicted by both shadow correction algorithms, with SC2 better explaining the data. This confirms our hypothesis while eliminating others that attribute the underestimate to a systematic bias in
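
    A rough sketch of the path-wise shadowing-correction idea described above, assuming an attenuation of the form a + (1 - a)·sin(θ) along each acoustic path; the coefficient 0.84, the example geometry, and the back-projection are illustrative assumptions, not the study's calibration (only the "twice the magnitude" idea for SC2 comes from the abstract).

```python
import numpy as np

def shadow_correct(u_xyz, transducer_axes, a=0.84, strength=1.0):
    """Undo an assumed transducer-shadowing attenuation for one wind sample.

    u_xyz           : measured wind vector, shape (3,)
    transducer_axes : unit vectors of the acoustic paths, shape (n_paths, 3)
    a               : assumed attenuation when the wind blows along a path
                      (theta = 0); 0.84 is a commonly quoted wind-tunnel value
                      and is only illustrative here
    strength        : 1.0 gives an SC1-like correction; 2.0 doubles its
                      magnitude, analogous to SC2

    Each path-wise wind component is assumed attenuated by
    a + (1 - a) * sin(theta), where theta is the angle between the wind
    vector and that acoustic path.  The attenuation is divided back out and
    the corrected path components are mapped back to orthogonal x, y, z.
    """
    axes = np.asarray(transducer_axes, dtype=float)
    u = np.asarray(u_xyz, dtype=float)
    path_comp = axes @ u                                   # measured path-wise winds
    wind_dir = u / (np.linalg.norm(u) + 1e-12)
    sin_theta = np.sqrt(np.clip(1.0 - (axes @ wind_dir) ** 2, 0.0, 1.0))
    attenuation = a + (1.0 - a) * sin_theta
    corrected = path_comp / (1.0 - strength * (1.0 - attenuation))
    # Least-squares back-projection handles non-orthogonal path geometries.
    return np.linalg.lstsq(axes, corrected, rcond=None)[0]

# Illustrative non-orthogonal geometry: three paths tilted from the vertical.
axes = np.array([[0.50, 0.00, 0.87], [-0.25, 0.43, 0.87], [-0.25, -0.43, 0.87]])
print(shadow_correct([2.0, 0.5, 0.1], axes))                # SC1-like
print(shadow_correct([2.0, 0.5, 0.1], axes, strength=2.0))  # SC2-like
```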

  13. Non-Orthogonality of Seafloor Spreading: A New Global Survey Building on the MORVEL Plate Motion Project

    NASA Astrophysics Data System (ADS)

    Throckmorton, C. R.; Zhang, T.; Gordon, R. G.

    2013-12-01

    Most of Earth's surface is created by seafloor spreading, which is one of a handful of fundamental global tectonic processes. While most seafloor spreading is orthogonal, that is, the strike of mid-ocean ridge segments is perpendicular to transform faults, examples of significant non-orthogonality have been noted since the 1970s, in particular in regions of slow seafloor spreading such as the western Gulf of Aden. Here we present a new global analysis of non-orthogonality of seafloor spreading by building on the results of the MORVEL global plate motion project, including both new estimates of plate angular velocities and global estimates of the strikes of mid-ocean ridge segments [DeMets, Gordon, & Argus, 2010]. For our analysis, instead of comparing the strike of mid-ocean ridges with the strike of nearby transform faults, the azimuth of which can be uncertain, we compare it with the direction of plate motion determined from the angular velocity that best fits all the data along the boundary of a single plate pair. The advantages of our approach include greater accuracy and the ability to estimate non-orthogonality where there are no nearby transform faults. Unsurprisingly, we confirm that most seafloor spreading is within a few degrees of orthogonality. Moreover, we confirm non-orthogonality in many previously recognized regions of slow seafloor spreading. Surprisingly, however, we find non-orthogonality in several regions of fast seafloor spreading. Implications for mid-ocean ridge processes and hypothesized lithosphere deformation will be discussed.

  14. Efficient computation of Hamiltonian matrix elements between non-orthogonal Slater determinants

    NASA Astrophysics Data System (ADS)

    Utsuno, Yutaka; Shimizu, Noritaka; Otsuka, Takaharu; Abe, Takashi

    2013-01-01

    We present an efficient numerical method for computing Hamiltonian matrix elements between non-orthogonal Slater determinants, focusing on the most time-consuming component of the calculation that involves a sparse array. In the usual case where many matrix elements should be calculated, this computation can be transformed into a multiplication of dense matrices. It is demonstrated that the present method based on the matrix-matrix multiplication attains ~80% of the theoretical peak performance measured on systems equipped with modern microprocessors, a factor of 5-10 better than the normal method using indirectly indexed arrays to treat a sparse array. The reason for such different performances is discussed from the viewpoint of memory access.
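
    A toy numpy illustration (not the authors' code) of the speedup mechanism described above: the indirectly indexed accumulation over a sparse coupling table is replaced by one gather followed by a single dense matrix-matrix product, which an optimized BLAS executes close to peak; all array names and sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented setup: contributions indexed by pairs (p, q) of a sparse coupling
# table, weighted by dense per-determinant amplitude arrays.
n_pairs, n_bra, n_ket, n_orb = 1000, 200, 200, 150
idx_p = rng.integers(0, n_orb, size=n_pairs)
idx_q = rng.integers(0, n_orb, size=n_pairs)
coupling = rng.standard_normal(n_pairs)            # values of the sparse array
amp_bra = rng.standard_normal((n_orb, n_bra))
amp_ket = rng.standard_normal((n_orb, n_ket))

def indirect():
    """Normal method: indirectly indexed accumulation (poor data locality)."""
    h = np.zeros((n_bra, n_ket))
    for k in range(n_pairs):
        h += coupling[k] * np.outer(amp_bra[idx_p[k]], amp_ket[idx_q[k]])
    return h

def via_gemm():
    """Gather once, then evaluate everything as one dense matrix product."""
    left = amp_bra[idx_p] * coupling[:, None]      # (n_pairs, n_bra)
    right = amp_ket[idx_q]                         # (n_pairs, n_ket)
    return left.T @ right                          # single dense GEMM

assert np.allclose(indirect(), via_gemm())
```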

  15. Simultaneous Source Localization and Polarization Estimation via Non-Orthogonal Joint Diagonalization with Vector-Sensors

    PubMed Central

    Gong, Xiao-Feng; Wang, Ke; Lin, Qiu-Hua; Liu, Zhi-Wen; Xu, You-Gen

    2012-01-01

    Joint estimation of direction-of-arrival (DOA) and polarization with electromagnetic vector-sensors (EMVS) is considered in the framework of complex-valued non-orthogonal joint diagonalization (CNJD). Two new CNJD algorithms are presented, which propose to tackle the high dimensional optimization problem in CNJD via a sequence of simple sub-optimization problems, by using LU or LQ decompositions of the target matrices as well as the Jacobi-type scheme. Furthermore, based on the above CNJD algorithms we present a novel strategy to exploit the multi-dimensional structure present in the second-order statistics of EMVS outputs for simultaneous DOA and polarization estimation. Simulations are provided to compare the proposed strategy with existing tensorial or joint diagonalization based methods. PMID:22737015

  16. A New Algorithm for Complex Non-Orthogonal Joint Diagonalization Based on Shear and Givens Rotations

    NASA Astrophysics Data System (ADS)

    Mesloub, Ammar; Abed-Meraim, Karim; Belouchrani, Adel

    2014-04-01

    This paper introduces a new algorithm to approximate the non-orthogonal joint diagonalization (NOJD) of a set of complex matrices. The algorithm is based on the Frobenius-norm formulation of the JD problem and takes advantage of combining Givens and Shear rotations to attempt the approximate joint diagonalization (JD). It represents a non-trivial generalization of the JDi (Joint Diagonalization) algorithm (Souloumiac 2009) to the complex case. The JDi is first slightly modified and then generalized to the CJDi (i.e., Complex JDi) using a complex-to-real matrix transformation. Also, since several methods already exist in the literature, we provide a brief overview of existing NOJD algorithms and then an extensive comparative study to illustrate the effectiveness and stability of the CJDi w.r.t. various system parameters and application contexts.
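
    For orientation, the Frobenius-norm criterion that NOJD algorithms of this type typically minimize can be sketched as follows (the notation is assumed, not taken from the paper):

```latex
% Non-orthogonal joint diagonalization (NOJD) criterion, notation assumed:
% given K complex target matrices A_1, ..., A_K, find an invertible
% (not necessarily unitary) matrix V minimizing the off-diagonal energy
\begin{equation}
  \mathcal{C}(V) \;=\; \sum_{k=1}^{K}
    \bigl\lVert \operatorname{off}\!\bigl(V A_k V^{\mathsf{H}}\bigr) \bigr\rVert_F^2 ,
  \qquad
  \operatorname{off}(M) \;=\; M - \operatorname{Diag}(m_{11},\dots,m_{nn}),
\end{equation}
% with V accumulated as a product of elementary complex Givens (unitary) and
% Shear (hyperbolic, non-unitary) rotations applied sweep by sweep.
```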

  17. Spatio-Temporal Evolutions of Non-Orthogonal Equatorial Wave Modes Derived from Observations

    NASA Astrophysics Data System (ADS)

    Barton, C.; Cai, M.

    2015-12-01

    Equatorial waves have been studied extensively due to their importance to the tropical climate and weather systems. Historically, their activity is diagnosed mainly in the wavenumber-frequency domain. Recently, many studies have projected observational data onto parabolic cylinder functions (PCF), which represent the meridional structure of individual wave modes, to attain time-dependent spatial wave structures. In this study, we propose a methodology that seeks to identify individual wave modes in instantaneous fields of observations by determining their projections on PCF modes according to the equatorial wave theory. The new method has the benefit of yielding a closed system with a unique solution for all waves' spatial structures, including IG waves, for a given instantaneous observed field. We have applied our method to the ERA-Interim reanalysis dataset in the tropical stratosphere where the wave-mean flow interaction mechanism for the quasi-biennial oscillation (QBO) is well-understood. We have confirmed the continuous evolution of the selection mechanism for equatorial waves in the stratosphere from observations as predicted by the theory for the QBO. This also validates the proposed method for decomposition of observed tropical wave fields into non-orthogonal equatorial wave modes.

  18. A Novel Attitude Estimation Algorithm Based on the Non-Orthogonal Magnetic Sensors

    PubMed Central

    Zhu, Jianliang; Wu, Panlong; Bo, Yuming

    2016-01-01

    Because the existing extremum ratio method for projectile attitude measurement is vulnerable to random disturbance, a novel integral ratio method is proposed to calculate the projectile attitude. First, the non-orthogonal measurement theory of the magnetic sensors is analyzed. It is found that the projectile rotating velocity is constant in one spinning circle and the attitude error is actually the pitch error. Next, by investigating the model of the extremum ratio method, an integral ratio mathematical model is established to improve the anti-disturbance performance. Finally, by combining the preprocessed magnetic sensor data based on the least-square method and the rotating extremum features in one cycle, the analytical expression of the proposed integral ratio algorithm is derived with respect to the pitch angle. The simulation results show that the proposed integral ratio method gives more accurate attitude calculations than does the extremum ratio method, and that the attitude error variance can decrease by more than 90%. Compared to the extremum ratio method (which collects only a single data point in one rotation cycle), the proposed integral ratio method can utilize all of the data collected in the high spin environment, which is a clearly superior calculation approach, and can be applied to the actual projectile environment disturbance. PMID:27213389

  19. Optimized Non-Orthogonal Localized Orbitals for Linear Scaling Quantum Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Williamson, Andrew; Reboredo, Fernando; Galli, Giulia

    2004-03-01

    It has been shown [1] that Quantum Monte Carlo calculations of total energies of interacting systems can be made to scale nearly linearly with the number of electrons (N), by using localized single particle orbitals to construct Slater determinants. Here we propose a new way of defining the localized orbitals required for O(N)-QMC calculation, by minimizing an appropriate cost function yielding a set of N non-orthogonal (NO) localized orbitals considerably smoother in real space than Maximally localized Wannier functions (MLWF). These NO orbitals have better localization properties than MLWFs. We show that for semiconducting systems NO orbitals can be localized in a much smaller region of space than orthogonal orbitals (typically, one eighth of the volume) and give total energies with the same accuracy, thus yielding a linear scaling QMC algorithm which is 5 times faster than the one originally proposed [1]. We also discuss the extension of O(N)-QMC with NO orbitals to the calculations of total energies of metallic systems. This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. [1] A. J. Williamson, R.Q. Hood and J.C. Grossman, Phys. Rev. Lett. 87, 246406 (2001)

  20. A Novel Attitude Estimation Algorithm Based on the Non-Orthogonal Magnetic Sensors.

    PubMed

    Zhu, Jianliang; Wu, Panlong; Bo, Yuming

    2016-01-01

    Because the existing extremum ratio method for projectile attitude measurement is vulnerable to random disturbance, a novel integral ratio method is proposed to calculate the projectile attitude. First, the non-orthogonal measurement theory of the magnetic sensors is analyzed. It is found that the projectile rotating velocity is constant in one spinning circle and the attitude error is actually the pitch error. Next, by investigating the model of the extremum ratio method, an integral ratio mathematical model is established to improve the anti-disturbance performance. Finally, by combining the preprocessed magnetic sensor data based on the least-square method and the rotating extremum features in one cycle, the analytical expression of the proposed integral ratio algorithm is derived with respect to the pitch angle. The simulation results show that the proposed integral ratio method gives more accurate attitude calculations than does the extremum ratio method, and that the attitude error variance can decrease by more than 90%. Compared to the extremum ratio method (which collects only a single data point in one rotation cycle), the proposed integral ratio method can utilize all of the data collected in the high spin environment, which is a clearly superior calculation approach, and can be applied to the actual projectile environment disturbance. PMID:27213389

  1. A Non-Orthogonal Block-Localized Effective Hamiltonian Approach for Chemical and Enzymatic Reactions

    PubMed Central

    Cembran, Alessandro; Payaka, Apirak; Lin, Yen-lin; Xie, Wangshen; Mo, Yirong; Song, Lingchun; Gao, Jiali

    2010-01-01

    The effective Hamiltonian-molecular orbital and valence bond (EH-MOVB) method based on non-orthogonal block-localized fragment orbitals has been implemented into the program CHARMM for molecular dynamics simulations of chemical and enzymatic reactions, making use of semiempirical quantum mechanical models. Building upon ab initio MOVB theory, we make use of two parameters in the EH-MOVB method to fit the barrier height and the relative energy between the reactant and product state for a given chemical reaction to be in agreement with experiment or high-level ab initio or density functional results. Consequently, the EH-MOVB method provides a highly accurate and computationally efficient QM/MM model for dynamics simulation of chemical reactions in solution. The EH-MOVB method is illustrated by examination of the potential energy surface of the hydride transfer reaction from trimethylamine to a flavin cofactor model in the gas phase. In the present study, we employed the semiempirical AM1 model, which yields a reaction barrier that is more than 5 kcal/mol too high. We use a parameter calibration procedure for the EH-MOVB method similar to that employed to adjust the results of semiempirical and empirical models. Thus, the relative energy of these two diabatic states can be shifted to reproduce the experimental energy of reaction, and the barrier height is optimized to reproduce the desired (accurate) value by adding a constant to the off-diagonal matrix element. The present EH-MOVB method offers a viable approach to characterizing solvent and protein-reorganization effects in the realm of combined QM/MM simulations. PMID:20694172
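
    A generic sketch of the two-parameter calibration described above, written for a two-state effective Hamiltonian in a non-orthogonal diabatic (reactant/product) basis; the symbols are illustrative and are not the paper's notation.

```latex
% Two non-orthogonal diabatic states R (reactant) and P (product) with
% overlap S_RP; the adiabatic energies E solve the generalized secular problem
\begin{equation}
  \det\!\begin{pmatrix}
    H_{RR} + \Delta - E & H_{RP} + \beta - E\, S_{RP} \\[2pt]
    H_{PR} + \beta - E\, S_{PR} & H_{PP} - E
  \end{pmatrix} = 0 .
\end{equation}
% A constant shift \Delta of one diagonal (diabatic) energy reproduces the
% target energy of reaction, while a constant \beta added to the off-diagonal
% coupling is tuned to reproduce the target barrier height.
```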

  2. A program for calculating photonic band structures, Green's functions and transmission/reflection coefficients using a non-orthogonal FDTD method

    NASA Astrophysics Data System (ADS)

    Ward, A. J.; Pendry, J. B.

    2000-06-01

    In this paper we present an updated version of our ONYX program for calculating photonic band structures using a non-orthogonal finite-difference time-domain method. This new version employs the same transparent formalism as the first version, with the same capabilities for calculating photonic band structures or causal Green's functions, but also includes extra subroutines for the calculation of transmission and reflection coefficients. Both the electric and magnetic fields are placed on a discrete lattice by approximating the spatial and temporal derivatives with finite differences. This results in discrete versions of Maxwell's equations which can be used to integrate the fields forwards in time. The time required for a calculation using this method scales linearly with the number of real-space points used in the discretization, so the technique is ideally suited to handling systems with large and complicated unit cells.
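
    To make the finite-difference leapfrog idea concrete, here is a minimal one-dimensional Yee-style FDTD sketch in normalized units (orthogonal grid, vacuum); it illustrates the discretized curl equations and the linear cost in the number of grid points, and is not the ONYX program's non-orthogonal formulation.

```python
import numpy as np

# Minimal 1-D Yee/FDTD leapfrog in normalized units (c = 1, vacuum).
nx, nt = 400, 800
courant = 0.5                     # dt/dx, below the 1-D stability limit of 1
ez = np.zeros(nx)                 # electric field at integer grid points
hy = np.zeros(nx - 1)             # magnetic field at half-integer points

for n in range(nt):
    # Update H from the spatial difference of E (staggered half time step).
    hy += courant * (ez[1:] - ez[:-1])
    # Update E at interior points from the spatial difference of H.
    ez[1:-1] += courant * (hy[1:] - hy[:-1])
    # Soft source: a Gaussian pulse injected at the centre of the grid.
    ez[nx // 2] += np.exp(-((n - 60) / 20.0) ** 2)

# Each step costs O(nx), so the work grows linearly with the number of
# real-space points, as noted in the abstract.
print(float(np.abs(ez).max()))
```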

  3. Non-orthogonal spin-adaptation of coupled cluster methods: A new implementation of methods including quadruple excitations

    SciTech Connect

    Matthews, Devin A.; Stanton, John F.

    2015-02-14

    The theory of non-orthogonal spin-adaptation for closed-shell molecular systems is applied to coupled cluster methods with quadruple excitations (CCSDTQ). Calculations at this level of theory are of critical importance in describing the properties of molecular systems to an accuracy which can meet or exceed modern experimental techniques. Such calculations are of significant (and growing) importance in such fields as thermodynamics, kinetics, and atomic and molecular spectroscopies. With respect to the implementation of CCSDTQ and related methods, we show that there are significant advantages to non-orthogonal spin-adaptation with respect to simplification and factorization of the working equations and to creating an efficient implementation. The resulting algorithm is implemented in the CFOUR program suite for CCSDT, CCSDTQ, and various approximate methods (CCSD(T), CC3, CCSDT-n, and CCSDT(Q)).

  4. On the Performance of Non-Orthogonal Multiple Access in 5G Systems with Randomly Deployed Users

    NASA Astrophysics Data System (ADS)

    Ding, Zhiguo; Yang, Zheng; Fan, Pingzhi; Poor, H. Vincent

    2014-12-01

    In this letter, the performance of non-orthogonal multiple access (NOMA) is investigated in a cellular downlink scenario with randomly deployed users. The developed analytical results show that NOMA can achieve superior performance in terms of ergodic sum rates; however, the outage performance of NOMA depends critically on the choices of the users' targeted data rates and allocated power. In particular, a wrong choice of the targeted data rates and allocated power can lead to a situation in which the user's outage probability is always one, i.e. the user's targeted quality of service will never be met.
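
    A worked version of the "wrong choice" caveat for the two-user case (power fractions and notation assumed, with a_1 assigned to the weaker user): the weak user's SINR is bounded no matter how strong its channel becomes, so an over-ambitious target rate makes outage certain.

```latex
% Two-user NOMA downlink with power fractions a_1 + a_2 = 1, a_1 for the
% weaker user, and transmit SNR \rho (notation assumed).  The weak user
% decodes its own message treating the strong user's signal as interference:
\begin{equation}
  \mathrm{SINR}_1 \;=\; \frac{a_1 \rho \lvert h_1\rvert^2}{a_2 \rho \lvert h_1\rvert^2 + 1}
  \;<\; \frac{a_1}{a_2},
\end{equation}
% so if the targeted rate satisfies R_1 > \log_2(1 + a_1/a_2), the outage
% probability equals one for every channel realization, which is precisely
% the situation the letter warns against.
```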

  5. The non-orthogonal fixed beam arrangement for the second proton therapy facility at the National Accelerator Center

    NASA Astrophysics Data System (ADS)

    Schreuder, A. N.; Jones, D. T. L.; Conradie, J. L.; Fourie, D. T.; Botha, A. H.; Müller, A.; Smit, H. A.; O'Ryan, A.; Vernimmen, F. J. A.; Wilson, J.; Stannard, C. E.

    1999-06-01

    The medical user group at the National Accelerator Center (NAC) is currently unable to treat all eligible patients with high energy protons. Developing a second proton treatment room is desirable since the 200 MeV proton beam from the NAC separated sector cyclotron is currently under-utilized during proton therapy sessions. During the patient positioning phase in one treatment room, the beam could be used for therapy in a second room. The second proton therapy treatment room at the NAC will be equipped with two non-orthogonal beam lines, one horizontal and one at 30 degrees to the vertical. The two beams will have a common isocentre. This beam arrangement, together with a versatile patient positioning system (commercial robot arm), will provide the radiation oncologist with a diversity of possible beam arrangements and will offer a reasonably cost-effective alternative to an isocentric gantry.

  6. The non-orthogonal fixed beam arrangement for the second proton therapy facility at the National Accelerator Center

    SciTech Connect

    Schreuder, A. N.; Jones, D. T. L.; Conradie, J. L.; Fourie, D. T.; Botha, A. H.; Mueller, A.; Smit, H. A.; O'Ryan, A.; Vernimmen, F. J. A.; Wilson, J.; Stannard, C. E.

    1999-06-10

    The medical user group at the National Accelerator Center (NAC) is currently unable to treat all eligible patients with high energy protons. Developing a second proton treatment room is desirable since the 200 MeV proton beam from the NAC separated sector cyclotron is currently under-utilized during proton therapy sessions. During the patient positioning phase in one treatment room, the beam could be used for therapy in a second room. The second proton therapy treatment room at the NAC will be equipped with two non-orthogonal beam lines, one horizontal and one at 30 degrees to the vertical. The two beams will have a common isocentre. This beam arrangement, together with a versatile patient positioning system (commercial robot arm), will provide the radiation oncologist with a diversity of possible beam arrangements and will offer a reasonably cost-effective alternative to an isocentric gantry.

  7. Size consistent formulations of the perturb-then-diagonalize Møller-Plesset perturbation theory correction to non-orthogonal configuration interaction.

    PubMed

    Yost, Shane R; Head-Gordon, Martin

    2016-08-01

    In this paper we introduce two size-consistent forms of the non-orthogonal configuration interaction with second-order Møller-Plesset perturbation theory method, NOCI-MP2. We show that the original NOCI-MP2 formulation [S. R. Yost, T. Kowalczyk, and T. Van Voorhis, J. Chem. Phys. 139, 174104 (2013)], which is a perturb-then-diagonalize multi-reference method, is not size consistent. We also show that this causes significant errors in large systems like the linear acenes. By contrast, the size-consistent versions of the method give satisfactory results for singlet and triplet excited states when compared to other multi-reference methods that include dynamic correlation. For NOCI-MP2, however, the number of required determinants to yield similar levels of accuracy is significantly smaller. These results show the promise of the NOCI-MP2 method, though work still needs to be done in creating a more consistent black-box approach to computing the determinants that comprise the many-electron NOCI basis. PMID:27497537

  8. Size consistent formulations of the perturb-then-diagonalize Møller-Plesset perturbation theory correction to non-orthogonal configuration interaction

    NASA Astrophysics Data System (ADS)

    Yost, Shane R.; Head-Gordon, Martin

    2016-08-01

    In this paper we introduce two size-consistent forms of the non-orthogonal configuration interaction with second-order Møller-Plesset perturbation theory method, NOCI-MP2. We show that the original NOCI-MP2 formulation [S. R. Yost, T. Kowalczyk, and T. Van Voorhis, J. Chem. Phys. 139, 174104 (2013)], which is a perturb-then-diagonalize multi-reference method, is not size consistent. We also show that this causes significant errors in large systems like the linear acenes. By contrast, the size-consistent versions of the method give satisfactory results for singlet and triplet excited states when compared to other multi-reference methods that include dynamic correlation. For NOCI-MP2, however, the number of required determinants to yield similar levels of accuracy is significantly smaller. These results show the promise of the NOCI-MP2 method, though work still needs to be done in creating a more consistent black-box approach to computing the determinants that comprise the many-electron NOCI basis.

  9. Multiphase flow modelling using non-orthogonal collocated finite volumes: application to fluid catalytic cracking and large-scale geophysical flows.

    NASA Astrophysics Data System (ADS)

    Martin, R. M.; Nicolas, A. N.

    2003-04-01

    A modeling approach for gas-solid flow, taking into account different physical phenomena such as gas turbulence and inter-particle interactions, is presented. Moment transport equations are derived for the second-order fluctuating velocity tensor, which allows practical closures based on single-phase turbulence modeling on the one hand and on the kinetic theory of granular media on the other. The model is applied to fluid catalytic cracking processes and explosive volcanism. In industry as well as in the geophysical community, multiphase flows are modeled using a finite volume approach and a multicorrector algorithm in time in order to determine implicitly the pressures, velocities and volume fractions for each phase. Pressures and velocities are generally determined at mid-half mesh step from each other following the staggered grid approach. This ensures stability and prevents oscillations in pressure, and it allows treatment of almost all Reynolds number ranges for all speeds and viscosities. The disadvantages appear when more complex geometries are treated or if a generalized curvilinear formulation of the conservation equations is considered: too many interpolations have to be done and accuracy is then lost. In order to overcome these problems, we use here a similar algorithm in time and a Rhie and Chow (1983) interpolation of the collocated variables, essentially the velocities at the interface. The Rhie and Chow interpolation of the velocities at the finite volume interfaces prevents pressure oscillations and checkerboard effects and stabilizes the whole algorithm. In a first predictor step, fluxes at the interfaces of the finite volumes are computed using 2nd- and 3rd-order shock-capturing schemes of MUSCL/TVD or Van Leer type, and the orthogonal stress components are treated implicitly while cross viscous/diffusion terms are treated explicitly. Pentadiagonal linear systems are solved in each geometrical direction (the so-called Alternate Direction Implicit algorithm) to reduce the cost of computation. Then a multi-correction of the interpolated velocities, pressures and volume fractions of each phase is performed in the Cartesian frame or the deformed local curvilinear coordinate system until convergence and mass conservation are reached. Throughout this process the momentum exchange forces and the interphase heat exchanges are treated implicitly to ensure stability. To reduce the computational cost, a domain decomposition strategy is adopted with an overlapping procedure at the interface between subdomains. We show two cases involving non-Cartesian computational domains: a two-phase volcanic flow along a realistic topography, and a gas-particle flow in a complex vertical conduit (riser) of the kind used in industrial fluid catalytic cracking plants. With an initial Richardson number of 0.16, slightly higher than the critical Richardson number of 0.1, particles and water vapor are injected at the bottom of the riser. Countercurrents appear near the walls and gravity effects begin to dominate, inducing an increase of particulate volume fractions near the walls. We show the hydrodynamics for 13 s.
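
    In many implementations of the Alternate Direction Implicit idea mentioned above, each directional sweep reduces to banded (often tridiagonal) solves along grid lines. As one illustrative building block, here is a self-contained Thomas-algorithm sketch for the tridiagonal case; it is not the authors' code, and the test system is invented.

```python
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system A x = rhs with the Thomas algorithm.

    lower : sub-diagonal,   length n (lower[0] unused)
    diag  : main diagonal,  length n
    upper : super-diagonal, length n (upper[-1] unused)

    This O(n) solver is the kind of kernel an ADI sweep applies on every
    grid line for the implicit terms along one spatial direction.
    """
    n = len(diag)
    c = np.empty(n)            # modified super-diagonal
    d = np.empty(n)            # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):      # forward elimination
        denom = diag[i] - lower[i] * c[i - 1]
        c[i] = upper[i] / denom if i < n - 1 else 0.0
        d[i] = (rhs[i] - lower[i] * d[i - 1]) / denom
    x = np.empty(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x

# Quick check against a dense solve on a random diagonally dominant system.
n = 6
rng = np.random.default_rng(1)
lo, up = rng.random(n), rng.random(n)
di = 2.0 + lo + up                    # ensures diagonal dominance
A = np.diag(di) + np.diag(lo[1:], -1) + np.diag(up[:-1], 1)
b = rng.random(n)
assert np.allclose(thomas_solve(lo, di, up, b), np.linalg.solve(A, b))
```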

  10. Functional Implications of Ubiquitous Semicircular Canal Non-Orthogonality in Mammals

    PubMed Central

    Berlin, Jeri C.; Kirk, E. Christopher; Rowe, Timothy B.

    2013-01-01

    The ‘canonical model’ of semicircular canal orientation in mammals assumes that 1) the three ipsilateral canals of an inner ear exist in orthogonal planes (i.e., orthogonality), 2) corresponding left and right canal pairs have equivalent angles (i.e., angle symmetry), and 3) contralateral synergistic canals occupy parallel planes (i.e., coplanarity). However, descriptions of vestibular anatomy that quantify semicircular canal orientation in single species often diverge substantially from this model. Data for primates further suggest that semicircular canal orthogonality varies predictably with the angular head velocities encountered in locomotion. These observations raise the possibility that orthogonality, symmetry, and coplanarity are misleading descriptors of semicircular canal orientation in mammals, and that deviations from these norms could have significant functional consequences. Here we critically assess the canonical model of semicircular canal orientation using high-resolution X-ray computed tomography scans of 39 mammal species. We find that substantial deviations from orthogonality, angle symmetry, and coplanarity are the rule for the mammals in our comparative sample. Furthermore, the degree to which the semicircular canals of a given species deviate from orthogonality is negatively correlated with estimated vestibular sensitivity. We conclude that the available comparative morphometric data do not support the canonical model and that its overemphasis as a heuristic generalization obscures a large amount of functionally relevant variation in semicircular canal orientation between species. PMID:24260256

  11. Three Dimensional Wind Speed and Flux Measurement over a Rain-fed Soybean Field Using Orthogonal and Non-orthogonal Sonic Anemometer Designs

    NASA Astrophysics Data System (ADS)

    Thomas, T.; Suyker, A.; Burba, G. G.; Billesbach, D.

    2014-12-01

    The eddy covariance method for estimating fluxes of trace gases, energy and momentum in the constant flux layer above a plant canopy fundamentally relies on accurate measurements of the vertical wind speed. This wind speed is typically measured using a three dimensional ultrasonic anemometer. These anemometers incorporate designs with transducer sets that are aligned either orthogonally or non-orthogonally. Previous studies comparing the two designs suggest differences in measured 3D wind speed components, in particular vertical wind speed, from the non-orthogonal transducer relative to the orthogonal design. These differences, attributed to additional flow distortion caused by the non-orthogonal transducer arrangement, directly affect fluxes of trace gases, energy and momentum. A field experiment is being conducted over a rain-fed soybean field at the AmeriFlux site (US-Ne3) near Mead, Nebraska. In this study, ultrasonic anemometers featuring orthogonal transducer sets (ATI Vx Probe) and non-orthogonal transducer sets (Gill R3-100) collect high frequency wind vector and sonic temperature data. Sensible heat and momentum fluxes and other key sonic performance data are evaluated based on environmental parameters including wind speed, wind direction, temperature, and angle of attack. Preliminary field experiment results are presented.

  12. Reliable Attention Network Scores and Mutually Inhibited Inter-network Relationships Revealed by Mixed Design and Non-orthogonal Method

    PubMed Central

    Wang, Yi-Feng; Jing, Xiu-Juan; Liu, Feng; Li, Mei-Ling; Long, Zhi-Liang; Yan, Jin H.; Chen, Hua-Fu

    2015-01-01

    The attention system can be divided into alerting, orienting, and executive control networks. The efficiency and independence of attention networks have been widely tested with the attention network test (ANT) and its revised versions. However, many studies have failed to find effects of attention network scores (ANSs) and inter-network relationships (INRs). Moreover, the low reliability of ANSs cannot meet the demands of theoretical and empirical investigations. Two methodological factors (the inter-trial influence in the event-related design and the inter-network interference in orthogonal contrast) may be responsible for the unreliability of ANT. In this study, we combined the mixed design and non-orthogonal method to explore ANSs and directional INRs. With a small number of trials, we obtained reliable and independent ANSs (split-half reliability of alerting: 0.684; orienting: 0.588; and executive control: 0.616), suggesting an individual and specific attention system. Furthermore, mutual inhibition was observed when two networks were operated simultaneously, indicating a differentiated but integrated attention system. Overall, the reliable and individual specific ANSs and mutually inhibited INRs provide novel insight into the understanding of the developmental, physiological and pathological mechanisms of attention networks, and can benefit future experimental and clinical investigations of attention using ANT. PMID:25997025

  13. Novel methods for configuration interaction and orbital optimization for wave functions containing non-orthogonal orbitals with applications to the chromium dimer and trimer

    NASA Astrophysics Data System (ADS)

    Olsen, Jeppe

    2015-09-01

    A novel algorithm for performing configuration interaction (CI) calculations using non-orthogonal orbitals is introduced. In the new algorithm, the explicit calculation of the Hamiltonian matrix is replaced by the direct evaluation of the Hamiltonian matrix times a vector, which allows expressing the CI-vector in a bi-orthonormal basis, thereby drastically reducing the computational complexity. A new non-orthogonal orbital optimization method that employs exponential mappings is also described. To allow non-orthogonal transformations of the orbitals, the standard exponential mapping using anti-symmetric operators is supplemented with an exponential mapping based on a symmetric operator in the active orbital space. Expressions are obtained for the orbital gradient and Hessian, which involve the calculation of at most two-body density matrices, thereby avoiding the time-consuming calculation of the three- and four-body density matrices of the previous approaches. An approach that completely avoids the calculation of any four-body terms with limited degradation of convergence is also devised. The novel methods for non-orthogonal configuration interaction and orbital optimization are applied to the chromium dimer and trimer. For internuclear distances that are typical for chromium clusters, it is shown that a reference configuration consisting of optimized singly occupied active orbitals is sufficient to give a potential curve that is in qualitative agreement with complete active space self-consistent field (CASSCF) calculations containing more than 500 × 106 determinants. To obtain a potential curve that deviates from the CASSCF curve by less than 1 mHartree, it is sufficient to add single and double excitations out from the reference configuration.

  14. Novel methods for configuration interaction and orbital optimization for wave functions containing non-orthogonal orbitals with applications to the chromium dimer and trimer.

    PubMed

    Olsen, Jeppe

    2015-09-21

    A novel algorithm for performing configuration interaction (CI) calculations using non-orthogonal orbitals is introduced. In the new algorithm, the explicit calculation of the Hamiltonian matrix is replaced by the direct evaluation of the Hamiltonian matrix times a vector, which allows expressing the CI-vector in a bi-orthonormal basis, thereby drastically reducing the computational complexity. A new non-orthogonal orbital optimization method that employs exponential mappings is also described. To allow non-orthogonal transformations of the orbitals, the standard exponential mapping using anti-symmetric operators is supplemented with an exponential mapping based on a symmetric operator in the active orbital space. Expressions are obtained for the orbital gradient and Hessian, which involve the calculation of at most two-body density matrices, thereby avoiding the time-consuming calculation of the three- and four-body density matrices of the previous approaches. An approach that completely avoids the calculation of any four-body terms with limited degradation of convergence is also devised. The novel methods for non-orthogonal configuration interaction and orbital optimization are applied to the chromium dimer and trimer. For internuclear distances that are typical for chromium clusters, it is shown that a reference configuration consisting of optimized singly occupied active orbitals is sufficient to give a potential curve that is in qualitative agreement with complete active space self-consistent field (CASSCF) calculations containing more than 500 × 10(6) determinants. To obtain a potential curve that deviates from the CASSCF curve by less than 1 mHartree, it is sufficient to add single and double excitations out from the reference configuration. PMID:26395682

  15. New Advances In Multiphase Flow Numerical Modelling Using A General Domain Decomposition and Non-orthogonal Collocated Finite Volume Algorithm: Application To Industrial Fluid Catalytic Cracking Process and Large Scale Geophysical Fluids.

    NASA Astrophysics Data System (ADS)

    Martin, R.; Gonzalez Ortiz, A.

    In the industry as well as in the geophysical community, multiphase flows are modelled using a finite volume approach and a multicorrector algorithm in time in order to determine implicitly the pressures, velocities and volume fractions for each phase. Pressures and velocities are generally determined at mid-half mesh step from each other following the staggered grid approach. This ensures stability and prevents oscillations in pressure. It allows treatment of almost all Reynolds number ranges for all speeds and viscosities. The disadvantages appear when we want to treat more complex geometries or if a generalized curvilinear formulation of the conservation equations is considered: too many interpolations have to be done and accuracy is then lost. In order to overcome these problems, we use here a similar algorithm in time and a Rhie and Chow (1983) interpolation of the collocated variables, essentially the velocities at the interface. The Rhie and Chow interpolation of the velocities at the finite volume interfaces prevents pressure oscillations and checkerboard effects and stabilizes the whole algorithm. In a first predictor step, fluxes at the interfaces of the finite volumes are computed using 2nd- and 3rd-order shock-capturing schemes of MUSCL/TVD or Van Leer type, and the orthogonal stress components are treated implicitly while cross viscous/diffusion terms are treated explicitly. A pentadiagonal system in 2D or a septadiagonal system in 3D would have to be solved, but here we have chosen to solve three tridiagonal linear systems (the so-called Alternate Direction Implicit algorithm), one in each spatial direction, to reduce the cost of computation. Then a multi-correction of the interpolated velocities, pressures and volume fractions of each phase is performed in the Cartesian frame or the deformed local curvilinear coordinate system until convergence and mass conservation are reached. At the end the energy conservation equations are solved. Throughout this process the momentum exchange forces and the interphase heat exchanges are treated implicitly to ensure stability. To further reduce the computational cost, a decomposition of the global domain into N subdomains is introduced and all the previous algorithms are applied within each block. At the interface between subdomains, an overlapping procedure is used. Another advantage is that different sets of equations can be solved in each block, such as fluid/structure interactions. We show here the hydrodynamics of a two-phase flow in a vertical conduit with a complex geometry, as in industrial fluid catalytic cracking plants. With an initial Richardson number of 0.16, slightly higher than the critical Richardson number of 0.1, particles and water vapor are injected at the bottom of the riser. Countercurrents appear near the walls and gravity effects begin to dominate, inducing an increase of particulate volume fractions near the walls. We show here the hydrodynamics for 13 s.

  16. Stability of a non-orthogonal stagnation flow to three dimensional disturbances

    NASA Technical Reports Server (NTRS)

    Lasseigne, D. G.; Jackson, T. L.

    1991-01-01

    A similarity solution for a low Mach number nonorthogonal flow impinging on a hot or cold plate is presented. For the constant density case, it is known that the stagnation point shifts in the direction of the incoming flow and that this shift increases as the angle of attack decreases. When the effects of density variations are included, a critical plate temperature exists; above this temperature the stagnation point shifts away from the incoming stream as the angle is decreased. This flow field is believed to have application to the reattachment zone of certain separated flows or to a lifting body at a high angle of attack. Finally, the stability of this nonorthogonal flow to self-similar, three-dimensional disturbances is examined. Stability properties of the flow are given as a function of the parameters of this study: the ratio of the plate temperature to that of the outer potential flow, and the angle of attack. In particular, it is shown that the angle of attack can be scaled out by a suitable definition of an equivalent wavenumber and temporal growth rate, and the stability problem for the nonorthogonal case is identical to the stability problem for the orthogonal case.

  17. Divergence preserving discrete surface integral methods for Maxwell's curl equations using non-orthogonal unstructured grids

    NASA Technical Reports Server (NTRS)

    Madsen, Niel K.

    1992-01-01

    Several new discrete surface integral (DSI) methods for solving Maxwell's equations in the time-domain are presented. These methods, which allow the use of general nonorthogonal mixed-polyhedral unstructured grids, are direct generalizations of the canonical staggered-grid finite difference method. These methods are conservative in that they locally preserve divergence or charge. Employing mixed polyhedral cells (hexahedral, tetrahedral, etc.), these methods allow more accurate modeling of non-rectangular structures and objects because the traditional stair-stepped boundary approximations associated with the orthogonal grid based finite difference methods can be avoided. Numerical results demonstrating the accuracy of these new methods are presented.

  18. Thinking large.

    PubMed

    Devries, Egbert

    2016-05-01

    Egbert Devries was brought up on a farm in the Netherlands and large animal medicine has always been his area of interest. After working in UK practice for 12 years he joined CVS and was soon appointed large animal director with responsibility for building a stronger large animal practice base. PMID:27154956

  19. A multireference perturbation method using non-orthogonal Hartree-Fock determinants for ground and excited states

    SciTech Connect

    Yost, Shane R.; Kowalczyk, Tim; Van Voorhis, Troy

    2013-11-07

    In this article we propose the ΔSCF(2) framework, a multireference strategy based on second-order perturbation theory, for ground and excited electronic states. Unlike the complete active space family of methods, ΔSCF(2) employs a set of self-consistent Hartree-Fock determinants, also known as ΔSCF states. Each ΔSCF electronic state is modified by a first-order correction from Møller-Plesset perturbation theory and used to construct a Hamiltonian in a configuration-interaction-like framework. We present formulas for the resulting matrix elements between nonorthogonal states that scale as N_occ^2 N_virt^3. Unlike most active space methods, ΔSCF(2) treats the ground and excited state determinants even-handedly. We apply ΔSCF(2) to the H2, hydrogen fluoride, and H4 systems and show that the method provides accurate descriptions of ground- and excited-state potential energy surfaces with no single active space containing more than 10 ΔSCF states.

  20. On the Relative Merits of Non-Orthogonal and Orthogonal Valence Bond Methods Illustrated on the Hydrogen Molecule

    ERIC Educational Resources Information Center

    Angeli, Celestino; Cimiraglia, Renzo; Malrieu, Jean-Paul

    2008-01-01

    Valence bond (VB) is one of the cornerstone theories of quantum chemistry. Even though in practical applications the molecular orbital (MO) approach has received more attention, some basic chemical concepts (such as the nature of the chemical bond and the failure of single determinant-based MO methods in describing the bond cleavage) are normally…

  1. A multireference perturbation method using non-orthogonal Hartree-Fock determinants for ground and excited states.

    PubMed

    Yost, Shane R; Kowalczyk, Tim; Van Voorhis, Troy

    2013-11-01

    In this article we propose the ΔSCF(2) framework, a multireference strategy based on second-order perturbation theory, for ground and excited electronic states. Unlike the complete active space family of methods, ΔSCF(2) employs a set of self-consistent Hartree-Fock determinants, also known as ΔSCF states. Each ΔSCF electronic state is modified by a first-order correction from Møller-Plesset perturbation theory and used to construct a Hamiltonian in a configuration-interaction-like framework. We present formulas for the resulting matrix elements between nonorthogonal states that scale as N_occ^2 N_virt^3. Unlike most active space methods, ΔSCF(2) treats the ground and excited state determinants even-handedly. We apply ΔSCF(2) to the H2, hydrogen fluoride, and H4 systems and show that the method provides accurate descriptions of ground- and excited-state potential energy surfaces with no single active space containing more than 10 ΔSCF states. PMID:24206284

  2. Large-scale B-spline R-matrix calculations of electron impact excitation and ionization processes in complex atoms

    NASA Astrophysics Data System (ADS)

    Zatsarinny, Oleg

    2013-09-01

    In recent years, the B-spline R-matrix (BSR) method has been applied to the treatment of a large number of atomic structure and electron-atom collision problems. Characteristic features of the BSR approach include the use of B-splines as a universal basis to describe the projectile electron inside the R-matrix box and the employment of term-dependent, and hence non-orthogonal, orbitals to construct the target states. The latter flexibility has proven to be of crucial importance for complex targets with several partially filled subshells. The published computer code has since been updated and extended to allow for a fully relativistic description at the level of the Dirac-Coulomb Hamiltonian. Also, the systematic inclusion of a large number of pseudo-states in the close-coupling expansion has made it possible to extend the range of applicability from elastic and inelastic low-energy near-threshold phenomena to intermediate energies (up to several times the ionization threshold) and, in particular, to describe ionization processes as well. The basic ideas of the BSR approach will be reviewed, and its application will be illustrated for a variety of targets. Particular emphasis will be placed on systems of relevance for applications in gaseous electronics, such as the generation of complete datasets for electron collisions with the heavy noble gases Ne-Xe. Many of our data, which are needed for the description of transport processes in plasmas, are available through the LXCat database. This work was performed in collaboration with Klaus Bartschat. It is supported by the National Science Foundation under Grant No. PHY-1212450 and the XSEDE Allocation PHY-090031.
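
    As a small illustration of the kind of radial basis the BSR approach relies on, the sketch below evaluates a clamped B-spline basis confined to an "R-matrix box" using SciPy; the spline order, knot spacing, and box radius are assumptions made for the example and are not the values used by the published code.

      # Minimal sketch of a B-spline radial basis inside an R-matrix box.
      # Knot sequence, order and box radius are illustrative only.
      import numpy as np
      from scipy.interpolate import BSpline

      order = 4                         # cubic B-splines (degree k = order - 1 = 3)
      r_box = 20.0                      # "R-matrix box" radius (arbitrary units)
      interior = np.linspace(0.0, r_box, 15)
      # clamped knot vector: each end knot repeated `order` times in total
      knots = np.concatenate(([0.0] * 3, interior, [r_box] * 3))
      n_basis = len(knots) - order      # number of B-spline basis functions

      r = np.linspace(0.0, r_box, 400)
      basis = np.array([
          BSpline.basis_element(knots[i:i + order + 1], extrapolate=False)(r)
          for i in range(n_basis)
      ])
      basis = np.nan_to_num(basis)      # zero outside each spline's support
      print("basis shape:", basis.shape)  # (n_basis, n_r) values on the radial grid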

  3. Large bowel resection - slideshow

    MedlinePlus

    ... this page: //medlineplus.gov/ency/presentations/100089.htm Large bowel resection - Series To use the sharing features ... 6 out of 6 Normal anatomy Overview The large bowel [large intestine or the colon] is part ...

  4. Instantons and Large N

    NASA Astrophysics Data System (ADS)

    Mariño, Marcos

    2015-09-01

    Preface; Part I. Instantons: 1. Instantons in quantum mechanics; 2. Unstable vacua in quantum field theory; 3. Large order behavior and Borel summability; 4. Non-perturbative aspects of Yang–Mills theories; 5. Instantons and fermions; Part II. Large N: 6. Sigma models at large N; 7. The 1/N expansion in QCD; 8. Matrix models and matrix quantum mechanics at large N; 9. Large N QCD in two dimensions; 10. Instantons at large N; Appendix A. Harmonic analysis on S3; Appendix B. Heat kernel and zeta functions; Appendix C. Effective action for large N sigma models; References; Author index; Subject index.

  5. Large intestine (colon) (image)

    MedlinePlus

    The large intestine is the portion of the digestive system most responsible for absorption of water from the indigestible ... the ileum (small intestine) passes material into the large intestine at the cecum. Material passes through the ...

  6. Large displacement spherical joint

    DOEpatents

    Bieg, Lothar F.; Benavides, Gilbert L.

    2002-01-01

    A new class of spherical joints has a very large accessible full cone angle, a property which is beneficial for a wide range of applications. Despite the large cone angles, these joints move freely without singularities.

  7. High-resolution combined global gravity field modelling: Solving large kite systems using distributed computational algorithms

    NASA Astrophysics Data System (ADS)

    Zingerle, Philipp; Fecher, Thomas; Pail, Roland; Gruber, Thomas

    2016-04-01

    One of the major obstacles in modern global gravity field modelling is the seamless combination of lower-degree inhomogeneous gravity field observations (e.g. data from satellite missions) with (very) high-degree homogeneous information (e.g. gridded and reduced gravity anomalies beyond d/o 1000). Current approaches mostly combine such data only at the coefficient level, meaning that a spherical harmonic analysis is first carried out independently for each observation class (resp. model), solving dense normal equations (NEQ) for the inhomogeneous model and block-diagonal NEQs for the homogeneous one. Such methods are obviously unable to identify or eliminate effects such as spectral leakage caused by the band limitation of the models and the non-orthogonality of the spherical harmonic base functions. To counteract these problems, a combination of both models at the NEQ level is desirable. In principle this can be achieved by NEQ stacking. Because of the higher maximum degree of the homogeneous model, a reordering of the coefficients is needed, which inevitably destroys the block-diagonal structure of the corresponding NEQ matrix and therefore also its simple sparsity. Hence, a special coefficient ordering is needed to create a new, favorable sparsity pattern that permits an efficient solution afterwards. Such a pattern is the so-called kite structure (Bosch, 1993), obtained when the kite ordering is applied to the stacked NEQ matrix. In a first step it is shown what is needed to attain the kite NEQ system, how to solve it efficiently, and how to compute the corresponding variance information from it. Further, because of the massive computational workload of operating on large kite systems (theoretically possible up to about max. d/o 100,000), the main emphasis is placed on the presentation of special distributed algorithms which may solve those systems in parallel on an arbitrary number of processes and are
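
    As a schematic of what combining the two observation classes at the NEQ level involves computationally, the sketch below stacks a small dense normal-equation block with a large block-diagonal one into a single sparse system and solves it. The sizes, values, and the trivial ordering are invented for illustration; in particular, this does not reproduce the kite ordering of Bosch (1993), only the dense-plus-block-diagonal structure that such an ordering has to accommodate.

      # Toy illustration of stacking a small dense NEQ block (inhomogeneous,
      # low-degree data) with a large block-diagonal NEQ (homogeneous,
      # high-degree data) and solving the combined sparse system.
      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      rng = np.random.default_rng(0)

      n_low = 50                       # coefficients seen by the satellite-type data
      n_high = 500                     # coefficients seen only by the gridded data

      # dense NEQ for the low-degree part (made symmetric positive definite)
      A = rng.standard_normal((n_low, n_low))
      N_dense = A @ A.T + n_low * np.eye(n_low)

      # block-diagonal NEQ for the high-degree part (e.g. order-wise blocks)
      blocks = [np.diag(rng.uniform(1.0, 2.0, 10)) for _ in range(n_high // 10)]
      N_block = sp.block_diag(blocks, format="csr")

      # in a real combination the two parts share the low-degree coefficients;
      # here they are simply placed on one diagonal to keep the toy short
      N = sp.block_diag([sp.csr_matrix(N_dense), N_block], format="csr")
      b = rng.standard_normal(n_low + n_high)

      x = spla.spsolve(N.tocsc(), b)   # direct sparse solve; CG would also work
      print("solution norm:", np.linalg.norm(x))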

  8. Large mode radius resonators

    NASA Technical Reports Server (NTRS)

    Harris, Michael R.

    1987-01-01

    Resonator configurations permitting operation with large mode radius while maintaining good transverse mode discrimination are considered. Stable resonators incorporating an intracavity telescope and unstable resonator geometries utilizing an output coupler with a Gaussian reflectivity profile are shown to enable large radius single mode laser operation. Results of heterodyne studies of pulsed CO2 lasers with large (11 mm e^-2 radius) fundamental mode sizes are presented demonstrating minimal frequency sweeping in accordance with the theory of laser-induced medium perturbations.

  9. Large Print Bibliography, 1990.

    ERIC Educational Resources Information Center

    South Dakota State Library, Pierre.

    This bibliography lists materials that are available in large print format from the South Dakota State Library. The annotated entries are printed in large print and include the title of the material and its author, call number, publication date, and type of story or subject area covered. Some recorded items are included in the list. The entries…

  10. Large wind turbine generators

    NASA Technical Reports Server (NTRS)

    Thomas, R. L.; Donovon, R. M.

    1978-01-01

    The development associated with large wind turbine systems is briefly described. The scope of this activity includes the development of several large wind turbines ranging in size from 100 kW to several megawatt levels. A description of the wind turbine systems, their programmatic status, and a summary of their potential costs are included.

  11. LARGE BUILDING RADON MANUAL

    EPA Science Inventory

    The report summarizes information on how building systems -- especially the heating, ventilating, and air-conditioning (HVAC) system -- influence radon entry into large buildings and can be used to mitigate radon problems. It addresses the fundamentals of large building HVAC syst...

  12. Large Scale Computing

    NASA Astrophysics Data System (ADS)

    Capiluppi, Paolo

    2005-04-01

    Large Scale Computing is acquiring an important role in the field of data analysis and treatment for many Sciences and also for some Social activities. The present paper discusses the characteristics of Computing when it becomes "Large Scale" and the current state of the art for some particular applications needing such large distributed resources and organization. High Energy Particle Physics (HEP) Experiments are discussed in this respect; in particular the Large Hadron Collider (LHC) Experiments are analyzed. The Computing Models of LHC Experiments represent the current prototype implementation of Large Scale Computing and describe the level of maturity of the possible deployment solutions. Some of the most recent results on the measurements of the performances and functionalities of the LHC Experiments' testing are discussed.

  13. Large bowel resection - discharge

    MedlinePlus

    ... large bowel). You may also have had a colostomy . ... have diarrhea. You may have problems with your colostomy. ... protect it if needed. If you have a colostomy, follow care instructions from your provider. Sitting on ...

  14. Large Customers (DR Sellers)

    SciTech Connect

    Kiliccot, Sila

    2011-10-25

    State of the large customers for demand response integration of solar and wind into electric grid; openADR; CAISO; DR as a pseudo generation; commercial and industrial DR strategies; California regulations

  15. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples discussed were in the technical fields of aeronautics, water resources and electric power.

  16. Closed Large Cell Clouds

    Atmospheric Science Data Center

    2013-04-19

    article title:  Closed Large Cell Clouds in the South Pacific     ... unperturbed by cyclonic or frontal activity. When the cell centers are cloudy and the main sinking motion is concentrated at cell ...

  17. Large pore alumina

    SciTech Connect

    Ternan, M.

    1994-04-01

    Earlier the authors reported preparation conditions for an alumina material which contained large diameter macropores (0.1-100 µm). The preparation variable that caused the formation of the uncommonly large macropores was the large acid/alumina ratios which were very much greater than the ones used in the preparation of conventional porous aluminas. The alumina material had large BET surface areas (200 m²/g) and small mercury porosimetry surface areas (1 m²/g). This indicated that micropores (d_MIP < 2 nm) were present in the alumina, since they were large enough for nitrogen gas molecules to enter, but too small for mercury to enter. As a result they would be too small for significant diffusion rates of residuum molecules. In earlier work, the calcining temperature was fixed at 500 °C. In the current work, variations in both calcining temperature and calcining time were used in an attempt to convert some of the micropores into mesopores. 12 refs., 2 figs., 1 tab.

  18. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  19. Large TV display system

    NASA Technical Reports Server (NTRS)

    Liu, Hua-Kuang (Inventor)

    1986-01-01

    A relatively small and low cost system is provided for projecting a large and bright television image onto a screen. A miniature liquid crystal array is driven by video circuitry to produce a pattern of transparencies in the array corresponding to a television image. Light is directed against the rear surface of the array to illuminate it, while a projection lens lies in front of the array to project the image of the array onto a large screen. Grid lines in the liquid crystal array are eliminated by a spatial filter which comprises a negative of the Fourier transform of the grid.

  20. Large gauged Q balls

    NASA Astrophysics Data System (ADS)

    Anagnostopoulos, K. N.; Axenides, M.; Floratos, E. G.; Tetradis, N.

    2001-12-01

    We study Q balls associated with local U(1) symmetries. Such Q balls are expected to become unstable for large values of their charge because of the repulsion mediated by the gauge force. We consider the possibility that the repulsion is eliminated through the presence in the interior of the Q ball of fermions with charge opposite to that of the scalar condensate. Another possibility is that two scalar condensates of opposite charge form in the interior. We demonstrate that both these scenarios can lead to the existence of classically stable, large, gauged Q balls. We present numerical solutions, as well as an analytical treatment of the ``thin-wall'' limit.

  1. Teaching Very Large Classes

    ERIC Educational Resources Information Center

    DeRogatis, Amy; Honerkamp, Kenneth; McDaniel, Justin; Medine, Carolyn; Nyitray, Vivian-Lee; Pearson, Thomas

    2014-01-01

    The editor of "Teaching Theology and Religion" facilitated this reflective conversation with five teachers who have extensive experience and success teaching extremely large classes (150 students or more). In the course of the conversation these professors exchange and analyze the effectiveness of several active learning strategies they…

  2. Large N Cosmology

    NASA Astrophysics Data System (ADS)

    Hawking, S. W.

    2001-09-01

    The large N approximation should hold in cosmology even at the origin of the universe. I use AdS/CFT to calculate the effective action and obtain a cosmological model in which inflation is driven by the trace anomaly. Despite having ghosts, this model can agree with observations.

  3. Developing Large CAI Packages.

    ERIC Educational Resources Information Center

    Reed, Mary Jac M.; Smith, Lynn H.

    1983-01-01

    When developing large computer-assisted instructional (CAI) courseware packages, it is suggested that there be more attentive planning to the overall package design before actual lesson development is begun. This process has been simplified by modifying the systems approach used to develop single CAI lessons, followed by planning for the…

  4. Risks of Large Portfolios

    PubMed Central

    Fan, Jianqing; Liao, Yuan; Shi, Xiaofeng

    2014-01-01

    The risk of a large portfolio is often estimated by substituting a good estimator of the volatility matrix. However, the accuracy of such a risk estimator is largely unknown. We study factor-based risk estimators under a large amount of assets, and introduce a high-confidence level upper bound (H-CLUB) to assess the estimation. The H-CLUB is constructed using the confidence interval of risk estimators with either known or unknown factors. We derive the limiting distribution of the estimated risks in high dimensionality. We find that when the dimension is large, the factor-based risk estimators have the same asymptotic variance no matter whether the factors are known or not, which is slightly smaller than that of the sample covariance-based estimator. Numerically, H-CLUB outperforms the traditional crude bounds, and provides an insightful risk assessment. In addition, our simulated results quantify the relative error in the risk estimation, which is usually negligible using 3-month daily data. PMID:26195851
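
    As a companion to the factor-based estimators studied here, the sketch below fits a factor model to synthetic returns and forms the plug-in estimate of the portfolio variance w'Σw. It illustrates only the risk estimator itself, not the H-CLUB confidence bound; all dimensions and distributions are invented.

      # Sketch of a factor-based plug-in estimate of portfolio risk w' Sigma w.
      # Synthetic data; this is not the H-CLUB construction from the paper.
      import numpy as np

      rng = np.random.default_rng(1)
      T, p, K = 250, 100, 3                 # days, assets, known factors

      F = rng.standard_normal((T, K))       # factor returns
      B = rng.standard_normal((p, K))       # true factor loadings
      noise = 0.5 * rng.standard_normal((T, p))
      R = F @ B.T + noise                   # asset returns

      # regress returns on factors to estimate loadings and idiosyncratic variances
      B_hat, *_ = np.linalg.lstsq(F, R, rcond=None)      # shape (K, p)
      resid = R - F @ B_hat
      Sigma_f = np.cov(F, rowvar=False)
      Sigma_u = np.diag(resid.var(axis=0, ddof=K))       # diagonal idiosyncratic part
      Sigma_hat = B_hat.T @ Sigma_f @ B_hat + Sigma_u

      w = np.full(p, 1.0 / p)               # equally weighted portfolio
      risk = float(w @ Sigma_hat @ w)
      print("estimated portfolio variance:", risk)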

  5. Teaching Large Evening Classes

    ERIC Educational Resources Information Center

    Wambuguh, Oscar

    2008-01-01

    High enrollments, conflicting student work schedules, and the sheer convenience of once-a-week classes are pushing many colleges to schedule evening courses. Held from 6 to 9 pm or 7 to 10 pm, these classes are typically packed, sometimes with more than 150 students in a large lecture theater. How can faculty effectively teach, control, or even…

  6. Death Writ Large

    ERIC Educational Resources Information Center

    Kastenbaum, Robert

    2004-01-01

    Mainstream thanatology has devoted its efforts to improving the understanding, care, and social integration of people who are confronted with life-threatening illness or bereavement. This article suggests that it might now be time to expand the scope and mission to include large-scale death and death that occurs through complex and multi-domain…

  7. LARGE BUILDING HVAC SIMULATION

    EPA Science Inventory

    The report discusses the monitoring and collection of data relating to indoor pressures and radon concentrations under several test conditions in a large school building in Bartow, Florida. The Florida Solar Energy Center (FSEC) used an integrated computational software, FSEC 3.0...

  8. Estimating Large Numbers

    ERIC Educational Resources Information Center

    Landy, David; Silbert, Noah; Goldin, Aleah

    2013-01-01

    Despite their importance in public discourse, numbers in the range of 1 million to 1 trillion are notoriously difficult to understand. We examine magnitude estimation by adult Americans when placing large numbers on a number line and when qualitatively evaluating descriptions of imaginary geopolitical scenarios. Prior theoretical conceptions…

  9. Launching large antennas

    NASA Astrophysics Data System (ADS)

    Brandli, H. W.

    1983-09-01

    Large antennas will provide communication to rural and remote areas in times of need. This is seen as facilitating the work of law enforcement agencies. All mobile radio communications will enjoy advantages in distances covered and information relayed owing to the large number of beams possible from super radio transmitters in space. If the antennas are placed in low-earth orbit, advantages will be realized in the remote sensing of the earth's resources. It is pointed out that with umbrella or bicyclelike antennas turned outward toward space, the universe could be scouted for signals from intelligent life. Various concepts that have been put forward by U.S. companies are described. These include the radial rib, wrap rib, and parabolic erectable truss designs. Others are the mesh hoop column collapsable umbrella made of gold and molybdenum and the maypole design.

  10. Coherent large telescopes

    NASA Astrophysics Data System (ADS)

    Nelson, J. E.

    Present ground-based telescopes are compared with those of the future. The inherent limitations of ground-based telescopes are reviewed, and existing telescopes and their evolution are briefly surveyed in order to see the trends that led to the present period of innovative telescope design. The major telescope types and the critical design factors that must be considered in designing large telescopes for the future are reviewed, with an emphasis on economy. As an example, the Ten Meter Telescope project at the University of California is discussed in detail, including the telescope buildings, domes, and apertures, the telescope moving weights, the image quality, and the equipment. Finally, a brief review of current work in progress on large telescopes is given.

  11. Hulls for Large Seaplanes

    NASA Technical Reports Server (NTRS)

    Magaldi, Giulio

    1925-01-01

    In reality, the principle of similitude is not applicable to the hulls, the designing of which increases in difficulty with increasing size of the seaplanes. In order to formulate, at least in a general way, the basic principles of calculation, we must first summarize the essential characteristics of a hull with reference to its gradual enlargement. In this study, we will disregard hulls with wing stubs, as being inapplicable to large seaplanes.

  12. The Large Area Telescope

    SciTech Connect

    Michelson, Peter F. (KIPAC, Menlo Park / Stanford U., HEPL)

    2007-11-13

    The Large Area Telescope (LAT), one of two instruments on the Gamma-ray Large Area Space Telescope (GLAST) mission, is an imaging, wide field-of-view, high-energy pair-conversion telescope, covering the energy range from ~20 MeV to more than 300 GeV. The LAT is being built by an international collaboration with contributions from space agencies, high-energy particle physics institutes, and universities in France, Italy, Japan, Sweden, and the United States. The scientific objectives the LAT will address include resolving the high-energy gamma-ray sky and determining the nature of the unidentified gamma-ray sources and the origin of the apparently isotropic diffuse emission observed by EGRET; understanding the mechanisms of particle acceleration in celestial sources, including active galactic nuclei, pulsars, and supernova remnants; studying the high-energy behavior of gamma-ray bursts and transients; using high-energy gamma-rays to probe the early universe to z ≥ 6; and probing the nature of dark matter. The components of the LAT include a precision silicon-strip detector tracker and a CsI(Tl) calorimeter, a segmented anticoincidence shield that covers the tracker array, and a programmable trigger and data acquisition system. The calorimeter's depth and segmentation enable the high-energy reach of the LAT and contribute significantly to background rejection. The aspect ratio of the tracker (height/width) is 0.4, allowing a large field-of-view and ensuring that nearly all pair-conversion showers initiated in the tracker will pass into the calorimeter for energy measurement. This paper includes a description of each of these LAT subsystems as well as a summary of the overall performance of the telescope.

  13. Large area LED package

    NASA Astrophysics Data System (ADS)

    Goullon, L.; Jordan, R.; Braun, T.; Bauer, J.; Becker, F.; Hutter, M.; Schneider-Ramelow, M.; Lang, K.-D.

    2015-03-01

    Solid state lighting using LED-dies is a rapidly growing market. LED-dies with the required ever-increasing luminous flux per chip area produce a lot of heat, so appropriate thermal management is required for general lighting with LED-dies. One way to avoid overheating and shortened lifetime is the use of many small LED-dies (down to 70 μm edge length) on a large-area heat sink, so that heat can spread over a large area while the light also appears over a larger area. Handling such small LED-dies is very difficult because they are too small to be picked with common equipment. Therefore a new concept called collective transfer bonding, using a temporary carrier chip, was developed. A further benefit of this new technology is the high-precision and plane-parallel assembly of the LED-dies, which is necessary for wire bonding. It has been shown that one hundred functional LED-dies can be transferred and soldered at the same time. After the assembly, a cost-effective, established PCB technology was applied to produce a large-area light source consisting of many small LED-dies electrically connected on a PCB substrate. The top contacts of the LED-dies were realized by laminating an adhesive copper sheet followed by LDI structuring, as known from PCB-via technology. This assembly can be completed by adding converting and light-forming optical elements. In summary, two technologies based on standard SMD and PCB technology have been developed for panel-level LED packaging up to 610 x 457 mm² area size.

  14. Gyrokinetic large eddy simulations

    SciTech Connect

    Morel, P.; Navarro, A. Banon; Albrecht-Marc, M.; Carati, D.; Merz, F.; Goerler, T.; Jenko, F.

    2011-07-15

    The large eddy simulation approach is adapted to the study of plasma microturbulence in a fully three-dimensional gyrokinetic system. Ion temperature gradient driven turbulence is studied with the GENE code for both a standard resolution and a reduced resolution with a model for the sub-grid scale turbulence. A simple dissipative model for representing the effect of the sub-grid scales on the resolved scales is proposed and tested. Once calibrated, the model appears to be able to reproduce most of the features of the free energy spectra for various values of the ion temperature gradient.

  15. Large space structures testing

    NASA Technical Reports Server (NTRS)

    Waites, Henry; Worley, H. Eugene

    1987-01-01

    There is considerable interest in the development of testing concepts and facilities that accurately simulate the pathologies believed to exist in future spacecraft. Both the Government and Industry have participated in the development of facilities over the past several years. The progress and problems associated with the development of the Large Space Structure Test Facility at the Marshall Space Flight Center are presented. This facility has been in existence for a number of years, and its utilization has run the gamut from total in-house involvement and third-party contractor testing to the mutual participation of other Government Agencies in joint endeavors.

  16. Large, Bright Wind Ripples

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-397, 20 June 2003

    This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows large, relatively bright ripples of windblown sediment in the Sinus Sabaeus region south of Schiaparelli Basin. The surrounding substrate is thickly mantled by very dark material, possibly windblown silt that settled out of the atmosphere. The picture is located near 7.1°S, 343.7°W. Sunlight illuminates the scene from the left.

  17. Large Windblown Ripples

    NASA Technical Reports Server (NTRS)

    2003-01-01

    MGS MOC Release No. MOC2-519, 20 October 2003

    This April 2003 Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) high resolution image shows a depression in the martian southern cratered highlands near 1.3°S, 244.3°W. The floor of the depression and some nearby craters are covered by large windblown ripples or small sand dunes. This image of ancient martian terrain covers an area 3 km (1.9 mi) across and is illuminated by sunlight from the upper left.

  18. Large Spectral Library Problem

    SciTech Connect

    Chilton, Lawrence K.; Walsh, Stephen J.

    2008-10-03

    Hyperspectral imaging produces a spectrum or vector at each image pixel. These spectra can be used to identify materials present in the image. In some cases, spectral libraries representing atmospheric chemicals or ground materials are available. The challenge is to determine if any of the library chemicals or materials exist in the hyperspectral image. The number of spectra in these libraries can be very large, far exceeding the number of spectral channels collected in the field. Suppose an image pixel contains a mixture of p spectra from the library. Is it possible to uniquely identify these p spectra? We address this question in this paper and refer to it as the Large Spectral Library (LSL) problem. We show how to determine if unique identification is possible for any given library. We also show that if p is small compared to the number of spectral channels, it is very likely that unique identification is possible. We show that unique identification becomes less likely as p increases.
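
    A standard way to phrase the uniqueness question (not necessarily the paper's own criterion) is through linear independence: every mixture of at most p library spectra is uniquely identifiable exactly when every set of 2p library columns is linearly independent. Checking all subsets is combinatorial, so the sketch below only spot-checks random 2p-column subsets of a synthetic library.

      # Spot-check of unique identifiability of p-term mixtures from a large
      # spectral library: unique recovery of any p-sparse mixture requires every
      # set of 2p library columns to be linearly independent.  Synthetic library.
      import numpy as np

      rng = np.random.default_rng(2)
      channels, n_lib, p = 50, 400, 3          # spectral channels << library size

      library = rng.random((channels, n_lib))  # columns are library spectra

      def subset_ok(cols):
          """True if the chosen 2p columns are linearly independent."""
          sub = library[:, cols]
          return np.linalg.matrix_rank(sub) == sub.shape[1]

      # sample a few thousand random 2p-subsets instead of all C(n_lib, 2p)
      trials = 2000
      ok = all(subset_ok(rng.choice(n_lib, size=2 * p, replace=False))
               for _ in range(trials))
      print("all sampled 2p-column subsets independent:", ok)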

  19. Infinitely Large New Dimensions

    SciTech Connect

    Arkani-Hamed, Nima; Dimopoulos, Savas; Dvali, Gia; Kaloper, Nemanja

    1999-07-29

    We construct intersecting brane configurations in Anti-de-Sitter space localizing gravity to the intersection region, with any number n of extra dimensions. This allows us to construct two kinds of theories with infinitely large new dimensions, TeV scale quantum gravity and sub-millimeter deviations from Newton's Law. The effective 4D Planck scale M_Pl is determined in terms of the fundamental Planck scale M_* and the AdS radius of curvature L via the familiar relation M_Pl^2 ~ M_*^(2+n) L^n; L acts as an effective radius of compactification for gravity on the intersection. Taking M_* ~ TeV and L ~ sub-mm reproduces the phenomenology of theories with large extra dimensions. Alternately, taking M_* ~ L^-1 ~ M_Pl, and placing our 3-brane a distance ~100 M_Pl^-1 away from the intersection gives us a theory with an exponential determination of the Weak/Planck hierarchy.

  20. Large Particle Titanate Sorbents

    SciTech Connect

    Taylor-Pashow, K.

    2015-10-08

    This research project was aimed at developing a synthesis technique for producing large particle size monosodium titanate (MST) to benefit high level waste (HLW) processing at the Savannah River Site (SRS). Two applications were targeted, first increasing the size of the powdered MST used in batch contact processing to improve the filtration performance of the material, and second preparing a form of MST suitable for deployment in a column configuration. Increasing the particle size should lead to improvements in filtration flux, and decreased frequency of filter cleaning leading to improved throughput. Deployment of MST in a column configuration would allow for movement from a batch process to a more continuous process. Modifications to the typical MST synthesis led to an increase in the average particle size. Filtration testing on dead-end filters showed improved filtration rates with the larger particle material; however, no improvement in filtration rate was realized on a crossflow filter. In order to produce materials suitable for column deployment several approaches were examined. First, attempts were made to coat zirconium oxide microspheres (196 µm) with a layer of MST. This proved largely unsuccessful. An alternate approach was then taken synthesizing a porous monolith of MST which could be used as a column. Several parameters were tested, and conditions were found that were able to produce a continuous structure versus an agglomeration of particles. This monolith material showed Sr uptake comparable to that of previously evaluated samples of engineered MST in batch contact testing.

  1. Large furlable antenna study

    NASA Technical Reports Server (NTRS)

    Campbell, G. K. C.

    1975-01-01

    The parametric study of the performance of large furlable antennas is described and the availability of various size antennas is discussed. Three types of unfurlable reflector designs are considered: the wrapped rib, the polyconic, and the maypole. On the basis of these approaches, a space shuttle launch capability, and state-of-the-art materials, it is possible to design unfurlable reflectors as large as 130 feet (40 meters) in diameter to operate at 10 GHz and 600 feet (183 meters) in diameter at 0.5 GHz. These figures can be increased if very low thermal coefficient of expansion materials can be developed over the next 2-5 years. It is recommended that a special effort be made to develop light weight materials that would provide nearly zero thermal coefficient of expansion and good thermal conductivity within the next 10 years. A conservative prediction of the kinds of unfurlable spacecraft antennas that will be available by 1985 with orbital performance predicted on the basis of test data and with developed manufacturing processes is summarized.

  2. Synchronizing large systolic arrays

    SciTech Connect

    Fisher, A.L.; Kung, H.T.

    1982-04-01

    Parallel computing structures consist of many processors operating simultaneously. If a concurrent structure is regular, as in the case of a systolic array, it may be convenient to think of all processors as operating in lock step. Totally synchronized systems controlled by central clocks are difficult to implement because of the inevitable problem of clock skews and delays. An alternate means of enforcing the necessary synchronization is the use of self-timed, asynchronous schemes, at the cost of increased design complexity and hardware cost. Realizing that different circumstances call for different synchronization methods, this paper provides a spectrum of synchronization models; based on the assumptions made for each model, theoretical lower bounds on clock skew are derived, and appropriate or best-possible synchronization schemes for systolic arrays are proposed. This paper represents a first step towards a systematic study of synchronization problems for large systolic arrays.

  3. Large area plasma source

    NASA Technical Reports Server (NTRS)

    Foster, John (Inventor); Patterson, Michael (Inventor)

    2008-01-01

    An all permanent magnet Electron Cyclotron Resonance, large diameter (e.g., 40 cm) plasma source suitable for ion/plasma processing or electric propulsion, is capable of producing uniform ion current densities at its exit plane at very low power (e.g., below 200 W), and is electrodeless to avoid sputtering or contamination issues. Microwave input power is efficiently coupled with an ionizing gas without using a dielectric microwave window and without developing a throat plasma by providing a ferromagnetic cylindrical chamber wall with a conical end narrowing to an axial entrance hole for microwaves supplied on-axis from an open-ended waveguide. Permanent magnet rings are attached inside the wall with alternating polarities against the wall. An entrance magnet ring surrounding the entrance hole has a ferromagnetic pole piece that extends into the chamber from the entrance hole to a continuing second face that extends radially across an inner pole of the entrance magnet ring.

  4. Contrasting Large Solar Events

    NASA Astrophysics Data System (ADS)

    Lanzerotti, Louis J.

    2010-10-01

    After an unusually long solar minimum, solar cycle 24 is slowly beginning. A large coronal mass ejection (CME) from sunspot 1092 occurred on 1 August 2010, with effects reaching Earth on 3 August and 4 August, nearly 38 years to the day after the huge solar event of 4 August 1972. The prior event, which those of us engaged in space research at the time remember well, recorded some of the highest intensities of solar particles and rapid changes of the geomagnetic field measured to date. What can we learn from the comparisons of these two events, other than their essentially coincident dates? One lesson I took away from reading press coverage and Web reports of the August 2010 event is that the scientific community and the press are much more aware than they were nearly 4 decades ago that solar events can wreak havoc on space-based technologies.

  5. Large Binocular Telescope Project

    NASA Astrophysics Data System (ADS)

    Hill, John M.; Salinari, Piero

    1998-08-01

    The Large Binocular Telescope (LBT) Project is a collaboration between institutions in Arizona, Germany, Italy, and Ohio. With the addition of the partners from Ohio State and Germany in February 1997, the Large Binocular Telescope Corporation has the funding required to build the full telescope populated with both 8.4 meter optical trains. The first of two 8.4 meter borosilicate honeycomb primary mirrors for LBT was cast at the Steward Observatory Mirror Lab in 1997. The baseline optical configuration of LBT includes adaptive infrared secondaries of a Gregorian design. The F/15 secondaries are undersized to provide a low thermal background focal plane. The interferometric focus combining the light from the two 8.4 meter primaries will reimage the two folded Gregorian focal planes to three central locations. The telescope elevation structure accommodates swing arms which allow rapid interchange of the various secondary and tertiary mirrors. Maximum stiffness and minimal thermal disturbance were important drivers for the design of the telescope in order to provide the best possible images for interferometric observations. The telescope structure accommodates installation of a vacuum bell jar for aluminizing the primary mirrors in-situ on the telescope. The detailed design of the telescope structure was completed in 1997 by ADS Italia (Lecco) and European Industrial Engineering (Mestre). A series of contracts for the fabrication and machining of the telescope structure had been placed at the end of 1997. The final enclosure design was completed at M3 Engineering & Technology (Tucson), EIE and ADS Italia. During 1997, the telescope pier and the concrete ring wall for the rotating enclosure were completed along with the steel structure of the fixed portion of the enclosure. The erection of the steel structure for the rotating portion of the enclosure will begin in the Spring of 1998.

  6. The large binocular telescope.

    PubMed

    Hill, John M

    2010-06-01

    The Large Binocular Telescope (LBT) Observatory is a collaboration among institutions in Arizona, Germany, Italy, Indiana, Minnesota, Ohio, and Virginia. The telescope on Mount Graham in Southeastern Arizona uses two 8.4 m diameter primary mirrors mounted side by side. A unique feature of the LBT is that the light from the two Gregorian telescope sides can be combined to produce phased-array imaging of an extended field. This cophased imaging along with adaptive optics gives the telescope the diffraction-limited resolution of a 22.65 m aperture and a collecting area equivalent to an 11.8 m circular aperture. This paper describes the design, construction, and commissioning of this unique telescope. We report some sample astronomical results with the prime focus cameras. We comment on some of the technical challenges and solutions. The telescope uses two F/15 adaptive secondaries to correct atmospheric turbulence. The first of these adaptive mirrors has completed final system testing in Firenze, Italy, and is planned to be at the telescope by Spring 2010. PMID:20517352

  7. Large area Czochralski silicon

    NASA Technical Reports Server (NTRS)

    Rea, S. N.; Gleim, P. S.

    1977-01-01

    The overall cost effectiveness of the Czochralski process for producing large-area silicon was determined. The feasibility of growing several 12 cm diameter crystals sequentially at 12 cm/h during a furnace run and the subsequent slicing of the ingot using a multiblade slurry saw were investigated. The goal of the wafering process was a slice thickness of 0.25 mm with minimal kerf. A slice + kerf of 0.56 mm was achieved on 12 cm crystal using both 400 grit B4C and SiC abrasive slurries. Crystal growth experiments were performed at 12 cm diameter in a commercially available puller with both 10 and 12 kg melts. Several modifications to the puller hot zone were required to achieve stable crystal growth over the entire crystal length and to prevent crystallinity loss a few centimeters down the crystal. The maximum practical growth rate for 12 cm crystal in this puller design was 10 cm/h, with 12 to 14 cm/h being the absolute maximum range at which melt freeze occurred.

  8. Large forging manufacturing process

    DOEpatents

    Thamboo, Samuel V.; Yang, Ling

    2002-01-01

    A process for forging large components of Alloy 718 material so that the components do not exhibit abnormal grain growth includes the steps of: a) providing a billet with an average grain size between ASTM 0 and ASTM 3; b) heating the billet to a temperature of between 1750 °F and 1800 °F; c) upsetting the billet to obtain a component part with a minimum strain of 0.125 in at least selected areas of the part; d) reheating the component part to a temperature between 1750 °F and 1800 °F; e) upsetting the component part to a final configuration such that said selected areas receive no strains between 0.01 and 0.125; f) solution treating the component part at a temperature of between 1725 °F and 1750 °F; and g) aging the component part over predetermined times at different temperatures. A modified process achieves abnormal grain growth in selected areas of a component where desirable.

  9. Large scale tracking algorithms.

    SciTech Connect

    Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
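
    The core frame-to-frame step in many of these trackers is data association. The sketch below shows a global-nearest-neighbor assignment of tracks to detections with a distance gate, using the Hungarian algorithm; the synthetic positions, gate value, and large-cost sentinel are illustrative choices, not parameters of the algorithms evaluated in the report.

      # Minimal frame-to-frame data-association step: match existing tracks to
      # new detections with a gated global-nearest-neighbor (Hungarian) assignment.
      import numpy as np
      from scipy.optimize import linear_sum_assignment

      rng = np.random.default_rng(3)
      tracks = rng.uniform(0, 100, size=(5, 2))               # last known positions
      detections = tracks + rng.normal(0, 1.0, size=(5, 2))   # noisy new detections
      rng.shuffle(detections)                                  # unknown correspondence

      # cost matrix of pairwise distances, gated at a maximum association distance
      cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=-1)
      gate = 10.0
      cost[cost > gate] = 1e6                                  # forbid far matches

      row, col = linear_sum_assignment(cost)                   # optimal 1-to-1 matching
      for t, d in zip(row, col):
          if cost[t, d] < 1e6:
              print(f"track {t} -> detection {d} (distance {cost[t, d]:.2f})")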

  10. Stability of large systems

    NASA Astrophysics Data System (ADS)

    Hastings, Harold

    2007-03-01

    We address a long-standing dilemma concerning stability of large systems. MacArthur (1955) and Hutchinson (1959) argued that more ``complex'' natural systems tended to be more stable than less complex systems based upon energy flow. May (1972) argued the opposite, using random matrix models; see Cohen and Newman (1984, 1985), Bai and Yin (1986). We show that in some sense both are right: under reasonable scaling assumptions on interaction strength, Lyapunov stability increases but structural stability decreases as complexity is increased (c.f. Harrison, 1979; Hastings, 1984). We apply this result to a variety of network systems. References: Bai, Z.D. & Yin, Y.Q. 1986. Probab. Th. Rel. Fields 73, 555. Cohen, J.E., & Newman, C.M. 1984. Annals Probab. 12, 283; 1985. Theoret. Biol. 113, 153. Harrison, G.W. 1979. Amer. Natur. 113, 659. Hastings, H.M. 1984. BioSystems 17, 171. Hastings, H.M., Juhasz, F., & Schreiber, M. 1992. .Proc. Royal Soc., Ser. B. 249, 223. Hutchinson, G.E. 1959. Amer. Natur. 93, 145, MacArthur, R. H. 1955. Ecology 35, 533, May, R.M. 1972. Nature 238, 413.
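
    The random-matrix side of this dilemma can be checked numerically in the spirit of May (1972): draw community matrices with N species, connectance C, and interaction strength sigma, and observe that Lyapunov stability (all eigenvalue real parts negative) is typical when sigma*sqrt(NC) < 1 and is lost beyond that. The sketch below does this; the unit self-regulation on the diagonal and the parameter values are illustrative assumptions.

      # Numerical illustration of May's random-matrix stability threshold:
      # random community matrices tend to be stable when sigma*sqrt(N*C) < 1.
      import numpy as np

      rng = np.random.default_rng(4)

      def stable_fraction(N, C, sigma, trials=200):
          stable = 0
          for _ in range(trials):
              A = sigma * rng.standard_normal((N, N)) * (rng.random((N, N)) < C)
              np.fill_diagonal(A, -1.0)          # self-regulation on the diagonal
              if np.max(np.linalg.eigvals(A).real) < 0:
                  stable += 1
          return stable / trials

      for sigma in (0.2, 0.3, 0.4):
          print(f"N=100, C=0.1, sigma={sigma}: "
                f"sigma*sqrt(NC)={sigma*np.sqrt(100*0.1):.2f}, "
                f"stable fraction={stable_fraction(100, 0.1, sigma):.2f}")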

  11. The Large Millimeter Telescope

    NASA Astrophysics Data System (ADS)

    Hughes, D. H.; Schloerb, F. P.; LMT Project Team

    2009-05-01

    This paper, presented on behalf of the Large Millimeter Telescope (LMT) project team, describes the status and near-term plans for the telescope and its initial instrumentation. The LMT is a bi-national collaboration between México and the USA, led by the Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE) and the University of Massachusetts at Amherst, to construct, commission and operate a 50 m diameter millimeter-wave radio telescope. Construction activities are nearly complete at the LMT site, at an altitude of ˜ 4600 m on the summit of Sierra Negra, an extinct volcano in the Mexican state of Puebla. Full movement of the telescope, under computer control in both azimuth and elevation, has been achieved. First-light at centimeter wavelengths on astronomical sources was obtained in November 2006. Installation of precision surface segments for millimeter-wave operation is underway, with the inner 32 m diameter of the surface now complete and ready to be used to obtain first-light at millimeter wavelengths in 2008. Installation of the remainder of the reflector will continue during the next year and be completed in 2009 for final commissioning of the antenna. The full LMT antenna, outfitted with its initial complement of scientific instruments, will be a world-leading scientific research facility for millimeter-wave astronomy.

  12. The Large Millimeter Telescope

    NASA Astrophysics Data System (ADS)

    Schloerb, F. Peter

    2008-07-01

    This paper, presented on behalf of the Large Millimeter Telescope (LMT) project team, describes the status and near-term plans for the telescope and its initial instrumentation. The LMT is a bi-national collaboration between Mexico and the USA, led by the Instituto Nacional de Astrofísica, Optica y Electronica (INAOE) and the University of Massachusetts at Amherst, to construct, commission and operate a 50m-diameter millimeter-wave radio telescope. Construction activities are nearly complete at the 4600m LMT site on the summit of Sierra Negra, an extinct volcano in the Mexican state of Puebla. Full movement of the telescope, under computer control in both azimuth and elevation, has been achieved. First-light at centimeter wavelengths on astronomical sources was obtained in November 2006. Installation of precision surface segments for millimeter-wave operation is underway, with the inner 32m-diameter of the surface now complete and ready to be used to obtain first light at millimeter wavelengths in 2008. Installation of the remainder of the reflector will continue during the next year and be completed in 2009 for final commissioning of the antenna. The full LMT antenna, outfitted with its initial complement of scientific instruments, will be a world-leading scientific research facility for millimeter-wave astronomy.

  13. Large scale traffic simulations

    SciTech Connect

    Nagel, K.; Barrett, C.L.; Rickert, M.

    1997-04-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computational speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between the microsimulation and the simulated planning of individual person's behavior is necessary). As a rough number, a real-time simulation of an area such as Los Angeles (ca. 1 million travellers) will need a computational speed of much higher than 1 million "particle" (= vehicle) updates per second. This paper reviews how this problem is approached in different projects and how these approaches are dependent both on the specific questions and on the prospective user community. The approaches reach from highly parallel and vectorizable, single-bit implementations on parallel supercomputers for Statistical Physics questions, via more realistic implementations on coupled workstations, to more complicated driving dynamics implemented again on parallel supercomputers. 45 refs., 9 figs., 1 tab.
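
    A concrete example of the per-vehicle update that such microsimulations execute millions of times per second is the single-lane cellular-automaton rule set of Nagel and Schreckenberg, sketched below. It is offered as a representative model, not necessarily the exact dynamics used in the projects reviewed, and the road length, density, speed limit, and slowdown probability are arbitrary choices.

      # Single-lane cellular-automaton traffic update (Nagel-Schreckenberg rules)
      # on a circular road; parameters are illustrative.
      import numpy as np

      rng = np.random.default_rng(5)
      road_len, n_cars, v_max, p_slow, steps = 1000, 300, 5, 0.3, 100

      pos = np.sort(rng.choice(road_len, size=n_cars, replace=False))
      vel = np.zeros(n_cars, dtype=int)

      for _ in range(steps):
          # gap to the car ahead on the ring (cyclic order is preserved, no overtaking)
          gap = (np.roll(pos, -1) - pos - 1) % road_len
          vel = np.minimum(vel + 1, v_max)                 # 1. accelerate
          vel = np.minimum(vel, gap)                       # 2. avoid collisions
          vel = np.where(rng.random(n_cars) < p_slow,
                         np.maximum(vel - 1, 0), vel)      # 3. random slowdown
          pos = (pos + vel) % road_len                     # 4. move

      print("mean speed after", steps, "steps:", vel.mean())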

  14. Large Deployable Reflectarray Antenna

    NASA Technical Reports Server (NTRS)

    Fang, Houfei; Huang, John; Lou, Michael

    2006-01-01

    A report discusses a 7-meter-diameter reflectarray antenna that has been conceived in a continuing effort to develop large reflectarray antennas to be deployed in outer space. Major underlying concepts were reported in three prior NASA Tech Briefs articles: "Inflatable Reflectarray Antennas" (NPO-20433), Vol. 23, No. 10 (October 1999), page 50; "Tape-Spring Reinforcements for Inflatable Structural Tubes" (NPO-20615), Vol. 24, No. 7 (July 2000), page 58; and "Self-Inflatable/Self-Rigidizable Reflectarray Antenna" (NPO-30662), Vol. 28, No. 1 (January 2004), page 61. Like previous antennas in the series, the antenna now proposed would include a reflectarray membrane stretched flat on a frame of multiple inflatable booms. The membrane and booms would be rolled up and folded for compact stowage during transport. Deployment in outer space would be effected by inflating the booms to unroll and then to unfold the membrane, thereby stretching the membrane out flat to its full size. The membrane would achieve the flatness for a Ka-band application. The report gives considerable emphasis to designing the booms to rigidify themselves upon deployment: for this purpose, the booms could be made as spring-tape-reinforced aluminum laminate tubes like those described in two of the cited prior articles.

  15. Large Format Radiographic Imaging

    SciTech Connect

    J. S. Rohrer; Lacey Stewart; M. D. Wilke; N. S. King; S. A Baker; Wilfred Lewis

    1999-08-01

    Radiographic imaging continues to be a key diagnostic in many areas at Los Alamos National Laboratory (LANL). Radiographic recording systems have taken on many forms, from high repetition-rate, gated systems to film recording and storage phosphors. Some systems are designed for synchronization to an accelerator while others may be single shot or may record a frame sequence in a dynamic radiography experiment. While film recording remains a reliable standby in the radiographic community, there is growing interest in investigating electronic recording for many applications. The advantages of real time access to remote data acquisition are highly attractive. Cooled CCD camera systems are capable of providing greater sensitivity with improved signal-to-noise ratio. This paper begins with a review of performance characteristics of the Bechtel Nevada large format imaging system, a gated system capable of viewing scintillators up to 300 mm in diameter. We then examine configuration alternatives in lens coupled and fiber optically coupled electro-optical recording systems. Areas of investigation include tradeoffs between fiber optic and lens coupling, methods of image magnification, and spectral matching from scintillator to CCD camera. Key performance features discussed include field of view, resolution, sensitivity, dynamic range, and system noise characteristics.

  16. [Large granular lymphocyte leukemia].

    PubMed

    Lazaro, Estibaliz; Caubet, Olivier; Menard, Fanny; Pellegrin, Jean-Luc; Viallard, Jean-François

    2007-11-01

    Large granular lymphocyte (LGL) leukemia is a clonal proliferation of cytotoxic cells, either CD3(+) (T-cell) or CD3(-) (natural killer, or NK). Both subtypes can manifest as indolent or aggressive disorders. T-LGL leukemia is associated with cytopenias and autoimmune diseases and most often has an indolent course and good prognosis. Rheumatoid arthritis and Felty syndrome are frequent. NK-LGL leukemias can be more aggressive. LGL expansion is currently hypothesized to be a virus (Epstein-Barr or human T-cell leukemia viruses) antigen-driven T-cell response that involves disruption of apoptosis. The diagnosis of T-LGL is suggested by flow cytometry and confirmed by T-cell receptor gene rearrangement studies. Clonality is difficult to determine in NK-LGL but use of monoclonal antibodies specific for killer cell immunoglobulin-like receptor (KIR) has improved this process. Treatment is required when T-LGL leukemia is associated with recurrent infections secondary to chronic neutropenia. Long-lasting remission can be obtained with immunosuppressive treatments such as methotrexate, cyclophosphamide, and cyclosporine A. NK-LGL leukemias may be more aggressive and refractory to conventional therapy. PMID:17596907

  17. Very Large Scale Optimization

    NASA Technical Reports Server (NTRS)

    Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)

    2002-01-01

    The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
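
    The selected algorithm is described as a recreation of a 1960s exterior penalty function method. The sketch below shows a generic textbook exterior penalty loop on a tiny two-variable problem, purely to illustrate that family of methods; it is not the BIGDOT implementation, and the objective, constraints, and penalty schedule are invented.

      # Generic exterior penalty method: minimize f(x) subject to g(x) <= 0 by
      # repeatedly minimizing f + r * sum(max(0, g)^2) with increasing r.
      import numpy as np
      from scipy.optimize import minimize

      def f(x):                      # objective
          return (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2

      def g(x):                      # constraints g_i(x) <= 0
          return np.array([x[0] + x[1] - 4.0,      # x0 + x1 <= 4
                           -x[0],                  # x0 >= 0
                           -x[1]])                 # x1 >= 0

      def penalized(x, r):
          viol = np.maximum(0.0, g(x))
          return f(x) + r * np.sum(viol ** 2)

      x = np.array([0.0, 0.0])
      for r in [1.0, 10.0, 100.0, 1000.0]:          # increase the penalty gradually
          res = minimize(penalized, x, args=(r,), method="BFGS")
          x = res.x
      print("approximate constrained optimum:", x)  # tends toward (2.5, 1.5)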

  18. Large Binocular Telescope Project

    NASA Astrophysics Data System (ADS)

    Hill, John M.

    1997-03-01

    The large binocular telescope (LBT) project has evolved from concepts first proposed in 1985. The present partners involved in the design and construction of this 2 by 8.4 meter binocular telescope are the University of Arizona, Italy represented by the Osservatorio Astrofisico di Arcetri, and the Research Corporation based in Tucson, Arizona. These three partners have committed sufficient funds to build the enclosure and the telescope populated with a single 8.4 meter optical train -- approximately 40 million dollars (1989). Based on this commitment, design and construction activities are now moving forward. Additional partners are being sought. The next mirror to be cast at the Steward Observatory Mirror Lab in the fall of 1996 will be the first borosilicate honeycomb primary for LBT. The baseline optical configuration of LBT includes wide field Cassegrain secondaries with optical foci above the primaries to provide a corrected one degree field at F/4. The infrared F/15 secondaries are a Gregorian design to allow maximum flexibility for adaptive optics. The F/15 secondaries are undersized to provide a low thermal background focal plane which is unvignetted over a 4 arcminute diameter field-of-view. The interferometric focus combining the light from the two 8.4 meter primaries will reimage two folded Gregorian focal planes to a central location. The telescope elevation structure accommodates swing arms which allow rapid interchange of the various secondary and tertiary mirrors. Maximum stiffness and minimal thermal disturbance continue to be important drivers for the detailed design of the telescope. The telescope structure accommodates installation of a vacuum bell jar for aluminizing the primary mirrors in-situ on the telescope. The detailed design of the telescope structure will be completed in 1996 by ADS Italia (Lecco) and European Industrial Engineering (Mestre). The final enclosure design is now in progress at M3 Engineering (Tucson), EIE and ADS Italia

  19. Applied large eddy simulation.

    PubMed

    Tucker, Paul G; Lardeau, Sylvain

    2009-07-28

    Large eddy simulation (LES) is now seen more and more as a viable alternative to current industrial practice, usually based on problem-specific Reynolds-averaged Navier-Stokes (RANS) methods. Access to detailed flow physics is attractive to industry, especially in an environment in which computer modelling is bound to play an ever increasing role. However, the improvement in accuracy and flow detail has substantial cost. This has so far prevented wider industrial use of LES. The purpose of the applied LES discussion meeting was to address questions regarding what is achievable and what is not, given the current technology and knowledge, for an industrial practitioner who is interested in using LES. The use of LES was explored in an application-centred context between diverse fields. The general flow-governing equation form was explored along with various LES models. The errors occurring in LES were analysed. Also, the hybridization of RANS and LES was considered. The importance of modelling relative to boundary conditions, problem definition and other more mundane aspects were examined. It was to an extent concluded that for LES to make most rapid industrial impact, pragmatic hybrid use of LES, implicit LES and RANS elements will probably be needed. Added to this further, highly industrial sector model parametrizations will be required with clear thought on the key target design parameter(s). The combination of good numerical modelling expertise, a sound understanding of turbulence, along with artistry, pragmatism and the use of recent developments in computer science should dramatically add impetus to the industrial uptake of LES. In the light of the numerous technical challenges that remain it appears that for some time to come LES will have echoes of the high levels of technical knowledge required for safe use of RANS but with much greater fidelity. PMID:19531503

  20. Large planer for finishing smooth, flat surfaces of large pieces ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Large planer for finishing smooth, flat surfaces of large pieces of metal; in operating condition and used for public demonstrations. - Thomas A. Edison Laboratories, Building No. 5, Main Street & Lakeside Avenue, West Orange, Essex County, NJ

  1. Large for gestational age (LGA)

    MedlinePlus

    ... medlineplus.gov/ency/article/002248.htm Large for gestational age (LGA): Large for gestational age means that a fetus or infant is larger ...

  2. Large-D gravity and low-D strings.

    PubMed

    Emparan, Roberto; Grumiller, Daniel; Tanabe, Kentaro

    2013-06-21

    We show that in the limit of a large number of dimensions a wide class of nonextremal neutral black holes has a universal near-horizon limit. The limiting geometry is the two-dimensional black hole of string theory with a two-dimensional target space. Its conformal symmetry explains the properties of massless scalars found recently in the large-D limit. For black branes with string charges, the near-horizon geometry is that of the three-dimensional black strings of Horne and Horowitz. The analogies between the α' expansion in string theory and the large-D expansion in gravity suggest a possible effective string description of the large-D limit of black holes. We comment on applications to several subjects, in particular to the problem of critical collapse. PMID:23829726

  3. Ultra-fast computation of electronic spectra for large systems by tight-binding based simplified Tamm-Dancoff approximation (sTDA-xTB)

    NASA Astrophysics Data System (ADS)

    Grimme, Stefan; Bannwarth, Christoph

    2016-08-01

    The computational bottleneck of the extremely fast simplified Tamm-Dancoff approximated (sTDA) time-dependent density functional theory procedure [S. Grimme, J. Chem. Phys. 138, 244104 (2013)] for the computation of electronic spectra for large systems is the determination of the ground state Kohn-Sham orbitals and eigenvalues. This limits such treatments to single structures with a few hundred atoms and hence, e.g., sampling along molecular dynamics trajectories for flexible systems or the calculation of chromophore aggregates is often not possible. The aim of this work is to solve this problem by a specifically designed semi-empirical tight binding (TB) procedure similar to the well established self-consistent-charge density functional TB scheme. The new special purpose method provides orbitals and orbital energies of hybrid density functional character for a subsequent and basically unmodified sTDA procedure. Compared to many previous semi-empirical excited state methods, an advantage of the ansatz is that a general eigenvalue problem in a non-orthogonal, extended atomic orbital basis is solved and therefore correct occupied/virtual orbital energy splittings as well as Rydberg levels are obtained. A key idea for the success of the new model is that the determination of atomic charges (describing an effective electron-electron interaction) and the one-particle spectrum is decoupled and treated by two differently parametrized Hamiltonians/basis sets. The three-diagonalization-step composite procedure can routinely compute broad range electronic spectra (0-8 eV) within minutes of computation time for systems composed of 500-1000 atoms with an accuracy typical of standard time-dependent density functional theory (0.3-0.5 eV average error). An easily extendable parametrization based on coupled-cluster and density functional computed reference data for the elements H-Zn including transition metals is described. The accuracy of the method termed sTDA-xTB is first

  4. Ultra-fast computation of electronic spectra for large systems by tight-binding based simplified Tamm-Dancoff approximation (sTDA-xTB).

    PubMed

    Grimme, Stefan; Bannwarth, Christoph

    2016-08-01

    The computational bottleneck of the extremely fast simplified Tamm-Dancoff approximated (sTDA) time-dependent density functional theory procedure [S. Grimme, J. Chem. Phys. 138, 244104 (2013)] for the computation of electronic spectra for large systems is the determination of the ground state Kohn-Sham orbitals and eigenvalues. This limits such treatments to single structures with a few hundred atoms and hence, e.g., sampling along molecular dynamics trajectories for flexible systems or the calculation of chromophore aggregates is often not possible. The aim of this work is to solve this problem by a specifically designed semi-empirical tight binding (TB) procedure similar to the well established self-consistent-charge density functional TB scheme. The new special purpose method provides orbitals and orbital energies of hybrid density functional character for a subsequent and basically unmodified sTDA procedure. Compared to many previous semi-empirical excited state methods, an advantage of the ansatz is that a general eigenvalue problem in a non-orthogonal, extended atomic orbital basis is solved and therefore correct occupied/virtual orbital energy splittings as well as Rydberg levels are obtained. A key idea for the success of the new model is that the determination of atomic charges (describing an effective electron-electron interaction) and the one-particle spectrum is decoupled and treated by two differently parametrized Hamiltonians/basis sets. The three-diagonalization-step composite procedure can routinely compute broad range electronic spectra (0-8 eV) within minutes of computation time for systems composed of 500-1000 atoms with an accuracy typical of standard time-dependent density functional theory (0.3-0.5 eV average error). An easily extendable parametrization based on coupled-cluster and density functional computed reference data for the elements H-Zn including transition metals is described. The accuracy of the method termed sTDA-xTB is first
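
    As a minimal illustration of the kind of general eigenvalue problem in a non-orthogonal basis mentioned above, the sketch below solves H C = S C diag(eps) with SciPy; the Hamiltonian and overlap matrices are random symmetric stand-ins, not the sTDA-xTB matrices.

        # Minimal sketch: a general eigenvalue problem H C = S C diag(eps) in a
        # non-orthogonal basis, of the kind mentioned above. H and S are random
        # symmetric stand-ins, NOT the sTDA-xTB Hamiltonian/overlap matrices.
        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(0)
        n = 6                                   # toy number of basis functions

        H = rng.normal(size=(n, n))
        H = 0.5 * (H + H.T)                     # symmetric "Hamiltonian"

        A = 0.05 * rng.normal(size=(n, n))
        S = np.eye(n) + 0.5 * (A + A.T)         # positive-definite "overlap"

        eps, C = eigh(H, S)                     # solves H C = S C diag(eps)

        # The resulting orbitals come out S-orthonormal: C^T S C = identity
        print("orbital energies:", np.round(eps, 3))
        print("max |C^T S C - I| =", np.abs(C.T @ S @ C - np.eye(n)).max())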

  5. Health impacts of large dams

    SciTech Connect

    Lerer, L.B.; Scudder, T.

    1999-03-01

    Large dams have been criticized because of their negative environmental and social impacts. Public health interest largely has focused on vector-borne diseases, such as schistosomiasis, associated with reservoirs and irrigation projects. Large dams also influence health through changes in water and food security, increases in communicable diseases, and the social disruption caused by construction and involuntary resettlement. Communities living in close proximity to large dams often do not benefit from water transfer and electricity generation revenues. A comprehensive health component is required in environmental and social impact assessments for large dam projects.

  6. Analytic bootstrap at large spin

    NASA Astrophysics Data System (ADS)

    Kaviraj, Apratim; Sen, Kallol; Sinha, Aninda

    2015-11-01

    We use analytic conformal bootstrap methods to determine the anomalous dimensions and OPE coefficients for large spin operators in general conformal field theories in four dimensions containing a scalar operator of conformal dimension Δφ. It is known that such theories will contain an infinite sequence of large spin operators with twists approaching 2Δφ + 2n for each integer n. By considering the case where such operators are separated by a twist gap from other operators at large spin, we analytically determine the n, Δφ dependence of the anomalous dimensions. We find that for all n, the anomalous dimensions are negative for Δφ satisfying the unitarity bound. We further compute the first subleading correction at large spin and show that it becomes universal for large twist. In the limit when n is large, we find exact agreement with the AdS/CFT prediction corresponding to the Eikonal limit of a 2-2 scattering with dominant graviton exchange.
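
    For orientation, the large-spin asymptotics summarized above are usually written schematically as follows; this is the generic form expected from crossing with a minimal-twist exchange, not the precise coefficients derived in the paper:

        \tau_{n,\ell} \;=\; 2\Delta_\phi + 2n + \gamma(n,\ell),
        \qquad
        \gamma(n,\ell) \;\simeq\; -\,\frac{c_n(\Delta_\phi)}{\ell^{\tau_m}}
        \quad (\ell \to \infty),

    where \tau_m is the twist of the minimal-twist operator exchanged in the crossed channel (\tau_m = 2 for stress-tensor exchange) and c_n > 0, consistent with the negative anomalous dimensions stated above.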

  7. Querying Large Biological Network Datasets

    ERIC Educational Resources Information Center

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in an increasing amount of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of available data requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…

  8. Team Learning in Large Classes.

    ERIC Educational Resources Information Center

    Roueche, Suanne D., Ed.

    1984-01-01

    Information and suggestions are provided on the use of team learning in large college classes. Introductory material discusses the negative cycle of student-teacher interaction that may be provoked by large classes, and the use of permanent, heterogeneous, six- or seven-member student learning groups as the central focus of class activity as a…

  9. Robust large dimension terahertz cloaking.

    PubMed

    Liang, Dachuan; Gu, Jianqiang; Han, Jiaguang; Yang, Yuanmu; Zhang, Shuang; Zhang, Weili

    2012-02-14

    A large scale homogenous invisibility cloak functioning at terahertz frequencies is reported. The terahertz invisibility device features a large concealed volume, low loss, and broad bandwidth. In particular, it is capable of hiding objects with a dimension nearly an order of magnitude larger than that of its lithographic counterpart, but without involving complex and time-consuming cleanroom processing. PMID:22253094

  10. Sharpen Your Skills: Large Type.

    ERIC Educational Resources Information Center

    Knisely, Phillis; Wickham, Marian

    1984-01-01

    Three short articles about large type transcribing are provided for braille transcribers and teachers of the visually handicapped. The first article lists general suggestions for simple typewriter maintenance. The second article reviews the guidelines for typing fractions in large type for mathematics exercises. The third article describes a…

  11. Measuring happiness in large population

    NASA Astrophysics Data System (ADS)

    Wenas, Annabelle; Sjahputri, Smita; Takwin, Bagus; Primaldhi, Alfindra; Muhamad, Roby

    2016-01-01

    The ability to know the emotional states of large numbers of people is important, for example, to ensure the effectiveness of public policies. In this study, we propose a measure of happiness that can be used in a large-scale population and that is based on the analysis of Indonesian language lexicons. Here, we incorporate human assessment of Indonesian words, then quantify happiness over large-scale collections of texts gathered from Twitter conversations. We used two psychological constructs to measure happiness: valence and arousal. We found that Indonesian words have a tendency towards positive emotions. We also identified several happiness patterns during days of the week, hours of the day, and selected conversation topics.
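
    The scoring step described above can be illustrated with a short sketch: average the valence and arousal ratings of the words in a text that appear in a rated word list. The tiny lexicon and its ratings below are made-up placeholders, not the Indonesian lexicon used in the study.

        # Illustrative lexicon-based scoring: mean valence/arousal of the words in
        # a text that appear in a rated word list. The lexicon and ratings below
        # are made-up placeholders, not the study's Indonesian lexicon.
        lexicon = {
            "senang": {"valence": 8.1, "arousal": 5.9},   # "happy"
            "sedih":  {"valence": 2.3, "arousal": 4.2},   # "sad"
            "tenang": {"valence": 7.0, "arousal": 2.8},   # "calm"
        }

        def score_text(text, lexicon):
            """Mean valence/arousal over lexicon words found in `text`."""
            hits = [lexicon[w] for w in text.lower().split() if w in lexicon]
            if not hits:
                return None
            n = len(hits)
            return {
                "valence": sum(h["valence"] for h in hits) / n,
                "arousal": sum(h["arousal"] for h in hits) / n,
                "n_words": n,
            }

        print(score_text("hari ini saya senang dan tenang", lexicon))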

  12. Large engines and vehicles, 1958

    NASA Technical Reports Server (NTRS)

    1978-01-01

    During the mid-1950s, the Air Force sponsored work on the feasibility of building large, single-chamber engines, presumably for boost-glide aircraft or spacecraft. In 1956, the Army missile development group began studies of large launch vehicles. The possibilities opened up by Sputnik accelerated this work and gave the Army an opportunity to bid for the leading role in launch vehicles. The Air Force had the responsibility for the largest ballistic missiles and hence a ready-made base for extending their capability for spaceflight. During 1958, actions taken to establish a civilian space agency, and the launch vehicle needs seen by its planners, added a third contender to the space vehicle competition. These activities during 1958 are examined as to how they resulted in the initiation of a large rocket engine and the first large launch vehicle.

  13. LSD: Large Survey Database framework

    NASA Astrophysics Data System (ADS)

    Juric, Mario

    2012-09-01

    The Large Survey Database (LSD) is a Python framework and DBMS for distributed storage, cross-matching and querying of large survey catalogs (>10^9 rows, >1 TB). The primary driver behind its development is the analysis of Pan-STARRS PS1 data. It is specifically optimized for fast queries and parallel sweeps of positionally and temporally indexed datasets. It transparently scales to more than 10^2 nodes, and can be made to function in "shared nothing" architectures.

  14. Does Yellowstone need large fires

    SciTech Connect

    Romme, W.H.; Turner, M.G.; Gardner, R.H.; Hargrove, W.W.

    1994-06-01

    This paper synthesizes several studies initiated after the 1988 Yellowstone fires, to address the question whether the ecological effects of large fires differ qualitatively as well as quantitatively from small fires. Large burn patches had greater dominance and contagion of burn severity classes, and a higher proportion of crown fire. Burned aspen stands resprouted vigorously over an extensive area, but heavy ungulate browsing prevented establishment of new tree-sized stems. A burst of sexual reproduction occurred in forest herbs that usually reproduce vegetatively, and new aspen clones became established from seed - a rare event in this region. We conclude that the effects of large fires are qualitatively different, but less dramatically so than expected.

  15. Inflating with large effective fields

    SciTech Connect

    Burgess, C.P.; Cicoli, M.; Quevedo, F.; Williams, M. E-mail: mcicoli@ictp.it E-mail: mwilliams@perimeterinsititute.ca

    2014-11-01

    We re-examine large scalar fields within effective field theory, in particular focussing on the issues raised by their use in inflationary models (as suggested by BICEP2 to obtain primordial tensor modes). We argue that when the large-field and low-energy regimes coincide the scalar dynamics is most effectively described in terms of an asymptotic large-field expansion whose form can be dictated by approximate symmetries, which also help control the size of quantum corrections. We discuss several possible symmetries that can achieve this, including pseudo-Goldstone inflatons characterized by a coset G/H (based on abelian and non-abelian, compact and non-compact symmetries), as well as symmetries that are intrinsically higher dimensional. Besides the usual trigonometric potentials of Natural Inflation we also find in this way simple large-field power laws (like V ∝ φ^2) and exponential potentials, V(φ) = Σ_k V_k e^(-kφ/M). Both of these can describe the data well and give slow-roll inflation for large fields without the need for a precise balancing of terms in the potential. The exponential potentials achieve large r through the limit |η| ≪ ε and so predict r ≅ (8/3)(1 - n_s); consequently n_s ≅ 0.96 gives r ≅ 0.11 but not much larger (and so could be ruled out as measurements on r and n_s improve). We examine the naturalness issues for these models and give simple examples where symmetries protect these forms, using both pseudo-Goldstone inflatons (with non-abelian non-compact shift symmetries following familiar techniques from chiral perturbation theory) and extra-dimensional models.

  16. The MAGNEX large acceptance spectrometer

    SciTech Connect

    Cavallaro, M.; Cappuzzello, F.; Cunsolo, A.; Carbone, D.; Foti, A.

    2010-03-01

    The main features of the MAGNEX large acceptance magnetic spectrometer are described. It has a quadrupole + dipole layout and a hybrid detector located at the focal plane. The aberrations due to the large angular (50 msr) and momentum (±13%) acceptance are reduced by an accurate hardware design and then compensated by an innovative software ray-reconstruction technique. The resolutions obtained in energy, angle and mass are presented in the paper. MAGNEX has been used up to now for different experiments in nuclear physics and astrophysics, confirming it to be a multipurpose device.

  17. Detecting communities in large networks

    NASA Astrophysics Data System (ADS)

    Capocci, A.; Servedio, V. D. P.; Caldarelli, G.; Colaiori, F.

    2005-07-01

    We develop an algorithm to detect community structure in complex networks. The algorithm is based on spectral methods and takes into account weights and link orientation. Since the method efficiently detects clustered nodes in large networks even when these are not sharply partitioned, it turns out to be especially suitable for the analysis of social and information networks. We test the algorithm on a large-scale data-set from a psychological experiment of word association. In this case, it proves to be successful both in clustering words and in uncovering mental association patterns.
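
    A minimal sketch of the general idea (spectral embedding of a weighted adjacency matrix followed by clustering) is given below; it is not the authors' specific algorithm and, for brevity, it symmetrizes the network rather than treating link orientation.

        # Minimal sketch of spectral community detection: embed nodes with the
        # leading eigenvectors of a (symmetrized) weighted adjacency matrix and
        # cluster them with k-means. Illustrates the general approach only; it
        # is not the specific algorithm of the paper.
        import numpy as np
        from scipy.cluster.vq import kmeans2

        rng = np.random.default_rng(1)

        # Toy weighted network: two planted communities of 10 nodes each
        n, k = 20, 2
        W = rng.uniform(0.0, 0.2, size=(n, n))
        W[:10, :10] += 0.8          # dense block 1
        W[10:, 10:] += 0.8          # dense block 2
        W = 0.5 * (W + W.T)         # symmetrize (directed weights folded together)
        np.fill_diagonal(W, 0.0)

        # Spectral embedding: eigenvectors for the k largest eigenvalues
        vals, vecs = np.linalg.eigh(W)
        embedding = vecs[:, -k:]

        # Cluster the embedded nodes
        _, labels = kmeans2(embedding, k, minit="++")
        print("community labels:", labels)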

  18. Fermi's Large Area Telescope (LAT)

    NASA Video Gallery

    Fermi’s Large Area Telescope (LAT) is the spacecraft’s main scientific instrument. This animation shows a gamma ray (purple) entering the LAT, where it is converted into an electron (red) and a...

  19. The very large hadron collider

    SciTech Connect

    1998-09-01

    This paper reviews the purposes to be served by a very large hadron collider and the organization and coordination of efforts to bring it about. There is some discussion of magnet requirements and R&D and the suitability of the Fermilab site.

  20. Ideas for Managing Large Classes.

    ERIC Educational Resources Information Center

    Kabel, Robert L.

    1983-01-01

    Describes management strategies used in a large kinetics/industrial chemistry course. Strategies are designed to make instruction in such classes more efficient and effective. Areas addressed include homework assignment, quizzes, final examination, grading and feedback, and rewards for conducting the class in the manner described. (JN)

  1. CERN's Large Hadron Collider project

    NASA Astrophysics Data System (ADS)

    Fearnley, Tom A.

    1997-03-01

    The paper gives a brief overview of CERN's Large Hadron Collider (LHC) project. After an outline of the physics motivation, we describe the LHC machine, interaction rates, experimental challenges, and some important physics channels to be studied. Finally we discuss the four experiments planned at the LHC: ATLAS, CMS, ALICE and LHC-B.

  2. Large area CMOS image sensors

    NASA Astrophysics Data System (ADS)

    Turchetta, R.; Guerrini, N.; Sedgwick, I.

    2011-01-01

    CMOS image sensors, also known as CMOS Active Pixel Sensors (APS) or Monolithic Active Pixel Sensors (MAPS), are today the dominant imaging devices. They are omnipresent in our daily life, as image sensors in cellular phones, web cams, digital cameras, ... In these applications, the pixels can be very small, in the micron range, and the sensors themselves tend to be limited in size. However, many scientific applications, like particle or X-ray detection, require large format, often with large pixels, as well as other specific performance, like low noise, radiation hardness or very fast readout. The sensors are also required to be sensitive to a broad spectrum of radiation: photons from the silicon cut-off in the IR down to UV and X- and gamma-rays through the visible spectrum as well as charged particles. This requirement calls for modifications to the substrate to be introduced to provide optimized sensitivity. This paper will review existing CMOS image sensors, whose size can be as large as a single CMOS wafer, and analyse the technical requirements and specific challenges of large format CMOS image sensors.

  3. Large deviations and portfolio optimization

    NASA Astrophysics Data System (ADS)

    Sornette, Didier

    Risk control and optimal diversification constitute a major focus in the finance and insurance industries as well as, more or less consciously, in our everyday life. We present a discussion of the characterization of risks and of the optimization of portfolios that starts from a simple illustrative model and ends by a general functional integral formulation. A major item is that risk, usually thought of as one-dimensional in the conventional mean-variance approach, has to be addressed by the full distribution of losses. Furthermore, the time-horizon of the investment is shown to play a major role. We show the importance of accounting for large fluctuations and use the theory of Cramér for large deviations in this context. We first treat a simple model with a single risky asset that exemplifies the distinction between the average return and the typical return and the role of large deviations in multiplicative processes, and the different optimal strategies for the investors depending on their size. We then analyze the case of assets whose price variations are distributed according to exponential laws, a situation that is found to describe daily price variations reasonably well. Several portfolio optimization strategies are presented that aim at controlling large risks. We end by extending the standard mean-variance portfolio optimization theory, first within the quasi-Gaussian approximation and then using a general formulation for non-Gaussian correlated assets in terms of the formalism of functional integrals developed in the field theory of critical phenomena.
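
    For reference, the large-deviation statement invoked here (Cramér's theorem for i.i.d. returns X_1, ..., X_n) takes the schematic form

        P\!\left(\tfrac{1}{n}\sum_{i=1}^{n} X_i \ge x\right) \;\asymp\; e^{-n\,I(x)},
        \qquad
        I(x) \;=\; \sup_{\lambda}\bigl[\lambda x - \ln \mathbb{E}\,e^{\lambda X_1}\bigr],

    for x above the mean; the rate function I(x) is what controls the probability of large cumulative gains or losses over an n-period horizon.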

  4. Large gap magnetic suspension system

    NASA Technical Reports Server (NTRS)

    Abdelsalam, Moustafa K.; Eyssa, Y. M.

    1991-01-01

    The design of a large gap magnetic suspension system is discussed. Some of the topics covered include: the system configuration, permanent magnet material, levitation magnet system, superconducting magnets, resistive magnets, superconducting levitation coils, resistive levitation coils, levitation magnet system, and the nitrogen cooled magnet system.

  5. Mass spectrometry of large complexes.

    PubMed

    Bich, Claudia; Zenobi, Renato

    2009-10-01

    Mass spectrometry is becoming a more and more powerful tool for investigating protein complexes. Recent developments, based on different ionization techniques, electrospray, desorption/ionization and others are contributing to the usefulness of MS to describe the organization and structure of large non-covalent assemblies. PMID:19782560

  6. Unusually large submandibular gland stone.

    PubMed

    Al-Hussona, Aws Adel

    2015-01-01

    Submandibular gland calculi are the most common disease of the gland. In this article, we report a case with an unusually large stone located at the hilum of the gland causing necrosis of the overlying duct and the oral mucosa (floor of mouth). PMID:25934409

  7. Energy conservation in large buildings

    NASA Astrophysics Data System (ADS)

    Rosenfeld, A.; Hafemeister, D.

    1985-11-01

    As energy prices rise, newly energy-aware designers use better tools and technology to create energy efficient buildings. Thus the U.S. office stock (average age 20 years) uses 250 kBTU/ft2 of resource energy, but the guzzler of 1972 uses 500 (up by a factor of 2), and the 1986 ASHRAE standards call for 100-125 (less than 25% of their 1972 ancestors). Surprisingly, the first real cost of these efficient buildings has not risen since 1972. Scaling laws are used to calculate heat gains and losses of buildings to obtain the ΔT(free), which can be as large as 15-30 °C (30-60 °F) for large buildings. The net thermal demand and thermal time constants are determined for the Swedish Thermodeck buildings, which need essentially no heat in the winter and no chillers in summer. The BECA and other data bases for large buildings are discussed. Off-peak cooling for large buildings is analyzed in terms of saving peak electrical power. By downsizing chillers and using cheaper, off-peak power, cost-effective thermal storage in new commercial buildings can reduce U.S. peak power demands by 10-20 GW in 15 years. A further potential of about 40 GW is available from adopting partial thermal storage and more efficient air conditioners in existing buildings.
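
    A back-of-envelope sketch of the "free" temperature rise idea: at steady state the internal gains of a big building balance envelope plus ventilation losses, so ΔT_free ≈ Q_int / (U·A + ρ·c_p·V_vent). The balance and the numbers below are illustrative placeholders, not the paper's scaling laws or data.

        # Back-of-envelope "free" temperature rise: internal gains balanced
        # against envelope plus ventilation losses. Illustrative numbers only,
        # not the paper's data.
        Q_int = 2.0e6          # internal gains: people, lights, equipment [W]
        UA = 5.0e4             # envelope conductance U*A [W/K]
        rho_cp = 1200.0        # volumetric heat capacity of air [J/(m^3 K)]
        Vdot = 30.0            # outdoor-air ventilation rate [m^3/s]

        dT_free = Q_int / (UA + rho_cp * Vdot)
        print(f"dT_free ~ {dT_free:.1f} K")   # ~24 K, within the 15-30 C range quoted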

  8. Galaxy clustering on large scales.

    PubMed

    Efstathiou, G

    1993-06-01

    I describe some recent observations of large-scale structure in the galaxy distribution. The best constraints come from two-dimensional galaxy surveys and studies of angular correlation functions. Results from galaxy redshift surveys are much less precise but are consistent with the angular correlations, provided the distortions in mapping between real-space and redshift-space are relatively weak. The galaxy two-point correlation function, rich-cluster two-point correlation function, and galaxy-cluster cross-correlation function are all well described on large scales (≳ 20 h^-1 Mpc, where the Hubble constant H0 = 100h km s^-1 Mpc^-1; 1 pc = 3.09 × 10^16 m) by the power spectrum of an initially scale-invariant, adiabatic, cold-dark-matter Universe with Γ = Ωh ≈ 0.2. I discuss how this fits in with the Cosmic Background Explorer (COBE) satellite detection of large-scale anisotropies in the microwave background radiation and other measures of large-scale structure in the Universe. PMID:11607400

  9. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  10. Large-scale polarimetry of large optical galaxies

    NASA Astrophysics Data System (ADS)

    Sholomitskii, G. B.; Maslov, I. A.; Vitrichenko, E. A.

    1999-11-01

    We present preliminary results of wide-field visual CCD polarimetry for large optical galaxies through a concentric multisector radial-tangential polaroid analyzer mounted at the intermediate focus of a Zeiss-1000 telescope. The mean degree of tangential polarization in a 13-arcmin field, which was determined by processing images with imprinted "orthogonal" sectors, ranges from several percent (M 82) and 0.51% (the spirals M 51, M 81) to lower values for elliptical galaxies (M 49, M 87). It is emphasized that the parameters of large-scale polarization can be properly determined by using physical models for galaxies; inclination and azimuthal dependences of the degree of polarization are given for spirals.

  11. The physics of large eruptions

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Agust

    2015-04-01

    Based on eruptive volumes, eruptions can be classified as follows: small if the volumes are from less than 0.001 km3 to 0.1 km3, moderate if the volumes are from 0.1 to 10 km3, and large if the volumes are from 10 km3 to 1000 km3 or larger. The largest known explosive and effusive eruptions have eruptive volumes of 4000-5000 km3. The physics of small to moderate eruptions is reasonably well understood. For a typical mafic magma chamber in a crust that behaves as elastic, about 0.1% of the magma leaves the chamber (erupted and injected as a dyke) during rupture and eruption. Similarly, for a typical felsic magma chamber, the eruptive/injected volume during rupture and eruption is about 4%. To provide small to moderate eruptions, chamber volumes of the order of several tens to several hundred cubic kilometres would be needed. Shallow crustal chambers of these sizes are common, and deep-crustal and upper-mantle reservoirs of thousands of cubic kilometres exist. Thus, elastic and poro-elastic chambers of typical volumes can account for small to moderate eruptive volumes. When the eruptions become large, with volumes of tens or hundreds of cubic kilometres or more, an ordinary poro-elastic mechanism can no longer explain the eruptive volumes. The required sizes of the magma chambers and reservoirs to explain such volumes are simply too large to be plausible. Here I propose that the mechanics of large eruptions is fundamentally different from that of small to moderate eruptions. More specifically, I suggest that all large eruptions derive their magmas from chambers and reservoirs whose total cavity-volumes are mechanically reduced very much during the eruption. There are two mechanisms by which chamber/reservoir cavity-volumes can be reduced rapidly so as to squeeze out much of, or all, their magmas. One is piston-like caldera collapse. The other is graben subsidence. During large slip on the ring-faults/graben-faults the associated chamber/reservoir shrinks in volume
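
    The arithmetic behind this argument can be made explicit: if only a fraction f of the stored magma can leave the chamber per rupture/eruption cycle (about 0.1% for the mafic case and about 4% for the felsic case quoted above), the chamber volume needed for an eruption of volume V is V/f.

        # Chamber volume required if only a fraction f of the magma can erupt,
        # using the eruptible fractions quoted above (eruption volumes chosen
        # for illustration).
        cases = [
            ("mafic, moderate eruption (0.1 km3)", 0.1,   0.001),  # f ~ 0.1%
            ("felsic, moderate eruption (10 km3)", 10.0,  0.04),   # f ~ 4%
            ("felsic, large eruption (100 km3)",   100.0, 0.04),
        ]
        for label, v_erupt, f in cases:
            print(f"{label}: chamber of at least {v_erupt / f:,.0f} km3")

    The first two cases land in the tens to hundreds of cubic kilometres quoted above, while the last requires an implausibly large chamber, which is the motivation for the volume-reduction mechanisms proposed here.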

  12. Large space structure damping design

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Haviland, J. K.

    1983-01-01

    Several FORTRAN subroutines and programs were developed which compute complex eigenvalues of a damped system using different approaches, and which rescale mode shapes to unit generalized mass and make rigid bodies orthogonal to each other. An analytical proof of a Minimum Constrained Frequency Criterion (MCFC) for a single damper is presented. A method to minimize the effect of control spill-over for large space structures is proposed. The characteristic equation of an undamped system with a generalized control law is derived using reanalysis theory. This equation can be implemented in computer programs for efficient eigenvalue analysis or control quasi synthesis. Methods to control vibrations in large space structure are reviewed and analyzed. The resulting prototype, using electromagnetic actuator, is described.

  13. Chunking of Large Multidimensional Arrays

    SciTech Connect

    Rotem, Doron; Otoo, Ekow J.; Seshadri, Sridhar

    2007-02-28

    Data intensive scientific computations as well as on-line analytical processing applications are done on very large datasets that are modeled as k-dimensional arrays. The storage organization of such arrays on disks is done by partitioning the large global array into fixed size hyper-rectangular sub-arrays called chunks or tiles that form the units of data transfer between disk and memory. Typical queries involve the retrieval of sub-arrays in a manner that accesses all chunks that overlap the query results. An important metric of the storage efficiency is the expected number of chunks retrieved over all such queries. The question that immediately arises is "what shapes of array chunks give the minimum expected number of chunks over a query workload?" In this paper we develop two probabilistic mathematical models of the problem and provide exact solutions using steepest descent and geometric programming methods. Experimental results, using synthetic workloads on real life data sets, show that our chunking is much more efficient than the existing approximate solutions.
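
    A simplified version of the metric discussed above: for a query box of shape q placed at a uniformly random integer offset, the expected number of chunks it overlaps along an axis with chunk length c is 1 + (q-1)/c, so the product over axes estimates the expected chunks per query. The paper's probabilistic models and optimization are more sophisticated; this toy comparison at fixed chunk size only shows why chunk shape matters.

        # Toy estimate of expected chunks touched by a query box at a uniformly
        # random offset: 1 + (q-1)/c per axis, multiplied over axes. Simplified
        # relative to the paper's models; illustrates the effect of chunk shape.
        from math import prod

        def expected_chunks(query_shape, chunk_shape):
            return prod(1 + (q - 1) / c for q, c in zip(query_shape, chunk_shape))

        query = (200, 10)                 # typical query box (long and thin)
        candidates = [
            (64, 64),                     # square chunks
            (256, 16),                    # elongated to match the workload
            (16, 256),                    # elongated the wrong way
        ]
        for chunk in candidates:          # all hold 4096 cells, i.e. equal I/O unit size
            print(chunk, "->", round(expected_chunks(query, chunk), 1), "chunks expected")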

  14. Progress on large area GEMs

    NASA Astrophysics Data System (ADS)

    Villa, Marco; Duarte Pinto, Serge; Alfonsi, Matteo; Brock, Ian; Croci, Gabriele; David, Eric; de Oliveira, Rui; Ropelewski, Leszek; Taureg, Hans; van Stenis, Miranda

    2011-02-01

    The Gas Electron Multiplier (GEM) manufacturing technique has recently evolved to allow the production of large area GEMs. A novel approach based on single mask photolithography eliminates the mask alignment issue, which limits the dimensions in the traditional double mask process. Moreover, a splicing technique overcomes the limited width of the raw material. Stretching and handling issues in large area GEMs have also been addressed. Using the new improvements it was possible to build a prototype triple-GEM detector of ~2000 cm² active area, aimed at an application for the TOTEM T1 upgrade. Further refinements of the single mask technique allow great control over the shape of the GEM holes and the size of the rims, which can be tuned as needed. In this framework, simulation studies can help to understand the GEM behavior depending on the hole shape.

  15. Large aperture Fresnel telescopes/011

    SciTech Connect

    Hyde, R.A., LLNL

    1998-07-16

    At Livermore we've spent the last two years examining an alternative approach towards very large aperture (VLA) telescopes, one based upon transmissive Fresnel lenses rather than on mirrors. Fresnel lenses are attractive for VLA telescopes because they are launchable (lightweight, packagable, and deployable) and because they virtually eliminate the traditional, very tight, surface shape requirements faced by reflecting telescopes. Their (potentially severe) optical drawback, a very narrow spectral bandwidth, can be eliminated by use of a second (much smaller) chromatically-correcting Fresnel element. This enables Fresnel VLA telescopes to provide either single band (Δλ/λ ≈ 0.1), multiple band, or continuous spectral coverage. Building and fielding such large Fresnel lenses will present a significant challenge, but one which appears, with effort, to be solvable.
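
    The narrow-bandwidth drawback mentioned above follows from the dispersion of a diffractive lens, whose focal length scales inversely with wavelength, f(λ) ≈ f0·λ0/λ. The sketch below uses placeholder numbers (a 1 km design focal length is assumed, not a figure from the text) to show how quickly the focus walks off as the band widens.

        # Chromatic focal shift of a diffractive (Fresnel) lens: f ~ f0*lambda0/lambda.
        # Design wavelength and focal length are illustrative placeholders.
        lambda0 = 550e-9          # design wavelength [m]
        f0 = 1000.0               # assumed design focal length [m]

        for dlam_over_lam in (0.01, 0.05, 0.10):
            lam = lambda0 * (1 + dlam_over_lam)
            f = f0 * lambda0 / lam
            print(f"d(lambda)/lambda = {dlam_over_lam:.2f}: focal shift = {f0 - f:.1f} m")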

  16. Large aperture scanning airborne lidar

    NASA Technical Reports Server (NTRS)

    Smith, J.; Bindschadler, R.; Boers, R.; Bufton, J. L.; Clem, D.; Garvin, J.; Melfi, S. H.

    1988-01-01

    A large aperture scanning airborne lidar facility is being developed to provide important new capabilities for airborne lidar sensor systems. The proposed scanning mechanism allows for a large aperture telescope (25 in. diameter) in front of an elliptical flat (25 x 36 in.) turning mirror positioned at a 45 degree angle with respect to the telescope optical axis. The lidar scanning capability will provide opportunities for acquiring new data sets for atmospheric, earth resources, and oceans communities. This completed facility will also make available the opportunity to acquire simulated EOS lidar data on a near global basis. The design and construction of this unique scanning mechanism presents exciting technological challenges of maintaining the turning mirror optical flatness during scanning while exposed to extreme temperatures, ambient pressures, aircraft vibrations, etc.

  17. Large Scale Nanolaminate Deformable Mirror

    SciTech Connect

    Papavasiliou, A; Olivier, S; Barbee, T; Miles, R; Chang, K

    2005-11-30

    This work concerns the development of a technology that uses Nanolaminate foils to form light-weight, deformable mirrors that are scalable over a wide range of mirror sizes. While MEMS-based deformable mirrors and spatial light modulators have considerably reduced the cost and increased the capabilities of adaptive optic systems, there has not been a way to utilize the advantages of lithography and batch-fabrication to produce large-scale deformable mirrors. This technology is made scalable by using fabrication techniques and lithography that are not limited to the sizes of conventional MEMS devices. Like many MEMS devices, these mirrors use parallel plate electrostatic actuators. This technology replicates that functionality by suspending a horizontal piece of nanolaminate foil over an electrode by electroplated nickel posts. This actuator is attached, with another post, to another nanolaminate foil that acts as the mirror surface. Most MEMS devices are produced with integrated circuit lithography techniques that are capable of very small line widths, but are not scalable to large sizes. This technology is very tolerant of lithography errors and can use coarser, printed circuit board lithography techniques that can be scaled to very large sizes. These mirrors use small, lithographically defined actuators and thin nanolaminate foils allowing them to produce deformations over a large area while minimizing weight. This paper will describe a staged program to develop this technology. First-principles models were developed to determine design parameters. Three stages of fabrication will be described starting with a 3 x 3 device using conventional metal foils and epoxy to a 10-across all-metal device with nanolaminate mirror surfaces.
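
    The parallel-plate electrostatic actuation mentioned above follows the textbook relation F = ε0·A·V² / (2g²); the sketch below uses placeholder dimensions and voltage, not the actual device geometry.

        # Parallel-plate electrostatic force behind the actuators described above.
        # Dimensions and voltage are placeholders, not the actual device geometry.
        EPS0 = 8.854e-12                 # vacuum permittivity [F/m]

        A = (300e-6) ** 2                # electrode area: 300 um x 300 um [m^2]
        g = 5e-6                         # electrode gap [m]
        V = 40.0                         # drive voltage [V]

        F = EPS0 * A * V**2 / (2 * g**2) # attractive force on the suspended foil [N]
        print(f"electrostatic force ~ {F*1e6:.2f} uN")
        # Note: a simple spring-suspended plate pulls in once the deflection exceeds
        # about one third of the initial gap, which bounds the usable stroke.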

  18. Measuring Diameters Of Large Vessels

    NASA Technical Reports Server (NTRS)

    Currie, James R.; Kissel, Ralph R.; Oliver, Charles E.; Smith, Earnest C.; Redmon, John W., Sr.; Wallace, Charles C.; Swanson, Charles P.

    1990-01-01

    Computerized apparatus produces accurate results quickly. Apparatus measures diameter of tank or other large cylindrical vessel, without prior knowledge of exact location of cylindrical axis. Produces plot of inner circumference, estimate of true center of vessel, data on radius, diameter of best-fit circle, and negative and positive deviations of radius from circle at closely spaced points on circumference. Eliminates need for time-consuming and error-prone manual measurements.

  19. Large Component Removal/Disposal

    SciTech Connect

    Wheeler, D. M.

    2002-02-27

    This paper describes the removal and disposal of the large components from Maine Yankee Atomic Power Plant. The large components discussed include the three steam generators, pressurizer, and reactor pressure vessel. Two separate Exemption Requests, which included radiological characterizations, shielding evaluations, structural evaluations and transportation plans, were prepared and issued to the DOT for approval to ship these components; the first was for the three steam generators and one pressurizer, the second was for the reactor pressure vessel. Both Exemption Requests were submitted to the DOT in November 1999. The DOT approved the Exemption Requests in May and July of 2000, respectively. The steam generators and pressurizer have been removed from Maine Yankee and shipped to the processing facility. They were removed from Maine Yankee's Containment Building, loaded onto specially designed skid assemblies, transported onto two separate barges, tied down to the barges, then shipped 2750 miles to Memphis, Tennessee for processing. The Reactor Pressure Vessel Removal Project is currently under way and scheduled to be completed by Fall of 2002. The planning, preparation and removal of these large components has required extensive efforts in planning and implementation on the part of all parties involved.

  20. Extremely Large Cusp Diamagnetic Cavities

    NASA Astrophysics Data System (ADS)

    Chen, J.; Fritz, T. A.

    2002-05-01

    Extremely large diamagnetic cavities with sizes as large as 6 Re have been observed in the dayside high-altitude cusp regions. Some of the diamagnetic cavities were independent of the IMF directions, which is unexpected by the current MHD (or ISM) models, suggesting that the cusp diamagnetic cavities are different from the magnetospheric sash; this provides a challenge to the existing MHD (or ISM) models. Associated with these cavities are ions with energies from 40 keV up to 8 MeV. The charge state distribution of these cusp cavity ions was indicative of their seed populations being a mixture of the ionospheric and the solar wind particles. The intensities of the cusp cavity energetic ions were observed to increase by as much as four orders of magnitude. During the high solar wind pressure period on April 21, 1999, the POLAR spacecraft observed lower ion flux in the dayside high-latitude magnetosheath than in the neighbouring cusp cavities. These observations indicate that the dayside high-altitude cusp diamagnetic cavity is a key region for transferring the solar wind energy, mass, and momentum into the Earth's magnetosphere. These energetic particles in the cusp diamagnetic cavity, together with the cusp's connectivity, have significant global impacts on geospace environment research and will shed light on the long-standing unsolved fundamental issue of the origins of the energetic particles in the ring current and in upstream ion events.

  1. Extremely large cusp diamagnetic cavities

    NASA Astrophysics Data System (ADS)

    Chen, J.; Fritz, T.; Siscoe, G.

    Extremely large diamagnetic cavities with sizes as large as 6 Re have been observed in the dayside high-altitude cusp regions. These diamagnetic cavities are present there day after day. Some of the diamagnetic cavities have been observed on the morningside during intervals when the IMF By component was positive (duskward), suggesting that the cusp diamagnetic cavities are different from the magnetospheric sash predicted by MHD simulations. Associated with these cavities are ions with energies from 40 keV up to 8 MeV. The charge state distribution of these cusp cavity ions was indicative of their seed populations being a mixture of the ionospheric and the solar wind particles. The intensities of the cusp cavity energetic ions were observed to increase by as much as four orders of magnitude. These observations indicate that the dayside high-altitude cusp diamagnetic cavity is a key region for transferring the solar wind energy, mass, and momentum into the Earth's magnetosphere. These energetic particles in the cusp diamagnetic cavity, together with the cusp's connectivity to the entire magnetopause, may have significant global impacts on the geospace environment. They will possibly shed light on the long-standing unsolved fundamental issue of the origins of the energetic particles in the ring current and in the regions upstream of the subsolar magnetopause where energetic ion events are frequently observed.

  2. Deflectometric measurement of large mirrors

    NASA Astrophysics Data System (ADS)

    Olesch, Evelyn; Häusler, Gerd; Wörnlein, André; Stinzing, Friedrich; van Eldik, Christopher

    2014-06-01

    We discuss the inspection of large-sized, spherical mirror tiles by 'Phase Measuring Deflectometry' (PMD). About 10 000 of such mirror tiles, each satisfying strict requirements regarding the spatial extent of the point-spread-function (PSF), are planned to be installed on the Cherenkov Telescope Array (CTA), a future ground-based instrument to observe the sky in very high energy gamma-rays. Owing to their large radii of curvature of up to 60 m, a direct PSF measurement of these mirrors with concentric geometry requires large space. We present a PMD sensor with a footprint of only 5×2×1.2 m3 that overcomes this limitation. The sensor intrinsically acquires the surface slope; the shape data are calculated by integration. In this way, the PSF can be calculated for real case scenarios, e.g., when the light source is close to infinity and off-axis. The major challenge is the calibration of the PMD sensor, specifically because the PSF data have to be reconstructed from different camera views. The calibration of the setup is described, and measurements presented and compared to results obtained with the direct approach.
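
    The slope-to-shape step mentioned above can be illustrated in one dimension: integrate the measured slopes to recover the sag profile and compare it with the nominal sphere. Real PMD reconstructs a 2D surface from two slope maps; the tile width and sampling below are placeholders.

        # Tiny 1D illustration of slope-to-shape integration: a deflectometric
        # sensor measures slopes, and the height profile is recovered by
        # integration. Tile width and sampling are illustrative placeholders.
        import numpy as np

        R_mirror = 60.0                            # radius of curvature [m] (as quoted)
        x = np.linspace(-0.6, 0.6, 1201)           # 1.2 m wide profile, 1 mm sampling [m]

        slope_meas = x / R_mirror                  # measured slope of a shallow sphere
        # integrate slopes (cumulative trapezoid) to recover the sag profile
        h = np.concatenate(([0.0], np.cumsum(0.5 * (slope_meas[1:] + slope_meas[:-1])
                                             * np.diff(x))))
        h -= h.min()                               # reference the profile to its vertex

        h_true = x**2 / (2 * R_mirror)             # parabolic approximation of the sphere
        print(f"max sag ~ {h.max()*1e3:.2f} mm, "
              f"reconstruction error ~ {np.abs(h - h_true).max()*1e9:.1f} nm")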

  3. Large wood recruitment and transport during large floods: A review

    NASA Astrophysics Data System (ADS)

    Comiti, F.; Lucía, A.; Rickenmann, D.

    2016-09-01

    Large wood (LW) elements transported during large floods are long known to have the capacity to induce dangerous obstructions along the channel network, mostly at bridges and at hydraulic structures such as weirs. However, our current knowledge of wood transport dynamics during high-magnitude flood events is still very scarce, mostly because these are (locally) rare and thus unlikely to be directly monitored. Therefore, post-event surveys are invaluable ways to get insights (although indirectly) on LW recruitment processes, transport distance, and factors inducing LW deposition - all aspects that are crucial for the proper management of river basins related to flood hazard mitigation. This paper presents a review of the (quite limited) literature available on LW transport during large floods, drawing extensively on the authors' own experience in mountain and piedmont rivers, published and unpublished. The overall picture emerging from these studies points to a high, catchment-specific variability in all the different processes affecting LW dynamics during floods. Specifically, in the LW recruitment phase, the relative floodplain (bank erosion) vs. hillslope (landslide and debris flows) contribution in mountain rivers varies substantially, as it relates to the extent of channel widening (which depends on many variables itself) but also to the hillslope-channel connectivity of LW mobilized on the slopes. As to the LW transport phase within the channel network, it appears to be widely characterized by supply-limited conditions; whereby LW transport rates (and thus volumes) are ultimately constrained by the amount of LW that is made available to the flow. Indeed, LW deposition during floods was mostly (in terms of volume) observed at artificial structures (bridges) in all the documented events. This implies that the estimation of LW recruitment and the assessment of clogging probabilities for each structure (for a flood event of given magnitude) are the most important

  4. Large-bore pipe decontamination

    SciTech Connect

    Ebadian, M.A.

    1998-01-01

    The decontamination and decommissioning (D and D) of 1200 buildings within the US Department of Energy-Office of Environmental Management (DOE-EM) Complex will require the disposition of miles of pipe. The disposition of large-bore pipe, in particular, presents difficulties in the area of decontamination and characterization. The pipe is potentially contaminated internally as well as externally. This situation requires a system capable of decontaminating and characterizing both the inside and outside of the pipe. Current decontamination and characterization systems are not designed for application to this geometry, making the direct disposal of piping systems necessary in many cases. The pipe often creates voids in the disposal cell, which requires the pipe to be cut in half or filled with a grout material. These methods are labor intensive and costly to perform on large volumes of pipe. Direct disposal does not take advantage of recycling, which could provide monetary dividends. To facilitate the decontamination and characterization of large-bore piping and thereby reduce the volume of piping required for disposal, a detailed analysis will be conducted to document the pipe remediation problem set; determine potential technologies to solve this remediation problem set; design and laboratory test potential decontamination and characterization technologies; fabricate a prototype system; provide a cost-benefit analysis of the proposed system; and transfer the technology to industry. This report summarizes the activities performed during fiscal year 1997 and describes the planned activities for fiscal year 1998. Accomplishments for FY97 include the development of the applicable and relevant and appropriate regulations, the screening of decontamination and characterization technologies, and the selection and initial design of the decontamination system.

  5. Radiosurgery for Large Brain Metastases

    SciTech Connect

    Han, Jung Ho; Kim, Dong Gyu; Chung, Hyun-Tai; Paek, Sun Ha; Park, Chul-Kee; Jung, Hee-Won

    2012-05-01

    Purpose: To determine the efficacy and safety of radiosurgery in patients with large brain metastases treated with radiosurgery. Patients and Methods: Eighty patients with large brain metastases (>14 cm³) were treated with radiosurgery between 1998 and 2009. The mean age was 59 ± 11 years, and 49 (61.3%) were men. Neurologic symptoms were identified in 77 patients (96.3%), and 30 (37.5%) exhibited a dependent functional status. The primary disease was under control in 36 patients (45.0%), and 44 (55.0%) had a single lesion. The mean tumor volume was 22.4 ± 8.8 cm³, and the mean marginal dose prescribed was 13.8 ± 2.2 Gy. Results: The median survival time from radiosurgery was 7.9 months (95% confidence interval [CI], 5.343-10.46), and the 1-year survival rate was 39.2%. Functional improvement within 1-4 months or the maintenance of the initial independent status was observed in 48 (60.0%) and 20 (25.0%) patients after radiosurgery, respectively. Control of the primary disease, a marginal dose of ≥11 Gy, and a tumor volume ≥26 cm³ were significantly associated with overall survival (hazard ratio, 0.479; p = .018; 95% CI, 0.261-0.880; hazard ratio, 0.350; p = .004; 95% CI, 0.171-0.718; hazard ratio, 2.307; p = .006; 95% CI, 1.274-4.180, respectively). Unacceptable radiation-related toxicities (Radiation Therapy Oncology Group central nervous system toxicity Grade 3, 4, and 5 in 7, 6, and 2 patients, respectively) developed in 15 patients (18.8%). Conclusion: Radiosurgery seems to have a comparable efficacy with surgery for large brain metastases. However, the rate of radiation-related toxicities after radiosurgery should be considered when deciding on a treatment modality.

  6. Challenges for Large Scale Simulations

    NASA Astrophysics Data System (ADS)

    Troyer, Matthias

    2010-03-01

    With computational approaches becoming ubiquitous the growing impact of large scale computing on research influences both theoretical and experimental work. I will review a few examples in condensed matter physics and quantum optics, including the impact of computer simulations in the search for supersolidity, thermometry in ultracold quantum gases, and the challenging search for novel phases in strongly correlated electron systems. While only a decade ago such simulations needed the fastest supercomputers, many simulations can now be performed on small workstation clusters or even a laptop: what was previously restricted to a few experts can now potentially be used by many. Only part of the gain in computational capabilities is due to Moore's law and improvement in hardware. Equally impressive is the performance gain due to new algorithms - as I will illustrate using some recently developed algorithms. At the same time modern peta-scale supercomputers offer unprecedented computational power and allow us to tackle new problems and address questions that were impossible to solve numerically only a few years ago. While there is a roadmap for future hardware developments to exascale and beyond, the main challenges are on the algorithmic and software infrastructure side. Among the problems that face the computational physicist are: the development of new algorithms that scale to thousands of cores and beyond, a software infrastructure that lifts code development to a higher level and speeds up the development of new simulation programs for large scale computing machines, tools to analyze the large volume of data obtained from such simulations, and as an emerging field provenance-aware software that aims for reproducibility of the complete computational workflow from model parameters to the final figures. Interdisciplinary collaborations and collective efforts will be required, in contrast to the cottage-industry culture currently present in many areas of computational

  7. The Large Synoptic Survey Telescope

    NASA Astrophysics Data System (ADS)

    Axelrod, T. S.

    2006-07-01

    The Large Synoptic Survey Telescope (LSST) is an 8.4 meter telescope with a 10 square degree field and a 3 Gigapixel imager, planned to be on-sky in 2012. It is a dedicated all-sky survey instrument, with several complementary science missions. These include understanding dark energy through weak lensing and supernovae; exploring transients and variable objects; creating and maintaining a solar system map, with particular emphasis on potentially hazardous objects; and increasing the precision with which we understand the structure of the Milky Way. The instrument operates continuously at a rapid cadence, repetitively scanning the visible sky every few nights. The data flow rates from LSST are larger than those from current surveys by roughly a factor of 1000: a few GB/night are typical today, while LSST will deliver a few TB/night. From a computing hardware perspective, this factor of 1000 can be dealt with easily in 2012. The major issues in designing the LSST data management system arise from the fact that the number of people available to critically examine the data will not grow from current levels. This has a number of implications. For example, every large imaging survey today is resigned to the fact that their image reduction pipelines fail at some significant rate. Many of these failures are dealt with by rerunning the reduction pipeline under human supervision, with carefully "tweaked" parameters to deal with the original problem. For LSST, this will no longer be feasible. The problem is compounded by the fact that the processing must of necessity occur on clusters with large numbers of CPUs and disk drives, and with some components connected by long-haul networks. This inevitably results in a significant rate of hardware component failures, which can easily lead to further software failures. Both hardware and software failures must be seen as a routine fact of life rather than rare exceptions to normality.

  8. LHC: The Large Hadron Collider

    SciTech Connect

    Lincoln, Don

    2015-03-04

    The Large Hadron Collider (or LHC) is the world’s most powerful particle accelerator. In 2012, scientists used data taken by it to discover the Higgs boson, before pausing operations for upgrades and improvements. In the spring of 2015, the LHC will return to operations with 163% of the energy it had before and with three times as many collisions per second. It’s essentially a new and improved version of itself. In this video, Fermilab’s Dr. Don Lincoln explains some of the absolutely amazing scientific and engineering properties of this modern scientific wonder.

  9. Uncertainties in large space systems

    NASA Technical Reports Server (NTRS)

    Fuh, Jon-Shen

    1988-01-01

    Uncertainties of a large space system (LSS) can be deterministic or stochastic in nature. The former may result in, for example, an energy spillover problem by which the interaction between unmodeled modes and controls may cause system instability. The stochastic uncertainties are responsible for mode localization and estimation errors, etc. We will address the effects of uncertainties on structural model formulation, use of available test data to verify and modify analytical models before orbiting, and how the system model can be further improved in the on-orbit environment.

  10. Microfluidic large-scale integration.

    PubMed

    Thorsen, Todd; Maerkl, Sebastian J; Quake, Stephen R

    2002-10-18

    We developed high-density microfluidic chips that contain plumbing networks with thousands of micromechanical valves and hundreds of individually addressable chambers. These fluidic devices are analogous to electronic integrated circuits fabricated using large-scale integration. A key component of these networks is the fluidic multiplexor, which is a combinatorial array of binary valve patterns that exponentially increases the processing power of a network by allowing complex fluid manipulations with a minimal number of inputs. We used these integrated microfluidic networks to construct the microfluidic analog of a comparator array and a microfluidic memory storage device whose behavior resembles random-access memory. PMID:12351675
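
    The combinatorial addressing idea can be sketched as follows: give each of n flow channels a binary address and implement each address bit with a complementary pair of control lines, so roughly 2·log2(n) control channels select one of n flow channels. This is an illustration of the counting argument only, not the actual valve layout.

        # Sketch of binary multiplexer addressing: each flow channel gets a binary
        # address; each address bit uses a complementary pair of control lines, so
        # ~2*log2(n) control channels address n flow channels. Counting argument
        # only, not the actual valve layout.
        from math import ceil, log2

        def multiplexer(n_flow):
            bits = ceil(log2(n_flow))
            n_control = 2 * bits
            # For bit b, control line 2b blocks channels whose bit b is 0 and line
            # 2b+1 blocks channels whose bit b is 1; pressurizing one line per bit
            # leaves exactly the selected channel open.
            table = {}
            for ch in range(n_flow):
                closed = [2 * b + (1 - ((ch >> b) & 1)) for b in range(bits)]
                table[ch] = sorted(closed)
            return n_control, table

        n_control, table = multiplexer(8)
        print(f"8 flow channels need {n_control} control lines")
        for ch, closed in table.items():
            print(f"select channel {ch}: pressurize control lines {closed}")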

  11. Large spin systematics in CFT

    NASA Astrophysics Data System (ADS)

    Alday, Luis F.; Bissi, Agnese; Lukowski, Tomasz

    2015-11-01

    Using conformal field theory (CFT) arguments we derive an infinite number of constraints on the large spin expansion of the anomalous dimensions and structure constants of higher spin operators. These arguments rely only on analyticity, unitarity, crossing-symmetry and the structure of the conformal partial wave expansion. We obtain results both for perturbative CFT, to all orders in the perturbation parameter, and non-perturbatively. For the case of conformal gauge theories this provides a proof of the reciprocity principle to all orders in perturbation theory and provides a new "reciprocity" principle for structure constants. We argue that these results extend also to non-conformal theories.

  12. Large block test status report

    SciTech Connect

    Wilder, D.G.; Lin, W.; Blair, S.C.

    1997-08-26

    This report is intended to serve as a status report, which essentially transmits the data that have been collected to date on the Large Block Test (LBT). The analyses of data will be performed during FY98, and then a complete report will be prepared. This status report includes introductory material that is not needed merely to transmit data but is available at this time and therefore included. As such, this status report will serve as the template for the future report, and the information is thus preserved.

  13. the Large Aperture GRB Observatory

    SciTech Connect

    Bertou, Xavier

    2009-04-30

    The Large Aperture GRB Observatory (LAGO) aims at the detection of high energy photons from Gamma Ray Bursts (GRB) using the single particle technique (SPT) in ground based water Cherenkov detectors (WCD). To reach a reasonable sensitivity, high altitude mountain sites have been selected in Mexico (Sierra Negra, 4550 m a.s.l.), Bolivia (Chacaltaya, 5300 m a.s.l.) and Venezuela (Merida, 4765 m a.s.l.). We report on the project's progress and its first operation at high altitude, the search for bursts in 6 months of preliminary data, as well as the search for a signal at ground level when satellites report a burst.

  14. Large-mode enhancement cavities.

    PubMed

    Carstens, Henning; Holzberger, Simon; Kaster, Jan; Weitenberg, Johannes; Pervak, Volodymyr; Apolonski, Alexander; Fill, Ernst; Krausz, Ferenc; Pupeza, Ioachim

    2013-05-01

    In passive enhancement cavities the achievable power level is limited by mirror damage. Here, we address the design of robust optical resonators with large spot sizes on all mirrors, a measure that promises to mitigate this limitation by decreasing both the intensity and the thermal gradient on the mirror surfaces. We introduce a misalignment sensitivity metric to evaluate the robustness of resonator designs. We identify the standard bow-tie resonator operated close to the inner stability edge as the most robust large-mode cavity and implement this cavity with two spherical mirrors with 600 mm radius of curvature, two plane mirrors and a round trip length of 1.2 m, demonstrating a stable power enhancement of near-infrared laser light by a factor of 2000. Beam radii of 5.7 mm × 2.6 mm (sagittal × tangential 1/e² intensity radius) on all mirrors are obtained. We propose a simple all-reflective ellipticity compensation scheme. This will enable a significant increase of the attainable power and intensity levels in enhancement cavities. PMID:23670017

  15. Large aperture diffractive space telescope

    DOEpatents

    Hyde, Roderick A.

    2001-01-01

    A large (10's of meters) aperture space telescope including two separate spacecraft--an optical primary objective lens functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart with the eyepiece directly behind the magnifying glass "aiming" at an intended target with their relative orientation determining the optical axis of the telescope and hence the targets being observed. The objective lens includes a very large-aperture, very-thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the objective lens, gathering up the incoming light and converting it to high quality images. The positions of the two spacecraft are controlled both to maintain a good optical focus and to point at desired targets which may be either earthbound or celestial.

  16. How Large Asexual Populations Adapt

    NASA Astrophysics Data System (ADS)

    Desai, Michael

    2007-03-01

    We often think of beneficial mutations as being rare, and of adaptation as a sequence of selected substitutions: a beneficial mutation occurs, spreads through a population in a selective sweep, then later another beneficial mutation occurs, and so on. This simple picture is the basis for much of our intuition about adaptive evolution, and underlies a number of practical techniques for analyzing sequence data. Yet many large and mostly asexual populations -- including a wide variety of unicellular organisms and viruses -- live in a very different world. In these populations, beneficial mutations are common, and frequently interfere or cooperate with one another as they all attempt to sweep simultaneously. This radically changes the way these populations adapt: rather than an orderly sequence of selective sweeps, evolution is a constant swarm of competing and interfering mutations. I will describe some aspects of these dynamics, including why large asexual populations cannot evolve very quickly and the character of the diversity they maintain. I will explain how this changes our expectations of sequence data, how sex can help a population adapt, and the potential role of ``mutator'' phenotypes with abnormally high mutation rates. Finally, I will discuss comparisons of these predictions with evolution experiments in laboratory yeast populations.

  17. Large amplitude drop shape oscillations

    NASA Technical Reports Server (NTRS)

    Trinh, E. H.; Wang, T. G.

    1982-01-01

    An experimental study of large amplitude drop shape oscillation was conducted in immiscible liquid systems and with levitated free liquid drops in air. In liquid-liquid systems the results indicate the existence of familiar characteristics of nonlinear phenomena. The resonance frequency of the fundamental quadrupole mode of stationary, low viscosity Silicone oil drops acoustically levitated in water falls to noticeably lower values as the amplitude of oscillation is increased. A typical, experimentally determined relative frequency decrease for a 0.5 cubic centimeter drop would be about 10% when the maximum deformed shape is characterized by a major to minor axial ratio of 1.9. On the other hand, no change in the fundamental mode frequency could be detected for 1 mm drops levitated in air. The experimental data for the decay constant of the quadrupole mode of drops immersed in a liquid host indicate a slight increase for larger oscillation amplitudes. A qualitative investigation of the internal fluid flows for such drops revealed the existence of steady internal circulation within drops oscillating in the fundamental and higher modes. The flow field configuration in the outer host liquid is also significantly altered when the drop oscillation amplitude becomes large.

  18. Electron Collisions with Large Molecules

    NASA Astrophysics Data System (ADS)

    McKoy, Vincent

    2006-10-01

    In recent years, interest in electron-molecule collisions has increasingly shifted to large molecules. Applications within the semiconductor industry, for example, require electron collision data for molecules such as perfluorocyclobutane, while almost all biological applications involve macromolecules such as DNA. A significant development in recent years has been the realization that slow electrons can directly damage DNA. This discovery has spurred studies of low-energy collisions with the constituents of DNA, including the bases, deoxyribose, the phosphate, and larger moieties assembled from them. In semiconductor applications, a key goal is development of electron cross section sets for plasma chemistry modeling, while biological studies are largely focused on understanding the role of localized resonances in inducing DNA strand breaks. Accurate calculations of low-energy electron collisions with polyatomic molecules are computationally demanding because of the low symmetry and inherent many-electron nature of the problem; moreover, the computational requirements scale rapidly with the size of the molecule. To pursue such studies, we have adapted our computational procedure, known as the Schwinger multichannel method, to run efficiently on highly parallel computers. In this talk, we will present some of our recent results for fluorocarbon etchants used in the semiconductor industry and for constituents of DNA and RNA. In collaboration with Carl Winstead, California Institute of Technology.

  19. Large phased-array radars

    SciTech Connect

    Brookner, D.E.

    1988-12-15

    Large phased-array radars can play a very important part in arms control. They can be used to determine the number of RVs being deployed, the type of targeting of the RVs (the same or different targets), the shape of the deployed objects, and possibly the weight and yields of the deployed RVs. They can provide this information at night as well as during the day and during rain and cloud covered conditions. The radar can be on the ground, on a ship, in an airplane, or space-borne. Airborne and space-borne radars can provide high resolution map images of the ground for reconnaissance, of anti-ballistic missile (ABM) ground radar installations, missile launch sites, and tactical targets such as trucks and tanks. The large ground based radars can have microwave carrier frequencies or be at HF (high frequency). For a ground-based HF radar the signal is reflected off the ionosphere so as to provide over-the-horizon (OTH) viewing of targets. OTH radars can potentially be used to monitor stealth targets and missile traffic.

  20. Mesoscale Ocean Large Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Pearson, Brodie; Fox-Kemper, Baylor; Bachman, Scott; Bryan, Frank

    2015-11-01

    The highest resolution global climate models (GCMs) can now resolve the largest scales of mesoscale dynamics in the ocean. This has the potential to increase the fidelity of GCMs. However, the effects of the smallest, unresolved, scales of mesoscale dynamics must still be parametrized. One such family of parametrizations is mesoscale ocean large eddy simulations (MOLES), but the effects of including MOLES in a GCM are not well understood. In this presentation, several MOLES schemes are implemented in a mesoscale-resolving GCM (CESM), and the resulting flow is compared with that produced by more traditional sub-grid parametrizations. Large eddy simulation (LES) is used to simulate flows where the largest scales of turbulent motion are resolved, but the smallest scales are not resolved. LES has traditionally been used to study 3D turbulence, but recently it has also been applied to idealized 2D and quasi-geostrophic (QG) turbulence. The MOLES presented here are based on 2D and QG LES schemes.
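
    As a purely generic illustration of the LES closure idea referred to above (not the specific MOLES schemes tested in CESM), the classic Smagorinsky model estimates an eddy viscosity from the resolved strain rate. The Smagorinsky constant and grid values below are assumed for illustration.

        import numpy as np

        # Generic 2D Smagorinsky-type eddy viscosity: nu_t = (Cs * Delta)^2 * |S|,
        # where |S| is the magnitude of the resolved strain-rate tensor. This is a
        # minimal sketch of the LES closure idea, not a MOLES scheme; Cs and the
        # grid spacing are assumed values.

        def smagorinsky_viscosity(u, v, dx, dy, cs=0.17):
            """Eddy viscosity field for resolved velocities u, v on a uniform grid."""
            dudx = np.gradient(u, dx, axis=1)
            dudy = np.gradient(u, dy, axis=0)
            dvdx = np.gradient(v, dx, axis=1)
            dvdy = np.gradient(v, dy, axis=0)
            # |S| = sqrt(2 S_ij S_ij) for the 2D strain-rate tensor
            s_mag = np.sqrt(2.0 * (dudx**2 + dvdy**2) + (dudy + dvdx)**2)
            delta = np.sqrt(dx * dy)          # filter width tied to the grid scale
            return (cs * delta)**2 * s_mag

        if __name__ == "__main__":
            # Example: a zonal shear layer on a 64 x 64 grid with 10 km spacing
            n, dx = 64, 1.0e4
            y = np.arange(n) * dx
            yy, xx = np.meshgrid(y, y, indexing="ij")
            u = 0.5 * np.tanh((yy - y.mean()) / 5.0e4)   # m/s
            v = np.zeros_like(u)
            nu_t = smagorinsky_viscosity(u, v, dx, dx)
            print(f"max eddy viscosity ~ {nu_t.max():.2f} m^2/s")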

  1. Histotripsy Liquefaction of Large Hematomas.

    PubMed

    Khokhlova, Tatiana D; Monsky, Wayne L; Haider, Yasser A; Maxwell, Adam D; Wang, Yak-Nam; Matula, Thomas J

    2016-07-01

    Intra- and extra-muscular hematomas result from repetitive injury as well as sharp and blunt limb trauma. The clinical consequences can be serious, including debilitating pain and functional deficit. There are currently no short-term treatment options for large hematomas, only lengthy conservative treatment. The goal of this work was to evaluate the feasibility of a high intensity focused ultrasound (HIFU)-based technique, termed histotripsy, for rapid (within a clinically relevant timeframe of 15-20 min) liquefaction of large volume (up to 20 mL) extra-vascular hematomas for subsequent fine-needle aspiration. Experiments were performed using in vitro extravascular hematoma phantoms-fresh bovine blood poured into 50 mL molds and allowed to clot. The resulting phantoms were treated by boiling histotripsy (BH), cavitation histotripsy (CH) or a combination in a degassed water tank under ultrasound guidance. Two different transducers operating at 1 MHz and 1.5 MHz with f-number = 1 were used. The liquefied lysate was aspirated and analyzed by histology and sized in a Coulter Counter. The peak instantaneous power to achieve BH was lower than (at 1.5 MHz) or equal to (at 1 MHz) that which was required to initiate CH. Under the same exposure duration, BH-induced cavities were one and a half to two times larger than the CH-induced cavities, but the CH-induced cavities were more regularly shaped, facilitating easier aspiration. The lysates contained a small amount of debris larger than 70 μm, and 99% of particulates were smaller than 10 μm. A combination treatment of BH (for initial debulking) and CH (for liquefaction of small residual fragments) yielded 20 mL of lysate within 17.5 minutes of treatment and was found to be most optimal for liquefaction of large extravascular hematomas. PMID:27126244

  2. Large Space Antenna Systems Technology, 1984

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1985-01-01

    Mission applications for large space antenna systems; large space antenna structural systems; materials and structures technology; structural dynamics and control technology, electromagnetics technology, large space antenna systems and the Space Station; and flight test and evaluation were examined.

  3. Large Aperture Electrostatic Dust Detector

    SciTech Connect

    C.H. Skinner, R. Hensley, and A.L. Roquemore

    2007-10-09

    Diagnosis and management of dust inventories generated in next-step magnetic fusion devices is necessary for their safe operation. A novel electrostatic dust detector, based on a fine grid of interlocking circuit traces biased to 30 or 50 V, has been developed for the detection of dust particles on remote surfaces in air and vacuum environments. Impinging dust particles create a temporary short circuit and the resulting current pulse is recorded by counting electronics. Up to 90% of the particles are ejected from the grid or vaporized, suggesting the device may be useful for controlling dust inventories. We report measurements of the sensitivity of a large area (5x5 cm) detector to microgram quantities of dust particles and review its applications to contemporary tokamaks and ITER.

  4. Large area pulsed solar simulator

    NASA Technical Reports Server (NTRS)

    Kruer, Mark A. (Inventor)

    1999-01-01

    An advanced solar simulator illuminates the surface of a very large solar array, such as one twenty feet by twenty feet in area, from a distance of about twenty-six feet with an essentially uniform intensity field of pulsed light of an intensity of one AM0, enabling the solar array to be efficiently tested with light that emulates the sun. Light modifiers sculpt a portion of the light generated by an electrically powered high power Xenon lamp and together with direct light from the lamp provide uniform intensity illumination throughout the solar array, compensating for the square law and cosine law reduction in direct light intensity, particularly at the corner locations of the array. At any location within the array the sum of the direct light and reflected light is essentially constant.
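
    The square-law and cosine-law reductions mentioned above can be quantified with simple geometry. The sketch below treats the lamp as a single point source (a simplification; the actual simulator also adds sculpted and reflected light) and uses only the 20-foot array size and 26-foot throw distance quoted in the record.

        import math

        # Direct irradiance from a point source falls off as cos(theta) / r^2.
        # This simplified sketch compares the array center with a corner for a
        # 20 ft x 20 ft array lit from 26 ft away.

        def relative_direct_irradiance(offset_ft: float, distance_ft: float) -> float:
            """Direct irradiance at a point offset laterally from the array center,
            relative to the on-axis value, for a point source at distance_ft."""
            r = math.hypot(distance_ft, offset_ft)       # slant range to the point
            cos_theta = distance_ft / r                  # incidence-angle (cosine law) factor
            return cos_theta * (distance_ft / r) ** 2    # cosine law times inverse square

        if __name__ == "__main__":
            corner_offset = math.hypot(10.0, 10.0)       # corner of a 20 ft square
            rel = relative_direct_irradiance(corner_offset, 26.0)
            print(f"corner receives ~{rel:.0%} of the on-axis direct irradiance")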

  5. Adaptive Optics for Large Telescopes

    SciTech Connect

    Olivier, S

    2008-06-27

    The use of adaptive optics was originally conceived by astronomers seeking to correct the blurring of images made with large telescopes due to the effects of atmospheric turbulence. The basic idea is to use a device, a wave front corrector, to adjust the phase of light passing through an optical system, based on some measurement of the spatial variation of the phase transverse to the light propagation direction, using a wave front sensor. Although the original concept was intended for application to astronomical imaging, the technique can be more generally applied. For instance, adaptive optics systems have been used for several decades to correct for aberrations in high-power laser systems. At Lawrence Livermore National Laboratory (LLNL), the world's largest laser system, the National Ignition Facility, uses adaptive optics to correct for aberrations in each of the 192 beams, all of which must be precisely focused on a millimeter scale target in order to perform nuclear physics experiments.

  6. Large scale topography of Io

    NASA Technical Reports Server (NTRS)

    Gaskell, R. W.; Synnott, S. P.

    1987-01-01

    To investigate the large scale topography of the Jovian satellite Io, both limb observations and stereographic techniques applied to landmarks are used. The raw data for this study consists of Voyager 1 images of Io, 800x800 arrays of picture elements each of which can take on 256 possible brightness values. In analyzing this data it was necessary to identify and locate landmarks and limb points on the raw images, remove the image distortions caused by the camera electronics and translate the corrected locations into positions relative to a reference geoid. Minimizing the uncertainty in the corrected locations is crucial to the success of this project. In the highest resolution frames, an error of a tenth of a pixel in image space location can lead to a 300 m error in true location. In the lowest resolution frames, the same error can lead to an uncertainty of several km.

  7. Safe handling of large animals.

    PubMed

    Grandin, T

    1999-01-01

    The major causes of accidents with cattle, horses, and other grazing animals are: panic due to fear, male dominance aggression, or the maternal aggression of a mother protecting her newborn. Danger is inherent when handling large animals. Understanding their behavior patterns improves safety, but working with animals will never be completely safe. Calm, quiet handling and non-slip flooring are beneficial. Rough handling and excessive use of electric prods increase chances of injury to both people and animals, because fearful animals may jump, kick, or rear. Training animals to voluntarily cooperate with veterinary procedures reduces stress and improves safety. Grazing animals have a herd instinct, and a lone, isolated animal can become agitated. Providing a companion animal helps keep an animal calm. PMID:10329901

  8. Biotherapies in large vessel vasculitis.

    PubMed

    Ferfar, Y; Mirault, T; Desbois, A C; Comarmond, C; Messas, E; Savey, L; Domont, F; Cacoub, P; Saadoun, D

    2016-06-01

    Giant cell arteritis (GCA) and Takayasu's arteritis (TA) are large vessel vasculitis (LVV) and aortic involvement is not uncommon in Behcet's disease (BD) and relapsing polychondritis (RP). Glucocorticosteroids are the mainstay of therapy in LVV. However, a significant proportion of patients have glucocorticoid dependence, serious side effects or refractory disease to steroids and other immunosuppressive treatments such as cyclophosphamide, azathioprine, mycophenolate mofetil and methotrexate. Recent advances in the understanding of the pathogenesis have resulted in the use of biological agents in patients with LVV. Anti-tumor necrosis factor-α drugs seem effective in patients with refractory Takayasu arteritis and vascular BD but have failed to do so in giant cell arteritis. Preliminary reports on the use of the anti-IL6-receptor antibody (tocilizumab) in LVV have been encouraging. The development of new biologic targeted therapies will probably open a promising future for patients with LVV. PMID:26883459

  9. Analysis of large urban fires

    SciTech Connect

    Kang, S.W.; Reitter, T.A.; Takata, A.N.

    1984-11-01

    Fires in urban areas caused by a nuclear burst are analyzed as a first step towards determining their smoke-generation characteristics, which may have grave implications for global-scale climatic consequences. A chain of events and their component processes which would follow a nuclear attack are described. A numerical code is currently being developed to calculate ultimately the smoke production rate for a given attack scenario. Available models for most of the processes are incorporated into the code. Sample calculations of urban fire-development history performed in the code for an idealized uniform city are presented. Preliminary results indicate the importance of the wind, thermal radiation transmission, fuel distributions, and ignition thresholds on the urban fire spread characteristics. Future plans are to improve the existing models and develop new ones to characterize smoke production from large urban fires. 21 references, 18 figures.

  10. Intergalactic shells at large redshift

    NASA Technical Reports Server (NTRS)

    Shull, J. M.; Silk, J.

    1981-01-01

    The intergalactic shells produced by galactic explosions at large redshift, whose interiors cool by inverse Compton scattering off the cosmic background radiation, have a characteristic angular size of about 1 arcmin at peak brightness. At z values lower than 2, the shells typically have a radius of 0.5 Mpc, a velocity of about 50 km/sec, a metal abundance of about 0.0001 of cosmic values, and strong radiation in H I(Lyman-alpha), He II 304 A, and the IR fine-structure lines of C II and Si II. The predicted extragalactic background emission from many shells, strongly peaked toward the UV, sets an upper limit to the number of exploding sources at z values of about 10. Shell absorption lines of H I, C II, Si II, and Fe II, which may be seen at more recent epochs in quasar spectra, may probe otherwise invisible explosions in the early universe.

  11. Sweetwater, Texas Large N Experiment

    NASA Astrophysics Data System (ADS)

    Sumy, D. F.; Woodward, R.; Barklage, M.; Hollis, D.; Spriggs, N.; Gridley, J. M.; Parker, T.

    2015-12-01

    From 7 March to 30 April 2014, NodalSeismic, Nanometrics, and IRIS PASSCAL conducted a collaborative, spatially-dense seismic survey with several thousand nodal short-period geophones complemented by a backbone array of broadband sensors near Sweetwater, Texas. This pilot project demonstrates the efficacy of industry and academic partnerships, and leveraged a larger, commercial 3D survey to collect passive source seismic recordings to image the subsurface. This innovative deployment of a large-N mixed-mode array allows industry to explore array geometries and investigate the value of broadband recordings, while affording academics a dense wavefield imaging capability and an operational model for high volume instrument deployment. The broadband array consists of 25 continuously-recording stations from IRIS PASSCAL and Nanometrics, with an array design that maximized recording of horizontal-traveling seismic energy for surface wave analysis over the primary target area with sufficient offset for imaging objectives at depth. In addition, 2639 FairfieldNodal Zland nodes from NodalSeismic were deployed in three sub-arrays: the outlier, backbone, and active source arrays. The backbone array consisted of 292 nodes that covered the entire survey area, while the outlier array consisted of 25 continuously-recording nodes distributed at a ~3 km distance away from the survey perimeter. Both the backbone and outlier array provide valuable constraints for the passive source portion of the analysis. This project serves as a learning platform to develop best practices in the support of large-N arrays with joint industry and academic expertise. Here we investigate lessons learned from a facility perspective, and present examples of data from the various sensors and array geometries. We will explore first-order results from local and teleseismic earthquakes, and show visualizations of the data across the array. Data are archived at the IRIS DMC under station codes XB and 1B.

  12. Large and small photovoltaic powerplants

    NASA Astrophysics Data System (ADS)

    Cormode, Daniel

    The installed base of photovoltaic power plants in the United States has roughly doubled every 1 to 2 years between 2008 and 2015. The primary economic drivers of this are government mandates for renewable power, falling prices for all PV system components, 3rd party ownership models, and a generous tariff scheme known as net-metering. Other drivers include a desire for decreasing the environmental impact of electricity generation and a desire for some degree of independence from the local electric utility. The result is that in coming years, PV power will move from being a minor niche to a mainstream source of energy. As additional PV power comes online this will create challenges for the electric grid operators. We examine some problems related to large scale adoption of PV power in the United States. We do this by first discussing questions of reliability and efficiency at the PV system level. We measure the output of a fleet of small PV systems installed at Tucson Electric Power, and we characterize the degradation of those PV systems over several years. We develop methods to predict energy output from PV systems and quantify the impact of negatives such as partial shading, inverter inefficiency and malfunction of bypass diodes. Later we characterize the variability from large PV systems, including fleets of geographically diverse utility scale power plants. We also consider the power and energy requirements needed to smooth those systems, both from the perspective of an individual system and as a fleet. Finally we report on experiments from a utility scale PV plus battery hybrid system deployed near Tucson, Arizona where we characterize the ability of this system to produce smoothly ramping power as well as production of ancillary energy services such as frequency response.

  13. Low Cost Large Space Antennas

    NASA Technical Reports Server (NTRS)

    Chmielewski, Artur B.; Freeland, Robert

    1997-01-01

    The mobile communication community could significantly benefit from the availability of low-cost, large space-deployable antennas. A new class of space structures, called inflatable deployable structures, will become an option for this industry in the near future. This new technology recently made significant progress with respect to reducing the risk of flying large inflatable structures in space. This progress can be attributed to the successful space flight of the Inflatable Antenna Experiment in May of 1996, which prompted the initiation of the NASA portion of the joint NASA/DOD coordinated Space Inflatables Program, which will develop the technology to be used in future mobile communications antennas along with other users. The NASA/DOD coordinated Space Inflatables Program was initiated in 1997 as a direct result of the Inflatable Antenna Experiment. The program adds a new NASA initiative to a substantial DOD program that involves developing a series of ground test hardware, starting with 3 meter diameter units and advancing the manufacturing techniques to fabricate a 25 meter ground demonstrator unit with surface accuracy exceeding the requirements for mobile communication applications. Simultaneously, the program will be advancing the state of the art in several important inflatable technology areas, such as developing rigidizable materials for struts and tori and investigating thin film technology issues, such as application of coatings, property measurement and materials processing and assembly techniques. A very important technology area being addressed by the program is deployment control techniques. The program will sponsor activities that will lead to understanding the effects of material strain energy release, residual air in the stowed structure, and the design of the launch restraint and release system needed to control deployment dynamics. Other technology areas directly applicable to developing inflatable mobile communication antennas in the near

  14. Large hole rotary drill performance

    SciTech Connect

    Workman, J.L.; Calder, P.N.

    1996-12-31

    Large hole rotary drilling is one of the most common methods of producing blastholes in open pit mining. Large hole drilling generally refers to diameters from 9 to 17 inch (229 to 432 mm); however, a considerable amount of rotary drilling is done in diameters from 6-1/2 to 9 inch (165 to 229 mm). These smaller diameters are especially prevalent in gold mining and quarrying. Rotary drills are major mining machines having substantial capital cost. Drill bit costs can also be high, depending on the bit type and formation being drilled. To keep unit costs low the drills must perform at a high productivity level. The most important factor in rotary drilling is the penetration rate, and this paper discusses the factors affecting it. An empirical factor is given for calculating the penetration rate based on rock strength, pulldown weight and the RPM. The importance of using modern drill performance monitoring systems to calibrate the penetration equation for specific rock formations is discussed. Adequate air delivered to the bottom of the hole is very important to achieving maximum penetration rates. If there is insufficient bailing velocity, cuttings will not be transported from the bottom of the hole rapidly enough and the penetration rate is very likely to decrease. An expression for the balancing air velocity is given. The amount by which the air velocity must exceed the balancing velocity for effective operation is discussed. The effect of altitude on compressor size is also provided.
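
    The record does not reproduce the paper's empirical penetration-rate factor, so the sketch below uses a commonly quoted Bauer/Calder-type relation with assumed coefficients, purely to show how rock strength, pulldown weight per inch of bit diameter, and rotary speed enter such an estimate. In practice the relation would be recalibrated against drill-monitoring data for each formation, as the abstract notes.

        import math

        # Hedged sketch of an empirical rotary penetration-rate relation of the
        # Bauer/Calder form. The coefficients (61, 28, 300) are commonly quoted
        # textbook values, assumed here rather than taken from the paper.

        def penetration_rate_ft_per_hr(ucs_kpsi: float, pulldown_klb_per_in: float,
                                       rpm: float) -> float:
            """Estimated penetration rate (ft/hr).

            ucs_kpsi            -- uniaxial compressive strength, thousands of psi
            pulldown_klb_per_in -- pulldown weight per inch of bit diameter, kips/in
            rpm                 -- rotary speed, rev/min
            """
            return (61.0 - 28.0 * math.log10(ucs_kpsi)) * pulldown_klb_per_in * (rpm / 300.0)

        if __name__ == "__main__":
            # e.g. 30 kpsi rock, 60,000 lb pulldown on a 9.875 in bit, 80 RPM
            rate = penetration_rate_ft_per_hr(30.0, 60.0 / 9.875, 80.0)
            print(f"estimated penetration rate ~ {rate:.0f} ft/hr")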

  15. Large-Scale Information Systems

    SciTech Connect

    D. M. Nicol; H. R. Ammerlahn; M. E. Goldsby; M. M. Johnson; D. E. Rhodes; A. S. Yoshimura

    2000-12-01

    Large enterprises are ever more dependent on their Large-Scale Information Systems (LSIS), computer systems that are distinguished architecturally by distributed components--data sources, networks, computing engines, simulations, human-in-the-loop control and remote access stations. These systems provide such capabilities as workflow, data fusion and distributed database access. The Nuclear Weapons Complex (NWC) contains many examples of LSIS components, a fact that motivates this research. However, most LSIS in use grew up from collections of separate subsystems that were not designed to be components of an integrated system. For this reason, they are often difficult to analyze and control. The problem is made more difficult by the size of a typical system, its diversity of information sources, and the institutional complexities associated with its geographic distribution across the enterprise. Moreover, there is no integrated approach for analyzing or managing such systems. Indeed, integrated development of LSIS is an active area of academic research. This work developed such an approach by simulating the various components of the LSIS and allowing the simulated components to interact with real LSIS subsystems. This research demonstrated two benefits. First, applying it to a particular LSIS provided a thorough understanding of the interfaces between the system's components. Second, it demonstrated how more rapid and detailed answers could be obtained to questions significant to the enterprise by interacting with the relevant LSIS subsystems through simulated components designed with those questions in mind. In a final, added phase of the project, investigations were made on extending this research to wireless communication networks in support of telemetry applications.

  16. Large and small volcanic eruptions

    NASA Astrophysics Data System (ADS)

    Gudmundsson, Agust; Mohajeri, Nahid

    2013-04-01

    Despite great progress in volcanology in the past decades, we still cannot make reliable forecasts as to the likely size (volume, mass) of an eruption once it has started. Empirical data collected from volcanoes worldwide indicates that the volumes (or masses) of eruptive materials in volcanic eruptions are heavy-tailed. This means that most of the volumes erupted from a given magma chamber are comparatively small. Yet, the same magma chamber can, under certain conditions, squeeze out large volumes of magma. To know these conditions is of fundamental importance for forecasting the likely size of an eruption. Thermodynamics provides the basis for understanding the elastic energy available to (i) propagate an injected dyke from the chamber and to the surface to feed an eruption, and (ii) squeeze magma out of the chamber during the eruption. The elastic energy consists of two main parts: first, the strain energy stored in the volcano before magma-chamber rupture and dyke injection, and, second, the work done through displacement of the flanks of the volcano (or the margins of a rift zone) and the expansion and shrinkage of the magma chamber itself. Other forms of energy in volcanoes - thermal, seismic, kinetic - are generally important but less so for squeezing magma out of a chamber during an eruption. Here we suggest that for (basaltic) eruptions in rift zones the strain energy is partly related to minor doming above the reservoir, and partly to stretching of the rift zone before rupture. The larger the reservoir, the larger is the stored strain energy before eruption. However, for the eruption to be really large, the strain energy has to accumulate in the entire crustal segment above the reservoir and there will be additional energy input into the system during the eruption which relates to the displacements of the boundary of the rift-zone segment. This is presumably why feeder dykes commonly propagate laterally at the surface following the initial fissure

  17. Large Block Test Final Report

    SciTech Connect

    Lin, W

    2001-12-01

    This report documents the Large-Block Test (LBT) conducted at Fran Ridge near Yucca Mountain, Nevada. The LBT was a thermal test conducted on an exposed block of middle non-lithophysal Topopah Spring tuff (Tptpmn) and was designed to assist in understanding the thermal-hydrological-mechanical-chemical (THMC) processes associated with heating and then cooling a partially saturated fractured rock mass. The LBT was unique in that it was a large (3 x 3 x 4.5 m) block with top and sides exposed. Because the block was exposed at the surface, boundary conditions on five of the six sides of the block were relatively well known and controlled, making this test both easier to model and easier to monitor. This report presents a detailed description of the test as well as analyses of the data and conclusions drawn from the test. The rock block that was tested during the LBT was exposed by excavation and removal of the surrounding rock. The block was characterized and instrumented, and the sides were sealed and insulated to inhibit moisture and heat loss. Temperature on the top of the block was also controlled. The block was heated for 13 months, during which time temperature, moisture distribution, and deformation were monitored. After the test was completed and the block cooled down, a series of boreholes were drilled, and one of the heater holes was over-cored to collect samples for post-test characterization of mineralogy and mechanical properties. Section 2 provides background on the test. Section 3 lists the test objectives and describes the block site, the site configuration, and measurements made during the test. Section 3 also presents a chronology of events associated with the LBT, characterization of the block, and the pre-heat analyses of the test. Section 4 describes the fracture network contained in the block. Section 5 describes the heating/cooling system used to control the temperature in the block and presents the thermal history of the block during the test

  18. Large Volcanic Rises on Venus

    NASA Technical Reports Server (NTRS)

    Smrekar, Suzanne E.; Kiefer, Walter S.; Stofan, Ellen R.

    1997-01-01

    Large volcanic rises on Venus have been interpreted as hotspots, or the surface manifestation of mantle upwelling, on the basis of their broad topographic rises, abundant volcanism, and large positive gravity anomalies. Hotspots offer an important opportunity to study the behavior of the lithosphere in response to mantle forces. In addition to the four previously known hotspots, Atla, Bell, Beta, and western Eistla Regiones, five new probable hotspots, Dione, central Eistla, eastern Eistla, Imdr, and Themis, have been identified in the Magellan radar, gravity and topography data. These nine regions exhibit a wider range of volcano-tectonic characteristics than previously recognized for venusian hotspots, and have been classified as rift-dominated (Atla, Beta), coronae-dominated (central and eastern Eistla, Themis), or volcano-dominated (Bell, Dione, western Eistla, Imdr). The apparent depths of compensation for these regions range from 65 to 260 km. New estimates of the elastic thickness, using the degree and order 90 spherical harmonic field, are 15-40 km at Bell Regio, and 25 km at western Eistla Regio. Phillips et al. find a value of 30 km at Atla Regio. Numerous models of lithospheric and mantle behavior have been proposed to interpret the gravity and topography signature of the hotspots, with most studies focusing on Atla or Beta Regiones. Convective models with Earth-like parameters result in estimates of the thickness of the thermal lithosphere of approximately 100 km. Models of stagnant lid convection or thermal thinning infer the thickness of the thermal lithosphere to be 300 km or more. Without additional constraints, any of the model fits are equally valid. The thinner thermal lithosphere estimates are most consistent with the volcanic and tectonic characteristics of the hotspots. Estimates of the thermal gradient based on estimates of the elastic thickness also support a relatively thin lithosphere (Phillips et al.). The advantage of larger estimates of

  19. Anthropogenic Triggering of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Mulargia, Francesco; Bizzarri, Andrea

    2014-08-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1-10 MPa, we find that injecting in the subsoil fluids at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults at a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor "foreshocks", since the induction may occur with a delay up to several years.
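
    A rough sense of why sub-0.1 MPa overpressures matter can be had from a static Coulomb effective-stress check, which is a simplification of the time-dependent Tresca-Von Mises/poroelastic analysis described above; all numbers below are assumed for illustration only.

        # Hedged sketch: a static Coulomb effective-stress check, simplifying the
        # time-dependent poroelastic analysis in the paper. It illustrates why an
        # overpressure of ~0.1 MPa can matter on a fault already loaded close to
        # failure. All values are assumed for illustration.

        def pore_pressure_to_failure(shear_mpa: float, normal_mpa: float,
                                     friction: float, cohesion_mpa: float = 0.0) -> float:
            """Extra pore pressure (MPa) needed to reach Coulomb failure:
            tau = cohesion + friction * (sigma_n - p)."""
            return normal_mpa - (shear_mpa - cohesion_mpa) / friction

        if __name__ == "__main__":
            # A fault loaded to within a small fraction of its strength
            shear, normal, mu = 29.95, 50.0, 0.6      # MPa, MPa, static friction
            dp = pore_pressure_to_failure(shear, normal, mu)
            print(f"pore-pressure increase to failure ~ {dp:.2f} MPa")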

  20. Anthropogenic triggering of large earthquakes.

    PubMed

    Mulargia, Francesco; Bizzarri, Andrea

    2014-01-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1-10 MPa, we find that injecting in the subsoil fluids at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults at a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor "foreshocks", since the induction may occur with a delay up to several years. PMID:25156190

  1. Amplification of large artificial chromosomes.

    PubMed Central

    Smith, D R; Smyth, A P; Moir, D T

    1990-01-01

    Yeast artificial chromosome cloning is an attractive technology for genomic mapping studies because very large DNA segments can be readily propagated. However, detailed analyses often require the extensive application of blotting-hybridization techniques because artificial chromosomes are normally present at only one copy per haploid genome. We have developed a cloning vector and host strain that alleviate this problem by permitting copy number amplification of artificial chromosomes. The vector includes a conditional centromere that can be turned on or off by changing the carbon source. Strong selective pressure for extra copies of the artificial chromosome can be applied by selecting for the expression of a heterologous thymidine kinase gene. When this system was used, artificial chromosomes ranging from about 100 to 600 kilobases in size were readily amplified 10- to 20-fold. The selective conditions did not induce obvious rearrangements in any of the clones tested. Reactivation of the centromere in amplified artificial chromosome clones resulted in stable maintenance of an elevated copy number for 20 generations. Applications of copy number control to various aspects of artificial chromosome analysis are addressed. PMID:2236036

  2. Large Scale Homing in Honeybees

    PubMed Central

    Pahl, Mario; Zhu, Hong; Tautz, Jürgen; Zhang, Shaowu

    2011-01-01

    Honeybee foragers frequently fly several kilometres to and from vital resources, and communicate those locations to their nest mates by a symbolic dance language. Research has shown that they achieve this feat by memorizing landmarks and the skyline panorama, using the sun and polarized skylight as compasses and by integrating their outbound flight paths. In order to investigate the capacity of the honeybees' homing abilities, we artificially displaced foragers to novel release spots at various distances up to 13 km in the four cardinal directions. Returning bees were individually registered by a radio frequency identification (RFID) system at the hive entrance. We found that homing rate, homing speed and the maximum homing distance depend on the release direction. Bees released in the east were more likely to find their way back home, and returned faster than bees released in any other direction, due to the familiarity of global landmarks seen from the hive. Our findings suggest that such large scale homing is facilitated by global landmarks acting as beacons, and possibly the entire skyline panorama. PMID:21602920

  3. The Mass of Large Impactors

    NASA Technical Reports Server (NTRS)

    Parisi, M. G.; Brunini, A.

    1996-01-01

    By means of a simplified dynamical model, we have computed the eccentricity change in the orbit of each giant planet, caused by a single, large impact at the end of the accretion process. In order to set an upper bound on this eccentricity change, we have considered the giant planets' present eccentricities as primordial ones. By means of this procedure, we were able to obtain an implicit relation for the impactor masses and maximum velocities. We have estimated by this method the maximum allowed mass to impact Jupiter to be approx. 1.136 x 10^-1, and in the case of Neptune approx. 3.99 x 10^-2 (expressed in units of each planet's final mass). Due to the similar present eccentricities of Saturn, Uranus and Jupiter, the constraint masses and velocities of the bodies to impact them (in units of each planet's final mass and velocity respectively) are almost the same for the three planets. These results are in good agreement with those obtained by Lissauer and Safronov. These bounds might be used to derive the mass distribution of planetesimals in the early solar system.

  4. Natural Selection in Large Populations

    NASA Astrophysics Data System (ADS)

    Desai, Michael

    2011-03-01

    I will discuss theoretical and experimental approaches to the evolutionary dynamics and population genetics of natural selection in large populations. In these populations, many mutations are often present simultaneously, and because recombination is limited, selection cannot act on them all independently. Rather, it can only affect whole combinations of mutations linked together on the same chromosome. Methods common in theoretical population genetics have been of limited utility in analyzing this coupling between the fates of different mutations. In the past few years it has become increasingly clear that this is a crucial gap in our understanding, as sequence data has begun to show that selection appears to act pervasively on many linked sites in a wide range of populations, including viruses, microbes, Drosophila, and humans. I will describe approaches that combine analytical tools drawn from statistical physics and dynamical systems with traditional methods in theoretical population genetics to address this problem, and describe how experiments in budding yeast can help us directly observe these evolutionary dynamics.

  5. How Large Should Whales Be?

    PubMed Central

    Clauset, Aaron

    2013-01-01

    The evolution and distribution of species body sizes for terrestrial mammals is well-explained by a macroevolutionary tradeoff between short-term selective advantages and long-term extinction risks from increased species body size, unfolding above the 2 g minimum size induced by thermoregulation in air. Here, we consider whether this same tradeoff, formalized as a constrained convection-reaction-diffusion system, can also explain the sizes of fully aquatic mammals, which have not previously been considered. By replacing the terrestrial minimum with a pelagic one, at roughly 7000 g, the terrestrial mammal tradeoff model accurately predicts, with no tunable parameters, the observed body masses of all extant cetacean species, including the 175,000,000 g Blue Whale. This strong agreement between theory and data suggests that a universal macroevolutionary tradeoff governs body size evolution for all mammals, regardless of their habitat. The dramatic sizes of cetaceans can thus be attributed mainly to the increased convective heat loss in water, which shifts the species size distribution upward and pushes its right tail into ranges inaccessible to terrestrial mammals. Under this macroevolutionary tradeoff, the largest expected species occurs where the rate at which smaller-bodied species move up into large-bodied niches approximately equals the rate at which extinction removes them. PMID:23342050

  6. Large N_c

    SciTech Connect

    Jenkins, Elizabeth E.

    2009-12-17

    The 1/N_c expansion of QCD with N_c = 3 has been successful in explaining a wide variety of QCD phenomenology. Here I focus on the contracted spin-flavor symmetry of baryons in the large-N_c limit and deviations from spin-flavor symmetry due to corrections suppressed by powers of 1/N_c. Baryon masses provide an important example of the 1/N_c expansion, and successful predictions of masses of heavy-quark baryons continue to be tested by experiment. The ground state charmed baryon masses have all been measured, and five of the eight ground state bottom baryon masses have been found. Results of the 1/N_c expansion can aid in the discovery of the remaining bottom baryons. The brand new measurement of the Ω_b^- mass by the CDF collaboration conflicts with the original D0 discovery value and is in excellent agreement with the prediction of the 1/N_c expansion.

  7. Anthropogenic Triggering of Large Earthquakes

    PubMed Central

    Mulargia, Francesco; Bizzarri, Andrea

    2014-01-01

    The physical mechanism of the anthropogenic triggering of large earthquakes on active faults is studied on the basis of experimental phenomenology, i.e., that earthquakes occur on active tectonic faults, that crustal stress values are those measured in situ and, on active faults, comply with the values of the stress drop measured for real earthquakes, that the static friction coefficients are those inferred on faults, and that the effective triggering stresses are those inferred for real earthquakes. Deriving the conditions for earthquake nucleation as a time-dependent solution of the Tresca-Von Mises criterion applied in the framework of poroelasticity yields that active faults can be triggered by fluid overpressures < 0.1 MPa. Comparing this with the deviatoric stresses at the depth of crustal hypocenters, which are of the order of 1–10 MPa, we find that injecting in the subsoil fluids at the pressures typical of oil and gas production and storage may trigger destructive earthquakes on active faults at a few tens of kilometers. Fluid pressure propagates as slow stress waves along geometric paths operating in a drained condition and can advance the natural occurrence of earthquakes by a substantial amount of time. Furthermore, it is illusory to control earthquake triggering by close monitoring of minor “foreshocks”, since the induction may occur with a delay up to several years. PMID:25156190

  8. Large Angle Satellite Attitude Maneuvers

    NASA Technical Reports Server (NTRS)

    Cochran, J. E.; Junkins, J. L.

    1975-01-01

    Two methods are proposed for performing large angle reorientation maneuvers. The first method is based upon Euler's rotation theorem; an arbitrary reorientation is ideally accomplished by rotating the spacecraft about a line which is fixed in both the body and in space. This scheme has been found to be best suited for the case in which the initial and desired attitude states have small angular velocities. The second scheme is more general in that a general class of transition trajectories is introduced which, in principle, allows transfer between arbitrary orientation and angular velocity states. The method generates transition maneuvers in which the uncontrolled (free) initial and final states are matched in orientation and angular velocity. The forced transition trajectory is obtained by using a weighted average of the unforced forward integration of the initial state and the unforced backward integration of the desired state. The current effort is centered around practical validation of this second class of maneuvers. Of particular concern is enforcement of given control system constraints and methods for suboptimization by proper selection of maneuver initiation and termination times. Analogous reorientation strategies which force smooth transition in angular momentum and/or rotational energy are under consideration.
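
    A minimal sketch of the second scheme's weighted-average construction, using a placeholder torque-free single-axis model and an assumed cosine weighting function (the paper's dynamics and weighting are not given in the record): the free forward trajectory from the initial state is blended with the free backward trajectory from the desired state so that the end points match both orientation and rate.

        import numpy as np

        # Sketch of the weighted-average transition idea: integrate the initial
        # state forward and the desired state backward without control, then blend
        # the two free trajectories with a weight that moves from 0 to 1 over the
        # maneuver. The single-axis dynamics and weighting here are placeholders.

        def free_propagate(theta0, omega0, t):
            """Torque-free single-axis attitude: theta(t) = theta0 + omega0 * t."""
            return theta0 + omega0 * t, np.full_like(t, omega0)

        def transition_trajectory(init, final, duration, n=101):
            t = np.linspace(0.0, duration, n)
            th_f, om_f = free_propagate(init[0], init[1], t)               # forward from start
            th_b, om_b = free_propagate(final[0], final[1], t - duration)  # backward from goal
            w = 0.5 * (1.0 - np.cos(np.pi * t / duration))                 # smooth 0 -> 1 weight
            return t, (1 - w) * th_f + w * th_b, (1 - w) * om_f + w * om_b

        if __name__ == "__main__":
            t, theta, omega = transition_trajectory(init=(0.0, 0.01),
                                                    final=(1.0, -0.02),
                                                    duration=100.0)
            print(f"start: {theta[0]:.3f} rad, {omega[0]:.4f} rad/s")
            print(f"end:   {theta[-1]:.3f} rad, {omega[-1]:.4f} rad/s")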

  9. Facilitating Navigation Through Large Archives

    NASA Technical Reports Server (NTRS)

    Shelton, Robert O.; Smith, Stephanie L.; Troung, Dat; Hodgson, Terry R.

    2005-01-01

    Automated Visual Access (AVA) is a computer program that effectively makes a large collection of information visible in a manner that enables a user to quickly and efficiently locate information resources, with minimal need for conventional keyword searches and perusal of complex hierarchical directory systems. AVA includes three key components: (1) a taxonomy that comprises a collection of words and phrases, clustered according to meaning, that are used to classify information resources; (2) a statistical indexing and scoring engine; and (3) a component that generates a graphical user interface that uses the scoring data to generate a visual map of resources and topics. The top level of an AVA display is a pictorial representation of an information archive. The user enters the depicted archive by clicking on a depiction of a subject-area cluster, selecting a topic from a list, or entering a query into a text box. The resulting display enables the user to view candidate information entities at various levels of detail. Resources are grouped spatially by topic with greatest generality at the top layer and increasing detail with depth. The user can zoom in or out of specific sites or into greater or lesser content detail.

  10. How large should whales be?

    PubMed

    Clauset, Aaron

    2013-01-01

    The evolution and distribution of species body sizes for terrestrial mammals is well-explained by a macroevolutionary tradeoff between short-term selective advantages and long-term extinction risks from increased species body size, unfolding above the 2 g minimum size induced by thermoregulation in air. Here, we consider whether this same tradeoff, formalized as a constrained convection-reaction-diffusion system, can also explain the sizes of fully aquatic mammals, which have not previously been considered. By replacing the terrestrial minimum with a pelagic one, at roughly 7000 g, the terrestrial mammal tradeoff model accurately predicts, with no tunable parameters, the observed body masses of all extant cetacean species, including the 175,000,000 g Blue Whale. This strong agreement between theory and data suggests that a universal macroevolutionary tradeoff governs body size evolution for all mammals, regardless of their habitat. The dramatic sizes of cetaceans can thus be attributed mainly to the increased convective heat loss in water, which shifts the species size distribution upward and pushes its right tail into ranges inaccessible to terrestrial mammals. Under this macroevolutionary tradeoff, the largest expected species occurs where the rate at which smaller-bodied species move up into large-bodied niches approximately equals the rate at which extinction removes them. PMID:23342050

  11. Large Isotope Spectrometer for Astromag

    NASA Technical Reports Server (NTRS)

    Binns, W. R.; Klarmann, J.; Israel, M. H.; Garrard, T. L.; Mewaldt, R. A.; Stone, E. C.; Ormes, J. F.; Streitmatter, R. E.; Rasmussen, I. L.; Wiedenbeck, M. E.

    1990-01-01

    The Large Isotope Spectrometer for Astromag (LISA) is an experiment designed to measure the isotopic composition and energy spectra of cosmic rays for elements extending from beryllium through zinc. The overall objectives of this investigation are to study the origin and evolution of galactic matter; the acceleration, transport, and time scales of cosmic rays in the galaxy; and to search for heavy antinuclei in the cosmic radiation. To achieve these objectives, the LISA experiment will make the first identifications of individual heavy cosmic ray isotopes in the energy range from about 2.5 to 4 GeV/n where relativistic time dilation effects enhance the abundances of radioactive clocks and where the effects of solar modulation and cross-section variations are minimized. It will extend high resolution measurements of individual element abundances and their energy spectra to energies of nearly 1 TeV/n, and has the potential for discovering heavy anti-nuclei which could not have been formed except in extragalactic sources.

  12. Control of large space structures

    NASA Technical Reports Server (NTRS)

    Gran, R.; Rossi, M.; Moyer, H. G.; Austin, F.

    1979-01-01

    The control of large space structures was studied to determine what, if any, limitations are imposed on the size of spacecraft which may be controlled using current control system design technology. For a typical structure in the 35 to 70 meter size category, a control system was designed using actuators that are currently available. The amount of control power required to maintain the vehicle in a stabilized gravity gradient pointing orientation that also damped various structural motions was determined. The moment of inertia and mass properties of this structure were varied to verify that stability and performance were maintained. The study concludes that the structure's size is required to change by at least a factor of two before any stability problems arise. The stability margin that is lost is due to the scaling of the gravity gradient torques (the rigid body control) and as such can easily be corrected by changing the control gains associated with the rigid body control. A secondary conclusion from the study is that the control design that accommodates the structural motions (to damp them) is a little more sensitive than the design that works on attitude control of the rigid body only.

  13. Disorder in large-N theories

    NASA Astrophysics Data System (ADS)

    Aharony, Ofer; Komargodski, Zohar; Yankielowicz, Shimon

    2016-04-01

    We consider Euclidean Conformal Field Theories perturbed by quenched disorder, namely by random fluctuations in their couplings. Such theories are relevant for second-order phase transitions in the presence of impurities or other forms of disorder. Theories with quenched disorder often flow to new fixed points of the renormalization group. We begin with disorder in free field theories. Imry and Ma showed that disordered free fields can only exist for d > 4. For d > 4 we show that disorder leads to new fixed points which are not scale-invariant. We then move on to large-N theories (vector models or gauge theories in the 't Hooft limit). We compute exactly the beta function for the disorder, and the correlation functions of the disordered theory. We generalize the results of Imry and Ma by showing that such disordered theories exist only when disorder couples to operators of dimension Δ > d/4. Sometimes the disordered fixed points are not scale-invariant, and in other cases they have unconventional dependence on the disorder, including non-trivial effects due to irrelevant operators. Holography maps disorder in conformal theories to stochastic differential equations in a higher dimensional space. We use this dictionary to reproduce our field theory results. We also study the leading 1/N corrections, both by field theory methods and by holography. These corrections are particularly important when disorder scales with the number of degrees of freedom.

  14. Chemotaxis of large granular lymphocytes

    SciTech Connect

    Pohajdak, B.; Gomez, J.; Orr, F.W.; Khalil, N.; Talgoy, M.; Greenberg, A.H.

    1986-01-01

    The hypothesis that large granular lymphocytes (LGL) are capable of directed locomotion (chemotaxis) was tested. A population of LGL isolated from discontinuous Percoll gradients migrated along concentration gradients of N-formyl-methionyl-leucyl-phenylalanine (f-MLP), casein, and C5a, well known chemoattractants for polymorphonuclear leukocytes and monocytes, as well as interferon-β and colony-stimulating factor. Interleukin 2, tuftsin, platelet-derived growth factor, and fibronectin were inactive. Migratory responses were greater in Percoll fractions with the highest lytic activity and HNK-1+ cells. The chemotactic response to f-MLP, casein, and C5a was always greater when the chemoattractant was present in greater concentration in the lower compartment of the Boyden chamber. Optimum chemotaxis was observed after a 1 hr incubation using 12 μm nitrocellulose filters. LGL exhibited a high degree of nondirected locomotion when allowed to migrate for longer periods (>2 hr), and when cultured in vitro for 24 to 72 hr in the presence or absence of IL 2-containing phytohemagglutinin-conditioned medium. LGL chemotaxis to f-MLP could be inhibited in a dose-dependent manner by the inactive structural analog CBZ-phe-met, and the RNK tumor line specifically bound f-ML[³H]P, suggesting that LGL bear receptors for the chemotactic peptide.

  15. Large cities are less green

    NASA Astrophysics Data System (ADS)

    Oliveira, Erneson A.; Andrade, José S.; Makse, Hernán A.

    2014-02-01

    We study how urban quality evolves as a result of carbon dioxide emissions as urban agglomerations grow. We employ a bottom-up approach combining two unprecedented microscopic datasets on population and carbon dioxide emissions in the continental US. We first aggregate settlements that are close to each other into cities using the City Clustering Algorithm (CCA), defining cities beyond their administrative boundaries. Then, we use data on CO2 emissions at a fine geographic scale to determine the total emissions of each city. We find a superlinear scaling behavior, expressed by a power law, between CO2 emissions and city population, with an average allometric exponent β = 1.46 across all cities in the US. This result suggests that the high productivity of large cities comes at the expense of a proportionally larger amount of emissions compared to small cities. Furthermore, our results are substantially different from those obtained with the standard administrative definition of cities, i.e. the Metropolitan Statistical Area (MSA). Specifically, MSAs display isometric scaling of emissions, and we argue that this discrepancy is due to the overestimation of MSA areas. The results suggest that allometric studies based on administrative boundaries to define cities may suffer from endogeneity bias.
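
    As a rough illustration of what the superlinear exponent quoted above implies (a sketch only; the prefactor E0 is an arbitrary placeholder, not a value from the study), doubling a city's population multiplies its emissions by 2^1.46 ≈ 2.75 rather than by 2:

        # Python sketch of superlinear allometric scaling E = E0 * P**beta.
        beta = 1.46        # exponent reported above
        E0 = 1.0           # arbitrary placeholder prefactor

        def emissions(population):
            return E0 * population ** beta

        for pop in (1e5, 2e5, 4e5):
            print(f"population {pop:,.0f} -> relative emissions {emissions(pop):,.1f}")

        print("doubling factor:", 2 ** beta)   # about 2.75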

  16. Large optics inspection, tilting, and washing stand

    DOEpatents

    Ayers, Marion Jay; Ayers, Shannon Lee

    2012-10-09

    A large optics stand provides a risk-free apparatus and method for safely tilting large optics with ease. The optics are supported in the horizontal position by pads. In the vertical plane the optics are supported by saddles that evenly distribute the weight of the optics over a large area.

  17. Large optics inspection, tilting, and washing stand

    DOEpatents

    Ayers, Marion Jay; Ayers, Shannon Lee

    2010-08-24

    A large optics stand provides a risk-free apparatus and method for safely tilting large optics with ease. The optics are supported in the horizontal position by pads. In the vertical plane the optics are supported by saddles that evenly distribute the weight of the optics over a large area.

  18. The Large Synoptic Survey Telescope

    NASA Astrophysics Data System (ADS)

    Ivezic, Zeljko

    2007-05-01

    The Large Synoptic Survey Telescope (LSST) is currently by far the most ambitious proposed ground-based optical survey. With initial funding from the US National Science Foundation (NSF), Department of Energy (DOE) laboratories, and private sponsors, the design and development efforts are well underway at many institutions, including top universities and leading national laboratories. The main science themes that drive the LSST system design are Dark Energy and Matter, the Solar System Inventory, the Transient Optical Sky, and Milky Way Mapping. The LSST system, with its 8.4 m telescope and 3,200 megapixel camera, will be sited at Cerro Pachón in northern Chile, with first light scheduled for 2014. In a continuous observing campaign, LSST will cover the entire available sky every three nights in two photometric bands to a depth of V=25 per visit (two 15 second exposures), with exquisitely accurate astrometry and photometry. Over the proposed survey lifetime of 10 years, each sky location would be observed about 1000 times, with a total exposure time of 8 hours distributed over six broad photometric bandpasses (ugrizY). This campaign will open a movie-like window on objects that change brightness, or move, on timescales ranging from 10 seconds to 10 years, and will produce a catalog containing over 10 billion galaxies and a similar number of stars. The survey will have a data rate of about 30 TB/night and will collect over 60 PB of raw data over its lifetime, resulting in an incredibly rich and extensive public archive that will be a treasure trove for breakthroughs in many areas of astronomy and astrophysics.

  19. Fronts in Large Marine Ecosystems

    NASA Astrophysics Data System (ADS)

    Belkin, Igor M.; Cornillon, Peter C.; Sherman, Kenneth

    2009-04-01

    Oceanic fronts shape marine ecosystems; therefore front mapping and characterization are among the most important aspects of physical oceanography. Here we report on the first global remote sensing survey of fronts in the Large Marine Ecosystems (LME). This survey is based on a unique frontal data archive assembled at the University of Rhode Island. Thermal fronts were automatically derived with the edge detection algorithm of Cayula and Cornillon (1992, 1995, 1996) from 12 years of twice-daily, global, 9-km resolution satellite sea surface temperature (SST) fields to produce synoptic (nearly instantaneous) frontal maps, and to compute the long-term mean frequency of occurrence of SST fronts and their gradients. These synoptic and long-term maps were used to identify major quasi-stationary fronts and to derive provisional frontal distribution maps for all LMEs. Since SST fronts are typically collocated with fronts in other water properties such as salinity, density and chlorophyll, digital frontal paths from SST frontal maps can be used in studies of physical-biological correlations at fronts. Frontal patterns in several exemplary LMEs are described and compared, including those for: the East and West Bering Sea LMEs, Sea of Okhotsk LME, East China Sea LME, Yellow Sea LME, North Sea LME, East and West Greenland Shelf LMEs, Newfoundland-Labrador Shelf LME, Northeast and Southeast US Continental Shelf LMEs, Gulf of Mexico LME, and Patagonian Shelf LME. Seasonal evolution of frontal patterns in major upwelling zones reveals an order-of-magnitude growth of frontal scales from summer to winter. A classification of LMEs with regard to the origin and physics of their respective dominant fronts is presented. The proposed classification lends itself to comparative studies of frontal ecosystems.
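
    The Cayula and Cornillon detector cited above is histogram-based; as a deliberately simplified, hypothetical stand-in, the sketch below flags frontal pixels purely from the magnitude of the local SST gradient (the synthetic field, 9-km spacing, and 0.1 °C/km threshold are illustrative assumptions, not values from the survey):

        # Simplified gradient-magnitude front detector (not the Cayula-Cornillon
        # algorithm): mark pixels whose SST gradient exceeds a threshold.
        import numpy as np

        rng = np.random.default_rng(0)
        sst = rng.normal(15.0, 0.2, size=(256, 256))   # synthetic 9-km SST field, deg C
        sst[:, 128:] += 2.0                            # synthetic front down the middle

        gy, gx = np.gradient(sst, 9.0)                 # deg C per km on a 9-km grid
        grad_mag = np.hypot(gx, gy)
        front_mask = grad_mag > 0.1                    # threshold in deg C per km
        print("frontal pixels flagged:", int(front_mask.sum()))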

  20. Temporal Large-Eddy Simulation

    NASA Technical Reports Server (NTRS)

    Pruett, C. D.; Thomas, B. C.

    2004-01-01

    In 1999, Stolz and Adams unveiled a subgrid-scale model for LES based upon approximately inverting (defiltering) the spatial grid-filter operator, termed the approximate deconvolution model (ADM). Subsequently, the utility and accuracy of the ADM were demonstrated in a posteriori analyses of flows as diverse as incompressible plane-channel flow and supersonic compression-ramp flow. In a prelude to the current paper, a parameterized temporal ADM (TADM) was developed and demonstrated in both a priori and a posteriori analyses for forced, viscous Burgers flow. The development of a time-filtered variant of the ADM was motivated primarily by the desire for a unifying theoretical and computational context to encompass direct numerical simulation (DNS), large-eddy simulation (LES), and Reynolds-averaged Navier-Stokes simulation (RANS). The resultant methodology was termed temporal LES (TLES). To permit exploration of the parameter space, however, previous analyses of the TADM were restricted to Burgers flow, and it has remained to demonstrate the TADM and TLES methodology for three-dimensional flow. For several reasons, plane-channel flow presents an ideal test case for the TADM. Among these reasons, channel flow is anisotropic, yet it lends itself to highly efficient and accurate spectral numerical methods. Moreover, channel flow has been investigated extensively by DNS, and the highly accurate database of Moser et al. exists. In the present paper, we develop a fully anisotropic TADM model and demonstrate its utility in simulating incompressible plane-channel flow at nominal values of Re(sub tau) = 180 and Re(sub tau) = 590 by the TLES method. The TADM model is shown to perform nearly as well as the ADM at equivalent resolution, thereby establishing TLES as a viable alternative to LES. Moreover, as the current model is suboptimal in some respects, there is considerable room to improve TLES.
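
    The core idea behind the ADM, approximately inverting a filter with a truncated series, can be sketched in one dimension (a minimal illustration under assumed choices of a periodic box filter, a 128-point grid, and a five-term series; it is not the authors' implementation):

        # Approximate deconvolution in 1-D: approximate G^{-1} by the truncated
        # series Q_N = sum_{k=0..N} (I - G)^k (van Cittert iteration).
        import numpy as np

        def box_filter(u):
            # Simple periodic top-hat filter, standing in for the grid filter G.
            return (np.roll(u, 1) + u + np.roll(u, -1)) / 3.0

        def approx_deconvolve(u_bar, n_terms=5):
            term = u_bar.copy()
            u_star = u_bar.copy()
            for _ in range(n_terms):
                term = term - box_filter(term)   # apply (I - G) once more
                u_star += term                   # accumulate the series
            return u_star

        x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
        u = np.sin(x) + 0.5 * np.sin(4.0 * x)
        u_bar = box_filter(u)
        u_star = approx_deconvolve(u_bar)
        print("max error, filtered:   ", np.max(np.abs(u_bar - u)))
        print("max error, deconvolved:", np.max(np.abs(u_star - u)))   # much smaller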

  1. India's National Large Solar Telescope

    NASA Astrophysics Data System (ADS)

    Hasan, S. S.

    2012-12-01

    India's 2-m National Large Solar Telescope (NLST) is aimed primarily at carrying out observations of the solar atmosphere with high spatial and spectral resolution. A comprehensive site characterization program, which commenced in 2007, has identified two superb sites in the Himalayan region at altitudes greater than 4000 m that have extremely low water vapor content and are unaffected by monsoons. With an innovative optical design, the NLST is an on-axis Gregorian telescope with a low number of optical elements to reduce the number of reflections and yield a high throughput with low polarization. In addition, it is equipped with high-order adaptive optics to produce close to diffraction-limited performance. To control atmospheric and thermal perturbations of the observations, the telescope will function with a fully open dome and, to achieve its full potential, atop a 25 m tower. Given its design, the NLST can also operate at night without compromising its solar performance. The post-focus instruments include broad-band and tunable Fabry-Pérot narrow-band imaging instruments, a high resolution spectropolarimeter, and an Echelle spectrograph for night-time astronomy. This project is led by the Indian Institute of Astrophysics and has national and international partners. Its geographical location will fill the longitudinal gap between Japan and Europe, and it is expected to be the largest solar telescope, with an aperture larger than 1.5 m, until the ATST and EST come into operation. An international consortium has been identified to build the NLST. The facility is expected to be commissioned by 2016.

  2. Large Alluvial Fans on Mars

    NASA Technical Reports Server (NTRS)

    Moore, Jeffrey M.; Howard, Alan D.

    2004-01-01

    Several dozen distinct alluvial fans, 10 to greater than 40 km long downslope, are observed exclusively in highlands craters. Within a search region between 0 and 30 deg. S, alluvial fan-containing craters were found only between 18 and 29 deg. S, and they all occur within about plus or minus 1 km of the MOLA-defined Martian datum. Within the study area they are not randomly distributed but instead form three distinct clusters. Fans typically descend greater than 1 km from where they disgorge from their alcoves. Longitudinal profiles show that their surfaces are very slightly concave, with a mean slope of 2 degrees. Many fans exhibit very long, narrow, low-relief ridges oriented radially down-slope, often branching at their distal ends, suggestive of distributaries. Morphometric data for 31 fans were derived from MOLA data and compared with terrestrial fans with high-relief source areas, terrestrial low-gradient alluvial ramps in inactive tectonic settings, and older Martian alluvial ramps along crater floors. The Martian alluvial fans generally fall on the same trends as the terrestrial alluvial fans, whereas the gentler Martian crater floor ramps are similar in gradient to the low-relief terrestrial alluvial surfaces. For a given fan gradient, Martian alluvial fans generally have greater source basin relief than terrestrial fans in active tectonic settings. This suggests that the terrestrial source basins either yield coarser debris or have higher sediment concentrations than their Martian counterparts. Martian fans and Basin and Range fans have steeper gradients than the older Martian alluvial ramps and terrestrial low-relief alluvial surfaces, which is consistent with a supply of coarse sediment. Martian fans are relatively large and of low gradient, similar to terrestrial fluvial fans rather than debris flow fans. However, gravity scaling uncertainties make the flow regime forming Martian fans uncertain. Martian fans, at least those in Holden crater, apparently

  3. Interface structure at large supercooling

    NASA Astrophysics Data System (ADS)

    Misbah, C.; Müller-Krumbhaar, H.; Temkin, D. E.

    1991-04-01

    The front dynamics during the growth of a pure substance in the large-undercooling limit, including interface kinetics, is analyzed. There exists a critical dimensionless undercooling Δ_s (> 1) above which a planar front is linearly stable. For Δ < Δ_s the planar front is unstable against perturbations with small wavenumbers k, 0 < k < k_c.

  4. Large space systems technology, 1981. [conferences]

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1982-01-01

    A total systems approach including structures, analyses, controls, and antennas is presented as a cohesive, programmatic plan for large space systems. Specifically, program status, structures, materials, and analyses, and control of large space systems are addressed.

  5. Investing in a Large Stretch Press

    NASA Technical Reports Server (NTRS)

    Choate, M.; Nealson, W.; Jay, G.; Buss, W.

    1986-01-01

    Press for forming large aluminum parts from plates provides substantial economies. Study assessed advantages and disadvantages of investing in large stretch-forming press, and also developed procurement specification for press.

  6. Large Space Antenna Systems Technology, 1984

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1985-01-01

    Papers are presented which provide a comprehensive review of space missions requiring large antenna systems and of the status of key technologies required to enable these missions. Topic areas include mission applications for large space antenna systems, large space antenna structural systems, materials and structures technology, structural dynamics and control technology, electromagnetics technology, large space antenna systems and the space station, and flight test and evaluation.

  7. Large Devaluations and the Real Exchange Rate

    ERIC Educational Resources Information Center

    Burstein, Ariel; Eichenbaum, Martin; Rebelo, Sergio

    2005-01-01

    In this paper we argue that the primary force behind the large drop in real exchange rates that occurs after large devaluations is the slow adjustment in the prices of nontradable goods and services. Our empirical analysis uses data from five large devaluation episodes: Argentina (2002), Brazil (1999), Korea (1997), Mexico (1994), and Thailand…

  8. Large space systems technology, 1980, volume 1

    NASA Technical Reports Server (NTRS)

    Kopriver, F., III (Compiler)

    1981-01-01

    The technological and developmental efforts in support of large space systems technology are described. Three major areas of interest are emphasized: (1) technology pertinent to large antenna systems; (2) technology related to large space systems; and (3) activities that support both antenna and platform systems.

  9. 27 CFR 19.915 - Large plants.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Large plants. 19.915... OF THE TREASURY LIQUORS DISTILLED SPIRITS PLANTS Distilled Spirits For Fuel Use Permits § 19.915 Large plants. Any person wishing to establish a large plant shall make application for and obtain...

  10. Large N reduction on coset spaces

    SciTech Connect

    Kawai, Hikaru; Shimasaki, Shinji; Tsuchiya, Asato

    2010-04-15

    As an extension of our previous work concerning the large N reduction on group manifolds, we study the large N reduction on coset spaces. We show that large N field theories on coset spaces are described by certain corresponding matrix models. We also construct Chern-Simons-like theories on group manifolds and coset spaces, and give their reduced models.

  11. Large variable conductance heat pipe. Transverse header

    NASA Technical Reports Server (NTRS)

    Edelstein, F.

    1975-01-01

    The characteristics of gas-loaded, variable conductance heat pipes (VCHP) are discussed. The difficulties involved in developing a large VCHP header are analyzed. The construction of the large capacity VCHP is described. A research project to eliminate some of the problems involved in large capacity VCHP operation is explained.

  12. Large Space Systems Technology, Part 2, 1981

    NASA Technical Reports Server (NTRS)

    Boyer, W. J. (Compiler)

    1982-01-01

    Four major areas of interest are covered: technology pertinent to large antenna systems; technology related to the control of large space systems; basic technology concerning structures, materials, and analyses; and flight technology experiments. Large antenna systems and flight technology experiments are described. Design studies, structural testing results, and theoretical applications are presented with accompanying validation data. These research studies represent state-of-the art technology that is necessary for the development of large space systems. A total systems approach including structures, analyses, controls, and antennas is presented as a cohesive, programmatic plan for large space systems.

  13. Large fluctuations at the lasing threshold of solid- and liquid-state dye lasers.

    PubMed

    Basak, Supratim; Blanco, Alvaro; López, Cefe

    2016-01-01

    Intensity fluctuations in lasers are commonly studied above threshold in some special configurations (especially when emission is fed back into the cavity or when two lasers are coupled) and are related to their chaotic behaviour. Similar fluctuating instabilities are usually observed in random lasers, which are open systems with plenty of quasi-modes whose non-orthogonality enables them to exchange energy and provides the sort of loss mechanism whose interplay with pumping leads to replica symmetry breaking. The latter, however, had never been observed in plain cavity lasers where disorder is absent or not intentionally added. Here we show fluctuating lasing behaviour at the lasing threshold in both solid and liquid dye lasers. Above and below a narrow range around the threshold, the spectral line-shape is well correlated with the pump energy. At the threshold such correlation disappears, and the system enters a regime where the emitted laser light fluctuates between narrow, intense peaks and broad, weak ones. The immense number of modes and the reduced resonator quality favour the coupling of modes and prepare the system so that replica symmetry breaking occurs without added disorder. PMID:27558968

  14. Large fluctuations at the lasing threshold of solid- and liquid-state dye lasers

    PubMed Central

    Basak, Supratim; Blanco, Alvaro; López, Cefe

    2016-01-01

    Intensity fluctuations in lasers are commonly studied above threshold in some special configurations (especially when emission is fed back into the cavity or when two lasers are coupled) and are related to their chaotic behaviour. Similar fluctuating instabilities are usually observed in random lasers, which are open systems with plenty of quasi-modes whose non-orthogonality enables them to exchange energy and provides the sort of loss mechanism whose interplay with pumping leads to replica symmetry breaking. The latter, however, had never been observed in plain cavity lasers where disorder is absent or not intentionally added. Here we show fluctuating lasing behaviour at the lasing threshold in both solid and liquid dye lasers. Above and below a narrow range around the threshold, the spectral line-shape is well correlated with the pump energy. At the threshold such correlation disappears, and the system enters a regime where the emitted laser light fluctuates between narrow, intense peaks and broad, weak ones. The immense number of modes and the reduced resonator quality favour the coupling of modes and prepare the system so that replica symmetry breaking occurs without added disorder. PMID:27558968

  15. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent to large-scale systems such as power networks, communication networks, and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large-scale systems, and tools specific to this class of systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was the classification made of the different existing approaches to dealing with large-scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is given to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  16. Shape control of large space structures

    NASA Technical Reports Server (NTRS)

    Hagan, M. T.

    1982-01-01

    A survey has been conducted to determine the types of control strategies which have been proposed for controlling the vibrations in large space structures. From this survey several representative control strategies were singled out for detailed analyses. The application of these strategies to a simplified model of a large space structure has been simulated. These simulations demonstrate the implementation of the control algorithms and provide a basis for a preliminary comparison of their suitability for large space structure control.

  17. Force Sensor for Large Robot Arms

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.; Primus, H. C.; Scheinman, V. D.

    1985-01-01

    Modified Maltese-cross force sensor larger and more sensitive than earlier designs. Measures inertial forces and torques exerted on large robot arms during free movement as well as those exerted by claw on manipulated objects. Large central hole of sensor allows claw drive mounted inside arm instead of perpendicular to its axis, eliminating potentially hazardous projection. Originally developed for Space Shuttle, sensor finds applications in large industrial robots.

  18. Improved Large-Field Focusing Schlieren System

    NASA Technical Reports Server (NTRS)

    Weinstein, Leonard M.

    1993-01-01

    System used to examine complicated two- and three-dimensional flows. High-brightness large-field focusing schlieren system incorporates Fresnel lens instead of glass diffuser. In system with large field of view, image may also be very large. Relay optical subsystem minifies large image while retaining all of the light. Facilities that are candidates for use of focusing schlieren include low-speed wind and water tunnels. Heated or cooled flow tracers or injected low- or high-density tracers used to make flows visible for photographic recording.

  19. Collaborative Working for Large Digitisation Projects

    ERIC Educational Resources Information Center

    Yeates, Robin; Guy, Damon

    2006-01-01

    Purpose: To explore the effectiveness of large-scale consortia for disseminating local heritage via the web. To describe the creation of a large geographically based cultural heritage consortium in the South East of England and management lessons resulting from a major web site digitisation project. To encourage the improved sharing of experience…

  20. How Do People Apprehend Large Numerosities?

    ERIC Educational Resources Information Center

    Sophian, Catherine; Chu, Yun

    2008-01-01

    People discriminate remarkably well among large numerosities. These discriminations, however, need not entail numerical representation of the quantities being compared. This research evaluated the role of both non-numerical and numerical information in adult judgments of relative numerosity for large-numerosity spatial arrays. Results of…

  1. Fabrication of large ceramic electrolyte disks

    NASA Technical Reports Server (NTRS)

    Ring, S. A.

    1972-01-01

    Process for sintering compressed ceramic powders produces large ceramic disks for use as electrolytes in high-temperature electrolytic cells. Thin, strain-free uniformly dense disks as large as 30 cm squared have been fabricated by slicing ceramic slugs produced by this technique.

  2. Densifying forest biomass into large round bales

    SciTech Connect

    Fridley, J.; Burkhardt, T.H.

    1981-01-01

    A large round-bale hay baler was modified to examine the concept of baling forest biomass in large round bales. Material baled, feed orientation, and baler belt tension were varied to observe their effects on the baling process and bale density. The torque and power required to drive the baler were measured. 10 refs.

  3. Large-Scale Reform Comes of Age

    ERIC Educational Resources Information Center

    Fullan, Michael

    2009-01-01

    This article reviews the history of large-scale education reform and makes the case that large-scale or whole-system reform policies and strategies are becoming increasingly evident. The review briefly addresses the pre-1997 period, concluding that while the pressure for reform was mounting, there were very few examples of deliberate or…

  4. Large Lecture Format: Some Lessons Learned.

    ERIC Educational Resources Information Center

    Kryder, LeeAnne G.

    2002-01-01

    Shares some surprising results from a business communication program's recent experiment in using a large lecture format to teach an upper-division business communication course: approximately 90-95% of the students liked the large lecture format, and the quality of their communication deliverables was as good as that produced by students who took…

  5. Large-scale infrared scene projectors

    NASA Astrophysics Data System (ADS)

    Murray, Darin A.

    1999-07-01

    Large-scale infrared scene projectors typically have unique opto-mechanical characteristics associated with their application. This paper outlines two large-scale zoom lens assemblies with different environmental and package constraints. Various challenges and their respective solutions are discussed and presented.

  6. 75 FR 73983 - Assessments, Large Bank Pricing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-30

    ... Kapoor, Counsel, Legal Division, (202) 898-3960. Correction In proposed rule FR Doc. 2010-29138...; ] FEDERAL DEPOSIT INSURANCE CORPORATION 12 CFR Part 327 RIN 3064-AD66 Assessments, Large Bank Pricing AGENCY..., 2010, regarding Assessments, Large Bank Pricing. This correction clarifies that the comment period...

  7. 76 FR 17521 - Assessments, Large Bank Pricing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-30

    ... Register of February 25, 2011 (76 FR 10672), regarding Assessments, Large Bank Pricing. This correction... 17th Street, NW., Washington, DC 20429. SUPPLEMENTARY INFORMATION: In FR Doc. 2011-3086, appearing on... 327 RIN 3064-AD66 Assessments, Large Bank Pricing AGENCY: Federal Deposit Insurance Corporation...

  8. The algebras of large N matrix mechanics

    SciTech Connect

    Halpern, M.B.; Schwartz, C.

    1999-09-16

    Extending early work, we formulate the large N matrix mechanics of general bosonic, fermionic and supersymmetric matrix models, including Matrix theory: The Hamiltonian framework of large N matrix mechanics provides a natural setting in which to study the algebras of the large N limit, including (reduced) Lie algebras, (reduced) supersymmetry algebras and free algebras. We find in particular a broad array of new free algebras which we call symmetric Cuntz algebras, interacting symmetric Cuntz algebras, symmetric Bose/Fermi/Cuntz algebras and symmetric Cuntz superalgebras, and we discuss the role of these algebras in solving the large N theory. Most important, the interacting Cuntz algebras are associated to a set of new (hidden!) local quantities which are generically conserved only at large N. A number of other new large N phenomena are also observed, including the intrinsic nonlocality of the (reduced) trace class operators of the theory and a closely related large N field identification phenomenon which is associated to another set (this time nonlocal) of new conserved quantities at large N.

  9. LARGE AND GREAT RIVERS: NEW ASSESSMENT TOOLS

    EPA Science Inventory

    The Ecological Exposure Research Division has been conducting research to support the development of the next generation of bioassessment and monitoring tools for large and great rivers. Focus has largely been on the development of standardized protocols for the traditional indi...

  10. Evaluating Large-Scale Interactive Radio Programmes

    ERIC Educational Resources Information Center

    Potter, Charles; Naidoo, Gordon

    2009-01-01

    This article focuses on the challenges involved in conducting evaluations of interactive radio programmes in South Africa with large numbers of schools, teachers, and learners. It focuses on the role such large-scale evaluation has played during the South African radio learning programme's development stage, as well as during its subsequent…

  11. Perception for a large deployable reflector telescope

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. M.; Swanson, P. N.; Meinel, A. B.; Meinel, M. P.

    1984-01-01

    Optical science and technology concepts for a large deployable reflector for far-infrared and submillimeter astronomy from above the earth's atmosphere are discussed. Requirements given at the Asilomar Conference are reviewed. The technical challenges of this large-aperture (about 20-meter) telescope, which will be diffraction limited in the infrared, are highlighted in a brief discussion of one particular configuration.

  12. World atlas of large optical telescopes

    NASA Technical Reports Server (NTRS)

    Meszaros, S. P.

    1979-01-01

    By 1980 there will be approximately 100 large optical telescopes in the world with mirror or lens diameters of one meter (39 inches) and larger. This atlas gives information on these telescopes and shows their locations on continent-sized maps. Observatory locations considered suitable for the construction of future large telescopes are also shown.

  13. Implementing Large Projects in Software Engineering Courses

    ERIC Educational Resources Information Center

    Coppit, David

    2006-01-01

    In software engineering education, large projects are widely recognized as a useful way of exposing students to the real-world difficulties of team software development. But large projects are difficult to put into practice. First, educators rarely have additional time to manage software projects. Second, classrooms have inherent limitations that…

  14. ARPACK: Solving large scale eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Lehoucq, Rich; Maschhoff, Kristi; Sorensen, Danny; Yang, Chao

    2013-11-01

    ARPACK is a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems. The package is designed to compute a few eigenvalues and corresponding eigenvectors of a general n by n matrix A. It is most appropriate for large sparse or structured matrices A, where structured means that a matrix-vector product w ← Av requires order n rather than the usual order n² floating point operations.
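
    One convenient way to see this interface in action is through SciPy, whose sparse eigensolvers wrap ARPACK; the sketch below (the matrix, its size, and k are arbitrary choices for illustration) asks for a few eigenpairs of a large sparse matrix:

        # Compute a few eigenpairs of a large sparse matrix with
        # scipy.sparse.linalg.eigs, which wraps ARPACK's implicitly
        # restarted Arnoldi method.
        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import eigs

        n = 10_000
        # Sparse 1-D Laplacian: a "structured" matrix whose matrix-vector
        # product costs order n operations.
        A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

        vals, vecs = eigs(A, k=6, which="LM")   # six eigenvalues of largest magnitude
        print(np.sort(vals.real))               # all close to 4 for this operator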

  15. Large Eddy Simulation of a Turbulent Jet

    NASA Technical Reports Server (NTRS)

    Webb, A. T.; Mansour, Nagi N.

    2001-01-01

    Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.

  16. Generically large nongaussianity in small multifield inflation

    SciTech Connect

    Bramante, Joseph

    2015-07-07

    If forthcoming measurements of cosmic photon polarization restrict the primordial tensor-to-scalar ratio to r<0.01, small field inflation will be a principal candidate for the origin of the universe. Here we show that small multifield inflation, without the hybrid mechanism, typically results in large squeezed nongaussianity. Small multifield potentials contain multiple flat field directions, often identified with the gauge invariant field directions in supersymmetric potentials. We find that unless these field directions have equal slopes, large nongaussianity arises. After identifying relevant differences between large and small two-field potentials, we demonstrate that the latter naturally fulfill the Byrnes-Choi-Hall large nongaussianity conditions. Computations of the primordial power spectrum, spectral index, and squeezed bispectrum, reveal that small two-field models which otherwise match observed primordial perturbations, produce excludably large nongaussianity if the inflatons’ field directions have unequal slopes.

  17. Environmental effects and large space systems

    NASA Technical Reports Server (NTRS)

    Garrett, H. B.

    1981-01-01

    When planning large scale operations in space, environmental impact must be considered in addition to radiation, spacecraft charging, contamination, high power, and size. Pollution of the atmosphere and space is caused by rocket effluents and by photoelectrons generated by sunlight falling on satellite surfaces; even light pollution may result (the SPS may reflect so much light as to be a nuisance to astronomers). Large (100 km²) structures also will absorb the high energy particles that impinge on them. Altogether, these effects may drastically alter the Earth's magnetosphere. It is not clear if these alterations will in any way affect the Earth's surface climate. Large structures will also generate large plasma wakes and waves which may cause interference with communications to the vehicle. A high energy microwave beam from the SPS will cause ionospheric turbulence, affecting UHF and VHF communications. Although none of these effects may ultimately prove critical, they must be considered in the design of large structures.

  18. Testing Large Structures in the Field

    NASA Technical Reports Server (NTRS)

    James, George; Carne, Thomas G.

    2009-01-01

    Field testing large structures creates unique challenges such as limited choices for boundary conditions and the fact that natural excitation sources cannot be removed. Several critical developments in field testing of large structures are reviewed, including: step relaxation testing which has been developed into a useful technique to apply large forces to operational systems by careful windowing; the capability of large structures testing with free support conditions which has been expanded by implementing modeling of the support structure; natural excitation which has been developed as a viable approach to field testing; and the hybrid approach which has been developed to allow forces to be estimated in operating structures. These developments have increased the ability to extract information from large structures and are highlighted in this presentation.

  19. Do large hiatal hernias affect esophageal peristalsis?

    PubMed Central

    Roman, Sabine; Kahrilas, Peter J; Kia, Leila; Luger, Daniel; Soper, Nathaniel; Pandolfino, John E

    2013-01-01

    Background & Aim Large hiatal hernias can be associated with a shortened or tortuous esophagus. We hypothesized that these anatomic changes may alter esophageal pressure topography (EPT) measurements made during high-resolution manometry (HRM). Our aim was to compare EPT measures of esophageal motility in patients with large hiatal hernias to those of patients without hernia. Methods Among 2000 consecutive clinical EPT, we identified 90 patients with large (>5 cm) hiatal hernias on endoscopy and at least 7 evaluable swallows on EPT. Within the same database a control group without hernia was selected. EPT was analyzed for lower esophageal sphincter (LES) pressure, Distal Contractile Integral (DCI), contraction amplitude, Contractile Front Velocity (CFV) and Distal Latency time (DL). Esophageal length was measured on EPT from the distal border of upper esophageal sphincter to the proximal border of the LES. EPT diagnosis was based on the Chicago Classification. Results The manometry catheter was coiled in the hernia and did not traverse the crural diaphragm in 44 patients (49%) with large hernia. Patients with large hernias had lower average LES pressures, lower DCI, slower CFV and shorter DL than patients without hernia. They also exhibited a shorter mean esophageal length. However, the distribution of peristaltic abnormalities was not different in patients with and without large hernia. Conclusions Patients with large hernias had an alteration of EPT measurements as a consequence of the associated shortened esophagus. However, the distribution of peristaltic disorders was unaffected by the presence of hernia. PMID:22508779

  20. On the distinction between large deformation and large distortion for anisotropic materials

    SciTech Connect

    BRANNON,REBECCA M.

    2000-02-24

    A motion involves large distortion if the ratios of principal stretches differ significantly from unity. A motion involves large deformation if the deformation gradient tensor is significantly different from the identity. Unfortunately, rigid rotation fits the definition of large deformation, and models that claim to be valid for large deformation are often inadequate for large distortion. An exact solution for the stress in an idealized fiber-reinforced composite is used to show that conventional large deformation representations for transverse isotropy give errant results. Possible alternative approaches are discussed.
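
    A rigid rotation makes the distinction concrete (a minimal numpy sketch; the 60-degree angle is arbitrary): the deformation gradient is far from the identity, so the motion counts as a large deformation, yet every principal stretch equals one, so there is no distortion at all.

        # Rigid rotation: large deformation (F far from I) with zero distortion
        # (all principal stretches equal to 1).
        import numpy as np

        theta = np.deg2rad(60.0)
        F = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                      [np.sin(theta),  np.cos(theta), 0.0],
                      [0.0,            0.0,           1.0]])   # deformation gradient

        # Principal stretches are the singular values of F.
        stretches = np.linalg.svd(F, compute_uv=False)
        print("||F - I|| =", np.linalg.norm(F - np.eye(3)))    # clearly nonzero
        print("principal stretches:", stretches)               # [1. 1. 1.]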

  1. 1919 + 742 - A large double radio source

    NASA Astrophysics Data System (ADS)

    Strom, R. G.; Eckart, A.; Biermann, P.

    1985-10-01

    A 49-cm map of the source 1919 + 742 (previously catalogued as 4C74.24 and NB74.26) shows it to be a large double. Although the optical identification is unclear, there are several candidate objects near the source centroid which appear to be members of a small group of galaxies. The absence of bright identification candidates argues for a moderately distant, and hence intrinsically large (750 kpc overall size for z = 0.1) object. The spectrum between 80 and 600 MHz is determined, some intrinsic properties are derived, and these are briefly discussed in the context of other large sources.

  2. Experimental verification of a large flexible manipulator

    NASA Technical Reports Server (NTRS)

    Lee, Jac Won; Huggins, James D.; Book, Wayne J.

    1988-01-01

    A large experimental lightweight manipulator would be useful for material handling, for welding, or for ultrasonic inspection of a large structure, such as an airframe. The flexible parallel link mechanism is designed for high rigidity without increasing weight. This constrained system is analyzed by singular value decomposition of the constraint Jacobian matrix. A verification of the modeling using the assumed mode method is presented. Eigenvalues and eigenvectors of the linearized model are compared to the measured system natural frequencies and their associated mode shapes. The modeling results for large motions are compared to the time response data from the experiments. The hydraulic actuator is verified.

  3. Learning to build large structures in space

    NASA Technical Reports Server (NTRS)

    Hagler, T.; Patterson, H. G.; Nathan, C. A.

    1977-01-01

    The paper examines some of the key technologies and forms of construction know-how that will have to be developed and tested for eventual application to building large structures in space. Construction of a shuttle-tended space construction/demonstration platform would comprehensively demonstrate large structure technology, develop construction capability, and furnish a construction platform for a variety of operational large structures. Completion of this platform would lead to demonstrations of the Satellite Power System (SPS) concept, including microwave transmission, fabrication of 20-m-deep beams, conductor installation, rotary joint installation, and solar blanket installation.

  4. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  5. Microwave sintering of large alumina bodies

    SciTech Connect

    Blake, R.D.; Katz, J.D.

    1993-05-01

    The application of microwaves as an energy source for materials processing of large alumina bodies at elevated temperatures has been limited to date. Most work has concerned itself with small laboratory samples. The nonuniformity of the microwave field within a cavity subjects large alumina bodies to areas of concentrated energy, resulting in uneven heating and subsequent cracking. Smaller bodies are not significantly affected by field nonuniformity due to their smaller mass. This work will demonstrate a method for microwave sintering of large alumina bodies while maintaining their structural integrity. Several alumina configurations were successfully sintered using a method which creates an artificial field or environment within the microwave cavity.

  6. Microwave sintering of large alumina bodies

    SciTech Connect

    Blake, R.D.; Katz, J.D.

    1993-01-01

    The application of microwaves as an energy source for materials processing of large alumina bodies at elevated temperatures has been limited to date. Most work has concerned itself with small laboratory samples. The nonuniformity of the microwave field within a cavity subjects large alumina bodies to areas of concentrated energy, resulting in uneven heating and subsequent cracking. Smaller bodies are not significantly affected by field nonuniformity due to their smaller mass. This work will demonstrate a method for microwave sintering of large alumina bodies while maintaining their structural integrity. Several alumina configurations were successfully sintered using a method which creates an artificial field or environment within the microwave cavity.

  7. Synthesis of small and large scale dynamos

    NASA Astrophysics Data System (ADS)

    Subramanian, Kandaswamy

    Using a closure model for the evolution of magnetic correlations, we uncover an interesting plausible saturated state of the small-scale fluctuation dynamo (SSD) and a novel analogy between quantum mechanical tunnelling and the generation of large-scale fields. Large scale fields develop via the α-effect, but as magnetic helicity can only change on a resistive timescale, the time it takes to organize the field into large scales increases with magnetic Reynolds number. This is very similar to the results which obtain from simulations using the full MHD equations.

  8. Efficient generation of large random networks

    NASA Astrophysics Data System (ADS)

    Batagelj, Vladimir; Brandes, Ulrik

    2005-03-01

    Random networks are frequently generated, for example, to investigate the effects of model parameters on network properties or to test the performance of algorithms. Recent interest in the statistics of large-scale networks sparked a growing demand for network generators that can generate large numbers of large networks quickly. We here present simple and efficient algorithms to randomly generate networks according to the most commonly used models. Their running time and space requirement is linear in the size of the network generated, and they are easily implemented.
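
    The key trick for the G(n, p) model can be sketched as follows (a simplified rendering of the geometric "skipping" technique described in this line of work; variable names are mine): instead of testing every one of the n(n-1)/2 candidate edges, jump ahead by geometrically distributed gaps between accepted edges, so the expected running time is linear in the size of the generated network.

        # G(n, p) generation by geometric skipping over candidate edges.
        import math
        import random

        def fast_gnp(n, p):
            assert 0.0 < p < 1.0
            edges = []
            v, w = 1, -1
            lp = math.log(1.0 - p)
            while v < n:
                r = random.random()
                w = w + 1 + int(math.log(1.0 - r) / lp)   # geometric gap to next edge
                while w >= v and v < n:                   # carry over into the next row
                    w, v = w - v, v + 1
                if v < n:
                    edges.append((v, w))
            return edges

        g = fast_gnp(10_000, 0.001)
        print(len(g))   # expected ~ p * n * (n - 1) / 2, i.e. about 50,000 edges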

  9. Optical metrology for very large convex aspheres

    NASA Astrophysics Data System (ADS)

    Burge, J. H.; Su, P.; Zhao, C.

    2008-07-01

    Telescopes with very large diameter or with wide fields require convex secondary mirrors that may be many meters in diameter. The optical surfaces for these mirrors can be manufactured to the accuracy limited by the surface metrology. We have developed metrology systems that are specifically optimized for measuring very large convex aspheric surfaces. Large aperture vibration insensitive sub-aperture Fizeau interferometer combined with stitching software give high resolution surface measurements. The global shape is corroborated with a coordinate measuring machine based on the swing arm profilometer.

  10. Very Large System Dynamics Models - Lessons Learned

    SciTech Connect

    Jacob J. Jacobson; Leonard Malczynski

    2008-10-01

    This paper provides lessons learned from developing several large system dynamics (SD) models. System dynamics modeling practice emphasizes the need to keep models small so that they are manageable and understandable. This practice is generally reasonable and prudent; however, there are times when large SD models are necessary. This paper outlines two large SD projects that were done at two Department of Energy national laboratories, the Idaho National Laboratory and Sandia National Laboratories. This paper summarizes the models and then discusses some of the valuable lessons learned during these two modeling efforts.

  11. The Amateurs' Love Affair with Large Datasets

    NASA Astrophysics Data System (ADS)

    Price, Aaron; Jacoby, S. H.; Henden, A.

    2006-12-01

    Amateur astronomers are professionals in other areas. They bring expertise from such varied and technical careers as computer science, mathematics, engineering, and marketing. These skills, coupled with an enthusiasm for astronomy, can be used to help manage the large data sets coming online in the next decade. We will show specific examples where teams of amateurs have been involved in mining large, online data sets and have authored and published their own papers in peer-reviewed astronomical journals. Using the proposed LSST database as an example, we will outline a framework for involving amateurs in data analysis and education with large astronomical surveys.

  12. Interannual Behavior of Large Regional Dust Storms

    NASA Astrophysics Data System (ADS)

    Kass, D. M.; Kleinboehl, A.; McCleese, D. J.; Schofield, J. T.; Smith, M. D.

    2014-07-01

    We examine large regional dust storms in MCS and TES retrieved temperature profiles. There is significant repeatability with three regional storms (A, B and C) each Mars year. Each type of storm is distinct seasonally and in its behavior.

  13. Large Area Synthesis of 2D Materials

    NASA Astrophysics Data System (ADS)

    Vogel, Eric

    Transition metal dichalcogenides (TMDs) have generated significant interest for numerous applications including sensors, flexible electronics, heterostructures and optoelectronics due to their interesting, thickness-dependent properties. Despite recent progress, the synthesis of high-quality and highly uniform TMDs on a large scale is still a challenge. In this talk, synthesis routes for WSe2 and MoS2 that achieve monolayer thickness uniformity across large area substrates with electrical properties equivalent to geological crystals will be described. Controlled doping of 2D semiconductors is also critically required. However, methods established for conventional semiconductors, such as ion implantation, are not easily applicable to 2D materials because of their atomically thin structure. Redox-active molecular dopants will be demonstrated which provide large changes in carrier density and workfunction through the choice of dopant, treatment time, and the solution concentration. Finally, several applications of these large-area, uniform 2D materials will be described including heterostructures, biosensors and strain sensors.

  14. A Computer Program for Clustering Large Matrices

    ERIC Educational Resources Information Center

    Koch, Valerie L.

    1976-01-01

    A Fortran V program, developed for the Univac 1100 Series computer, is described for clustering large matrices of interassociations between objects (up to 1000 x 1000 and larger) into hierarchical structures. (RC)
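
    In a modern setting the same task, building a hierarchical clustering from an object-by-object interassociation matrix, might be sketched with SciPy as follows (a hypothetical stand-in, not the Fortran V program itself; the random similarity matrix and the choice of four clusters are placeholders):

        # Hierarchical clustering of a symmetric similarity (interassociation) matrix.
        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform

        rng = np.random.default_rng(1)
        n = 200
        S = rng.random((n, n))
        S = (S + S.T) / 2.0              # make the similarity matrix symmetric
        np.fill_diagonal(S, 1.0)

        D = 1.0 - S                      # convert similarities to distances
        np.fill_diagonal(D, 0.0)
        Z = linkage(squareform(D, checks=False), method="average")
        labels = fcluster(Z, t=4, criterion="maxclust")
        print("cluster sizes:", np.bincount(labels)[1:])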

  15. [Large articular geode cyst in rheumatoid polyarthritis].

    PubMed

    Sabri, F; Calmes, D; Muller, M J

    1989-01-01

    A case of a large bone cyst in the tibial condyle of a patient with rheumatoid arthritis is reported. The etiology and pathology are discussed, and preventive surgical treatment is recommended. PMID:2801089

  16. Large Meteor Tracked over Northeast Alabama

    NASA Video Gallery

    On the evening of May 18, NASA all-sky meteor cameras located at NASA’s Marshall Space Flight Center and at the Walker County Science Center near Chickamauga, Ga. tracked the entry of a large meteo...

  17. Large Grain Superconducting RF Cavities at DESY

    SciTech Connect

    Singer, W.; Brinkmann, A.; Ermakov, A.; Iversen, J.; Kreps, G.; Matheisen, A.; Proch, D.; Reschke, D.; Singer, X.; Spiwek, M.; Wen, H.; Brokmeier, H. G.

    2007-08-09

    The DESY R and D program on cavities fabricated from large grain niobium explores the potential of this material for the production of approx. 1000 nine-cell cavities for the European XFEL. The program investigates basic material properties, comparing large grain material to standard sheet niobium, as well as fabrication and preparation aspects. Several single-cell cavities of TESLA shape have been fabricated from large grain niobium. A gradient up to 41 MV/m at Q0 = 1.4·10^10 (TB = 2 K) was measured after electropolishing. The first three large grain nine-cell cavities worldwide have been produced under a DESY contract with ACCEL Instruments Co. The first tests have shown that all three cavities reach an accelerating gradient up to 30 MV/m after BCP (Buffered Chemical Polishing) treatment, which exceeds the XFEL requirements for the RF test in the vertical cryostat.

  18. Meaningful statistical analysis of large computational clusters.

    SciTech Connect

    Gentile, Ann C.; Marzouk, Youssef M.; Brandt, James M.; Pebay, Philippe Pierre

    2005-07-01

    Effective monitoring of large computational clusters demands the analysis of a vast amount of raw data from a large number of machines. The fundamental interactions of the system are not, however, well-defined, making it difficult to draw meaningful conclusions from this data, even if one were able to efficiently handle and process it. In this paper we show that computational clusters, because they are comprised of a large number of identical machines, behave in a statistically meaningful fashion. We therefore can employ normal statistical methods to derive information about individual systems and their environment and to detect problems sooner than with traditional mechanisms. We discuss design details necessary to use these methods on a large system in a timely and low-impact fashion.
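
    The central observation, that a fleet of identical machines can be treated as samples from one distribution, can be illustrated with a simple z-score screen (a hypothetical sketch, not the monitoring system described; the metric, its values, and the threshold are made up):

        # Flag nodes whose metric deviates strongly from the fleet-wide distribution.
        import numpy as np

        rng = np.random.default_rng(0)
        cpu_temp = rng.normal(55.0, 2.0, size=1024)   # per-node metric for 1024 nodes
        cpu_temp[17] = 78.0                           # one misbehaving node

        z = (cpu_temp - cpu_temp.mean()) / cpu_temp.std()
        suspects = np.flatnonzero(np.abs(z) > 4.0)
        print("nodes to investigate:", suspects)      # -> [17]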

  19. Bipartite Graphs of Large Clique-Width

    NASA Astrophysics Data System (ADS)

    Korpelainen, Nicholas; Lozin, Vadim V.

    Recently, several constructions of bipartite graphs of large clique-width have been discovered in the literature. In the present paper, we propose a general framework for developing such constructions and use it to obtain new results on this topic.

  20. Large-scale inhomogeneities and galaxy statistics

    NASA Technical Reports Server (NTRS)

    Schaeffer, R.; Silk, J.

    1984-01-01

    The density fluctuations associated with the formation of large-scale cosmic pancake-like and filamentary structures are evaluated using the Zel'dovich approximation for the evolution of nonlinear inhomogeneities in the expanding universe. It is shown that the large-scale nonlinear density fluctuations in the galaxy distribution due to pancakes modify the standard scale-invariant correlation function xi(r) at scales comparable to the coherence length of adiabatic fluctuations. The typical contribution of pancakes and filaments to the J3 integral, and more generally to the moments of galaxy counts in a volume of approximately (15-40 h^-1 Mpc)^3, provides a statistical test for the existence of large-scale inhomogeneities. An application to several recent three-dimensional data sets shows that despite large observational uncertainties over the relevant scales, characteristic features may be present that can be attributed to pancakes in most, but not all, of the various galaxy samples.

  1. Large-Larmor-radius interchange instability

    SciTech Connect

    Ripin, B.H.; McLean, E.A.; Manka, C.K.; Pawley, C.; Stamper, J.A.; Peyser, T.A.; Mostovych, A.N.; Grun, J.; Hassam, A.B.; Huba, J.

    1987-11-16

    We observe linear and nonlinear features of a strong plasma/magnetic field interchange Rayleigh-Taylor instability in the limit of large ion Larmor radius. The instability undergoes rapid linear growth culminating in free-streaming flute tips.

  2. Large, horizontal-axis wind turbines

    NASA Technical Reports Server (NTRS)

    Linscott, B. S.; Perkins, P.; Dennett, J. T.

    1984-01-01

    The development of the technology for safe, reliable, environmentally acceptable large wind turbines that have the potential to generate a significant amount of electricity at costs competitive with conventional electric generating systems is presented. In addition, these large wind turbines must be fully compatible with electric utility operations and interface requirements. There are several ongoing large wind system development projects and applied research efforts directed toward meeting the technology requirements for utility applications. Detailed information on these projects is provided. The Mod-0 research facility and current applied research efforts in aerodynamics, structural dynamics and aeroelasticity, composite and hybrid composite materials, and multiple system interaction are described. A chronology of component research and technology development for large, horizontal axis wind turbines is presented. Wind characteristics, wind turbine economics, and the impact of wind turbines on the environment are reported. The need for continued wind turbine research and technology development is explored. Over 40 references are cited and a bibliography is included.

  3. Large Space Antenna Systems Technology, part 1

    NASA Technical Reports Server (NTRS)

    Lightner, E. B. (Compiler)

    1983-01-01

    A compilation of the unclassified papers presented at the NASA Conference on Large Space Antenna Systems Technology covers the following areas: systems, structures technology, control technology, electromagnetics, and space flight test and evaluation.

  4. Large & Small: Exploring the Laws of Nature

    ERIC Educational Resources Information Center

    Creutz, E.

    1976-01-01

    Illustrates how both large entities (such as stars and galaxies) and small entities (such as fundamental particles) obey the same physical laws. Discusses quantum mechanics, Newton's laws, and general relativity. (MLH)

  5. Communication architecture for large geostationary platforms

    NASA Technical Reports Server (NTRS)

    Bond, F. E.

    1979-01-01

    Large platforms have been proposed for supporting multipurpose communication payloads to exploit economy of scale, reduce congestion in the geostationary orbit, provide interconnectivity between diverse earth stations, and obtain significant frequency reuse with large multibeam antennas. This paper addresses a specific system design, starting with traffic projections in the next two decades and discussing tradeoffs and design approaches for major components, including antennas, transponders, and switches. Other issues explored are selection of frequency bands, modulation, multiple access, switching methods, and techniques for servicing areas with nonuniform traffic demands. Three major services are considered: a high-volume trunking system, a direct-to-user system, and a broadcast system for video distribution and similar functions. Estimates of payload weight and d.c. power requirements are presented. Other subjects treated are: considerations of equipment layout for servicing by an orbit transfer vehicle, mechanical stability requirements for the large antennas, and reliability aspects of the large number of transponders employed.

  6. Large Area Sputter Coating on Glass

    NASA Astrophysics Data System (ADS)

    Katayama, Yoshihito

    Large glass has been used for commercial buildings, housing, and vehicles for many years. Glass sizes for flat displays are getting larger and larger; the glass for the 8th generation is more than 5 m2 in area. Demand for large glass is increasing not only in these markets but also in the rapidly growing solar cell market. Therefore, large-area coating that adds functionality to glass is in greater demand than ever. Sputtering and pyrolysis are the major coating methods for large glass today. The sputtering process is particularly popular because it can deposit a wide variety of materials with good coating uniformity on the glass. This paper describes a typical industrial sputtering system and recent progress in sputtering technology. It also shows typical coated-glass products in the architectural, automotive, and display fields and comments on their functions, film stacks, and so on.

  7. Cosmogenic Nuclides Study of Large Iron Meteorites

    NASA Astrophysics Data System (ADS)

    Hutzler, A.; Smith, T.; Rochette, P.; Bourles, D. L.; Leya, I.; Gattacceca, J.

    2014-09-01

    Six large iron meteorites were selected (Saint-Aubin, Mont-Dieu, Caille, Morasko, Agoudal, and Gebel Kamil). We measured stable and radiogenic cosmogenic nuclides, to study pre-atmospheric size, cosmic-ray exposure ages and terrestrial ages.

  8. Large-scale regions of antimatter

    SciTech Connect

    Grobov, A. V. Rubin, S. G.

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  9. A General Conditional Large Deviation Principle

    NASA Astrophysics Data System (ADS)

    La Cour, Brian R.; Schieve, William C.

    2015-10-01

    Given a sequence of Borel probability measures on a Hausdorff space which satisfy a large deviation principle (LDP), we consider the corresponding sequence of measures formed by conditioning on a set B. If the large deviation rate function I is good and effectively continuous, and the conditioning set B satisfies suitable regularity conditions, then the sequence of conditional measures satisfies an LDP with a good, effectively continuous rate function determined by I and B.
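    Results of this kind typically state that conditioning restricts and renormalizes the rate function on the conditioning set; a representative form, given here only for orientation and not necessarily the paper's exact statement, is

    ```latex
    I_B(x) \;=\;
    \begin{cases}
    I(x) - \inf_{y \in B} I(y), & x \in \overline{B},\\
    +\infty, & \text{otherwise.}
    \end{cases}
    ```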

  10. Adaptive Machining Of Large, Somewhat Flexible Parts

    NASA Technical Reports Server (NTRS)

    Gutow, David; Wagner, Garrett; Gilbert, Jeffrey L.; Deily, David

    1996-01-01

    Adaptive machining is method of machining large, somewhat flexible workpieces to close tolerances. Devised for machining precise weld lands on aft skirts of rocket nozzles, but underlying concept generally applicable to precise machining of any of large variety of workpieces deformed by thermal, gravitational, and/or machining forces. For example, in principle, method used to bore precise hole on unanchored end of long cantilever beam.

  11. Large Format Detector Arrays for Astrophysics

    NASA Technical Reports Server (NTRS)

    Moseley, Harvey

    2006-01-01

    Improvements in detector design and advances in fabrication techniques have resulted in devices which can reach fundamental sensitivity limits in many cases. Many pressing astrophysical questions require large arrays of such sensitive detectors. I will describe the state of far-infrared through millimeter detector development at NASA/GSFC, the design and production of large format arrays, and the initial deployment of these powerful new tools.

  12. Zone generator for Large Space Telescope technology

    NASA Technical Reports Server (NTRS)

    Erickson, K. E.

    1974-01-01

    A concept is presented for monitoring the optical adjustment and performance of a Large Space Telescope which consists of a 1.2m diameter turntable with a laser stylus to operate at speeds up to 30 rpm. The focus of the laser stylus is under closed loop control. A technique for scribing zones of suitable depth, width, and uniformity applicable to large telescope mirrors is also reported.

  13. Anatomical repair of large incisional hernias.

    PubMed Central

    Loh, A.; Rajkumar, J. S.; South, L. M.

    1992-01-01

    We present a method of repair for large incisional hernias using lateral relieving incisions of the anterior rectus sheath. This is a modification of the methods previously described by Young (1), Hunter (2) and Maguire and Young (3). There were no recurrences in the 13 patients reviewed. Other methods of repair for large incisional hernias are discussed. PMID:1567126

  14. Slew maneuvers of large flexible spacecrafts

    NASA Technical Reports Server (NTRS)

    Kakad, Y. P.

    1990-01-01

    The dynamics and control of arbitrary slew maneuvers of a large flexible spacecraft are developed. The dynamics of slew maneuvers are nonlinear and include the coupling between the rigid orbiter and the flexible appendage. A decentralized control scheme is used to perform a large-angle slew maneuver about an arbitrary axis in space and to suppress the vibrations of the flexible appendage during and after the maneuver.

  15. Primary propulsion/large space system interactions

    NASA Technical Reports Server (NTRS)

    Dergance, R. H.

    1980-01-01

    Three generic types of structural concepts and nonstructural surface densities were selected and combined to represent potential LSS applications. The design characteristics of various classes of large space systems that are impacted by primary propulsion thrust required to effect orbit transfer were identified. The effects of propulsion system thrust-to-mass ratio, thrust transients, and performance on the mass, area, and orbit transfer characteristics of large space systems were determined.

  16. A case for Large Space Systems Technology

    NASA Technical Reports Server (NTRS)

    Huckins, E. K., III

    1980-01-01

    The NASA Large Space Systems Technology (LSST) program, devoted to the development of Space Shuttle-deployable orbiting structures, is reviewed. The LSST program elements are: antennas, space platforms, assembly equipment and devices, surface sensors and control, control and stabilization, and analysis and design systems. Among the specific prospective applications for this technology base may be counted: multipurpose platforms, materials experimentation facilities, energy satellites, large optical and radio arrays, and communications platforms.

  17. Workflow management in large distributed systems

    NASA Astrophysics Data System (ADS)

    Legrand, I.; Newman, H.; Voicu, R.; Dobre, C.; Grigoras, C.

    2011-12-01

    The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system for computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All this monitoring information gathered for all the subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs and automated management of remote services among a large set of grid facilities.

  18. Large Payload Ground Transportation and Test Considerations

    NASA Technical Reports Server (NTRS)

    Rucker, Michelle A.

    2016-01-01

    Many spacecraft concepts under consideration by the National Aeronautics and Space Administration’s (NASA’s) Evolvable Mars Campaign take advantage of a Space Launch System payload shroud that may be 8 to 10 meters in diameter. Large payloads can theoretically save cost by reducing the number of launches needed--but only if it is possible to build, test, and transport a large payload to the launch site in the first place. Analysis performed previously for the Altair project identified several transportation and test issues with an 8.973 meters diameter payload. Although the entire Constellation Program—including Altair—has since been canceled, these issues serve as important lessons learned for spacecraft designers and program managers considering large payloads for future programs. A transportation feasibility study found that, even broken up into an Ascent and Descent Module, the Altair spacecraft would not fit inside available aircraft. Ground transportation of such large payloads over extended distances is not generally permitted, so overland transportation alone would not be an option. Limited ground transportation to the nearest waterway may be possible, but water transportation could take as long as 67 days per production unit, depending on point of origin and acceptance test facility; transportation from the western United States would require transit through the Panama Canal to access the Kennedy Space Center launch site. Large payloads also pose acceptance test and ground processing challenges. Although propulsion, mechanical vibration, and reverberant acoustic test facilities at NASA’s Plum Brook Station have been designed to accommodate large spacecraft, special handling and test work-arounds may be necessary, which could increase cost, schedule, and technical risk. Once at the launch site, there are no facilities currently capable of accommodating the combination of large payload size and hazardous processing such as hypergolic fuels

  19. Megascours: the morphodynamics of large river confluences

    NASA Astrophysics Data System (ADS)

    Dixon, Simon; Sambrook Smith, Greg; Nicholas, Andrew; Best, Jim; Bull, Jon; Vardy, Mark; Goodbred, Steve; Haque Sarker, Maminul

    2015-04-01

    River confluences are widely acknowledged as crucial controlling influences upon upstream and downstream morphology and thus landscape evolution. Despite their importance, very little is known about their evolution and morphodynamics, and there is a consensus in the literature that confluences represent fixed, nodal points in the fluvial network. Confluences have been shown to generate substantial bed scours around five times greater than mean depth. Previous research on the Ganges-Jamuna junction has shown large river confluences can be highly mobile, potentially 'combing' bed scours across a large area, although the extent to which this is representative of large confluences in general is unknown. Understanding the migration of confluences and associated scours is important for multiple applications, including designing civil engineering infrastructure (e.g. bridges, laying cable, pipelines, etc.), sequence stratigraphic interpretation for reconstruction of past environmental and sea level change, and in the hydrocarbon industry where it is crucial to discriminate autocyclic confluence scours from widespread allocyclic surfaces. Here we present a wide-ranging global review of large river confluence planforms based on analysis of Landsat imagery from 1972 through to 2014. This demonstrates there is an array of confluence morphodynamic types: from freely migrating confluences such as the Ganges-Jamuna, through confluences migrating on decadal timescales, to fixed confluences. Along with data from recent geophysical field studies in the Ganges-Brahmaputra-Meghna basin, we propose a conceptual model of large river confluence types and hypothesise how these influence morphodynamics and preservation of 'megascours' in the rock record. This conceptual model has implications for sequence stratigraphic models and the correct identification of surfaces related to past sea level change. We quantify the abundance of mobile confluence types by classifying all large confluences

  20. Large Payload Transportation and Test Considerations

    NASA Technical Reports Server (NTRS)

    Rucker, Michelle A.; Pope, James C.

    2011-01-01

    Ironically, the limiting factor to a national heavy lift strategy may not be the rocket technology needed to throw a heavy payload, but rather the terrestrial infrastructure - roads, bridges, airframes, and buildings - necessary to transport, acceptance test, and process large spacecraft. Failure to carefully consider how large spacecraft are designed, and where they are manufactured, tested, or launched, could result in unforeseen cost to modify/develop infrastructure, or incur additional risk due to increased handling or elimination of key verifications. During test and verification planning for the Altair project, a number of transportation and test issues related to the large payload diameter were identified. Although the entire Constellation Program - including Altair - was canceled in the 2011 NASA budget, issues identified by the Altair project serve as important lessons learned for future payloads that may be developed to support national "heavy lift" strategies. A feasibility study performed by the Constellation Ground Operations (CxGO) project found that neither the Altair Ascent nor Descent Stage would fit inside available transportation aircraft. Ground transportation of a payload this large over extended distances is generally not permitted by most states, so overland transportation alone would not have been an option. Limited ground transportation to the nearest waterway may be permitted, but water transportation could take as long as 66 days per production unit, depending on point of origin and acceptance test facility; transportation from the western United States would require transit through the Panama Canal to access the Kennedy Space Center launch site. Large payloads also pose acceptance test and ground processing challenges. Although propulsion, mechanical vibration, and reverberant acoustic test facilities at NASA's Plum Brook Station have been designed to accommodate large spacecraft, special handling and test work-arounds may be necessary

  1. Metrology of Large Parts. Chapter 5

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2012-01-01

    As discussed in the first chapter of this book, there are many different methods to measure a part using optical technology. Chapter 2 discussed the use of machine vision to measure macroscopic features such as length and position, which was extended to the use of interferometry as a linear measurement tool in chapter 3, and laser or other trackers to find the relation of key points on large parts in chapter 4. This chapter looks at measuring large parts to optical tolerances in the sub-micron range using interferometry, ranging, and optical tools discussed in the previous chapters. The purpose of this chapter is not to discuss specific metrology tools (such as interferometers or gauges), but to describe a systems engineering approach to testing large parts. Issues such as material warpage and temperature drifts that may be insignificant when measuring a part to micron levels under a microscope, as will be discussed in later chapters, can prove to be very important when making the same measurement over a larger part. In this chapter, we will define a set of guiding principles for successfully overcoming these challenges and illustrate the application of these principles with real world examples. While these examples are drawn from specific large optical testing applications, they inform the problems associated with testing any large part to optical tolerances. Manufacturing today relies on micrometer level part performance. Fields such as energy and transportation are demanding higher tolerances to provide increased efficiencies and fuel savings. By looking at how the optics industry approaches sub-micrometer metrology, one can gain a better understanding of the metrology challenges for any larger part specified to micrometer tolerances. Testing large parts, whether optical components or precision structures, to optical tolerances is just like testing small parts, only harder. Identical with what one does for small parts, a metrologist tests large parts and optics

  2. Large Fluvial Fans and Exploration for Hydrocarbons

    NASA Technical Reports Server (NTRS)

    Wilkinson, Murray Justin

    2005-01-01

    A report discusses the geological phenomena known, variously, as modern large (or large modern) fluvial fans or large continental fans, from a perspective of exploring for hydrocarbons. These fans are partial cones of river sediment that spread out to radii of 100 km or more. Heretofore, they have not been much recognized in the geological literature probably because they are difficult to see from the ground. They can, however, be seen in photographs taken by astronauts and on other remotely sensed imagery. Among the topics discussed in the report is the need for research to understand what seems to be an association among fluvial fans, alluvial fans, and hydrocarbon deposits. Included in the report is an abstract that summarizes the global distribution of large modern fluvial fans and a proposal to use that distribution as a guide to understanding paleo-fluvial reservoir systems where oil and gas have formed. Also included is an abstract that summarizes what a continuing mapping project has thus far revealed about the characteristics of large fans that have been found in a variety of geological environments.

  3. Development of large aperture composite adaptive optics

    NASA Astrophysics Data System (ADS)

    Kmetik, Viliam; Vitovec, Bohumil; Jiran, Lukas; Nemcova, Sarka; Zicha, Josef; Inneman, Adolf; Mikulickova, Lenka; Pavlica, Richard

    2015-01-01

    Large aperture composite adaptive optics for laser applications is investigated in cooperation with the Institute of Plasma Physics, the Department of Instrumentation and Control Engineering FME CTU, and 5M Ltd. We are exploring the possibility of producing a large-size high-power-laser deformable mirror using a lightweight bimorph-actuated structure with a composite core. In order to produce a sufficiently large operational free aperture, we are developing new technologies for production of the flexible core, the bimorph actuator, and the deformable-mirror reflector. A full simulation of the deformable-mirror structure was prepared and validated by complex testing. The actuation of the deformable mirror and the response of its complicated structure are investigated for accurate control of the adaptive optics. An original adaptive optics control system and a bimorph deformable-mirror driver were developed. Tests of material samples, components, and sub-assemblies were completed. A subscale 120 mm bimorph deformable-mirror prototype was designed, fabricated, and thoroughly tested. A large-size 300 mm composite-core bimorph deformable mirror was simulated and optimized, and fabrication of a prototype is under way. A measurement and testing facility is being modified to accommodate large-size optics.

  4. Evolution and interaction of large interplanetary streams

    NASA Technical Reports Server (NTRS)

    Whang, Y. C.; Burlaga, L. F.

    1985-01-01

    A computer simulation for the evolution and interaction of large interplanetary streams based on multi-spacecraft observations and an unsteady, one-dimensional MHD model is presented. Two events, each observed by two or more spacecraft separated by a distance of the order of 10 AU, were studied. The first simulation is based on the plasma and magnetic field observations made by two radially-aligned spacecraft. The second simulation is based on an event observed first by Helios-1 in May 1980 near 0.6 AU and later by Voyager-1 in June 1980 at 8.1 AU. These examples show that the dynamical evolution of large-scale solar wind structures is dominated by the shock process, including the formation, collision, and merging of shocks. The interaction of shocks with stream structures also causes a drastic decrease in the amplitude of the solar wind speed variation with increasing heliocentric distance, and as a result of interactions there is a large variation of shock-strengths and shock-speeds. The simulation results shed light on the interpretation for the interaction and evolution of large interplanetary streams. Observations were made along a few limited trajectories, but simulation results can supplement these by providing the detailed evolution process for large-scale solar wind structures in the vast region not directly observed. The use of a quantitative nonlinear simulation model including shock merging process is crucial in the interpretation of data obtained in the outer heliosphere.

  5. IAHS Symposium on Large River Basins

    NASA Astrophysics Data System (ADS)

    Frick, David M.

    The flow regime of large rivers is significantly influenced by man's activities, such as land use or river development. In other cases, there is evidence that climate change is the reason for a modified flow regime. When basins are shared by a number of countries, the problems of hydrologic change become even more critical. Therefore, the social and economic consequences of changes in the flow regime of large river basins are far reaching. To improve the understanding of hydrologic processes and to investigate the availability of tools and methods that can be used to analyze the hydrological impacts of changes in flow, the International Association of Hydrological Sciences (IAHS) and the International Commission on Surface Water (ICSW) devoted their symposium, held at the August 1991 XXth General Assembly of the International Union of Geodesy and Geophysics (IUGG) in Vienna, Austria, to the theme “Hydrology for Water Management of Large River Basins.” The theme was divided into four subtopics: water management and cooperation in large and/or international river basins; flow regimes and water management in relation to changes in climate, river development, and land use; water quality and sediment transport management in a large river environment; and operational flow and water quality forecasting. Both the general problem and organizational and operational aspects were investigated.

  6. Astronomy Outreach for Large and Unique Audiences

    NASA Astrophysics Data System (ADS)

    Lubowich, D.; Sparks, R. T.; Pompea, S. M.; Kendall, J. S.; Dugan, C.

    2013-04-01

    In this session, we discuss different approaches to reaching large audiences. In addition to star parties and astronomy events, the audiences for some of the events include music concerts or festivals, sick children and their families, minority communities, American Indian reservations, and tourist sites such as the National Mall. The goal is to bring science directly to the public—to people who attend astronomy events and to people who do not come to star parties, science museums, or science festivals. These programs allow the entire community to participate in astronomy activities to enhance the public appreciation of science. These programs attract large enthusiastic crowds often with young children participating in these family learning experiences. The public will become more informed, educated, and inspired about astronomy and will also be provided with information that will allow them to continue to learn after this outreach activity. Large and unique audiences often have common problems, and their solutions and the lessons learned will be presented. Interaction with the participants in this session will provide important community feedback used to improve astronomy outreach for large and unique audiences. New ways to expand astronomy outreach to new large audiences will be discussed.

  7. Hot seeding using large Y-123 seeds

    NASA Astrophysics Data System (ADS)

    Scruggs, S. J.; Putman, P. T.; Zhou, Y. X.; Fang, H.; Salama, K.

    2006-07-01

    There are several motivations for increasing the diameter of melt textured single domain discs. The maximum magnetic field produced by a trapped field magnet is proportional to the radius of the sample. Furthermore, the availability of trapped field magnets with large diameter could enable their use in applications that have traditionally been considered to require wound electromagnets, such as beam bending magnets for particle accelerators and electric propulsion. We have investigated the possibility of using large area epitaxial growth instead of the conventional point nucleation growth mechanism. This process involves the use of large Y-123 seeds for the purpose of increasing the maximum achievable Y-123 single domain size. The hot seeding technique using large Y-123 seeds was employed to seed Y-123 samples. Trapped field measurements indicate that single domain samples were indeed grown by this technique. Microstructural evaluation indicates that growth can be characterized by a rapid nucleation followed by the usual peritectic grain growth which occurs when large seeds are used. Critical temperature measurements show that no local Tc suppression occurs in the vicinity of the seed. This work supports the suggestion of using an iterative method for increasing the size of Y-123 single domains that can be grown.

  8. Baryon resonances in large Nc QCD

    NASA Astrophysics Data System (ADS)

    Matagne, N.; Stancu, Fl.

    2015-01-01

    The current status and open challenges of large Nc QCD baryon spectroscopy are reviewed. After introducing the 1/Nc expansion method, the latest achievements for the ground state properties are revisited. Next the applicability of this method to excited states is presented using two different approaches with their advantages and disadvantages. Selected results for the spectrum and strong and electromagnetic decays are described. Also further developments for the applicability of the method to excited states are presented, based on the qualitative compatibility between the quark excitation picture and the meson-nucleon scattering picture. A quantitative comparison between results obtained from the mass formula of the 1/Nc expansion method and quark models brings convincing support to quark models and the implications of different large Nc limits are discussed. The SU(6) spin-flavor structure of the large Nc baryon allows a convenient classification of highly excited resonances into SU(3) multiplets and predicts mass ranges for the missing partners.
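    Schematically, mass formulas in the 1/Nc expansion write the baryon mass operator as a sum of spin-flavor operators suppressed by powers of 1/Nc, with coefficients fitted to the observed spectrum; the expression below shows only this generic structure and is not a result taken from the paper:

    ```latex
    \hat{M} \;=\; \sum_{i} c_i\, \hat{O}_i ,
    \qquad
    \hat{O}_i \;\propto\; \frac{1}{N_c^{\,n_i-1}}\,
    \big(\text{product of } n_i \text{ spin-flavor generators}\big).
    ```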

  9. Control problems in very large accelerators

    SciTech Connect

    Crowley-Milling, M.C.

    1985-04-01

    There is no fundamental difference of kind in the control requirements between a small and a large accelerator since they are built of the same types of components, which individually have the same types of control inputs and outputs. The main difference is one of scale; the large machine has many more components of each type, and the distances involved are much greater. It is the purpose of this paper to look at the special control problems of large accelerators, which the author shall arbitrarily define as those with a length or circumference in excess of 10 km, and point out where special developments, or the adoption of developments from outside the accelerator control field, can be of assistance in minimizing the cost of the control system.

  10. Stimulated Raman scattering in large plasmas

    SciTech Connect

    Phillion, D.W.; Banner, D.L.

    1980-11-06

    Stimulated Raman scattering is of concern to laser fusion since it can create a hot electron environment which can increase the difficulty of achieving high final fuel densities. In earlier experiments with one micron laser light, the energy measured in Raman-scattered light has been insignificant. But these experiments were done with, at most, about 100 joules of laser energy. The Raman instability has a high threshold which also requires a large plasma to be irradiated with a large diameter spot. Only with a long interaction length can the Raman-scattered light wave convectively grow to a large amplitude, and only in recent long pulse, high energy experiments (4000 joules in 2 ns) at the Shiva laser facility have we observed as much as several percent of the laser light to be Raman-scattered. We find that the Raman instability has a much lower intensity threshold for longer laser pulselength and larger laser spot size on a solid target.

  11. Large molecules in diffuse interstellar clouds

    SciTech Connect

    Lepp, S.; Dalgarno, A.; Van Dishoeck, E.F.; Black, J.H.

    1988-06-01

    The effects of the presence of a substantial component of large molecules on the chemistry of diffuse molecular clouds are explored, and detailed models of the zeta Persei and zeta Ophiuchi clouds are constructed. The major consequence is a reduction in the abundances of singly charged atomic species. The long-standing discrepancy between cloud densities inferred from rotational and fine-structure level populations and from the ionization balance can be resolved by postulating a fractional abundance of large molecules of 1 x 10^-7 for zeta Persei and 6 x 10^-7 for zeta Ophiuchi. If the large molecules are polycyclic aromatic hydrocarbons (PAH) containing about 50 carbon atoms, they contain 1 percent of the carbon in zeta Persei and 7 percent in zeta Ophiuchi. Other consequences of the possible presence of PAH molecules are discussed. 23 references.

  12. Design of large aperture focal plane shutter

    NASA Astrophysics Data System (ADS)

    Hu, Jia-wen; Ma, Wen-li; Huang, Jin-long

    2012-09-01

    To satisfy the requirements of large telescopes, a large-aperture focal plane shutter with an aperture size of φ200 mm was designed. It can be started and stopped in a relatively short time with precise positioning, and its blades can open and close simultaneously at any orientation. Timing belts and stepper motors were adopted as the drive mechanism. The velocity and position of the stepper motors are controlled by PWM pulses generated by a DSP. An exponential velocity profile is applied to the stepper motors so that the shutter starts and stops in a short time. The opening/closing time of the shutter is 0.2 s, which meets the performance requirements of large telescopes.
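    A minimal sketch of the exponential velocity ramp described above, expressed as a step-rate profile that a DSP timer could translate into step pulses, might look like the following; all numerical parameters are illustrative assumptions.

    ```python
    import math

    def exponential_ramp(v_start, v_max, tau, total_time, dt=0.001):
        """Velocity profile v(t) = v_max - (v_max - v_start) * exp(-t / tau).

        Returns (time, velocity, step_interval) samples, where step_interval
        is the timer period in seconds between step pulses for a stepper
        motor driven at 'velocity' steps per second.
        """
        profile = []
        t = 0.0
        while t <= total_time:
            v = v_max - (v_max - v_start) * math.exp(-t / tau)
            profile.append((t, v, 1.0 / v))
            t += dt
        return profile

    # Illustrative numbers only: ramp from 200 to 4000 steps/s in about 0.1 s,
    # consistent with a shutter that opens or closes in roughly 0.2 s.
    ramp = exponential_ramp(v_start=200.0, v_max=4000.0, tau=0.02, total_time=0.1)
    print(f"final velocity: {ramp[-1][1]:.0f} steps/s")
    ```

    In practice the deceleration phase would mirror this ramp so that the blades come to rest at a precise position.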

  13. Some design considerations for large space structures

    NASA Technical Reports Server (NTRS)

    Bush, H. G.; Mikulas, M. M., Jr.; Heard, W. L., Jr.

    1977-01-01

    Physical characteristics of large skeletal frameworks for space applications are investigated by analyzing one concept: the tetrahedral truss, which is idealized as a sandwich plate with isotropic faces. Appropriate analytical relations are presented in terms of the truss column-element properties, which for calculations were taken as slender graphite/epoxy tubes. Column loads, resulting from gravity-gradient control and orbital transfer, are found to be small for the class structure investigated. Fundamental frequencies of large truss structures are shown to be an order of magnitude lower than large earth-based structures. Permissible loads are shown to result in small lateral deflections of the truss due to low strain at Euler buckling of the slender graphite/epoxy truss column elements. Lateral thermal deflections are found to be a fraction of the truss depth using graphite/epoxy columns.

  14. NASA technology for large space antennas

    NASA Technical Reports Server (NTRS)

    Russell, R. A.; Campbell, T. G.; Freeland, R. E.

    1979-01-01

    Technology developed by NASA in conjunction with industry for potential large, deployable space antennas with applications in communication, radio astronomy and earth observation is reviewed. Concepts for deployable antennas that have been developed to the point of detail design are summarized, including the advanced sunflower precision antenna, the radial rib antenna, the maypole (hoop/column) antenna and the parabolic erectable truss antenna. The assessment of state-of-the-art deployable antenna technology is discussed, and the approach taken by the NASA Large Space Systems Technology (LSST) Program to the development of technology for large space antenna systems is outlined. Finally, the further development of the wrap-rib antenna and the maypole (hoop/column) concept, which meet mission model requirements, to satisfy LSST size and frequency requirements is discussed.

  15. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and the time developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble provided that the ensemble contains at least 16 realizations.
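    As a sketch of the idea, the model coefficient that the standard dynamic procedure closes by averaging over homogeneous spatial directions is instead closed by averaging over the ensemble of simultaneous realizations; in the usual least-squares form this reads (standard dynamic-model notation, assumed here rather than quoted from the paper):

    ```latex
    C(\mathbf{x},t) \;=\;
    \frac{\left\langle L_{ij} M_{ij} \right\rangle_{\mathrm{ens}}}
         {\left\langle M_{ij} M_{ij} \right\rangle_{\mathrm{ens}}} .
    ```

    Here L_ij is the resolved (Germano) stress, M_ij is the difference of the modelled subgrid stresses at the grid and test filter levels, and the angle brackets denote an average over the LES realizations, which is why no spatial homogeneity is required.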

  16. Eyeglass. 1. Very large aperture diffractive telescopes.

    PubMed

    Hyde, R A

    1999-07-01

    The Eyeglass is a very large aperture (25-100-m) space telescope consisting of two distinct spacecraft, separated in space by several kilometers. A diffractive lens provides the telescope s large aperture, and a separate, much smaller, space telescope serves as its mobile eyepiece. Use of a transmissive diffractive lens solves two basic problems associated with very large aperture space telescopes; it is inherently launchable (lightweight, packagable, and deployable) it and virtually eliminates the traditional, very tight surface shape tolerances faced by reflecting apertures. The potential drawback to use of a diffractive primary (very narrow spectral bandwidth) is eliminated by corrective optics in the telescope s eyepiece; the Eyeglass can provide diffraction-limited imaging with either single-band (Deltalambda/lambda approximately 0.1), multiband, or continuous spectral coverage. PMID:18323902

  17. Large-scale sparse singular value computations

    NASA Technical Reports Server (NTRS)

    Berry, Michael W.

    1992-01-01

    Four numerical methods for computing the singular value decomposition (SVD) of large sparse matrices on a multiprocessor architecture are presented. Lanczos and subspace iteration-based methods for determining several of the largest singular triplets (singular values and corresponding left and right-singular vectors) for sparse matrices arising from two practical applications: information retrieval and seismic reflection tomography are emphasized. The target architectures for implementations are the CRAY-2S/4-128 and Alliant FX/80. The sparse SVD problem is well motivated by recent information-retrieval techniques in which dominant singular values and their corresponding singular vectors of large sparse term-document matrices are desired, and by nonlinear inverse problems from seismic tomography applications which require approximate pseudo-inverses of large sparse Jacobian matrices.
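    A present-day analogue of this kind of partial, sparse SVD is available in SciPy; the sketch below uses a synthetic term-document matrix (not the paper's CRAY-2S or Alliant FX/80 implementations) and computes a few of the largest singular triplets without forming a dense matrix:

    ```python
    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import svds

    # Illustrative sparse "term-document" matrix: 5000 terms x 2000 documents,
    # roughly 0.1% nonzero, as might arise in an information-retrieval setting.
    A = sparse_random(5000, 2000, density=0.001, format="csr", random_state=0)

    # Compute the k largest singular triplets (singular values plus left and
    # right singular vectors) iteratively.
    k = 6
    u, s, vt = svds(A, k=k)
    order = np.argsort(s)[::-1]   # svds returns singular values in ascending order
    print("largest singular values:", s[order])
    print("left singular vectors shape:", u.shape)   # (5000, k)
    ```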

  18. Large Deformations of a Soft Porous Material

    NASA Astrophysics Data System (ADS)

    MacMinn, Christopher W.; Dufresne, Eric R.; Wettlaufer, John S.

    2016-04-01

    Compressing a porous material will decrease the volume of the pore space, driving fluid out. Similarly, injecting fluid into a porous material can expand the pore space, distorting the solid skeleton. This poromechanical coupling has applications ranging from cell and tissue mechanics to geomechanics and hydrogeology. The classical theory of linear poroelasticity captures this coupling by combining Darcy's law with Terzaghi's effective stress and linear elasticity in a linearized kinematic framework. Linear poroelasticity is a good model for very small deformations, but it becomes increasingly inappropriate for moderate to large deformations, which are common in the context of phenomena such as swelling and damage, and for soft materials such as gels and tissues. The well-known theory of large-deformation poroelasticity combines Darcy's law with Terzaghi's effective stress and nonlinear elasticity in a rigorous kinematic framework. This theory has been used extensively in biomechanics to model large elastic deformations in soft tissues and in geomechanics to model large elastoplastic deformations in soils. Here, we first provide an overview and discussion of this theory with an emphasis on the physics of poromechanical coupling. We present the large-deformation theory in an Eulerian framework to minimize the mathematical complexity, and we show how this nonlinear theory simplifies to linear poroelasticity under the assumption of small strain. We then compare the predictions of linear poroelasticity with those of large-deformation poroelasticity in the context of two uniaxial model problems: fluid outflow driven by an applied mechanical load (the consolidation problem) and compression driven by a steady fluid throughflow. We explore the steady and dynamical errors associated with the linear model in both situations, as well as the impact of introducing a deformation-dependent permeability. We show that the error in linear poroelasticity is due primarily to kinematic
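    For context, the small-strain limit referred to above couples Darcy flow, Terzaghi's effective stress, and linear elasticity; a common textbook form for incompressible constituents (notation and sign conventions assumed here, not taken from the paper) is

    ```latex
    \mathbf{q} = -\frac{k}{\mu}\,\nabla p ,
    \qquad
    \boldsymbol{\sigma} = \boldsymbol{\sigma}' - p\,\mathbf{I} ,
    \qquad
    \boldsymbol{\sigma}' = 2G\,\boldsymbol{\varepsilon}(\mathbf{u})
        + \lambda\,(\nabla\!\cdot\!\mathbf{u})\,\mathbf{I} ,
    \qquad
    \frac{\partial}{\partial t}(\nabla\!\cdot\!\mathbf{u}) + \nabla\!\cdot\!\mathbf{q} = 0 ,
    ```

    where u is the solid displacement and q the fluid flux; the large-deformation theory replaces the linearized kinematics behind these relations with finite-strain measures.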

  19. Heterothermy in large mammals: inevitable or implemented?

    PubMed

    Hetem, Robyn S; Maloney, Shane K; Fuller, Andrea; Mitchell, Duncan

    2016-02-01

    Advances in biologging techniques over the past 20 years have allowed for the remote and continuous measurement of body temperatures in free-living mammals. While there is an abundance of literature on heterothermy in small mammals, fewer studies have investigated the daily variability of body core temperature in larger mammals. Here we review measures of heterothermy and the factors that influence heterothermy in large mammals in their natural habitats, focussing on large mammalian herbivores. The mean 24 h body core temperatures for 17 species of large mammalian herbivores (>10 kg) decreased by ∼1.3°C for each 10-fold increase in body mass, a relationship that remained significant following phylogenetic correction. The degree of heterothermy, as measured by the 24 h amplitude of body core temperature rhythm, was independent of body mass and appeared to be driven primarily by energy and water limitations. When faced with the competing demands of osmoregulation, energy acquisition and water or energy use for thermoregulation, large mammalian herbivores appear to relax the precision of thermoregulation thereby conserving body water and energy. Such relaxation may entail a cost in that an animal moves closer to its thermal limits for performance. Maintaining homeostasis requires trade-offs between regulated systems, and homeothermy apparently is not accorded the highest priority; large mammals are able to maintain optimal homeothermy only if they are well nourished, hydrated, and not compromised energetically. We propose that the amplitude of the 24 h rhythm of body core temperature provides a useful index of any compromise experienced by a free-living large mammal and may predict the performance and fitness of an animal. PMID:25522232
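    Expressed as a formula, the reported scaling of mean 24 h body core temperature with body mass M is approximately (the reference mass M0 is arbitrary and the intercept is not given in this abstract):

    ```latex
    \bar{T}_{b}(M) \;\approx\; \bar{T}_{b}(M_{0}) \;-\; 1.3\,^{\circ}\mathrm{C}
    \times \log_{10}\!\left(\frac{M}{M_{0}}\right).
    ```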

  20. Large space structures - Fantasies and facts

    NASA Technical Reports Server (NTRS)

    Card, M. F.; Boyer, W. J.

    1980-01-01

    A review of large space structures activities from 1973 to 1979 is presented. Long-range studies of space colonies, gigantic solar power stations and projected earth applications revived interest in space activities. Studies suggest opportunities for advanced antenna and platform applications. Matching low-thrust propulsion to large flexible vehicles will be a key technology. Current structures technology investigations include deployable and erectable structures and assembly techniques. Based on orbited structures experience, deployment reliability is a critical issue for deployable structures. For erectable structures, concepts for earth-fabricated and space-fabricated memb

  1. Convergence of large-deviation estimators.

    PubMed

    Rohwer, Christian M; Angeletti, Florian; Touchette, Hugo

    2015-11-01

    We study the convergence of statistical estimators used in the estimation of large-deviation functions describing the fluctuations of equilibrium, nonequilibrium, and manmade stochastic systems. We give conditions for the convergence of these estimators with sample size, based on the boundedness or unboundedness of the quantity sampled, and discuss how statistical errors should be defined in different parts of the convergence region. Our results shed light on previous reports of "phase transitions" in the statistics of free energy estimators and establish a general framework for reliably estimating large-deviation functions from simulation and experimental data and identifying parameter regions where this estimation converges. PMID:26651644
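    A minimal illustration of the kind of estimator whose convergence is at issue above is the empirical scaled cumulant generating function of a block mean, Legendre-transformed to a rate function; the i.i.d. Gaussian data and all parameter choices below are illustrative assumptions.

    ```python
    import numpy as np

    def estimate_rate_function(samples, block_size, k_grid, a_grid):
        """Crude large-deviation estimators for the block mean of i.i.d. data.

        lambda(k) is estimated as (1/n) * log(mean over blocks of exp(n*k*S_n)),
        and the rate function I(a) is its Legendre-Fenchel transform on a grid.
        """
        n = block_size
        usable = len(samples) // n * n
        block_means = samples[:usable].reshape(-1, n).mean(axis=1)
        lam = np.array([np.log(np.mean(np.exp(n * k * block_means))) / n
                        for k in k_grid])
        rate = np.array([np.max(k_grid * a - lam) for a in a_grid])
        return lam, rate

    rng = np.random.default_rng(1)
    data = rng.normal(size=200_000)          # illustrative i.i.d. Gaussian sample
    k_grid = np.linspace(-2.0, 2.0, 81)
    a_grid = np.linspace(-1.0, 1.0, 41)
    lam, rate = estimate_rate_function(data, block_size=50,
                                       k_grid=k_grid, a_grid=a_grid)
    # For standard Gaussian data the exact rate function is I(a) = a^2 / 2.
    i = int(np.argmin(np.abs(a_grid - 0.5)))
    print("I(0.5) estimate vs exact:", rate[i], 0.5**2 / 2)
    ```

    With a finite sample the estimate is trustworthy only over a bounded range of k (and hence of a); outside that range the empirical average is dominated by a few extreme blocks and the estimator saturates, which is the kind of convergence behaviour addressed above.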

  2. Method of Making Large Area Nanostructures

    NASA Technical Reports Server (NTRS)

    Marks, Alvin M.

    1995-01-01

    A method which enables the high speed formation of nanostructures on large area surfaces is described. The method uses a super sub-micron beam writer (Supersebter). The Supersebter uses a large area multi-electrode (Spindt type emitter source) to produce multiple electron beams simultaneously scanned to form a pattern on a surface in an electron beam writer. A 100,000 x 100,000 array of electron point sources, demagnified in a long electron beam writer to simultaneously produce 10 billion nano-patterns on a 1 meter squared surface by multi-electron beam impact on a 1 cm squared surface of an insulating material is proposed.

  3. Large horizontal axis wind turbine development

    NASA Technical Reports Server (NTRS)

    Robbins, W. H.; Thomas, R. L.

    1979-01-01

    The paper presents an overview of the NASA activities in large horizontal axis wind turbine development. First generation technology large wind turbines (Mod-0A, Mod-1) have been designed and are in operation at selected utility sites. Second generation machines (Mod-2) are scheduled to begin operations on a utility site in 1980. These machines are estimated to generate electricity at less than 4 cents/kWh when manufactured in modest production rates. Meanwhile, plans are being made to continue developing wind turbines which can meet the cost goals of 2 to 3 cents/kWh.

  4. Large-signal klystron simulations using KLSC

    SciTech Connect

    Carlsten, B.E.; Ferguson, P.

    1997-10-01

    The authors describe large-signal klystron simulations using the particle-in-cell code KLSC. This code uses the induced-current model to describe the steady-state cavity modulations and resulting rf fields, and advances the space-charge fields through Maxwell's equations. In this paper, an eight-cavity, high-power S-band klystron simulation is used to highlight various aspects of this simulation technique. In particular, there are specific issues associated with modeling the input cavity, the gain circuit, and the large-signal circuit (including the output cavities) that have to be treated carefully.

  5. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.

  6. Method and apparatus for extruding large honeycombs

    SciTech Connect

    Kragle, Harry A.; Lambert, David W.; Lipp, G. Daniel

    1996-09-03

    Extrusion die apparatus and an extrusion method for extruding large-cross-section honeycomb structures from plasticized ceramic batch materials are described, the apparatus comprising a die having a support rod connected to its central portion, the support rod being anchored to support means upstream of the die. The support rod and support means act to limit die distortion during extrusion, reducing die strain and stress to levels permitting large honeycomb extrusion without die failure. Dies of optimal thickness are disclosed which reduce the maximum stresses exerted on the die during extrusion.

  7. Electrical system for a large cogeneration plant

    SciTech Connect

    Arvay, G.J.; Smith, R.T.

    1992-01-01

    The electrical system, interface, commissioning, and operations requirements of a major multiunit cogeneration plant interconnected with a large utility system through a 230-kV sulfur hexafluoride (SF6) gas-insulated substation (GIS) are complex and demanding. This paper describes the electrical requirements, including utility interfaces, engineering, and on-site testing, as applied to the execution of a large, multiunit turnkey cogeneration project in California. The benefits of careful engineering efforts are shown to result in timely and cost effective completion of engineering, manufacturing, installation, testing, and commercial operation.

  8. Production of a large, quiescent, magnetized plasma

    NASA Technical Reports Server (NTRS)

    Landt, D. L.; Ajmera, R. C.

    1976-01-01

    An experimental device is described which produces a large homogeneous quiescent magnetized plasma. In this device, the plasma is created in an evacuated brass cylinder by ionizing collisions between electrons emitted from a large-diameter electron gun and argon atoms in the chamber. Typical experimentally measured values of the electron temperature and density are presented which were obtained with a glass-insulated planar Langmuir probe. It is noted that the present device facilitates the study of phenomena such as waves and diffusion in magnetized plasmas.

  9. Black holes at the Large Hadron Collider.

    PubMed

    Dimopoulos, S; Landsberg, G

    2001-10-15

    If the scale of quantum gravity is near TeV, the CERN Large Hadron Collider will be producing one black hole (BH) about every second. The decays of the BHs into the final states with prompt, hard photons, electrons, or muons provide a clean signature with low background. The correlation between the BH mass and its temperature, deduced from the energy spectrum of the decay products, can test Hawking's evaporation law and determine the number of large new dimensions and the scale of quantum gravity. PMID:11690198

  10. Indian LSSC (Large Space Simulation Chamber) facility

    NASA Technical Reports Server (NTRS)

    Brar, A. S.; Prasadarao, V. S.; Gambhir, R. D.; Chandramouli, M.

    1988-01-01

    The Indian Space Agency has undertaken a major project to acquire in-house capability for thermal and vacuum testing of large satellites. This Large Space Simulation Chamber (LSSC) facility will be located in Bangalore and is to be operational in 1989. The facility is capable of providing 4 meter diameter solar simulation with provision to expand to 4.5 meter diameter at a later date. With such provisions as controlled variations of shroud temperatures and availability of infrared equipment as alternative sources of thermal radiation, this facility will be amongst the finest anywhere. The major design concept and major aspects of the LSSC facility are presented here.

  11. Knowledge Discovery in Large Data Sets

    SciTech Connect

    Simas, Tiago; Silva, Gabriel; Miranda, Bruno; Ribeiro, Rita

    2008-12-05

    In this work we briefly address the problem of unsupervised classification on large datasets, of magnitude around 100,000,000 objects. The objects are variable objects, which make up around 10% of the 1,000,000,000 astronomical objects that will be collected by the GAIA/ESA mission. We tested unsupervised classification algorithms on known datasets such as the OGLE and Hipparcos catalogs. Moreover, we are building several templates to represent the main classes of variable objects as well as new classes to build a synthetic dataset of this dimension. In the future we will run the GAIA satellite scanning law on these templates to obtain a testable large dataset.
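    As a toy illustration of the unsupervised classification step described above, one can cluster per-object light-curve features with a mini-batch method that scales to very large catalogs; the features, class structure, and parameters below are synthetic stand-ins and not the authors' pipeline.

    ```python
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    rng = np.random.default_rng(0)

    # Synthetic stand-in for per-object light-curve features (e.g., period,
    # amplitude, colour); real inputs would come from catalogs such as OGLE
    # or Hipparcos. Three intrinsic classes are simulated here.
    centres = np.array([[0.5, 1.0, 0.2], [5.0, 0.3, 1.1], [50.0, 2.0, 0.6]])
    features = np.vstack([c + 0.1 * rng.standard_normal((10_000, 3)) for c in centres])

    # Mini-batch k-means keeps memory use modest, which matters when scaling
    # towards catalogs of 10^8 variable objects.
    model = MiniBatchKMeans(n_clusters=3, batch_size=4096, random_state=0)
    labels = model.fit_predict(features)
    print(np.bincount(labels))   # roughly 10,000 objects per recovered class
    ```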

  12. Structure of large dsDNA viruses

    PubMed Central

    Klose, Thomas; Rossmann, Michael G.

    2015-01-01

    Nucleocytoplasmic large dsDNA viruses (NCLDVs) encompass an ever-increasing group of large eukaryotic viruses, infecting a wide variety of organisms. The set of core genes shared by all these viruses includes a major capsid protein with a double jelly-roll fold forming an icosahedral capsid, which surrounds a double layer membrane that contains the viral genome. Furthermore, some of these viruses, such as the members of the Mimiviridae and Phycodnaviridae have a unique vertex that is used during infection to transport DNA into the host. PMID:25003382

  13. Ground test experiment for large space structures

    NASA Technical Reports Server (NTRS)

    Tollison, D. K.; Waites, H. B.

    1985-01-01

    In recent years a new body of control theory has been developed for the design of control systems for Large Space Structures (LSS). The problems of testing this theory on LSS hardware are aggravated by the expense and risk of actual in orbit tests. Ground tests on large space structures can provide a proving ground for candidate control systems, but such tests require a unique facility for their execution. The current development of such a facility at the NASA Marshall Space Flight Center (MSFC) is the subject of this report.

  14. Design and construction of large capacitor banks

    SciTech Connect

    Whitham, K.; Gritton, D.G.; Holloway, R.W.; Merritt, B.T.

    1983-01-01

    Over the past 12 years, the Laser Program at LLNL has actively pursued laser fusion, using a series of large, solid-state lasers to develop target data leading to reactor designs using the concept of inertial confinement fusion. These lasers are all linear chains of flashlamp driven, Nd-doped glass amplifiers with a master oscillator at the front end. Techniques have been developed during this time to scale the lasers to an arbitrarily large size. A table shows the series of lasers and their parameters that have been developed to date.

  15. Formal Verification of Large Software Systems

    NASA Technical Reports Server (NTRS)

    Yin, Xiang; Knight, John

    2010-01-01

    We introduce a scalable proof structure to facilitate formal verification of large software systems. In our approach, we mechanically synthesize an abstract specification from the software implementation, match its static operational structure to that of the original specification, and organize the proof as the conjunction of a series of lemmas about the specification structure. By setting up a different lemma for each distinct element and proving each lemma independently, we obtain the important benefit that the proof scales easily for large systems. We present details of the approach and an illustration of its application on a challenge problem from the security domain.

  16. Deflection of large near-earth objects

    SciTech Connect

    Canavan, G.H.

    1999-01-11

    The Earth is periodically hit by near Earth objects (NEOs) ranging in size from dust to mountains. The small ones are a useful source of information, but those larger than about 1 km can cause global damage. The requirements for the deflection of NEOs with significant material strength are known reasonably well; however, the strength of large NEOs is not known, so those requirements may not apply. Meteor impacts on the Earth's atmosphere give some information on strength as a function of object size and composition. This information is used here to show that large, weak objects could also be deflected efficiently, if addressed properly.

  17. European Extremely Large Telescope: progress report

    NASA Astrophysics Data System (ADS)

    Tamai, R.; Spyromilio, J.

    2014-07-01

    The European Extremely Large Telescope is a project of the European Southern Observatory to build and operate a 40-m class optical near-infrared telescope. The telescope design effort is largely concluded and construction contracts are being placed with industry and academic/research institutes for the various components. The siting of the telescope in Northern Chile close to the Paranal site allows for an integrated operation of the facility providing significant economies. The progress of the project in various areas is presented in this paper and references to other papers at this SPIE meeting are made.

  18. Large Horizontal-Axis Wind Turbines

    NASA Technical Reports Server (NTRS)

    Thresher, R. W. (Editor)

    1982-01-01

    The proceedings of a workshop held in Cleveland, July 28-30, 1981 are described. The workshop emphasized recent experience in building and testing large propeller-type wind turbines, expanding upon the proceedings of three previous DOE/NASA workshops at which design and analysis topics were considered. A total of 41 papers were presented on the following subjects: current and advanced large wind turbine systems, rotor blade design and manufacture, electric utility activities, research and supporting technology, meteorological characteristics for design and operation, and wind resources assessments for siting.

  19. Large Terrain Modeling and Visualization for Planets

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Cameron, Jonathan; Lim, Christopher

    2011-01-01

    Physics-based simulations are actively used in the design, testing, and operations phases of surface and near-surface planetary space missions. One of the challenges in realtime simulations is the ability to handle large multi-resolution terrain data sets within models as well as for visualization. In this paper, we describe special techniques that we have developed for visualization, paging, and data storage for dealing with these large data sets. The visualization technique uses a real-time GPU-based continuous level-of-detail method that delivers multiple-frames-per-second performance even for planetary-scale terrain models.

  20. Tensor methods for large, sparse unconstrained optimization

    SciTech Connect

    Bouaricha, A.

    1996-11-01

    Tensor methods for unconstrained optimization were first introduced by Schnabel and Chow [SIAM J. Optimization, 1 (1991), pp. 293-315], who describe these methods for small to moderate size problems. This paper extends these methods to large, sparse unconstrained optimization problems. This requires an entirely new way of solving the tensor model that makes the methods suitable for solving large, sparse optimization problems efficiently. We present test results for sets of problems where the Hessian at the minimizer is nonsingular and where it is singular. These results show that tensor methods are significantly more efficient and more reliable than standard methods based on Newton's method.
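
    For orientation, the "tensor model" referred to above augments the standard Newton quadratic model with low-rank higher-order terms. The schematic form below follows Schnabel and Chow's general construction and is included only as a reminder of the idea, not as a statement of this paper's specific large-sparse formulation:

      % Quadratic (Newton) model about the current iterate x_c:
      \[
        m_N(x_c + d) = f(x_c) + \nabla f(x_c)^{\mathsf T} d
                     + \tfrac{1}{2}\, d^{\mathsf T} \nabla^2 f(x_c)\, d .
      \]
      % Tensor model: the same quadratic plus low-rank third- and fourth-order
      % correction tensors T_c and V_c, chosen cheaply from recent iterates:
      \[
        m_T(x_c + d) = m_N(x_c + d) + \tfrac{1}{6}\, T_c\, d^{3} + \tfrac{1}{24}\, V_c\, d^{4} .
      \]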

  1. Large fracture toughness boron-epoxy composites

    NASA Technical Reports Server (NTRS)

    Atkins, A. G.

    1975-01-01

    The high tensile strengths of strong interfacial bonding may be combined with the large fracture toughness of weak interfacial bonding in brittle fiber/brittle matrix composites by intermittently coating the filaments before layup so as to have random alternate weak and strong regions. Appropriate coating materials enable Cook-Gordon Mode I interfacial debonding to take place, which produces very long pull-out lengths with an associated large contribution to toughness. Unidirectional boron-epoxy composites have been so made which have toughnesses greater than 200 kJ/sq m while retaining rule of mixtures tensile strengths. Similar trends have been observed for crossply layups.

  2. Large plasmids of avian Escherichia coli isolates.

    PubMed

    Doetkott, D M; Nolan, L K; Giddings, C W; Berryhill, D L

    1996-01-01

    The plasmid DNA of 30 Escherichia coli isolates from chickens was extracted and examined using techniques designed to isolate large plasmids. This plasmid DNA was examined for the presence of certain known virulence-related genes including cvaC, traT, and some aerobactin-related sequences. Seventeen of the 30 isolates contained from one to four plasmids greater than 50 kb in size. Eleven of these 17 strains possessed plasmids greater than 100 kb in size. Therefore, E. coli isolates of chickens frequently contain large plasmids, and many of these plasmids are likely to contain virulence-related sequences. PMID:8980827

  3. Large eddy simulation in the ocean

    NASA Astrophysics Data System (ADS)

    Scotti, Alberto

    2010-12-01

    Large eddy simulation (LES) is a relative newcomer to oceanography. In this review, both applications of traditional LES to oceanic flows and new oceanic LES still in an early stage of development are discussed. The survey covers LES applied to boundary layer flows, traditionally an area where LES has provided considerable insight into the physics of the flow, as well as more innovative applications, where new SGS closure schemes need to be developed. The merging of LES with large-scale models is also briefly reviewed.

  4. Large volume flow-through scintillating detector

    DOEpatents

    Gritzo, Russ E.; Fowler, Malcolm M.

    1995-01-01

    A large volume flow-through radiation detector for use in large air flow situations such as incinerator stacks or building air systems comprises a plurality of flat plates made of a scintillating material arranged parallel to the air flow. Each scintillating plate has a light guide attached which transfers light generated inside the scintillating plate to an associated photomultiplier tube. The outputs of the photomultiplier tubes are connected to electronics which can record any radiation and provide an alarm if appropriate for the application.

  5. Cytogenetic findings in a large bowel adenocarcinoma.

    PubMed

    Ferti-Passantonopoulou, A; Panani, A; Avgerinos, A; Raptis, S

    1986-04-15

    Cytogenetic analysis of a biopsy specimen taken during sigmoidoscopy from an adenocarcinoma of the large bowel revealed a hypodiploid karyotype with numerical and structural abnormalities identified as trisomy 7, t(3;12), t(1;17), interstitial deletion of the long arm of a chromosome #5 and loss of the Y chromosome with double X chromosomes. The possibility of this karyotype being a further evolutionary step in a subgroup of large bowel cancers and the clinical value of the above findings are discussed. PMID:3456826

  6. Large muon electric dipole moment from flavor?

    SciTech Connect

    Hiller, Gudrun; Huitu, Katri; Rueppell, Timo; Laamanen, Jari

    2010-11-01

    We study the prospects and opportunities of a large muon electric dipole moment (EDM) of the order 10^-24 to 10^-22 e cm. We investigate how natural such a value is within the general minimal supersymmetric extension of the standard model with CP violation from lepton flavor violation, in view of the experimental constraints. In models with hybrid gauge-gravity-mediated supersymmetry breaking, a large muon EDM is indicative of the structure of flavor breaking at the Planck scale, and points towards a high messenger scale.

  7. [Large vessels vasculopathy in systemic sclerosis].

    PubMed

    Tejera Segura, Beatriz; Ferraz-Amaro, Iván

    2015-12-01

    Vasculopathy in systemic sclerosis is a severe, in many cases irreversible, manifestation that can lead to amputation. While the classical clinical manifestations of the disease involve the microcirculation, the proximal vessels of the upper and lower limbs can also be affected. This involvement of large vessels may be related to systemic sclerosis itself, to vasculitis, or to atherosclerosis, and the differential diagnosis is not easy. An accurate and early diagnosis is essential for starting prompt, appropriate treatment. In this review, we examine the involvement of large vessels in scleroderma, an understudied manifestation with important prognostic and therapeutic implications. PMID:25726305

  8. Construction and assembly of large space structures

    NASA Technical Reports Server (NTRS)

    Mar, J. W.; Miller, R. H.; Bowden, M. L.

    1980-01-01

    Three aspects of the construction and assembly of large space structures, namely transportation costs, human productivity in space and the source of materials (lunar vs terrestrial), are considered. Studies on human productivity have been so encouraging that the cost of human labor is now regarded as much less important than transportation costs. It is pointed out that these costs, although high, are extremely demand-sensitive. Even with high demand, however, the construction of several large systems would warrant the use of lunar materials and space manufacturing. The importance of further research is stressed in order to establish the optimum tradeoff between automation and manual assembly.

  9. Timing signatures of large scale solar eruptions

    NASA Astrophysics Data System (ADS)

    Balasubramaniam, K. S.; Hock-Mysliwiec, Rachel; Henry, Timothy; Kirk, Michael S.

    2016-05-01

    We examine the timing signatures of large solar eruptions resulting in flares, CMEs, and Solar Energetic Particle events. We probe solar active regions from the chromosphere through the corona, using data from space- and ground-based observations, including ISOON, SDO, GONG, and GOES. Our studies include a number of flares and CMEs, mostly of the M- and X-strengths as categorized by GOES. We find that the chromospheric signatures of these large eruptions occur 5-30 minutes in advance of coronal high-temperature signatures. These timing measurements are then used as inputs to models to reconstruct the eruptive nature of these systems and to explore their utility in forecasts.

  10. Large natural geophysical events: planetary planning

    SciTech Connect

    Knox, J.B.; Smith, J.V.

    1984-09-01

    Geological and geophysical data suggest that during the evolution of the earth and its species there have been many mass extinctions due to large impacts from comets and asteroids and to major volcanic events. Today, technology has developed to the stage where we can begin to consider protective measures for the planet. Evidence of the ecological disruption and frequency of these major events is presented. Surveillance and warning systems are the most critical to develop, so that sufficient lead times for warnings exist and appropriate interventions can be designed. The long-term research undergirding these warning systems, their implementation, and proof testing is rich in opportunities for collaboration for peace.

  11. 75 FR 21455 - Large Trader Reporting System

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-23

    ...; and (3) provide certain additional information in response to a Commission request. The proposed rule... regarding a response provided in Schedule 6 to a large trader's Form 13H concerning the identification of... the Federal eRulemaking Portal ( http://www.regulations.gov ). Follow the instructions for...

  12. 77 FR 18109 - Assessments, Large Bank Pricing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-27

    ...The FDIC proposes to amend its regulations to revise some of the definitions used to determine assessment rates for large and highly complex insured depository institutions. The FDIC believes these proposed amendments will result in more consistent reporting, better reflect risk to the FDIC, significantly reduce reporting burden, and satisfy many concerns voiced by the banking...

  13. The Large Context Problem (LCP) Approach

    ERIC Educational Resources Information Center

    Stinner, Arthur

    2006-01-01

    This article traces the development of a contextual approach to the teaching of science (physics) subsequently called the Large Context Problem (LCP) approach. This approach is based on the general observation that learning could be well motivated by a context with one unifying central idea capable of capturing the imagination of the students. The…

  14. Large Scale Commodity Clusters for Lattice QCD

    SciTech Connect

    A. Pochinsky; W. Akers; R. Brower; J. Chen; P. Dreher; R. Edwards; S. Gottlieb; D. Holmgren; P. Mackenzie; J. Negele; D. Richards; J. Simone; W. Watson

    2002-06-01

    We describe the construction of large scale clusters for lattice QCD computing being developed under the umbrella of the U.S. DoE SciDAC initiative. We discuss the study of floating point and network performance that drove the design of the cluster, and present our plans for future multi-Terascale facilities.

  15. Large Indoor Sports and Recreation Facilities.

    ERIC Educational Resources Information Center

    Seidler, Todd

    This paper presents an overview and analysis of field houses, stadiums, arenas, and campus recreation centers. All are large indoor sports or recreation facilities. In general, stadiums and arenas are spectator facilities while field houses and campus recreation centers are primarily designed for activity. A college field house is a structure that…

  16. Responses of large mammals to climate change

    PubMed Central

    Hetem, Robyn S; Fuller, Andrea; Maloney, Shane K; Mitchell, Duncan

    2014-01-01

    Most large terrestrial mammals, including the charismatic species so important for ecotourism, do not have the luxury of rapid micro-evolution or sufficient range shifts as strategies for adjusting to climate change. The rate of climate change is too fast for genetic adaptation to occur in mammals with longevities of decades, typical of large mammals, and landscape fragmentation and population by humans too widespread to allow spontaneous range shifts of large mammals, leaving only the expression of latent phenotypic plasticity to counter effects of climate change. The expression of phenotypic plasticity includes anatomical variation within the same species, changes in phenology, and employment of intrinsic physiological and behavioral capacity that can buffer an animal against the effects of climate change. Whether that buffer will be realized is unknown, because little is known about the efficacy of the expression of plasticity, particularly for large mammals. Future research in climate change biology requires measurement of physiological characteristics of many identified free-living individual animals for long periods, probably decades, to allow us to detect whether expression of phenotypic plasticity will be sufficient to cope with climate change.

  17. Tutoring Large Numbers: An Unmet Challenge

    ERIC Educational Resources Information Center

    Lentell, Helen; O'Rourke, Jennifer

    2004-01-01

    Open and distance learning (ODL) is increasingly being regarded as a viable policy option for developing countries with limited educational resources for buildings, books and trained teachers, seeking to increase accessibility for large numbers of learners in education and training opportunities. Advocates of ODL as an appropriate solution to…

  18. Insights into Our Understandings of Large Numbers

    ERIC Educational Resources Information Center

    Kastberg, Signe E.; Walker, Vicki

    2008-01-01

    This article explores prospective teachers' understandings of one million to gain insights into the development of adult understanding of large numbers. Themes in the prospective teachers' work included number associated with a quantity of objects, number as an abstraction, and additive and multiplicative approaches. The authors suggest that the…

  19. Reading the World through Very Large Numbers

    ERIC Educational Resources Information Center

    Greer, Brian; Mukhopadhyay, Swapna

    2010-01-01

    One original, and continuing, source of interest in large numbers is observation of the natural world, such as trying to count the stars on a clear night or contemplation of the number of grains of sand on the seashore. Indeed, a search of the internet quickly reveals many discussions of the relative numbers of stars and grains of sand. Big…

  20. Uniform reflective films deposited on large surfaces

    NASA Technical Reports Server (NTRS)

    1966-01-01

    A specially designed baffle that intercepts varying amounts of the vapor stream from an evaporant source allows vacuum deposition of films of uniform thickness on large substrates using a single small-area evaporation source. A mirror coated by this method will have a reflectance as high as 82 percent at 1216 angstroms, with a variation of only plus/minus 2 percent over the surface.

  1. Decorum in the Large Lecture Class

    ERIC Educational Resources Information Center

    Druger, Marvin

    2008-01-01

    Anyone who has taught a lecture to a large group of students has probably experienced undesirable student behaviors. The author, who has taught an introductory college biology course at Syracuse University for 45 years, relates that an important part of his teaching philosophy is that everyone should learn from everything that they do, and…

  2. Report of the large solenoid detector group

    SciTech Connect

    Hanson, G.G.; Mori, S.; Pondrom, L.G.; Williams, H.H.; Barnett, B.; Barnes, V.; Cashmore, R.; Chiba, M.; DeSalvo, R.; Devlin, T.

    1987-09-01

    This report presents a conceptual design of a large solenoid for studying physics at the SSC. The parameters and nature of the detector have been chosen based on present estimates of what is required to allow the study of heavy quarks, supersymmetry, heavy Higgs particles, WW scattering at large invariant masses, new W and Z bosons, and very large momentum transfer parton-parton scattering. Simply stated, the goal is to obtain optimum detection and identification of electrons, muons, neutrinos, jets, W's and Z's over a large rapidity region. The primary region of interest extends over ±3 units of rapidity, although the calorimetry must extend to ±5.5 units if optimal missing energy resolution is to be obtained. A magnetic field was incorporated because of the importance of identifying the signs of the charges for both electrons and muons and because of the added possibility of identifying tau leptons and secondary vertices. In addition, the existence of a magnetic field may prove useful for studying new physics processes about which we currently have no knowledge. Since hermeticity of the calorimetry is extremely important, the entire central and endcap calorimeters were located inside the solenoid. This does not at the moment seem to produce significant problems (although many issues remain to be resolved) and in fact leads to a very effective muon detector in the central region.

  3. Finite N from resurgent large N

    NASA Astrophysics Data System (ADS)

    Couso-Santamaría, Ricardo; Schiappa, Ricardo; Vaz, Ricardo

    2015-05-01

    Due to instanton effects, gauge-theoretic large N expansions yield asymptotic series in powers of 1/N^2. The present work shows how to generically make such expansions meaningful via their completion into resurgent transseries, encoding both perturbative and nonperturbative data. Large N resurgent transseries compute gauge-theoretic finite N results nonperturbatively (no matter how small N is). Explicit calculations are carried out within the gauge theory prototypical example of the quartic matrix model. Due to integrability in the matrix model, it is possible to analytically compute (fixed integer) finite N results. At the same time, the large N resurgent transseries for the free energy of this model was recently constructed. Together, it is shown how the resummation of the large N resurgent transseries matches the analytical finite N results up to remarkable numerical accuracy. Due to lack of Borel summability, Stokes phenomena have to be carefully taken into account, implying that instantons play a dominant role in describing the finite N physics. The final resurgence results can be analytically continued, defining gauge theory for any complex value of N.
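
    As a reminder of what a one-parameter resurgent transseries looks like in this context (schematic form only, assumed here for illustration; the paper's construction for the quartic matrix model carries additional structure):

      % Schematic transseries for the free energy: A is the instanton action,
      % sigma the transseries parameter fixed by Stokes data.
      \[
        F(N;\sigma) \;=\; \sum_{n \ge 0} \sigma^{n}\, e^{-\,n A N}\, \Phi_{n}(1/N),
        \qquad
        \Phi_{0}(1/N) \;=\; \sum_{g \ge 0} F_{g}\, N^{\,2-2g},
      \]
      % where the n = 0 sector is the usual perturbative expansion in powers of
      % 1/N^2 and the n >= 1 (instanton) sectors carry the nonperturbative data
      % needed to resum the series at finite N.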

  4. Large astronomical catalog management for telescope operations

    NASA Astrophysics Data System (ADS)

    Baruffolo, Andrea; Benacchio, Leopoldo

    1998-07-01

    Large astronomical catalogues containing from a million up to hundreds of millions of records are currently available, and even larger catalogues will be released in the near future. They will have an important operational role, since they will be used throughout the observing cycle of the next generation of large telescopes: for proposal and observation preparation, telescope scheduling, selection of guide stars, etc. These large databases pose new problems for fast and general access. Solutions based on custom software or on customized versions of specific catalogues have been proposed, but the problem will benefit from a more general database approach. While traditional database technologies have proven to be inadequate for this task, new technologies are emerging, in particular Object Relational DBMSs, that seem suitable for solving the problem. In this paper we describe our experiences in experimenting with ORDBMSs for the management of large astronomical catalogues. We worked especially on the database query language and on access methods: in the first area, to extend the query language with astronomical functionalities and support typical astronomical queries; in the second, to speed up the execution of queries containing astronomical predicates.
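
    To make the notion of an "astronomical predicate" concrete, the sketch below (Python, with invented names; it is not the authors' code, and a real ORDBMS extension would evaluate such a test server-side against a sky index rather than in application code) shows the kind of cone-search predicate such a query-language extension would provide:

      import math

      def angular_separation_deg(ra1, dec1, ra2, dec2):
          # Great-circle separation in degrees between two sky positions given
          # in degrees (spherical law of cosines; adequate for a sketch).
          ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
          c = (math.sin(dec1) * math.sin(dec2)
               + math.cos(dec1) * math.cos(dec2) * math.cos(ra1 - ra2))
          return math.degrees(math.acos(max(-1.0, min(1.0, c))))

      def cone_search(rows, ra0, dec0, radius_deg):
          # Toy cone-search predicate: keep catalogue rows (dicts with 'ra' and
          # 'dec' in degrees) lying within radius_deg of the centre (ra0, dec0).
          return [r for r in rows
                  if angular_separation_deg(r["ra"], r["dec"], ra0, dec0) <= radius_deg]

      # Example: one source inside and one outside a 0.5-degree search cone.
      catalogue = [{"id": 1, "ra": 10.68, "dec": 41.27},
                   {"id": 2, "ra": 30.00, "dec": -5.00}]
      print(cone_search(catalogue, 10.7, 41.3, 0.5))  # keeps only id 1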

  5. Large area space solar cell assemblies

    NASA Technical Reports Server (NTRS)

    Spitzer, M. B.; Nowlan, M. J.

    1982-01-01

    Development of a large area space solar cell assembly is presented. The assembly consists of an ion implanted silicon cell and glass cover. The important attributes of fabrication are (1) use of a back surface field which is compatible with a back surface reflector, and (2) integration of coverglass application and cell fabrication.

  6. Improving Interactions in the Large Language Class.

    ERIC Educational Resources Information Center

    Raymond, Patricia M.; Raymond, Jacques; Pilon, Daniel

    1998-01-01

    Describes a prototypical microcomputer system that improves the interactions between teacher and large language classes in a traditional language classroom setting. This system achieves dynamic interactions through multiple student/professor interventions, immediate and delayed feedback, and individual teacher/student conferences. The system uses…

  7. Large Deviations: Advanced Probability for Undergrads

    ERIC Educational Resources Information Center

    Rolls, David A.

    2007-01-01

    In the branch of probability called "large deviations," rates of convergence (e.g. of the sample mean) are considered. The theory makes use of the moment generating function. So, particularly for sums of independent and identically distributed random variables, the theory can be made accessible to senior undergraduates after a first course in…

  8. The Large Crater Origin of SNC Meteorites.

    PubMed

    Vickery, A M; Melosh, H J

    1987-08-14

    A large body of evidence strongly suggests that the shergottite, nakhlite, and Chassigny (SNC) meteorites are from Mars. Various mechanisms for the ejection of large rocks at martian escape velocity (5 kilometers per second) have been investigated, but none has proved wholly satisfactory. This article examines a number of possible ejection and cosmic-ray exposure histories to determine which is most plausible. For each possible history, the Melosh spallation model is used to estimate the size of the crater required to produce ejecta fragments of the required size with velocities ≥5 kilometers per second and to produce a total mass of solid ejecta consistent with the observed mass flux of SNC meteorites. Estimates of crater production rates on Mars are then used to evaluate the probability that sufficiently large craters have formed during the available time. The results indicate that the SNC meteorites were probably ejected from a very large crater (> 100 kilometers in diameter) about 200 million years ago, and that cosmic-ray exposure of the recovered meteorites was initiated after collisional fragmentation of the original ejecta in space at much later times (0.5 to 10 million years ago). PMID:17751563

  9. Microwave performance characterization of large space antennas

    NASA Technical Reports Server (NTRS)

    Bathker, D. A. (Editor)

    1977-01-01

    Performance capabilities of large microwave space antenna configurations with apertures generally from 100 wavelengths upwards are discussed. Types of antennas considered include: phased arrays, lenses, reflectors, and hybrid combinations of phased arrays with reflectors or lenses. The performance characteristics of these broad classes of antennas are examined and compared in terms of applications.

  10. Densifying forest biomass into large round bales

    SciTech Connect

    Fridley, J.L.; Burkhardt, T.H.

    1984-01-01

    A large round-bale hay baler was modified to handle forest biomass. Material baled, feed orientation, and baler belt tension were varied to observe their effects on the baling process and bale density. Torque and power required to drive the baler were measured. 12 references.

  11. The repetition of large-earthquake ruptures.

    PubMed Central

    Sieh, K

    1996-01-01

    This survey of well-documented repeated fault rupture confirms that some faults have exhibited a "characteristic" behavior during repeated large earthquakes--that is, the magnitude, distribution, and style of slip on the fault has repeated during two or more consecutive events. In two cases faults exhibit slip functions that vary little from earthquake to earthquake. In one other well-documented case, however, fault lengths contrast markedly for two consecutive ruptures, but the amount of offset at individual sites was similar. Adjacent individual patches, 10 km or more in length, failed singly during one event and in tandem during the other. More complex cases of repetition may also represent the failure of several distinct patches. The faults of the 1992 Landers earthquake provide an instructive example of such complexity. Together, these examples suggest that large earthquakes commonly result from the failure of one or more patches, each characterized by a slip function that is roughly invariant through consecutive earthquake cycles. The persistence of these slip-patches through two or more large earthquakes indicates that some quasi-invariant physical property controls the pattern and magnitude of slip. These data seem incompatible with theoretical models that produce slip distributions that are highly variable in consecutive large events. PMID:11607662

  12. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency managed a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  13. Large-Scale Organizational Performance Improvement.

    ERIC Educational Resources Information Center

    Pilotto, Rudy; Young, Jonathan O'Donnell

    1999-01-01

    Describes the steps involved in a performance improvement program in the context of a large multinational corporation. Highlights include a training program for managers that explained performance improvement; performance matrices; divisionwide implementation, including strategic planning; organizationwide training of all personnel; and the…

  14. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  15. The Large Area Crop Inventory Experiment (LACIE)

    NASA Technical Reports Server (NTRS)

    Macdonald, R. B.

    1976-01-01

    A Large Area Crop Inventory Experiment (LACIE) was undertaken to prove out an economically important application of remote sensing from space. The experiment focused upon determination of wheat acreages in the U.S. Great Plains and upon the development and testing of yield models. The results and conclusions are presented.

  16. Mass extinctions caused by large bolide impacts

    SciTech Connect

    Alvarez, L.W.

    1987-07-01

    Evidence indicates that the collision of Earth with a large piece of Solar System debris, such as a meteoroid, asteroid, or comet, caused the great extinctions of 65 million years ago, leading to the transition from the age of the dinosaurs to the age of the mammals.

  17. Employment of the Disabled in Large Corporations.

    ERIC Educational Resources Information Center

    Rabby, Rami

    1983-01-01

    Large corporations are in a unique position to employ the disabled, but they sometimes lack the motivation to do so. The author discusses elements of a corporate policy for the disabled, ways of formulating and disseminating it, assignment of responsibility, changes in management attitudes, and the special case of the multinational company.…

  18. Large size space construction for space exploitation

    NASA Astrophysics Data System (ADS)

    Kondyurin, Alexey

    2016-07-01

    Space exploitation is impossible without large space structures. We need to build sufficiently large volumes of pressurized protective frames for crew, passengers, space processing equipment, and so on, and should not be limited in size once in space. At present, the size and mass of space structures are limited by the capacity of the launch vehicle, which constrains the future of human space exploitation and the development of a space industry. Large space structures can be made using curing technology for fiber-filled composites with a reactive matrix, applied directly in free space. For curing, a fabric impregnated with a liquid matrix (prepreg) is prepared under terrestrial conditions and shipped in a container to orbit. In due time the prepreg is unfolded by inflation. After the polymerization reaction, the durable structure can be fitted out with air, apparatus, and life support systems. Our experimental studies of the curing processes in a simulated free space environment showed that curing of the composite in free space is possible, so large space structures can be developed. Projects for a space station, Moon base, Mars base, mining station, interplanetary spaceship, telecommunication station, space observatory, space factory, antenna dish, radiation shield, and solar sail are proposed and reviewed. The study was supported by the Humboldt Foundation, ESA (contract 17083/03/NL/SFe), the NASA stratospheric balloon program, and RFBR grants (05-08-18277, 12-08-00970 and 14-08-96011).

  19. Technology for large tandem mirror experiments

    SciTech Connect

    Thomassen, K.I.

    1980-09-04

    Construction of a large tandem mirror (MFTF-B) will soon begin at Lawrence Livermore National Laboratory (LLNL). Designed to reach break-even plasma conditions, the facility will significantly advance the physics and technology of magnetic-mirror-based fusion reactors. This paper describes the objectives and the design of the facility.

  20. Large communications platforms versus smaller satellites

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Communications systems using large platforms are compared with systems using conventional satellites. System models were generated and compared for a U.S. domestic application and for INTELSAT's international and domestic transponder lease application. Technology advances were assumed for both the platforms and the evolution of conventional satellites.

  1. Collaboration within Large Groups in the Classroom

    ERIC Educational Resources Information Center

    Szewkis, Eyal; Nussbaum, Miguel; Rosen, Tal; Abalos, Jose; Denardin, Fernanda; Caballero, Daniela; Tagle, Arturo; Alcoholado, Cristian

    2011-01-01

    The purpose of this paper is to show how a large group of students can work collaboratively in a synchronous way within the classroom using the cheapest possible technological support. Making use of the features of Single Display Groupware and of Multiple Mice we propose a computer-supported collaborative learning approach for big groups within…

  2. Kids and Chemistry: Large Event Guide.

    ERIC Educational Resources Information Center

    Tinnesand, Michael

    This guide is intended to provide Kids and Chemistry (K&C) with a variety of age-appropriate, fun, and safe demonstrations. It features information on planning a large event and includes safety guidelines. Several activities are included under each major topic. Topics include: (1) Acids and Bases; (2) Unsigned; (3) Kool Tie-Dye; (4) Secret…

  3. Modal Vibration Analysis of Large Castings

    NASA Technical Reports Server (NTRS)

    Werlink, Rudolph J.; Margasahayam, Ravi N.

    2009-01-01

    The art of experimental modal vibration analysis (MVA) has been extended to apply to large castings. This extension was made to enable the use of experimental MVA as a relatively inexpensive, simple means of assessing the internal structural integrity of tread shoes of the crawler transporters used to move spacecraft to the launch pad at Kennedy Space Center. Each tread shoe is made from cast iron and weighs about a ton (a mass of roughly 907 kg). The present extended version of experimental MVA could also be applied to other large castings. It could be especially useful to manufacturers as a means of rapidly discriminating against large castings that contain unacceptably large concentrations of internal defects. The use of experimental MVA to assess structural integrity is not new. What is new here are those aspects of the extension of experimental MVA that pertain to objects so massive that it may not be practical or cost effective to mount them in special test fixtures imposing special test boundary conditions, so that they must instead be tested in place under normal conditions of use.

  4. Large-Eddy Simulation and Multigrid Methods

    SciTech Connect

    Falgout,R D; Naegle,S; Wittum,G

    2001-06-18

    A method to simulate turbulent flows with Large-Eddy Simulation on unstructured grids is presented. Two kinds of dynamic models are used to model the unresolved scales of motion and are compared with each other on different grids. This shows the behavior of the models; in addition, adaptive grid refinement is investigated, and parallelization aspects are addressed.

  5. Design evolution of large wind turbine generators

    NASA Technical Reports Server (NTRS)

    Spera, D. A.

    1979-01-01

    During the past five years, the goals of economy and reliability have led to a significant evolution in the basic design--both external and internal--of large wind turbine systems. To show the scope and nature of recent changes in wind turbine designs, development of three types are described: (1) system configuration developments; (2) computer code developments; and (3) blade technology developments.

  6. Pronunciation Modeling for Large Vocabulary Speech Recognition

    ERIC Educational Resources Information Center

    Kantor, Arthur

    2010-01-01

    The large pronunciation variability of words in conversational speech is one of the major causes of low accuracy in automatic speech recognition (ASR). Many pronunciation modeling approaches have been developed to address this problem. Some explicitly manipulate the pronunciation dictionary as well as the set of the units used to define the…

  7. Computerized Torque Control for Large dc Motors

    NASA Technical Reports Server (NTRS)

    Willett, Richard M.; Carroll, Michael J.; Geiger, Ronald V.

    1987-01-01

    Speed and torque ranges in generator mode extended. System of shunt resistors, electronic switches, and pulse-width modulation controls torque exerted by large, three-phase, electronically commutated dc motor. Particularly useful for motor operating in generator mode because it extends operating range to low torque and high speed.

  8. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-07-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  9. Large-scale CFB combustion demonstration project

    SciTech Connect

    Nielsen, P.T.; Hebb, J.L.; Aquino, R.

    1998-04-01

    The Jacksonville Electric Authority's large-scale CFB demonstration project is described. Given the early stage of project development, the paper focuses on the project organizational structure, its role within the Department of Energy's Clean Coal Technology Demonstration Program, and the projected environmental performance. A description of the CFB combustion process is included.

  10. Small and large number discrimination in guppies.

    PubMed

    Piffer, Laura; Agrillo, Christian; Hyde, Daniel C

    2012-03-01

    Non-verbal numerical behavior in human infants, human adults, and non-human primates appears to be rooted in two distinct mechanisms: a precise system for tracking and comparing small numbers of items simultaneously (up to 3 or 4 items) and an approximate system for estimating the numerical magnitude of a group of objects. The most striking evidence that these two mechanisms are distinct comes from the apparent inability of young human infants and non-human primates to compare quantities across the small (<3 or 4)/large (>4) number boundary. We ask whether this distinction is present in an animal species more distantly related to humans, the guppy (Poecilia reticulata). We found that, like human infants and non-human primates, fish succeed at comparisons between large numbers only (5 vs. 10), succeed at comparisons between small numbers only (3 vs. 4), but systematically fail at comparisons that closely span the small/large boundary (3 vs. 5). Furthermore, increasing the distance between the small and large number resulted in successful discriminations (3 vs. 6, 3 vs. 7, and 3 vs. 9). This pattern of successes and failures is similar to those observed in human infants and non-human primates, suggesting that the two systems are present and functionally distinct across a wide variety of animal species. PMID:21909934

  11. Modeling needs for very large systems.

    SciTech Connect

    Stein, Joshua S.

    2010-10-01

    Most system performance models assume a point measurement for irradiance and that, except for the impact of shading from nearby obstacles, incident irradiance is uniform across the array. Module temperature is also assumed to be uniform across the array. For small arrays and hourly-averaged simulations, this may be a reasonable assumption. Stein is conducting research to characterize variability in large systems and to develop models that can better accommodate large system factors. In large, multi-MW arrays, passing clouds may block sunlight from a portion of the array but never affect another portion. Figure 22 shows that two irradiance measurements at opposite ends of a multi-MW PV plant appear to have similar irradiance (left), but in fact the irradiance is not always the same (right). Module temperature may also vary across the array, with modules on the edges being cooler because they have greater wind exposure. Large arrays will also have long wire runs and will be subject to associated losses. Soiling patterns may also vary, with modules closer to the source of soiling, such as an agricultural field, receiving more dust load. One of the primary concerns associated with this effort is how to work with integrators to gain access to better and more comprehensive data for model development and validation.

  12. Understanding Student Performance in a Large Class

    ERIC Educational Resources Information Center

    Snowball, Jen D.; Boughey, Chrissie

    2012-01-01

    Across the world, university teachers are increasingly being required to engage with diversity in the classes they teach. Using data from a large Economics 1 class at a South African university, this paper attempts to understand the effects of diversity on chances of success and how assessment can impact on this. By demonstrating how theory can be…

  13. Unification and large-scale structure.

    PubMed Central

    Laing, R A

    1995-01-01

    The hypothesis of relativistic flow on parsec scales, coupled with the symmetrical (and therefore subrelativistic) outer structure of extended radio sources, requires that jets decelerate on scales observable with the Very Large Array. The consequences of this idea for the appearances of FRI and FRII radio sources are explored. PMID:11607609

  14. Responses of large mammals to climate change.

    PubMed

    Hetem, Robyn S; Fuller, Andrea; Maloney, Shane K; Mitchell, Duncan

    2014-01-01

    Most large terrestrial mammals, including the charismatic species so important for ecotourism, do not have the luxury of rapid micro-evolution or sufficient range shifts as strategies for adjusting to climate change. The rate of climate change is too fast for genetic adaptation to occur in mammals with longevities of decades, typical of large mammals, and landscape fragmentation and population by humans too widespread to allow spontaneous range shifts of large mammals, leaving only the expression of latent phenotypic plasticity to counter effects of climate change. The expression of phenotypic plasticity includes anatomical variation within the same species, changes in phenology, and employment of intrinsic physiological and behavioral capacity that can buffer an animal against the effects of climate change. Whether that buffer will be realized is unknown, because little is known about the efficacy of the expression of plasticity, particularly for large mammals. Future research in climate change biology requires measurement of physiological characteristics of many identified free-living individual animals for long periods, probably decades, to allow us to detect whether expression of phenotypic plasticity will be sufficient to cope with climate change. PMID:27583293

  15. Sterilization for Large Volunteer Temporary Clinics.

    PubMed

    Cuny, Eve

    2015-12-01

    Large portable clinics staffed by volunteers present many unique challenges, including establishing appropriate instrument processing services. This article explores many of the specific steps an organization can take to ensure a safe care environment for patients and a safe working environment for volunteers. PMID:26819989

  16. Computational scalability of large size image dissemination

    NASA Astrophysics Data System (ADS)

    Kooper, Rob; Bajcsy, Peter

    2011-01-01

    We have investigated the computational scalability of the image pyramid building needed for dissemination of very large image data. The sources of large images include high resolution microscopes and telescopes, remote sensing and airborne imaging, and high resolution scanners. The term 'large' is understood from a user perspective: larger than a display, or larger than the memory/disk available to hold the image data. The application drivers for our work are digitization projects such as the Lincoln Papers project (each image scan is about 100-150 MB, or about 5000x8000 pixels, with the total number of scans around 200,000) and the UIUC library scanning project for historical maps from the 17th and 18th centuries (a smaller number of larger images). The goal of our work is to understand the computational scalability of web-based dissemination using image pyramids for these large image scans, as well as the preservation aspects of the data. We report our computational benchmarks for (a) building image pyramids to be disseminated using the Microsoft Seadragon library, (b) a computation execution approach using hyper-threading to generate image pyramids and utilize the underlying hardware, and (c) an image pyramid preservation approach using various hard drive configurations of Redundant Array of Independent Disks (RAID) drives for input/output operations. The benchmarks were obtained with a map (334.61 MB, JPEG format, 17591x15014 pixels). The discussion combines the speed and preservation objectives.
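
    For readers unfamiliar with the idea, building an image pyramid amounts to storing successively downsampled copies of a scan so that a viewer fetches only the resolution it needs. The sketch below (Python with the Pillow library; the file names are hypothetical, and the real Seadragon/Deep Zoom format additionally tiles each level) illustrates the core loop:

      from PIL import Image  # Pillow

      def build_pyramid(path, out_prefix, min_size=256):
          # Write successively halved copies of a large scan until the longest
          # side drops below min_size. Each level holds about 1/4 the pixels of
          # the level above, so the extra storage is bounded by ~1/3 of the original.
          img = Image.open(path)
          level = 0
          while max(img.size) >= min_size:
              img.save(f"{out_prefix}_level{level}.jpg", quality=90)
              img = img.resize((max(1, img.width // 2), max(1, img.height // 2)))
              level += 1
          return level  # number of levels written

      # Hypothetical usage for one of the scans described above:
      # build_pyramid("lincoln_scan_0001.jpg", "lincoln_scan_0001_pyr")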

  17. Colonic stenting in malignant large bowel obstruction.

    PubMed

    Rajadurai, Vinita A; Levitt, Michael

    2016-06-01

    In patients who are surgical candidates, colonic stenting is beneficial for preoperative decompression in large bowel obstruction, as it can convert a surgical procedure from an emergent two-step approach into an elective one-step resection with a primary anastomosis. PMID:27398210

  18. Teaching Extra-Large Foreign Language Classes.

    ERIC Educational Resources Information Center

    Giauque, Gerald S.

    High quality instruction can be achieved in a foreign language classroom even though the class may be large by traditional standards, with as many as 60 students. Attitudes, class structure, classroom activities, and the teacher's role all play a part in this process in such classes. A positive attitude and enthusiasm on the teacher's part are…

  19. Analysis of large soil samples for actinides

    DOEpatents

    Maxwell, III; Sherrod L.

    2009-03-24

    A method of analyzing relatively large soil samples for actinides employs a separation process that uses cerium fluoride precipitation to remove the soil matrix and to precipitate plutonium, americium, and curium with cerium and hydrofluoric acid, followed by separation of these actinides using chromatography cartridges.

  20. Employment and Large Cities: Problems and Outlook.

    ERIC Educational Resources Information Center

    Bairoch, Paul

    1982-01-01

    This article traces the history of the emergence of large cities and examines the outlook for the future. It then answers questions about the effects of city size on general living conditions and on the various aspects of employment and the ways in which it might develop. (CT)

  1. Black Holes and the Large Hadron Collider

    ERIC Educational Resources Information Center

    Roy, Arunava

    2011-01-01

    The European Center for Nuclear Research or CERN's Large Hadron Collider (LHC) has caught our attention partly due to the film "Angels and Demons." In the movie, an antimatter bomb attack on the Vatican is foiled by the protagonist. Perhaps just as controversial is the formation of mini black holes (BHs). Recently, the American Physical Society…

  2. Aerodynamic beam generator for large particles

    DOEpatents

    Brockmann, John E.; Torczynski, John R.; Dykhuizen, Ronald C.; Neiser, Richard A.; Smith, Mark F.

    2002-01-01

    A new type of aerodynamic particle beam generator is disclosed. This generator produces a tightly focused beam of large material particles at velocities ranging from a few feet per second to supersonic speeds, depending on the exact configuration and operating conditions. Such generators are of particular interest for use in additive fabrication techniques.

  3. Conservative management of a large maxillary cyst.

    PubMed

    Rees, J S

    1997-01-01

    This article describes the treatment of a large maxillary cyst by root canal treatment and decompression using a hollow drain made from surgical suction tubing. The rationale behind the use of this technique is reviewed and its advantages highlighted. PMID:9477796

  4. Facing software complexity on large telescopes

    NASA Astrophysics Data System (ADS)

    Swart, Gerhard; Bester, Deon; Brink, Janus; Gumede, Clifford; Schalekamp, Hendrik J.

    2004-09-01

    The successful development of any complex control system requires a blend of good software management, an appropriate computer architecture and good software engineering. Due to the large number of controlled parts, high performance goals and required operational efficiency, the control systems for large telescopes are particularly challenging to develop and maintain. In this paper the authors highlight some of the specific challenges that need to be met by control system developers to meet the requirements within a limited budget and schedule. They share some of the practices applied during the development of the Southern African Large Telescope (SALT) and describe specific aspects of the design that contribute to meeting these challenges. The topics discussed include: development methodology, defining the level of system integration, computer architecture, interface management, software standards, language selection, user interface design and personnel selection. Time will reveal the full truth, but the authors believe that the significant progress achieved in commissioning SALT (now 6 months from telescope completion) can largely be attributed to the combined application of these practices and design concepts.

  5. Linking Large-Scale Reading Assessments: Comment

    ERIC Educational Resources Information Center

    Hanushek, Eric A.

    2016-01-01

    E. A. Hanushek points out in this commentary that applied researchers in education have only recently begun to appreciate the value of international assessments, even though there are now 50 years of experience with these. Until recently, these assessments have been stand-alone surveys that have not been linked, and analysis has largely focused on…

  6. Control problems in very large accelerators

    SciTech Connect

    Crowley-Milling, M.C.

    1985-06-01

    There is no fundamental difference of kind in the control requirements between a small and a large accelerator since they are built of the same types of components, which individually have similar control inputs and outputs. The main difference is one of scale; the large machine has many more components of each type, and the distances involved are much greater. Both of these factors must be taken into account in determining the optimum way of carrying out the control functions. Small machines should use standard equipment and software for control as much as possible, as special developments for small quantities cannot normally be justified if all costs are taken into account. On the other hand, the very great number of devices needed for a large machine means that, if special developments can result in simplification, they may make possible an appreciable reduction in the control equipment costs. It is the purpose of this report to look at the special control problems of large accelerators, which the author shall arbitrarily define as those with a length of circumference in excess of 10 km, and point out where special developments, or the adoption of developments from outside the accelerator control field, can be of assistance in minimizing the cost of the control system. Most of the first part of this report was presented as a paper to the 1985 Particle Accelerator Conference. It has now been extended to include a discussion on the special case of the controls for the SSC.

  7. Camera Systems Rapidly Scan Large Structures

    NASA Technical Reports Server (NTRS)

    2013-01-01

    Needing a method to quickly scan large structures like an aircraft wing, Langley Research Center developed the line scanning thermography (LST) system. LST works in tandem with a moving infrared camera to capture how a material responds to changes in temperature. Princeton Junction, New Jersey-based MISTRAS Group Inc. now licenses the technology and uses it in power stations and industrial plants.

  8. Large Format Multicolor QWIP Focal Plane Arrays

    NASA Technical Reports Server (NTRS)

    Soibel, A.; Gunapala, S. D.; Bandara, S. V.; Liu, J. K.; Mumolo, J. M.; Ting, D. Z.; Hill, C. J.; Nguyen, J.

    2009-01-01

    Mid-wave infrared (MWIR) and long-wave infrared (LWIR) multicolor focal plane array (FPA) cameras are essential for many DoD and NASA applications including Earth and planetary remote sensing. In this paper we summarize our recent development of large format multicolor QWIP FPA that cover MWIR and LWIR bands.

  9. Global Alignment System for Large Genomic Sequencing

    Energy Science and Technology Software Center (ESTSC)

    2002-03-01

    AVID is a global alignment system tailored for the alignment of large genomic sequences up to megabases in length. Features include the possibility of one sequence being in draft form, fast alignment, robustness and accuracy. The method is an anchor based alignment using maximal matches derived from suffix trees.

  10. Forecasting distribution of numbers of large fires

    USGS Publications Warehouse

    Eidenshink, Jeffery C.; Preisler, Haiganoush K.; Howard, Stephen; Burgan, Robert E.

    2014-01-01

    Systems to estimate forest fire potential commonly utilize one or more indexes that relate to expected fire behavior; however, they indicate neither the chance that a large fire will occur nor the expected number of large fires. That is, they do not quantify the probabilistic nature of fire danger. In this work we use large fire occurrence information from the Monitoring Trends in Burn Severity project, and satellite and surface observations of fuel conditions in the form of the Fire Potential Index, to estimate two aspects of fire danger: 1) the probability that a 1-acre ignition will result in a 100+ acre fire, and 2) the probabilities of having at least 1, 2, 3, or 4 large fires within a Predictive Services Area in the forthcoming week. These statistical processes are the main thrust of the paper and are used to produce two daily national forecasts that are available from the U.S. Geological Survey, Earth Resources Observation and Science Center and via the Wildland Fire Assessment System. A validation study of our forecasts for the 2013 fire season demonstrated good agreement between observed and forecasted values.
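
    The paper's statistical model is not reproduced here, but the "at least 1, 2, 3, or 4 large fires" probabilities can be illustrated with a simple count model. The Python sketch below assumes a Poisson weekly count (an assumption made only for this illustration, not necessarily the authors' model):

      import math

      def prob_at_least_k(expected_fires, k):
          # P(N >= k) for a Poisson-distributed weekly count of large fires
          # with mean expected_fires (a modeling assumption of this sketch).
          p_fewer = sum(math.exp(-expected_fires) * expected_fires**j / math.factorial(j)
                        for j in range(k))
          return 1.0 - p_fewer

      # Example: a Predictive Services Area expecting 1.5 large fires this week.
      for k in (1, 2, 3, 4):
          print(k, round(prob_at_least_k(1.5, k), 3))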

  11. Embodied largeness: a significant women's health issue.

    PubMed

    Carryer, J

    2001-06-01

    This paper describes a three-year long research project in which nine large-bodied women have engaged in a prolonged dialogue with the researcher about the experience of being 'obese'. The study involved an extensive review of the multidisciplinary literature that informs our understandings of body size. The literature review was shared with participants in order to support their critical understanding of their experience. An examination of a wide range of literature pertinent to the area of study reveals widespread acceptance of the notion that to be thin is to be healthy and virtuous, and to be fat is to be unhealthy and morally deficient. The experience of participants raised questions as to how nursing could best provide health-care for large women. According to the literature review, nurses have perpetuated an unhelpful and reductionist approach to their care of large women, in direct contradiction to nursing's supposed allegiance to a holistic approach to health-care. This paper suggests strategies for an improved response to women who are concerned about their large body size. PMID:11882207

  12. Large NMR signals and polarization asymmetries.

    SciTech Connect

    Penttila, S. I.

    1998-11-25

    A large modulation in the series Q-meter can lead to nonlinear NMR signals and asymmetric polarization values. With a careful circuit analysis the nonlinearity can be estimated and corrections to polarization can be determined as a function of the strength of the modulation. We describe the recent LAMPF polarized proton target experiment, its NMR measurement and corrections to the measured polarizations.

  13. Very large radio surveys of the sky.

    PubMed

    Condon, J J

    1999-04-27

    Recent advances in electronics and computing have made possible a new generation of large radio surveys of the sky that yield an order-of-magnitude higher sensitivity and positional accuracy. Combined with the unique properties of the radio universe, these quantitative improvements open up qualitatively different and exciting new scientific applications of radio surveys. PMID:10220365

  14. Very large radio surveys of the sky

    PubMed Central

    Condon, J. J.

    1999-01-01

    Recent advances in electronics and computing have made possible a new generation of large radio surveys of the sky that yield an order-of-magnitude higher sensitivity and positional accuracy. Combined with the unique properties of the radio universe, these quantitative improvements open up qualitatively different and exciting new scientific applications of radio surveys. PMID:10220365

  15. Hayward fault: Large earthquakes versus surface creep

    USGS Publications Warehouse

    Lienkaemper, James J.; Borchardt, Glenn

    1992-01-01

    The Hayward fault, thought to be a likely source of large earthquakes in the next few decades, has generated two large historic earthquakes (about magnitude 7), one in 1836 and another in 1868. We know little about the 1836 event, but the 1868 event had a surface rupture extending 41 km along the southern Hayward fault. Right-lateral surface slip occurred in 1868 but was not well measured; witness accounts suggest coseismic right slip and afterslip of under a meter. We measured the spatial variation of the historic creep rate along the Hayward fault, deriving rates mainly from surveys of offset cultural features (curbs, fences, and buildings). Creep occurs along at least 69 km of the fault's 82-km length (13 km is underwater). The creep rate seems nearly constant over many decades, with short-term variations, and mostly ranges from 3.5 to 6.5 mm/yr, varying systematically along strike. The fastest creep is along a 4-km section near the south end. Here creep has been about 9 mm/yr since 1921, and possibly since the 1868 event, as indicated by offset railroad track rebuilt in 1869. This 9 mm/yr slip rate may approach the long-term or deep slip rate related to the strain buildup that produces large earthquakes, a hypothesis supported by geologic studies (Lienkaemper and Borchardt, 1992). If so, the potential for slip in large earthquakes, which originate below the surficial creeping zone, may now be 1.1 m along the southern (1868) segment and ≥1.4 m along the northern (1836?) segment. Subtracting surface creep rates from a long-term slip rate of 9 mm/yr gives a present potential for surface slip in large earthquakes of up to 0.8 m. Our earthquake potential model, which accounts for historic creep rate, microseismicity distribution, and geodetic data, suggests that enough strain may now be available for large-magnitude earthquakes (magnitude 6.8 in the northern (1836?) segment, 6.7 in the southern (1868) segment, and 7.0 for both). Thus despite surficial creep, the fault may be
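
    As a back-of-envelope check of those slip-potential figures (our arithmetic, using the elapsed times from the historic ruptures to this 1992 study):

      % Accumulated deep-slip potential at the inferred 9 mm/yr deep rate:
      \[
        9\ \mathrm{mm/yr} \times (1992 - 1868)\ \mathrm{yr} \approx 1.1\ \mathrm{m}
        \quad \text{(southern segment)}, \qquad
        9\ \mathrm{mm/yr} \times (1992 - 1836)\ \mathrm{yr} \approx 1.4\ \mathrm{m}
        \quad \text{(northern segment)}.
      \]
      % The quoted surface-slip potential is the same product with the observed
      % surface creep rate (3.5--6.5 mm/yr) subtracted from the 9 mm/yr deep rate.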

  16. Solar Rejection Filter for Large Telescopes

    NASA Technical Reports Server (NTRS)

    Hemmati, Hamid; Lesh, James

    2009-01-01

    To reject solar radiation photons at the front aperture of large telescopes, a mosaic of large transmission-mode filters is placed in front of the telescope or at the aperture of the dome. Filtering options for effective rejection of sunlight include a smaller filter down-path near the focus of the telescope, and a large-diameter filter located in front of the main aperture. Two types of large filters are viable: reflectance mode and transmittance mode. In the case of reflectance mode, a dielectric coating on a suitable substrate (e.g., a low-thermal-expansion glass) is arranged to reflect only a single, narrow wavelength and to efficiently transmit all other wavelengths. Such coatings are commonly referred to as notch filters. In this case, the large mirror located in front of the telescope aperture reflects the received (signal and background) light into the telescope. In the case of transmittance mode, a dielectric coating on a suitable substrate (glass, sapphire, clear plastic, membrane, and the like) is arranged to transmit only a single wavelength and to reject all other wavelengths (visible and near IR) of light. The substrate of the large filter will determine its mass. At first glance, a large optical filter with a diameter of up to 10 m, located in front of the main aperture, would require a significant thickness to avoid sagging. However, a structurally rugged grid can support a segmented mosaic of smaller filters. The obscuration introduced by the grid is minimal because its total area can be made insignificant. This configuration can be detrimental to a diffraction-limited telescope due to diffraction effects at the edges of each sub-panel. However, no discernible degradation would result for a 20 diffraction-limit telescope (a photon bucket). Even the small amount of sagging in each subpanel should have minimal effect on the performance of a non-diffraction-limited telescope because the part has no appreciable optical power. If the

  17. Large Payload Ground Transportation and Test Considerations

    NASA Technical Reports Server (NTRS)

    Rucker, Michelle A.

    2016-01-01

    During test and verification planning for the Altair lunar lander project, a National Aeronautics and Space Administration (NASA) study team identified several ground transportation and test issues related to the large payload diameter. Although the entire Constellation Program, including Altair, has since been canceled, issues identified by the Altair project serve as important lessons learned for payloads greater than 7 m in diameter being considered for NASA's new Space Launch System (SLS). A transportation feasibility study found that Altair's 8.97 m diameter Descent Module would not fit inside available aircraft. Although the Ascent Module cabin was only 2.35 m in diameter, the long reaction control system booms extended nearly to the Descent Module diameter, making it equally unsuitable for air transportation without removing the booms and invalidating assembly workmanship screens or acceptance testing that had already been performed. Ground transportation of very large payloads over extended distances is not generally permitted by most states, so overland transportation alone would not be an option. Limited ground transportation to the nearest waterway may be possible, but water transportation could take as long as 66 days per production unit, depending on point of origin and acceptance test facility; transportation from the western United States would require transit through the Panama Canal to access the Kennedy Space Center launch site. Large payloads also pose acceptance test and ground processing challenges. Although propulsion, mechanical vibration, and reverberant acoustic test facilities at NASA's Plum Brook Station have been designed to accommodate large spacecraft, special handling and test work-arounds may be necessary, which could increase cost, schedule, and technical risk. Once at the launch site, there are no facilities currently capable of accommodating the combination of large payload size and hazardous processing such as hypergolic fuels

  18. From the Law of Large Numbers to Large Deviation Theory in Statistical Physics: An Introduction

    NASA Astrophysics Data System (ADS)

    Cecconi, Fabio; Cencini, Massimo; Puglisi, Andrea; Vergni, Davide; Vulpiani, Angelo

    This contribution aims at introducing the topics of this book. We start with a brief historical excursion on the developments from the law of large numbers to the central limit theorem and large deviations theory. The same topics are then presented using the language of probability theory. Finally, some applications of large deviations theory in physics are briefly discussed through examples taken from statistical mechanics, dynamical and disordered systems.
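
    As a pointer to the mathematics surveyed, the three results named above can be stated for i.i.d. random variables X_1, ..., X_n with mean μ and variance σ² as follows; this is a standard textbook summary, not an excerpt from the contribution itself.

    ```latex
    \frac{1}{n}\sum_{i=1}^{n} X_i \;\xrightarrow[n\to\infty]{}\; \mu
    \qquad\text{(law of large numbers)}

    \sqrt{n}\Bigl(\tfrac{1}{n}\sum_{i=1}^{n} X_i - \mu\Bigr) \;\xrightarrow{d}\; \mathcal{N}(0,\sigma^2)
    \qquad\text{(central limit theorem)}

    P\Bigl(\tfrac{1}{n}\sum_{i=1}^{n} X_i \approx x\Bigr) \asymp e^{-n I(x)},
    \qquad I(x) = \sup_{\lambda}\bigl[\lambda x - \ln \mathbb{E}\,e^{\lambda X_1}\bigr]
    \qquad\text{(large deviations, Cram\'er)}
    ```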

  19. Large ejecta fragments from asteroids. [Abstract only

    NASA Technical Reports Server (NTRS)

    Asphaug, E.

    1994-01-01

    The asteroid 4 Vesta, with its unique basaltic crust, remains a key mystery of planetary evolution. A localized olivine feature suggests excavation of subcrustal material in a crater or impact basin comparable in size to the planetary radius (R_Vesta ≈ 280 km). Furthermore, a 'clan' of small asteroids associated with Vesta (by spectral and orbital similarities) may be ejecta from this impact and direct parents of the basaltic achondrites. To escape, these smaller (about 4-7 km) asteroids had to be ejected at speeds greater than the escape velocity, v_esc ≈ 350 m/s. This evidence that large fragments were ejected at high speed from Vesta has not been reconciled with the present understanding of impact physics. Analytical spallation models predict that an impactor capable of ejecting these 'chips off Vesta' would be almost the size of Vesta! Such an impact would lead to the catastrophic disruption of both bodies. A simpler analysis is outlined, based on comparison with cratering on Mars, and it is shown that Vesta could survive an impact capable of ejecting kilometer-scale fragments at sufficient speed. To what extent does Vesta survive the formation of such a large crater? This is best addressed using a hydrocode such as SALE 2D with centroidal gravity to predict velocities subsequent to impact. The fragmentation outcome and velocities subsequent to the impact are described to demonstrate that Vesta survives without large-scale disassembly or overturning of the crust. Vesta and its clan represent a valuable dataset for testing fragmentation hydrocodes such as SALE 2D and SPH 3D at planetary scales. The resolution required to directly model spallation 'chips' on a body 100 times as large is now marginally possible on modern workstations. These boundaries are important in near-surface ejection processes and in large-scale disruption leading to asteroid families and stripped cores.

  20. Large area damage testing of optics

    SciTech Connect

    Sheehan, L.; Kozlowski, M.; Stolz, C.

    1996-04-26

    The damage threshold specifications for the National Ignition Facility will include a mixture of standard small-area tests and new large-area tests. During our studies of laser damage and conditioning processes of various materials we have found that some damage morphologies are fairly small and this damage does not grow with further illumination. This type of damage might not be detrimental to the laser performance. We should therefore assume that some damage can be allowed on the optics, but decide on a maximum allowance of damage. A new specification of damage threshold termed "functional damage threshold" was derived. Further correlation of damage size and type to system performance must be determined in order to use this measurement, but it is clear that it will be a large factor in the optics performance specifications. Large-area tests have verified that small-area testing is not always sufficient when the optic in question has defect-initiated damage. This was evident, for example, on sputtered polarizer and mirror coatings where the defect density was low enough that the features could be missed by standard small-area testing. For some materials, the scale-length at which damage non-uniformities occur will affect the comparison of small-area and large-area tests. An example of this was the sub-aperture tests on KD*P crystals on the Beamlet test station. The tests verified the large-area damage threshold to be similar to that found when testing a small area, implying that for this KD*P material the dominant damage mechanism is of sufficiently small scale-length that small-area testing is capable of determining the threshold. The Beamlet test station experiments also demonstrated the use of on-line laser conditioning to increase the crystals' damage threshold.

  1. Efficient Geological Modelling of Large AEM Surveys

    NASA Astrophysics Data System (ADS)

    Bach, Torben; Martlev Pallesen, Tom; Jørgensen, Flemming; Lundh Gulbrandsen, Mats; Mejer Hansen, Thomas

    2014-05-01

    Combining geological expert knowledge with geophysical observations into a final 3D geological model is, in most cases, not a straightforward process. It typically involves many types of data and requires both an understanding of the data and the geological target. When dealing with very large areas, such as modelling of large AEM surveys, the manual task for the geologist to correctly evaluate and properly utilise all the data available in the survey area becomes overwhelming. In the ERGO project (Efficient High-Resolution Geological Modelling) we address these issues and propose a new modelling methodology enabling fast and consistent modelling of very large areas. The vision of the project is to build a user-friendly expert system that enables the combination of very large amounts of geological and geophysical data with geological expert knowledge. This is done in an "auto-pilot" type functionality, named Smart Interpretation, designed to aid the geologist in the interpretation process. The core of the expert system is a statistical model that describes the relation between data and the interpretations made by a geological expert. This facilitates fast and consistent modelling of very large areas. It will enable the construction of models with high resolution, as the system will "learn" the geology of an area directly from interpretations made by a geological expert and instantly apply it to all hard data in the survey area, ensuring the utilisation of all the data available in the geological model. Another feature is that the statistical model the system creates for one area can be used in another area with similar data and geology. This feature can be useful as an aid to an untrained geologist building a geological model, guided by the experienced geologist's way of interpretation, as quantified by the expert system in the core statistical model. In this project presentation we provide some examples of the problems we are aiming to address in the project

  2. The paradox of large alluvial rivers (Invited)

    NASA Astrophysics Data System (ADS)

    Latrubesse, E. M.

    2010-12-01

    Large alluvial rivers exhibit large floodplains, very gentle slopes, well-sorted bed materials (generally sand), and low specific stream power, and could represent the ultimate examples of “dynamic equilibrium” in fluvial systems. However, equilibrium can be discussed at different temporal scales. Base level changes by tectonic or climatic effects, modifications in sediment and water supply, or different kinds of human impacts are the traditional causes that could trigger “disequilibrium” and changes in the longitudinal profile. Simultaneously, adjustments of longitudinal profiles were thought to evolve from downstream to upstream by several processes, the most common being receding erosion. Some authors have demonstrated that when changes in base level happen, a variety of adjustments can be reached in the lower course as a function of the available sediment and water discharge and the slope articulation between the fluvial reach and the continental shelf, among other factors, and that the adjustments can be transferred significantly upstream in small rivers but not far upstream along large fluvial systems. When analyzing the Quaternary fluvial belts of large rivers at the millennial scale, paleohydrological changes and modifications in floodplain constructional processes or erosion are normally associated with late Quaternary climatic changes. The study of several of the largest rivers demonstrates that climatic changes and fluvial responses are not always working totally in phase and that direct cause-consequence relations are not the rule. This paper describes floodplain evolution and the lagged geomorphic responses of some large river systems to recent climatic changes. Information from some of the largest rivers of the world, such as the Amazon, the Parana, several tributaries of the Amazon (Negro, Xingú, Tapajos), as well as some large Siberian rivers, was used. Since the last deglaciation, these large fluvial systems have not had enough time to reach equilibrium

  3. Improvements in large window and optics production

    NASA Astrophysics Data System (ADS)

    Hallock, Bob; Messner, Bill; Hall, Chris; Supranowitz, Chris

    2007-04-01

    Fabrication of large optics has been a topic of discussion for decades. Since as early as the late 1980s, computer-controlled equipment has been used to semi-deterministically correct the figure error of large optics over a number of process iterations. Magnetorheological Finishing (MRF®) was developed and commercialized in the late 1990s to predictably and reliably allow the user to achieve deterministic results on a variety of optical glasses, ceramics and other common optical materials. Large and small optics such as primary mirrors, conformal optics and off-axis components are efficiently fabricated using this approach. More recently, specific processes, MR fluids and equipment have been developed and implemented to enhance results when finishing large-aperture sapphire windows. MRF, by virtue of its unique removal process, overcomes many of the drawbacks of a conventional polishing process. For example, lightweighted optics often exhibit a quilted pattern coincident with their pocket cell structure following conventional pad-based polishing. MRF does not induce mid-frequency errors and is capable of removing existing quilt patterns. Further, odd aperture shapes and part geometries, which can represent significant challenges to conventional polish processing, are simply and easily corrected with MRF tools. Similarly, aspheric optics, which can present multiple obstacles (particularly when lightweighted and off-axis), typically have a departure from best-fit sphere that is not well matched to static pad-based polishing tools, resulting in pad misfit and associated variations in removal. The conformal subaperture polishing tool inherent to the QED process works as well on typical circular apertures as it does on irregular shapes such as rectangles, petals and trapezoids, and matches the surface perfectly at all points. Flats, spheres, aspheres and off-axis sections are easily corrected. The schedule uncertainties driven by edge roll and edge control

  4. Large orb-webs adapted to maximise total biomass not rare, large prey

    PubMed Central

    Harmer, Aaron M. T.; Clausen, Philip D.; Wroe, Stephen; Madin, Joshua S.

    2015-01-01

    Spider orb-webs are the ultimate anti-ballistic devices, capable of dissipating the relatively massive kinetic energy of flying prey. Increased web size and prey stopping capacity have co-evolved in a number of orb-web taxa, but the selective forces driving web size and performance increases are under debate. The rare, large prey hypothesis maintains that the energetic benefits of rare, very large prey are so much greater than the gains from smaller, more common prey that smaller prey are irrelevant for reproduction. Here, we integrate biophysical and ecological data and models to test a major prediction of the rare, large prey hypothesis, that selection should favour webs with increased stopping capacity and that large prey should comprise a significant proportion of prey stopped by a web. We find that larger webs indeed have a greater capacity to stop large prey. However, based on prey ecology, we also find that these large prey make up a tiny fraction of the total biomass (=energy) potentially captured. We conclude that large webs are adapted to stop more total biomass, and that the capacity to stop rare, but very large, prey is an incidental consequence of the longer radial silks that scale with web size. PMID:26374379

  5. The Large Hadron Collider: Redefining High Energy

    SciTech Connect

    Demers, Sarah

    2007-06-19

    Particle physicists have a description of the forces of nature known as the Standard Model that has successfully withstood decades of testing at laboratories around the world. Though the Standard Model is powerful, it is not complete. Important details like the masses of particles are not explained well, and realities as fundamental as gravity, dark matter, and dark energy are left out altogether. I will discuss gaps in the model and why there is hope that some puzzles will be solved by probing high energies with the Large Hadron Collider. Beginning next year, this machine will accelerate protons to record energies, hurling them around a 27 kilometer ring before colliding them 40 million times per second. Detectors the size of five-story buildings will record the debris of these collisions. The new energy frontier made accessible by the Large Hadron Collider will allow thousands of physicists to explore nature's fundamental forces and particles from a fantastic vantage point.

  6. Large antenna measurement and compensation techniques

    NASA Technical Reports Server (NTRS)

    Rahmatsamii, Y.

    1989-01-01

    Antennas in the range of 20 meters or larger will be an integral part of future satellite communication and scientific payloads. In order to commercially use these large, low-sidelobe, multiple-beam antennas, a high level of confidence must be established as to their performance in the 0-g and space environment. It is also desirable to compensate for slowly varying surface distortions which could result from thermal effects. An overview of recent advances in performing RF measurements on large antennas is presented, with emphasis given to the application of a space-based far-field range utilizing the Space Shuttle. The concept of surface distortion compensation is discussed by providing numerical and measurement results.

  7. Large poroelastic deformation of a soft material

    NASA Astrophysics Data System (ADS)

    MacMinn, Christopher W.; Dufresne, Eric R.; Wettlaufer, John S.

    2014-11-01

    Flow through a porous material will drive mechanical deformation when the fluid pressure becomes comparable to the stiffness of the solid skeleton. This has applications ranging from hydraulic fracture for recovery of shale gas, where fluid is injected at high pressure, to the mechanics of biological cells and tissues, where the solid skeleton is very soft. The traditional linear theory of poroelasticity captures this fluid-solid coupling by combining Darcy's law with linear elasticity. However, linear elasticity is only volume-conservative to first order in the strain, which can become problematic when damage, plasticity, or extreme softness lead to large deformations. Here, we compare the predictions of linear poroelasticity with those of a large-deformation framework in the context of two model problems. We show that errors in volume conservation are compounded and amplified by coupling with the fluid flow, and can become important even when the deformation is small. We also illustrate these results with a laboratory experiment.
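
    For reference, the linear (Biot) poroelastic coupling referred to above combines Darcy's law with linear elasticity; a standard small-strain statement is sketched below as background, and it is not the large-deformation framework developed in the study.

    ```latex
    % Darcy flux driven by pore-pressure gradients
    \mathbf{q} = -\frac{k}{\mu}\,\nabla p

    % Quasi-static equilibrium with an effective-stress law
    \nabla\cdot\boldsymbol{\sigma} = \mathbf{0},\qquad
    \boldsymbol{\sigma} = 2G\,\boldsymbol{\varepsilon}(\mathbf{u})
      + \lambda\,(\nabla\cdot\mathbf{u})\,\mathbf{I} - \alpha\,p\,\mathbf{I}

    % Fluid mass balance coupling displacement u and pressure p
    \frac{\partial}{\partial t}\!\left(\alpha\,\nabla\cdot\mathbf{u} + \frac{p}{M}\right)
      + \nabla\cdot\mathbf{q} = 0
    ```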

  8. Advances in Structures for Large Space Systems

    NASA Technical Reports Server (NTRS)

    Belvin, W. Keith

    2004-01-01

    The development of structural systems for scientific remote sensing and space exploration has been underway for four decades. The seminal work from 1960 to 1980 provided the basis for many of the design principles of modern space systems. From 1980 to 2000, advances in active materials and structures and the maturing of composites technology led to high-precision active systems such as those used in the Space Interferometry Mission. Recently, thin-film membrane or gossamer structures are being investigated for use in large-area space systems because of their low mass and high packaging efficiency. Various classes of Large Space Systems (LSS) are defined in order to describe the goals and system challenges in structures and materials technologies. With an appreciation of both past and current technology developments, future technology challenges are used to develop a list of technology investments that can have significant impacts on LSS development.

  9. Future large broadband switched satellite communications networks

    NASA Technical Reports Server (NTRS)

    Staelin, D. H.; Harvey, R. R.

    1979-01-01

    Critical technical, market, and policy issues relevant to future large broadband switched satellite networks are summarized. Market projections for the period 1980 to 2000 are compared. Clusters of switched satellites, in lieu of large platforms, etc., are shown to have significant advantages. Analysis of an optimum terrestrial network architecture suggests the proper densities of ground stations and indicates that link reliabilities of 99.99% may entail less than a 10% cost premium for diversity protection at 20/30 GHz. These analyses suggest that system costs increase as the 0.6 power of traffic. Cost estimates for nominal 20/30 GHz satellite and ground facilities suggest optimum system configurations might employ satellites with 285 beams, multiple TDMA bands each carrying 256 Mbps, and 16-ft ground station antennas. A nominal development program is outlined.

  10. Distributed control of large space antennas

    NASA Technical Reports Server (NTRS)

    Cameron, J. M.; Hamidi, M.; Lin, Y. H.; Wang, S. J.

    1983-01-01

    A systematic way to choose control design parameters and to evaluate performance for large space antennas is presented. The structural dynamics and control properties of a Hoop and Column Antenna and a Wrap-Rib Antenna are characterized. Some results on the effects of model parameter uncertainties on the stability, surface accuracy, and pointing errors are presented. Critical dynamics and control problems for these antenna configurations are identified and potential solutions are discussed. It was concluded that structural uncertainties and model error can cause serious performance deterioration and can even destabilize the controllers. For the hoop and column antenna, the large hoop, the long mast, and the lack of stiffness between the two substructures result in low structural frequencies. Performance can be improved if this design can be strengthened. The two-site control system is more robust than either single-site control system for the hoop and column antenna.

  11. Damage Tolerance of Large Shell Structures

    NASA Technical Reports Server (NTRS)

    Minnetyan, L.; Chamis, C. C.

    1999-01-01

    Progressive damage and fracture of large shell structures is investigated. A computer model is used for the assessment of structural response, progressive fracture resistance, and defect/damage tolerance characteristics. Critical locations of a stiffened conical shell segment are identified. Defective and defect-free computer models are simulated to evaluate structural damage/defect tolerance. Safe pressurization levels are assessed for the retention of structural integrity in the presence of damage/defects. Damage initiation, growth, accumulation, and propagation to fracture are included in the simulations. Damage propagation and burst pressures for defective and defect-free shells are compared to evaluate damage tolerance. Design implications with regard to defect and damage tolerance of a large steel pressure vessel are examined.

  12. Modelling of pyrolysis of large wood particles.

    PubMed

    Sadhukhan, Anup Kumar; Gupta, Parthapratim; Saha, Ranajit Kumar

    2009-06-01

    A fully transient mathematical model has been developed to describe the pyrolysis of large biomass particles. The kinetic model consists of both primary and secondary reactions. The heat transfer model includes conduction and internal convection within the particle, and convective and radiative heat transfer between the external surface and the bulk. An implicit Finite Volume Method (FVM) with the Tridiagonal Matrix Algorithm (TDMA) is employed to solve the energy conservation equation. Experimental investigations are carried out for wood fines and for large wood cylinders and spheres in an electrically heated furnace under an inert atmosphere. The model predictions for temperature and mass loss histories are in excellent agreement with experimental results. The effect of internal convection and particle shrinkage on pyrolysis behaviour is investigated and found to be significant. Finally, simulation studies are carried out to analyze the effect of bulk temperature and particle size on total pyrolysis time and the final yield of char. PMID:19231172
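
    Because the solution step above hinges on the Tridiagonal Matrix Algorithm, a minimal sketch of TDMA (the Thomas algorithm) is given below; the function name and the random test system are illustrative and are not taken from the paper.

    ```python
    import numpy as np

    def tdma(a, b, c, d):
        """Solve a tridiagonal system: a = sub-diagonal, b = diagonal,
        c = super-diagonal, d = right-hand side (a[0], c[-1] do not affect the result)."""
        n = len(d)
        cp, dp = np.empty(n), np.empty(n)
        cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
        for i in range(1, n):                      # forward elimination
            denom = b[i] - a[i] * cp[i - 1]
            cp[i] = c[i] / denom
            dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
        x = np.empty(n)
        x[-1] = dp[-1]
        for i in range(n - 2, -1, -1):             # back substitution
            x[i] = dp[i] - cp[i] * x[i + 1]
        return x

    # Sanity check against a dense solver on a diagonally dominant system
    rng = np.random.default_rng(0)
    n = 6
    a, c = rng.random(n), rng.random(n)
    b = 2.0 + a + c                                # ensures diagonal dominance
    d = rng.random(n)
    A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
    assert np.allclose(tdma(a, b, c, d), np.linalg.solve(A, d))
    ```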

  13. Large-scale simulations of reionization

    SciTech Connect

    Kohler, Katharina; Gnedin, Nickolay Y.; Hamilton, Andrew J.S.; /JILA, Boulder

    2005-11-01

    We use cosmological simulations to explore the large-scale effects of reionization. Since reionization is a process that involves a large dynamic range, from galaxies to rare bright quasars, we need to be able to cover a significant volume of the universe in our simulation without losing the important small-scale effects from galaxies. Here we have taken an approach that uses clumping factors derived from small-scale simulations to approximate the radiative transfer on the sub-cell scales. Using this technique, we can cover a simulation size up to 1280 h^-1 Mpc with 10 h^-1 Mpc cells. This allows us to construct synthetic spectra of quasars similar to observed spectra of SDSS quasars at high redshifts and compare them to the observational data. These spectra can then be analyzed for HII region sizes, the presence of the Gunn-Peterson trough, and the Lyman-α forest.

  14. Large Abdominal Wall Endometrioma Following Laparoscopic Hysterectomy

    PubMed Central

    Borncamp, Erik; Mehaffey, Philip; Rotman, Carlos

    2011-01-01

    Background: Endometriosis is a common condition in women that affects up to 45% of patients in the reproductive age group by causing pelvic pain. It is characterized by the presence of endometrial tissue outside the uterine cavity and is rarely found subcutaneously or in abdominal incisions, causing it to be overlooked in patients with abdominal pain. Methods: A 45-year-old woman presented with lower abdominal pain 2 years following a laparoscopic supracervical hysterectomy. She was found to have incidental cholelithiasis and a large abdominal mass suggestive of a significant ventral hernia on CT scan. Results: Due to the peculiar presentation, surgical intervention took place, revealing a large 9 cm × 7.6 cm × 6.2 cm abdominal wall endometrioma. Conclusion: Although extrapelvic endometriosis is rare, it should be entertained in the differential diagnosis for the female patient who presents with an abdominal mass and pain and has a previous surgical history. PMID:21902990

  15. Large steam turbine repair: A survey

    SciTech Connect

    Findlan, S.J.; Lube, B. )

    1991-07-01

    This report covers a survey taken to document the current state of the art in repairs to large steam turbines. One objective was to provide information to assist utilities in making repair-or-replacement decisions. The survey revealed that a large number of repairs have been successfully completed using both mechanical and welding repair techniques. Repair techniques have been improving in recent years and are being used more frequently. No guidelines or codes exist for the repair of steam turbine components, so each repair is primarily controlled by agreement between the utility, the contractor, and the insurer. Types of repairs are reviewed in this report, along with the capabilities of various contractors who are currently active in providing repair services. 40 refs., 10 figs., 4 tabs.

  16. Uncertain vibration equation of large membranes

    NASA Astrophysics Data System (ADS)

    Tapaswini, Smita; Chakraverty, S.; Behera, Diptiranjan

    2014-11-01

    The study of the vibration of large membranes is important due to its well-known applications. Various investigations of this problem exist in which the variables and parameters are given as crisp/exact values. In practice, we may not know these parameters exactly; they may instead be known only in some uncertain form. In the present paper, these uncertainties are taken as interval/fuzzy, and the authors propose a new method, viz. the double parametric form of fuzzy numbers, to handle the uncertain problem of large membranes. Finally, the problem has been solved using the Homotopy Perturbation Method (HPM). The present method performs very well in terms of computational efficiency. The reliability of the method is shown by obtaining an approximate numerical solution for different cases. Results are given in terms of plots and are also compared in special cases.

  17. Passive load control for large wind turbines.

    SciTech Connect

    Ashwill, Thomas D.

    2010-05-01

    Wind energy research activities at Sandia National Laboratories focus on developing large rotors that are lighter and more cost-effective than those designed with current technologies. Because gravity scales as the cube of the blade length, gravity loads become a constraining design factor for very large blades. Efforts to passively reduce turbulent loading have shown significant potential to reduce blade weight and capture more energy. Research in passive load reduction for wind turbines began at Sandia in the late 1990s and has moved from analytical studies to blade applications. This paper discusses the test results of two Sandia prototype research blades that incorporate load reduction techniques. The TX-100 is a 9-m long blade that induces bend-twist coupling with the use of off-axis carbon in the skin. The STAR blade is a 27-m long blade that induces bend-twist coupling by sweeping the blade in a geometric fashion.

  18. The missing large impact craters on Ceres

    NASA Astrophysics Data System (ADS)

    Marchi, S.; Ermakov, A. I.; Raymond, C. A.; Fu, R. R.; O'Brien, D. P.; Bland, M. T.; Ammannito, E.; de Sanctis, M. C.; Bowling, T.; Schenk, P.; Scully, J. E. C.; Buczkowski, D. L.; Williams, D. A.; Hiesinger, H.; Russell, C. T.

    2016-07-01

    Asteroids provide fundamental clues to the formation and evolution of planetesimals. Collisional models based on the depletion of the primordial main belt of asteroids predict 10-15 craters >400 km should have formed on Ceres, the largest object between Mars and Jupiter, over the last 4.55 Gyr. Likewise, an extrapolation from the asteroid Vesta would require at least 6-7 such basins. However, Ceres' surface appears devoid of impact craters >~280 km. Here, we show a significant depletion of cerean craters down to 100-150 km in diameter. The overall scarcity of recognizable large craters is incompatible with collisional models, even in the case of a late implantation of Ceres in the main belt, a possibility raised by the presence of ammoniated phyllosilicates. Our results indicate that a significant population of large craters has been obliterated, implying that long-wavelength topography viscously relaxed or that Ceres experienced protracted widespread resurfacing.

  19. MUSE: 3D Spectroscopy with Large Telescopes

    NASA Astrophysics Data System (ADS)

    Kelz, A.; Roth, M. M.; Steinmetz, M.; MUSE Consortium

    The Multi Unit Spectroscopic Explorer (MUSE) is a second generation instrument [1] in development for the Very Large Telescope (VLT) of the European Southern Observatory (ESO). It is a panoramic integral-field spectrograph operating in the visible wavelength range. It combines a wide field of view with the improved spatial resolution provided by adaptive optics and covers a large simultaneous spectral range. MUSE couples the discovery potential of an imaging device to the measuring capabilities of a spectrograph, while taking advantage of the increased spatial resolution provided by adaptive optics. This makes it a unique and powerful tool for discovering objects that cannot be found in imaging surveys. MUSE is optimized for the study of the progenitors of normal nearby galaxies out to very high redshift. It will also allow detailed studies of nearby normal, starburst and interacting galaxies, and of galactic star formation regions.

  20. Cloud Based Processing of Large Photometric Surveys

    NASA Astrophysics Data System (ADS)

    Farivar, R.; Brunner, R. J.; Santucci, R.; Campbell, R.

    2013-10-01

    Astronomy, as is the case with many scientific domains, has entered the realm of being a data-rich science. Nowhere is this reflected more clearly than in the growth of large-area surveys, such as the recently completed Sloan Digital Sky Survey (SDSS) or the Dark Energy Survey, which will soon obtain petabytes of imaging data. The data processing for these large surveys is a major challenge. In this paper, we demonstrate a new approach to this common problem. We propose the use of cloud-based technologies (e.g., Hadoop MapReduce) to run a data analysis program (e.g., SExtractor) across a cluster. Using the intermediate key/value pair design of Hadoop, our framework matches objects across different SExtractor invocations to create a unified catalog from all SDSS processed data. We conclude by presenting our experimental results on a 432-core cluster and discuss the lessons we have learned in completing this challenge.
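
    To make the key/value matching idea concrete, the toy sketch below groups detections by a coarse sky-cell key (the map step) and then merges detections that fall within a small angular tolerance (the reduce step). The key scheme, field names, and tolerances are illustrative assumptions, not the framework's actual implementation.

    ```python
    from collections import defaultdict

    def map_detection(det, cell=0.01):
        """Map step: emit a coarse sky-cell key for one SExtractor detection.
        `det` is assumed to carry 'ra' and 'dec' in degrees plus measured fields."""
        key = (round(det["ra"] / cell), round(det["dec"] / cell))
        return key, det

    def reduce_cell(detections, tol=1.0 / 3600.0):
        """Reduce step: merge detections in one cell lying within `tol` degrees.
        (Detections split across neighbouring cells are not handled in this toy.)"""
        merged = []
        for det in detections:
            for obj in merged:
                if abs(obj["ra"] - det["ra"]) < tol and abs(obj["dec"] - det["dec"]) < tol:
                    obj["sources"].append(det)
                    break
            else:
                merged.append({"ra": det["ra"], "dec": det["dec"], "sources": [det]})
        return merged

    # Toy run over detections from two hypothetical SExtractor invocations
    detections = [
        {"ra": 150.00010, "dec": 2.20001, "mag": 21.3, "run": "r1"},
        {"ra": 150.00012, "dec": 2.20003, "mag": 21.4, "run": "r2"},
        {"ra": 150.10000, "dec": 2.30000, "mag": 19.8, "run": "r2"},
    ]
    cells = defaultdict(list)
    for det in detections:
        key, value = map_detection(det)
        cells[key].append(value)
    catalog = [obj for dets in cells.values() for obj in reduce_cell(dets)]
    print(len(catalog), "unified objects")   # expect 2
    ```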

  1. GLAST Large Area Telescope Multiwavelength Planning

    SciTech Connect

    Reimer, O.; Michelson, P.F.; Cameron, R.A.; Digel, S.W.; Thompson, D.J.; Wood, K.S.

    2007-01-03

    Gamma-ray astrophysics depends in many ways on multiwavelength studies. The Gamma-ray Large Area Space Telescope (GLAST) Large Area Telescope (LAT) Collaboration has started multiwavelength planning well before the scheduled 2007 launch of the observatory. Some of the high-priority multiwavelength needs include: (1) availability of contemporaneous radio and X-ray timing of pulsars; (2) expansion of blazar catalogs, including redshift measurements; (3) improved observations of molecular clouds, especially at high galactic latitudes; (4) simultaneous broad-band blazar monitoring; (5) characterization of gamma-ray transients, including gamma-ray bursts; (6) radio, optical, X-ray and TeV counterpart searches for reliable and effective source identification and characterization. Several of these activities need to be in place before launch.

  2. Genetics of hereditary large vessel diseases.

    PubMed

    Morisaki, Takayuki; Morisaki, Hiroko

    2016-01-01

    Recent progress in the study of hereditary large vessel diseases such as Marfan syndrome (MFS) has not only identified responsible genes but also provided better understanding of the pathophysiology and revealed possible new therapeutic targets. Genes identified for these diseases include FBN1, TGFBR1, TGFBR2, SMAD3, TGFB2, TGFB3, SKI, EFEMP2, COL3A1, FLNA, ACTA2, MYH11, MYLK and SLC2A10, as well as others. Their dysfunction disrupts the function of transforming growth factor-β (TGF-β) signaling pathways, as well as that of the extracellular matrix and the smooth muscle contractile apparatus, resulting in progression of structural damage to large vessels, including aortic aneurysms and dissections. Notably, it has been shown that the TGF-β signaling pathway has a key role in the pathogenesis of MFS and related disorders, which may be important for development of strategies for medical and surgical treatment of thoracic aortic aneurysms and dissections. PMID:26446364

  3. Arithmetic in large GF(2(exp n))

    NASA Technical Reports Server (NTRS)

    Cameron, Kelly

    1993-01-01

    The decoding of Reed Solomon (BCH) codes usually requires large numbers of calculations using GF(2(exp n)) arithmetic. Though efficient algorithms and corresponding circuits for performing basic Galois field arithmetic are known, many of these techniques either become very slow or else require an inordinate amount of circuitry to implement when the size of the Galois field becomes much larger than GF(2(exp 8)). Consequently, most currently available Reed-Solomon decoders are built using small fields, such as GF(2(exp 8)) or GF(2(exp 10)), even though significant coding efficiencies could often be obtained if larger symbol sizes, such as GF(2(exp 16)) or GF(2(exp 32)), were used. Algorithms for performing the basic arithmetic required to decode Reed-Solomon codes have been developed explicitly for use in these large fields. They are discussed in detail.
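
    As a concrete illustration of the underlying operation, the sketch below multiplies two field elements by carry-less polynomial multiplication followed by reduction modulo an irreducible polynomial. It is a generic illustration rather than the algorithms developed in the report; the demonstration uses the well-known GF(2^8) polynomial x^8 + x^4 + x^3 + x + 1, and a large field such as GF(2^16) or GF(2^32) would simply substitute a degree-16 or degree-32 irreducible polynomial.

    ```python
    def gf2n_mul(a, b, modulus, n):
        """Multiply a and b in GF(2^n): carry-less (XOR) polynomial multiplication
        followed by reduction modulo the degree-n irreducible polynomial `modulus`
        (given with its leading x^n bit set)."""
        prod = 0
        while b:                                   # carry-less multiplication
            if b & 1:
                prod ^= a
            a <<= 1
            b >>= 1
        for bit in range(prod.bit_length() - 1, n - 1, -1):   # reduction, highest bit first
            if prod >> bit & 1:
                prod ^= modulus << (bit - n)
        return prod

    # Demonstration in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1;
    # 0x53 and 0xCA are a known multiplicative-inverse pair in that field.
    AES_POLY = 0x11B
    assert gf2n_mul(0x53, 0xCA, AES_POLY, 8) == 0x01
    ```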

  4. Fractals and cosmological large-scale structure

    NASA Technical Reports Server (NTRS)

    Luo, Xiaochun; Schramm, David N.

    1992-01-01

    Observations of galaxy-galaxy and cluster-cluster correlations as well as other large-scale structure can be fit with a 'limited' fractal with dimension D of about 1.2. This is not a 'pure' fractal out to the horizon: the distribution shifts from power law to random behavior at some large scale. If the observed patterns and structures are formed through an aggregation growth process, the fractal dimension D can serve as an interesting constraint on the properties of the stochastic motion responsible for limiting the fractal structure. In particular, it is found that the observed fractal should have grown from two-dimensional sheetlike objects such as pancakes, domain walls, or string wakes. This result is generic and does not depend on the details of the growth process.

  5. Operations analysis for a large lunar telescope

    NASA Technical Reports Server (NTRS)

    Thyen, Christopher

    1992-01-01

    Consideration is given to a study of the operations and assembly of a 16-m large lunar telescope (LLT), covering the telescope's path from LEO to assembly on the lunar surface. The study of LLT operations and assembly is broken down into three divisions to allow easier operations analysis: Earth-to-orbit operations, LEO operations (transfer to the lunar surface), and lunar surface operations. The following guidelines were set down to ensure a reasonable starting point for a large, untended lunar installation: the existence of a lunar base, a space transportation system from LEO to the lunar surface, continuous manning of the lunar base during the assembly period, and the availability/capability to perform lunar assembly with the lunar base crew. The launch/vehicle packaging options, lunar site selection and assembly options, and assembly crew assumptions are discussed.

  6. Percutaneous Large Arterial Access Closure Techniques.

    PubMed

    McGraw, Charles J; Gandhi, Ripal T; Vatakencherry, Geogy; Baumann, Frederic; Benenati, James F

    2015-06-01

    Endovascular repair has replaced open surgical repair as the standard of care for treatment of abdominal and thoracic aortic aneurysms in appropriately selected patients owing to its decreased morbidity and length of stay and excellent clinical outcomes. Similarly, there is a progressive trend toward total percutaneous repair of the femoral artery using percutaneous suture-mediated closure devices over open surgical repair due to decreased complications and procedure time. This article describes the techniques of closure for large-bore vascular access most commonly used in endovascular treatment of abdominal and thoracic aortic aneurysms, but could similarly be applied to any procedure requiring large-bore arterial access, such as transcatheter aortic valve replacement. PMID:26070624

  7. Very Large Aperture Diffractive Space Telescope

    SciTech Connect

    Hyde, Roderick Allen

    1998-04-20

    A very large (tens of meters) aperture space telescope comprising two separate spacecraft: an optical primary functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart, with the eyepiece directly behind the magnifying glass "aiming" at an intended target; their relative orientation determines the optical axis of the telescope and hence the targets being observed. The magnifying glass includes a very large-aperture, very thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the magnifying glass, gathering up the incoming light and converting it to high-quality images. The positions of the two spacecraft are controlled both to maintain good optical focus and to point at desired targets.

  8. Planning for large construction projects in space

    NASA Technical Reports Server (NTRS)

    Disher, J. H.

    1978-01-01

    The paper briefly discusses some broad plans for developing the technology needed for large construction projects in space, ranging from orbiting solar power stations to large communications antennas. Space construction classes include assembly of modules, deployment of compacted structures, assembly of passive preformed pieces, and fabrication of structures from sheet stock. Technological areas related to structural concepts include (1) analyses for prediction of structural behavior, structural/control interaction, electromagnetic and control performance, and integrated design development; (2) electronics for signal conditioning and data acquisition, power distribution, and signal channel interference and multipaction; (3) concepts for shape control, attitude/pointing control, and orbital transfer and station keeping; and (4) materials and techniques for 30-year dimensionally stable composites, thermal control, thin lightweight structural alloys, and material joining in space. The concept of a power module for the construction operations is discussed along with a concept for a habitability module.

  9. A charged membrane paradigm at large D

    NASA Astrophysics Data System (ADS)

    Bhattacharyya, Sayantani; Mandlik, Mangesh; Minwalla, Shiraz; Thakur, Somyadip

    2016-04-01

    We study the effective dynamics of black hole horizons in Einstein-Maxwell theory in a large number of spacetime dimensions D. We demonstrate that horizon dynamics may be recast as a well-posed initial value problem for the motion of a codimension-one non-gravitational membrane moving in flat space. The dynamical degrees of freedom of this membrane are its shape, charge density, and a divergence-free velocity field. We determine the equations that govern membrane dynamics at leading order in the large D expansion. Our derivation of the membrane equations assumes that the solution preserves an SO(D - p - 2) isometry with p held fixed as D is taken to infinity. However, we are able to cast our final membrane equations into a completely geometric form that makes no reference to this symmetry algebra.

  10. GLAST Large Area Telescope Multiwavelength Planning

    NASA Technical Reports Server (NTRS)

    Reimer, O.; Michelson, P. F.; Cameron, R. A.; Digel, S. W.; Thompson, D. J.; Wood, K. S.

    2007-01-01

    Gamma-ray astrophysics depends in many ways on multiwavelength studies. The Gamma-ray Large Area Space Telescope (GLAST) Large Area Telescope (LAT) Collaboration has started multiwavelength planning well before the scheduled 2007 launch of the observatory. Some of the high-priority multiwavelength needs include: (1) availability of contemporaneous radio and X-ray timing of pulsars; (2) expansion of blazar catalogs, including redshift measurements; (3) improved observations of molecular clouds, especially at high galactic latitudes; (4) simultaneous broad-spectrum blazar monitoring; (5) characterization of gamma-ray transients, including gamma-ray bursts; (6) radio, optical, X-ray and TeV counterpart searches for reliable and effective source identification and characterization. Several of these activities need to be in place before launch.

  11. Buried pipelines in large fault movements

    SciTech Connect

    Wang, L.J.; Wang, L.R.L.

    1995-12-31

    Responses of buried pipelines to large fault movements are examined based upon a non-linear cantilever beam analogy. This analogy assumes that the pipeline in a large deflection zone behaves like a cantilever beam under a transverse concentrated shear at the inflection point, with a uniformly distributed soil pressure along the entire span. The tangent modulus approach is adopted to analyze the coupled axial force-bending moment interaction on pipeline deformations in the inelastic range. The buckling load of a compressive pipeline is computed by the modified Newmark's numerical integration scheme. Parametric studies of both tensile and compressive pipeline responses to various fault movements, pipeline/fault crossing angles, soil/pipe friction angles, buried depths, pipe diameters and thicknesses are investigated. Comparisons show that previous findings were unconservative.

  12. The NASA Lewis large wind turbine program

    NASA Technical Reports Server (NTRS)

    Thomas, R. L.; Baldwin, D. H.

    1981-01-01

    The program is directed toward development of the technology for safe, reliable, environmentally acceptable large wind turbines that have the potential to generate a significant amount of electricity at costs competitive with conventional electric generation systems. In addition, these large wind turbines must be fully compatible with electric utility operations and interface requirements. Advances are made by gaining a better understanding of the system design drivers, improvements in the analytical design tools, verification of design methods with operating field data, and the incorporation of new technology and innovative designs. An overview of the program activities is presented and includes results from the first and second generation field machines (Mod-OA, -1, and -2), the design phase of the third generation wind turbine (Mod-5) and the advanced technology projects. Also included is the status of the Department of Interior WTS-4 machine.

  13. Large spin magnetism with cold atoms

    NASA Astrophysics Data System (ADS)

    Laburthe-Tolra, Bruno

    2016-05-01

    The properties of quantum gases made of ultra-cold atoms strongly depend on the interactions between atoms. These interactions lead to condensed-matter-like collective behavior, so that quantum gases appear to be a new platform to study quantum many-body physics. In this seminar, I will focus on the case where the atoms possess an internal (spin) degree of freedom. The spin of atoms is naturally larger than that of electrons. Therefore, the study of the magnetic properties of ultra-cold gases allows for an exploration of magnetism beyond the typical situation in solid-state physics, where magnetism is associated with the s = 1/2 spin of the electron. I will describe three specific cases: spinor Bose-Einstein condensates, where spin-dependent contact interactions introduce new quantum phases and spin dynamics; large-spin magnetic atoms, where strong dipole-dipole interactions lead to exotic quantum magnetism; and large-spin Fermi gases.

  14. Large-scale linear rankSVM.

    PubMed

    Lee, Ching-Pei; Lin, Chih-Jen

    2014-04-01

    Linear rankSVM is one of the widely used methods for learning to rank. Although its performance may be inferior to nonlinear methods such as kernel rankSVM and gradient boosting decision trees, linear rankSVM is useful for quickly producing a baseline model. Furthermore, following its recent development for classification, linear rankSVM may give competitive performance for large and sparse data. A great deal of work has studied linear rankSVM, with the focus on computational efficiency when the number of preference pairs is large. In this letter, we systematically study existing works, discuss their advantages and disadvantages, and propose an efficient algorithm. We discuss different implementation issues and extensions with detailed experiments. Finally, we develop a robust linear rankSVM tool for public use. PMID:24479776
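
    For readers unfamiliar with the pairwise formulation, the sketch below evaluates an L2-regularized squared-hinge rankSVM objective by looping over all preference pairs within each query. It is a naive O(number of pairs) illustration under assumed variable names, not the efficient algorithm proposed in the letter.

    ```python
    import numpy as np

    def ranksvm_objective(w, X, y, qid, C=1.0):
        """L2-regularized linear rankSVM objective with squared hinge loss over
        preference pairs (i, j) in the same query with y[i] > y[j]."""
        scores = X @ w
        loss = 0.0
        for q in np.unique(qid):
            idx = np.flatnonzero(qid == q)
            for i in idx:
                for j in idx:
                    if y[i] > y[j]:                      # document i preferred over j
                        margin = 1.0 - (scores[i] - scores[j])
                        if margin > 0:
                            loss += margin ** 2
        return 0.5 * w @ w + C * loss

    # Toy example: 2 queries, 3 documents each, 2 features
    rng = np.random.default_rng(1)
    X = rng.normal(size=(6, 2))
    y = np.array([2, 1, 0, 1, 0, 2])       # relevance labels
    qid = np.array([0, 0, 0, 1, 1, 1])     # query ids
    print(ranksvm_objective(np.zeros(2), X, y, qid))   # 6 preference pairs -> 6.0 at w = 0
    ```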

  15. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  16. A superconducting large-angle magnetic suspension

    NASA Technical Reports Server (NTRS)

    Downer, James; Goldie, James; Torti, Richard

    1991-01-01

    The component technologies required for an advanced control moment gyro (CMG) type of slewing actuator for large payloads were developed. The key component of the CMG is a large-angle magnetic suspension (LAMS). The LAMS combines the functions of the gimbal structure, torque motors, and rotor bearings of a CMG. The LAMS uses a single superconducting source coil and an array of cryoresistive control coils to produce a specific output torque more than an order of magnitude greater than conventional devices. The LAMS system designed and tested is based on an available superconducting solenoid, an array of twelve room-temperature normal control coils, and a multi-input, multi-output control system. The control laws for stabilizing and controlling the LAMS system were demonstrated.

  17. Big Science and the Large Hadron Collider

    NASA Astrophysics Data System (ADS)

    Giudice, Gian Francesco

    2012-03-01

    The Large Hadron Collider (LHC), the particle accelerator operating at CERN, is probably the most complex and ambitious scientific project ever accomplished by humanity. The sheer size of the enterprise, in terms of financial and human resources, naturally raises the question whether society should support such costly basic-research programs. I address this question by first reviewing the process that led to the emergence of Big Science and the role of large projects in the development of science and technology. I then compare the methodologies of Small and Big Science, emphasizing their mutual linkage. Finally, after examining the cost of Big Science projects, I highlight several general aspects of their beneficial implications for society.

  18. Timing characteristics of Large Area Picosecond Photodetectors

    NASA Astrophysics Data System (ADS)

    Adams, B. W.; Elagin, A.; Frisch, H. J.; Obaid, R.; Oberla, E.; Vostrikov, A.; Wagner, R. G.; Wang, J.; Wetstein, M.

    2015-09-01

    The LAPPD Collaboration was formed to develop ultrafast large-area imaging photodetectors based on new methods for fabricating microchannel plates (MCPs). In this paper we characterize the time response using a pulsed, sub-picosecond laser. We observe single-photoelectron time resolutions of a 20 cm × 20 cm MCP consistently below 70 ps, spatial resolutions of roughly 500 μm, and median gains higher than 107. The RMS measured at one particular point on an LAPPD detector is 58 ps, with ± 1σ of 47 ps. The differential time resolution between the signal reaching the two ends of the delay line anode is measured to be 5.1 ps for large signals, with an asymptotic limit falling below 2 ps as noise-over-signal approaches zero.

  19. Optical encryption for large-sized images

    NASA Astrophysics Data System (ADS)

    Sanpei, Takuho; Shimobaba, Tomoyoshi; Kakue, Takashi; Endo, Yutaka; Hirayama, Ryuji; Hiyama, Daisuke; Hasegawa, Satoki; Nagahama, Yuki; Sano, Marie; Oikawa, Minoru; Sugie, Takashige; Ito, Tomoyoshi

    2016-02-01

    We propose an optical encryption framework that can encrypt and decrypt large-sized images beyond the size of the encrypted image using our two methods: a random phase-free method and scaled diffraction. In order to record the entire image information on the encrypted image, large-sized images require a random phase to widely diffuse the object light over the encrypted image; however, the random phase gives rise to speckle noise on the decrypted images, which may make them difficult to recognize. In order to reduce the speckle noise, we apply our random phase-free method to the framework. In addition, we employ scaled diffraction, which calculates light propagation between planes with different sizes by changing the sampling rates.

  20. Airway obstruction secondary to large thyroid adenolipoma

    PubMed Central

    Fitzpatrick, Nicholas; Malik, Paras; Hinton-Bayre, Anton; Lewis, Richard

    2014-01-01

    Adenolipoma of the thyroid gland is a rare benign neoplasm composed of normal thyroid and mature adipose tissue. Ordinarily, only a small amount of fat exists in a normal thyroid gland. CT and MRI may differentiate between benign and malignant lesions, and fine-needle aspirate often assists diagnosis. Surgical excision for adenolipoma is considered curative. We report the case of a 67-year-old man presenting with a large neck lump and evidence of airway obstruction. Imaging revealed a 97×70 mm left thyroid mass with retropharyngeal extension and laryngotracheal compression. Hemithyroidectomy was performed with subsequent histology confirming a large thyroid adenolipoma. The patient's symptoms resolved and he remains asymptomatic with no sign of recurrence 2 years postsurgery. PMID:25199190

  1. Online Community Detection for Large Complex Networks

    PubMed Central

    Pan, Gang; Zhang, Wangsheng; Wu, Zhaohui; Li, Shijian

    2014-01-01

    Complex networks describe a wide range of systems in nature and society. To understand complex networks, it is crucial to investigate their community structure. In this paper, we develop an online community detection algorithm with linear time complexity for large complex networks. Our algorithm processes a network edge by edge, in the order that the network is fed to the algorithm. If a new edge is added, it just updates the existing community structure in constant time and does not need to re-compute the whole network. Therefore, it can efficiently process large networks in real time. Our algorithm optimizes expected modularity, instead of modularity, at each step to avoid poor performance. The experiments are carried out using 11 public data sets and are measured by two criteria, modularity and NMI (Normalized Mutual Information). The results show that our algorithm's running time is less than that of the commonly used Louvain algorithm, while it gives competitive performance. PMID:25061683
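
    To illustrate what edge-by-edge processing with constant-time updates looks like in practice, here is a deliberately simplified streaming heuristic; it is a toy sketch only, not the expected-modularity algorithm of the paper.

    ```python
    def stream_communities(edge_stream):
        """Toy edge-by-edge community assignment: each incoming edge either starts a
        new community, attaches an unseen node to its neighbour's community, or
        leaves the existing structure unchanged. Each edge costs O(1) work;
        this is NOT the expected-modularity algorithm of the paper."""
        community = {}            # node -> community id
        next_id = 0
        for u, v in edge_stream:
            cu, cv = community.get(u), community.get(v)
            if cu is None and cv is None:
                community[u] = community[v] = next_id
                next_id += 1
            elif cu is None:
                community[u] = cv
            elif cv is None:
                community[v] = cu
            # if both endpoints are already assigned, no re-computation is done
        return community

    edges = [(1, 2), (2, 3), (4, 5), (3, 4), (6, 1)]
    print(stream_communities(edges))
    # {1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 0}
    ```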

  2. Environmental effects of large impacts on Mars.

    PubMed

    Segura, Teresa L; Toon, Owen B; Colaprete, Anthony; Zahnle, Kevin

    2002-12-01

    The martian valley networks formed near the end of the period of heavy bombardment of the inner solar system, about 3.5 billion years ago. The largest impacts produced global blankets of very hot ejecta, ranging in thickness from meters to hundreds of meters. Our simulations indicated that the ejecta warmed the surface, keeping it above the freezing point of water for periods ranging from decades to millennia, depending on impactor size, and caused shallow subsurface or polar ice to evaporate or melt. Large impacts also injected steam into the atmosphere from the craters or from water innate to the impactors. From all sources, a typical 100-, 200-, or 250-kilometer asteroid injected about 2, 9, or 16 meters, respectively, of precipitable water into the atmosphere, which eventually rained out at a rate of about 2 meters per year. The rains from a large impact formed rivers and contributed to recharging aquifers. PMID:12471254

  3. Large volume axionic Swiss cheese inflation

    NASA Astrophysics Data System (ADS)

    Misra, Aalok; Shukla, Pramod

    2008-09-01

    Continuing with the ideas of (Section 4 of) [A. Misra, P. Shukla, Moduli stabilization, large-volume dS minimum without anti-D3-branes, (non-)supersymmetric black hole attractors and two-parameter Swiss cheese Calabi-Yau's, arXiv: 0707.0105 [hep-th], Nucl. Phys. B, in press], after inclusion of perturbative and non-perturbative α′ corrections to the Kähler potential and the (D1- and D3-) instanton generated superpotential, we show the possibility of slow roll axionic inflation in the large volume limit of Swiss cheese Calabi-Yau orientifold compactifications of type IIB string theory. We also include one- and two-loop corrections to the Kähler potential but find them to be subdominant to the (perturbative and non-perturbative) α′ corrections. The NS-NS axions provide a flat direction for slow roll inflation to proceed from a saddle point to the nearest dS minimum.

  4. Quasisymmetric toroidal plasmas with large mean flows

    SciTech Connect

    Sugama, H.; Watanabe, T.-H.; Nunami, M.; Nishimura, S.

    2011-08-15

    Geometric conditions for quasisymmetric toroidal plasmas with large mean flows on the order of the ion thermal speed are investigated. Equilibrium momentum balance equations including the inertia term due to the large flow velocity are used to show that, for rotating quasisymmetric plasmas with no local currents crossing flux surfaces, all components of the metric tensor should be independent of the toroidal angle in the Boozer coordinates, and consequently these systems need to be rigorously axisymmetric. Unless the local radial currents vanish, the Boozer coordinates do not exist and the toroidal flow velocity cannot take any value other than a very limited class of eigenvalues corresponding to very rapid rotation especially for low beta plasmas.

  5. Arithmetic in large GF(2^n)

    NASA Astrophysics Data System (ADS)

    Cameron, Kelly

    The decoding of Reed-Solomon (BCH) codes usually requires large numbers of calculations using GF(2^n) arithmetic. Though efficient algorithms and corresponding circuits for performing basic Galois field arithmetic are known, many of these techniques either become very slow or else require an inordinate amount of circuitry to implement when the size of the Galois field becomes much larger than GF(2^8). Consequently, most currently available Reed-Solomon decoders are built using small fields, such as GF(2^8) or GF(2^10), even though significant coding efficiencies could often be obtained if larger symbol sizes, such as GF(2^16) or GF(2^32), were used. Algorithms for performing the basic arithmetic required to decode Reed-Solomon codes have been developed explicitly for use in these large fields. They are discussed in detail.
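
    As a concrete illustration, a minimal Python sketch of the basic shift-and-reduce multiplication that such decoders rely on is given below. The GF(2^8) reduction polynomial shown is the standard AES one; the same routine works for GF(2^16) or GF(2^32) once an irreducible polynomial of the corresponding degree is supplied (not shown here).

        def gf_mul(a, b, degree, poly):
            # Multiply a and b in GF(2^degree), reducing modulo the irreducible
            # polynomial whose low 'degree' bits are given in 'poly'.
            result = 0
            for _ in range(degree):
                if b & 1:
                    result ^= a              # add (XOR) the current multiple of a
                b >>= 1
                carry = a >> (degree - 1)
                a = (a << 1) & ((1 << degree) - 1)
                if carry:
                    a ^= poly                # reduce modulo the field polynomial
            return result

        # GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1 (low bits 0x1B):
        print(hex(gf_mul(0x57, 0x83, 8, 0x1B)))   # classic worked example -> 0xc1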

  6. GLAST Large Area Telescope Multiwavelength Planning

    NASA Technical Reports Server (NTRS)

    Thompson, D. J.; Cameron, R. A.; Digel, S. W.; Wood, K. S.

    2006-01-01

    Because gamma-ray astrophysics depends in many ways on multiwavelength studies, the GLAST Large Area Telescope (LAT) Collaboration has started multiwavelength planning well before the scheduled 2007 launch of the observatory. Some of the high-priority needs include: (1) radio and X-ray timing of pulsars; (2) expansion of blazar catalogs, including redshift measurements; (3) improved observations of molecular clouds, especially at high galactic latitudes; (4) simultaneous broad-spectrum blazar flare measurements; (5) characterization of gamma-ray transients, including gamma-ray bursts; (6) radio, optical, X-ray and TeV counterpart searches for unidentified gamma-ray sources. Work on the first three of these activities is needed before launch. The GLAST Large Area Telescope is an international effort, with U.S. funding provided by the Department of Energy and NASA.

  7. Large area position sensitive β-detector

    NASA Astrophysics Data System (ADS)

    Vaintraub, S.; Hass, M.; Edri, H.; Morali, N.; Segal, T.

    2015-03-01

    A new conceptual design of a large-area electron detector, which is position and energy sensitive, was developed. This detector is designed for beta decay energies up to 4 MeV, but in principle can be re-designed for higher energies. The detector incorporates one large plastic scintillator and, in general, a limited number of photomultipliers (7 at present). The current setup was designed and constructed after an extensive Geant4 simulation study. By comparing the light distribution of a single hit among the various photomultipliers to a pre-measured, accurate position-response map, the anticipated position resolution is around 5 mm. The first benchmark experiments have been conducted in order to calibrate and confirm the position resolution of the detector. The new method, results of the first test experiments, and a comparison to simulations are presented.
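
    A minimal sketch of the comparison step described above, under assumptions: a hypothetical calibration map stores the expected fraction of light in each of the 7 photomultipliers for every grid position, and a measured hit is assigned to the map entry with the smallest chi-square-like distance. The numbers are illustrative only.

        import numpy as np

        def locate_hit(pmt_signals, map_positions, map_fractions):
            # pmt_signals: (7,) measured amplitudes for one hit.
            # map_positions: (M, 2) calibration grid points (x, y) in mm.
            # map_fractions: (M, 7) expected light fractions at each grid point.
            measured = pmt_signals / pmt_signals.sum()       # normalize out total light
            chi2 = ((map_fractions - measured) ** 2 / (map_fractions + 1e-9)).sum(axis=1)
            return map_positions[np.argmin(chi2)]            # best-matching grid point

        # Hypothetical toy map: 3 grid points, 7 photomultipliers.
        positions = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
        fractions = np.array([[0.4, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
                              [0.1, 0.4, 0.1, 0.1, 0.1, 0.1, 0.1],
                              [0.1, 0.1, 0.4, 0.1, 0.1, 0.1, 0.1]])
        hit = np.array([42.0, 11.0, 10.0, 9.0, 10.0, 9.0, 9.0])
        print(locate_hit(hit, positions, fractions))         # -> [0. 0.]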

  8. Large field inflation from D-branes

    NASA Astrophysics Data System (ADS)

    Escobar, Dagoberto; Landete, Aitor; Marchesano, Fernando; Regalado, Diego

    2016-04-01

    We propose new large field inflation scenarios built on the framework of F-term axion monodromy. Our setup is based on string compactifications where D-branes create potentials for closed string axions via F-terms. Because the source of the axion potential is different from the standard sources of moduli stabilization, it is possible to lower the inflaton mass as compared to other massive scalars. We discuss a particular class of models based on type IIA flux compactifications with D6-branes. In the small field regime they describe supergravity models of quadratic chaotic inflation with a stabilizer field. In the large field regime the inflaton potential displays a flattening effect due to Planck suppressed corrections, allowing us to easily fit the cosmological parameters of the model within current experimental bounds.

  9. Large deviations for Markov processes with resetting.

    PubMed

    Meylahn, Janusz M; Sabhapandit, Sanjib; Touchette, Hugo

    2015-12-01

    Markov processes restarted or reset at random times to a fixed state or region in space have been actively studied recently in connection with random searches, foraging, and population dynamics. Here we study the large deviations of time-additive functions or observables of Markov processes with resetting. By deriving a renewal formula linking generating functions with and without resetting, we are able to obtain the rate function of such observables, characterizing the likelihood of their fluctuations in the long-time limit. We consider as an illustration the large deviations of the area of the Ornstein-Uhlenbeck process with resetting. Other applications involving diffusions, random walks, and jump processes with resetting or catastrophes are discussed. PMID:26764673
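
    A minimal Monte Carlo sketch of the illustrative example mentioned above (not of the renewal-formula derivation itself): an Ornstein-Uhlenbeck process with Poissonian resetting to the origin, whose time-averaged area A_T = (1/T) ∫ X_t dt is the kind of time-additive observable whose long-time fluctuations the rate function describes. All parameter values are assumptions.

        import numpy as np

        def ou_reset_area(T=10.0, dt=1e-3, gamma=1.0, D=1.0, r=0.5, seed=0):
            # Euler-Maruyama simulation of dX = -gamma*X dt + sqrt(2D) dW,
            # with a reset to X = 0 at Poisson rate r; returns the time-averaged area.
            rng = np.random.default_rng(seed)
            x, area = 0.0, 0.0
            for _ in range(int(T / dt)):
                if rng.random() < r * dt:            # resetting event
                    x = 0.0
                else:
                    x += -gamma * x * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
                area += x * dt
            return area / T

        samples = [ou_reset_area(seed=k) for k in range(100)]
        print(np.mean(samples), np.std(samples))     # fluctuations of A_T around zero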

  10. Estimation for large non-centrality parameters

    NASA Astrophysics Data System (ADS)

    Inácio, Sónia; Mexia, João; Fonseca, Miguel; Carvalho, Francisco

    2016-06-01

    We introduce the concept of estimability for models for which accurate estimators can be obtained for the respective parameters. The study was conducted for models with almost scalar matrices, examining estimability after validation of these models. In the validation of these models we use F statistics with non-centrality parameter τ = ‖λ‖²/σ²; when this parameter is sufficiently large we obtain good estimators for λ and α, so there is estimability. Thus, we are interested in obtaining a lower bound for the non-centrality parameter. In this context we use, for the statistical inference, inducing pivot variables (see Ferreira et al. 2013) and asymptotic linearity, introduced by Mexia & Oliveira 2011, to derive confidence intervals for large non-centrality parameters (see Inácio et al. 2015). These results enable us to measure the relevance of effects and interactions in multifactor models when the values of the F test statistics are highly statistically significant.
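
    A minimal SciPy sketch of one standard way to turn a highly significant F statistic into a one-sided lower confidence bound for the non-centrality parameter, by inverting the non-central F distribution. This is a generic construction offered for illustration, not the pivot-variable/asymptotic-linearity method of the record; the degrees of freedom and observed F value are assumptions.

        from scipy.optimize import brentq
        from scipy.stats import ncf

        def ncp_lower_bound(f_obs, dfn, dfd, alpha=0.05):
            # Largest tau with P(F >= f_obs | tau) <= alpha, i.e. the one-sided
            # (1 - alpha) lower confidence bound for the non-centrality parameter.
            g = lambda tau: ncf.sf(f_obs, dfn, dfd, tau) - alpha
            if g(0.0) >= 0:              # even tau = 0 is compatible with the data
                return 0.0
            hi = 1.0
            while g(hi) < 0:             # bracket the root
                hi *= 2.0
            return brentq(g, 0.0, hi)

        # Hypothetical highly significant F test: F = 40 with 3 and 30 degrees of freedom.
        print(ncp_lower_bound(40.0, 3, 30))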

  11. Large-Scale Organization of Glycosylation Networks

    NASA Astrophysics Data System (ADS)

    Kim, Pan-Jun; Lee, Dong-Yup; Jeong, Hawoong

    2009-03-01

    Glycosylation is a highly complex process to produce a diverse repertoire of cellular glycans that are frequently attached to proteins and lipids. Glycans participate in fundamental biological processes including molecular trafficking and clearance, cell proliferation and apoptosis, developmental biology, immune response, and pathogenesis. N-linked glycans found on proteins are formed by sequential attachments of monosaccharides with the help of a relatively small number of enzymes. Many of these enzymes can accept multiple N-linked glycans as substrates, thus generating a large number of glycan intermediates and their intermingled pathways. Motivated by the quantitative methods developed in complex network research, we investigate the large-scale organization of such N-glycosylation pathways in a mammalian cell. The uncovered results give the experimentally-testable predictions for glycosylation process, and can be applied to the engineering of therapeutic glycoproteins.

  12. Numerical solution of large Lyapunov equations

    NASA Technical Reports Server (NTRS)

    Saad, Youcef

    1989-01-01

    A few methods are proposed for solving large Lyapunov equations that arise in control problems. The common case where the right hand side is a small rank matrix is considered. For the single input case, i.e., when the equation considered is of the form AX + XA^T + bb^T = 0, where b is a column vector, the existence of approximate solutions of the form X = VGV^T, where V is N x m and G is m x m, with m small, is established. The first class of methods proposed is based on the use of numerical quadrature formulas, such as Gauss-Laguerre formulas, applied to the controllability Grammian. The second is based on a projection process of Galerkin type. Numerical experiments are presented to test the effectiveness of these methods for large problems.
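
    A minimal NumPy/SciPy sketch of the second (Galerkin projection) idea: build an orthonormal Krylov basis V from A and b, solve the small projected Lyapunov equation for G, and expand back to X ≈ VGV^T. It is a generic illustration of the projection approach with an assumed test matrix, not the specific algorithm of the report.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        def arnoldi_basis(A, b, m):
            # Orthonormal basis of the Krylov subspace span{b, Ab, ..., A^(m-1) b}.
            n = len(b)
            V = np.zeros((n, m))
            V[:, 0] = b / np.linalg.norm(b)
            for j in range(1, m):
                w = A @ V[:, j - 1]
                for i in range(j):                   # modified Gram-Schmidt
                    w -= (V[:, i] @ w) * V[:, i]
                V[:, j] = w / np.linalg.norm(w)
            return V

        def lyap_galerkin(A, b, m):
            # Low-rank approximation X ~ V G V^T of AX + XA^T + bb^T = 0.
            V = arnoldi_basis(A, b, m)
            H = V.T @ A @ V                          # projected m x m matrix
            bh = V.T @ b
            G = solve_continuous_lyapunov(H, -np.outer(bh, bh))
            return V, G

        rng = np.random.default_rng(0)
        n = 200
        A = -2.0 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)   # stable test matrix
        b = rng.standard_normal(n)
        V, G = lyap_galerkin(A, b, 20)
        X = V @ G @ V.T
        print(np.linalg.norm(A @ X + X @ A.T + np.outer(b, b)))           # residual norm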

  13. Timing Characteristics of Large Area Picosecond Photodetectors

    SciTech Connect

    Adams, Bernhard W.; Elagin, Andrey L.; Frisch, H.; Obaid, Razib; Oberla, E; Vostrikov, Alexander; Wagner, Robert G.; Wang, Jingbo; Wetstein, Matthew J.; Northrop, R

    2015-09-21

    The LAPPD Collaboration was formed to develop ultrafast large-area imaging photodetectors based on new methods for fabricating microchannel plates (MCPs). In this paper we characterize the time response using a pulsed, sub-picosecond laser. We observe single-photoelectron time resolutions of a 20 cm x 20 cm MCP consistently below 70 ps, spatial resolutions of roughly 500 μm, and median gains higher than 10^7. The RMS measured at one particular point on an LAPPD detector is 58 ps, with a fitted σ of 47 ps. The differential time resolution between the signal reaching the two ends of the delay-line anode is measured to be 5.1 ps for large signals, with an asymptotic limit falling below 2 ps as noise-over-signal approaches zero.

  14. The pathology of large-vessel vasculitides.

    PubMed

    Miller, Dylan V; Maleszewski, Joseph J

    2011-01-01

    Vasculitis affecting large elastic arteries, including the aorta and major proximal branches, encompasses various diseases including Takayasu arteritis, giant cell (or temporal) arteritis, and tertiary syphilis, but also may occur as a rare complication of Behçet's disease, rheumatoid arthritis, sarcoidosis, Cogan syndrome, Kawasaki disease, ankylosing spondylitis, systemic lupus erythematosus and Wegener's granulomatosis. Recent reports have also established a link between inflammatory abdominal aortic aneurysm as well as lymphoplasmacytic thoracic aortitis with an overabundance of IgG4-producing plasma cells and the burgeoning constellation of 'Hyper-IgG4' syndromes. This review focuses on morphologic aspects of large-vessel vasculitis pathology associated with giant cell arteritis, Takayasu arteritis, idiopathic or isolated aortitis, lymphoplasmacytic thoracic and ascending aortitis, and the inflammatory aneurysm/retroperitoneal fibrosis syndrome. PMID:21586202

  15. "Cosmological Parameters from Large Scale Structure"

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    2005-01-01

    This grant has provided primary support for graduate student Mark Neyrinck, and some support for the PI and for colleague Nick Gnedin, who helped co-supervise Neyrinck. This award had two major goals. First, to continue to develop and apply methods for measuring galaxy power spectra on large, linear scales, with a view to constraining cosmological parameters. And second, to begin to try to understand galaxy clustering at smaller, nonlinear scales well enough to constrain cosmology from those scales also. Under this grant, the PI and collaborators, notably Max Tegmark, continued to improve their technology for measuring power spectra from galaxy surveys at large, linear scales, and to apply the technology to surveys as the data become available. We believe that our methods are the best in the world. These measurements become the foundation from which we and other groups measure cosmological parameters.

  16. Parallel-processing a large scientific problem

    SciTech Connect

    Hiromoto, R.

    1982-01-01

    The author discusses a parallel-processing experiment that uses a particle-in-cell (PIC) code to study the feasibility of doing large-scale scientific calculations on multiple-processor architectures. A multithread version of this Los Alamos PIC code was successfully implemented and timed on a Univac system 1100/80 computer. Use of a single copy of the instruction stream, and common memory to hold data, eliminated data transmission between processors. The multiple-processing algorithm exploits the PIC code's high degree of large, independent tasks, as well as the configuration of the Univac system 1100/80. Timing results for the multithread version of the PIC code using one, two, three, and four identical processors are given and are shown to have promising speedup times when compared to the overall run times measured for a single-thread version of the PIC code. 4 references.

  17. Parallel processing a large scientific problem

    SciTech Connect

    Hiromoto, R.

    1982-01-01

    A parallel-processing experiment is discussed that uses a particle-in-cell (PIC) code to study the feasibility of doing large-scale scientific calculations on multiple-processor architectures. A multithread version of this Los Alamos PIC code was successfully implemented and timed on a UNIVAC System 1100/80 computer. Use of a single copy of the instruction stream, and common memory to hold data, eliminated data transmission between processors. The multiple-processing algorithm exploits the PIC code's high degree of large, independent tasks, as well as the configuration of the UNIVAC System 1100/80. Timing results for the multithread version of the PIC code using one, two, three, and four identical processors are given and are shown to have promising speedup times when compared to the overall run times measured for a single-thread version of the PIC code.

  18. NASA/MSFC Large Stretch Press Study

    NASA Technical Reports Server (NTRS)

    Choate, M. W.; Nealson, W. P.; Jay, G. C.; Buss, W. D.

    1985-01-01

    The purpose of this study was to: A. assess and document the advantages/disadvantages of a government agency investment in a large stretch form press on the order of 5000 tons capacity (per jaw); B. develop a procurement specification for the press; and C. provide trade study data that will permit an optimum site location. Tasks were separated into four major elements: cost study, user survey, site selection, and press design/procurement specification.

  19. Large electron screening effect in different environments

    SciTech Connect

    Cvetinović, Aleksandra; Lipoglavšek, Matej; Markelj, Sabina; Vesić, Jelena

    2015-10-15

    The electron screening effect was studied in the ¹H(⁷Li,α)⁴He, ¹H(¹¹B,α)⁴He and ¹H(¹⁹F,αγ)¹⁶O reactions in inverse kinematics on different hydrogen-implanted targets. Results show large electron screening potentials strongly dependent on the proton number Z of the projectile.

  20. Large space structures control algorithm characterization

    NASA Technical Reports Server (NTRS)

    Fogel, E.

    1983-01-01

    Feedback control algorithms are developed for sensor/actuator pairs on large space systems. These algorithms have been sized in terms of (1) floating point operation (FLOP) demands; (2) storage for variables; and (3) input/output data flow. FLOP sizing (per control cycle) was done as a function of the number of control states and the number of sensor/actuator pairs. Storage for variables and I/O sizing was done for specific structure examples.

  1. Accuracy potentials for large space antenna structures

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1980-01-01

    The relationships among materials selection, truss design, and manufacturing techniques in the interest of surface accuracies for large space antennas are discussed. Among the antenna configurations considered are: tetrahedral truss, pretensioned truss, and geodesic dome and radial rib structures. Comparisons are made of the accuracy achievable by truss and dome structure types for a wide variety of diameters, focal lengths, and wavelength of radiated signal, taking into account such deforming influences as solar heating-caused thermal transients and thermal gradients.

  2. Large numbers hypothesis. I - Classical formalism

    NASA Technical Reports Server (NTRS)

    Adams, P. J.

    1982-01-01

    A self-consistent formulation of physics at the classical level embodying Dirac's large numbers hypothesis (LNH) is developed based on units covariance. A scalar 'field' phi(x) is introduced and some fundamental results are derived from the resultant equations. Some unusual properties of phi are noted such as the fact that phi cannot be the correspondence limit of a normal quantum scalar field.

  3. Ground state energy of large polaron systems

    SciTech Connect

    Benguria, Rafael D.; Frank, Rupert L.; Lieb, Elliott H.

    2015-02-15

    The last unsolved problem about the many-polaron system, in the Pekar–Tomasevich approximation, is the case of bosons with the electron-electron Coulomb repulsion of strength exactly 1 (the “neutral case”). We prove that the ground state energy, for large N, goes exactly as −N^{7/5}, and we give upper and lower bounds on the asymptotic coefficient that agree to within a factor of 2^{2/5}.

  4. Large icebergs characteristics from altimeter waveforms analysis

    NASA Astrophysics Data System (ADS)

    Tournadre, J.; Bouhier, N.; Girard-Ardhuin, F.; Rémy, F.

    2015-03-01

    Large uncertainties exist on the volume of ice transported by large Southern Ocean icebergs, a key parameter for climate studies, because of the paucity of information, especially on iceberg thickness. Using iceberg tracks from the National Ice Center (NIC) and Brigham Young University (BYU) databases to select altimeter data over icebergs, together with a method of analysis of altimeter waveforms, a database of 5366 iceberg freeboard elevations, lengths, and backscatter values covering the 2002-2012 period has been created. The database is analyzed in terms of distributions of freeboard, length, and backscatter, showing differences as a function of the iceberg's quadrant of origin. The database also allows analysis of the temporal evolution of icebergs and yields an estimated melt rate of 35-39 m·yr⁻¹ (neglecting firn compaction). The total daily volume of ice, estimated by combining the NIC and altimeter sizes and the altimeter freeboards, decreases regularly from 2.2 × 10⁴ km³ in 2002 to 0.9 × 10⁴ km³ in 2012. During this decade, the total loss of ice (~1800 km³·yr⁻¹) is twice as large as the input (~960 km³·yr⁻¹), showing that the system is out of equilibrium after a very large input of ice between 1997 and 2002. Breaking into small icebergs represents 80% (~1500 km³·yr⁻¹) of the total ice loss, while basal melting accounts for only 18% (~320 km³·yr⁻¹). Small icebergs are thus the major vector of freshwater input in the Southern Ocean.

  5. Pioneer Venus large probe neutral mass spectrometer

    NASA Technical Reports Server (NTRS)

    Hoffman, J.

    1982-01-01

    The deuterium-to-hydrogen abundance ratio in the Venus atmosphere was measured while the inlets to the Pioneer Venus large probe mass spectrometer were coated with sulfuric acid from Venus' clouds. The ratio is (1.6 ± 0.2) × 10⁻². It was found that the 100-fold enrichment of deuterium means that Venus outgassed at least 0.3% of a terrestrial ocean and possibly more.

  6. Large space antenna concepts for ESGP

    NASA Technical Reports Server (NTRS)

    Love, Allan W.

    1989-01-01

    It is appropriate to note that 1988 marks the 100th anniversary of the birth of the reflector antenna. It was in 1888 that Heinrich Hertz constructed the first one, a parabolic cylinder made of sheet zinc bent to shape and supported by a wooden frame. Hertz demonstrated the existence of the electromagnetic waves that had been predicted theoretically by James Clerk Maxwell some 22 years earlier. In the 100 years since Hertz's pioneering work the field of electromagnetics has grown explosively: one of the technologies is that of remote sensing of planet Earth by means of electromagnetic waves, using both passive and active sensors located on an Earth Science Geostationary Platform (ESGP). For these purposes some exquisitely sensitive instruments were developed, capable of reaching to the fringes of the known universe, and relying on large reflector antennas to collect the minute signals and direct them to appropriate receiving devices. These antennas are electrically large, with diameters of 3000 to 10,000 wavelengths and with gains approaching 80 to 90 dB. Some of the reflector antennas proposed for ESGP are also electrically large. For example, at 220 GHz a 4-meter reflector is nearly 3000 wavelengths in diameter, and is electrically quite comparable with a number of the millimeter wave radiotelescopes that are being built around the world. Its surface must meet stringent requirements on rms smoothness, and ability to resist deformation. Here, however, the environmental forces at work are different. There are no varying forces due to wind and gravity, but inertial forces due to mechanical scanning must be reckoned with. With this form of beam scanning, minimizing momentum transfer to the space platform is a problem that demands an answer. Finally, reflector surface distortion due to thermal gradients caused by the solar flux probably represents the most challenging problem to be solved if these Large Space Antennas are to achieve the gain and resolution required of

  7. Adjusting Surfaces Of Large Antenna Reflectors

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Adelman, Howard M.; Bailey, Marion C.; Haftka, Raphael T.

    1989-01-01

    New approach more effective than traditional rms-surface-distortion approach. Optimization procedure for control of shape of reflector of large space antenna (LSA). Main feature is shape-controlling mathematical mechanism driven by need to satisfy explicit EM design requirements. Uses standard finite-element structural analysis, aperture-integration EM analysis, and constrained optimization techniques to predict set of actuator inputs that improves performance of antenna while minimizing applied control effort. Procedure applicable to wide variety of LSA concepts.

  8. 75 FR 72611 - Assessments, Large Bank Pricing

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-24

    ...The FDIC proposes to revise the assessment system applicable to large insured depository institutions (IDIs or institutions) to better differentiate IDIs and take a more forward-looking view of risk; to better take into account the losses that the FDIC may incur if such an IDI fails; and to make technical and other changes to the rules governing the risk-based assessment system, including......

  9. Large-area thin-film modules

    NASA Technical Reports Server (NTRS)

    Tyan, Y. S.; Perez-Albuerne, E. A.

    1985-01-01

    The low cost potential of thin film solar cells can only be fully realized if large area modules can be made economically with good production yields. This paper deals with two of the critical challenges. A scheme is presented which allows the simple, economical realization of the long recognized, preferred module structure of monolithic integration. Another scheme reduces the impact of shorting defects and, as a result, increases the production yields. Analytical results demonstrating the utilization and advantages of such schemes are discussed.

  10. Large field inflation and gravitational entropy

    NASA Astrophysics Data System (ADS)

    Kaloper, Nemanja; Kleban, Matthew; Lawrence, Albion; Sloth, Martin S.

    2016-02-01

    Large field inflation can be sensitive to perturbative and nonperturbative quantum corrections that spoil slow roll. A large number N of light species in the theory, which occur in many string constructions, can amplify these problems. One might even worry that in a de Sitter background, light species will lead to a violation of the covariant entropy bound at large N. If so, requiring the validity of the covariant entropy bound could limit the number of light species and their couplings, which in turn could severely constrain axion-driven inflation. Here we show that there is no such problem when we correctly renormalize models with many light species, taking the physical Planck scale to be M_pl² ≳ N M_UV², where M_UV is the cutoff for the quantum field theory coupled to semiclassical quantum gravity. The number of light species then cancels out of the gravitational entropy of de Sitter or near-de Sitter backgrounds at leading order. Working in detail with N scalar fields in de Sitter space, renormalized to one loop order, we show that the gravitational entropy automatically obeys the covariant entropy bound. Furthermore, while the axion decay constant is a strong coupling scale for the axion dynamics, we show that it is not in general the cutoff of 4d semiclassical gravity. After renormalizing the two point function of the inflaton, we note that it is also controlled by scales much below the cutoff. We revisit N-flation and Kachru-Kallosh-Linde-Trivedi-type compactifications in this light, and show that they are perfectly consistent with the covariant entropy bound. Thus, while quantum gravity might yet spoil large field inflation, holographic considerations in the semiclassical theory do not obstruct it.

  11. Large scale properties of the Webgraph

    NASA Astrophysics Data System (ADS)

    Donato, D.; Laura, L.; Leonardi, S.; Millozzi, S.

    2004-03-01

    In this paper we present an experimental study of the properties of web graphs. We study a large crawl from 2001 of 200M pages and about 1.4 billion edges made available by the WebBase project at Stanford. We report our experimental findings on the topological properties of such graphs, such as the number of bipartite cores and the distribution of degree, PageRank values and strongly connected components.
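
    A minimal NetworkX sketch of how the quantities named above (degree distribution, PageRank values, strongly connected components) are typically computed; the toy edge list stands in for a web crawl and is an assumption for illustration.

        from collections import Counter

        import networkx as nx

        # Toy directed graph standing in for a crawl (nodes = pages, edges = hyperlinks).
        edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 3), (1, 4), (5, 2)]
        G = nx.DiGraph(edges)

        in_degree_dist = Counter(d for _, d in G.in_degree())       # in-degree distribution
        pagerank = nx.pagerank(G)                                   # PageRank values
        scc_sizes = sorted((len(c) for c in nx.strongly_connected_components(G)),
                           reverse=True)                            # SCC size distribution

        print(in_degree_dist, scc_sizes)
        print(max(pagerank, key=pagerank.get))                      # highest-PageRank page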

  12. Mechanical Properties Of Large Sodium Iodide Crystals

    NASA Technical Reports Server (NTRS)

    Lee, Henry M.

    1988-01-01

    Report presents data on mechanical properties of large crystals of thallium-doped sodium iodide. Five specimens in shape of circular flat plates subjected to mechanical tests. Presents test results for each specimen as plots of differential pressure versus center displacement and differential pressure versus stress at center. Also tabulates raw data. Test program also developed procedure for screening candidate crystals for gamma-ray sensor. Procedure eliminates potentially weak crystals before installation and ensures material yielding is kept to minimum.

  13. Big Data Challenges for Large Radio Arrays

    NASA Technical Reports Server (NTRS)

    Jones, Dayton L.; Wagstaff, Kiri; Thompson, David; D'Addario, Larry; Navarro, Robert; Mattmann, Chris; Majid, Walid; Lazio, Joseph; Preston, Robert; Rebbapragada, Umaa

    2012-01-01

    Future large radio astronomy arrays, particularly the Square Kilometre Array (SKA), will be able to generate data at rates far higher than can be analyzed or stored affordably with current practices. This is, by definition, a "big data" problem, and requires an end-to-end solution if future radio arrays are to reach their full scientific potential. Similar data processing, transport, storage, and management challenges face next-generation facilities in many other fields.

  14. Making Large Composite Vessels Without Autoclaves

    NASA Technical Reports Server (NTRS)

    Sigur, W. A.

    1989-01-01

    Method for making fiber-reinforced composite structure relies on heating and differential thermal expansion to provide temperature and pressure necessary to develop full strength, without having to place structure in large, expensive autoclave. Layers of differentially expanding material squeeze fiber-reinforced composite between them when heated. Method suitable for such cylindrical structures as pressure vessels and tanks. Used for both resin-matrix and metal-matrix composites.

  15. Climate projections for selected large marine ecosystems

    NASA Astrophysics Data System (ADS)

    Wang, Muyin; Overland, James E.; Bond, Nicholas A.

    2010-02-01

    In preparation for the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) modeling centers from around the world carried out sets of global climate simulations under various emission scenarios with a total of 23 coupled atmosphere-ocean general circulation models. We evaluated the models' 20th century hindcasts of selected variables relevant to several large marine ecosystems and examined 21st century projections by a subset of these models under the A1B (middle range) emission scenario. In general we find that a subset (about half) of the models are able to simulate large-scale aspects of the historical observations reasonably well, which provides some confidence in their application for projections of ocean conditions into the future. Over the North Pacific by the mid-21st century, the warming due to the trend in wintertime sea surface temperature (SST) will be 1°-1.5 °C, which is as large as the amplitude of the major mode of variability, the Pacific Decadal Oscillation (PDO). For areas northwest of the Hawaiian Islands, these models projected a steady increase of 1.2 °C in summer SST over the period from 2000 to 2050. For the Bering and Barents seas, a subset of models selected on the basis of their ability to simulate sea-ice area in late 20th century yield an average decrease in sea-ice coverage of 43% and 36%, respectively, by the decade centered on 2050 with a reasonable degree of consistency. On the other hand, model simulations of coastal upwelling for the California, Canary and Humboldt Currents, and of bottom temperatures in the Barents Sea, feature a relatively large degree of uncertainty. These results illustrate that 21st century projections for marine ecosystems in certain regions using present-generation climate models require additional analysis.

  16. Applications of skin grafting in large animals.

    PubMed

    Wilson, D G

    1990-09-01

    Injuries involving full-thickness skin wounds are common in large animals. Skin grafting can shorten the healing time and improve the cosmetic result. Techniques that have been used successfully in the management of full-thickness skin wounds include full-thickness skin grafts, split-thickness skin grafts, tunnel grafts, pinch/punch grafts, and immediate split-thickness skin grafts. The technical aspects of each of these procedures are detailed and representative cases are presented. PMID:2134606

  17. Quality Function Deployment for Large Systems

    NASA Technical Reports Server (NTRS)

    Dean, Edwin B.

    1992-01-01

    Quality Function Deployment (QFD) is typically applied to small subsystems. This paper describes efforts to extend QFD to large scale systems. It links QFD to the system engineering process, the concurrent engineering process, the robust design process, and the costing process. The effect is to generate a tightly linked project management process of high dimensionality which flushes out issues early to provide a high quality, low cost, and, hence, competitive product. A pre-QFD matrix linking customers to customer desires is described.

  18. Supporting large-scale computational science

    SciTech Connect

    Musick, R

    1998-10-01

    A study has been carried out to determine the feasibility of using commercial database management systems (DBMSs) to support large-scale computational science. Conventional wisdom in the past has been that DBMSs are too slow for such data. Several events over the past few years have muddied the clarity of this mindset: (1) several commercial DBMS systems have demonstrated storage and ad-hoc query access to Terabyte data sets; (2) several large-scale science teams, such as EOSDIS [NAS91], high energy physics [MM97] and human genome [Kin93], have adopted (or make frequent use of) commercial DBMS systems as the central part of their data management scheme; and (3) several major DBMS vendors have introduced their first object-relational products (ORDBMSs), which have the potential to support large, array-oriented data. In some cases, performance is a moot issue. This is true in particular if the performance of legacy applications is not reduced while new, albeit slow, capabilities are added to the system. The basic assessment is still that DBMSs do not scale to large computational data. However, many of the reasons have changed, and there is an expiration date attached to that prognosis. This document expands on this conclusion, identifies the advantages and disadvantages of various commercial approaches, and describes the studies carried out in exploring this area. The document is meant to be brief, technical and informative, rather than a motivational pitch. The conclusions within are very likely to become outdated within the next 5-7 years, as market forces will have a significant impact on the state of the art in scientific data management over the next decade.

  19. Large proximal ureteral stones: Ideal treatment modality?

    PubMed Central

    Kadyan, B.; Sabale, V.; Mane, D.; Satav, V.; Mulay, A.; Thakur, N.; Kankalia, S. P.

    2016-01-01

    Background and Purpose: The ideal treatment modality for patients with a large impacted proximal ureteral stone remains controversial. We compared laparoscopic transperitoneal ureterolithotomy (Lap-TPUL) and semirigid ureteroscopy for large proximal ureteric stones to evaluate their efficacy and safety. Patients and Methods: From November 2012 to December 2014, we enrolled 122 patients with a large (≥1.5 cm) proximal ureteral stone in the study. Patients were randomly divided into two groups: Group A (60 patients), retrograde ureteroscopic lithotripsy using a semirigid ureteroscope; Group B (62 patients), transperitoneal laparoscopic ureterolithotomy (Lap-TPUL). Results: The overall stone-free rate was 71.6% and 93.5% for Group A and Group B, respectively (P = 0.008). The auxiliary procedure rate was higher in Group A than in Group B (27.3% vs. 5.6%). The complication rate was 11.2% in Group B versus 25% in Group A. Mean procedure time was higher in the laparoscopy group than in the ureterorenoscopy (URS) group (84.07 ± 16.80 vs. 62.82 ± 12.71 min). Hospital stay was 4.16 ± 0.67 days in the laparoscopy group and 1.18 ± 0.38 days in the URS group (P < 0.0001). Conclusion: Laparoscopic transperitoneal ureterolithotomy is a minimally invasive, safe and effective treatment modality and should be recommended for all patients with impacted large proximal stones that are not amenable to URS or extracorporeal shock-wave lithotripsy, or as a primary modality of choice, especially if the patient is otherwise a candidate for open surgery. PMID:27141190

  20. Large Aperture, Scanning, L-Band SAR

    NASA Technical Reports Server (NTRS)

    Moussessian, Alina; Del Castillo, Linda; Bach, Vinh; Grando, Maurio; Quijano, Ubaldo; Smith, Phil; Zawadzki, Mark

    2011-01-01

    We have developed the first L-band membrane-based active phased array. The antenna is a 16×16-element patch array with dimensions of 2.3 m × 2.6 m. The array uses membrane-compatible Transmit/Receive (T/R) modules for electronic beam steering. We will discuss the antenna design, the fabrication of this large array, the T/R module development, the signal distribution approach and the measured results of the array.

  1. The negative relief of large river floodplains

    NASA Astrophysics Data System (ADS)

    Lewin, John; Ashworth, Philip J.

    2014-02-01

    Large floodplains have multiple and complex negative relief assemblages in which depressions fall below local or general floodplain surfaces at a variety of scales. The generation and dynamics of negative relief along major alluvial corridors are described and compared. Such depressions are significant for the storage and passage of surface waters, the creation of a range of riparian, wetland, lacustrine and flowing-water habitats, and the long-term accumulation of organic materials.

  2. Large space antenna concepts for ESGP

    NASA Astrophysics Data System (ADS)

    Love, Allan W.

    1989-07-01

    It is appropriate to note that 1988 marks the 100th anniversary of the birth of the reflector antenna. It was in 1888 that Heinrich Hertz constructed the first one, a parabolic cylinder made of sheet zinc bent to shape and supported by a wooden frame. Hertz demonstrated the existence of the electromagnetic waves that had been predicted theoretically by James Clerk Maxwell some 22 years earlier. In the 100 years since Hertz's pioneering work the field of electromagnetics has grown explosively: one of the technologies is that of remote sensing of planet Earth by means of electromagnetic waves, using both passive and active sensors located on an Earth Science Geostationary Platform (ESGP). For these purposes some exquisitely sensitive instruments were developed, capable of reaching to the fringes of the known universe, and relying on large reflector antennas to collect the minute signals and direct them to appropriate receiving devices. These antennas are electrically large, with diameters of 3000 to 10,000 wavelengths and with gains approaching 80 to 90 dB. Some of the reflector antennas proposed for ESGP are also electrically large. For example, at 220 GHz a 4-meter reflector is nearly 3000 wavelengths in diameter, and is electrically quite comparable with a number of the millimeter wave radiotelescopes that are being built around the world. Its surface must meet stringent requirements on rms smoothness, and ability to resist deformation. Here, however, the environmental forces at work are different. There are no varying forces due to wind and gravity, but inertial forces due to mechanical scanning must be reckoned with. With this form of beam scanning, minimizing momentum transfer to the space platform is a problem that demands an answer. Finally, reflector surface distortion due to thermal gradients caused by the solar flux probably represents the most challenging problem to be solved if these Large Space Antennas are to achieve the gain and resolution required of

  3. Emerging large-screen display technology

    NASA Astrophysics Data System (ADS)

    Blaha, Richard J.

    1992-11-01

    Large-screen display technology is undergoing significant changes because of huge investments being expended to meet the potential high-definition television (HDTV) market. The expected result of this investment is display devices having improved quality and larger areas, which can be immediately used in military command and control operations. This report tracks recent display developments and their potential capabilities for command and control applications.

  4. Large-Angle Anomalies in the CMB

    DOE PAGES Beta

    Copi, Craig J.; Huterer, Dragan; Schwarz, Dominik J.; Starkman, Glenn D.

    2010-01-01

    We review the recently found large-scale anomalies in the maps of temperature anisotropies in the cosmic microwave background. These include alignments of the largest modes of CMB anisotropy with each other and with the geometry and direction of motion of the solar system, and the unusually low power at these largest scales. We discuss these findings in relation to expectations from standard inflationary cosmology, their statistical significance, the tools to study them, and the various attempts to explain them.

  5. Sensor filter designs for large flexible spacecraft

    NASA Technical Reports Server (NTRS)

    Nimmo, Nancy A.

    1992-01-01

    Traditional control methods may excite flexible modes, causing degraded performance or instability of large flexible space structures (LFSS). Control techniques developed for LFSS require a numerical model of the structure and some knowledge of model error. This will be increasingly difficult with the complex space structures planned for the future. If filters could be used to condition the sensor output, control design would be less demanding. This paper is presented in viewgraph format.

  6. Large rivers of the United States

    USGS Publications Warehouse

    Iseri, Kathleen T.; Langbein, Walter Basil

    1974-01-01

    Information on the flow of the 28 largest rivers in the United States is presented for the base periods 1931-60 and 1941-70. Drainage area, stream length, source, and mouth are included. Table 1 shows the average discharge at downstream gaging stations. Table 2 lists large rivers in order of average discharge at the mouth, based on the period 1941-70.

  7. Very large full configuration interaction calculations

    NASA Astrophysics Data System (ADS)

    Knowles, Peter J.

    1989-03-01

    The extreme sparsity of the solution of the full configuration interaction (full CI) secular equations is exploited in a new algorithm. For very large problems, the high speed memory, disk storage, and CPU requirements are reduced considerably, compared to previous techniques. This allows the possibility of full CI calculations with more than 10^8 Slater determinants. The power of the method is demonstrated in preliminary full CI calculations for the NH molecule, including up to 27,901,690 determinants.

  8. Radio signals from very large showers

    NASA Technical Reports Server (NTRS)

    Suga, K.; Kakimoto, F.; Nishi, K.

    1985-01-01

    Radio signals from air showers with electron sizes in the range 1 × 10⁷ to 2 × 10⁹ were detected at 50 kHz, 170 kHz, and 1,647 kHz at large core distances in the Akeno square-kilometer air-shower array. The field strength is higher than that expected from any mechanisms hitherto proposed.

  9. Introduction to comparing large sequence sets.

    PubMed

    Page, Roderic D M

    2003-02-01

    Comparisons of whole genomes can yield important insights into the evolution of genome structure, such as the role of inversions in bacterial evolution and the identification of large-scale duplications in the human genome. This unit briefly compares two tools for aligning whole genome sequences: MUMmer and PipMaker. These tools differ in both the underlying algorithms used, and in the interface they present to the user. PMID:18428691

  10. Optical design of the COSMO large coronagraph

    NASA Astrophysics Data System (ADS)

    Gallagher, Dennis; Tomczyk, Steven; Zhang, Haiying; Nelson, Peter G.

    2012-09-01

    The Coronal Solar Magnetism Observatory (COSMO) is a facility dedicated to measuring magnetic fields in the corona and chromosphere of the Sun. It will be located on a mountaintop in the Hawaiian Islands and will replace the current Mauna Loa Solar Observatory (MLSO). COSMO will employ a suite of instruments to determine the magnetic field and plasma conditions in the solar atmosphere and will enhance the value of data collected by other observatories on the ground (SOLIS, ATST, FASR) and in space (SDO, Hinode, SOHO, GOES, STEREO, DSCOVR, Solar Probe+, Solar Orbiter). The dynamics and energy flow in the corona are dominated by magnetic fields. To understand the formation of Coronal Mass Ejections (CMEs), their relation to other forms of solar activity, and their progression out into the solar wind requires measurements of coronal magnetic fields. The COSMO suite includes the Large Coronagraph (LC), the Chromosphere and Prominence Magnetometer (ChroMag) and the K-Coronagraph. The Large Coronagraph will employ a 1.5 meter fused silica singlet lens and birefringent filters to measure magnetic fields out to two solar radii. It will observe over a wide range of wavelengths from 500 to 1100 nm providing the capability of observing a number of coronal, chromospheric, and photospheric emission lines. Of particular importance to measuring coronal magnetic fields are the forbidden emission lines of Fe XIII at 1074.7 nm and 1079.8 nm. These lines are faint and require the very large aperture. NCAR and NSF have provided funding to bring the COSMO Large Coronagraph to a preliminary design review (PDR) state by the end of 2013.

  11. Large Penile Mass With Unusual Benign Histopathology.

    PubMed

    Johnson, Nate; Voznesensky, Maria; VerLee, Graham

    2015-09-01

    Pseudoepitheliomatous hyperplasia is an extremely rare condition presenting as a lesion on the glans penis in older men. Physical exam without biopsy cannot differentiate malignant from nonmalignant growth. We report a case of large penile mass in an elderly male with a history of lichen sclerosis, highly suspicious for malignancy. Subsequent surgical removal and biopsy demonstrated pseudoepitheliomatous hyperplasia, an unusual benign histopathologic diagnosis with unclear prognosis. We review the literature and discuss options for treatment and surveillance. PMID:26793536

  12. Large capacity cryopropellant orbital storage facility

    NASA Technical Reports Server (NTRS)

    Schuster, J. R.

    1987-01-01

    A comprehensive study was performed to develop the major features of a large capacity orbital propellant storage facility for the space-based cryogenic orbital transfer vehicle. Projected propellant usage and delivery schedules can be accommodated by two orbital tank sets of 100,000 lb storage capacity, with advanced missions expected to require increased capacity. Information is given on tank pressurization schemes, propellant transfer configurations, pump specifications, the refrigeration system, and flight tests.

  13. Large-Area Vacuum Ultraviolet Sensors

    NASA Technical Reports Server (NTRS)

    Aslam, Shahid; Franz, David

    2012-01-01

    Pt/(n-doped GaN) Schottky-barrier diodes having active areas as large as 1 cm square have been designed and fabricated as prototypes of photodetectors for the vacuum ultraviolet portion (wavelengths approximately equal to 200 nm) of the solar spectrum. In addition to having adequate sensitivity to photons in this wavelength range, these photodetectors are required to be insensitive to visible and infrared components of sunlight and to have relatively low levels of dark current.

  14. Mechanical structure of the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    del Vecchio, Ciro; Davison, Warren B.; Gallieni, Walter W.; Rigato, Gianfranco; Miglietta, Luciano

    1997-03-01

    We present the final design of the alt/az structure of the large binocular telescope. As a final report of the structural performances of the telescope, this paper describes how the azimuth platform and the primary mirror cells have been modeled. Furthermore, a definition of the simulation of the various structural interfaces is given. Finally, the static and dynamic responses at various zenith angles are reported.

  15. The Magnitude and Energy of Large Earthquakes

    NASA Astrophysics Data System (ADS)

    Purcaru, G.

    2003-12-01

    Several magnitudes were introduced to quantify large earthquakes better and more comprehensively than Ms: Mw (moment magnitude; Kanamori, 1977), ME (strain energy magnitude; Purcaru and Berckhemer, 1978), Mt (tsunami magnitude; Abe, 1979), Mm (mantle magnitude; Okal and Talandier, 1985), and Me (seismic energy magnitude; Choy and Boatwright, 1995). Although these magnitudes are still subject to different uncertainties, various kinds of earthquakes can now be better understood in terms of combinations of them. They can also be viewed as mappings of basic source parameters: seismic moment, strain energy, seismic energy, stress drop, under certain assumptions or constraints. We studied a set of about 90 large earthquakes (shallow and deeper) that occurred in different tectonic regimes, with more reliable source parameters, and compared them in terms of the above magnitudes. We found large differences between the strain energy (mapped to ME) and seismic energy (mapped to Me), and between ME of events with about the same Mw. This confirms that no 1-to-1 correspondence exists between these magnitudes (Purcaru, 2002). One major cause of differences for "normal" earthquakes is the level of the stress drop over asperities, which release and partition the strain energy. We quantify the energetic balance of earthquakes in terms of the strain energy E_st and its components (fracture energy E_g, friction energy E_f and seismic energy E_s) using an extended Hamilton's principle. The earthquakes are thrust-interplate, strike-slip, shallow in-slab, slow/tsunami, deep and continental. The (scaled) strain energy equation we derived is E_st/M_0 = (1 + e_{g,s}) (E_s/M_0), with e_{g,s} = E_g/E_s, assuming complete stress drop, using the (static) stress drop variability, and noting that E_st and E_s are not in a 1-to-1 correspondence. With all uncertainties, our analysis reveals, for a given seismic moment, a large variation of earthquakes in terms of energies, even in the same seismic region. In view of these, for further understanding
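
    Spelling out the bookkeeping behind the quoted relation (a reconstruction for the reader; the frictional term is treated as negligible here, which is what the quoted form implies):

        \[
          E_{st} = E_g + E_f + E_s \;\approx\; E_g + E_s
          \quad\Longrightarrow\quad
          \frac{E_{st}}{M_0} \;=\; \bigl(1 + e_{g,s}\bigr)\,\frac{E_s}{M_0},
          \qquad e_{g,s} = \frac{E_g}{E_s}.
        \]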

  16. Large animal hepatotoxic and nephrotoxic plants.

    PubMed

    Oladosu, L A; Case, A A

    1979-10-01

    The hepatotoxic and nephrotoxic plants of large domestic animals have been reviewed. The most important ones are those widely distributed as weeds over pastures, neglected forests and grasslands, those used as ornamentals, the nitrate-concentrating forage crops, and the cyanophoric plants. Crotalaria spp., the ragwort (Senecio jacobaea), the Lantana spp. and Heliotropium are common hepatotoxic plants. Amaranthus retroflexus, Datura stramonium, Solanum rostratum, and the castor oil plant (Ricinus communis) are nephrotoxic plants. PMID:516370

  17. Science Diplomacy in Large International Collaborations

    NASA Astrophysics Data System (ADS)

    Barish, Barry C.

    2011-04-01

    What opportunities and challenges does the rapidly growing internationalization of science, especially large scale science and technology projects, present for US science policy? On one hand, the interchange of scientists, the sharing of technology and facilities and the working together on common scientific goals promotes better understanding and better science. On the other hand, challenges are presented, because the science cannot be divorced from government policies, and solutions must be found for issues varying from visas to making reliable international commitments.

  18. Large data analysis of different sensory modalities

    NASA Astrophysics Data System (ADS)

    Hsu, Ming-Kai; Szu, Harold

    2014-05-01

    We examine historical remote digital video and in situ analog acoustic data analyses from the standpoint of modern Large Data Analysis. We discuss a potential path of automation from a traditional search engine to a modern one. We also examine the mathematical theory underlying the approach, in which nonlinear dimensional analysis assumes a locally flat space where a linear eigenvalue decomposition provides the independent components; the components are then extrapolated back to the original nonlinear space, under the assumption that the locally flat reduction remains meaningful over the global nonlinear domain.

  19. Antenna pointing of large flexible telecommunications spacecraft

    NASA Technical Reports Server (NTRS)

    Govin, B.; Bousquet, A.

    1985-01-01

    Attitude control problems for large flexible telecommunications spacecraft were investigated. A typical S/C configuration is described and modeled by modal data derived from a finite element analysis. The effect of structural flexibility on the radio-frequency sensor is analyzed. Model reduction using modal gain considerations is applied. Two control concepts are investigated: separate central body and antenna pointing control using direct feedback laws, and centralized control using a modal observer and optimal control. The performance of each concept and the algorithm implementation are assessed.

  20. Fabrication of Large YBCO Superconducting Disks

    NASA Technical Reports Server (NTRS)

    Koczor, Ronald J.; Noever, David A.; Robertson, Glen A.

    1999-01-01

    We have undertaken fabrication of large bulk items to develop a repeatable process and to provide test articles in laboratory experiments investigating reported coupling of electromagnetic fields with the local gravity field in the presence of rotating superconducting disks. A successful process was developed which resulted in fabrication of 30 cm diameter annular disks. The disks were fabricated of the superconductor YBa2Cu3O(7-x). Various material parameters of the disks were measured.