Large-scale structure of randomly jammed spheres
NASA Astrophysics Data System (ADS)
Ikeda, Atsushi; Berthier, Ludovic; Parisi, Giorgio
2017-05-01
We numerically analyze the density field of three-dimensional randomly jammed packings of monodisperse soft frictionless spherical particles, paying special attention to fluctuations occurring at large length scales. We study in detail the two-point static structure factor at low wave vectors in Fourier space. We also analyze the nature of the density field in real space by studying the large-distance behavior of the two-point pair correlation function, of density fluctuations in subsystems of increasing sizes, and of the direct correlation function. We show that such real space analysis can be greatly improved by introducing a coarse-grained density field to disentangle genuine large-scale correlations from purely local effects. Our results confirm that both Fourier and real space signatures of vanishing density fluctuations at large scale are absent, indicating that randomly jammed packings are not hyperuniform. In addition, we establish that the pair correlation function displays a surprisingly complex structure at large distances, which is however not compatible with the long-range negative correlation of hyperuniform systems but fully compatible with an analytic form for the structure factor. This implies that the direct correlation function is short ranged, as we also demonstrate directly. Our results reveal that density fluctuations in jammed packings do not follow the behavior expected for random hyperuniform materials, but display instead a more complex behavior.
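For orientation, the hyperuniformity criterion at issue here can be stated compactly (a standard definition, in our notation rather than the authors'):
\[ S(\mathbf{k}) = \frac{1}{N}\left\langle \Big|\sum_{j=1}^{N} e^{-i\mathbf{k}\cdot\mathbf{r}_j}\Big|^{2} \right\rangle, \qquad \text{hyperuniform} \iff \lim_{k\to 0} S(k) = 0, \]
or, equivalently, the number variance in a window of radius \(R\) grows like the surface area, \(\sigma_N^2(R)\sim R^{d-1}\), rather than the volume \(R^d\). The finding above is that \(S(k)\) stays finite as \(k\to 0\), so neither signature is present.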
Modelling the large-scale redshift-space 3-point correlation function of galaxies
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2017-08-01
We present a configuration-space model of the large-scale galaxy 3-point correlation function (3PCF) based on leading-order perturbation theory and including redshift-space distortions (RSD). This model should be useful in extracting distance-scale information from the 3PCF via the baryon acoustic oscillation method. We include the first redshift-space treatment of biasing by the baryon-dark matter relative velocity. Overall, on large scales the effect of RSD is primarily a renormalization of the 3PCF that is roughly independent of both physical scale and triangle opening angle; for our adopted Ωm and bias values, the rescaling is a factor of ˜1.8. We also present an efficient scheme for computing 3PCF predictions from our model, important for allowing fast exploration of the space of cosmological parameters in future analyses.
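As a schematic reminder (our notation, not the authors'), the 3PCF correlates the density contrast at three points and is conveniently expanded in Legendre polynomials of the triangle opening angle:
\[ \zeta(r_1, r_2, \hat{r}_1\cdot\hat{r}_2) = \left\langle \delta(\mathbf{x})\,\delta(\mathbf{x}+\mathbf{r}_1)\,\delta(\mathbf{x}+\mathbf{r}_2) \right\rangle = \sum_{\ell} \zeta_\ell(r_1, r_2)\, P_\ell(\hat{r}_1\cdot\hat{r}_2), \]
so an RSD effect that is roughly independent of scale and opening angle amounts to an overall rescaling of the coefficients \(\zeta_\ell\).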
Lagrangian space consistency relation for large scale structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Horn, Bart; Hui, Lam; Xiao, Xiao
2015-09-29
Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. Furthermore, the simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.
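Schematically (our notation, not the authors'), the Lagrangian-space statement is that the connected correlation of a long-wavelength mode with N short-wavelength observables, normalized by the long-mode power spectrum, carries no pole as the long mode is squeezed:
\[ \lim_{q\to 0}\; \frac{1}{P_L(q)} \left\langle \delta_L(\mathbf{q})\, \mathcal{O}(\mathbf{k}_1,\eta_1)\cdots\mathcal{O}(\mathbf{k}_N,\eta_N) \right\rangle_c = 0, \]
whether or not the times \(\eta_a\) coincide.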
Kurashige, Yuki; Yanai, Takeshi
2011-09-07
We present a second-order perturbation theory based on a density matrix renormalization group self-consistent field (DMRG-SCF) reference function. The method reproduces the solution of complete active space second-order perturbation theory (CASPT2) when the DMRG reference function is represented by a sufficiently large number of renormalized many-body basis states, and is therefore named the DMRG-CASPT2 method. The DMRG-SCF is able to describe non-dynamical correlation with large active spaces that are intractable for the conventional CASSCF method, while the second-order perturbation theory provides an efficient description of dynamical correlation effects. The capability of our implementation is demonstrated for an application to the potential energy curve of the chromium dimer, which is one of the most demanding multireference systems, requiring the best electronic structure treatment of non-dynamical and dynamical correlation as well as large basis sets. The DMRG-CASPT2/cc-pwCV5Z calculations were performed with a large (3d double-shell) active space consisting of 28 orbitals. Our approach using a large DMRG reference addressed the problems of why the dissociation energy is largely overestimated by CASPT2 with the small 12-orbital (3d4s) active space and why it is oversensitive to the choice of the zeroth-order Hamiltonian.
Construction of CASCI-type wave functions for very large active spaces.
Boguslawski, Katharina; Marti, Konrad H; Reiher, Markus
2011-06-14
We present a procedure to construct a configuration-interaction expansion containing arbitrary excitations from an underlying full-configuration-interaction-type wave function defined for a very large active space. Our procedure is based on the density-matrix renormalization group (DMRG) algorithm that provides the necessary information in terms of the eigenstates of the reduced density matrices to calculate the coefficient of any basis state in the many-particle Hilbert space. Since the dimension of the Hilbert space scales binomially with the size of the active space, a sophisticated Monte Carlo sampling routine is employed. This sampling algorithm can also construct such configuration-interaction-type wave functions from any other type of tensor network states. The configuration-interaction information obtained serves several purposes. It yields a qualitatively correct description of the molecule's electronic structure, it allows us to analyze DMRG wave functions converged for the same molecular system but with different parameter sets (e.g., different numbers of active-system (block) states), and it can be considered a balanced reference for the application of a subsequent standard multi-reference configuration-interaction method.
Prime focus architectures for large space telescopes: reduce surfaces to save cost
NASA Astrophysics Data System (ADS)
Breckinridge, J. B.; Lillie, C. F.
2016-07-01
Conceptual architectures are now being developed to identify future directions for post-JWST large space telescope systems operating in the UV, optical, and near-IR regions of the spectrum. Here we show that the cost of optical surfaces within large-aperture telescope/instrument systems can exceed $100M per reflection when expressed in terms of the aperture increase needed to overcome internal absorption loss. We recommend a program in innovative optical design to minimize the number of surfaces by considering multiple functions for mirrors. An example is given using the Rowland circle imaging spectrometer systems for UV space science. With few exceptions, current space telescope architectures are based on systems optimized for ground-based astronomy. Both HST and JWST are classical "Cassegrain" telescopes derived from the ground-based tradition to co-locate the massive primary mirror and the instruments at the same end of the metrology structure. This requirement derives from the dual need to minimize observatory dome size and cost in the presence of the Earth's 1-g gravitational field. Space telescopes, however, function in the zero gravity of space, and the 1-g constraint is relieved to the advantage of astronomers. Here we suggest that a prime focus large-aperture telescope system in space may potentially have higher transmittance, better pointing, improved thermal and structural control, less internal polarization and broader wavelength coverage than Cassegrain telescopes. An example is given showing how UV astronomy telescopes use single optical elements for multiple functions and therefore have a minimum number of reflections.
NASA Technical Reports Server (NTRS)
1979-01-01
Construction of large systems in space is a technology requiring the development of construction methods to deploy, assemble, and fabricate the elements comprising such systems. A construction method comprises all essential functions and operations and related support equipment necessary to accomplish a specific construction task in a particular way. The objective of the data base is to provide to the designers of large space systems a compendium of the various space construction methods which could have application to their projects.
High-Dimensional Function Approximation With Neural Networks for Large Volumes of Data.
Andras, Peter
2018-02-01
Approximation of high-dimensional functions is a challenge for neural networks due to the curse of dimensionality. Often the data for which the approximated function is defined resides on a low-dimensional manifold, and in principle the approximation of the function over this manifold should improve the approximation performance. It has been shown that projecting the data manifold into a lower dimensional space, followed by the neural network approximation of the function over this space, provides a more precise approximation of the function than the approximation of the function with neural networks in the original data space. However, if the data volume is very large, the projection into the low-dimensional space has to be based on a limited sample of the data. Here, we investigate the nature of the approximation error of neural networks trained over the projection space. We show that such neural networks should have better approximation performance than neural networks trained on high-dimensional data, even if the projection is based on a relatively sparse sample of the data manifold. We also find that it is preferable to use a uniformly distributed sparse sample of the data for the purpose of generating the low-dimensional projection. We illustrate these results considering the practical neural network approximation of a set of functions defined on high-dimensional data, including real world data as well.
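A minimal sketch of the two-stage scheme described above (hypothetical data, layer sizes, and sample size; scikit-learn is used only for convenience):

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 200))           # high-dimensional inputs
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2       # target really depends on few directions

    # Fit the projection on a sparse, uniformly drawn subsample only,
    # mimicking the "limited sample" situation discussed above.
    sample = rng.choice(len(X), size=2_000, replace=False)
    proj = PCA(n_components=10).fit(X[sample])

    # Train the approximator in the projected (low-dimensional) space.
    Z = proj.transform(X)
    net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=200).fit(Z, y)
    print("training R^2 in projected space:", net.score(Z, y))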
Concept for a power system controller for large space electrical power systems
NASA Technical Reports Server (NTRS)
Lollar, L. F.; Lanier, J. R., Jr.; Graves, J. R.
1981-01-01
The development of technology for a fail-operational power system controller (PSC) utilizing microprocessor technology for managing the distribution and power processor subsystems of a large multi-kW space electrical power system is discussed. The specific functions which must be performed by the PSC, the best microprocessor available to do the job, and the feasibility, cost savings, and applications of a PSC were determined. A limited-function breadboard version of a PSC was developed to demonstrate the concept and potential cost savings.
Aperture synthesis for microwave radiometers in space
NASA Technical Reports Server (NTRS)
Levine, D. M.; Good, J. C.
1983-01-01
A technique is described for obtaining passive microwave measurements from space with high spatial resolution for remote sensing applications. The technique involves measuring the product of the signal from pairs of antennas at many different antenna spacings, thereby mapping the correlation function of antenna voltage. The intensity of radiation at the source can be obtained from the Fourier transform of this correlation function. Theory is presented to show how the technique can be applied to large extended sources such as the Earth when observed from space. Details are presented for a system with uniformly spaced measurements.
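A toy one-dimensional illustration of the principle (illustrative numbers only, not the proposed flight design): the antenna-pair correlations sample the Fourier transform (visibility) of the source brightness, and an inverse transform of the measured baselines recovers the image, with resolution set by the longest baseline retained.

    import numpy as np

    # Toy 1-D brightness distribution across an extended source
    n = 256
    x = np.linspace(-1.0, 1.0, n)
    brightness = np.exp(-((x - 0.2) / 0.1) ** 2) + 0.5 * np.exp(-((x + 0.4) / 0.05) ** 2)

    # Each antenna-pair correlation samples one Fourier component (visibility);
    # uniformly spaced baselines sample a uniform spatial-frequency grid.
    visibility = np.fft.fft(brightness)

    # A finite maximum antenna separation means only low spatial frequencies are measured.
    measured = np.zeros_like(visibility)
    n_baselines = 32
    measured[:n_baselines] = visibility[:n_baselines]
    measured[-n_baselines:] = visibility[-n_baselines:]

    # The image is recovered by inverse Fourier transforming the sampled visibilities.
    image = np.fft.ifft(measured).real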
Exploring the Function Space of Deep-Learning Machines
NASA Astrophysics Data System (ADS)
Li, Bo; Saad, David
2018-06-01
The function space of deep-learning machines is investigated by studying growth in the entropy of functions of a given error with respect to a reference function, realized by a deep-learning machine. Using physics-inspired methods we study both sparsely and densely connected architectures to discover a layerwise convergence of candidate functions, marked by a corresponding reduction in entropy when approaching the reference function, gain insight into the importance of having a large number of layers, and observe phase transitions as the error increases.
Ghosh, Soumen; Cramer, Christopher J; Truhlar, Donald G; Gagliardi, Laura
2017-04-01
Predicting ground- and excited-state properties of open-shell organic molecules by electronic structure theory can be challenging because an accurate treatment has to correctly describe both static and dynamic electron correlation. Strongly correlated systems, i.e., systems with near-degeneracy correlation effects, are particularly troublesome. Multiconfigurational wave function methods based on an active space are adequate in principle, but it is impractical to capture most of the dynamic correlation in these methods for systems characterized by many active electrons. We recently developed a new method called multiconfiguration pair-density functional theory (MC-PDFT), which combines the advantages of wave function theory and density functional theory to provide a more practical treatment of strongly correlated systems. Here we present calculations of the singlet-triplet gaps in oligoacenes ranging from naphthalene to dodecacene. Calculations were performed for unprecedentedly large orbitally optimized active spaces of 50 electrons in 50 orbitals, and we test a range of active spaces and active space partitions, including four kinds of frontier orbital partitions. We show that MC-PDFT can predict the singlet-triplet splittings for oligoacenes consistent with the best available and much more expensive methods, and indeed MC-PDFT may constitute the benchmark against which those other models should be compared, given the absence of experimental data.
Technology Challenges and Opportunities for Very Large In-Space Structural Systems
NASA Technical Reports Server (NTRS)
Belvin, W. Keith; Dorsey, John T.; Watson, Judith J.
2009-01-01
Space solar power satellites and other large space systems will require creative and innovative concepts in order to achieve economically viable designs. The mass and volume constraints of current and planned launch vehicles necessitate highly efficient structural systems be developed. In addition, modularity and in-space deployment/construction will be enabling design attributes. While current space systems allocate nearly 20 percent of the mass to the primary structure, the very large space systems of the future must overcome subsystem mass allocations by achieving a level of functional integration not yet realized. A proposed building block approach with two phases is presented to achieve near-term solar power satellite risk reduction with accompanying long-term technology advances. This paper reviews the current challenges of launching and building very large space systems from a structures and materials perspective utilizing recent experience. Promising technology advances anticipated in the coming decades in modularity, material systems, structural concepts, and in-space operations are presented. It is shown that, together, the current challenges and future advances in very large in-space structural systems may provide the technology pull/push necessary to make solar power satellite systems more technically and economically feasible.
Optimal estimation of large structure model errors. [in Space Shuttle controller design
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1979-01-01
In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.
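In schematic form (our notation, not the paper's): if the measured residual \(y\) depends linearly on the model error \(e\) through an operator \(H\), the estimate minimizes the quadratic functional and the true error splits into an estimable part and an unobservable remainder,
\[ J(e) = \lVert y - He \rVert^2, \qquad \hat{e} = (H^{*}H)^{-1}H^{*}y, \qquad e = e_{\parallel} + e_{\perp}, \quad He_{\perp} = 0, \quad \langle e_{\parallel}, e_{\perp}\rangle = 0 \]
(assuming \(H^{*}H\) is invertible on the estimable subspace), so the estimate \(\hat{e}\) can at best recover \(e_{\parallel}\), the component of the true error visible to the data.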
Large aperture diffractive space telescope
Hyde, Roderick A.
2001-01-01
A large (tens of meters) aperture space telescope including two separate spacecraft: an optical primary objective lens functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart, with the eyepiece directly behind the magnifying glass "aiming" at an intended target; their relative orientation determines the optical axis of the telescope and hence the targets being observed. The objective lens includes a very large-aperture, very-thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the objective lens, gathering up the incoming light and converting it to high-quality images. The positions of the two spacecraft are controlled both to maintain a good optical focus and to point at desired targets, which may be either Earth-bound or celestial.
N-point statistics of large-scale structure in the Zel'dovich approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin, E-mail: tassev@astro.princeton.edu
2014-06-01
Motivated by the results presented in a companion paper, here we give a simple analytical expression for the matter n-point functions in the Zel'dovich approximation (ZA), both in real and in redshift space (including the angular case). We present numerical results for the 2-dimensional redshift-space correlation function, as well as for the equilateral configuration of the real-space 3-point function. We compare those to the tree-level results. Our analysis is easily extendable to include Lagrangian bias, as well as higher-order perturbative corrections to the ZA. The results should be especially useful for modelling probes of large-scale structure in the linear regime, such as the Baryon Acoustic Oscillations. We make the numerical code used in this paper freely available.
A real-space approach to the X-ray phase problem
NASA Astrophysics Data System (ADS)
Liu, Xiangan
Over the past few decades, the phase problem of X-ray crystallography has been explored in reciprocal space in the so-called direct methods. Here we investigate the problem using a real-space approach that bypasses the laborious procedure of frequent Fourier synthesis and peak picking. Starting from a completely random structure, we move the atoms around in real space to minimize a cost function. A Monte Carlo method named simulated annealing (SA) is employed to search for the global minimum of the cost function, which can be constructed in either real space or reciprocal space. In the hybrid minimal principle, we combine the dual-space costs together. One part of the cost function monitors the probability distribution of the phase triplets, while the other is a real-space cost function representing the discrepancy between measured and calculated intensities. Compared to the single-space cost functions, the dual-space cost function has a greatly improved landscape and can therefore prevent the system from being trapped in metastable states. Thus, the structures of large molecules such as virginiamycin (C43H49N7O10·3CH3OH), isoleucinomycin (C60H102N6O18) and hexadecaisoleucinomycin (HEXIL, C80H136N8O24) can now be solved, whereas this would not be possible using a single cost function. As a molecule gets larger, the configurational space grows, and the required CPU time increases exponentially. The method of improved Monte Carlo sampling has demonstrated its capability to solve large molecular structures. The atoms are encouraged to sample the high-density regions in space determined by an approximate density map, which in turn is updated and modified by averaging and Fourier synthesis. This type of biased sampling has led to a considerable reduction of the configurational space and greatly improves the algorithm compared to the previous uniform sampling; for instance, 90% of the computer run time could be cut in solving the complex structure of isoleucinomycin. Successful trial calculations include larger molecular structures such as HEXIL and a collagen-like peptide (PPG). Moving chemical fragments is proposed to reduce the degrees of freedom. Furthermore, stereochemical parameters are considered for geometric constraints and for a cost function related to chemical energy.
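The dual-space annealing strategy can be sketched as follows (a toy illustration; the cost terms, weights, and move sizes are ours, not the thesis code):

    import numpy as np

    def intensity_cost(positions, f_obs, hkl):
        """Discrepancy between observed and calculated structure-factor magnitudes."""
        f_calc = np.abs(np.exp(2j * np.pi * hkl @ positions.T).sum(axis=1))
        return np.sum((f_obs - f_calc) ** 2)

    def triplet_cost(positions, hkl):
        """Toy stand-in for the reciprocal-space term monitoring phase-triplet statistics."""
        phases = np.angle(np.exp(2j * np.pi * hkl @ positions.T).sum(axis=1))
        return np.sum(1.0 - np.cos(phases[:-2] + phases[1:-1] - phases[2:]))

    def anneal(f_obs, hkl, n_atoms, steps=20000, t0=1.0, w=0.5, seed=0):
        rng = np.random.default_rng(seed)
        pos = rng.random((n_atoms, 3))                        # random starting structure
        cost = w * intensity_cost(pos, f_obs, hkl) + (1 - w) * triplet_cost(pos, hkl)
        for step in range(steps):
            temp = t0 * (1.0 - step / steps)                  # linear cooling schedule
            trial = pos.copy()
            trial[rng.integers(n_atoms)] += 0.05 * rng.normal(size=3)   # move one atom
            trial %= 1.0
            c = w * intensity_cost(trial, f_obs, hkl) + (1 - w) * triplet_cost(trial, hkl)
            if c < cost or rng.random() < np.exp(-(c - cost) / max(temp, 1e-9)):
                pos, cost = trial, c                          # Metropolis acceptance
        return pos, cost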
Large deviation function for a driven underdamped particle in a periodic potential
NASA Astrophysics Data System (ADS)
Fischer, Lukas P.; Pietzonka, Patrick; Seifert, Udo
2018-02-01
Employing large deviation theory, we explore current fluctuations of underdamped Brownian motion for the paradigmatic example of a single particle in a one-dimensional periodic potential. Two different approaches to the large deviation function of the particle current are presented. First, we derive an explicit expression for the large deviation functional of the empirical phase space density, which replaces the level 2.5 functional used for overdamped dynamics. Using this approach, we obtain several bounds on the large deviation function of the particle current. We compare these to bounds for overdamped dynamics that have recently been derived, motivated by the thermodynamic uncertainty relation. Second, we provide a method to calculate the large deviation function via the cumulant generating function. We use this method to assess the tightness of the bounds in a numerical case study for a cosine potential.
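For orientation, the standard large-deviation quantities referred to above are related by a Legendre-Fenchel transform (general definitions, not specific to this paper):
\[ \alpha(\lambda) = \lim_{T\to\infty}\frac{1}{T}\ln\left\langle e^{\lambda J_T}\right\rangle, \qquad I(j) = \sup_{\lambda}\big[\lambda j - \alpha(\lambda)\big], \qquad P(J_T/T \approx j) \asymp e^{-T I(j)}, \]
and the thermodynamic-uncertainty-relation bound for overdamped dynamics is the parabola \(I(j) \le \sigma\,(j-\langle j\rangle)^{2}/(4\langle j\rangle^{2})\), with \(\sigma\) the entropy production rate; the question studied above is how tight such bounds remain for underdamped motion.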
Divergence of perturbation theory in large scale structures
NASA Astrophysics Data System (ADS)
Pajer, Enrico; van der Woude, Drian
2018-05-01
We make progress towards an analytical understanding of the regime of validity of perturbation theory for large scale structures and the nature of some non-perturbative corrections. We restrict ourselves to 1D gravitational collapse, for which exact solutions before shell crossing are known. We review the convergence of perturbation theory for the power spectrum, recently proven by McQuinn and White [1], and extend it to non-Gaussian initial conditions and the bispectrum. In contrast, we prove that perturbation theory diverges for the real space two-point correlation function and for the probability density function (PDF) of the density averaged in cells and all the cumulants derived from it. We attribute these divergences to the statistical averaging intrinsic to cosmological observables, which, even on very large and "perturbative" scales, gives non-vanishing weight to all extreme fluctuations. Finally, we discuss some general properties of non-perturbative effects in real space and Fourier space.
Large Phased Array Radar Using Networked Small Parabolic Reflectors
NASA Technical Reports Server (NTRS)
Amoozegar, Farid
2006-01-01
Multifunction phased array systems with radar, telecom, and imaging applications have already been established for flat-plate phased arrays of dipoles or waveguides. In this paper the design trades and candidate options for combining the radar and telecom functions of the Deep Space Network (DSN) into a single large transmit array of small parabolic reflectors are discussed. In particular, the effect of combining the radar and telecom functions on the sizes of individual antenna apertures and the corresponding spacing between the antenna elements of the array is analyzed. A heterogeneous architecture for the DSN large transmit array is proposed to meet the radar and telecom requirements while considering budget, scheduling, and strategic planning constraints.
Large aperture segmented optics for space-to-ground communications.
Lucy, R F
1968-08-01
A large aperture, moderate quality segmented optical array for use in noncoherent space-to-ground laser communications is determined as a function of resolution, diameter, focal length, and number of segments in the array. Secondary optics and construction tolerances are also discussed. Performance predictions show a typical receiver to be capable of megahertz communications at Mars distances during daylight operation.
Fresnel Concentrators for Space Solar Power and Solar Thermal Propulsion
NASA Technical Reports Server (NTRS)
Bradford, Rodney; Parks, Robert W.; Craig, Harry B. (Technical Monitor)
2001-01-01
Large deployable Fresnel concentrators are applicable to solar thermal propulsion and multiple space solar power generation concepts. These concentrators can be used with thermophotovoltaic, solar thermionic, and solar dynamic conversion systems. Thin polyimide Fresnel lenses and reflectors can provide tailored flux distribution and concentration ratios matched to receiver requirements. Thin, preformed polyimide film structure components assembled into support structures for Fresnel concentrators provide the capability to produce large inflation-deployed concentrator assemblies. The polyimide film is resistant to the space environment and allows large lightweight assemblies to be fabricated that can be compactly stowed for launch. This work addressed design and fabrication of lightweight polyimide film Fresnel concentrators, alternate materials evaluation, and data management functions for space solar power concepts, architectures, and supporting technology development.
Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems, those with large hyper-volumes and multi-mode search spaces containing a large number of genes, require a large number of function evaluations for GA convergence, but they always converge.
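A minimal sketch of this kind of GA search on a multi-hill model landscape (our own toy objective and parameter choices, not the paper's implementation):

    import numpy as np

    rng = np.random.default_rng(1)

    def fitness(x):
        # toy multi-hill landscape: three Gaussian "hills" in the unit hypercube
        centers = np.array([[0.2] * x.shape[-1], [0.5] * x.shape[-1], [0.8] * x.shape[-1]])
        heights = np.array([0.6, 1.0, 0.8])
        d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return (heights * np.exp(-d2 / 0.5)).max(axis=1)

    n_genes, pop_size, n_gen = 10, 100, 200
    pop = rng.random((pop_size, n_genes))
    for gen in range(n_gen):
        fit = fitness(pop)
        # tournament selection
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[i] > fit[j])[:, None], pop[i], pop[j])
        # one-point crossover
        cut = rng.integers(1, n_genes, size=pop_size)
        mates = parents[rng.permutation(pop_size)]
        children = np.where(np.arange(n_genes) < cut[:, None], parents, mates)
        # mutation
        mask = rng.random(children.shape) < 0.02
        children[mask] = rng.random(mask.sum())
        # elitism: carry the best individual forward unchanged
        children[0] = pop[np.argmax(fit)]
        pop = children
    print("best fitness found:", fitness(pop).max())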
Scale-space measures for graph topology link protein network architecture to function.
Hulsman, Marc; Dimitrakopoulos, Christos; de Ridder, Jeroen
2014-06-15
The network architecture of physical protein interactions is an important determinant of the molecular functions that are carried out within each cell. To study this relation, the network architecture can be characterized by graph topological characteristics such as shortest paths and network hubs. These characteristics have an important shortcoming: they do not take into account that interactions occur across different scales. This is important because some cellular functions may involve a single direct protein interaction (small scale), whereas others require more and/or indirect interactions, such as protein complexes (medium scale) and interactions between large modules of proteins (large scale). In this work, we derive generalized scale-aware versions of known graph topological measures based on diffusion kernels. We apply these to characterize the topology of networks across all scales simultaneously, generating a so-called graph topological scale-space. The comprehensive physical interaction network in yeast is used to show that scale-space based measures consistently give superior performance when distinguishing protein functional categories and three major types of functional interactions: genetic interaction, co-expression and perturbation interactions. Moreover, we demonstrate that graph topological scale spaces capture biologically meaningful features that provide new insights into the link between function and protein network architecture. Matlab(TM) code to calculate the scale-aware topological measures (STMs) is available at http://bioinformatics.tudelft.nl/TSSA
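A compact illustration of the underlying idea (toy graph and a simplified scale-aware measure of our own; the actual STM definitions are given in the paper and its Matlab code):

    import numpy as np
    from scipy.linalg import expm

    # adjacency matrix of a small toy interaction graph
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A              # graph Laplacian

    for beta in (0.1, 1.0, 10.0):               # beta sets the diffusion scale
        K = expm(-beta * L)                     # diffusion kernel at this scale
        # simplified scale-aware "centrality": probability mass a node exchanges
        # with the rest of the network after diffusing for time beta
        centrality = 1.0 - np.diag(K)
        print(beta, np.round(centrality, 3))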
The Laplace method for probability measures in Banach spaces
NASA Astrophysics Data System (ADS)
Piterbarg, V. I.; Fatalov, V. R.
1995-12-01
Contents
§1. Introduction
Chapter I. Asymptotic analysis of continual integrals in Banach space, depending on a large parameter
§2. The large deviation principle and logarithmic asymptotics of continual integrals
§3. Exact asymptotics of Gaussian integrals in Banach spaces: the Laplace method
  3.1. The Laplace method for Gaussian integrals taken over the whole Hilbert space: isolated minimum points ([167], I)
  3.2. The Laplace method for Gaussian integrals in Hilbert space: the manifold of minimum points ([167], II)
  3.3. The Laplace method for Gaussian integrals in Banach space ([90], [174], [176])
  3.4. Exact asymptotics of large deviations of Gaussian norms
§4. The Laplace method for distributions of sums of independent random elements with values in Banach space
  4.1. The case of a non-degenerate minimum point ([137], I)
  4.2. A degenerate isolated minimum point and the manifold of minimum points ([137], II)
§5. Further examples
  5.1. The Laplace method for the local time functional of a Markov symmetric process ([217])
  5.2. The Laplace method for diffusion processes, a finite number of non-degenerate minimum points ([116])
  5.3. Asymptotics of large deviations for Brownian motion in the Hölder norm
  5.4. Non-asymptotic expansion of a strong stable law in Hilbert space ([41])
Chapter II. The double sum method, a version of the Laplace method in the space of continuous functions
§6. Pickands' method of double sums
  6.1. General situations
  6.2. Asymptotics of the distribution of the maximum of a Gaussian stationary process
  6.3. Asymptotics of the probability of a large excursion of a Gaussian non-stationary process
§7. Probabilities of large deviations of trajectories of Gaussian fields
  7.1. Homogeneous fields and fields with constant dispersion
  7.2. Finitely many maximum points of dispersion
  7.3. Manifold of maximum points of dispersion
  7.4. Asymptotics of distributions of maxima of Wiener fields
§8. Exact asymptotics of large deviations of the norm of Gaussian vectors and processes with values in the spaces L_k^p and l^2. Gaussian fields with the set of parameters in Hilbert space
  8.1. Exact asymptotics of the distribution of the l_k^p-norm of a Gaussian finite-dimensional vector with dependent coordinates, p > 1
  8.2. Exact asymptotics of probabilities of high excursions of trajectories of processes of type \chi^2
  8.3. Asymptotics of the probabilities of large deviations of Gaussian processes with a set of parameters in Hilbert space [74]
  8.4. Asymptotics of distributions of maxima of the norms of l^2-valued Gaussian processes
  8.5. Exact asymptotics of large deviations for the l^2-valued Ornstein-Uhlenbeck process
Bibliography
Large-deviation properties of Brownian motion with dry friction.
Chen, Yaming; Just, Wolfram
2014-10-01
We investigate piecewise-linear stochastic models with regard to the probability distribution of functionals of the stochastic processes, a question that occurs frequently in large deviation theory. The functionals that we are looking into in detail are related to the time a stochastic process spends at a phase space point or in a phase space region, as well as to the motion with inertia. For a Langevin equation with discontinuous drift, we extend the so-called backward Fokker-Planck technique for non-negative support functionals to arbitrary support functionals, to derive explicit expressions for the moments of the functional. Explicit solutions for the moments and for the distribution of the so-called local time, the occupation time, and the displacement are derived for the Brownian motion with dry friction, including quantitative measures to characterize deviation from Gaussian behavior in the asymptotic long time limit.
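In the notation used here (ours, not the paper's), the functionals in question are time integrals along a trajectory,
\[ A_t = \int_0^t U\big(x(t')\big)\,dt', \]
with \(U(x)=\delta(x-x^{*})\) giving the local time at \(x^{*}\), \(U(x)=\Theta(x)\) the occupation time of the half line, and the displacement obtained by integrating the velocity, \(X_t=\int_0^t v(t')\,dt'\), for the motion with inertia.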
Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus
2018-01-01
Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
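A minimal numerical sketch of the S-NDF idea (toy data and bin counts of our choosing; a real renderer would build these histograms on the GPU during rasterization):

    import numpy as np

    # per-pixel super-samples of unit surface normals (toy data): shape (H, W, S, 3)
    H, W, S = 4, 4, 64
    rng = np.random.default_rng(2)
    n = rng.normal(size=(H, W, S, 3))
    n /= np.linalg.norm(n, axis=-1, keepdims=True)

    # S-NDF: per-pixel histogram of normal directions over a coarse angular grid
    bins = 8
    theta = np.arccos(np.clip(n[..., 2], -1, 1))            # polar angle
    phi = np.arctan2(n[..., 1], n[..., 0]) + np.pi          # azimuth in [0, 2*pi)
    idx = (np.minimum((theta / np.pi * bins).astype(int), bins - 1) * bins
           + np.minimum((phi / (2 * np.pi) * bins).astype(int), bins - 1))
    sndf = np.zeros((H, W, bins * bins))
    for k in range(bins * bins):
        sndf[..., k] = (idx == k).mean(axis=-1)

    # Re-lighting without re-rendering: shade each pixel by weighting a per-direction
    # lighting response with its S-NDF instead of revisiting the original particles.
    light_response = rng.random(bins * bins)                # toy per-direction response
    image = sndf @ light_response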
Simple gain probability functions for large reflector antennas of JPL/NASA
NASA Technical Reports Server (NTRS)
Jamnejad, V.
2003-01-01
Simple models for the patterns of the Deep Space Network antennas, as well as for their cumulative gain probability and probability density functions, are developed. These are needed for the study and evaluation of interference from unwanted sources, such as the emerging terrestrial High Density Fixed Service, with the Ka-band receiving antenna systems at the Goldstone Station of the Deep Space Network.
Large space structures control algorithm characterization
NASA Technical Reports Server (NTRS)
Fogel, E.
1983-01-01
Feedback control algorithms are developed for sensor/actuator pairs on large space systems. These algorithms have been sized in terms of (1) floating point operation (FLOP) demands; (2) storage for variables; and (3) input/output data flow. FLOP sizing (per control cycle) was done as a function of the number of control states and the number of sensor/actuator pairs. Storage for variables and I/O sizing was done for specific structure examples.
Indentured Parts List Maintenance and Part Assembly Capture Tool - IMPACT
NASA Technical Reports Server (NTRS)
Jain, Bobby; Morris, Jill; Sharpe, Kelly
2004-01-01
Johnson Space Center's (JSC's) indentured parts list (IPL) maintenance and parts assembly capture tool (IMPACT) is an easy-to-use graphical interface for viewing and maintaining the complex assembly hierarchies of large databases. IMPACT, already in use at JSC to support the International Space Station (ISS), queries, updates, modifies, and views data in the IPL and associated resource data; with modification, it could perform these functions for any large commercial database. By enabling its users to efficiently view and manipulate IPL hierarchical data, IMPACT performs a function unlike that of any other tool. Through IMPACT, users will achieve results quickly, efficiently, and cost effectively.
Deep space communication - A one billion mile noisy channel
NASA Technical Reports Server (NTRS)
Smith, J. G.
1982-01-01
Deep space exploration is concerned with the study of natural phenomena in the solar system with the aid of measurements made at spacecraft on deep space missions. Deep space communication refers to communication between earth and spacecraft in deep space. The Deep Space Network is an earth-based facility employed for deep space communication. It includes a network of large tracking antennas located at various positions around the earth. The goals and achievements of deep space exploration over the past 20 years are discussed along with the broad functional requirements of deep space missions. Attention is given to the differences in space loss between communication satellites and deep space vehicles, effects of the long round-trip light time on spacecraft autonomy, requirements for the use of massive nuclear power plants on spacecraft at large distances from the sun, and the kinds of scientific return provided by a deep space mission. Problems concerning a deep space link of one billion miles are also explored.
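To make the scale of the "space loss" concrete, a back-of-the-envelope free-space path loss estimate for an assumed X-band link over one billion miles (illustrative numbers, not mission values):

    import math

    d = 1e9 * 1609.344          # one billion miles in metres
    f = 8.4e9                   # assumed X-band downlink frequency, Hz
    lam = 3e8 / f               # wavelength, m
    fspl_db = 20 * math.log10(4 * math.pi * d / lam)
    print(f"free-space path loss ~ {fspl_db:.0f} dB")   # roughly 295 dB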
Space Generic Open Avionics Architecture (SGOAA): Overview
NASA Technical Reports Server (NTRS)
Wray, Richard B.; Stovall, John R.
1992-01-01
A space generic open avionics architecture created for NASA is described. It will serve as the basis for entities in spacecraft core avionics, capable of being tailored by NASA for future space program avionics ranging from small vehicles such as Moon ascent/descent vehicles to large ones such as Mars transfer vehicles or orbiting stations. The standard consists of: (1) a system architecture; (2) a generic processing hardware architecture; (3) a six class architecture interface model; (4) a system services functional subsystem architectural model; and (5) an operations control functional subsystem architectural model.
Space shuttle EVA opportunities. [a technology assessment
NASA Technical Reports Server (NTRS)
Bland, D. A., Jr.
1976-01-01
A technology assessment is presented on space extravehicular activities (EVA) that will be possible when the space shuttle orbiter is completed and launched. The use of EVA in payload systems design is discussed, as is space crew training. The role of EVA in connection with the Large Space Telescope and Skylab is described. The value of EVA in constructing structures in space and in orbital assembly is examined. Excellent color illustrations are provided which show the proposed EVA functions described.
Analysis of space vehicle structures using the transfer-function concept
NASA Technical Reports Server (NTRS)
Heer, E.; Trubert, M. R.
1969-01-01
Analysis of large, complex systems is accomplished by dividing them into suitable subsystems and determining the individual dynamical and vibrational responses. Frequency transfer functions then determine the vibrational response of the whole system.
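A one-line schematic of the transfer-function concept (our notation): each subsystem is characterized in the frequency domain by a transfer function relating response to excitation,
\[ X_i(\omega) = H_i(\omega)\,F_i(\omega), \]
and the vibrational response of the assembled vehicle is obtained by combining the subsystem transfer functions through the forces exchanged at their interfaces.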
Explorative Function in Williams Syndrome Analyzed through a Large-Scale Task with Multiple Rewards
ERIC Educational Resources Information Center
Foti, F.; Petrosini, L.; Cutuli, D.; Menghini, D.; Chiarotti, F.; Vicari, S.; Mandolesi, L.
2011-01-01
This study aimed to evaluate spatial function in subjects with Williams syndrome (WS) by using a large-scale task with multiple rewards and comparing the spatial abilities of WS subjects with those of mental age-matched control children. In the present spatial task, WS participants had to explore an open space to search for nine rewards placed in…
Neural encoding of large-scale three-dimensional space-properties and constraints.
Jeffery, Kate J; Wilson, Jonathan J; Casali, Giulio; Hayman, Robin M
2015-01-01
How the brain represents large-scale, navigable space has been the topic of intensive investigation for several decades, resulting in the discovery that neurons in a complex network of cortical and subcortical brain regions co-operatively encode distance, direction, place, movement, etc., using a variety of different sensory inputs. However, such studies have mainly been conducted in simple laboratory settings in which animals explore small, two-dimensional (i.e., flat) arenas. The real world, by contrast, is complex and three-dimensional, with hills, valleys, tunnels, branches and, for species that can swim or fly, large volumetric spaces. Adding an additional dimension to space adds coding challenges, a primary reason for which is that several basic geometric properties are different in three dimensions. This article will explore the consequences of these challenges for the establishment of a functional three-dimensional metric map of space, one of which is that the brains of some species might have evolved to reduce the dimensionality of the representational space and thus sidestep some of these problems.
Energy and momentum management of the Space Station using magnetically suspended composite rotors
NASA Technical Reports Server (NTRS)
Eisenhaure, D. B.; Oglevie, R. E.; Keckler, C. R.
1985-01-01
The research addresses the feasibility of using magnetically suspended composite rotors to jointly perform the energy and momentum management functions of an advanced manned Space Station. Recent advancements in composite materials, magnetic suspensions, and power conversion electronics have given flywheel concepts the potential to simultaneously perform these functions for large, long duration spacecraft, while offering significant weight, volume, and cost savings over conventional approaches. The Space Station flywheel concept arising out of this study consists of a composite-material rotor, a large-angle magnetic suspension (LAMS) system, an ironless armature motor/generator, and high-efficiency power conversion electronics. The LAMS design permits the application of appropriate spacecraft control torques without the use of conventional mechanical gimbals. In addition, flywheel systems have the growth potential and modularity needed to play a key role in many future system developments.
Impact of large-scale tides on cosmological distortions via redshift-space power spectrum
NASA Astrophysics Data System (ADS)
Akitsu, Kazuyuki; Takada, Masahiro
2018-03-01
Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of the clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies in a way that depends on the alignment between the tide, the wave vector of the small-scale modes, and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to the large-scale tide. We then investigate the impact of the large-scale tide on the estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than as an additional source of statistical error, and show that the degradation in these parameters is recovered if we can employ the prior on the rms tide amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained at an accuracy better than the CDM prediction if effects up to larger wave numbers in the nonlinear regime can be included.
Validation of a unique concept for a low-cost, lightweight space-deployable antenna structure
NASA Technical Reports Server (NTRS)
Freeland, R. E.; Bilyeu, G. D.; Veal, G. R.
1993-01-01
An experiment conducted in the framework of a NASA In-Space Technology Experiments Program, based on a concept of inflatable deployable structures, is described. The concept utilizes very low inflation pressure to maintain the required geometry on orbit, and gravity-induced deflection of the structure precludes any meaningful ground-based demonstration of functional performance. The experiment is aimed at validating and characterizing the mechanical functional performance of a 14-m-diameter inflatable deployable reflector antenna structure in the orbital operational environment. Results of the experiment are expected to significantly reduce the user risk associated with using large space-deployable antennas by demonstrating the functional performance of a concept that meets the criteria for low-cost, lightweight, and highly reliable space-deployable structures.
Besley, Nicholas A
2016-10-11
The computational cost of calculations of K-edge X-ray absorption spectra using time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation is significantly reduced through the introduction of a severe integral screening procedure that includes only integrals involving the core s basis function of the absorbing atom(s), coupled with a reduced-quality numerical quadrature for integrals associated with the exchange and correlation functionals. The memory required for the calculations is reduced through construction of the TDDFT matrix within the excitation space of the absorbing core orbitals and by exploiting further truncation of the virtual orbital space. The resulting method, denoted fTDDFTs, leads to much faster calculations and makes the study of large systems tractable. The capability of the method is demonstrated through calculations of the X-ray absorption spectra at the carbon K-edge of chlorophyll a, C60 and C70.
A study of the role of pyrotechnic systems on the space shuttle program
NASA Technical Reports Server (NTRS)
Lake, E. R.; Thompson, S. J.; Drexelius, V. W.
1973-01-01
Pyrotechnic systems, high burn rate propellant and explosive-actuated mechanisms, have been used extensively in aerospace vehicles to perform a variety of work functions, including crew escape, staging, deployment and destruction. Pyrotechnic system principles are described in this report along with their applications on typical military fighter aircraft, Mercury, Gemini, Apollo, and a representative unmanned spacecraft. To consider the possible pyrotechnic applications on the space shuttle, the mechanical functions on a large commercial aircraft, similar in scale to the shuttle orbiter, were reviewed. Many potential applications exist for pyrotechnic systems on the space shuttle, both in conventional short-duration functions and in longer-duration and/or repetitive gas generators.
Confining potential in momentum space
NASA Technical Reports Server (NTRS)
Norbury, John W.; Kahana, David E.; Maung, Khin Maung
1992-01-01
A method is presented for the solution in momentum space of the bound state problem with a linear potential in r space. The potential is unbounded at large r leading to a singularity at small q. The singularity is integrable, when regulated by exponentially screening the r-space potential, and is removed by a subtraction technique. The limit of zero screening is taken analytically, and the numerical solution of the subtracted integral equation gives eigenvalues and wave functions in good agreement with position space calculations.
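The regularization can be made explicit (a standard Fourier-transform result, in our notation): screening the linear potential as \(V(r)=\lambda r e^{-\mu r}\) gives
\[ \tilde{V}(q) = \int d^3r\; e^{-i\mathbf{q}\cdot\mathbf{r}}\,\lambda r e^{-\mu r} = \frac{8\pi\lambda\,(3\mu^{2}-q^{2})}{(q^{2}+\mu^{2})^{3}} \;\longrightarrow\; -\frac{8\pi\lambda}{q^{4}} \quad (\mu\to 0), \]
so the momentum-space kernel is singular at small \(q\); the screening regulates this singularity and the subtraction removes it, after which the \(\mu\to 0\) limit can be taken analytically.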
Phases of a stack of membranes in a large number of dimensions of configuration space
NASA Astrophysics Data System (ADS)
Borelli, M. E.; Kleinert, H.
2001-05-01
The phase diagram of a stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is calculated exactly in a large number of dimensions of configuration space. At low temperatures, the system forms a lamellar phase with spontaneously broken translational symmetry in the vertical direction. At a critical temperature, the stack disorders vertically in a meltinglike transition. The critical temperature is determined as a function of the interlayer separation l.
Role of Sports Facilities in the Process of Revitalization of Brownfields
NASA Astrophysics Data System (ADS)
Taraszkiewicz, Karolina; Nyka, Lucyna
2017-10-01
The paper gives evidence that building a large sports facility can generate beneficial urban space transformation and a significant improvement in dilapidated urban areas. On the basis of theoretical investigations and case studies, it can be shown that sports facilities introduced to urban brownfields are among the best-known large-scale revitalization methods. Large urban spaces surrounding sports facilities such as stadiums and other arenas create excellent conditions for designing additional recreational functions, such as parks and other green areas. Since sports venues are very often located on brownfields and post-industrial spaces, they are usually well connected with canals, rivers, and other water routes or reservoirs. Such spaces become attractors for large groups of people. This, in effect, initiates the process of introducing housing estates to the area and, gradually, the development of a multifunctional urban structure. As the research shows, such a process of favourable urban transformation rests on several important preconditions. One of the most significant is the formation of new communication infrastructure linking the newly formed territories with the well-structured urban core. A well-planned program for the new sports facilities is also a very important factor. As the research shows, multifunctional large sports venues may function in the city as a new kind of public space that stimulates new genres of social relations and offers entertainment and free-time activities not necessarily related to sport. This finally leads to the creation of new jobs, a more general improvement of the district's image, growing appreciation of the emerging location and, consequently, new investments in the neighbouring areas. The research adds new evidence to the ongoing discussion on the drawbacks and benefits of placing stadiums and sports arenas in the urban core.
Regulatory networks and connected components of the neutral space. A look at functional islands
NASA Astrophysics Data System (ADS)
Boldhaus, G.; Klemm, K.
2010-09-01
The functioning of a living cell is largely determined by the structure of its regulatory network, comprising non-linear interactions between regulatory genes. An important factor for the stability and evolvability of such regulatory systems is neutrality: typically, a large number of alternative network structures give rise to the necessary dynamics. Here we study the discretized regulatory dynamics of the yeast cell cycle [Li et al., PNAS, 2004] and the set of networks capable of reproducing it, which we call functional. Among these, the empirical yeast wildtype network is close to optimal with respect to sparse wiring. Under point mutations, which establish or delete single interactions, the neutral space of functional networks is fragmented into ≈ 4.7 × 10^8 components. One of the smaller ones contains the wildtype network. On average, functional networks reachable from the wildtype by mutations are sparser, have higher noise resilience and fewer fixed-point attractors than networks outside of this wildtype component.
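The discretized dynamics referred to above (Li et al., 2004) is a threshold update rule; below is a minimal sketch of testing whether a candidate wiring reproduces a prescribed trajectory, with a made-up two-node example (our illustrative code, omitting the self-degradation rule of the original model):

    import numpy as np

    def step(state, W):
        """Threshold dynamics: activate if net input > 0, deactivate if < 0, else keep."""
        inp = W @ state
        nxt = state.copy()
        nxt[inp > 0] = 1
        nxt[inp < 0] = 0
        return nxt

    def is_functional(W, trajectory):
        """A wiring W is 'functional' if it reproduces every transition of the trajectory."""
        return all(np.array_equal(step(s, W), t)
                   for s, t in zip(trajectory[:-1], trajectory[1:]))

    # made-up 2-node trajectory and wiring, for illustration only
    traj = [np.array([1, 0]), np.array([1, 1]), np.array([0, 1])]
    W = np.array([[0, -1],
                  [1,  0]])
    print(is_functional(W, traj))   # True for this toy example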
Innovative telescope architectures for future large space observatories
NASA Astrophysics Data System (ADS)
Polidan, Ronald S.; Breckinridge, James B.; Lillie, Charles F.; MacEwen, Howard A.; Flannery, Martin R.; Dailey, Dean R.
2016-10-01
Over the past few years, we have developed a concept for an evolvable space telescope (EST) that is assembled on orbit in three stages, growing from a 4×12-m telescope in Stage 1, to a 12-m filled aperture in Stage 2, and then to a 20-m filled aperture in Stage 3. Stage 1 is launched as a fully functional telescope and begins gathering science data immediately after checkout on orbit. This observatory is then periodically augmented in space with additional mirror segments, structures, and newer instruments to evolve the telescope over the years to a 20-m space telescope. We discuss the EST architecture, the motivation for this approach, and the benefits it provides over current approaches to building and maintaining large space observatories.
Cognitive engineering models in space systems
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1992-01-01
NASA space systems, including mission operations on the ground and in space, are complex, dynamic, predominantly automated systems in which the human operator is a supervisory controller. The human operator monitors and fine-tunes computer-based control systems and is responsible for ensuring safe and efficient system operation. In such systems, the potential consequences of human mistakes and errors may be very large even though such events have a low probability. Thus, models of cognitive functions in complex systems are needed to describe human performance and form the theoretical basis of operator workstation design, including displays, controls, and decision support aids. The operator function model represents normative operator behavior: the operator activities expected given the current system state. The extension of the theoretical structure of the operator function model and its application to NASA Johnson mission operations and space station applications is discussed.
Analysis and Ground Testing for Validation of the Inflatable Sunshield in Space (ISIS) Experiment
NASA Technical Reports Server (NTRS)
Lienard, Sebastien; Johnston, John; Adams, Mike; Stanley, Diane; Alfano, Jean-Pierre; Romanacci, Paolo
2000-01-01
The Next Generation Space Telescope (NGST) design requires a large sunshield to protect the large aperture mirror and instrument module from constant solar exposure at its L2 orbit. The structural dynamics of the sunshield must be modeled in order to predict disturbances to the observatory attitude control system and gauge effects on the line of sight jitter. Models of large, non-linear membrane systems are not well understood and have not been successfully demonstrated. To answer questions about sunshield dynamic behavior and demonstrate controlled deployment, the NGST project is flying a Pathfinder experiment, the Inflatable Sunshield in Space (ISIS). This paper discusses in detail the modeling and ground-testing efforts performed at the Goddard Space Flight Center to: validate analytical tools for characterizing the dynamic behavior of the deployed sunshield, qualify the experiment for the Space Shuttle, and verify the functionality of the system. Included in the discussion will be test parameters, test setups, problems encountered, and test results.
The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU
NASA Astrophysics Data System (ADS)
Lara, A.; Niembro, T.
2017-12-01
We introduce the seesaw space, an orthonormal space formed by the local and the global fluctuations of any of the four basic solar parameters: velocity, density, magnetic field and temperature, at any heliospheric distance. The fluctuations compare the standard deviation of a three-hour moving average against the running average of the parameter over a month (considered as the local fluctuations) and over a year (the global fluctuations). We created this new vector space to identify the arrival of transients at any spacecraft without the need of an observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms and fitted piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other corresponds to the ambient solar wind. The norm values at which the piecewise functions change vary with the solar cycle. We compared the seesaw norms of each of the basic parameters due to the arrival of coronal mass ejections, co-rotating interaction regions and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We also present general comparisons of the norms during the two maxima and the minimum of the solar cycle, and the differences of the norms due to large-scale structures in each period.
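As a concrete illustration of the construction just described, the sketch below computes (local, global) fluctuation coordinates and a seesaw norm for a single one-minute-resolution parameter time series. The averaging windows follow the abstract (three hours, one month, one year); the exact normalization, the Euclidean norm, and the column names are assumptions made for illustration, not the authors' definitions.

```python
import pandas as pd

def seesaw_coordinates(series: pd.Series) -> pd.DataFrame:
    """Sketch of the 'seesaw space' construction for one solar-wind parameter.

    `series` is assumed to carry a DatetimeIndex at one-minute resolution
    (e.g. proton density from WIND).  The precise normalization is not given
    in the abstract; here each fluctuation is the standard deviation of a
    three-hour moving average, scaled by a longer running average.
    """
    three_hour = series.rolling("3h").mean()        # short-time smoothing
    monthly = series.rolling("30D").mean()          # "local" reference level
    yearly = series.rolling("365D").mean()          # "global" reference level

    out = pd.DataFrame({
        "local": three_hour.rolling("30D").std() / monthly,    # local fluctuation
        "global": three_hour.rolling("365D").std() / yearly,   # global fluctuation
    })
    # Seesaw norm: length of the (local, global) vector; high values flag
    # large-scale structures such as coronal mass ejections.
    out["norm"] = (out["local"] ** 2 + out["global"] ** 2) ** 0.5
    return out
```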
NASA Astrophysics Data System (ADS)
Fisher, Karl B.
1995-08-01
The relation between the galaxy correlation functions in real-space and redshift-space is derived in the linear regime by an appropriate averaging of the joint probability distribution of density and velocity. The derivation recovers the familiar linear theory result on large scales but has the advantage of clearly revealing the dependence of the redshift distortions on the underlying peculiar velocity field; streaming motions give rise to distortions of O(Ω^0.6/b) while variations in the anisotropic velocity dispersion yield terms of order O(Ω^1.2/b^2). This probabilistic derivation of the redshift-space correlation function is similar in spirit to the derivation of the commonly used "streaming" model, in which the distortions are given by a convolution of the real-space correlation function with a velocity distribution function. The streaming model is often used to model the redshift-space correlation function on small, highly nonlinear, scales. There have been claims in the literature, however, that the streaming model is not valid in the linear regime. Our analysis confirms this claim, but we show that the streaming model can be made consistent with linear theory provided that the model for the streaming has the functional form predicted by linear theory and that the velocity distribution is chosen to be a Gaussian with the correct linear theory dispersion.
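For reference, the "streaming" model referred to above writes the redshift-space correlation function as a convolution of the real-space correlation function with the distribution of line-of-sight pairwise velocities. In one common notation (assumed here; conventions for the velocity variable differ between papers),

```latex
\[
1 + \xi_s(r_p, \pi) \;=\; H_0 \int_{-\infty}^{\infty}
  \Bigl[\, 1 + \xi_r\!\bigl(\sqrt{r_p^{2} + y^{2}}\,\bigr) \Bigr]\,
  f\!\bigl( v = H_0 (\pi - y) \,\bigm|\, y \bigr)\, \mathrm{d}y ,
\]
```

where f(v | y) is the line-of-sight pairwise velocity distribution at real-space line-of-sight separation y, normalized to unit integral over v. The abstract's point is that this form reproduces linear theory only if the mean streaming and the Gaussian dispersion of f are themselves taken from linear theory.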
Importance sampling large deviations in nonequilibrium steady states. I.
Ray, Ushnish; Chan, Garnet Kin-Lic; Limmer, David T
2018-03-28
Large deviation functions contain information on the stability and response of systems driven into nonequilibrium steady states and in such a way are similar to free energies for systems at equilibrium. As with equilibrium free energies, evaluating large deviation functions numerically for all but the simplest systems is difficult because by construction they depend on exponentially rare events. In this first paper of a series, we evaluate different trajectory-based sampling methods capable of computing large deviation functions of time integrated observables within nonequilibrium steady states. We illustrate some convergence criteria and best practices using a number of different models, including a biased Brownian walker, a driven lattice gas, and a model of self-assembly. We show how two popular methods for sampling trajectory ensembles, transition path sampling and diffusion Monte Carlo, suffer from exponentially diverging correlations in trajectory space as a function of the bias parameter when estimating large deviation functions. Improving the efficiencies of these algorithms requires introducing guiding functions for the trajectories.
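The sampling difficulty described above can already be seen in a brute-force estimator of the scaled cumulant generating function (SCGF), psi(lambda) = lim (1/T) ln <exp(lambda A_T)>, for a toy biased Brownian walker: the exponential reweighting makes the estimator variance explode as |lambda| grows. The model, parameters, and estimator below are illustrative and are not the paper's importance-sampling algorithms.

```python
import numpy as np

def naive_scgf(lmbda, n_traj=5000, n_steps=1000, drift=0.1, dt=0.01, seed=0):
    """Brute-force estimate of the SCGF for a drifted Brownian walker.

    A_T is the time-integrated displacement.  For this toy model the exact
    answer is psi(lambda) = lambda * drift + lambda**2 (diffusion constant 1),
    so the estimate can be checked directly.
    """
    rng = np.random.default_rng(seed)
    # Overdamped increments: dx = drift*dt + sqrt(2*dt) * noise.
    dx = drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal((n_traj, n_steps))
    a_t = dx.sum(axis=1)                   # time-integrated observable A_T
    weights = np.exp(lmbda * a_t)          # exponential tilting of trajectories
    psi = np.log(weights.mean()) / (n_steps * dt)
    # Relative variance of the estimator; its rapid growth with |lambda| is
    # the problem that guided trajectory sampling is meant to cure.
    rel_var = weights.var() / weights.mean() ** 2
    return psi, rel_var
```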
Research on numerical algorithms for large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1981-01-01
Numerical algorithms for analysis and design of large space structures are investigated. The sign algorithm and its application to decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed. The diagonalization of matrix polynomials is considered. The quadrature method and Laplace transforms are discussed, and the identification of linear systems by the quadrature method is investigated.
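The "sign algorithm" refers to the matrix sign function; a standard way to compute it (the classical Newton iteration, which may differ in detail from the scheme used in the report) is sketched below, together with the spectral projectors that decouple a linear system x' = Ax into its stable and unstable parts.

```python
import numpy as np

def matrix_sign(a, tol=1e-12, max_iter=100):
    """Classical Newton iteration S <- (S + S^{-1}) / 2 for sign(A).

    Requires A to have no eigenvalues on the imaginary axis.  Scaling
    refinements used in practice are omitted from this sketch.
    """
    s = np.array(a, dtype=float)
    for _ in range(max_iter):
        s_next = 0.5 * (s + np.linalg.inv(s))
        if np.linalg.norm(s_next - s) <= tol * np.linalg.norm(s_next):
            return s_next
        s = s_next
    return s

def stable_unstable_projectors(a):
    """Projectors built from sign(A); applying them block-diagonalizes
    (decouples) the differential equation x' = A x."""
    s = matrix_sign(a)
    eye = np.eye(len(a))
    return 0.5 * (eye - s), 0.5 * (eye + s)
```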
NASA Technical Reports Server (NTRS)
Berdahl, M.
1980-01-01
The use of a self-pulsed laser system for accurately describing the surface shape of large space-deployed antenna structures was evaluated. Tests with a breadboard system verified functional operation with resolution over short time periods on the order of 0.2 mm, nonambiguous ranging, and a maximum range capability on the order of 150 m. The projected capability of the system is resolution of less than 0.1 mm over a reasonable time period and a range extension to over 300 m.
Tamosiunaite, Minija; Asfour, Tamim; Wörgötter, Florentin
2009-03-01
Reinforcement learning methods can be used in robotics applications, especially for specific target-oriented problems, for example the reward-based recalibration of goal-directed actions. To this end, relatively large and continuous state-action spaces still need to be handled efficiently. The goal of this paper is, thus, to develop a novel, rather simple method which uses reinforcement learning with function approximation in conjunction with different reward strategies for solving such problems. For the testing of our method, we use a four-degree-of-freedom reaching problem in 3D space simulated by a two-joint robot arm system with two DOF each. Function approximation is based on 4D, overlapping kernels (receptive fields), and the state-action space contains about 10,000 of these. Different types of reward structures are compared, for example, reward-on-touching-only against reward-on-approach. Furthermore, forbidden joint configurations are punished. A continuous action space is used. In spite of the rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it might be possible to use it on a real robot for problems where mixed rewards can be defined in situations where other types of learning might be difficult.
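A minimal sketch of the kind of function approximation described above: overlapping Gaussian receptive fields as features and a simple temporal-difference update of the Q-weights. For brevity the sketch uses a discrete action set and arbitrary kernel layout, width, and learning rate; the paper itself works with a continuous action space and its own reward strategies.

```python
import numpy as np

class RBFQApproximator:
    """Q-function approximation over Gaussian receptive fields (illustrative)."""

    def __init__(self, centers, width, n_actions, alpha=0.1, gamma=0.95):
        self.centers = np.asarray(centers, dtype=float)   # (n_kernels, state_dim)
        self.width = width
        self.alpha = alpha
        self.gamma = gamma
        self.theta = np.zeros((n_actions, len(self.centers)))

    def features(self, state):
        d2 = ((self.centers - np.asarray(state, dtype=float)) ** 2).sum(axis=1)
        phi = np.exp(-d2 / (2.0 * self.width ** 2))
        return phi / phi.sum()            # normalized receptive-field activations

    def q_values(self, state):
        return self.theta @ self.features(state)

    def td_update(self, state, action, reward, next_state):
        phi = self.features(state)
        target = reward + self.gamma * self.q_values(next_state).max()
        self.theta[action] += self.alpha * (target - self.theta[action] @ phi) * phi
```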
Primordial black holes and uncertainties in the choice of the window function
NASA Astrophysics Data System (ADS)
Ando, Kenta; Inomata, Keisuke; Kawasaki, Masahiro
2018-05-01
Primordial black holes (PBHs) can be produced by perturbations that exit the horizon during the inflationary phase. While inflation models predict the power spectrum of the perturbations in Fourier space, the PBH abundance depends on the probability distribution function of density perturbations in real space. To estimate the PBH abundance in a given inflation model, we must relate the power spectrum in Fourier space to the probability density function in real space by coarse graining the perturbations with a window function. However, there are uncertainties in what window function should be used, which could change the relation between the PBH abundance and the power spectrum. This is particularly important when considering PBHs with masses of about 30 M⊙, which account for the LIGO events, because the required power spectrum is severely constrained by observations. In this paper, we investigate how large an influence the uncertainties in the choice of a window function have on the power spectrum required for LIGO PBHs. We find that the uncertainties significantly affect the prediction for the stochastic gravitational waves induced by the second-order effect of the perturbations. In particular, the pulsar timing array constraints on the produced gravitational waves could disappear for the real-space top-hat window function.
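The coarse-graining step the abstract refers to enters the abundance calculation through the variance of the smoothed density contrast. In the standard notation assumed here, with W̃ the Fourier transform of the window and 𝒫_δ(k) the dimensionless power spectrum of the density perturbation,

```latex
\[
\sigma^{2}(R) \;=\; \int_{0}^{\infty} \frac{\mathrm{d}k}{k}\,
  \widetilde{W}^{2}(kR)\, \mathcal{P}_{\delta}(k) ,
\]
```

so different window choices (Gaussian, real-space top-hat, Fourier-space top-hat) give different σ(R) for the same 𝒫_δ(k), and hence different required spectral amplitudes for a fixed PBH abundance.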
Extracting Useful Semantic Information from Large Scale Corpora of Text
ERIC Educational Resources Information Center
Mendoza, Ray Padilla, Jr.
2012-01-01
Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…
Discharge transient coupling in large space power systems
NASA Technical Reports Server (NTRS)
Stevens, N. John; Stillwell, R. P.
1990-01-01
Experiments have shown that plasma environments can induce discharges in solar arrays. These plasmas simulate the environments found in low earth orbits where current plans call for operation of very large power systems. The discharges could be large enough to couple into the power system and possibly disrupt operations. Here, the general concepts of the discharge mechanism and the techniques of coupling are discussed. Data from both ground and flight experiments are reviewed to obtain an expected basis for the interactions. These concepts were applied to the Space Station solar array and distribution system as an example of the large space power system. The effect of discharges was found to be a function of the discharge site. For most sites in the array discharges would not seriously impact performance. One location at the negative end of the array was identified as a position where discharges could couple to charge stored in system capacitors. This latter case could impact performance.
Comparing fixed and variable-width Gaussian networks.
Kůrková, Věra; Kainen, Paul C
2014-09-01
The role of width of Gaussians in two types of computational models is investigated: Gaussian radial-basis-functions (RBFs) where both widths and centers vary and Gaussian kernel networks which have fixed widths but varying centers. The effect of width on functional equivalence, universal approximation property, and form of norms in reproducing kernel Hilbert spaces (RKHS) is explored. It is proven that if two Gaussian RBF networks have the same input-output functions, then they must have the same numbers of units with the same centers and widths. Further, it is shown that while sets of input-output functions of Gaussian kernel networks with two different widths are disjoint, each such set is large enough to be a universal approximator. Embedding of RKHSs induced by "flatter" Gaussians into RKHSs induced by "sharper" Gaussians is described and growth of the ratios of norms on these spaces with increasing input dimension is estimated. Finally, large sets of argminima of error functionals in sets of input-output functions of Gaussian RBFs are described. Copyright © 2014 Elsevier Ltd. All rights reserved.
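To make the "fixed width, varying centers" case concrete, the sketch below fits a Gaussian kernel network by kernel ridge regression: the centers are the training inputs, the width is held fixed, and only the output weights are learned. The width and ridge parameter are illustrative choices, and this is a generic construction rather than the specific networks analyzed in the paper.

```python
import numpy as np

def gaussian_gram(x, z, width):
    """Gram matrix K[i, j] = exp(-||x_i - z_j||^2 / (2 * width^2))."""
    d2 = ((x[:, None, :] - z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_fixed_width_kernel_network(x, y, width=1.0, ridge=1e-6):
    """Fixed-width Gaussian kernel network fit by kernel ridge regression.

    Centers coincide with the training points and the common width is fixed;
    only the linear output weights are learned.  In a variable-width RBF
    network, the centers and widths would be free parameters as well.
    """
    k = gaussian_gram(x, x, width)
    coef = np.linalg.solve(k + ridge * np.eye(len(x)), y)
    return lambda x_new: gaussian_gram(np.asarray(x_new), x, width) @ coef
```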
Towards anti-causal Green's function for three-dimensional sub-diffraction focusing
NASA Astrophysics Data System (ADS)
Ma, Guancong; Fan, Xiying; Ma, Fuyin; de Rosny, Julien; Sheng, Ping; Fink, Mathias
2018-06-01
In causal physics, the causal Green's function describes the radiation of a point source. Its counterpart, the anti-causal Green's function, depicts a spherically converging wave. However, in free space, any converging wave must be followed by a diverging one. Their interference gives rise to the diffraction limit that constrains the smallest possible dimension of a wave's focal spot in free space, which is half the wavelength. Here, we show with three-dimensional acoustic experiments that we can realize a stand-alone anti-causal Green's function in a large portion of space up to a subwavelength distance from the focus point by introducing a near-perfect absorber for spherical waves at the focus. We build this subwavelength absorber based on membrane-type acoustic metamaterial, and experimentally demonstrate focusing of spherical waves beyond the diffraction limit.
NASA Technical Reports Server (NTRS)
Smith, W. W.
1981-01-01
The five major tasks of the program are reported. Task 1 is a literature search followed by selection and definition of seven generic spacecraft classes. Task 2 covers the determination and description of important disturbance effects. Task 3 applies the disturbances to the generic spacecraft and adds maneuver and stationkeeping functions to define total auxiliary propulsion systems requirements for control. The important auxiliary propulsion system characteristics are identified and sensitivities to control functions and large space system characteristics determined. In Task 4, these sensitivities are quantified and the optimum auxiliary propulsion system characteristics determined. Task 5 compares the desired characteristics with those available for both electrical and chemical auxiliary propulsion systems to identify the directions technology advances should take.
Systems Engineering and Reusable Avionics
NASA Technical Reports Server (NTRS)
Conrad, James M.; Murphy, Gloria
2010-01-01
One concept for future space flights is to construct building blocks for a wide variety of avionics systems. Once a unit has served its original purpose, it can be removed from the original vehicle and reused in a similar or dissimilar function, depending on the function blocks the unit contains. For example: once a lunar lander has reached the moon's surface, an engine controller for the Lunar Descent Module would be removed and used for a lunar rover motor control unit or for an Environmental Control Unit for a Lunar Habitat. This senior design project included the investigation of a wide range of functions of space vehicles and possible uses. Specifically, this includes: (1) determining and specifying the basic functioning blocks of space vehicles; (2) building and demonstrating a concept model; (3) showing that high reliability is maintained. The specific implementation of this senior design project included a large project team made up of Systems, Electrical, Computer, and Mechanical Engineers/Technologists. The effort was made up of several sub-groups that each worked on a part of the entire project. The large size and complexity made this project one of the more difficult to manage and advise. Typical projects have only 3-4 students, but this project had 10 students from five different disciplines. This paper describes how this large project differs from typical projects and the challenges encountered. It also describes how the systems engineering approach was successfully implemented so that the students were able to meet nearly all of the project requirements.
A real-space stochastic density matrix approach for density functional electronic structure.
Beck, Thomas L
2015-12-21
The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.
Parametric Cost Models for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Dollinger, Courtney
2010-01-01
Multivariable parametric cost models for space telescopes provide several benefits to designers and space system project managers. They identify major architectural cost drivers and allow high-level design trades. They enable cost-benefit analysis for technology development investment. And, they provide a basis for estimating total project cost. A survey of historical models found that there is no definitive space telescope cost model. In fact, published models vary greatly [1]. Thus, there is a need for parametric space telescope cost models. An effort is underway to develop single variable [2] and multi-variable [3] parametric space telescope cost models based on the latest available data and applying rigorous analytical techniques. Specific cost estimating relationships (CERs) have been developed which show that aperture diameter is the primary cost driver for large space telescopes; technology development as a function of time reduces cost at the rate of 50% per 17 years; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and increasing mass reduces cost.
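A purely illustrative single-CER sketch of the kind of relationship described above is given below. The coefficients and exponents are placeholders, not the fitted values from the paper; only the qualitative trends reported in the abstract are encoded (cost driven primarily by aperture diameter, halved by roughly every 17 years of technology advance, and reduced when more mass is allowed).

```python
def telescope_cost_cer(aperture_m, launch_year, mass_kg,
                       scale=1.0, diameter_exp=1.7, mass_exp=-0.3,
                       reference_year=2000):
    """Hypothetical parametric cost estimating relationship (CER) sketch.

    scale, diameter_exp, mass_exp and reference_year are illustrative
    placeholders; the function returns cost in arbitrary units.
    """
    technology_factor = 0.5 ** ((launch_year - reference_year) / 17.0)
    return scale * aperture_m ** diameter_exp * mass_kg ** mass_exp * technology_factor
```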
NASA Technical Reports Server (NTRS)
1984-01-01
The Large Deployable Reflector (LDR), a proposed 20 m diameter telescope designed for infrared and submillimeter astronomical measurements from space, is discussed in terms of scientific purposes, capabilities, current status, and history of development. The LDR systems goals and functional/telescope requirements are enumerated.
Some thoughts on the management of large, complex international space ventures
NASA Technical Reports Server (NTRS)
Lee, T. J.; Kutzer, Ants; Schneider, W. C.
1992-01-01
Management issues relevant to the development and deployment of large international space ventures are discussed with particular attention given to previous experience. Management approaches utilized in the past are labeled as either simple or complex, and signs of efficient management are examined. Simple approaches include those in which experiments and subsystems are developed for integration into spacecraft, and the Apollo-Soyuz Test Project is given as an example of a simple multinational approach. Complex approaches include those for ESA's Spacelab Project and the Space Station Freedom in which functional interfaces cross agency and political boundaries. It is concluded that individual elements of space programs should be managed by individual participating agencies, and overall configuration control is coordinated by level with a program director acting to manage overall objectives and project interfaces.
Suppression of phase mixing in drift-kinetic plasma turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, J. T., E-mail: joseph.parker@stfc.ac.uk; OCIAM, Mathematical Institute, University of Oxford, Andrew Wiles Building, Radcliffe Observatory Quarter, Woodstock Road, Oxford OX2 6GG; Brasenose College, Radcliffe Square, Oxford OX1 4AJ
2016-07-15
Transfer of free energy from large to small velocity-space scales by phase mixing leads to Landau damping in a linear plasma. In a turbulent drift-kinetic plasma, this transfer is statistically nearly canceled by an inverse transfer from small to large velocity-space scales due to “anti-phase-mixing” modes excited by a stochastic form of plasma echo. Fluid moments (density, velocity, and temperature) are thus approximately energetically isolated from the higher moments of the distribution function, so phase mixing is ineffective as a dissipation mechanism when the plasma collisionality is small.
Interactions between large space power systems and low-Earth-orbit plasmas
NASA Technical Reports Server (NTRS)
Stevens, N. J.
1985-01-01
There is a growing tendency to plan space missions that will incorporate very large space power systems. These space power systems must function in the space plasma environment, which can impose operational limitations. As the power output increases, the operating voltage also must increase and this voltage, exposed at solar array interconnects, interacts with the local plasma. The implications of such interactions are considered. The available laboratory data for biased array segment tests are reviewed to demonstrate the basic interactions considered. A data set for a floating high voltage array test was used to generate approximate relationships for positive and negative current collection from plasmas. These relationships were applied to a hypothetical 100 kW power system operating in a 400 km, near equatorial orbit. It was found that discharges from the negative regions of the array are the most probable limiting factor in array operation.
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features including a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
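The Pareto-optimality notion used throughout the abstract can be stated compactly in code. The helper below is a generic dominance test and a naive non-dominated-set extraction for minimization problems; it is not the paper's binning selection or gene-space transformation.

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (all objectives minimized)."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def pareto_front_indices(objectives):
    """Naive O(n^2) extraction of the non-dominated set from an (n, m) array."""
    objectives = np.asarray(objectives)
    front = []
    for i, f_i in enumerate(objectives):
        if not any(dominates(f_j, f_i) for j, f_j in enumerate(objectives) if j != i):
            front.append(i)
    return front
```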
NASA Astrophysics Data System (ADS)
Anderson, G. A.; MacCallum, T. K.; Poynter, J. E.; Klaus, D., Dr.
1998-01-01
Paragon Space Development Corporation (SDC) has developed an Autonomous Biological System (ABS) that can be flown in space to provide for long-term growth and breeding of aquatic plants, animals, microbes and algae. The system functions autonomously and in isolation from the spacecraft life support systems, with no mandatory crew time required for operation or observation. The ABS can also be used for long-term plant and animal life support and breeding on a free-flyer spacecraft. The ABS units are a research tool for both pharmaceutical and basic space biological sciences. Development flights in May 1996 and from September 1996 through January 1997 were largely successful, showing that both the hardware and the life systems performed well and produced beneficial results, though some surprises were still found. The two space flights, plus the current flight now on Mir, are expected to result in both a scientifically and commercially usable system for breeding and propagation of animals and plants in space.
Toward a standardized structural-functional group connectome in MNI space.
Horn, Andreas; Blankenburg, Felix
2016-01-01
The analysis of the structural architecture of the human brain in terms of connectivity between its subregions has provided profound insights into its underlying functional organization and has coined the concept of the "connectome", a structural description of the elements forming the human brain and the connections among them. Here, as a proof of concept, we introduce a novel group connectome in standard space based on a large sample of 169 subjects from the Enhanced Nathan Kline Institute-Rockland Sample (eNKI-RS). Whole brain structural connectomes of each subject were estimated with a global tracking approach, and the resulting fiber tracts were warped into standard stereotactic (MNI) space using DARTEL. Employing this group connectome, the results of published tracking studies (i.e., the JHU white matter and Oxford thalamic connectivity atlas) could be largely reproduced directly within MNI space. In a second analysis, a study that examined structural connectivity between regions of a functional network, namely the default mode network, was reproduced. Voxel-wise structural centrality was then calculated and compared to others' findings. Furthermore, including additional resting-state fMRI data from the same subjects, structural and functional connectivity matrices between approximately forty thousand nodes of the brain were calculated. This was done to estimate structure-function agreement indices of voxel-wise whole brain connectivity. Taken together, the combination of a novel whole brain fiber tracking approach and an advanced normalization method led to a group connectome that allowed (at least heuristically) performing fiber tracking directly within MNI space. Such an approach may be used for various purposes like the analysis of structural connectivity and modeling experiments that aim at studying the structure-function relationship of the human connectome. Moreover, it may even represent a first step toward a standard DTI template of the human brain in stereotactic space. The standardized group connectome might thus be a promising new resource to better understand and further analyze the anatomical architecture of the human brain on a population level. Copyright © 2015 Elsevier Inc. All rights reserved.
A space-based public service platform for terrestrial rescue operations
NASA Technical Reports Server (NTRS)
Fleisig, R.; Bernstein, J.; Cramblit, D. C.
1977-01-01
The space-based Public Service Platform (PSP) is a multibeam, high-gain communications relay satellite that can provide a variety of functions for a large number of people on earth equipped with extremely small, very low cost transceivers. This paper describes the PSP concept, the rationale used to derive the concept, the criteria for selecting specific communication functions to be performed, and the advantages of performing such functions via satellite. The discussion focuses on the benefits of using a PSP for natural disaster warning; control of attendant rescue/assistance operations; and rescue of people in downed aircraft, aboard sinking ships, lost or injured on land.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, V.
1988-01-01
A generic procedure for the parameter optimization of a digital control law for a large-order flexible flight vehicle or large space structure modeled as a sampled data system is presented. A linear quadratic Gaussian type cost function was minimized, while satisfying a set of constraints on the steady-state rms values of selected design responses, using a constrained optimization technique to meet multiple design requirements. Analytical expressions for the gradients of the cost function and the design constraints on mean square responses with respect to the control law design variables are presented.
Control - Demands mushroom as station grows
NASA Technical Reports Server (NTRS)
Szirmay, S. Z.; Blair, J.
1983-01-01
The NASA space station, which is presently in the planning stage, is to be composed of both rigid and nonrigid modules, rotating elements, and flexible appendages subjected to environmental disturbances from the earth's atmosphere, gravity gradient, and magnetic field, as well as solar radiation and self-generated disturbances. Control functions, which will originally include attitude control, docking and berthing control, and system monitoring and management, will, with evolving mission objectives, come to encompass such control functions as articulation control, autonomous navigation, space traffic control, and large space structure control. Attention is given to the advancements in modular, distributed, and adaptive control methods, as well as system identification and hardware fault tolerance techniques, which will be required.
Xu, Xin; Huang, Zhenhua; Graves, Daniel; Pedrycz, Witold
2014-12-01
In order to deal with the sequential decision problems with large or continuous state spaces, feature representation and function approximation have been a major research topic in reinforcement learning (RL). In this paper, a clustering-based graph Laplacian framework is presented for feature representation and value function approximation (VFA) in RL. By making use of clustering-based techniques, that is, K-means clustering or fuzzy C-means clustering, a graph Laplacian is constructed by subsampling in Markov decision processes (MDPs) with continuous state spaces. The basis functions for VFA can be automatically generated from spectral analysis of the graph Laplacian. The clustering-based graph Laplacian is integrated with a class of approximation policy iteration algorithms called representation policy iteration (RPI) for RL in MDPs with continuous state spaces. Simulation and experimental results show that, compared with previous RPI methods, the proposed approach needs fewer sample points to compute an efficient set of basis functions and the learning control performance can be improved for a variety of parameter settings.
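A minimal sketch of the pipeline described above: subsample the state space with K-means, build a symmetrized k-nearest-neighbour graph on the cluster centres, and take the smoothest eigenvectors of the graph Laplacian as basis functions for value-function approximation. The choice of k, the unweighted adjacency, and the number of basis functions are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.spatial import cKDTree

def laplacian_basis(states, n_clusters=100, n_neighbors=8, n_basis=20):
    """Clustering-based graph-Laplacian basis functions (a sketch)."""
    centers, _ = kmeans2(np.asarray(states, dtype=float), n_clusters, minit="points")
    _, idx = cKDTree(centers).query(centers, k=n_neighbors + 1)  # first hit is the point itself
    w = np.zeros((n_clusters, n_clusters))
    for i, row in enumerate(idx):
        for j in row[1:]:
            w[i, j] = w[j, i] = 1.0            # symmetrized, unweighted adjacency
    lap = np.diag(w.sum(axis=1)) - w           # combinatorial graph Laplacian
    _, eigvecs = np.linalg.eigh(lap)
    return centers, eigvecs[:, :n_basis]       # smoothest modes as features at the centres
```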
Zou, Wenli; Cai, Ziyu; Wang, Jiankang; Xin, Kunyu
2018-04-29
Based on two-component relativistic atomic calculations, a free electron density function (EDF) library has been developed for nearly all the known ECPs of the elements Li (Z = 3) up to Ubn (Z = 120), which can be interfaced into modern quantum chemistry programs to save the .wfx wavefunction file. The applicability of this EDF library is demonstrated by the analyses of the quantum theory of atoms in molecules (QTAIM) and other real space functions on HeCuF, PtO4^2+, OgF4, and TlCl3(DMSO)2. When a large-core ECP is used, it shows that the corrections by EDF may significantly improve the properties of some density-derived real space functions, but they are invalid for the wavefunction-depending real space functions. To classify different chemical bonds and especially some nonclassical interactions, a list of universal criteria has also been proposed. © 2018 Wiley Periodicals, Inc.
Physical Model of the Genotype-to-Phenotype Map of Proteins
NASA Astrophysics Data System (ADS)
Tlusty, Tsvi; Libchaber, Albert; Eckmann, Jean-Pierre
2017-04-01
How DNA is mapped to functional proteins is a basic question of living matter. We introduce and study a physical model of protein evolution which suggests a mechanical basis for this map. Many proteins rely on large-scale motion to function. We therefore treat protein as learning amorphous matter that evolves towards such a mechanical function: Genes are binary sequences that encode the connectivity of the amino acid network that makes a protein. The gene is evolved until the network forms a shear band across the protein, which allows for long-range, soft modes required for protein function. The evolution reduces the high-dimensional sequence space to a low-dimensional space of mechanical modes, in accord with the observed dimensional reduction between genotype and phenotype of proteins. Spectral analysis of the space of 10^6 solutions shows a strong correspondence between localization around the shear band of both mechanical modes and the sequence structure. Specifically, our model shows how mutations are correlated among amino acids whose interactions determine the functional mode.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
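A hedged sketch of the estimator described above, using the fact that regularized least squares with the sum kernel k = k_large + k_small performs regression in the sum of the two Gaussian RKHSs. The widths and the ridge parameter are illustrative, and the single shared regularization weight used here is a simplification of the weighting analyzed in the paper.

```python
import numpy as np

def gaussian_kernel(x, z, width):
    d2 = ((x[:, None, :] - z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_sum_space_regression(x, y, width_large=2.0, width_small=0.2, ridge=1e-3):
    """Least square regularized regression with the sum kernel k_large + k_small.

    The large-width component absorbs the low-frequency part of the target and
    the small-width component the high-frequency part, which is the point of
    working in the sum space rather than a single RKHS.
    """
    k = gaussian_kernel(x, x, width_large) + gaussian_kernel(x, x, width_small)
    alpha = np.linalg.solve(k + ridge * np.eye(len(x)), y)

    def predict(x_new):
        x_new = np.asarray(x_new)
        k_new = (gaussian_kernel(x_new, x, width_large)
                 + gaussian_kernel(x_new, x, width_small))
        return k_new @ alpha

    return predict
```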
On the accuracy of modelling the dynamics of large space structures
NASA Technical Reports Server (NTRS)
Diarra, C. M.; Bainum, P. M.
1985-01-01
Proposed space missions will require large scale, light weight, space based structural systems. Large space structure technology (LSST) systems will have to accommodate (among others): ocean data systems; electronic mail systems; large multibeam antenna systems; and, space based solar power systems. The structures are to be delivered into orbit by the space shuttle. Because of their inherent size, modelling techniques and scaling algorithms must be developed so that system performance can be predicted accurately prior to launch and assembly. When the size and weight-to-area ratio of proposed LSST systems dictate that the entire system be considered flexible, there are two basic modeling methods which can be used. The first is a continuum approach, a mathematical formulation for predicting the motion of a general orbiting flexible body, in which elastic deformations are considered small compared with characteristic body dimensions. This approach is based on an a priori knowledge of the frequencies and shape functions of all modes included within the system model. Alternatively, finite element techniques can be used to model the entire structure as a system of lumped masses connected by a series of (restoring) springs and possibly dampers. In addition, a computational algorithm was developed to evaluate the coefficients of the various coupling terms in the equations of motion as applied to the finite element model of the Hoop/Column.
NASA Astrophysics Data System (ADS)
Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.
2017-11-01
The realization of astrophysical research requires the development of high-sensitivity centimeter-band parabolic space radio telescopes (SRT) with large mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure, since mesh structures of that size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope with a 10 m diameter mirror is currently being developed in Russia within the "SPECTR-R" program. The external dimensions of the telescope exceed the size of the existing thermo-vacuum chambers used to verify the SRT reflecting-surface accuracy under the action of space environment factors. Numerical simulation therefore becomes the basis on which the adopted designs must be accepted, and such modeling should rest on experimental characterization of the basic structural materials and elements of the future reflector. In this article, computational modeling of the reflecting-surface deviations of a large centimeter-band deployable space reflector during orbital operation is considered. The factors that determine the deviations are analyzed, both deterministic (temperature fields) and non-deterministic (manufacturing and installation errors of the telescope; deformations caused by the behavior of composite materials in space). A finite-element model and a set of methods are developed that allow computational modeling of the reflecting-surface deviations caused by all of these factors and that take into account deviation correction by the spacecraft orientation system. Modeling results for two SRT operating modes (orientation relative to the Sun) are presented.
Adaptive structures for precision controlled large space systems
NASA Technical Reports Server (NTRS)
Garba, John A.; Wada, Ben K.; Fanson, James L.
1991-01-01
The stringent accuracy and ground test validation requirements of some future space missions will require new approaches in structural design. Adaptive structures, structural systems that can vary their geometric configuration as well as their physical properties, are primary candidates for meeting the functional requirements of such missions. Research performed in the development of such adaptive structural systems is described.
NASA Astrophysics Data System (ADS)
Kamimoto, Shingo; Kawai, Takahiro; Koike, Tatsuya
2016-12-01
Inspired by the symbol calculus of linear differential operators of infinite order applied to the Borel transformed WKB solutions of simple-pole type equation [Kamimoto et al. (RIMS Kôkyûroku Bessatsu B 52:127-146, 2014)], which is summarized in Section 1, we introduce in Section 2 the space of simple resurgent functions depending on a parameter with an infra-exponential type growth order, and then we define the assigning operator A which acts on the space and produces resurgent functions with essential singularities. In Section 3, we apply the operator A to the Borel transforms of the Voros coefficient and its exponentiation for the Whittaker equation with a large parameter so that we may find the Borel transforms of the Voros coefficient and its exponentiation for the boosted Whittaker equation with a large parameter. In Section 4, we use these results to find the explicit form of the alien derivatives of the Borel transformed WKB solutions of the boosted Whittaker equation with a large parameter. The results in this paper manifest the importance of resurgent functions with essential singularities in developing the exact WKB analysis, the WKB analysis based on the resurgent function theory. It is also worth emphasizing that the concrete form of essential singularities we encounter is expressed by the linear differential operators of infinite order.
Xu, Jason; Minin, Vladimir N
2015-07-01
Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
The impact of galaxy formation on satellite kinematics and redshift-space distortions
NASA Astrophysics Data System (ADS)
Orsi, Álvaro A.; Angulo, Raúl E.
2018-04-01
Galaxy surveys aim to map the large-scale structure of the Universe and use redshift-space distortions to constrain deviations from general relativity and probe the existence of massive neutrinos. However, the amount of information that can be extracted is limited by the accuracy of theoretical models used to analyse the data. Here, by using the L-Galaxies semi-analytical model run over the Millennium-XXL N-body simulation, we assess the impact of galaxy formation on satellite kinematics and the theoretical modelling of redshift-space distortions. We show that different galaxy selection criteria lead to noticeable differences in the radial distributions and velocity structure of satellite galaxies. Specifically, whereas samples of stellar mass selected galaxies feature satellites that roughly follow the dark matter, emission line satellite galaxies are located preferentially in the outskirts of haloes and display net infall velocities. We demonstrate that capturing these differences is crucial for modelling the multipoles of the correlation function in redshift space, even on large scales. In particular, we show how modelling small-scale velocities with a single Gaussian distribution leads to a poor description of the measured clustering. In contrast, we propose a parametrization that is flexible enough to model the satellite kinematics and that leads to an accurate description of the correlation function down to sub-Mpc scales. We anticipate that our model will be a necessary ingredient in improved theoretical descriptions of redshift-space distortions, which together could result in significantly tighter cosmological constraints and a more optimal exploitation of future large data sets.
Characterizing the Space Debris Environment with a Variety of SSA Sensors
NASA Technical Reports Server (NTRS)
Stansbery, Eugene G.
2010-01-01
Damaging space debris spans a wide range of sizes and altitudes. Therefore no single method or sensor can fully characterize the space debris environment. Space debris researchers use a variety of radars and optical telescopes to characterize the space debris environment in terms of number, altitude, and inclination distributions. Some sensors, such as phased array radars, are designed to search a large volume of the sky and can be instrumental in detecting new breakups and in cataloging and precisely tracking relatively large debris. For smaller debris sizes more sensitivity is needed, which can be provided, in part, by large antenna gains. Larger antenna gains, however, produce smaller fields of view; the result is statistical measurements of the debris environment with less precise orbital parameters. At higher altitudes, optical telescopes become the more sensitive instrument and present their own measurement difficulties. Space Situational Awareness, or SSA, is concerned with more than the number and orbits of satellites. SSA also seeks to understand such parameters as the function, shape, and composition of operational satellites. Similarly, debris researchers are seeking to characterize similar parameters for space debris to improve our knowledge of the risks debris poses to operational satellites as well as to determine sources of debris for future mitigation. This paper will discuss different sensors and sensor types and the role that each plays in fully characterizing the space debris environment.
Hollow-structured mesoporous materials: chemical synthesis, functionalization and applications.
Li, Yongsheng; Shi, Jianlin
2014-05-28
Hollow-structured mesoporous materials (HMMs), as a kind of mesoporous material with unique morphology, have been of great interest in the past decade because of the subtle combination of the hollow architecture with the mesoporous nanostructure. Benefitting from the merits of low density, large void space, large specific surface area, and, especially, the good biocompatibility, HMMs present promising application prospects in various fields, such as adsorption and storage, confined catalysis when catalytically active species are incorporated in the core and/or shell, controlled drug release, targeted drug delivery, and simultaneous diagnosis and therapy of cancers when the surface and/or core of the HMMs are functionalized with functional ligands and/or nanoparticles, and so on. In this review, recent progress in the design, synthesis, functionalization, and applications of hollow mesoporous materials are discussed. Two main synthetic strategies, soft-templating and hard-templating routes, are broadly sorted and described in detail. Progress in the main application aspects of HMMs, such as adsorption and storage, catalysis, and biomedicine, are also discussed in detail in this article, in terms of the unique features of the combined large void space in the core and the mesoporous network in the shell. Functionalization of the core and pore/outer surfaces with functional organic groups and/or nanoparticles, and their performance, are summarized in this article. Finally, an outlook of their prospects and challenges in terms of their controlled synthesis and scaled application is presented. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Takashima, Takeshi; Ogawa, Emiko; Asamura, Kazushi; Hikishima, Mitsuru
2018-05-01
Arase is a small scientific satellite program conducted by the Institute of Space and Astronautical Science/Japan Aerospace Exploration Agency, dedicated to the detailed study of the radiation belts around Earth through in situ observations. In particular, the goal is to directly observe the interaction between plasma waves and particles, which causes the generation of high-energy electrons. To observe the waves and particles in detail, we must record large volumes of burst data at high transmission rates through onboard mission network systems. For this purpose, we developed a high-speed and highly reliable mission network based on SpaceWire, as well as a new, large-memory data recorder equipped with a data search function based on observation time (the time index, TI, the satellite time counted from when the spacecraft is powered on) for the orbital data generated in large quantities. By adopting a new transaction concept of a ring-topology network with SpaceWire, we could secure a redundant mission network system without using large routers while suppressing the increase in cable weight. We confirmed that these systems perform as designed in orbit.
Comprehensive phase diagram of two-dimensional space charge doped Bi2Sr2CaCu2O8+x.
Sterpetti, Edoardo; Biscaras, Johan; Erb, Andreas; Shukla, Abhay
2017-12-12
The phase diagram of hole-doped high critical temperature superconductors as a function of doping and temperature has been intensively studied with chemical variation of doping. Chemical doping can provoke structural changes and disorder, masking intrinsic effects. Alternatively, a field-effect transistor geometry with an electrostatically doped, ultra-thin sample can be used. However, to probe the phase diagram, carrier density modulation beyond 10^14 cm^-2 and transport measurements performed over a large temperature range are needed. Here we use the space charge doping method to measure transport characteristics from 330 K to low temperature. We extract parameters and characteristic temperatures over a large doping range and establish a comprehensive phase diagram for one-unit-cell-thick BSCCO-2212 as a function of doping, temperature and disorder.
Grassmann phase space theory and the Jaynes-Cummings model
NASA Astrophysics Data System (ADS)
Dalton, B. J.; Garraway, B. M.; Jeffers, J.; Barnett, S. M.
2013-07-01
The Jaynes-Cummings model of a two-level atom in a single mode cavity is of fundamental importance both in quantum optics and in quantum physics generally, involving the interaction of two simple quantum systems—one fermionic system (the TLA), the other bosonic (the cavity mode). Depending on the initial conditions a variety of interesting effects occur, ranging from ongoing oscillations of the atomic population difference at the Rabi frequency when the atom is excited and the cavity is in an n-photon Fock state, to collapses and revivals of these oscillations starting with the atom unexcited and the cavity mode in a coherent state. The observation of revivals for Rydberg atoms in a high-Q microwave cavity is key experimental evidence for quantisation of the EM field. Theoretical treatments of the Jaynes-Cummings model based on expanding the state vector in terms of products of atomic and n-photon states and deriving coupled equations for the amplitudes are a well-known and simple method for determining the effects. In quantum optics however, the behaviour of the bosonic quantum EM field is often treated using phase space methods, where the bosonic mode annihilation and creation operators are represented by c-number phase space variables, with the density operator represented by a distribution function of these variables. Fokker-Planck equations for the distribution function are obtained, and either used directly to determine quantities of experimental interest or used to develop c-number Langevin equations for stochastic versions of the phase space variables from which experimental quantities are obtained as stochastic averages. Phase space methods have also been developed to include atomic systems, with the atomic spin operators being represented by c-number phase space variables, and distribution functions involving these variables and those for any bosonic modes being shown to satisfy Fokker-Planck equations from which c-number Langevin equations are often developed. However, atomic spin operators satisfy the standard angular momentum commutation rules rather than the commutation rules for bosonic annihilation and creation operators, and are in fact second order combinations of fermionic annihilation and creation operators. Though phase space methods in which the fermionic operators are represented directly by c-number phase space variables have not been successful, the anti-commutation rules for these operators suggest the possibility of using Grassmann variables—which have similar anti-commutation properties. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of phase space methods in quantum optics to treat fermionic systems by representing fermionic annihilation and creation operators directly by Grassmann phase space variables is rather rare. This paper shows that phase space methods using a positive P type distribution function involving both c-number variables (for the cavity mode) and Grassmann variables (for the TLA) can be used to treat the Jaynes-Cummings model. Although it is a Grassmann function, the distribution function is equivalent to six c-number functions of the two bosonic variables. Experimental quantities are given as bosonic phase space integrals involving the six functions. A Fokker-Planck equation involving both left and right Grassmann differentiations can be obtained for the distribution function, and is equivalent to six coupled equations for the six c-number functions. 
The approach used involves choosing the canonical form of the (non-unique) positive P distribution function, in which the correspondence rules for the bosonic operators are non-standard and hence the Fokker-Planck equation is also unusual. Initial conditions, such as those above for initially uncorrelated states, are discussed and used to determine the initial distribution function. Transformations to new bosonic variables rotating at the cavity frequency enable the six coupled equations for the new c-number functions (which are also equivalent to the canonical Grassmann distribution function) to be solved analytically, based on an ansatz from an earlier paper by Stenholm. It is then shown that the distribution function is exactly the same as that determined from the well-known solution based on coupled amplitude equations. In quantum-atom optics, theories for many-atom bosonic and fermionic systems are needed. With large atom numbers, treatments must often take into account many quantum modes—especially for fermions. Generalisations of phase space distribution functions of phase space variables for a few modes to phase space distribution functionals of field functions (which represent the field operators, c-number fields for bosons, Grassmann fields for fermions) are now being developed for large systems. For the fermionic case, the treatment of the simple two-mode problem represented by the Jaynes-Cummings model is a useful test case for the future development of phase space Grassmann distribution functional methods for fermionic applications in quantum-atom optics.
Probabilistic 3D data fusion for multiresolution surface generation
NASA Technical Reports Server (NTRS)
Manduchi, R.; Johnson, A. E.
2002-01-01
In this paper we present an algorithm for adaptive resolution integration of 3D data collected from multiple distributed sensors. The input to the algorithm is a set of 3D surface points and associated sensor models. Using a probabilistic rule, a surface probability function is generated that represents the probability that a particular volume of space contains the surface. The surface probability function is represented using an octree data structure; regions of space with samples of large covariance are stored at a coarser level than regions of space containing samples with smaller covariance. The algorithm outputs an adaptive resolution surface generated by connecting points that lie on the ridge of surface probability with triangles scaled to match the local discretization of space given by the algorithm. We present results from 3D data generated by scanning lidar and structure from motion.
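The abstract above describes storing a surface probability function in an octree whose cell size adapts to the covariance of each sample. As a rough illustration of that idea (not the authors' implementation), the sketch below assigns each 3D sample to an octree depth chosen from its covariance, so that high-uncertainty samples land in coarse cells and low-uncertainty samples in fine ones; the function names, depth rule, and synthetic data are illustrative assumptions.

```python
import numpy as np

def octree_cell(point, covariance, root_size=16.0, max_depth=8):
    """Map a 3D sample to an adaptive-resolution octree cell.

    The cell depth is chosen so that the cell edge is comparable to the
    sample's positional uncertainty (sqrt of the largest covariance
    eigenvalue); the specific rule is an illustrative assumption.
    """
    sigma = np.sqrt(np.max(np.linalg.eigvalsh(covariance)))
    # Coarser cells for large covariance, finer cells for small covariance.
    depth = int(np.clip(np.log2(root_size / max(sigma, 1e-9)), 0, max_depth))
    cell_size = root_size / (2 ** depth)
    index = tuple(np.floor(np.asarray(point) / cell_size).astype(int))
    return depth, index

# Accumulate a crude "surface probability" per cell by counting weighted samples.
rng = np.random.default_rng(0)
cells = {}
for _ in range(1000):
    p = rng.uniform(0, 16, size=3)
    cov = np.eye(3) * rng.uniform(0.01, 4.0)   # placeholder per-sensor uncertainty
    depth, idx = octree_cell(p, cov)
    cells[(depth, idx)] = cells.get((depth, idx), 0.0) + 1.0 / np.trace(cov)
print(len(cells), "occupied cells across depths")
```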
Clustering in the SDSS Redshift Survey
NASA Astrophysics Data System (ADS)
Zehavi, I.; Blanton, M. R.; Frieman, J. A.; Weinberg, D. H.; SDSS Collaboration
2002-05-01
We present measurements of clustering in the Sloan Digital Sky Survey (SDSS) galaxy redshift survey. Our current sample consists of roughly 80,000 galaxies with redshifts in the range 0.02 < z < 0.2, covering about 1200 square degrees. We measure the clustering in redshift space and in real space. The two-dimensional correlation function ξ (rp,π ) shows clear signatures of redshift distortions, both the small-scale ``fingers-of-God'' effect and the large-scale compression. The inferred real-space correlation function is well described by a power law. The SDSS is especially suitable for investigating the dependence of clustering on galaxy properties, due to the wealth of information in the photometric survey. We focus on the dependence of clustering on color and on luminosity.
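For readers unfamiliar with how such correlation functions are estimated, here is a minimal, hedged sketch of a standard pair-count estimator (Landy-Szalay) in one-dimensional separation bins; this is generic textbook machinery, not the survey's production pipeline, and the catalogue arrays are placeholders.

```python
import numpy as np
from scipy.spatial.distance import pdist

def landy_szalay_xi(data, randoms, bins):
    """Landy-Szalay estimator xi = (DD - 2DR + RR) / RR in separation bins."""
    dd, _ = np.histogram(pdist(data), bins=bins)
    rr, _ = np.histogram(pdist(randoms), bins=bins)
    # Cross pairs: brute force is fine for a small illustrative catalogue.
    dr_sep = np.linalg.norm(data[:, None, :] - randoms[None, :, :], axis=-1)
    dr, _ = np.histogram(dr_sep.ravel(), bins=bins)
    nd, nr = len(data), len(randoms)
    dd = dd / (nd * (nd - 1) / 2)
    rr = rr / (nr * (nr - 1) / 2)
    dr = dr / (nd * nr)
    return np.where(rr > 0, (dd - 2 * dr + rr) / rr, np.nan)

rng = np.random.default_rng(1)
data = rng.uniform(0, 100, size=(500, 3))      # placeholder galaxy positions
randoms = rng.uniform(0, 100, size=(1500, 3))  # unclustered random catalogue
bins = np.linspace(1, 30, 11)
print(landy_szalay_xi(data, randoms, bins))    # ~0 for unclustered input
```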
NASA Technical Reports Server (NTRS)
West, J. B.; Elliott, A. R.; Guy, H. J.; Prisk, G. K.
1997-01-01
The lung is exquisitely sensitive to gravity, and so it is of interest to know how its function is altered in the weightlessness of space. Studies on National Aeronautics and Space Administration (NASA) Spacelabs during the last 4 years have provided the first comprehensive data on the extensive changes in pulmonary function that occur in sustained microgravity. Measurements of pulmonary function were made on astronauts during space shuttle flights lasting 9 and 14 days and were compared with extensive ground-based measurements before and after the flights. Compared with preflight measurements, cardiac output increased by 18% during space flight, and stroke volume increased by 46%. Paradoxically, the increase in stroke volume occurred in the face of reductions in central venous pressure and circulating blood volume. Diffusing capacity increased by 28%, and the increase in the diffusing capacity of the alveolar membrane was unexpectedly large based on findings in normal gravity. The change in the alveolar membrane may reflect the effects of uniform filling of the pulmonary capillary bed. Distributions of blood flow and ventilation throughout the lung were more uniform in space, but some unevenness remained, indicating the importance of nongravitational factors. A surprising finding was that airway closing volume was approximately the same in microgravity and in normal gravity, emphasizing the importance of mechanical properties of the airways in determining whether they close. Residual volume was unexpectedly reduced by 18% in microgravity, possibly because of uniform alveolar expansion. The findings indicate that pulmonary function is greatly altered in microgravity, but none of the changes observed so far will apparently limit long-term space flight. In addition, the data help to clarify how gravity affects pulmonary function in the normal gravity environment on Earth.
A General Conditional Large Deviation Principle
La Cour, Brian R.; Schieve, William C.
2015-07-18
Given a sequence of Borel probability measures on a Hausdorff space which satisfy a large deviation principle (LDP), we consider the corresponding sequence of measures formed by conditioning on a set B. If the large deviation rate function I is good and effectively continuous, and the conditioning set has the properties that (1) $$\overline{B^{\circ}}=\overline{B}$$ and (2) $$I(x)<\infty$$ for all $$x\in\overline{B}$$, then the sequence of conditional measures satisfies a LDP with the good, effectively continuous rate function $$I_B$$, where $$I_B(x)=I(x)-\inf_{B} I$$ if $$x\in\overline{B}$$ and $$I_B(x)=\infty$$ otherwise.
NASA Astrophysics Data System (ADS)
Lu, Wei; Sun, Jianfeng; Hou, Peipei; Xu, Qian; Xi, Yueli; Zhou, Yu; Zhu, Funan; Liu, Liren
2017-08-01
The performance of satellite laser communications between GEO and LEO satellites can be influenced by background light noise appearing in the field of view due to sunlight, planets, and some comets. Such influences should be studied on a ground testing platform before space application. In this paper, we introduce a simulator that reproduces the background light noise present in the space environment during laser-beam data links between two distant satellites. The simulator can reproduce not only the effect of a multi-wavelength spectrum, but also the effects of adjustable field-of-view angles, a large range of adjustable optical power, and adjustable deflection speeds of the light noise. We integrate these functions into a small, compact device for easy mobile use. Software control via a personal computer allows these functions to be adjusted arbitrarily.
Redshift space clustering of galaxies and cold dark matter model
NASA Technical Reports Server (NTRS)
Bahcall, Neta A.; Cen, Renyue; Gramann, Mirt
1993-01-01
The distorting effect of peculiar velocities on the power spectrum and correlation function of IRAS and optical galaxies is studied. The observed redshift-space power spectra and correlation functions of IRAS and optical galaxies over the entire range of scales are directly compared with the corresponding redshift-space distributions from large-scale computer simulations of cold dark matter (CDM) models in order to study the distortion effect of peculiar velocities on the power spectrum and correlation function of the galaxies. It is found that the observed power spectrum of IRAS and optical galaxies is consistent with the spectrum of an Omega = 1 CDM model. The problems that such a model currently faces may be related more to the high value of Omega in the model than to the shape of the spectrum. A low-density CDM model is also investigated and found to be consistent with the data.
Computing the Dynamic Response of a Stratified Elastic Half Space Using Diffuse Field Theory
NASA Astrophysics Data System (ADS)
Sanchez-Sesma, F. J.; Perton, M.; Molina Villegas, J. C.
2015-12-01
The analytical solution for the dynamic response of an elastic half-space for a normal point load at the free surface is due to Lamb (1904). For a tangential force, we have Chao's (1960) formulae. For an arbitrary load at any depth within a stratified elastic half space, the resulting elastic field can be given in the same fashion, by using an integral representation in the radial wavenumber domain. Typically, computations use the discrete wave number (DWN) formalism, and Fourier analysis allows for a solution in the space and time domains. Experimentally, these elastic Green's functions might be retrieved from ambient vibration correlations when assuming a diffuse field. In fact, the field may not be totally diffuse, and only parts of the Green's functions, associated with surface or body waves, are retrieved. In this communication, we explore the computation of Green's functions for layered media on top of a half-space using a set of equipartitioned elastic plane waves. Our formalism includes body and surface waves (Rayleigh and Love waves). These latter waves correspond to the classical representations in terms of normal modes in the asymptotic case of large separation distance between source and receiver. This approach allows computing Green's functions faster than DWN and separating the surface and body wave contributions in order to better represent the experimentally retrieved Green's functions.
Pigot, Alex L; Trisos, Christopher H; Tobias, Joseph A
2016-01-13
Variation in species richness across environmental gradients may be associated with an expanded volume or increased packing of ecological niche space. However, the relative importance of these alternative scenarios remains unknown, largely because standardized information on functional traits and their ecological relevance is lacking for major diversity gradients. Here, we combine data on morphological and ecological traits for 523 species of passerine birds distributed across an Andes-to-Amazon elevation gradient. We show that morphological traits capture substantial variation in species dietary (75%) and foraging niches (60%) when multiple independent trait dimensions are considered. Having established these relationships, we show that the 14-fold increase in species richness towards the lowlands is associated with both an increased volume and density of functional trait space. However, we find that increases in volume contribute little to changes in richness, with most (78%) lowland species occurring within the range of trait space occupied at high elevations. Taken together, our results suggest that high species richness is mainly associated with a denser occupation of functional trait space, implying an increased specialization or overlap of ecological niches, and supporting the view that niche packing is the dominant trend underlying gradients of increasing biodiversity towards the lowland tropics. © 2016 The Author(s).
ERIC Educational Resources Information Center
Johnson, Wendy; Gow, Alan J.; Corley, Janie; Starr, John M.; Deary, Ian J.
2010-01-01
Though mental ability tends to be relatively stable throughout the lifespan, many still argue that late life cognitive function largely reflects education, social class, and environmental circumstances. Instead, it may be that early life cognitive function contributes to each of these in turn, as well as to late life cognitive function. This paper…
Study of genetic direct search algorithms for function optimization
NASA Technical Reports Server (NTRS)
Zeigler, B. P.
1974-01-01
The results are presented of a study to determine the performance of genetic direct search algorithms in solving function optimization problems arising in the optimal and adaptive control areas. The findings indicate that: (1) genetic algorithms can outperform standard algorithms in multimodal and/or noisy optimization situations, but suffer from lack of gradient exploitation facilities when gradient information can be utilized to guide the search. (2) For large populations, or low dimensional function spaces, mutation is a sufficient operator. However for small populations or high dimensional functions, crossover applied in about equal frequency with mutation is an optimum combination. (3) Complexity, in terms of storage space and running time, is significantly increased when population size is increased or the inversion operator, or the second level adaptation routine is added to the basic structure.
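As a concrete, deliberately minimal illustration of the kind of genetic direct search described above, the sketch below applies mutation and crossover in roughly equal proportion to a real-valued population minimizing a multimodal test function; the Rastrigin function and all parameter choices are my own illustrative assumptions, not the study's configuration.

```python
import numpy as np

def rastrigin(x):
    # A standard multimodal test function (illustrative choice).
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def genetic_search(dim=5, pop_size=40, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5.12, 5.12, size=(pop_size, dim))
    for _ in range(generations):
        fitness = np.array([rastrigin(ind) for ind in pop])
        order = np.argsort(fitness)
        parents = pop[order[: pop_size // 2]]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            if rng.random() < 0.5:                      # crossover ~half the time
                mask = rng.random(dim) < 0.5
                child = np.where(mask, a, b)
            else:                                       # mutation otherwise
                child = a + rng.normal(0, 0.3, size=dim)
            children.append(np.clip(child, -5.12, 5.12))
        pop = np.vstack([parents, children])
    best = min(pop, key=rastrigin)
    return best, rastrigin(best)

best, value = genetic_search()
print("best value found:", value)
```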
Increased intracranial pressure in mini-pigs exposed to simulated solar particle event radiation
NASA Astrophysics Data System (ADS)
Sanzari, Jenine K.; Muehlmatt, Amy; Savage, Alexandria; Lin, Liyong; Kennedy, Ann R.
2014-02-01
Changes in intracranial pressure (ICP) during space flight have stimulated an area of research in space medicine. It is widely speculated that elevations in ICP contribute to structural and functional ocular changes, including deterioration in vision, which is also observed during space flight. The aim of this study was to investigate changes in opening pressure (OP) occurring as a result of ionizing radiation exposure (at doses and dose-rates relevant to solar particle event radiation). We used a large animal model, the Yucatan mini-pig, and were able to obtain measurements over a 90 day period. This is the first investigation to show long term recordings of ICP in a large animal model without an invasive craniotomy procedure. Further, this is the first investigation reporting increased ICP after radiation exposure.
Shielding in ungated field emitter arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, J. R.; Jensen, K. L.; Shiffler, D. A.
Cathodes consisting of arrays of high aspect ratio field emitters are of great interest as sources of electron beams for vacuum electronic devices. The desire for high currents and current densities drives the cathode designer towards a denser array, but for ungated emitters, denser arrays also lead to increased shielding, in which the field enhancement factor β of each emitter is reduced due to the presence of the other emitters in the array. To facilitate the study of these arrays, we have developed a method for modeling high aspect ratio emitters using tapered dipole line charges. This method can be used to investigate proximity effects from similar emitters an arbitrary distance away and is much less computationally demanding than competing simulation approaches. Here, we introduce this method and use it to study shielding as a function of array geometry. Emitters with aspect ratios of 10²–10⁴ are modeled, and the shielding-induced reduction in β is considered as a function of tip-to-tip spacing for emitter pairs and for large arrays with triangular and square unit cells. Shielding is found to be negligible when the emitter spacing is greater than the emitter height for the two-emitter array, or about 2.5 times the emitter height in the large arrays, in agreement with previously published results. Because the onset of shielding occurs at virtually the same emitter spacing in the square and triangular arrays, the triangular array is preferred for its higher emitter density at a given emitter spacing. The primary contribution to shielding in large arrays is found to come from emitters within a distance of three times the unit cell spacing for both square and triangular arrays.
Isolating relativistic effects in large-scale structure
NASA Astrophysics Data System (ADS)
Bonvin, Camille
2014-12-01
We present a fully relativistic calculation of the observed galaxy number counts in the linear regime. We show that besides the density fluctuations and redshift-space distortions, various relativistic effects contribute to observations at large scales. These effects all have the same physical origin: they result from the fact that our coordinate system, namely the galaxy redshift and the incoming photons’ direction, is distorted by inhomogeneities in our Universe. We then discuss the impact of the relativistic effects on the angular power spectrum and on the two-point correlation function in configuration space. We show that the latter is very well adapted to isolate the relativistic effects since it naturally makes use of the symmetries of the different contributions. In particular, we discuss how the Doppler effect and the gravitational redshift distortions can be isolated by looking for a dipole in the cross-correlation function between a bright and a faint population of galaxies.
NASA Technical Reports Server (NTRS)
2002-01-01
Cosmic-ray background fluxes were modeled based on existing measurements and theories and are presented here. The model, originally developed for the Gamma-ray Large Area Space Telescope (GLAST) Balloon Experiment, covers the entire solid angle (4π sr), the sensitive energy range of the instrument (approximately 10 MeV to 100 GeV) and abundant components (proton, alpha, e⁻, e⁺, μ⁻, μ⁺ and gamma). It is expressed in analytic functions in which modulations due to the solar activity and the Earth geomagnetism are parameterized. Although the model is intended to be used primarily for the GLAST Balloon Experiment, model functions in low-Earth orbit are also presented and can be used for other high energy astrophysical missions. The model has been validated via comparison with the data of the GLAST Balloon Experiment.
NASA Technical Reports Server (NTRS)
McElwain, Michael; Van Gorkom, Kyle; Bowers, Charles W.; Carnahan, Timothy M.; Kimble, Randy A.; Knight, J. Scott; Lightsey, Paul; Maghami, Peiman G.; Mustelier, David; Niedner, Malcolm B.;
2017-01-01
The James Webb Space Telescope (JWST) is a large (6.5 m) cryogenic segmented aperture telescope with science instruments that cover the near- and mid-infrared from 0.6-27 microns. The large aperture not only provides high photometric sensitivity, but it also enables high angular resolution across the bandpass, with a diffraction limited point spread function (PSF) at wavelengths longer than 2 microns. The JWST PSF quality and stability are intimately tied to the science capabilities as it is convolved with the astrophysical scene. However, the PSF evolves at a variety of timescales based on telescope jitter and thermal distortion as the observatory attitude is varied. We present the image quality and stability requirements, recent predictions from integrated modeling, measurements made during ground-based testing, and performance characterization activities that will be carried out as part of the commissioning process.
Simple Deterministically Constructed Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Rodan, Ali; Tiňo, Peter
A large number of models for time series processing, forecasting or modeling follows a state-space formulation. Models in the specific class of state-space approaches, referred to as Reservoir Computing, fix their state-transition function. The state space with the associated state transition structure forms a reservoir, which is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be potentially exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both the researchers and practitioners having to rely on a series of trials and errors. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performances comparable to those of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proved theoretical limit.
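To make the "simple cycle topology" concrete, here is a hedged sketch of an echo-state-style reservoir whose recurrent weight matrix is a plain ring (each unit feeds only its neighbour) with a single weight value, trained with a linear least-squares readout; the sizes, scalings, and toy prediction task are illustrative assumptions rather than the authors' benchmark setup.

```python
import numpy as np

def cycle_reservoir(n_units=100, cycle_weight=0.9, input_scale=0.5, seed=0):
    """Deterministic ring reservoir: unit i receives only unit i-1 (mod n)."""
    W = np.zeros((n_units, n_units))
    W[np.arange(n_units), np.arange(-1, n_units - 1)] = cycle_weight
    rng = np.random.default_rng(seed)
    # Input weights: fixed magnitude, pseudo-random signs (one simple choice).
    W_in = input_scale * rng.choice([-1.0, 1.0], size=n_units)
    return W, W_in

def run_reservoir(u, W, W_in):
    states = np.zeros((len(u), W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + W_in * ut)
        states[t] = x
    return states

# Toy task: predict u(t+1) from the reservoir state driven by u(t).
rng = np.random.default_rng(1)
u = np.sin(np.arange(2000) * 0.2) + 0.05 * rng.standard_normal(2000)
W, W_in = cycle_reservoir()
X = run_reservoir(u[:-1], W, W_in)
readout, *_ = np.linalg.lstsq(X, u[1:], rcond=None)
pred = X @ readout
print("train MSE:", np.mean((pred - u[1:]) ** 2))
```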
Effectively-truncated large-scale shell-model calculations and nuclei around 100Sn
NASA Astrophysics Data System (ADS)
Gargano, A.; Coraggio, L.; Itaco, N.
2017-09-01
This paper presents a short overview of a procedure we have recently introduced, dubbed the double-step truncation method, which is aimed at reducing the computational complexity of large-scale shell-model calculations. Within this procedure, one starts with a realistic shell-model Hamiltonian defined in a large model space, and then, by analyzing the effective single particle energies of this Hamiltonian as a function of the number of valence protons and/or neutrons, reduced model spaces are identified containing only the single-particle orbitals relevant to the description of the spectroscopic properties of a certain class of nuclei. As a final step, new effective shell-model Hamiltonians defined within the reduced model spaces are derived by way of a unitary transformation of the original large-scale Hamiltonian. A detailed account of this transformation is given and the merit of the double-step truncation method is illustrated by discussing a few selected results for 96Mo, described as four protons and four neutrons outside 88Sr. Some new preliminary results for light odd-tin isotopes from A = 101 to 107 are also reported.
Griffis, Joseph C.; Elkhetali, Abdurahman S.; Burge, Wesley K.; Chen, Richard H.; Bowman, Anthony D.; Szaflarski, Jerzy P.; Visscher, Kristina M.
2016-01-01
Psychophysical and neurobiological evidence suggests that central and peripheral vision are specialized for different functions. This specialization of function might be expected to lead to differences in the large-scale functional interactions of early cortical areas that represent central and peripheral visual space. Here, we characterize differences in whole-brain functional connectivity among sectors in primary visual cortex (V1) corresponding to central, near-peripheral, and far-peripheral vision during resting fixation. Importantly, our analyses reveal that eccentricity sectors in V1 have different functional connectivity with non-visual areas associated with large-scale brain networks. Regions associated with the fronto-parietal control network are most strongly connected with central sectors of V1, regions associated with the cingulo-opercular control network are most strongly connected with near-peripheral sectors of V1, and regions associated with the default mode and auditory networks are most strongly connected with far-peripheral sectors of V1. Additional analyses suggest that similar patterns are present during eyes-closed rest. These results suggest that different types of visual information may be prioritized by large-scale brain networks with distinct functional profiles, and provide insights into how the small-scale functional specialization within early visual regions such as V1 relates to the large-scale organization of functionally distinct whole-brain networks. PMID:27554527
A Practical Computational Method for the Anisotropic Redshift-Space 3-Point Correlation Function
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.
2018-04-01
We present an algorithm enabling computation of the anisotropic redshift-space galaxy 3-point correlation function (3PCF) scaling as N², with N the number of galaxies. Our previous work showed how to compute the isotropic 3PCF with this scaling by expanding the radially-binned density field around each galaxy in the survey into spherical harmonics and combining these coefficients to form multipole moments. The N² scaling occurred because this approach never explicitly required the relative angle between a galaxy pair about the primary galaxy. Here we generalize this work, demonstrating that in the presence of azimuthally-symmetric anisotropy produced by redshift-space distortions (RSD) the 3PCF can be described by two triangle side lengths, two independent total angular momenta, and a spin. This basis for the anisotropic 3PCF allows its computation with negligible additional work over the isotropic 3PCF. We also present the covariance matrix of the anisotropic 3PCF measured in this basis. Our algorithm tracks the full 5-D redshift-space 3PCF, uses an accurate line of sight to each triplet, is exact in angle, and easily handles edge correction. It will enable use of the anisotropic large-scale 3PCF as a probe of RSD in current and upcoming large-scale redshift surveys.
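The N² scaling described above comes from expanding the binned density field around each galaxy in spherical harmonics and combining the coefficients into multipoles. The sketch below is a naive, hedged illustration of that bookkeeping for the isotropic case (a_lm coefficients per radial bin combined pairwise), with a tiny placeholder catalogue, no normalization factors, and no edge correction; it shows the structure of the calculation, not the optimized production algorithm.

```python
import numpy as np
from scipy.special import sph_harm

def alm_coefficients(primary, neighbours, r_edges, lmax=2):
    """Spherical-harmonic coefficients of neighbour counts in radial bins."""
    d = neighbours - primary
    r = np.linalg.norm(d, axis=1)
    theta = np.arccos(np.clip(d[:, 2] / np.maximum(r, 1e-12), -1, 1))  # polar
    phi = np.arctan2(d[:, 1], d[:, 0])                                  # azimuth
    nbins = len(r_edges) - 1
    alm = np.zeros((nbins, lmax + 1, 2 * lmax + 1), dtype=complex)
    bin_idx = np.digitize(r, r_edges) - 1
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            # scipy convention: sph_harm(m, l, azimuthal, polar)
            y = sph_harm(m, l, phi, theta)
            for b in range(nbins):
                alm[b, l, m + lmax] = np.sum(np.conj(y[bin_idx == b]))
    return alm

rng = np.random.default_rng(0)
gals = rng.uniform(0, 50, size=(300, 3))          # placeholder galaxy positions
r_edges = np.linspace(2, 20, 5)
a = alm_coefficients(gals[0], gals[1:], r_edges, lmax=2)

# Combine coefficients on two radial bins into (un-normalized) multipoles.
nbins, lmax = 4, 2
zeta = np.zeros((nbins, nbins, lmax + 1))
for b1 in range(nbins):
    for b2 in range(nbins):
        for l in range(lmax + 1):
            zeta[b1, b2, l] = np.sum(a[b1, l] * np.conj(a[b2, l])).real
print(zeta[0, 1])   # multipoles l = 0..2 for one radial-bin pair
```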
NASA Astrophysics Data System (ADS)
Song, Dongpo; Lin, Ying; Qian, Gang; Wang, Xinyu; Liu, Xiaohui; Li, Cheng; Watkins, James
2014-03-01
The preparation of well-ordered nanocomposites using block copolymers and nanoparticles (NPs) with precise control over their spatial organization at different length scales remains challenging, especially for NP cores up to 10 nm in diameter and for domain spacings greater than 100 nm. In this work, these challenges have been overcome using amphiphilic bottle brush block copolymers as templates for the self-assembly of ordered, periodic hybrid materials with domain spacings more than 130 nm using functionalized NPs with core diameters up to 15 nm. CdSe NPs of 10 nm or gold NPs of 15 nm bearing 11-mercaptoundecyl-hydroquinone or poly(4-vinylphenol) ligands were selectively incorporated within (polynorbornene-g-polystyrene)-b- (polynorbornene-g-polyethylene oxide) copolymers by taking advantage of hydrogen bonding between the ligand and PEO domain. Well-ordered composites with cylindrical and lamellar morphologies and NP loadings of up to 30 wt% in the target domains were achieved. This strategy provides a simple and robust means to create ordered hybrid materials of large domain spacings allowing for relatively large functional nanoparticles. This work was supported by the NSF Center for Hierarchical Manufacturing at the University of Massachusetts (CMMI-1025020).
Suzuki, Satoshi N; Kachi, Naoki; Suzuki, Jun-Ichirou
2008-09-01
During the development of an even-aged plant population, the spatial distribution of individuals often changes from a clumped pattern to a random or regular one. The development of local size hierarchies in an Abies forest was analysed for a period of 47 years following a large disturbance in 1959. In 1980 all trees in an 8 x 8 m plot were mapped and their height growth after the disturbance was estimated. Their mortality and growth were then recorded at 1- to 4-year intervals between 1980 and 2006. Spatial distribution patterns of trees were analysed by the pair correlation function. Spatial correlations between tree heights were analysed with a spatial autocorrelation function and the mark correlation function. The mark correlation function was able to detect a local size hierarchy that could not be detected by the spatial autocorrelation function alone. The small-scale spatial distribution pattern of trees changed from clumped to slightly regular during the 47 years. Mortality occurred in a density-dependent manner, which resulted in regular spacing between trees after 1980. The spatial autocorrelation and mark correlation functions revealed the existence of tree patches consisting of large trees at the initial stage. Development of a local size hierarchy was detected within the first decade after the disturbance, although the spatial autocorrelation was not negative. Local size hierarchies that developed persisted until 2006, and the spatial autocorrelation became negative at later stages (after about 40 years). This is the first study to detect local size hierarchies as a prelude to regular spacing using the mark correlation function. The results confirm that use of the mark correlation function together with the spatial autocorrelation function is an effective tool to analyse the development of a local size hierarchy of trees in a forest.
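For readers who have not used it, the mark correlation function compares the mean product of marks (here, tree heights) on pairs of points at separation r with the squared mean mark; values below one at short range indicate a local size hierarchy. Below is a hedged, naive pair-counting sketch with synthetic positions and heights standing in for the plot data; the binning and distributions are illustrative assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist

def mark_correlation(points, marks, r_edges):
    """k_mm(r): mean pairwise mark product at separation r over (mean mark)^2."""
    sep = pdist(points)
    i, j = np.triu_indices(len(points), k=1)   # same pair ordering as pdist
    products = marks[i] * marks[j]
    k = np.empty(len(r_edges) - 1)
    for b in range(len(r_edges) - 1):
        in_bin = (sep >= r_edges[b]) & (sep < r_edges[b + 1])
        k[b] = products[in_bin].mean() / marks.mean() ** 2 if in_bin.any() else np.nan
    return k

rng = np.random.default_rng(0)
xy = rng.uniform(0, 8, size=(200, 2))             # placeholder stem map (8 x 8 m)
heights = rng.lognormal(mean=1.0, sigma=0.4, size=200)
print(mark_correlation(xy, heights, np.linspace(0.1, 4.0, 9)))
```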
DESTINY, The Dark Energy Space Telescope
NASA Technical Reports Server (NTRS)
Pasquale, Bert A.; Woodruff, Robert A.; Benford, Dominic J.; Lauer, Tod
2007-01-01
We have proposed the development of a low-cost space telescope, Destiny, as a concept for the NASA/DOE Joint Dark Energy Mission. Destiny is a 1.65m space telescope, featuring a near-infrared (0.85-1.7 μm) survey camera/spectrometer with a moderate flat-field field of view (FOV). Destiny will probe the properties of dark energy by obtaining a Hubble diagram based on Type Ia supernovae and a large-scale mass power spectrum derived from weak lensing distortions of field galaxies as a function of redshift.
Analytical solutions of the space-time fractional Telegraph and advection-diffusion equations
NASA Astrophysics Data System (ADS)
Tawfik, Ashraf M.; Fichtner, Horst; Schlickeiser, Reinhard; Elhanbaly, A.
2018-02-01
The aim of this paper is to develop a fractional derivative model of energetic particle transport for both uniform and non-uniform large-scale magnetic field by studying the fractional Telegraph equation and the fractional advection-diffusion equation. Analytical solutions of the space-time fractional Telegraph equation and space-time fractional advection-diffusion equation are obtained by use of the Caputo fractional derivative and the Laplace-Fourier technique. The solutions are given in terms of Fox's H function. As an illustration they are applied to the case of solar energetic particles.
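For orientation, one commonly studied form of a space-time fractional advection-diffusion equation combines a Caputo derivative in time with a Riesz-type fractional derivative in space. The generic form below is an assumption for illustration; the paper's exact operators and parameter ranges may differ.

$$ \frac{\partial^{\alpha} f(x,t)}{\partial t^{\alpha}} \;=\; -\,v\,\frac{\partial f(x,t)}{\partial x} \;+\; \kappa\,\frac{\partial^{\mu} f(x,t)}{\partial |x|^{\mu}}, \qquad 0<\alpha\leq 1, \quad 1<\mu\leq 2, $$

where the time derivative is taken in the Caputo sense; applying a Laplace transform in time and a Fourier transform in space, as the abstract describes, yields solutions expressible through Fox's H function.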
Omega from the anisotropy of the redshift correlation function
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1993-01-01
Peculiar velocities distort the correlation function of galaxies observed in redshift space. In the large-scale, linear regime, the distortion takes a characteristic quadrupole plus hexadecapole form, with the amplitude of the distortion depending on the cosmological density parameter omega. Preliminary measurements are reported here of the harmonics of the correlation function in the CfA, SSRS, and IRAS 2 Jansky redshift surveys. The observed behavior of the harmonics agrees qualitatively with the predictions of linear theory on large scales in every survey. However, real anisotropy in the galaxy distribution induces large fluctuations in samples which do not yet probe a sufficiently fair volume of the Universe. In the CfA 14.5 sample in particular, the Great Wall induces a large negative quadrupole, which taken at face value implies an unrealistically large omega of about 20. The IRAS 2 Jy survey, which covers a substantially larger volume than the optical surveys and is less affected by fingers-of-god, yields a more reliable and believable value, omega = 0.5^{+0.5}_{-0.25}.
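The quadrupole distortion referred to above is usually summarized by the linear-theory ratio of the quadrupole to the monopole. The standard Kaiser result for the power spectrum multipoles is reproduced below for orientation; it is a generic textbook relation rather than this paper's estimator, and the corresponding correlation-function ratio carries an additional factor that depends on the slope of the correlation function.

$$ \frac{P_2(k)}{P_0(k)} \;=\; \frac{\tfrac{4}{3}\beta + \tfrac{4}{7}\beta^{2}}{1 + \tfrac{2}{3}\beta + \tfrac{1}{5}\beta^{2}}, \qquad \beta \simeq \frac{\Omega^{0.6}}{b}, $$

so a measured quadrupole-to-monopole ratio translates into a constraint on omega once the bias b is specified.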
Space Station and the life sciences
NASA Technical Reports Server (NTRS)
White, R. J.; Leonard, J. I.; Cramer, D. B.; Bishop, W. P.
1983-01-01
Previous fundamental research in space life sciences is examined, and consideration is devoted to studies relevant to Space Station activities. Microgravity causes weight loss, hemoconcentration, and orthostatic intolerance when astronauts return to Earth. Losses in bone density, bone calcium, and muscle nitrogen have also been observed, together with cardiovascular deconditioning, fluid-electrolyte metabolism alteration, and space sickness. Experiments have been performed with plants, bacteria, fungi, protozoa, tissue cultures, invertebrate species, and with nonhuman vertebrates, showing little effect on simple cell functions. The Spacelab first flight will feature seven life science experiments and the second flight, two. Further studies will be performed on later flights. Continued life science studies to optimize human performance in space are necessary for the efficient operation of a Space Station and the assembly of large space structures, particularly in interaction with automated machinery.
NASA Technical Reports Server (NTRS)
Avery, Don E.; Kaszubowski, Martin J.; Kearney, Michael E.; Howard, Trevor P.
1996-01-01
It is anticipated that, as the utilization of space increases in both the government and commercial sectors, there will be a high degree of interest in materials and coatings research as well as research in space environment definition, deployable structures, multi-functional structures, and electronics. The International Space Station (ISS) is an excellent platform for long-term technology development because it provides large areas for external attached payloads, power and data capability, and ready access for experiment exchange and return. An alliance of SPACEHAB, MicroCraft, Inc. and SpaceTec, Inc. has been formed to satisfy this research need through commercial utilization of the capabilities of ISS. The alliance will provide a family of facilities designed to provide low-cost, reliable access to space for experimenters. This service would start as early as 1997 and mature to a fully functional attached facility on ISS by 2001. The alliance's facilities are based on early activities by NASA Langley Research Center (LaRC) to determine the feasibility of a Material Exposure Facility (MEF).
Robotic influence in the conceptual design of mechanical systems in space and vice versa - A survey
NASA Technical Reports Server (NTRS)
Sanger, George F.
1988-01-01
A survey of methods using robotic devices to construct structural elements in space is presented. Two approaches to robotic construction are considered: one in which the structural elements are designed using conventional aerospace techniques, which tend to constrain the functional aspects of robotics, and one in which the structural elements are designed from the conceptual stage with built-in robotic features. Examples are presented of structural building concepts using robotics, including the construction of the SP-100 nuclear reactor power system, a multimirror large aperture IR space telescope concept, retrieval and repair in space, and the Flight Telerobotic Servicer.
NASA Technical Reports Server (NTRS)
Manford, J. S.; Bennett, G. R.
1985-01-01
The Space Station Program will incorporate analysis of operations constraints and considerations in the early design phases to avoid the need for later modifications to the Space Station for operations. The application of modern tools and administrative techniques to minimize the cost of performing effective orbital operations planning and design analysis in the preliminary design phase of the Space Station Program is discussed. Tools and techniques discussed include: approach for rigorous analysis of operations functions, use of the resources of a large computer network, and providing for efficient research and access to information.
Challenging Technology, and Technology Infusion into 21st Century
NASA Technical Reports Server (NTRS)
Chau, S. N.; Hunter, D. J.
2001-01-01
In preparing for the space exploration challenges of the next century, the National Aeronautics and Space Administration (NASA) Center for Integrated Space Micro-Systems (CISM) is chartered to develop advanced spacecraft systems that can be adapted for a large spectrum of future space missions. Enabling this task are revolutions in the miniaturization of electrical, mechanical, and computational functions. On the other hand, these revolutionary technologies usually have much lower readiness levels than those required by flight projects. The mission of the Advanced Micro Spacecraft (AMS) task in CISM is to bridge the readiness gap between advanced technologies and flight projects. Additional information is contained in the original extended abstract.
Low-cost space-varying FIR filter architecture for computational imaging systems
NASA Astrophysics Data System (ADS)
Feng, Guotong; Shoaib, Mohammed; Schwartz, Edward L.; Dirk Robinson, M.
2010-01-01
Recent research demonstrates the advantage of designing electro-optical imaging systems by jointly optimizing the optical and digital subsystems. The optical systems designed using this joint approach intentionally introduce large and often space-varying optical aberrations that produce blurry optical images. Digital sharpening restores reduced contrast due to these intentional optical aberrations. Computational imaging systems designed in this fashion have several advantages including extended depth-of-field, lower system costs, and improved low-light performance. Currently, most consumer imaging systems lack the necessary computational resources to compensate for these optical systems with large aberrations in the digital processor. Hence, the exploitation of the advantages of the jointly designed computational imaging system requires low-complexity algorithms enabling space-varying sharpening. In this paper, we describe a low-cost algorithmic framework and associated hardware enabling the space-varying finite impulse response (FIR) sharpening required to restore largely aberrated optical images. Our framework leverages the space-varying properties of optical images formed using rotationally-symmetric optical lens elements. First, we describe an approach to leverage the rotational symmetry of the point spread function (PSF) about the optical axis allowing computational savings. Second, we employ a specially designed bank of sharpening filters tuned to the specific radial variation common to optical aberrations. We evaluate the computational efficiency and image quality achieved by using this low-cost space-varying FIR filter architecture.
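To illustrate the kind of processing being described (not the authors' hardware architecture), the sketch below applies a small bank of FIR sharpening kernels indexed by radial distance from the optical axis, exploiting the rotational symmetry mentioned above; the kernels, bank size, and radius thresholds are placeholder assumptions standing in for filters tuned to a real lens's radial aberrations.

```python
import numpy as np
from scipy.ndimage import convolve

def radial_zones(shape, n_zones=3):
    """Assign each pixel to a radial zone about the image centre."""
    yy, xx = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2, (shape[1] - 1) / 2
    r = np.hypot(yy - cy, xx - cx)
    edges = np.linspace(0, r.max() + 1e-9, n_zones + 1)
    return np.digitize(r, edges) - 1

def space_varying_sharpen(image, n_zones=3):
    zones = radial_zones(image.shape, n_zones)
    out = np.zeros_like(image, dtype=float)
    for z in range(n_zones):
        # Stronger unsharp-masking toward the edge of the field (placeholder
        # for kernels matched to the measured radial PSF variation).
        strength = 1.0 + z
        kernel = -strength * np.ones((3, 3)) / 8.0
        kernel[1, 1] = 1.0 + strength          # unity DC gain overall
        filtered = convolve(image, kernel, mode="nearest")
        out[zones == z] = filtered[zones == z]
    return out

rng = np.random.default_rng(0)
blurry = rng.random((64, 64))                  # placeholder sensor image
print(space_varying_sharpen(blurry).shape)
```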
Target Discovery for Precision Medicine Using High-Throughput Genome Engineering.
Guo, Xinyi; Chitale, Poonam; Sanjana, Neville E
2017-01-01
Over the past few years, programmable RNA-guided nucleases such as the CRISPR/Cas9 system have ushered in a new era of precision genome editing in diverse model systems and in human cells. Functional screens using large libraries of RNA guides can interrogate a large hypothesis space to pinpoint particular genes and genetic elements involved in fundamental biological processes and disease-relevant phenotypes. Here, we review recent high-throughput CRISPR screens (e.g. loss-of-function, gain-of-function, and targeting noncoding elements) and highlight their potential for uncovering novel therapeutic targets, such as those involved in cancer resistance to small molecular drugs and immunotherapies, tumor evolution, infectious disease, inborn genetic disorders, and other therapeutic challenges.
Space Shuttle critical function audit
NASA Technical Reports Server (NTRS)
Sacks, Ivan J.; Dipol, John; Su, Paul
1990-01-01
A large fault-tolerance model of the main propulsion system of the US space shuttle has been developed. This model is being used to identify single components and pairs of components that will cause loss of shuttle critical functions. In addition, this model is the basis for risk quantification of the shuttle. The process used to develop and analyze the model is digraph matrix analysis (DMA). The DMA modeling and analysis process is accessed via a graphics-based computer user interface. This interface provides coupled display of the integrated system schematics, the digraph models, the component database, and the results of the fault tolerance and risk analyses.
Model error estimation for distributed systems described by elliptic equations
NASA Technical Reports Server (NTRS)
Rodriguez, G.
1983-01-01
A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.
NASA Astrophysics Data System (ADS)
Buessen, Finn Lasse; Roscher, Dietrich; Diehl, Sebastian; Trebst, Simon
2018-02-01
The pseudofermion functional renormalization group (pf-FRG) is one of the few numerical approaches that has been demonstrated to quantitatively determine the ordering tendencies of frustrated quantum magnets in two and three spatial dimensions. The approach, however, relies on a number of presumptions and approximations, in particular the choice of pseudofermion decomposition and the truncation of an infinite number of flow equations to a finite set. Here we generalize the pf-FRG approach to SU(N)-spin systems with arbitrary N and demonstrate that the scheme becomes exact in the large-N limit. Numerically solving the generalized real-space renormalization group equations for arbitrary N, we can make a stringent connection between the physically most significant case of SU(2) spins and more accessible SU(N) models. In a case study of the square-lattice SU(N) Heisenberg antiferromagnet, we explicitly demonstrate that the generalized pf-FRG approach is capable of identifying the instability indicating the transition into a staggered flux spin liquid ground state in these models for large, but finite, values of N. In a companion paper [Roscher et al., Phys. Rev. B 97, 064416 (2018), 10.1103/PhysRevB.97.064416] we formulate a momentum-space pf-FRG approach for SU(N) spin models that allows us to explicitly study the large-N limit and access the low-temperature spin liquid phase.
Precision of radio science instrumentation for planetary exploration
NASA Technical Reports Server (NTRS)
Asmar, S. W.; Armstrong, J. W.; Iess, L.; Tortora, P.
2004-01-01
The Deep Space Network is the largest and most sensitive scientific telecommunications facility. Its primary function is to provide two-way communication between the Earth and spacecraft exploring the solar system. It is instrumented with large parabolic reflectors, high-power transmitters, and low-noise amplifiers and receivers.
NASA Astrophysics Data System (ADS)
Kähler, Sven; Olsen, Jeppe
2017-11-01
A computational method is presented for systems that require high-level treatments of static and dynamic electron correlation but cannot be treated using conventional complete active space self-consistent field-based methods due to the required size of the active space. Our method introduces an efficient algorithm for perturbative dynamic correlation corrections for compact non-orthogonal MCSCF calculations. In the algorithm, biorthonormal expansions of orbitals and CI-wave functions are used to reduce the scaling of the performance determining step from quadratic to linear in the number of configurations. We describe a hierarchy of configuration spaces that can be chosen for the active space. Potential curves for the nitrogen molecule and the chromium dimer are compared for different configuration spaces. Already the most compact spaces yield qualitatively correct potentials that with increasing size of configuration spaces systematically approach complete active space results.
2014-01-01
Background: Due to rapid sequencing of genomes, there are now millions of deposited protein sequences with no known function. Fast sequence-based comparisons allow detecting close homologs for a protein of interest to transfer functional information from the homologs to the given protein. Sequence-based comparison cannot detect remote homologs, in which evolution has adjusted the sequence while largely preserving structure. Structure-based comparisons can detect remote homologs but most methods for doing so are too expensive to apply at a large scale over structural databases of proteins. Recently, fragment-based structural representations have been proposed that allow fast detection of remote homologs with reasonable accuracy. These representations have also been used to obtain linearly-reducible maps of protein structure space. It has been shown, as additionally supported by analysis in this paper, that such maps preserve functional co-localization of the protein structure space. Methods: Inspired by a recent application of the Latent Dirichlet Allocation (LDA) model for conducting structural comparisons of proteins, we propose higher-order LDA-obtained topic-based representations of protein structures to provide an alternative route for remote homology detection and organization of the protein structure space in few dimensions. Various techniques based on natural language processing are proposed and employed to aid the analysis of topics in the protein structure domain. Results: We show that a topic-based representation is just as effective as a fragment-based one at automated detection of remote homologs and organization of protein structure space. We conduct a detailed analysis of the information content in the topic-based representation, showing that topics have semantic meaning. The fragment-based and topic-based representations are also shown to allow prediction of superfamily membership. Conclusions: This work opens exciting venues in designing novel representations to extract information about protein structures, as well as organizing and mining protein structure space with mature text mining tools. PMID:25080993
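As a rough sketch of the topic-modelling step described above, the code below runs scikit-learn's LatentDirichletAllocation on a toy "bag of structural fragments" count matrix standing in for the fragment-based protein representations; the corpus, fragment vocabulary size, and topic count are all placeholder assumptions.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus: each row describes a protein by counts of 50 structural
# fragment types (placeholder for a real fragment library).
rng = np.random.default_rng(0)
fragment_counts = rng.poisson(lam=2.0, size=(100, 50))

lda = LatentDirichletAllocation(n_components=8, random_state=0)
topic_mix = lda.fit_transform(fragment_counts)   # proteins x topics

# The low-dimensional topic mixtures can be compared across proteins,
# e.g. with cosine similarity, as a stand-in for remote homology scoring.
a, b = topic_mix[0], topic_mix[1]
cosine = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
print(topic_mix.shape, round(float(cosine), 3))
```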
Efficient conformational space exploration in ab initio protein folding simulation.
Ullah, Ahammed; Ahmed, Nasif; Pappu, Subrata Dey; Shatabda, Swakkhar; Ullah, A Z M Dayem; Rahman, M Sohel
2015-08-01
Ab initio protein folding simulation largely depends on knowledge-based energy functions that are derived from known protein structures using statistical methods. These knowledge-based energy functions provide us with a good approximation of real protein energetics. However, these energy functions are not very informative for search algorithms and fail to distinguish the types of amino acid interactions that contribute largely to the energy function from those that do not. As a result, search algorithms frequently get trapped into the local minima. On the other hand, the hydrophobic-polar (HP) model considers hydrophobic interactions only. The simplified nature of HP energy function makes it limited only to a low-resolution model. In this paper, we present a strategy to derive a non-uniform scaled version of the real 20×20 pairwise energy function. The non-uniform scaling helps tackle the difficulty faced by a real energy function, whereas the integration of 20×20 pairwise information overcomes the limitations faced by the HP energy function. Here, we have applied a derived energy function with a genetic algorithm on discrete lattices. On a standard set of benchmark protein sequences, our approach significantly outperforms the state-of-the-art methods for similar models. Our approach has been able to explore regions of the conformational space which all the previous methods have failed to explore. Effectiveness of the derived energy function is presented by showing qualitative differences and similarities of the sampled structures to the native structures. Number of objective function evaluation in a single run of the algorithm is used as a comparison metric to demonstrate efficiency.
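The "non-uniform scaled version of the real 20×20 pairwise energy function" can be pictured as multiplying each amino-acid pair's contact energy by its own scale factor before summing over lattice contacts. The sketch below shows that bookkeeping on a toy 2D lattice conformation; the random symmetric energy and scale matrices stand in for the derived ones and are purely illustrative.

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"                         # 20 residue types
rng = np.random.default_rng(0)
M = rng.normal(size=(20, 20)); M = (M + M.T) / 2       # placeholder pair energies
S = rng.uniform(0.5, 2.0, size=(20, 20)); S = (S + S.T) / 2  # placeholder scales

def scaled_lattice_energy(sequence, coords):
    """Sum S_ij * M_ij over non-bonded lattice contacts (unit distance)."""
    idx = [AMINO.index(a) for a in sequence]
    coords = np.asarray(coords)
    energy = 0.0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):          # skip chain neighbours
            if np.sum(np.abs(coords[i] - coords[j])) == 1:   # lattice contact
                energy += S[idx[i], idx[j]] * M[idx[i], idx[j]]
    return energy

# Toy conformation: a U-shaped walk on the 2D square lattice.
seq = "ACDEFG"
walk = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
print(scaled_lattice_energy(seq, walk))
```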
NASA Technical Reports Server (NTRS)
Vilnrotter, Victor A.
2012-01-01
The potential development of large aperture ground-based "photon bucket" optical receivers for deep space communications has received considerable attention recently. One approach currently under investigation proposes to polish the aluminum reflector panels of 34-meter microwave antennas to high reflectance, and accept the relatively large spotsize generated by even state-of-the-art polished aluminum panels. Here we describe the experimental effort currently underway at the Deep Space Network (DSN) Goldstone Communications Complex in California, to test and verify these concepts in a realistic operational environment. A custom designed aluminum panel has been mounted on the 34 meter research antenna at Deep-Space Station 13 (DSS-13), and a remotely controlled CCD camera with a large CCD sensor in a weather-proof container has been installed next to the subreflector, pointed directly at the custom polished panel. Using the planet Jupiter as the optical point-source, the point-spread function (PSF) generated by the polished panel has been characterized, the array data processed to determine the center of the intensity distribution, and expected communications performance of the proposed polished panel optical receiver has been evaluated.
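The "center of the intensity distribution" step can be as simple as an intensity-weighted centroid over the CCD array after background subtraction; the snippet below is a hedged, generic version of that computation, not the DSS-13 processing chain, and the synthetic frame is a placeholder for the recorded point-source images.

```python
import numpy as np

def psf_centroid(frame, background=None):
    """Intensity-weighted centroid (row, col) of a CCD frame."""
    img = frame.astype(float)
    img -= np.median(img) if background is None else background
    img = np.clip(img, 0, None)                 # ignore negative residuals
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

# Synthetic blurred point source standing in for Jupiter on the sensor.
yy, xx = np.indices((128, 128))
frame = 50 + 200 * np.exp(-(((yy - 70) ** 2 + (xx - 40) ** 2) / (2 * 6.0 ** 2)))
print(psf_centroid(frame))                      # ~ (70, 40)
```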
An Expanded Role for the Dorsal Auditory Pathway in Sensorimotor Control and Integration
Rauschecker, Josef P.
2010-01-01
The dual-pathway model of auditory cortical processing assumes that two largely segregated processing streams originating in the lateral belt subserve the two main functions of hearing: identification of auditory “objects”, including speech; and localization of sounds in space (Rauschecker and Tian, 2000). Evidence has accumulated, chiefly from work in humans and nonhuman primates, that an antero-ventral pathway supports the former function, whereas a postero-dorsal stream supports the latter, i.e. processing of space and motion-in-space. In addition, the postero-dorsal stream has also been postulated to subserve some functions of speech and language in humans. A recent review (Rauschecker and Scott, 2009) has proposed the possibility that both functions of the postero-dorsal pathway can be subsumed under the same structural forward model: an efference copy sent from prefrontal and premotor cortex provides the basis for “optimal state estimation” in the inferior parietal lobe and in sensory areas of the posterior auditory cortex. The current article corroborates this model by adding and discussing recent evidence. PMID:20850511
Effect of normalized plasma frequency on electron phase-space orbits in a free-electron laser
NASA Astrophysics Data System (ADS)
Ji, Yu-Pin; Wang, Shi-Jian; Xu, Jing-Yue; Xu, Yong-Gen; Liu, Xiao-Xu; Lu, Hong; Huang, Xiao-Li; Zhang, Shi-Chang
2014-02-01
Irregular phase-space orbits of the electrons are harmful to the electron-beam transport quality and hence deteriorate the performance of a free-electron laser (FEL). In previous literature, it was demonstrated that the irregularity of the electron phase-space orbits could be caused in several ways, such as by varying the wiggler amplitude or inducing sidebands. Based on a Hamiltonian model with a set of self-consistent differential equations, it is shown in this paper that the electron-beam normalized plasma frequency not only couples the electron motion with the FEL wave, which results in the evolution of the FEL wave field and a possible power saturation at a large beam current, but also causes irregularity of the electron phase-space orbits when the normalized plasma frequency has a sufficiently large value, even if the initial energy of the electron is equal to the synchronous energy or the FEL wave does not reach power saturation.
NASA Technical Reports Server (NTRS)
1976-01-01
The six themes identified by the Workshop have many common navigation, guidance, and control needs. All the earth orbit themes have a strong requirement for attitude, figure, and stabilization control of large space structures, a requirement not currently being supported. All but the space transportation theme have need for precision pointing of spacecraft and instruments. In addition, all the themes have requirements for increasing autonomous operations for such activities as spacecraft and experiment operations, onboard mission modification, rendezvous and docking, spacecraft assembly and maintenance, navigation and guidance, and self-checkout, test and repair. Major new efforts are required to conceptualize new approaches to large space antennas and arrays that are lightweight, readily deployable, and capable of precise attitude and figure control. Conventional approaches offer little hope of meeting these requirements. Functions that can benefit from increasing automation or autonomous operations are listed.
Macroscopic features of quantum fluctuations in large-N qubit systems
NASA Astrophysics Data System (ADS)
Klimov, Andrei B.; Muñoz, Carlos
2014-05-01
We introduce a discrete Q function of an N-qubit system projected into the space of symmetric measurements as a tool for analyzing general properties of quantum systems in the macroscopic limit. For known states the projected Q function helps to visualize the results of collective measurements, and for unknown states it can be approximately reconstructed by measuring the lowest moments of the collective variables.
Research Program for Vibration Control in Structures
NASA Technical Reports Server (NTRS)
Mingori, D. L.; Gibson, J. S.
1986-01-01
Purpose of program is to apply control theory to large space structures (LSS's) and to design a practical compensator for suppressing vibration. Program models the LSS as a distributed system. Control theory is applied to produce a compensator described by functional gains and transfer functions. Program is used for comparison of robustness of low- and high-order compensators that control surface vibrations of a realistic wrap-rib antenna. Program is written in FORTRAN for batch execution.
Modeling microbial community structure and functional diversity across time and space.
Larsen, Peter E; Gibbons, Sean M; Gilbert, Jack A
2012-07-01
Microbial communities exhibit exquisitely complex structure. Many aspects of this complexity, from the number of species to the total number of interactions, are currently very difficult to examine directly. However, extraordinary efforts are being made to make these systems accessible to scientific investigation. While recent advances in high-throughput sequencing technologies have improved accessibility to the taxonomic and functional diversity of complex communities, monitoring the dynamics of these systems over time and space - using appropriate experimental design - is still expensive. Fortunately, modeling can be used as a lens to focus low-resolution observations of community dynamics to enable mathematical abstractions of functional and taxonomic dynamics across space and time. Here, we review the approaches for modeling bacterial diversity at both the very large and the very small scales at which microbial systems interact with their environments. We show that modeling can help to connect biogeochemical processes to specific microbial metabolic pathways. © 2012 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.
Oscillatory bistability of real-space transfer in semiconductor heterostructures
NASA Astrophysics Data System (ADS)
Döttling, R.; Schöll, E.
1992-01-01
Charge transport parallel to the layers of a modulation-doped GaAs/AlxGa1-xAs heterostructure is studied theoretically. The heating of electrons by the applied electric field leads to real-space transfer of electrons from the GaAs into the adjacent AlxGa1-xAs layer. For sufficiently large dc bias, spontaneous periodic 100-GHz current oscillations, bistability, and hysteretic switching transitions between oscillatory and stationary states are predicted. We present a detailed investigation of complex bifurcation scenarios as a function of the bias voltage U0 and the load resistance RL. For large RL, subcritical Hopf bifurcations and global bifurcations of limit cycles are displayed.
Dyons and dyonic black holes in su(N) Einstein-Yang-Mills theory in anti-de Sitter spacetime
NASA Astrophysics Data System (ADS)
Shepherd, Ben L.; Winstanley, Elizabeth
2016-03-01
We present new spherically symmetric, dyonic soliton and black hole solutions of the su(N) Einstein-Yang-Mills equations in four-dimensional asymptotically anti-de Sitter spacetime. The gauge field has nontrivial electric and magnetic components and is described by N-1 magnetic gauge field functions and N-1 electric gauge field functions. We explore the phase space of solutions in detail for su(2) and su(3) gauge groups. Combinations of the electric gauge field functions are monotonic and have no zeros; in general the magnetic gauge field functions may have zeros. The phase space of solutions is extremely rich, and we find solutions in which the magnetic gauge field functions have more than fifty zeros. Of particular interest are solutions for which the magnetic gauge field functions have no zeros, which exist when the negative cosmological constant has sufficiently large magnitude. We conjecture that at least some of these nodeless solutions may be stable under linear, spherically symmetric perturbations.
Renormalizable Quantum Field Theories in the Large-N Limit
NASA Astrophysics Data System (ADS)
Guruswamy, Sathya
1995-01-01
In this thesis, we study two examples of renormalizable quantum field theories in the large-N limit. Chapter one is a general introduction describing physical motivations for studying such theories. In chapter two, we describe the large-N method in field theory and discuss the pioneering work of 't Hooft in large-N two-dimensional Quantum Chromodynamics (QCD). In chapter three we study a spherically symmetric approximation to four-dimensional QCD ('spherical QCD'). We recast spherical QCD into a bilocal (constrained) theory of hadrons which in the large-N limit is equivalent to large-N spherical QCD at all energy scales. The linear approximation to this theory gives an eigenvalue equation which is the analogue of the well-known 't Hooft integral equation in two dimensions. This eigenvalue equation is scale invariant and therefore leads to divergences in the theory. We give a non-perturbative renormalization prescription to cure this and obtain a beta function which shows that large-N spherical QCD is asymptotically free. In chapter four, we review the essentials of conformal field theories in two and higher dimensions, particularly in the context of critical phenomena. In chapter five, we study the O(N) non-linear sigma model on three-dimensional curved spaces in the large-N limit and show that there is a non-trivial ultraviolet stable critical point at which it becomes conformally invariant. We study this model at this critical point on examples of spaces of constant curvature and compute the mass gap in the theory, the free energy density (which turns out to be a universal function of the information contained in the geometry of the manifold), and the two-point correlation functions. The results give an indication that this model is an example of a three-dimensional analogue of a rational conformal field theory. A conclusion with a brief summary and remarks follows at the end.
Cubic map algebra functions for spatio-temporal analysis
Mennis, J.; Viger, R.; Tomlin, C.D.
2005-01-01
We propose an extension of map algebra to three dimensions for spatio-temporal data handling. This approach yields a new class of map algebra functions that we call "cube functions." Whereas conventional map algebra functions operate on data layers representing two-dimensional space, cube functions operate on data cubes representing two-dimensional space over a third-dimensional period of time. We describe the prototype implementation of a spatio-temporal data structure and selected cube function versions of conventional local, focal, and zonal map algebra functions. The utility of cube functions is demonstrated through a case study analyzing the spatio-temporal variability of remotely sensed, southeastern U.S. vegetation character over various land covers and during different El Niño/Southern Oscillation (ENSO) phases. Like conventional map algebra, the application of cube functions may demand significant data preprocessing when integrating diverse data sets and is subject to limitations related to data storage and algorithm performance. Solutions to these issues include extending data compression and computing strategies for calculations on very large data volumes to spatio-temporal data handling.
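A brief illustrative sketch of what local, focal, and zonal "cube function" analogues might look like on an (x, y, t) data cube; it uses plain NumPy and does not reproduce the prototype implementation described in the paper.

```python
# Illustrative sketch of local, focal, and zonal cube-function analogues on an
# (x, y, t) data cube; synthetic data, not the paper's prototype.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

cube = np.random.default_rng(1).random((50, 50, 12))          # e.g. monthly NDVI for one year
zones = np.random.default_rng(2).integers(0, 3, (50, 50))     # land-cover zones

# Local cube function: cell-by-cell operation through the whole cube.
local_anomaly = cube - cube.mean(axis=2, keepdims=True)

# Focal cube function: mean over a moving 3x3x3 space-time neighbourhood.
windows = sliding_window_view(cube, (3, 3, 3))                # shape (48, 48, 10, 3, 3, 3)
focal_mean = windows.mean(axis=(-1, -2, -3))

# Zonal cube function: per-zone mean for every time slice.
zonal_mean = np.array([[cube[..., t][zones == z].mean() for z in range(3)]
                       for t in range(cube.shape[2])])        # shape (12, 3)

print(local_anomaly.shape, focal_mean.shape, zonal_mean.shape)
```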
Naden, Levi N; Shirts, Michael R
2016-04-12
We show how thermodynamic properties of molecular models can be computed over a large, multidimensional parameter space by combining multistate reweighting analysis with a linear basis function approach. This approach reduces the computational cost to estimate thermodynamic properties from molecular simulations for over 130,000 tested parameter combinations from over 1000 CPU years to tens of CPU days. This speed increase is achieved primarily by computing the potential energy as a linear combination of basis functions, computed from either modified simulation code or as the difference of energy between two reference states, which can be done without any simulation code modification. The thermodynamic properties are then estimated with the Multistate Bennett Acceptance Ratio (MBAR) as a function of multiple model parameters without the need to define a priori how the states are connected by a pathway. Instead, we adaptively sample a set of points in parameter space to create mutual configuration space overlap. The existence of regions of poor configuration space overlap is detected by analyzing the eigenvalues of the sampled states' overlap matrix. The configuration space overlap to sampled states is monitored alongside the mean and maximum uncertainty to determine convergence, as neither the uncertainty nor the configuration space overlap alone is a sufficient metric of convergence. This adaptive sampling scheme is demonstrated by estimating with high precision the solvation free energies of charged particles of Lennard-Jones plus Coulomb functional form with charges between -2 and +2 and generally physical values of σij and ϵij in TIP3P water. We also compute entropy, enthalpy, and radial distribution functions of arbitrary unsampled parameter combinations using only the data from these sampled states, and use the estimates of free energies over the entire space to examine the deviation of atomistic simulations from the Born approximation to the solvation free energy.
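A toy sketch of the two ingredients described above: the reduced potential written as a linear combination of stored per-configuration basis energies, and a bare-bones self-consistent multistate reweighting iteration used to evaluate an unsampled parameter combination. The data below are synthetic placeholders, and a production analysis would use a full MBAR implementation rather than this simplified loop.

```python
# Toy sketch of the basis-function idea behind multistate reweighting: the
# reduced potential is a linear combination of per-configuration basis terms,
# so any parameter combination can be evaluated from stored basis energies.
# The self-consistent loop is a bare-bones stand-in for a real MBAR solver.
import numpy as np

rng = np.random.default_rng(0)
n_basis, n_per_state = 3, 500
sampled_lams = np.array([[1.0, 0.0, 0.0],          # parameter vectors of the sampled states
                         [0.5, 0.5, 0.0],
                         [0.2, 0.3, 0.5]])
K = len(sampled_lams)
N_k = np.full(K, n_per_state)
n_total = K * n_per_state

# Stored per-configuration basis energies h_k(x_n); synthetic placeholders here.
H = rng.normal(size=(n_basis, n_total)) ** 2

def reduced_potential(lam):
    """u(x; lambda) = sum_k lambda_k * h_k(x), for every stored configuration."""
    return lam @ H

u_kn = np.array([reduced_potential(lam) for lam in sampled_lams])

# Self-consistent iteration for the dimensionless free energies f_k.
f = np.zeros(K)
for _ in range(200):
    denom = np.einsum("k,kn->n", N_k, np.exp(f[:, None] - u_kn))
    f_new = -np.log(np.exp(-u_kn) @ (1.0 / denom))
    f = f_new - f_new[0]                           # anchor state 0 at zero

# Free energy of an *unsampled* parameter combination, from the same stored data.
u_new = reduced_potential(np.array([0.3, 0.6, 0.1]))
denom = np.einsum("k,kn->n", N_k, np.exp(f[:, None] - u_kn))
f_unsampled = -np.log(np.exp(-u_new) @ (1.0 / denom))
print(f, f_unsampled)
```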
"Simulated molecular evolution" or computer-generated artifacts?
Darius, F; Rojas, R
1994-11-01
1. The authors define a function with value 1 for the positive examples and 0 for the negative ones. They fit a continuous function but do not deal at all with the error margin of the fit, which is almost as large as the function values they compute. 2. The term "quality" for the value of the fitted function gives the impression that some biological significance is associated with values of the fitted function strictly between 0 and 1, but there is no justification for this kind of interpretation, and finding the point where the fit achieves its maximum does not make sense. 3. By neglecting the error margin, the authors try to optimize the fitted function using differences in the second, third, fourth, and even fifth decimal place, which have no statistical significance. 4. Even if such a fit could profit from more data points, the authors should first prove that the region of interest has some kind of smoothness, that is, that a continuous fit makes any sense at all. 5. "Simulated molecular evolution" is a misnomer. We are dealing here with random search. Since the margin of error is so large, the fitted function does not provide statistically significant information about the points in search space where strings with cleavage sites could be found. This implies that the method is a highly unreliable stochastic search in the space of strings, even if the neural network is capable of learning some simple correlations. 6. For these kinds of problems with so few data points, classical statistical methods are clearly superior to the neural networks used as a "black box" by the authors, which in the way they are structured provide a model with an error margin as large as the numbers being computed. 7. Finally, even if someone provided a function that separates strings with cleavage sites from strings without them perfectly, so-called simulated molecular evolution would not be better than random selection. Since a perfect fit would only produce exactly ones or zeros, starting a search in a region of space where all strings in the neighborhood get the value zero would not provide any kind of directional information for new iterations. We would just skip from one point to the other in a typical random-walk manner.
2012-05-22
…the Rate-Controlled Constrained-Equilibrium (RCCE) method, and tabulation of the reduced space is performed using the In Situ Adaptive Tabulation (ISAT) algorithm. In addition, we use x2f mpi – a Fortran library for parallel vector-valued function evaluation (used with ISAT in this context) – to efficiently redistribute the chemistry workload among the …
Rates of Space Weathering in Lunar Regolith Grains
NASA Technical Reports Server (NTRS)
Zhang, S.; Keller, L. P.
2012-01-01
While the processes and products of lunar space weathering are reasonably well-studied, their accumulation rates in lunar soils are poorly constrained. Previously, we showed that the thickness of solar wind irradiated rims on soil grains is a smooth function of their solar flare particle track density, whereas the thickness of vapor-deposited rims was largely independent of track density [1]. Here, we have extended these preliminary results with data on additional grains from other mature soils.
Variations of cosmic large-scale structure covariance matrices across parameter space
NASA Astrophysics Data System (ADS)
Reischke, Robert; Kiessling, Alina; Schäfer, Björn Malte
2017-03-01
The likelihood function for cosmological parameters, given by e.g. weak lensing shear measurements, depends on contributions to the covariance induced by the non-linear evolution of the cosmic web. As highly non-linear clustering to date has only been described by numerical N-body simulations in a reliable and sufficiently precise way, the necessary computational costs for estimating those covariances at different points in parameter space are tremendous. In this work, we describe the change of the matter covariance and the weak lensing covariance matrix as a function of cosmological parameters by constructing a suitable basis, where we model the contribution to the covariance from non-linear structure formation using Eulerian perturbation theory at third order. We show that our formalism is capable of dealing with large matrices and reproduces expected degeneracies and scaling with cosmological parameters in a reliable way. Comparing our analytical results to numerical simulations, we find that the method describes the variation of the covariance matrix found in the SUNGLASS weak lensing simulation pipeline within the errors at one-loop and tree-level for the spectrum and the trispectrum, respectively, for multipoles up to ℓ ≤ 1300. We show that it is possible to optimize the sampling of parameter space where numerical simulations should be carried out by minimizing interpolation errors and propose a corresponding method to distribute points in parameter space in an economical way.
Numerical optimization in Hilbert space using inexact function and gradient evaluations
NASA Technical Reports Server (NTRS)
Carter, Richard G.
1989-01-01
Trust region algorithms provide a robust iterative technique for solving non-convex unconstrained optimization problems, but in many instances it is prohibitively expensive to compute high-accuracy function and gradient values for the method. Of particular interest are inverse and parameter estimation problems, since function and gradient evaluations involve numerically solving large systems of differential equations. A global convergence theory is presented for trust region algorithms in which neither function nor gradient values are known exactly. The theory is formulated in a Hilbert space setting so that it can be applied to variational problems as well as the finite dimensional problems normally seen in trust region literature. The conditions concerning allowable error are remarkably relaxed: relative errors in the gradient are permitted, and the gradient error condition is automatically satisfied if the error is orthogonal to the gradient approximation. A technique for estimating gradient error and improving the approximation is also presented.
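As a finite-dimensional illustration of the idea that trust-region methods can tolerate inexact gradients, the following sketch runs a Cauchy-step trust-region loop with a gradient corrupted by a bounded relative error; it is a toy example in the spirit of the theory, not the report's Hilbert-space algorithm or its specific error tests.

```python
# Minimal trust-region iteration that tolerates inexact gradients (relative
# error eta); an illustration only, not the report's algorithm.
import numpy as np

def rosenbrock(x):
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

def noisy_grad(x, eta=0.2, rng=np.random.default_rng(3)):
    g = np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                  200 * (x[1] - x[0] ** 2)])
    noise = rng.normal(size=2)
    return g + eta * np.linalg.norm(g) * noise / np.linalg.norm(noise)

x, radius = np.array([-1.2, 1.0]), 1.0
for _ in range(500):
    g = noisy_grad(x)
    step = -radius * g / np.linalg.norm(g)       # steepest-descent (Cauchy-like) step
    predicted = -g @ step                        # predicted decrease of the linear model
    actual = rosenbrock(x) - rosenbrock(x + step)
    rho = actual / predicted if predicted > 0 else -1.0
    if rho > 0.1:                                # accept the step
        x = x + step
    # expand the radius on very successful steps, shrink it on poor ones
    radius = min(2 * radius, 10.0) if rho > 0.75 else (0.5 * radius if rho < 0.25 else radius)

print(x, rosenbrock(x))
```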
Basis for substrate recognition and distinction by matrix metalloproteinases
Ratnikov, Boris I.; Cieplak, Piotr; Gramatikoff, Kosi; Pierce, James; Eroshkin, Alexey; Igarashi, Yoshinobu; Kazanov, Marat; Sun, Qing; Godzik, Adam; Osterman, Andrei; Stec, Boguslaw; Strongin, Alex; Smith, Jeffrey W.
2014-01-01
Genomic sequencing and structural genomics produced a vast amount of sequence and structural data, creating an opportunity for structure–function analysis in silico [Radivojac P, et al. (2013) Nat Methods 10(3):221–227]. Unfortunately, only a few large experimental datasets exist to serve as benchmarks for function-related predictions. Furthermore, currently there are no reliable means to predict the extent of functional similarity among proteins. Here, we quantify structure–function relationships among three phylogenetic branches of the matrix metalloproteinase (MMP) family by comparing their cleavage efficiencies toward an extended set of phage peptide substrates that were selected from ∼64 million peptide sequences (i.e., a large unbiased representation of substrate space). The observed second-order rate constants [k(obs)] across the substrate space provide a distance measure of functional similarity among the MMPs. These functional distances directly correlate with MMP phylogenetic distance. There is also a remarkable and near-perfect correlation between the MMP substrate preference and sequence identity of 50–57 discontinuous residues surrounding the catalytic groove. We conclude that these residues represent the specificity-determining positions (SDPs) that allowed for the expansion of MMP proteolytic function during evolution. A transmutation of only a few selected SDPs proximal to the bound substrate peptide, and contributing the most to selectivity among the MMPs, is sufficient to enact a global change in the substrate preference of one MMP to that of another, indicating the potential for the rational and focused redesign of cleavage specificity in MMPs. PMID:25246591
Finite size effects in the thermodynamics of a free neutral scalar field
NASA Astrophysics Data System (ADS)
Parvan, A. S.
2018-04-01
The exact analytical lattice results for the partition function of the free neutral scalar field in one spatial dimension in both the configuration and the momentum space were obtained in the framework of the path integral method. The symmetric square matrices of the bilinear forms on the vector space of fields in both configuration space and momentum space were found explicitly. The exact lattice results for the partition function were generalized to the three-dimensional spatial momentum space, and the main thermodynamic quantities were derived both on the lattice and in the continuum limit. The thermodynamic properties and the finite volume corrections to the thermodynamic quantities of the free real scalar field were studied. We found that on the finite lattice the exact lattice results for the free massive neutral scalar field agree with the continuum limit only in the region of small values of temperature and volume. However, at these temperatures and volumes the continuum physical quantities for both massive and massless scalar fields deviate substantially from their thermodynamic limit values and recover them only at high temperatures and/or large volumes in the thermodynamic limit.
Need, utilization, and configuration of a large, multi-G centrifuge on the Space Station
NASA Technical Reports Server (NTRS)
Bonting, Sjoerd L.
1987-01-01
A large, multi-g centrifuge is required on the Space Station (1) to provide valid 1-g controls for the study of zero-g effects on animals and plants and to study readaptation to 1 g; (2) to store animals at 1 g prior to short-term zero-g experimentation; (3) to permit g-level threshold studies of gravity effects. These requirements can be met by a 13-ft-diam., center-mounted centrifuge, on which up to 48 modular habitats with animals (squirrel monkey, rat, mouse) and plants are attached. The advantages of locating this centrifuge with the vivarium, a common environmental control and life support system, a general-purpose work station and storage of food, water, and supplies in an attached short module, are elaborated. Servicing and operation of the centrifuge, as well as minimizing its impact on other Space Station functions are also considered.
Large-scale 3D galaxy correlation function and non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raccanelli, Alvise; Doré, Olivier; Bertacca, Daniele
We investigate the properties of the 2-point galaxy correlation function at very large scales, including all geometric and local relativistic effects – wide-angle effects, redshift space distortions, Doppler terms and Sachs-Wolfe type terms in the gravitational potentials. The general three-dimensional correlation function has a nonzero dipole and octupole, in addition to the even multipoles of the flat-sky limit. We study how corrections due to primordial non-Gaussianity and General Relativity affect the multipolar expansion, and we show that they are of similar magnitude (when f_NL is small), so that a relativistic approach is needed. Furthermore, we look at how large-scale corrections depend on the model for the growth rate in the context of modified gravity, and we discuss how a modified growth can affect the non-Gaussian signal in the multipoles.
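Since the abstract centers on odd and even multipoles of the three-dimensional correlation function, here is a short numerical sketch of projecting a tabulated ξ(r, μ) onto Legendre multipoles; the input correlation function is synthetic and serves only to show the projection.

```python
# Sketch: extracting Legendre multipoles xi_l(r) from a tabulated correlation
# function xi(r, mu), where mu is the cosine of the angle to the line of sight,
# via xi_l(r) = (2l + 1)/2 * integral_{-1}^{1} xi(r, mu) P_l(mu) d(mu).
import numpy as np
from numpy.polynomial import legendre

r = np.linspace(10.0, 150.0, 29)                 # separation bins (arbitrary units)
mu = np.linspace(-1.0, 1.0, 201)
dmu = mu[1] - mu[0]

# Synthetic xi(r, mu) with both even and odd angular structure, for illustration only.
xi = (1.0 / r[:, None] ** 2) * (1.0 + 0.5 * mu ** 2 + 0.1 * mu)

def multipole(xi_rmu, ell):
    P_ell = legendre.legval(mu, [0.0] * ell + [1.0])   # Legendre polynomial P_ell(mu)
    return (2 * ell + 1) / 2.0 * np.sum(xi_rmu * P_ell, axis=1) * dmu

xi0, xi1, xi3 = multipole(xi, 0), multipole(xi, 1), multipole(xi, 3)
print(xi0[0], xi1[0], xi3[0])    # monopole, dipole, octupole at the smallest separation
```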
FGWAS: Functional genome wide association analysis.
Huang, Chao; Thompson, Paul; Wang, Yalin; Yu, Yang; Zhang, Jingwen; Kong, Dehan; Colen, Rivka R; Knickmeyer, Rebecca C; Zhu, Hongtu
2017-10-01
Functional phenotypes (e.g., subcortical surface representation), which commonly arise in imaging genetic studies, have been used to detect putative genes for complexly inherited neuropsychiatric and neurodegenerative disorders. However, existing statistical methods largely ignore the functional features (e.g., functional smoothness and correlation). The aim of this paper is to develop a functional genome-wide association analysis (FGWAS) framework to efficiently carry out whole-genome analyses of functional phenotypes. FGWAS consists of three components: a multivariate varying coefficient model, a global sure independence screening procedure, and a test procedure. Compared with the standard multivariate regression model, the multivariate varying coefficient model explicitly models the functional features of functional phenotypes through the integration of smooth coefficient functions and functional principal component analysis. Statistically, compared with existing methods for genome-wide association studies (GWAS), FGWAS can substantially boost the detection power for discovering important genetic variants influencing brain structure and function. Simulation studies show that FGWAS outperforms existing GWAS methods for searching sparse signals in an extremely large search space, while controlling for the family-wise error rate. We have successfully applied FGWAS to large-scale analysis of data from the Alzheimer's Disease Neuroimaging Initiative for 708 subjects, 30,000 vertices on the left and right hippocampal surfaces, and 501,584 SNPs. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Edwards, M. R.
Extended space flight requires foods and medicines that sustain crew health and vitality. The health and therapeutic needs of the entire crew and their children for a 100-year space flight must be sustainable. The starship cannot depend on resupply or carry a large cargo of pharmaceuticals. Everything in the starship must be completely recyclable and reconstructable, including food, feed, textiles, building materials, pharmaceuticals, vaccines, and medicines. Smart microfarms will produce functional foods with superior nutrition and sensory attributes. These foods provide high-quality protein and nutralence (nutrient density), which helps avoid obesity, diabetes, and other Western diseases. The combination of functional foods, lifestyle actions, and medicines will support crew immunity, energy, vitality, sustained strong health, and longevity. Smart microfarms enable the production of fresh medicines in hours or days, eliminating the need for a large dispensary and concern over drug shelf life. Smart microfarms are adaptable to the extreme growing area, resource, and environmental constraints associated with an extended starship expedition.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goings, Joshua J.; Li, Xiaosong, E-mail: xsli@uw.edu
2016-06-21
One of the challenges of interpreting electronic circular dichroism (ECD) band spectra is that different states may have different rotatory strength signs, determined by their absolute configuration. If the states are closely spaced and opposite in sign, observed transitions may be washed out by nearby states, unlike absorption spectra where transitions are always positive and additive. To accurately compute ECD bands, it is necessary to compute a large number of excited states, which may be prohibitively costly if one uses the linear-response time-dependent density functional theory (TDDFT) framework. Here we implement a real-time, atomic-orbital based TDDFT method for computing the entire ECD spectrum simultaneously. The method is advantageous for large systems with a high density of states. In contrast to previous implementations based on real-space grids, the method is variational, independent of nuclear orientation, and does not rely on pseudopotential approximations, making it suitable for computation of chiroptical properties well into the X-ray regime.
NASA Technical Reports Server (NTRS)
Mcclelland, J.; Silk, J.
1978-01-01
Higher-order correlation functions for the large-scale distribution of galaxies in space are investigated. It is demonstrated that the three-point correlation function observed by Peebles and Groth (1975) is not consistent with a distribution of perturbations that at present are randomly distributed in space. The two-point correlation function is shown to be independent of how the perturbations are distributed spatially, and a model of clustered perturbations is developed which incorporates a nonuniform perturbation distribution and which explains the three-point correlation function. A model with hierarchical perturbations incorporating the same nonuniform distribution is also constructed; it is found that this model also explains the three-point correlation function, but predicts different results for the four-point and higher-order correlation functions than does the model with clustered perturbations. It is suggested that the model of hierarchical perturbations might be explained by the single assumption of having density fluctuations or discrete objects all of the same mass randomly placed at some initial epoch.
A new hybrid-Lagrangian numerical scheme for gyrokinetic simulation of tokamak edge plasma
Ku, S.; Hager, R.; Chang, C. S.; ...
2016-04-01
In order to enable kinetic simulation of non-thermal edge plasmas at a reduced computational cost, a new hybrid-Lagrangian δf scheme has been developed that utilizes the phase space grid in addition to the usual marker particles, taking advantage of the computational strengths from both sides. The new scheme splits the particle distribution function of a kinetic equation into two parts. Marker particles contain the fast space-time varying, δf, part of the distribution function and the coarse-grained phase-space grid contains the slow space-time varying part. The coarse-grained phase-space grid reduces the memory-requirement and the computing cost, while the marker particles provide scalable computing ability for the fine-grained physics. Weights of the marker particles are determined by a direct weight evolution equation instead of the differential form weight evolution equations that the conventional delta-f schemes use. The particle weight can be slowly transferred to the phase space grid, thereby reducing the growth of the particle weights. The non-Lagrangian part of the kinetic equation – e.g., collision operation, ionization, charge exchange, heat-source, radiative cooling, and others – can be operated directly on the phase space grid. Deviation of the particle distribution function on the velocity grid from a Maxwellian distribution function – driven by ionization, charge exchange and wall loss – is allowed to be arbitrarily large. In conclusion, the numerical scheme is implemented in the gyrokinetic particle code XGC1, which specializes in simulating the tokamak edge plasma that crosses the magnetic separatrix and is in contact with the material wall.
High Energy Scattering in the AdS/CFT Correspondence
NASA Astrophysics Data System (ADS)
Penedones, Joao
2007-12-01
This work explores the celebrated AdS/CFT correspondence in the regime of high energy scattering in Anti-de Sitter (AdS) spacetime. In particular, we develop the eikonal approximation to high energy scattering in AdS and explore its consequences for the dual Conformal Field Theory (CFT). Using position space Feynman rules, we rederive the eikonal approximation for high energy scattering in flat space. Following this intuitive position space perspective, we then generalize the eikonal approximation for high energy scattering in AdS and other spacetimes. Remarkably, we are able to resum, in terms of a generalized phase shift, ladder and cross ladder Witten diagrams associated to the exchange of an AdS spin j field, to all orders in the coupling constant. By the AdS/CFT correspondence, the eikonal amplitude in AdS is related to the four-point function of CFT primary operators in the regime of large 't Hooft coupling, including all terms of the 1/N expansion. We then show that the eikonal amplitude determines the behavior of the CFT four-point function for small values of the cross ratios in a Lorentzian regime and that this controls its high spin and dimension conformal partial wave decomposition. These results allow us to determine the anomalous dimension of high spin and dimension double trace primary operators, by relating it to the AdS eikonal phase shift. Finally we find that, at large energies and large impact parameters in AdS, the gravitational interaction dominates all other interactions, as in flat space. Therefore, the anomalous dimension of double trace operators, associated to graviton exchange in AdS, yields a universal prediction for CFT's with AdS gravitational duals.
Alternate assembly sequence databook for the Tier 2 Bus-1 option of the International Space Station
NASA Technical Reports Server (NTRS)
Brewer, L. M.; Cirillo, W. M.; Cruz, J. N.; Hall, J. B.; Troutman, P. A.; Monell, D. W.; Garn, M. A.; Heck, M. L.; Kumar, R. R.; Llewellyn, C. P.
1995-01-01
The JSC International Space Station program office requested that SSB prepare a databook to document the alternate space station assembly sequence known as Tier 2, which assumes that the Russian participation has been eliminated and that the functions that were supplied by the Russians (propulsion, resupply, initial attitude control, communications, etc.) are now supplied by the U.S. Tier 2 utilizes the Lockheed Bus-1 to replace much of the missing Russian functionality. The space station at each stage of its buildup during the Tier 2 assembly sequence is characterized in terms of properties, functionality, resource balances, operations, logistics, attitude control, microgravity environment, and propellant usage. The assembly sequence as analyzed was defined by JSC as a first iteration, with subsequent iterations required to address some of the issues that the analysis in this databook identified. Several significant issues were identified, including: less than desirable orbit lifetimes, shortage of EVA, large flight attitudes, poor microgravity environments, and reboost propellant shortages. Many of these issues can be resolved, but at the cost of possible baseline modifications and revisions in the proposed Tier 2 assembly sequence.
Hofmeijer, Jeannette; Amelink, G Johan; Algra, Ale; van Gijn, Jan; Macleod, Malcolm R; Kappelle, L Jaap; van der Worp, H Bart
2006-09-11
Patients with a hemispheric infarct and massive space-occupying brain oedema have a poor prognosis. Despite maximal conservative treatment, the case fatality rate may be as high as 80%, and most survivors are left severely disabled. Non-randomised studies suggest that decompressive surgery reduces mortality substantially and improves functional outcome of survivors. This study is designed to compare the efficacy of decompressive surgery to improve functional outcome with that of conservative treatment in patients with space-occupying supratentorial infarction. The study design is that of a multi-centre, randomised clinical trial, which will include 112 patients aged between 18 and 60 years with a large hemispheric infarct with space-occupying oedema that leads to a decrease in consciousness. Patients will be randomised to receive either decompressive surgery in combination with medical treatment or best medical treatment alone. Randomisation will be stratified for the intended mode of conservative treatment (intensive care or stroke unit care). The primary outcome measure will be functional outcome, as determined by the score on the modified Rankin Scale, at one year.
High-resolution (>5800 time-bandwidth product) shear mode TeO2 deflector
NASA Astrophysics Data System (ADS)
Soos, Jolanta I.; Caviris, Nicholas P.; Phuvan, Sonlinh
1992-12-01
Acousto-optic deflectors play an important role in optical signal processing systems due to their real-time processing capabilities, as well as their ability to convert a function of time into a function of space and time. In this work, Brimrose investigated the design and fabrication of state-of-the-art, very large time-bandwidth acousto-optic devices from TeO2 single crystals.
Widefield TSCSPC-systems with large-area-detectors: application in simultaneous multi-channel-FLIM
NASA Astrophysics Data System (ADS)
Stepanov, Sergei; Bakhlanov, Sergei; Drobchenko, Evgeny; Eckert, Hann-Jörg; Kemnitz, Klaus
2010-11-01
Novel proximity-type Time- and Space-Correlated Single Photon Counting (TSCSPC) crossed-delay-line (DL)- and multi-anode (MA)-systems of outstanding performance and homogeneity were developed, using large-area detector heads of 25 and 40 mm diameter. Instrument response functions IRF(space) = (60 +/- 5) μm FWHM and IRF(time) = (28 +/- 3) ps FWHM were achieved over the full 12 cm^2 area of the detector. Deadtime at throughput of 10^5 cps is 10% for the "high-resolution" system and 5% in the "video" system at 10^6 cps, at slightly reduced time- and space resolution. A fluorescence lifetime of (3.5 +/- 1) ps can be recovered from multi-exponential dynamics of a single living cyanobacterium (Acaryochloris marina). The present large-area detectors are particularly useful in simultaneous multichannel applications, such as 2-colour anisotropy or 4-colour lifetime imaging, utilizing dual- or quad-view image splitters. The long-term stability, low-excitation-intensity (< 100 mW/cm^2) widefield systems enable minimal-invasive observation, without significant bleaching or photodynamic reactions, thus allowing long-period observation of up to several hours in living cells.
PLATSIM: A Simulation and Analysis Package for Large-Order Flexible Systems. Version 2.0
NASA Technical Reports Server (NTRS)
Maghami, Peiman G.; Kenny, Sean P.; Giesy, Daniel P.
1997-01-01
The software package PLATSIM provides efficient time and frequency domain analysis of large-order generic space platforms. PLATSIM can perform open-loop analysis or closed-loop analysis with linear or nonlinear control system models. PLATSIM exploits the particular form of sparsity of the plant matrices for very efficient linear and nonlinear time domain analysis, as well as frequency domain analysis. A new, original algorithm for the efficient computation of open-loop and closed-loop frequency response functions for large-order systems has been developed and is implemented within the package. Furthermore, a novel and efficient jitter analysis routine which determines jitter and stability values from time simulations in a very efficient manner has been developed and is incorporated in the PLATSIM package. In the time domain analysis, PLATSIM simulates the response of the space platform to disturbances and calculates the jitter and stability values from the response time histories. In the frequency domain analysis, PLATSIM calculates frequency response function matrices and provides the corresponding Bode plots. The PLATSIM software package is written in MATLAB script language. A graphical user interface is developed in the package to provide convenient access to its various features.
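PLATSIM's own fast frequency-response algorithm is not spelled out in this abstract, but the baseline computation it accelerates, H(jw) = C (jwI - A)^-1 B + D for a sparse large-order plant, can be sketched as follows; the modal model, damping value, and input/output vectors below are made-up placeholders.

```python
# Generic sketch of an open-loop frequency response for a sparse state-space
# model, H(jw) = C (jw I - A)^-1 B (D = 0 here); not PLATSIM's algorithm.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n_modes = 100
omega_modes = np.linspace(1.0, 50.0, n_modes)      # modal frequencies (rad/s), made up
zeta = 0.005                                       # light structural damping
n = 2 * n_modes

# Sparse modal state-space model: x = [q; qdot], qddot = -w^2 q - 2 zeta w qdot + u.
A = sp.bmat([[None, sp.identity(n_modes)],
             [sp.diags(-omega_modes ** 2), sp.diags(-2.0 * zeta * omega_modes)]],
            format="csc")
B = np.concatenate([np.zeros(n_modes), np.ones(n_modes)])   # force enters every mode equally
C = np.concatenate([np.ones(n_modes), np.zeros(n_modes)])   # output: sum of modal displacements

def freq_response(w):
    """Solve the sparse complex system once per frequency point."""
    lhs = (1j * w) * sp.identity(n, format="csc") - A
    return C @ spsolve(lhs, B)

freqs = np.linspace(0.1, 60.0, 300)
H = np.array([freq_response(w) for w in freqs])
print(np.abs(H).max())                             # peak magnitude as a quick sanity check
```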
Mathematical models for the reflection coefficients of dielectric half-spaces
NASA Technical Reports Server (NTRS)
Evans, D. D.
1973-01-01
The reflection coefficients at normal incidence are found for a large class of one-dimensionally inhomogeneous or stratified half-spaces, which contain a homogeneous half-space. The formulation of the problem involves a combination of the classical boundary value technique, and the nonclassical principle of invariant imbedding. Solutions are in closed form and expressible in terms of Bessel functions. All results are given in terms of the ratio of the distance between free space and the homogeneous half-space to the wavelength in vacuo. One special case is that of an arbitrary number of layers lying on a homogeneous half-space where the dielectric constant of each layer has a constant gradient. A number of other special cases, limiting cases, and generalizations are developed including one in which the thickness of the top layer obeys a probability distribution.
Philosophies Applied in the Selection of Space Suit Joint Range of Motion Requirements
NASA Technical Reports Server (NTRS)
Aitchison, Lindsway; Ross, Amy; Matty, Jennifer
2009-01-01
Space suits are the most important tool for astronauts working in harsh space and planetary environments; suits keep crewmembers alive and allow them to perform exploration, construction, and scientific tasks on a routine basis over a period of several months. The efficiency with which the tasks are performed is largely dictated by the mobility features of the space suit. For previous space suit development programs, the mobility requirements were written as pure functional mobility requirements that did not separate joint ranges of motion from the joint torques. The Constellation Space Suit Element has the goal of making more quantitative mobility requirements that focus on the individual components of mobility, to enable future suit designers to build and test systems more effectively. This paper details the test planning and selection process for the Constellation space suit pressure garment range of motion requirements.
A Hierarchical Framework for State-Space Matrix Inference and Clustering.
Zuo, Chandler; Chen, Kailei; Hewitt, Kyle J; Bresnick, Emery H; Keleş, Sündüz
2016-09-01
In recent years, a large number of genomic and epigenomic studies have been focusing on the integrative analysis of multiple experimental datasets measured over a large number of observational units. The objectives of such studies include not only inferring a hidden state of activity for each unit over individual experiments, but also detecting highly associated clusters of units based on their inferred states. Although there are a number of methods tailored for specific datasets, there is currently no state-of-the-art modeling framework for this general class of problems. In this paper, we develop the MBASIC (Matrix Based Analysis for State-space Inference and Clustering) framework. MBASIC consists of two parts: state-space mapping and state-space clustering. In state-space mapping, it maps observations onto a finite state-space, representing the activation states of units across conditions. In state-space clustering, MBASIC incorporates a finite mixture model to cluster the units based on their inferred state-space profiles across all conditions. Both the state-space mapping and clustering can be simultaneously estimated through an Expectation-Maximization algorithm. MBASIC flexibly adapts to a large number of parametric distributions for the observed data, as well as the heterogeneity in replicate experiments. It allows for imposing structural assumptions on each cluster and enables model selection using information criteria. In our data-driven simulation studies, MBASIC showed significant accuracy in recovering both the underlying state-space variables and clustering structures. We applied MBASIC to two genome research problems using large numbers of datasets from the ENCODE project. The first application grouped genes based on transcription factor occupancy profiles of their promoter regions in two different cell types. The second application focused on identifying groups of loci that are similar to a GATA2 binding site that is functional at its endogenous locus by utilizing transcription factor occupancy data, and illustrated the applicability of MBASIC in a wide variety of problems. In both studies, MBASIC showed higher levels of raw data fidelity than analyzing these data with a two-step approach using ENCODE results on transcription factor occupancy data.
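To illustrate just the clustering half of such a framework, the sketch below runs a bare-bones Expectation-Maximization loop for a mixture of Bernoulli state profiles over synthetic unit-by-condition activity states; it is a stand-in illustration, not the MBASIC implementation.

```python
# Bare-bones EM for clustering units by their (already inferred) binary activity
# states across conditions; a toy stand-in, not the MBASIC implementation.
import numpy as np

rng = np.random.default_rng(7)
true_profiles = np.array([[0.9, 0.9, 0.1, 0.1],
                          [0.1, 0.9, 0.9, 0.1],
                          [0.1, 0.1, 0.1, 0.9]])
labels = rng.integers(0, 3, 600)
X = rng.binomial(1, true_profiles[labels])          # units x conditions, binary states

K, (n, d) = 3, X.shape
pi = np.full(K, 1.0 / K)
theta = rng.uniform(0.3, 0.7, size=(K, d))          # cluster-specific activation probabilities

for _ in range(100):
    # E-step: posterior responsibility of each cluster for each unit.
    log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
    log_lik -= log_lik.max(axis=1, keepdims=True)
    resp = np.exp(log_lik)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update mixing weights and cluster profiles.
    Nk = resp.sum(axis=0)
    pi = Nk / n
    theta = np.clip((resp.T @ X) / Nk[:, None], 1e-6, 1 - 1e-6)

print(np.round(theta, 2))                            # recovered state profiles per cluster
```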
Assembling the Past and the Future of the City through Designing Coworking Facilities
NASA Astrophysics Data System (ADS)
Lukman, Y. A.; Ekomadyo, A. S.; Wibowo, A. S.
2018-05-01
Bandung is known as a creative city in Indonesia, which can be seen from the large number of communities in Bandung that work in the creative industry. A creative city can be further developed by a good understanding of its local identity. One of the characteristic features of Bandung are the numerous old buildings across the city. Unfortunately, these buildings are no longer utilized optimally due to a mismatch between their function and typology. Housing new functions in old buildings to meet present space needs can increase their value. One kind of new function that fits well in old buildings, especially in Bandung, is the coworking space, a new type of workspace that has emerged as a result of the needs of today’s society. Mutually beneficial relations can be formed when the old building is well suited to carry out its function as a coworking space for the creative class. The idea is to assemble the past (using an old building as a workplace) and the future (developing the creative economy) through designing coworking facilities. The design simulation conducted in this study used the ‘third place’ theory by Ray Oldenburg as well as the approach of adaptive building reuse. By changing old buildings in Bandung into new workplaces for the creative class – coworking spaces – Bandung can maintain the city’s identity and provide new workplaces or public spaces for communities to develop their creativity and increase city income.
System Learning via Exploratory Data Analysis: Seeing Both the Forest and the Trees
NASA Astrophysics Data System (ADS)
Habash Krause, L.
2014-12-01
As the amount of observational Earth and Space Science data grows, so does the need for learning and employing data analysis techniques that can extract meaningful information from those data. Space-based and ground-based data sources from all over the world are used to inform Earth and Space environment models. However, with such a large amount of data comes a need to organize those data in a way such that trends within the data are easily discernible. This can be tricky due to the interaction between physical processes that lead to partial correlation of variables or multiple interacting sources of causality. With the suite of Exploratory Data Analysis (EDA) data mining codes available at MSFC, we have the capability to analyze large, complex data sets and quantitatively identify fundamentally independent effects from consequential or derived effects. We have used these techniques to examine the accuracy of ionospheric climate models with respect to trends in ionospheric parameters and space weather effects. In particular, these codes have been used to 1) Provide summary "at-a-glance" surveys of large data sets through categorization and/or evolution over time to identify trends, distribution shapes, and outliers, 2) Discern the underlying "latent" variables which share common sources of causality, and 3) Establish a new set of basis vectors by computing Empirical Orthogonal Functions (EOFs) which represent the maximum amount of variance for each principal component. Some of these techniques are easily implemented in the classroom using standard MATLAB functions, some of the more advanced applications require the statistical toolbox, and applications to unique situations require more sophisticated levels of programming. This paper will present an overview of the range of tools available and how they might be used for a variety of time series Earth and Space Science data sets. Examples of feature recognition from both 1D and 2D (e.g. imagery) time series data sets will be presented.
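As a concrete example of item 3 above, computing Empirical Orthogonal Functions, the following short sketch performs the standard SVD-based EOF decomposition of a synthetic time-by-location data matrix; it is written in Python rather than MATLAB and does not reproduce the MSFC EDA codes.

```python
# Standard SVD-based EOF decomposition of a time-by-location data matrix;
# synthetic data, for illustration only.
import numpy as np

rng = np.random.default_rng(11)
t = np.linspace(0, 4 * np.pi, 400)
pattern1 = np.sin(np.linspace(0, np.pi, 50))
pattern2 = np.cos(np.linspace(0, 2 * np.pi, 50))
data = (np.outer(np.sin(t), pattern1)           # two coherent space-time modes
        + 0.5 * np.outer(np.cos(3 * t), pattern2)
        + 0.1 * rng.normal(size=(400, 50)))     # observational noise

anomalies = data - data.mean(axis=0)            # remove the time mean at each location
U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)

eofs = Vt                                       # rows are spatial EOF patterns
pcs = U * s                                     # principal-component time series
explained = s ** 2 / np.sum(s ** 2)             # fraction of variance per mode

print(np.round(explained[:4], 3))               # the leading modes dominate the variance
```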
Application of Multi-Hypothesis Sequential Monte Carlo for Breakup Analysis
NASA Astrophysics Data System (ADS)
Faber, W. R.; Zaidi, W.; Hussein, I. I.; Roscoe, C. W. T.; Wilkins, M. P.; Schumacher, P. W., Jr.
As more objects are launched into space, the potential for breakup events and space object collisions is ever increasing. These events create large clouds of debris that are extremely hazardous to space operations. Providing timely, accurate, and statistically meaningful Space Situational Awareness (SSA) data is crucial in order to protect assets and operations in space. The space object tracking problem, in general, is nonlinear in both state dynamics and observations, making it ill-suited to linear filtering techniques such as the Kalman filter. Additionally, given the multi-object, multi-scenario nature of the problem, space situational awareness requires multi-hypothesis tracking and management that is combinatorially challenging in nature. In practice, it is often seen that assumptions of underlying linearity and/or Gaussianity are used to provide tractable solutions to the multiple space object tracking problem. However, these assumptions are, at times, detrimental to tracking data and provide statistically inconsistent solutions. This paper details a tractable solution to the multiple space object tracking problem applicable to space object breakup events. Within this solution, simplifying assumptions of the underlying probability density function are relaxed and heuristic methods for hypothesis management are avoided. This is done by implementing Sequential Monte Carlo (SMC) methods for both nonlinear filtering as well as hypothesis management. The goal of this paper is to detail the solution and use it as a platform to discuss computational limitations that hinder proper analysis of large breakup events.
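A minimal bootstrap particle filter illustrates the Sequential Monte Carlo filtering building block mentioned above, on a standard scalar nonlinear model; the multi-hypothesis management layer of the paper is not shown, and the dynamics and noise levels are arbitrary choices.

```python
# Minimal bootstrap Sequential Monte Carlo (particle) filter for a scalar
# nonlinear tracking problem; illustration only, not the paper's tracker.
import numpy as np

rng = np.random.default_rng(5)

def propagate(x):                      # nonlinear state dynamics plus process noise
    return 0.5 * x + 25 * x / (1 + x ** 2) + rng.normal(0, 1.0, size=x.shape)

def observe(x):                        # nonlinear measurement model
    return x ** 2 / 20.0

# Simulate a truth trajectory and noisy observations.
T, x_true, obs = 50, [0.1], []
for _ in range(T):
    x_true.append(propagate(np.array([x_true[-1]]))[0])
    obs.append(observe(x_true[-1]) + rng.normal(0, 1.0))

# Bootstrap filter: propagate, weight by the measurement likelihood, resample.
N = 2000
particles = rng.normal(0, 2.0, N)
estimates = []
for z in obs:
    particles = propagate(particles)
    weights = np.exp(-0.5 * (z - observe(particles)) ** 2)   # Gaussian measurement noise
    weights /= weights.sum()
    estimates.append(np.sum(weights * particles))
    particles = rng.choice(particles, size=N, p=weights)      # multinomial resampling

print(estimates[-5:])
```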
Naranjo, Yandi; Pons, Miquel; Konrat, Robert
2012-01-01
The number of existing protein sequences spans a very small fraction of sequence space. Natural proteins have overcome a strong negative selective pressure to avoid the formation of insoluble aggregates. Stably folded globular proteins and intrinsically disordered proteins (IDPs) use alternative solutions to the aggregation problem. While in globular proteins folding minimizes the access to aggregation prone regions, IDPs on average display large exposed contact areas. Here, we introduce the concept of average meta-structure correlation maps to analyze sequence space. Using this novel conceptual view we show that representative ensembles of folded and ID proteins show distinct characteristics and respond differently to sequence randomization. By studying the way evolutionary constraints act on IDPs to disable a negative function (aggregation) we might gain insight into the mechanisms by which function-enabling information is encoded in IDPs.
Predicting astronaut radiation doses from major solar particle events using artificial intelligence
NASA Astrophysics Data System (ADS)
Tehrani, Nazila H.
1998-06-01
Space radiation is an important issue for manned space flight. For long missions outside of the Earth's magnetosphere, there are two major sources of exposure: large Solar Particle Events (SPEs), consisting of numerous energetic protons and other heavy ions emitted by the Sun, and the Galactic Cosmic Rays (GCRs), which constitute an isotropic radiation field of low flux and high energy. In deep-space missions both SPEs and GCRs can be hazardous to the space crew. SPEs can provide an acute dose, which is a large dose over a short period of time. The acute doses from a large SPE that could be received by an astronaut with shielding as thick as a spacesuit may be as large as 500 cGy. GCRs will not provide acute doses, but may increase the lifetime risk of cancer from prolonged exposures in a range of 40-50 cSv/yr. In this research, we are using artificial intelligence to model the dose-time profiles during a major solar particle event. Artificial neural networks are reliable approximators for nonlinear functions. In this study we design a dynamic network. This network has the ability to update its dose predictions as new input dose data is received while the event is occurring. To accomplish this temporal behavior of the system we use an innovative Sliding Time-Delay Neural Network (STDNN). By using a STDNN one can predict doses received from large SPEs while the event is happening. The parametric fits and actual calculated doses for the skin, eye and bone marrow are used. The parametric data set obtained by fitting the Weibull functional forms to the calculated dose points has been divided into two subsets. The STDNN has been trained using some of these parametric events. The other subset of parametric data and the actual doses are used for testing with the resulting weights and biases of the first set. This is done to show that the network can generalize. Results of this testing indicate that the STDNN is capable of predicting doses from events that it has not seen before.
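The sketch below illustrates the data handling implied by the abstract: a Weibull-type cumulative dose-time profile (the specific functional form and parameter values here are assumptions) and the construction of sliding time-delay input windows, with an ordinary least-squares predictor standing in for the neural network itself.

```python
# Sketch of the data handling implied above: an assumed Weibull-type cumulative
# dose profile and sliding time-delay input windows; a least-squares predictor
# stands in for the STDNN.
import numpy as np

def weibull_dose(t, d_inf=300.0, tau=10.0, alpha=2.0):
    """Assumed cumulative dose-time profile (cGy), saturating at d_inf."""
    return d_inf * (1.0 - np.exp(-(t / tau) ** alpha))

t = np.arange(0.0, 48.0, 0.5)                      # hours since event onset
dose = weibull_dose(t) + np.random.default_rng(9).normal(0, 2.0, t.size)

def sliding_windows(series, n_delays=8, horizon=4):
    """Pairs of (last n_delays samples, value horizon steps ahead)."""
    X, y = [], []
    for i in range(n_delays, len(series) - horizon):
        X.append(series[i - n_delays:i])
        y.append(series[i + horizon])
    return np.array(X), np.array(y)

X, y = sliding_windows(dose)
design = np.hstack([X, np.ones((len(X), 1))])      # add a bias column
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

latest = np.append(dose[-8:], 1.0)                 # most recent delayed inputs plus bias
print(latest @ coef)                               # dose predicted 2 hours ahead
```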
Large scale crystallization of protein pharmaceuticals in microgravity via temperature change
NASA Technical Reports Server (NTRS)
Long, Marianna M.
1992-01-01
The major objective of this research effort is the temperature driven growth of protein crystals in large batches in the microgravity environment of space. Pharmaceutical houses are developing protein products for patient care, for example, human insulin, human growth hormone, interferons, and tissue plasminogen activator or TPA, the clot buster for heart attack victims. Except for insulin, these are very high value products; they are extremely potent in small quantities and have a great value per gram of material. It is feasible that microgravity crystallization can be a cost recoverable, economically sound final processing step in their manufacture. Large scale protein crystal growth in microgravity has significant advantages from the basic science and the applied science standpoints. Crystal growth can proceed unhindered due to lack of surface effects. Dynamic control is possible and relatively easy. The method has the potential to yield large quantities of pure crystalline product. Crystallization is a time honored procedure for purifying organic materials and microgravity crystallization could be the final step to remove trace impurities from high value protein pharmaceuticals. In addition, microgravity grown crystals could be the final formulation for those medicines that need to be administered in a timed release fashion. Long lasting insulin, insulin lente, is such a product. Also crystalline protein pharmaceuticals are more stable for long-term storage. Temperature, as the initiation step, has certain advantages. Again, dynamic control of the crystallization process is possible and easy. A temperature step is non-invasive and is the most subtle way to control protein solubility and therefore crystallization. Seeding is not necessary. Changes in protein and precipitant concentrations and pH are not necessary. Finally, this method represents a new way to crystallize proteins in space that takes advantage of the unique microgravity environment. The results from two flights showed that the hardware performed perfectly, many crystals were produced, and they were much larger than their ground grown controls. Morphometric analysis was done on over 4,000 crystals to establish crystal size, size distribution, and relative size. Space grown crystals were remarkably larger than their earth grown counterparts and crystal size was a function of PCF volume. That size distribution for the space grown crystals was a function of PCF volume may indicate that ultimate size was a function of temperature gradient. Since the insulin protein concentration was very low, 0.4 mg/ml, the size distribution could also be following the total amount of protein in each of the PCF's. X-ray analysis showed that the bigger space grown insulin crystals diffracted to higher resolution than their ground grown controls. When the data were normalized for size, they still indicated that the space crystals were better than the ground crystals.
Exploring Connectivity in Sequence Space of Functional RNA
NASA Technical Reports Server (NTRS)
Wei, Chenyu; Pohorille, Andrzej; Popovic, Milena; Ditzler, Mark
2017-01-01
The emergence of replicable genetic molecules was one of the marking points in the origin of life; their evolution can be conceptualized as a walk through the space of all possible sequences. The theoretical concept of a fitness landscape helps in understanding evolutionary processes by assigning a fitness value to each genotype. Evolution of a phenotype is then viewed as a series of consecutive, single-point mutations. Natural selection biases evolution toward peaks of high fitness and away from valleys of low fitness, whereas neutral drift occurs in the sequence space without direction as mutations are introduced at random. Large networks of neutral or near-neutral mutations on a fitness landscape, especially for sufficiently long genomes, are possible or even inevitable. Their detection in experiments, however, has been elusive. Although a few near-neutral evolutionary pathways have been found, recent experimental evidence indicates that landscapes consist of largely isolated islands. The generality of these results, however, is not clear, as the genome length or the fraction of functional molecules in the genotypic space might have been insufficient for the emergence of large, neutral networks. A thorough investigation of the structure of the fitness landscape is essential to understand the mechanisms of evolution of early genomes. RNA molecules are commonly assumed to play the pivotal role in the origin of genetic systems. They are widely believed to be early, if not the earliest, genetic and catalytic molecules, with abundant biochemical activities as aptamers and ribozymes, i.e. RNA molecules capable, respectively, of binding small molecules or catalyzing chemical reactions. Here, we present results of our recent studies on the structure of the sequence space of RNA ligase ribozymes selected through in vitro evolution. Several hundred thousand sequences, active to varying degrees, were obtained by deep sequencing. Analysis of these sequences revealed several large clusters defined such that every sequence in a cluster can be reached from any other sequence in the same cluster through a series of single point mutations. Sequences in a single cluster appear to adopt more than one secondary structure. The mechanism of refolding within a single cluster was examined. To shed light on possible evolutionary paths in the space of ribozymes, the connectivity between clusters was investigated. The effect of the length of RNA molecules on the structure of the fitness landscape and possible evolutionary paths was examined by comparing functional sequences of 20 and 80 nucleobases in length. It was found that sequences of different lengths shared secondary structure motifs that were presumed responsible for catalytic activity, with increasing complexity and global structural rearrangements emerging in longer molecules.
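As an illustration of the cluster definition used above (every sequence reachable from any other by single point mutations), the following hedged sketch finds connected components of the Hamming-distance-1 graph; the toy sequences stand in for real deep-sequencing reads.

```python
# Hedged sketch (not the authors' pipeline): group sequences into clusters in
# which every member can be reached from any other by a series of single
# point mutations, i.e. connected components of the Hamming-distance-1 graph.
# The toy sequences are illustrative placeholders for deep-sequencing reads.

def neighbors(seq, alphabet="ACGU"):
    """All sequences one point mutation away from `seq`."""
    for i, base in enumerate(seq):
        for b in alphabet:
            if b != base:
                yield seq[:i] + b + seq[i + 1:]

def clusters(sequences):
    """Connected components under single-point-mutation connectivity."""
    pool, seen, groups = set(sequences), set(), []
    for s in sequences:
        if s in seen:
            continue
        stack, component = [s], []
        seen.add(s)
        while stack:
            cur = stack.pop()
            component.append(cur)
            for nb in neighbors(cur):
                if nb in pool and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        groups.append(component)
    return groups

toy = ["GGAU", "GGAC", "GGCC", "AAAA", "AAAU"]
for i, c in enumerate(clusters(toy)):
    print(f"cluster {i}: {c}")
```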
Yang, Jun; Guan, Yingying; Xia, Jianhong Cecilia; Jin, Cui; Li, Xueming
2018-10-15
In this study, a green space classification system for urban fringes was established based on multisource land use data from Ganjingzi District, China (2000-2015). The purpose of this study was to explore the spatiotemporal variation of green space landscapes and ecosystem service values (ESV). During 2006-2015, as urbanization advanced rapidly, the green space area decreased significantly (from 359.57 to 213.46 km²), the ESV decreased from 397.42 to 124.93 million yuan, and the dynamic degrees of ESV variation were always <0. The green space large patch index and class area both declined, and the number of patches and patch density increased, indicating green space landscape fragmentation. The dynamic degrees of ESV variation in western and northern regions (with relatively intensive green space distributions) were higher than in the east. The ESV for closed forestland and sparse woodland had the highest functional values of ecological regulation and support, whereas dry land and irrigated cropland provided the highest functional values of production supply. The findings of this study are expected to provide support for better construction practices in Dalian and for the improvement of the ecological environment. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Patrick, Brian; Moore, James; Hackenberger, Wesley; Jiang, Xiaoning
2013-01-01
A lightweight, cryogenically capable, scalable, deformable mirror has been developed for space telescopes. This innovation makes use of polymer-based membrane mirror technology to enable large-aperture mirrors that can be easily launched and deployed. The key component of this innovation is a lightweight, large-stroke, cryogenic actuator array that combines the high degree of mirror figure control needed with a large actuator influence function. The latter aspect of the innovation allows membrane mirror figure correction with a relatively low actuator density, preserving the lightweight attributes of the system. The principal components of this technology are lightweight, low-profile, high-stroke, cryogenic-capable piezoelectric actuators based on PMN-PT (piezoelectric lead magnesium niobate-lead titanate) single-crystal configured in a flextensional actuator format; high-quality, low-thermal-expansion polymer membrane mirror materials developed by NeXolve; and electrostatic coupling between the membrane mirror and the piezoelectric actuator assembly to minimize problems such as actuator print-through.
NASA Technical Reports Server (NTRS)
Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.
1991-01-01
A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
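A hedged sketch of the general approach, not the SSME study's code: a genetic algorithm evolves bit-strings that select a subset of candidate inputs, scoring each subset by how well a simple least-squares approximator fits synthetic data. The data, size penalty, and GA settings are placeholders.

```python
# Hedged sketch: GA over bit-strings selecting which candidate inputs feed a
# simple approximator.  Fitness is least-squares fit quality on synthetic
# data minus a size penalty; all names and settings are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_candidates, n_samples = 20, 200
X = rng.normal(size=(n_samples, n_candidates))
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.5 * X[:, 12] + 0.1 * rng.normal(size=n_samples)

def fitness(mask):
    """R^2 of a linear fit restricted to the selected inputs, minus a size penalty."""
    if not mask.any():
        return -1.0
    coef, *_ = np.linalg.lstsq(X[:, mask], y, rcond=None)
    resid = y - X[:, mask] @ coef
    return 1.0 - resid.var() / y.var() - 0.01 * mask.sum()

pop = rng.random((30, n_candidates)) < 0.5            # initial random population
for _ in range(40):                                    # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]            # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, n_candidates)            # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(n_candidates) < 0.02         # mutation
        children.append(np.where(flip, ~child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected inputs:", np.flatnonzero(best))
```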
Grassmann phase space methods for fermions. II. Field theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dalton, B.J., E-mail: bdalton@swin.edu.au; Jeffers, J.; Barnett, S.M.
In both quantum optics and cold atom physics, the behaviour of bosonic photons and atoms is often treated using phase space methods, where mode annihilation and creation operators are represented by c-number phase space variables, with the density operator equivalent to a distribution function of these variables. The anti-commutation rules for fermion annihilation and creation operators suggest the possibility of using anti-commuting Grassmann variables to represent these operators. However, in spite of the seminal work by Cahill and Glauber and a few applications, the use of Grassmann phase space methods in quantum-atom optics to treat fermionic systems is rather rare, though fermion coherent states using Grassmann variables are widely used in particle physics. This paper presents a phase space theory for fermion systems based on distribution functionals, which replace the density operator and involve Grassmann fields representing anti-commuting fermion field annihilation and creation operators. It is an extension of a previous phase space theory paper for fermions (Paper I) based on separate modes, in which the density operator is replaced by a distribution function depending on Grassmann phase space variables which represent the mode annihilation and creation operators. This further development of the theory is important for the situation when large numbers of fermions are involved, resulting in too many modes to treat separately. Here Grassmann fields, distribution functionals, functional Fokker–Planck equations and Ito stochastic field equations are involved. Typical applications to a trapped Fermi gas of interacting spin 1/2 fermionic atoms and to multi-component Fermi gases with non-zero range interactions are presented, showing that the Ito stochastic field equations are local in these cases. For the spin 1/2 case we also show how simple solutions can be obtained both for the untrapped case and for an optical lattice trapping potential.
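For background, the standard Grassmann-algebra relations that motivate the representation described above; these are textbook identities, not the paper's specific distribution functionals.

```latex
% Textbook Grassmann-algebra relations underlying the mapping (general
% background, not the paper's specific distribution functionals):
\xi_i \xi_j + \xi_j \xi_i = 0, \qquad \xi_i^{\,2} = 0,
\qquad\text{mirroring}\qquad
\{\hat{c}_i,\hat{c}_j\} = 0, \quad \{\hat{c}_i,\hat{c}_j^{\dagger}\} = \delta_{ij}.
```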
Shaping up nucleic acid computation.
Chen, Xi; Ellington, Andrew D
2010-08-01
Nucleic acid-based nanotechnology has always been perceived as novel, but has begun to move from theoretical demonstrations to practical applications. In particular, the large address spaces available to nucleic acids can be exploited to encode algorithms and/or act as circuits and thereby process molecular information. In this review we not only revisit several milestones in the field of nucleic acid-based computation, but also highlight how the prospects for nucleic acid computation go beyond just a large address space. Functional nucleic acid elements (aptamers, ribozymes, and deoxyribozymes) can serve as inputs and outputs to the environment, and can act as logical elements. Into the future, the chemical dynamics of nucleic acids may prove as useful as hybridization for computation. Copyright © 2010 Elsevier Ltd. All rights reserved.
Townes Group Activities from 1983-2000: Personal Recollections of William Danchi
NASA Technical Reports Server (NTRS)
Danchi, William C.
2015-01-01
I arrived in Berkeley in October 1983 as a post-doc, and my appointment was at the Space Sciences Laboratory (SSL). During that time the group was very large, with multiple activities led by Charlie himself and also by Senior Fellows such as John Lacy, Dan Jaffe, and Al Betz at the top of the hill at Space Sciences. Another significant contingent of the Townes group was housed in Birge Hall on campus, led by Reinhard Genzel when he was an Assistant Professor in the Physics Department. Although the group encompassed two separate locations, it functioned as one large group. Either we rode with Charlie up and down the hill, or (if we were concerned about our safety!) we took the bus.
Preliminary Design of a Manned Nuclear Electric Propulsion Vehicle Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
Irwin, Ryan W.; Tinker, Michael L.
2005-01-01
Nuclear electric propulsion (NEP) vehicles will be needed for future manned missions to Mars and beyond. Candidate designs must be identified for further detailed design from a large array of possibilities. Genetic algorithms have proven their utility in conceptual design studies by effectively searching a large design space to pinpoint unique optimal designs. This research combined analysis codes for NEP subsystems with a genetic algorithm. The use of penalty functions with scaling ratios was investigated to increase computational efficiency. Also, the selection of design variables for optimization was considered to reduce computation time without losing beneficial design search space. Finally, trend analysis of a reference mission to the asteroids yielded a group of candidate designs for further analysis.
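One common way to fold constraints into a genetic-algorithm fitness is an exterior penalty with scaling ratios, sketched below; this is an illustrative form, since the abstract does not spell out the exact penalty scheme used. Here f is the mission-level objective and the g_j ≤ 0 are subsystem constraints.

```latex
% One common exterior-penalty fitness with scaling ratios r_j (illustrative
% form; the study's exact penalty scheme is not specified in the abstract):
F(\mathbf{x}) \;=\; f(\mathbf{x}) \;+\; \sum_j r_j \left[\max\bigl(0,\, g_j(\mathbf{x})\bigr)\right]^2 .
```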
Heating of large format filters in sub-mm and fir space optics
NASA Astrophysics Data System (ADS)
Baccichet, N.; Savini, G.
2017-11-01
Most FIR and sub-mm space-borne observatories use polymer-based quasi-optical elements like filters and lenses, due to their high transparency and low absorption in such wavelength ranges. Nevertheless, data from those missions have proven that thermal imbalances in the instrument (not caused by filters) can complicate the data analysis. Consequently, for future, higher precision instrumentation, further investigation is required on any thermal imbalances embedded in such polymer-based filters. In particular, this paper studies the heating of polymers operating at cryogenic temperature in space. This phenomenon is an important aspect of their functioning, since the transient emission of unwanted thermal radiation may affect the scientific measurements. To assess this effect, a computer model was developed for polypropylene-based filters and PTFE-based coatings. Specifically, a theoretical model of their thermal properties was created and used in a multi-physics simulation that accounts for conductive and radiative heating effects of large optical elements, the geometry of which was suggested by the large format array instruments designed for future space missions. It was found that in the simulated conditions, the filter temperature was characterized by a time-dependent behaviour modulated by a small-scale fluctuation. Moreover, it was noticed that thermalization was reached only when a low power input was present.
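A lumped energy balance of the kind such a simulation integrates for each filter element is sketched below for orientation; the paper's multi-physics model resolves the full geometry, so the symbols (heat capacity C, conductive link G, emissivity ε, area A, bath and environment temperatures) are illustrative.

```latex
% A lumped energy balance of the kind such a simulation integrates for each
% filter element (illustrative; the paper's model resolves the full geometry):
C(T)\,\frac{dT}{dt} \;=\; P_{\mathrm{abs}}(t)\;-\;G\,\bigl[T - T_{\mathrm{bath}}\bigr]
 \;-\;\epsilon\,\sigma\,A\,\bigl[T^{4} - T_{\mathrm{env}}^{4}\bigr].
```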
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotasidis, Fotis A., E-mail: Fotis.Kotasidis@unige.ch; Zaidi, Habib; Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva
2014-06-15
Purpose: The Ingenuity time-of-flight (TF) PET/MR is a recently developed hybrid scanner combining the molecular imaging capabilities of PET with the excellent soft tissue contrast of MRI. It is becoming common practice to characterize the system's point spread function (PSF) and understand its variation under spatial transformations to guide clinical studies and potentially use it within resolution recovery image reconstruction algorithms. Furthermore, due to the system's utilization of overlapping and spherically symmetric Kaiser-Bessel basis functions during image reconstruction, its image space PSF and reconstructed spatial resolution could be affected by the selection of the basis function parameters. Hence, a detailed investigation into the multidimensional basis function parameter space is needed to evaluate the impact of these parameters on spatial resolution. Methods: Using an array of 12 × 7 printed point sources, along with a custom made phantom, and with the MR magnet on, the system's spatially variant image-based PSF was characterized in detail. Moreover, basis function parameters were systematically varied during reconstruction (list-mode TF OSEM) to evaluate their impact on the reconstructed resolution and the image space PSF. Following the spatial resolution optimization, phantom and clinical studies were subsequently reconstructed using representative basis function parameters. Results: Based on the analysis and under standard basis function parameters, the axial and tangential components of the PSF were found to be almost invariant under spatial transformations (∼4 mm) while the radial component varied modestly from 4 to 6.7 mm. Using a systematic investigation into the basis function parameter space, the spatial resolution was found to degrade for basis functions with a large radius and small shape parameter. However, it was found that optimizing the spatial resolution in the reconstructed PET images, while having a good basis function superposition and keeping the image representation error to a minimum, is feasible, with the parameter combination range depending upon the scanner's intrinsic resolution characteristics. Conclusions: Using the printed point source array as a MR compatible methodology for experimentally measuring the scanner's PSF, the system's spatially variant resolution properties were successfully evaluated in image space. Overall the PET subsystem exhibits excellent resolution characteristics mainly due to the fact that the raw data are not under-sampled/rebinned, enabling the spatial resolution to be dictated by the scanner's intrinsic resolution and the image reconstruction parameters. Due to the impact of these parameters on the resolution properties of the reconstructed images, the image space PSF varies both under spatial transformations and due to basis function parameter selection. Nonetheless, for a range of basis function parameters, the image space PSF remains unaffected, with the range depending on the scanner's intrinsic resolution properties.
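For reference, the radially symmetric Kaiser-Bessel ("blob") basis function commonly used in such reconstructions, written in terms of the radius a and shape parameter α varied in the study; the order m and normalization follow the standard definition, as the scanner's exact convention is not given in the abstract.

```latex
% Standard radially symmetric Kaiser-Bessel ("blob") basis function, with
% radius a, shape parameter alpha and order m (the parameters varied above):
b(r) \;=\; \frac{1}{I_m(\alpha)}\,
\Bigl[\sqrt{1-(r/a)^2}\,\Bigr]^{m} I_m\!\Bigl(\alpha\sqrt{1-(r/a)^2}\Bigr),
\qquad 0 \le r \le a,
% and b(r) = 0 for r > a, with I_m the modified Bessel function of order m.
```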
Large-scale transport across narrow gaps in rod bundles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guellouz, M.S.; Tavoularis, S.
1995-09-01
Flow visualization and hot-wire anemometry were used to investigate the velocity field in a rectangular channel containing a single cylindrical rod, which could be traversed on the centreplane to form gaps of different widths with the plane wall. The presence of large-scale, quasi-periodic structures in the vicinity of the gap has been demonstrated through flow visualization, spectral analysis and space-time correlation measurements. These structures are seen to exist even for relatively large gaps, at least up to W/D=1.350 (W is the sum of the rod diameter, D, and the gap width). The above measurements appear to be compatible with the field of a street of three-dimensional, counter-rotating vortices, whose detailed structure, however, remains to be determined. The convection speed and the streamwise spacing of these vortices have been determined as functions of the gap size.
Space construction base support requirements for environmental control and life support systems
NASA Technical Reports Server (NTRS)
Thiele, R. J.; Secord, T. C.; Murphy, G. L.
1977-01-01
A Space Station analysis study is being performed for NASA which identifies cost-effective Space Station options that can provide a space facility capable of performing space construction, space manufacturing, cosmological research, earth services, and other functions. A space construction base concept for the construction of large structures, such as those needed to implement satellite solar power for earth usage, will be used as a basis for discussing requirements that impact the design selection, level of integration, and operation of environmental control and life support systems (ECLSS). The space construction base configuration also provides a basic Space Station facility that can accommodate biological manufacturing modules, ultrapure glasses manufacturing modules, and modules for other services in a building-block fashion. Examples of special problems that could dictate hardware required to augment the basic ECLSS for autonomous modules will be highlighted. Additionally, overall intravehicular (IVA) and extravehicular (EVA) activities and requirements that could impact the basic station ECLSS degree of closure are discussed.
Real- and redshift-space halo clustering in f(R) cosmologies
NASA Astrophysics Data System (ADS)
Arnalte-Mur, Pablo; Hellwing, Wojciech A.; Norberg, Peder
2017-05-01
We present two-point correlation function statistics of the mass and the haloes in the chameleon f(R) modified gravity scenario using a series of large-volume N-body simulations. Three distinct variations of f(R) are considered (F4, F5 and F6) and compared to a fiducial Λ cold dark matter (ΛCDM) model in the redshift range z ∈ [0, 1]. We find that the matter clustering is indistinguishable for all models except for F4, which shows a significantly steeper slope. The ratio of the redshift- to real-space correlation function at scales >20 h-1 Mpc agrees with the linear General Relativity (GR) Kaiser formula for the viable f(R) models considered. We consider three halo populations characterized by spatial abundances comparable to that of luminous red galaxies and galaxy clusters. The redshift-space halo correlation functions of F4 and F5 deviate significantly from ΛCDM at intermediate and high redshift, as the f(R) halo bias is smaller than or equal to that of the ΛCDM case. Finally, we introduce a new model-independent clustering statistic to distinguish f(R) from GR: the relative halo clustering ratio - R. The sampling required to adequately reduce the scatter in R will be available with the advent of the next-generation galaxy redshift surveys. This will foster a prospective avenue to obtain largely model-independent cosmological constraints on this class of modified gravity models.
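The linear-theory Kaiser expression referred to above, relating the large-scale ratio of the redshift- to real-space correlation functions to β = f/b (with f ≈ Ω_m^0.55 in GR), is:

```latex
% Linear-theory (Kaiser) ratio of redshift- to real-space correlation
% functions, with beta = f/b and f ~ Omega_m^{0.55} in GR:
\frac{\xi_s(s)}{\xi_r(r)} \;\simeq\; 1 \;+\; \frac{2\beta}{3} \;+\; \frac{\beta^2}{5},
\qquad \beta \equiv \frac{f}{b}.
```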
StePS: Stereographically Projected Cosmological Simulations
NASA Astrophysics Data System (ADS)
Rácz, Gábor; Szapudi, István; Csabai, István; Dobos, László
2018-05-01
StePS (Stereographically Projected Cosmological Simulations) compactifies the infinite spatial extent of the Universe into a finite sphere with isotropic boundary conditions to simulate the evolution of the large-scale structure. This eliminates the need for periodic boundary conditions, which are a numerical convenience unsupported by observation and which modify the law of force on large scales in an unrealistic fashion. StePS uses stereographic projection for space compactification and naive O(N²) force calculation; this arrives at a correlation function of the same quality more quickly than standard (tree or P3M) algorithms with similar spatial and mass resolution. The O(N²) force calculation is easy to adapt to modern graphics cards, hence StePS can function as a high-speed prediction tool for modern large-scale surveys.
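A hedged toy version of the naive O(N²) direct-summation force kernel mentioned above; StePS itself is a separate code, and the Plummer softening length eps and the random particle set are assumptions for illustration.

```python
# Hedged toy version of the naive O(N^2) force calculation (illustration of
# the direct-summation kernel only, with an assumed Plummer softening `eps`).
import numpy as np

def direct_forces(pos, mass, G=1.0, eps=1e-2):
    """Pairwise gravitational accelerations by direct summation, O(N^2)."""
    n = len(pos)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                       # vectors from particle i
        r2 = (d * d).sum(axis=1) + eps**2      # softened squared distances
        r2[i] = np.inf                         # no self-force
        acc[i] = G * (mass[:, None] * d / r2[:, None]**1.5).sum(axis=0)
    return acc

rng = np.random.default_rng(2)
pos = rng.normal(size=(512, 3))
mass = np.ones(512)
print(direct_forces(pos, mass)[:2])
```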
Large Space Antenna Systems Technology, 1984
NASA Technical Reports Server (NTRS)
Boyer, W. J. (Compiler)
1985-01-01
Mission applications for large space antenna systems; large space antenna structural systems; materials and structures technology; structural dynamics and control technology; electromagnetics technology; large space antenna systems and the Space Station; and flight test and evaluation were examined.
Microlens Masses from Astrometry and Parallax in Space-based Surveys: From Planets to Black Holes
NASA Astrophysics Data System (ADS)
Gould, Andrew; Yee, Jennifer C.
2014-03-01
We show that space-based microlensing experiments can recover lens masses and distances for a large fraction of all events (those with individual photometric errors <~ 0.01 mag) using a combination of one-dimensional microlens parallaxes and astrometric microlensing. This will provide a powerful probe of the mass distributions of planets, black holes, and neutron stars, the distribution of planets as a function of Galactic environment, and the velocity distributions of black holes and neutron stars. While systematics are in principle a significant concern, we show that it is possible to vet against all systematics (known and unknown) using single-epoch precursor observations with the Hubble Space Telescope roughly 10 years before the space mission.
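For reference, the standard microlensing relations behind the method: astrometric microlensing measures the angular Einstein radius θ_E, the (one-dimensional) parallax measures π_E, and the lens mass follows from their combination.

```latex
% Standard microlensing mass relation: astrometry yields theta_E, the
% parallax measurement yields pi_E, and together
M \;=\; \frac{\theta_E}{\kappa\,\pi_E},
\qquad
\kappa \equiv \frac{4G}{c^{2}\,\mathrm{au}} \simeq 8.14\ \mathrm{mas}\,M_\odot^{-1},
% with the lens-source relative parallax given by pi_rel = theta_E * pi_E.
```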
TeraHertz Space Telescope (TST)
NASA Astrophysics Data System (ADS)
Dunn, Marina Madeline; Lesser, David; O'Dougherty, Stephan; Swift, Brandon; Pat, Terrance; Cortez, German; Smith, Steve; Goldsmith, Paul; Walker, Christopher K.
2017-01-01
The Terahertz Space Telescope (TST) utilizes breakthrough inflatable technology to create a ~25 m far-infrared observing system at a fraction of the cost of previous space telescopes. As a follow-on to JWST and Herschel, TST will probe the FIR/THz regime with unprecedented sensitivity and angular resolution, answering fundamental questions concerning the origin and destiny of the cosmos. Prior and planned space telescopes have barely scratched the surface of what can be learned in this wavelength region. TST will pick up where JWST and Herschel leave off. At ~30µm TST will have ~10x the sensitivity and ~3x the angular resolution of JWST. At longer wavelengths it will have ~1000x the sensitivity of Herschel and ~7 times the angular resolution. TST can achieve this at low cost through the innovative use of inflatable technology. A recently-completed NIAC Phase II study (Large Balloon Reflector) validated, both analytically and experimentally, the concept of a large inflatable spherical reflector and demonstrated critical telescope functions. In our poster we will introduce the TST concept and compare its performance to past, present, and proposed far-infrared observatories.
GW/Bethe-Salpeter calculations for charged and model systems from real-space DFT
NASA Astrophysics Data System (ADS)
Strubbe, David A.
GW and Bethe-Salpeter (GW/BSE) calculations use mean-field input from density-functional theory (DFT) calculations to compute excited states of a condensed-matter system. Many parts of a GW/BSE calculation are efficiently performed in a plane-wave basis, and extensive effort has gone into optimizing and parallelizing plane-wave GW/BSE codes for large-scale computations. Most straightforwardly, plane-wave DFT can be used as a starting point, but real-space DFT is also an attractive starting point: it is systematically convergeable like plane waves, can take advantage of efficient domain parallelization for large systems, and is well suited physically for finite and especially charged systems. The flexibility of a real-space grid also allows convenient calculations on non-atomic model systems. I will discuss the interfacing of a real-space (TD)DFT code (Octopus, www.tddft.org/programs/octopus) with a plane-wave GW/BSE code (BerkeleyGW, www.berkeleygw.org), consider performance issues and accuracy, and present some applications to simple and paradigmatic systems that illuminate fundamental properties of these approximations in many-body perturbation theory.
Werdelin, Lars; Lewis, Margaret E.
2013-01-01
We analyze functional richness and functional evenness of the carnivoran guild in eastern Africa from 3.5 Ma to 1.5 Ma, and compare them to the present day. The data consist of characters of the craniodental apparatus of 76 species of fossil and extant carnivorans, divided into four 0.5 Ma time slices from 3.5 to 1.5 Ma, together with the modern fauna. Focus is on large (>21.5 kg) carnivores. Results show that the large carnivore guild has lost nearly 99% of its functional richness since 3.5 Ma, in a process starting prior to 2 Ma. Measurement of functional evenness shows the modern large carnivore guild to be unique in being randomly distributed in morphospace while in all past time slices there is significant clustering of species. The results are analyzed in the light of known changes to climate and environment in eastern Africa. We conclude that climate change is unlikely to explain all of the changes found and suggest that the evolution of early hominins into carnivore niche space, especially the evolution of derived dietary strategies after 2 Ma, played a significant part in the reduction of carnivore functional richness. PMID:23483948
Large Space Antenna Systems Technology, 1984
NASA Technical Reports Server (NTRS)
Boyer, W. J. (Compiler)
1985-01-01
Papers are presented which provide a comprehensive review of space missions requiring large antenna systems and of the status of key technologies required to enable these missions. Topic areas include mission applications for large space antenna systems, large space antenna structural systems, materials and structures technology, structural dynamics and control technology, electromagnetics technology, large space antenna systems and the space station, and flight test and evaluation.
Fast large scale structure perturbation theory using one-dimensional fast Fourier transforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmittfull, Marcel; Vlah, Zvonimir; McDonald, Patrick
The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., Fourier transform of the 2-point correlation function) with one-dimensional fast Fourier transforms. This is exact and a few orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order where existing codes become painfully slow. Our approach follows by writing higher order corrections to the 2-point correlation function as, e.g., the correlation between two second-order fields or the correlation between a linear and a third-order field. These are then decomposed into products of correlations of linear fields and derivatives of linear fields. In conclusion, the method can also be viewed as evaluating three-dimensional Fourier space convolutions using products in configuration space, which may also be useful in other contexts where similar integrals appear.
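The identity this method exploits can be stated compactly (up to Fourier-convention factors): a convolution of kernels in Fourier space equals the transform of a pointwise product in configuration space, so loop integrals built from such convolutions reduce to one-dimensional radial transforms of products of linear correlation functions.

```latex
% Convolution in Fourier space as a product in configuration space
% (up to Fourier-convention factors):
\int \frac{d^{3}q}{(2\pi)^{3}}\, A(\mathbf{q})\, B(\mathbf{k}-\mathbf{q})
 \;=\; \int d^{3}r\; e^{-i\,\mathbf{k}\cdot\mathbf{r}}\, A(r)\, B(r).
```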
Fast large scale structure perturbation theory using one-dimensional fast Fourier transforms
Schmittfull, Marcel; Vlah, Zvonimir; McDonald, Patrick
2016-05-01
The usual fluid equations describing the large-scale evolution of mass density in the universe can be written as local in the density, velocity divergence, and velocity potential fields. As a result, the perturbative expansion in small density fluctuations, usually written in terms of convolutions in Fourier space, can be written as a series of products of these fields evaluated at the same location in configuration space. Based on this, we establish a new method to numerically evaluate the 1-loop power spectrum (i.e., Fourier transform of the 2-point correlation function) with one-dimensional fast Fourier transforms. This is exact and a few orders of magnitude faster than previously used numerical approaches. Numerical results of the new method are in excellent agreement with the standard quadrature integration method. This fast model evaluation can in principle be extended to higher loop order where existing codes become painfully slow. Our approach follows by writing higher order corrections to the 2-point correlation function as, e.g., the correlation between two second-order fields or the correlation between a linear and a third-order field. These are then decomposed into products of correlations of linear fields and derivatives of linear fields. In conclusion, the method can also be viewed as evaluating three-dimensional Fourier space convolutions using products in configuration space, which may also be useful in other contexts where similar integrals appear.
Accelerating molecular property calculations with nonorthonormal Krylov space methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.
Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
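A hedged sketch of the nonorthonormal-Krylov idea for the lowest eigenpair of a symmetric matrix: residuals are appended to the subspace without prior orthonormalization, and a small generalized eigenproblem with an overlap matrix is solved at each step. The diagonal (Davidson-style) preconditioner and the random test matrix are illustrative choices, not the paper's production settings.

```python
# Hedged sketch of the nonorthonormal Krylov (nKs) idea for the lowest
# eigenpair of a symmetric matrix: residual vectors are appended to the
# subspace without prior orthonormalization, so a small generalized
# eigenproblem H c = E S c with overlap matrix S is solved at each step.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(3)
n = 400
A = np.diag(np.arange(1.0, n + 1.0)) + 1e-2 * rng.normal(size=(n, n))
A = 0.5 * (A + A.T)                            # symmetric test matrix
diag = np.diag(A)

v = rng.normal(size=n)
V = [v / np.linalg.norm(v)]                    # nonorthonormal subspace basis
for it in range(20):
    B = np.column_stack(V)
    H = B.T @ A @ B                            # projected matrix
    S = B.T @ B                                # overlap of the unorthogonalized basis
    w, U = eigh(H, S)                          # small generalized eigenproblem
    theta, c = w[0], U[:, 0]
    x = B @ c                                  # Ritz vector in the full space
    r = A @ x - theta * x                      # residual
    if np.linalg.norm(r) < 1e-8:
        break
    denom = diag - theta
    denom[np.abs(denom) < 1e-2] = 1e-2         # guard small denominators
    t = r / denom                              # Davidson-style preconditioning
    V.append(t / np.linalg.norm(t))            # appended without orthogonalization

print("lowest eigenvalue ~", theta, "after", it + 1, "iterations")
```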
Accelerating molecular property calculations with nonorthonormal Krylov space methods
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; ...
2016-05-03
Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
Dynamic tests on the NASA Langley CSI evolutionary model
NASA Technical Reports Server (NTRS)
Troidl, H.; Elliott, K. B.
1993-01-01
A modal analysis study, representing one of the anticipated 'Cooperative Spacecraft Structural Dynamics Experiments on the NASA Langley CSI Evolutionary Model', was carried out as a sub-task under the NASA/DLR collaboration in dynamics and control of large space systems. The CSI evolutionary testbed (CEM) is designed for the development of Controls-Structures Interaction (CSI) technology to improve space science platform pointing. For orbiting space structures like large flexible trusses, new identification challenges arise due to their specific dynamic characteristics (low frequencies and high modal density) on the one hand, and the limited possibilities of exciting such structures and measuring their responses on orbit on the other. The main objective was to investigate the modal identification potential of several different types of forcing functions that could possibly be realized with on-board excitation equipment using a minimum number of exciter locations as well as response locations. These locations were defined in an analytical test prediction process used to study the implications of measuring and analyzing the responses thus produced. It turned out that broadband excitation is needed for a general modal survey, but if only certain modes are of particular interest, combinations of exponentially decaying sine functions provide favorable excitation conditions, as they allow the available energy to be concentrated on the modes of special interest. From a practical point of view, structural nonlinearities as well as noisy measurements make the analysis more difficult, especially in the low frequency range and when the modes are closely spaced.
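The mode-targeted forcing discussed above can be written, in an illustrative parameterization, as a combination of exponentially decaying sines centred on the modal frequencies of interest.

```latex
% Illustrative form of the mode-targeted forcing: decaying sines centred on
% the modal frequencies f_i of interest,
f(t) \;=\; \sum_i A_i\, e^{-\sigma_i t}\, \sin(2\pi f_i t),
% which concentrates the available excitation energy in narrow bands
% around each f_i.
```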
He, W; Zhao, S; Liu, X; Dong, S; Lv, J; Liu, D; Wang, J; Meng, Z
2013-12-04
Large-scale next-generation sequencing (NGS)-based resequencing detects sequence variations, constructs evolutionary histories, and identifies phenotype-related genotypes. However, NGS-based resequencing studies generate extraordinarily large amounts of data, making computations difficult. Effective use and analysis of these data for NGS-based resequencing studies remains a difficult task for individual researchers. Here, we introduce ReSeqTools, a full-featured toolkit for NGS (Illumina sequencing)-based resequencing analysis, which processes raw data, interprets mapping results, and identifies and annotates sequence variations. ReSeqTools provides abundant scalable functions for routine resequencing analysis in different modules to facilitate customization of the analysis pipeline. ReSeqTools is designed to use compressed data files as input or output to save storage space and facilitates faster and more computationally efficient large-scale resequencing studies in a user-friendly manner. It offers abundant practical functions and generates useful statistics during the analysis pipeline, which significantly simplifies resequencing analysis. Its integrated algorithms and abundant sub-functions provide a solid foundation for special demands in resequencing projects. Users can combine these functions to construct their own pipelines for other purposes.
Advanced UVOIR Mirror Technology Development for Very Large Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip
2011-01-01
The objective of this work is to define and initiate a long-term program to mature six inter-linked critical technologies for future UVOIR space telescope mirrors to TRL6 by 2018 so that a viable flight mission can be proposed to the 2020 Decadal Review. (1) Large-Aperture, Low Areal Density, High Stiffness Mirrors: 4 to 8 m monolithic and 8 to 16 m segmented primary mirrors require larger, thicker, stiffer substrates. (2) Support System: Large-aperture mirrors require large support systems to ensure that they survive launch and deploy on orbit in a stress-free and undistorted shape. (3) Mid/High Spatial Frequency Figure Error: A very smooth mirror is critical for producing a high-quality point spread function (PSF) for high-contrast imaging. (4) Segment Edges: Edges impact the PSF for high-contrast imaging applications, contribute to stray light noise, and affect the total collecting aperture. (5) Segment-to-Segment Gap Phasing: Segment phasing is critical for producing a high-quality, temporally stable PSF. (6) Integrated Model Validation: On-orbit performance is determined by mechanical and thermal stability. Future systems require validated performance models. We are pursuing multiple design paths to give the science community the option to enable either a future monolithic or segmented space telescope.
Comparison of molecular dynamics and superfamily spaces of protein domain deformation.
Velázquez-Muriel, Javier A; Rueda, Manuel; Cuesta, Isabel; Pascual-Montano, Alberto; Orozco, Modesto; Carazo, José-María
2009-02-17
The strong relationship between protein structure and flexibility, on the one hand, and biological protein function, on the other, is well known. Technically, protein flexibility exploration is an essential task in many applications, such as protein structure prediction and modeling. In this contribution we have compared two different approaches to explore the flexibility space of protein domains: i) molecular dynamics (MD-space), and ii) the study of the structural changes within a superfamily (SF-space). Our analysis indicates that the MD-space and the SF-space display a significant overlap, but are still different enough to be considered as complementary. The SF-space is wider but less complex than the MD-space, irrespective of the number of members in the superfamily. Also, the SF-space does not sample all possibilities offered by the MD-space, but often introduces very large changes along just a few deformation modes, whose number tends to a plateau as the number of related folds in the superfamily increases. Theoretically, we obtained two conclusions. First, that function restricts evolution's access to some flexibility patterns, as we observe that when a superfamily member changes to become another, the path does not completely overlap with the physical deformability. Second, that conformational changes from variation in a superfamily are larger and much simpler than those allowed by physical deformability. Methodologically, the conclusion is that both spaces studied are complementary, and have different size and complexity. We expect this fact to have application in fields such as 3D-EM/X-ray hybrid models or ab initio protein folding.
Comparison of molecular dynamics and superfamily spaces of protein domain deformation
Velázquez-Muriel, Javier A; Rueda, Manuel; Cuesta, Isabel; Pascual-Montano, Alberto; Orozco, Modesto; Carazo, José-María
2009-01-01
Background The strong relationship between protein structure and flexibility, on the one hand, and biological protein function, on the other, is well known. Technically, protein flexibility exploration is an essential task in many applications, such as protein structure prediction and modeling. In this contribution we have compared two different approaches to explore the flexibility space of protein domains: i) molecular dynamics (MD-space), and ii) the study of the structural changes within a superfamily (SF-space). Results Our analysis indicates that the MD-space and the SF-space display a significant overlap, but are still different enough to be considered as complementary. The SF-space is wider but less complex than the MD-space, irrespective of the number of members in the superfamily. Also, the SF-space does not sample all possibilities offered by the MD-space, but often introduces very large changes along just a few deformation modes, whose number tends to a plateau as the number of related folds in the superfamily increases. Conclusion Theoretically, we obtained two conclusions. First, that function restricts evolution's access to some flexibility patterns, as we observe that when a superfamily member changes to become another, the path does not completely overlap with the physical deformability. Second, that conformational changes from variation in a superfamily are larger and much simpler than those allowed by physical deformability. Methodologically, the conclusion is that both spaces studied are complementary, and have different size and complexity. We expect this fact to have application in fields such as 3D-EM/X-ray hybrid models or ab initio protein folding. PMID:19220918
NASA Technical Reports Server (NTRS)
Blumenthal, George R.; Johnston, Kathryn V.
1994-01-01
The Sachs-Wolfe effect is known to produce large angular scale fluctuations in the cosmic microwave background radiation (CMBR) due to gravitational potential fluctuations. We show how the angular correlation function of the CMBR can be expressed explicitly in terms of the mass autocorrelation function xi(r) in the universe. We derive analytic expressions for the angular correlation function and its multipole moments in terms of integrals over xi(r) or its second moment, J_3(r), which does not need to satisfy the sort of integral constraint that xi(r) must. We derive similar expressions for the bulk flow velocity in terms of xi and J_3. One interesting result that emerges directly from this analysis is that, for all angles theta, there is a substantial contribution to the correlation function from a wide range of distances r, and that the radial shape of this contribution does not vary greatly with angle.
Effects of nanopillar array diameter and spacing on cancer cell capture and cell behaviors
NASA Astrophysics Data System (ADS)
Wang, Shunqiang; Wan, Yuan; Liu, Yaling
2014-10-01
While substrates with nanopillars (NPs) have emerged as promising platforms for isolation of circulating tumor cells (CTCs), the influence of diameter and spacing of NPs on CTC capture is still unclear. In this paper, CTC-capture yield and cell behaviors have been investigated by using antibody functionalized NPs of various diameters (120-1100 nm) and spacings (35-800 nm). The results show a linear relationship between the cell capture yield and effective contact area of NP substrates where a NP array of small diameter and reasonable spacing is preferred; however, spacing that is too small or too large adversely impairs the capture efficiency and specificity, respectively. In addition, the formation of pseudopodia between captured cells and the substrate is found to be dependent not only on cell adhesion status but also on elution strength and shear direction. These findings provide essential guidance in designing NP substrates for more efficient capture of CTCs and manipulation of cytomorphology in future.
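As a purely illustrative geometric estimate (an assumption for orientation, not the paper's definition of effective contact area), one can compare substrates of pillar diameter d, edge-to-edge spacing s, and cell-pillar contact depth h via:

```latex
% Illustrative geometric estimate (an assumption, not the paper's formula):
% pillar density is about (d+s)^{-2}, so the contact area per unit substrate
% area from sidewalls and pillar tops is roughly
\frac{A_{\mathrm{eff}}}{A_{\mathrm{substrate}}} \;\approx\;
\frac{\pi d h \;+\; \pi d^{2}/4}{(d+s)^{2}},
% one simple way to compare substrates with different d and s against the
% measured capture yields.
```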
Cosmological Distortions in Redshift Space
NASA Astrophysics Data System (ADS)
Ryden, Barbara S.
1995-05-01
The long-sought value of q_0, the deceleration parameter, remains elusive. One method of finding q_0 is to measure the distortions of large scale structure in redshift space. If the Hubble constant changes with time, then the mapping between redshift space and real space is nonlinear, even in the absence of peculiar motions. When q_0 > -1, structures in redshift space will be distorted along the line of sight; the distortion is proportional to (1 + q_0)z in the limit that the redshift z is small. The cosmological distortions at z <= 0.2 can be found by measuring the shapes of voids in redshift surveys of galaxies (such as the upcoming Sloan Digital Sky Survey). The cosmological distortions are masked to some extent by the distortions caused by small-scale peculiar velocities; it is difficult to measure the shape of a void when the fingers of God are poking into it. The cosmological distortions at z ~ 1 can be found by measuring the correlation function of quasars as a function of redshift and of angle relative to the line of sight. Finding q_0 by measuring distortions in redshift space, like the classical methods of determining q_0, is simple and elegant in principle but complicated and messy in practice.
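The low-redshift expansion behind the statement above can be made explicit: the comoving distance is

```latex
d_C(z) \;=\; \frac{c}{H_0}\left[\, z \;-\; \frac{1+q_0}{2}\, z^{2} \;+\; \mathcal{O}(z^{3}) \right],
% so the radial stretch per unit redshift, d(d_C)/dz, is reduced by the
% fractional amount (1+q_0) z, the line-of-sight distortion quoted above.
```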
Pei, Du; Ye, Ke
2016-11-02
Here, we test the 3d-3d correspondence for theories that are labeled by Lens spaces. We find a full agreement between the index of the 3d N=2 "Lens space theory" T[L(p,1)] and the partition function of complex Chern-Simons theory on L(p,1). In particular, for p = 1, we show how the familiar S^3 partition function of Chern-Simons theory arises from the index of a free theory. For large p, we find that the index of T[L(p,1)] becomes a constant independent of p. In addition, we study T[L(p,1)] on the squashed three-sphere S^3_b. This enables us to see clearly, at the level of the partition function, to what extent G_C complex Chern-Simons theory can be thought of as two copies of Chern-Simons theory with compact gauge group G.
Expanding the biomass derived chemical space
Brun, Nicolas; Hesemann, Peter
2017-01-01
Biorefinery aims at the conversion of biomass and renewable feedstocks into fuels and platform chemicals, in analogy to conventional oil refinery. In the past years, the scientific community has defined a number of primary building blocks that can be obtained by direct biomass decomposition. However, the large potential of this “renewable chemical space” to contribute to the generation of value added bio-active compounds and materials still remains unexplored. In general, biomass derived building blocks feature a diverse range of chemical functionalities. In order to be integrated into value-added compounds, they require additional functionalization and/or covalent modification thereby generating secondary building blocks. The latter can be thus regarded as functional components of bio-active molecules or materials and represent an expansion of the renewable chemical space. This perspective highlights the most recent developments and opportunities for the synthesis of secondary biomass derived building blocks and their application to the preparation of value added products. PMID:28959397
Villéger, Sébastien; Miranda, Julia Ramos; Hernandez, Domingo Flores; Mouillot, David
2012-01-01
The concept of β-diversity, defined as dissimilarity among communities, has been widely used to investigate biodiversity patterns and community assembly rules. However, in ecosystems with high taxonomic β-diversity, due to marked environmental gradients, the level of functional β-diversity among communities is largely overlooked while it may reveal processes shaping community structure. Here, decomposing biodiversity indices into α (local) and γ (regional) components, we estimated taxonomic and functional β-diversity among tropical estuarine fish communities, through space and time. We found extremely low functional β-diversity values among fish communities (<1.5%) despite high dissimilarity in species composition and species dominance. Additionally, in contrast to the high α and γ taxonomic diversities, α and γ functional diversities were very close to the minimal value. These patterns were caused by two dominant functional groups which maintained a similar functional structure over space and time, despite the strong dissimilarity in taxonomic structure along environmental gradients. Our findings suggest that taxonomic and functional β-diversity deserve to be quantified simultaneously since these two facets can show contrasting patterns and the differences can in turn shed light on community assembly rules. PMID:22792395
NASA Astrophysics Data System (ADS)
Verdebout, S.; Jönsson, P.; Gaigalas, G.; Godefroid, M.; Froese Fischer, C.
2010-04-01
Multiconfiguration expansions frequently target valence correlation and correlation between valence electrons and the outermost core electrons. Correlation within the core is often neglected. A large orbital basis is needed to saturate both the valence and core-valence correlation effects. This in turn leads to huge numbers of configuration state functions (CSFs), many of which are unimportant. To avoid the problems inherent to the use of a single common orthonormal orbital basis for all correlation effects in the multiconfiguration Hartree-Fock (MCHF) method, we propose to optimize independent MCHF pair-correlation functions (PCFs), bringing their own orthonormal one-electron basis. Each PCF is generated by allowing single- and double-excitations from a multireference (MR) function. This computational scheme has the advantage of using targeted and optimally localized orbital sets for each PCF. These pair-correlation functions are coupled together and with each component of the MR space through a low dimension generalized eigenvalue problem. Nonorthogonal orbital sets being involved, the interaction and overlap matrices are built using biorthonormal transformation of the coupled basis sets followed by a counter-transformation of the PCF expansions. Applied to the ground state of beryllium, the new method gives total energies that are lower than the ones from traditional complete active space (CAS)-MCHF calculations using large orbital active sets. It is fair to say that we now have the possibility to account for, in a balanced way, correlation deep down in the atomic core in variational calculations.
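The low-dimensional coupling step referred to above takes the familiar generalized-eigenproblem form, with interaction and overlap matrices assembled after the biorthonormal transformation:

```latex
% Mixing coefficients c of the coupled PCF and multireference components:
\mathbf{H}\,\mathbf{c} \;=\; E\,\mathbf{S}\,\mathbf{c},
% where H and S are the interaction and overlap matrices built from the
% nonorthogonal, biorthonormally transformed basis sets.
```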
Functional Brain Networks Develop from a “Local to Distributed” Organization
Power, Jonathan D.; Dosenbach, Nico U. F.; Church, Jessica A.; Miezin, Francis M.; Schlaggar, Bradley L.; Petersen, Steven E.
2009-01-01
The mature human brain is organized into a collection of specialized functional networks that flexibly interact to support various cognitive functions. Studies of development often attempt to identify the organizing principles that guide the maturation of these functional networks. In this report, we combine resting state functional connectivity MRI (rs-fcMRI), graph analysis, community detection, and spring-embedding visualization techniques to analyze four separate networks defined in earlier studies. As we have previously reported, we find, across development, a trend toward ‘segregation’ (a general decrease in correlation strength) between regions close in anatomical space and ‘integration’ (an increased correlation strength) between selected regions distant in space. The generalization of these earlier trends across multiple networks suggests that this is a general developmental principle for changes in functional connectivity that would extend to large-scale graph theoretic analyses of large-scale brain networks. Communities in children are predominantly arranged by anatomical proximity, while communities in adults predominantly reflect functional relationships, as defined from adult fMRI studies. In sum, over development, the organization of multiple functional networks shifts from a local anatomical emphasis in children to a more “distributed” architecture in young adults. We argue that this “local to distributed” developmental characterization has important implications for understanding the development of neural systems underlying cognition. Further, graph metrics (e.g., clustering coefficients and average path lengths) are similar in child and adult graphs, with both showing “small-world”-like properties, while community detection by modularity optimization reveals stable communities within the graphs that are clearly different between young children and young adults. These observations suggest that early school age children and adults both have relatively efficient systems that may solve similar information processing problems in divergent ways. PMID:19412534
Functional brain networks develop from a "local to distributed" organization.
Fair, Damien A; Cohen, Alexander L; Power, Jonathan D; Dosenbach, Nico U F; Church, Jessica A; Miezin, Francis M; Schlaggar, Bradley L; Petersen, Steven E
2009-05-01
The mature human brain is organized into a collection of specialized functional networks that flexibly interact to support various cognitive functions. Studies of development often attempt to identify the organizing principles that guide the maturation of these functional networks. In this report, we combine resting state functional connectivity MRI (rs-fcMRI), graph analysis, community detection, and spring-embedding visualization techniques to analyze four separate networks defined in earlier studies. As we have previously reported, we find, across development, a trend toward 'segregation' (a general decrease in correlation strength) between regions close in anatomical space and 'integration' (an increased correlation strength) between selected regions distant in space. The generalization of these earlier trends across multiple networks suggests that this is a general developmental principle for changes in functional connectivity that would extend to large-scale graph theoretic analyses of large-scale brain networks. Communities in children are predominantly arranged by anatomical proximity, while communities in adults predominantly reflect functional relationships, as defined from adult fMRI studies. In sum, over development, the organization of multiple functional networks shifts from a local anatomical emphasis in children to a more "distributed" architecture in young adults. We argue that this "local to distributed" developmental characterization has important implications for understanding the development of neural systems underlying cognition. Further, graph metrics (e.g., clustering coefficients and average path lengths) are similar in child and adult graphs, with both showing "small-world"-like properties, while community detection by modularity optimization reveals stable communities within the graphs that are clearly different between young children and young adults. These observations suggest that early school age children and adults both have relatively efficient systems that may solve similar information processing problems in divergent ways.
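A hedged sketch of the graph-analysis step described in both records above: threshold a region-by-region correlation matrix into a graph, then compute clustering coefficient, average path length, and communities by modularity optimization (here with networkx). The random time series and the threshold are placeholders for real rs-fcMRI data.

```python
# Hedged sketch (not the study's pipeline): build a thresholded graph from a
# region-by-region correlation matrix and compute the metrics named above.
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(4)
n_regions = 60
signals = rng.normal(size=(n_regions, 200))          # stand-in for BOLD time series
corr = np.corrcoef(signals)

G = nx.Graph()
G.add_nodes_from(range(n_regions))
for i in range(n_regions):
    for j in range(i + 1, n_regions):
        if corr[i, j] > 0.1:                          # placeholder threshold
            G.add_edge(i, j, weight=corr[i, j])

# Restrict to the largest connected component for path-length statistics.
giant = G.subgraph(max(nx.connected_components(G), key=len))
print("clustering coefficient:", nx.average_clustering(giant))
print("average path length   :", nx.average_shortest_path_length(giant))
print("communities (modularity):",
      [sorted(c) for c in community.greedy_modularity_communities(giant)])
```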
NASA Technical Reports Server (NTRS)
Tinker, Michael L.
1998-01-01
Application of the free-suspension residual flexibility modal test method to the International Space Station Pathfinder structure is described. The Pathfinder, a large structure of the general size and weight of Space Station module elements, was also tested in a large fixed-base fixture to simulate Shuttle Orbiter payload constraints. After correlation of the Pathfinder finite element model to residual flexibility test data, the model was coupled to a fixture model, and constrained modes and frequencies were compared to fixed-base test modes. The residual flexibility model compared very favorably to results of the fixed-base test. This is the first known direct comparison of free-suspension residual flexibility and fixed-base test results for a large structure. The model correlation approach used by the author for residual flexibility data is presented. Frequency response functions (FRF) for the regions of the structure that interface with the environment (a test fixture or another structure) are shown to be the primary tools for model correlation that distinguish or characterize the residual flexibility approach. A number of critical issues related to use of the structure interface FRF for correlating the model are then identified and discussed, including (1) the requirement of prominent stiffness lines, (2) overcoming problems with measurement noise which makes the antiresonances or minima in the functions difficult to identify, and (3) the use of interface stiffness and lumped mass perturbations to bring the analytical responses into agreement with test data. It is shown that good comparison of analytical-to-experimental FRF is the key to obtaining good agreement of the residual flexibility values.
Rare behavior of growth processes via umbrella sampling of trajectories
NASA Astrophysics Data System (ADS)
Klymko, Katherine; Geissler, Phillip L.; Garrahan, Juan P.; Whitelam, Stephen
2018-03-01
We compute probability distributions of trajectory observables for reversible and irreversible growth processes. These results reveal a correspondence between reversible and irreversible processes, at particular points in parameter space, in terms of their typical and atypical trajectories. Thus key features of growth processes can be insensitive to the precise form of the rate constants used to generate them, recalling the insensitivity to microscopic details of certain equilibrium behavior. We obtained these results using a sampling method, inspired by the "s-ensemble" large-deviation formalism, that amounts to umbrella sampling in trajectory space. The method is a simple variant of existing approaches, and applies to ensembles of trajectories controlled by the total number of events. It can be used to determine large-deviation rate functions for trajectory observables in or out of equilibrium.
Functional Generalized Structured Component Analysis.
Suk, Hye Won; Hwang, Heungsun
2016-12-01
An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared for the analysis of multivariate data. Accordingly, it cannot deal with functional data that often involve different measurement occasions across participants and a large number of measurement occasions that exceed the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that map infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of functional GSCA is illustrated with gait data.
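The core idea of a basis-function expansion with a roughness penalty can be sketched in a few lines; the following is a generic penalized least squares fit on a simple bump basis, not the Functional GSCA estimator itself, and the basis, penalty weight, and data are assumptions.

```python
# Minimal sketch of the basis-expansion idea behind functional methods: fit a
# smooth curve to irregularly sampled observations by penalized least squares.
# This is a generic illustration, not the Functional GSCA estimator.
import numpy as np

rng = np.random.default_rng(1)
t_obs = np.sort(rng.uniform(0.0, 1.0, 60))            # measurement occasions
y_obs = np.sin(2 * np.pi * t_obs) + 0.2 * rng.standard_normal(t_obs.size)

n_basis = 15
centers = np.linspace(0.0, 1.0, n_basis)
width = 0.08

def basis(t):
    """Gaussian bump basis evaluated at times t -> (len(t), n_basis) matrix."""
    return np.exp(-0.5 * ((t[:, None] - centers[None, :]) / width) ** 2)

B = basis(t_obs)
# Second-difference roughness penalty on the basis coefficients.
D = np.diff(np.eye(n_basis), n=2, axis=0)
lam = 1e-2
coef = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y_obs)

t_grid = np.linspace(0.0, 1.0, 200)
y_smooth = basis(t_grid) @ coef                        # reconstructed smooth curve
print(y_smooth[:5])
```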
MAPPING GROWTH AND GRAVITY WITH ROBUST REDSHIFT SPACE DISTORTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwan, Juliana; Lewis, Geraint F.; Linder, Eric V.
2012-04-01
Redshift space distortions (RSDs) caused by galaxy peculiar velocities provide a window onto the growth rate of large-scale structure and a method for testing general relativity. We investigate, through a comparison of N-body simulations to various extensions of perturbation theory beyond the linear regime, the robustness of cosmological parameter extraction, including the gravitational growth index γ. We find that the Kaiser formula and some perturbation theory approaches bias the growth rate by 1σ or more relative to the fiducial at scales as large as k > 0.07 h Mpc^-1. This bias propagates to estimates of the gravitational growth index as well as Ω_m and the equation-of-state parameter and presents a significant challenge to modeling RSDs. We also determine an accurate fitting function for a combination of line-of-sight damping and higher order angular dependence that allows robust modeling of the redshift space power spectrum to substantially higher k.
Security aspects of space operations data
NASA Technical Reports Server (NTRS)
Schmitz, Stefan
1993-01-01
This paper deals with data security. It identifies security threats to the European Space Agency's (ESA) In Orbit Infrastructure Ground Segment (IOI GS) and proposes a method of dealing with its complex data structures from the security point of view. It is part of the 'Analysis of Failure Modes, Effects Hazards and Risks of the IOI GS for Operations, including Backup Facilities and Functions' carried out on behalf of the European Space Operations Center (ESOC). The security part of this analysis has been prepared with the following aspects in mind: ESA's large decentralized ground facilities for operations, the multiple organizations/users involved in the operations and the developments of ground data systems, and the large heterogeneous network structure enabling access to (sensitive) data which does involve crossing organizational boundaries. An IOI GS data objects classification is introduced to determine the extent of the necessary protection mechanisms. The proposal of security countermeasures is oriented towards the European 'Information Technology Security Evaluation Criteria (ITSEC)' whose hierarchically organized requirements can be directly mapped to the security sensitivity classification.
Density-functional theory simulation of large quantum dots
NASA Astrophysics Data System (ADS)
Jiang, Hong; Baranger, Harold U.; Yang, Weitao
2003-10-01
Kohn-Sham spin-density functional theory provides an efficient and accurate model to study electron-electron interaction effects in quantum dots, but its application to large systems is a challenge. Here an efficient method for the simulation of quantum dots using density-functional theory is developed; it includes the particle-in-the-box representation of the Kohn-Sham orbitals, an efficient conjugate-gradient method to directly minimize the total energy, a Fourier convolution approach for the calculation of the Hartree potential, and a simplified multigrid technique to accelerate the convergence. We test the methodology in a two-dimensional model system and show that numerical studies of large quantum dots with several hundred electrons become computationally affordable. In the noninteracting limit, the classical dynamics of the system we study can be continuously varied from integrable to fully chaotic. The qualitative difference in the noninteracting classical dynamics has an effect on the quantum properties of the interacting system: integrable classical dynamics leads to higher-spin states and a broader distribution of spacing between Coulomb blockade peaks.
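The Fourier-convolution step for the Hartree potential can be illustrated with a toy two-dimensional example; the grid, softened Coulomb kernel, and model density below are assumptions for illustration, not the authors' implementation.

```python
# Toy sketch of the Fourier-convolution idea for a Hartree-type potential in 2D:
# V_H(r) = sum over the grid of n(r') / |r - r'|, evaluated as an FFT convolution
# on a zero-padded grid.  The softening parameter avoids the 1/0 singularity;
# all values are illustrative, not the parameters of the paper.
import numpy as np

n_grid, box = 64, 20.0                       # grid points per side, box length (a.u.)
h = box / n_grid
x = (np.arange(n_grid) - n_grid // 2) * h
X, Y = np.meshgrid(x, x, indexing="ij")

density = np.exp(-(X**2 + Y**2) / 4.0)       # stand-in electron density
density /= density.sum() * h * h             # normalize to one electron

# Coulomb kernel on a doubled (zero-padded) grid to avoid wrap-around.
n_pad = 2 * n_grid
xp = (np.arange(n_pad) - n_pad // 2) * h
XP, YP = np.meshgrid(xp, xp, indexing="ij")
soft = 0.5 * h
kernel = 1.0 / np.sqrt(XP**2 + YP**2 + soft**2)

pad = np.zeros((n_pad, n_pad))
pad[:n_grid, :n_grid] = density * h * h      # include the area element

conv = np.fft.ifft2(np.fft.fft2(pad) * np.fft.fft2(np.fft.ifftshift(kernel))).real
v_hartree = conv[:n_grid, :n_grid]
print(v_hartree[n_grid // 2, n_grid // 2])
```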
Remarks on worldsheet theories dual to free large N gauge theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aharony, Ofer; SITP, Department of Physics and SLAC, Stanford University, Stanford, California 94305; David, Justin R.
2007-05-15
We continue to investigate properties of the worldsheet conformal field theories (CFTs) which are conjectured to be dual to free large N gauge theories, using the mapping of Feynman diagrams to the worldsheet suggested in [R. Gopakumar, Phys. Rev. D 70, 025009 (2004); ibid. 70, 025010 (2004); C. R. Physique 5, 1111 (2004); Phys. Rev. D 72, 066008 (2005)]. The modular invariance of these CFTs is shown to be built into the formalism. We show that correlation functions in these CFTs which are localized on subspaces of the moduli space may be interpreted as delta-function distributions, and that this can be consistent with a local worldsheet description given some constraints on the operator product expansion coefficients. We illustrate these features by a detailed analysis of a specific four-point function diagram. To reliably compute this correlator, we use a novel perturbation scheme which involves an expansion in the large dimension of some operators.
Laser Beam Steering/shaping for Free Space Optical Communication
NASA Technical Reports Server (NTRS)
Wang, Xinghua; Wang, Bin; Bos, Philip J.; Anderson, James E.; Pouch, John; Miranda, Felix; McManamon, Paul F.
2004-01-01
The 2-D Optical Phased Array (OPA) antenna based on a Liquid Crystal On Silicon (LCoS) device can be considered for use in free space optical communication as an active beam controlling device. Several examples of the functionality of the device include: beam steering in the horizontal and elevation direction; high resolution wavefront compensation in a large telescope; and beam shaping with the computer generated kinoform. Various issues related to the diffraction efficiency, steering range, steering accuracy as well as the magnitude of wavefront compensation are discussed.
Pre-expanded Intercostal Perforator Super-Thin Skin Flap.
Liao, Yunjun; Luo, Yong; Lu, Feng; Hyakusoku, Hiko; Gao, Jianhua; Jiang, Ping
2017-01-01
This article introduces pre-expanded super-thin intercostal perforator flaps, particularly the flap that has a perforator from the first to second intercostal spaces. The key techniques, advantages and disadvantages, and complications and management of this flap are described. At present, the thinnest possible flap is achieved by thinning the pre-expanded flap that has a perforator from the first to second intercostal spaces. It is used to reconstruct large defects on the face and neck, thus restoring function and cosmetic appearance. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA management of the Space Shuttle Program
NASA Technical Reports Server (NTRS)
Peters, F.
1975-01-01
The management system and management technology described have been developed to meet stringent cost and schedule constraints of the Space Shuttle Program. Management of resources available to this program requires control and motivation of a large number of efficient creative personnel trained in various technical specialties. This must be done while keeping track of numerous parallel, yet interdependent activities involving different functions, organizations, and products all moving together in accordance with intricate plans for budgets, schedules, performance, and interaction. Some techniques developed to identify problems at an early stage and seek immediate solutions are examined.
Preliminary Multi-Variable Cost Model for Space Telescopes
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Hendrichs, Todd
2010-01-01
Parametric cost models are routinely used to plan missions, compare concepts and justify technology investments. This paper reviews the methodology used to develop space telescope cost models; summarizes recently published single variable models; and presents preliminary results for two and three variable cost models. Some of the findings are that increasing mass reduces cost; it costs less per square meter of collecting aperture to build a large telescope than a small telescope; and technology development as a function of time reduces cost at the rate of 50% per 17 years.
ClustENM: ENM-Based Sampling of Essential Conformational Space at Full Atomic Resolution
Kurkcuoglu, Zeynep; Bahar, Ivet; Doruker, Pemra
2016-01-01
Accurate sampling of conformational space and, in particular, the transitions between functional substates has been a challenge in molecular dynamics (MD) simulations of large biomolecular systems. We developed an Elastic Network Model (ENM)-based computational method, ClustENM, for sampling large conformational changes of biomolecules with various sizes and oligomerization states. ClustENM is an iterative method that combines ENM with energy minimization and clustering steps. It is an unbiased technique, which requires only an initial structure as input, and no information about the target conformation. To test the performance of ClustENM, we applied it to six biomolecular systems: adenylate kinase (AK), calmodulin, p38 MAP kinase, HIV-1 reverse transcriptase (RT), triosephosphate isomerase (TIM), and the 70S ribosomal complex. The generated ensembles of conformers determined at atomic resolution show good agreement with experimental data (979 structures resolved by X-ray and/or NMR) and encompass the subspaces covered in independent MD simulations for TIM, p38, and RT. ClustENM emerges as a computationally efficient tool for characterizing the conformational space of large systems at atomic detail, in addition to generating a representative ensemble of conformers that can be advantageously used in simulating substrate/ligand-binding events. PMID:27494296
NASA Astrophysics Data System (ADS)
Khoei, A. R.; Samimi, M.; Azami, A. R.
2007-02-01
In this paper, an application of the reproducing kernel particle method (RKPM) is presented for the plasticity behavior of pressure-sensitive materials. The RKPM technique is implemented in large deformation analysis of the powder compaction process. The RKPM shape function and its derivatives are constructed by imposing the consistency conditions. The essential boundary conditions are enforced by the use of the penalty approach. The support of the RKPM shape function covers the same set of particles during powder compaction, hence no instability is encountered in the large deformation computation. A double-surface plasticity model is developed for the numerical simulation of pressure-sensitive materials. The plasticity model includes a failure surface and an elliptical cap, which closes the open space between the failure surface and the hydrostatic axis. The moving cap expands in the stress space according to a specified hardening rule. The cap model is presented within the framework of large deformation RKPM analysis in order to predict the non-uniform relative density distribution during powder die pressing. Numerical computations are performed to demonstrate the applicability of the algorithm in modeling of powder forming processes, and the results are compared to those obtained from finite element simulation to demonstrate the accuracy of the proposed model.
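A one-dimensional sketch of how shape functions satisfying the consistency (reproducing) conditions are constructed may help; the window function and node layout below are illustrative stand-ins, not the paper's large-deformation implementation.

```python
# 1D sketch of reproducing-kernel / moving-least-squares shape functions built by
# enforcing the consistency conditions sum_I psi_I(x) = 1 and
# sum_I psi_I(x) * x_I = x (linear reproduction).  A Gaussian window stands in
# for the usual spline kernel; this illustrates the construction only.
import numpy as np

particles = np.linspace(0.0, 1.0, 11)       # particle (node) positions
support = 0.25                               # window support radius

def window(s):
    return np.exp(-(s / support) ** 2) * (np.abs(s) <= support)

def shape_functions(x):
    """Return psi_I(x) for all particles I, satisfying linear consistency."""
    s = particles - x
    w = window(s)
    H = np.vstack([np.ones_like(s), s])      # linear basis evaluated at offsets
    M = (H * w) @ H.T                        # 2x2 moment matrix
    c = np.linalg.solve(M, np.array([1.0, 0.0]))   # correction coefficients
    return (c @ H) * w

x = 0.37
psi = shape_functions(x)
print("partition of unity:", psi.sum())             # ~1.0
print("linear reproduction:", psi @ particles)      # ~x
```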
International manned lunar base - Beginning the 21st century in space
NASA Astrophysics Data System (ADS)
Smith, Harlan J.; Gurshtejn, Aleksandr A.; Mendell, Wendell
An evaluation is made of requirements for, and advantages in, the creation of a manned lunar base whose functions emphasize astronomical investigations. These astronomical studies would be able to capitalize on the lunar environment's ultrahigh vacuum, highly stable surface, dark and cold sky, low-G, absence of wind, isolation from terrestrial 'noise', locally usable ceramic raw materials, and large radiotelescope dish-supporting hemispherical craters. Large telescope structures would be nearly free of the gravity and wind loads that complicate their design on earth.
The 18th Aerospace Mechanisms Symposium
NASA Technical Reports Server (NTRS)
1984-01-01
Topics concerning aerospace mechanisms, their functional performance, and design specifications are presented. Discussed subjects include the design and development of release mechanisms, actuators, linear driver/rate controllers, antenna and appendage deployment systems, position control systems, and tracking mechanisms for antennas and solar arrays. Engine design, spaceborne experiments, and large space structure technology are also examined.
New optical and radio frequency angular tropospheric refraction models for deep space applications
NASA Technical Reports Server (NTRS)
Berman, A. L.; Rockwell, S. T.
1976-01-01
The development of angular tropospheric refraction models for optical and radio frequency usage is presented. The models are compact analytic functions, finite over the entire domain of elevation angle, and accurate over large ranges of pressure, temperature, and relative humidity. Additionally, FORTRAN subroutines for each of the models are included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pieper, Andreas; Kreutzer, Moritz; Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de
2016-11-15
We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.
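The filtering idea can be sketched compactly: expand a window function in Chebyshev polynomials with Jackson damping, apply the filter to a block of random vectors via the three-term recurrence, and extract approximate eigenpairs by Rayleigh-Ritz. The toy matrix, window, degree, and block size below are assumptions, not the paper's high-performance implementation.

```python
# Sketch of Chebyshev filter diagonalization for interior eigenvalues of a sparse
# symmetric matrix whose spectrum lies in [-1, 1] (a real problem would first be
# shifted and scaled into this interval).
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(2)
n = 2000
H = sp.diags(np.linspace(-1.0, 1.0, n))          # toy matrix with known spectrum

lo, hi = -0.01, 0.01                              # target window of interior eigenvalues
degree, block = 500, 32                           # block should exceed the window count

# Chebyshev moments of the indicator function of [lo, hi], with Jackson damping.
k = np.arange(degree + 1)
a, b = np.arccos(hi), np.arccos(lo)
mu = np.empty(degree + 1)
mu[0] = (b - a) / np.pi
mu[1:] = 2.0 * (np.sin(k[1:] * b) - np.sin(k[1:] * a)) / (np.pi * k[1:])
g = ((degree - k + 1) * np.cos(np.pi * k / (degree + 1))
     + np.sin(np.pi * k / (degree + 1)) / np.tan(np.pi / (degree + 1)))
mu *= g / (degree + 1)                            # Jackson kernel coefficients

V = rng.standard_normal((n, block))
T_prev, T_curr = V, H @ V                         # T_0(H) V and T_1(H) V
Y = mu[0] * T_prev + mu[1] * T_curr
for m in range(2, degree + 1):                    # three-term Chebyshev recurrence
    T_prev, T_curr = T_curr, 2.0 * (H @ T_curr) - T_prev
    Y += mu[m] * T_curr

Q, _ = np.linalg.qr(Y)                            # orthonormal filtered search space
theta, S = np.linalg.eigh(Q.T @ (H @ Q))          # Rayleigh-Ritz
inside = (theta > lo) & (theta < hi)
print("Ritz values in window:", theta[inside])
```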
Digital robust control law synthesis using constrained optimization
NASA Technical Reports Server (NTRS)
Mukhopadhyay, Vivekananda
1989-01-01
Development of digital robust control laws for active control of high performance flexible aircraft and large space structures is a research area of significant practical importance. The flexible system is typically modeled by a large order state space system of equations in order to accurately represent the dynamics. The active control law must satisfy multiple conflicting design requirements and maintain certain stability margins, yet should be simple enough to be implementable on an onboard digital computer. Described here is an application of a generic digital control law synthesis procedure for such a system, using optimal control theory and constrained optimization technique. A linear quadratic Gaussian type cost function is minimized by updating the free parameters of the digital control law, while trying to satisfy a set of constraints on the design loads, responses and stability margins. Analytical expressions for the gradients of the cost function and the constraints with respect to the control law design variables are used to facilitate rapid numerical convergence. These gradients can be used for sensitivity study and may be integrated into a simultaneous structure and control optimization scheme.
On hierarchical solutions to the BBGKY hierarchy
NASA Technical Reports Server (NTRS)
Hamilton, A. J. S.
1988-01-01
It is thought that the gravitational clustering of galaxies in the universe may approach a scale-invariant, hierarchical form in the small separation, large-clustering regime. Past attempts to solve the Born-Bogoliubov-Green-Kirkwood-Yvon (BBGKY) hierarchy in this regime have assumed a certain separable hierarchical form for the higher order correlation functions of galaxies in phase space. It is shown here that such separable solutions to the BBGKY equations must satisfy the condition that the clustered component of the solution has cluster-cluster correlations equal to galaxy-galaxy correlations to all orders. The solutions also admit the presence of an arbitrary unclustered component, which plays no dynamical role in the large-clustering regime. These results are a particular property of the specific separable model assumed for the correlation functions in phase space, not an intrinsic property of spatially hierarchical solutions to the BBGKY hierarchy. The observed distribution of galaxies does not satisfy the required conditions. The disagreement between theory and observation may be traced, at least in part, to initial conditions which, if Gaussian, already have cluster correlations greater than galaxy correlations.
Simulation study on dynamics model of two kinds of on-orbit soft-contact mechanism
NASA Astrophysics Data System (ADS)
Ye, X.; Dong, Z. H.; Yang, F.
2018-05-01
To address the problem that the space manipulator operates under harsh conditions and cannot withstand large collision momentum, this paper presents a new concept and technical method, namely soft-contact technology. Based on the ADAMS dynamics software, this paper compares and simulates two models of an on-orbit soft-contact mechanism, one based on a bionic model and the other on an integrated double-joint model. The main purpose is to verify the path-planning ability and the momentum-buffering ability of mechanisms based on the different design concepts. The simulation results show that both mechanism models provide a path-planning function before contact with the space target, as well as momentum buffering and controllability during the contact process.
Hybrid Vlasov simulations for alpha particles heating in the solar wind
NASA Astrophysics Data System (ADS)
Perrone, Denise; Valentini, Francesco; Veltri, Pierluigi
2011-06-01
Heating and acceleration of heavy ions in the solar wind and corona represent a long-standing theoretical problem in space physics and are distinct experimental signatures of kinetic processes occurring in collisionless plasmas. To address this problem, we propose the use of a low-noise hybrid-Vlasov code in a four-dimensional phase-space configuration (1D in physical space and 3D in velocity space). We trigger a turbulent cascade by injecting the energy at large wavelengths and analyze the role of kinetic effects along the development of the energy spectra. Following the evolution of both the proton and α distribution functions shows that both ion species significantly depart from the Maxwellian equilibrium, with the appearance of beams of accelerated particles in the direction parallel to the background magnetic field.
Battery-powered thin film deposition process for coating telescope mirrors in space
NASA Astrophysics Data System (ADS)
Sheikh, David A.
2016-07-01
Aluminum films manufactured in the vacuum of space may increase the broadband reflectance response of a space telescope operating in the EUV (50-nm to 115-nm) by eliminating absorbing metal-fluorides and metal-oxides, which significantly reduce aluminum's reflectance below 115-nm. Recent developments in battery technology allow small lithium batteries to rapidly discharge large amounts of energy. It is therefore conceivable to power an array of resistive evaporation filaments in a space environment, using a reasonable mass of batteries and other hardware. This paper presents modeling results for coating thickness as a function of position, for aluminum films made with a hexagonal array of battery powered evaporation sources. The model is based on measured data from a single battery-powered evaporation source.
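The thickness-versus-position modeling can be illustrated with the textbook small-area-source geometry, in which the deposit from a single source at working distance h and lateral offset r scales as h^2/(h^2 + r^2)^2; the hexagonal layout and distances below are invented for illustration and are not the geometry modeled in the paper.

```python
# Sketch of film-thickness buildup from an array of evaporation sources, using
# the textbook small-area-source relation: relative thickness from one source at
# lateral offset r and working distance h goes as h**2 / (h**2 + r**2)**2.
import numpy as np

h = 0.5                                   # source-to-mirror distance (m), assumed
spacing = 0.3                             # source spacing (m), assumed

# Hexagonal arrangement: one central source plus six neighbours.
angles = np.arange(6) * np.pi / 3.0
sources = np.vstack([[0.0, 0.0],
                     np.column_stack([spacing * np.cos(angles),
                                      spacing * np.sin(angles)])])

# Evaluate relative thickness on a line across the substrate.
x = np.linspace(-0.6, 0.6, 121)
points = np.column_stack([x, np.zeros_like(x)])

thickness = np.zeros_like(x)
for sx, sy in sources:
    r2 = (points[:, 0] - sx) ** 2 + (points[:, 1] - sy) ** 2
    thickness += h**2 / (h**2 + r2) ** 2

uniformity = thickness.min() / thickness.max()
print(f"relative thickness at center {thickness[60]:.3f}, "
      f"min/max uniformity {uniformity:.3f}")
```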
The evolvability of programmable hardware.
Raman, Karthik; Wagner, Andreas
2011-02-06
In biological systems, individual phenotypes are typically adopted by multiple genotypes. Examples include protein structure phenotypes, where each structure can be adopted by a myriad individual amino acid sequence genotypes. These genotypes form vast connected 'neutral networks' in genotype space. The size of such neutral networks endows biological systems not only with robustness to genetic change, but also with the ability to evolve a vast number of novel phenotypes that occur near any one neutral network. Whether technological systems can be designed to have similar properties is poorly understood. Here we ask this question for a class of programmable electronic circuits that compute digital logic functions. The functional flexibility of such circuits is important in many applications, including applications of evolutionary principles to circuit design. The functions they compute are at the heart of all digital computation. We explore a vast space of 10^45 logic circuits ('genotypes') and 10^19 logic functions ('phenotypes'). We demonstrate that circuits that compute the same logic function are connected in large neutral networks that span circuit space. Their robustness or fault-tolerance varies very widely. The vicinity of each neutral network contains circuits with a broad range of novel functions. Two circuits computing different functions can usually be converted into one another via few changes in their architecture. These observations show that properties important for the evolvability of biological systems exist in a commercially important class of electronic circuitry. They also point to generic ways to generate fault-tolerant, adaptable and evolvable electronic circuitry.
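The genotype-to-phenotype map described here can be sketched for a tiny feed-forward circuit: a genotype is a list of gate wirings, the phenotype is the truth table the circuit computes, and neutral neighbours are single rewirings that leave the truth table unchanged. The gate type, circuit size, and wiring rules below are simplifications for illustration, not the authors' exact circuit model.

```python
# Toy sketch of the genotype-to-phenotype map for programmable logic circuits:
# a "genotype" is a feed-forward list of NAND gates, each wired to two earlier
# signals; its "phenotype" is the truth table of the last gate.  Two genotypes
# are neutral neighbours if they differ by one wire yet compute the same table.
import itertools
import random

N_INPUTS, N_GATES = 3, 6

def phenotype(genotype):
    """Return the circuit's truth table (output of the last gate) as a tuple."""
    table = []
    for bits in itertools.product([0, 1], repeat=N_INPUTS):
        signals = list(bits)
        for a, b in genotype:            # each gate is a NAND of two earlier signals
            signals.append(1 - (signals[a] & signals[b]))
        table.append(signals[-1])
    return tuple(table)

def random_genotype(rng):
    return [(rng.randrange(N_INPUTS + g), rng.randrange(N_INPUTS + g))
            for g in range(N_GATES)]

def neutral_neighbours(genotype):
    """Count single-wire rewirings that leave the phenotype unchanged."""
    target, count = phenotype(genotype), 0
    for g, (a, b) in enumerate(genotype):
        for slot, new in itertools.product((0, 1), range(N_INPUTS + g)):
            wires = (new, b) if slot == 0 else (a, new)
            if wires == (a, b):
                continue
            mutant = genotype[:g] + [wires] + genotype[g + 1:]
            if phenotype(mutant) == target:
                count += 1
    return count

rng = random.Random(0)
geno = random_genotype(rng)
print("phenotype:", phenotype(geno), "neutral neighbours:", neutral_neighbours(geno))
```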
State-space model with deep learning for functional dynamics estimation in resting-state fMRI.
Suk, Heung-Il; Wee, Chong-Yaw; Lee, Seong-Whan; Shen, Dinggang
2016-04-01
Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, which is one of the emerging issues along with the network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred from observations statistically. By building a generative model with an HMM, we estimate the likelihood of the input features of rs-fMRI as belonging to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. In order to validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared with state-of-the-art methods in the literature. We also analyzed the functional networks learned by DAE, estimated the functional connectivities by decoding hidden states in HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach. Copyright © 2016 Elsevier Inc. All rights reserved.
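The state-space (HMM) classification step can be sketched as follows, assuming the third-party hmmlearn package and synthetic embedded features in place of the Deep Auto-Encoder output; this illustrates the likelihood-based labeling idea only, not the authors' implementation.

```python
# Minimal sketch of the HMM classification step: fit one Gaussian HMM per
# diagnostic group on embedded rs-fMRI feature sequences, then label a test
# subject by whichever model assigns the higher log-likelihood.  The deep
# auto-encoder embedding is omitted; data are synthetic and the third-party
# `hmmlearn` package is assumed to be available.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(3)
n_subj, n_tr, n_feat = 10, 120, 5          # subjects per group, time points, features

def simulate_group(shift):
    """Stack subject feature sequences; return (data, per-subject lengths)."""
    seqs = [rng.standard_normal((n_tr, n_feat)) + shift for _ in range(n_subj)]
    return np.vstack(seqs), [n_tr] * n_subj

X_mci, len_mci = simulate_group(shift=0.5)
X_hc, len_hc = simulate_group(shift=0.0)

hmm_mci = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50).fit(X_mci, len_mci)
hmm_hc = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50).fit(X_hc, len_hc)

test = rng.standard_normal((n_tr, n_feat)) + 0.5   # unseen "MCI-like" subject
label = "MCI" if hmm_mci.score(test) > hmm_hc.score(test) else "HC"
print("predicted label:", label)
```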
The Space Station as a Construction Base for Large Space Structures
NASA Technical Reports Server (NTRS)
Gates, R. M.
1985-01-01
The feasibility of using the Space Station as a construction site for large space structures is examined. An overview is presented of the results of a program entitled Definition of Technology Development Missions (TDM's) for Early Space Stations - Large Space Structures. The definition of LSS technology development missions must be responsive to the needs of future space missions which require large space structures. Long range plans for space were assembled by reviewing Space System Technology Models (SSTM) and other published sources. Those missions which will use large space structures were reviewed to determine the objectives which must be demonstrated by technology development missions. The three TDM's defined during this study are: (1) a construction storage/hangar facility; (2) a passive microwave radiometer; and (3) a precision optical system.
Construction of Prototype Lightweight Mirrors
NASA Technical Reports Server (NTRS)
Robinson, William G.
1997-01-01
This contract and the work described were in support of a Seven Segment Demonstrator (SSD) and demonstration of a different technology for construction of lightweight mirrors. The objectives of the SSD were to demonstrate the functionality and performance of a seven-segment prototype array of hexagonal mirrors and supporting electromechanical components which address design issues critical to space optics deployed in large space-based telescopes for astronomy and to optics used in space-based optical communication systems. The SSD was intended to demonstrate technologies which can support the following capabilities: transportation in dense packaging within existing launcher payload envelopes, followed by deployment on orbit to form a space telescope with a large aperture; provision of very large (less than 10 meters) primary reflectors of low mass and cost; the capability to form a segmented primary or quaternary mirror into a quasi-continuous surface with individual subapertures phased so that near diffraction-limited imaging in the visible wavelength region is achieved; continuous compensation of the optical wavefront for perturbations caused by imperfections, natural disturbances, and equipment-induced vibrations/deflections to provide near diffraction-limited imaging performance in the visible wavelength region; and the feasibility of fabricating such systems with reduced mass and cost compared to past approaches. While the SSD could not be expected to satisfy all of the above capabilities, the intent was to start identifying and understanding new technologies that might be applicable to these goals.
NASA Astrophysics Data System (ADS)
Kim, M. Y.; Tylka, A. J.; Dietrich, W. F.; Cucinotta, F. A.
2012-12-01
The occasional occurrence of solar particle events (SPEs) with large amounts of energy is not predictable, while the expected frequency is strongly influenced by solar cycle activity. The potential for exposure to large SPEs with high energy levels is the major concern during extra-vehicular activities (EVAs) on the Moon, near-Earth objects, and the Mars surface for future long duration space missions. We estimated the propensity for SPE occurrence with large proton fluence as a function of time within a typical future solar cycle from a non-homogeneous Poisson model using the historical database for measurements of protons with energy > 30 MeV, Φ30. The database includes a comprehensive collection of historical data sets for the past 5 solar cycles. Using all the recorded proton fluence of SPEs, total fluence distributions of Φ30, Φ60, and Φ100 were simulated ranging from their 5th to 95th percentiles for each mission duration. In addition to the total particle intensity of SPEs, the detailed energy spectra of protons, especially at high energy levels, were recognized as extremely important for assessing the radiation cancer risk associated with energetic particles for large events. For radiation exposure assessments of major SPEs, we used the spectral functional form of a double power law in rigidity (the so-called Band function), which has provided a satisfactory representation of the combined satellite and neutron monitor data from ~10 MeV to ~10 GeV. The dependencies of exposure risk were evaluated as a function of proton fluence at a given energy threshold of 30, 60, and 100 MeV, and overall risk prediction improved as the energy threshold increased from 30 to 60 to 100 MeV. The results can be applied to the development of approaches for improved radiation protection of astronauts, as well as the optimization of mission planning and shielding for future space missions.
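For reference, the double power law in rigidity referred to here is the Band functional form; in a commonly quoted generic notation (the symbols below are placeholders, not the authors' fitted parameters) it reads

$$
F(>R) \;=\;
\begin{cases}
J_0\,R^{-\gamma_a}\,e^{-R/R_0}, & R \le (\gamma_b-\gamma_a)\,R_0,\\[4pt]
J_0\,R^{-\gamma_b}\,\left[(\gamma_b-\gamma_a)\,R_0\right]^{\gamma_b-\gamma_a}\,e^{-(\gamma_b-\gamma_a)}, & R > (\gamma_b-\gamma_a)\,R_0,
\end{cases}
$$

where F(>R) is the event-integrated proton fluence above rigidity R, γ_a and γ_b are the low- and high-rigidity power-law indices, and R_0 sets the exponential rollover; the two branches join smoothly at the break R = (γ_b - γ_a) R_0.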
QCD evolution of the Sivers function
NASA Astrophysics Data System (ADS)
Aybat, S. M.; Collins, J. C.; Qiu, J. W.; Rogers, T. C.
2012-02-01
We extend the Collins-Soper-Sterman (CSS) formalism to apply it to the spin dependence governed by the Sivers function. We use it to give a correct numerical QCD evolution of existing fixed-scale fits of the Sivers function. With the aid of approximations useful for the nonperturbative region, we present the results as parametrizations of a Gaussian form in transverse-momentum space, rather than in the Fourier conjugate transverse coordinate space normally used in the CSS formalism. They are specifically valid at small transverse momentum. Since evolution has been applied, our results can be used to make predictions for Drell-Yan and semi-inclusive deep inelastic scattering at energies different from those where the original fits were made. Our evolved functions are of a form that they can be used in the same parton-model factorization formulas as used in the original fits, but now with a predicted scale dependence in the fit parameters. We also present a method by which our evolved functions can be corrected to allow for twist-3 contributions at large parton transverse momentum.
Zeil, Catharina; Widmann, Michael; Fademrecht, Silvia; Vogel, Constantin; Pleiss, Jürgen
2016-05-01
The Lactamase Engineering Database (www.LacED.uni-stuttgart.de) was developed to facilitate the classification and analysis of TEM β-lactamases. The current version contains 474 TEM variants. Two hundred fifty-nine variants form a large scale-free network of highly connected point mutants. The network was divided into three subnetworks which were enriched by single phenotypes: one network with predominantly 2be and two networks with 2br phenotypes. Fifteen positions were found to be highly variable, contributing to the majority of the observed variants. Since it is expected that a considerable fraction of the theoretical sequence space is functional, the currently sequenced 474 variants represent only the tip of the iceberg of functional TEM β-lactamase variants which form a huge natural reservoir of highly interconnected variants. Almost 50% of the variants are part of a quartet. Thus, two single mutations that result in functional enzymes can be combined into a functional protein. Most of these quartets consist of the same phenotype, or the mutations are additive with respect to the phenotype. By predicting quartets from triplets, 3,916 unknown variants were constructed. Eighty-seven variants complement multiple quartets and therefore have a high probability of being functional. The construction of a TEM β-lactamase network and subsequent analyses by clustering and quartet prediction are valuable tools to gain new insights into the viable sequence space of TEM β-lactamases and to predict their phenotype. The highly connected sequence space of TEM β-lactamases is ideally suited to network analysis and demonstrates the strengths of network analysis over tree reconstruction methods. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
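The quartet-completion idea can be sketched directly: represent each variant by its set of point mutations and, whenever a variant and two of its single-mutation descendants are all functional, predict the combined double mutant. The variant list below is invented for illustration and is not taken from the database.

```python
# Sketch of quartet prediction: represent each TEM variant by its set of point
# mutations; whenever a variant A and the two single-step variants A+{m1} and
# A+{m2} are all known to be functional, predict the combined variant A+{m1, m2}.
from itertools import combinations

known_variants = {
    frozenset(),                      # stand-in for the reference enzyme
    frozenset({"E104K"}),
    frozenset({"G238S"}),
    frozenset({"M182T"}),
    frozenset({"E104K", "G238S"}),
}

def predict_quartet_completions(variants):
    """Return variants predicted by completing parent + two single-mutant triplets."""
    predicted = set()
    for parent in variants:
        # Single-mutation descendants of this parent that are already known.
        children = [v for v in variants if parent <= v and len(v - parent) == 1]
        for v1, v2 in combinations(children, 2):
            candidate = v1 | v2
            if candidate not in variants:
                predicted.add(candidate)
    return predicted

for variant in sorted(predict_quartet_completions(known_variants), key=sorted):
    print("predicted functional variant:", "+".join(sorted(variant)))
```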
Electron Beam Brazing of Titanium for Construction in Space
NASA Technical Reports Server (NTRS)
Flom, Yury
2006-01-01
An extended presence of humans in space requires an in-situ capability to construct various permanent structures to support scientific research, power generation, communication, radiation shielding and other functions. Electron Beam (EB) vacuum brazing has been identified as one of the best joining processes for in-space joining, particularly for making the large quantity of permanent joints required for construction of sizeable truss structures. Thin-wall titanium tubes are perhaps the best choice because of their high stiffness, excellent strength-to-weight ratio and great metal-forming and joining ability. An innovative EB vacuum spot brazing process is being developed at Goddard Space Flight Center to be used for robotic as well as human-assisted construction in space. This paper describes experimental results obtained during the initial effort of EB brazing of titanium tubes with special emphasis on low temperature aluminum filler metals.
Human capabilities in space. [man machine interaction
NASA Technical Reports Server (NTRS)
Nicogossian, A. E.
1984-01-01
Man's ability to live and perform useful work in space was demonstrated throughout the history of manned space flight. Current planning envisions a multi-functional space station. Man's unique abilities to respond to the unforeseen and to operate at a level of complexity exceeding any reasonable amount of previous planning distinguish him from present day machines. His limitations, however, include his inherent inability to survive without protection, his limited strength, and his propensity to make mistakes when performing repetitive and monotonous tasks. By contrast, an automated system does routine and delicate tasks, exerts force smoothly and precisely, stores and recalls large amounts of data, and performs deductive reasoning while maintaining a relative insensitivity to the environment. The establishment of a permanent presence of man in space demands that man and machines be appropriately combined in spaceborne systems. To achieve this optimal combination, research is needed in such diverse fields as artificial intelligence, robotics, behavioral psychology, economics, and human factors engineering.
Operational development of small plant growth systems
NASA Technical Reports Server (NTRS)
Scheld, H. W.; Magnuson, J. W.; Sauer, R. L.
1986-01-01
The results of a study undertaken on the first phase of an empirical effort in the development of small plant growth chambers for production of salad-type vegetables on the space shuttle or space station are discussed. The overall effort is visualized as providing the underpinning of practical experience in the handling of plant systems in space which will provide major support for future efforts in planning, design, and construction of plant-based (phytomechanical) systems for support of human habitation in space. The assumptions underlying the effort hold that large-scale phytomechanical habitability support systems for future space stations must evolve from the simple to the complex. The highly complex final systems will be developed from the accumulated experience and data gathered from repetitive tests and trials of fragments or subsystems of the whole in an operational mode. These developing system components will, meanwhile, serve a useful operational function in providing psychological support and diversion for the crews.
Interactive information processing for NASA's mesoscale analysis and space sensor program
NASA Technical Reports Server (NTRS)
Parker, K. G.; Maclean, L.; Reavis, N.; Wilson, G.; Hickey, J. S.; Dickerson, M.; Karitani, S.; Keller, D.
1985-01-01
The Atmospheric Sciences Division (ASD) of the Systems Dynamics Laboratory at NASA's Marshall Space Flight Center (MSFC) is currently involved in interactive information processing for the Mesoscale Analysis and Space Sensor (MASS) program. Specifically, the ASD is engaged in the development and implementation of new space-borne remote sensing technology to observe and measure mesoscale atmospheric processes. These space measurements and conventional observational data are being processed together to gain an improved understanding of the mesoscale structure and the dynamical evolution of the atmosphere relative to cloud development and precipitation processes. To satisfy its vast data processing requirements, the ASD has developed a Researcher Computer System consisting of three primary computer systems which provides over 20 scientists with a wide range of capabilities for processing and displaying large volumes of remote sensing data. Each of the computers performs a specific function according to its unique capabilities.
Redshift-space distortions with the halo occupation distribution - II. Analytic model
NASA Astrophysics Data System (ADS)
Tinker, Jeremy L.
2007-01-01
We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. We demonstrate the ability of the model to separately constrain Ωm,σ8 and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.
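For orientation, halo occupation functions of the commonly used form (a smoothed step for central galaxies plus a power law for satellites) can be written down in a few lines; the parametrization and parameter values below are generic illustrations, not necessarily the occupation function constrained in this paper.

```python
# Sketch of a halo occupation distribution (HOD) of the commonly used form: a
# smoothed step for centrals plus a power law for satellites.  Parameter values
# are invented for illustration.
import numpy as np
from scipy.special import erf

def n_central(log_m, log_m_min=12.0, sigma_logm=0.2):
    """Mean number of central galaxies in a halo of mass 10**log_m (Msun/h)."""
    return 0.5 * (1.0 + erf((log_m - log_m_min) / sigma_logm))

def n_satellite(log_m, log_m0=12.0, log_m1=13.3, alpha=1.0):
    """Mean number of satellites; zero below the cutoff mass M0."""
    m, m0, m1 = 10.0**log_m, 10.0**log_m0, 10.0**log_m1
    excess = np.clip(m - m0, 0.0, None)
    return (excess / m1) ** alpha * n_central(log_m)

log_m = np.linspace(11.0, 15.0, 5)
print(np.column_stack([log_m, n_central(log_m), n_satellite(log_m)]))
```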
Large Space Systems Technology, 1979. [antenna and space platform systems conference
NASA Technical Reports Server (NTRS)
Ward, J. C., Jr. (Compiler)
1980-01-01
Items of technology and developmental efforts in support of the large space systems technology programs are described. The major areas of interest are large antenna systems, large space platform systems, and activities that support both antennas and platform systems.
Critical issues related to registration of space objects and transparency of space activities
NASA Astrophysics Data System (ADS)
Jakhu, Ram S.; Jasani, Bhupendra; McDowell, Jonathan C.
2018-02-01
The main purpose of the 1975 Registration Convention is to achieve transparency in space activities, and this objective is motivated by the belief that a mandatory registration system would assist in the identification of space objects launched into outer space. This would also consequently contribute to the application and development of international law governing the exploration and use of outer space. States Parties to the Convention furnish the required information to the United Nations' Register of Space Objects. However, the furnished information is often so general that it may not be as helpful in creating transparency as had been hoped by the drafters of the Convention. While registrations of civil satellites have been furnished with some general details, to date none of the Parties has described the objects as having military functions, despite the fact that a large number of such objects do perform military functions as well. In some cases, the best they have done is to indicate that the space objects are for their defense establishments. Moreover, the number of registrations of space objects is declining. This paper addresses the challenges posed by the non-registration of space objects. In particular, the paper provides some data about the registration and non-registration of satellites and the States that have and have not complied with their legal obligations. It also analyses the specific requirements of the Convention, the reasons for non-registration, new challenges posed by the registration of small satellites and the on-orbit transfer of satellites. Finally, the paper provides some recommendations on how to enhance the registration of space objects, on the monitoring of the implementation of the Registration Convention, and consequently on how to achieve maximum transparency in space activities.
Large scale structure in universes dominated by cold dark matter
NASA Technical Reports Server (NTRS)
Bond, J. Richard
1986-01-01
The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.
Statistical mechanics and thermodynamic limit of self-gravitating fermions in D dimensions.
Chavanis, Pierre-Henri
2004-06-01
We discuss the statistical mechanics of a system of self-gravitating fermions in a space of dimension D. We plot the caloric curves of the self-gravitating Fermi gas giving the temperature as a function of energy and investigate the nature of phase transitions as a function of the dimension of space. We consider stable states (global entropy maxima) as well as metastable states (local entropy maxima). We show that for D ≥ 4, there exists a critical temperature (for sufficiently large systems) and a critical energy below which the system cannot be found in statistical equilibrium. Therefore, for D ≥ 4, quantum mechanics cannot stabilize matter against gravitational collapse. This is similar to a result found by Ehrenfest (1917) at the atomic level for Coulomb forces. This makes the dimension D=3 of our Universe very particular with possible implications regarding the anthropic principle. Our study joins a long tradition of scientific and philosophical papers that examined how the dimension of space affects the laws of physics.
The Gaussian streaming model and convolution Lagrangian effective field theory
Vlah, Zvonimir; Castorina, Emanuele; White, Martin
2016-12-05
We update the ingredients of the Gaussian streaming model (GSM) for the redshift-space clustering of biased tracers using the techniques of Lagrangian perturbation theory, effective field theory (EFT) and a generalized Lagrangian bias expansion. After relating the GSM to the cumulant expansion, we present new results for the real-space correlation function, mean pairwise velocity and pairwise velocity dispersion including counter terms from EFT and bias terms through third order in the linear density, its leading derivatives and its shear up to second order. We discuss the connection to the Gaussian peaks formalism. We compare the ingredients of the GSM to a suite of large N-body simulations, and show the performance of the theory on the low order multipoles of the redshift-space correlation function and power spectrum. We highlight the importance of a general biasing scheme, which we find to be as important as higher-order corrections due to non-linear evolution for the halos we consider on the scales of interest to us.
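For reference, the Gaussian streaming model referred to here maps the real-space correlation function ξ(r), the mean pairwise velocity v_12(r), and the pairwise dispersion σ_12(r, μ) into the redshift-space correlation function through a line-of-sight Gaussian convolution; in a commonly quoted form (notation generic, not necessarily the authors'):

$$
1+\xi_s(s_\perp, s_\parallel) \;=\; \int_{-\infty}^{\infty} \frac{dy}{\sqrt{2\pi\,\sigma_{12}^2(r,\mu)}}\,
\left[1+\xi(r)\right]\,
\exp\!\left\{-\frac{\left[s_\parallel - y - \mu\,v_{12}(r)\right]^2}{2\,\sigma_{12}^2(r,\mu)}\right\},
$$

with r^2 = s_⊥^2 + y^2 and μ = y/r, where y is the real-space separation along the line of sight.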
Recent experience in simultaneous control-structure optimization
NASA Technical Reports Server (NTRS)
Salama, M.; Ramaker, R.; Milman, M.
1989-01-01
To show the feasibility of simultaneous optimization as a design procedure, low-order problems were used in conjunction with simple control formulations. The numerical results indicate that simultaneous optimization is not only feasible, but also advantageous. Such advantages come at the expense of introducing complexities beyond those encountered in structure optimization alone or control optimization alone. Examples include a larger design parameter space, optimization over a mixture of continuous and combinatoric variables, and a combined objective function that may be nonconvex. Future extensions to include large-order problems, more complex objective functions and constraints, and more sophisticated control formulations will require further research to ensure that the additional complexities do not outweigh the advantages of simultaneous optimization. Some areas requiring more efficient tools than currently available include multiobjective criteria and nonconvex optimization. Efficient techniques also need to be developed to deal with optimization over combinatoric and continuous variables, and with truncation issues for structure and control parameters of both the model space and the design space.
Development of Laser-Polarized Noble Gas Magnetic Resonance Imaging (MRI) Technology
NASA Technical Reports Server (NTRS)
Walsworth, Ronald L.
2004-01-01
We are developing technology for laser-polarized noble gas nuclear magnetic resonance (NMR), with the aim of enabling it as a novel biomedical imaging tool for ground-based and eventually space-based application. This emerging multidisciplinary technology enables high-resolution gas-space magnetic resonance imaging (MRI), e.g., of lung ventilation, perfusion, and gas exchange. In addition, laser-polarized noble gases (3He and 129Xe) do not require a large magnetic field for sensitive NMR detection, opening the door to practical MRI with novel, open-access magnet designs at very low magnetic fields (and hence in confined spaces). We are pursuing two specific aims in this technology development program. The first aim is to develop an open-access, low-field (less than 0.01 T) instrument for MRI studies of human gas inhalation as a function of subject orientation, and the second aim is to develop functional imaging of the lung using laser-polarized He-3 and Xe-129.
NASA Technical Reports Server (NTRS)
Griner, D. B.
1979-01-01
The paper considers the bidirectional reflectance distribution function (BRDF) of black coatings used on stray light suppression systems for the Space Telescope (ST). The ST stray light suppression requirement is to reduce earth, moon, and sun light in the focal plane to a level equivalent to one 23 Mv star per square arcsecond, an attenuation of 14 orders of magnitude. It is impractical to verify the performance of a proposed baffle system design by full scale tests because of the large size of the ST, so that a computer analysis is used to select the design. Accurate computer analysis requires a knowledge of the diffuse scatter at all angles from the surface of the coatings, for all angles of incident light. During the early phases of the ST program a BRDF scanner was built at the Marshall Space Flight Center to study the scatter from black materials; the measurement system is described and the results of measurements on samples proposed for use on the ST are presented.
The pinwheel pupil discovery: exoplanet science & improved processing with segmented telescopes
NASA Astrophysics Data System (ADS)
Breckinridge, James Bernard
2018-01-01
In this paper, we show that by using a “pinwheel” architecture for the segmented primary mirror and curved supports for the secondary mirror, we can achieve the near-uniform diffraction background in ground- and space-based large telescope systems needed for high-SNR exoplanet science. Also, the point spread function will be nearly rotationally symmetric, enabling improved digital image reconstruction. Large (>4-m) aperture space telescopes are needed to characterize terrestrial exoplanets by direct imaging coronagraphy. Launch vehicle volume constraints require that these apertures be segmented and deployed in space to form a large mirror aperture that is masked by the gaps between the hexagonal segments and the shadows of the secondary support system. These gaps and shadows over the pupil result in an image plane point spread function that has bright spikes, which may mask or obscure exoplanets. These telescope artifacts mask faint exoplanets, making it necessary for the spacecraft to roll about the boresight and integrate again to make sure no planets are missed. This increases integration time and requires expensive spacecraft resources to perform the boresight roll. Currently the LUVOIR and HabEx studies have several significant efforts to develop special-purpose A/O technology and to place complex absorbing apodizers over their hex pupils to shape the unwanted diffracted light. These strong apodizers absorb light, decreasing system transmittance and reducing SNR. Implementing curved pupil obscurations will eliminate the need for the highly absorbing apodizers and thus result in higher SNR. Quantitative analyses of diffraction patterns that use the pinwheel architecture are compared to straight hex-segment edges with a straight-line secondary shadow mask to show a gain of over a factor of 100 in reduced background. For the first time, astronomers are able to control and minimize image plane diffraction background “noise”. This technology will enable 10-m segmented apertures to perform nearly the same as a 10-m monolithic filled aperture. The pinwheel pupil will enable a significant gain in exoplanet SNR.
Whole metagenome profiles of particulates collected from the International Space Station
Be, Nicholas A.; Avila-Herrera, Aram; Allen, Jonathan E.; ...
2017-07-17
Background The built environment of the International Space Station (ISS) is a highly specialized space in terms of both physical characteristics and habitation requirements. It is unique with respect to conditions of microgravity, exposure to space radiation, and increased carbon dioxide concentrations. Additionally, astronauts inhabit a large proportion of this environment. The microbial composition of ISS particulates has been reported; however, its functional genomics, which are pertinent due to potential impact of its constituents on human health and operational mission success, are not yet characterized. Methods This study examined the whole metagenome of ISS microbes at both species- and gene-level resolution. Air filter and dust samples from the ISS were analyzed and compared to samples collected in a terrestrial cleanroom environment. Furthermore, metagenome mining was carried out to characterize dominant, virulent, and novel microorganisms. The whole genome sequences of select cultivable strains isolated from these samples were extracted from the metagenome and compared. Results Species-level composition in the ISS was found to be largely dominated by Corynebacterium ihumii GD7, with overall microbial diversity being lower in the ISS relative to the cleanroom samples. When examining detection of microbial genes relevant to human health such as antimicrobial resistance and virulence genes, it was found that a larger number of relevant gene categories were observed in the ISS relative to the cleanroom. Strain-level cross-sample comparisons were made for Corynebacterium, Bacillus, and Aspergillus showing possible distinctions in the dominant strain between samples. Conclusion Species-level analyses demonstrated distinct differences between the ISS and cleanroom samples, indicating that the cleanroom population is not necessarily reflective of space habitation environments. Lastly, the overall population of viable microorganisms and the functional diversity inherent to this unique closed environment are of critical interest with respect to future space habitation. Observations and studies such as these will be important to evaluating the conditions required for long-term health of human occupants in such environments.
Whole metagenome profiles of particulates collected from the International Space Station.
Be, Nicholas A; Avila-Herrera, Aram; Allen, Jonathan E; Singh, Nitin; Checinska Sielaff, Aleksandra; Jaing, Crystal; Venkateswaran, Kasthuri
2017-07-17
The built environment of the International Space Station (ISS) is a highly specialized space in terms of both physical characteristics and habitation requirements. It is unique with respect to conditions of microgravity, exposure to space radiation, and increased carbon dioxide concentrations. Additionally, astronauts inhabit a large proportion of this environment. The microbial composition of ISS particulates has been reported; however, its functional genomics, which are pertinent due to potential impact of its constituents on human health and operational mission success, are not yet characterized. This study examined the whole metagenome of ISS microbes at both species- and gene-level resolution. Air filter and dust samples from the ISS were analyzed and compared to samples collected in a terrestrial cleanroom environment. Furthermore, metagenome mining was carried out to characterize dominant, virulent, and novel microorganisms. The whole genome sequences of select cultivable strains isolated from these samples were extracted from the metagenome and compared. Species-level composition in the ISS was found to be largely dominated by Corynebacterium ihumii GD7, with overall microbial diversity being lower in the ISS relative to the cleanroom samples. When examining detection of microbial genes relevant to human health such as antimicrobial resistance and virulence genes, it was found that a larger number of relevant gene categories were observed in the ISS relative to the cleanroom. Strain-level cross-sample comparisons were made for Corynebacterium, Bacillus, and Aspergillus showing possible distinctions in the dominant strain between samples. Species-level analyses demonstrated distinct differences between the ISS and cleanroom samples, indicating that the cleanroom population is not necessarily reflective of space habitation environments. The overall population of viable microorganisms and the functional diversity inherent to this unique closed environment are of critical interest with respect to future space habitation. Observations and studies such as these will be important to evaluating the conditions required for long-term health of human occupants in such environments.
The benefits of adaptive parametrization in multi-objective Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ghisu, Tiziano; Parks, Geoffrey T.; Jaeggi, Daniel M.; Jarrett, Jerome P.; Clarkson, P. John
2010-10-01
In real-world optimization problems, large design spaces and conflicting objectives are often combined with a large number of constraints, resulting in a highly multi-modal, challenging, fragmented landscape. The local search at the heart of Tabu Search, while being one of its strengths in highly constrained optimization problems, requires a large number of evaluations per optimization step. In this work, a modification of the pattern search algorithm is proposed: this modification, based on a Principal Components' Analysis of the approximation set, allows both a re-alignment of the search directions, thereby creating a more effective parametrization, and also an informed reduction of the size of the design space itself. These changes make the optimization process more computationally efficient and more effective - higher quality solutions are identified in fewer iterations. These advantages are demonstrated on a number of standard analytical test functions (from the ZDT and DTLZ families) and on a real-world problem (the optimization of an axial compressor preliminary design).
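The PCA-based re-alignment described above can be illustrated with a short sketch. This is not the authors' implementation; it is a minimal, hypothetical example assuming the archive (approximation set) of design vectors is available as a NumPy array, showing how a singular value decomposition of the centred set yields new search directions and a reduced effective dimensionality.

```python
import numpy as np

def realign_search_directions(approx_set, var_threshold=0.99):
    """Re-align search directions via a PCA of the current approximation set.

    approx_set : (n_points, n_vars) array of design vectors from the archive.
    Returns the PCA basis (columns are new search directions) and the number
    of directions capturing `var_threshold` of the variance, which can be
    used to shrink the effective design space.
    """
    X = approx_set - approx_set.mean(axis=0)          # centre the designs
    _, s, vt = np.linalg.svd(X, full_matrices=False)  # principal directions
    explained = (s ** 2) / np.sum(s ** 2)
    n_keep = int(np.searchsorted(np.cumsum(explained), var_threshold) + 1)
    return vt.T, n_keep                               # basis and reduced dimension

# Usage: pattern-search moves are then taken along the leading columns of the
# returned basis instead of the original coordinate axes.
rng = np.random.default_rng(0)
designs = rng.normal(size=(50, 10)) @ rng.normal(size=(10, 10))  # toy archive
basis, n_keep = realign_search_directions(designs)
print(basis.shape, n_keep)
```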
Definition of technology development missions for early space stations: Large space structures
NASA Technical Reports Server (NTRS)
1983-01-01
The testbed role of an early (1990-95) manned space station in large space structures technology development is defined and conceptual designs for large space structures development missions to be conducted at the space station are developed. Emphasis is placed on defining requirements and benefits of development testing on a space station in concert with ground and shuttle tests.
LVQ and backpropagation neural networks applied to NASA SSME data
NASA Technical Reports Server (NTRS)
Doniere, Timothy F.; Dhawan, Atam P.
1993-01-01
Feedforward neural networks with backpropagation learning have been used as function approximators for modeling the space shuttle main engine (SSME) sensor signals. The modeling of these sensor signals is aimed at the development of a sensor fault detection system that can be used during ground test firings. The generalization capability of a neural network based function approximator depends on the training vectors which in this application may be derived from a number of SSME ground test-firings. This yields a large number of training vectors. Large training sets can cause the time required to train the network to be very large. Also, the network may not be able to generalize for large training sets. To reduce the size of the training sets, the SSME test-firing data is reduced using the learning vector quantization (LVQ) based technique. Different compression ratios were used to obtain compressed data in training the neural network model. The performance of the neural model trained using reduced sets of training patterns is presented and compared with the performance of the model trained using complete data. The LVQ can also be used as a function approximator. The performance of the LVQ as a function approximator using reduced training sets is presented and compared with the performance of the backpropagation network.
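As a rough illustration of the compression step described above (not the original SSME code), the following is a minimal LVQ-1 sketch in NumPy: class-labelled prototypes are initialized from the data, then iteratively pulled toward same-class samples and pushed away from other-class samples; the small prototype set can then stand in for the full training set. All parameter values are hypothetical.

```python
import numpy as np

def lvq1_compress(X, y, prototypes_per_class=5, lr=0.05, epochs=20, seed=0):
    """Minimal LVQ-1 sketch: compress (X, y) into a small set of labelled
    prototypes that can be used as a reduced training set."""
    X = np.asarray(X, float)
    rng = np.random.default_rng(seed)
    protos, labels = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), prototypes_per_class, replace=False)
        protos.append(X[idx]); labels.append(np.full(prototypes_per_class, c))
    P, L = np.vstack(protos).astype(float), np.concatenate(labels)

    for epoch in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(P - X[i], axis=1)
            j = int(np.argmin(d))                 # winning prototype
            step = lr * (1 - epoch / epochs)      # decaying learning rate
            if L[j] == y[i]:
                P[j] += step * (X[i] - P[j])      # pull toward same-class sample
            else:
                P[j] -= step * (X[i] - P[j])      # push away from other class
    return P, L
```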
NASA Astrophysics Data System (ADS)
Sridhar, Srivatsan; Maurogordato, Sophie; Benoist, Christophe; Cappi, Alberto; Marulli, Federico
2017-04-01
Context. The next generation of galaxy surveys will provide cluster catalogues probing an unprecedented range of scales, redshifts, and masses with large statistics. Their analysis should therefore enable us to probe the spatial distribution of clusters with high accuracy and derive tighter constraints on the cosmological parameters and the dark energy equation of state. However, for the majority of these surveys, redshifts of individual galaxies will be mostly estimated by multiband photometry, which implies non-negligible errors in redshift resulting in potential difficulties in recovering the real-space clustering. Aims: We investigate to which accuracy it is possible to recover the real-space two-point correlation function of galaxy clusters from cluster catalogues based on photometric redshifts, and test our ability to detect and measure the redshift and mass evolution of the correlation length r0 and of the bias parameter b(M,z) as a function of the uncertainty on the cluster redshift estimate. Methods: We calculate the correlation function for cluster sub-samples covering various mass and redshift bins selected from a 500 deg² light-cone limited to H < 24. In order to simulate the distribution of clusters in photometric redshift space, we assign to each cluster a redshift randomly extracted from a Gaussian distribution having a mean equal to the cluster cosmological redshift and a dispersion equal to σz. The dispersion is varied in the range σ_{z=0} = σz/(1 + z_c) = 0.005, 0.010, 0.030, and 0.050, in order to cover the typical values expected in forthcoming surveys. The correlation function in real space is then computed through estimation and deprojection of wp(rp). Four mass ranges (from Mhalo > 2 × 10^13 h^-1 M⊙ to Mhalo > 2 × 10^14 h^-1 M⊙) and six redshift slices covering the redshift range [0, 2] are investigated, first using cosmological redshifts and then for the four photometric redshift configurations. Results: From the analysis of the light-cone in cosmological redshifts we find a clear increase of the correlation amplitude as a function of redshift and mass. The evolution of the derived bias parameter b(M,z) is in fair agreement with theoretical expectations. We calculate the r0-d relation up to our highest mass, highest redshift sample tested (z = 2, Mhalo > 2 × 10^14 h^-1 M⊙). From our pilot sample limited to Mhalo > 5 × 10^13 h^-1 M⊙ (0.4 < z < 0.7), we find that the real-space correlation function can be recovered by deprojection of wp(rp) within an accuracy of 5% for σz = 0.001 × (1 + zc) and within 10% for σz = 0.03 × (1 + zc). For higher dispersions (σz > 0.05 × (1 + zc)), the recovery becomes noisy and difficult. The evolution of the correlation in redshift and mass is clearly detected for all σz tested, but requires a large binning in redshift to be detected significantly between individual redshift slices when increasing σz. The best-fit parameters (r0 and γ) as well as the bias obtained from the deprojection method for all σz are within the 1σ uncertainty of the zc sample.
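A minimal sketch of the photometric-redshift scattering step described above (function names and values are illustrative): each cluster's cosmological redshift is perturbed by a Gaussian whose dispersion scales as σ0(1 + z).

```python
import numpy as np

def add_photoz_scatter(z_cosmo, sigma0, seed=0):
    """Assign each cluster a photometric redshift drawn from a Gaussian
    centred on its cosmological redshift with dispersion sigma0 * (1 + z)."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=z_cosmo, scale=sigma0 * (1.0 + z_cosmo))

# e.g. sigma0 = 0.005, 0.010, 0.030 or 0.050, as in the configurations above
z_true = np.random.default_rng(1).uniform(0.0, 2.0, size=100_000)
z_phot = add_photoz_scatter(z_true, sigma0=0.030)
```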
Decoupling local mechanics from large-scale structure in modular metamaterials.
Yang, Nan; Silverberg, Jesse L
2017-04-04
A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such "inverse design" is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module's design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.
Decoupling local mechanics from large-scale structure in modular metamaterials
NASA Astrophysics Data System (ADS)
Yang, Nan; Silverberg, Jesse L.
2017-04-01
A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such “inverse design” is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module’s design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.
Point processes in arbitrary dimension from fermionic gases, random matrix theory, and number theory
NASA Astrophysics Data System (ADS)
Torquato, Salvatore; Scardicchio, A.; Zachary, Chase E.
2008-11-01
It is well known that one can map certain properties of random matrices, fermionic gases, and zeros of the Riemann zeta function to a unique point process on the real line ℝ. Here we analytically provide exact generalizations of such a point process in d-dimensional Euclidean space ℝ^d for any d, which are special cases of determinantal processes. In particular, we obtain the n-particle correlation functions for any n, which completely specify the point processes in ℝ^d. We also demonstrate that spin-polarized fermionic systems in ℝ^d have these same n-particle correlation functions in each dimension. The point processes for any d are shown to be hyperuniform, i.e., infinite wavelength density fluctuations vanish, and the structure factor (or power spectrum) S(k) has a non-analytic behavior at the origin given by S(k) ~ |k| (k → 0). The latter result implies that the pair correlation function g2(r) tends to unity for large pair distances with a decay rate that is controlled by the power law 1/r^(d+1), which is a well-known property of bosonic ground states and more recently has been shown to characterize maximally random jammed sphere packings. We graphically display one- and two-dimensional realizations of the point processes in order to vividly reveal their 'repulsive' nature. Indeed, we show that the point processes can be characterized by an effective 'hard core' diameter that grows like the square root of d. The nearest-neighbor distribution functions for these point processes are also evaluated and rigorously bounded. Among other results, this analysis reveals that the probability of finding a large spherical cavity of radius r in dimension d behaves like a Poisson point process but in dimension d+1, i.e., this probability is given by exp[-κ(d)r^(d+1)] for large r and finite d, where κ(d) is a positive d-dependent constant. We also show that as d increases, the point process behaves effectively like a sphere packing with a coverage fraction of space that is no denser than 1/2^d. This coverage fraction has a special significance in the study of sphere packings in high-dimensional Euclidean spaces.
Aminopropyl-Silica Hybrid Particles as Supports for Humic Acids Immobilization.
Sándor, Mónika; Nistor, Cristina Lavinia; Szalontai, Gábor; Stoica, Rusandica; Nicolae, Cristian Andi; Alexandrescu, Elvira; Fazakas, József; Oancea, Florin; Donescu, Dan
2016-01-08
A series of aminopropyl-functionalized silica nanoparticles were prepared through a basic two step sol-gel process in water. Prior to being aminopropyl-functionalized, silica particles with an average diameter of 549 nm were prepared from tetraethyl orthosilicate (TEOS), using a Stöber method. In a second step, aminopropyl-silica particles were prepared by silanization with 3-aminopropyltriethoxysilane (APTES), added drop by drop to the sol-gel mixture. The synthesized amino-functionalized silica particles are intended to be used as supports for immobilization of humic acids (HA) through electrostatic bonds. Furthermore, by inserting, beside APTES, unhydrolysable mono-, di-, or trifunctional alkylsilanes (methyltriethoxy silane (MeTES), trimethylethoxysilane (Me₃ES), diethoxydimethylsilane (Me₂DES) and 1,2-bis(triethoxysilyl)ethane (BETES)) onto the silica particle surface, the spacing of the free amino groups was intended to facilitate their interaction with HA large molecules. Two sorts of HA were used for evaluating the immobilization capacity of the novel aminosilane supports. The results proved the efficient functionalization of silica nanoparticles with amino groups and showed that the immobilization of the two tested types of humic acid substances was well achieved for all the TEOS/APTES = 20/1 (molar ratio) silica hybrids, whether or not the amino functions were spaced by alkyl groups. It was shown that the density of aminopropyl functions is low enough at this low APTES fraction that a further spacing by alkyl groups is not required. Moreover, all the hybrids having negative zeta potential values exhibited low interaction with HA molecules.
SP_Ace: a new code to derive stellar parameters and elemental abundances
NASA Astrophysics Data System (ADS)
Boeche, C.; Grebel, E. K.
2016-03-01
Context. Ongoing and future massive spectroscopic surveys will collect large numbers (106-107) of stellar spectra that need to be analyzed. Highly automated software is needed to derive stellar parameters and chemical abundances from these spectra. Aims: We developed a new method of estimating the stellar parameters Teff, log g, [M/H], and elemental abundances. This method was implemented in a new code, SP_Ace (Stellar Parameters And Chemical abundances Estimator). This is a highly automated code suitable for analyzing the spectra of large spectroscopic surveys with low or medium spectral resolution (R = 2000-20 000). Methods: After the astrophysical calibration of the oscillator strengths of 4643 absorption lines covering the wavelength ranges 5212-6860 Å and 8400-8924 Å, we constructed a library that contains the equivalent widths (EW) of these lines for a grid of stellar parameters. The EWs of each line are fit by a polynomial function that describes the EW of the line as a function of the stellar parameters. The coefficients of these polynomial functions are stored in a library called the "GCOG library". SP_Ace, a code written in FORTRAN95, uses the GCOG library to compute the EWs of the lines, constructs models of spectra as a function of the stellar parameters and abundances, and searches for the model that minimizes the χ2 deviation when compared to the observed spectrum. The code has been tested on synthetic and real spectra for a wide range of signal-to-noise and spectral resolutions. Results: SP_Ace derives stellar parameters such as Teff, log g, [M/H], and chemical abundances of up to ten elements for low to medium resolution spectra of FGK-type stars with precision comparable to the one usually obtained with spectra of higher resolution. Systematic errors in stellar parameters and chemical abundances are presented and identified with tests on synthetic and real spectra. Stochastic errors are automatically estimated by the code for all the parameters. A simple Web front end of SP_Ace can be found at http://dc.g-vo.org/SP_ACE while the source code will be published soon. Full Tables D.1-D.3 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/587/A2
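The core idea of modelling equivalent widths (EWs) as polynomial functions of the stellar parameters and then minimizing a χ² against the observed spectrum can be sketched as follows. This is a toy illustration with hypothetical coefficients and a generic quadratic polynomial, not SP_Ace's actual GCOG library or FORTRAN95 implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy "GCOG-like" idea: each line's equivalent width is approximated by a
# polynomial in the stellar parameters p = (Teff, logg, [M/H]).
def ew_model(p, coeffs):
    teff, logg, met = p
    # quadratic polynomial per line; coeffs has shape (n_lines, 10)
    terms = np.array([1.0, teff, logg, met, teff * logg, teff * met, logg * met,
                      teff ** 2, logg ** 2, met ** 2])
    return coeffs @ terms

def chi2(p, coeffs, ew_obs, ew_err):
    return np.sum(((ew_model(p, coeffs) - ew_obs) / ew_err) ** 2)

# Fit: start from a rough guess and minimize the chi^2 (hypothetical data).
rng = np.random.default_rng(0)
coeffs = rng.normal(scale=1e-3, size=(500, 10))      # stand-in for an EW library
p_true = np.array([5.777, 4.44, 0.0])                # Teff expressed in kK for scaling
ew_obs = ew_model(p_true, coeffs) + rng.normal(scale=1e-4, size=500)
result = minimize(chi2, x0=[5.5, 4.0, -0.2],
                  args=(coeffs, ew_obs, np.full(500, 1e-4)), method="Nelder-Mead")
print(result.x)
```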
Technical Assessment: Integrated Photonics
2015-10-01
in global internet protocol traffic as a function of time by local access technology. Photonics continues to play a critical role in enabling this...communication networks. This has enabled services like the Internet, high-performance computing, and power-efficient large-scale data centers. The...signal processing, quantum information science, and optics for free-space applications. However, major obstacles challenge the implementation of
Condition of cardiovascular systems of astronauts during flight of Soyuz orbital station
NASA Technical Reports Server (NTRS)
Degtyarev, V. A.; Popov, I. I.; Batenchuk-Tusko, T. V.; Kolmykova, N. D.; Lapshina, N. A.; Kirillova, Z. A.; Doroshev, V. G.; Kukushkin, Y. A.
1975-01-01
Extensive studies of blood circulation functions during manned space flight demonstrated a pronounced tendency toward an increase in minute volume of the blood and a decrease in pulse wave propagation rate. Individual blood circulation indices had large amplitude fluctuations. Physical work loads caused slow recovery of heart rate, arterial pressure and minute blood volume.
LSI logic for phase-control rectifiers
NASA Technical Reports Server (NTRS)
Dolland, C.
1980-01-01
Signals for controlling a phase-controlled rectifier circuit are generated by combinatorial logic that can be implemented in large-scale integration (LSI). The LSI circuit saves space, weight, and assembly time compared to previous controls that employ one-shot multivibrators, latches, and capacitors. The LSI logic functions by sensing the three phases of the ac power source and by comparing actual currents with intended currents.
NASA Astrophysics Data System (ADS)
Alberts, Samantha J.
The investigation of microgravity fluid dynamics emerged out of necessity with the advent of space exploration. In particular, capillary research took a leap forward in the 1960s with regards to liquid settling and interfacial dynamics. Due to inherent temperature variations in large spacecraft liquid systems, such as fuel tanks, forces develop on gas-liquid interfaces which induce thermocapillary flows. To date, thermocapillary flows have been studied in small, idealized research geometries usually under terrestrial conditions. The 1 to 3 m lengths in current and future large tanks and hardware are designed based on hardware rather than research, which leaves spaceflight systems designers without the technological tools to effectively create safe and efficient designs. This thesis focused on the design and feasibility of a large length-scale thermocapillary flow experiment, which utilizes temperature variations to drive a flow. The design of a helical channel geometry ranging from 1 to 2.5 m in length permits a large length-scale thermocapillary flow experiment to fit in a seemingly small International Space Station (ISS) facility such as the Fluids Integrated Rack (FIR). An initial investigation determined that the proposed experiment produces measurable data while adhering to the FIR facility limitations. The computational portion of this thesis focused on the investigation of functional geometries of fuel tanks and depots using Surface Evolver. This work outlines the design of a large length-scale thermocapillary flow experiment for the ISS FIR. The results from this work improve the understanding of thermocapillary flows and thus improve technological tools for predicting heat and mass transfer in large length-scale thermocapillary flows. Without the tools to understand the thermocapillary flows in these systems, engineers are forced to design larger, heavier vehicles to assure safety and mission success.
Kurashige, Yuki; Saitow, Masaaki; Chalupský, Jakub; Yanai, Takeshi
2014-06-28
The O-O (oxygen-oxygen) bond formation is widely recognized as a key step of the catalytic reaction of dioxygen evolution from water. Recently, the water oxidation catalyzed by potassium ferrate (K2FeO4) was investigated on the basis of experimental kinetic isotope effect analysis assisted by density functional calculations, revealing the intramolecular oxo-coupling mechanism within a di-iron(vi) intermediate, or diferrate [Sarma et al., J. Am. Chem. Soc., 2012, 134, 15371]. Here, we report a detailed examination of this diferrate-mediated O-O bond formation using scalable multireference electronic structure theory. High-dimensional correlated many-electron wave functions beyond the one-electron picture were computed using the ab initio density matrix renormalization group (DMRG) method along the O-O bond formation pathway. The necessity of using large active space arises from the description of complex electronic interactions and varying redox states both associated with two-center antiferromagnetic multivalent iron-oxo coupling. Dynamic correlation effects on top of the active space DMRG wave functions were additively accounted for by complete active space second-order perturbation (CASPT2) and multireference configuration interaction (MRCI) based methods, which were recently introduced by our group. These multireference methods were capable of handling the double shell effects in the extended active space treatment. The calculations with an active space of 36 electrons in 32 orbitals, which is far over conventional limitation, provide a quantitatively reliable prediction of potential energy profiles and confirmed the viability of the direct oxo coupling. The bonding nature of Fe-O and dual bonding character of O-O are discussed using natural orbitals.
What North America's skeleton crew of megafauna tells us about community disassembly
2017-01-01
Functional trait diversity is increasingly used to model future changes in community structure despite a poor understanding of community disassembly's effects on functional diversity. By tracking the functional diversity of the North American large mammal fauna through the End-Pleistocene megafaunal extinction and up to the present, I show that contrary to expectations, functionally unique species are no more likely to go extinct than functionally redundant species. This makes total functional richness loss no worse than expected given similar taxonomic richness declines. However, where current species sit in functional space relative to pre-anthropogenic baselines is not random and likely explains ecosystem functional changes better than total functional richness declines. Prehistoric extinctions have left many extant species functionally isolated and future extinctions will cause even more rapid drops in functional richness. PMID:28077767
Space Station - Government and industry launch joint venture
NASA Astrophysics Data System (ADS)
Nichols, R. G.
1985-04-01
After the development of the space transportation system over the last decade, the decision to launch a permanently manned space station was announced by President Reagan in his 1984 State of the Union Address. As a result of work performed by the Space Station Task Force created in 1982, NASA was able to present Congress with a plan for achieving the President's objective. The plan envisions a space station which would cost about $8 billion and be operational as early as 1992. The functions of the Space Station would include the servicing of satellites. In addition, the station would serve as a base for the construction of large space structures, and provide facilities for research and development. The Space Station design selected by NASA is the 'Power Tower', a 450-foot-long truss structure which will travel in orbit with its main axis perpendicular to the earth's surface. Attention is given to the living and working quarters for the crew, the location of earth observation equipment and astronomical instruments, and details regarding the employment of the Station.
Advances in Mechanical Architectures of Large Precision Space Apertures
NASA Astrophysics Data System (ADS)
Datashvili, Leri; Maghaldadze, Nikoloz; Endler, Stephan; Pauw, Julian; He, Peng; Baier, Horst; Ihle, Alexander; Santiago Prowlad, Julian
2014-06-01
Recent advances in the development of mechanical architectures of large deployable reflectors (LDRs) through projects of the European Space Agency are addressed in this paper. Two different directions of LDR architectures are being investigated and developed at LSS and LLB: LDRs with a knitted metal mesh reflecting surface and LDRs with a flexible shell-membrane reflecting surface. The first direction is mature, and advancement of the novel architecture of the supporting structure, which provides deployment and the final shape accuracy of the metal mesh, is underway. The second direction is rather new and its current development stage is focused on investigations of the dimensional stability of the flexible shell-membrane reflecting surface. In both directions, 5 m diameter functional models will be built to demonstrate the achieved performances, which shall prepare the basis for further improvement of their technology readiness levels.
Combined injury syndrome in space-related radiation environments
NASA Astrophysics Data System (ADS)
Dons, R. F.; Fohlmeister, U.
The risk of combined injury (CI) to space travelers is a function of exposure to anomalously large surges of a broad spectrum of particulate and photon radiations, conventional trauma (T), and effects of weightlessness including decreased intravascular fluid volume, and myocardial deconditioning. CI may occur even at relatively low doses of radiation which can synergistically enhance morbidity and mortality from T. Without effective countermeasures, prolonged residence in space is expected to predispose most individuals to bone fractures as a result of calcium loss in the microgravity environment. Immune dysfunction may occur from residence in space independent of radiation exposure. Thus, wound healing would be compromised if infection were to occur. Survival of the space traveler with CI would be significantly compromised if there were delays in wound closure or in the application of simple supportive medical or surgical therapies. Particulate radiation has the potential for causing greater gastrointestinal injury than photon radiation, but bone healing should not be compromised at the expected doses of either type of radiation in space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seljak, Uroš; McDonald, Patrick, E-mail: useljak@berkeley.edu, E-mail: pvmcdonald@lbl.gov
We develop a phase space distribution function approach to redshift space distortions (RSD), in which the redshift space density can be written as a sum over velocity moments of the distribution function. These moments are density weighted and have well defined physical interpretation: their lowest orders are density, momentum density, and stress energy density. The series expansion is convergent if kμu/aH < 1, where k is the wavevector, H the Hubble parameter, u the typical gravitational velocity and μ = cos θ, with θ being the angle between the Fourier mode and the line of sight. We perform an expansion of these velocity moments into helicity modes, which are eigenmodes under rotation around the axis of Fourier mode direction, generalizing the scalar, vector, tensor decomposition of perturbations to an arbitrary order. We show that only equal helicity moments correlate and derive the angular dependence of the individual contributions to the redshift space power spectrum. We show that the dominant term of μ² dependence on large scales is the cross-correlation between the density and scalar part of momentum density, which can be related to the time derivative of the matter power spectrum. Additional terms contributing to μ² and dominating on small scales are the vector part of momentum density-momentum density correlations, the energy density-density correlations, and the scalar part of anisotropic stress density-density correlations. The second term is what is usually associated with the small scale Fingers-of-God damping and always suppresses power, but the first term comes with the opposite sign and always adds power. Similarly, we identify 7 terms contributing to μ⁴ dependence. Some of the advantages of the distribution function approach are that the series expansion converges on large scales and remains valid in multi-stream situations. We finish with a brief discussion of implications for RSD in galaxies relative to dark matter, highlighting the issue of scale dependent bias of velocity moments correlators.
Structures that Contribute to Middle-Ear Admittance in Chinchilla
Rosowski, John J.; Ravicz, Michael E.; Songer, Jocelyn E.
2009-01-01
We describe measurements of middle-ear input admittance in chinchillas (Chinchilla lanigera) before and after various manipulations that define the contributions of different middle-ear components to function. The chinchilla’s middle-ear air spaces have a large effect on the low-frequency compliance of the middle ear, and removing the influences of these spaces reveals a highly admittant tympanic membrane and ossicular chain. Measurements of the admittance of the air spaces reveal that the high-degree of segmentation of these spaces has only a small effect on the admittance. Draining the cochlea further increases the middle-ear admittance at low frequencies and removes a low-frequency (less than 300 Hz) level dependence in the admittance. Spontaneous or sound-driven contractions of the middle-ear muscles in deeply anesthetized animals were associated with significant changes in middle-ear admittance. PMID:16944166
Back-illuminated large area frame transfer CCDs for space-based hyper-spectral imaging applications
NASA Astrophysics Data System (ADS)
Philbrick, Robert H.; Gilmore, Angelo S.; Schrein, Ronald J.
2016-07-01
Standard offerings of large area, back-illuminated full frame CCD sensors are available from multiple suppliers and they continue to be commonly deployed in ground- and space-based applications. By comparison, the availability of large area frame transfer CCDs is sparse, with the accompanying 2x increase in die area no doubt being a contributing factor. Modern back-illuminated CCDs yield very high quantum efficiency in the 290 to 400 nm band, a wavelength region of great interest in space-based instruments studying atmospheric phenomena. In fast-framing (e.g., 10-20 Hz) space-based applications such as hyper-spectral imaging, the use of a mechanical shutter to block incident photons during readout can prove costly and lower instrument reliability. The emergence of large area, all-digital visible CMOS sensors, with integrate-while-read functionality, is an alternative solution to CCDs; but, even after factoring in reduced complexity and cost of support electronics, the present cost to implement such novel sensors is prohibitive to cost-constrained missions. Hence, there continues to be a niche set of applications where large area, back-illuminated frame transfer CCDs with high UV quantum efficiency, high frame rate, high full well, and low noise provide an advantageous solution. To address this need, a family of large area frame transfer CCDs has been developed that includes 2048 (columns) x 256 (rows) (FT4), 2048 x 512 (FT5), and 2048 x 1024 (FT6) full frame transfer CCDs; and a 2048 x 1024 (FT7) split-frame transfer CCD. Each wafer contains 4 FT4, 2 FT5, 2 FT6, and 2 FT7 die. The designs have undergone radiation and accelerated life qualification and the electro-optical performance of these CCDs over the wavelength range of 290 to 900 nm is discussed.
NASA Technical Reports Server (NTRS)
Gorski, Krzysztof M.; Silk, Joseph; Vittorio, Nicola
1992-01-01
A new technique is used to compute the correlation function for large-angle cosmic microwave background anisotropies resulting from both the space and time variations in the gravitational potential in flat, vacuum-dominated, cold dark matter cosmological models. Such models, with Ω₀ of about 0.2, fit the excess power, relative to the standard cold dark matter model, observed in the large-scale galaxy distribution and allow a high value for the Hubble constant. The low-order multipoles and quadrupole anisotropy that are potentially observable by COBE and other ongoing experiments should definitively test these models.
Verheijen, Lieneke M; Aerts, Rien; Bönisch, Gerhard; Kattge, Jens; Van Bodegom, Peter M
2016-01-01
Plant functional types (PFTs) aggregate the variety of plant species into a small number of functionally different classes. We examined to what extent plant traits, which reflect species' functional adaptations, can capture functional differences between predefined PFTs and which traits optimally describe these differences. We applied Gaussian kernel density estimation to determine probability density functions for individual PFTs in an n-dimensional trait space and compared predicted PFTs with observed PFTs. All possible combinations of 1-6 traits from a database with 18 different traits (total of 18 287 species) were tested. A variety of trait sets had approximately similar performance, and 4-5 traits were sufficient to classify up to 85% of the species into PFTs correctly, whereas this was 80% for a bioclimatically defined tree PFT classification. Well-performing trait sets included combinations of correlated traits that are considered functionally redundant within a single plant strategy. This analysis quantitatively demonstrates how structural differences between PFTs are reflected in functional differences described by particular traits. Differentiation between PFTs is possible despite large overlap in plant strategies and traits, showing that PFTs are differently positioned in multidimensional trait space. This study therefore provides the foundation for important applications for predictive ecology. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
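A compact sketch of the classification scheme described above, assuming scikit-learn is available and that traits have already been scaled to comparable ranges: one Gaussian kernel density is fitted per predefined PFT, and each species is assigned to the PFT with the highest density at its trait values. Bandwidth choice and trait scaling are assumptions here, not values from the study.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_pft_densities(traits, pft_labels, bandwidth=0.5):
    """Fit one Gaussian kernel density per predefined PFT in trait space."""
    return {c: KernelDensity(kernel="gaussian", bandwidth=bandwidth)
                 .fit(traits[pft_labels == c])
            for c in np.unique(pft_labels)}

def predict_pft(traits, densities):
    """Assign each species to the PFT whose density is highest at its traits."""
    classes = sorted(densities)
    log_dens = np.column_stack([densities[c].score_samples(traits) for c in classes])
    return np.asarray(classes)[np.argmax(log_dens, axis=1)]

# Agreement between predicted and predefined PFTs then measures how well the
# chosen trait set separates the classes, as in the analysis above.
```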
Klauser, Benedikt; Rehm, Charlotte; Summerer, Daniel; Hartig, Jörg S
2015-01-01
Synthetic RNA-based switches are a growing class of genetic controllers applied in synthetic biology to engineer cellular functions. In this chapter, we detail a protocol for the selection of posttranscriptional controllers of gene expression in yeast using the Schistosoma mansoni hammerhead ribozyme as a central catalytic unit. Incorporation of a small molecule-sensing aptamer domain into the ribozyme renders its activity ligand-dependent. Aptazymes display numerous advantages over conventional protein-based transcriptional controllers, namely, the use of little genomic space for encryption, their modular architecture allowing for easy reprogramming to new inputs, the physical linkage to the message to be controlled, and the ability to function without protein cofactors. Herein, we describe the method to select ribozyme-based switches of gene expression in Saccharomyces cerevisiae that we successfully implemented to engineer neomycin- and theophylline-responsive switches. We also highlight how to adapt the protocol to screen for switches responsive to other ligands. Reprogramming of the sensor unit and incorporation into any RNA of interest enables the fulfillment of a variety of regulatory functions. However, proper functioning of the aptazyme is largely dependent on optimal connection between the aptamer and the catalytic core. We obtained functional switches from a pool of variants carrying randomized connection sequences by an in vivo selection in MaV203 yeast cells that allows screening of a large sequence space of up to 1×10^9 variants. The protocol given explains how to construct aptazyme libraries, carry out the in vivo selection and characterize novel ON- and OFF-switches. © 2015 Elsevier Inc. All rights reserved.
Digital processing of mesoscale analysis and space sensor data
NASA Technical Reports Server (NTRS)
Hickey, J. S.; Karitani, S.
1985-01-01
The mesoscale analysis and space sensor (MASS) data management and analysis system, implemented on the research computer system, is presented. The research computer system provides a wide range of capabilities for processing and displaying large volumes of conventional and satellite-derived meteorological data. It consists of three primary computers (HP-1000F, Harris/6, and Perkin-Elmer 3250), each of which performs a specific function according to its unique capabilities. The software, database management, and display capabilities of the research computer system are described in terms of providing a very effective interactive research tool for the digital processing of mesoscale analysis and space sensor data.
Capability 9.3 Assembly and Deployment
NASA Technical Reports Server (NTRS)
Dorsey, John
2005-01-01
Large space systems are required for a range of operational, commercial, and scientific mission objectives; however, current launch vehicle capacities substantially limit the size of space systems (on-orbit or planetary). Assembly and Deployment is the process of constructing a spacecraft or system from modules, which may in turn have been constructed from sub-modules in a hierarchical fashion. In-situ assembly of space exploration vehicles and systems will require a broad range of operational capabilities, including component transfer and storage, fluid handling, construction and assembly, and test and verification. Efficient execution of these functions will require supporting infrastructure that can: receive, store and protect (materials, components, etc.); hold and secure; position, align and control; deploy; connect/disconnect; construct; join; assemble/disassemble; dock/undock; and mate/demate.
NASA Technical Reports Server (NTRS)
1984-01-01
The large space structures technology development missions to be performed on an early manned space station were studied and defined, and the resources needed, as well as the design implications for an early space station to carry out these missions, were determined. Emphasis is being placed on more detail in mission designs and space station resource requirements.
NASA Astrophysics Data System (ADS)
Shimojo, Fuyuki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya
2008-02-01
A linear-scaling algorithm based on a divide-and-conquer (DC) scheme has been designed to perform large-scale molecular-dynamics (MD) simulations, in which interatomic forces are computed quantum mechanically in the framework of the density functional theory (DFT). Electronic wave functions are represented on a real-space grid, which is augmented with a coarse multigrid to accelerate the convergence of iterative solutions and with adaptive fine grids around atoms to accurately calculate ionic pseudopotentials. Spatial decomposition is employed to implement the hierarchical-grid DC-DFT algorithm on massively parallel computers. The largest benchmark tests include an 11.8×10^6-atom (1.04×10^12 electronic degrees of freedom) calculation on 131 072 IBM BlueGene/L processors. The DC-DFT algorithm has well-defined parameters to control the data locality, with which the solutions converge rapidly. Also, the total energy is well conserved during the MD simulation. We perform first-principles MD simulations based on the DC-DFT algorithm, in which large system sizes bring in excellent agreement with x-ray scattering measurements for the pair-distribution function of liquid Rb and allow the description of low-frequency vibrational modes of graphene. The band gap of a CdSe nanorod calculated by the DC-DFT algorithm agrees well with the available conventional DFT results. With the DC-DFT algorithm, the band gap is calculated for larger system sizes until the result reaches the asymptotic value.
NASA Astrophysics Data System (ADS)
Okumura, Teppei; Takada, Masahiro; More, Surhud; Masaki, Shogo
2017-07-01
The peculiar velocity field measured by redshift-space distortions (RSD) in galaxy surveys provides a unique probe of the growth of large-scale structure. However, systematic effects arise when including satellite galaxies in the clustering analysis. Since satellite galaxies tend to reside in massive haloes with a greater halo bias, the inclusion boosts the clustering power. In addition, virial motions of the satellite galaxies cause a significant suppression of the clustering power due to non-linear RSD effects. We develop a novel method to recover the redshift-space power spectrum of haloes from the observed galaxy distribution by minimizing the contamination of satellite galaxies. The cylinder-grouping method (CGM) we study effectively excludes satellite galaxies from a galaxy sample. However, we find that this technique produces apparent anisotropies in the reconstructed halo distribution over all the scales which mimic RSD. On small scales, the apparent anisotropic clustering is caused by exclusion of haloes within the anisotropic cylinder used by the CGM. On large scales, the misidentification of different haloes in the large-scale structures, aligned along the line of sight, into the same CGM group causes the apparent anisotropic clustering via their cross-correlation with the CGM haloes. We construct an empirical model for the CGM halo power spectrum, which includes correction terms derived using the CGM window function at small scales as well as the linear matter power spectrum multiplied by a simple anisotropic function at large scales. We apply this model to a mock galaxy catalogue at z = 0.5, designed to resemble Sloan Digital Sky Survey-III Baryon Oscillation Spectroscopic Survey (BOSS) CMASS galaxies, and find that our model can predict both the monopole and quadrupole power spectra of the host haloes up to k < 0.5 h Mpc^-1 to within 5 per cent.
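The cylinder-grouping idea, stripped to its essentials, can be sketched as follows. This is a simplified, hypothetical version (it keeps an arbitrary member of each group rather than the brightest galaxy, and uses fixed comoving cylinder dimensions), intended only to show how anisotropic linking in the transverse and line-of-sight directions produces groups.

```python
import numpy as np
from scipy.spatial import cKDTree

def cylinder_group(x_perp, x_los, r_perp_max, d_los_max):
    """Toy cylinder-grouping sketch: link galaxies whose transverse separation
    is below r_perp_max and line-of-sight separation below d_los_max, then keep
    one representative per group as the halo proxy.
    x_perp : (N, 2) transverse comoving positions; x_los : (N,) LOS positions."""
    tree = cKDTree(x_perp)
    pairs = tree.query_pairs(r_perp_max, output_type="ndarray")
    linked = pairs[np.abs(x_los[pairs[:, 0]] - x_los[pairs[:, 1]]) < d_los_max]

    # union-find to merge linked pairs into groups
    parent = np.arange(len(x_los))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in linked:
        parent[find(i)] = find(j)
    roots = np.array([find(i) for i in range(len(x_los))])
    _, keep = np.unique(roots, return_index=True)   # one member per group
    return keep
```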
Small black holes in global AdS spacetime
NASA Astrophysics Data System (ADS)
Jokela, Niko; Pönni, Arttu; Vuorinen, Aleksi
2016-04-01
We study the properties of two-point functions and quasinormal modes in a strongly coupled field theory holographically dual to a small black hole in global anti-de Sitter spacetime. Our results are seen to smoothly interpolate between known limits corresponding to large black holes and thermal AdS space, demonstrating that the Son-Starinets prescription works even when there is no black hole in the spacetime. Omitting issues related to the internal space, the results can be given a field theory interpretation in terms of the microcanonical ensemble, which provides access to energy densities forbidden in the canonical description.
Cost Modeling for Space Optical Telescope Assemblies
NASA Technical Reports Server (NTRS)
Stahl, H. Philip; Henrichs, Todd; Luedtke, Alexander; West, Miranda
2011-01-01
Parametric cost models are used to plan missions, compare concepts and justify technology investments. This paper reviews an on-going effort to develop cost models for space telescopes. It summarizes the methodology used to develop cost models and documents how changes to the database have changed previously published preliminary cost models. While the cost models are evolving, the previously published findings remain valid: it costs less per square meter of collecting aperture to build a large telescope than a small telescope; technology development as a function of time reduces cost; and lower areal density telescopes cost more than more massive telescopes.
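A hypothetical illustration of such a parametric model (not the authors' actual regression) is a power law in aperture diameter with an exponential technology-maturity term, fitted in log space:

```python
import numpy as np

# Fit cost = a * D^b * exp(c * (year - 2000)) to (aperture D, launch year, cost)
# records by ordinary least squares in log space.  The data and functional form
# are illustrative assumptions only.
def fit_cost_model(D, year, cost):
    X = np.column_stack([np.ones_like(D), np.log(D), year - 2000.0])
    beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
    log_a, b, c = beta
    return np.exp(log_a), b, c

# With b < 2 the cost per square meter of collecting aperture falls as the
# telescope grows, and c < 0 captures cost reduction from technology
# development as a function of time, in the spirit of the findings quoted above.
```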
Molloy, Kevin; Shehu, Amarda
2013-01-01
Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories at great detail but also at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically-credible conformational paths connecting two functionally-relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have been recently proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate for balance between coverage of conformational space and progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Experiments are conducted on small- to medium-size proteins of length up to 214 amino acids and with multiple known functionally-relevant states, some of which are more than 13 Å apart from each other. Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis on the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers.
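The tree-growing strategy can be illustrated with a toy RRT-style sketch in a low-dimensional stand-in for conformational space. The actual method uses molecular fragment replacement, an energy-aware bias over a progress coordinate, a projection layer and a reactive temperature scheme; the sketch below only shows the generic grow-toward-goal loop, with all parameters hypothetical.

```python
import numpy as np

def grow_tree(start, goal, step=0.2, goal_tol=0.3, goal_bias=0.1,
              bounds=(-5.0, 5.0), max_iter=5000, seed=0):
    """Toy RRT-style sketch: grow a tree from `start`, biasing random targets
    toward `goal`, and return a path once the goal region is reached."""
    rng = np.random.default_rng(seed)
    goal = np.asarray(goal, float)
    nodes, parents = [np.asarray(start, float)], [-1]
    for _ in range(max_iter):
        target = goal if rng.random() < goal_bias else rng.uniform(*bounds, size=len(goal))
        i = int(np.argmin([np.linalg.norm(n - target) for n in nodes]))  # nearest node
        direction = target - nodes[i]
        new = nodes[i] + step * direction / (np.linalg.norm(direction) + 1e-12)
        nodes.append(new); parents.append(i)
        if np.linalg.norm(new - goal) < goal_tol:
            path, j = [], len(nodes) - 1             # backtrack to the root
            while j != -1:
                path.append(nodes[j]); j = parents[j]
            return path[::-1]
    return None

print(len(grow_tree(start=[-3.0, -3.0], goal=[3.0, 3.0]) or []))
```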
Long-range strategy for remote sensing: an integrated supersystem
NASA Astrophysics Data System (ADS)
Glackin, David L.; Dodd, Joseph K.
1995-12-01
Present large space-based remote sensing systems, and those planned for the next two decades, remain dichotomous and custom-built. An integrated architecture might reduce total cost without limiting system performance. An example of such an architecture, developed at The Aerospace Corporation, explores the feasibility of reducing overall space systems costs by forming a 'super-system' which will provide environmental, earth resources and theater surveillance information to a variety of users. The concept involves integration of programs, sharing of common spacecraft bus designs and launch vehicles, use of modular components and subsystems, integration of command and control and data capture functions, and establishment of an integrated program office. Smart functional modules that are easily tested and replaced are used wherever possible in the space segment. Data is disseminated to systems such as NASA's EOSDIS, and data processing is performed at established centers of expertise. This concept is advanced for potential application as a follow-on to currently budgeted and planned space-based remote sensing systems. We hope that this work will serve to engender discussion that may be of assistance in leading to multinational remote sensing systems with greater cost effectiveness at no loss of utility to the end user.
Electronic transport close to semi-infinite 2D systems and their interfaces
NASA Astrophysics Data System (ADS)
Xia, Fanbing; Wang, Jian; Jian Wang's research Group Team
Transport properties of 2D materials, especially close to their boundaries, have received much attention since the successful fabrication of graphene. While most previous work is devoted to the conventional lead-device-lead setup with a finite-size center area, this project investigates real-space transport properties of infinite and semi-infinite 2D systems within the framework of the non-equilibrium Green's function. The commonly used method of calculating the Green's function by inverting matrices in real space can be unstable for large systems, as it sometimes gives non-converging results. By transforming from real space to momentum space, we replace the matrix inversion by a Brillouin zone integral, which can be greatly simplified by the application of contour integration. Combining this methodology with Dyson equations, we are able to calculate transport properties of semi-infinite graphene close to its zigzag boundary and its combination with other materials, including an s-wave superconductor. Interference patterns of transmitted and reflected electrons, graphene lensing effects, and the difference between specular and normal Andreev reflection are verified. We also show how to apply this method to a broad range of 2D materials. The University of Hong Kong.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Myoung-Jae; Jung, Young-Dae, E-mail: ydjung@hanyang.ac.kr; Department of Applied Physics and Department of Bionanotechnology, Hanyang University, Ansan, Kyunggi-Do 15588
The dispersion relation and the dissipation process of the space-charge wave propagating in a bounded plasma such as a cylindrical waveguide are investigated by employing the longitudinal dielectric permittivity that contains the diffusivity based on the Dupree theory of turbulent plasma. We derived the dispersion relation for the space-charge wave in terms of the radius of the cylindrical waveguide and the roots of the Bessel function of the first kind, which appear as the boundary condition. We find that the wave frequency for a lower-order root of the Bessel function is higher than that of a higher-order root. We also find that the dissipation is greatest for the lowest-order root, but it is suppressed significantly as the order of the root increases. The wave frequency and the dissipation process are enhanced as the radius of the cylindrical waveguide increases. However, they are always smaller than in the case of bulk plasma. We find that the diffusivity of turbulent plasma would enhance the damping of space-charge waves, especially in the range of small wave numbers. For large wave numbers, the diffusivity has little effect on the damping.
Galaxy power-spectrum responses and redshift-space super-sample effect
NASA Astrophysics Data System (ADS)
Li, Yin; Schmittfull, Marcel; Seljak, Uroš
2018-02-01
As a major source of cosmological information, galaxy clustering is susceptible to long-wavelength density and tidal fluctuations. These long modes modulate the growth and expansion rate of local structures, shifting them in both amplitude and scale. These effects are often named the growth and dilation effects, respectively. In particular the dilation shifts the baryon acoustic oscillation (BAO) peak and breaks the assumption of the Alcock-Paczynski (AP) test. This cannot be removed with reconstruction techniques because the effect originates from long modes outside the survey. In redshift space, the long modes generate a large-scale radial peculiar velocity that affects the redshift-space distortion (RSD) signal. We compute the redshift-space response functions of the galaxy power spectrum to long density and tidal modes at leading order in perturbation theory, including both the growth and dilation terms. We validate these response functions against measurements from simulated galaxy mock catalogs. As one application, long density and tidal modes beyond the scale of a survey correlate various observables leading to an excess error known as the super-sample covariance, and thus weaken their constraining power. We quantify the super-sample effect on BAO, AP, and RSD measurements, and study its impact on current and future surveys.
NASA Technical Reports Server (NTRS)
Soosaar, K.
1982-01-01
Some performance requirements and development needs for the design of large space structures are described. Areas of study include: (1) dynamic response of large space structures; (2) structural control and systems integration; (3) attitude control; and (4) large optics and flexibility. Reference is made to a large space telescope.
Advanced spacecraft thermal control techniques
NASA Technical Reports Server (NTRS)
Fritz, C. H.
1977-01-01
The problems of rejecting large amounts of heat from spacecraft were studied. Shuttle Space Laboratory heat rejection uses 1 kW for pumps and fans for every 5 kW (thermal) heat rejection. This is rather inefficient, and for future programs more efficient methods were examined. Two advanced systems were studied and compared to the present pumped-loop system. The advanced concepts are the air-cooled semipassive system, which features rejection of a large percentage of the load through the outer skin, and the heat pipe system, which incorporates heat pipes for every thermal control function.
The two-dimensional Stefan problem with slightly varying heat flux
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gammon, J.; Howarth, J.A.
1995-09-01
The authors solve the two-dimensional Stefan problem of solidification in a half-space, where the heat flux at the wall is a slightly varying function of position along the wall, by means of a large Stefan number approximation (which turns out to be equivalent to a small-time solution), and then by means of the Heat Balance Integral Method, which is valid for all time and which agrees with the large Stefan number solution for small times. A representative solution is given for a particular form of the heat flux perturbation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, James, E-mail: 9jhb3@queensu.ca; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca
In this paper we show that it is possible to use an iterative eigensolver in conjunction with Halverson and Poirier's symmetrized Gaussian (SG) basis [T. Halverson and B. Poirier, J. Chem. Phys. 137, 224101 (2012)] to compute accurate vibrational energy levels of molecules with as many as five atoms. This is done, without storing and manipulating large matrices, by solving a regular eigenvalue problem that makes it possible to exploit direct-product structure. These ideas are combined with a new procedure for selecting which basis functions to use. The SG basis we work with is orders of magnitude smaller than the basis made by using a classical energy criterion. We find significant convergence errors in previous calculations with SG bases. For sum-of-product Hamiltonians, SG bases large enough to compute accurate levels are orders of magnitude larger than even simple pruned bases composed of products of harmonic oscillator functions.
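As a generic illustration of the matrix-free strategy mentioned above (computing eigenvalues without ever storing the full matrix), the sketch below feeds an on-the-fly matrix-vector product to an iterative eigensolver. The tridiagonal toy operator is only a stand-in for a real vibrational Hamiltonian; the paper's symmetrized Gaussian basis and direct-product structure are not reproduced.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# Matrix-free iterative eigensolve: only matrix-vector products are supplied,
# so the full matrix is never stored. The tridiagonal "Hamiltonian" below is a
# toy operator, not the symmetrized-Gaussian-basis Hamiltonian of the paper.

n = 200_000
diag = np.linspace(0.5, 50.0, n)      # toy diagonal (hypothetical spectrum)
off = 0.1 * np.ones(n - 1)            # toy nearest-neighbor coupling

def matvec(v):
    w = diag * v
    w[:-1] += off * v[1:]
    w[1:] += off * v[:-1]
    return w

H = LinearOperator((n, n), matvec=matvec, dtype=float)
vals = eigsh(H, k=6, which='SA', return_eigenvectors=False)
print(np.sort(vals))                  # six lowest eigenvalues
```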
Vortex Formation Time is Not an Index of Ventricular Function
Vlachos, Pavlos P.; Little, William C.
2015-01-01
The diastolic intraventricular ring vortex formation and pinch-off process may provide clinically useful insights into diastolic function in health and disease. The vortex ring formation time (FT) concept, based on hydrodynamic experiments dealing with unconfined (large tank) flow, has attracted considerable attention and popularity. Dynamic conditions evolving within the very confined space of a filling, expansible ventricular chamber with relaxing and rebounding viscoelastic muscular boundaries, diverge from unconfined (large tank) flow and encompass rebounding walls’ suction and myocardial relaxation. Indeed, clinical/physiological findings seeking validation in vivo failed to support the notion that FT is an index of normal/abnormal diastolic ventricular function. Therefore, FT as originally proposed cannot and should not be utilized as such an index. Evidently, physiologically accurate models accounting for coupled hydrodynamic and (patho)physiological myocardial wall interactions with the intraventricular flow are still needed to enhance our understanding and yield diastolic function indices useful and reliable in the clinical setting. PMID:25609509
Voronoi Tessellation for reducing the processing time of correlation functions
NASA Astrophysics Data System (ADS)
Cárdenas-Montes, Miguel; Sevilla-Noarbe, Ignacio
2018-01-01
The increase of data volume in Cosmology is motivating the search for new solutions to the difficulties associated with the large processing time and the precision of calculations. This is especially true for several relevant statistics of the galaxy distribution of the Large Scale Structure of the Universe, namely the two- and three-point angular correlation functions. For these, the processing time has grown critically with the increasing size of the data samples. Beyond parallel implementations to overcome the barrier of processing time, space partitioning algorithms are necessary to reduce the computational load. These can restrict the elements involved in the correlation function estimation to those that can potentially contribute to the final result. In this work, Voronoi Tessellation is used to reduce the processing time of the two-point and three-point angular correlation functions. The results of this proof-of-concept show a significant reduction of the processing time when preprocessing the galaxy positions with Voronoi Tessellation.
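To illustrate the general space-partitioning idea described above (limiting pair counting to separations that can actually contribute), the sketch below uses a KD-tree from scipy as a simple stand-in; the paper's own scheme is based on Voronoi Tessellation and is not reproduced here, and the positions and bin choices are purely illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

# Restrict pair counting to separations below r_max instead of looping over all
# N*(N-1)/2 pairs. A KD-tree stands in here for the paper's Voronoi Tessellation;
# positions, r_max and the binning are illustrative placeholders.

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 100.0, size=(20_000, 2))    # toy (projected) galaxy positions

r_max = 5.0
edges = np.linspace(0.0, r_max, 21)

tree = cKDTree(pos)
pairs = np.array(list(tree.query_pairs(r_max)))    # only pairs that can contribute
sep = np.linalg.norm(pos[pairs[:, 0]] - pos[pairs[:, 1]], axis=1)
dd, _ = np.histogram(sep, bins=edges)
print(dd)   # DD pair counts per separation bin (one ingredient of a 2PCF estimator)
```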
RELIABILITY OF THE DETECTION OF THE BARYON ACOUSTIC PEAK
DOE Office of Scientific and Technical Information (OSTI.GOV)
MartInez, Vicent J.; Arnalte-Mur, Pablo; De la Cruz, Pablo
2009-05-01
The correlation function of the distribution of matter in the universe shows, at large scales, baryon acoustic oscillations, which were imprinted prior to recombination. This feature was first detected in the correlation function of the luminous red galaxies of the Sloan Digital Sky Survey (SDSS). Recently, the final release (DR7) of the SDSS has been made available, and the useful volume is about two times bigger than in the old sample. We present here, for the first time, the redshift-space correlation function of this sample at large scales together with that for one shallower, but denser volume-limited subsample drawn from the Two-Degree Field Redshift Survey. We test the reliability of the detection of the acoustic peak at about 100 h^{-1} Mpc and the behavior of the correlation function at larger scales by means of careful estimation of errors. We confirm the presence of the peak in the latest data although broader than in previous detections.
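For reference, a standard estimator often used for such large-scale correlation-function measurements (not necessarily the exact estimator adopted in this paper) is the Landy-Szalay form

$$\hat{\xi}(s) = \frac{DD(s) - 2\,DR(s) + RR(s)}{RR(s)},$$

where $DD$, $DR$ and $RR$ are the normalized data-data, data-random and random-random pair counts at separation $s$; the acoustic feature then appears as a localized bump in $\hat{\xi}(s)$ near $100\,h^{-1}$ Mpc.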
Functional CAR models for large spatially correlated functional datasets.
Zhang, Lin; Baladandayuthapani, Veerabhadran; Zhu, Hongxiao; Baggerly, Keith A; Majewski, Tadeusz; Czerniak, Bogdan A; Morris, Jeffrey S
2016-01-01
We develop a functional conditional autoregressive (CAR) model for spatially correlated data for which functions are collected on areal units of a lattice. Our model performs functional response regression while accounting for spatial correlations with potentially nonseparable and nonstationary covariance structure, in both the space and functional domains. We show theoretically that our construction leads to a CAR model at each functional location, with spatial covariance parameters varying and borrowing strength across the functional domain. Using basis transformation strategies, the nonseparable spatial-functional model is computationally scalable to enormous functional datasets, generalizable to different basis functions, and can be used on functions defined on higher dimensional domains such as images. Through simulation studies, we demonstrate that accounting for the spatial correlation in our modeling leads to improved functional regression performance. Applied to a high-throughput spatially correlated copy number dataset, the model identifies genetic markers not identified by comparable methods that ignore spatial correlations.
Slattery, Timothy J; Yates, Mark; Angele, Bernhard
2016-12-01
Despite the large number of eye movement studies conducted over the past 30+ years, relatively few have examined the influence that font characteristics have on reading. However, there has been renewed interest in 1 particular font characteristic, letter spacing, which has both theoretical (visual word recognition) and applied (font design) importance. Recently published results suggesting that letter spacing has a bigger impact on the reading performance of dyslexic children have perhaps garnered the most attention (Zorzi et al., 2012). Unfortunately, the effects of increased interletter spacing have been mixed, with some authors reporting facilitation and others reporting inhibition (van den Boer & Hakvoort, 2015). The authors present findings from 3 experiments designed to resolve the seemingly inconsistent letter-spacing effects and provide clarity to researchers and font designers. The results indicate that the direction of spacing effects depends on the size of the default spacing chosen by font developers. Experiment 3 found that interletter spacing interacts with interword spacing, as the required space between words depends on the amount of space used between letters. Interword spacing also interacted with word type, as the inhibition seen with smaller interword spacing was evident with nouns and verbs but not with function words. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Using galaxy pairs to investigate the three-point correlation function in the squeezed limit
NASA Astrophysics Data System (ADS)
Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.
2017-11-01
We investigate the three-point correlation function (3PCF) in the squeezed limit by considering galaxy pairs as discrete objects and cross-correlating them with the galaxy field. We develop an efficient algorithm using fast Fourier transforms to compute such cross-correlations and their associated pair-galaxy bias bp, g and the squeezed 3PCF coefficient Qeff. We implement our method using N-body cosmological simulations and a fiducial halo occupation distribution (HOD) and present the results in both the real space and redshift space. In real space, we observe a peak in bp, g and Qeff at pair separation of ∼2 Mpc, attributed to the fact that galaxy pairs at 2 Mpc separation trace the most massive dark matter haloes. We also see strong anisotropy in the bp, g and Qeff signals that track the large-scale filamentary structure. In redshift space, both the 2 Mpc peak and the anisotropy are significantly smeared out along the line of sight due to finger-of-God effect. In both the real space and redshift space, the squeezed 3PCF shows a factor of 2 variation, contradicting the hierarchical ansatz, but offering rich information on the galaxy-halo connection. Thus, we explore the possibility of using the squeezed 3PCF to constrain the HOD. When we compare two simple HOD models that are closely matched in their projected two-point correlation function (2PCF), we do not yet see a strong variation in the 3PCF that is clearly disentangled from variations in the projected 2PCF. Nevertheless, we propose that more complicated HOD models, e.g. those incorporating assembly bias, can break degeneracies in the 2PCF and show a distinguishable squeezed 3PCF signal.
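A minimal illustration of the FFT-based cross-correlation step described above is sketched below: two periodic density fields on a grid are cross-correlated in Fourier space. The fields here are synthetic stand-ins, and the paper's actual estimators for the pair-galaxy bias b_{p,g} and the squeezed coefficient Q_eff involve pair identification, weighting and binning that are not reproduced.

```python
import numpy as np

# Cross-correlate a gridded "pair" density field with the galaxy density field
# via FFTs on a periodic box. Both fields are synthetic stand-ins; the real
# estimator involves additional pair identification, weighting and binning.

rng = np.random.default_rng(2)
n = 128
delta_g = rng.standard_normal((n, n, n))                        # toy galaxy field
delta_p = 0.8 * delta_g + 0.2 * rng.standard_normal((n, n, n))  # toy pair field

fg = np.fft.rfftn(delta_g)
fp = np.fft.rfftn(delta_p)
xcorr = np.fft.irfftn(fp * np.conj(fg), s=delta_g.shape) / delta_g.size

print(xcorr[0, 0, 0])   # zero-lag amplitude; other indices give other lags
```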
Paquet, Victor; Joseph, Caroline; D'Souza, Clive
2012-01-01
Anthropometric studies typically require a large number of individuals that are selected in a manner so that demographic characteristics that impact body size and function are proportionally representative of a user population. This sampling approach does not allow for an efficient characterization of the distribution of body sizes and functions of sub-groups within a population and the demographic characteristics of user populations can often change with time, limiting the application of the anthropometric data in design. The objective of this study is to demonstrate how demographically representative user populations can be developed from samples that are not proportionally representative in order to improve the application of anthropometric data in design. An engineering anthropometry problem of door width and clear floor space width is used to illustrate the value of the approach.
Power subsystem automation study
NASA Technical Reports Server (NTRS)
Imamura, M. S.; Moser, R. L.; Veatch, M.
1983-01-01
Generic power-system elements and their potential faults are identified. Automation functions and their resulting benefits are defined, and automation functions are partitioned between the power subsystem, the central spacecraft computer, and ground flight-support personnel. All automation activities were categorized as data handling, monitoring, routine control, fault handling, planning and operations, or anomaly handling. Incorporation of all these classes of tasks, except for anomaly handling, in power subsystem hardware and software was concluded to be mandatory to meet the design and operational requirements of the space station. The key drivers are long mission lifetime, modular growth, high-performance flexibility, a need to accommodate different electrical user-load equipment, on-orbit assembly/maintenance/servicing, and a potentially large number of power subsystem components. A significant effort in algorithm development and validation is essential to meeting the 1987 technology readiness date for the space station.
Heat flow in chains driven by thermal noise
NASA Astrophysics Data System (ADS)
Fogedby, Hans C.; Imparato, Alberto
2012-04-01
We consider the large deviation function for a classical harmonic chain composed of N particles driven at the end points by heat reservoirs, first derived in the quantum regime by Saito and Dhar (2007 Phys. Rev. Lett. 99 180601) and in the classical regime by Saito and Dhar (2011 Phys. Rev. E 83 041121) and Kundu et al (2011 J. Stat. Mech. P03007). Within a Langevin description we perform this calculation on the basis of a standard path integral calculation in Fourier space. The cumulant generating function yielding the large deviation function is given in terms of a transmission Green's function and is consistent with the fluctuation theorem. We find a simple expression for the tails of the heat distribution, which turns out to decay exponentially. We, moreover, consider an extension of a single-particle model suggested by Derrida and Brunet (2005 Einstein Aujourd'hui (Les Ulis: EDP Sciences)) and discuss the two-particle case. We also discuss the limit for large N and present a closed expression for the cumulant generating function. Finally, we present a derivation of the fluctuation theorem on the basis of a Fokker-Planck description. This result is not restricted to the harmonic case but is valid for a general interaction potential between the particles.
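For orientation, the generic relation between the cumulant generating function and the large deviation function used in calculations of this kind is recalled below in standard form; this is a textbook statement, not a reproduction of the paper's explicit result.

$$P(Q_\tau/\tau \approx q) \asymp e^{-\tau F(q)}, \qquad F(q) = \sup_{\lambda}\left[\lambda q - \mu(\lambda)\right], \qquad \mu(\lambda) = \lim_{\tau\to\infty}\frac{1}{\tau}\ln\left\langle e^{\lambda Q_\tau}\right\rangle,$$

where $Q_\tau$ is the heat transferred over a time $\tau$. The fluctuation theorem mentioned in the abstract corresponds to a Gallavotti-Cohen-type symmetry of $\mu(\lambda)$ under $\lambda \to -\lambda - \Delta\beta$, with $\Delta\beta$ the (appropriately signed) difference of the reservoirs' inverse temperatures.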
Solid rocket booster performance evaluation model. Volume 1: Engineering description
NASA Technical Reports Server (NTRS)
1974-01-01
The space shuttle solid rocket booster performance evaluation model (SRB-II) is made up of analytical and functional simulation techniques linked together so that a single pass through the model will predict the performance of the propulsion elements of a space shuttle solid rocket booster. The available options allow the user to predict static test performance, predict nominal and off nominal flight performance, and reconstruct actual flight and static test performance. Options selected by the user are dependent on the data available. These can include data derived from theoretical analysis, small scale motor test data, large motor test data and motor configuration data. The user has several options for output format that include print, cards, tape and plots. Output includes all major performance parameters (Isp, thrust, flowrate, mass accounting and operating pressures) as a function of time as well as calculated single point performance data. The engineering description of SRB-II discusses the engineering and programming fundamentals used, the function of each module, and the limitations of each module.
Vancoillie, Steven; Malmqvist, Per Åke; Veryazov, Valera
2016-04-12
The chromium dimer has long been a benchmark molecule to evaluate the performance of different computational methods ranging from density functional theory to wave function methods. Among the latter, multiconfigurational perturbation theory was shown to be able to reproduce the potential energy surface of the chromium dimer accurately. However, for modest active space sizes, it was later shown that different definitions of the zeroth-order Hamiltonian have a large impact on the results. In this work, we revisit the system for the third time with multiconfigurational perturbation theory, now in order to increase the active space of the reference wave function. This reduces the impact of the choice of zeroth-order Hamiltonian and improves the shape of the potential energy surface significantly. We conclude by comparing our results of the dissociation energy and vibrational spectrum to those obtained from several highly accurate multiconfigurational methods and experiment. For a meaningful comparison, we used the extrapolation to the complete basis set for all methods involved.
NASA Astrophysics Data System (ADS)
Barriot, Jean-Pierre; Serafini, Jonathan; Sichoix, Lydie; Benna, Mehdi; Kofman, Wlodek; Herique, Alain
We investigate the inverse problem of imaging the internal structure of comet 67P/Churyumov-Gerasimenko from radiotomography CONSERT data by using a coupled regularized inversion of the Helmholtz equations. A first set of Helmholtz equations, written w.r.t. a basis of 3D Hankel functions, describes the wave propagation outside the comet at large distances; a second set of Helmholtz equations, written w.r.t. a basis of 3D Zernike functions, describes the wave propagation throughout the comet with a variable permittivity. Both sets are connected by continuity equations over a sphere that surrounds the comet. This approach, derived from GPS water vapor tomography of the atmosphere, will permit a full 3D inversion of the internal structure of the comet, contrary to traditional approaches that use a discretization of space at a fraction of the radiowave wavelength.
Systematic Expansion of Active Spaces beyond the CASSCF Limit: A GASSCF/SplitGAS Benchmark Study.
Vogiatzis, Konstantinos D; Li Manni, Giovanni; Stoneburner, Samuel J; Ma, Dongxia; Gagliardi, Laura
2015-07-14
The applicability and accuracy of the generalized active space self-consistent field (GASSCF) and SplitGAS methods are presented. The GASSCF method enables the exploration of larger active spaces than with the conventional complete active space SCF (CASSCF) by fragmentation of a large space into subspaces and by controlling the interspace excitations. In the SplitGAS method, the GAS configuration interaction (CI) expansion is further partitioned in two parts: the principal, which includes the most important configuration state functions, and an extended, containing less relevant but not negligible ones. An effective Hamiltonian is then generated, with the extended part acting as a perturbation to the principal space. Excitation energies of ozone, furan, pyrrole, nickel dioxide, and copper tetrachloride dianion are reported. Various partitioning schemes of the GASSCF and SplitGAS CI expansions are considered and compared with complete active space second-order perturbation theory (CASPT2) and the multireference CI (MRCI) method, or available experimental data. General guidelines for the optimum applicability of these methods are discussed together with their current limitations.
Innovative Robot Archetypes for In-Space Construction and Maintenance
NASA Technical Reports Server (NTRS)
Rehnmark, Fredrik; Ambrose, Robert O.; Kennedy, Brett; Diftler, Myron; Mehling, Joshua; Bridgwater, Lyndon; Radford, Nicolaus; Goza, S. Michael; Culbert, Christopher
2005-01-01
The space environment presents unique challenges and opportunities in the assembly, inspection and maintenance of orbital and transit spaceflight systems. While conventional Extra-Vehicular Activity (EVA) technology, out of necessity, addresses each of the challenges, relatively few of the opportunities have been exploited due to crew safety and reliability considerations. Extra-Vehicular Robotics (EVR) is one of the least-explored design spaces but offers many exciting innovations transcending the crane-like Space Shuttle and International Space Station Remote Manipulator System (RMS) robots used for berthing, coarse positioning and stabilization. Microgravity environments can support new robotic archetypes with locomotion and manipulation capabilities analogous to undersea creatures. Such diversification could enable the next generation of space science platforms and vehicles that are too large and fragile to launch and deploy as self-contained payloads. Sinuous manipulators for minimally invasive inspection and repair in confined spaces, soft-stepping climbers with expansive leg reach envelopes and free-flying nanosatellite cameras can access EVA worksites generally not accessible to humans in spacesuits. These and other novel robotic archetypes are presented along with functionality concepts
Modeling velocity space-time correlations in wind farms
NASA Astrophysics Data System (ADS)
Lukassen, Laura J.; Stevens, Richard J. A. M.; Meneveau, Charles; Wilczek, Michael
2016-11-01
Turbulent fluctuations of wind velocities cause power-output fluctuations in wind farms. The statistics of velocity fluctuations can be described by velocity space-time correlations in the atmospheric boundary layer. In this context, it is important to derive simple physics-based models. The so-called Tennekes-Kraichnan random sweeping hypothesis states that small-scale velocity fluctuations are passively advected by large-scale velocity perturbations in a random fashion. In the present work, this hypothesis is used with an additional mean wind velocity to derive a model for the spatial and temporal decorrelation of velocities in wind farms. It turns out that in the framework of this model, space-time correlations are a convolution of the spatial correlation function with a temporal decorrelation kernel. In this presentation, first results on the comparison to large eddy simulations will be presented and the potential of the approach to characterize power output fluctuations of wind farms will be discussed. Acknowledgements: 'Fellowships for Young Energy Scientists' (YES!) of FOM, the US National Science Foundation Grant IIA 1243482, and support by the Max Planck Society.
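A small numerical sketch of the statement above (space-time correlation as a convolution of the spatial correlation with a temporal decorrelation kernel, here Gaussian and shifted by a mean advection) is given below. The correlation shape, mean wind U and sweeping rms sigma are illustrative placeholders, not values from the study.

```python
import numpy as np

# Random-sweeping picture with mean advection: the space-time correlation is the
# spatial correlation convolved with a Gaussian kernel centered at U*tau with
# width sigma*tau. All shapes and parameter values below are illustrative.

x = np.linspace(-50.0, 50.0, 2001)
dx = x[1] - x[0]
R_spatial = np.exp(-0.5 * (x / 5.0) ** 2)        # toy spatial correlation, L = 5

def space_time_corr(r, tau, U=8.0, sigma=1.5):
    kernel = np.exp(-0.5 * ((x - U * tau) / (sigma * tau)) ** 2)
    kernel /= kernel.sum() * dx                  # normalized sweeping kernel
    shifted = np.interp(r - x, x, R_spatial, left=0.0, right=0.0)
    return np.sum(kernel * shifted) * dx         # convolution evaluated at lag r

print(space_time_corr(r=16.0, tau=2.0))          # near the advected correlation peak
```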
Dobbins, T J; Ida, K; Suzuki, C; Yoshinuma, M; Kobayashi, T; Suzuki, Y; Yoshida, M
2017-09-01
A new Motional Stark Effect (MSE) analysis routine has been developed for improved spatial resolution in the core of the Large Helical Device (LHD). The routine was developed to reduce the dependency of the analysis on the Pfirsch-Schlüter (PS) current in the core. The technique used the change in the polarization angle as a function of flux in order to find the value of diota/dflux at each measurement location. By integrating inwards from the edge, the iota profile can be recovered from this method. This reduces the results' dependency on the PS current because the effect of the PS current on the MSE measurement is almost constant as a function of flux in the core; therefore, the uncertainty in the PS current has a minimal effect on the calculation of the iota profile. In addition, the VMEC database was remapped from flux into r/a space by interpolating in mode space in order to improve the database core resolution. These changes resulted in a much smoother iota profile, conforming more to the physics expectations of standard discharge scenarios in the core of the LHD.
Optical technologies for space sensor
NASA Astrophysics Data System (ADS)
Wang, Hu; Liu, Jie; Xue, Yaoke; Liu, Yang; Liu, Meiying; Wang, Lingguang; Yang, Shaodong; Lin, Shangmin; Chen, Su; Luo, Jianjun
2015-10-01
Space sensors are used as navigation sensors. The sun, the earth, the moon and other planets are used as frames of reference to obtain stellar position coordinates, and then to control the attitude of an aircraft. Being the "eyes" of a space sensor, the optical system forms images of the infinitely distant stars and other celestial bodies. It directly affects the measurement accuracy of the space sensor and indirectly affects the data updating rate. Star sensor technology is the pilot for space sensors. At present, more and more attention is paid to all-day star sensor technology. Through day and night measurements of the stars, the aircraft's attitude in the inertial coordinate system can be provided. Facing the requirements of ultra-high precision, large field of view, wide spectral range, long life, high reliability and multi-functionality, integrated optical sensors will be the trend of future space technology. In the meantime, research on optical technologies for space sensors leads to the development of ultra-precision optical processing and of optical alignment and precision testing technology. It also promotes the development of long-life optical materials and their applications. We have achieved absolute distortion better than ±1 μm and a space lifetime of at least 15 years for space sensor optical systems.
Rattei, Thomas; Tischler, Patrick; Götz, Stefan; Jehl, Marc-André; Hoser, Jonathan; Arnold, Roland; Conesa, Ana; Mewes, Hans-Werner
2010-01-01
The prediction of protein function as well as the reconstruction of evolutionary genesis employing sequence comparison at large is still the most powerful tool in sequence analysis. Due to the exponential growth of the number of known protein sequences and the subsequent quadratic growth of the similarity matrix, the computation of the Similarity Matrix of Proteins (SIMAP) becomes a computational intensive task. The SIMAP database provides a comprehensive and up-to-date pre-calculation of the protein sequence similarity matrix, sequence-based features and sequence clusters. As of September 2009, SIMAP covers 48 million proteins and more than 23 million non-redundant sequences. Novel features of SIMAP include the expansion of the sequence space by including databases such as ENSEMBL as well as the integration of metagenomes based on their consistent processing and annotation. Furthermore, protein function predictions by Blast2GO are pre-calculated for all sequences in SIMAP and the data access and query functions have been improved. SIMAP assists biologists to query the up-to-date sequence space systematically and facilitates large-scale downstream projects in computational biology. Access to SIMAP is freely provided through the web portal for individuals (http://mips.gsf.de/simap/) and for programmatic access through DAS (http://webclu.bio.wzw.tum.de/das/) and Web-Service (http://mips.gsf.de/webservices/services/SimapService2.0?wsdl).
NASA Astrophysics Data System (ADS)
Ward, K. M.; Lin, F. C.
2017-12-01
Recent advances in seismic data-acquisition technology paired with an increasing interest from the academic passive-source seismological community have opened up new scientific targets and imaging possibilities, often referred to as Large-N experiments (large number of instruments). The success of these and other deployments has motivated individual researchers, as well as the larger seismological community, to invest in the next generation of nodal geophones. Although the new instruments have battery life and bandwidth limitations compared to broadband instruments, the relatively low deployment and procurement cost of these new nodal geophones provides an additional novel tool for researchers. Here, we explore the viability of using autonomous three-component nodal geophones to calculate teleseismic Ps receiver functions by comparison with co-located broadband stations, and highlight some potential advantages with a dense nodal array deployed around the Upper Geyser basin in Yellowstone National Park. Two key findings from this example include (1) very dense nodal arrays can be used to image small-scale features in the shallow crust that typical broadband station spacing would alias, and (2) nodal arrays with a larger footprint could be used to image deeper features with greater or equal detail as typical broadband deployments but at a reduced deployment cost. The success of the previous example has motivated a larger 2-D line across the Cascadia subduction zone. In the summer of 2017, we deployed 174 nodal geophones with an average site spacing of 750 m. Synthetic tests with dense station spacing (~1 km) reveal subtler features of the system that are consistent with our preliminary receiver function results from the Cascadia deployment. With the increasing availability of nodal geophones to individual researchers and the successful demonstration that nodal geophones are a viable instrument for receiver function studies, numerous scientific targets can be investigated at reduced costs or in expanded detail.
Design principles of hair-like structures as biological machines
2018-01-01
Hair-like structures are prevalent throughout biology and frequently act to sense or alter interactions with an organism's environment. The overall shape of a hair is simple: a long, filamentous object that protrudes from the surface of an organism. This basic design, however, can confer a wide range of functions, owing largely to the flexibility and large surface area that it usually possesses. From this simple structural basis, small changes in geometry, such as diameter, curvature and inter-hair spacing, can have considerable effects on mechanical properties, allowing functions such as mechanosensing, attachment, movement and protection. Here, we explore how passive features of hair-like structures, both individually and within arrays, enable diverse functions across biology. Understanding the relationships between form and function can provide biologists with an appreciation for the constraints and possibilities on hair-like structures. Additionally, such structures have already been used in biomimetic engineering with applications in sensing, water capture and adhesion. By examining hairs as a functional mechanical unit, geometry and arrangement can be rationally designed to generate new engineering devices and ideas. PMID:29848593
Excitation spectra of retinal by multiconfiguration pair-density functional theory.
Dong, Sijia S; Gagliardi, Laura; Truhlar, Donald G
2018-03-07
Retinal is the chromophore in proteins responsible for vision. The absorption maximum of retinal is sensitive to mutations of the protein. However, it is not easy to predict the absorption spectrum of retinal accurately, and questions remain even after intensive investigation. Retinal poses a challenge for Kohn-Sham density functional theory (KS-DFT) because of the charge transfer character in its excitations, and it poses a challenge for wave function theory because the large size of the molecule makes multiconfigurational perturbation theory methods expensive. In this study, we demonstrate that multiconfiguration pair-density functional theory (MC-PDFT) provides an efficient way to predict the vertical excitation energies of 11-Z retinal, and it reproduces the experimentally determined absorption band widths and peak positions better than complete active space second-order perturbation theory (CASPT2). The consistency between complete active space self-consistent field (CASSCF) and KS-DFT dipole moments is demonstrated to be a useful criterion in selecting the active space. We also found that the nature of the terminal groups and the conformations of retinal play a significant role in the absorption spectrum. By considering a thermal distribution of conformations, we predict an absorption spectrum of retinal that is consistent with the experimental gas-phase spectrum. The location of the absorption peak and the spectral broadening based on MC-PDFT calculations agree better with experiments than those of CASPT2.
Tensor voting for image correction by global and local intensity alignment.
Jia, Jiaya; Tang, Chi-Keung
2005-01-01
This paper presents a voting method to perform image correction by global and local intensity alignment. The key to our modeless approach is the estimation of global and local replacement functions by reducing the complex estimation problem to the robust 2D tensor voting in the corresponding voting spaces. No complicated model for replacement function (curve) is assumed. Subject to the monotonic constraint only, we vote for an optimal replacement function by propagating the curve smoothness constraint using a dense tensor field. Our method effectively infers missing curve segments and rejects image outliers. Applications using our tensor voting approach are proposed and described. The first application consists of image mosaicking of static scenes, where the voted replacement functions are used in our iterative registration algorithm for computing the best warping matrix. In the presence of occlusion, our replacement function can be employed to construct a visually acceptable mosaic by detecting occlusion which has large and piecewise constant color. Furthermore, by the simultaneous consideration of color matches and spatial constraints in the voting space, we perform image intensity compensation and high contrast image correction using our voting framework, when only two defective input images are given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mizuno, T
2004-09-03
Cosmic-ray background fluxes were modeled based on existing measurements and theories and are presented here. The model, originally developed for the Gamma-ray Large Area Space Telescope (GLAST) Balloon Experiment, covers the entire solid angle (4π sr), the sensitive energy range of the instrument (~10 MeV to 100 GeV) and abundant components (proton, alpha, e⁻, e⁺, μ⁻, μ⁺ and gamma). It is expressed in analytic functions in which modulations due to the solar activity and the Earth geomagnetism are parameterized. Although the model is intended to be used primarily for the GLAST Balloon Experiment, model functions in low-Earth orbit are also presented and can be used for other high energy astrophysical missions. The model has been validated via comparison with the data of the GLAST Balloon Experiment.
Advanced Video Analysis Needs for Human Performance Evaluation
NASA Technical Reports Server (NTRS)
Campbell, Paul D.
1994-01-01
Evaluators of human task performance in space missions make use of video as a primary source of data. Extraction of relevant human performance information from video is often a labor-intensive process requiring a large amount of time on the part of the evaluator. Based on the experiences of several human performance evaluators, needs were defined for advanced tools which could aid in the analysis of video data from space missions. Such tools should increase the efficiency with which useful information is retrieved from large quantities of raw video. They should also provide the evaluator with new analytical functions which are not present in currently used methods. Video analysis tools based on the needs defined by this study would also have uses in U.S. industry and education. Evaluation of human performance from video data can be a valuable technique in many industrial and institutional settings where humans are involved in operational systems and processes.
Matching by linear programming and successive convexification.
Jiang, Hao; Drew, Mark S; Li, Ze-Nian
2007-06-01
We present a novel convex programming scheme to solve matching problems, focusing on the challenging problem of matching in a large search range and with cluttered background. Matching is formulated as metric labeling with L1 regularization terms, for which we propose a novel linear programming relaxation method and an efficient successive convexification implementation. The unique feature of the proposed relaxation scheme is that a much smaller set of basis labels is used to represent the original label space. This greatly reduces the size of the searching space. A successive convexification scheme solves the labeling problem in a coarse to fine manner. Importantly, the original cost function is reconvexified at each stage, in the new focus region only, and the focus region is updated so as to refine the searching result. This makes the method well-suited for large label set matching. Experiments demonstrate successful applications of the proposed matching scheme in object detection, motion estimation, and tracking.
Background and programmatic approach for the development of orbital fluid resupply tankers
NASA Technical Reports Server (NTRS)
Griffin, J. W.
1986-01-01
Onorbit resupply of fluids will be essential to the evolving generation of large and long-life orbital stations and satellites. These types of services are also needed to improve the economics of space operations, and not only optimize the expenditures for government funded programs, but also pave the way for commercial development of space resources. To meet these requirements, a family of tankers must be developed to resupply a variety of fluids. Economics of flight hardware development will require that each tanker within this family be capable of satisfying a variety of functions, including not only fluid resupply from the Space Shuttle Orbiter, but also resupply from Space Station and the orbital maneuvering vehicle (OMV). This paper discusses the justification, the programmatic objectives, and the advanced planning within NASA for the development of this fleet of multifunction orbital fluid resupply tankers.
Deriving Tools from Real-Time Runs: A New CCMC Support for SEC and AFWA
NASA Technical Reports Server (NTRS)
Hesse, Michael; Rastatter, Lutz; MacNeice, Peter; Kuznetsova, Masha
2007-01-01
The Community Coordinated Modeling Center (CCMC) is a US inter-agency activity aiming at research in support of the generation of advanced space weather models. As one of its main functions, the CCMC provides to researchers the use of space science models, even if they are not model owners themselves. In particular, the CCMC provides to the research community the execution of "runs-on-request" for specific events of interest to space science researchers. Through this activity and the concurrent development of advanced visualization tools, CCMC provides, to the general science community, unprecedented access to a large number of state-of-the-art research models. CCMC houses models that cover the entire domain from the Sun to the Earth. In this presentation, we will provide an overview of CCMC modeling services that are available to support activities at the Space Environment Center, or at the Air Force Weather Agency.
Control-structure interaction study for the Space Station solar dynamic power module
NASA Technical Reports Server (NTRS)
Cheng, J.; Ianculescu, G.; Ly, J.; Kim, M.
1991-01-01
The authors investigate the feasibility of using a conventional PID (proportional plus integral plus derivative) controller design to perform the pointing and tracking functions for the Space Station Freedom solar dynamic power module. Using this simple controller design, the control/structure interaction effects were also studied without assuming frequency bandwidth separation. From the results, the feasibility of a simple solar dynamic control solution with a reduced-order model, which satisfies the basic system pointing and stability requirements, is suggested. However, the conventional control design approach is shown to be very much influenced by the order of reduction of the plant model, i.e., the number of the retained elastic modes from the full-order model. This suggests that, for complex large space structures, such as the Space Station Freedom solar dynamic, the conventional control system design methods may not be adequate.
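As a small, generic illustration of the "conventional PID controller" mentioned in this abstract, the snippet below closes a discrete PID loop around a toy first-order plant. The gains, time step and plant are placeholder values; the actual solar dynamic power module model, and the flexible modes responsible for the control-structure interaction discussed above, are not represented.

```python
# Generic discrete PID loop for a pointing-type error signal. The first-order
# plant and the gains below are toy values, not the solar dynamic power module
# model discussed in the abstract; flexible-mode dynamics are omitted.

def simulate_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.01, steps=2000, setpoint=1.0):
    y, integral, prev_err = 0.0, 0.0, setpoint
    for _ in range(steps):
        err = setpoint - y
        integral += err * dt
        derivative = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * derivative
        prev_err = err
        y += dt * (-y + u)          # toy first-order plant: dy/dt = -y + u
    return y

print(simulate_pid())               # should settle near the setpoint
```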
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swartzentruber, Phillip D.; John Balk, Thomas, E-mail: john.balk@uky.edu; Effgen, Michael P.
2014-07-01
Osmium-ruthenium films with different microstructures were deposited onto dispenser cathodes and subjected to 1000 h of close-spaced diode testing. Tailored microstructures were achieved by applying substrate biasing during deposition, and these were evaluated with scanning electron microscopy, x-ray diffraction, and energy dispersive x-ray spectroscopy before and after close-spaced diode testing. Knee temperatures determined from the close-spaced diode test data were used to evaluate cathode performance. Cathodes with a large (10-11) Os-Ru film texture possessed comparatively low knee temperatures. Furthermore, a low knee temperature correlated with a low effective work function as calculated from the close-spaced diode data. It is proposed that the formation of strong (10-11) texture is responsible for the superior performance of the cathode with a multilayered Os-Ru coating.
X-ray simulations method for the large field of view
NASA Astrophysics Data System (ADS)
Schelokov, I. A.; Grigoriev, M. V.; Chukalina, M. V.; Asadchikov, V. E.
2018-03-01
In the standard approach, X-ray simulation is usually limited by the spatial sampling step needed to calculate the convolution integrals of the Fresnel type. Explicitly, the sampling step is determined by the size of the last Fresnel zone in the beam aperture. In other words, the spatial sampling is determined by the precision of the integral convolution calculations and is not connected with the spatial resolution of the optical scheme. In the developed approach, the convolution in normal space is replaced by computations of the shear strain of the ambiguity function in phase space. The spatial sampling is then determined by the spatial resolution of the optical scheme. The sampling step can differ in various directions because of the source anisotropy. The approach was used to simulate original images in X-ray Talbot interferometry and showed that the simulation can be applied to optimize methods of post-processing.
Equivalence of meson scattering amplitudes in strong coupling lattice and flat space string theory
NASA Astrophysics Data System (ADS)
Armoni, Adi; Ireson, Edwin; Vadacchino, Davide
2018-03-01
We consider meson scattering in the framework of the lattice strong coupling expansion. In particular we derive an expression for the 4-point function of meson operators in the planar limit of scalar Chromodynamics. Interestingly, in the naive continuum limit the expression coincides with an independently known result, that of the worldline formalism. Moreover, it was argued by Makeenko and Olesen that (assuming confinement) the resulting scattering amplitude in momentum space is the celebrated expression proposed by Veneziano several decades ago. This motivates us to also use holography in order to argue that the continuum expression for the scattering amplitude is related to the result obtained from flat space string theory. Our results hint that at strong coupling and large-Nc the naive continuum limit of the lattice formalism can be related to a flat space string theory.
Operation's Concept for Array-Based Deep Space Network
NASA Technical Reports Server (NTRS)
Bagri, Durgadas S.; Statman, Joseph I.; Gatti, Mark S.
2005-01-01
The Array-based Deep Space Network (DSN-Array) will be part of a more than 10^3-fold increase in the downlink/telemetry capability of the Deep Space Network (DSN). The key function of the DSN-Array is to provide cost-effective, robust Telemetry, Tracking and Command (TT&C) services to the space missions of NASA and its international partners. It provides an expanded approach to the use of an array-based system. Instead of using the array as an element in the existing DSN, relying to a large extent on the DSN infrastructure, we explore a broader departure from the current DSN, using fewer elements of the existing DSN and establishing a more modern Concept of Operations. This paper gives the architecture of the DSN-Array and its operations philosophy. It also describes the customer's view of operations, operations management and logistics, including maintenance philosophy, anomaly analysis and reporting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Benthem, Mark H.
2016-05-04
This software is employed for 3D visualization of X-ray diffraction (XRD) data with functionality for slicing, reorienting, isolating and plotting of 2D color contour maps and 3D renderings of large datasets. The program makes use of the multidimensionality of textured XRD data, where diffracted intensity is not constant over a given set of angular positions (as dictated by the three defined dimensional angles of phi, chi, and two-theta). Datasets are rendered in 3D with intensity as a scalar, which is represented on a rainbow color scale. A GUI interface and scrolling tools along with interactive functions via the mouse allow for fast manipulation of these large datasets so as to perform detailed analysis of diffraction results with full dimensionality of the diffraction space.
Large size space construction for space exploitation
NASA Astrophysics Data System (ADS)
Kondyurin, Alexey
2016-07-01
Space exploitation is impossible without large space structures. We need to build sufficiently large volumes of pressurized protective frames for crew, passengers, space processing equipment, etc. We have to be unlimited in space. At present, the size and mass of space constructions are limited by the capability of a launch vehicle. This limits our future in the human exploitation of space and in the development of space industry. Large-size space constructions can be made using the curing technology of fiber-filled composites with a reactionable matrix applied directly in free space. For curing, the fabric impregnated with a liquid matrix (prepreg) is prepared in terrestrial conditions and shipped in a container to orbit. In due time the prepreg is unfolded by inflating. After the polymerization reaction, the durable construction can be fitted out with air, apparatus and life support systems. Our experimental studies of the curing processes in a simulated free space environment showed that the curing of composites in free space is possible, and large-size space constructions can be developed. Projects for a space station, a Moon base, a Mars base, a mining station, an interplanetary spaceship, a telecommunication station, a space observatory, a space factory, an antenna dish, a radiation shield and a solar sail are proposed and overviewed. The study was supported by the Humboldt Foundation, ESA (contract 17083/03/NL/SFe), the NASA program of stratospheric balloons and RFBR grants (05-08-18277, 12-08-00970 and 14-08-96011).
A transformed path integral approach for solution of the Fokker-Planck equation
NASA Astrophysics Data System (ADS)
Subramaniam, Gnana M.; Vedula, Prakash
2017-10-01
A novel path integral (PI) based method for solution of the Fokker-Planck equation is presented. The proposed method, termed the transformed path integral (TPI) method, utilizes a new formulation for the underlying short-time propagator to perform the evolution of the probability density function (PDF) in a transformed computational domain where a more accurate representation of the PDF can be ensured. The new formulation, based on a dynamic transformation of the original state space with the statistics of the PDF as parameters, preserves the non-negativity of the PDF and incorporates short-time properties of the underlying stochastic process. New update equations for the state PDF in a transformed space and the parameters of the transformation (including mean and covariance) that better accommodate nonlinearities in drift and non-Gaussian behavior in distributions are proposed (based on properties of the SDE). Owing to the choice of transformation considered, the proposed method maps a fixed grid in transformed space to a dynamically adaptive grid in the original state space. The TPI method, in contrast to conventional methods such as Monte Carlo simulations and fixed grid approaches, is able to better represent the distributions (especially the tail information) and better address challenges in processes with large diffusion, large drift and large concentration of the PDF. Additionally, in the proposed TPI method, error bounds on the probability in the computational domain can be obtained using Chebyshev's inequality. The benefits of the TPI method over conventional methods are illustrated through simulations of linear and nonlinear drift processes in one-dimensional and multidimensional state spaces. The effects of spatial and temporal grid resolutions as well as that of the diffusion coefficient on the error in the PDF are also characterized.
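For context, the conventional fixed-grid baseline that the TPI method improves on can be sketched in a few lines: a short-time Gaussian propagator evolves a 1D PDF for an Ornstein-Uhlenbeck-type SDE on a static grid. The drift and diffusion values are placeholders, and the paper's dynamic transformation and adaptive grid are deliberately not reproduced.

```python
import numpy as np

# Fixed-grid short-time-propagator (path-integral) evolution of a 1D PDF for
# the SDE dx = -a*x dt + sqrt(2D) dW. This is the conventional baseline only;
# the TPI transformation and adaptive-grid step are omitted.

a, D, dt = 1.0, 0.5, 0.01
x = np.linspace(-6.0, 6.0, 601)
dx = x[1] - x[0]

# short-time Gaussian propagator K[i, j] ~ P(x_i, t+dt | x_j, t)
mean = x + (-a * x) * dt                 # drift step applied to each source point x_j
var = 2.0 * D * dt
K = np.exp(-(x[:, None] - mean[None, :]) ** 2 / (2.0 * var))
K /= K.sum(axis=0, keepdims=True) * dx   # each column normalized as a density in x_i

p = np.exp(-(x - 3.0) ** 2)              # initial PDF far from the stationary state
p /= p.sum() * dx
for _ in range(500):                     # evolve 5 time units
    p = (K * dx) @ p

print(np.sum(x * p) * dx)                # mean should have relaxed toward 0
```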
NASA Technical Reports Server (NTRS)
Vilnrotter, V.
2011-01-01
The potential development of large aperture ground-based "photon bucket" optical receivers for deep space communications has received considerable attention recently. One approach currently under investigation is to polish the aluminum reflector panels of 34-meter microwave antennas to high reflectance, and accept the relatively large spot size generated by state-of-the-art polished aluminum panels. Theoretical analyses of receiving antenna pointing, temporal synchronization and data detection have been addressed in previous papers. Here we describe the experimental effort currently underway at the Deep Space Network (DSN) Goldstone Communications Complex in California, to test and verify these concepts in a realistic operational environment. Two polished aluminum panels (a standard DSN panel polished to high reflectance, and a custom designed aluminum panel with much better surface quality) have been mounted on the 34-meter research antenna at Deep Space Station 13 (DSS-13), and a remotely controlled CCD camera with a large CCD sensor in a weather-proof container has been installed next to the subreflector, pointed directly at the custom polished panel. The point-spread function (PSF) generated by the Vertex polished panel has been determined to be smaller than the sensor of the CCD camera, hence a detailed picture of the PSF can be obtained every few seconds, and the sensor array data processed to determine the center of the intensity distribution. In addition to estimating the center coordinates, expected communications performance can also be evaluated with the recorded data. The results of preliminary pointing experiments with the Vertex polished panel receiver, using the planet Jupiter to simulate the PSF generated by a deep-space optical transmitter, are presented and discussed in this paper.
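The kind of processing described above (determining the center of the intensity distribution from the CCD frames) can be illustrated with a simple intensity-weighted centroid, sketched below. The synthetic frame, noise level and threshold are illustrative placeholders; this is not the actual DSS-13 processing pipeline.

```python
import numpy as np

# Intensity-weighted centroid of a CCD frame as a simple estimate of the
# point-spread-function center. The synthetic frame and the threshold choice
# are illustrative; the real DSS-13 pipeline is not reproduced here.

rng = np.random.default_rng(3)
ny, nx = 256, 256
yy, xx = np.mgrid[0:ny, 0:nx]
frame = 50.0 * np.exp(-(((xx - 140.5) ** 2 + (yy - 90.2) ** 2) / (2 * 12.0 ** 2)))
frame += rng.normal(0.0, 1.0, frame.shape)        # background noise

img = frame - np.median(frame)                    # crude background removal
img[img < 3.0] = 0.0                              # suppress noise pixels
total = img.sum()
xc = (img * xx).sum() / total
yc = (img * yy).sum() / total
print(xc, yc)                                     # close to (140.5, 90.2)
```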
Slepoy, A; Peters, M D; Thompson, A P
2007-11-30
Molecular dynamics and other molecular simulation methods rely on a potential energy function, based only on the relative coordinates of the atomic nuclei. Such a function, called a force field, approximately represents the electronic structure interactions of a condensed matter system. Developing such approximate functions and fitting their parameters remains an arduous, time-consuming process, relying on expert physical intuition. To address this problem, a functional programming methodology was developed that may enable automated discovery of entirely new force-field functional forms, while simultaneously fitting parameter values. The method uses a combination of genetic programming, Metropolis Monte Carlo importance sampling and parallel tempering, to efficiently search a large space of candidate functional forms and parameters. The methodology was tested using a nontrivial problem with a well-defined globally optimal solution: a small set of atomic configurations was generated and the energy of each configuration was calculated using the Lennard-Jones pair potential. Starting with a population of random functions, our fully automated, massively parallel implementation of the method reproducibly discovered the original Lennard-Jones pair potential by searching for several hours on 100 processors, sampling only a minuscule portion of the total search space. This result indicates that, with further improvement, the method may be suitable for unsupervised development of more accurate force fields with completely new functional forms. Copyright (c) 2007 Wiley Periodicals, Inc.
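To make the benchmark described above concrete, the sketch below generates reference energies for a handful of toy configurations with a Lennard-Jones pair potential and recovers its two parameters by a crude random search. This only illustrates the fitting target; the paper's genetic-programming search over entire functional forms, its Metropolis importance sampling and parallel tempering are not reproduced, and all configuration and search settings are placeholders.

```python
import numpy as np

# Reference energies come from a Lennard-Jones (LJ) pair potential; a crude
# random search then recovers (epsilon, sigma). This is only the fitting target
# of the benchmark, not the genetic-programming search over functional forms.

rng = np.random.default_rng(4)

def lj_energy(coords, eps, sig):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    r = d[np.triu_indices(len(coords), k=1)]          # unique pair distances
    return np.sum(4.0 * eps * ((sig / r) ** 12 - (sig / r) ** 6))

# small set of perturbed simple-cubic configurations (toy "atomic configurations")
base = 1.5 * np.array([[i, j, k] for i in range(2) for j in range(2) for k in range(2)], float)
configs = [base + rng.normal(0.0, 0.05, base.shape) for _ in range(20)]
E_ref = np.array([lj_energy(c, eps=1.0, sig=1.0) for c in configs])   # "true" parameters

best, best_err = None, np.inf
for _ in range(5000):
    eps, sig = rng.uniform(0.1, 2.0), rng.uniform(0.5, 1.5)
    E = np.array([lj_energy(c, eps, sig) for c in configs])
    err = np.mean((E - E_ref) ** 2)
    if err < best_err:
        best, best_err = (eps, sig), err

print(best)   # should land near (1.0, 1.0)
```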
Large Space Systems Technology, Part 2, 1981
NASA Technical Reports Server (NTRS)
Boyer, W. J. (Compiler)
1982-01-01
Four major areas of interest are covered: technology pertinent to large antenna systems; technology related to the control of large space systems; basic technology concerning structures, materials, and analyses; and flight technology experiments. Large antenna systems and flight technology experiments are described. Design studies, structural testing results, and theoretical applications are presented with accompanying validation data. These research studies represent state-of-the-art technology that is necessary for the development of large space systems. A total systems approach including structures, analyses, controls, and antennas is presented as a cohesive, programmatic plan for large space systems.
Aminopropyl-Silica Hybrid Particles as Supports for Humic Acids Immobilization
Sándor, Mónika; Nistor, Cristina Lavinia; Szalontai, Gábor; Stoica, Rusandica; Nicolae, Cristian Andi; Alexandrescu, Elvira; Fazakas, József; Oancea, Florin; Donescu, Dan
2016-01-01
A series of aminopropyl-functionalized silica nanoparticles were prepared through a basic two-step sol-gel process in water. Prior to being aminopropyl-functionalized, silica particles with an average diameter of 549 nm were prepared from tetraethyl orthosilicate (TEOS) using a Stöber method. In a second step, aminopropyl-silica particles were prepared by silanization with 3-aminopropyltriethoxysilane (APTES), added drop by drop to the sol-gel mixture. The synthesized amino-functionalized silica particles are intended to be used as supports for the immobilization of humic acids (HA) through electrostatic bonds. Furthermore, unhydrolysable mono-, di- or trifunctional alkylsilanes (methyltriethoxysilane (MeTES), trimethylethoxysilane (Me3ES), diethoxydimethylsilane (Me2DES) and 1,2-bis(triethoxysilyl)ethane (BETES)) were inserted alongside APTES onto the silica particle surface, with the aim of spacing out the free amino groups and thereby facilitating their interaction with the large HA molecules. Two types of HA were used for evaluating the immobilization capacity of the novel aminosilane supports. The results proved the efficient functionalization of the silica nanoparticles with amino groups and showed that the immobilization of the two tested types of humic acid substances was achieved for all the TEOS/APTES = 20/1 (molar ratio) silica hybrids, whether or not the amino functions were spaced by alkyl groups. It was shown that, at this low APTES fraction, the density of aminopropyl functions is low enough that further spacing by alkyl groups is not required. Moreover, all the hybrids having negative zeta potential values exhibited low interaction with HA molecules. PMID:28787834
Structural-electromagnetic bidirectional coupling analysis of space large film reflector antennas
NASA Astrophysics Data System (ADS)
Zhang, Xinghua; Zhang, Shuxin; Cheng, ZhengAi; Duan, Baoyan; Yang, Chen; Li, Meng; Hou, Xinbin; Li, Xun
2017-10-01
When used for energy transmission, a space large film reflector antenna (SLFRA) is characterized by its large size and the sustained high power density it must endure. The structural flexibility and the microwave radiation pressure (MRP) lead to structural-electromagnetic bidirectional coupling (SEBC). In this paper, the SEBC model of the SLFRA is presented, and the deformation induced by the MRP and the corresponding far-field pattern deterioration are simulated. Results show that the direction of the MRP is along the normal of the reflector surface, and its magnitude is proportional to the power density and to the square of the cosine of the incident angle. For a typical cosine-distributed electric field, the MRP follows a squared-cosine distribution across the diameter. The maximum deflections of the SLFRA increase linearly with the microwave power density and with the square of the reflector diameter, and vary inversely with the film thickness. When the reflector diameter reaches 100 m and the microwave power density exceeds 10^2 W/cm^2, the gain loss of the 6.3 μm-thick reflector exceeds 0.75 dB. When the MRP-induced deflection degrades the reflector performance, the SEBC should be taken into account.
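For orientation, the magnitude of the MRP on a perfectly reflecting surface follows the textbook radiation-pressure relation p = 2 S cos²θ / c, consistent with the proportionalities quoted above. The snippet below evaluates it at an assumed power density of 10^2 W/cm^2 at normal incidence; the numbers are illustrative only, not taken from the paper.

```python
import numpy as np

# Radiation pressure on a perfectly reflecting film: p = 2 S cos^2(theta) / c.
c = 299792458.0                 # speed of light, m/s
S = 1.0e2 * 1.0e4               # assumed 1e2 W/cm^2, converted to W/m^2
theta = np.deg2rad(0.0)         # normal incidence

p_mrp = 2.0 * S * np.cos(theta) ** 2 / c
print(p_mrp)                    # ~6.7e-3 Pa, directed along the surface normal
```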
2012-01-01
Background A crucial issue for the sustainability of societies is how to maintain health and functioning in older people. With increasing age, losses in vision, hearing, balance, mobility and cognitive capacity render older people particularly exposed to environmental barriers. A central building block of human functioning is walking. Walking difficulties may start to develop in midlife and become increasingly prevalent with age. Life-space mobility reflects actual mobility performance by taking into account the balance between older adults’ internal physiologic capacity and the external challenges they encounter in daily life. The aim of the Life-Space Mobility in Old Age (LISPE) project is to examine how home and neighborhood characteristics influence people’s health, functioning, disability, quality of life and life-space mobility in the context of aging. In addition, we examine whether a person’s health and function influence life-space mobility. Design This paper describes the study protocol of the LISPE project, which is a 2-year prospective cohort study of community-dwelling older people aged 75 to 90 (n = 848). The data consist of a baseline survey including face-to-face interviews, objective observation of the home environment and a physical performance test in the participant’s home. All the baseline participants will be interviewed over the phone one and two years after baseline to collect data on life-space mobility, disability and participation restriction. Additional home interviews and environmental evaluations will be conducted for those who relocate during the study period. Data on mortality and health service use will be collected from national registers. In a substudy on walking activity and life space, 358 participants kept a 7-day diary and, in addition, 176 participants also wore an accelerometer. Discussion Our study, which includes extensive data collection with a large sample, provides a unique opportunity to study topics of importance for aging societies. A novel approach is employed which enables us to study the interactions of environmental features and individual characteristics underlying the life-space of older people. Potentially, the results of this study will contribute to improvements in strategies to postpone or prevent progression to disability and loss of independence. PMID:23170987
Anisotropic magnification distortion of the 3D galaxy correlation. II. Fourier and redshift space
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hui Lam; Department of Physics, Columbia University, New York, New York 10027; Institute of Theoretical Physics, Chinese University of Hong Kong
2008-03-15
In paper I of this series we discuss how magnification bias distorts the 3D correlation function by enhancing the observed correlation in the line-of-sight (LOS) orientation, especially on large scales. This lensing anisotropy is distinctive, making it possible to separately measure the galaxy-galaxy, galaxy-magnification and magnification-magnification correlations. Here we extend the discussion to the power spectrum and also to redshift space. In real space, pairs oriented close to the LOS direction are not protected against nonlinearity even if the pair separation is large; this is because nonlinear fluctuations can enter through gravitational lensing at a small transverse separation (i.e., impact parameter). The situation in Fourier space is different: by focusing on a small wave number k, as is usually done, linearity is guaranteed because both the LOS and transverse wave numbers must be small. This is why magnification distortion of the galaxy correlation appears less severe in Fourier space. Nonetheless, the effect is non-negligible, especially for the transverse Fourier modes, and should be taken into account in interpreting precision measurements of the galaxy power spectrum, for instance those that focus on the baryon oscillations. The lensing induced anisotropy of the power spectrum has a shape that is distinct from the more well-known redshift space anisotropies due to peculiar motions and the Alcock-Paczynski effect. The lensing anisotropy is highly localized in Fourier space while redshift space distortions are more spread out. This means that one could separate the magnification bias component in real observations, implying that potentially it is possible to perform a gravitational lensing measurement without measuring galaxy shapes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin, E-mail: tassev@astro.princeton.edu
We present a pedagogical systematic investigation of the accuracy of Eulerian and Lagrangian perturbation theories of large-scale structure. We show that significant differences exist between them especially when trying to model the Baryon Acoustic Oscillations (BAO). We find that the best available model of the BAO in real space is the Zel'dovich Approximation (ZA), giving an accuracy of ≲3% at redshift z = 0 in modelling the matter 2-pt function around the acoustic peak. All corrections to the ZA around the BAO scale are perfectly perturbative in real space. Any attempt to achieve better precision requires calibrating the theory to simulations because of the need to renormalize those corrections. In contrast, theories which do not fully preserve the ZA as their solution receive O(1) corrections around the acoustic peak in real space at z = 0, and are thus of suspicious convergence at low redshift around the BAO. As an example, we find that a similar accuracy of 3% for the acoustic peak is achieved by Eulerian Standard Perturbation Theory (SPT) at linear order only at z ≈ 4. Thus even when SPT is perturbative, one needs to include loop corrections for z ≲ 4 in real space. In Fourier space, all models perform similarly, and are controlled by the overdensity amplitude, thus recovering standard results. However, that comes at a price. Real space cleanly separates the BAO signal from non-linear dynamics. In contrast, Fourier space mixes signal from short mildly non-linear scales with the linear signal from the BAO to the level that non-linear contributions from short scales dominate. Therefore, one has little hope in constructing a systematic theory for the BAO in Fourier space.
Large space systems technology, 1980, volume 1
NASA Technical Reports Server (NTRS)
Kopriver, F., III (Compiler)
1981-01-01
The technological and developmental efforts in support of large space systems technology are described. Three major areas of interest are emphasized: (1) technology pertinent to large antenna systems; (2) technology related to large space systems; and (3) activities that support both antenna and platform systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shimojo, Fuyuki; Hattori, Shinnosuke; Department of Physics, Kumamoto University, Kumamoto 860-8555
We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10^6-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques are employed for efficiently calculating the long-range exact exchange correction and excited-state forces. The NAQMD trajectories are analyzed to extract the rates of various excitonic processes, which are then used in KMC simulation to study the dynamics of the global exciton flow network. This has allowed the study of large-scale photoexcitation dynamics in a 6400-atom amorphous molecular solid, reaching the experimental time scales.
Marcus, Hani J.; Seneci, Carlo A.; Hughes-Hallett, Archie; Cundy, Thomas P.; Nandi, Dipankar; Yang, Guang-Zhong; Darzi, Ara
2015-01-01
Background. Surgical approaches such as transanal endoscopic microsurgery, which utilize small operative working spaces and are necessarily single-port, are particularly demanding with standard instruments and have not been widely adopted. The aim of this study was to compare simultaneously surgical performance in single-port versus multiport approaches, and small versus large working spaces. Methods. Ten novice, 4 intermediate, and 1 expert surgeons were recruited from a university hospital. A preclinical randomized crossover study design was implemented, comparing performance under the following conditions: (1) multiport approach and large working space, (2) multiport approach and intermediate working space, (3) single-port approach and large working space, (4) single-port approach and intermediate working space, and (5) single-port approach and small working space. In each case, participants performed peg transfer and pattern cutting tasks, and each task repetition was scored. Results. Intermediate and expert surgeons performed significantly better than novices in all conditions (P < .05). Performance in single-port surgery was significantly worse than multiport surgery (P < .01). In multiport surgery, there was a nonsignificant trend toward worsened performance in the intermediate versus large working space. In single-port surgery, there was a converse trend; performances in the intermediate and small working spaces were significantly better than in the large working space. Conclusions. Single-port approaches were significantly more technically challenging than multiport approaches, possibly reflecting loss of instrument triangulation. Surprisingly, in single-port approaches, in which triangulation was no longer a factor, performance in large working spaces was worse than in intermediate and small working spaces. PMID:26464468
Simulating Vibrations in a Complex Loaded Structure
NASA Technical Reports Server (NTRS)
Cao, Tim T.
2005-01-01
The Dynamic Response Computation (DIRECT) computer program simulates vibrations induced in a complex structure by applied dynamic loads. Developed to enable rapid analysis of launch- and landing- induced vibrations and stresses in a space shuttle, DIRECT also can be used to analyze dynamic responses of other structures - for example, the response of a building to an earthquake, or the response of an oil-drilling platform and attached tanks to large ocean waves. For a space-shuttle simulation, the required input to DIRECT includes mathematical models of the space shuttle and its payloads, and a set of forcing functions that simulates launch and landing loads. DIRECT can accommodate multiple levels of payload attachment and substructure as well as nonlinear dynamic responses of structural interfaces. DIRECT combines the shuttle and payload models into a single structural model, to which the forcing functions are then applied. The resulting equations of motion are reduced to an optimum set and decoupled into a unique format for simulating dynamics. During the simulation, maximum vibrations, loads, and stresses are monitored and recorded for subsequent analysis to identify structural deficiencies in the shuttle and/or payloads.
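The sketch below is a generic two-degree-of-freedom stand-in for the kind of forced structural dynamics DIRECT integrates, not the program itself: assumed mass, stiffness, and damping matrices are driven by a hypothetical forcing function and the peak responses are extracted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy 2-DOF stand-in for a coupled structure/payload model:  M q'' + C q' + K q = f(t).
# Matrices, damping level and forcing are assumptions chosen only for illustration.
M = np.diag([1.0, 0.5])
K = np.array([[40.0, -15.0],
              [-15.0, 25.0]])
C = 0.02 * K                                   # light proportional damping
Minv = np.linalg.inv(M)

def forcing(t):
    return np.array([np.sin(6.0 * t), 0.0])    # hypothetical launch-like load on DOF 1

def rhs(t, y):
    q, qdot = y[:2], y[2:]
    qddot = Minv @ (forcing(t) - C @ qdot - K @ q)
    return np.concatenate([qdot, qddot])

sol = solve_ivp(rhs, (0.0, 20.0), np.zeros(4), max_step=0.01)
print(np.abs(sol.y[:2]).max(axis=1))           # peak displacement of each DOF
```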
NASA Technical Reports Server (NTRS)
Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo
1992-01-01
A set of functional requirements for software configuration management (CM) and metrics reporting for Space Station Freedom ground systems software are described. This report is one of a series from a study of the interfaces among the Ground Systems Development Environment (GSDE), the development systems for the Space Station Training Facility (SSTF) and the Space Station Control Center (SSCC), and the target systems for SSCC and SSTF. The focus is on the CM of the software following delivery to NASA and on the software metrics that relate to the quality and maintainability of the delivered software. The CM and metrics requirements address specific problems that occur in large-scale software development. Mechanisms to assist in the continuing improvement of mission operations software development are described.
Telepresence work system concepts
NASA Technical Reports Server (NTRS)
Jenkins, L. M.
1985-01-01
Telepresence has been used in the context of the ultimate in remote manipulation, where the operator is provided with the sensory feedback and control to perform highly dexterous tasks. The concept of a Telepresence Work Station (TWS) for operation in space is described. System requirements, concepts, and a development approach are discussed. The TWS has the potential for application on the Space Shuttle, on the Orbit Maneuver Vehicle, on an Orbit Transfer Vehicle, and on the Space Station. The TWS function is to perform satellite servicing tasks and construction and assembly operations in the buildup of large spacecraft. The basic concept is a pair of dexterous arms controlled from a remote station by an operator with feedback. It may be evolved through levels of supervisory control to a smart adaptive robotic system.
Developing an Advanced Life Support System for the Flexible Path into Deep Space
NASA Technical Reports Server (NTRS)
Jones, Harry W.; Kliss, Mark H.
2010-01-01
Long duration human missions beyond low Earth orbit, such as a permanent lunar base, an asteroid rendezvous, or exploring Mars, will use recycling life support systems to preclude supplying large amounts of metabolic consumables. The International Space Station (ISS) life support design provides a historic guiding basis for future systems, but both its system architecture and the subsystem technologies should be reconsidered. Different technologies for the functional subsystems have been investigated and some past alternates appear better for flexible path destinations beyond low Earth orbit. There is a need to develop more capable technologies that provide lower mass, increased closure, and higher reliability. A major objective of redesigning the life support system for the flexible path is achieving the maintainability and ultra-reliability necessary for deep space operations.
Corrected Mean-Field Model for Random Sequential Adsorption on Random Geometric Graphs
NASA Astrophysics Data System (ADS)
Dhara, Souvik; van Leeuwaarden, Johan S. H.; Mukherjee, Debankur
2018-03-01
A notorious problem in mathematics and physics is to create a solvable model for random sequential adsorption of non-overlapping congruent spheres in d-dimensional Euclidean space with d ≥ 2. Spheres arrive sequentially at uniformly chosen locations in space and are accepted only when there is no overlap with previously deposited spheres. Due to spatial correlations, characterizing the fraction of accepted spheres remains largely intractable. We study this fraction by taking a novel approach that compares random sequential adsorption in Euclidean space to the nearest-neighbor blocking on a sequence of clustered random graphs. This random network model can be thought of as a corrected mean-field model for the interaction graph between the attempted spheres. Using functional limit theorems, we characterize the fraction of accepted spheres and its fluctuations.
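A brute-force numerical sketch of the adsorption process described above, assuming a small periodic box and equal spheres; it estimates the fraction of accepted spheres directly rather than through the corrected mean-field model of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def rsa_fraction(n_attempts=5000, radius=0.05, dim=2, box=1.0):
    """Random sequential adsorption of equal spheres in a periodic box.

    Returns the fraction of attempted spheres that were accepted
    (a small-system numerical estimate, not the paper's analytic result)."""
    accepted = []
    for _ in range(n_attempts):
        x = rng.uniform(0.0, box, size=dim)
        ok = True
        for y in accepted:
            d = x - y
            d -= box * np.round(d / box)             # minimum-image convention
            if np.dot(d, d) < (2.0 * radius) ** 2:   # overlaps a deposited sphere
                ok = False
                break
        if ok:
            accepted.append(x)
    return len(accepted) / n_attempts

print(rsa_fraction())
```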
Next Generation Heavy-Lift Launch Vehicle: Large Diameter, Hydrocarbon-Fueled Concepts
NASA Technical Reports Server (NTRS)
Holliday, Jon; Monk, Timothy; Adams, Charles; Campbell, Ricky
2012-01-01
With the passage of the 2010 NASA Authorization Act, NASA was directed to begin the development of the Space Launch System (SLS) as a follow-on to the Space Shuttle Program. The SLS is envisioned as a heavy lift launch vehicle that will provide the foundation for future large-scale, beyond low Earth orbit (LEO) missions. Supporting the Mission Concept Review (MCR) milestone, several teams were formed to conduct an initial Requirements Analysis Cycle (RAC). These teams identified several vehicle concept candidates capable of meeting the preliminary system requirements. One such team, dubbed RAC Team 2, was tasked with identifying launch vehicles that are based on large stage diameters (up to the Saturn V S-IC and S-II stage diameters of 33 ft) and utilize high-thrust liquid oxygen (LOX)/RP engines as a First Stage propulsion system. While the trade space for this class of LOX/RP vehicles is relatively large, recent NASA activities (namely the Heavy Lift Launch Vehicle Study in late 2009 and the Heavy Lift Propulsion Technology Study of 2010) examined specific families within this trade space. Although the findings from these studies were incorporated in the Team 2 activity, additional branches of the trade space were examined and alternative approaches to vehicle development were considered. Furthermore, Team 2 set out to define a highly functional, flexible, and cost-effective launch vehicle concept. Utilizing this approach, a versatile two-stage launch vehicle concept was chosen as a preferred option. The preferred vehicle option has the capability to fly in several different configurations (e.g. engine arrangements) that gives this concept an inherent operational flexibility which allows the vehicle to meet a wide range of performance requirements without the need for costly block upgrades. Even still, this concept preserves the option for evolvability should the need arise in future mission scenarios. The foundation of this conceptual design is a focus on low cost and effectiveness rather than efficiency or cutting-edge technology. This paper details the approach and process, as well as the trade space analysis, leading to the preferred vehicle concept.
On a canonical quantization of 3D Anti de Sitter pure gravity
NASA Astrophysics Data System (ADS)
Kim, Jihun; Porrati, Massimo
2015-10-01
We perform a canonical quantization of pure gravity on AdS3 using as a technical tool its equivalence at the classical level with a Chern-Simons theory with gauge group SL(2,ℝ) × SL(2,ℝ). We first quantize the theory canonically on an asymptotically AdS space, which is topologically the real line times a Riemann surface with one connected boundary. Using the "constrain first" approach we reduce canonical quantization to quantization of orbits of the Virasoro group and Kähler quantization of Teichmüller space. After explicitly computing the Kähler form for the torus with one boundary component and after extending that result to higher genus, we recover known results, such as that wave functions of SL(2,ℝ) Chern-Simons theory are conformal blocks. We find new restrictions on the Hilbert space of pure gravity by imposing invariance under large diffeomorphisms and normalizability of the wave function. The Hilbert space of pure gravity is shown to be the target space of Conformal Field Theories with continuous spectrum and a lower bound on operator dimensions. A projection defined by topology changing amplitudes in Euclidean gravity is proposed. It defines an invariant subspace that allows for a dual interpretation in terms of a Liouville CFT. Problems and features of the CFT dual are assessed and a new definition of the Hilbert space, exempt from those problems, is proposed in the case of highly curved AdS3.
Remote sensing of phytoplankton chlorophyll-a concentration by use of ridge function fields.
Pelletier, Bruno; Frouin, Robert
2006-02-01
A methodology is presented for retrieving phytoplankton chlorophyll-a concentration from space. The data to be inverted, namely, vectors of top-of-atmosphere reflectance in the solar spectrum, are treated as explanatory variables conditioned by angular geometry. This approach leads to a continuum of inverse problems, i.e., a collection of similar inverse problems continuously indexed by the angular variables. The resolution of the continuum of inverse problems is studied from the least-squares viewpoint and yields a solution expressed as a function field over the set of permitted values for the angular variables, i.e., a map defined on that set and valued in a subspace of a function space. The function fields of interest, for reasons of approximation theory, are those valued in nested sequences of subspaces, such as ridge function approximation spaces, the union of which is dense. Ridge function fields constructed on synthetic yet realistic data for case I waters handle well situations of both weakly and strongly absorbing aerosols, and they are robust to noise, showing improvement in accuracy compared with classic inversion techniques. The methodology is applied to actual imagery from the Sea-Viewing Wide Field-of-View Sensor (SeaWiFS); noise in the data is taken into account. The chlorophyll-a concentration obtained with the function field methodology differs from that obtained by use of the standard SeaWiFS algorithm by 15.7% on average. The results empirically validate the underlying hypothesis that the inversion is solved in a least-squares sense. They also show that large levels of noise can be managed if the noise distribution is known or estimated.
Steerable Principal Components for Space-Frequency Localized Images
Landa, Boris; Shkolnisky, Yoel
2017-01-01
As modern scientific image datasets typically consist of a large number of images of high resolution, devising methods for their accurate and efficient processing is a central research task. In this paper, we consider the problem of obtaining the steerable principal components of a dataset, a procedure termed “steerable PCA” (steerable principal component analysis). The output of the procedure is the set of orthonormal basis functions which best approximate the images in the dataset and all of their planar rotations. To derive such basis functions, we first expand the images in an appropriate basis, for which the steerable PCA reduces to the eigen-decomposition of a block-diagonal matrix. If we assume that the images are well localized in space and frequency, then such an appropriate basis is the prolate spheroidal wave functions (PSWFs). We derive a fast method for computing the PSWF expansion coefficients from the images' equally spaced samples, via a specialized quadrature integration scheme, and show that the number of required quadrature nodes is similar to the number of pixels in each image. We then establish that our PSWF-based steerable PCA is both faster and more accurate than existing methods, and more importantly, provides us with rigorous error bounds on the entire procedure. PMID:29081879
Bjørneraas, Kari; Herfindal, Ivar; Solberg, Erling Johan; Sæther, Bernt-Erik; van Moorter, Bram; Rolandsen, Christer Moe
2012-01-01
Identifying factors shaping variation in resource selection is central for our understanding of the behaviour and distribution of animals. We examined summer habitat selection and space use by 108 Global Positioning System (GPS)-collared moose in Norway in relation to sex, reproductive status, habitat quality, and availability. Moose selected habitat types based on a combination of forage quality and availability of suitable habitat types. Selection of protective cover was strongest for reproducing females, likely reflecting the need to protect young. Males showed strong selection for habitat types with high quality forage, possibly due to higher energy requirements. Selection for preferred habitat types providing food and cover was a positive function of their availability within home ranges (i.e. not proportional use) indicating functional response in habitat selection. This relationship was not found for unproductive habitat types. Moreover, home ranges with high cover of unproductive habitat types were larger, and smaller home ranges contained higher proportions of the most preferred habitat type. The distribution of moose within the study area was partly related to the distribution of different habitat types. Our study shows how distribution and availability of habitat types providing cover and high-quality food shape ungulate habitat selection and space use.
A wavelet-based statistical analysis of FMRI data: I. motivation and data distribution modeling.
Dinov, Ivo D; Boscardin, John W; Mega, Michael S; Sowell, Elizabeth L; Toga, Arthur W
2005-01-01
We propose a new method for statistical analysis of functional magnetic resonance imaging (fMRI) data. The discrete wavelet transformation is employed as a tool for efficient and robust signal representation. We use structural magnetic resonance imaging (MRI) and fMRI to empirically estimate the distribution of the wavelet coefficients of the data both across individuals and spatial locations. An anatomical subvolume probabilistic atlas is used to tessellate the structural and functional signals into smaller regions each of which is processed separately. A frequency-adaptive wavelet shrinkage scheme is employed to obtain essentially optimal estimations of the signals in the wavelet space. The empirical distributions of the signals on all the regions are computed in a compressed wavelet space. These are modeled by heavy-tail distributions because their histograms exhibit slower tail decay than the Gaussian. We discovered that the Cauchy, Bessel K Forms, and Pareto distributions provide the most accurate asymptotic models for the distribution of the wavelet coefficients of the data. Finally, we propose a new model for statistical analysis of functional MRI data using this atlas-based wavelet space representation. In the second part of our investigation, we will apply this technique to analyze a large fMRI dataset involving repeated presentation of sensory-motor response stimuli in young, elderly, and demented subjects.
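A toy illustration of wavelet shrinkage on a synthetic one-dimensional signal, using a single-level Haar transform and a universal-style soft threshold. The signal, noise level, and threshold rule are assumptions; the paper's frequency-adaptive shrinkage and heavy-tail modeling are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "signal" with additive noise standing in for a voxel time series
t = np.linspace(0, 1, 256)
clean = np.sin(8 * np.pi * t) * (t > 0.5)
noisy = clean + 0.3 * rng.normal(size=t.size)

def haar_fwd(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def haar_inv(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

a, d = haar_fwd(noisy)
lam = 0.3 * np.sqrt(2 * np.log(d.size))                      # universal-style threshold (assumption)
d_shrunk = np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)     # soft shrinkage of detail coefficients
denoised = haar_inv(a, d_shrunk)
print(np.mean((noisy - clean) ** 2), np.mean((denoised - clean) ** 2))
```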
Xu, Enhua; Zhao, Dongbo; Li, Shuhua
2015-10-13
A multireference second order perturbation theory based on a complete active space configuration interaction (CASCI) function or density matrix renormalized group (DMRG) function has been proposed. This method may be considered as an approximation to the CAS/A approach with the same reference, in which the dynamical correlation is simplified with blocked correlated second order perturbation theory based on the generalized valence bond (GVB) reference (GVB-BCPT2). This method, denoted as CASCI-BCPT2/GVB or DMRG-BCPT2/GVB, is size consistent and has a similar computational cost as the conventional second order perturbation theory (MP2). We have applied it to investigate a number of problems of chemical interest. These problems include bond-breaking potential energy surfaces in four molecules, the spectroscopic constants of six diatomic molecules, the reaction barrier for the automerization of cyclobutadiene, and the energy difference between the monocyclic and bicyclic forms of 2,6-pyridyne. Our test applications demonstrate that CASCI-BCPT2/GVB can provide comparable results with CASPT2 (second order perturbation theory based on the complete active space self-consistent-field wave function) for systems under study. Furthermore, the DMRG-BCPT2/GVB method is applicable to treat strongly correlated systems with large active spaces, which are beyond the capability of CASPT2.
Recognition of human activities using depth images of Kinect for biofied building
NASA Astrophysics Data System (ADS)
Ogawa, Ami; Mita, Akira
2015-03-01
Living spaces today are required to provide a variety of functions because of the aging of society, the promotion of energy conservation, and the diversification of lifestyles. To meet this requirement, we propose the "Biofied Building", a system that learns from living beings. As a key function of this system for controlling living spaces, various kinds of information are accumulated in a database using small sensor agent robots. Among the various kinds of information about living spaces, human activities in particular can serve as triggers for lighting or air-conditioning control, making customized spaces possible. Human activities are divided into two groups: activities consisting of a single behavior and activities consisting of multiple behaviors. For example, "standing up" or "sitting down" consists of a single behavior; these activities are accompanied by large motions. On the other hand, "eating" consists of several behaviors, such as holding the chopsticks, picking up the food, and putting it in the mouth; these are continuous motions. Considering the characteristics of the two types of human activities, we use two methods, one for each type: R transformation and variance. In this paper, we focus on these two types of human activities and propose two human activity recognition methods for constructing the living-space database for the "Biofied Building". Finally, we compare the results of both methods.
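As a rough stand-in for the variance-based recognition of large-motion activities, the sketch below computes a sliding-window variance over a synthetic depth-derived signal and flags windows with unusually high variance. The signal, window length, and threshold are assumptions; the R-transformation branch of the method is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "joint height" stream: a sit-to-stand step plus continuous small motions
t = np.arange(0, 60, 0.1)
signal = 0.5 * (t > 30) + 0.02 * np.sin(2 * np.pi * 1.5 * t) + 0.01 * rng.normal(size=t.size)

win = 20                                              # 2-second sliding window (assumption)
var = np.array([signal[i:i + win].var() for i in range(signal.size - win)])
events = np.where(var > 5 * np.median(var))[0]        # candidate large-motion windows
print(events[:5] * 0.1 if events.size else "no large-motion events")
```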
Large space systems technology, 1981. [conferences
NASA Technical Reports Server (NTRS)
Boyer, W. J. (Compiler)
1982-01-01
A total systems approach including structures, analyses, controls, and antennas is presented as a cohesive, programmatic plan for large space systems. Specifically, program status, structures, materials, and analyses, and control of large space systems are addressed.
Fourier band-power E/B-mode estimators for cosmic shear
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, Matthew R.; Rozo, Eduardo
We introduce new Fourier band-power estimators for cosmic shear data analysis and E/B-mode separation. We consider both the case where one performs E/B-mode separation and the case where one does not. The resulting estimators have several nice properties which make them ideal for cosmic shear data analysis. First, they can be written as linear combinations of the binned cosmic shear correlation functions. Secondly, they account for the survey window function in real-space. Thirdly, they are unbiased by shape noise since they do not use correlation function data at zero separation. Fourthly, the band-power window functions in Fourier space are compact and largely non-oscillatory. Fifthly, they can be used to construct band-power estimators with very efficient data compression properties. In particular, we find that all of the information on the parameters Ωm, σ8 and ns in the shear correlation functions in the range of ~10–400 arcmin for a single tomographic bin can be compressed into only three band-power estimates. Finally, we can achieve these rates of data compression while excluding small-scale information where the modelling of the shear correlation functions and power spectra is very difficult. Given these desirable properties, these estimators will be very useful for cosmic shear data analysis.
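A simplified sketch of turning a binned correlation function into band powers as a linear combination of the bins, using the flat-sky relation C(ℓ) ≈ 2π ∫ dθ θ ξ+(θ) J0(ℓθ). It ignores E/B separation, the survey window, and shape noise, and the toy ξ+ values are made up; it only illustrates the "linear combination of binned correlation functions" idea, not the paper's estimators.

```python
import numpy as np
from scipy.special import j0

# Toy correlation function xi_+(theta) on logarithmic bins from 10 to 400 arcmin
theta_arcmin = np.logspace(1, np.log10(400), 40)
theta = np.deg2rad(theta_arcmin / 60.0)               # bin centres in radians
xi_plus = 1e-4 * np.exp(-theta_arcmin / 100.0)        # made-up toy signal
dtheta = np.gradient(theta)

def band_power(l_lo, l_hi, n=8):
    """Average C(l) over a band, as a weighted sum over the correlation-function bins."""
    ls = np.linspace(l_lo, l_hi, n)
    cl = [2.0 * np.pi * np.sum(theta * xi_plus * j0(l * theta) * dtheta) for l in ls]
    return np.mean(cl)

for band in [(200, 500), (500, 1200), (1200, 3000)]:
    print(band, band_power(*band))
```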
Atomic displacements in the charge ice pyrochlore Bi2Ti2O6O' studied by neutron total scattering
NASA Astrophysics Data System (ADS)
Shoemaker, Daniel P.; Seshadri, Ram; Hector, Andrew L.; Llobet, Anna; Proffen, Thomas; Fennie, Craig J.
2010-04-01
The oxide pyrochlore Bi2Ti2O6O' is known to be associated with large displacements of Bi and O' atoms from their ideal crystallographic positions. Neutron total scattering, analyzed in both reciprocal and real space, is employed here to understand the nature of these displacements. Rietveld analysis and maximum entropy methods are used to produce an average picture of the structural nonideality. Local structure is modeled via large-box reverse Monte Carlo simulations constrained simultaneously by the Bragg profile and real-space pair distribution function. Direct visualization and statistical analyses of these models show the precise nature of the static Bi and O' displacements. Correlations between neighboring Bi displacements are analyzed using coordinates from the large-box simulations. The framework of continuous symmetry measures has been applied to distributions of O'Bi4 tetrahedra to examine deviations from ideality. Bi displacements from ideal positions appear correlated over local length scales. The results are consistent with the idea that these nonmagnetic lone-pair containing pyrochlore compounds can be regarded as highly structurally frustrated systems.
Reinforced dynamics for enhanced sampling in large atomic and molecular systems
NASA Astrophysics Data System (ADS)
Zhang, Linfeng; Wang, Han; E, Weinan
2018-03-01
A new approach for efficiently exploring the configuration space and computing the free energy of large atomic and molecular systems is proposed, motivated by an analogy with reinforcement learning. There are two major components in this new approach. Like metadynamics, it allows for an efficient exploration of the configuration space by adding an adaptively computed biasing potential to the original dynamics. Like deep reinforcement learning, this biasing potential is trained on the fly using deep neural networks, with data collected judiciously from the exploration and an uncertainty indicator from the neural network model playing the role of the reward function. Parameterization using neural networks makes it feasible to handle cases with a large set of collective variables. This has the potential advantage that selecting precisely the right set of collective variables has now become less critical for capturing the structural transformations of the system. The method is illustrated by studying the full-atom explicit solvent models of alanine dipeptide and tripeptide, as well as the system of a polyalanine-10 molecule with 20 collective variables.
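The sketch below illustrates the general idea of adaptively growing a biasing potential along a collective variable, here with deposited Gaussian kernels in the spirit of metadynamics rather than the paper's neural-network bias; the free-energy surface, deposition schedule, and Langevin parameters are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed double-well free-energy surface along one collective variable s
F = lambda s: (s ** 2 - 1.0) ** 2
dFds = lambda s: 4.0 * s * (s ** 2 - 1.0)

centers, height, width = [], 0.05, 0.2       # deposited Gaussian bias parameters (assumptions)

def bias_force(s):
    """Force from the accumulated bias potential V_b(s) = sum_k h * exp(-(s-c_k)^2/(2 w^2))."""
    c = np.array(centers)
    if c.size == 0:
        return 0.0
    return np.sum(height * (s - c) / width ** 2 * np.exp(-(s - c) ** 2 / (2.0 * width ** 2)))

s, dt, beta = -1.0, 1e-3, 5.0
for step in range(100000):
    force = -dFds(s) + bias_force(s)
    s += force * dt + np.sqrt(2.0 * dt / beta) * rng.normal()   # overdamped Langevin step
    if step % 250 == 0:
        centers.append(s)                    # adaptively grow the biasing potential

print(min(centers), max(centers))            # exploration should cover both wells
```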
NASA Astrophysics Data System (ADS)
Garmay, Yu.; Shvetsov, A.; Karelov, D.; Lebedev, D.; Radulescu, A.; Petukhov, M.; Isaev-Ivanov, V.
2012-02-01
Based on X-ray crystallographic data available in the Protein Data Bank, we have built molecular dynamics (MD) models of the homologous recombinases RecA from E. coli and D. radiodurans. The functional form of the RecA enzyme, which is known to be a long helical filament, was approximated by a trimer simulated in a periodic water box. The MD trajectories were analyzed in terms of large-scale conformational motions that could be detectable by neutron and X-ray scattering techniques. The analysis revealed that large-scale RecA monomer dynamics can be described in terms of relative motions of 7 subdomains. Motion of the C-terminal domain was the major contributor to the overall dynamics of the protein. Principal component analysis (PCA) of the MD trajectories in atom coordinate space showed that rotation of the C-terminal domain is correlated with conformational changes in the central domain and the N-terminal domain, which forms the monomer-monomer interface. Thus, even though the C-terminal domain is relatively far from the interface, its orientation is correlated with the large-scale filament conformation. PCA of the trajectories in main-chain dihedral angle coordinate space indicates the co-existence of several different large-scale conformations of the modeled trimer. In order to clarify the relationship between independent domain orientation and large-scale filament conformation, we have analyzed independent domain motion and its implications for the filament geometry.
Parylene-based active micro space radiator with thermal contact switch
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ueno, Ai; Suzuki, Yuji
2014-03-03
Thermal management is crucial for highly functional spacecraft exposed to large fluctuations of internal heat dissipation and/or thermal boundary conditions. Since thermal radiation is the only means for heat removal, effective control of radiation is required for advanced space missions. In the present study, a MEMS (Micro Electro Mechanical Systems) active radiator using contact resistance change has been proposed. Unlike previous bulky thermal louvers/shutters, a higher fill factor can be accomplished with an array of electrostatically driven micro diaphragms suspended with polymer tethers. With an early prototype developed with parylene MEMS technologies, radiation heat flux enhancement up to 42% has been achieved.
Transformers: Shape-Changing Space Systems Built with Robotic Textiles
NASA Technical Reports Server (NTRS)
Stoica, Adrian
2013-01-01
Prior approaches to transformer-like robots had only very limited success: they suffered from a lack of reliability, a limited ability to integrate large surfaces, and only very modest changes in overall shape. Robots can now be built from two-dimensional (2D) layers of robotic fabric. These transformers, a new kind of robotic space system, are dramatically different from current systems in at least two ways. First, the entire transformer is built from a single, thin sheet: a flexible layer of robotic fabric (ro-fabric), or robotic textile (ro-textile). Second, the ro-textile layer is foldable to a small volume and self-unfolding to adapt shape and function to mission phases.
Large scale exact quantum dynamics calculations: Ten thousand quantum states of acetonitrile
NASA Astrophysics Data System (ADS)
Halverson, Thomas; Poirier, Bill
2015-03-01
'Exact' quantum dynamics (EQD) calculations of the vibrational spectrum of acetonitrile (CH3CN) are performed, using two different methods: (1) phase-space-truncated momentum-symmetrized Gaussian basis and (2) correlated truncated harmonic oscillator basis. In both cases, a simple classical phase space picture is used to optimize the selection of individual basis functions, leading to drastic reductions in basis size in comparison with existing methods. Massive parallelization is also employed. Together, these tools, implemented in a single, easy-to-use computer code, enable a calculation of tens of thousands of vibrational states of CH3CN to an accuracy of 0.001-10 cm^-1.
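A small sketch of the kind of classical energy criterion that can be used to prune a direct-product harmonic oscillator basis; the mode frequencies and energy cutoff are arbitrary assumptions, and the actual phase-space truncation and momentum-symmetrized Gaussian machinery of the paper are not reproduced.

```python
import itertools
import numpy as np

omega = np.array([3.0, 2.2, 1.7])   # mode frequencies (assumed, arbitrary units)
n_max, e_cut = 10, 12.0             # per-mode quantum number cap and energy cutoff

full = (n_max + 1) ** omega.size    # size of the untruncated direct-product basis
kept = sum(
    1
    for ns in itertools.product(range(n_max + 1), repeat=omega.size)
    if np.dot(omega, np.array(ns) + 0.5) <= e_cut   # keep states below the energy cutoff
)
print(kept, full, kept / full)      # retained basis size vs. the full direct product
```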
Observation of CH⋅⋅⋅π Interactions between Methyl and Carbonyl Groups in Proteins.
Perras, Frédéric A; Marion, Dominique; Boisbouvier, Jérôme; Bryce, David L; Plevin, Michael J
2017-06-19
Protein structure and function are dependent on myriad noncovalent interactions. Direct detection and characterization of these weak interactions in large biomolecules, such as proteins, are experimentally challenging. Herein, we report the first observation and measurement of long-range "through-space" scalar couplings between methyl and backbone carbonyl groups in proteins. These J couplings are indicative of the presence of noncovalent C-H⋅⋅⋅π hydrogen-bond-like interactions involving the amide π network. Experimentally detected scalar couplings were corroborated by a natural bond orbital analysis, which revealed the orbital nature of the interaction and the origins of the through-space J couplings. The experimental observation of this type of CH⋅⋅⋅π interaction adds a new dimension to the study of protein structure, function, and dynamics by NMR spectroscopy. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Exploiting Multi-Step Sample Trajectories for Approximate Value Iteration
2013-09-01
Approximate value iteration methods for reinforcement learning (RL) generalize experience from limited samples across large state-action spaces.
The Market as an Institution for Zoning the Ocean
NASA Astrophysics Data System (ADS)
Clinton, J. E.; Hoagland, P.
2008-12-01
In recent years, spatial conflicts among ocean users have increased significantly, particularly in the coastal ocean. Ocean zoning has been proposed as a promising solution to these conflicts. Strikingly, most ocean zoning proponents focus on a centralized approach, involving government oversight, planning, and spatial allocations. We hypothesize that a market may be more efficient for allocating ocean space, because it tends to put ocean space in the hands of the highest valued uses, and it does not require public decision-makers to compile and analyze large amounts of information. Importantly, where external costs arise, a market in ocean space may need government oversight or regulation. We develop four case studies demonstrating that private allocations of ocean space are taking place already. This evidence suggests that a regulated market in ocean space may perform well as an allocative institution. We find that the proper functioning of a market in ocean space depends positively upon the strength of legal property rights and supportive public policies and negatively upon the number of users and the size of transaction costs.
Backscattering from a Gaussian distributed, perfectly conducting, rough surface
NASA Technical Reports Server (NTRS)
Brown, G. S.
1977-01-01
The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces, however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to more clearly illustrate the method. The surface is assumed to be Gaussian distributed so that the surface height can be split into large and small scale components, relative to the electromagnetic wavelength. A first order perturbation approach is employed wherein the scattering solution for the large scale structure is perturbed by the small scale diffraction effects. The scattering from the large scale structure is treated via geometrical optics techniques. The effect of the large scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization and surface slope dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near normal incidence geometrical optics and wide angle Bragg scattering results.
Marcus, Hani J; Seneci, Carlo A; Hughes-Hallett, Archie; Cundy, Thomas P; Nandi, Dipankar; Yang, Guang-Zhong; Darzi, Ara
2016-04-01
Surgical approaches such as transanal endoscopic microsurgery, which utilize small operative working spaces and are necessarily single-port, are particularly demanding with standard instruments and have not been widely adopted. The aim of this study was to compare simultaneously surgical performance in single-port versus multiport approaches, and small versus large working spaces. Ten novice, 4 intermediate, and 1 expert surgeons were recruited from a university hospital. A preclinical randomized crossover study design was implemented, comparing performance under the following conditions: (1) multiport approach and large working space, (2) multiport approach and intermediate working space, (3) single-port approach and large working space, (4) single-port approach and intermediate working space, and (5) single-port approach and small working space. In each case, participants performed peg transfer and pattern cutting tasks, and each task repetition was scored. Intermediate and expert surgeons performed significantly better than novices in all conditions (P < .05). Performance in single-port surgery was significantly worse than multiport surgery (P < .01). In multiport surgery, there was a nonsignificant trend toward worsened performance in the intermediate versus large working space. In single-port surgery, there was a converse trend; performances in the intermediate and small working spaces were significantly better than in the large working space. Single-port approaches were significantly more technically challenging than multiport approaches, possibly reflecting loss of instrument triangulation. Surprisingly, in single-port approaches, in which triangulation was no longer a factor, performance in large working spaces was worse than in intermediate and small working spaces. © The Author(s) 2015.
Asymptotic freedom in certain SO(N) and SU(N) models
NASA Astrophysics Data System (ADS)
Einhorn, Martin B.; Jones, D. R. Timothy
2017-09-01
We calculate the β-functions for SO(N) and SU(N) gauge theories coupled to adjoint and fundamental scalar representations, correcting longstanding previous results. We explore the constraints on N resulting from requiring asymptotic freedom for all couplings. When we take into account the actual allowed behavior of the gauge coupling, the minimum value of N in both cases turns out to be larger than realized in earlier treatments. We also show that in the large-N limit, both models have large regions of parameter space corresponding to total asymptotic freedom.
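For reference, the asymptotic-freedom constraint discussed above is governed at leading order by the one-loop gauge β-function. Its standard textbook form is quoted below as background only (conventions and normalizations may differ from the paper's; the fermion sum is over Weyl fermions and the scalar sum over complex scalars):

$$\beta(g) \;=\; -\frac{g^{3}}{16\pi^{2}}\left[\frac{11}{3}\,C_{2}(G)\;-\;\frac{2}{3}\sum_{\text{fermions}} T(R_{f})\;-\;\frac{1}{3}\sum_{\text{scalars}} T(R_{s})\right].$$

Asymptotic freedom of the gauge coupling requires the bracket to be positive, which is what bounds the allowed scalar content, and hence N, for a given gauge group.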
Brooks, Tessa L Durham; Miller, Nathan D; Spalding, Edgar P
2010-01-01
Plant development is genetically determined but it is also plastic, a fundamental duality that can be investigated provided a large number of measurements can be made in various conditions. Plasticity of gravitropism in wild-type Arabidopsis (Arabidopsis thaliana) seedling roots was investigated using automated image acquisition and analysis. A bank of computer-controlled charge-coupled device cameras acquired images with high spatiotemporal resolution. Custom image analysis algorithms extracted time course measurements of tip angle and growth rate. Twenty-two discrete conditions defined by seedling age (2, 3, or 4 d), seed size (extra small, small, medium, or large), and growth medium composition (simple or rich) formed the condition space sampled with 1,216 trials. Computational analyses including dimension reduction by principal components analysis, classification by k-means clustering, and differentiation by wavelet convolution showed distinct response patterns within the condition space, i.e. response plasticity. For example, 2-d-old roots (regardless of seed size) displayed a response time course similar to those of roots from large seeds (regardless of age). Enriching the growth medium with nutrients suppressed response plasticity along the seed size and age axes, possibly by ameliorating a mineral deficiency, although analysis of seeds did not identify any elements with low levels on a per weight basis. Characterizing relationships between growth rate and tip swing rate as a function of condition casts gravitropism in a multidimensional response space that provides new mechanistic insights as well as conceptually setting the stage for mutational analysis of plasticity in general and root gravitropism in particular.
Durham Brooks, Tessa L.; Miller, Nathan D.; Spalding, Edgar P.
2010-01-01
Plant development is genetically determined but it is also plastic, a fundamental duality that can be investigated provided a large number of measurements can be made in various conditions. Plasticity of gravitropism in wild-type Arabidopsis (Arabidopsis thaliana) seedling roots was investigated using automated image acquisition and analysis. A bank of computer-controlled charge-coupled device cameras acquired images with high spatiotemporal resolution. Custom image analysis algorithms extracted time course measurements of tip angle and growth rate. Twenty-two discrete conditions defined by seedling age (2, 3, or 4 d), seed size (extra small, small, medium, or large), and growth medium composition (simple or rich) formed the condition space sampled with 1,216 trials. Computational analyses including dimension reduction by principal components analysis, classification by k-means clustering, and differentiation by wavelet convolution showed distinct response patterns within the condition space, i.e. response plasticity. For example, 2-d-old roots (regardless of seed size) displayed a response time course similar to those of roots from large seeds (regardless of age). Enriching the growth medium with nutrients suppressed response plasticity along the seed size and age axes, possibly by ameliorating a mineral deficiency, although analysis of seeds did not identify any elements with low levels on a per weight basis. Characterizing relationships between growth rate and tip swing rate as a function of condition casts gravitropism in a multidimensional response space that provides new mechanistic insights as well as conceptually setting the stage for mutational analysis of plasticity in general and root gravitropism in particular. PMID:19923240
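As a schematic of the "dimension reduction followed by clustering" analysis pipeline mentioned above, the sketch below applies PCA and k-means to synthetic tip-angle time courses; the two response shapes, noise level, and trial counts are synthetic stand-ins for the real measurements, not data from the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-in: 1216 trials x 120 time points of root tip angle (degrees),
# with two underlying response shapes mimicking condition-dependent plasticity.
t = np.linspace(0, 10, 120)
fast = 70 * (1 - np.exp(-t / 2.0))
slow = 70 * (1 - np.exp(-t / 5.0))
X = np.vstack([
    fast + rng.normal(0, 5, (600, t.size)),
    slow + rng.normal(0, 5, (616, t.size)),
])

# Dimension reduction followed by clustering of the response time courses
scores = PCA(n_components=3).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(labels))   # cluster sizes should roughly recover the two groups
```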
The Pearson-Readhead Survey of Compact Extragalactic Radio Sources from Space. I. The Images
NASA Astrophysics Data System (ADS)
Lister, M. L.; Tingay, S. J.; Murphy, D. W.; Piner, B. G.; Jones, D. L.; Preston, R. A.
2001-06-01
We present images from a space-VLBI survey using the facilities of the VLBI Space Observatory Programme (VSOP), drawing our sample from the well-studied Pearson-Readhead survey of extragalactic radio sources. Our survey has taken advantage of long space-VLBI baselines and large arrays of ground antennas, such as the Very Long Baseline Array and European VLBI Network, to obtain high-resolution images of 27 active galactic nuclei and to measure the core brightness temperatures of these sources more accurately than is possible from the ground. A detailed analysis of the source properties is given in accompanying papers. We have also performed an extensive series of simulations to investigate the errors in VSOP images caused by the relatively large holes in the (u,v)-plane when sources are observed near the orbit normal direction. We find that while the nominal dynamic range (defined as the ratio of map peak to off-source error) often exceeds 1000:1, the true dynamic range (map peak to on-source error) is only about 30:1 for relatively complex core-jet sources. For sources dominated by a strong point source, this value rises to approximately 100:1. We find the true dynamic range to be a relatively weak function of the difference in position angle (P.A.) between the jet P.A. and u-v coverage major axis P.A. For regions with low signal-to-noise ratios, typically located down the jet away from the core, large errors can occur, causing spurious features in VSOP images that should be interpreted with caution.
Heterogeneity-induced large deviations in activity and (in some cases) entropy production
NASA Astrophysics Data System (ADS)
Gingrich, Todd R.; Vaikuntanathan, Suriyanarayanan; Geissler, Phillip L.
2014-10-01
We solve a simple model that supports a dynamic phase transition and show conditions for the existence of the transition. Using methods of large deviation theory we analytically compute the probability distribution for activity and entropy production rates of the trajectories on a large ring with a single heterogeneous link. The corresponding joint rate function demonstrates two dynamical phases—one localized and the other delocalized, but the marginal rate functions do not always exhibit the underlying transition. Symmetries in dynamic order parameters influence the observation of a transition, such that distributions for certain dynamic order parameters need not reveal an underlying dynamical bistability. Solution of our model system furthermore yields the form of the effective Markov transition matrices that generate dynamics in which the two dynamical phases are at coexistence. We discuss the implications of the transition for the response of bacterial cells to antibiotic treatment, arguing that even simple models of a cell cycle lacking an explicit bistability in configuration space will exhibit a bistability of dynamical phases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, A.; Avakian, H.; Burkert, V.
The target and double spin asymmetries of the exclusive pseudoscalar channel $\vec{e}\vec{p} \to ep\pi^0$ were measured for the first time in the deep-inelastic regime using a longitudinally polarized 5.9 GeV electron beam and a longitudinally polarized proton target at Jefferson Lab with the CEBAF Large Acceptance Spectrometer (CLAS). The data were collected over a large kinematic phase space and divided into 110 four-dimensional bins of $Q^2$, $x_B$, $-t$ and $\phi$. Large values of asymmetry moments clearly indicate a substantial contribution to the polarized structure functions from transverse virtual photon amplitudes. The interpretation of experimental data in terms of generalized parton distributions (GPDs) provides the first insight on the chiral-odd GPDs $\tilde{H}_T$ and $E_T$, and complements previous measurements of unpolarized structure functions sensitive to the GPDs $H_T$ and $\bar{E}_T$. Finally, these data provide necessary constraints for chiral-odd GPD parametrizations and will strongly influence existing theoretical handbag models.
NASA Astrophysics Data System (ADS)
Hanson-Heine, Magnus W. D.; George, Michael W.; Besley, Nicholas A.
2018-06-01
The restricted excitation subspace approximation is explored as a basis to reduce the memory storage required in linear response time-dependent density functional theory (TDDFT) calculations within the Tamm-Dancoff approximation. It is shown that excluding the core orbitals and up to 70% of the virtual orbitals in the construction of the excitation subspace does not result in significant changes in computed UV/vis spectra for large molecules. The reduced size of the excitation subspace greatly reduces the size of the subspace vectors that need to be stored when using the Davidson procedure to determine the eigenvalues of the TDDFT equations. Furthermore, additional screening of the two-electron integrals in combination with a reduction in the size of the numerical integration grid used in the TDDFT calculation leads to significant computational savings. The use of these approximations represents a simple approach to extend TDDFT to the study of large systems and make the calculations increasingly tractable using modest computing resources.
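A rough back-of-the-envelope sketch of the memory saving is below. The orbital counts are invented for illustration, and the sizing rule (one Davidson trial vector has length equal to the number of retained occupied-virtual pairs) is the standard Tamm-Dancoff dimension rather than a quote from the paper.

```python
# Hedged sketch: storage for Davidson subspace vectors in TDA-TDDFT, with and
# without a restricted excitation subspace. Orbital counts are hypothetical.
def subspace_storage_gb(n_occ, n_virt, n_core=0, virt_kept=1.0, n_vectors=60):
    dim = (n_occ - n_core) * int(n_virt * virt_kept)   # length of one trial vector
    bytes_total = dim * n_vectors * 8                   # double precision
    return bytes_total / 1e9

full = subspace_storage_gb(n_occ=300, n_virt=2700)
reduced = subspace_storage_gb(n_occ=300, n_virt=2700, n_core=60, virt_kept=0.3)
print(f"full: {full:.2f} GB, restricted: {reduced:.2f} GB, ratio: {full/reduced:.1f}x")
```

Excluding core orbitals and roughly 70% of the virtual orbitals shrinks each stored vector by about a factor of four in this example, which is the kind of saving the abstract describes.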
Adaptive Full Aperture Wavefront Sensor Study
NASA Technical Reports Server (NTRS)
Robinson, William G.
1997-01-01
This grant and the work described was in support of a Seven Segment Demonstrator (SSD) and review of wavefront sensing techniques proposed by the Government and Contractors for the Next Generation Space Telescope (NGST) Program. A team developed the SSD concept. For completeness, some of the information included in this report has also been included in the final report of a follow-on contract (H-27657D) entitled "Construction of Prototype Lightweight Mirrors". The original purpose of this GTRI study was to investigate how various wavefront sensing techniques might be most effectively employed with large (greater than 10 meter) aperture space based telescopes used for commercial and scientific purposes. However, due to changes in the scope of the work performed on this grant and in light of the initial studies completed for the NGST program, only a portion of this report addresses wavefront sensing techniques. The wavefront sensing techniques proposed by the Government and Contractors for the NGST were summarized in proposals and briefing materials developed by three study teams including NASA Goddard Space Flight Center, TRW, and Lockheed-Martin. In this report, GTRI reviews these approaches and makes recommendations concerning the approaches. The objectives of the SSD were to demonstrate functionality and performance of a seven segment prototype array of hexagonal mirrors and supporting electromechanical components which address design issues critical to space optics deployed in large space based telescopes for astronomy and for optics used in spaced based optical communications systems. The SSD was intended to demonstrate technologies which can support the following capabilities: Transportation in dense packaging to existing launcher payload envelopes, then deployable on orbit to form a space telescope with large aperture. Provide very large (greater than 10 meters) primary reflectors of low mass and cost. Demonstrate the capability to form a segmented primary or quaternary mirror into a quasi-continuous surface with individual subapertures phased so that near diffraction limited imaging in the visible wavelength region is achieved. Continuous compensation of optical wavefront due to perturbations caused by imperfections, natural disturbances, and equipment induced vibrations/deflections to provide near diffraction limited imaging performance in the visible wavelength region. Demonstrate the feasibility of fabricating such systems with reduced mass and cost compared to past approaches.
NASA Technical Reports Server (NTRS)
1979-01-01
The development of large space structure technology is discussed, with emphasis on space fabricated structures which are automatically manufactured in space from sheet-strip materials and assembled on-orbit. Definition of a flight demonstration involving an Automated Beam Builder and the building and assembling of large structures is presented.
Paraxial diffractive elements for space-variant linear transforms
NASA Astrophysics Data System (ADS)
Teiwes, Stephan; Schwarzer, Heiko; Gu, Ben-Yuan
1998-06-01
Optical linear transform architectures bear good potential for future developments of very powerful hybrid vision systems and neural network classifiers. The optical modules of such systems could be used as pre-processors to solve complex linear operations at very high speed in order to simplify electronic data post-processing. However, the applicability of linear optical architectures is strongly connected with the fundamental question of how to implement a specific linear transform by optical means within physical limitations. The large majority of publications on this topic focuses on the optical implementation of space-invariant transforms by the well-known 4f-setup. Only a few papers deal with approaches to implement selected space-variant transforms. In this paper, we propose a simple algebraic method to design diffractive elements for an optical architecture in order to realize arbitrary space-variant transforms. The design procedure is based on a digital model of scalar, paraxial wave theory and leads to optimal element transmission functions within the model. Its computational and physical limitations are discussed in terms of complexity measures. Finally, the design procedure is demonstrated by some examples. Firstly, diffractive elements for the realization of different rotation operations are computed and, secondly, a Hough transform element is presented. The correct optical functions of the elements are proved in computer simulation experiments.
Frequency maps as a probe of secular evolution in the Milky Way
NASA Astrophysics Data System (ADS)
Valluri, Monica
2015-03-01
The frequency analysis of the orbits of halo stars and dark matter particles from a cosmological hydrodynamical simulation of a disk galaxy from the MUGS collaboration (Stinson et al. 2010) shows that even if the shape of the dark matter halo is nearly oblate, only about 50% of its orbits are on short-axis tubes, confirming a previous result: under baryonic condensation all orbit families can deform their shapes without changing orbital type (Valluri et al. 2010). Orbits of dark matter particles and halo stars are very similar reflecting their common accretion origin and the influence of baryons. Frequency maps provide a compact representation of the 6-D phase space distribution that also reveals the history of the halo (Valluri et al. 2012). The 6-D phase space coordinates for a large population of halo stars in the Milky Way that will be obtained from future surveys can be used to reconstruct the phase-space distribution function of the stellar halo. The similarity between the frequency maps of halo stars and dark matter particles (Fig. 1) implies that reconstruction of the stellar halo distribution function can reveal the phase space distribution of the unseen dark matter particles and provide evidence for secular evolution. MV is supported by NSF grant AST-0908346 and the Elizabeth Crosby grant.
NASA Astrophysics Data System (ADS)
Budday, Dominik; Leyendecker, Sigrid; van den Bedem, Henry
2015-10-01
Proteins operate and interact with partners by dynamically exchanging between functional substates of a conformational ensemble on a rugged free energy landscape. Understanding how these substates are linked by coordinated, collective motions requires exploring a high-dimensional space, which remains a tremendous challenge. While molecular dynamics simulations can provide atomically detailed insight into the dynamics, computational demands to adequately sample conformational ensembles of large biomolecules and their complexes often require tremendous resources. Kinematic models can provide high-level insights into conformational ensembles and molecular rigidity beyond the reach of molecular dynamics by reducing the dimensionality of the search space. Here, we model a protein as a kinematic linkage and present a new geometric method to characterize molecular rigidity from the constraint manifold Q and its tangent space Tq Q at the current configuration q. In contrast to methods based on combinatorial constraint counting, our method is valid for both generic and non-generic, e.g., singular configurations. Importantly, our geometric approach provides an explicit basis for collective motions along floppy modes, resulting in an efficient procedure to probe conformational space. An atomically detailed structural characterization of coordinated, collective motions would allow us to engineer or allosterically modulate biomolecules by selectively stabilizing conformations that enhance or inhibit function with broad implications for human health.
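The geometric idea of obtaining an explicit basis for collective motions from the tangent space of the constraint manifold can be sketched numerically: form the Jacobian of the holonomic constraints with respect to the dihedral degrees of freedom and take its nullspace. The Jacobian below is random placeholder data, and the SVD-based nullspace extraction is a generic technique, not the authors' implementation.

```python
# Hedged sketch: floppy modes as the nullspace of a constraint Jacobian J, where
# J maps velocities of dihedral coordinates to violations of the constraints.
import numpy as np

rng = np.random.default_rng(1)
n_constraints, n_dofs = 40, 100
J = rng.normal(size=(n_constraints, n_dofs))   # placeholder; a real J comes from the kinematic model

# SVD-based rank detection also behaves sensibly at singular (non-generic) configurations.
U, s, Vt = np.linalg.svd(J)
tol = max(J.shape) * np.finfo(float).eps * (s[0] if s.size else 1.0)
rank = int((s > tol).sum())
floppy_modes = Vt[rank:].T                     # orthonormal basis of the tangent space (nullspace of J)

print(f"rank {rank}, {floppy_modes.shape[1]} collective floppy modes")

# A trial collective motion: a small step along a random combination of floppy modes.
step = floppy_modes @ rng.normal(size=floppy_modes.shape[1]) * 1e-2
```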
NASA Astrophysics Data System (ADS)
Khanpour, Hamzeh; Mirjalili, Abolfazl; Tehrani, S. Atashbar
2017-03-01
An analytical solution based on the Laplace transformation technique for the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equations is presented at next-to-leading order accuracy in perturbative QCD. This technique is also applied to extract the analytical solution for the proton structure function, $F_2^p(x, Q^2)$, in the Laplace $s$-space. We present the results for the separate parton distributions of all parton species, including valence quark densities, the antiquark and strange sea parton distribution functions (PDFs), and the gluon distribution. We successfully compare the obtained parton distribution functions and the proton structure function with the results from GJR08 [Gluck, Jimenez-Delgado, and Reya, Eur. Phys. J. C 53, 355 (2008)], 10.1140/epjc/s10052-007-0462-9 and KKT12 [Khanpour, Khorramian, and Tehrani, J. Phys. G 40, 045002 (2013)], 10.1088/0954-3899/40/4/045002 parametrization models as well as the $x$-space results using
The impact of urban planning on land use and land cover in Pudong of Shanghai, China.
Zhao, Bin; Nakagoshi, Nobukazu; Chen, Jia-kuan; Kong, Ling-yi
2003-03-01
Functional zones in cities constitute the most conspicuous components of newly developed urban areas, and have been a hot spot for domestic and foreign investors in China, which not only show the expanse of urban space accompanied by the shifts both in landscape (from rural to urban) and land use (from less extensive to extensive), but also display the transformation of regional ecological functions. By using the theories and methods of landscape ecology, the structure of landscape and landscape ecological planning can be analyzed and evaluated for studying the urban functional zones' layout. In 1990, the Central Government of China declared to develop and open up Pudong New Area so as to promote economic development in Shanghai. Benefiting from the advantages of Shanghai's location and economy, the government of Pudong New Area has successively built up 7 different functional zones over the past decade according to their functions and strategic goals. Based on the multi-spectral satellite imageries taken in 1990, 1997 and 2000, a landscape ecology analysis was carried out for Pudong New Area of Shanghai, supported by GIS technology. Green space (including croplands) and built-up area are the major factors considered in developing urban landscape. This paper was mainly concerned with the different spatial patterns and dynamics of green space, built-up areas and new buildings in different functional zones, influenced by different functional layouts and development strategies. The rapid urbanization in Pudong New Area resulted in a more homogeneous landscape. Agricultural landscape and suburban landscape were gradually replaced by urban landscape as the degree of urbanization increased. As a consequence of urbanization in Pudong, not only built-up patches, but also newly-built patches and green patches merged into one large patch, which should be attributed to the construction policy of extensive green space during the urban development process in Pudong New Area. The shape of green area of 7 functional zones became more and more regular because of the horticultural needs in Shanghai urban planning. Some suggestions were finally made for the study of future urban planning and layout.
Galaxy Clustering in Early Sloan Digital Sky Survey Redshift Data
NASA Astrophysics Data System (ADS)
Zehavi, Idit; Blanton, Michael R.; Frieman, Joshua A.; Weinberg, David H.; Mo, Houjun J.; Strauss, Michael A.; Anderson, Scott F.; Annis, James; Bahcall, Neta A.; Bernardi, Mariangela; Briggs, John W.; Brinkmann, Jon; Burles, Scott; Carey, Larry; Castander, Francisco J.; Connolly, Andrew J.; Csabai, Istvan; Dalcanton, Julianne J.; Dodelson, Scott; Doi, Mamoru; Eisenstein, Daniel; Evans, Michael L.; Finkbeiner, Douglas P.; Friedman, Scott; Fukugita, Masataka; Gunn, James E.; Hennessy, Greg S.; Hindsley, Robert B.; Ivezić, Željko; Kent, Stephen; Knapp, Gillian R.; Kron, Richard; Kunszt, Peter; Lamb, Donald Q.; Leger, R. French; Long, Daniel C.; Loveday, Jon; Lupton, Robert H.; McKay, Timothy; Meiksin, Avery; Merrelli, Aronne; Munn, Jeffrey A.; Narayanan, Vijay; Newcomb, Matt; Nichol, Robert C.; Owen, Russell; Peoples, John; Pope, Adrian; Rockosi, Constance M.; Schlegel, David; Schneider, Donald P.; Scoccimarro, Roman; Sheth, Ravi K.; Siegmund, Walter; Smee, Stephen; Snir, Yehuda; Stebbins, Albert; Stoughton, Christopher; SubbaRao, Mark; Szalay, Alexander S.; Szapudi, Istvan; Tegmark, Max; Tucker, Douglas L.; Uomoto, Alan; Vanden Berk, Dan; Vogeley, Michael S.; Waddell, Patrick; Yanny, Brian; York, Donald G.
2002-05-01
We present the first measurements of clustering in the Sloan Digital Sky Survey (SDSS) galaxy redshift survey. Our sample consists of 29,300 galaxies with redshifts 5700 km s^-1 <= cz <= 39,000 km s^-1, distributed in several long but narrow (2.5°-5°) segments, covering 690 deg^2. For the full, flux-limited sample, the redshift-space correlation length is approximately 8 h^-1 Mpc. The two-dimensional correlation function ξ(r_p, π) shows clear signatures of both the small-scale, "fingers-of-God" distortion caused by velocity dispersions in collapsed objects and the large-scale compression caused by coherent flows, though the latter cannot be measured with high precision in the present sample. The inferred real-space correlation function is well described by a power law, ξ(r) = (r / 6.1 ± 0.2 h^-1 Mpc)^(-1.75 ± 0.03), for 0.1 h^-1 Mpc <= r <= 16 h^-1 Mpc. The galaxy pairwise velocity dispersion is σ_12 ~ 600 ± 100 km s^-1 for projected separations 0.15 h^-1 Mpc <= r_p <= 5 h^-1 Mpc. When we divide the sample by color, the red galaxies exhibit a stronger and steeper real-space correlation function and a higher pairwise velocity dispersion than do the blue galaxies. The relative behavior of subsamples defined by high/low profile concentration or high/low surface brightness is qualitatively similar to that of the red/blue subsamples. Our most striking result is a clear measurement of scale-independent luminosity bias at r ≲ 10 h^-1 Mpc: subsamples with absolute magnitude ranges centered on M* - 1.5, M*, and M* + 1.5 have real-space correlation functions that are parallel power laws of slope ~ -1.8 with correlation lengths of approximately 7.4, 6.3, and 4.7 h^-1 Mpc, respectively.
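As a quick worked example of the quoted fit, the snippet below evaluates the power law ξ(r) = (r/r0)^-γ at the central values r0 = 6.1 h^-1 Mpc and γ = 1.75; the radii chosen are arbitrary points inside the quoted fit range.

```python
# Worked example: the best-fit real-space power law quoted above, central values only.
r0, gamma = 6.1, 1.75                 # correlation length (h^-1 Mpc) and slope
for r in (0.1, 1.0, 6.1, 16.0):       # separations in h^-1 Mpc
    xi = (r / r0) ** (-gamma)
    print(f"r = {r:5.1f} h^-1 Mpc   xi(r) = {xi:10.3f}")
```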
Optimal group size in a highly social mammal
Markham, A. Catherine; Gesquiere, Laurence R.; Alberts, Susan C.; Altmann, Jeanne
2015-01-01
Group size is an important trait of social animals, affecting how individuals allocate time and use space, and influencing both an individual’s fitness and the collective, cooperative behaviors of the group as a whole. Here we tested predictions motivated by the ecological constraints model of group size, examining the effects of group size on ranging patterns and adult female glucocorticoid (stress hormone) concentrations in five social groups of wild baboons (Papio cynocephalus) over an 11-y period. Strikingly, we found evidence that intermediate-sized groups have energetically optimal space-use strategies; both large and small groups experience ranging disadvantages, in contrast to the commonly reported positive linear relationship between group size and home range area and daily travel distance, which depict a disadvantage only in large groups. Specifically, we observed a U-shaped relationship between group size and home range area, average daily distance traveled, evenness of space use within the home range, and glucocorticoid concentrations. We propose that a likely explanation for these U-shaped patterns is that large, socially dominant groups are constrained by within-group competition, whereas small, socially subordinate groups are constrained by between-group competition and predation pressures. Overall, our results provide testable hypotheses for evaluating group-size constraints in other group-living species, in which the costs of intra- and intergroup competition vary as a function of group size. PMID:26504236
Shape control of large space structures
NASA Technical Reports Server (NTRS)
Hagan, M. T.
1982-01-01
A survey has been conducted to determine the types of control strategies which have been proposed for controlling the vibrations in large space structures. From this survey several representative control strategies were singled out for detailed analyses. The application of these strategies to a simplified model of a large space structure has been simulated. These simulations demonstrate the implementation of the control algorithms and provide a basis for a preliminary comparison of their suitability for large space structure control.
Microbial astronauts: assembling microbial communities for advanced life support systems.
Roberts, M S; Garland, J L; Mills, A L
2004-02-01
Extension of human habitation into space requires that humans carry with them many of the microorganisms with which they coexist on Earth. The ubiquity of microorganisms in close association with all living things and biogeochemical processes on Earth predicates that they must also play a critical role in maintaining the viability of human life in space. Even though bacterial populations exist as locally adapted ecotypes, the abundance of individuals in microbial species is so large that dispersal is unlikely to be limited by geographical barriers on Earth (i.e., for most environments "everything is everywhere" given enough time). This will not be true for microbial communities in space where local species richness will be relatively low because of sterilization protocols prior to launch and physical barriers between Earth and spacecraft after launch. Although community diversity will be sufficient to sustain ecosystem function at the onset, richness and evenness may decline over time such that biological systems either lose functional potential (e.g., bioreactors may fail to reduce BOD or nitrogen load) or become susceptible to invasion by human-associated microorganisms (pathogens) over time. Research at the John F. Kennedy Space Center has evaluated fundamental properties of microbial diversity and community assembly in prototype bioregenerative systems for NASA Advanced Life Support. Successional trends related to increased niche specialization, including an apparent increase in the proportion of nonculturable types of organisms, have been consistently observed. In addition, the stability of the microbial communities, as defined by their resistance to invasion by human-associated microorganisms, has been correlated to their diversity. Overall, these results reflect the significant challenges ahead for the assembly of stable, functional communities using gnotobiotic approaches, and the need to better define the basic biological principles that define ecosystem processes in the space environment. Copyright 2004 Springer-Verlag
Research and Analysis on Energy Consumption Features of Civil Airports
NASA Astrophysics Data System (ADS)
Li, Bo; Zhang, Wen; Wang, Jianping; Xu, Junku; Su, Jixiang
2017-11-01
Civil aviation is an important part of China’s transportation system, and also the fastest-growing field of comprehensive transportation. Airports, as a key infrastructure of the air transportation system, are the junctions of air and ground transportation. Large airports are generally comprehensive transportation hubs that integrate various modes of transportation, serving as important functional zones of cities. Compared with other transportation hubs, airports cover a wide area, with plenty of functional sections, complex systems and strong specialization, while airport buildings represented by terminals have exhibited characteristics of large space, massive energy consumption, high requirement for safety and comfort, as well as concentrated and rapidly changing passenger flows. Through research and analysis on energy consumption features of civil airports, and analysis on energy consumption features of airports with different sizes or in different climate regions, this article has drawn conclusions therefrom.
NASA Technical Reports Server (NTRS)
McFarland, Shane
2009-01-01
Field of view has always been a design feature paramount to helmets, and in particular space suits, where the helmet must provide an adequate field of view for a large range of activities, environments, and body positions. For Project Constellation, a different approach to helmet requirement maturation was utilized: one that was less a direct function of body position and suit pressure and more a function of the mission segment in which the field of view will be required. Through taxonomization of various parameters that affect suited field of view, as well as consideration of possible nominal and contingency operations during that mission segment, a reduction process was employed to condense the large number of possible outcomes to only six unique field of view angle requirements that still captured all necessary variables while sacrificing minimal fidelity.
NASA Astrophysics Data System (ADS)
Viens, L.; Miyake, H.; Koketsu, K.
2016-12-01
Large subduction earthquakes have the potential to generate strong long-period ground motions. The ambient seismic field, also called seismic noise, contains information about the elastic response of the Earth between two seismic stations that can be retrieved using seismic interferometry. The DONET1 network, which is composed of 20 offshore stations, has been deployed atop the Nankai subduction zone, Japan, to continuously monitor the seismotectonic activity in this highly seismically active region. The surrounding onshore area is covered by hundreds of seismic stations, which are operated by the National Research Institute for Earth Science and Disaster Prevention (NIED) and the Japan Meteorological Agency (JMA), with a spacing of 15-20 km. We retrieve offshore-onshore Green's functions from the ambient seismic field using the deconvolution technique and use them to simulate the long-period ground motions of moderate subduction earthquakes that occurred at shallow depth. We extend the point source method, which is appropriate for moderate events, to finite source modeling to simulate the long-period ground motions of large Mw 7 class earthquake scenarios. The source models are constructed using scaling relations between moderate and large earthquakes to discretize the fault plane of the large hypothetical events into subfaults. Offshore-onshore Green's functions are spatially interpolated over the fault plane to obtain one Green's function for each subfault. The interpolated Green's functions are finally summed up considering different rupture velocities. Results show that this technique can provide additional information about earthquake ground motions that can be used with the existing physics-based simulations to improve seismic hazard assessment.
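Interferometry by deconvolution, as used above to retrieve offshore-onshore Green's functions, divides the cross-spectrum of the two records by the power spectrum of the reference (virtual-source) record, usually with a water level to stabilize small denominators. The synthetic records and the water-level value below are illustrative assumptions, not the study's processing parameters.

```python
# Hedged sketch: water-level deconvolution of station B by station A in the frequency domain.
import numpy as np

def deconvolve(trace_b, trace_a, water_level=0.01):
    """Estimate the impulse response (Green's function) from A to B."""
    n = len(trace_a)
    A = np.fft.rfft(trace_a, n)
    B = np.fft.rfft(trace_b, n)
    power = np.abs(A) ** 2
    power = np.maximum(power, water_level * power.max())   # regularize spectral holes
    return np.fft.irfft(B * np.conj(A) / power, n)

# Synthetic example: B is a delayed, scaled copy of A plus noise.
rng = np.random.default_rng(2)
dt, n = 0.1, 4096
a = rng.normal(size=n)
b = 0.5 * np.roll(a, 150) + 0.05 * rng.normal(size=n)
g = deconvolve(b, a)
print("recovered lag (s):", np.argmax(g) * dt)   # about 150 * 0.1 = 15 s
```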
Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.
Sun, Shiliang; Xie, Xijiong
2016-09-01
Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic programming, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programmings. The experimental results of semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
NASA Astrophysics Data System (ADS)
Field, F.; Goodbun, J.; Watson, V.
Architects have a role to play in interplanetary space that has barely yet been explored. The architectural community is largely unaware of this new territory, for which there is still no agreed method of practice. There is moreover a general confusion, in scientific and related fields, over what architects might actually do there today. Current extra-planetary designs generally fail to explore the dynamic and relational nature of space-time, and often reduce human habitation to a purely functional problem. This is compounded by a crisis over the representation (drawing) of space-time. The present work returns to first principles of architecture in order to realign them with current socio-economic and technological trends surrounding the space industry. What emerges is simultaneously the basis for an ecological space architecture, and the representational strategies necessary to draw it. We explore this approach through a work of design-based research that describes the construction of Ocean; a huge body of water formed by the collision of two asteroids at the Translunar Lagrange Point (L2), that would serve as a site for colonisation, and as a resource to fuel future missions. Ocean is an experimental model for extra-planetary space design and its representation, within the autonomous discipline of architecture.
The large-scale three-point correlation function of the SDSS BOSS DR12 CMASS galaxies
NASA Astrophysics Data System (ADS)
Slepian, Zachary; Eisenstein, Daniel J.; Beutler, Florian; Chuang, Chia-Hsun; Cuesta, Antonio J.; Ge, Jian; Gil-Marín, Héctor; Ho, Shirley; Kitaura, Francisco-Shu; McBride, Cameron K.; Nichol, Robert C.; Percival, Will J.; Rodríguez-Torres, Sergio; Ross, Ashley J.; Scoccimarro, Román; Seo, Hee-Jong; Tinker, Jeremy; Tojeiro, Rita; Vargas-Magaña, Mariana
2017-06-01
We report a measurement of the large-scale three-point correlation function of galaxies using the largest data set for this purpose to date, 777 202 luminous red galaxies in the Sloan Digital Sky Survey Baryon Acoustic Oscillation Spectroscopic Survey (SDSS BOSS) DR12 CMASS sample. This work exploits the novel algorithm of Slepian & Eisenstein to compute the multipole moments of the 3PCF in O(N^2) time, with N the number of galaxies. Leading-order perturbation theory models the data well in a compressed basis where one triangle side is integrated out. We also present an accurate and computationally efficient means of estimating the covariance matrix. With these techniques, the redshift-space linear and non-linear bias are measured, with 2.6 per cent precision on the former if σ8 is fixed. The data also indicate a 2.8σ preference for the BAO, confirming the presence of BAO in the three-point function.
A new single-particle basis for nuclear many-body calculations
NASA Astrophysics Data System (ADS)
Puddu, G.
2017-10-01
Predominantly, harmonic oscillator single-particle wave functions are the preferred choice for a basis in ab initio nuclear many-body calculations. These wave-functions, although very convenient in order to evaluate the matrix elements of the interaction in the laboratory frame, have too fast a fall-off at large distances. In the past, as an alternative to the harmonic oscillator, other single-particle wave functions have been proposed. In this work, we propose a new single-particle basis, directly linked to nucleon-nucleon interaction. This new basis is orthonormal and complete, has the proper asymptotic behavior at large distances and does not contain the continuum which would pose severe convergence problems in nuclear many body calculations. We consider the newly proposed NNLO-opt nucleon-nucleon interaction, without any renormalization. We show that, unlike other bases, this single-particle representation has a computational cost similar to the harmonic oscillator basis with the same space truncation and it gives lower energies for 6He and 6Li.
Minimum Sobolev norm interpolation of scattered derivative data
NASA Astrophysics Data System (ADS)
Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.
2018-07-01
We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high-order.
NASA Technical Reports Server (NTRS)
Lu, Tao; Zhang, Ye; Wong, Michael; Feiveson, Alan; Gaza, Ramona; Stoffle, Nicholas; Wang, Huichen; Wilson, Bobby; Rohde, Larry; Stodieck, Louis;
2017-01-01
Space radiation consists of energetic charged particles of varying charges and energies. Exposure of astronauts to space radiation on future long duration missions to Mars, or missions back to the Moon, is expected to result in deleterious consequences such as cancer and compromised central nervous system (CNS) functions. Space radiation can also cause mutation in microorganisms, and potentially influence the evolution of life in space. Measurement of the space radiation environment has been conducted since the very beginning of the space program. Compared to the quantification of the space radiation environment using physical detectors, reports on the direct measurement of biological consequences of space radiation exposure have been limited, due primarily to the low dose and low dose rate nature of the environment. Most of the biological assays fail to detect the radiation effects at acute doses that are lower than 5 centiSieverts. In a recent study, we flew cultured confluent human fibroblasts in mostly G1 phase of the cell cycle to the International Space Station (ISS). The cells were fixed in space after arriving on the ISS for 3 and 14 days, respectively. The fixed cells were later returned to the ground and subsequently stained with the gamma-H2AX (Histone family, member X) antibody that is commonly used as a marker for DNA damage, particularly DNA double strand breaks, induced by both low- and high-linear energy transfer radiation. In our present study, the gamma-H2AX (Histone family, member X) foci were captured with a laser confocal microscope. To confirm that some large track-like foci were from space radiation exposure, we also exposed, on the ground, the same type of cells to both low- and high-linear energy transfer protons, and high-linear energy transfer Fe ions. In addition, we exposed the cells to low dose rate gamma rays, in order to rule out the possibility that the large track-like foci can be induced by chronic low-linear energy transfer radiation.
NASA Technical Reports Server (NTRS)
Kuo, B. C.; Singh, G.
1974-01-01
The dynamics of the Large Space Telescope (LST) control system were studied in order to arrive at a simplified model for computer simulation without loss of accuracy. The frictional nonlinearity of the Control Moment Gyroscope (CMG) Control Loop was analyzed in a model to obtain data for the following: (1) a continuous describing function for the gimbal friction nonlinearity; (2) a describing function of the CMG nonlinearity using an analytical torque equation; and (3) the discrete describing function and function plots for the CMG frictional nonlinearity. Preliminary computer simulations are shown for the simplified LST system, first without, and then with analytical torque expressions. Transfer functions of the sampled-data LST system are also described. A final computer simulation is presented which uses elements of the simplified sampled-data LST system with analytical CMG frictional torque expressions.
Structural Dynamics and Control of Large Space Structures, 1982
NASA Technical Reports Server (NTRS)
Brumfield, M. L. (Compiler)
1983-01-01
Basic research in the control of large space structures is discussed. Active damping and control of flexible beams, active stabilization of flexible antenna feed towers, spacecraft docking, and robust pointing control of large space platform payloads are among the topics discussed.
Watanabe, Takanori; Kessler, Daniel; Scott, Clayton; Angstadt, Michael; Sripada, Chandra
2014-01-01
Substantial evidence indicates that major psychiatric disorders are associated with distributed neural dysconnectivity, leading to strong interest in using neuroimaging methods to accurately predict disorder status. In this work, we are specifically interested in a multivariate approach that uses features derived from whole-brain resting state functional connectomes. However, functional connectomes reside in a high dimensional space, which complicates model interpretation and introduces numerous statistical and computational challenges. Traditional feature selection techniques are used to reduce data dimensionality, but are blind to the spatial structure of the connectomes. We propose a regularization framework where the 6-D structure of the functional connectome (defined by pairs of points in 3-D space) is explicitly taken into account via the fused Lasso or the GraphNet regularizer. Our method only restricts the loss function to be convex and margin-based, allowing non-differentiable loss functions such as the hinge-loss to be used. Using the fused Lasso or GraphNet regularizer with the hinge-loss leads to a structured sparse support vector machine (SVM) with embedded feature selection. We introduce a novel efficient optimization algorithm based on the augmented Lagrangian and the classical alternating direction method, which can solve both fused Lasso and GraphNet regularized SVM with very little modification. We also demonstrate that the inner subproblems of the algorithm can be solved efficiently in analytic form by coupling the variable splitting strategy with a data augmentation scheme. Experiments on simulated data and resting state scans from a large schizophrenia dataset show that our proposed approach can identify predictive regions that are spatially contiguous in the 6-D “connectome space,” offering an additional layer of interpretability that could provide new insights about various disease processes. PMID:24704268
Computational Design of Functionalized Metal–Organic Framework Nodes for Catalysis
2017-01-01
Recent progress in the synthesis and characterization of metal–organic frameworks (MOFs) has opened the door to an increasing number of possible catalytic applications. The great versatility of MOFs creates a large chemical space, whose thorough experimental examination becomes practically impossible. Therefore, computational modeling is a key tool to support, rationalize, and guide experimental efforts. In this outlook we survey the main methodologies employed to model MOFs for catalysis, and we review selected recent studies on the functionalization of their nodes. We pay special attention to catalytic applications involving natural gas conversion. PMID:29392172
Higher Order Heavy Quark Corrections to Deep-Inelastic Scattering
NASA Astrophysics Data System (ADS)
Blümlein, Johannes; DeFreitas, Abilio; Schneider, Carsten
2015-04-01
The 3-loop heavy flavor corrections to deep-inelastic scattering are essential for consistent next-to-next-to-leading order QCD analyses. We report on the present status of the calculation of these corrections at large virtualities $Q^2$. We also describe a series of mathematical, computer-algebraic and combinatorial methods and special function spaces, needed to perform these calculations. Finally, we briefly discuss the status of measuring $\alpha_s(M_Z)$, the charm quark mass $m_c$, and the parton distribution functions at next-to-next-to-leading order from the world precision data on deep-inelastic scattering.
Flavor dependence of the pion and kaon form factors and parton distribution functions
Hutauruk, Parada T. P.; Cloët, Ian C.; Thomas, Anthony W.
2016-09-01
The separate quark flavor contributions to the pion and kaon valence quark distribution functions are studied, along with the corresponding electromagnetic form factors in the space-like region. The calculations are made using the solution of the Bethe-Salpeter equation for the model of Nambu and Jona-Lasinio with proper-time regularization. Both the pion and kaon form factors and the valence quark distribution functions reproduce many features of the available empirical data. The larger mass of the strange quark naturally explains the empirical fact that the ratio $u^{K^+}(x)/u^{\pi^+}(x)$ drops below unity at large x, with a value of approximately $M_u^2/M_s^2$ as x → 1. With regard to the elastic form factors we report a large flavor dependence, with the u-quark contribution to the kaon form factor being an order of magnitude smaller than that of the s-quark at large $Q^2$, which may be a sensitive measure of confinement effects in QCD. Surprisingly though, the total $K^+$ and $\pi^+$ form factors differ by only 10%. Lastly, in general we find that flavor breaking effects are typically around 20%.
Spherical hashing: binary code embedding with hyperspheres.
Heo, Jae-Pil; Lee, Youngwoon; He, Junfeng; Chang, Shih-Fu; Yoon, Sung-Eui
2015-11-01
Many binary code embedding schemes have been actively studied recently, since they can provide efficient similarity search, and compact data representations suitable for handling large scale image databases. Existing binary code embedding techniques encode high-dimensional data by using hyperplane-based hashing functions. In this paper we propose a novel hypersphere-based hashing function, spherical hashing, to map more spatially coherent data points into a binary code compared to hyperplane-based hashing functions. We also propose a new binary code distance function, spherical Hamming distance, tailored for our hypersphere-based binary coding scheme, and design an efficient iterative optimization process to achieve both balanced partitioning for each hash function and independence between hashing functions. Furthermore, we generalize spherical hashing to support various similarity measures defined by kernel functions. Our extensive experiments show that our spherical hashing technique significantly outperforms state-of-the-art techniques based on hyperplanes across various benchmarks with sizes ranging from one to 75 million of GIST, BoW and VLAD descriptors. The performance gains are consistent and large, up to 100 percent improvements over the second best method among tested methods. These results confirm the unique merits of using hyperspheres to encode proximity regions in high-dimensional spaces. Finally, our method is intuitive and easy to implement.
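The core construction in this abstract is easy to sketch: each hash bit records whether a point falls inside one learned hypersphere, and codes are compared with a distance tailored to that geometry. The spherical Hamming distance used below (XOR count divided by the count of shared inside-bits) follows the commonly cited formulation, but treat that normalization, as well as the random centers and radii, as assumptions of this sketch rather than a faithful reimplementation.

```python
# Hedged sketch: hypersphere-based binary encoding and a spherical Hamming distance.
import numpy as np

rng = np.random.default_rng(3)
dim, n_bits = 128, 64
centers = rng.normal(size=(n_bits, dim))        # placeholder pivots (learned in the real method)
radii = np.full(n_bits, np.sqrt(2 * dim))       # placeholder radii (tuned for balanced bits in the real method)

def encode(x):
    """1 where x lies inside the k-th hypersphere, else 0."""
    return (np.linalg.norm(x - centers, axis=1) <= radii).astype(np.uint8)

def spherical_hamming(b1, b2):
    common = int(np.sum(b1 & b2))               # bits agreeing on 'inside'
    xor = int(np.sum(b1 ^ b2))
    return xor / common if common else np.inf

x, y = rng.normal(size=dim), rng.normal(size=dim)
print(spherical_hamming(encode(x), encode(y)))
```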
NASA Technical Reports Server (NTRS)
Rule, William Keith
1991-01-01
A computer program called BALLIST that is intended to be a design tool for engineers is described. BALLIST empirically predicts the bumper thickness required to prevent perforation of the Space Station pressure wall by a projectile (such as orbital debris) as a function of the projectile's velocity. 'Ballistic' limit curves (bumper thickness vs. projectile velocity) are calculated and are displayed on the screen as well as being stored in an ASCII file. A Whipple style of spacecraft wall configuration is assumed. The predictions are based on a database of impact test results. NASA/Marshall Space Flight Center currently has the capability to generate such test results. Numerical simulation results of impact conditions that cannot be tested (high velocities or large particles) can also be used for predictions.
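The empirical approach described here, reading a required bumper thickness off a curve anchored to impact-test data as a function of projectile velocity, can be imitated with a simple interpolation. The tabulated points below are invented for illustration and are not NASA test data or BALLIST's actual fitting procedure.

```python
# Hedged sketch: interpolate a 'ballistic limit' style curve (required bumper thickness
# vs. projectile velocity) from hypothetical test results; not real impact-test data.
import numpy as np

# velocity (km/s), minimum bumper thickness (cm) that prevented pressure-wall perforation
test_velocity = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
test_thickness = np.array([0.10, 0.14, 0.20, 0.16, 0.13, 0.12])   # hypothetical values

def required_thickness(velocity_km_s):
    """Piecewise-linear read-off of the empirical curve; clamps outside the tested range."""
    return float(np.interp(velocity_km_s, test_velocity, test_thickness))

for v in (3.5, 5.5, 7.5):
    print(f"{v:.1f} km/s -> {required_thickness(v):.3f} cm")
```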
Effect of Space Flight on Adrenal Medullary Function
NASA Technical Reports Server (NTRS)
Lelkes, Peter I.
1999-01-01
We hypothesize that microgravity conditions during space flight alter the expression and specific activities of the adrenal medullary CA synthesizing enzymes (CASE). Previously, we examined adrenals from six rats flown for six days aboard STS 54 and reported that microgravity induced a decrease in the expression and specific activity of rat adrenal medullary tyrosine hydroxylase, the rate limiting enzyme of CA synthesis, without affecting the expression of other CASE. In the past, we analyzed some of the > 300 adrenals from two previous Space Shuttle missions (PARE 03 and SLS 2). The preliminary results (a) attest to the good state of tissue preservation, thus proving the feasibility of subsequent large-scale evaluation, and (b) confirm and extend our previous findings. With this grant we will be able to expeditiously analyze all our specimens and to complete our studies in a timely fashion.
The benefits of in-flight LOX collection for airbreathing space boosters
NASA Astrophysics Data System (ADS)
Maurice, Lourdes Q.; Leingang, John L.; Carreiro, Louis R.
1992-12-01
In-flight LOX collection using a propulsion fluid system known as ACES (Air Collection and Enrichment System) yields large reductions in launch weights of airbreathing space boosters. The role of the ACES system is to acquire and store liquid oxygen en route to orbit for rocket use beyond the airbreathing envelope. Earth-to-orbit capability is achieved without carrying liquid oxygen from take-off or relying on scramjets. The superiority of ACES type space boosters over their LOX-carrying counterparts has been thoroughly documented in the past. This paper extends that work by presenting a direct comparison between single-stage and two-stage ACES and scramjet powered vehicles carrying similar payloads. ACES vehicles are shown to be weight competitive with scramjet powered vehicles, and require airbreathing function only up to Mach 5 to 8.
Discretely Self-Similar Solutions to the Navier-Stokes Equations with Besov Space Data
NASA Astrophysics Data System (ADS)
Bradshaw, Zachary; Tsai, Tai-Peng
2018-07-01
We construct self-similar solutions to the three dimensional Navier-Stokes equations for divergence free, self-similar initial data that can be large in the critical Besov space $\dot{B}_{p,\infty}^{3/p-1}$ where 3 < p < 6. We also construct discretely self-similar solutions for divergence free initial data in $\dot{B}_{p,\infty}^{3/p-1}$ for 3 < p < 6 that is discretely self-similar for some scaling factor λ > 1. These results extend those of Bradshaw and Tsai (Ann Henri Poincaré 2016. https://doi.org/10.1007/s00023-016-0519-0), which dealt with initial data in $L^3_w$ since $L^3_w \subsetneq \dot{B}_{p,\infty}^{3/p-1}$ for p > 3. We also provide several concrete examples of vector fields in the relevant function spaces.
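For orientation, the (forward) self-similar and discretely self-similar scalings referred to above can be written out explicitly; this is the standard statement of the Navier-Stokes scaling symmetry, added here as context rather than quoted from the paper.

```latex
% Navier-Stokes scaling symmetry: if u solves the equations, so does
\[
  u_\lambda(x,t) = \lambda\, u(\lambda x, \lambda^2 t), \qquad \lambda > 0 .
\]
% A solution is self-similar (SS) if $u = u_\lambda$ for every $\lambda > 0$, equivalently
\[
  u(x,t) = \frac{1}{\sqrt{t}}\; U\!\Big(\frac{x}{\sqrt{t}}\Big),
\]
% and discretely self-similar (DSS) if $u = u_\lambda$ for one fixed factor $\lambda > 1$;
% the corresponding initial data then satisfy $u_0(x) = \lambda\, u_0(\lambda x)$.
```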
Human exposure to large solar particle events in space
NASA Technical Reports Server (NTRS)
Townsend, L. W.; Wilson, J. W.; Shinn, J. L.; Curtis, S. B.
1992-01-01
Whenever energetic solar protons produced by solar particle events traverse bulk matter, they undergo various nuclear and atomic collision processes which significantly alter the physical characteristics and biologically important properties of their transported radiation fields. These physical interactions and their effect on the resulting radiation field within matter are described within the context of a recently developed deterministic, coupled neutron-proton space radiation transport computer code (BRYNTRN). Using this computer code, estimates of human exposure in interplanetary space, behind nominal (2 g/sq cm) and storm shelter (20 g/sq cm) thicknesses of aluminum shielding, are made for the large solar proton event of August 1972. Included in these calculations are estimates of cumulative exposures to the skin, ocular lens, and bone marrow as a function of time during the event. Risk assessment in terms of absorbed dose and dose equivalent is discussed for these organs. Also presented are estimates of organ exposures for hypothetical, worst-case flare scenarios. The rate of dose equivalent accumulation places this situation in an interesting region of dose rate between the very low values of usual concern in terrestrial radiation environments and the high-dose-rate values prevalent in radiation therapy.
BLUE STRAGGLER EVOLUTION CAUGHT IN THE ACT IN THE LARGE MAGELLANIC CLOUD GLOBULAR CLUSTER HODGE 11
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Chengyuan; De Grijs, Richard; Liu Xiangkun
High-resolution Hubble Space Telescope imaging observations show that the radial distribution of the field-decontaminated sample of 162 'blue straggler' stars (BSs) in the 11.7^{+0.2}_{-0.1} Gyr old Large Magellanic Cloud cluster Hodge 11 exhibits a clear bimodality. In combination with their distinct loci in color-magnitude space, this offers new evidence in support of theoretical expectations that suggest different BS formation channels as a function of stellar density. In the cluster's color-magnitude diagram, the BSs in the inner 15'' (roughly corresponding to the cluster's core radius) are located more closely to the theoretical sequence resulting from stellar collisions, while those in the periphery (at radii between 85'' and 100'') are preferentially found in the region expected to contain objects formed through binary mass transfer or coalescence. In addition, the objects' distribution in color-magnitude space provides us with the rare opportunity in an extragalactic environment to quantify the evolution of the cluster's collisionally induced BS population and the likely period that has elapsed since their formation epoch, which we estimate to have occurred ~4-5 Gyr ago.
Human pose tracking from monocular video by traversing an image motion mapped body pose manifold
NASA Astrophysics Data System (ADS)
Basu, Saurav; Poulin, Joshua; Acton, Scott T.
2010-01-01
Tracking human pose from monocular video sequences is a challenging problem due to the large number of independent parameters affecting image appearance and nonlinear relationships between generating parameters and the resultant images. Unlike the current practice of fitting interpolation functions to point correspondences between underlying pose parameters and image appearance, we exploit the relationship between pose parameters and image motion flow vectors in a physically meaningful way. Change in image appearance due to pose change is realized as navigating a low dimensional submanifold of the infinite dimensional Lie group of diffeomorphisms of the two dimensional sphere S^2. For small changes in pose, image motion flow vectors lie on the tangent space of the submanifold. Any observed image motion flow vector field is decomposed into the basis motion vector flow fields on the tangent space and combination weights are used to update corresponding pose changes in the different dimensions of the pose parameter space. Image motion flow vectors are largely invariant to style changes in experiments with synthetic and real data where the subjects exhibit variation in appearance and clothing. The experiments demonstrate the robustness of our method (within ±4° of ground truth) to style variance.
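The tracking step described above, decomposing an observed image motion flow field onto tangent-space basis flow fields and using the weights to update pose, amounts to a linear least-squares fit. The basis flows, observed flow, and pose dimensions below are synthetic placeholders rather than the authors' data or parameterization.

```python
# Hedged sketch: project an observed optical-flow field onto basis motion flows
# (one per pose dimension) and update the pose parameters with the fitted weights.
import numpy as np

rng = np.random.default_rng(4)
n_pixels, n_pose_dims = 5000, 12
# Each column is the flattened (u, v) flow produced by a unit change of one pose parameter.
basis_flows = rng.normal(size=(2 * n_pixels, n_pose_dims))      # placeholder tangent-space basis
true_dpose = rng.normal(scale=0.05, size=n_pose_dims)
observed_flow = basis_flows @ true_dpose + 0.01 * rng.normal(size=2 * n_pixels)

# Combination weights = least-squares coordinates of the observed flow in the tangent space.
weights, *_ = np.linalg.lstsq(basis_flows, observed_flow, rcond=None)

pose = np.zeros(n_pose_dims)
pose += weights                                                  # pose update for this frame
print("max abs error vs. true increment:", np.abs(weights - true_dpose).max())
```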
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Morgan N.; Arkin, Adam P.; Alm, Eric J.
Operons are a major feature of all prokaryotic genomes, but how and why operon structures vary is not well understood. To elucidate the life-cycle of operons, we compared gene order between Escherichia coli K12 and its relatives and identified the recently formed and destroyed operons in E. coli. This allowed us to determine how operons form, how they become closely spaced, and how they die. Our findings suggest that operon evolution is driven by selection on gene expression patterns. First, both operon creation and operon destruction lead to large changes in gene expression patterns. For example, the removal of lysA and ruvA from ancestral operons that contained essential genes allowed their expression to respond to lysine levels and DNA damage, respectively. Second, some operons have undergone accelerated evolution, with multiple new genes being added during a brief period. Third, although most operons are closely spaced because of a neutral bias towards deletion and because of selection against large overlaps, highly expressed operons tend to be widely spaced because of regulatory fine-tuning by intervening sequences. Although operon evolution seems to be adaptive, it need not be optimal: new operons often comprise functionally unrelated genes that were already in proximity before the operon formed.
2013-01-01
Background: Many proteins tune their biological function by transitioning between different functional states, effectively acting as dynamic molecular machines. Detailed structural characterization of transition trajectories is central to understanding the relationship between protein dynamics and function. Computational approaches that build on the Molecular Dynamics framework are in principle able to model transition trajectories at great detail but also at considerable computational cost. Methods that delay consideration of dynamics and focus instead on elucidating energetically-credible conformational paths connecting two functionally-relevant structures provide a complementary approach. Effective sampling-based path planning methods originating in robotics have been recently proposed to produce conformational paths. These methods largely model short peptides or address large proteins by simplifying conformational space. Methods: We propose a robotics-inspired method that connects two given structures of a protein by sampling conformational paths. The method focuses on small- to medium-size proteins, efficiently modeling structural deformations through the use of the molecular fragment replacement technique. In particular, the method grows a tree in conformational space rooted at the start structure, steering the tree to a goal region defined around the goal structure. We investigate various bias schemes over a progress coordinate for balance between coverage of conformational space and progress towards the goal. A geometric projection layer promotes path diversity. A reactive temperature scheme allows sampling of rare paths that cross energy barriers. Results and conclusions: Experiments are conducted on small- to medium-size proteins of length up to 214 amino acids and with multiple known functionally-relevant states, some of which are more than 13 Å apart from each other. Analysis reveals that the method effectively obtains conformational paths connecting structural states that are significantly different. A detailed analysis of the depth and breadth of the tree suggests that a soft global bias over the progress coordinate enhances sampling and results in higher path diversity. The explicit geometric projection layer that biases the exploration away from over-sampled regions further increases coverage, often improving proximity to the goal by forcing the exploration to find new paths. The reactive temperature scheme is shown effective in increasing path diversity, particularly in difficult structural transitions with known high-energy barriers. PMID:24565158
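A toy, two-dimensional stand-in for the goal-biased tree growth described above. The real method operates on protein conformations with fragment replacement, energy evaluation, a projection layer, and a reactive temperature scheme, none of which are modeled here; the space bounds, step size, and goal bias below are assumptions chosen only to illustrate the expansion strategy.

```python
import numpy as np

rng = np.random.default_rng(1)
start, goal = np.array([0.0, 0.0]), np.array([9.0, 9.0])
step, goal_radius, goal_bias = 0.5, 0.5, 0.2

nodes, parents = [start], [-1]
for _ in range(5000):
    # Soft global bias: sample the goal region with some probability,
    # otherwise sample the space uniformly (promotes coverage).
    target = goal if rng.random() < goal_bias else rng.uniform(0.0, 10.0, size=2)
    nearest = min(range(len(nodes)), key=lambda i: np.linalg.norm(nodes[i] - target))
    direction = target - nodes[nearest]
    new = nodes[nearest] + step * direction / (np.linalg.norm(direction) + 1e-12)
    nodes.append(new)
    parents.append(nearest)
    if np.linalg.norm(new - goal) < goal_radius:
        break

# Recover the path from the goal region back to the root.
path, i = [], len(nodes) - 1
while i != -1:
    path.append(nodes[i])
    i = parents[i]
print(f"tree size: {len(nodes)}, path length: {len(path)}")
```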
Calculations of High-Temperature Jet Flow Using Hybrid Reynolds-Average Navier-Stokes Formulations
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Elmiligui, Alaa; Girimaji, Sharath S.
2008-01-01
Two multiscale-type turbulence models are implemented in the PAB3D solver. The models are based on modifying the Reynolds-averaged Navier-Stokes equations. The first scheme is a hybrid Reynolds-averaged Navier-Stokes/large-eddy-simulation model using the two-equation k-epsilon model with a Reynolds-averaged Navier-Stokes/large-eddy-simulation transition function dependent on grid spacing and the computed turbulence length scale. The second scheme is a modified version of the partially averaged Navier-Stokes model in which the unresolved kinetic energy parameter f_k is allowed to vary as a function of grid spacing and the turbulence length scale. This parameter is estimated based on a novel two-stage procedure to efficiently estimate the level of scale resolution possible for a given flow on a given grid for partially averaged Navier-Stokes. It has been found that the prescribed scale resolution can play a major role in obtaining accurate flow solutions. The parameter f_k varies between zero and one and is equal to one in the viscous sublayer and when the Reynolds-averaged Navier-Stokes turbulent viscosity becomes smaller than the large-eddy-simulation viscosity. The formulation, usage methodology, and validation examples are presented to demonstrate the enhancement of PAB3D's time-accurate turbulence modeling capabilities. The accurate simulations of flow and turbulent quantities will provide a valuable tool for accurate jet noise predictions. Solutions from these models are compared with Reynolds-averaged Navier-Stokes results and experimental data for high-temperature jet flows. The current results show promise for the capability of hybrid Reynolds-averaged Navier-Stokes/large-eddy simulation and partially averaged Navier-Stokes in simulating such flow phenomena.
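A minimal sketch of estimating a grid-dependent unresolved-kinetic-energy parameter f_k from the local grid spacing and the turbulence length scale. The specific form used here (Lambda = k^1.5/epsilon and a 1/sqrt(C_mu) prefactor with clipping to [0, 1]) follows one common proposal in the PANS literature and is an assumption; it is not the paper's two-stage estimation procedure.

```python
import numpy as np

def fk_estimate(k, eps, dx, dy, dz, c_mu=0.09):
    """Estimate the unresolved-kinetic-energy parameter f_k from local grid
    spacing and the computed turbulence length scale.

    Assumed form:
        Lambda = k**1.5 / eps                         # turbulence length scale
        f_k    = (1/sqrt(c_mu)) * (Delta/Lambda)**(2/3), clipped to [0, 1]
    where Delta is the smallest local cell dimension.
    """
    delta = np.minimum(np.minimum(dx, dy), dz)
    lam = k**1.5 / np.maximum(eps, 1e-30)
    fk = (1.0 / np.sqrt(c_mu)) * (delta / lam) ** (2.0 / 3.0)
    return np.clip(fk, 0.0, 1.0)

# Example: grid coarse relative to the turbulence scale -> f_k near 1 (RANS-like)
print(fk_estimate(k=1.0, eps=10.0, dx=0.5, dy=0.5, dz=0.5))
```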
DOE Office of Scientific and Technical Information (OSTI.GOV)
Churchill, R. Michael
Apache Spark is explored as a tool for analyzing large data sets from the magnetic fusion simulation code XGC1. Implementation details of Apache Spark on the NERSC Edison supercomputer are discussed, including binary file reading and parameter setup. Here, an unsupervised machine learning algorithm, k-means clustering, is applied to XGC1 particle distribution function data, showing that highly turbulent spatial regions do not have common coherent structures, but rather broad, ring-like structures in velocity space.
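A minimal sketch of applying k-means clustering to distribution-function features with the standard pyspark.ml API. The abstract does not describe the actual pipeline; the input file name, column names, and cluster count below are assumptions, and Parquet input stands in for the binary reader discussed in the paper.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.clustering import KMeans

spark = SparkSession.builder.appName("xgc1-kmeans").getOrCreate()

# Hypothetical input: each row holds velocity-space features of the particle
# distribution function at one spatial cell (file and column names assumed).
df = spark.read.parquet("xgc1_dist_features.parquet")
feature_cols = ["v_par_moment1", "v_par_moment2", "v_perp_moment1", "v_perp_moment2"]

assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
features = assembler.transform(df)

model = KMeans(k=8, seed=42, featuresCol="features").fit(features)
clustered = model.transform(features)            # adds a "prediction" column
clustered.groupBy("prediction").count().show()   # cluster sizes
```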
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sunayama, Tomomi; Padmanabhan, Nikhil; Heitmann, Katrin
Precision measurements of the large scale structure of the Universe require large numbers of high fidelity mock catalogs to accurately assess, and account for, the presence of systematic effects. We introduce and test a scheme for generating mock catalogs rapidly using suitably derated N-body simulations. Our aim is to reproduce the large scale structure and the gross properties of dark matter halos with high accuracy, while sacrificing the details of the halo's internal structure. By adjusting global and local time-steps in an N-body code, we demonstrate that we recover halo masses to better than 0.5% and the power spectrum to better than 1% both in real and redshift space for k = 1 h Mpc^-1, while requiring a factor of 4 less CPU time. We also calibrate the redshift spacing of outputs required to generate simulated light cones. We find that outputs separated by Δz = 0.05 allow us to interpolate particle positions and velocities to reproduce the real and redshift space power spectra to better than 1% (out to k = 1 h Mpc^-1). We apply these ideas to generate a suite of simulations spanning a range of cosmologies, motivated by the Baryon Oscillation Spectroscopic Survey (BOSS) but broadly applicable to future large scale structure surveys including eBOSS and DESI. As an initial demonstration of the utility of such simulations, we calibrate the shift in the baryonic acoustic oscillation peak position as a function of galaxy bias with higher precision than has been possible so far. This paper also serves to document the simulations, which we make publicly available.
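A minimal sketch of interpolating particle positions between two simulation outputs, the operation that the calibrated Δz = 0.05 output spacing is meant to support. This toy version treats the stored velocities directly as time derivatives of the stored positions and ignores comoving/scale-factor unit conversions, which a real light-cone construction must handle; those details are assumptions left to the caller.

```python
import numpy as np

def hermite_interpolate(x0, v0, x1, v1, t0, t1, t):
    """Cubic Hermite interpolation of particle positions between two
    simulation outputs at times t0 and t1 (v is treated as dx/dt in the
    same coordinates as x; unit conversions are left to the caller)."""
    s = (t - t0) / (t1 - t0)
    h = t1 - t0
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * x0 + h10 * h * v0 + h01 * x1 + h11 * h * v1

# Toy check: exact for uniform linear motion
x0, v0 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
x1, v1 = np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
print(hermite_interpolate(x0, v0, x1, v1, 0.0, 1.0, 0.4))  # -> [0.4, 0, 0]
```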
Ghorai, Santanu; Mukherjee, Anirban; Dutta, Pranab K
2010-06-01
In this brief we have proposed multiclass data classification by computationally inexpensive discriminant analysis through vector-valued regularized kernel function approximation (VVRKFA). VVRKFA, being an extension of fast regularized kernel function approximation (FRKFA), provides the vector-valued response in a single step. The VVRKFA finds a linear operator and a bias vector by using a reduced kernel that maps a pattern from feature space into the low dimensional label space. The classification of patterns is carried out in this low dimensional label subspace. A test pattern is classified depending on its proximity to class centroids. The effectiveness of the proposed method is experimentally verified and compared with multiclass support vector machine (SVM) on several benchmark data sets as well as on gene microarray data for multi-category cancer classification. The results indicate a significant improvement in both training and testing time compared to that of multiclass SVM with comparable testing accuracy, principally in large data sets. Experiments in this brief also serve as a comparison of the performance of VVRKFA with stratified random sampling and sub-sampling.
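A kernel-free, simplified illustration of the core idea (a regularized linear map with a bias term into a low-dimensional label space, followed by nearest-centroid classification). This is not the published VVRKFA algorithm, which uses a reduced kernel and a different regularized formulation; the toy data and regularization value are assumptions.

```python
import numpy as np

def fit_label_space_map(X, y, n_classes, reg=1e-2):
    """Fit a regularized linear map (plus bias) from feature space to a
    low-dimensional label space spanned by one-hot class targets, then
    store per-class centroids of the mapped training patterns."""
    Y = np.eye(n_classes)[y]                        # one-hot targets
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias term
    W = np.linalg.solve(Xb.T @ Xb + reg * np.eye(Xb.shape[1]), Xb.T @ Y)
    Z = Xb @ W                                      # training patterns in label space
    centroids = np.stack([Z[y == c].mean(axis=0) for c in range(n_classes)])
    return W, centroids

def predict(X, W, centroids):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    Z = Xb @ W
    d = np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)                         # nearest class centroid

# Toy usage with three Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.3, size=(50, 4)) for c in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], 50)
W, C = fit_label_space_map(X, y, 3)
print((predict(X, W, C) == y).mean())               # training accuracy
```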
Akhtar, Sultan; Strömberg, Mattias; Zardán Gómez de la Torre, Teresa; Russell, Camilla; Gunnarsson, Klas; Nilsson, Mats; Svedlindh, Peter; Strømme, Maria; Leifer, Klaus
2010-10-21
The present work provides the first real-space analysis of nanobead-DNA coil interactions. Immobilization of oligonucleotide-functionalized magnetic nanobeads in rolling circle amplified DNA-coils was studied by complex magnetization measurements and transmission electron microscopy (TEM), and a statistical analysis of the number of beads hybridized to the DNA-coils was performed. The average number of beads per DNA-coil using the results from both methods was found to be around 6 and slightly above 2 for samples with 40 and 130 nm beads, respectively. The TEM analysis supported an earlier hypothesis that 40 nm beads are preferably immobilized in the interior of DNA-coils whereas 130 nm beads, to a larger extent, are immobilized closer to the exterior of the coils. The methodology demonstrated in the present work should open up new possibilities for characterization of interactions of a large variety of functionalized nanoparticles with macromolecules, useful for gaining more fundamental understanding of such interactions as well as for optimizing a number of biosensor applications.
Linearly resummed hydrodynamics in a weakly curved spacetime
NASA Astrophysics Data System (ADS)
Bu, Yanyan; Lublinsky, Michael
2015-04-01
We extend our study of all-order linearly resummed hydrodynamics in a flat space [1, 2] to fluids in weakly curved spaces. The underlying microscopic theory is a finite temperature super-Yang-Mills theory at strong coupling. The AdS/CFT correspondence relates black brane solutions of the Einstein gravity in asymptotically locally AdS5 geometry to relativistic conformal fluids in a weakly curved 4D background. To linear order in the amplitude of hydrodynamic variables and metric perturbations, the fluid's energy-momentum tensor is computed with derivatives of both the fluid velocity and background metric resummed to all orders. We extensively discuss the meaning of all order hydrodynamics by expressing it in terms of the memory function formalism, which is also suitable for practical simulations. In addition to two viscosity functions discussed at length in refs. [1, 2], we find four curvature induced structures coupled to the fluid via new transport coefficient functions. In ref. [3], the latter were referred to as gravitational susceptibilities of the fluid. We analytically compute these coefficients in the hydrodynamic limit, and then numerically up to large values of momenta.
Aquilante, Francesco; Autschbach, Jochen; Carlson, Rebecca K; Chibotaru, Liviu F; Delcey, Mickaël G; De Vico, Luca; Fdez Galván, Ignacio; Ferré, Nicolas; Frutos, Luis Manuel; Gagliardi, Laura; Garavelli, Marco; Giussani, Angelo; Hoyer, Chad E; Li Manni, Giovanni; Lischka, Hans; Ma, Dongxia; Malmqvist, Per Åke; Müller, Thomas; Nenov, Artur; Olivucci, Massimo; Pedersen, Thomas Bondo; Peng, Daoling; Plasser, Felix; Pritchard, Ben; Reiher, Markus; Rivalta, Ivan; Schapiro, Igor; Segarra-Martí, Javier; Stenrup, Michael; Truhlar, Donald G; Ungur, Liviu; Valentini, Alessio; Vancoillie, Steven; Veryazov, Valera; Vysotskiy, Victor P; Weingart, Oliver; Zapata, Felipe; Lindh, Roland
2016-02-15
In this report, we summarize and describe the recent unique updates and additions to the Molcas quantum chemistry program suite as contained in release version 8. These updates include natural and spin orbitals for studies of magnetic properties, local and linear scaling methods for the Douglas-Kroll-Hess transformation, the generalized active space concept in MCSCF methods, a combination of multiconfigurational wave functions with density functional theory in the MC-PDFT method, additional methods for computation of magnetic properties, methods for diabatization, analytical gradients of state average complete active space SCF in association with density fitting, methods for constrained fragment optimization, large-scale parallel multireference configuration interaction including analytic gradients via the interface to the Columbus package, and approximations of the CASPT2 method to be used for computations of large systems. In addition, the report includes the description of a computational machinery for nonlinear optical spectroscopy through an interface to the QM/MM package Cobramm. Further, a module to run molecular dynamics simulations is added, two surface hopping algorithms are included to enable nonadiabatic calculations, and the DQ method for diabatization is added. Finally, we report on the subject of improvements with respect to alternative file options and parallelization. © 2015 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Golden, Johnny L.
2016-01-01
The International Space Station (ISS) utilizes two large rotating mechanisms, the solar alpha rotary joints (SARJs), as part of the solar arrays' alignment system for more efficient power generation. Each SARJ is a 10.3m circumference, nitrided 15-5PH steel race ring of triangular cross-section, with 12 sets of trundle bearing assemblies transferring load across the rolling joint. The SARJ mechanism rotates continuously and slowly - once every orbit, or every 90 minutes. In 2007, the starboard SARJ suffered a lubrication failure, resulting in severe damage (spalling) to one of the race ring surfaces. Extensive effort was conducted to prevent the port SARJ from suffering the same failure, and fortunately that effort was ultimately successful in also recovering the functionality of the starboard SARJ. The M&P engineering function was key in determining the cause of failure and the means for mechanism recovery. From a M&P lessons-learned perspective, observations are made concerning the original SARJ design parameters (boundary conditions), the perceived need for nitriding the race ring, the test conditions employed during qualification, the environmental controls used for the hardware preflight, and the lubrication robustness necessary for complex kinematic mechanisms expecting high-reliability and long-life.
Extraordinary Structured Noncoding RNAs Revealed by Bacterial Metagenome Analysis
Weinberg, Zasha; Perreault, Jonathan; Meyer, Michelle M.; Breaker, Ronald R.
2012-01-01
Estimates of the total number of bacterial species [1-3] suggest that existing DNA sequence databases carry only a tiny fraction of the total amount of DNA sequence space represented by this division of life. Indeed, environmental DNA samples have been shown to encode many previously unknown classes of proteins [4] and RNAs [5]. Bioinformatics searches [6-10] of genomic DNA from bacteria commonly identify novel noncoding RNAs (ncRNAs) [10-12] such as riboswitches [13,14]. In rare instances, RNAs that exhibit more extensive sequence and structural conservation across a wide range of bacteria are encountered [15,16]. Given that large structured RNAs are known to carry out complex biochemical functions such as protein synthesis and RNA processing reactions, identifying more RNAs of great size and intricate structure is likely to reveal additional biochemical functions that can be achieved by RNA. We applied an updated computational pipeline [17] to discover ncRNAs that rival the known large ribozymes in size and structural complexity or that are among the most abundant RNAs in bacteria that encode them. These RNAs would have been difficult or impossible to detect without examining environmental DNA sequences, suggesting that numerous RNAs with extraordinary size, structural complexity, or other exceptional characteristics remain to be discovered in unexplored sequence space. PMID:19956260
A fast time-difference inverse solver for 3D EIT with application to lung imaging.
Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut
2016-08-01
A class of sparse optimization techniques that require solely matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention in the recent decade for dealing with large-scale inverse problems. This study tailors application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce since the large number of degrees of freedom of the problem greatly increases storage space and reconstruction time. This study shows the great potential of the GPSR for large-size time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small-size anomalies.
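A minimal sketch of the matrix-vector-product-only pattern the abstract refers to, using iterative soft-thresholding (ISTA) as a stand-in. GPSR itself solves a bound-constrained reformulation by gradient projection rather than proximal gradient steps, so this is an illustration of the operator-only style, not the GPSR algorithm; the toy operator, regularization weight, and iteration count are assumptions.

```python
import numpy as np

def ista(A, At, y, tau, step, n_iter=500, x0=None):
    """Solve min_x 0.5*||y - A(x)||^2 + tau*||x||_1 using only the forward
    operator A and its adjoint At (no explicit matrix is ever formed)."""
    x = np.zeros_like(At(y)) if x0 is None else x0.copy()
    for _ in range(n_iter):
        grad = At(A(x) - y)                                       # data-term gradient
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * tau, 0.0)  # soft-threshold
    return x

# Toy demo: recover a sparse vector from a random linear operator
rng = np.random.default_rng(0)
M = rng.standard_normal((80, 200)) / np.sqrt(80)
A, At = (lambda v: M @ v), (lambda v: M.T @ v)
x_true = np.zeros(200)
x_true[[5, 50, 150]] = [1.0, -2.0, 0.5]
y = A(x_true) + 0.01 * rng.standard_normal(80)
step = 1.0 / np.linalg.norm(M, 2) ** 2            # safe step size for ISTA
x_hat = ista(A, At, y, tau=0.05, step=step)
print(np.flatnonzero(np.abs(x_hat) > 0.1))        # should recover 5, 50, 150
```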
The 1980 Large space systems technology. Volume 2: Base technology
NASA Technical Reports Server (NTRS)
Kopriver, F., III (Compiler)
1981-01-01
Technology pertinent to large antenna systems, technology related to large space platform systems, and base technology applicable to both antenna and platform systems are discussed. Design studies, structural testing results, and theoretical applications are presented with accompanying validation data. A total systems approach including controls, platforms, and antennas is presented as a cohesive, programmatic plan for large space systems.
NASA Technical Reports Server (NTRS)
Buchanan, H. J.
1983-01-01
Work performed in Large Space Structures Controls research and development program at Marshall Space Flight Center is described. Studies to develop a multilevel control approach which supports a modular or building block approach to the buildup of space platforms are discussed. A concept has been developed and tested in three-axis computer simulation utilizing a five-body model of a basic space platform module. Analytical efforts have continued to focus on extension of the basic theory and subsequent application. Consideration is also given to specifications to evaluate several algorithms for controlling the shape of Large Space Structures.
Developing closed life support systems for large space habitats
NASA Technical Reports Server (NTRS)
Phillips, J. M.; Harlan, A. D.; Krumhar, K. C.
1978-01-01
In anticipation of possible large-scale, long-duration space missions which may be conducted in the future, NASA has begun to investigate the research and technology development requirements to create life support systems for large space habitats. An analysis suggests the feasibility of a regeneration of food in missions which exceed four years duration. Regeneration of food in space may be justified for missions of shorter duration when large crews must be supported at remote sites such as lunar bases and space manufacturing facilities. It is thought that biological components consisting principally of traditional crop and livestock species will prove to be the most acceptable means of closing the food cycle. A description is presented of the preliminary results of a study of potential biological components for large space habitats. Attention is given to controlled ecosystems, Russian life support system research, controlled-environment agriculture, and the social aspects of the life-support system.
Cavity Resonator Wireless Power Transfer System for Freely Moving Animal Experiments.
Mei, Henry; Thackston, Kyle A; Bercich, Rebecca A; Jefferys, John G R; Irazoqui, Pedro P
2017-04-01
The goal of this paper is to create a large wireless powering arena for powering small devices implanted in freely behaving rodents. We design a cavity resonator based wireless power transfer (WPT) system and utilize our previously developed optimal impedance matching methodology to achieve effective WPT performance for operating sophisticated implantable devices, made with miniature receive coils (<8 mm in diameter), within a large volume (dimensions: 60.96 cm × 60.96 cm × 30 cm). We provide unique cavity design and construction methods that maintain the electromagnetic performance of the cavity while promoting its utility as a large animal husbandry environment. In addition, we develop a biaxial receive resonator system to address device orientation insensitivity within the cavity environment. Functionality is demonstrated with chronic experiments involving rats implanted with our custom designed bioelectric recording device. We demonstrate an average powering fidelity of 93.53% over nine recording sessions across nine weeks, indicating nearly continuous device operation for a freely behaving rat within the large cavity resonator space. We have developed and demonstrated a cavity resonator based WPT system for long term experiments involving freely behaving small animals. This cavity resonator based WPT system offers an effective and simple method for wirelessly powering miniaturized devices implanted in freely moving small animals within the largest space.
Introduction: The challenge of optimum integration of propulsion systems and large space structures
NASA Technical Reports Server (NTRS)
Carlisle, R. F.
1980-01-01
A functional matrix of possible propulsion system characteristics for deployable and assembled spacecraft structures shows that either electric propulsion or low-thrust chemical propulsion systems could provide the propulsion required. The trade-off considerations of a single propulsion engine or multiengines are outlined, and it is shown that a single-point engine is bounded by some upper limit of thrust for assembled spacecraft. The matrix also shows several additional functions that can be provided to the spacecraft if a propulsion system is an integral part of the spacecraft. A review of all of the functions that can be provided for a spacecraft by an integral propulsion system may result in the inclusion of the propulsion for several functions even if no single function were mandatory. Propulsion interface issues for each combination of engines are identified.
NASA Astrophysics Data System (ADS)
Vlah, Zvonimir; Seljak, Uroš; McDonald, Patrick; Okumura, Teppei; Baldauf, Tobias
2012-11-01
We develop a perturbative approach to redshift space distortions (RSD) using the phase space distribution function approach and apply it to the dark matter redshift space power spectrum and its moments. RSD can be written as a sum over density weighted velocity moments correlators, with the lowest order being density, momentum density and stress energy density. We use standard and extended perturbation theory (PT) to determine their auto and cross correlators, comparing them to N-body simulations. We show which of the terms can be modeled well with the standard PT and which need additional terms that include higher order corrections which cannot be modeled in PT. Most of these additional terms are related to the small scale velocity dispersion effects, the so called finger of god (FoG) effects, which affect some, but not all, of the terms in this expansion, and which can be approximately modeled using a simple physically motivated ansatz such as the halo model. We point out that there are several velocity dispersions that enter into the detailed RSD analysis with very different amplitudes, which can be approximately predicted by the halo model. In contrast to previous models our approach systematically includes all of the terms at a given order in PT and provides a physical interpretation for the small scale dispersion values. We investigate RSD power spectrum as a function of μ, the cosine of the angle between the Fourier mode and line of sight, focusing on the lowest order powers of μ and multipole moments which dominate the observable RSD power spectrum. Overall we find considerable success in modeling many, but not all, of the terms in this expansion. This is similar to the situation in real space, but predicting power spectrum in redshift space is more difficult because of the explicit influence of small scale dispersion type effects in RSD, which extend to very large scales.
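A minimal sketch of extracting the multipole moments that the abstract focuses on from a redshift-space power spectrum P(k, μ), using the standard Legendre projection P_ell(k) = (2*ell+1)/2 * Integral P(k, μ) L_ell(μ) dμ. The toy Kaiser-like spectrum and placeholder P_lin(k) shape are assumptions for demonstration only, not the paper's perturbation-theory model.

```python
import numpy as np
from scipy.special import eval_legendre

def multipoles(P_kmu, mu, ells=(0, 2, 4)):
    """Project P(k, mu) onto Legendre multipoles.
    P_kmu has shape (n_k, n_mu); mu is the grid of mu values in [-1, 1]."""
    return {ell: (2 * ell + 1) / 2.0 *
                 np.trapz(P_kmu * eval_legendre(ell, mu), mu, axis=1)
            for ell in ells}

# Toy Kaiser-like example: P(k, mu) = (1 + beta*mu^2)^2 * P_lin(k)
k = np.linspace(0.01, 0.3, 30)
mu = np.linspace(-1.0, 1.0, 401)
beta = 0.5
P_lin = 1.0 / k                               # placeholder spectrum shape
P_kmu = (1 + beta * mu[None, :] ** 2) ** 2 * P_lin[:, None]
moms = multipoles(P_kmu, mu)
print(moms[0][:3], moms[2][:3])               # monopole and quadrupole at low k
```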
Evans, Alan C; Janke, Andrew L; Collins, D Louis; Baillet, Sylvain
2012-08-15
The core concept within the field of brain mapping is the use of a standardized, or "stereotaxic", 3D coordinate frame for data analysis and reporting of findings from neuroimaging experiments. This simple construct allows brain researchers to combine data from many subjects such that group-averaged signals, be they structural or functional, can be detected above the background noise that would swamp subtle signals from any single subject. Where the signal is robust enough to be detected in individuals, it allows for the exploration of inter-individual variance in the location of that signal. From a larger perspective, it provides a powerful medium for comparison and/or combination of brain mapping findings from different imaging modalities and laboratories around the world. Finally, it provides a framework for the creation of large-scale neuroimaging databases or "atlases" that capture the population mean and variance in anatomical or physiological metrics as a function of age or disease. However, while the above benefits are not in question at first order, there are a number of conceptual and practical challenges that introduce second-order incompatibilities among experimental data. Stereotaxic mapping requires two basic components: (i) the specification of the 3D stereotaxic coordinate space, and (ii) a mapping function that transforms a 3D brain image from "native" space, i.e. the coordinate frame of the scanner at data acquisition, to that stereotaxic space. The first component is usually expressed by the choice of a representative 3D MR image that serves as target "template" or atlas. The native image is re-sampled from native to stereotaxic space under the mapping function that may have few or many degrees of freedom, depending upon the experimental design. The optimal choice of atlas template and mapping function depend upon considerations of age, gender, hemispheric asymmetry, anatomical correspondence, spatial normalization methodology and disease-specificity. Accounting, or not, for these various factors in defining stereotaxic space has created the specter of an ever-expanding set of atlases, customized for a particular experiment, that are mutually incompatible. These difficulties continue to plague the brain mapping field. This review article summarizes the evolution of stereotaxic space in term of the basic principles and associated conceptual challenges, the creation of population atlases and the future trends that can be expected in atlas evolution. Copyright © 2012 Elsevier Inc. All rights reserved.
Ground-Based and Space-Based Laser Beam Power Applications
NASA Technical Reports Server (NTRS)
Bozek, John M.
1995-01-01
A space power system based on laser beam power is sized to reduce mass, increase operational capabilities, and reduce complexity. The advantages of laser systems over solar-based systems are compared as a function of application. Power produced from the conversion of a laser beam that has been generated on the Earth's surface and beamed into cislunar space resulted in decreased round-trip time for Earth satellite electric propulsion tugs and a substantial landed mass savings for a lunar surface mission. The mass of a space-based laser system (generator in space and receiver near user) that beams down to an extraterrestrial airplane, orbiting spacecraft, surface outpost, or rover is calculated and compared to a solar-based system. In general, the advantage of low mass for these space-based laser systems is limited to high solar eclipse time missions at distances inside Jupiter. The power system mass is less in a continuously moving Mars rover or surface outpost using space-based laser technology than in a comparable solar-based power system, but only during dust storm conditions. Even at large distances from the Sun, the user-site portion of a space-based laser power system (e.g., the laser receiver component) is substantially less massive than a solar-based system with requisite on-board electrochemical energy storage.
Analysis of delay reducing and fuel saving sequencing and spacing algorithms for arrival traffic
NASA Technical Reports Server (NTRS)
Neuman, Frank; Erzberger, Heinz
1991-01-01
The air traffic control subsystem that performs sequencing and spacing is discussed. The function of the sequencing and spacing algorithms is to automatically plan the most efficient landing order and to assign optimally spaced landing times to all arrivals. Several algorithms are described and their statistical performance is examined. Sequencing brings order to an arrival sequence for aircraft. First-come-first-served sequencing (FCFS) establishes a fair order, based on estimated times of arrival, and determines proper separations. Because of the randomness of the arriving traffic, gaps will remain in the sequence of aircraft. Delays are reduced by time-advancing the leading aircraft of each group while still preserving the FCFS order. Tightly spaced groups of aircraft remain with a mix of heavy and large aircraft. Spacing requirements differ for different types of aircraft trailing each other. Traffic is reordered slightly to take advantage of this spacing criterion, thus shortening the groups and reducing average delays. For heavy traffic, delays for different traffic samples vary widely, even when the same set of statistical parameters is used to produce each sample. This report supersedes NASA TM-102795 on the same subject. It includes a new method of time-advance as well as an efficient method of sequencing and spacing for two dependent runways.
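A minimal sketch of first-come-first-served sequencing with weight-class-dependent separation and a simple time-advance rule, as described above. The separation times, aircraft data, and the per-aircraft time-advance limit are illustrative assumptions; the actual algorithms in the report also reorder tightly spaced groups and handle two dependent runways, which this sketch does not attempt.

```python
from dataclasses import dataclass

# Minimum time separation (seconds) required behind a leader of a given
# weight class for a given follower class; the numbers are illustrative only.
SEPARATION = {("heavy", "heavy"): 90, ("heavy", "large"): 120,
              ("large", "heavy"): 60, ("large", "large"): 80}

@dataclass
class Arrival:
    ident: str
    eta: float          # estimated time of arrival, seconds
    weight_class: str   # "heavy" or "large"
    max_advance: float  # how much this aircraft can be time-advanced

def fcfs_schedule(arrivals):
    """Assign landing times first-come-first-served on the ETAs, enforcing
    pairwise separation and time-advancing aircraft to close gaps where the
    aircraft can absorb the advance."""
    order = sorted(arrivals, key=lambda a: a.eta)
    times, prev = {}, None
    for a in order:
        if prev is None:
            times[a.ident] = a.eta
        else:
            sep = SEPARATION[(prev.weight_class, a.weight_class)]
            earliest = times[prev.ident] + sep
            # Time-advance: if a gap remains, land as early as allowed.
            times[a.ident] = max(a.eta - a.max_advance, earliest)
        prev = a
    return times

demo = [Arrival("AC1", 0, "heavy", 30), Arrival("AC2", 200, "large", 30),
        Arrival("AC3", 230, "heavy", 30)]
print(fcfs_schedule(demo))
```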
Synthesis of a large communications aperture using small antennas
NASA Technical Reports Server (NTRS)
Resch, George M.; Cwik, T. W.; Jamnejad, V.; Logan, R. T.; Miller, R. B.; Rogstad, Dave H.
1994-01-01
In this report we compare the cost of an array of small antennas to that of a single large antenna assuming both the array and single large antenna have equal performance and availability. The single large antenna is taken to be one of the 70-m antennas of the Deep Space Network. The cost of the array is estimated as a function of the array element diameter for three different values of system noise temperature corresponding to three different packaging schemes for the first amplifier. Array elements are taken to be fully steerable paraboloids and their cost estimates were obtained from commercial vendors. Array loss mechanisms and calibration problems are discussed. For array elements in the range 3 - 35 m there is no minimum in the cost versus diameter curve for the three system temperatures that were studied.
Enhanced solar energy options using earth-orbiting mirrors
NASA Technical Reports Server (NTRS)
Gilbreath, W. P.; Billman, K. W.; Bowen, S. W.
1978-01-01
A system of orbiting space reflectors is described, analyzed, and shown to economically provide nearly continuous insolation to preselected ground sites, producing benefits hitherto lacking in conventional solar farms and leading to large reductions in energy costs for such installations. Free-flying planar mirrors of about 1 sq km are shown to be optimum and can be made at under 10 g/sq m of surface, thus minimizing material needs and space transportation costs. Models are developed for both the design of such mirrors and for the analysis of expected ground insolation as a function of orbital parameters, time, and site location. Various applications (agricultural, solar-electric production, weather enhancement, etc.) are described.
Abundant stable gauge field hair for black holes in anti-de Sitter space.
Baxter, J E; Helbling, Marc; Winstanley, Elizabeth
2008-01-11
We present new hairy black hole solutions of SU(N) Einstein-Yang-Mills (EYM) theory in asymptotically anti-de Sitter (AdS) space. These black holes are described by N+1 independent parameters and have N-1 independent gauge field degrees of freedom. Solutions in which all gauge field functions have no zeros exist for all N, and for a sufficiently large (and negative) cosmological constant. At least some of these solutions are shown to be stable under classical, linear, spherically symmetric perturbations. Therefore there is no upper bound on the amount of stable gauge field hair with which a black hole in AdS can be endowed.
Chaos in charged AdS black hole extended phase space
NASA Astrophysics Data System (ADS)
Chabab, M.; El Moumni, H.; Iraoui, S.; Masmar, K.; Zhizeh, S.
2018-06-01
We present an analytical study of chaos in a charged black hole in the extended phase space in the context of the Poincaré-Melnikov theory. Along with some background on dynamical systems, we compute the relevant Melnikov function and find its zeros. Then we analyse these zeros either to identify the temporal chaos in the spinodal region, or to observe spatial chaos in the small/large black hole equilibrium configuration. As a byproduct, we derive a constraint on the black hole's charge required to produce chaotic behaviour. To the best of our knowledge, this is the first endeavour to understand the correlation between chaos and phase picture in black holes.
Space Observations for Global Change
NASA Technical Reports Server (NTRS)
Rasool, S. I.
1991-01-01
There is now compelling evidence that man's activities are changing both the composition of the atmosphere and the global landscape quite drastically. The consequences of these changes for the global climate of the 21st century are currently a hotly debated subject. Global models of a coupled Earth-ocean-atmosphere system are still very primitive and progress in this area appears largely data limited, especially over the global biosphere. A concerted effort on monitoring biospheric functions on scales from pixels to global and days to decades needs to be coordinated on an international scale in order to address the questions related to global change. An international program of space observations and ground research is described.
NASA Astrophysics Data System (ADS)
Boffi, Nicholas M.; Jain, Manish; Natan, Amir
2016-02-01
A real-space high order finite difference method is used to analyze the effect of spherical domain size on the Hartree-Fock (and density functional theory) virtual eigenstates. We show the domain size dependence of both positive and negative virtual eigenvalues of the Hartree-Fock equations for small molecules. We demonstrate that positive states behave like a particle in a spherical well and show how they approach zero. For the negative eigenstates, we show that large domains are needed to get the correct eigenvalues. We compare our results to those of Gaussian basis sets and draw some conclusions for real-space, basis-set, and plane-wave calculations.
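A small numerical illustration of the particle-in-a-spherical-well behaviour mentioned above: for a hard spherical well, the l = 0 eigenvalues scale as 1/R^2 and approach zero as the domain radius grows. Atomic units and the l = 0 restriction are simplifying assumptions; the paper's calculations are for real molecules, not this textbook model.

```python
import numpy as np

def positive_eigenvalues(radius, n_states=5):
    """Lowest l=0 eigenvalues (atomic units) of a particle confined in a hard
    spherical well of the given radius: E_n = (n*pi/R)^2 / 2.
    They scale as 1/R^2 and approach zero as the domain grows."""
    n = np.arange(1, n_states + 1)
    return (n * np.pi / radius) ** 2 / 2.0

for R in (10.0, 20.0, 40.0):
    print(R, positive_eigenvalues(R)[:3])
```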
Emulation: A fast stochastic Bayesian method to eliminate model space
NASA Astrophysics Data System (ADS)
Roberts, Alan; Hobbs, Richard; Goldstein, Michael
2010-05-01
Joint inversion of large 3D datasets has been the goal of geophysicists ever since the datasets first started to be produced. There are two broad approaches to this kind of problem: traditional deterministic inversion schemes and more recently developed Bayesian search methods, such as MCMC (Markov Chain Monte Carlo). However, both of these kinds of schemes have proved prohibitively expensive, both in computing power and time cost, due to the normally very large model space which needs to be searched using forward model simulators which take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as in astronomy, where the evolution of the universe has been modelled using this technique, and in the petroleum industry, where history matching of hydrocarbon reservoirs is carried out. The method of emulation involves building a fast-to-compute uncertainty-calibrated approximation to a forward model simulator. We do this by modelling the output data from a number of forward simulator runs by a computationally cheap function, and then fitting the coefficients defining this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use this to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset. We can thus much more tightly constrain the input model space for a deterministic inversion or MCMC method. By using this technique jointly on several datasets (specifically seismic, gravity, and magnetotelluric (MT) data describing the same region), we can include in our modelling uncertainties in the data measurements, the relationships between the various physical parameters involved, as well as the model representation uncertainty, and at the same time further reduce the range of plausible models to several percent of the original model space. Being stochastic in nature, the output posterior parameter distributions also allow our understanding of, and beliefs about, a geological region to be objectively updated, with full assessment of uncertainties, and so the emulator is also an inversion-type tool in its own right, with the advantage (as with any Bayesian method) that our uncertainties from all sources (both data and model) can be fully evaluated.
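A minimal sketch of the emulator-and-screening workflow described above: run the expensive simulator on a modest design, fit a cheap uncertainty-calibrated surrogate, and discard candidate models whose emulated output is implausibly far from the observed datum. The toy simulator, Gaussian-process surrogate, and implausibility cut-off of 3 (a common choice in the history-matching literature) are assumptions, not the specific polynomial emulator or geophysical datasets used by the authors.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulator(m):
    # Stand-in for a slow forward simulator mapping a model vector to data.
    return np.sin(3.0 * m[0]) + 0.5 * m[1] ** 2

rng = np.random.default_rng(0)

# 1. Run the full simulator on a modest design of model-space points.
design = rng.uniform(-1.0, 1.0, size=(40, 2))
outputs = np.array([expensive_simulator(m) for m in design])

# 2. Fit a cheap, uncertainty-calibrated approximation (the emulator).
emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), alpha=1e-6)
emulator.fit(design, outputs)

# 3. Screen a large candidate set: keep only models whose emulated output is
#    plausibly close to the observed datum, given emulator and data errors.
observed, obs_err = 0.3, 0.05
candidates = rng.uniform(-1.0, 1.0, size=(100000, 2))
mean, std = emulator.predict(candidates, return_std=True)
implausibility = np.abs(mean - observed) / np.sqrt(std ** 2 + obs_err ** 2)
plausible = candidates[implausibility < 3.0]
print(f"{100.0 * len(plausible) / len(candidates):.1f}% of model space retained")
```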
NASA Astrophysics Data System (ADS)
Hahn, Oliver; Angulo, Raul E.
2016-01-01
N-body simulations are essential for understanding the formation and evolution of structure in the Universe. However, the discrete nature of these simulations affects their accuracy when modelling collisionless systems. We introduce a new approach to simulate the gravitational evolution of cold collisionless fluids by solving the Vlasov-Poisson equations in terms of adaptively refineable `Lagrangian phase-space elements'. These geometrical elements are piecewise smooth maps between Lagrangian space and Eulerian phase-space and approximate the continuum structure of the distribution function. They allow for dynamical adaptive splitting to accurately follow the evolution even in regions of very strong mixing. We discuss in detail various one-, two- and three-dimensional test problems to demonstrate the performance of our method. Its advantages compared to N-body algorithms are: (I) explicit tracking of the fine-grained distribution function, (II) natural representation of caustics, (III) intrinsically smooth gravitational potential fields, thus (IV) eliminating the need for any type of ad hoc force softening. We show the potential of our method by simulating structure formation in a warm dark matter scenario. We discuss how spurious collisionality and large-scale discreteness noise of N-body methods are both strongly suppressed, which eliminates the artificial fragmentation of filaments. Therefore, we argue that our new approach improves on the N-body method when simulating self-gravitating cold and collisionless fluids, and is the first method that allows us to explicitly follow the fine-grained evolution in six-dimensional phase-space.
An event map of memory space in the hippocampus
Deuker, Lorena; Bellmund, Jacob LS; Navarro Schröder, Tobias; Doeller, Christian F
2016-01-01
The hippocampus has long been implicated in both episodic and spatial memory; however, these mnemonic functions have been traditionally investigated in separate research strands. Theoretical accounts and rodent data suggest a common mechanism for spatial and episodic memory in the hippocampus by providing an abstract and flexible representation of the external world. Here, we monitor the de novo formation of such a representation of space and time in humans using fMRI. After learning spatio-temporal trajectories in a large-scale virtual city, subject-specific neural similarity in the hippocampus scaled with the remembered proximity of events in space and time. Crucially, the structure of the entire spatio-temporal network was reflected in neural patterns. Our results provide evidence for a common coding mechanism underlying spatial and temporal aspects of episodic memory in the hippocampus and shed new light on its role in interleaving multiple episodes in a neural event map of memory space. DOI: http://dx.doi.org/10.7554/eLife.16534.001 PMID:27710766
Cavity polariton in a quasilattice of qubits and its selective radiation
NASA Astrophysics Data System (ADS)
Ian, Hou; Liu, Yu-xi
2014-04-01
In a circuit quantum electrodynamic system, a chain of N qubits inhomogeneously coupled to a cavity field forms a mesoscopic quasilattice, which is characterized by its degree of deformation from a normal lattice. This deformation is a function of the relative spacing, that is, the ratio of the qubit spacing to the cavity wavelength. A polariton mode arises in the quasilattice as the dressed mode of the lattice excitation by the cavity photon. We show that the transition probability of the polariton mode is either enhanced or decreased compared to that of a single qubit by the deformation, giving a selective spontaneous radiation spectrum. Further, unlike a microscopic lattice with large-N limit and nearly zero relative spacing, the polariton in the quasilattice has an uneven decay rate over the relative spacing. We show that this unevenness coincides with the cooperative emission effect expected from the superradiance model, where alternative excitations in the qubits of the lattice result in maximum decay.
da Silva, Paulo Sérgio Lucas; Waisberg, Daniel Reis
2011-05-01
Pseudoaneurysm of the cervical internal carotid artery is a very rare, potentially fatal complication of a neck space infection in children, associated with high mortality and morbidity. A 3-year-old boy presented with spontaneous massive epistaxis 45 days after a deep neck space infection caused by a peritonsillar abscess. During nasopharyngeal packing, he developed cardiac arrest. Intra-arterial angiography was then performed, revealing a large pseudoaneurysm. Endovascular treatment using detachable balloons achieved complete exclusion of the pseudoaneurysm. The child made an uneventful recovery and was discharged with mild left hemiparesis and no deficit of sensory or cognitive functions. Pseudoaneurysms of the internal carotid artery after a deep neck space infection can be associated with delayed and potentially fatal massive epistaxis. Furthermore, a regional (ie, extranasal) blood vessel should be promptly investigated when there are signs of hypovolemic shock. A high level of suspicion and definitive treatment are essential for successful management of these patients.
Orbital Debris Assessment Testing in the AEDC Range G
NASA Technical Reports Server (NTRS)
Polk, Marshall; Woods, David; Roebuck, Brian; Opiela, John; Sheaffer, Patti; Liou, J.-C.
2015-01-01
The space environment presents many hazards for satellites and spacecraft. One of the major hazards is hypervelocity impacts from uncontrolled man-made space debris. Arnold Engineering Development Complex (AEDC), The National Aeronautics and Space Administration (NASA), The United States Air Force Space and Missile Systems Center (SMC), the University of Florida, and The Aerospace Corporation configured a large ballistic range to perform a series of hypervelocity destructive impact tests in order to better understand the effects of space collisions. The test utilized AEDC's Range G light gas launcher, which is capable of firing projectiles up to 7 km/s. A non-functional full-scale representation of a modern satellite called the DebriSat was destroyed in the enclosed range environment. Several modifications to the range facility were made to ensure quality data was obtained from the impact events. The facility modifications were intended to provide a high impact energy to target mass ratio (>200 J/g), a non-damaging method of debris collection, and an instrumentation suite capable of providing information on the physics of the entire impact event.
Guthoff, R F; Schmidt, W; Buss, D; Schultze, C; Ruppin, U; Stachs, O; Sternberg, K; Klee, D; Chichkov, B; Schmitz, K-P
2009-09-01
The purpose of this study was to develop a microstent with valve function, which normalizes the intraocular pressure (IOP) and drains into the suprachoroidal space. In comparison to the subconjunctival space the suprachoroidal space is attributed with less fibroblast colonization and activity. Different glaucoma drainage devices were idealized as tubes and the flow rates were calculated according to Hagen-Poiseuille. The dimensions of the ideal glaucoma implant were modified with respect to an aqueous humor production of 2 microl/min and the different outflow pathways. Specific components of glaucoma drainage devices at the inlet and outlet were not included. The volume flow calculation of the tested glaucoma implants showed that the dimensions of all lumina were too large to prevent postoperative hypotension. A maximum inner tube diameter of 53 microm was calculated for drainage into the suprachoroidal space based on an intra-ocular pressure (IOP) of 20 mmHg. The glaucoma microstent has to guarantee an aqueous humor flow for physiological IOP. An increase of IOP has to be regulated to physiological pressure conditions by the microvalve.
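A minimal sketch of the Hagen-Poiseuille sizing step described above: solving Q = pi * r^4 * dP / (8 * mu * L) for the largest inner diameter that still limits the flow to the aqueous humor production rate at the target IOP. The stent length and aqueous humor viscosity below are assumed values; with plausible choices the result lands in the same few-tens-of-micrometres range as the 53 micrometre figure quoted in the abstract.

```python
import math

def max_inner_diameter(flow_ul_per_min, pressure_mmhg, length_mm, viscosity_pa_s):
    """Largest tube inner diameter (micrometres) that limits the flow to the
    given value at the given driving pressure, from Hagen-Poiseuille:
        Q = pi * r**4 * dP / (8 * mu * L)  ->  r = (8*mu*L*Q / (pi*dP))**0.25
    """
    q = flow_ul_per_min * 1e-9 / 60.0        # m^3/s
    dp = pressure_mmhg * 133.322             # Pa
    length = length_mm * 1e-3                # m
    r = (8.0 * viscosity_pa_s * length * q / (math.pi * dp)) ** 0.25
    return 2.0 * r * 1e6                     # micrometres

# Assumed stent length and aqueous humor viscosity (illustrative values only)
print(max_inner_diameter(flow_ul_per_min=2.0, pressure_mmhg=20.0,
                         length_mm=10.0, viscosity_pa_s=1.0e-3))
```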
Preliminary Concept of Operations for the Deep Space Array-Based Network
NASA Astrophysics Data System (ADS)
Bagri, D. S.; Statman, J. I.
2004-05-01
The Deep Space Array-Based Network (DSAN) will be an array-based system, part of a greater than 1000 times increase in the downlink/telemetry capability of the Deep Space Network. The key function of the DSAN is provision of cost-effective, robust telemetry, tracking, and command services to the space missions of NASA and its international partners. This article presents an expanded approach to the use of an array-based system. Instead of using the array as an element in the existing Deep Space Network (DSN), relying to a large extent on the DSN infrastructure, we explore a broader departure from the current DSN, using fewer elements of the existing DSN, and establishing a more modern concept of operations. For example, the DSAN will have a single 24 x 7 monitor and control (M&C) facility, while the DSN has four 24 x 7 M&C facilities. The article gives the architecture of the DSAN and its operations philosophy. It also briefly describes the customer's view of operations, operations management, logistics, anomaly analysis, and reporting.
From free fields to AdS space. II
NASA Astrophysics Data System (ADS)
Gopakumar, Rajesh
2004-07-01
We continue with the program of paper I [Phys. Rev. D 70, 025009 (2004)] to implement open-closed string duality on free gauge field theory (in the large-N limit). In this paper we consider correlators such as ⟨∏_{i=1}^{n} Tr Φ^{J_i}(x_i)⟩. The Schwinger parametrization of this n-point function exhibits a partial gluing up into a set of basic skeleton graphs. We argue that the moduli space of the planar skeleton graphs is exactly the same as the moduli space of genus zero Riemann surfaces with n holes. In other words, we can explicitly rewrite the n-point (planar) free-field correlator as an integral over the moduli space of a sphere with n holes. A preliminary study of the integrand also indicates compatibility with a string theory on AdS space. The details of our argument are quite insensitive to the specific form of the operators and generalize to diagrams of a higher genus as well. We take this as evidence of the field theory's ability to reorganize itself into a string theory.
NASA Technical Reports Server (NTRS)
1972-01-01
An analysis of the nuclear safety aspects (design and operational considerations) in the transport of nuclear payloads to and from earth orbit by the space shuttle is presented. Three representative nuclear payloads used in the study were: (1) the zirconium hydride reactor Brayton power module, (2) the large isotope Brayton power system and (3) small isotopic heat sources which can be a part of an upper stage or part of a logistics module. Reference data on the space shuttle and nuclear payloads are presented in an appendix. Safety oriented design and operational requirements were identified to integrate the nuclear payloads in the shuttle mission. Contingency situations were discussed and operations and design features were recommended to minimize the nuclear hazards. The study indicates the safety, design and operational advantages in the use of a nuclear payload transfer module. The transfer module can provide many of the safety related support functions (blast and fragmentation protection, environmental control, payload ejection) minimizing the direct impact on the shuttle.
Safe landing area determination for a Moon lander by reachability analysis
NASA Astrophysics Data System (ADS)
Arslantaş, Yunus Emre; Oehlschlägel, Thimo; Sagliano, Marco
2016-11-01
In the last decades, developments in space technology paved the way to more challenging missions like asteroid mining, space tourism and human expansion into the Solar System. These missions result in difficult tasks such as guidance schemes for re-entry, landing on celestial bodies and implementation of large angle maneuvers for spacecraft. There is a need for a safety system to increase the robustness and success of these missions. Reachability analysis meets this requirement by obtaining the set of all achievable states for a dynamical system starting from an initial condition with given admissible control inputs of the system. This paper proposes an algorithm for the approximation of nonconvex reachable sets (RS) by using optimal control. Therefore, a subset of the state space is discretized by equidistant points and for each grid point a distance function is defined. This distance function acts as an objective function for a related optimal control problem (OCP). Each infinite dimensional OCP is transcribed into a finite dimensional Nonlinear Programming Problem (NLP) by using Pseudospectral Methods (PSM). Finally, the NLPs are solved using available tools, resulting in approximated reachable sets with information about the states of the dynamical system at these grid points. The algorithm is applied on a generic Moon landing mission. The proposed method computes approximated reachable sets and the attainable safe landing region with information about propellant consumption and time.
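A toy stand-in for the grid-plus-distance-function idea described above: for each grid point, the minimum achievable distance from the final state is obtained by solving a small optimal control problem, and points whose minimum distance is near zero are marked reachable. Direct shooting on a double integrator with scipy.optimize replaces the pseudospectral transcription used in the paper; the dynamics, horizon, control bounds, and tolerance are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

DT, N_STEPS, U_MAX = 0.1, 20, 1.0

def final_state(u, x0=np.array([0.0, 0.0])):
    """Forward-Euler rollout of a double integrator (position, velocity)
    under a piecewise-constant control sequence u."""
    x = x0.astype(float).copy()
    for uk in u:
        x = x + DT * np.array([x[1], uk])
    return x

def distance_to_point(p):
    """Minimum achievable distance between the final state and grid point p
    over admissible control sequences (direct shooting with bounded controls)."""
    res = minimize(lambda u: np.sum((final_state(u) - p) ** 2),
                   x0=np.zeros(N_STEPS), method="L-BFGS-B",
                   bounds=[(-U_MAX, U_MAX)] * N_STEPS)
    return np.sqrt(res.fun)

# Discretize a window of the state space and mark points that are reachable.
grid = [np.array([px, pv]) for px in np.linspace(-3, 3, 13)
                           for pv in np.linspace(-3, 3, 13)]
reachable = [p for p in grid if distance_to_point(p) < 1e-2]
print(f"{len(reachable)} of {len(grid)} grid points reachable in {N_STEPS * DT:.1f} s")
```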
Neuromuscular adaptation to actual and simulated weightlessness
NASA Technical Reports Server (NTRS)
Edgerton, V. R.; Roy, R. R.
1994-01-01
The chronic "unloading" of the neuromuscular system during spaceflight has detrimental functional and morphological effects. Changes in the metabolic and mechanical properties of the musculature can be attributed largely to the loss of muscle protein and the alteration in the relative proportion of the proteins in skeletal muscle, particularly in the muscles that have an antigravity function under normal loading conditions. These adaptations could result in decrements in the performance of routine or specialized motor tasks, both of which may be critical for survival in an altered gravitational field, i.e., during spaceflight and during return to 1 G. For example, the loss in extensor muscle mass requires a higher percentage of recruitment of the motor pools for any specific motor task. Thus, a faster rate of fatigue will occur in the activated muscles. These consequences emphasize the importance of developing techniques for minimizing muscle loss during spaceflight, at least in preparation for the return to 1 G after spaceflight. New insights into the complexity and the interactive elements that contribute to the neuromuscular adaptations to space have been gained from studies of the role of exercise and/or growth factors as countermeasures of atrophy. The present chapter illustrates the inevitable interactive effects of neural and muscular systems in adapting to space. It also describes the considerable progress that has been made toward the goal of minimizing the functional impact of the stimuli that induce the neuromuscular adaptations to space.
Space Industrialization: The Mirage of Abundance.
ERIC Educational Resources Information Center
Deudney, Daniel
1982-01-01
Large-scale space industrialization is not a viable solution to the population, energy, and resource problems of earth. The expense and technological difficulties involved in the development and maintenance of space manufacturing facilities, space colonies, and large-scale satellites for solar power are discussed. (AM)
Space construction system analysis. Part 2: Space construction experiments concepts
NASA Technical Reports Server (NTRS)
Boddy, J. A.; Wiley, L. F.; Gimlich, G. W.; Greenberg, H. S.; Hart, R. J.; Lefever, A. E.; Lillenas, A. N.; Totah, R. S.
1980-01-01
Technology areas in the orbital assembly of large space structures are addressed. The areas included structures, remotely operated assembly techniques, and control and stabilization. Various large space structure design concepts are reviewed and their construction procedures and requirements are identified.
Anesthesia for cesarean delivery in an achondroplastic dwarf: a case report.
Huang, Jeffrey; Babins, Noah
2008-12-01
There are more than 100 different types of dwarfism. Achondroplasia is the most common form of this rare condition. The incidence of achondroplasia in the United States is about 15 per 1 million births. Although inherited as an autosomal dominant condition, 80% of cases result from spontaneous mutation. Underdevelopment and premature ossification of bones result in characteristic craniofacial and spinal abnormalities. Limited neck extension, foramen magnum stenosis, a large tongue, large mandible, and atlanto-axial instability can lead to increased difficulty of airway management. Severe kyphosis, scoliosis, spinal stenosis, and unpredictable spread of local anesthetics in the epidural space and subarachnoid space lead to reluctance to apply regional anesthesia in this patient group. In addition, pregnancy in a person with achondroplasia poses more problems for anesthetic selection. These problems include potential hypoxia, severely decreased functional residual capacity, risk of gastric aspiration, and supine hypotension. In this case report, we describe the anesthetic management of an achondroplastic dwarf who underwent cesarean delivery.
DataWarrior: an open-source program for chemistry aware data visualization and analysis.
Sander, Thomas; Freyss, Joel; von Korff, Modest; Rufener, Christian
2015-02-23
Drug discovery projects in the pharmaceutical industry accumulate thousands of chemical structures and tens of thousands of data points from a dozen or more biological and pharmacological assays. A sufficient interpretation of the data requires understanding which molecular families are present, which structural motifs correlate with measured properties, and which tiny structural changes cause large property changes. Data visualization and analysis software with sufficient chemical intelligence to support chemists in this task is rare. In an attempt to contribute to filling the gap, we released our in-house developed chemistry aware data analysis program DataWarrior for free public use. This paper gives an overview of DataWarrior's functionality and architecture. Exemplarily, a new unsupervised, 2-dimensional scaling algorithm is presented, which employs vector-based or nonvector-based descriptors to visualize the chemical or pharmacophore space of even large data sets. DataWarrior uses this method to interactively explore chemical space, activity landscapes, and activity cliffs.
Chen, Guan-Liang; Shau, Shi-Min; Juang, Tzong-Yuan; Lee, Rong-Ho; Chen, Chih-Ping; Suen, Shing-Yi; Jeng, Ru-Jong
2011-12-06
In this study, we used direct molecular exfoliation for the rapid, facile, large-scale fabrication of single-layered graphene oxide nanosheets (GOSs). Using macromolecular polyaniline (PANI) as a layered space enlarger, we readily and rapidly synthesized individual GOSs at room temperature through the in situ polymerization of aniline on the 2D GOS platform. The chemically modified GOS platelets formed unique 2D-layered GOS/PANI hybrids, with the PANI nanorods embedded between the GO interlayers and extended over the GO surface. X-ray diffraction revealed that intergallery expansion occurred in the GO basal spacing after the PANI nanorods had anchored and grown onto the surface of the GO layer. Transparent folding GOSs were, therefore, observed in transmission electron microscopy images. GOS/PANI nanohybrids possessing high conductivities and large work functions have the potential for application as electrode materials in optoelectronic devices. Our dispersion/exfoliation methodology is a facile means of preparing individual GOS platelets with high throughput, potentially expanding the applicability of nanographene oxide materials. © 2011 American Chemical Society
Membrane Shell Reflector Segment Antenna
NASA Technical Reports Server (NTRS)
Fang, Houfei; Im, Eastwood; Lin, John; Moore, James
2012-01-01
The mesh reflector is the only type of large, in-space deployable antenna that has successfully flown in space. However, state-of-the-art large deployable mesh antenna systems are RF-frequency-limited by both global shape accuracy and local surface quality. The limitations of mesh reflectors stem from two factors. First, at higher frequencies, the porosity and surface roughness of the mesh result in loss and scattering of the signal. Second, the mesh material does not have any bending stiffness and thus cannot be formed into true parabolic (or other desired) shapes. To advance the deployable reflector technology at high RF frequencies from the current state-of-the-art, significant improvements need to be made in three major aspects: a high-stability and high-precision deployable truss; a continuously curved RF reflecting surface (the function of the surface as well as its first derivative are both continuous); and an RF reflecting surface made of a continuous material. To meet these three requirements, the Membrane Shell Reflector Segment (MSRS) antenna was developed.
500 Gb/s free-space optical transmission over strong atmospheric turbulence channels.
Qu, Zhen; Djordjevic, Ivan B
2016-07-15
We experimentally demonstrate a high-spectral-efficiency, large-capacity, featured free-space-optical (FSO) transmission system by using low-density, parity-check (LDPC) coded quadrature phase shift keying (QPSK) combined with orbital angular momentum (OAM) multiplexing. The strong atmospheric turbulence channel is emulated by two spatial light modulators on which four randomly generated azimuthal phase patterns yielding the Andrews spectrum are recorded. The validity of such an approach is verified by reproducing the intensity distribution and irradiance correlation function (ICF) from the full-scale simulator. Excellent agreement of experimental, numerical, and analytical results is found. To reduce the phase distortion induced by the turbulence emulator, the inexpensive wavefront sensorless adaptive optics (AO) is used. To deal with remaining channel impairments, a large-girth LDPC code is used. To further improve the aggregate data rate, the OAM multiplexing is combined with WDM, and 500 Gb/s optical transmission over the strong atmospheric turbulence channels is demonstrated.
Zerze, Gül H; Miller, Cayla M; Granata, Daniele; Mittal, Jeetain
2015-06-09
Intrinsically disordered proteins (IDPs), which are expected to be largely unstructured under physiological conditions, make up a large fraction of eukaryotic proteins. Molecular dynamics simulations have been utilized to probe structural characteristics of these proteins, which are not always easily accessible to experiments. However, exploration of the conformational space by brute force molecular dynamics simulations is often limited by short time scales. Present literature provides a number of enhanced sampling methods to explore protein conformational space in molecular simulations more efficiently. In this work, we present a comparison of two enhanced sampling methods: temperature replica exchange molecular dynamics and bias exchange metadynamics. By investigating both the free energy landscape as a function of pertinent order parameters and the per-residue secondary structures of an IDP, namely, human islet amyloid polypeptide, we found that the two methods yield similar results as expected. We also highlight the practical difference between the two methods by describing the path that we followed to obtain both sets of data.
A near term space demonstration program for large structures
NASA Technical Reports Server (NTRS)
Nathan, C. A.
1978-01-01
Applications involving ultralarge structures in space will require some form of in-space fabrication and assembly because of launch vehicle payload and volume limitations. The findings of a recently completed NASA sponsored study related to an orbital construction demonstration are reported. It is shown how a relatively small construction facility which is assembled in three shuttle flights can substantially advance space construction know-how and provide the nation with a permanent shuttle-tended facility that can further advance large structures technologies and provide a construction capability for deployment of large structural systems envisioned for the late 1980s. The large structures applications identified are related to communications, navigation, earth observation, energy systems, radio astronomy, illumination, space colonization, and space construction.
Cooperativity to increase Turing pattern space for synthetic biology.
Diambra, Luis; Senthivel, Vivek Raj; Menendez, Diego Barcena; Isalan, Mark
2015-02-20
It is hard to bridge the gap between mathematical formulations and biological implementations of Turing patterns, yet this is necessary for both understanding and engineering these networks with synthetic biology approaches. Here, we model a reaction-diffusion system with two morphogens in a monostable regime, inspired by components that we recently described in a synthetic biology study in mammalian cells [1]. The model employs a single promoter to express both the activator and inhibitor genes and produces Turing patterns over large regions of parameter space, using biologically interpretable Hill function reactions. We applied a stability analysis and identified rules for choosing biologically tunable parameter relationships to increase the likelihood of successful patterning. We show how to control Turing pattern sizes and time evolution by manipulating the values for production and degradation relationships. More importantly, our analysis predicts that steep dose-response functions arising from cooperativity are mandatory for Turing patterns. Greater steepness increases parameter space and even reduces the requirement for differential diffusion between activator and inhibitor. These results demonstrate some of the limitations of linear scenarios for reaction-diffusion systems and will help to guide projects to engineer synthetic Turing patterns.
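The central claim, that steep (cooperative) Hill-type dose-response functions widen the parameter region producing Turing patterns, can be explored with a minimal one-dimensional activator-inhibitor simulation. The Python/NumPy sketch below uses generic Hill production terms; the rate forms, Hill exponents, and diffusion coefficients are illustrative assumptions, not the published mammalian-cell model.

import numpy as np

# Minimal 1D activator-inhibitor reaction-diffusion sketch with Hill-function
# production terms. All parameter values are illustrative assumptions.
def hill_act(x, K, n):
    return x**n / (K**n + x**n)              # activating (cooperative) Hill term

def hill_rep(x, K, n):
    return K**n / (K**n + x**n)              # repressing Hill term

L, dx, dt, steps = 100, 1.0, 0.01, 50000
Da, Dh = 0.5, 10.0                            # assumed differential diffusion
rng = np.random.default_rng(0)
a = 1.0 + 0.01 * rng.random(L)                # activator with small perturbation
h = 1.0 + 0.01 * rng.random(L)                # inhibitor

def laplacian(u):
    return np.roll(u, 1) + np.roll(u, -1) - 2.0 * u   # periodic boundaries

for _ in range(steps):
    prod_a = hill_act(a, 1.0, 4) * hill_rep(h, 1.0, 4)  # steep dose-response (n = 4)
    prod_h = hill_act(a, 1.0, 4)
    a += dt * (prod_a - a + Da * laplacian(a) / dx**2)
    h += dt * (prod_h - h + Dh * laplacian(h) / dx**2)

# A spatially periodic activator profile at the end of the run indicates that the
# assumed parameters lie inside the Turing regime.
print("activator min/max:", float(a.min()), float(a.max()))

Rerunning with a smaller Hill exponent or equal diffusion coefficients gives a crude picture of how cooperativity and differential diffusion change the size of the patterning region.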
NASA Technical Reports Server (NTRS)
Yam, Y.; Lang, J. H.; Johnson, T. L.; Shih, S.; Staelin, D. H.
1983-01-01
A model reduction procedure based on aggregation with respect to sensor and actuator influences rather than modes is presented for large systems of coupled second-order differential equations. Perturbation expressions which can predict the effects of spillover on both the aggregated and residual states are derived. These expressions lead to the development of control system design constraints which are sufficient to guarantee, to within the validity of the perturbations, that the residual states are not destabilized by control systems designed from the reduced model. A numerical example is provided to illustrate the application of the aggregation and control system design method.
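As a point of comparison only, a reduced model of a second-order system can be built by projecting onto a basis derived from actuator influence rather than from mode shapes. The Python sketch below does this for an assumed spring-mass chain; it is a generic projection illustration, not the paper's aggregation procedure or its spillover bounds.

import numpy as np

# Sketch of projection-based reduction of a second-order system
#   M q'' + K q = B u
# using a basis built from actuator influence (static deflections K^{-1} B).
# The spring-mass chain and actuator placement are assumptions for illustration.
n, m = 20, 2
M = np.eye(n)
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # chain stiffness matrix
B = np.zeros((n, m)); B[0, 0] = 1.0; B[-1, 1] = 1.0     # two actuators at the ends

# Basis: static actuator deflections plus one Krylov refinement, orthonormalized.
V = np.linalg.solve(K, B)
V = np.hstack([V, np.linalg.solve(K, M @ V)])
V, _ = np.linalg.qr(V)

Mr, Kr, Br = V.T @ M @ V, V.T @ K @ V, V.T @ B          # reduced system matrices
print("full size:", n, "-> reduced size:", V.shape[1])

# Compare the lowest natural frequencies of the full and reduced models.
full = np.sort(np.sqrt(np.linalg.eigvalsh(K)))[:4]        # M = I here
red = np.sort(np.sqrt(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real))[:4]
print("full model, lowest frequencies:   ", np.round(full, 3))
print("reduced model, lowest frequencies:", np.round(red, 3))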
Cryogenic expansion joint for large superconducting magnet structures
Brown, Robert L.
1978-01-01
An expansion joint is provided that accommodates dimensional changes occurring during the cooldown and warm-up of large cryogenic devices such as superconducting magnet coils. Flattened tubes containing a refrigerant such as gaseous nitrogen (N₂) are inserted into expansion spaces in the structure. The gaseous N₂ is circulated under pressure and aids in the cooldown process while providing its primary function of accommodating differential thermal contraction and expansion in the structure. After lower temperatures are reached and the greater part of the contraction has occurred, the N₂ liquefies and then solidifies to provide a completely rigid structure at the cryogenic operating temperatures of the device.
Free-decay time-domain modal identification for large space structures
NASA Technical Reports Server (NTRS)
Kim, Hyoung M.; Vanhorn, David A.; Doiron, Harold H.
1992-01-01
Concept definition studies for the Modal Identification Experiment (MIE), a proposed space flight experiment for the Space Station Freedom (SSF), have demonstrated advantages and compatibility of free-decay time-domain modal identification techniques with the on-orbit operational constraints of large space structures. Since practical experience with modal identification using actual free-decay responses of large space structures is very limited, several numerical and test data reduction studies were conducted. Major issues and solutions were addressed, including closely-spaced modes, wide frequency range of interest, data acquisition errors, sampling delay, excitation limitations, nonlinearities, and unknown disturbances during free-decay data acquisition. The data processing strategies developed in these studies were applied to numerical simulations of the MIE, test data from a deployable truss, and launch vehicle flight data. Results of these studies indicate free-decay time-domain modal identification methods can provide accurate modal parameters necessary to characterize the structural dynamics of large space structures.
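As a toy illustration of extracting modal parameters from free-decay data, the sketch below (Python/NumPy) estimates a single synthetic mode's frequency from an FFT peak and its damping ratio from the logarithmic decrement. The sampling rate, modal parameters, and noise level are assumptions, and the approach is far simpler than the multi-mode time-domain techniques studied for the MIE.

import numpy as np

# Toy single-mode free-decay identification: frequency from an FFT peak, damping
# from the logarithmic decrement of samples spaced one period apart.
fs = 200.0                                    # sampling rate [Hz] (assumed)
t = np.arange(0.0, 20.0, 1.0 / fs)
f_true, zeta_true = 1.5, 0.02                 # "unknown" modal parameters (assumed)
wn = 2.0 * np.pi * f_true
y = np.exp(-zeta_true * wn * t) * np.cos(wn * np.sqrt(1 - zeta_true**2) * t)
y += 0.01 * np.random.default_rng(0).standard_normal(t.size)   # measurement noise

# Damped natural frequency from the FFT peak of the free-decay record.
spectrum = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(y.size, 1.0 / fs)
f_est = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin

# Damping ratio from the logarithmic decrement over n_cycles periods.
n_cycles = 11
idx = np.round(np.arange(n_cycles + 1) / f_est * fs).astype(int)
delta = np.log(y[idx[0]] / y[idx[-1]]) / n_cycles
zeta_est = delta / np.sqrt(4.0 * np.pi**2 + delta**2)

print(f"frequency estimate: {f_est:.3f} Hz, damping ratio estimate: {zeta_est:.4f}")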
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levchenko, Igor; Bazaka, Kateryna; Ding, Yongjie
Rapid evolution of miniaturized, automatic, robotized, function-centered devices has redefined space technology, bringing closer the realization of most ambitious interplanetary missions and intense near-Earth space exploration. Small unmanned satellites and probes are now being launched in hundreds at a time, resurrecting a dream of satellite constellations, i.e., wide, all-covering networks of small satellites capable of forming universal multifunctional, intelligent platforms for global communication, navigation, ubiquitous data mining, Earth observation, and many other functions, which was once doomed by the extraordinary cost of such systems. The ingression of novel nanostructured materials provided a solid base that enabled the advancement of these affordable systems in aspects of power, instrumentation, and communication. However, absence of efficient and reliable thrust systems with the capacity to support precise maneuvering of small satellites and CubeSats over long periods of deployment remains a real stumbling block both for the deployment of large satellite systems and for further exploration of deep space using a new generation of spacecraft. The last few years have seen tremendous global efforts to develop various miniaturized space thrusters, with great success stories. Yet, there are critical challenges that still face the space technology. These have been outlined at an inaugural International Workshop on Micropropulsion and Cubesats, MPCS-2017, a joint effort between Plasma Sources and Application Centre/Space Propulsion Centre (Singapore) and the Micropropulsion and Nanotechnology Lab, the G. Washington University (USA) devoted to miniaturized space propulsion systems, and hosted by CNR-Nanotec—P.Las.M.I. lab in Bari, Italy. This focused review aims to highlight the most promising developments reported at MPCS-2017 by leading world-reputed experts in miniaturized space propulsion systems. Recent advances in several major types of small thrusters including Hall thrusters, ion engines, helicon, and vacuum arc devices are presented, and trends and perspectives are outlined.
Levchenko, Igor; Bazaka, Kateryna; Ding, Yongjie; ...
2018-02-22
Rapid evolution of miniaturized, automatic, robotized, function-centered devices has redefined space technology, bringing closer the realization of most ambitious interplanetary missions and intense near-Earth space exploration. Small unmanned satellites and probes are now being launched in hundreds at a time, resurrecting a dream of satellite constellations, i.e., wide, all-covering networks of small satellites capable of forming universal multifunctional, intelligent platforms for global communication, navigation, ubiquitous data mining, Earth observation, and many other functions, which was once doomed by the extraordinary cost of such systems. The ingression of novel nanostructured materials provided a solid base that enabled the advancement of these affordable systems in aspects of power, instrumentation, and communication. However, absence of efficient and reliable thrust systems with the capacity to support precise maneuvering of small satellites and CubeSats over long periods of deployment remains a real stumbling block both for the deployment of large satellite systems and for further exploration of deep space using a new generation of spacecraft. The last few years have seen tremendous global efforts to develop various miniaturized space thrusters, with great success stories. Yet, there are critical challenges that still face the space technology. These have been outlined at an inaugural International Workshop on Micropropulsion and Cubesats, MPCS-2017, a joint effort between Plasma Sources and Application Centre/Space Propulsion Centre (Singapore) and the Micropropulsion and Nanotechnology Lab, the G. Washington University (USA) devoted to miniaturized space propulsion systems, and hosted by CNR-Nanotec—P.Las.M.I. lab in Bari, Italy. This focused review aims to highlight the most promising developments reported at MPCS-2017 by leading world-reputed experts in miniaturized space propulsion systems. Recent advances in several major types of small thrusters including Hall thrusters, ion engines, helicon, and vacuum arc devices are presented, and trends and perspectives are outlined.
NASA Astrophysics Data System (ADS)
Levchenko, Igor; Bazaka, Kateryna; Ding, Yongjie; Raitses, Yevgeny; Mazouffre, Stéphane; Henning, Torsten; Klar, Peter J.; Shinohara, Shunjiro; Schein, Jochen; Garrigues, Laurent; Kim, Minkwan; Lev, Dan; Taccogna, Francesco; Boswell, Rod W.; Charles, Christine; Koizumi, Hiroyuki; Shen, Yan; Scharlemann, Carsten; Keidar, Michael; Xu, Shuyan
2018-03-01
Rapid evolution of miniaturized, automatic, robotized, function-centered devices has redefined space technology, bringing closer the realization of most ambitious interplanetary missions and intense near-Earth space exploration. Small unmanned satellites and probes are now being launched in hundreds at a time, resurrecting a dream of satellite constellations, i.e., wide, all-covering networks of small satellites capable of forming universal multifunctional, intelligent platforms for global communication, navigation, ubiquitous data mining, Earth observation, and many other functions, which was once doomed by the extraordinary cost of such systems. The ingression of novel nanostructured materials provided a solid base that enabled the advancement of these affordable systems in aspects of power, instrumentation, and communication. However, absence of efficient and reliable thrust systems with the capacity to support precise maneuvering of small satellites and CubeSats over long periods of deployment remains a real stumbling block both for the deployment of large satellite systems and for further exploration of deep space using a new generation of spacecraft. The last few years have seen tremendous global efforts to develop various miniaturized space thrusters, with great success stories. Yet, there are critical challenges that still face the space technology. These have been outlined at an inaugural International Workshop on Micropropulsion and Cubesats, MPCS-2017, a joint effort between Plasma Sources and Application Centre/Space Propulsion Centre (Singapore) and the Micropropulsion and Nanotechnology Lab, the G. Washington University (USA) devoted to miniaturized space propulsion systems, and hosted by CNR-Nanotec—P.Las.M.I. lab in Bari, Italy. This focused review aims to highlight the most promising developments reported at MPCS-2017 by leading world-reputed experts in miniaturized space propulsion systems. Recent advances in several major types of small thrusters including Hall thrusters, ion engines, helicon, and vacuum arc devices are presented, and trends and perspectives are outlined.
Central charge from adiabatic transport of cusp singularities in the quantum Hall effect
NASA Astrophysics Data System (ADS)
Can, Tankut
2017-04-01
We study quantum Hall (QH) states on a punctured Riemann sphere. We compute the Berry curvature under adiabatic motion in the moduli space in the large N limit. The Berry curvature is shown to be finite in the large N limit and controlled by the conformal dimension of the cusp singularity, a local property of the mean density. Utilizing exact sum rules obtained from a Ward identity, we show that for the Laughlin wave function, the dimension of a cusp singularity is given by the central charge, a robust geometric response coefficient in the QHE. Thus, adiabatic transport of curvature singularities can be used to determine the central charge of QH states. We also consider the effects of threaded fluxes and spin-deformed wave functions. Finally, we give a closed expression for all moments of the mean density in the integer QH state on a punctured disk.
Cardiovascular Adjustments to Gravitational Stress
NASA Technical Reports Server (NTRS)
Blomqvist, C. Gunnar; Stone, H. Lowell
1991-01-01
The effects of gravity on the cardiovascular system must be taken into account whenever a hemodynamic assessment is made. All intravascular pressures have a gravity-dependent hydrostatic component. The interaction between the gravitational field, the position of the body, and the functional characteristics of the blood vessels determines the distribution of intravascular volume. In turn this distribution largely determines cardiac pump function. Multiple control mechanisms are activated to preserve optimal tissue perfusion when the magnitude of the gravitational field or its direction relative to the body changes. Humans are particularly sensitive to such changes because of the combination of their normally erect posture and the large body mass and blood volume below the level of the heart. Current aerospace technology also exposes human subjects to extreme variations in the gravitational forces that range from zero during space travel to as much as nine times normal during operation of high-performance military aircraft. This chapter therefore emphasizes human physiology.
Study of Permanent Magnet Focusing for Astronomical Camera Tubes
NASA Technical Reports Server (NTRS)
Long, D. C.; Lowrance, J. L.
1975-01-01
A design is developed of a permanent magnet assembly (PMA) useful as the magnetic focusing unit for the 35 and 70 mm (diagonal) format SEC tubes. Detailed PMA designs for both tubes are given, and all data on their magnetic configuration, size, weight, and structure of magnetic shields adequate to screen the camera tube from the earth's magnetic field are presented. A digital computer is used for the PMA design simulations, and the expected operational performance of the PMA is ascertained through the calculation of a series of photoelectron trajectories. A large volume where the magnetic field uniformity is greater than 0.5% appears obtainable, and the point spread function (PSF) and modulation transfer function (MTF) indicate nearly ideal performance. The MTF at 20 cycles per mm exceeds 90%. The weight and volume appear tractable for the large space telescope and ground based application.
Large-scale, high-density (up to 512 channels) recording of local circuits in behaving animals
Berényi, Antal; Somogyvári, Zoltán; Nagy, Anett J.; Roux, Lisa; Long, John D.; Fujisawa, Shigeyoshi; Stark, Eran; Leonardo, Anthony; Harris, Timothy D.
2013-01-01
Monitoring representative fractions of neurons from multiple brain circuits in behaving animals is necessary for understanding neuronal computation. Here, we describe a system that allows high-channel-count recordings from a small volume of neuronal tissue using a lightweight signal multiplexing headstage that permits free behavior of small rodents. The system integrates multishank, high-density recording silicon probes, ultraflexible interconnects, and a miniaturized microdrive. These improvements allowed for simultaneous recordings of local field potentials and unit activity from hundreds of sites without confining free movements of the animal. The advantages of large-scale recordings are illustrated by determining the electroanatomic boundaries of layers and regions in the hippocampus and neocortex and constructing a circuit diagram of functional connections among neurons in real anatomic space. These methods will allow the investigation of circuit operations and behavior-dependent interregional interactions for testing hypotheses of neural networks and brain function. PMID:24353300
DomSign: a top-down annotation pipeline to enlarge enzyme space in the protein universe.
Wang, Tianmin; Mori, Hiroshi; Zhang, Chong; Kurokawa, Ken; Xing, Xin-Hui; Yamada, Takuji
2015-03-21
Computational predictions of catalytic function are vital for in-depth understanding of enzymes. Because several novel approaches performing better than the common BLAST tool are rarely applied in research, we hypothesized that there is a large gap between the number of known annotated enzymes and the actual number in the protein universe, which significantly limits our ability to extract additional biologically relevant functional information from the available sequencing data. To reliably expand the enzyme space, we developed DomSign, a highly accurate domain signature-based enzyme functional prediction tool to assign Enzyme Commission (EC) digits. DomSign is a top-down prediction engine that yields results comparable, or superior, to those from many benchmark EC number prediction tools, including BLASTP, when a homolog with an identity >30% is not available in the database. Performance tests showed that DomSign is a highly reliable enzyme EC number annotation tool. After multiple tests, the accuracy is thought to be greater than 90%. Thus, DomSign can be applied to large-scale datasets, with the goal of expanding the enzyme space with high fidelity. Using DomSign, we successfully increased the percentage of EC-tagged enzymes from 12% to 30% in UniProt-TrEMBL. In the Kyoto Encyclopedia of Genes and Genomes bacterial database, the percentage of EC-tagged enzymes for each bacterial genome could be increased from 26.0% to 33.2% on average. Metagenomic mining was also efficient, as exemplified by the application of DomSign to the Human Microbiome Project dataset, recovering nearly one million new EC-labeled enzymes. Our results offer preliminary confirmation of the existence of the hypothesized huge number of "hidden enzymes" in the protein universe, the identification of which could substantially further our understanding of the metabolisms of diverse organisms and also facilitate bioengineering by providing a richer enzyme resource. Furthermore, our results highlight the necessity of using more advanced computational tools than BLAST in protein database annotations to extract additional biologically relevant functional information from the available biological sequences.
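The general idea of signature-based annotation, assigning an EC number from the combination of domains found on a sequence, can be illustrated with a toy lookup table. The Python sketch below uses hypothetical domain identifiers and a simple majority vote with a support threshold; the real DomSign pipeline assigns EC digits top-down and is considerably more sophisticated.

from collections import Counter, defaultdict

# Toy domain-signature annotation: a protein's signature is its sorted set of
# domain identifiers, and an unannotated protein inherits the EC number most
# frequently associated with that signature in a reference set. The domain IDs
# and EC labels below are hypothetical examples.
reference = [
    (("PF00107", "PF08240"), "1.1.1.1"),
    (("PF00107", "PF08240"), "1.1.1.1"),
    (("PF00107", "PF08240"), "1.1.1.2"),
    (("PF00069",), "2.7.11.1"),
]

signature_to_ec = defaultdict(Counter)
for domains, ec in reference:
    signature_to_ec[tuple(sorted(domains))][ec] += 1

def predict_ec(domains, min_support=2):
    """Return the majority EC for this domain signature, or None if unsupported."""
    counts = signature_to_ec.get(tuple(sorted(domains)))
    if not counts:
        return None
    ec, n = counts.most_common(1)[0]
    return ec if n >= min_support else None

print(predict_ec(["PF08240", "PF00107"]))   # -> 1.1.1.1
print(predict_ec(["PF00001"]))              # -> None (unseen signature)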
Integration of an expert system into a user interface language demonstration
NASA Technical Reports Server (NTRS)
Stclair, D. C.
1986-01-01
A User Interface Language (UIL) has been recognized by the Space Station Program Office as a necessary tool to aid in minimizing the cost of software generation by multiple users. Experience in the Space Shuttle Program has shown that many different areas of software generation, such as operations, integration, testing, etc., have each used a different user command language although the types of operations being performed were similar in many respects. Since the Space Station represents a much more complex software task, a common user command language--a user interface language--is required to support the large spectrum of space station software developers and users. To assist in the selection of an appropriate set of definitions for a UIL, a series of demonstration programs was generated with which to test UIL concepts against specific Space Station scenarios using operators for the astronaut and scientific community. Because of the importance of expert systems in the space station, it was decided that an expert system should be embedded in the UIL. This would not only provide insight into the UIL components required but would indicate the effectiveness with which an expert system could function in such an environment.
NASA Astrophysics Data System (ADS)
Bastida Virgili, Benjamin; Krag, Holger
2016-07-01
Space traffic has always been subject to considerable fluctuations. In the past, these fluctuations have been mainly driven by geopolitical and economic factors. In recent years there has been a considerable increase due to the use of cubesats by non-traditional space operators, and due to a significant change of mission scopes and mission orbits in Low Earth Orbit (LEO). In the near future, however, many indications point to a further increase in the space traffic in LEO, driven mainly by cheaper access to space and by the miniaturisation of spacecraft systems. An acceleration of this trend is expressed by the announcement of large constellations in LEO intended to provide broadband internet communication while minimising the required infrastructure on Earth. The number of artificial objects in orbit continues to increase and, with it, a key threat to space sustainability. In response, space agencies have identified a set of mitigation guidelines aimed at enabling space users to reduce the generation of space debris by, for example, limiting the orbital lifetime of their spacecraft and of launcher stages after the end of their mission to 25 years in LEO. However, several recent studies have shown that, today, current guidelines for the LEO protected zone are insufficiently applied by space systems of all sizes. Under these conditions, a step increase in the launch rate is a potential concern for the environment, in particular if the current End of Life (EOL) behaviour prevails in the future. Even with perfect compliance with the 25-year lifetime rule, the new traffic might lead to unrecoverable environment trends. Furthermore, the disposal function is required to have a reliability of 90%, weighted by the reliability of the entire system. A failure rate of 10%, in general, was found to be acceptable under current space traffic conditions. This might not be sustainable when the LEO launch rates increase drastically. In this study, we report the issues identified for such mega-constellation traffic, and we analyse the response of the space object population to the introduction of a large constellation conforming to the post-mission disposal guideline with differing levels of success and with different disposal orbit options.
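The sensitivity to post-mission disposal (PMD) success can be illustrated with back-of-the-envelope arithmetic: a constellation that replaces n satellites per year with a disposal success rate p leaves roughly n(1 − p) long-lived derelicts per year. The replacement rate below is an assumption for illustration only, not a result of the study.

# Illustrative sketch: derelicts accumulated per year as a function of PMD success.
def derelicts_per_year(replacement_rate, pmd_success):
    return replacement_rate * (1.0 - pmd_success)

for p in (0.90, 0.95, 0.99):
    print(f"PMD success {p:.0%}: about {derelicts_per_year(800, p):.0f} new derelicts/year "
          f"for an assumed 800 replacements/year")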
Research Possibilities Beyond Deep Space Gateway
NASA Astrophysics Data System (ADS)
Smitherman, D. V.; Needham, D. H.; Lewis, R.
2018-02-01
This abstract explores the possibilities for a large research facilities module attached to the Deep Space Gateway, using the same large module design and basic layout planned for the Deep Space Transport.
Large bowel injuries during gynecological laparoscopy.
Ulker, Kahraman; Anuk, Turgut; Bozkurt, Murat; Karasu, Yetkin
2014-12-16
Laparoscopy is one of the most frequently preferred surgical options in gynecological surgery and has advantages over laparotomy, including smaller surgical scars, faster recovery, less pain and earlier return of bowel functions. Generally, it is also accepted as safe and effective and patients tolerate it well. However, it is still an intra-abdominal procedure and carries potential risks similar to those of laparotomy, including injury of a vital structure, bleeding and infection. Besides the well-known risks of open surgery, laparoscopy also has its own unique risks related to abdominal access methods, the pneumoperitoneum created to provide adequate operative space and the energy modalities used during the procedures. Bowel, bladder or major blood vessel injuries and passage of gas into the intravascular space may result from laparoscopic surgical technique. In addition, the risks of aspiration, respiratory dysfunction and cardiovascular dysfunction increase during laparoscopy. Large bowel injuries during laparoscopy are serious complications because 50% of bowel injuries and 60% of visceral injuries are undiagnosed at the time of primary surgery. A missed or delayed diagnosis increases the risk of bowel perforation and consequently sepsis and even death. In this paper, we aim to focus on large bowel injuries that happen during gynecological laparoscopy and review their diagnostic and management options.
NASA Astrophysics Data System (ADS)
Chen, Wen-Yuan; Liu, Chen-Chung
2006-01-01
The problems with binary watermarking schemes are that they have only a small amount of embeddable space and are not robust enough. We develop a slice-based large-cluster algorithm (SBLCA) to construct a robust watermarking scheme for binary images. In SBLCA, a small-amount cluster selection (SACS) strategy is used to search for a feasible slice in a large-cluster flappable-pixel decision (LCFPD) method, which is used to search for the best location for concealing a secret bit from a selected slice. This method has four major advantages over the others: (a) SBLCA has a simple and effective decision function to select appropriate concealment locations, (b) SBLCA utilizes a blind watermarking scheme without the original image in the watermark extracting process, (c) SBLCA uses slice-based shuffling capability to transfer the regular image into a hash state without remembering the state before shuffling, and finally, (d) SBLCA has enough embeddable space that every 64 pixels could accommodate a secret bit of the binary image. Furthermore, empirical results on test images reveal that our approach is a robust watermarking scheme for binary images.
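The quoted capacity, one secret bit per 64 pixels, can be illustrated with a generic block-parity embedding: each 8x8 block of the binary image carries one bit in its pixel-sum parity, adjusted by flipping at most one pixel. This toy Python/NumPy sketch shows the capacity argument only; it is not the SBLCA/SACS/LCFPD scheme, which selects flappable pixels from large clusters to preserve visual quality and robustness.

import numpy as np

# Generic illustration: embed one bit per 8x8 block of a binary image by making
# the block's pixel-sum parity equal to the secret bit (flip at most one pixel).
def embed(image, bits):
    img = image.copy()
    h, w = img.shape
    blocks = [(r, c) for r in range(0, h - 7, 8) for c in range(0, w - 7, 8)]
    for (r, c), bit in zip(blocks, bits):
        block = img[r:r + 8, c:c + 8]
        if block.sum() % 2 != bit:          # parity mismatch: flip one pixel
            block[0, 0] ^= 1
    return img

def extract(image, n_bits):
    h, w = image.shape
    blocks = [(r, c) for r in range(0, h - 7, 8) for c in range(0, w - 7, 8)]
    return [int(image[r:r + 8, c:c + 8].sum() % 2) for r, c in blocks[:n_bits]]

rng = np.random.default_rng(0)
cover = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)   # toy binary cover image
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, secret)
assert extract(stego, len(secret)) == secret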
Metasurface Enabled Wide-Angle Fourier Lens.
Liu, Wenwei; Li, Zhancheng; Cheng, Hua; Tang, Chengchun; Li, Junjie; Zhang, Shuang; Chen, Shuqi; Tian, Jianguo
2018-06-01
Fourier optics, the principle of using Fourier transformation to understand the functionalities of optical elements, lies at the heart of modern optics, and it has been widely applied to optical information processing, imaging, holography, etc. While a simple thin lens is capable of resolving Fourier components of an arbitrary optical wavefront, its operation is limited to near normal light incidence, i.e., the paraxial approximation, which puts a severe constraint on the resolvable Fourier domain. As a result, high-order Fourier components are lost, resulting in extinction of high-resolution information of an image. Other high numerical aperture Fourier lenses usually suffer from bulky size and costly designs. Here, a dielectric metasurface consisting of a high-aspect-ratio silicon waveguide array is demonstrated experimentally, which is capable of performing a 1D Fourier transform for a large incident angle range and a broad operating bandwidth. Thus, the device significantly expands the operational Fourier space, benefitting from the large numerical aperture and negligible angular dispersion at large incident angles. The Fourier metasurface will not only facilitate efficient manipulation of the spatial spectrum of free-space optical wavefronts, but also be readily integrated into micro-optical platforms due to its compact size. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
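For reference, the thin-lens Fourier-transform property that this work builds on can be written in its standard paraxial, one-dimensional form as follows; the metasurface extends the range of incidence angles over which this mapping remains accurate.

U_f(x_f) \;\propto\; \int U(x)\,\exp\!\left(-\,i\,\frac{2\pi}{\lambda f}\,x\,x_f\right)\mathrm{d}x,
\qquad k_x = \frac{2\pi x_f}{\lambda f},

where U is the field in the front focal plane, U_f the field in the back focal plane, f the focal length, and λ the wavelength.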
NASA Technical Reports Server (NTRS)
Wilson, Gregory S.; Huntress, Wesley T.
1990-01-01
The rationale behind Mission to Planet Earth is presented, and the program plan is described in detail. NASA and its interagency and international partners will place satellites carrying advanced sensors in strategic earth orbits to collect multidisciplinary data. A sophisticated data system will process and archive an unprecedentedly large amount of information about the earth and how it functions as a system. Attention is given to the space observatories, the data and information systems, and the interdisciplinary research.
Doll, J.; Dupuis, P.; Nyquist, P.
2017-02-08
Parallel tempering, or replica exchange, is a popular method for simulating complex systems. The idea is to run parallel simulations at different temperatures, and at a given swap rate exchange configurations between the parallel simulations. From the perspective of large deviations it is optimal to let the swap rate tend to infinity and it is possible to construct a corresponding simulation scheme, known as infinite swapping. In this paper we propose a novel use of large deviations for empirical measures for a more detailed analysis of the infinite swapping limit in the setting of continuous time jump Markov processes. Using the large deviations rate function and associated stochastic control problems we consider a diagnostic based on temperature assignments, which can be easily computed during a simulation. We show that the convergence of this diagnostic to its a priori known limit is a necessary condition for the convergence of infinite swapping. The rate function is also used to investigate the impact of asymmetries in the underlying potential landscape, and where in the state space poor sampling is most likely to occur.
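A minimal parallel-tempering sketch makes the swap-rate parameter concrete: two Metropolis walkers run at different temperatures on a double-well potential and periodically attempt a configuration swap, with infinite swapping corresponding to the limit of infinitely frequent swap attempts. The potential, temperatures, and step sizes below (Python/NumPy) are illustrative assumptions, and the temperature-assignment diagnostic proposed in the paper is not reproduced here.

import numpy as np

# Two-temperature parallel tempering (replica exchange) on V(x) = (x^2 - 1)^2.
rng = np.random.default_rng(1)
V = lambda x: (x * x - 1.0) ** 2
betas = np.array([4.0, 1.0])                 # inverse temperatures (cold, hot)
x = np.array([-1.0, 1.0])                    # one configuration per temperature
swap_every, n_steps = 10, 100000             # smaller swap_every = higher swap rate
cold_trace = []

for step in range(n_steps):
    # Metropolis move for each replica at its own temperature.
    prop = x + 0.3 * rng.standard_normal(2)
    log_acc = np.minimum(0.0, -betas * (V(prop) - V(x)))
    x = np.where(rng.random(2) < np.exp(log_acc), prop, x)
    # Periodic attempt to exchange configurations between the two temperatures.
    if step % swap_every == 0:
        log_swap = (betas[0] - betas[1]) * (V(x[0]) - V(x[1]))
        if rng.random() < np.exp(min(0.0, log_swap)):
            x = x[::-1].copy()
    cold_trace.append(x[0])

# With frequent swaps the cold replica visits both wells (sample mean near zero).
print("cold-replica mean:", np.mean(cold_trace), "variance:", np.var(cold_trace))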
Edge Singularities and Quasilong-Range Order in Nonequilibrium Steady States.
De Nardis, Jacopo; Panfil, Miłosz
2018-05-25
The singularities of the dynamical response function are one of the most remarkable effects in many-body interacting systems. However in one dimension these divergences only exist strictly at zero temperature, making their observation very difficult in most cold atomic experimental settings. Moreover the presence of a finite temperature destroys another feature of one-dimensional quantum liquids: the real space quasilong-range order in which the spatial correlation functions exhibit power-law decay. We consider a nonequilibrium protocol where two interacting Bose gases are prepared either at different temperatures or chemical potentials and then joined. We show that the nonequilibrium steady state emerging at large times around the junction displays edge singularities in the response function and quasilong-range order.
Edge Singularities and Quasilong-Range Order in Nonequilibrium Steady States
NASA Astrophysics Data System (ADS)
De Nardis, Jacopo; Panfil, Miłosz
2018-05-01
The singularities of the dynamical response function are one of the most remarkable effects in many-body interacting systems. However in one dimension these divergences only exist strictly at zero temperature, making their observation very difficult in most cold atomic experimental settings. Moreover the presence of a finite temperature destroys another feature of one-dimensional quantum liquids: the real space quasilong-range order in which the spatial correlation functions exhibit power-law decay. We consider a nonequilibrium protocol where two interacting Bose gases are prepared either at different temperatures or chemical potentials and then joined. We show that the nonequilibrium steady state emerging at large times around the junction displays edge singularities in the response function and quasilong-range order.
Reinforcement Learning with Orthonormal Basis Adaptation Based on Activity-Oriented Index Allocation
NASA Astrophysics Data System (ADS)
Satoh, Hideki
An orthonormal basis adaptation method for function approximation was developed and applied to reinforcement learning with multi-dimensional continuous state space. First, a basis used for linear function approximation of a control function is set to an orthonormal basis. Next, basis elements with small activities are replaced with other candidate elements as learning progresses. As this replacement is repeated, the number of basis elements with large activities increases. Example chaos control problems for multiple logistic maps were solved, demonstrating that the method for adapting an orthonormal basis can modify a basis while holding the orthonormality in accordance with changes in the environment to improve the performance of reinforcement learning and to eliminate the adverse effects of redundant noisy states.
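The basis-adaptation idea, keeping an orthonormal basis for linear function approximation and replacing elements whose accumulated activity stays small, can be sketched outside a full reinforcement-learning loop. The Python example below fits an assumed one-dimensional target with an orthonormal cosine basis using LMS updates and swaps out the least-active element after each epoch; the published index-allocation rule and the chaos-control experiments are not reproduced.

import numpy as np

# Linear function approximation with an orthonormal cosine basis on [0, 1] and a
# simple activity-based replacement rule. Target function and parameters are assumed.
rng = np.random.default_rng(0)
target = lambda s: np.sin(6 * np.pi * s) * np.exp(-s)

def phi(s, idx):
    # Orthonormal cosine basis on [0, 1]: phi_0 = 1, phi_k = sqrt(2) * cos(k*pi*s).
    return np.array([1.0 if k == 0 else np.sqrt(2) * np.cos(k * np.pi * s) for k in idx])

idx = list(range(8))                       # current basis indices
candidates = list(range(8, 16))            # spare higher-frequency elements
w = np.zeros(len(idx))
activity = np.zeros(len(idx))
alpha = 0.1

for epoch in range(20):
    for _ in range(2000):                  # stochastic gradient (LMS) updates
        s = rng.random()
        f = phi(s, idx)
        err = target(s) - w @ f
        w += alpha * err * f
        activity += np.abs(w * f)          # accumulate per-element activity
    if candidates:                         # replace the least-active element
        j = int(np.argmin(activity))
        idx[j] = candidates.pop(0)
        w[j], activity[j] = 0.0, 0.0

test = np.linspace(0, 1, 200)
approx = np.array([w @ phi(s, idx) for s in test])
print("RMS error:", np.sqrt(np.mean((approx - target(test)) ** 2)))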
Charting, navigating, and populating natural product chemical space for drug discovery.
Lachance, Hugo; Wetzel, Stefan; Kumar, Kamal; Waldmann, Herbert
2012-07-12
Natural products are a heterogeneous group of compounds with diverse, yet particular molecular properties compared to synthetic compounds and drugs. All relevant analyses show that natural products indeed occupy parts of chemical space not explored by available screening collections while at the same time largely adhering to the rule-of-five. This renders them a valuable, unique, and necessary component of screening libraries used in drug discovery. With ChemGPS-NP on the Web and Scaffold Hunter, two tools are available to the scientific community to guide exploration of biologically relevant NP chemical space in a focused and targeted fashion with a view to guide novel synthesis approaches. Several of the examples given illustrate the possibility of bridging the gap between computational methods and compound library synthesis and the possibility of integrating cheminformatics and chemical space analyses with synthetic chemistry and biochemistry to successfully explore chemical space for the identification of novel small molecule modulators of protein function. The examples also illustrate the synergistic potential of the chemical space concept and modern chemical synthesis for biomedical research and drug discovery. Chemical space analysis can map underexplored biologically relevant parts of chemical space and identify the structure types occupying these parts. Modern synthetic methodology can then be applied to efficiently fill this “virtual space” with real compounds. From a cheminformatics perspective, there is a clear demand for open-source and easy to use tools that can be readily applied by educated nonspecialist chemists and biologists in their daily research. This will include further development of Scaffold Hunter, ChemGPS-NP, and related approaches on the Web. Such a “cheminformatics toolbox” would enable chemists and biologists to mine their own data in an intuitive and highly interactive process and without the need for specialized computer science and cheminformatics expertise. We anticipate that it may be a viable, if not necessary, step for research initiatives based on large high-throughput screening campaigns, in particular in the pharmaceutical industry, to make the most out of the recent advances in computational tools in order to leverage and take full advantage of the large data sets generated and available in house. There are “holes” in these data sets that can and should be identified and explored by chemistry and biology.
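The rule-of-five check mentioned here is straightforward to reproduce for a compound collection. The sketch below uses RDKit, which is an assumption for illustration and not a tool named in the text, to count Lipinski violations for a few example molecules.

from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

# Count rule-of-five violations for example SMILES strings.
# RDKit and the example molecules are assumptions chosen for illustration.
def ro5_violations(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    checks = [
        Descriptors.MolWt(mol) > 500,
        Descriptors.MolLogP(mol) > 5,
        Lipinski.NumHDonors(mol) > 5,
        Lipinski.NumHAcceptors(mol) > 10,
    ]
    return sum(checks)

for smi in ["CC(=O)Oc1ccccc1C(=O)O",        # aspirin
            "CCCCCCCCCCCCCCCCCC(=O)O"]:     # stearic acid
    print(smi, "rule-of-five violations:", ro5_violations(smi))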
Impact of lunar and planetary missions on the space station: Preliminary STS logistics report
NASA Technical Reports Server (NTRS)
1984-01-01
Space station requirements for lunar and planetary missions are discussed. Specific reference is made to projected Ceres and Kopff missions; Titan probes; Saturn and Mercury orbiters; and a Mars sample return mission. Such requirements as base design; station function; program definition; mission scenarios; uncertainties impact; launch manifest and mission schedule; and shuttle loads are considered. It is concluded that: (1) the impact of the planetary missions on the space station is not large when compared to the lunar base; (2) a quarantine module may be desirable for sample returns; (3) the Ceres and Kopff missions require the ability to stack and checkout two-stage OTVs; and (4) two to seven manweeks of on-orbit work are required of the station crew to launch a mission and, with the exception of the quarantine module, dedicated crew will not be required.
Liquid Methane Testing With a Large-Scale Spray Bar Thermodynamic Vent System
NASA Technical Reports Server (NTRS)
Hastings, L. J.; Bolshinskiy, L. G.; Hedayat, A.; Flachbart, R. H.; Sisco, J. D.; Schnell, A. R.
2014-01-01
NASA's Marshall Space Flight Center conducted liquid methane testing in November 2006 using the multipurpose hydrogen test bed outfitted with a spray bar thermodynamic vent system (TVS). The basic objective was to identify any unusual or unique thermodynamic characteristics associated with densified methane that should be considered in the design of space-based TVSs. Thirteen days of testing were performed with total tank heat loads ranging from 720 to 420 W at a fill level of approximately 90%. It was noted that as the fluid passed through the Joule-Thomson expansion, thermodynamic conditions consistent with the pervasive presence of metastability were indicated. This Technical Publication describes conditions that correspond with metastability and its detrimental effects on TVS performance. The observed conditions were primarily functions of methane densification and helium pressurization; therefore, assurance must be provided that metastable conditions have been circumvented in future applications of thermodynamic venting to in-space methane storage.
A unified design space of synthetic stripe-forming networks
Schaerli, Yolanda; Munteanu, Andreea; Gili, Magüi; Cotterell, James; Sharpe, James; Isalan, Mark
2014-01-01
Synthetic biology is a promising tool to study the function and properties of gene regulatory networks. Gene circuits with predefined behaviours have been successfully built and modelled, but largely on a case-by-case basis. Here we go beyond individual networks and explore both computationally and synthetically the design space of possible dynamical mechanisms for 3-node stripe-forming networks. First, we computationally test every possible 3-node network for stripe formation in a morphogen gradient. We discover four different dynamical mechanisms to form a stripe and identify the minimal network of each group. Next, with the help of newly established engineering criteria we build these four networks synthetically and show that they indeed operate with four fundamentally distinct mechanisms. Finally, this close match between theory and experiment allows us to infer and subsequently build a 2-node network that represents the archetype of the explored design space. PMID:25247316
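The scale of the brute-force search can be made concrete: with nine possible directed regulatory edges among three nodes, each activating, repressing, or absent, there are 3^9 = 19,683 candidate topologies to test for stripe formation. The Python sketch below counts them and evaluates one classic stripe-forming motif, an incoherent feed-forward arrangement, at steady state across a morphogen gradient; the Hill parameters are assumptions and the circuit is not one of the paper's four synthetic networks.

import numpy as np
from itertools import product

# Each of the 9 possible edges in a 3-node network is activating (+1), repressing
# (-1), or absent (0); every topology is then screened for stripe formation.
n_topologies = sum(1 for _ in product((-1, 0, 1), repeat=9))
print("candidate 3-node topologies:", n_topologies)          # 19683

act = lambda x, K, n=2: x**n / (K**n + x**n)                 # activating Hill term
rep = lambda x, K, n=2: K**n / (K**n + x**n)                 # repressing Hill term

morphogen = np.linspace(0.01, 10.0, 200)                     # input gradient
A = act(morphogen, K=3.0)                                    # intermediate repressor
C = act(morphogen, K=0.3) * rep(A, K=0.5)                    # output node (steady state)

def is_stripe(profile, tol=0.1):
    """A stripe: the output rises and then falls along the gradient."""
    peak = np.argmax(profile)
    return (0 < peak < len(profile) - 1
            and profile[peak] - profile[0] > tol
            and profile[peak] - profile[-1] > tol)

print("incoherent feed-forward output forms a stripe:", is_stripe(C))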
Joint U.S./Japan Conference on Adaptive Structures, 1st, Maui, HI, Nov. 13-15, 1990, Proceedings
NASA Technical Reports Server (NTRS)
Wada, Ben K. (Editor); Fanson, James L. (Editor); Miura, Koryo (Editor)
1991-01-01
The present volume of adaptive structures discusses the development of control laws for an orbiting tethered antenna/reflector system test scale model, the sizing of active piezoelectric struts for vibration suppression on a space-based interferometer, the control design of a space station mobile transporter with multiple constraints, and optimum configuration control of an intelligent truss structure. Attention is given to the formulation of full state feedback for infinite order structural systems, robustness issues in the design of smart structures, passive piezoelectric vibration damping, shape control experiments with a functional model for large optical reflectors, and a mathematical basis for the design optimization of adaptive trusses in precision control. Topics addressed include approaches to the optimal adaptive geometries of intelligent truss structures, the design of an automated manufacturing system for tubular smart structures, the Sandia structural control experiments, and the zero-gravity dynamics of space structures in parabolic aircraft flight.
Joint U.S./Japan Conference on Adaptive Structures, 1st, Maui, HI, Nov. 13-15, 1990, Proceedings
NASA Astrophysics Data System (ADS)
Wada, Ben K.; Fanson, James L.; Miura, Koryo
1991-11-01
The present volume of adaptive structures discusses the development of control laws for an orbiting tethered antenna/reflector system test scale model, the sizing of active piezoelectric struts for vibration suppression on a space-based interferometer, the control design of a space station mobile transporter with multiple constraints, and optimum configuration control of an intelligent truss structure. Attention is given to the formulation of full state feedback for infinite order structural systems, robustness issues in the design of smart structures, passive piezoelectric vibration damping, shape control experiments with a functional model for large optical reflectors, and a mathematical basis for the design optimization of adaptive trusses in precision control. Topics addressed include approaches to the optimal adaptive geometries of intelligent truss structures, the design of an automated manufacturing system for tubular smart structures, the Sandia structural control experiments, and the zero-gravity dynamics of space structures in parabolic aircraft flight.
Modularity in protein structures: study on all-alpha proteins.
Khan, Taushif; Ghosh, Indira
2015-01-01
Modularity is known as one of the most important features of robust and efficient protein design. The architecture and topology of proteins play a vital role by providing the robust scaffolds necessary to support an organism's growth and survival under constant evolutionary pressure. These complex biomolecules can be represented by several layers of modular architecture, but it is pivotal to understand and explore the smallest biologically relevant structural component. In the present study, we have developed a component-based method, using a protein's secondary structures and their arrangements (i.e. patterns), in order to investigate its structural space. Our results on all-alpha proteins show that the known structural space is highly populated with a limited set of structural patterns. We have also noticed that these frequently observed structural patterns are present as modules or "building blocks" in large proteins (i.e. those with higher secondary structure content). From structural descriptor analysis, observed patterns are found to lie within a similar deviation; however, frequent patterns occur in diverse functions, e.g. in enzymatic classes and reactions. In this study, we introduce a simple approach to explore protein structural space using combinatorial- and graph-based geometry methods, which can be used to describe modularity in protein structures. Moreover, the analysis indicates that protein function seems to be the driving force that shapes the known structure space.
Functional decor in the International Space Station: Body orientation cues and picture perception
NASA Technical Reports Server (NTRS)
Coss, Richard G.; Clearwater, Yvonne A.; Barbour, Christopher G.; Towers, Steven R.
1989-01-01
Subjective reports of American astronauts and their Soviet counterparts suggest that homogeneous, often symmetrical, spacecraft interiors can contribute to motion sickness during the earliest phase of a mission and can also engender boredom. Two studies investigated the functional aspects of Space Station interior aesthetics. One experiment examined differential color brightnesses as body orientation cues; the other involved a large survey of photographs and paintings that might enhance the interior aesthetics of the proposed International Space Station. Ninety male and female college students reclining on their backs in the dark were disoriented by a rotating platform and inserted under a slowly rotating disk that filled their entire visual field. The entire disk was painted the same color but one half had a brightness value that was about 69 percent higher than the other. The effects of red, blue, and yellow were examined. Subjects wearing frosted goggles opened their eyes to view the rotating, illuminated disk, which was stopped when they felt that they were right-side up. For all three colors, significant numbers of subjects said they felt right-side up when the brighter side of the disk filled their upper visual field. These results suggest that color brightness could provide Space Station crew members with body orientation cues as they move about. It was found that subjects preferred photographs and paintings with the greatest depths of field, irrespective of picture topic.